Table of Contents

Remote Sens., Volume 10, Issue 8 (August 2018)

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and open it with the free Adobe Reader.
Cover Story: The quantification of land subsidence in transitional environments, including lagoons, deltas, [...] Read more.
Displaying articles 1-136
Open Access Article Targeted Grassland Monitoring at Parcel Level Using Sentinels, Street-Level Images and Field Observations
Remote Sens. 2018, 10(8), 1300; https://doi.org/10.3390/rs10081300
Received: 6 July 2018 / Revised: 31 July 2018 / Accepted: 2 August 2018 / Published: 17 August 2018
PDF Full-text (13502 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
The introduction of high-resolution Sentinels combined with the use of high-quality digital agricultural parcel registration systems is driving the move towards at-parcel agricultural monitoring. The European Union’s Common Agricultural Policy (CAP) has introduced the concept of CAP monitoring to help simplify the management and control of farmers’ parcel declarations for area support measures. This study proposes a proof of concept of this monitoring approach, introducing and applying the concept of ‘markers’. Using Sentinel-1- and -2-derived (S1 and S2) markers, we evaluate parcels declared as grassland in the Gelderse Vallei in the Netherlands, covering more than 15,000 parcels. The satellite markers, based respectively on crop-type deep learning classification using S1 backscattering and coherence data and on detecting bare soil with S2 during the growing season, aim to identify grassland-declared parcels which (1) the marker suggests are another crop type or (2) appear to have been ploughed during the year. Subsequently, a field survey was carried out in October 2017 to target the identified parcels and to build a relevant ground-truth sample of the area. For the latter purpose, we used a high-definition camera mounted on the roof of a car to continuously sample geo-tagged digital imagery, as well as an app-based approach to identify the targeted fields. Depending on which satellite-based marker or combination of markers is used, the number of parcels identified ranged from 2.57% (marked by both the S1 and S2 markers) to 17.12% of the total of 11,773 parcels declared as grassland. After confirmation against the ground truth, parcels flagged by the combined S1 and S2 marker were robustly detected as non-grassland parcels (F-score = 0.9). In addition, the study demonstrated that street-level imagery collection could improve collection efficiency by a factor of seven compared to field visits (1411 parcels/day vs. 217 parcels/day) while keeping an overall accuracy of about 90% compared to the ground truth. This proposed way of collecting in situ data is suitable for training and validating high-resolution remote sensing approaches for agricultural monitoring. Timely country-wide wall-to-wall parcel-level monitoring and targeted in-season parcel surveying will increase the efficiency and effectiveness of monitoring and implementing agricultural policies. Full article
(This article belongs to the Section Remote Sensing in Agriculture and Vegetation)
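The F-score reported above combines the precision and recall of a marker against the confirmed ground truth. A minimal sketch of that computation (the confusion counts below are hypothetical, not the study's data):

```python
def f_score(tp, fp, fn):
    """F-score: harmonic mean of precision and recall for a binary marker."""
    precision = tp / (tp + fp)  # flagged parcels that were truly non-grassland
    recall = tp / (tp + fn)     # non-grassland parcels that were flagged
    return 2 * precision * recall / (precision + recall)

# Hypothetical counts: 90 correctly flagged parcels, 10 false alarms, 10 misses.
score = f_score(tp=90, fp=10, fn=10)
print(round(score, 2))  # 0.9
```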

Open Access Article Extraction of Sample Plot Parameters from 3D Point Cloud Reconstruction Based on Combined RTK and CCD Continuous Photography
Remote Sens. 2018, 10(8), 1299; https://doi.org/10.3390/rs10081299
Received: 19 July 2018 / Revised: 13 August 2018 / Accepted: 13 August 2018 / Published: 17 August 2018
PDF Full-text (6129 KB) | HTML Full-text | XML Full-text
Abstract
Enriching forest resource inventory is important to ensure the sustainable management of forest ecosystems. Obtaining forest inventory data from the field has always been difficult, laborious, time consuming, and expensive. Advances in integrating photogrammetry and computer vision have helped researchers develop numeric algorithms and methods that can turn 2D images into 3D point clouds and are highly applicable to forestry. This paper aimed to develop a new, highly accurate methodology that extracts sample plot parameters based on continuous terrestrial photogrammetry. For this purpose, we designed and implemented a terrestrial observation instrument combining real-time kinematic (RTK) positioning and charge-coupled device (CCD) continuous photography. Then, according to the set observation plan, three independent experimental plots were continuously photographed and a 3D point cloud of each plot was generated. From this 3D point cloud, the tree position coordinates, tree DBHs, tree heights, and other plot characteristics of the forest were extracted. The plot characteristics obtained from the 3D point cloud were compared with the measurement data obtained from the field to check the accuracy of our methodology. We obtained the position coordinates of the trees with a positioning accuracy (RMSE) of 0.162 m to 0.201 m. The relative root mean square error (rRMSE) of the trunk diameter measurements was 3.07% to 4.51%, which met the accuracy requirements of traditional forestry surveys. The tree height measurements were affected by occlusion from the forest canopy; the estimated rRMSE was 11.26% to 11.91%, which still provides useful reference data. Furthermore, this image-based point cloud approach offers portable observation instruments, low data collection costs, high field measurement efficiency, and automatic data processing, and it can directly extract tree geographic location information, which may be interesting and important for certain applications such as the protection of registered famous trees. For forest inventory, continuous terrestrial photogrammetry, with its unique advantages, is a solution that deserves future attention in the field of tree detection and ecological construction. Full article
(This article belongs to the Special Issue Aerial and Near-Field Remote Sensing Developments in Forestry)
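The RMSE and rRMSE accuracy figures quoted above follow the standard definitions; a generic sketch (the DBH sample values are hypothetical):

```python
import numpy as np

def rmse(estimated, reference):
    """Root mean square error between point-cloud estimates and field measurements."""
    e, r = np.asarray(estimated, float), np.asarray(reference, float)
    return float(np.sqrt(np.mean((e - r) ** 2)))

def rrmse(estimated, reference):
    """Relative RMSE: RMSE expressed as a percentage of the mean reference value."""
    return 100.0 * rmse(estimated, reference) / float(np.mean(reference))

# Hypothetical DBH values (cm): point-cloud extraction vs. field calipers.
est = [20.5, 31.2, 24.8, 40.1]
ref = [20.0, 32.0, 25.0, 41.0]
print(round(rrmse(est, ref), 2))
```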

Open Access Article Extrinsic Parameters Calibration Method of Cameras with Non-Overlapping Fields of View in Airborne Remote Sensing
Remote Sens. 2018, 10(8), 1298; https://doi.org/10.3390/rs10081298
Received: 2 July 2018 / Revised: 2 August 2018 / Accepted: 13 August 2018 / Published: 16 August 2018
PDF Full-text (822 KB)
Abstract
Multi-camera systems are widely used in the fields of airborne remote sensing and unmanned aerial vehicle imaging. The measurement precision of these systems depends on the accuracy of the extrinsic parameters. Therefore, it is important to accurately calibrate the extrinsic parameters between the onboard cameras. Unlike conventional multi-camera calibration methods that rely on a common field of view (FOV), multi-camera calibration without overlapping FOVs presents certain difficulties. In this paper, we propose a calibration method for a multi-camera system without common FOVs, intended for use in aerial photogrammetry. First, the extrinsic parameters of any two cameras in the multi-camera system are calibrated, and the extrinsic matrix is optimized using the re-projection error. Then, the extrinsic parameters of each camera are unified to the system reference coordinate system using a global optimization method. A simulation experiment and a physical verification experiment were designed to validate the method. The experimental results show that the method is feasible: the rotation error angle of the cameras' extrinsic parameters is less than 0.001 rad and the translation error is less than 0.08 mm. Full article
(This article belongs to the Section Remote Sensing Image Processing)
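At the core of such a calibration is chaining rigid-body transforms between camera frames as 4x4 homogeneous matrices; a generic sketch of that operation (not the paper's algorithm, and the poses are hypothetical):

```python
import numpy as np

def make_transform(R, t):
    """Assemble a 4x4 homogeneous transform from a 3x3 rotation and a translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def relative_extrinsics(T_wa, T_wb):
    """Pose of camera B expressed in camera A's frame: T_ab = T_wa^-1 @ T_wb."""
    return np.linalg.inv(T_wa) @ T_wb

# Hypothetical poses: camera A at the world origin, camera B 0.5 m along x.
T_wa = make_transform(np.eye(3), [0.0, 0.0, 0.0])
T_wb = make_transform(np.eye(3), [0.5, 0.0, 0.0])
T_ab = relative_extrinsics(T_wa, T_wb)
```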
Open Access Article Comparison of Seven Inversion Models for Estimating Plant and Woody Area Indices of Leaf-on and Leaf-off Forest Canopy Using Explicit 3D Forest Scenes
Remote Sens. 2018, 10(8), 1297; https://doi.org/10.3390/rs10081297
Received: 10 June 2018 / Revised: 31 July 2018 / Accepted: 9 August 2018 / Published: 16 August 2018
PDF Full-text (13280 KB) | HTML Full-text | XML Full-text
Abstract
Optical methods require model inversion to infer the plant area index (PAI) and woody area index (WAI) of leaf-on and leaf-off forest canopies from gap fraction or radiation attenuation measurements. Several inversion models have been developed previously; however, a thorough comparison of those inversion models for obtaining the PAI and WAI of leaf-on and leaf-off forest canopies has not been conducted so far. In the present study, an explicit 3D forest scene series with different PAI, WAI, phenological periods, stand densities, tree species compositions, plant functional types, canopy element clumping indices, and woody component clumping indices was generated using 50 detailed 3D tree models. The explicit 3D forest scene series was then used to assess the performance of seven commonly used inversion models in estimating the PAI and WAI of leaf-on and leaf-off forest canopies. The PAI and WAI estimated from the seven inversion models and simulated digital hemispherical photography images were compared with the true PAI and WAI of the leaf-on and leaf-off forest scenes. Factors that contributed to the differences between the estimates of the seven inversion models were analyzed. Results show that the inversion model, the canopy element and woody component projection functions, the canopy element and woody component estimation algorithms, and the segment size all contribute to the differences between the PAI and WAI estimated from the seven inversion models. There is no universally valid combination of inversion model, needle-to-shoot area ratio, canopy element and woody component clumping index estimation algorithm, and segment size that can accurately measure the PAI and WAI of all leaf-on and leaf-off forest canopies. The performance of such a combination in estimating the PAI and WAI of leaf-on and leaf-off forest canopies is a function of the inversion model as well as the clumping index estimation algorithm, segment size, PAI, WAI, tree species composition, and plant functional types. The impact of canopy element and woody component projection function measurements on the PAI and WAI estimation of the leaf-on and leaf-off forest canopy can be reduced to a low level (<4%) by adopting appropriate inversion models. Full article
(This article belongs to the Section Forest Remote Sensing)
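In the simplest (random-canopy, Beer–Lambert) case, gap-fraction inversion reduces to PAI = -ln P(θ) · cos θ / G(θ); the 57.5° zenith angle is popular because G there is close to 0.5 for any leaf angle distribution. A minimal sketch of that baseline inversion (the clumping corrections on which the compared models differ are omitted):

```python
import math

def pai_from_gap_fraction(gap_fraction, zenith_deg=57.5, G=0.5):
    """Baseline Beer-Lambert inversion: PAI = -ln(P) * cos(theta) / G(theta)."""
    theta = math.radians(zenith_deg)
    return -math.log(gap_fraction) * math.cos(theta) / G

# A hypothetical hemispherical-photo gap fraction of 0.2 at 57.5 deg zenith:
pai = pai_from_gap_fraction(0.2)
```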

Open Access Article Virtual Structural Analysis of Jokisivu Open Pit Using ‘Structure-from-Motion’ Unmanned Aerial Vehicles (UAV) Photogrammetry: Implications for Structurally-Controlled Gold Deposits in Southwest Finland
Remote Sens. 2018, 10(8), 1296; https://doi.org/10.3390/rs10081296
Received: 19 June 2018 / Revised: 6 August 2018 / Accepted: 11 August 2018 / Published: 16 August 2018
PDF Full-text (7042 KB) | HTML Full-text | XML Full-text
Abstract
Unmanned aerial vehicles (UAVs) are rapidly growing remote sensing platforms for capturing high-resolution images of exposed rock surfaces. We used a DJI Phantom 3 Professional (P3P) quadcopter to capture aerial images that were used to generate a high-resolution three-dimensional (3-D) model of the Jokisivu open-pit gold deposit, located in southwestern Finland. 158 overlapping oblique and nadir images were taken and processed with Agisoft Photoscan Pro to generate textured 3-D surface models. In addition, 69 overlapping images were taken of the steep faces of the open pit. We assessed the precision of the 3-D model by deploying ground control points (GCPs); the average errors were minimal along the X (2.0 cm), Y (1.2 cm), and Z (5.0 cm) axes. The steep faces of the open pit were used for virtual structural measurements and kinematic analyses in CloudCompare and ArcGIS to distinguish the orientations of different fracture sets and for statistical categorization, respectively. Three distinct fracture sets were observed. The NW-SE and NE-SW striking fractures form a conjugate geometry, whereas the NNW-SSE striking fractures cut the conjugate fracture set. The orientation of the conjugate fractures matches well with the resource model of the deposit and with the NW- and NE-trending segments of regional-scale anastomosing shear zones. The conjugate geometry of fracture sets I and II and the regional pattern of the anastomosing shear system lead us to interpret a two-stage origin for the gold mineralization. An early N-S or NNW-SSE crustal shortening, corresponding to the regional D4 (ca. 1.83–1.81 Ga) or pre-D4 (ca. 1.87–1.86 Ga) Svecofennian tectonic event(s), produced the anastomosing shear zones. Subsequent E-W directed D5 contraction (ca. 1.79–1.77 Ga) partly reactivated the anastomosing shear zones with the formation of the conjugate system, which controlled the migration of fluids and gold mineralization in SW Finland. Full article

Open Access Article A Spatial-Temporal Adaptive Neighborhood-Based Ratio Approach for Change Detection in SAR Images
Remote Sens. 2018, 10(8), 1295; https://doi.org/10.3390/rs10081295
Received: 18 July 2018 / Revised: 11 August 2018 / Accepted: 13 August 2018 / Published: 16 August 2018
PDF Full-text (5287 KB) | HTML Full-text | XML Full-text
Abstract
Neighborhood-based methods were proposed and are widely used in the change detection of synthetic aperture radar (SAR) images because the neighborhood information of SAR images is effective in reducing the negative effect of speckle noise. Nevertheless, for a neighborhood-based method it is unreasonable to use a fixed window size for the entire image, because the optimal window size differs from pixel to pixel. Hence, when a neighborhood-based method uses a large window to strongly suppress noise, it cannot preserve detail information such as the edges of changed areas. To overcome this drawback, we propose a spatial-temporal adaptive neighborhood-based ratio (STANR) approach for change detection in SAR images. STANR employs heterogeneity to adaptively select the spatially homogeneous neighborhood and uses a temporal adaptive strategy to determine multi-temporal neighborhood windows. Experimental results on two data sets show that STANR can both suppress the negative influence of noise and preserve edge details, and can obtain a better difference image than other state-of-the-art methods. Full article
(This article belongs to the Special Issue Analysis of Multi-temporal Remote Sensing Images)
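The neighborhood-based ratio family that STANR extends starts from a windowed log-ratio between the two SAR acquisitions. A fixed-window numpy sketch of that baseline (the adaptive window selection, which is the paper's contribution, is not reproduced; the images are hypothetical):

```python
import numpy as np

def mean_filter(img, k):
    """Box filter with a k x k window via edge-padded summation."""
    pad = k // 2
    p = np.pad(img, pad, mode='edge')
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def log_ratio_difference(img1, img2, k=3):
    """Neighborhood log-ratio difference image: |log(mean(img2)/mean(img1))|."""
    eps = 1e-6  # avoid division by zero on dark pixels
    m1 = mean_filter(np.asarray(img1, float), k) + eps
    m2 = mean_filter(np.asarray(img2, float), k) + eps
    return np.abs(np.log(m2 / m1))

# Two hypothetical 5x5 intensity images: identical except one changed pixel.
a = np.ones((5, 5))
b = np.ones((5, 5)); b[2, 2] = 4.0
d = log_ratio_difference(a, b, k=3)
```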

Open Access Feature Paper Article Fractal-Based Local Range Slope Estimation from Single SAR Image with Applications to SAR Despeckling and Topographic Mapping
Remote Sens. 2018, 10(8), 1294; https://doi.org/10.3390/rs10081294
Received: 26 June 2018 / Revised: 17 July 2018 / Accepted: 10 August 2018 / Published: 15 August 2018
PDF Full-text (23004 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, we propose a range slope estimation procedure from single synthetic aperture radar (SAR) images with both methodological and applicative innovations. The retrieval algorithm is based on an analytical linearized direct model, which relates the SAR intensity data to the local range slopes and encompasses both a surface model and an electromagnetic scattering model. Scene topography is described via fractal geometry, whereas the Small Perturbation Method is adopted to represent the scattering behavior of the surface. The range slope map is then used to estimate the surface topography and the local incidence angle map. For topographic mapping applications, also referred to as shape from shading, a regularization procedure is derived to recover the azimuth local slope and reduce distortions. We then present a new application of the inversion procedure in the field of SAR despeckling. The proposed techniques and high-level products are tested in a wide series of experiments, in which the algorithms are applied to both simulated (canonical) and actual SAR images. It is shown that the proposed range slope retrieval technique can (1) provide an estimate of the surface shape, with overall better performance with respect to typical models used in this field, and (2) be useful in advanced despeckling techniques. Full article

Open Access Article Using Near-Infrared-Enabled Digital Repeat Photography to Track Structural and Physiological Phenology in Mediterranean Tree–Grass Ecosystems
Remote Sens. 2018, 10(8), 1293; https://doi.org/10.3390/rs10081293
Received: 16 July 2018 / Revised: 8 August 2018 / Accepted: 13 August 2018 / Published: 15 August 2018
PDF Full-text (8975 KB) | HTML Full-text | XML Full-text
Abstract
Tree–grass ecosystems are widely distributed. However, their phenology has not yet been fully characterized. The technique of repeated digital photography for plant phenology monitoring (hereafter referred to as PhenoCam) provides opportunities for long-term monitoring of plant phenology and for extracting phenological transition dates (PTDs, e.g., start of the growing season). Here, we aim to evaluate the utility of near-infrared-enabled PhenoCams for monitoring the phenology of structure (i.e., greenness) and physiology (i.e., gross primary productivity, GPP) at four tree–grass Mediterranean sites. We computed four vegetation indexes (VIs) from the PhenoCams: (1) green chromatic coordinates (GCC), (2) normalized difference vegetation index (CamNDVI), (3) near-infrared reflectance of vegetation index (CamNIRv), and (4) ratio vegetation index (CamRVI). GPP is derived from eddy covariance flux tower measurements. We then extracted PTDs and their uncertainty from the different VIs and GPP, and evaluated the consistency between structural (VIs) and physiological (GPP) phenology. CamNIRv is best at representing the PTDs of GPP during the green-up period, while CamNDVI is best during the dry-down period. Moreover, CamNIRv outperforms the other VIs in tracking the growing season length of GPP. In summary, the results show it is promising to track the structural and physiological phenology of seasonally dry Mediterranean ecosystems using near-infrared-enabled PhenoCams. We suggest using multiple VIs to better represent the variation of GPP. Full article
(This article belongs to the Section Land Surface Fluxes)
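The four camera indices are simple band combinations of the red, green, blue and near-infrared digital numbers. A sketch using the standard formulas (CamNIRv as NDVI × NIR follows the usual NIRv definition, which is assumed here; the pixel values are hypothetical):

```python
def phenocam_indices(r, g, b, nir):
    """GCC, camera NDVI, NIRv and RVI from per-channel digital numbers."""
    gcc = g / (r + g + b)         # green chromatic coordinate
    ndvi = (nir - r) / (nir + r)  # camera NDVI
    nirv = ndvi * nir             # near-infrared reflectance of vegetation
    rvi = nir / r                 # ratio vegetation index
    return gcc, ndvi, nirv, rvi

# Hypothetical values for a green, NIR-bright canopy pixel:
gcc, ndvi, nirv, rvi = phenocam_indices(r=60.0, g=100.0, b=40.0, nir=180.0)
```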

Open Access Article Symmetric Double-Eye Structure in Hurricane Bertha (2008) Imaged by SAR
Remote Sens. 2018, 10(8), 1292; https://doi.org/10.3390/rs10081292
Received: 15 July 2018 / Revised: 10 August 2018 / Accepted: 13 August 2018 / Published: 15 August 2018
PDF Full-text (3483 KB) | HTML Full-text | XML Full-text
Abstract
Internal dynamical processes play a critical role in hurricane intensity variability. However, our understanding of internal storm processes is less well established, partly because observations are scarce. In this study, we present an analysis of a hurricane double-eye structure imaged by the RADARSAT-2 cross-polarized synthetic aperture radar (SAR) over Hurricane Bertha (2008). SAR is well suited to hurricane monitoring because it senses the ocean surface roughness induced by surface wind stress, and C-band cross-polarized SAR measurements do not appear to saturate at high wind speeds, which makes SAR suitable for studies of internal hurricane dynamic processes, including the double-eye structure. We retrieve the wind field of Hurricane Bertha (2008) and then extract the closest axisymmetric double-eye structure from the wind field using an idealized vortex model. Comparisons between the wind field extracted with the axisymmetric model and the SAR-observed winds demonstrate that the double-eye structure imaged by SAR is relatively axisymmetric. Together with airborne measurements from a stepped-frequency microwave radiometer, we investigate the internal hurricane dynamic process related to the double-eye structure, known as the eyewall replacement cycle (ERC). Classic ERC theory assumes an axisymmetric storm structure. The ERC of Hurricane Bertha (2008) related to the symmetric double-eye structure, observed here by SAR and aircraft, is consistent with the classic theory. Full article
(This article belongs to the Special Issue Sea Surface Roughness Observed by High Resolution Radar)

Open Access Technical Note Distributed Fiber Optic Sensors for the Monitoring of a Tunnel Crossing a Landslide
Remote Sens. 2018, 10(8), 1291; https://doi.org/10.3390/rs10081291
Received: 3 July 2018 / Revised: 26 July 2018 / Accepted: 12 August 2018 / Published: 15 August 2018
PDF Full-text (1620 KB) | HTML Full-text | XML Full-text
Abstract
This work reports on the application of a distributed fiber-optic strain sensor for long-term monitoring of a railway tunnel affected by an active earthflow. The sensor was used to detect the strain distribution along an optical fiber attached along the two walls of the tunnel. The experimental results from a two-year monitoring campaign demonstrate that the sensor is able to detect localized strains, identify their location along the tunnel walls, and follow their temporal evolution. Full article

Open Access Article Sentinel-2 Image Fusion Using a Deep Residual Network
Remote Sens. 2018, 10(8), 1290; https://doi.org/10.3390/rs10081290
Received: 4 July 2018 / Revised: 30 July 2018 / Accepted: 7 August 2018 / Published: 15 August 2018
PDF Full-text (14334 KB) | HTML Full-text | XML Full-text
Abstract
Single sensor fusion is the fusion of two or more spectrally disjoint reflectance bands that have different spatial resolutions and have been acquired by the same sensor. An example is Sentinel-2, a constellation of two satellites, which can acquire multispectral bands at 10 m, 20 m and 60 m resolution in the visible, near infrared (NIR) and shortwave infrared (SWIR). In this paper, we present a method to fuse the fine and coarse spatial resolution bands to obtain finer spatial resolution versions of the coarse bands. It is based on a deep convolutional neural network with a residual design that models the fusion problem. The residual architecture helps the network converge faster and allows for deeper networks by relieving the network of having to learn the coarse spatial resolution part of the inputs, enabling it to focus on constructing the missing fine spatial details. Using several real Sentinel-2 datasets, we study the effects of the most important hyperparameters on the quantitative quality of the fused image, compare the method to several state-of-the-art methods, and demonstrate that it outperforms them in experiments. Full article
(This article belongs to the Special Issue Recent Advances in Neural Networks for Remote Sensing)
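The residual design described above can be summarized as output = upsampled coarse band + predicted residual, so the network only has to learn the missing fine detail. A framework-free sketch of that skip connection (the CNN that predicts the residual is not reproduced; a zero residual stands in for an untrained network, and the band values are hypothetical):

```python
import numpy as np

def upsample_nearest(coarse, factor):
    """Nearest-neighbour upsampling of a coarse band (e.g. 20 m -> 10 m)."""
    return np.kron(coarse, np.ones((factor, factor)))

def residual_fusion(coarse, residual, factor=2):
    """Residual design: add the predicted residual to the upsampled input,
    so the model only has to construct the missing fine spatial detail."""
    return upsample_nearest(coarse, factor) + residual

# Hypothetical 2x2 coarse band and a zero residual (an untrained network):
coarse = np.array([[1.0, 2.0], [3.0, 4.0]])
fused = residual_fusion(coarse, np.zeros((4, 4)))
```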

Open Access Article Multiscale and Multifeature Segmentation of High-Spatial Resolution Remote Sensing Images Using Superpixels with Mutual Optimal Strategy
Remote Sens. 2018, 10(8), 1289; https://doi.org/10.3390/rs10081289
Received: 8 June 2018 / Revised: 7 August 2018 / Accepted: 9 August 2018 / Published: 15 August 2018
PDF Full-text (10288 KB) | HTML Full-text | XML Full-text
Abstract
High spatial resolution (HSR) image segmentation is considered a major challenge for object-oriented remote sensing applications and has been extensively studied in the past. In this paper, we propose a fast and efficient framework for multiscale and multifeature hierarchical image segmentation (MMHS). First, the HSR image pixels are clustered into a small number of superpixels using the simple linear iterative clustering (SLIC) algorithm on modern graphics processing units (GPUs), and then a region adjacency graph (RAG) and a nearest neighbors graph (NNG) are constructed from adjacent superpixels. The RAG and NNG integrate spectral, texture, and structural information from the small number of superpixels to enhance their expressiveness. Finally, a multiscale hierarchical grouping algorithm merges these superpixels using local-mutual best region merging (LMM). We compared the method experimentally with three state-of-the-art segmentation algorithms, i.e., the watershed transform segmentation (WTS) method, the mean shift (MS) method, and the multiresolution segmentation (MRS) method integrated in the commercial software eCognition 9, on New York HSR image datasets and the ISPRS Potsdam dataset. Computationally, our algorithm was dozens of times faster than the others, and it also had the best segmentation effect by visual assessment. The supervised and unsupervised evaluation results further proved the superiority of the MMHS algorithm. Full article
(This article belongs to the Special Issue Superpixel based Analysis and Classification of Remote Sensing Images)

Open Access Article Improvement in Surface Solar Irradiance Estimation Using HRV/MSG Data
Remote Sens. 2018, 10(8), 1288; https://doi.org/10.3390/rs10081288
Received: 10 July 2018 / Revised: 10 August 2018 / Accepted: 13 August 2018 / Published: 15 August 2018
PDF Full-text (5117 KB) | HTML Full-text | XML Full-text
Abstract
The Advanced Model for the Estimation of Surface Solar Irradiance (AMESIS) was developed at the Institute of Methodologies for Environmental Analysis of the National Research Council of Italy (IMAA-CNR) to derive surface solar irradiance from the SEVIRI radiometer on board the MSG geostationary satellite. The operational version of AMESIS has been running continuously at IMAA-CNR over all of Italy since 2017 in support of the monitoring of photovoltaic plants. The AMESIS operational model provides two different estimates of the surface solar irradiance: one is obtained considering only the low-resolution channels (SSI_VIS), while the other also takes into account the high-resolution HRV channel (SSI_HRV). This paper compares these two products against simultaneous ground-based observations from a network of 63 pyranometers for different sky conditions (clear, overcast and partially cloudy). Comparable statistical scores were obtained for both AMESIS products in clear and overcast situations. In terms of bias and correlation coefficient over partially cloudy sky, better performance is found for SSI_HRV (0.34 W/m2 and 0.995, respectively) than for SSI_VIS (−33.69 W/m2 and 0.862), at the expense of the greater run-time necessary to process the HRV data channel. Full article
(This article belongs to the Special Issue Solar Radiation, Modelling and Remote Sensing)
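The bias and correlation scores used in the comparison above are the usual mean error and Pearson coefficient against the pyranometer readings; a generic sketch (the irradiance samples are hypothetical, not the paper's data):

```python
import numpy as np

def validation_scores(estimate, ground):
    """Bias (mean error, W/m2) and Pearson correlation vs. ground observations."""
    e, g = np.asarray(estimate, float), np.asarray(ground, float)
    bias = float(np.mean(e - g))
    r = float(np.corrcoef(e, g)[0, 1])
    return bias, r

# Hypothetical irradiance samples (W/m2): satellite estimate vs. pyranometer.
bias, r = validation_scores([410.0, 620.0, 300.0], [400.0, 615.0, 310.0])
```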

Open Access Article Building Detection from VHR Remote Sensing Imagery Based on the Morphological Building Index
Remote Sens. 2018, 10(8), 1287; https://doi.org/10.3390/rs10081287
Received: 26 June 2018 / Revised: 6 August 2018 / Accepted: 9 August 2018 / Published: 15 August 2018
PDF Full-text (5560 KB) | HTML Full-text | XML Full-text
Abstract
Automatic detection of buildings from very high resolution (VHR) satellite images is a current research hotspot in remote sensing and computer vision. However, many irrelevant objects with spectral characteristics similar to buildings interfere with building detection, so accurate detection remains a challenging task, especially in images captured over complex environments. It is therefore crucial to develop a method that can effectively eliminate these interferences and accurately detect buildings in complex image scenes. To this end, a new building detection method based on the morphological building index (MBI) is proposed in this study. First, local feature points are detected in the VHR remote sensing imagery and optimized using a saliency index proposed in this study. Second, a voting matrix is calculated from these optimized local feature points to extract built-up areas. Finally, buildings are detected within the extracted built-up areas using the MBI algorithm. Experiments confirm that the proposed method can effectively and accurately detect buildings in VHR remote sensing images captured in complex environments. Full article

Open Access Article Detection of Temporary Flooded Vegetation Using Sentinel-1 Time Series Data
Remote Sens. 2018, 10(8), 1286; https://doi.org/10.3390/rs10081286
Received: 15 July 2018 / Revised: 5 August 2018 / Accepted: 12 August 2018 / Published: 15 August 2018
PDF Full-text (17034 KB) | HTML Full-text | XML Full-text
Abstract
The C-band Sentinel-1 satellite constellation enables continuous monitoring of the Earth’s surface with short revisit times. Thus, it provides Synthetic Aperture Radar (SAR) time series data that can be used to detect changes over time regardless of daylight or weather conditions. Within this study, a time series classification approach is developed for extracting the flood extent, with a focus on temporary flooded vegetation (TFV). The method is based on Sentinel-1 data, as well as auxiliary land cover information, and combines a pixel-based and an object-oriented approach. Multi-temporal characteristics and patterns are exploited to generate novel time series features, which form the basis of the developed approach. The method is tested on a study area in Namibia characterized by a large flood event in April 2017, using Sentinel-1 time series for the period between September 2016 and July 2017. It is shown that adding TFV areas to the temporary open water areas prevents underestimation of the flood area, allowing derivation of the entire flood extent. Furthermore, a quantitative evaluation of the generated flood mask was carried out using optical Sentinel-2 images, showing that overall accuracy increased by 27% after the inclusion of the TFV. Full article

Open Access Article Synergetic Use of Sentinel-1 and Sentinel-2 Data for Soil Moisture Mapping at Plot Scale
Remote Sens. 2018, 10(8), 1285; https://doi.org/10.3390/rs10081285
Received: 23 May 2018 / Revised: 27 June 2018 / Accepted: 2 July 2018 / Published: 15 August 2018
PDF Full-text (1632 KB) | HTML Full-text | XML Full-text
Abstract
This paper presents an approach for the retrieval of soil moisture content (SMC) by coupling single-polarization C-band synthetic aperture radar (SAR) and optical data at the plot scale in vegetated areas. The study was carried out at five sites with dominant vegetation cover located in Kenya. In the initial stage of the process, different features are extracted from single-polarization (VV) SAR and optical data; the relevant features are then selected from this extracted set. A state-of-the-art machine learning regression approach, the support vector regression (SVR) technique, is used to retrieve soil moisture. This paper takes a new look at soil moisture retrieval in vegetated areas considering the needs of practical applications: we worked at the object level instead of the pixel level, so that a group of pixels (an image object) represents the land cover at the plot scale. Three approaches were used to estimate soil moisture: a pixel-based approach, an object-based approach, and a combination of the two. The results show that the combined approach outperforms the other approaches in estimation accuracy (RMSE of 4.94% and R2 of 0.89, compared to 6.41% and 0.62 in terms of root mean square error (RMSE) and R2), in flexibility in retrieving the level of soil moisture, and in the visual quality of the SMC map. Full article
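As a rough illustration of the combined pixel- and object-based idea described above (not the authors' implementation), per-pixel SMC estimates can be blended with their plot-level (image object) means; the blending weight and data below are assumptions for illustration only:

```python
# Illustrative sketch: combining pixel- and object-level soil moisture
# estimates at plot scale. Weight and structure are hypothetical.

def object_level_smc(pixel_smc, object_ids):
    """Average per-pixel SMC estimates within each image object (plot)."""
    sums, counts = {}, {}
    for smc, oid in zip(pixel_smc, object_ids):
        sums[oid] = sums.get(oid, 0.0) + smc
        counts[oid] = counts.get(oid, 0) + 1
    return {oid: sums[oid] / counts[oid] for oid in sums}

def combined_smc(pixel_smc, object_ids, weight=0.5):
    """Blend each pixel estimate with its object mean (weight is assumed)."""
    obj_mean = object_level_smc(pixel_smc, object_ids)
    return [weight * p + (1 - weight) * obj_mean[oid]
            for p, oid in zip(pixel_smc, object_ids)]
```

The blend retains fine pixel detail while stabilising the estimate toward the plot mean, which is one plausible reading of the paper's combined approach.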

Open Access Article Road Centerline Extraction from Very-High-Resolution Aerial Image and LiDAR Data Based on Road Connectivity
Remote Sens. 2018, 10(8), 1284; https://doi.org/10.3390/rs10081284
Received: 31 July 2018 / Revised: 31 July 2018 / Accepted: 10 August 2018 / Published: 15 August 2018
PDF Full-text (11635 KB) | HTML Full-text | XML Full-text
Abstract
Road networks provide key information for a broad range of applications such as urban planning, urban management, and navigation. The fast-developing technology of remote sensing, which acquires high-resolution observational data of the land surface, offers opportunities for the automatic extraction of road networks. However, road networks extracted from remote sensing images are often affected by shadows and trees, making the road map irregular and inaccurate. This research aims to improve the extraction of road centerlines using both very-high-resolution (VHR) aerial images and light detection and ranging (LiDAR) data by accounting for road connectivity. The proposed method first applies the fractal net evolution approach (FNEA) to segment remote sensing images into image objects, and then classifies the image objects using a machine learning classifier, random forest. A post-processing approach based on the minimum area bounding rectangle (MABR) is proposed, and a structural feature index is adopted to obtain complete road networks. Finally, a multistep approach combining morphology thinning, Harris corner detection, and least squares fitting (MHL) is designed to accurately extract road centerlines from the complex road networks. The proposed method is applied to three datasets: the New York dataset obtained from the object identification dataset, the Vaihingen dataset obtained from the International Society for Photogrammetry and Remote Sensing (ISPRS) 2D semantic labelling benchmark, and the Guangzhou dataset. Compared with two state-of-the-art methods, the proposed method obtains the highest completeness, correctness, and quality on all three datasets. The experimental results show that the proposed method is an efficient solution for extracting road centerlines in complex scenes from VHR aerial images and LiDAR data. Full article
(This article belongs to the Section Remote Sensing Image Processing)

Open Access Review Flood Prevention and Emergency Response System Powered by Google Earth Engine
Remote Sens. 2018, 10(8), 1283; https://doi.org/10.3390/rs10081283
Received: 4 July 2018 / Revised: 30 July 2018 / Accepted: 10 August 2018 / Published: 14 August 2018
PDF Full-text (10691 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
This paper reviews the efforts made and experience gained in developing the Flood Prevention and Emergency Response System (FPERS), powered by Google Earth Engine, focusing on its applications at the three stages of a flood. At the post-flood stage, FPERS integrates various remote sensing imagery, including Formosat-2 optical imagery to detect and monitor barrier lakes, synthetic aperture radar imagery to derive inundation maps, and high-spatial-resolution photographs taken by unmanned aerial vehicles to evaluate damage to river channels and structures. At the pre-flood stage, a huge amount of geospatial data is integrated into FPERS and categorized as typhoon forecast and archive, disaster prevention and warning, disaster events and analysis, or basic data and layers. At the during-flood stage, three strategies are implemented to facilitate access to real-time data: presenting the key information, making sound recommendations, and supporting decision-making. The example of Typhoon Soudelor in August 2015 is used to demonstrate how FPERS was employed to support flood prevention and emergency response from 2013 to 2016. The capability of switching among different topographic models and the flexibility of managing and searching data through a geospatial database are also explained, and suggestions are made for future work. Full article
(This article belongs to the Special Issue Google Earth Engine Applications)

Open Access Article Crop Classification in a Heterogeneous Arable Landscape Using Uncalibrated UAV Data
Remote Sens. 2018, 10(8), 1282; https://doi.org/10.3390/rs10081282
Received: 22 June 2018 / Revised: 3 August 2018 / Accepted: 10 August 2018 / Published: 14 August 2018
PDF Full-text (7518 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
Land cover maps are indispensable for decision making, monitoring, and management in agricultural areas, but they are often only available after harvesting. To obtain a timely crop map of a small-scale arable landscape in the Swiss Plateau, we acquired uncalibrated, very high-resolution data with a spatial resolution of 0.05 m and four spectral bands, using a consumer-grade camera on an unmanned aerial vehicle (UAV) in June 2015. We resampled the data to different spatial and spectral resolutions and evaluated the classification using textural features (first-order statistics and mathematical morphology), a random forest classifier, and varying numbers and sizes of the structuring elements. The best-performing configuration consists of a spatial resolution of 0.5 m, three spectral bands (RGB: red, green, and blue), and five different sizes of the structuring elements. The overall accuracy (OA) for the full set of crop classes based on a pixel-based classification is 66.7%. For a merged set of crops, the OA increases by ~7% (to 74.0%). For an object-based classification based on individual field parcels, the OA increases by ~20% (86.3% for the full set of crop classes and 94.6% for the merged set, respectively). We conclude that, for crop classification in heterogeneous arable landscapes, UAV data are most useful at 0.5 m spatial resolution. Full article
(This article belongs to the Section Remote Sensing in Agriculture and Vegetation)

Open Access Article Reconstruction of Three-Dimensional (3D) Indoor Interiors with Multiple Stories via Comprehensive Segmentation
Remote Sens. 2018, 10(8), 1281; https://doi.org/10.3390/rs10081281
Received: 3 July 2018 / Revised: 1 August 2018 / Accepted: 9 August 2018 / Published: 14 August 2018
PDF Full-text (9744 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
The fast and stable reconstruction of building interiors from scanned point clouds has recently attracted considerable research interest. However, reconstructing long corridors and areas connected across multiple floors remains a substantial challenge. This paper presents a comprehensive segmentation method for reconstructing a three-dimensional (3D) indoor structure with multiple stories. With this method, the over-segmentation that usually occurs when reconstructing long corridors in a complex indoor environment is overcome by morphologically eroding the floor space to segment rooms and by overlapping the segmented room-space with cells partitioned via extracted wall lines. This segmentation ensures both the integrity of the room-space partitions and the geometric regularity of the rooms. For spaces spanning multiple floors in a multistory building, a peak-nadir-peak strategy in the distribution of points along the z-axis is proposed in order to extract the connected areas. A series of experimental tests using seven real-world 3D scans and eight synthetic models of indoor environments shows the effectiveness and feasibility of the proposed method. Full article
(This article belongs to the Special Issue 3D Modelling from Point Clouds: Algorithms and Methods)

Open Access Article Drivers of Landscape Changes in Coastal Ecosystems on the Yukon-Kuskokwim Delta, Alaska
Remote Sens. 2018, 10(8), 1280; https://doi.org/10.3390/rs10081280
Received: 1 June 2018 / Revised: 15 July 2018 / Accepted: 3 August 2018 / Published: 14 August 2018
PDF Full-text (43950 KB) | HTML Full-text | XML Full-text
Abstract
The Yukon-Kuskokwim Delta (YKD) is the largest delta in western North America and its productive coastal ecosystems support globally significant populations of breeding birds and a large indigenous population. To quantify past landscape changes as a guide to assessing future climate impacts to the YKD and how indigenous society may adapt to change, we photo-interpreted ecotypes at 600 points within 12 grids in a 2118 km2 area along the central YKD coast using a time-series of air photos from 1948–1955 and 1980 and satellite images from 2007–2008 (IKONOS) and 2013–2016 (WorldView). We found that ecotype classes changed 16.2% (342 km2) overall during the ~62 years. Ecotypes changed 6.0% during 1953–1980, 7.2% during 1980–2007 and 3.8% during 2007–2015. Lowland Moist Birch-Ericaceous Low Scrub (−5.0%) and Coastal Saline Flat Barrens (−2.3%) showed the greatest decreases in area, while Lowland Water Sedge Meadow (+1.7%) and Lacustrine Marestail Marsh (+1.3%) showed the largest increases. Dominant processes affecting change were permafrost degradation (5.3%), channel erosion (3.0%), channel deposition (2.2%), vegetation colonization (2.3%) and lake drainage (1.5%), while sedimentation, water-level fluctuations, permafrost aggradation and shoreline paludification each affected <0.5% of the area. Rates of change increased dramatically in the late interval for permafrost degradation (from 0.06 to 0.26%/year) and vegetation colonization (from 0.03 to 0.16%/year), while there was a small decrease in channel deposition (from 0.05 to 0.0%/year) due largely to barren mudflats being colonized by vegetation. In contrast, rates of channel erosion remained fairly constant. The increased permafrost degradation coincided with increasing storm frequency and air temperatures. 
We attribute increased permafrost degradation and vegetation colonization during the recent interval mostly to the effects of a large storm in 2005, which caused extensive salt-kill of vegetation along the margins of permafrost plateaus and burial of vegetation on active tidal flats by mud that was later recolonized. Due to the combination of extremely flat terrain, sea-level rise, sea-ice reduction that facilitates more storm flooding and accelerating permafrost degradation, we believe the YKD is the most vulnerable region in the Arctic to climate warming. Full article
(This article belongs to the Special Issue Remote Sensing of Dynamic Permafrost Regions)

Open Access Article A New Method for Mapping Aquatic Vegetation Especially Underwater Vegetation in Lake Ulansuhai Using GF-1 Satellite Data
Remote Sens. 2018, 10(8), 1279; https://doi.org/10.3390/rs10081279
Received: 18 July 2018 / Revised: 9 August 2018 / Accepted: 12 August 2018 / Published: 14 August 2018
PDF Full-text (3631 KB) | HTML Full-text | XML Full-text
Abstract
It is difficult to accurately identify and extract bodies of water and underwater vegetation from satellite images using conventional vegetation indices, as the strong absorption of water weakens the high near-infrared (NIR) reflectance signature of underwater vegetation in shallow lakes. This study used the shallow Lake Ulansuhai in the semi-arid region of China as a research site and proposes a new concave–convex decision function to detect submerged aquatic vegetation (SAV) and identify bodies of water using Gao Fen 1 (GF-1) multi-spectral satellite images with a resolution of 16 m acquired in July and August 2015. At the same time, emergent vegetation, “Huangtai algae bloom”, and SAV were classified simultaneously by a decision tree method. Verified against field samples, classification accuracy in July and August was 92.17% and 91.79%, respectively, demonstrating that GF-1 data, with a four-day revisit period and high spatial resolution, can meet the accuracy standards required for aquatic vegetation extraction. The results indicate that the concave–convex decision function is superior to traditional classification methods in distinguishing water from SAV, significantly improving SAV classification accuracy. The concave–convex decision function can be applied to waters with SAV coverage greater than 40% above 0.3 m, and SAV coverage of 40% above 0.1 m under 1.5 m transparency, and can provide a new method for the accurate extraction of SAV in other regions. Full article
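The concave–convex decision function itself is not spelled out in this abstract. The sketch below illustrates the general idea of testing whether the green–red–NIR reflectance shape bows above or below the straight line joining the green and NIR bands; the band wavelengths and the decision rule are assumptions for illustration, not the paper's formulation:

```python
# Hypothetical sketch of a concave/convex spectral-shape test.
# Band centre wavelengths (nm) are illustrative assumptions.

def spectral_shape(green, red, nir,
                   wl_green=555.0, wl_red=660.0, wl_nir=830.0):
    """Classify the green-red-NIR reflectance shape as 'convex' or 'concave'.

    The red reflectance is compared against the straight line joining the
    green and NIR reflectances at the red wavelength: above the line is
    convex, below (or on) the line is concave.
    """
    t = (wl_red - wl_green) / (wl_nir - wl_green)
    expected_red = green + t * (nir - green)  # linear interpolation
    return "convex" if red > expected_red else "concave"
```

Strongly absorbing open water tends to produce a monotonically decreasing (concave) shape toward the NIR, which is one plausible basis for separating water from SAV pixels.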
(This article belongs to the Special Issue Novel Advances in Aquatic Vegetation Monitoring in Lakes and Rivers)

Open Access Article SLALOM: An All-Surface Snow Water Path Retrieval Algorithm for the GPM Microwave Imager
Remote Sens. 2018, 10(8), 1278; https://doi.org/10.3390/rs10081278
Received: 19 June 2018 / Revised: 10 August 2018 / Accepted: 12 August 2018 / Published: 14 August 2018
PDF Full-text (3932 KB) | HTML Full-text | XML Full-text
Abstract
This paper describes a new algorithm that is able to detect snowfall and retrieve the associated snow water path (SWP), for any surface type, using the Global Precipitation Measurement (GPM) Microwave Imager (GMI). The algorithm is tuned and evaluated against coincident observations of the Cloud Profiling Radar (CPR) onboard CloudSat. It is composed of three modules for (i) snowfall detection, (ii) supercooled droplet detection and (iii) SWP retrieval. This algorithm takes into account environmental conditions to retrieve SWP and does not rely on any surface classification scheme. The snowfall detection module is able to detect 83% of snowfall events including light SWP (down to 1 × 10−3 kg·m−2) with a false alarm ratio of 0.12. The supercooled detection module detects 97% of events, with a false alarm ratio of 0.05. The SWP estimates show a relative bias of −11%, a correlation of 0.84 and a root mean square error of 0.04 kg·m−2. Several applications of the algorithm are highlighted: Three case studies of snowfall events are investigated, and a 2-year high resolution 70°S–70°N snowfall occurrence distribution is presented. These results illustrate the high potential of this algorithm for snowfall detection and SWP retrieval using GMI. Full article
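The detection scores quoted above (e.g., 83% of snowfall events detected with a false alarm ratio of 0.12) follow standard contingency-table definitions against the CloudSat CPR reference. A minimal sketch with hypothetical counts:

```python
# Standard categorical verification scores; the counts used in the test
# below are hypothetical, not the paper's data.

def detection_scores(hits, misses, false_alarms):
    """Probability of detection (POD) and false alarm ratio (FAR).

    hits:         events detected by GMI and confirmed by the CPR reference
    misses:       reference events the algorithm failed to detect
    false_alarms: detections with no corresponding reference event
    """
    pod = hits / (hits + misses)
    far = false_alarms / (hits + false_alarms)
    return pod, far
```

With these definitions, a detection rate of 0.83 means POD = 0.83, and FAR = 0.12 means 12% of all flagged events were spurious.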
(This article belongs to the Special Issue Remote Sensing of Precipitation)

Open Access Article Potential of Multi-Temporal ALOS-2 PALSAR-2 ScanSAR Data for Vegetation Height Estimation in Tropical Forests of Mexico
Remote Sens. 2018, 10(8), 1277; https://doi.org/10.3390/rs10081277
Received: 28 June 2018 / Revised: 2 August 2018 / Accepted: 12 August 2018 / Published: 14 August 2018
PDF Full-text (5915 KB) | HTML Full-text | XML Full-text
Abstract
Information on the spatial distribution of forest structure parameters (e.g., aboveground biomass, vegetation height) is crucial for assessing terrestrial carbon stocks and emissions. In this study, we assessed the potential and merit of multi-temporal dual-polarised L-band observations for vegetation height estimation in tropical deciduous and evergreen forests of Mexico. We estimated vegetation height from dual-polarised L-band observations using a machine learning approach, with airborne LiDAR-based vegetation height used for model training and result validation. We split the LiDAR-based vegetation height into training and test data using two different approaches, i.e., considering and ignoring spatial autocorrelation between training and test data. Our results indicate that ignoring spatial autocorrelation leads to an overoptimistic estimate of the model’s predictive performance; accordingly, spatial splitting of the reference data should be preferred in order to report realistic retrieval accuracies. Moreover, the model’s predictive performance increases with the number of spatial predictors and training samples, but saturates at a specific level (at 12 dual-polarised L-band backscatter measurements and around 20% of all training samples). Considering spatial autocorrelation between training and test data, we determined an optimal number of L-band observations and training samples as a trade-off between retrieval accuracy and data collection effort. In summary, our study demonstrates the merit of multi-temporal ScanSAR L-band observations for the estimation of vegetation height at larger scales and provides a workflow for robust predictions of this parameter. Full article

Open Access Article SnowCloudHydro—A New Framework for Forecasting Streamflow in Snowy, Data-Scarce Regions
Remote Sens. 2018, 10(8), 1276; https://doi.org/10.3390/rs10081276
Received: 19 June 2018 / Revised: 23 July 2018 / Accepted: 1 August 2018 / Published: 13 August 2018
PDF Full-text (2368 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
We tested the efficacy and skill of SnowCloud, a prototype web-based, cloud-computing framework for snow mapping and hydrologic modeling. SnowCloud is the overarching framework that functions within the Google Earth Engine cloud-computing environment. SnowCloudMetrics is a sub-component of SnowCloud that provides users with spatially and temporally composited snow cover information in an easy-to-use format. SnowCloudHydro is a simple spreadsheet-based model that uses Snow Cover Frequency (SCF) output from SnowCloudMetrics as a key model input. In this application, SnowCloudMetrics rapidly converts NASA’s Moderate Resolution Imaging Spectroradiometer (MODIS) daily snow cover product (MOD10A1) into a monthly snow cover frequency for a user-specified watershed area. SnowCloudHydro uses SCF and prior monthly streamflow to forecast streamflow for the subsequent month. We tested the skill of SnowCloudHydro in three snow-dominated headwaters that represent a range of precipitation/snowmelt runoff categories: the Río Elqui in Northern Chile, the John Day River in the Northwestern United States, and the Río Aragón in Northern Spain. The skill of the SnowCloudHydro model corresponded directly to snowpack contributions to streamflow: watersheds with proportionately more snowmelt than rain gave better results (R2 values of 0.88, 0.52, and 0.22, respectively). To test the user experience of SnowCloud, we provided the tools and tutorials in English and Spanish to water resource managers in Chile, Spain, and the United States. Participants assessed their user experience, which was generally very positive. While these initial results focus on SnowCloud, they outline methods for developing cloud-based tools that can function effectively across cultures and languages. Our approach also addresses the primary challenges of science-based computing: human resource limitations, infrastructure costs, and expensive proprietary software. These challenges are particularly problematic in countries where scientific and computational resources are underdeveloped. Full article
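The forecasting step described above amounts to regressing next month's streamflow on the current month's SCF and streamflow. The sketch below fits such a model by ordinary least squares using only the standard library; the variable names and sample data are illustrative assumptions, not the calibrated SnowCloudHydro model:

```python
# Minimal OLS sketch of a SnowCloudHydro-style forecast:
#   Q[t+1] = c + a * SCF[t] + b * Q[t]
# Fitted via normal equations with Gaussian elimination (stdlib only).

def fit_ols(X, y):
    """Fit y = beta[0] + beta[1]*x1 + beta[2]*x2 by least squares.

    X: list of [scf, q_prev] rows; an intercept column is prepended.
    Returns [intercept, a_scf, b_qprev].
    """
    rows = [[1.0] + list(r) for r in X]
    n = len(rows[0])
    # Normal equations A beta = b
    A = [[sum(r[i] * r[j] for r in rows) for j in range(n)] for i in range(n)]
    b = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(n)]
    # Gaussian elimination with partial pivoting
    for col in range(n):
        piv = max(range(col, n), key=lambda k: abs(A[k][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for k in range(col + 1, n):
            f = A[k][col] / A[col][col]
            for j in range(col, n):
                A[k][j] -= f * A[col][j]
            b[k] -= f * b[col]
    # Back substitution
    beta = [0.0] * n
    for i in reversed(range(n)):
        beta[i] = (b[i] - sum(A[i][j] * beta[j]
                              for j in range(i + 1, n))) / A[i][i]
    return beta

def forecast(beta, scf, q_prev):
    """One-month-ahead streamflow forecast from fitted coefficients."""
    return beta[0] + beta[1] * scf + beta[2] * q_prev
```

In practice the coefficients would be calibrated per watershed against the historical record, which is consistent with the model's spreadsheet-based design.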
(This article belongs to the Special Issue Google Earth Engine Applications)

Open Access Article Focusing High-Resolution Airborne SAR with Topography Variations Using an Extended BPA Based on a Time/Frequency Rotation Principle
Remote Sens. 2018, 10(8), 1275; https://doi.org/10.3390/rs10081275
Received: 4 July 2018 / Revised: 30 July 2018 / Accepted: 10 August 2018 / Published: 13 August 2018
PDF Full-text (4576 KB) | HTML Full-text | XML Full-text
Abstract
As resolution requirements increase, neglecting topography variations causes serious phase errors, which degrade the focusing quality of synthetic aperture radar (SAR) imagery and introduce geometric distortion. Hence, a precise and fast algorithm is necessary for high-resolution airborne SAR. In this paper, an extended back-projection (EBP) algorithm is proposed to compensate for the phase errors caused by topography variations, with three-dimensional (3D) variation processed in the time domain. Firstly, the quadratic phase error (QPE) introduced by topography variations is analyzed in detail for high-resolution airborne SAR. Then, the key operation, a time/frequency rotation, is applied to decrease the number of samples in the azimuth time domain. Like the time/frequency rotation of the conventional two-step approach, this operation compresses data in the azimuth time domain and reduces the computational burden of the conventional back-projection algorithm, which is applied last in the time-domain processing. Simulation results validate that the proposed algorithm, including both frequency-domain and time-domain processing, achieves good focusing performance while requiring far less computation than the conventional algorithm, making it highly practical. Full article

Open Access Article A Performance Evaluation of a Geo-Spatial Image Processing Service Based on Open Source PaaS Cloud Computing Using Cloud Foundry on OpenStack
Remote Sens. 2018, 10(8), 1274; https://doi.org/10.3390/rs10081274
Received: 13 June 2018 / Revised: 11 August 2018 / Accepted: 13 August 2018 / Published: 13 August 2018
PDF Full-text (3939 KB) | HTML Full-text | XML Full-text
Abstract
Web application services based on cloud computing technologies are increasingly being offered. In web-based geo-spatial data management and processing, data processing services are built and operated using various information and communication technologies. Platform-as-a-Service (PaaS) is a cloud computing service model that provides a platform allowing service providers to implement, execute, and manage applications without the complexity of establishing and maintaining the lower-level infrastructure components typically related to application development and launching. Applying non-proprietary PaaS cloud computing offers advantages in terms of cost-effectiveness and service expansion, yet there have been few studies on the use of PaaS technologies to build geo-spatial application services. This study applied open source PaaS technologies to a geo-spatial image processing service and evaluated the performance of that service with respect to the Web Processing Service (WPS) 2.0 specification of the Open Geospatial Consortium (OGC), after deploying a test application on the configured cloud environment. Using these components, the performance of an edge extraction algorithm on the test system was assessed in three cases, of 300, 500, and 700 threads, through a comparison test against another test system, in the same three cases, using Infrastructure-as-a-Service (IaaS) without Load Balancer-as-a-Service (LBaaS). According to the experimental results, in all the WPS execution test cases considered in this study, the PaaS-based geo-spatial service showed higher performance and lower error rates than the IaaS-based cloud without LBaaS. Full article
Figures: Graphical abstract
Open AccessArticle Colour Classification of 1486 Lakes across a Wide Range of Optical Water Types
Remote Sens. 2018, 10(8), 1273; https://doi.org/10.3390/rs10081273
Received: 1 July 2018 / Revised: 7 August 2018 / Accepted: 9 August 2018 / Published: 13 August 2018
PDF Full-text (8243 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
Remote sensing by satellite-borne sensors presents a significant opportunity to enhance the spatio-temporal coverage of environmental monitoring programmes for lakes, but the estimation of classic water quality attributes from inland water bodies has not reached operational status due to the difficulty of discerning the spectral signatures of optically active water constituents. Determination of water colour, as perceived by the human eye, does not require knowledge of inherent optical properties and therefore represents a generally applicable remotely-sensed water quality attribute. In this paper, we implemented a recent algorithm for the retrieval of colour parameters (hue angle, dominant wavelength) and derived a new correction for colour purity to account for the spectral bandpass of the Landsat 8 Operational Land Imager (OLI). We used this algorithm to calculate water colour for almost 45,000 observations over four years from 1486 lakes spanning a diverse range of optical water types in New Zealand. We show that the most prevalent lake colours are yellow-orange and blue, while green observations are comparatively rare. About 40% of the study lakes show transitions between colours at a range of time scales, including seasonal. A preliminary exploratory analysis suggests that both geophysical and anthropogenic factors, such as catchment land use, exert environmental control on lake colour and are promising avenues for future analysis. Full article
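The hue-angle parameter named above can be illustrated with its standard final step: once a chromaticity coordinate has been derived from the (bandpass-corrected) OLI reflectances, the hue angle is the angle of that coordinate around the white point. The sketch below assumes CIE 1931 (x, y) chromaticities and an equal-energy white point at (1/3, 1/3); the derivation of chromaticity from the OLI bands, and the purity correction, are not shown.

```python
import math

# Equal-energy white point (x = y = 1/3), as commonly used when deriving
# the hue angle from CIE 1931 chromaticity coordinates (assumption).
WHITE = (1.0 / 3.0, 1.0 / 3.0)

def hue_angle(x, y):
    """Hue angle in degrees, measured counter-clockwise around the white
    point from the positive x-axis, wrapped to [0, 360)."""
    dx, dy = x - WHITE[0], y - WHITE[1]
    return math.degrees(math.atan2(dy, dx)) % 360.0

# A chromaticity left of and below the white point, as for clear water:
print(round(hue_angle(0.25, 0.30), 1))  # → 201.8
```

The dominant wavelength is then read off by intersecting the ray at this angle with the spectral locus, a lookup not reproduced here.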
(This article belongs to the Special Issue Remote Sensing of Inland Waters and Their Catchments)
Figures: Graphical abstract
Open AccessArticle Mapping Damage-Affected Areas after Natural Hazard Events Using Sentinel-1 Coherence Time Series
Remote Sens. 2018, 10(8), 1272; https://doi.org/10.3390/rs10081272
Received: 30 June 2018 / Revised: 6 August 2018 / Accepted: 8 August 2018 / Published: 13 August 2018
PDF Full-text (15778 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
The emergence of the Sentinel-1A and 1B satellites now offers freely available and widely accessible Synthetic Aperture Radar (SAR) data. Near-global coverage and rapid repeat time (6–12 days) gives Sentinel-1 data the potential to be widely used for monitoring the Earth’s surface. Subtle land-cover and land surface changes can affect the phase and amplitude of the C-band SAR signal, and thus the coherence between two images collected before and after such changes. Analysis of SAR coherence therefore serves as a rapidly deployable and powerful tool to track both seasonal changes and rapid surface disturbances following natural disasters. An advantage of using Sentinel-1 C-band radar data is the ability to easily construct time series of coherence for a region of interest at low cost. In this paper, we propose a new method for Potentially Affected Area (PAA) detection following a natural hazard event. Based on the coherence time series, the proposed method (1) determines the natural variability of coherence within each pixel in the region of interest, accounting for factors such as seasonality and the inherent noise of variable surfaces; and (2) compares pixel-by-pixel syn-event coherence to temporal coherence distributions to determine where statistically significant coherence loss has occurred. The user can determine to what degree the syn-event coherence value (e.g., 1st, 5th percentile of pre-event distribution) constitutes a PAA, and integrate pertinent regional data, such as population density, to rank and prioritise PAAs. We apply the method to two case studies, Sarpol-e, Iran following the 2017 Iran-Iraq earthquake, and a landslide-prone region of NW Argentina, to demonstrate how rapid identification and interpretation of potentially affected areas can be performed shortly following a natural hazard event. Full article
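The per-pixel test described above can be sketched with NumPy. The array names and toy data are illustrative only, and the paper's method additionally models the natural (e.g., seasonal) variability of each pixel's coherence rather than applying a raw percentile alone.

```python
import numpy as np

def flag_paa(pre_stack, syn, percentile=5.0):
    """Boolean map: True where the syn-event coherence falls below the given
    percentile of each pixel's pre-event coherence distribution.
    pre_stack has shape (time, rows, cols); syn has shape (rows, cols)."""
    threshold = np.percentile(pre_stack, percentile, axis=0)
    return syn < threshold

rng = np.random.default_rng(0)
pre = rng.uniform(0.4, 0.9, size=(12, 4, 4))  # 12 pre-event coherence maps
syn = rng.uniform(0.4, 0.9, size=(4, 4))      # co-event coherence map
syn[0, 0] = 0.1                               # simulate coherence loss
mask = flag_paa(pre, syn)
print(bool(mask[0, 0]))  # → True
```

Flagged pixels would then be aggregated into PAAs and ranked with regional data such as population density, as described above.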
Figures: Figure 1
Open AccessArticle Spectral and Spatial Classification of Hyperspectral Images Based on Random Multi-Graphs
Remote Sens. 2018, 10(8), 1271; https://doi.org/10.3390/rs10081271
Received: 10 July 2018 / Revised: 5 August 2018 / Accepted: 9 August 2018 / Published: 12 August 2018
PDF Full-text (1767 KB) | HTML Full-text | XML Full-text
Abstract
Hyperspectral image classification has been acknowledged as a fundamental and challenging task in hyperspectral data processing. The abundance of spectral and spatial information provides great opportunities to effectively characterize and identify ground materials. In this paper, we propose a spectral and spatial classification framework for hyperspectral images based on Random Multi-Graphs (RMGs). The RMG is a graph-based ensemble learning method that has rarely been considered in hyperspectral image classification. It has been empirically verified that the semi-supervised RMG deals well with small-sample problems, which are very common in hyperspectral image applications. In the proposed method, spatial features are extracted based on linear prediction error analysis and local binary patterns; spatial and spectral features are then stacked into high-dimensional vectors, which are fed into the RMG for classification. By randomly selecting a subset of features to create each graph, the proposed method achieves excellent classification performance. Experiments on three real hyperspectral datasets demonstrate that the proposed method outperforms several closely related methods. Full article
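The random-feature-subset ensemble idea can be sketched as follows. Note the substitution: the actual RMG builds a graph from each random feature subset, whereas this sketch uses a simple 1-nearest-neighbour rule as the per-subset base learner and majority-votes the results; all names and the toy data are hypothetical.

```python
import random

def one_nn(train_X, train_y, x, feats):
    """Label of the nearest training sample, using only the given features."""
    def dist(a):
        return sum((a[f] - x[f]) ** 2 for f in feats)
    return min(zip(train_X, train_y), key=lambda t: dist(t[0]))[1]

def rmg_like_predict(train_X, train_y, x, n_graphs=15, subset_size=2, seed=0):
    """Majority vote over n_graphs base learners, each restricted to a
    random feature subset (the actual RMG builds a graph per subset)."""
    rng = random.Random(seed)
    n_feats = len(train_X[0])
    votes = [one_nn(train_X, train_y, x,
                    rng.sample(range(n_feats), subset_size))
             for _ in range(n_graphs)]
    return max(set(votes), key=votes.count)

# Toy "stacked spectral + spatial" 4-feature vectors for two classes:
X = [[0.1, 0.2, 0.1, 0.0], [0.2, 0.1, 0.0, 0.1],
     [0.9, 0.8, 1.0, 0.9], [0.8, 0.9, 0.9, 1.0]]
y = ["grass", "grass", "soil", "soil"]
print(rmg_like_predict(X, y, [0.85, 0.9, 0.95, 0.9]))  # → soil
```

Because each base learner sees only a random slice of the stacked feature vector, the ensemble remains robust when labelled samples are scarce relative to the feature dimension.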
Figures: Graphical abstract