
Table of Contents

Remote Sens., Volume 12, Issue 2 (January-2 2020) – 140 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and open it with the free Adobe Reader.
Cover Story: We present a simplified atmospheric correction algorithm for snow/ice albedo retrievals using [...]
Open Access Article
Multi-Type Forest Change Detection Using BFAST and Monthly Landsat Time Series for Monitoring Spatiotemporal Dynamics of Forests in Subtropical Wetland
Remote Sens. 2020, 12(2), 341; https://doi.org/10.3390/rs12020341 - 20 Jan 2020
Abstract
Land cover changes, especially excessive economic forest plantations, have significantly threatened the ecological security of the West Dongting Lake wetland in China. This work investigated the spatiotemporal dynamics of forests in the West Dongting Lake region from 2000 to 2018 using a reconstructed monthly Landsat NDVI time series. Multi-type forest changes, including conversion from forest to another land cover category, conversion from another land cover category to forest, and conversion from forest to forest (such as flooding and replantation post-deforestation), together with the land cover categories before and after each change, were effectively detected by integrating Breaks For Additive Seasonal and Trend (BFAST) and random forest algorithms with the monthly NDVI time series, with an overall accuracy of 87.8%. All forest regions were extracted by creating a forest mask for each image in the time series and merging these masks into an 'anytime' forest mask; the spatiotemporal dynamics of forest were then analyzed on the basis of the detected multi-type forest changes and classifications. The forests are principally distributed in the core zone of West Dongting Lake surrounding the water body and in the southwestern mountains. Forest changes in the core zone and low-elevation regions are prevalent and frequent. The variation of forest area in West Dongting Lake went through three stages: rapid expansion of forest plantations from 2000 to 2005, relative stability from 2006 to 2011, and continuous decline since 2011, mainly driven by anthropogenic factors such as government policies and economic profits. This study demonstrated the applicability of the integrated BFAST method for detecting multi-type forest changes using dense Landsat time series in a subtropical wetland ecosystem with low data availability. Full article
(This article belongs to the Special Issue Monitoring Forest Change with Remote Sensing)
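BFAST itself decomposes a time series into trend and seasonal components before testing for structural breaks; as an illustrative stand-in (not the authors' implementation), a single mean-shift break in a deseasonalized NDVI series can be located by minimizing the pooled residual sum of squares. Function names and the synthetic series below are hypothetical:

```python
import numpy as np

def detect_break(series, min_seg=6):
    """Index of the single most likely mean shift in a 1D series.

    Scans every admissible split point and keeps the one that minimizes
    the pooled residual sum of squares of the two segments; a toy
    stand-in for BFAST's structural-break search.
    """
    x = np.asarray(series, dtype=float)
    best_idx, best_rss = None, np.inf
    for i in range(min_seg, len(x) - min_seg):
        left, right = x[:i], x[i:]
        rss = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if rss < best_rss:
            best_rss, best_idx = rss, i
    return best_idx

# Synthetic monthly NDVI: forest (~0.8) cleared at month 48 (~0.3 afterwards).
rng = np.random.default_rng(0)
ndvi = np.concatenate([0.8 + 0.02 * rng.standard_normal(48),
                       0.3 + 0.02 * rng.standard_normal(24)])
break_month = detect_break(ndvi)
```

In the paper's setting this search runs per pixel on the deseasonalized component, and a classifier (random forest) then labels the land cover before and after each detected break.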

Open Access Article
Variability of the Boundary Layer Over an Urban Continental Site Based on 10 Years of Active Remote Sensing Observations in Warsaw
Remote Sens. 2020, 12(2), 340; https://doi.org/10.3390/rs12020340 - 20 Jan 2020
Abstract
Atmospheric boundary layer height (ABLH) was observed by the CHM15k ceilometer (January 2008 to October 2013) and the PollyXT lidar (July 2013 to December 2018) over the European Aerosol Research LIdar NETwork to Establish an Aerosol Climatology (EARLINET) site at the Remote Sensing Laboratory (RS-Lab) in Warsaw, Poland. Out of a maximum of 4017 observational days within this period, a subset of quasi-continuous measurements conducted with these instruments at the same wavelength (1064 nm) was carefully chosen, providing a sample of 1841 diurnal-cycle ABLH observations. The ABLHs were derived from ceilometer and lidar signals using the wavelet covariance transform (WCT), gradient (GDT), and standard deviation (STD) methods. For comparison, rawinsondes of the World Meteorological Organization (WMO site 12374 in Legionowo, 25 km from the RS-Lab) were used. The ABLHs derived from rawinsondes by the skew-T-log-p method and the bulk Richardson (bulk-Ri) method had a linear correlation coefficient (R²) of 0.9 and a standard deviation (SD) of 0.32 km. A comparison of the ABLHs obtained with the different methods and instruments indicated relatively good agreement. The ABLHs estimated from the rawinsondes with the bulk-Ri method had the highest correlations, with R² of 0.80 and 0.70 against the ABLHs determined using the WCT method on ceilometer and lidar signals, respectively. The three methods applied to simultaneous, collocated lidar and ceilometer observations (July to October 2013) showed good agreement, especially for the WCT method (R² of 0.94, SD of 0.19 km). A scaling threshold-based algorithm was proposed to homogenize the ceilometer and lidar datasets; applied to the lidar data, it significantly improved the coherence of the results (R² of 0.98, SD of 0.11 km). The difference in ABLH between clear-sky and cloudy conditions was on average below 230 m for the ceilometer and below 70 m for the lidar retrievals.
The statistical analysis of the long-term observations indicated that the monthly mean ABLHs varied throughout the year between 0.6 and 1.8 km. The seasonal mean ABLH was 1.16 ± 0.16 km in spring, 1.34 ± 0.15 km in summer, 0.99 ± 0.11 km in autumn, and 0.73 ± 0.08 km in winter. In spring and summer, the daytime and nighttime ABLHs appeared mainly in a frequency distribution range of 0.6 to 1.0 km. In winter, the distribution was concentrated between 0.2 and 0.6 km; in autumn, it was relatively balanced between 0.2 and 1.2 km. The annual mean ABLHs remained between 0.77 and 1.16 km, whereby the mean heights of the well-mixed, residual, and nocturnal layers were 1.14 ± 0.11, 1.27 ± 0.09, and 0.71 ± 0.06 km, respectively (for clear-sky conditions). Over the whole observation period, ABLHs below 1 km constituted more than 60% of the retrievals. A strong seasonal change of the monthly mean ABLH diurnal cycle was evident: a weakly defined autumn diurnal cycle, followed by a somewhat flat winter cycle, then a sharp transition to a spring cycle, and a high bell-like summer cycle. A prolonged summertime was manifested by the September cycle resembling the summer rather than the autumn cycles. Full article
(This article belongs to the Section Atmosphere Remote Sensing)
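The wavelet covariance transform convolves the backscatter profile with a Haar function and takes the ABLH as the dilation point where the covariance peaks, i.e., where the aerosol load drops off sharply. A minimal sketch on an idealized profile; the `wct_ablh` helper, the 200 m dilation, and the synthetic profile are illustrative choices, not the paper's settings:

```python
import numpy as np

def wct_ablh(z, profile, a=200.0):
    """Boundary layer height via a Haar wavelet covariance transform.

    The Haar function weights the profile +1 just below each candidate
    height b and -1 just above it (over a dilation a), so the transform
    peaks where backscatter decreases most strongly with height.
    """
    z = np.asarray(z, dtype=float)
    f = np.asarray(profile, dtype=float)
    dz = z[1] - z[0]
    half = int(round(a / 2.0 / dz))          # half-dilation in samples
    scores = np.full(z.size, -np.inf)
    for i in range(half, z.size - half):
        upper = f[i - half:i].sum()          # below b: Haar weight +1
        lower = f[i:i + half].sum()          # above b: Haar weight -1
        scores[i] = (upper - lower) * dz / a
    return z[np.argmax(scores)]

# Idealized profile: aerosol-rich layer up to 1.2 km, clean air above.
z = np.arange(0.0, 3000.0, 15.0)
beta = np.where(z < 1200.0, 1.0, 0.1) + 0.01 * np.sin(z / 90.0)
ablh = wct_ablh(z, beta)
```

Real ceilometer/lidar signals are noisy and range-corrected, which is why the paper compares WCT against gradient and standard-deviation retrievals and against rawinsonde estimates.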

Open Access Article
Arbitrary-Oriented Inshore Ship Detection Based on Multi-Scale Feature Fusion and Contextual Pooling on Rotation Region Proposals
Remote Sens. 2020, 12(2), 339; https://doi.org/10.3390/rs12020339 - 20 Jan 2020
Abstract
Inshore ship detection plays an important role in many civilian and military applications. The complex land environment and the diversity of target sizes and distributions make it challenging to obtain accurate detection results. In order to achieve precise localization and suppress false alarms, in this paper we propose a framework that integrates a multi-scale feature fusion network, a rotation region proposal network, and contextual pooling. Specifically, in order to describe ships of various sizes, different convolutional layers are fused to obtain multi-scale features based on the baseline feature extraction network. Then, for accurate target localization and arbitrary-oriented ship detection, a rotation region proposal network and skew non-maximum suppression are employed. Finally, because rotated bounding boxes tend to produce more false alarms, we implement inclined contextual feature pooling on the rotation region proposals. A dataset of port images collected from Google Earth and the public ship dataset HRSC2016 are employed in our experiments to test the proposed method. Results of the model analysis validate the contribution of each module mentioned above, and comparative results show that our proposed pipeline achieves state-of-the-art performance in arbitrary-oriented inshore ship detection. Full article
(This article belongs to the Special Issue Robust Multispectral/Hyperspectral Image Analysis and Classification)

Open Access Letter
Detection of Maize Tassels from UAV RGB Imagery with Faster R-CNN
Remote Sens. 2020, 12(2), 338; https://doi.org/10.3390/rs12020338 - 20 Jan 2020
Abstract
Maize tassels play a critical role in plant growth and yield. Extensive RGB imagery obtained from unmanned aerial vehicles (UAVs) and the prevalence of deep learning provide a chance to improve the accuracy of detecting maize tassels. We used images from a UAV, a mobile phone, and the Maize Tassel Counting (MTC) dataset to test the performance of the faster region-based convolutional neural network (Faster R-CNN) with a residual neural network (ResNet) and a visual geometry group network (VGGNet). The results showed that ResNet, as the feature extraction network, was better than VGGNet for detecting maize tassels from UAV images with 600 × 600 resolution, with prediction accuracy ranging from 87.94% to 94.99%. However, the prediction accuracy was below 87.27% for UAV images with 5280 × 2970 resolution. We modified the anchor sizes to [85², 128², 256²] in the region proposal network, according to the width and height of the pixel distribution, improving detection accuracy to 89.96%. The accuracy reached 95.95% for mobile phone images. We then compared our trained model with TasselNet without training on their datasets; the average difference in tassel number between the two methods was 1.4 over 40 images. In the future, the performance of the models could be further improved by enlarging the datasets and calculating other tassel traits such as the length, width, diameter, perimeter, and branch number of the maize tassels. Full article
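A region proposal network's anchors are boxes of fixed areas and aspect ratios tiled over the feature map; the sketch below generates the nine anchors for one location using the squared sizes discussed in the abstract (85², 128², 256²). The helper name and the ratio set are illustrative assumptions, not taken from the paper:

```python
import itertools

def make_anchors(center, sizes=(85, 128, 256), ratios=(0.5, 1.0, 2.0)):
    """Generate (x1, y1, x2, y2) anchors centered on one location.

    Each anchor has area size**2 and height/width aspect `ratio`,
    following the usual Faster R-CNN anchor parameterization.
    """
    cx, cy = center
    anchors = []
    for size, ratio in itertools.product(sizes, ratios):
        w = size / ratio ** 0.5   # width shrinks as ratio grows ...
        h = size * ratio ** 0.5   # ... so that w * h == size ** 2
        anchors.append((cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2))
    return anchors

boxes = make_anchors((300, 300))
```

Retuning `sizes` to the observed pixel extent of tassels is exactly the kind of adjustment the authors describe for the high-resolution UAV images.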

Open Access Article
Backward Adaptive Brightness Temperature Threshold Technique (BAB3T): A Methodology to Determine Extreme Convective Initiation Regions Using Satellite Infrared Imagery
Remote Sens. 2020, 12(2), 337; https://doi.org/10.3390/rs12020337 - 20 Jan 2020
Abstract
Thunderstorms in southeastern South America (SESA) stand out in satellite observations as being among the strongest on Earth in terms of satellite-based convective proxies, such as lightning flash rate per storm and the prevalence of extremely tall, wide convective cores and broad stratiform regions. Accurately quantifying when and where strong convection is initiated is of great interest for operational forecasting and convective-system process studies, owing to the relationship between convective storms and severe weather phenomena. This paper presents a novel methodology to determine convective initiation (CI) signatures associated with extreme convective systems, including extreme events. Based on the well-established area-overlapping technique, an adaptive brightness temperature threshold for identification and backward tracking with infrared data is introduced in order to better identify areas of deep convection associated with, and embedded within, larger cloud clusters. This is particularly important over SESA because ground-based weather radar observations are currently limited to particular areas. Extreme rain precipitation features (ERPFs) from the Tropical Rainfall Measuring Mission are examined to quantify the full satellite-observed life cycle of extreme convective events, although this technique also allows examination of other intense-convection proxies such as overshooting tops. CI annual and diurnal cycles are analyzed, and distinctive behaviors are observed for different regions over SESA. It is found that near the principal mountain barriers a bimodal diurnal CI distribution is observed, denoting the existence of multiple CI triggers, while convective initiation over flat terrain has a maximum frequency in the afternoon. Full article

Open Access Article
Mapping Water Quality Parameters in Urban Rivers from Hyperspectral Images Using a New Self-Adapting Selection of Multiple Artificial Neural Networks
Remote Sens. 2020, 12(2), 336; https://doi.org/10.3390/rs12020336 - 20 Jan 2020
Abstract
Protection of water environments is an important part of overall environmental protection; hence, many efforts are devoted to monitoring and improving water quality. In this study, a self-adapting selection method of multiple artificial neural networks (ANNs) using hyperspectral remote sensing and ground-measured water quality data is proposed to quantitatively predict water quality parameters, including phosphorus, nitrogen, biochemical oxygen demand (BOD), chemical oxygen demand (COD), and chlorophyll a. Seventy-nine ground-measured data samples are used as training data in the establishment of the proposed model, and 30 samples are used as testing data. The proposed method, based on traditional ANNs for numerical prediction, involves feature selection of bands, self-adapting selection based on multiple selection criteria, stepwise backtracking, and combined weighted correlation. Water quality parameters are estimated with a coefficient of determination (R²) ranging from 0.93 (phosphorus) to 0.98 (nitrogen), higher than the values (0.7 to 0.8) obtained by traditional ANNs. MPAE (mean percent absolute error) values, ranging from 5% to 11%, are used rather than the root mean square error to evaluate the prediction precision of the proposed model, because the magnitude of each water quality parameter differs considerably and relative errors remain reasonable and interpretable. Compared with other ANNs with backpropagation, the proposed auto-adapting method selects the best model over all settings, such as the number of hidden layers, the number of neurons in each hidden layer, the choice of optimizer, and the activation function. Different settings for ANNs with backpropagation are important for improving precision and compatibility with different data.
Furthermore, the proposed method is applied to hyperspectral remote sensing images collected using an unmanned aerial vehicle for monitoring the water quality in the Shiqi River, Zhongshan City, Guangdong Province, China. Obtained results indicate the locations of pollution sources. Full article
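The MPAE metric above is straightforward to compute; a small sketch (the function name and the example concentrations are ours) showing why it is scale-free where RMSE is not:

```python
def mpae(observed, predicted):
    """Mean percent absolute error: mean(|pred - obs| / |obs|) * 100.

    Being relative, it puts parameters of very different magnitude
    (e.g. chlorophyll a in ug/L vs BOD in mg/L) on one footing,
    unlike RMSE, which carries the units and scale of the variable.
    """
    pairs = list(zip(observed, predicted))
    return 100.0 * sum(abs(p - o) / abs(o) for o, p in pairs) / len(pairs)

# Same 10% relative accuracy on two parameters with very different scales:
err_chla = mpae([50.0, 40.0], [55.0, 44.0])  # hypothetical ug/L values
err_bod = mpae([5.0, 4.0], [5.5, 4.4])       # hypothetical mg/L values
```

Both calls return the same MPAE even though their RMSEs would differ by an order of magnitude, which is the comparability argument the abstract makes.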

Open Access Article
Maintaining Semantic Information across Generic 3D Model Editing Operations
Remote Sens. 2020, 12(2), 335; https://doi.org/10.3390/rs12020335 - 20 Jan 2020
Abstract
Many of today’s data models for 3D applications, such as City Geography Markup Language (CityGML) or Industry Foundation Classes (IFC), encode rich semantic information in addition to the traditional geometry and materials representation. However, 3D editing techniques fall short of maintaining the semantic information across edit operations unless they are tailored to a specific data model. While semantic information is often lost during edit operations, geometry, UV mappings, and materials are usually maintained. This article presents a data model synchronization method that preserves semantic information across editing operations while relying only on geometry, UV mappings, and materials. This enables easy integration of existing and future 3D editing techniques with rich data models. The method links the original data model to the edited geometry using point set registration, recovers the existing information using spatial and UV search methods, and automatically labels the newly created geometry. An implementation of a Level of Detail 3 (LoD3) building editor for the Virtual Singapore project, based on interactive push-pull and procedural generation of façades, verified the method on 30 common editing tasks. The implementation synchronized changes in the 3D geometry with a CityGML data model and was applied to more than 100 test buildings. Full article
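As a toy illustration of recovering semantics by spatial search (a brute-force stand-in for the paper's registration and search pipeline; the function name, labels, and the 0.5 m threshold are all hypothetical), labels can be transferred from original to edited vertices by nearest-neighbour lookup:

```python
import numpy as np

def transfer_labels(orig_pts, orig_labels, edited_pts, max_dist=0.5):
    """Copy each edited vertex's semantic label from its nearest
    original vertex; vertices farther than max_dist from every original
    point are treated as newly created geometry and flagged for the
    automatic labeling step.
    """
    orig = np.asarray(orig_pts, dtype=float)
    out = []
    for p in np.asarray(edited_pts, dtype=float):
        d = np.linalg.norm(orig - p, axis=1)   # distances to all originals
        i = int(np.argmin(d))
        out.append(orig_labels[i] if d[i] <= max_dist else "unlabeled")
    return out

walls = [(0, 0, 0), (0, 0, 3)]
roof = [(0, 0, 6)]
labels = transfer_labels(walls + roof, ["Wall", "Wall", "Roof"],
                         [(0.1, 0, 0), (0, 0, 5.9), (10, 10, 10)])
```

The real method also exploits UV coordinates and materials, which disambiguate cases where pure spatial proximity would pick the wrong semantic surface.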

Open Access Article
Burned Area Detection and Mapping: Intercomparison of Sentinel-1 and Sentinel-2 Based Algorithms over Tropical Africa
Remote Sens. 2020, 12(2), 334; https://doi.org/10.3390/rs12020334 - 20 Jan 2020
Abstract
This study provides a comparative analysis of two Sentinel-1 and one Sentinel-2 burned area (BA) detection and mapping algorithms over 10 test sites (100 × 100 km) in tropical and sub-tropical Africa. Depending on the site, the burned area was mapped at different time points during the 2015–2016 fire seasons. The algorithms relied on diverse BA mapping strategies regarding both the data used (i.e., surface reflectance, backscatter coefficient, interferometric coherence) and the detection method. Algorithm performance was compared by evaluating the agreement of the detected BA with reference fire perimeters independently derived from medium-resolution optical imagery (i.e., Landsat 8, Sentinel-2). The commission errors (CE), omission errors (OE), and the Dice coefficient (DC) for burned pixels were compared. The mean OE and CE were 33% and 31% for the optical-based Sentinel-2 time-series algorithm and increased to 66% and 36%, respectively, for the radar backscatter-based algorithm; for the coherence-based radar algorithm, OE and CE reached 72% and 57%, respectively. When considering all tiles, the optical-based algorithm provided a significant increase in agreement over the Synthetic Aperture Radar (SAR) based algorithms, which might have been boosted by the use of optical datasets when generating the reference fire perimeters. However, in regions with persistent cloud cover, radar sensors may provide a complementary data source for wall-to-wall BA detection. Full article
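The three agreement scores reported above follow the standard definitions over per-pixel confusion counts; a minimal sketch, with invented example counts chosen only to echo a 33% omission error:

```python
def burned_area_scores(tp, fp, fn):
    """Agreement scores for burned-pixel detection against reference
    fire perimeters.

    tp: burned pixels detected as burned (true positives)
    fp: unburned pixels detected as burned (commissions)
    fn: burned pixels missed by the algorithm (omissions)
    """
    oe = fn / (tp + fn)                 # omission error
    ce = fp / (tp + fp)                 # commission error
    dice = 2 * tp / (2 * tp + fp + fn)  # Dice coefficient
    return oe, ce, dice

# Hypothetical counts: 670 burned pixels found, 300 false, 330 missed.
oe, ce, dice = burned_area_scores(tp=670, fp=300, fn=330)
```

Note that OE and CE are computed over different denominators (reference burned vs detected burned), which is why an algorithm can trade one against the other while the Dice coefficient summarizes both.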

Open Access Article
Evaluation of E Layer Dominated Ionosphere Events Using COSMIC/FORMOSAT-3 and CHAMP Ionospheric Radio Occultation Data
Remote Sens. 2020, 12(2), 333; https://doi.org/10.3390/rs12020333 - 20 Jan 2020
Abstract
At certain geographic locations, especially in the polar regions, the ionization of the ionospheric E layer can dominate over that of the F2 layer. The associated electron density profiles show their ionization maximum at E layer heights between 80 and 150 km above the Earth’s surface. This phenomenon is called the “E layer dominated ionosphere” (ELDI). In this paper we systematically investigate the characteristics of ELDI occurrences at high latitudes, focusing on their spatial and temporal variations. In our study, we use ionospheric GPS radio occultation data obtained from the COSMIC/FORMOSAT-3 (Constellation Observing System for Meteorology, Ionosphere, and Climate/Formosa Satellite Mission 3) and CHAMP (Challenging Minisatellite Payload) satellite missions. The entire dataset comprises the long period from 2001 to 2018, covering the previous and present solar cycles. This allows us to study the variation of the ELDI in different ways. In addition to the geospatial distribution, we also examine the temporal variation of ELDI events, focusing on the diurnal, the seasonal, and the solar cycle dependent variation. Furthermore, we investigate the spatiotemporal dependency of the ELDI on geomagnetic storms. Full article
(This article belongs to the Special Issue Remote Sensing of Ionosphere Observation and Investigation)
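An electron density profile qualifies as ELDI when its global ionization maximum falls at E layer heights (80–150 km) rather than in the F2 layer; a minimal classifier over a synthetic two-layer profile (the layer shapes and magnitudes are invented for illustration, not taken from the missions' data):

```python
import numpy as np

def is_eldi(heights_km, electron_density):
    """Flag a profile as 'E layer dominated' when its global ionization
    maximum lies at E layer heights (80-150 km above the surface).
    """
    h = np.asarray(heights_km, dtype=float)
    ne = np.asarray(electron_density, dtype=float)
    h_peak = h[np.argmax(ne)]
    return bool(80.0 <= h_peak <= 150.0)

h = np.arange(80.0, 500.0, 5.0)
# Two Gaussian layers: here the E layer peak (110 km) exceeds the F2 peak.
ne = 12e11 * np.exp(-((h - 110) / 15) ** 2) + 5e11 * np.exp(-((h - 300) / 60) ** 2)
eldi = is_eldi(h, ne)
```

Applying such a test to every radio occultation profile, binned by location, local time, season, and solar activity, yields the occurrence statistics the paper analyzes.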

Open Access Article
Regional Actual Evapotranspiration Estimation with Land and Meteorological Variables Derived from Multi-Source Satellite Data
Remote Sens. 2020, 12(2), 332; https://doi.org/10.3390/rs12020332 - 20 Jan 2020
Abstract
Evapotranspiration (ET) is one of the components in the water cycle and the surface energy balance systems. It is fundamental information for agriculture, water resource management, and climate change research. This study presents a scheme for regional actual evapotranspiration estimation using multi-source satellite data to compute key land and meteorological variables characterizing land surface, soil, vegetation, and the atmospheric boundary layer. The algorithms are validated using ground observations from the Heihe River Basin of northwest China. Monthly data estimates at a resolution of 1 km from the proposed algorithms compared well with ground observation data, with a root mean square error (RMSE) of 0.80 mm and a mean relative error (MRE) of −7.11%. The overall deviation between the average yearly ET derived from the proposed algorithms and ground-based water balance measurements was 9.44% for a small watershed and 1% for the entire basin. This study demonstrates that both accuracy and spatial depiction of actual evapotranspiration estimation can be significantly improved by using multi-source satellite data to measure the required land surface and meteorological variables. This reduces dependence on spatial interpolation of ground-derived meteorological variables which can be problematic, especially in data-sparse regions, and allows the production of region-wide ET datasets. Full article

Open Access Article
Accurate Despeckling and Estimation of Polarimetric Features by Means of a Spatial Decorrelation of the Noise in Complex PolSAR Data
Remote Sens. 2020, 12(2), 331; https://doi.org/10.3390/rs12020331 - 20 Jan 2020
Abstract
In this work, we extended a procedure for the spatial decorrelation of fully-developed speckle, originally developed for single-polarization SAR data, to fully-polarimetric SAR data. The spatial correlation of the noise stems from the tapering window applied in the Fourier domain by the SAR processor to avoid defocusing of targets caused by Gibbs effects. Since each polarimetric channel is focused independently of the others, the noise-whitening procedure can be performed by applying the decorrelation stage to each channel separately; equivalently, the noise-whitening stage is applied to each element of the scattering matrix before any multilooking operation, coherent or not, is performed. In order to evaluate the impact of spatial decorrelation of the noise on the performance of polarimetric despeckling filters, we make use of simulated PolSAR data with user-defined polarimetric features. We optionally introduce a spatial correlation of the noise in the simulated complex data by means of a 2D separable Hamming window in the Fourier domain. We then remove this correlation using the whitening procedure and compare the accuracy of both despeckling and polarimetric feature estimation for three cases: uncorrelated, correlated, and decorrelated images. Simulation results showed a steady improvement of performance scores, most notably the equivalent number of looks (ENL), which increased after decorrelation and closely attained the value of the uncorrelated case. Beyond the ENL, the benefits of noise decorrelation also hold for polarimetric features, whose estimation accuracy is diminished by the correlation. Finally, the trends observed in the simulations were confirmed by qualitative results of experiments carried out on a true Radarsat-2 image. Full article
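The whitening idea can be sketched in one dimension: taper a white sequence's spectrum with a Hamming-shaped window to mimic the processor-induced correlation, then invert the known taper to decorrelate. This is an illustrative reduction of the 2D separable case, not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 4096
white = rng.standard_normal(n)  # stand-in for one uncorrelated noise channel

# Hamming-shaped transfer function peaking at zero frequency: multiplying
# the spectrum by it low-pass filters the noise, mimicking the taper a SAR
# processor applies against Gibbs ringing (minimum value 0.08, so invertible).
taper = 0.54 + 0.46 * np.cos(2 * np.pi * np.fft.fftfreq(n))
correlated = np.fft.ifft(np.fft.fft(white) * taper).real

# Whitening stage: divide the spectrum by the known taper.
whitened = np.fft.ifft(np.fft.fft(correlated) / taper).real

def lag1(x):
    """Sample correlation between neighbouring samples."""
    return float(np.corrcoef(x[:-1], x[1:])[0, 1])
```

After the taper, neighbouring samples are strongly correlated (lag-1 correlation around 0.6 for this window); after whitening, the original uncorrelated sequence is recovered. In the PolSAR setting the same inversion is applied per channel, on complex data, with the 2D separable window.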

Open Access Technical Note
Digital Drill Core Models: Structure-from-Motion as a Tool for the Characterisation, Orientation, and Digital Archiving of Drill Core Samples
Remote Sens. 2020, 12(2), 330; https://doi.org/10.3390/rs12020330 - 19 Jan 2020
Abstract
Structure-from-motion (SfM) photogrammetry enables the cost-effective digital characterisation of seismic- to sub-decimetre-scale geoscientific samples. The technique is commonly used for the characterisation of outcrops, fracture mapping, and, increasingly, for the quantification of deformation during geotechnical stress tests. Here we apply SfM photogrammetry, using off-the-shelf components and software, to generate 25 digital models of drill cores. The selected samples originate from the Longyearbyen CO2 Lab project’s borehole DH4, covering the lowermost cap rock and uppermost reservoir sequences proposed for CO2 sequestration onshore Svalbard. We present a procedure that enables the determination of bulk volumes and densities with precision and accuracy similar to those of conventional methods such as fluid immersion. We use 3D printed replicas to qualitatively verify the volumes, and show that, with a mean deviation (based on eight samples) of 0.059% compared with proven geotechnical methods, the photogrammetric output is equivalent. We furthermore splice together broken and fragmented core pieces to reconstruct larger core intervals, and unwrap these to generate and characterise 2D orthographic projections of the core edge using analytical workflows developed for the structure-sedimentological characterisation of virtual outcrop models. Drill core orthoprojections can be treated as directly correlatable to optical borehole-wall imagery, enabling a direct and cost-effective elucidation of in situ drill core orientation and depth, as long as some form of borehole imagery is available. Digital drill core models are thus complementary to existing physical and photographic sample archives, and we foresee that the presented workflow can be adopted for the digitisation and digital storage of other types of geological samples, including degradable and dangerous ice and sediment cores. Full article
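Bulk volume from a photogrammetric model reduces to the volume of a closed triangle mesh, which can be computed by summing signed tetrahedra via the divergence theorem; dividing the sample mass by this volume then gives the bulk density. A minimal sketch, verified on a tetrahedron of known volume (the function name is ours, not from the paper):

```python
import numpy as np

def mesh_volume(vertices, faces):
    """Bulk volume of a closed, consistently oriented triangle mesh.

    Sums the signed volumes of the tetrahedra spanned by the origin and
    each triangular face (divergence theorem); abs() makes the result
    independent of the global winding direction.
    """
    v = np.asarray(vertices, dtype=float)
    total = 0.0
    for i, j, k in faces:
        total += np.dot(v[i], np.cross(v[j], v[k])) / 6.0
    return abs(total)

# Sanity check: unit right tetrahedron, volume 1/6, outward-wound faces.
verts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
faces = [(0, 2, 1), (0, 1, 3), (0, 3, 2), (1, 2, 3)]
vol = mesh_volume(verts, faces)
```

The precision the paper reports ultimately hinges on the scale calibration of the SfM model, since any scale error enters the volume cubed.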

Open Access Article
Integrating Remote Sensing and Street View Images to Quantify Urban Forest Ecosystem Services
Remote Sens. 2020, 12(2), 329; https://doi.org/10.3390/rs12020329 - 19 Jan 2020
Abstract
There is an urgent need for holistic tools to assess the health impacts of climate change mitigation and adaptation policies relating to increasing public green spaces. Urban vegetation provides numerous ecosystem services on a local scale and is therefore a potential adaptation strategy that can be used in an era of global warming to offset the increasing impacts of human activity on urban environments. In this study, we propose a set of urban green ecological metrics that can be used to evaluate urban green ecosystem services. The metrics were derived from two complementary surveys: a traditional remote sensing survey of multispectral images and Laser Imaging Detection and Ranging (LiDAR) data, and a survey using proximate sensing through images made available by the Google Street View database. In accordance with previous studies, two classes of metrics were calculated: greenery at lower and higher elevations than building facades. In the last phase of the work, the metrics were applied to city blocks, and a spatially constrained clustering methodology was employed. Homogeneous areas were identified in relation to the urban greenery characteristics. The proposed methodology represents the development of a geographic information system that can be used by public administrators and urban green designers to create and maintain urban public forests. Full article
(This article belongs to the Special Issue Remote Sensing in Applications of Geoinformation)
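One of the simplest proximate-sensing metrics of this kind is the fraction of vegetation pixels in a street-level image; the sketch below uses a naive excess-green rule (our own illustrative classifier on a synthetic image, not the paper's method):

```python
import numpy as np

def green_view_index(rgb):
    """Fraction of pixels classified as vegetation in a street-level
    image, using the simple rule that G strictly exceeds both R and B.
    """
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    return float(((g > r) & (g > b)).mean())

# Synthetic 10x10 scene: top 40% green canopy, bottom 60% grey facade.
img = np.zeros((10, 10, 3), dtype=np.uint8)
img[:4, :, 1] = 200
img[4:, :] = (120, 120, 120)
gvi = green_view_index(img)
```

In practice, street-view greenery is usually segmented with a trained classifier rather than a colour rule, and the paper further splits greenery into classes below and above building facades using the LiDAR heights.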

Open Access Letter
Greening Trends of Southern China Confirmed by GRACE
Remote Sens. 2020, 12(2), 328; https://doi.org/10.3390/rs12020328 - 19 Jan 2020
Abstract
As reported by the National Aeronautics and Space Administration (NASA), the world has been greening over the last two decades, with the highest greening occurring in China and India. The increasing vegetation will increase plant tissue accumulation and water storage capacity, and all of these variations will cause mass change. In this study, we found that the mass change related to greening in Southern China could be confirmed by Gravity Recovery and Climate Experiment (GRACE) observations. The mean mass change rate detected by GRACE is 6.7 ± 0.8 mm/yr in equivalent water height during 2003–2016 in our study region. This is consistent with the sum of vegetation tissue, soil water and groundwater change calculated using multi-source data. The vegetation accumulation is approximately 3.8 ± 1.3 mm/yr, which is the major contribution to regional mass change. We also found that the change of water storage capacity related to vegetation can be detected by GRACE. Full article
(This article belongs to the Section Remote Sensing Letter)
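The mass-change rate quoted above is a linear trend in equivalent water height. As a minimal sketch of how such a rate can be estimated from a monthly series by ordinary least squares (the series below is synthetic — a trend plus a seasonal cycle — not GRACE data):

```python
# Sketch: estimating a linear mass-change rate (mm/yr of equivalent water
# height) from a monthly series via ordinary least squares.
# The series is synthetic, not the paper's GRACE data.
import math

def trend_mm_per_yr(times_yr, ewh_mm):
    """Slope of an ordinary-least-squares line fit, in mm/yr."""
    n = len(times_yr)
    t_mean = sum(times_yr) / n
    y_mean = sum(ewh_mm) / n
    num = sum((t - t_mean) * (y - y_mean) for t, y in zip(times_yr, ewh_mm))
    den = sum((t - t_mean) ** 2 for t in times_yr)
    return num / den

times = [2003 + m / 12 for m in range(14 * 12)]   # monthly samples, 2003 onward
series = [6.7 * (t - 2003) + 20 * math.sin(2 * math.pi * t) for t in times]
rate = trend_mm_per_yr(times, series)             # recovers roughly 6.7 mm/yr
```

The seasonal term largely cancels in the slope estimate, which is why a simple OLS trend is a reasonable first look at mass-change rates.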

Open AccessEditorial
Acknowledgement to Reviewers of Remote Sensing in 2019
Remote Sens. 2020, 12(2), 327; https://doi.org/10.3390/rs12020327 - 19 Jan 2020
Abstract
The editorial team greatly appreciates the reviewers who have dedicated their considerable time and expertise to the journal’s rigorous editorial process over the past 12 months, regardless of whether the papers are finally published or not. Full article
Open AccessArticle
Comparing Performances of Five Distinct Automatic Classifiers for Fin Whale Vocalizations in Beamformed Spectrograms of Coherent Hydrophone Array
Remote Sens. 2020, 12(2), 326; https://doi.org/10.3390/rs12020326 - 19 Jan 2020
Abstract
A large variety of sound sources in the ocean, including biological, geophysical, and man-made, can be simultaneously monitored over instantaneous continental-shelf scale regions via the passive ocean acoustic waveguide remote sensing (POAWRS) technique by employing a large-aperture densely-populated coherent hydrophone array system. Millions of acoustic signals received on the POAWRS system per day can make it challenging to identify individual sound sources. An automated classification system is necessary to enable sound sources to be recognized. Here, the objectives are to (i) gather a large training and test data set of fin whale vocalization and other acoustic signal detections; (ii) build multiple fin whale vocalization classifiers, including a logistic regression, support vector machine (SVM), decision tree, convolutional neural network (CNN), and long short-term memory (LSTM) network; (iii) evaluate and compare performance of these classifiers using multiple metrics including accuracy, precision, recall and F1-score; and (iv) integrate one of the classifiers into the existing POAWRS array and signal processing software. The findings presented here will (1) provide an automatic classifier for near real-time fin whale vocalization detection and recognition, useful in marine mammal monitoring applications; and (2) lay the foundation for building an automatic classifier applied for near real-time detection and recognition of a wide variety of biological, geophysical, and man-made sound sources typically detected by the POAWRS system in the ocean. Full article
(This article belongs to the Section Ocean Remote Sensing)
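The evaluation metrics named in objective (iii) can all be computed from a binary confusion matrix. A minimal sketch, with illustrative labels rather than the POAWRS detections (1 = fin whale call):

```python
# Sketch: accuracy, precision, recall and F1-score from binary predictions,
# as used to compare the five classifiers. The label vectors are toy data.

def classification_metrics(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}

y_true = [1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 1, 0, 0, 0]
m = classification_metrics(y_true, y_pred)
```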

Open AccessArticle
Online Semantic Subspace Learning with Siamese Network for UAV Tracking
Remote Sens. 2020, 12(2), 325; https://doi.org/10.3390/rs12020325 - 19 Jan 2020
Abstract
In urban environment monitoring, visual tracking on unmanned aerial vehicles (UAVs) can produce more applications owing to the inherent advantages, but it also brings new challenges for existing visual tracking approaches (such as complex background clutters, rotation, fast motion, small objects, and real-time issues due to camera motion and viewpoint changes). Based on the Siamese network, tracking can be conducted efficiently in recent UAV datasets. Unfortunately, the learned convolutional neural network (CNN) features are not discriminative enough to distinguish the target from background clutter, particularly from distractors, and cannot capture appearance variations over time. Additionally, occlusion and disappearance are also reasons for tracking failure. In this paper, a semantic subspace module is designed to be integrated into the Siamese network tracker to encode the local fine-grained details of the target for UAV tracking. More specifically, the target’s semantic subspace is learned online to adapt to the target in the temporal domain. Additionally, the pixel-wise response of the semantic subspace can be used to detect occlusion and disappearance of the target, and this enables reasonable updating to relieve model drifting. Substantial experiments conducted on challenging UAV benchmarks illustrate that the proposed method can obtain competitive results in both accuracy and efficiency when applied to UAV videos. Full article
(This article belongs to the Special Issue Deep Learning Approaches for Urban Sensing Data Analytics)
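The occlusion test enabled by the subspace response can be illustrated with a reconstruction-residual check: a feature far from the learned subspace suggests the target is occluded or gone, so model updates should be suspended. The orthonormal basis and feature vectors below are toy values, not learned CNN features:

```python
# Sketch: flagging occlusion by the residual of a feature vector against a
# learned (orthonormal) semantic subspace. Basis and features are toy values.

def residual_norm(x, basis):
    """Norm of x minus its projection onto span(basis); basis is orthonormal."""
    proj = [0.0] * len(x)
    for b in basis:
        c = sum(xi * bi for xi, bi in zip(x, b))
        proj = [p + c * bi for p, bi in zip(proj, b)]
    return sum((xi - pi) ** 2 for xi, pi in zip(x, proj)) ** 0.5

basis = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]   # 2-D subspace of R^3
target_like = [0.9, 0.4, 0.05]               # close to the subspace
occluded = [0.1, 0.1, 0.95]                  # far from the subspace
ok = residual_norm(target_like, basis) < 0.2     # keep updating the model
drop = residual_norm(occluded, basis) > 0.5      # suspend updates
```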

Open AccessArticle
Performance Comparison of Geoid Refinement between XGM2016 and EGM2008 Based on the KTH and RCR Methods: Jilin Province, China
Remote Sens. 2020, 12(2), 324; https://doi.org/10.3390/rs12020324 - 18 Jan 2020
Abstract
The selection of an appropriate global gravity field model and refinement method can effectively improve the accuracy of the refined regional geoid model in a certain research area. We analyzed the accuracy of the Experimental Geopotential Model (XGM2016) based on the GPS-leveling data and the modification parameters of the global mean square errors in the KTH geoid refinement in Jilin Province, China. The regional geoid was refined based on Earth Gravitational Model (EGM2008) and XGM2016 using both the Helmert condensation method with an RCR procedure and the KTH method. A comparison of the original two gravity field models to the GPS-leveling benchmarks showed that the accuracies of XGM2016 and EGM2008 in the plain area of Jilin Province are similar with a standard deviation (STD) of 5.8 cm, whereas the accuracy of EGM2008 in the high mountainous area is 1.4 cm better than that of XGM2016, which may be attributed to its lower resolution. The modification parameters between the two gravity field models showed that the coefficient error of the XGM2016 model was lower than that of EGM2008 for the same degree of expansion. The different modification limits and integral radii may produce a centimeter-level difference in global mean square error, while the influence of the truncation error caused by the integral was at the millimeter level. The terrestrial gravity data error accounted for the majority of the global mean square error. The optimal least squares modification obtained the minimum global mean square error, and the global mean square error calculated based on the XGM2016 model was reduced by about 1–3 cm compared with EGM2008. The refined geoid based on the two gravity field models indicated that both the KTH and RCR methods can effectively improve the STD of the geoid model from about six centimeters to three centimeters.
The refined accuracy of the EGM2008 model using the RCR and KTH methods is slightly better than that of the XGM2016 model in the plain and high mountainous areas after seven-parameter fitting. EGM2008 based on the KTH method was the most precise at ±2.0 cm in the plain area and ±2.4 cm in the mountainous area. Generally, for the refined geoid based on the two Earth gravity models, KTH produced results similar to RCR in the plain area, and had relatively better performance in the mountainous area, where terrestrial gravity data are sparse and unevenly distributed. Full article
(This article belongs to the Special Issue Geodesy for Gravity and Height Systems)

Open AccessArticle
A Multi-Year Evaluation of Doppler Lidar Wind-Profile Observations in the Arctic
Remote Sens. 2020, 12(2), 323; https://doi.org/10.3390/rs12020323 - 18 Jan 2020
Abstract
Doppler light detection and ranging (lidar) wind profilers have proven their capability to measure vertical wind profiles with an accuracy comparable to anemometers and radiosondes. However, most of these comparisons were performed over short time periods or at mid-latitudes. This study presents a multi-year assessment of the accuracy of Doppler lidar wind-profile measurements in the Arctic by comparing them with coincident radiosonde observations, and excellent agreement was observed. The suitability of the Doppler lidar for verification case studies of operational numerical weather prediction (NWP) models during the World Meteorological Organization’s Year of Polar Prediction is also demonstrated, by using Environment and Climate Change Canada’s (ECCC) global environmental multiscale model (GEM-2.5 km and GEM-10 km). Since 2016, identical scanning Doppler lidars were deployed at two supersites commissioned by ECCC as part of the Canadian Arctic Weather Science project. The supersites are located in Iqaluit (64°N, 69°W) and Whitehorse (61°N, 135°W) with a third Halo Doppler lidar located in Squamish (50°N, 123°W). Two lidar wind-profile measurement methodologies were investigated; the velocity-azimuth display method exhibited a smaller average bias (−0.27 ± 0.02 m/s) than the Doppler beam-swinging method (−0.46 ± 0.02 m/s) compared to the sonde. Comparisons to ECCC’s NWP models indicate good agreement, more so during the summer months, with an average bias < 0.71 m/s for the higher-resolution (GEM-2.5 km) ECCC models at Iqaluit. Larger biases were found in the mountainous terrain of Whitehorse and Squamish, likely due to difficulties in the model’s ability to resolve the topography. This provides evidence in favor of using high-temporal-resolution lidar wind-profile measurements to complement radiosonde observations and for NWP model verification and process studies. Full article
(This article belongs to the Special Issue Advances in Atmospheric Remote Sensing with Lidar)
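The velocity-azimuth display (VAD) retrieval mentioned above fits a sinusoid in azimuth to the radial velocities of a conical scan; for evenly spaced beams the least-squares fit reduces to the first Fourier harmonic. A sketch with synthetic radial velocities (not the Iqaluit data), assuming negligible vertical velocity:

```python
# Sketch: VAD retrieval of horizontal wind (u, v) from radial velocities
# v_r = cos(el) * (u*sin(az) + v*cos(az)), with w assumed zero.
# For evenly spaced azimuths the LS fit is the first Fourier harmonic.
import math

def vad_uv(radial, azimuths_rad, elev_rad):
    n = len(radial)
    u = 2 * sum(vr * math.sin(a) for vr, a in zip(radial, azimuths_rad)) / n
    v = 2 * sum(vr * math.cos(a) for vr, a in zip(radial, azimuths_rad)) / n
    return u / math.cos(elev_rad), v / math.cos(elev_rad)

elev = math.radians(60.0)
az = [math.radians(15.0 * k) for k in range(24)]    # 24 evenly spaced beams
u_true, v_true = 5.0, -3.0                          # synthetic wind (m/s)
vr = [math.cos(elev) * (u_true * math.sin(a) + v_true * math.cos(a))
      for a in az]
u_est, v_est = vad_uv(vr, az, elev)
```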

Open AccessArticle
Correcting Image Refraction: Towards Accurate Aerial Image-Based Bathymetry Mapping in Shallow Waters
Remote Sens. 2020, 12(2), 322; https://doi.org/10.3390/rs12020322 - 18 Jan 2020
Abstract
Although aerial image-based bathymetric mapping can provide, unlike acoustic or LiDAR (Light Detection and Ranging) sensors, both water depth and visual information, water refraction poses significant challenges for accurate depth estimation. In order to tackle this challenge, we propose an image correction methodology, which first exploits recent machine learning procedures that recover depth from image-based dense point clouds and then corrects refraction on the original imaging dataset. This way, the structure from motion (SfM) and multi-view stereo (MVS) processing pipelines are executed on a refraction-free set of aerial datasets, resulting in highly accurate bathymetric maps. The experiments and validation were based on datasets acquired during optimal sea state conditions and derived from four different test-sites characterized by excellent sea bottom visibility and textured seabed. Results demonstrated the high potential of our approach in terms of bathymetric accuracy as well as texture and orthoimage quality. Full article
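For intuition, the effect being corrected can be reduced to the textbook near-nadir approximation: refraction makes the seabed appear shallower roughly by the refractive index of water. The sketch below shows this first-order relation only; it is not the paper's learned correction:

```python
# Sketch: first-order (near-nadir, flat water surface) refraction relation
# behind image-based bathymetry correction. Apparent depth from SfM
# underestimates true depth roughly by the refractive index of water.
# Textbook small-angle approximation, not the paper's method.

N_WATER = 1.34  # approximate refractive index of seawater

def corrected_depth(apparent_depth_m, n=N_WATER):
    return n * apparent_depth_m

z_apparent = 3.0                       # metres, as seen through the surface
z_true = corrected_depth(z_apparent)   # ≈ 4.02 m
```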

Open AccessArticle
Feature Dimension Reduction Using Stacked Sparse Auto-Encoders for Crop Classification with Multi-Temporal, Quad-Pol SAR Data
Remote Sens. 2020, 12(2), 321; https://doi.org/10.3390/rs12020321 - 18 Jan 2020
Abstract
Crop classification in agriculture is one of the important applications for polarimetric synthetic aperture radar (PolSAR) data. For agricultural crop discrimination, compared with single-temporal data, multi-temporal data can dramatically increase crop classification accuracies since the same crop shows different external phenomena as it grows. In practice, the utilization of multi-temporal data encounters a serious problem known as a “dimension disaster”. Aiming to solve this problem and raise the classification accuracy, this study developed a feature dimension reduction method using stacked sparse auto-encoders (S-SAEs) for crop classification. First, various incoherent scattering decomposition algorithms were employed to extract a variety of detailed and quantitative parameters from multi-temporal PolSAR data. Second, based on analyzing the configuration and main parameters for constructing an S-SAE, a three-hidden-layer S-SAE network was built to reduce the dimensionality and extract effective features to manage the “dimension disaster” caused by excessive scattering parameters, especially for multi-temporal, quad-pol SAR images. Third, a convolutional neural network (CNN) was constructed and employed to further enhance the crop classification performance. Finally, the performances of the proposed strategy were assessed with the simulated multi-temporal Sentinel-1 data for two experimental sites established by the European Space Agency (ESA). The experimental results showed that the overall accuracy with the proposed method was raised by at least 17% compared with the long short-term memory (LSTM) method in the case of a 1% training ratio. Meanwhile, for a CNN classifier, the overall accuracy was almost 4% higher than those of the principal component analysis (PCA) and locally linear embedding (LLE) methods.
The comparison studies clearly demonstrated the advantage of the proposed multi-temporal crop classification methodology in terms of classification accuracy, even with small training ratios. Full article
(This article belongs to the Section Remote Sensing in Agriculture and Vegetation)

Open AccessArticle
Improving Plane Fitting Accuracy with Rigorous Error Models of Structured Light-Based RGB-D Sensors
Remote Sens. 2020, 12(2), 320; https://doi.org/10.3390/rs12020320 - 18 Jan 2020
Abstract
Plane fitting is a fundamental operation for point cloud data processing. Most existing methods for point cloud plane fitting have been developed based on high-quality Lidar data giving equal weight to the point cloud data. In recent years, using low-quality RGB-Depth (RGB-D) sensors to generate 3D models has attracted much attention. However, with low-quality point cloud data, equal weight plane fitting methods are not optimal as the range errors of RGB-D sensors are distance-related. In this paper, we developed an accurate plane fitting method for a structured light (SL)-based RGB-D sensor. First, we derived an error model of a point cloud dataset from the SL-based RGB-D sensor through error propagation from the raw measurement to the point coordinates. A new cost function based on minimizing the radial distances with the derived rigorous error model was then proposed for the random sample consensus (RANSAC)-based plane fitting method. The experimental results demonstrated that our method is robust and practical for different operating ranges and different working conditions. In the experiments, for the operating ranges from 1.23 meters to 4.31 meters, the mean plane angle errors were about one degree, and the mean plane distance errors were less than six centimeters. When the dataset is of a large-depth-measurement scale, the proposed method can significantly improve the plane fitting accuracy, with a plane angle error of 0.5 degrees and a mean distance error of 4.7 cm, compared to 3.8 degrees and 16.8 cm, respectively, from the conventional un-weighted RANSAC method. The experimental results also demonstrated that the proposed method is applicable for different types of SL-based RGB-D sensors. The rigorous error model of the SL-based RGB-D sensor is essential for many applications such as outlier detection and data authorization.
Meanwhile, the precise plane fitting method developed in our research will benefit algorithms based on high-accuracy plane features such as depth calibration, 3D feature-based simultaneous localization and mapping (SLAM), and the generation of indoor building information models (BIMs). Full article
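A minimal sketch of the general idea — RANSAC plane fitting with a range-dependent noise model, so far-away (noisier) points get looser thresholds — follows. The quadratic sigma(z) term, its coefficients, and the synthetic cloud are illustrative stand-ins for the paper's rigorously derived error model:

```python
# Sketch: RANSAC plane fitting where the inlier threshold grows with range,
# mimicking the distance-related error of SL-based RGB-D sensors.
# sigma(z) coefficients and data are toy values, not the derived error model.
import random

def plane_from_points(p1, p2, p3):
    """Unit normal n and offset d of the plane n·x = d through three points."""
    u = [p2[i] - p1[i] for i in range(3)]
    v = [p3[i] - p1[i] for i in range(3)]
    n = [u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0]]
    norm = sum(c * c for c in n) ** 0.5
    if norm == 0.0:          # degenerate (collinear) sample
        return None
    n = [c / norm for c in n]
    return n, sum(n[i] * p1[i] for i in range(3))

def ransac_plane(points, iters=200, seed=0):
    def sigma(p):            # assumed range-dependent noise, grows with depth z
        return 0.002 + 0.003 * p[2] ** 2
    rng = random.Random(seed)
    best, best_score = None, -1.0
    for _ in range(iters):
        model = plane_from_points(*rng.sample(points, 3))
        if model is None:
            continue
        nrm, d = model
        # Inliers judged against each point's own noise level.
        score = sum(1.0 for p in points
                    if abs(sum(nrm[i] * p[i] for i in range(3)) - d)
                    < 2 * sigma(p))
        if score > best_score:
            best, best_score = model, score
    return best

# Synthetic cloud: a 10x10 grid on the plane z = 1, plus two outliers.
pts = [[x * 0.1, y * 0.1, 1.0] for x in range(10) for y in range(10)]
pts += [[0.5, 0.5, 2.0], [0.2, 0.8, 0.0]]
n, d = ransac_plane(pts)
```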

Open AccessArticle
Exploring the Impact of Various Spectral Indices on Land Cover Change Detection Using Change Vector Analysis: A Case Study of Crete Island, Greece
Remote Sens. 2020, 12(2), 319; https://doi.org/10.3390/rs12020319 - 18 Jan 2020
Abstract
The main objective of this study was to explore the impact of various spectral indices on the performance of change vector analysis (CVA) for detecting land cover changes on the island of Crete, Greece, over the last two decades (1999–2009 and 2009–2019). A set of such indices, namely, normalized difference vegetation index (NDVI), soil adjusted vegetation index (SAVI), albedo, bare soil index (BSI), tasseled cap greenness (TCG), and tasseled cap brightness (TCB), representing both the vegetation and soil conditions of the study area, were estimated on Landsat satellite images captured in 1999, 2009, and 2019. Change vector analysis was then applied for five different index combinations, resulting in the relative change outputs. The evaluation of these outputs was performed against detailed land cover maps produced by supervised classification of the aforementioned images. The results from the two examined periods revealed that the five index combinations provided promising performance results in terms of kappa index (with a range of 0.60–0.69) and overall accuracy (with a range of 0.86–0.96). Moreover, among the different combinations, the combination of NDVI and albedo was found to provide superior results. Full article
(This article belongs to the collection Feature Papers for Section Environmental Remote Sensing)
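Change vector analysis itself is compact: per pixel, the vector between the two dates' index values yields a magnitude (change intensity) and a direction (suggesting change type). A sketch with toy (NDVI, albedo) pairs, not the study's imagery:

```python
# Sketch: change vector analysis in a two-index feature space (e.g. NDVI
# and albedo). Magnitude thresholding flags change; direction hints at type.
import math

def cva(idx_t1, idx_t2):
    """Per-pixel (magnitude, direction in radians) between two dates."""
    out = []
    for (a1, b1), (a2, b2) in zip(idx_t1, idx_t2):
        da, db = a2 - a1, b2 - b1
        out.append((math.hypot(da, db), math.atan2(db, da)))
    return out

t1 = [(0.70, 0.20), (0.10, 0.40)]   # (NDVI, albedo) at date 1, two pixels
t2 = [(0.20, 0.45), (0.12, 0.41)]   # (NDVI, albedo) at date 2
results = cva(t1, t2)
changed = [mag > 0.1 for mag, _ in results]   # first pixel changed, second not
```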

Open AccessArticle
A Framework for Correcting Ionospheric Artifacts and Atmospheric Effects to Generate High Accuracy InSAR DEM
Remote Sens. 2020, 12(2), 318; https://doi.org/10.3390/rs12020318 - 18 Jan 2020
Abstract
Repeat-pass interferometric synthetic aperture radar is a well-established technology for generating digital elevation models (DEMs). However, the interferogram usually has ionospheric and atmospheric effects, which reduce the DEM accuracy. In this paper, by introducing dual-polarization interferograms, a new approach is proposed to mitigate the ionospheric and atmospheric errors of the interferometric synthetic aperture radar (InSAR) data. The proposed method consists of two parts. First, the range split-spectrum method is applied to compensate for the ionospheric artifacts. Then, a multiresolution correlation analysis between dual-polarization InSAR interferograms is employed to remove the identical atmospheric phases, since the atmospheric delay is independent of SAR polarization. The corrected interferogram can be used for DEM extraction. Validation experiments, using the ALOS-1 PALSAR interferometric pairs covering the study areas in Hawaii and Lebanon of the U.S.A., show that the proposed method can effectively reduce the ionospheric artifacts and atmospheric effects, and improve the accuracy of the InSAR-derived DEMs by 64.9% and 31.7% for the Hawaii and Lebanon study sites, respectively, compared with traditional correction methods. In addition, the assessment of the resulting DEMs also includes comparisons with the high-precision Ice, Cloud, and land Elevation Satellite-2 (ICESat-2) altimetry data. The results show that the selection of reference data will not affect the validation results. Full article
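The range split-spectrum step relies on the dispersive (ionospheric) phase scaling as 1/f while the non-dispersive phase scales as f, which leads to the standard sub-band combination below. The sub-band frequencies and phase values are illustrative, not those of the processed ALOS-1 scenes:

```python
# Sketch: standard range split-spectrum estimator of the dispersive
# (ionospheric) phase at centre frequency f0, from the phases of the
# low- and high-frequency sub-band interferograms.

def split_spectrum_iono(phi_low, phi_high, f_low, f_high, f0):
    return (f_low * f_high / (f0 * (f_high ** 2 - f_low ** 2))
            * (f_high * phi_low - f_low * phi_high))

# Synthetic check: build sub-band phases from known dispersive (1/f) and
# non-dispersive (f) parts, then recover the dispersive part at f0.
f0, f_low, f_high = 1.27e9, 1.24e9, 1.30e9   # illustrative L-band split (Hz)
phi_nd, phi_iono = 4.0, 1.5                  # "truth" at f0 (radians)
phi_L = phi_nd * f_low / f0 + phi_iono * f0 / f_low
phi_H = phi_nd * f_high / f0 + phi_iono * f0 / f_high
est = split_spectrum_iono(phi_L, phi_H, f_low, f_high, f0)
```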

Open AccessArticle
Classification of 3D Point Clouds Using Color Vegetation Indices for Precision Viticulture and Digitizing Applications
Remote Sens. 2020, 12(2), 317; https://doi.org/10.3390/rs12020317 - 18 Jan 2020
Abstract
Remote sensing applied in the digital transformation of agriculture and, more particularly, in precision viticulture offers methods to map field spatial variability to support site-specific management strategies; these can be based on crop canopy characteristics such as the row height or vegetation cover fraction, requiring accurate three-dimensional (3D) information. To derive canopy information, a set of dense 3D point clouds was generated using photogrammetric techniques on images acquired by an RGB sensor onboard an unmanned aerial vehicle (UAV) in two testing vineyards on two different dates. In addition to the geometry, each point also stores information from the RGB color model, which was used to discriminate between vegetation and bare soil. To the best of our knowledge, the new methodology herein presented consisting of linking point clouds with their spectral information had not previously been applied to automatically estimate vine height. Therefore, the novelty of this work is based on the application of color vegetation indices in point clouds for the automatic detection and classification of points representing vegetation and the later ability to determine the height of vines using as a reference the heights of the points classified as soil. Results from on-ground measurements of the heights of individual grapevines were compared with the estimated heights from the UAV point cloud, showing high determination coefficients (R² > 0.87) and low root-mean-square error (0.070 m). This methodology offers new capabilities for the use of RGB sensors onboard UAV platforms as a tool for precision viticulture and digitizing applications. Full article
(This article belongs to the Special Issue Remote Sensing in Viticulture)
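One common color vegetation index usable for such point-wise vegetation/soil discrimination is the Excess Green index (ExG = 2g − r − b on chromatic coordinates). The sketch below uses ExG with a fixed threshold as a stand-in for the indices and automatic thresholding actually evaluated in the paper; the points and threshold are toy values:

```python
# Sketch: classifying coloured 3-D points as vegetation vs. soil with the
# Excess Green colour index on chromatic coordinates. Toy values throughout.

def excess_green(r, g, b):
    s = r + g + b
    if s == 0:
        return 0.0
    r, g, b = r / s, g / s, b / s   # chromatic (normalised) coordinates
    return 2 * g - r - b

def classify(points_rgb, thresh=0.1):
    """Label each (x, y, z, r, g, b) point as 'veg' or 'soil'."""
    return ["veg" if excess_green(r, g, b) > thresh else "soil"
            for (_, _, _, r, g, b) in points_rgb]

cloud = [
    (0.0, 0.0, 1.2, 60, 140, 50),   # green canopy point
    (0.1, 0.0, 0.0, 120, 100, 80),  # brownish soil point
]
labels = classify(cloud)
```

Once points are labelled, vine height follows as the vertical distance between each vegetation point and the nearby points labelled as soil, which is the reference scheme the abstract describes.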

Open AccessArticle
Deep Neural Network Cloud-Type Classification (DeepCTC) Model and Its Application in Evaluating PERSIANN-CCS
Remote Sens. 2020, 12(2), 316; https://doi.org/10.3390/rs12020316 - 18 Jan 2020
Abstract
Satellite remote sensing plays a pivotal role in characterizing hydrometeorological components including cloud types and their associated precipitation. The Cloud Profiling Radar (CPR) on the Polar Orbiting CloudSat satellite has provided a unique dataset to characterize cloud types. However, data from this nadir-looking radar offers limited capability for estimating precipitation because of the narrow satellite swath coverage and low temporal frequency. We use these high-quality observations to build a Deep Neural Network Cloud-Type Classification (DeepCTC) model to estimate cloud types from multispectral data from the Advanced Baseline Imager (ABI) onboard the GOES-16 platform. The DeepCTC model is trained and tested using coincident data from both CloudSat and ABI over the CONUS region. Evaluations of DeepCTC indicate that the model performs well for a variety of cloud types including Altostratus, Altocumulus, Cumulus, Nimbostratus, Deep Convective and High clouds. However, capturing low-level clouds remains a challenge for the model. Results from simulated GOES-16 ABI imagery of the Hurricane Harvey event show that rapid and consistent large-scale cloud-type monitoring is possible using the DeepCTC model. Additionally, assessments using half-hourly Multi-Radar/Multi-Sensor (MRMS) precipitation rate data (for Hurricane Harvey as a case study) show the ability of DeepCTC in identifying rainy clouds, including Deep Convective and Nimbostratus, and their precipitation potential. We also use DeepCTC to evaluate the performance of the Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks-Cloud Classification System (PERSIANN-CCS) product over different cloud types with respect to MRMS referenced at a half-hourly time scale for July 2018.
Our analysis suggests that DeepCTC provides supplementary insights into the variability of cloud types to diagnose the weaknesses and strengths of near real-time GEO-based precipitation retrievals. With additional training and testing, we believe DeepCTC has the potential to augment the widely used PERSIANN-CCS algorithm for estimating precipitation. Full article

Open AccessArticle
Ground Based Hyperspectral Imaging to Characterize Canopy-Level Photosynthetic Activities
Remote Sens. 2020, 12(2), 315; https://doi.org/10.3390/rs12020315 - 18 Jan 2020
Abstract
Improving plant photosynthesis provides the best possibility for increasing crop yield potential, which is considered a crucial effort for global food security. Chlorophyll fluorescence is an important indicator for the study of plant photosynthesis. Previous studies have intensively examined the use of spectrometer, airborne, and spaceborne spectral data to retrieve solar induced fluorescence (SIF) for estimating gross primary productivity and carbon fixation. None of the methods, however, had a spatial resolution and a scanning throughput suitable for applications at the canopy and sub-canopy levels, thereby limiting photosynthesis analysis for breeding programs and genetics/genomics studies. The goal of this study was to develop a hyperspectral imaging approach to characterize plant photosynthesis at the canopy level. An experimental field was planted with two cotton cultivars that received two different treatments (control and herbicide treated), with each cultivar-treatment combination having eight replicate 10 m plots. A ground mobile sensing system (GPhenoVision) was configured with a hyperspectral module consisting of a spectrometer and a hyperspectral camera that covered the spectral range from 400 to 1000 nm with a spectral sampling resolution of 2 nm. The system acquired downwelling irradiance spectra from the spectrometer and reflected radiance spectral images from the hyperspectral camera. Twenty-four hours after the DCMU (3-(3,4-dichlorophenyl)-1,1-dimethylurea) application, the system was used to conduct six data collection trials in the experimental field from 08:00 to 18:00 at two-hour intervals. A data processing pipeline was developed to measure SIF using the irradiance and radiance spectral data. Diurnal SIF measurements were used to estimate the effective quantum yield and electron transport rate, deriving rapid light curves (RLCs) to characterize photosynthetic efficiency at the group and plot levels.
Experimental results showed that the effective quantum yields estimated by the developed method highly correlated with those measured by a pulse amplitude modulation (PAM) fluorometer. In addition, RLC characteristics calculated using the developed method showed similar statistical trends with those derived using the PAM data. Both the RLC and PAM data agreed with destructive growth analyses. This suggests that the developed method can be used as an effective tool for future breeding programs and genetics/genomics studies to characterize plant photosynthesis at the canopy level. Full article
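SIF retrieval from paired irradiance/radiance spectra is classically done by Fraunhofer Line Discrimination; the abstract does not specify the exact estimator used in the pipeline, so the standard sFLD form is sketched here with synthetic in-band/out-of-band values:

```python
# Sketch: standard Fraunhofer Line Discrimination (sFLD) retrieval of SIF
# from irradiance (E) and radiance (L) inside and outside a telluric
# absorption band (e.g. O2-A near 760 nm). Synthetic values, not the
# GPhenoVision data; assumes reflectance and SIF are equal in/out of band.
import math

def sfld(e_in, l_in, e_out, l_out):
    return (e_out * l_in - e_in * l_out) / (e_out - e_in)

rho, f_true = 0.5, 1.2                 # reflectance; SIF "truth"
e_out, e_in = 100.0, 20.0              # irradiance outside/inside the band
l_out = rho * e_out / math.pi + f_true # L = rho*E/pi + F (Lambertian model)
l_in = rho * e_in / math.pi + f_true
f_est = sfld(e_in, l_in, e_out, l_out) # recovers f_true exactly here
```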

Open AccessLetter
Drift of the Earth’s Principal Axes of Inertia from GRACE and Satellite Laser Ranging Data
Remote Sens. 2020, 12(2), 314; https://doi.org/10.3390/rs12020314 - 18 Jan 2020
Abstract
The location of the Earth’s principal axes of inertia is a foundation for all the theories and solutions of its rotation, and thus has a broad effect on many fields, including astronomy, geodesy, and satellite-based positioning and navigation systems. That location is determined by the second-degree Stokes coefficients of the geopotential. Accurate solutions for those coefficients were limited to the stationary case for many years, but the situation improved with the advent of the Gravity Recovery and Climate Experiment (GRACE), and nowadays several solutions for the time-varying geopotential have been derived based on gravity and satellite laser ranging data, with time resolutions reaching one month or one week. Although those solutions are already accurate enough to compute the evolution of the Earth’s axes of inertia over more than a decade, such an analysis has never been performed. In this paper, we present the first analysis of this problem, taking advantage of previous analytical derivations to simplify the computations and the estimation of the uncertainty of solutions. The results are rather striking, since the axes of inertia do not move around some mean position fixed to a given terrestrial reference frame in this period, but drift away from their initial location in a slow but clear and non-negligible manner. Full article
Open Access Article
Measuring the Directional Ocean Spectrum from Simulated Bistatic HF Radar Data
Remote Sens. 2020, 12(2), 313; https://doi.org/10.3390/rs12020313 - 18 Jan 2020
Viewed by 206
Abstract
HF radars are becoming important components of coastal operational monitoring systems, particularly for currents, and mostly use monostatic configurations in which the transmit and receive antennas are co-located. A bistatic configuration, in which the transmit antenna is separated from the receive antennas, offers some advantages and has been used for current measurement. Currents are measured using the Doppler shift from ocean waves that are Bragg-matched to the radio signal; obtaining a wave measurement is more complicated. In this paper, the theoretical basis for bistatic wave measurement with a phased-array HF radar is reviewed and clarified. Simulations of monostatic and bistatic radar data have been made using wave models and wave spectral data. The Seaview monostatic inversion method for waves, currents, and winds has been modified to allow for a bistatic configuration and has been applied to the simulated data for two receive sites. Comparisons of current and wave parameters and of wave spectra are presented. The results are encouraging, although the monostatic results are more accurate. Large bistatic angles appear to reduce the accuracy of the derived oceanographic measurements, although directional spectra match well over most of the frequency range. Full article
(This article belongs to the Special Issue Bistatic HF Radar)
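The first-order Bragg relation behind these measurements is standard: monostatically, the resonant ocean waves have half the radio wavelength, and in a bistatic geometry the matched wavenumber shrinks with the bistatic angle. A short sketch assuming deep-water dispersion (variable names are ours, not the paper's):

```python
import math

G = 9.81     # gravitational acceleration, m s^-2
C = 2.998e8  # speed of light, m s^-1

def bragg_frequency(radar_freq_hz: float, bistatic_angle_deg: float = 0.0) -> float:
    """First-order Bragg Doppler frequency (Hz) for a given bistatic angle.

    k_bragg = 2 * k0 * cos(beta / 2), f = sqrt(g * k_bragg) / (2 * pi);
    beta = 0 recovers the monostatic case.
    """
    k0 = 2.0 * math.pi * radar_freq_hz / C
    k_bragg = 2.0 * k0 * math.cos(math.radians(bistatic_angle_deg) / 2.0)
    return math.sqrt(G * k_bragg) / (2.0 * math.pi)

# A 13 MHz radar: monostatic Bragg lines sit near +/- 0.37 Hz
print(round(bragg_frequency(13e6), 3))  # 0.368
```

Because the Bragg wavenumber decreases as the bistatic angle grows, the Bragg lines move inward in frequency, which is one reason large bistatic angles complicate the inversion.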
Open Access Correction
Correction: Laengner, M. L., et al. Trends in the Seaward Extent of Saltmarshes across Europe from Long-Term Satellite Data. Remote Sensing 2019, 11, 1653
Remote Sens. 2020, 12(2), 312; https://doi.org/10.3390/rs12020312 - 17 Jan 2020
Viewed by 262
Abstract
In this paper [...] Full article
(This article belongs to the Section Environmental Remote Sensing)