
Machine Learning Using Hyperspectral Data Inaccurately Predicts Plant Traits Under Spatial Dependency

1 Faculty of Geo-Information Science and Earth Observation (ITC), University of Twente, 7500 AE Enschede, The Netherlands
2 Department of Environmental Science, Macquarie University, Sydney, NSW 2106, Australia
* Author to whom correspondence should be addressed.
Remote Sens. 2018, 10(8), 1263; https://doi.org/10.3390/rs10081263
Received: 21 June 2018 / Revised: 21 July 2018 / Accepted: 8 August 2018 / Published: 11 August 2018
(This article belongs to the Special Issue Machine Learning Applications in Earth Science Big Data Analysis)
Abstract: Spectral, temporal and spatial dimensions are difficult to model together when predicting in situ plant traits from remote sensing data. Therefore, machine learning algorithms solely based on spectral dimensions are often used as predictors, even when there is a strong effect of spatial or temporal autocorrelation in the data. A significant reduction in prediction accuracy is expected when algorithms are trained using a sequence in space or time that is unlikely to be observed again. The ensuing inability to generalise creates a necessity for ground-truth data for every new area or period, provoking the propagation of “single-use” models. This study assesses the impact of spatial autocorrelation on the generalisation of plant trait models predicted with hyperspectral data. Leaf Area Index (LAI) data generated at increasing levels of spatial dependency are used to simulate hyperspectral data using Radiative Transfer Models. Machine learning regressions to predict LAI at different levels of spatial dependency are then tuned (determining the optimum model complexity) using cross-validation as well as the NOIS method. The results show that cross-validated prediction accuracy tends to be overestimated when spatial structures present in the training data are fitted (or learned) by the model.
Keywords: remote sensing; radiative transfer models; spatial autocorrelation; data simulation; model accuracy
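
As a rough illustration of the effect the abstract describes, the following Python sketch compares random K-fold cross-validation with spatially blocked cross-validation on a simulated, spatially autocorrelated trait surface. This is not the authors' code or the NOIS method: the grid size, smoothing radius, the three stand-in "bands", noise scales, and the random-forest regressor are all illustrative assumptions (the paper simulates reflectance with radiative transfer models instead).

```python
# Minimal sketch: random CV vs spatially blocked CV under spatial dependency.
# All simulation parameters below are illustrative assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold, GroupKFold, cross_val_score

rng = np.random.default_rng(42)
size, sigma = 60, 5.0  # grid side length and smoothing radius (assumed)

def corr_field(scale=1.0):
    """Spatially autocorrelated field: smoothed white noise, unit variance."""
    f = gaussian_filter(rng.normal(size=(size, size)), sigma)
    return scale * (f / f.std()).ravel()

# 1. A spatially dependent "LAI" surface, rescaled to a plausible 0-6 range.
field = corr_field()
y = 6 * (field - field.min()) / (field.max() - field.min())

# 2. Stand-in "bands": nonlinear transforms of LAI plus spatially correlated
#    noise, so the band-trait relationship drifts across space.
X = np.column_stack([
    np.exp(-0.5 * y) + corr_field(0.05),
    y ** 0.7 + corr_field(0.10),
    np.sin(y) + corr_field(0.10),
])

model = RandomForestRegressor(n_estimators=200, random_state=0)

# 3. Random K-fold: spatial neighbours (near-duplicate samples) land in both
#    training and test folds, so memorised local structure scores well.
r2_random = cross_val_score(model, X, y,
                            cv=KFold(5, shuffle=True, random_state=0))

# 4. Spatially blocked folds: pixels grouped into 5 contiguous vertical
#    strips, so each test fold is spatially separated from the training data.
strips = np.tile(np.arange(size), size) // (size // 5)  # column-strip label
r2_blocked = cross_val_score(model, X, y, groups=strips, cv=GroupKFold(5))

print(f"random  CV mean R^2: {r2_random.mean():.3f}")
print(f"blocked CV mean R^2: {r2_blocked.mean():.3f}")  # typically lower
```

How large the gap is depends on the autocorrelation range relative to the block size; the blocked design only addresses the fold layout, while the paper's NOIS approach targets the model-complexity side of the tuning problem.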
[Figure: Graphical abstract]
• Externally hosted supplementary file 1, DOI: 10.4121/uuid:2016d562-cf6e-4060-ac13-5db9477b6512
MDPI and ACS Style

Rocha, A.D.; Groen, T.A.; Skidmore, A.K.; Darvishzadeh, R.; Willemen, L. Machine Learning Using Hyperspectral Data Inaccurately Predicts Plant Traits Under Spatial Dependency. Remote Sens. 2018, 10, 1263.
