Open Access Article

Semantic Segmentation Using Deep Learning with Vegetation Indices for Rice Lodging Identification in Multi-date UAV Visible Images

1 Department of Civil Engineering, and Innovation and Development Center of Sustainable Agriculture, National Chung Hsing University, Taichung 402, Taiwan
2 Pervasive AI Research (PAIR) Labs, Hsinchu 300, Taiwan
* Author to whom correspondence should be addressed.
Remote Sens. 2020, 12(4), 633; https://doi.org/10.3390/rs12040633
Received: 31 December 2019 / Revised: 10 February 2020 / Accepted: 12 February 2020 / Published: 14 February 2020
A rapid and precise large-scale agricultural disaster survey is the basis for agricultural disaster relief and insurance, but such surveys are labor-intensive and time-consuming. This study applies deep-learning image processing to Unmanned Aerial Vehicle (UAV) images to estimate rice lodging in paddies over a large area. An image semantic segmentation model is established with two neural network architectures, FCN-AlexNet and SegNet, which are compared in terms of their interpretation of objects of various sizes and their computational efficiency. High-resolution visible images of rice paddies acquired by commercial UAVs are used to calculate three vegetation indices, improving the applicability of visible imagery. The proposed model was trained and tested on a set of UAV images from 2017 and validated on a set of UAV images from 2019. For rice lodging identification on the 2017 UAV images, the F1-score reaches 0.80 for FCN-AlexNet and 0.79 for SegNet. On the 2019 validation images, the F1-score of FCN-AlexNet using the RGB + ExGR combination also reaches 0.78. The proposed semantic segmentation model is shown to be more efficient, approximately 10 to 15 times faster, and to have a lower misinterpretation rate than the maximum likelihood method.
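The abstract names only ExGR among the visible-band vegetation indices combined with the RGB channels; the sketch below is not the authors' code but a minimal illustration, assuming the common ExG/ExR/ExGR formulation from normalized chromatic coordinates, of how such an index band can be computed and stacked with RGB to form an "RGB + ExGR" network input.

```python
# Minimal sketch (assumption, not the authors' implementation): compute an
# excess-green-minus-excess-red (ExGR) index from a UAV RGB image and stack it
# with the RGB channels as a 4-band input for a semantic segmentation network.
import numpy as np

def exgr_index(rgb):
    """rgb: H x W x 3 array of digital numbers; returns an H x W ExGR band."""
    rgb = rgb.astype(np.float64)
    total = rgb.sum(axis=2) + 1e-8                      # avoid division by zero
    r, g, b = (rgb[..., i] / total for i in range(3))   # chromatic coordinates
    exg = 2.0 * g - r - b                               # excess green (ExG)
    exr = 1.4 * r - g                                   # excess red (ExR)
    return exg - exr                                    # ExGR = ExG - ExR

def stack_rgb_exgr(rgb):
    """Form the 'RGB + ExGR' combination mentioned in the abstract."""
    return np.dstack([rgb, exgr_index(rgb)])
```

The exact trio of indices and the network input layout used in the paper are described in the full text; the definitions above follow the standard visible-band index formulas only.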
Keywords: semantic segmentation; deep learning; lodging; UAV; vegetation index
Yang, M.-D.; Tseng, H.-H.; Hsu, Y.-C.; Tsai, H.P. Semantic Segmentation Using Deep Learning with Vegetation Indices for Rice Lodging Identification in Multi-date UAV Visible Images. Remote Sens. 2020, 12, 633.
