Open Access Article
Sensors 2016, 16(5), 594; doi:10.3390/s16050594

A Comparative Analysis of Machine Learning with WorldView-2 Pan-Sharpened Imagery for Tea Crop Mapping

Department of Urban Planning and Spatial Information, Feng Chia University, Taichung 40724, Taiwan
* Author to whom correspondence should be addressed.
Academic Editor: Simon X. Yang
Received: 25 January 2016 / Revised: 8 April 2016 / Accepted: 19 April 2016 / Published: 26 April 2016
(This article belongs to the Special Issue Sensors for Agriculture)

Abstract

Tea is an economically important but vulnerable crop in East Asia and is highly affected by climate change. This study interprets tea land use/land cover (LULC) from very high resolution WorldView-2 imagery of central Taiwan using both pixel-based and object-based approaches. A total of 80 variables, derived from the WorldView-2 bands through pan-sharpening, standardization, principal component analysis, and gray-level co-occurrence matrix (GLCM) texture transformations, were used as input variables. For pixel-based image analysis (PBIA), 34 variables were selected, comprising seven principal components, 21 GLCM texture indices, and six original WorldView-2 bands. Results showed that the support vector machine (SVM) achieved the highest tea crop classification accuracy (overall accuracy, OA = 84.70%; kappa index of agreement, KIA = 0.690), followed by random forest (RF), the maximum likelihood (ML) algorithm, and logistic regression (LR). In object-based image analysis (OBIA), however, the ML classifier achieved the highest accuracy (OA = 96.04%; KIA = 0.887) using only six variables. The contribution of this study is a new framework for accurately identifying tea crops in a subtropical region with real-time, high-resolution WorldView-2 imagery and without a field survey, which could further aid agricultural land management and a sustainable agricultural product supply.
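The pixel-based workflow summarized above (GLCM texture features fed to a machine-learning classifier and scored by OA and KIA) can be sketched as follows. This is an illustrative Python example, not the authors' implementation: the arrays band and labels are placeholders for a real pan-sharpened WorldView-2 band and a reference class map, and the window size, gray-level quantization, and SVM settings are assumptions for demonstration only.

import numpy as np
from skimage.feature import graycomatrix, graycoprops   # scikit-image >= 0.19
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, cohen_kappa_score

def glcm_texture(window, levels=32):
    """Return a few classic GLCM statistics for one 8-bit image window."""
    # Quantize to `levels` gray levels to keep the co-occurrence matrix small.
    q = (window.astype(float) / 256 * levels).astype(np.uint8)
    glcm = graycomatrix(q, distances=[1], angles=[0], levels=levels,
                        symmetric=True, normed=True)
    return [graycoprops(glcm, p)[0, 0]
            for p in ("contrast", "homogeneity", "energy", "correlation")]

# Placeholder data: one pan-sharpened band and a per-pixel class map
# (e.g., 1 = tea, 0 = other). Replace with real WorldView-2 rasters.
rng = np.random.default_rng(0)
band = rng.integers(0, 256, size=(200, 200), dtype=np.uint8)
labels = rng.integers(0, 2, size=(200, 200), dtype=np.uint8)

# Build one feature vector per sampled pixel: the raw band value plus
# GLCM texture statistics from a 7 x 7 window centered on the pixel.
w, half = 7, 3
feats, y = [], []
for i in range(half, band.shape[0] - half, 4):        # subsample for speed
    for j in range(half, band.shape[1] - half, 4):
        win = band[i - half:i + half + 1, j - half:j + half + 1]
        feats.append([band[i, j]] + glcm_texture(win))
        y.append(labels[i, j])
X, y = np.array(feats), np.array(y)

# Train an RBF-kernel SVM and report OA and KIA on a held-out split.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    random_state=0)
clf = SVC(kernel="rbf", C=10, gamma="scale").fit(X_train, y_train)
pred = clf.predict(X_test)
print("OA :", accuracy_score(y_test, pred))
print("KIA:", cohen_kappa_score(y_test, pred))

In the study itself the candidate features also include principal components of the pan-sharpened bands and a much larger set of GLCM indices; the sketch only shows how texture extraction, classification, and the two accuracy measures fit together.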
Keywords: WorldView-2; tea crops; GLCM texture; pixel and object-based image analysis; random forest; support vector machine
This is an open access article distributed under the Creative Commons Attribution License (CC BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Share & Cite This Article

MDPI and ACS Style

Chuang, Y.-C.M.; Shiu, Y.-S. A Comparative Analysis of Machine Learning with WorldView-2 Pan-Sharpened Imagery for Tea Crop Mapping. Sensors 2016, 16, 594.


Note that from the first issue of 2016, MDPI journals use article numbers instead of page numbers.
