Article

Crop Classification and LAI Estimation Using Original and Resolution-Reduced Images from Two Consumer-Grade Cameras

1 College of Resource and Environment, Huazhong Agricultural University, 1 Shizishan Street, Wuhan 430070, China
2 USDA-Agricultural Research Service, Aerial Application Technology Research Unit, 3103 F & B Road, College Station, TX 77845, USA
3 Northwest A&F University, College of Mechanical and Electronic Engineering, 22 Xinong Road, Yangling 712100, China
4 Department of Biosystems and Agricultural Engineering, University of Nebraska, P.O. Box 830726, Lincoln, NE 68583, USA
5 Anhui Engineering Laboratory of Agro-Ecological Big Data, Anhui University, 111 Jiulong Road, Hefei 230601, China
6 College of Engineering, Huazhong Agricultural University, 1 Shizishan Street, Wuhan 430070, China
* Author to whom correspondence should be addressed.
Remote Sens. 2017, 9(10), 1054; https://doi.org/10.3390/rs9101054
Submission received: 26 August 2017 / Revised: 25 September 2017 / Accepted: 10 October 2017 / Published: 17 October 2017
(This article belongs to the Special Issue Monitoring Agricultural Land-Use Change and Land-Use Intensity)

Abstract

Consumer-grade cameras have been increasingly used for remote sensing applications in recent years. However, the performance of this type of camera has not been systematically tested and well documented in the literature. The objective of this research was to evaluate the performance of original and resolution-reduced images taken from two consumer-grade cameras, an RGB camera and a modified near-infrared (NIR) camera, for crop identification and leaf area index (LAI) estimation. Airborne RGB and NIR images taken over a 6.5-km² cropping area were mosaicked and aligned to create a four-band mosaic with a spatial resolution of 0.4 m. The spatial resolution of the mosaic was then reduced to 1, 2, 4, 10, 15 and 30 m for comparison. Six supervised classifiers were applied to the RGB images and the four-band images for crop identification, and 10 vegetation indices (VIs) derived from the images were related to ground-measured LAI. Accuracy assessment showed that maximum likelihood applied to the 0.4-m images achieved an overall accuracy of 83.3% for the RGB image and 90.4% for the four-band image. Regression analysis showed that the 10 VIs explained 58.7% to 83.1% of the variability in LAI. Moreover, the spatial resolutions of 0.4, 1, 2 and 4 m produced better results for both crop identification and LAI estimation than the coarser resolutions of 10, 15 and 30 m. The results from this study indicate that imagery from consumer-grade cameras can be a useful data source for crop identification and canopy cover estimation.

Graphical Abstract

1. Introduction

Precision agriculture (PA) technologies are increasingly being adopted by farmers because of their potential economic and environmental benefits [1]. In PA, detailed information on crops derived from remotely sensed data can play an important role. High-resolution satellite imagery has been an important data source for PA [2]. However, satellite imagery may not be available for a desired target area at a specified time due to satellite orbits, weather conditions, and competition with other customers for the same time slot. In contrast, cost-effective airborne platforms can be deployed any time weather permits.
Airborne imaging platforms, such as agricultural aircraft and unmanned aircraft systems (UAS), equipped with low-cost and easy-to-use consumer-grade cameras provide an alternative for farm-level image collection. Before digital consumer-grade cameras became available for remote sensing, film-based cameras had already been used in cartographic aerial mapping [3] and resource surveys [4]. However, these applications could only be conducted by professional organizations and trained personnel because of the sophistication of image acquisition and processing [5]. Consumer-grade cameras such as digital single-lens reflex (DSLR) cameras [6,7] and even pocket cameras [8] have been used as remote sensing tools for 3D reconstruction of plant height [9] and terrain [10,11], plant phenology monitoring [2], and high-throughput phenotyping [12]. For low-altitude remote sensing, consumer-grade cameras have been carried on manned aircraft [13], UAS [14,15] and balloons [16] to capture aerial images. The popularity of these platforms is due not only to their low cost, but also to other important advantages. For example, these platforms can carry various imaging sensors and fly at low altitudes to acquire ultra-high spatial resolution images according to specific imaging needs [17]. Moreover, they offer the flexibility to choose flight times to obtain cloud-free images even under partly cloudy conditions.
Although consumer-grade cameras are being used as a remote sensing tool for many applications, they differ from scientific-grade cameras in that they employ a Bayer color filter mosaic to obtain true-color images with a single sensor [18]. Therefore, much research on image processing related to consumer-grade cameras, including vignetting and geometric correction [19], radiometric calibration [20,21], and image mosaicking [14,22], has been reported. These image processing procedures can be implemented automatically or semi-automatically by image processing software. However, most scientific-grade cameras have the capability to capture near-infrared (NIR) spectral information, which is important for plant growth monitoring, whereas consumer-grade cameras do not capture NIR information. In fact, to improve imaging quality for normal color photography, most consumer-grade cameras are fitted with a NIR-blocking filter in front of the CCD (charge-coupled device) or CMOS (complementary metal-oxide-semiconductor) sensor [7]. Thus, NIR images can be obtained if the NIR-blocking filter is replaced with a NIR filter. Based on the type of filter, there are two ways to modify the camera. One is to replace the NIR-blocking filter with a long-pass NIR filter. With this method, the modified camera can only capture NIR images and is often synchronized with another non-modified camera to capture both NIR and RGB images [7]. The other method is to use a dual band-pass filter that provides blue, green, and NIR bands [20,23]. This approach only needs a single camera and the band images are already registered. However, this method often causes a large difference in exposure between the bands [2]. In addition, the spectral response of the blue and green bands is affected to some extent after the NIR-blocking filter is removed. Nevertheless, modified and non-modified consumer-grade cameras provide a cost-effective and easily deployable option.
The performance of consumer-grade cameras has been evaluated for crop identification [24], biomass [20] and leaf area index (LAI) [23] assessment, and nitrogen status monitoring [19,25]. Although most studies showed promising results, few evaluated consumer-grade cameras over large commercial agricultural areas with diverse crops and other land cover types [19]. In addition, there has been limited research evaluating different image classification and analysis techniques for crop identification and crop growth assessment. Furthermore, most low-altitude image acquisition platforms can easily provide ultra-high spatial resolution images. However, a higher spatial resolution does not always mean a better result, as the spectral response of fine-resolution pixels can be easily affected by horizontal radiation [26,27]. Therefore, the effect of image spatial resolution needs to be examined.
To systematically evaluate the performance of consumer-grade cameras, RGB and NIR imagery from two consumer-grade cameras was used for crop identification and LAI estimation. The specific objectives of this study were to: (1) evaluate pixel-based classification methods for crop identification; (2) compare different vegetation indices (VIs) for LAI estimation; and (3) analyze the effects of image band combinations and spatial resolution on image classification and LAI estimation.

2. Materials and Methods

2.1. Study Area

A 6.5-km2 cropping area located along the Brazos River near College Station, Texas was selected as the study site. In the 2015 growing season, four major crops including cotton, sorghum, soybean, and watermelon were grown in the study area. Other land cover types included impervious surfaces (i.e., buildings and paved roads), bare soil and fallow, water bodies, grass, and forest. In this area, sorghum is typically planted in February to March and harvested in July to August. Cotton is usually planted in March to April and harvested in August to September. However, due to the rainy weather around the planting time in 2015, cotton was planted over a multi-week period and therefore had varying growth stages for the season.
Image acquisition was carried out using a low-cost dual-camera imaging system on 15 July 2015 under sunny conditions. One camera was a non-modified RGB camera and the other was the same model but modified to obtain NIR images by replacing the infrared-blocking filter with a 720-nm long-pass filter (Life Pixel Infrared, Mukilteo, WA, USA). The modified camera still had three channels, though all were NIR. Because the red channel had a high spectral response, it was selected as the NIR band to create the four-band image. Detailed information on the imaging system and platform can be found in references [13,24]. In this study, the two cameras simultaneously and independently captured 71 pairs of RGB and NIR images at altitudes of 1752 ± 30 m above ground level to achieve a ground spatial resolution of 0.35 m. The flight paths and the thumbnails of the geotagged RGB images are shown in Figure 1. The captured images were recorded in the 14-bit Nikon RAW format (NEF extension) and converted to 16-bit TIFF format during image pre-processing.
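For readers who want to reproduce this kind of RAW-to-TIFF conversion without the vendor software, a minimal sketch using the open-source rawpy and tifffile packages is shown below; the file names are illustrative and the processing options are assumptions, not the exact settings used in the study.

```python
# Sketch: convert a Nikon NEF raw file to a 16-bit TIFF, analogous to the
# pre-processing step described above. Uses rawpy/tifffile rather than the
# Nikon Capture software; file names are illustrative placeholders.
import rawpy
import tifffile

def nef_to_tiff16(nef_path: str, tiff_path: str) -> None:
    with rawpy.imread(nef_path) as raw:
        # Demosaic the Bayer pattern and keep 16 bits per channel;
        # gamma=(1, 1) and no auto-brightening preserve a near-linear response.
        rgb16 = raw.postprocess(output_bps=16, gamma=(1, 1),
                                no_auto_bright=True, use_camera_wb=True)
    tifffile.imwrite(tiff_path, rgb16)

nef_to_tiff16("DSC_0001.NEF", "DSC_0001.tif")
```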

2.2. Image Pre-Processing

Image pre-processing in this study mainly involved four steps: geometric and vignetting correction, image mosaicking, band registration, and radiometric calibration. The sequence and the tools used are shown in Figure 2. First, the free software Capture NX-D 1.2.1 (Nikon, Inc., Tokyo, Japan) provided with the cameras was used to correct the vignetting and geometric distortion of the images. Second, the Pix4DMapper software (Pix4D, Inc., Lausanne, Switzerland) was used to mosaic the RGB and NIR images separately. To improve the accuracy of the mosaicking, 11 white plastic square panels with a side length of 1 m were placed evenly across the study area as ground control points (GCPs). The positions of these GCPs were measured using a Trimble GPS Pathfinder ProXRT receiver (Trimble Navigation Limited, Sunnyvale, CA, USA) with a 0.2-m horizontal accuracy. According to the suggestions of Pix4D, 5–10 GCPs should be enough for geo-referencing a relatively flat area such as this study site [28]. Third, the mosaicked RGB and NIR images were registered using the AutoSync module in ERDAS Imagine 2014 (Intergraph Corporation, Madison, AL, USA) to produce the four-band imagery. Last, to quantitatively calculate VIs for LAI estimation, the four-band mosaic in digital numbers (DN) was converted to relative reflectance values. The radiometric calibration process is described in Section 2.4. Image resolution and registration errors after image mosaicking and registration are shown in Table 1.
The spatial resolutions of the initial RGB and NIR mosaics were very close to 0.4 m, so both mosaics were resampled to 0.4 m. To simulate the performance of high- to medium-resolution imagery, the 0.4-m image was then aggregated to generate images at six coarser resolutions of 1, 2, 4, 10, 15 and 30 m. Using an appropriate point spread function (PSF) to simulate coarser-resolution images could provide more accurate results. However, the sensor PSF includes several components, the optical PSF, the image motion PSF, the electronic PSF, and the detector PSF [29], and it is very difficult to determine the PSF of consumer-grade cameras. In addition, simple pixel averaging/aggregation methods have also been used to simulate coarser-resolution images [30]. Therefore, the Resize Data tool in ENVI 5.3, which assigns each coarser pixel the average of all the finer pixels falling within it, was used in this research. The three-band RGB and four-band images are shown in Figure 3, together with subset RGB and color-infrared (CIR) images at all seven spatial resolutions.
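The block-averaging behaviour of this kind of aggregation is easy to reproduce outside ENVI. The sketch below, assuming a single-band NumPy array and an aggregation factor that is applied to whole blocks, illustrates the idea; it is not the ENVI implementation itself.

```python
# Sketch: simple pixel aggregation (block averaging) to simulate a coarser
# resolution, mimicking the averaging resize described above.
import numpy as np

def aggregate(band: np.ndarray, factor: int) -> np.ndarray:
    """Average non-overlapping factor x factor blocks of a single band."""
    rows, cols = band.shape
    rows -= rows % factor          # trim edges that do not fill a full block
    cols -= cols % factor
    blocks = band[:rows, :cols].reshape(rows // factor, factor,
                                        cols // factor, factor)
    return blocks.mean(axis=(1, 3))

# Example: degrade a hypothetical 0.4-m band to 2 m (factor of 5).
fine = np.random.rand(1000, 1000).astype(np.float32)
coarse = aggregate(fine, 5)        # -> 200 x 200 pixels
```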

2.3. Crop Identification

Six common and classical supervised classifiers, minimum distance (MD), Mahalanobis distance (MAHD), maximum likelihood (ML), spectral angle mapper (SAM), neural net (NN), and support vector machine (SVM), were applied to the three-band and four-band images at the seven spatial resolutions. The MD classifier calculates the Euclidean distance from each unknown pixel to the class mean derived from the training data of each class and assigns each pixel to the nearest class [31]. The MAHD classifier is a direction-sensitive distance classifier similar to MD, except that the covariance matrix is used in the calculation to account for correlations in the data set [32]. ML classification is based on probability and uses a discriminant function to assign each pixel to the class with the highest likelihood [33]. The SAM classifier is a physically based spectral classification that uses an n-dimensional angle to match pixels to reference spectra [34]. The NN classifier uses standard backpropagation for supervised learning, adjusting the node weights to minimize the difference between the output node activation and the desired output [31]. The SVM classifier separates the classes with a decision surface, derived using a kernel function, that maximizes the margin between the classes [35]. All of these classifiers are included in ENVI 5.3 (Exelis Visual Information Solutions, Boulder, CO, USA), which was used for image classification. The same training samples were used for the three-band and four-band image classifications with the six supervised classifiers. Because this research mainly focused on evaluating the performance of consumer-grade cameras for agricultural applications, the default parameters for the classifiers in ENVI were used.
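As an illustration of this pixel-based workflow outside ENVI, the sketch below uses scikit-learn stand-ins: QuadraticDiscriminantAnalysis as a Gaussian maximum-likelihood analogue and SVC for the SVM. The image and training arrays are hypothetical placeholders, not the study data, and the stand-ins are not the ENVI implementations.

```python
# Sketch: pixel-based supervised classification of a multi-band image.
import numpy as np
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.svm import SVC

def classify_image(image: np.ndarray, train_pixels: np.ndarray,
                   train_labels: np.ndarray, clf) -> np.ndarray:
    """image: (rows, cols, bands); train_pixels: (n_samples, bands)."""
    rows, cols, bands = image.shape
    clf.fit(train_pixels, train_labels)
    flat = image.reshape(-1, bands)
    return clf.predict(flat).reshape(rows, cols)

# Hypothetical 4-band mosaic and training data (nine land cover classes).
img = np.random.rand(200, 200, 4)
X = np.random.rand(241, 4)
y = np.random.randint(0, 9, 241)

ml_map = classify_image(img, X, y, QuadraticDiscriminantAnalysis())
svm_map = classify_image(img, X, y, SVC(kernel="rbf"))
```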
As noted previously, there were nine main land cover types in the study area. In total, 241 training samples (forest 25, grass 37, impervious 24, bare soil 23, water 18, sorghum 38, cotton 35, watermelon 26, and soybean 5) with known ground cover were collected. Classification maps were generated for the three-band and four-band images at the seven spatial resolutions using the six classifiers. The nine cover types were then regrouped into six classes: non-vegetation (impervious, bare soil, and water), non-crop vegetation (grass and forest), soybean, watermelon, sorghum, and cotton. For classification accuracy assessment, 950 validation points (non-vegetation: impervious 64, bare soil 142, and water 53; non-crop vegetation: forest 79 and grass 123; sorghum 153; cotton 184; watermelon 93; and soybean 59) were used to calculate the overall accuracy [36], confusion matrix [37] and kappa coefficient [38] in ERDAS Imagine 2014. In addition, a reference land cover map produced from ground surveys was used for classification accuracy assessment.
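A minimal sketch of this accuracy assessment, assuming the reference and predicted class labels at the validation points are available as arrays (synthetic values are used here rather than ERDAS output), could look like this:

```python
# Sketch: overall accuracy, confusion matrix and kappa from validation points.
import numpy as np
from sklearn.metrics import confusion_matrix, accuracy_score, cohen_kappa_score

rng = np.random.default_rng(0)
reference = rng.integers(0, 6, 950)                 # six regrouped classes
# Synthetic predictions agreeing with the reference about 85% of the time.
predicted = np.where(rng.random(950) < 0.85, reference, rng.integers(0, 6, 950))

cm = confusion_matrix(reference, predicted)
overall = accuracy_score(reference, predicted)
kappa = cohen_kappa_score(reference, predicted)
print(cm)
print(f"Overall accuracy: {overall:.3f}, kappa: {kappa:.3f}")
```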

2.4. LAI Data Collection and Radiometric Calibration

For LAI estimation, the imagery was calibrated using the empirical line approach [39,40]. Four 8 m × 8 m tarps with nominal reflectance values of 4%, 16%, 32% and 48% were placed in the imaging area during image acquisition, as shown in the upper-right corner of Figure 4. A HandHeld 2 portable spectroradiometer (ASD Inc., Boulder, CO, USA) was used to measure actual reflectance values of the tarps. The instrument had a spectral range of 350–1100 nm with a spectral resolution of <3 nm at 700 nm and a sampling interval of 1 nm. As shown in Figure 4, the four solid lines in different gray colors represent the actual reflectance curves of the four calibration tarps measured by the portable spectroradiometer. Bandwidths were calculated based on the full width at half maximum (FWHM) of the camera spectral response curves in each of the four spectral bands [41], which are also shown in Figure 4.
Compared with scientific-grade cameras, consumer-grade cameras have wider spectral ranges for individual bands and more overlap between the bands. It was therefore necessary to evaluate the coefficient of variation of reflectance (CVR) of each tarp for each band within the FWHM range. The CVR is defined as the ratio of the standard deviation σ to the mean μ and is calculated with Equations (1)–(3):
\mathrm{CVR} = \frac{\sigma}{\mu} \times 100\% \quad (1)
\mu = \frac{1}{n}\sum_{i=1}^{n} x_i \quad (2)
\sigma = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(x_i - \mu\right)^2} \quad (3)
where x_i is the reflectance at each 1-nm wavelength, and n is the number of 1-nm wavebands within the FWHM range of each band. The CVR values are shown in Figure 5 and range from 0.26% to 3.55%. Higher CVR values occurred on the 32% and 48% tarps for the blue and green bands, as can also be seen in Figure 4. Most of the CVR values were less than 1.5%, indicating that the spectral response of the tarps within the FWHM ranges was consistent and the tarps were suitable for image calibration.
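The CVR calculation in Equations (1)–(3) is straightforward to implement. The sketch below assumes a tarp reflectance spectrum sampled at 1-nm intervals and a known FWHM range for a band; the spectrum and limits are illustrative, not the measured values.

```python
# Sketch of the CVR calculation in Equations (1)-(3).
import numpy as np

def cvr(wavelengths: np.ndarray, reflectance: np.ndarray,
        fwhm_low: float, fwhm_high: float) -> float:
    """Coefficient of variation of reflectance (%) within a band's FWHM range."""
    mask = (wavelengths >= fwhm_low) & (wavelengths <= fwhm_high)
    x = reflectance[mask]
    # np.std uses the population standard deviation, matching Equation (3).
    return float(np.std(x) / np.mean(x) * 100.0)

wl = np.arange(350, 1101)                       # 1-nm sampling, 350-1100 nm
refl = 0.30 + 0.01 * np.sin(wl / 25.0)          # synthetic tarp spectrum
print(cvr(wl, refl, 418, 510))                  # e.g. the blue-band FWHM range
```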
The average reflectance of each band for each tarp was calculated over the corresponding FWHM range. Similarly, the average digital number (DN) of each tarp was calculated by averaging the DNs extracted from the central area of the tarp (a 4 m × 4 m square, about 100 pixels). Using the empirical line calibration method, a linear regression equation relating reflectance to DN was determined for each band (Table 2).
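A minimal sketch of this empirical line step is given below: a linear DN-to-reflectance relationship is fitted per band from the four tarps and applied to a band image. The numeric values are illustrative stand-ins of the same order as those in Table 2, not a re-derivation of the published coefficients.

```python
# Sketch: empirical line calibration for one band.
import numpy as np

tarp_dn = np.array([23000.0, 36000.0, 47000.0, 56000.0])   # illustrative DNs
tarp_refl = np.array([0.08, 0.15, 0.34, 0.43])              # measured reflectance

slope, intercept = np.polyfit(tarp_dn, tarp_refl, 1)         # least-squares line

def dn_to_reflectance(band_dn: np.ndarray) -> np.ndarray:
    """Apply the per-band empirical line to convert DN to reflectance."""
    return slope * band_dn + intercept

nir_band = np.random.randint(20000, 60000, size=(100, 100)).astype(np.float64)
nir_refl = dn_to_reflectance(nir_band)
```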
A total of 10 VIs, listed in Table 3, were used to estimate crop LAI. Half of them were calculated from the RGB bands and the other half from the NIR band combined with the red or green band. LAI was measured in different crop fields using an AccuPAR LP-80 ceptometer (Decagon Devices, Inc., Pullman, WA, USA). An external point sensor was used to collect instantaneous above-canopy photosynthetically active radiation (PAR) measurements while sampling under the canopy. The coordinates of the sample points were collected with the Trimble GPS Pathfinder ProXRT receiver. A total of 86 samples were collected to represent the different crops. At each sample point, LAI was measured 10 times over a 1-m segment along the crop row. Because the half row width of most crops in this investigation was about 0.5 m, only five of the probe's eight 0.1-m segments were used. The VIs were calculated for each image and the values at each sample point were extracted using Python in ArcGIS 10.3 (ESRI, Inc., Redlands, CA, USA). Correlation and regression analyses between LAI and the VIs were performed at the seven spatial resolutions in Matlab R2013a (MathWorks, Inc., Natick, MA, USA).
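The sketch below illustrates how two of the VIs in Table 3 can be computed from reflectance bands and sampled at plot locations given as pixel indices. It is a generic NumPy illustration, not the ArcGIS/Python workflow used in the study, and the arrays and coordinates are placeholders.

```python
# Sketch: compute NDVI and ExG from band arrays and sample them at plot pixels.
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    return (nir - red) / (nir + red + 1e-10)        # small epsilon avoids /0

def exg(red: np.ndarray, green: np.ndarray, blue: np.ndarray) -> np.ndarray:
    total = red + green + blue + 1e-10
    r, g, b = red / total, green / total, blue / total   # chromatic coordinates
    return 2 * g - r - b

rows, cols = 500, 500
red, green, blue, nir = (np.random.rand(rows, cols) for _ in range(4))

vi_map = ndvi(nir, red)
samples = [(120, 340), (400, 55)]                    # hypothetical plot pixels
vi_at_plots = [vi_map[r, c] for r, c in samples]
```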

3. Results and Discussion

3.1. Crop Classification

Summaries of the accuracy assessment results for the six-class classification maps generated from the three-band RGB and four-band images at the seven spatial resolutions using the six classifiers are presented in Table 4 and Table 5. No MAHD- or ML-based classification maps were generated for the 30-m four-band images because MAHD and ML classification require more training samples than image bands in each region of interest. To maintain consistency, the same training samples were used for all classifications; at the 30-m resolution, the image aggregation caused some sample points to fall within the same coarse pixel, which reduced the effective number of samples.
Among the six classification methods, SAM had the lowest overall accuracy and kappa coefficient values. The classification accuracies based on MD, NN, and MAHD were close to those based on SAM. ML obtained the best classification accuracy at the 0.4-m resolution. ML and SVM produced very similar results and were better than the other four classifiers, although SVM is a more time-consuming classification method. The best six-class classification maps, derived from the 0.4-m images using ML, are shown in Figure 6.
The four-band images had better classification performance than the three-band images for all classification maps except the two ML-based classifications at spatial resolutions of 10 m and 15 m. As shown in Figure 6, one obvious difference between the three-band and four-band classification maps is the consistency of class assignments within each field. For example, for the largest cotton and sorghum fields in the northern part of the study area, the ML-based classification of the 0.4-m four-band image showed less misclassification and better uniformity than that of the corresponding three-band image. It is also worth noting that some areas of the river and ponds with high silt content, which were misclassified as bare soil in the three-band image, were correctly classified in the four-band image in the initial nine-class classification maps that were then combined into the six-class maps. Moreover, misclassification between cotton and sorghum was reduced with the NIR band.
As shown in Figure 7, the differences in overall classification accuracy between the three-band and four-band images for the six classifiers were not consistent across the seven spatial resolutions. For the ML method, which had the best performance, the accuracy of the four-band classification was 7.16 percentage points higher than that of the three-band classification at the finest resolution (0.4 m), but 1.06 and 1.05 percentage points lower at 10 m and 15 m, respectively. For the other five classifiers, the four-band classification maps had higher overall accuracy than the three-band classification maps. Since the consumer-grade cameras had some overlap between adjacent spectral bands, the poor performance of SAM was partly related to the purity of the end-members for each class, which is critical to this classifier [52].
The performance of the RGB and NIR imagery from the two consumer-grade cameras was similar to that reported for traditional crop identification [53]. Unlike narrow-band scientific-grade sensors, consumer-grade cameras have spectral overlaps that can cause some errors in crop classification. Despite that, the results achieved with the two consumer-grade cameras were encouraging. Furthermore, if more advanced classification methods such as object-based classifiers with auxiliary information were used, better results could likely be achieved [24].
Although a modified NIR camera has benefits and is necessary for applications that require NIR data, a single RGB camera may be sufficient for many applications. Using a single RGB camera avoids the need for a second camera and camera modification, and it also simplifies image processing procedures such as image registration and stacking. In fact, single RGB cameras are commonly used on UAS-based remote sensing platforms.
In general, higher resolutions resulted in higher classification accuracy values, but as shown in Figure 8, the improvements were relatively small for images with resolutions from 0.4 to 2 m. Since images with coarser resolutions can save processing time and storage space, resolutions between 1 m and 2 m may be a good compromise.

3.2. Crop LAI Assessment Application

Ground LAI measurements were made at 86 locations in selected crop fields including watermelon (nine locations), sorghum (38 locations), cotton (36 locations), and soybean (three locations). Most of the sorghum fields surveyed were at the physiological maturity or early senescence stage. Cotton, soybean, and watermelon were at varying vegetative production stages due to different planting dates and management strategies, which resulted in larger standard deviations in the measured LAI values (Figure 9).
Regression results between the ground-measured LAI and the VIs listed in Table 3 at the seven spatial resolutions are summarized in Table 6. To better visualize the performance of each VI for LAI estimation, the R2 values for the regression equations are plotted in Figure 10.
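The regression models in Table 6 are second-degree polynomials of the form y = ax² + bx + c. A minimal sketch of how such a model and its R2 and RMSE can be obtained is shown below; the VI and LAI arrays are synthetic stand-ins for the 86 field samples, not the measured data.

```python
# Sketch: fit a second-degree polynomial between a VI and ground-measured LAI
# and compute R2 and RMSE, matching the form of the models in Table 6.
import numpy as np

rng = np.random.default_rng(1)
vi = rng.uniform(0.2, 0.9, 86)                      # synthetic VI values
lai = -3.1 * vi**2 + 12.3 * vi + 0.3 + rng.normal(0, 0.8, 86)

coeffs = np.polyfit(vi, lai, 2)                     # [a, b, c] of y = ax^2 + bx + c
pred = np.polyval(coeffs, vi)

ss_res = np.sum((lai - pred) ** 2)
ss_tot = np.sum((lai - lai.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
rmse = np.sqrt(ss_res / lai.size)
print(coeffs, round(r2, 3), round(rmse, 3))
```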
The best three R2 values between LAI and each of the 10 VIs resulted from the images at the three finest resolutions of 0.4, 1 and 2 m. All the VIs had their highest R2 values at the 0.4-m resolution except GNDVI. Nevertheless, there were very small differences among the R2 values at the three finest resolutions for LAI estimation in this study. The LAI measurement method and GPS accuracy may have had some effect on the results, as the ground measurements were made over an approximately 1-m2 area at each sampling location. At spatial resolutions from 0.4 to 2 m, and even at the 4-m resolution, most VIs provided good and consistent results.
Among the VIs derived from the RGB bands, CIVE achieved the best performance at the finer spatial resolutions from 0.4 to 4 m, with a best R2 value of 0.831 at the 0.4-m resolution. CIVE can be easily calculated from RGB images taken with a non-modified consumer-grade camera. In comparison, the best R2 value among the NIR- and red-band VIs was 0.774, obtained with DVI. The late-season data collection may have partly contributed to the lower correlations of the NIR-based VIs because plants tend to lose their vigor at later growth stages. It is worth noting that CIVE uses all three visible bands, whereas DVI and the other NIR-based VIs only use two bands. More research is needed to evaluate these VIs at different crop growth stages during the growing season. There was no clear indication that narrow-band NIR sensors could improve the reliability of crop vegetation index assessments [54]. In contrast, some research found that broadband consumer-grade cameras performed better than narrow-band cameras [55] and that VIs from RGB bands predicted LAI or biomass better than NIR-based indices [56]. Certainly, these results were based on particular experimental conditions. As consumer-grade cameras become more widely used, they need to be evaluated extensively under diverse environmental conditions.
The performance of LAI estimation with the two best RGB-based VIs (CIVE and ExG) and the two best NIR-based VIs (DVI and RDVI) at the 0.4-m spatial resolution was comparable and consistent for each crop, as shown in Figure 10. The 1:1 plots of ground-measured versus estimated LAI in Figure 11 show that most of the estimated LAI values for cotton and watermelon deviated from the 1:1 line.
The main purpose of this research was to evaluate both original and modified consumer-grade cameras for two important agricultural applications—crop classification and LAI estimation. The results from this study showed that images from the consumer-grade RGB camera and the modified NIR camera were sufficient for these particular applications. Certainly, the particular experimental conditions had some effect on the results. For example, the image acquisition date was a little late for the growing season. Some of the crops such as sorghum were close to physiological maturity stages. In the future, images need to be taken a little earlier or on multiple dates to improve crop classification and LAI estimation results. Moreover, more sophisticated classification methods can also be evaluated. Although consumer-grade cameras may not be as accurate as scientific-grade sensors, their use is on the rise because of their low-cost and ease of use. Although much could be improved, this study should provide useful information for both researchers and practitioners on the potential of consumer-grade cameras for practical applications.

4. Conclusions

As consumer-grade cameras are gaining popularity for use on agricultural aircraft and UAS platforms for agricultural applications, this study comprehensively compared and evaluated the performance of two consumer-grade cameras (a normal RGB camera and a modified NIR camera) for crop identification and LAI estimation at the original and six degraded spatial resolutions.
The results from this study indicate that the imagery from the RGB camera and the modified NIR camera combined can be used for crop identification and LAI estimation with sufficient accuracy. Although finer spatial resolutions tended to produce better results, similar results were obtained with resolutions from 0.4 to 4 m for both crop identification and LAI estimation. As most crops have a row spacing of about 1 m and most agricultural machines such as ground-based or aerial applicators have operating swaths of more than 4 m, images at coarser resolutions (i.e., 2–4 m) can be more efficient without compromising performance.
Although consumer-grade cameras provide a low-cost and easy-to-use alternative to scientific- or industrial-grade cameras for remote sensing applications, many issues related to consumer-grade cameras, such as NIR filter selection and radiometric calibration, still need to be addressed. Moreover, extensive research is needed to compare this type of camera with scientific-grade multispectral and hyperspectral cameras for different remote sensing applications.

Supplementary Materials

Supplementary File 1

Acknowledgments

This project was conducted as part of a visiting scholar research program, and the first author was financially supported by the National Natural Science Foundation of China (Grant Nos. 41201364 and 31501222) and the Fundamental Research Funds for the Central Universities (Grant No. 2662017JC038). The authors wish to thank Jennifer Marshall and Nicholas Mondrik of the College of Science of Texas A&M University, College Station, Texas, for allowing us to use their monochromator and assisting in the measurements of the spectral response of the cameras. Thanks are also extended to Fred Gomez and Lee Denham of USDA-ARS in College Station, Texas, for acquiring the images for this study.

Author Contributions

Jian Zhang designed and conducted the experiment, processed and analyzed the imagery, and wrote the manuscript. Chenghai Yang guided the study design, participated in camera testing and image collection, advised in data analysis, and revised the manuscript. Biquan Zhao, Huaibo Song, W. Clint Hoffmann, Yeyin Shi, Dongyan Zhang, and Guozhong Zhang were involved in the process of the experiment, ground data collection, or manuscript revision. All authors reviewed and approved the final manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

Disclaimer

Mention of trade names or commercial products in this publication is solely for the purpose of providing specific information and does not imply recommendation or endorsement by the U.S. Department of Agriculture.

References

  1. Schimmelpfennig, D.; Ebel, R. Sequential adoption and cost savings from precision agriculture. J. Agric. Resour. Econ. 2016, 41, 97–115. [Google Scholar]
  2. Nijland, W.; de Jong, R.; de Jong, S.M.; Wulder, M.A.; Bater, C.W.; Coops, N.C. Monitoring plant condition and phenology using infrared sensitive consumer grade digital cameras. Agric. For. Meteorol. 2014, 184, 98–106. [Google Scholar] [CrossRef]
  3. Light, D.L. Film cameras or digital sensors? The challenge ahead for aerial imaging. Photogramm. Eng. Remote Sens. 1996, 62, 285–291. [Google Scholar]
  4. Quilter, M.C.; Anderson, V.J. Low altitude/large scale aerial photographs: A tool for range and resource managers. Rangel. Arch. 2000, 22, 13–17. [Google Scholar] [CrossRef]
  5. Yuan, H.; Yang, G.; Li, C.; Wang, Y.; Liu, J.; Yu, H.; Feng, H.; Xu, B.; Zhao, X.; Yang, X. Retrieving soybean leaf area index from unmanned aerial vehicle hyperspectral remote sensing: Analysis of RF, ANN, and SVM regression models. Remote Sens. 2017, 9, 309. [Google Scholar] [CrossRef]
  6. Yang, C.; Odvody, G.N.; Thomasson, J.A.; Isakeit, T.; Nichols, R.L. Change detection of cotton root rot infection over 10-year intervals using airborne multispectral imagery. Comput. Electron. Agric. 2016, 123, 154–162. [Google Scholar] [CrossRef]
  7. Yang, C.; Westbrook, J.K.; Suh, C.P.-C.; Martin, D.E.; Hoffmann, W.C.; Lan, Y.; Fritz, B.K.; Goolsby, J.A. An airborne multispectral imaging system based on two consumer-grade cameras for agricultural remote sensing. Remote Sens. 2014, 6, 5257–5278. [Google Scholar] [CrossRef]
  8. Hu, Z.; He, F.; Yin, J.; Lu, X.; Tang, S.; Wang, L.; Li, X. Estimation of fractional vegetation cover based on digital camera survey data and a remote sensing model. J. China Univ. Min. Technol. 2007, 17, 116–120. [Google Scholar] [CrossRef]
  9. Zarco-Tejada, P.J.; Diaz-Varela, R.; Angileri, V.; Loudjani, P. Tree height quantification using very high resolution imagery acquired from an unmanned aerial vehicle (uav) and automatic 3d photo-reconstruction methods. Eur. J. Agron. 2014, 55, 89–99. [Google Scholar] [CrossRef]
  10. Prosdocimi, M.; Calligaro, S.; Sofia, G.; Dalla Fontana, G.; Tarolli, P. Bank erosion in agricultural drainage networks: New challenges from structure-from-motion photogrammetry for post-event analysis. Earth Surf. Process. Landf. 2015, 40, 1891–1906. [Google Scholar] [CrossRef]
  11. Prosdocimi, M.; Burguet, M.; Di Prima, S.; Sofia, G.; Terol, E.; Rodrigo Comino, J.; Cerdà, A.; Tarolli, P. Rainfall simulation and structure-from-motion photogrammetry for the analysis of soil water erosion in mediterranean vineyards. Sci. Total Environ. 2017, 574, 204–215. [Google Scholar] [CrossRef] [PubMed]
  12. Shi, Y.; Thomasson, J.A.; Murray, S.C.; Pugh, N.A.; Rooney, W.L.; Shafian, S.; Rajan, N.; Rouze, G.; Morgan, C.L.S.; Neely, H.L.; et al. Unmanned aerial vehicles for high-throughput phenotyping and agronomic research. PLoS ONE 2016, 11, e0159781. [Google Scholar] [CrossRef] [PubMed]
  13. Yang, C.; Hoffmann, W.C. Low-cost single-camera imaging system for aerial applicators. J. Appl. Remote Sens. 2015, 9, 096064. [Google Scholar] [CrossRef]
  14. Wellens, J.; Midekor, A.; Traore, F.; Tychon, B. An easy and low-cost method for preprocessing and matching small-scale amateur aerial photography for assessing agricultural land use in burkina faso. Int. J. Appl. Earth Obs. Geoinf. 2013, 23, 273–278. [Google Scholar] [CrossRef]
  15. Diaz-Varela, R.; Zarco-Tejada, P.J.; Angileri, V.; Loudjani, P. Automatic identification of agricultural terraces through object-oriented analysis of very high resolution dsms and multispectral imagery obtained from an unmanned aerial vehicle. J. Environ. Manag. 2014, 134, 117–126. [Google Scholar] [CrossRef] [PubMed]
  16. Jensen, T.; Apan, A.; Young, F.; Zeller, L. Detecting the attributes of a wheat crop using digital imagery acquired from a low-altitude platform. Comput. Electron. Agric. 2007, 59, 66–77. [Google Scholar] [CrossRef] [Green Version]
  17. Colomina, I.; Molina, P. Unmanned aerial systems for photogrammetry and remote sensing: A review. ISPRS J. Photogramm. Remote Sens. 2014, 92, 79–97. [Google Scholar] [CrossRef]
  18. Bayer, B.E. Color Imaging Array. US Patent, 3,971,065, 20 July 1976. [Google Scholar]
  19. Lelong, C.C.D. Assessment of unmanned aerial vehicles imagery for quantitative monitoring of wheat crop in small plots. Sensors 2008, 8, 3557–3585. [Google Scholar] [CrossRef] [PubMed]
  20. Miller, C.D.; Fox-Rabinovitz, J.R.; Allen, N.F.; Carr, J.L.; Kratochvil, R.J.; Forrestal, P.J.; Daughtry, C.S.T.; McCarty, G.W.; Hively, W.D.; Hunt, E.R. Nir-green-blue high-resolution digital images for assessment of winter cover crop biomass. GISci. Remote Sens. 2011, 48, 86–98. [Google Scholar]
  21. Akkaynak, D.; Treibitz, T.; Xiao, B.; Gürkan, U.A.; Allen, J.J.; Demirci, U.; Hanlon, R.T. Use of commercial off-the-shelf digital cameras for scientific data acquisition and scene-specific color calibration. JOSA A 2014, 31, 312–321. [Google Scholar] [CrossRef] [PubMed]
  22. Song, H.; Yang, C.; Zhang, J.; Hoffmann, W.C.; He, D.; Thomasson, J.A. Comparison of mosaicking techniques for airborne images from consumer-grade cameras. J. Appl. Remote Sens. 2016, 10, 016030. [Google Scholar] [CrossRef]
  23. Hunt, E.R.; Hively, W.D.; Fujikawa, S.J.; Linden, D.S.; Daughtry, C.S.; McCarty, G.W. Acquisition of nir-green-blue digital photographs from unmanned aircraft for crop monitoring. Remote Sens. 2010, 2, 290–305. [Google Scholar] [CrossRef]
  24. Zhang, J.; Yang, C.; Song, H.; Hoffmann, W.; Zhang, D.; Zhang, G. Evaluation of an airborne remote sensing platform consisting of two consumer-grade cameras for crop identification. Remote Sens. 2016, 8, 257. [Google Scholar] [CrossRef]
  25. Lebourgeois, V.; Bégué, A.; Labbé, S.; Houlès, M.; Martiné, J.F. A light-weight multi-spectral aerial imaging system for nitrogen crop monitoring. Precis. Agric. 2012, 13, 525–541. [Google Scholar] [CrossRef]
  26. Widlowski, J.-L.; Lavergne, T.; Pinty, B.; Gobron, N.; Verstraete, M.M. Towards a high spatial resolution limit for pixel-based interpretations of optical remote sensing data. Adv. Space Res. 2008, 41, 1724–1732. [Google Scholar] [CrossRef]
  27. Kobayashi, H.; Suzuki, R.; Nagai, S.; Nakai, T.; Kim, Y. Spatial scale and landscape heterogeneity effects on fapar in an open-canopy black spruce forest in interior alaska. IEEE Geosci. Remote Sens. Lett. 2014, 11, 564–568. [Google Scholar] [CrossRef]
  28. Pix4D. Getting GCPs in the Field or through Other Sources. Available online: https://support.pix4d.com/hc/en-us/articles/202557489-Step-1-Before-Starting-a-Project-4-Getting-GCPs-on-the-field-or-through-other-sources-optional-but-recommended-#gsc.tab=0 (accessed on 22 August 2017).
  29. Tan, B.; Woodcock, C.; Hu, J.; Zhang, P.; Ozdogan, M.; Huang, D.; Yang, W.; Knyazikhin, Y.; Myneni, R. The impact of gridding artifacts on the local spatial properties of modis data: Implications for validation, compositing, and band-to-band registration across resolutions. Remote Sens. Environ. 2006, 105, 98–114. [Google Scholar] [CrossRef]
  30. Schowengerdt, R.A. Remote sensing: Models and Methods for Image Processing; Academic Press: San Diego, CA, USA, 2006; pp. 67–83. [Google Scholar]
  31. Richards, J.A.; Richards, J. Remote Sensing Digital Image Analysis; Springer: Berlin, Germany, 2013. [Google Scholar]
  32. Davis, S.M.; Landgrebe, D.A.; Phillips, T.L.; Swain, P.H.; Hoffer, R.M.; Lindenlaub, J.C.; Silva, L.F. Remote sensing: The Quantitative Approach; McGraw-Hill International Book Co.: New York, NY, USA, 1978; p. 405. [Google Scholar]
  33. Asmala, A. Analysis of maximum likelihood classification on multispectral data. Appl. Math. Sci. 2012, 6, 6425–6436. [Google Scholar]
  34. Kruse, F.; Lefkoff, A.; Boardman, J.; Heidebrecht, K.; Shapiro, A.; Barloon, P.; Goetz, A. The spectral image processing system (sips)—Interactive visualization and analysis of imaging spectrometer data. Remote Sens. Environ. 1993, 44, 145–163. [Google Scholar] [CrossRef]
  35. Hsu, C.; Chang, C.; Lin, C. A Practical Guide to Support Vector Classification. Available online: http://www.csie.ntu.edu.tw/~cjlin/papers/guide/guide.pdf (accessed on 22 August 2017).
  36. Story, M.; Congalton, R.G. Accuracy assessment: A user’s perspective. Photogramm. Eng. Remote Sens. 1986, 52, 397–399. [Google Scholar]
  37. Stehman, S.V.; Czaplewski, R.L. Design and analysis for thematic map accuracy assessment: Fundamental principles. Remote Sen. Environ. 1998, 64, 331–344. [Google Scholar] [CrossRef]
  38. Rosenfield, G.H.; Fitzpatrick-Lins, K. A coefficient of agreement as a measure of thematic classification accuracy. Photogramm. Eng. Remote Sens. 1986, 52, 223–227. [Google Scholar]
  39. Wehrhan, M.; Rauneker, P.; Sommer, M. Uav-based estimation of carbon exports from heterogeneous soil landscapes—A case study from the carbozalf experimental area. Sensors 2016, 16, 255. [Google Scholar] [CrossRef] [PubMed]
  40. Kelcey, J.; Lucieer, A. Sensor correction and radiometric calibration of a 6-band multispectral imaging sensor for UAV remote sensing. ISPRS Int. Arch. Photogramm. Remote Sens. Space Inf. Sci. 2012, 1, 393–398. [Google Scholar] [CrossRef]
  41. Li, H.; Liu, W.; Dong, B.; Kaluzny, J.V.; Fawzi, A.A.; Zhang, H.F. Snapshot hyperspectral retinal imaging using compact spectral resolving detector array. J. Biophotonics 2017, 10, 830–839. [Google Scholar] [CrossRef] [PubMed]
  42. Rouse, J.W., Jr.; Haas, R.; Schell, J.; Deering, D. Monitoring vegetation systems in the Great Plains with ERTS. NASA Spec. Publ. 1974, 351, 309–317. [Google Scholar]
  43. Jordan, C.F. Derivation of leaf-area index from quality of light on the forest floor. Ecology 1969, 663–666. [Google Scholar] [CrossRef]
  44. Richardson, A.J.; Everitt, J.H. Using spectral vegetation indices to estimate rangeland productivity. Geocarto Int. 1992, 7, 63–69. [Google Scholar] [CrossRef]
  45. Gitelson, A.A.; Kaufman, Y.J.; Merzlyak, M.N. Use of a green channel in remote sensing of global vegetation from eos-modis. Remote Sens. Environ. 1996, 58, 289–298. [Google Scholar] [CrossRef]
  46. Roujean, J.-L.; Breon, F.-M. Estimating par absorbed by vegetation from bidirectional reflectance measurements. Remote Sens. Environ. 1995, 51, 375–384. [Google Scholar] [CrossRef]
  47. Woebbecke, D.; Meyer, G.; Von Bargen, K.; Mortensen, D. Color indices for weed identification under various soil, residue, and lighting conditions. Trans. ASAE 1995, 38, 259–269. [Google Scholar] [CrossRef]
  48. Meyer, G.E.; Hindman, T.W.; Laksmi, K. Machine vision detection parameters for plant species identification. Proc. SPIE 1999, 3543. [Google Scholar] [CrossRef]
  49. Camargo Neto, J. A Combined Statistical-Soft Computing Approach for Classification and Mapping Weed Species in Minimum-Tillage Systems. Ph.D. Thesis, University of Nebraska, Lincoln, NE, USA, 2004. [Google Scholar]
  50. Kataoka, T.; Kaneko, T.; Okamoto, H. Crop growth estimation system using machine vision. In Proceedings of the 2003 IEEE/ASME International Conference on Advanced Intelligent Mechatronics, Kobe, Japan, 20–24 July 2003; pp. 1079–1083. [Google Scholar]
  51. Woebbecke, D.M.; Meyer, G.E.; Von Bargen, K.; Mortensen, D.A. Plant species identification, size, and enumeration using machine vision techniques on near-binary images. Proc. SPIE 1993, 1836, 208–219. [Google Scholar]
  52. Petropoulos, G.P.; Vadrevu, K.P.; Xanthopoulos, G.; Karantounias, G.; Scholze, M. A comparison of spectral angle mapper and artificial neural network classifiers combined with landsat tm imagery analysis for obtaining burnt area mapping. Sensors 2010, 10, 1967–1985. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  53. Yang, C.; Everitt, J.H.; Murden, D. Evaluating high resolution spot 5 satellite imagery for crop identification. Comput. Electron. Agric. 2011, 75, 347–354. [Google Scholar] [CrossRef]
  54. Rasmussen, J.; Ntakos, G.; Nielsen, J.; Svensgaard, J.; Poulsen, R.N.; Christensen, S. Are vegetation indices derived from consumer-grade cameras mounted on uavs sufficiently reliable for assessing experimental plots? Eur. J. Agron. 2016, 74, 75–92. [Google Scholar] [CrossRef]
  55. Haghighattalab, A.; González Pérez, L.; Mondal, S.; Singh, D.; Schinstock, D.; Rutkoski, J.; Ortiz-Monasterio, I.; Singh, R.P.; Goodin, D.; Poland, J. Application of unmanned aerial systems for high throughput phenotyping of large wheat breeding nurseries. Plant Method. 2016, 12, 35. [Google Scholar] [CrossRef] [PubMed]
  56. Heesup, Y.; Hak-Jin, K.; Kido, P.; Kyungdo, L.; Sukyoung, H. Use of a UAV for biomass monitoring of hairy vetch. In Proceedings of the ASABE Annual International Meeting, New Orleans, LA, USA, 26–29 July 2015; p. 1. [Google Scholar]
Figure 1. Study area and imaging platform. This area covered 6.5 km2 near College Station, Texas. The yellow line is the flight path and red arrows indicate the start and end points. The imaging system consisted of two cameras, which were mounted on the right step of an Air Tractor AT-402B. The flight height was 1752 ± 30 m above ground level.
Figure 2. The sequence of image pre-processing.
Figure 3. Mosaicked images and resolution-reduced subset images: (a) three-band RGB image; (b) color-infrared (CIR) image; (c) subset RGB images at different spatial resolutions; and (d) subset CIR images at different spatial resolutions.
Figure 4. Spectral response of the RGB camera and the NIR camera and reflectance curves of the four ground calibration tarps. Four 8 m × 8 m tarps with nominal reflectance values of 4%, 16%, 32% and 48% were placed near the study area, as shown at the upper-right corner. The solid color lines represent the spectral response curves of the non-modified camera and the dotted curve depicts the modified NIR camera. The four gray solid lines show the actual reflectance curves of the four tarps measured at the time of image acquisition. The thin dotted lines indicate the FWHM range of each band.
Figure 5. Coefficient of variation of reflectance (CVR) within the FWHM ranges on four calibration tarps for four spectral bands.
Figure 6. Six-class classification maps generated from: three-band (left); and four-band (right) mosaics at 0.4-m resolution using the ML (maximum likelihood) method.
Figure 7. The difference in overall classification accuracy between four- and three-band images for six classifiers at seven spatial resolutions. MD = minimum distance; MAHD = Mahalanobis distance; ML = maximum likelihood; SAM = spectral angle mapper; NN = Neural Network; and SVM = support vector machine.
Figure 8. Overall accuracy for classification maps for three- and four-band images for six classifiers at seven spatial resolutions. MD = minimum distance; MAHD = Mahalanobis distance; ML = maximum likelihood; SAM = spectral angle mapper; NN = Neural Network; and SVM = support vector machine.
Figure 9. Box plots of ground-measured leaf area index (LAI) values for the four crops.
Figure 10. R2 values between ground-measured LAI in crop fields and different VIs extracted from radiometrically calibrated image mosaics at different spatial resolutions.
Figure 11. 1:1 plots of ground-measured LAI versus LAI estimated with the two best RGB-based and the two best NIR-based VIs extracted from the radiometrically calibrated image mosaic at 0.4-m spatial resolution.
Table 1. Image resolution and registration errors after image mosaicking and registration.

Image | Resolution after Mosaicking (m) | Absolute Horizontal Position Accuracy (m)
RGB | 0.399 | 0.470
Near-infrared | 0.394 | 0.701
Root mean square error of registration (m): 0.2
Resolutions used for image processing and analysis (m): 0.4, 1, 2, 4, 10, 15 and 30
Table 2. Average digital numbers and actual ground reflectance of the four calibration tarps for the four spectral bands and the corresponding linear regression equations.

Band | Bandwidth Range (nm) | DNs from Imagery (4% / 16% / 32% / 48%) | Actual Reflectance from a Spectroradiometer (4% / 16% / 32% / 48%) | Linear Regression Model | R2 | RMSE
Blue | 418–510 | 39,671 / 52,939 / 58,582 / 65,532 | 0.076 / 0.159 / 0.287 / 0.345 | y = 10⁻⁵x − 0.3666 | 0.94 | 0.050
Green | 490–580 | 34,377 / 47,444 / 54,301 / 63,084 | 0.076 / 0.159 / 0.304 / 0.370 | y = 10⁻⁵x − 0.3085 | 0.95 | 0.047
Red | 573–645 | 32,366 / 44,684 / 53,362 / 62,793 | 0.078 / 0.158 / 0.322 / 0.398 | y = 10⁻⁵x − 0.2975 | 0.96 | 0.061
NIR | 705–820 | 23,163 / 36,376 / 46,620 / 56,367 | 0.078 / 0.153 / 0.341 / 0.427 | y = 10⁻⁵x − 0.2021 | 0.96 | 0.056
Table 3. List of vegetation indices (VIs) from mosaicked imagery.

Normalized Difference Vegetation Index (NDVI) = (NIR − R)/(NIR + R) [42]
Ratio Vegetation Index (RVI) = NIR/R [43]
Difference Vegetation Index (DVI) = NIR − R [44]
Green Normalized Difference Vegetation Index (GNDVI) = (NIR − G)/(NIR + G) [45]
Renormalized Difference Vegetation Index (RDVI) = √(NDVI × DVI) [46]
B* = B/(B + G + R), G* = G/(B + G + R), R* = R/(B + G + R)
Excess Green (ExG) = 2G* − R* − B* [47]
Excess Red (ExR) = 1.4R* − G* [48]; ExG − ExR [49]
Color Index of Vegetation Extraction (CIVE) = 0.441R − 0.811G + 0.385B + 18.78745 [50]
Normalized Difference Index (NDI) = (G − R)/(G + R) [51]
Table 4. Overall accuracy values (%) for six-class classification maps generated from two types of images at seven spatial resolutions using six classifiers ¹. Values are given as three-band/four-band.

Resolution (m) | MD | MAHD | ML | NN | SAM | SVM
0.4 | 67.37/72.63 | 73.58/76.63 | 83.26/90.42 | 65.58/73.47 | 61.05/67.37 | 80.63/88.32
1 | 67.79/72.21 | 73.16/75.05 | 83.37/89.58 | 67.37/71.37 | 61.16/68.63 | 80.00/84.74
2 | 67.47/71.58 | 73.58/74.53 | 82.00/84.53 | 64.00/72.63 | 63.05/70.11 | 79.26/82.11
4 | 64.32/71.37 | 73.26/75.47 | 78.53/79.05 | 61.05/71.47 | 60.95/69.37 | 74.95/78.63
10 | 62.74/68.00 | 70.32/73.37 | 72.95/71.89 | 62.95/63.58 | 56.84/62.11 | 69.37/74.84
15 | 60.21/65.89 | 67.26/69.68 | 69.26/68.21 | 62.84/65.37 | 53.47/60.74 | 66.84/69.47
30 | 56.95/62.11 | 58.42/- | 61.16/- | 56.53/63.26 | 48.74/51.16 | 59.47/60.74

¹ MD = minimum distance; MAHD = Mahalanobis distance; ML = maximum likelihood; SAM = spectral angle mapper; NN = neural net; SVM = support vector machine. - means no results were available.
Table 5. Overall kappa coefficients for six-class classification maps generated from two types of images at seven spatial resolutions using six classifiers ¹. Values are given as three-band/four-band.

Resolution (m) | MD | MAHD | ML | NN | SAM | SVM
0.4 | 0.593/0.660 | 0.673/0.711 | 0.792/0.881 | 0.565/0.664 | 0.515/0.598 | 0.759/0.866
1 | 0.597/0.655 | 0.669/0.692 | 0.792/0.870 | 0.586/0.636 | 0.515/0.614 | 0.751/0.809
2 | 0.592/0.648 | 0.673/0.685 | 0.774/0.805 | 0.540/0.652 | 0.536/0.631 | 0.740/0.776
4 | 0.551/0.645 | 0.669/0.697 | 0.730/0.736 | 0.505/0.539 | 0.508/0.621 | 0.685/0.732
10 | 0.534/0.600 | 0.631/0.670 | 0.659/0.644 | 0.522/0.537 | 0.454/0.527 | 0.615/0.685
15 | 0.500/0.575 | 0.594/0.623 | 0.613/0.598 | 0.532/0.562 | 0.415/0.508 | 0.581/0.617
30 | 0.460/0.527 | 0.482/- | 0.511/- | 0.448/0.536 | 0.349/0.383 | 0.486/0.504

¹ MD = minimum distance; MAHD = Mahalanobis distance; ML = maximum likelihood; SAM = spectral angle mapper; NN = neural net; SVM = support vector machine. - means no results were available.
Table 6. Regression analysis results between ground-measured LAI in crop fields and 10 VIs extracted from radiometrically calibrated image mosaics. Each entry gives the second-degree regression model, R2 and RMSE.

NDVI and NDI:
Resolution (m) | NDVI model | R2 | RMSE | NDI model | R2 | RMSE
0.4 | y = −3.136x² + 12.260x + 0.343 | 0.735 | 0.869 | y = −28.026x² + 22.164x + 2.199 | 0.786 | 0.782
1 | y = −4.261x² + 12.781x + 0.322 | 0.729 | 0.880 | y = −29.984x² + 22.380x + 2.217 | 0.776 | 0.800
2 | y = −2.288x² + 11.339x + 0.543 | 0.717 | 0.899 | y = −29.172x² + 21.984x + 2.234 | 0.760 | 0.828
4 | y = 0.482x² + 9.429x + 0.797 | 0.668 | 0.974 | y = −26.863x² + 21.414x + 2.245 | 0.726 | 0.884
10 | y = 2.315x² + 8.396x + 0.929 | 0.628 | 1.030 | y = −25.644x² + 20.953x + 2.291 | 0.674 | 0.964
15 | y = 2.864x² + 7.455x + 1.209 | 0.515 | 1.177 | y = −29.336x² + 20.953x + 2.398 | 0.575 | 1.101
30 | y = 3.309x² + 7.462x + 1.375 | 0.455 | 1.247 | y = −33.347x² + 22.199x + 2.530 | 0.529 | 1.160

RVI and ExG:
Resolution (m) | RVI model | R2 | RMSE | ExG model | R2 | RMSE
0.4 | y = −0.985x² + 6.500x − 4.911 | 0.738 | 0.864 | y = 7.160x² + 18.325x + 1.934 | 0.820 | 0.716
1 | y = −1.036x² + 6.671x − 5.021 | 0.730 | 0.877 | y = 4.397x² + 18.705x + 1.943 | 0.810 | 0.736
2 | y = −0.914x² + 6.100x − 4.418 | 0.718 | 0.897 | y = −2.358x² + 19.773x + 1.946 | 0.794 | 0.766
4 | y = −0.669x² + 5.006x − 3.340 | 0.666 | 0.976 | y = −9.882x² + 21.319x + 1.930 | 0.768 | 0.814
10 | y = −0.639x² + 4.886x − 3.210 | 0.628 | 1.030 | y = −17.495x² + 22.580x + 1.963 | 0.715 | 0.903
15 | y = −0.624x² + 4.690x − 2.824 | 0.516 | 1.175 | y = −35.471x² + 25.053x + 2.036 | 0.618 | 1.044
30 | y = −0.425x² + 3.938x − 1.984 | 0.454 | 1.248 | y = −46.548x² + 27.938x + 2.117 | 0.586 | 1.088

DVI and ExR:
Resolution (m) | DVI model | R2 | RMSE | ExR model | R2 | RMSE
0.4 | y = 1.224x² + 14.486x + 0.434 | 0.774 | 0.804 | y = −40.893x² − 16.067x + 5.118 | 0.783 | 0.787
1 | y = −1.483x² + 15.479x + 0.385 | 0.769 | 0.811 | y = −44.037x² − 15.498x + 5.117 | 0.773 | 0.805
2 | y = 1.093x² + 14.018x + 0.561 | 0.756 | 0.34 | y = −42.559x² − 15.349x + 5.089 | 0.755 | 0.835
4 | y = 3.284x² + 12.800x + 0.700 | 0.705 | 0.918 | y = −38.091x² − 15.661x + 5.064 | 0.719 | 0.895
10 | y = 4.972x² + 12.151x + 0.768 | 0.661 | 0.984 | y = −34.933x² − 15.725x + 5.061 | 0.666 | 0.976
15 | y = 1.844x² + 12.531x + 0.922 | 0.541 | 1.144 | y = −39.279x² − 14.288x + 5.057 | 0.566 | 1.113
30 | y = 1.155x² + 13.080x + 1.024 | 0.489 | 1.207 | y = −43.991x² − 14.311x + 5.280 | 0.518 | 1.173

GNDVI and ExG − ExR:
Resolution (m) | GNDVI model | R2 | RMSE | ExG − ExR model | R2 | RMSE
0.4 | y = 17.322x² + 12.225x − 0.058 | 0.587 | 1.086 | y = −3.772x² + 10.486x + 3.512 | 0.812 | 0.732
1 | y = 10.846x² + 14.749x − 0.257 | 0.588 | 1.084 | y = −4.507x² + 10.447x + 3.535 | 0.802 | 0.752
2 | y = 22.313x² + 9.590x + 0.295 | 0.597 | 1.072 | y = −5.302x² + 10.386x + 3.555 | 0.786 | 0.781
4 | y = 36.054x² + 2.922x + 1.015 | 0.545 | 1.140 | y = −6.276x² + 10.450x + 3.584 | 0.755 | 0.835
10 | y = 48.872x² − 1.945x + 1.441 | 0.534 | 1.153 | y = −7.132x² + 10.403x + 3.637 | 0.702 | 0.922
15 | y = 47.662x² − 2.728x + 1.753 | 0.418 | 1.289 | y = −10.209x² + 10.191x + 3.761 | 0.603 | 1.064
30 | y = 64.286x² − 8.581x + 2.413 | 0.356 | 1.356 | y = −12.854x² + 10.706x + 3.991 | 0.563 | 1.117

RDVI and CIVE:
Resolution (m) | RDVI model | R2 | RMSE | CIVE model | R2 | RMSE
0.4 | y = −2.997x² + 14.125x + 0.298 | 0.758 | 0.831 | y = 384.199x² − 14,479.250x + 136,420.264 | 0.831 | 0.694
1 | y = −4.851x² + 14.900x + 0.259 | 0.752 | 0.841 | y = 319.268x² − 12,042.058x + 113,550.278 | 0.820 | 0.717
2 | y = −2.690x² + 13.499x + 0.452 | 0.740 | 0.862 | y = 235.416x² − 8894.888x + 84,020.097 | 0.802 | 0.751
4 | y = 0.794x² + 11.425x + 0.696 | 0.688 | 0.944 | y = 213.405x² − 8070.489x + 76,300.729 | 0.781 | 0.791
10 | y = 2.901x² + 10.422x + 0.809 | 0.646 | 1.005 | y = 101.976x² − 3888.242x + 37,058.058 | 0.719 | 0.895
15 | y = 1.439x² + 10.415x + 0.984 | 0.529 | 1.159 | y = −99.169x² + 3663.297x − 33,818.112 | 0.622 | 1.039
30 | y = 2.492x² + 10.099x + 1.183 | 0.472 | 1.227 | y = −174.059x² + 6471.318x − 60,139.730 | 0.596 | 1.073
