Article

A Semi-Automated Method to Extract Green and Non-Photosynthetic Vegetation Cover from RGB Images in Mixed Grasslands

1 Department of Ecology, College of Biology and the Environment, Nanjing Forestry University, Nanjing 210037, China
2 Co-Innovation Center for Sustainable Forestry in Southern China, Nanjing Forestry University, Nanjing 210037, China
3 Department of Geography and Planning, University of Saskatchewan, 117 Science Place, Saskatoon, SK S7N5C8, Canada
* Author to whom correspondence should be addressed.
Sensors 2020, 20(23), 6870; https://doi.org/10.3390/s20236870
Submission received: 9 October 2020 / Revised: 24 November 2020 / Accepted: 30 November 2020 / Published: 1 December 2020
(This article belongs to the Special Issue Remote Sensing Application for Monitoring Grassland)

Abstract
Green vegetation (GV) and non-photosynthetic vegetation (NPV) cover are both important biophysical parameters for grassland research. Current methods for cover estimation, including subjective visual estimation and digital image analysis, require human intervention and lack automation, batch-processing capability and extraction accuracy. Therefore, this study developed a method to quantify both GV and standing dead matter (SDM) fractional cover from field-taken digital RGB images with semi-automated batch-processing capability (i.e., implemented as a python script) for mixed grasslands with complex background information including litter, moss, lichen, rocks and soil. The results show that the GV cover extracted by the developed method is superior to that from subjective visual estimation, based on the linear relationship with the normalized difference vegetation index (NDVI) calculated from field-measured hyper-spectra (R2 = 0.846, p < 0.001 for GV cover estimated from RGB images; R2 = 0.711, p < 0.001 for visually estimated GV cover). The results also show that the developed method has great potential to estimate SDM cover with limited effects from light-colored understory components including litter, soil crust and bare soil. In addition, the results indicate that subjective visual estimation tends to report higher cover for both GV and SDM than that estimated from RGB images.

1. Introduction

Fractional vegetation cover, defined as the percentage of vegetation that is vertically projected in a unit area [1], is an important indicator of plant growth [2], vegetation status [3], crop health [4], habitat selection [5] and ecosystem change [6]. Vegetation cover is also closely related to leaf area index, net primary productivity, biomass, soil stability, photosynthesis and ecological processes [6,7,8]. In grassland ecosystems, systematic, accurate and repeatable surveys of vegetation cover are essential for monitoring grassland condition, preventing soil erosion and managing grasslands [9,10,11].
Fractional cover estimation in grasslands relies on field measurements combined with remote sensing technology [7]. Field-measured cover data are fundamental for quantitative models using remotely sensed images [12,13] and are necessary to validate empirical models that estimate grassland cover [6,14]. Visual (i.e., non-destructive) estimation is a commonly used field method for grassland vegetation cover [15]. It provides a rapid and repeatable evaluation of vegetation cover [16] and is sufficiently accurate for relative (as opposed to absolute) assessments of cover data [7]. However, visual estimation is subjective and prone to observer biases [5], which can lead to inconsistent data among observers and observation periods [3,16]. Attempts have been made to reduce visual estimation bias in grassland vegetation cover, including fishnet grids [9], cardboard cutouts of specific shapes and sizes, and observer training [5]. However, even trained observers are not able to distinguish cover intervals or changes of less than 10% [16].
An alternative approach for field vegetation cover measurement is the analysis of low-altitude RGB (true color composition: red, green, blue) images taken with digital cameras in the field [7]. Field-taken digital RGB images with high spatial resolution potentially provide more accurate estimates of vegetation cover than visual methods by reducing the impact of human subjectivity [1,14]. They have been widely used for estimating forest canopy cover (i.e., gap fraction analysis) [17,18], forest understory cover [16], crop cover [4,19,20,21,22], crop residue cover [23] and grassland vegetation cover [5,24].
Many studies have demonstrated the potential of field-taken digital RGB images to extract green vegetation (GV) coverage [10,24,25,26] or crop residues from the soil background [23]. However, there has been less success in separating GV and non-photosynthetic vegetation (NPV) using RGB images. In arid grasslands, green and senescent vegetation, important indicators for grassland management, are often intermixed and very difficult to differentiate [27]. The situation is even more complex in mixed grasslands, where more components, including standing dead matter (SDM), litter, soil crust (moss and lichen), rocks and bare soil, occur in a heterogeneous mix [13].
The heterogeneity of mixed grassland components is challenging not only for GV cover estimation but also for SDM extraction from digital RGB images. Current analytical methods using digital RGB images include unsupervised classification, supervised classification with training sites, object-oriented classification, RGB-based color indices and threshold algorithms [1,2,5,6,9,12,14,15,27]. Nearly all of these methods require human intervention (e.g., the SamplePoint software requires user input of a class for each sample point [28]) and lack automation, batch-processing capability and extraction accuracy [1,3,18,29]. Therefore, there is an opportunity to develop a fast, objective, repeatable and consistent analytical method to improve mixed grassland cover estimation, which would effectively support fieldwork for collecting low-altitude cover data. We aimed to develop a digital image analysis method to extract both GV and SDM fractional cover from field-taken digital RGB images with semi-automated batch-processing capability. Our specific objectives were to: (1) extract GV and SDM cover separately from field-taken RGB images semi-automatically; (2) validate the extracted GV and SDM cover using hyperspectral vegetation indices.

2. Materials and Methods

2.1. Study Area

This research was conducted in Grasslands National Park (GNP: West Block, 49° N, 107° W, Figure 1) in the southern part of Saskatchewan, Canada. The study area is characterized as a semi-arid mixed prairie ecosystem (annual precipitation: 340 mm; annual mean temperature: 3.4 °C) [13]. The three main vegetation communities are upland (Figure 1b), sloped (Figure 1d) and valley (Figure 1c) grasslands, along with disturbed herbaceous communities (Figure 1e–g). The dominant species are described in Table 1. GNP was first acquired as a national park in 1984 [30], at which time all large grazers were removed until 2006. This led to approximately 30 years of accumulation of a large amount of NPV, including SDM and litter, which makes it challenging to estimate GV and SDM cover using field-collected digital RGB images.

2.2. Field Data Collection

Fieldwork was performed from 20 June to 2 July 2014, during the peak growing season in GNP. A stratified random sampling design was used to select 14 sites with consideration of accessibility (Figure 1: 4 sites in upland grassland, 5 in sloped grassland, 3 in valley grassland and 2 in disturbed communities). Two 100 m transects were surveyed at each site, perpendicular to one another and crossing in the center. Twenty 50 cm × 50 cm quadrats, at 10 m intervals (excluding the center point), were surveyed along the transects.
This design was intended to capture the heterogeneity of biophysical parameters in the representative grasslands. Percent ground cover, including grass, shrub, forb, SDM, litter, moss, lichen, rock and bare soil coverage, was visually estimated at each quadrat. The descriptive statistics of GV cover (i.e., the sum of grass and forb cover), SDM and NPV cover (i.e., the sum of SDM and litter cover) are shown in Table 2. Nadir (i.e., downward facing) RGB images were taken with a commercially available digital camera (Nikon S8000, Nikon Imaging Japan Inc., Tokyo, Japan) at 1 m above the ground at each quadrat (the corresponding RGB pictures for each quadrat in the 14 sites of Table 2 are listed in Supplementary S2). A 0° camera angle enables better fractional cover estimation than the oblique angles tested previously [31]. Hyper-spectra (wavelengths from 350 nm to 2500 nm) were also measured at each quadrat with an Analytical Spectral Devices (ASD) field-portable FieldSpec® Pro spectroradiometer between 10:00 and 14:00 local time under clear sky (i.e., without any cloud cover).

2.3. Methods

The methodological workflow for the proposed semi-automatic method included preprocessing the digital RGB images, developing a python script to extract GV and SDM separately and calculating GV and SDM percentage cover automatically (Figure 2). The semi-automatically estimated GV and SDM cover was then validated against visually estimated cover data and vegetation indices derived from hyperspectral remote sensing (Figure 2).

2.3.1. Pre-Processing for the Field-Taken RGB Images

RGB images were first cropped to the quadrat area and then resampled to their actual size (50 cm × 50 cm) at 300 pixels per inch (dpi) using Adobe Photoshop CS6 (Figure 2).
Because light conditions differed slightly among field-taken RGB images, blue, green and red bands of cropped pictures were standardized independently to maintain consistency among study sites (Equation (1)).
DN_std = (DN − mean(DN)) / std(DN)
where DN_std is the standardized pixel value, DN is the original pixel value, mean(DN) is the mean of all the pixel values in a single band, and std(DN) is the standard deviation of all the pixel values in a single band.
After each band of all the pictures was standardized, the pixel values fell roughly within a range from −1 to 1. Standardized images were then normalized (Equation (2)) to images with a pixel value range from 0 to 1023 (10-bit integer format).
DN_nor = 1023 × (DN_std − min(DN_std)) / (max(DN_std) − min(DN_std))
where DN_nor is the normalized pixel value, DN_std is the standardized pixel value, min(DN_std) is the minimum of all the pixel values in each standardized band, and max(DN_std) is the maximum of all the pixel values in each standardized band.
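As a minimal sketch of Equations (1) and (2), the two preprocessing steps can be written per band with NumPy (this is our own illustration, not the published script; function names are ours, and bands are assumed to be floating-point arrays):

```python
import numpy as np

def standardize_band(band):
    """Equation (1): center a band on zero and scale by its standard deviation."""
    return (band - band.mean()) / band.std()

def normalize_band(band_std):
    """Equation (2): rescale a standardized band to the 0-1023 (10-bit) range."""
    lo, hi = band_std.min(), band_std.max()
    return 1023.0 * (band_std - lo) / (hi - lo)

# Toy single-band example: after the two steps the values span exactly 0-1023
band = np.array([[10.0, 20.0], [30.0, 40.0]])
nor = normalize_band(standardize_band(band))
print(nor.min(), nor.max())  # 0.0 1023.0
```

Each band (red, green, blue) is processed independently, so differing exposure among field photographs affects the per-band mean and spread rather than the rescaled values.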

2.3.2. Developing a Python Script to Semi-Automate GV and SDM Cover Extraction from Preprocessed RGB Images

GV pixels in the RGB images were extracted based on the spectral characteristics of GV (i.e., the reflectance of green leaves in the green band is higher than that in both the red and blue bands; Equation (3)). GV pixels were masked out before further processing to extract SDM pixels.
(DN_nor_G − DN_nor_R) > g1 and (DN_nor_G − DN_nor_B) > g2
where DN_nor_G, DN_nor_R and DN_nor_B are the normalized pixel values for the green, red and blue bands of the field-taken RGB pictures, and g1 and g2 are constants (default values were set to 60 in this study). The values of g1 and g2 were determined after exploring the spectral characteristics of green leaves for narrow-leaved native grasses, shrubs and invasive species in disturbed communities (their values are discussed in Section 3.1).
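The GV decision rule of Equation (3) amounts to two band differences tested against the thresholds. A sketch, assuming normalized 0-1023 band arrays (the function name and toy pixel values are ours):

```python
import numpy as np

def extract_gv_mask(r, g, b, g1=60.0, g2=60.0):
    """Equation (3): a pixel is green vegetation if its green value
    exceeds both its red and blue values by the thresholds g1 and g2
    (both default to 60, as in this study)."""
    return ((g - r) > g1) & ((g - b) > g2)

# Toy pixels: one clearly green, one senesced (green barely above red)
r = np.array([300.0, 600.0])
g = np.array([420.0, 610.0])
b = np.array([250.0, 500.0])
print(extract_gv_mask(r, g, b))  # [ True False]
```

Lowering g1 and g2 admits more marginal pixels (e.g., bluish or partially senesced leaves), which is the tuning behavior described in Section 3.1.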
Although SDM has spectral characteristics similar to those of litter, soil crust and bare soil, SDM in the canopy has a much brighter color tone in all three visible bands of the field-taken RGB images, because the understory components receive limited sunlight compared with the vegetation canopy. Therefore, SDM was extracted based on the criterion that SDM has higher pixel values than the understory components in the normalized visible bands after GV pixels were removed (Equation (4)).
DN_nor_R > d × mean(DN_nor_R) and DN_nor_G > d × mean(DN_nor_G) and DN_nor_B > d × mean(DN_nor_B)
where DN_nor_G, DN_nor_R and DN_nor_B are the normalized pixel values for the green, red and blue bands of the field-taken RGB images, mean(DN_nor_G), mean(DN_nor_R) and mean(DN_nor_B) are the mean pixel values of the corresponding normalized bands, and d is a constant set to a default value of 1 in this study. The setting of the constant d is discussed in Section 3.2 for cases where light-toned litter and soil background challenge the extraction of SDM.
After GV and SDM pixels were separated from the pre-processed RGB images, GV and SDM cover were calculated. Both GV and SDM pixel counts were divided by total pixel counts for fractional cover. These cover estimations were compared with field observed cover data by visual estimation.
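The SDM rule (Equation (4)) and the fractional cover calculation can be sketched as follows (our illustration, not the published script). Note that, per Section 4.3 of this paper, the band means are computed over the full image before the GV pixels are excluded:

```python
import numpy as np

def extract_sdm_mask(r, g, b, gv_mask, d=1.0):
    """Equation (4): SDM pixels exceed d times the band mean in all three
    normalized bands; GV pixels are then excluded from the result.
    Band means are taken over the whole image, before masking GV."""
    bright = (r > d * r.mean()) & (g > d * g.mean()) & (b > d * b.mean())
    return bright & ~gv_mask

def fractional_cover(mask):
    """Fraction of all image pixels belonging to the class mask."""
    return mask.sum() / mask.size

# Toy 4-pixel image: pixel 0 is bright SDM, pixel 1 is green vegetation
r = np.array([800.0, 300.0, 100.0, 100.0])
g = np.array([820.0, 500.0, 120.0, 110.0])
b = np.array([790.0, 250.0, 90.0, 95.0])
gv = np.array([False, True, False, False])
sdm = extract_sdm_mask(r, g, b, gv)
print(fractional_cover(sdm))  # 0.25
```

Raising d above 1 suppresses light-toned litter and bare soil; lowering it below 1 recovers darker SDM in the lower canopy, matching the parameter behavior reported in Section 3.2.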
All classification and cover estimation processes for both GV and SDM were conducted using a python script developed in this study (see the stand-alone python script and ArcToolbox in the Supplementary Materials and the description of ArcToolbox in Supplementary S1).

2.3.3. Validation of Extracted GV and SDM Cover from RGB Images

Alternative methods for GV and NPV cover estimation (i.e., vegetation indices based on hyperspectral remote sensing) were used to validate the GV and SDM cover estimated from field-taken digital RGB images. The normalized difference vegetation index (NDVI), an index strongly correlated with GV, has been widely used to evaluate GV cover in grasslands [10]. The cellulose absorption index (CAI) is effective for separating NPV fractional cover (i.e., including SDM and litter cover) from GV and soil background [32,33,34]. Therefore, NDVI calculated from the field-measured hyper-spectra (Equation (5)) was used to test the accuracy of GV cover extracted from the field-taken RGB images, based on linear regression analysis in R. The CAI calculated from the field-collected hyper-spectra (Equation (6)) was used to validate the accuracy of SDM extraction from the field-taken RGB images, also with linear regression analysis in R (the NPV cover used in the linear regression with CAI is the sum of the RGB-image-extracted SDM cover and the visually estimated litter cover).
NDVI = (ρ800 − ρ670) / (ρ800 + ρ670)
where ρ800 and ρ670 are the reflectance at wavelengths of 800 nm and 670 nm from the field-collected hyper-spectra.
CAI = 100 × (0.5 × (ρ2030 + ρ2210) − ρ2100)
where ρ2030, ρ2100 and ρ2210 are the reflectance at wavelengths of 2030 nm, 2100 nm and 2210 nm from the field-collected hyper-spectra.
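Equations (5) and (6) reduce to two one-line functions. A sketch with purely illustrative reflectance values (fractions of 1, not field data):

```python
def ndvi(rho_800, rho_670):
    """Equation (5): normalized difference vegetation index."""
    return (rho_800 - rho_670) / (rho_800 + rho_670)

def cai(rho_2030, rho_2100, rho_2210):
    """Equation (6): cellulose absorption index; a positive value indicates
    a cellulose (dry plant material) absorption feature near 2100 nm."""
    return 100.0 * (0.5 * (rho_2030 + rho_2210) - rho_2100)

# Illustrative values only: a green canopy gives high NDVI
print(ndvi(0.75, 0.25))        # 0.5
print(cai(0.25, 0.25, 0.375))  # 6.25
```

In the validation, these index values computed from each quadrat's spectrum serve as the independent variable in the linear regressions against the RGB-derived and visually estimated covers.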

3. Results

3.1. Determination of the Constants g1 and g2 for GV Extraction

After data exploration of 25 sample RGB images for different species under different conditions (Table 3; see the sample RGB photos in Supplementary S3), the values of g1 and g2 were found to exceed 60 when the photograph was taken without high exposure under strong light (normal light conditions). Under normal light conditions, the minimum values of g1 and g2 for broad-leaved vegetation (e.g., sweet clover) were higher than those for narrow-leaved grasses (Table 3: id 5 and 6). When the RGB images were taken with high exposure, or when the vegetation had begun to senesce, the constant g1 needed to be set lower than 60 (32.09–44.13; Table 3). The value of g2 was not affected by these conditions but was affected by bluish leaves (i.e., dominant western wheatgrass and sagebrush). In these instances, the value of g2 needed to be set lower (i.e., 32.09–40.12; Table 3).

3.2. Setting of the Constant d for SDM Extraction

SDM, as part of the grassland canopy, has higher pixel values in all three bands of the field-taken RGB images than bare soil, litter and soil crust after GV pixels are masked out (Equation (4)). Therefore, it is effectively extracted with the default constant d (set at 1) when the understory background, including soil, litter and soil crust, is dark (Figure 3a,a1,b,b1). When the percentage of the dead component in the canopy is high (i.e., the visual estimates of dead material are 87% for Figure 3c and 90% for Figure 3d), SDM in the canopy is brighter than the lower layer (Figure 3c,d), including litter that has spectral characteristics similar to SDM. In this situation, d must be set lower to extract more standing dead material in the darker, lower canopy (Figure 3c,d). The extracted fraction of SDM was 41.7% (Figure 3c1), 61.5% (Figure 3c2) and 72.1% (Figure 3c3) when d was set at 1, 0.7 and 0.5, respectively (Figure 3c), and the estimated percentage of SDM was 43.7% (Figure 3d1), 65.6% (Figure 3d2) and 80.3% (Figure 3d3) when d was set at 1, 0.7 and 0.5, respectively (Figure 3d).
Light-colored, undecomposed litter has a large effect on the extraction of standing dead material (Figure 3e: the visual estimates are 40% for standing dead material and 20% for litter) when the canopy cover (i.e., the sum of GV and SDM cover) is low; thus, d needs to be set higher than the default value of one. The extracted SDM was 30.8% (Figure 3e1), 20.2% (Figure 3e2) and 6.5% (Figure 3e3) when d was set at 1, 1.2 and 1.5, respectively. Light-colored soil crust as the canopy background also has a strong influence on SDM extraction in the study area (Figure 3f: the visual estimate for standing dead material is 15%). In this case, d must be set higher to reduce the influence of soil crust (i.e., moss and lichen). The extracted SDM was 27.8% (Figure 3f1), 8.6% (Figure 3f2) and 0.4% (Figure 3f3) when d was set at 1, 1.5 and 2, respectively. Light-colored bare soil also influences the extraction of SDM at the default value of d (Figure 3g: the visual estimate for standing dead material is 10%). Setting d higher reduces the effects of bare soil: with d set at 2, standing dead material was extracted with high accuracy (0.8%, Figure 3g3), compared with 11.9% when d was set at 1.5 and some bare soil pixels were included (Figure 3g2), and 25.9% when d was set at 1 and a large number of bare soil pixels appeared in the extraction results (Figure 3g1). When canopy cover is low, light-colored bare soil still has large effects on SDM extraction even when d is set appropriately for extracting SDM pixels (Figure 3h1: d was set at 1.5; Figure 3i1: d was set at 1.7).

3.3. GV and SDM Cover Estimated from RGB Images

Compared to subjective visual estimation, GV cover is underestimated by the method developed in this study (Figure 4a). The difference between the estimated GV cover and the visually estimated GV cover is larger when GV cover is lower (Figure 4a). The SDM cover estimated in this study is also lower than that from subjective visual estimation, and the underestimation becomes more distinct when SDM cover is higher (Figure 4b).

3.4. Validation of GV and NPV Estimated from RGB Images

Based on the relationship between GV cover and NDVI, the estimated GV cover (Figure 5b: R2 = 0.846, p < 0.001) in this study is more precise than that from subjective visual estimation (Figure 5a: R2 = 0.711, p < 0.001).
Theoretically, CAI can be used to evaluate NPV cover, including the cover of SDM and litter. However, there was no significant linear relationship between CAI and the estimated SDM cover alone. Therefore, the cover used for validation comprises total NPV, including SDM and litter; that is, the estimated dead cover (Figure 6b) is the sum of the estimated SDM cover and the field-observed litter cover. In the linear regression against CAI, the R2 increased from 0.687 (p < 0.001) for visually estimated dead cover (Figure 6a) to 0.734 (p < 0.001) for dead cover estimated from RGB images (Figure 6b).

4. Discussion

4.1. GV and SDM Cover Estimation Based on RGB Images and Visual Estimation

Our semi-automated method to classify RGB images predicts lower GV cover than visual estimates (Figure 4a). The difference between visual estimation and extraction of GV coverage from RGB images is higher when GV coverage is relatively low (Figure 4a). Previous research also suggests that subjective visual estimation tends to predict higher cover (i.e., overestimation) than GV cover estimates from digital image analysis of field-taken RGB pictures [8,35]. Macfarlane and Ogden's results show that the accuracy of subjective visual estimation is ±10–20% [16]. This indicates that green cover collected by visual estimation may consistently overestimate real GV cover. Moreover, the linear regression of GV cover against NDVI, an alternative remote sensing based method for GV cover estimation, indicates that GV cover extracted from field-taken RGB images (Figure 5b: R2 = 0.846, p < 0.001) in this study is superior to GV cover from subjective visual estimation in the field (Figure 5a: R2 = 0.711, p < 0.001). We also compared our GV extraction with the extraction results from Canopeo (http://www.canopeoapp.com), a powerful tool for measuring GV cover in grassland [36] which has been shown to perform well for narrow-leaved vegetation [11]. The comparison shows that the GV cover extracted with our semi-automated method is consistent with that of Canopeo (R2 = 0.86, p < 0.001), and GV cover from Canopeo also has a strong linear relationship with NDVI (R2 = 0.85, p < 0.001). This indicates that the method developed in this study has high potential to assess GV cover effectively and accurately. Moreover, this method has batch-processing capability, which would effectively support field data collection.
The results of linear regression between NPV cover and CAI (an alternative method for NPV cover estimation based on remote sensing approaches) show that cover estimates from RGB images in this study (Figure 6b: R2 = 0.734, p < 0.001) are superior to subjective visual estimates (Figure 6a: R2 = 0.687, p < 0.001). Estimated cover of SDM based on field-taken RGB images in this study is lower than subjective visual estimates. The difference between visual estimation and extraction from RGB pictures for SDM cover becomes larger when SDM cover increases. When SDM cover is larger, SDM in the lower layer is darker than in the top layer, which challenges the extraction of SDM from field-taken RGB images. Therefore, SDM cover might be underestimated by RGB images when the cover of SDM is very high. However, SDM cover might be overestimated from RGB images when SDM cover is low with background soil that has a light tone.

4.2. Estimated Green Cover from RGB Pictures

After standardizing and normalizing the red (band 1), green (band 2) and blue (band 3) bands of the RGB images (Equations (1) and (2)), we found that the green band had the highest pixel value for green vegetation. Therefore, the constants g1 and g2 (Equation (3)) can be used as thresholds to separate green vegetation from SDM, litter, soil crust (moss and lichen), rocks and bare soil. In this study, green vegetation was extracted accurately in most cases when g1 and g2 were set to the default value of 60 (Figure 7a–d).
However, the default values of g1 and g2 should be adjusted according to vegetation type, phenological stage, soil crust and the light conditions when taking RGB pictures. Previous research indicates that GV cover extracted from field-taken RGB images is influenced by resolution, exposure and ground complexity [3]. In our study area, sagebrush and western wheatgrass are a pale blue color (Figure 7e,f). For this case, we lowered g2 to 32 and also set g1 below 60 to capture more sage leaves (Figure 7e,f; g1 and g2 were set as 40 and 32, respectively). When soil crust, especially green-colored moss, influences green vegetation extraction from RGB pictures, g1 can be set higher than g2, because for green moss the difference between the normalized green and blue bands is far greater than the difference between the normalized green and red bands (Figure 7g; g1 and g2 were set as 60 and 40, respectively). The python script is designed to extract GV and SDM pixels separately to quantify the ground cover of each (Figure 7h; g1 and g2 were set as 30 and 60, respectively). If the vegetation is in early or late senescence, g1 should be set lower than the default 60 to extract more GV pixels that have not completely senesced.
When RGB photographs were taken near noon, high exposure reduced the greenness in the green band; therefore, g1 should be set lower than 60 (32, Table 3). Although changing the parameters g1 and g2 in the python script we developed improves the accuracy of green vegetation extraction from RGB images, the effects of sage and green moss are reduced rather than eliminated. Green vegetation was effectively extracted by our method even when GV was overlapping in the original RGB images (Figure 7), as confirmed by comparison with the GV extraction from Canopeo. When green moss was present, this did not necessarily hold (Figure 7g). Given these limitations of GV cover estimation, we suggest taking RGB pictures of each quadrat in the maximum growing season to avoid the senesced vegetation issue, and avoiding high-exposure conditions at noon.

4.3. Estimated SDM from RGB Images

SDM has high pixel values in all three normalized visible bands, and Equation (4) was designed based on this concept. Green vegetation was masked out of the normalized RGB images before extracting SDM to completely eliminate effects from the green canopy, but the averaged pixel values of each normalized band were calculated before the green cover was masked out (Equation (4)). The parameter d is designed to separate standing dead cover from litter and a light soil background (Equation (4)). In this way, SDM was extracted accurately under moderate and high canopy cover (Figure 3a,b; d was set at the default value of 1).
Undecomposed litter has similar spectral characteristics as SDM. When the canopy cover (sum of GV and SDM cover) is low, litter has large effects on the accuracy for extracting SDM pixels. In normalized RGB images, litter is slightly darker than SDM in all the three bands because light exposure differs for the canopy and understory, and the color tone of litter becomes darker when it begins to decompose. To reduce the impact of litter when extracting SDM, the constant d was set higher (in the range one to two in this study) than the default value of one (Figure 3e3; d was set as 1.5).
Dry bare soil with light color tone is another issue for extracting SDM when the canopy cover is low. The errors caused by light soil background can be reduced by setting the constant d to a higher value (Figure 3g3; d was set as two). When the actual SDM cover is high, SDM cover may be underestimated (Figure 3c,d) with our method. SDM in the lower canopy has a darker color tone than that in the upper canopy when the SDM cover is high. Thus, the SDM in the lower canopy will be treated as litter to be excluded in the output of SDM cover (Figure 3c1,d1). In this specific case, we set the parameter d lower than the default value 1 (Figure 3c1,d1 when d was set as default value 1; Figure 3c2,d2 when d was set as 0.7; Figure 3c3,d3 when d was set as 0.5). In addition, the extracted results of SDM were overestimated due to the influence of soil crust covering the ground surface (Figure 3f; Figure 3f1 with d set as one). The effects of soil crust were reduced when d was set to 1.5 (Figure 3f2). However, cover was underestimated when d was set to two to eliminate the effects of soil crust (Figure 3f3). Flowers, especially white flowers in the mixed grassland, have large effects on extracting SDM (Figure 3f, Figure 8a,b). However, the effect of flowers cannot be eliminated by using higher d values (Figure 8b1: d = 1; Figure 8b2: d = 1.5; Figure 8b3: d = 1.7; Figure 8b4: d = 1.8; Figure 8b5: d = 1.9).

5. Conclusions

Our main conclusions are: (1) based on the linear relationship with NDVI, GV cover extracted with the method developed in this study (R2 = 0.846, p < 0.001) is superior to that from subjective visual estimation in the field (R2 = 0.711, p < 0.001), and the extracted GV cover is consistent with that estimated by Canopeo (a powerful tool for measuring GV cover in grassland). (2) The semi-automatic method of this study has high potential to extract SDM cover when the canopy cover (including both GV and SDM cover) is high or when the influence of light-colored understory components (litter, soil crust and bare soil) is limited. (3) Subjective visual estimation in the field tended to predict higher cover for both GV and SDM compared to that estimated from RGB images in this study.

Supplementary Materials

The following are available online at https://www.mdpi.com/1424-8220/20/23/6870/s1, Supplementary S1: explanation of the parameters in the python script for extracting green cover and standing dead cover from field-taken RGB images; Supplementary S2: RGB pictures for each quadrat in the 14 sites; Supplementary S3: sample RGB pictures for the data exploration of the constants g1 and g2 in Table 3; Python script; ArcToolbox.

Author Contributions

D.X. developed the initial idea, conducted fieldwork, developed the methodology and the python script, analyzed the data and wrote the manuscript. Y.P. improved the methodology and the python script. X.G. guided the organization of the initial idea and research direction, and revised the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Natural Sciences and Engineering Research Council of Canada (GRPIN-2016-03960), the Six Talent Peaks Project of Jiangsu Province (TD-XYDXX-006) and the Priority Academic Program Development of Jiangsu Higher Education Institutions (PAPD).

Acknowledgments

The authors would like to acknowledge the field crew who helped collect the field data.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Liu, Y.; Mu, X.; Wang, H.; Yan, G. A novel method for extracting green fractional vegetation cover from digital images. J. Veg. Sci. 2011, 23, 406–418.
  2. Lee, K.J.; Lee, B.-W. Estimating canopy cover from color digital camera image of rice field. J. Crop. Sci. Biotechnol. 2011, 14, 151–155.
  3. Hu, J.; Dai, M.X.; Peng, S.T. An automated (novel) algorithm for estimating green vegetation cover fraction from digital image: UIP-MGMEP. Environ. Monit. Assess. 2018, 190, 687.
  4. Zhao, C.; Li, C.; Wang, Q.; Meng, Q.; Wang, J. Automated Digital Image Analyses for Estimating Percent Ground Cover of Winter Wheat Based on Object Features. In International Conference on Computer and Computing Technologies in Agriculture; Li, D., Zhao, C., Eds.; Springer Science and Business Media LLC: Boston, MA, USA, 2009; Volume 293, pp. 253–264.
  5. Luscier, J.D.; Thompson, W.L.; Wilson, J.M.; Gorham, B.E.; Dragut, L.D. Using digital photographs and object-based image analysis to estimate percent ground cover in vegetation plots. Front. Ecol. Environ. 2006, 4, 408–413.
  6. Song, W.; Mu, X.; Yan, G.; Huang, S. Extracting the Green Fractional Vegetation Cover from Digital Images Using a Shadow-Resistant Algorithm (SHAR-LABFVC). Remote Sens. 2015, 7, 10425–10443.
  7. Büchi, L.; Wendling, M.; Mouly, P.; Charles, R. Comparison of Visual Assessment and Digital Image Analysis for Canopy Cover Estimation. Agron. J. 2018, 110, 1289–1295.
  8. Olmstead, M.A.; Wample, R.; Greene, S.; Tarara, J. Nondestructive Measurement of Vegetative Cover Using Digital Image Analysis. HortScience 2004, 39, 55–59.
  9. Baxendale, C.; Ostle, N.J.; Wood, C.M.; Oakley, S.; Ward, S.E. Can digital image classification be used as a standardised method for surveying peatland vegetation cover? Ecol. Indic. 2016, 68, 150–156.
  10. Kim, J.; Kang, S.; Seo, B.; Narantsetseg, A.; Han, Y.-J. Estimating fractional green vegetation cover of Mongolian grasslands using digital camera images and MODIS satellite vegetation indices. GIScience Remote Sens. 2019, 57, 49–59.
  11. Xiong, Y.; West, C.P.; Brown, C.; Green, P. Digital Image Analysis of Old World Bluestem Cover to Estimate Canopy Development. Agron. J. 2019, 111, 1247–1253.
  12. Zhou, Q.; Robson, M. Automated rangeland vegetation cover and density estimation using ground digital images and a spectral-contextual classifier. Int. J. Remote Sens. 2001, 22, 3457–3470.
  13. Xu, D.; Guo, X.; Li, Z.; Yang, X.; Yin, H. Measuring the dead component of mixed grassland with Landsat imagery. Remote Sens. Environ. 2014, 142, 33–43.
  14. Makanza, R.; Zaman-Allah, M.; Cairns, J.E.; Magorokosho, C.; Tarekegne, A.; Olsen, M.; Prasanna, B.M. High-Throughput Phenotyping of Canopy Cover and Senescence in Maize Field Trials Using Aerial Digital Canopy Imaging. Remote Sens. 2018, 10, 330.
  15. Lynch, T.M.H.; Barth, S.; Dix, P.J.; Grogan, D.; Grant, J.; Grant, O.M. Ground Cover Assessment of Perennial Ryegrass Using Digital Imaging. Agron. J. 2015, 107, 2347–2352.
  16. Macfarlane, C.; Ogden, G.N. Automated estimation of foliage cover in forest understorey from digital nadir images. Methods Ecol. Evol. 2011, 3, 405–415.
  17. Chianucci, F.; Chiavetta, U.; Cutini, A. The estimation of canopy attributes from digital cover photography by two different image analysis methods. iForest-Biogeosciences For. 2014, 7, 255–259.
  18. Alivernini, A.; Fares, S.; Ferrara, C.; Chianucci, F. An objective image analysis method for estimation of canopy attributes from digital cover photography. Trees-Struct. Funct. 2018, 32, 713–723.
  19. Marcial-Pablo, M.D.J.; Gonzalez-Sanchez, A.; Jimenez-Jimenez, S.I.; Ontiveros-Capurata, R.E.; Ojeda-Bustamante, W. Estimation of vegetation fraction using RGB and multispectral images from UAV. Int. J. Remote Sens. 2019, 40, 420–438.
  20. Bin Zhang, Z.; Liu, C.X.; Xu, X.D. A Green Vegetation Extraction Based-RGB Space in Natural Sunlight. Adv. Mater. Res. 2011, 660–665.
  21. Chen, A.; Orlov-Levin, V.; Meron, M. Applying high-resolution visible-channel aerial imaging of crop canopy to precision irrigation management. Agric. Water Manag. 2019, 216, 196–205.
  22. Comar, A.; Burger, P.; De Solan, B.; Baret, F.; Daumard, F.; Hanocq, J.-F. A semi-automatic system for high throughput phenotyping wheat cultivars in-field conditions: Description and first results. Funct. Plant Biol. 2012, 39, 914–924.
  23. Velázquez-García, J.; Oleschko, K.; Muñoz-Villalobos, J.A.; Velásquez-Valle, M.; Menes, M.M.; Parrot, J.-F.; Korvin, G.; Cerca, M. Land cover monitoring by fractal analysis of digital images. Geoderma 2010, 160, 83–92.
  24. Liu, N.; Treitz, P. Modelling high arctic percent vegetation cover using field digital images and high resolution satellite data. Int. J. Appl. Earth Obs. Geoinf. 2016, 52, 445–456.
  25. Fuentes-Peailillo, F.; Ortega-Farias, S.; Rivera, M.; Bardeen, M.; Moreno, M. Comparison of vegetation indices acquired from RGB and Multispectral sensors placed on UAV. In Proceedings of the 2018 IEEE International Conference on Automation/XXIII Congress of the Chilean Association of Automatic Control (ICA-ACCA), Concepcion, Chile, 17–19 October 2018; Institute of Electrical and Electronics Engineers (IEEE): New York, NY, USA, 2018; pp. 1–6.
  26. Roth, L.; Aasen, H.; Walter, A.; Liebisch, F. Extracting leaf area index using viewing geometry effects—A new perspective on high-resolution unmanned aerial system photography. ISPRS J. Photogramm. Remote Sens. 2018, 141, 161–175.
  27. Laliberte, A.; Rango, A.; Herrick, J.; Fredrickson, E.L.; Burkett, L. An object-based image analysis approach for determining fractional cover of senescent and green vegetation with digital plot photography. J. Arid. Environ. 2007, 69, 1–14.
  28. Booth, D.T.; Cox, S.E.; Berryman, R.D. Point Sampling Digital Imagery with ‘Samplepoint’. Environ. Monit. Assess. 2006, 123, 97–108.
  29. Mora, M.; Ávila, F.; Carrasco-Benavides, M.; Maldonado, G.; Olguín-Cáceres, J.; Fuentes, S. Automated computation of leaf area index from fruit trees using improved image processing algorithms applied to canopy cover digital photographies. Comput. Electron. Agric. 2016, 123, 195–202.
  30. Xu, D.; Guo, X. A Study of Soil Line Simulation from Landsat Images in Mixed Grassland. Remote Sens. 2013, 5, 4533–4550.
  31. Rasmussen, J.; Nørremark, M.; Bibby, B. Assessment of leaf cover and crop soil cover in weed harrowing research using digital images. Weed Res. 2007, 47, 299–310.
  32. Nagler, P.L.; Inoue, Y.; Glenn, E.; Russ, A.; Daughtry, C.S.T. Cellulose absorption index (CAI) to quantify mixed soil–plant litter scenes. Remote Sens. Environ. 2003, 87, 310–325.
  33. Ren, H.; Zhou, G.; Zhang, F.; Zhang, X. Evaluating cellulose absorption index (CAI) for non-photosynthetic biomass estimation in the desert steppe of Inner Mongolia. Chin. Sci. Bull. 2012, 57, 1716–1722.
  34. Aguilar, J.; Evans, R.; Daughtry, C.S.T. Performance assessment of the cellulose absorption index method for estimating crop residue cover. J. Soil Water Conserv. 2012, 67, 202–210.
  35. Richardson, M.D.; Karcher, D.E.; Purcell, L.C. Quantifying Turfgrass Cover Using Digital Image Analysis. Crop. Sci. 2001, 41, 1884–1888.
  36. Patrignani, A.; Ochsner, T.E. Canopeo: A Powerful New Tool for Measuring Fractional Green Canopy Cover. Agron. J. 2015, 107, 2312–2320.
Figure 1. Vegetation communities in Grassland National Park (GNP). (a) Vegetation communities in GNP, first surveyed in 1983, with disturbed community data updated in 1995. (b) Upland grassland. (c) Valley grassland. (d) Sloped grassland. (e) Disturbed community with smooth brome (Bromus inermis Leyss.). (f) Disturbed community with crested wheatgrass (Agropyron cristatum). (g) Disturbed community with sweet clover (Melilotus officinalis).
Figure 2. Flowchart of the methodology for this study.
Figure 3. Extracted SDM from field-taken RGB images. (a) RGB image taken in valley grassland. (a1) Extraction of standing dead matter (SDM) from (a) with d = 1. (b) RGB image taken in disturbed communities. (b1) Extraction of SDM from (b) with d = 1. (c) RGB image taken in valley grassland. (c1–c3) Extraction of SDM from (c) with d = 1, 0.7 and 0.5, respectively. (d) RGB image taken in sloped grassland. (d1–d3) Extraction of SDM from (d) with d = 1, 0.7 and 0.5, respectively. (e) RGB image taken in sloped grassland. (e1–e3) Extraction of SDM from (e) with d = 1, 1.2 and 1.5, respectively. (f) RGB image taken in upland grassland. (f1–f3) Extraction of SDM from (f) with d = 1, 1.5 and 2, respectively. (g) RGB image taken in valley grassland. (g1–g3) Extraction of SDM from (g) with d = 1, 1.5 and 2, respectively. (h) RGB image taken in disturbed communities. (h1) Extraction of SDM from (h) with d = 1.5. (i) RGB image taken in valley grassland. (i1) Extraction of SDM from (i) with d = 1.7.
Figure 4. Comparison of estimated green and dead cover with field-observed cover (the red solid line is the regression line; the blue line is the 1:1 line). (a) Comparison between green vegetation (GV) cover from visual estimation (i.e., field-observed green cover) and GV cover extracted by the method developed in this study. (b) Comparison between SDM cover from visual estimation (i.e., field-observed standing dead cover) and SDM cover extracted by the method of this study.
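The agreement summarized in Figure 4 comes down to an ordinary least-squares fit of extracted cover against visually estimated cover, judged against the 1:1 line. The sketch below (NumPy only, with made-up illustrative numbers, not the paper's data) shows how slope, intercept and R² can be computed for such a comparison.

```python
import numpy as np

def fit_and_r2(visual, extracted):
    """Least-squares line of extracted cover (%) against visually
    estimated cover (%), plus the coefficient of determination R^2."""
    x = np.asarray(visual, dtype=float)
    y = np.asarray(extracted, dtype=float)
    slope, intercept = np.polyfit(x, y, 1)      # degree-1 fit
    pred = slope * x + intercept
    ss_res = np.sum((y - pred) ** 2)            # residual sum of squares
    ss_tot = np.sum((y - y.mean()) ** 2)        # total sum of squares
    return slope, intercept, 1.0 - ss_res / ss_tot
```

A slope near 1 and an intercept near 0 indicate that the extracted cover tracks the 1:1 line; a slope below 1 would be consistent with visual estimation reporting higher cover, as observed in this study.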
Figure 5. Validation of RGB-extracted green cover. (a) The relationship between the normalized difference vegetation index (NDVI) and visually estimated GV cover (i.e., field-observed green cover). (b) The relationship between NDVI and GV cover extracted by the method of this study.
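The NDVI used for validation in Figure 5 is the standard normalized difference of near-infrared and red reflectance. When derived from field hyperspectral measurements it is typically formed from band averages near ~670 nm (red) and ~800 nm (NIR); the exact band windows used in the paper are not reproduced here, so the sketch keeps the index as a pure function of the two reflectances.

```python
def ndvi(nir, red):
    """Normalized difference vegetation index from NIR and red reflectance
    (both in [0, 1]); ranges from -1 to 1, higher for greener targets."""
    return (nir - red) / (nir + red)
```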
Figure 6. Validation of RGB-extracted standing dead cover. (a) The relationship between the cellulose absorption index (CAI) and visually estimated non-photosynthetic vegetation (NPV) cover (i.e., the sum of visually estimated SDM and litter cover). (b) The relationship between CAI and the estimated NPV cover (i.e., the sum of SDM cover extracted by the method of this study and visually estimated litter cover in the field).
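The CAI used in Figure 6 follows Nagler et al. [32]: CAI = 0.5 (R2.0 + R2.2) − R2.1, where R2.0, R2.1 and R2.2 are reflectances near 2.0, 2.1 and 2.2 µm. Dry plant material has a cellulose–lignin absorption feature near 2.1 µm, so CAI increases with NPV while soils stay near zero or negative. A minimal sketch (some authors scale the result by 100; no scaling is assumed here):

```python
def cai(r2000, r2100, r2200):
    """Cellulose absorption index from reflectance at ~2.0, 2.1 and 2.2 um:
    the depth of the 2.1-um absorption relative to its shoulders."""
    return 0.5 * (r2000 + r2200) - r2100
```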
Figure 7. Extracted green vegetation from RGB images. (a) RGB image taken in disturbed communities; extraction of green vegetation (GV) with g1 = 60 and g2 = 60. (b) RGB image taken in upland grassland; extraction of GV with g1 = 60 and g2 = 60. (c) RGB image taken in valley grassland; extraction of GV with g1 = 60 and g2 = 60. (d) RGB image taken in sloped grassland; extraction of GV with g1 = 60 and g2 = 60. (e,f) RGB images taken in valley grassland; extraction of GV with g1 = 40 and g2 = 32. (g) RGB image taken in sloped grassland; extraction of GV with g1 = 60 and g2 = 40. (h) RGB image taken in upland grassland; extraction of GV with g1 = 30 and g2 = 60.
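Figure 7 shows GV extraction controlled by two thresholds, g1 and g2. The paper's exact decision rule is defined in its Methods section and is not reproduced here; purely as an illustration of the general thresholded-index approach, the sketch below flags a pixel as green when an excess-green index (2G − R − B) exceeds g1 and the green digital number exceeds g2 (both conditions are hypothetical stand-ins), then reports GV cover as the fraction of flagged pixels.

```python
import numpy as np

def green_fraction(rgb, g1=60.0, g2=60.0):
    """Fraction of pixels flagged as green vegetation in an H x W x 3
    uint8 RGB array. Illustrative two-threshold rule only."""
    r, g, b = (rgb[..., i].astype(float) for i in range(3))
    exg = 2.0 * g - r - b                 # excess-green index per pixel
    mask = (exg > g1) & (g > g2)          # hypothetical decision rule
    return float(mask.mean())             # cover as a fraction of pixels
```

Lower thresholds admit more pixels (higher estimated cover), which is consistent with Table 3 reporting g1 and g2 statistics per community and illumination condition.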
Figure 8. Flower effects on the extraction of standing dead matter (SDM) from RGB images. (a) RGB image taken in valley grassland. (a1) Extraction of SDM from (a) with d = 1. (b) RGB image taken in disturbed communities. (b1–b5) Extraction of SDM from (b) with d = 1, 1.5, 1.7, 1.8 and 1.9, respectively.
Table 1. Dominant species in upland, sloped, valley and disturbed communities.

| Vegetation Community | Dominant Species |
|---|---|
| upland grassland | western wheatgrass (Agropyron smithii Rydb.); blue grama grass (Bouteloua gracilis (HBK) Lag. ex Steud.); needle-and-thread grass (Stipa comata Trin. and Rupr.) |
| valley grassland | northern wheatgrass (Agropyron dasystachyum); western wheatgrass (Agropyron smithii Rydb.); with high density of shrub species |
| sloped grassland | northern wheatgrass (Agropyron dasystachyum); western wheatgrass (Agropyron smithii Rydb.); needle-and-thread grass (Stipa comata Trin. and Rupr.); blue grama grass (Bouteloua gracilis (HBK) Lag. ex Steud.) |
| disturbed communities | crested wheatgrass (Agropyron cristatum); smooth brome (Bromus inermis Leyss.); sweet clover (Melilotus officinalis) |
Table 2. Descriptive statistics for the cover data (%) based on visual estimation. GV = green vegetation; SDM = standing dead matter; NPV = non-photosynthetic vegetation; STD = standard deviation.

| ID | Vegetation Community | GV Mean | GV STD | SDM Mean | SDM STD | NPV Mean | NPV STD |
|---|---|---|---|---|---|---|---|
| 1 | upland grassland | 48.85 | 13.02 | 7.75 | 8.50 | 39.60 | 8.77 |
| 2 | upland grassland | 42.15 | 14.72 | 31.85 | 9.18 | 51.85 | 11.89 |
| 3 | upland grassland | 48.24 | 6.91 | 7.14 | 5.82 | 45.90 | 6.36 |
| 4 | upland grassland | 40.55 | 9.61 | 16.50 | 9.75 | 39.40 | 19.37 |
| 5 | valley grassland | 42.85 | 14.04 | 24.75 | 11.97 | 27.30 | 12.21 |
| 6 | valley grassland | 37.95 | 12.39 | 24.75 | 11.53 | 34.10 | 14.97 |
| 7 | valley grassland | 45.71 | 16.30 | 26.52 | 18.90 | 40.71 | 20.08 |
| 8 | sloped grassland | 28.25 | 13.35 | 15.35 | 16.33 | 30.30 | 22.66 |
| 9 | sloped grassland | 35.35 | 10.92 | 26.45 | 14.33 | 44.65 | 20.05 |
| 10 | sloped grassland | 39.71 | 10.62 | 23.57 | 14.24 | 46.71 | 12.36 |
| 11 | sloped grassland | 44.95 | 7.35 | 15.95 | 8.75 | 39.90 | 11.71 |
| 12 | sloped grassland | 38.95 | 8.96 | 9.67 | 5.18 | 32.19 | 11.60 |
| 13 | disturbed communities | 62.40 | 18.86 | 12.50 | 7.86 | 12.50 | 7.86 |
| 14 | disturbed communities | 49.76 | 11.34 | 14.76 | 10.30 | 14.76 | 10.30 |
Table 3. Data exploration for constants g1 and g2 for different species and conditions. MIN = minimum; MAX = maximum; STD = standard deviation.

| ID | Vegetation Community | Species | Condition | g1 MIN | g1 MAX | g1 MEAN | g1 STD | g2 MIN | g2 MAX | g2 MEAN | g2 STD |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | disturbed community | smooth brome/forb | normal condition | 60.18 | 276.81 | 139.94 | 41.26 | 60.53 | 533.56 | 199.87 | 60.55 |
| 2 | disturbed community | smooth brome | normal condition | 60.23 | 268.79 | 133.36 | 38.97 | 60.24 | 517.52 | 192.35 | 65.15 |
| 3 | disturbed community | smooth brome/forb | normal condition | 60.18 | 260.76 | 124.73 | 36.37 | 60.33 | 577.69 | 226.32 | 67.68 |
| 4 | disturbed community | smooth brome | high exposure | 38.12 | 224.66 | 94.36 | 23.41 | 60.15 | 545.60 | 267.27 | 55.50 |
| 5 | disturbed community | sweet clover | normal condition | 64.22 | 649.91 | 198.42 | 74.89 | 66.13 | 1014.98 | 434.55 | 167.75 |
| 6 | disturbed community | sweet clover | normal condition | 64.11 | 328.96 | 170.97 | 51.25 | 66.03 | 776.71 | 333.94 | 98.38 |
| 7 | sloped grassland | needle and thread/northern wheat grass | high exposure | 32.09 | 577.69 | 119.27 | 50.33 | 60.47 | 774.27 | 201.43 | 83.10 |
| 8 | sloped grassland | western wheat grass/needle and thread | high exposure | 36.11 | 284.84 | 106.99 | 33.44 | 60.36 | 585.72 | 147.92 | 59.82 |
| 9 | sloped grassland | needle and thread | senesced grass | 40.12 | 196.58 | 79.25 | 18.26 | 60.33 | 469.38 | 180.69 | 63.62 |
| 10 | sloped grassland | needle and thread/western wheat grass | high exposure | 44.13 | 517.52 | 117.68 | 50.47 | 60.23 | 786.31 | 248.96 | 85.93 |
| 11 | sloped grassland | June grass/needle and thread/forb | high exposure, senesced grass | 44.13 | 244.72 | 96.79 | 27.01 | 60.34 | 625.84 | 256.16 | 81.21 |
| 12 | sloped grassland | June grass/western wheat grass | normal condition | 61.02 | 284.84 | 117.41 | 38.13 | 60.15 | 501.47 | 155.81 | 60.03 |
| 13 | sloped grassland | June grass/western wheat grass | normal condition | 60.39 | 280.82 | 129.48 | 41.20 | 61.11 | 509.49 | 165.86 | 63.25 |
| 14 | sloped grassland | June grass/western wheat grass/forb | normal condition | 61.54 | 252.74 | 106.01 | 33.76 | 60.02 | 469.38 | 132.68 | 54.99 |
| 15 | upland grassland | northern wheat grass | high exposure, senesced grass | 36.11 | 445.31 | 127.37 | 50.69 | 60.03 | 710.47 | 299.97 | 83.84 |
| 16 | upland grassland | June grass/northern wheat grass | senesced grass | 40.12 | 296.87 | 108.04 | 34.27 | 62.11 | 585.72 | 185.94 | 77.32 |
| 17 | upland grassland | northern wheat grass/needle and thread | high exposure, senesced grass | 36.11 | 240.71 | 100.90 | 28.54 | 63.12 | 557.64 | 264.67 | 60.56 |
| 18 | upland grassland | western wheat grass/needle and thread | normal condition | 61.27 | 629.85 | 132.76 | 55.72 | 60.01 | 826.42 | 281.67 | 96.35 |
| 19 | valley grassland | western wheat grass | bluish leaves | 61.22 | 224.66 | 86.35 | 22.01 | 32.09 | 453.33 | 120.11 | 43.96 |
| 20 | valley grassland | crested wheat grass | bluish leaves | 60.14 | 260.76 | 116.32 | 34.09 | 36.11 | 561.65 | 199.28 | 74.63 |
| 21 | valley grassland | western wheat grass/sagebrush | bluish leaves | 60.18 | 256.75 | 95.04 | 25.59 | 40.12 | 533.56 | 157.27 | 65.15 |
| 22 | valley grassland | smooth brome/forb | normal condition | 66.13 | 300.88 | 162.84 | 47.67 | 64.19 | 429.26 | 177.65 | 55.57 |
| 23 | valley grassland | northern wheat grass | normal condition | 61.22 | 276.81 | 114.85 | 36.49 | 60.23 | 485.42 | 137.31 | 55.02 |
| 24 | valley grassland | western wheat grass/little bluestem | normal condition | 62.12 | 216.64 | 90.98 | 23.61 | 60.01 | 477.40 | 168.68 | 51.78 |
| 25 | valley grassland | western wheat grass/forb | bluish leaves | 60.28 | 232.68 | 90.59 | 22.40 | 40.12 | 272.80 | 125.49 | 41.29 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

MDPI and ACS Style

Xu, D.; Pu, Y.; Guo, X. A Semi-Automated Method to Extract Green and Non-Photosynthetic Vegetation Cover from RGB Images in Mixed Grasslands. Sensors 2020, 20, 6870. https://doi.org/10.3390/s20236870

AMA Style

Xu D, Pu Y, Guo X. A Semi-Automated Method to Extract Green and Non-Photosynthetic Vegetation Cover from RGB Images in Mixed Grasslands. Sensors. 2020; 20(23):6870. https://doi.org/10.3390/s20236870

Chicago/Turabian Style

Xu, Dandan, Yihan Pu, and Xulin Guo. 2020. "A Semi-Automated Method to Extract Green and Non-Photosynthetic Vegetation Cover from RGB Images in Mixed Grasslands" Sensors 20, no. 23: 6870. https://doi.org/10.3390/s20236870

Note that from the first issue of 2016, this journal uses article numbers instead of page numbers.
