Article

Evaluating UAV LiDAR and Field Spectroscopy for Estimating Residual Dry Matter Across Conservation Grazing Lands

1 Department of Geography, San Diego State University, 5500 Campanile Dr, San Diego, CA 92182, USA
2 The Nature Conservancy, 830 S St., Sacramento, CA 95811, USA
* Author to whom correspondence should be addressed.
Remote Sens. 2025, 17(14), 2352; https://doi.org/10.3390/rs17142352
Submission received: 30 May 2025 / Revised: 28 June 2025 / Accepted: 3 July 2025 / Published: 9 July 2025

Abstract

Residual dry matter (RDM) is a term used in rangeland management to describe the non-photosynthetic plant material left on the soil surface at the end of the growing season. RDM measurements are used by agencies and conservation entities for managing grazing and fire fuels. Measuring the RDM using traditional methods is labor-intensive, costly, and subjective, making consistent sampling challenging. Previous studies have assessed the use of multispectral remote sensing to estimate the RDM, but with limited success across space and time. The existing approaches may be improved through the use of spectroscopic (hyperspectral) sensors, capable of capturing the cellulose and lignin present in dry grass, as well as Unmanned Aerial Vehicle (UAV)-mounted Light Detection and Ranging (LiDAR) sensors, capable of capturing centimeter-scale 3D vegetation structures. Here, we evaluate the relationships between the RDM and spectral and LiDAR data across the Jack and Laura Dangermond Preserve (Santa Barbara County, CA, USA), which uses grazing and prescribed fire for rangeland management. The spectral indices did not correlate with the RDM (R2 < 0.1), likely due to complete areal coverage with dense grass. The LiDAR canopy height models performed better for all the samples (R2 = 0.37), with much stronger performance (R2 = 0.81) when using a stratified model to predict the RDM in plots with predominantly standing (as opposed to laying) vegetation. This study demonstrates the potential of UAV LiDAR for direct RDM quantification where vegetation is standing upright, which could help improve RDM mapping and management for rangelands in California and beyond.

1. Introduction

In California, rangelands (grazed grasslands and oak woodlands) provide key ecosystem services to humans and wildlife. Rangelands contain an estimated one-third of the state’s total soil carbon stock, promote biodiversity through habitat provisioning, and provide economically vital livestock forage for the meat and dairy industries [1,2,3,4,5]. Despite their importance, rangelands are the most heavily altered ecosystems in the state [6]. Invasive species, desertification, erosion, and overgrazing pose a significant threat to California’s rangeland health and productivity [7,8,9]. Wildfires are an additional threat and are becoming more frequent and intense due to severe drought, fuel build-ups from fire suppression, invasive species encroachment (particularly involving invasive annual grasses), and climate change [10,11,12]. In light of these challenges, the sustainable management and monitoring of rangeland resources is critical to ensure rangelands remain productive and to preserve the ecosystem services they provide to humans and wildlife in California.
Understanding how to effectively monitor grazing is critical, but the current practices are labor-intensive, costly, and time-consuming [13,14]. In California rangelands, which are dominated by annual grasses, the most common proxy for assessing the rangeland conditions is the residual dry matter (RDM) [15,16,17]. The RDM is the aboveground biomass left on the ground at the end of the growing season [15]. The RDM directly influences fire behavior (combustion time, fire occurrences, and frequency) in rangelands [18,19,20] and has been shown to affect the future forage production and species composition through carbon sequestration and nutrient cycling [21,22]. One of the protocols most widely used by conservation organizations like The Nature Conservancy (TNC) for estimating the RDM in California involves clipping and weighing senescent (no longer photosynthesizing) grasses and forbs in randomized locations across grazed landscapes using a standardized hoop and then converting those measurements into units of lbs/acre or kgs/hectare [15,16]. Some managers use visual estimates of the RDM instead of clipping, while others convert measurements of small clipped plots into landscape-scale estimates [15,16]. Ground-based RDM data collection, including using visual estimates, can be time-intensive, expensive, and subjective [14].
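The hoop-to-landscape unit conversion mentioned above can be sketched in Python for reference (an illustrative sketch, not the protocol's official calculation; the 0.09 m2 hoop area comes from the study design and the conversion constants are standard values, while the function names are hypothetical):

```python
# Convert a clipped RDM sample (grams inside a 0.09 m^2 hoop) to
# landscape units of kg/ha and lbs/acre.
HOOP_AREA_M2 = 0.09          # area of the standard clipping hoop
M2_PER_HECTARE = 10_000
KG_PER_LB = 0.45359237
M2_PER_ACRE = 4_046.8564224

def rdm_to_kg_per_ha(grams: float, hoop_area_m2: float = HOOP_AREA_M2) -> float:
    """Scale a per-hoop dry weight up to kilograms per hectare."""
    return (grams / 1000.0) * (M2_PER_HECTARE / hoop_area_m2)

def rdm_to_lbs_per_acre(grams: float, hoop_area_m2: float = HOOP_AREA_M2) -> float:
    """Scale a per-hoop dry weight up to pounds per acre."""
    kg_per_m2 = (grams / 1000.0) / hoop_area_m2
    return (kg_per_m2 / KG_PER_LB) * M2_PER_ACRE

# e.g. a 30 g sample corresponds to roughly 3333 kg/ha (about 2974 lbs/acre)
```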
Remote sensing has been proposed as a tool to map the RDM and address the time, cost, and subjectivity challenges of field sampling [23,24]. Despite advances in using multispectral remote sensing for monitoring grazing practices and conservation impacts across rangelands, including by predicting the levels of RDM [23,24], no approach has thus far been developed that would allow land managers to directly quantify the RDM each year in a way that is accurate and generalizable across landscapes over space and time. This is because multispectral spaceborne sensors like NASA’s Landsat (30 m) and MODIS (250–1000 m) struggle to distinguish non-photosynthetic vegetation (NPV) from bare soil [25,26]. There is a more-than-50-year history of predicting green vegetation amounts using spectral indices derived from spaceborne data like the Normalized Difference Vegetation Index (NDVI) [27,28,29,30,31]; however, applying indices that were developed for green vegetation to monitoring NPV does not provide accurate results [26,32].
In contrast, spectroscopic (hyperspectral) sensors with a high spectral resolution in the shortwave infrared (SWIR) region (1300–2500 nm) can reliably distinguish NPV from soil due to its characteristic lignin and cellulose absorption features [32,33,34]. Multispectral indices such as the Soil-Adjusted Total Vegetation Index (SATVI) [35] and Shortwave Infrared Normalized Difference Residue Index (SINDRI) [32] have been developed using spaceborne data to attempt to detect NPV; however, they are less accurate than narrowband hyperspectral indices such as the Cellulose Absorption Index (CAI) [36], Lignin–Cellulose Absorption Index (LCAI) [37], and Normalized Difference Lignin Index (NDLI) [38]. These narrowband indices, developed for SWIR-capable spectroscopic sensors, are more accurate for separating NPV from background soils and have shown high potential for biomass mapping [26,39,40,41,42].
Active sensors, such as Light Detection and Ranging (LiDAR) sensors, which emit laser pulses and are not impacted by lighting conditions or cloud cover, have also shown success in estimating the structural properties of vegetation such as its biomass, cover, height, and volume in rangelands at the plot and pasture scales [43,44,45,46,47]. Unmanned Aerial Vehicle (UAV)-based LiDAR is useful for mapping vegetation properties at high resolutions (typically at the centimeter scale) and is becoming increasingly used due to its accessibility and improving software and hardware [45,46]. UAV LiDAR can penetrate vegetation canopies to detect bare soil and isolate the vegetation canopy for feature extraction [48]. Creating allometric relationships between vegetation and LiDAR data often involves computing canopy height models (CHMs) by calculating the difference between digital surface models (DSMs) and digital terrain models (DTMs), then calibrating these with field data using linear models such as linear regression (LR) and, more recently, machine learning models such as random forest (RF) [45,48].
The combined use of spectroscopic and LiDAR sensors offers potential for advancing the current methods of RDM quantification. For instance, multisensor/multi-data fusion has been applied to classify plant species as well as map soil and vegetation characteristics across the field plot and pasture scales [49,50,51,52]. If successful and generalizable, such approaches could transform rangeland management and monitoring by providing detailed and scalable biomass data, eliminating the need to conduct cost-, time-, and labor-intensive field surveys to estimate the biomass. Direct RDM measurement could also have additional applications outside of assessing grazing impacts and fire fuels, including mapping and quantifying tillage practices [53], drought stress [26], agricultural productivity [54], and functional indicators of biodiversity [46]. This paper explores the use of spectroscopic and UAV LiDAR remote sensing technologies to directly quantify the RDM in a rangeland conservation management context in California.
This research aimed to evaluate field reflectance spectroscopy, UAV LiDAR, and combined approaches to directly retrieve the RDM at the plot scale. The specific questions guiding the research were as follows:
1. How well do narrowband spectral vegetation indices measured using a portable reflectance spectrometer correlate with RDM at the plot scale, as evaluated using linear and random forest regression?
2. How well do the canopy height model metrics obtained from UAV LiDAR data correlate with RDM at the plot scale, as evaluated using linear and random forest regression?
3. Do combined optical (spectroscopic) and active (LiDAR) data correlate better with RDM than single-sensor models, as evaluated using these same modeling approaches?

2. Materials and Methods

2.1. Study Area

The Jack and Laura Dangermond Preserve (JLDP, Figure 1) in California (USA), managed by The Nature Conservancy [55], offered a valuable landscape to test the use of remote sensing for RDM quantification because of its large size and the variety of grazed and ungrazed management units with varying RDM levels [56]. The JLDP’s history of implementing imaging technology in ecosystem management and monitoring, as well as its extensive airborne and spaceborne remote sensing data coverage, made it an ideal site for this study [57,58,59,60]. The JLDP sits on Point Conception along California’s Central Coast and spans approximately 24,000 acres (Figure 1). The climate is Mediterranean, with a mean annual precipitation of around 43 cm, concentrated in the winter months [60]. The JLDP is both topographically and ecologically diverse, with elevations ranging from sea level to over 500 m, with over 600 plant and animal species (58 with a legal conservation status) found throughout its oak woodland, savannah, chaparral, and grassland habitats [55]. Grazed grassland and oak woodland ecosystems (rangelands) throughout the JLDP are dominated by non-native annual grasses, with a small percentage of native perennial bunch grasses [55].
TNC’s goals for grazing management include reducing fire fuels and the threat from catastrophic wildfires, reducing the cover of non-native invasive and noxious species, and creating and maintaining habitats for native plant and animal species [56]. TNC uses ground-based RDM monitoring protocols [16] as well as RDM compliance predictions [23,24] using their remote monitoring platform [61], Lens Rangelands. The grazing season at the JLDP runs from the first rain in the Fall (October/November) through the end of the Summer (September). RDM monitoring typically occurs in October through November.
Figure 1. Map of the Jack and Laura Dangermond Preserve study areas with an overview of the JLDP rangeland management zones. Red stars represent field site locations where RDM and remote sensing data was collected. The Mediterranean climate patterns can be observed in the bar graph showing the average monthly precipitation from 1981 to 2025 obtained using the Climate Hazards Group Infrared Precipitation with Stations (CHIRPS) dataset in Google Earth Engine with an 8 km spatial resolution [62]. The two photographs (taken in September 2024) show the dominance of RDM across the JLDP, which resembles many of the Mediterranean oak woodland and grassland rangelands across California. The map inset on the bottom right shows the location of the JLDP on California’s Central Coast.

2.2. Study Design

Eight 90 × 90 m sampling areas were selected across the JLDP to represent low-, medium-, and high-RDM management zones and levels (Figure 1 and Figure 2). Each of those 8 sites consisted of homogenous areas of RDM which had been grazed at different intensities during the previous growing season (Fall 2023–Summer 2024). Vehicle access was another criterion for site selection. The data collection for this study occurred on 19–20 September 2024 in partly cloudy conditions in the mornings and evenings and clear sky conditions in the afternoons.
At each 90 × 90 m site, the RDM was clipped in the 0.09 m2 circular area delineated by a sampling hoop in the center of each of the 9 plots (Figure 2), following the protocol from [16]. The field samples were georeferenced with UAV LiDAR data using the UTM Zone 10N (EPSG: 32610) coordinate reference system to visually identify each hoop in the colorized point cloud and RGB raster. Two black ½″ irrigation tubes (marking hoops) were placed around each RDM sampling hoop to ensure the hoop’s visibility in the UAV imagery. Clipping and marking hoops were positioned after spectral data collection to ensure that these materials were not included in the RDM spectra. A reference photo was taken on an iPhone 11 at nadir approximately 2 feet above the canopy before clipping each sample. The reference photos were later used to classify the vegetation structure for analysis. The RDM within the hoop was clipped as close to the bare soil as possible, ensuring that no soil or non-RDM material (e.g., rocks, roots, or clumps of soil) was collected. This process was repeated for all the sampling points (n = 9) in each study area (n = 8), resulting in a total of 72 samples that reflected the RDM variability of each study area.
Once clipped, the RDM was placed in a paper bag and weighed to the nearest gram using a tared scale. Shortly after the field campaign, the RDM was oven-dried in laboratory drying ovens at 65 °C for 24 h [21]. The ‘Oven Weight’ was used in statistical analyses to correlate the RDM with the remote sensing data. The mean percent difference between the Field Weight and Oven Weight of the RDM was 8.16% with a standard deviation of 2.5% (Figure A1).

2.3. Spectral Data

Spectral data was collected with a hand-held Spectra Vista Corporation (SVC) HR-1024i reflectance spectrometer. This instrument has 1024 spectral channels over a spectral range of 350–2500 nm. The nominal bandwidth is 1.5 nm from 350 to 1000 nm, 3.8 nm from 1000 to 1890 nm, and 2.5 nm from 1890 to 2500 nm. The full-width half-maximum is 3.3 nm at 700 nm, 9.5 nm at 1500 nm, and 6.5 nm at 2100 nm. A 25°-field-of-view armored fiber optic cable was used in the field with a 10″ × 10″ SVC 99% reflectance Spectralon panel for reference scans. The canopy reflectance was captured at a fixed height of 0.76 m to capture the reflectance of the entire canopy within a spatially integrated circle matching the RDM hoop [41,63].
Before capturing the target radiance of the RDM canopy in each hoop, a reference radiance was collected using the SVC Spectralon panel (Spectra Vista Corporation, Poughkeepsie, NY, USA). Field spectra were collected in clear sky conditions within ±1 h of solar noon to minimize the variations in the solar geometry. Co-located spectra were only collected at Steve’s Flat, Jalama Bull, East Tinta, and Cojo Cow (n = 36) (Figure 1) because cloudy conditions in the mornings and evenings precluded spectral data collection.
The spectra were offloaded from the instrument, then analyzed and visualized in Python 3.11.5 [64] using the PyCharm 2023.3.1 IDE with the numpy and matplotlib libraries (Figure 3). The following spectral indices were used as spectral features in the correlation analyses:
The Cellulose Absorption Index [36]:
CAI = 0.5(ρ2000 + ρ2200) − ρ2100
The Lignin–Cellulose Absorption Index [37]:
LCAI = 100[(ρ2185 − ρ2145) + (ρ2185 − ρ2295)]
The Normalized Difference Lignin Index [38]:
NDLI = [log(1/ρ1754) − log(1/ρ1680)] / [log(1/ρ1754) + log(1/ρ1680)]
where ρλ is the reflectance at wavelength λ (in nm). Reflectance was computed as the target radiance measured by the field spectrometer divided by the radiance of the Spectralon reference panel (Figure 3). The final output was a table containing the oven-dried RDM (g) and each corresponding index for the four measured sites.
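For illustration, the panel-referenced reflectance and the three indices can be computed from a measured spectrum as follows (a minimal numpy sketch, not the authors’ processing code; the nearest-channel lookup and function names are assumptions):

```python
import numpy as np

def reflectance(target_radiance, panel_radiance):
    """Target radiance divided by the Spectralon reference panel radiance."""
    return np.asarray(target_radiance) / np.asarray(panel_radiance)

def band(wl, r, target_nm):
    """Reflectance at the spectrometer channel closest to target_nm."""
    return r[np.argmin(np.abs(np.asarray(wl) - target_nm))]

def cai(wl, r):
    """Cellulose Absorption Index: 0.5*(rho2000 + rho2200) - rho2100."""
    return 0.5 * (band(wl, r, 2000) + band(wl, r, 2200)) - band(wl, r, 2100)

def lcai(wl, r):
    """Lignin-Cellulose Absorption Index."""
    return 100.0 * ((band(wl, r, 2185) - band(wl, r, 2145))
                    + (band(wl, r, 2185) - band(wl, r, 2295)))

def ndli(wl, r):
    """Normalized Difference Lignin Index, using log(1/rho)."""
    a = np.log(1.0 / band(wl, r, 1754))
    b = np.log(1.0 / band(wl, r, 1680))
    return (a - b) / (a + b)
```

All three indices evaluate to zero on a spectrally flat (featureless) target, which is one quick sanity check for an implementation.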

2.4. LiDAR Data

The LiDAR data was collected with a GeoCue TrueView 540 sensor mounted on a DJI Matrice 350 RTK UAV. This discrete-return sensor has an accuracy of 5 mm and a precision of 15 mm, utilizes a single-beam laser scanner (1535 nm wavelength) with up to 8 returns, and has a 45-megapixel RGB camera for colorizing point clouds and generating orthomosaics. The flights were conducted at 120 m above ground level and 4 m/s, yielding 558 points/m2 with 2.3 cm line spacing between the scans. A flight-line spacing of 82.3 m, auto-calculated from the point cloud side overlap (56%) and imagery side overlap (60%), allowed three even flight lines to cover the entirety of each study area (see Figure 2). An iGage iG5 static global navigation satellite system (GNSS) receiver was used as the reference base station for all the UAV flights to provide post-processed kinematic (PPK) GNSS corrections. The base station was placed well within the 6-mile (9.66 km) radius of the flight area required to achieve accurate corrections. Each study area was flown twice: (1) before clipping the RDM to create a digital surface model (DSM) of the canopy and (2) after clipping the RDM to create a digital terrain model (DTM) of the bare ground within each hoop.
Once the LiDAR data had been collected, it was processed in LP360 v2024.2.55.0 software (GeoCue™; Huntsville, AL, USA) using the recommended TV540 processing workflow. A detailed, step-by-step guide on how to process data obtained using a TV540 sensor can be found here: https://support.lp360.com/hc/en-us/articles/36167759331091-TV540-processing-with-LP360 (accessed on 23 September 2024). Raw data from the iG5 GNSS base station receiver was submitted to the Online Positioning User Service (OPUS) to determine an accurate base station reference position, and this position was then used (along with the base station and drone’s raw GNSS data) for the post-processing of the LiDAR data. The flight lines within each 90 × 90 m study area were retained and all the flight lines outside this (i.e., those used during takeoff and landing) were discarded during processing. A 40° clip angle was used to remove unwanted edges from the survey area. The LP360 ‘Strip Align’ tool was used to verify the flight line alignment and correct any misalignments between the flights.
Two raster layers with a 3 cm spatial resolution were created for each dataset collected: an elevation model (DSM or DTM) and a colorized RGB raster. The LiDAR points were aggregated to 3 cm pixels using the LP360 Export Wizard, with a Triangulated Irregular Network (or TIN) method used to convert the LiDAR points to a raster surface. A 3 cm spatial resolution was used because of the 2.3 cm line spacing between the scans (used to prevent NODATA errors in the 3 cm pixels). A colorized RGB raster was used to visually identify each of the RDM hoops within the study areas, which were clearly delineated by the marking hoops, so it was not necessary to produce an orthomosaic for visual identification.
The coregistration between the scans was evaluated using a qualitative visual assessment and quantitative analysis of the pre-clipping and post-clipping DSMs for an area of a roof with a square skylight. The visual assessment involved stacking the pre/post-clipping DSMs into a single raster, displaying the pre-clipping and post-clipping rasters in different colors to visually reveal areas of misregistration with red or cyan colors. Figure A3 shows a circle that represents a 33.66 cm diameter RDM clipping hoop atop an orthophoto, a pre-clip DSM, a post-clip DSM, and a multiband raster, enabling the visualization of misregistration (Figure A3) at the scale of an RDM clipping hoop. A false-color RGB composite (red color = pre-clip DSM or Band 1, while green and blue colors = post-clip DSM or Band 2) revealed minimal misregistration at the level of a single 3 cm pixel shift (red and cyan colors at the edges of the roof’s skylight illustrate misregistration), demonstrating a high degree of alignment precision between the scans and ensuring minimal errors during analysis. In addition, the measurements of the horizontal and vertical errors were on average 2 cm each, with a standard deviation of approximately 1.3 cm. These measurements align with the TV540’s 5 mm repeated ranging accuracy, which allows it to achieve exceptional absolute accuracy from 2 to 5 cm (https://geocue.com/sensors/drone-lidar/trueview-540/, accessed on 23 September 2024).
The DSM and DTM rasters were imported into ArcGIS Pro 3.4.0 to compute a canopy height model (CHM) for each RDM hoop by taking the difference between the DSM and DTM rasters using the ‘Raster Calculator’ Spatial Analyst tool. A 6.625 in (16.83 cm) radial buffer (½ of the 13.25 in diameter of the hoop used for clipping the RDM) was placed around the centroid of each RDM hoop to ensure the RDM hoop and irrigation tubes were not included in the canopy height model analysis (Figure 4). Each CHM had 100 pixels within the clipping hoop, with each pixel representing a 3 × 3 cm area of canopy height. The ‘Zonal Statistics as Table’ Spatial Analyst tool in ArcGIS Pro was used to extract the maximum (chm_max), range (chm_range), mean (chm_mean), standard deviation (chm_std), median (chm_median), and 90th percentile (chm_pct90) of each zonal CHM for statistical analysis to represent the structural variation in the canopy. Similar studies, such as that by Zhang et al. (2018) [47], used a zonal statistics approach to predict the green vegetation biomass and height at the quadrat scale using canopy height metrics (chm_mean, chm_median, chm_max, chm_min, chm_std) derived from photogrammetric point clouds. The final output of the analysis was a table containing each zonal statistic for each labeled RDM plot.
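The CHM differencing and zonal statistics steps can be sketched in Python (a simplified numpy analogue of the ArcGIS Pro workflow, assuming co-registered DSM/DTM arrays at 3 cm resolution; function names and the mask construction are illustrative, not the authors’ code):

```python
import numpy as np

def canopy_height_model(dsm: np.ndarray, dtm: np.ndarray) -> np.ndarray:
    """CHM = pre-clip digital surface model minus post-clip terrain model."""
    return dsm - dtm

def circular_mask(shape, center_rc, radius_px):
    """Boolean mask of pixels within radius_px of a (row, col) center,
    mimicking the 16.83 cm radial buffer (~5.6 pixels at 3 cm resolution)."""
    rr, cc = np.indices(shape)
    return (rr - center_rc[0]) ** 2 + (cc - center_rc[1]) ** 2 <= radius_px ** 2

def zonal_chm_metrics(chm, mask):
    """Zonal canopy height statistics used as model predictors."""
    v = chm[mask]
    return {
        "chm_max": float(v.max()),
        "chm_range": float(v.max() - v.min()),
        "chm_mean": float(v.mean()),
        "chm_std": float(v.std()),
        "chm_median": float(np.median(v)),
        "chm_pct90": float(np.percentile(v, 90)),
    }
```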
A workflow diagram of the field protocol can be found in Figure A2.

3. Analysis

The relationship between the RDM and remote sensing predictor variables was evaluated using linear regression (LR) and random forest (RF) models. All the statistical analyses were conducted in R [65] using the RStudio 2024.04.2 IDE. The predictor sets are listed in a tabular format in Appendix A (Table A1). LR assumed a linear relationship between the RDM and remote sensing predictor variables and fitted a linear model to minimize the sum of squared differences between the observed and predicted targets. RF used an ensemble of decision trees to capture more complex relationships between the RDM and predictors [66]. The LR model was configured using the lm() function from the built-in stats package, and the RF model was configured using the randomForest() function from the randomForest package [67].
The RF model was configured with 1000 trees (ntree = 1000), a maximum of 30 terminal nodes (maxnodes = 30), and a minimum node size of five (nodesize = 5). The feature importances were computed using the RF model and plotted using ggplot2 as the percentage contribution of each predictor to the total importance. The default parameters for linear regression (standard least squares, all predictors included, and an intercept) were used for the linear models to fit a multiple linear regression between the remote sensing variables and RDM. The predictors were standardized and placed on a common scale using the scale() function prior to model implementation.
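The predictor standardization performed by R’s scale() is equivalent to a column-wise z-score transform, sketched here in Python for illustration (the function name is hypothetical):

```python
import numpy as np

def standardize(X):
    """Column-wise z-scores: subtract the column mean and divide by the
    sample standard deviation (ddof=1), matching the default behavior
    of R's scale()."""
    X = np.asarray(X, dtype=float)
    return (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
```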
The RF and LR models were validated using leave-one-out cross-validation (LOOCV). LOOCV guards against model overfitting and assesses generalizability by iteratively holding out one observation as test data and using the remainder of the dataset as training data, which is an efficient cross-validation approach for small datasets [68]. The model performances were assessed using a coefficient of determination (R2) to indicate the proportion of the variance in the dependent variable (RDM) that could be explained by the independent variable (remote sensing data) and the mean absolute error (MAE) to measure the accuracy of the predictions of the RDM—a continuous biophysical variable estimated from remote sensing data and measured in grams [69]. Lastly, probability values (p-values) were calculated using the linear models and the standard deviations of the residuals (error estimates based on the mean) were calculated using the sd() function.
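For illustration, LOOCV with an ordinary-least-squares model and the R2 and MAE metrics described above can be sketched as follows (a numpy sketch, not the authors’ R implementation; the function name is hypothetical):

```python
import numpy as np

def loocv_linear(X, y):
    """Leave-one-out CV for ordinary least squares: each observation is
    held out once and predicted from a model fit on the remaining ones."""
    X = np.asarray(X, float)
    if X.ndim == 1:
        X = X[:, None]
    y = np.asarray(y, float)
    n = len(y)
    A = np.column_stack([np.ones(n), X])   # design matrix with intercept
    preds = np.empty(n)
    for i in range(n):
        keep = np.arange(n) != i
        coef, *_ = np.linalg.lstsq(A[keep], y[keep], rcond=None)
        preds[i] = A[i] @ coef
    ss_res = np.sum((y - preds) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot              # coefficient of determination
    mae = np.mean(np.abs(y - preds))        # mean absolute error (grams)
    return r2, mae, preds
```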
Moran’s I was calculated with the residuals from the RF and LR models to assess the degree of spatial autocorrelation in the model errors. Due to the nested study design, it was important to ensure that the models were not overfitting due to a lack of independence in the sample resulting from the spatial clustering of the data (which would be suggested by a positive spatial autocorrelation in the residuals). Moran’s I for the model residuals was calculated locally for each study area (nine samples per study area) using a 3 × 3 queen contiguity matrix and cell2nb() and nb2listw() from the spdep package [70].
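The queen-contiguity weighting on a 3 × 3 grid of plots can be illustrated as follows (a numpy sketch of the global Moran’s I for one study area’s nine residuals, shown as a simpler stand-in for the local statistic computed with spdep in R; function names are hypothetical):

```python
import numpy as np

def queen_weights_3x3():
    """Row-standardized queen-contiguity weight matrix for a 3 x 3 grid
    (cells numbered row-major 0..8); each row of weights sums to 1."""
    W = np.zeros((9, 9))
    for i in range(9):
        ri, ci = divmod(i, 3)
        for j in range(9):
            rj, cj = divmod(j, 3)
            if i != j and abs(ri - rj) <= 1 and abs(ci - cj) <= 1:
                W[i, j] = 1.0
    return W / W.sum(axis=1, keepdims=True)

def morans_i(residuals, W):
    """Global Moran's I of one study area's nine model residuals:
    positive = clustering, negative = overdispersion."""
    z = np.asarray(residuals, float)
    z = z - z.mean()
    n = len(z)
    return (n / W.sum()) * (z @ W @ z) / (z @ z)
```

Spatially clustered residuals (e.g., high values in the top row and low values in the bottom row) yield a positive statistic, consistent with the interpretation in the text.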
Lastly, we observed during data collection that the vertical structure of the RDM varied among the sites (Table A2), and so new models were developed using a subset of the data corresponding to the structure categories. The sites were stratified a posteriori (after the sites were selected) into three categories: sites with standing (upright), laying (horizontal), and mixed (a combination of vegetation that was standing and flattened) vegetation. The standing sites with predominantly upright RDM included the CMT Ungrazed and Jalama Horse sites. The sites with a combination of upright and varying degrees of laying/horizontal RDM included the Jalachichi, Steve’s Flat, Jalama Bull, and East Tinta sites. The sites with flattened vegetation included the Cojo Cow and Jalama Mare sites. The vegetation height ranged from ground level to approximately 30 cm, which was spatially averaged over the area inside the RDM clipping hoop and estimated using LiDAR data and a consultation with the land manager.

4. Results

All the linear models (LR) fitted a significant (p < 0.05) linear relationship between the remote sensing data and RDM (Table 1). LR outperformed the RF models in terms of its predictive performance (higher R2, lower MAE), suggesting a linear relationship between individual remote sensing predictors and the field-validated RDM (Figure A4). The standard deviation in the error was similar across the LR and RF models (ranging from 21.29 to 25.69 g). However, the LiDAR LR model had the lowest error (MAE = 16.68) compared to the other models, which aligned with the LiDAR LR model having the highest coefficient of determination (R2 = 0.37) compared to all the other models, explaining approximately 37% of the RDM variability.
The spectral models performed poorly, with RF (R2 = 0.06, MAE = 18.70) and LR (R2 = 0.09, MAE = 18.26) having comparable performance. The linear models’ (LR) error was greater in the combined models versus those using LiDAR alone, suggesting degraded model performance when introducing spectral features into the combined model. The feature importances for the spectral data show that all the indices (NDLI, CAI, and LCAI) contributed similarly to the RF model’s performance, although the LCAI had marginally higher importance than the other indices (Figure 5A). The LR coefficient for the CAI had the lowest p-value, but interestingly, the coefficient was negative (Table A3). The feature importances for the LiDAR and combined models (Figure 5B,C) show that the median canopy height (chm_median) was the strongest predictor of the RDM in the LiDAR and combined RF models (Figure 5). The NDLI was the third most important feature in the combined RF model (Figure 5C). Only the coefficients for chm_std and chm_pct90 (negative coefficients) were significantly different from zero in the LiDAR LR model, while those two plus chm_max (negative coefficient) and chm_mean (positive coefficient) were significant in the combined LR model (Table A3).
Moran’s I assessed the degree of spatial autocorrelation in the unexplained variance in the remote sensing predictive models. A negative Moran’s I indicates overdispersion in the model residuals, while a positive value indicates clustering (which would suggest a lack of spatial independence in the dependent variable or a missing, spatially autocorrelated predictor). In the LiDAR models, the median local Moran’s I values for LR were not significantly different from zero, indicating residual patterns consistent with complete spatial randomness; the same was true for the combined RF model (Figure 6). In the spectral models, median local Moran’s I values of <0 for both RF and LR indicate overdispersion. This suggests that the dependent variable may have had a patchy pattern that could have resulted from competitive processes or a missing predictor. In the combined models, only the Moran’s I values for LR indicated clustering of errors when both predictor sets were used. These patterns of Moran’s I indicate that the poorly performing spectral models, as well as the combined LR model, may also have been affected by a violation of the independent and identically distributed (i.i.d.) assumption, which can lead to overfitting and model misspecification. RF was more robust than LR to the effects of spatial autocorrelation on the model performance.
When evaluating the performance of the LiDAR models within the canopy structure classes after the a posteriori stratification of the plots into those containing standing, mixed, and laying vegetation (Table 2, Figure 7), the LiDAR models predicted the RDM better in plots with standing vegetation (R2 = 0.63 for LR and R2 = 0.81 for RF) as opposed to mixed (R2 = 0.05 for LR and R2 = 0.21 for RF) and laying vegetation (R2 = 0.16 for LR and R2 = 0.01 for RF). RF outperformed LR when the vegetation was standing and mixed, but LR outperformed RF when the vegetation was flattened. However, when the two outliers present in the LR laying model were removed (Figure 7), RF outperformed LR, although both performed very poorly (R2 = 0.03 for LR and R2 = 0.07 for RF; Figure A5). The RF MAE (ranging from MAE = 7.70 in standing models to MAE = 17.78 in laying models) was consistently lower than that of LR (MAE = 11.23 in standing models to MAE = 26.42 in laying models) across all the models. However, the RF and LR MAEs increased twofold when the vegetation was mixed and flattened. In the RF models, the median canopy height (chm_median) was the most important variable when the vegetation was mixed and flattened (accounting for approximately 25% of the importance in mixed vegetation and 30% in laying vegetation); however the mean canopy height (chm_mean) was the most important variable when the vegetation was standing, with an importance of approximately 25% (Figure 8).

5. Discussion

5.1. Key Findings

The key findings from this study are as follows:
  • LiDAR-based estimates of the RDM performed well only in plots with standing vegetation.
  • The spectral models had essentially no predictive power for the RDM.
When comparing estimates of the RDM based on spectral features used for detecting non-photosynthetic vegetation (derived from hand-held spectroscopy) with canopy height model metrics derived from UAV-borne LiDAR, we found that the LiDAR metrics performed better than the spectral features. The three NPV spectral indices examined explained less than 10% of the variance in our RDM data and degraded the performance of models combining LiDAR and spectral predictors relative to those using LiDAR only. While the LiDAR canopy height metrics performed better than the spectral features, these models explained only 22–37% of the variance in our sample, which is promising but still not a strong relationship from an operational standpoint. Only when we accounted for the variation in the RDM structure observed in the field did it become clear that LiDAR-based RDM estimates performed well only in standing RDM plots, where the amount of vertically structured RDM correlated very strongly with the LiDAR canopy height metrics and where the RF model explained 81% of the variance.
In the following subsections, we discuss the modeling methods used, insights from the spectral data, the consideration of the vegetation structure for making LiDAR estimates, and future work to improve the model performance and the scalability of UAV LiDAR data.

5.2. Comparing Analysis Methodologies

Comparing the linear (LR) and nonlinear (RF) regression models relating the RDM to the remote sensing predictors revealed the strengths and weaknesses of both approaches. LR outperformed the RF models globally; however, RF outperformed LR for LiDAR models where the vegetation was classified as standing or mixed and performed more poorly where the vegetation was flattened. In application, RF is the more useful model because it estimates standing and mixed RDM with a lower absolute error and is less sensitive to spatial autocorrelation in the data. Linear models such as LR are more parsimonious, model-driven, interpretable, suitable for making inferences from small calibration datasets that meet their assumptions, and more computationally efficient than ML models like RF (Table A3) [71]. Future work can explore alternative modeling approaches, such as Artificial Neural Networks (ANNs) and ensemble methods, to improve the model performance and generalizability for predicting the RDM.
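A minimal sketch of such an LR-versus-RF comparison is shown below using scikit-learn and cross-validation. The data here are a synthetic stand-in (72 plots, six CHM-like metrics, RDM driven nonlinearly by one height variable plus noise); the variable names and data-generating process are assumptions for illustration only.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold, cross_val_score

rng = np.random.default_rng(0)
# Synthetic stand-in: 72 plots x 6 canopy height metrics (meters),
# with RDM (g) depending nonlinearly on the third metric.
X = rng.uniform(0.0, 0.5, size=(72, 6))
y = 120.0 * X[:, 2] ** 2 + 40.0 * X[:, 2] + rng.normal(0.0, 3.0, 72)

cv = KFold(n_splits=5, shuffle=True, random_state=0)
r2_lr = cross_val_score(LinearRegression(), X, y, cv=cv, scoring="r2").mean()
r2_rf = cross_val_score(RandomForestRegressor(n_estimators=500, random_state=0),
                        X, y, cv=cv, scoring="r2").mean()
print(f"LR R2 = {r2_lr:.2f}, RF R2 = {r2_rf:.2f}")
```

Cross-validated R2, rather than in-sample fit, is the fairer basis for this comparison because RF can fit the calibration data nearly perfectly.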

5.3. Spectral Data

Our analysis of the spectroscopic relationships was fundamentally limited by the data availability, since the environmental conditions only permitted usable data collection at four out of eight sites. This illustrates a key limitation of reflectance spectroscopy, namely its sensitivity to the illumination and atmospheric conditions. For the sites where reflectance spectra were collected, we found no meaningful relationship between any of the spectral indices and the RDM when using the RF and LR models. It is a known limitation of optical imagery that it tends to saturate for high-density vegetation with nearly complete area coverage [45,72]. Notably, an exploratory partial least squares regression (PLSR) was also conducted using 1024 individual spectral channels as candidate predictors for the RDM, and the results were comparable to those obtained using the combined SWIR indices, with the model explaining little of the variance in the RDM.
However, all three NPV SWIR indices did contribute to the RF and LR models (Figure 5A, Table A3). This is consistent with some previous studies which retrieved the NPV biomass from optical remote sensing data [26]. Future studies could more rigorously evaluate the use of SWIR data for RDM estimation using spectroscopic datasets over a wider range of vegetation density and grazing management conditions [26,40,49,73,74,75,76]. In addition, future studies could focus on machine learning-based spectral feature selection [26] for predicting the NPV biomass and exploring UAV-mounted spectroscopic–LiDAR data fusion methods for wall-to-wall comparisons of the scalability of LiDAR and spectral data.
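For reference, two of the SWIR NPV indices discussed here can be computed from a field spectrum as in the sketch below, using the published band definitions for the CAI [36] and NDLI [38]. The nearest-channel sampling helper is an assumption of this illustration, and the exact LCAI formulation is omitted.

```python
import numpy as np

def band(wavelengths, reflectance, wl_nm):
    """Reflectance at the spectrometer channel nearest to wl_nm (helper)."""
    i = int(np.argmin(np.abs(np.asarray(wavelengths) - wl_nm)))
    return float(np.asarray(reflectance)[i])

def cai(wl, r):
    # Cellulose Absorption Index: depth of the ~2100 nm cellulose feature
    # relative to its 2000/2200 nm shoulders (Nagler et al. 2003).
    return 100.0 * (0.5 * (band(wl, r, 2000) + band(wl, r, 2200))
                    - band(wl, r, 2100))

def ndli(wl, r):
    # Normalized Difference Lignin Index from log(1/R) at 1754 and 1680 nm
    # (Serrano et al. 2002).
    a = np.log(1.0 / band(wl, r, 1754))
    b = np.log(1.0 / band(wl, r, 1680))
    return (a - b) / (a + b)
```

A flat spectrum yields NDLI = 0 and CAI = 0; dry litter deepens the 2100 nm absorption and pushes the CAI positive.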

5.4. LiDAR Data

The LiDAR models outperformed the spectral models in estimating the plot-scale RDM, with the median canopy height being the most important predictor across the LiDAR and combined RF model types. Overall, at the JLDP, the mean vegetation height (chm_mean) produced the most effective predictions of the RDM where the vegetation was standing (R2 = 0.72–0.74), while the median vegetation height (chm_median) was the most important variable where the vegetation structure was heterogeneous (R2 = 0.23–0.36). The importance of chm_median in the unstratified LiDAR-only and combined models indicates that the central tendency of the vegetation height extracted from the LiDAR data was more informative than the canopy height extremes (chm_max or chm_pct90) for predicting the RDM across the plots, which aligns with the existing literature on using machine learning-based structural metric selection to quantify the grassland biomass using LiDAR data [48].
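The plot-scale canopy height metrics used as predictors (chm_mean, chm_median, and the rest) reduce to simple summary statistics over the CHM pixels within each clipping plot. The sketch below illustrates that extraction; it is not the processing chain used in the study.

```python
import numpy as np

def chm_metrics(chm, mask=None):
    """Plot-scale canopy height metrics from a CHM raster (DSM - DTM, meters).

    chm  : 2-D array of canopy heights
    mask : optional boolean array selecting pixels inside the clipping plot
    """
    h = chm[mask] if mask is not None else chm.ravel()
    h = h[np.isfinite(h)]  # drop nodata pixels
    return {
        "chm_max": float(h.max()),
        "chm_range": float(h.max() - h.min()),
        "chm_mean": float(h.mean()),
        "chm_std": float(h.std()),
        "chm_median": float(np.median(h)),
        "chm_pct90": float(np.percentile(h, 90)),
    }
```

For standing vegetation, chm_mean and chm_median track the bulk of the canopy, whereas chm_max and chm_pct90 respond to a few tall stems, which is consistent with the variable importance pattern reported above.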
When trying to operationalize UAV LiDAR to predict the plot-scale RDM, the vegetation structure (whether the vegetation is standing, flattened, or mixed) is critical to consider. The results in this study were highly dependent on the a posteriori stratification of the vegetation sites into those with standing, mixed, and laying RDM. The LiDAR models performed poorly where the RDM was ‘laying over’ horizontally as dense thatch rather than standing upright, suggesting that the canopy height is a good RDM proxy when the vegetation is standing upright (Figure 9). This finding aligns with [47], a study that used photogrammetric point clouds to estimate the green biomass and vegetation height at the 1 × 1 m quadrat scale in an area where the grassland vegetation was alive and standing.
In order for LiDAR data to be used operationally for RDM monitoring across California’s conservation grazing lands, the accuracy of traditional RDM sampling methods (clipping and visual observations) should be compared directly to that of remote sensing-based methods to determine whether the MAE decreases. Binning the UAV LiDAR predictions into the broad management classes used by conservation entities like The Nature Conservancy (low, medium, and high, as described in Table A2) would also show whether they match or exceed the accuracy of the current best practices for directing fuel treatments or targeted grazing across pastures where necessary.
Future studies should also consider the phenological (seasonal) stages of RDM and the direct effects of the species composition on RDM quantification. Given that the LiDAR data performed better when the vegetation was standing, future studies should consider performing UAV LiDAR earlier in the season and predicting the Fall RDM levels from environmental data (time since last grazed, wind, and/or rainfall). The species richness has a direct effect on models of the aboveground biomass in grasslands [45], and environmental data like biotic (plant coverage, evenness, species richness) and abiotic (soil texture, topography, evapotranspiration) variables can be used as predictors to model the aboveground biomass in semi-arid rangelands using regression models such as RF and LR [77,78]. Combining grazing treatments (as fixed effects) with remote sensing in generalized linear mixed models [79] can account for environmental complexity and site-specific conditions in different rangeland management units.
Photogrammetric point clouds can also be derived from high-resolution color photographs (an optical remote sensing approach) capturing vegetation at nadir and oblique angles and processed using structure-from-motion (SfM) [47] and voxel-based approaches to extract the canopy volume [80]. Photogrammetry is often a more cost-effective approach for modeling the plant structure than LiDAR sensors but is less reliable for direct measurements of the aboveground biomass, because photogrammetry can confuse bare soil with RDM (since it is a passive sensor approach) and often struggles more than LiDAR to retrieve soil surfaces beneath dense canopies [81,82]. However, the integration of UAV LiDAR data with SfM has been shown to improve the biomass mapping accuracy by combining SfM’s sensitivity to the upper canopy height with LiDAR’s ability to penetrate dense vegetation [83].
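A voxel-based canopy volume of the kind cited above can be sketched by counting occupied voxels in a height-normalized point cloud; the voxel size and ground-height threshold below are illustrative assumptions, not values from the cited studies.

```python
import numpy as np

def canopy_voxel_volume(points, voxel=0.05, height_cut=0.02):
    """Canopy volume (m^3) from a point cloud via voxel counting.

    points     : (n, 3) array of x, y, height-above-ground (meters)
    voxel      : voxel edge length (meters)
    height_cut : points at or below this height are treated as ground returns
    """
    p = points[points[:, 2] > height_cut]
    # Snap each point to its voxel index and count unique occupied voxels.
    idx = np.floor(p / voxel).astype(int)
    occupied = np.unique(idx, axis=0)
    return occupied.shape[0] * voxel ** 3
```

Because the count depends on point density as well as canopy structure, the voxel size would need to be matched to the sensor’s sampling in practice.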

5.5. Future Work to Increase LiDAR Data’s Scalability

The LiDAR methodology used in this study involved clipping plots to calibrate models and conducting two flights to extract both the vegetation canopy (digital surface model) and the bare ground (digital terrain model). Future studies should explore single-flight methods for retrieving the bare ground using advanced ground classification approaches such as Cloth Simulation Filtering [84] and multiscale curvature classification [85], and should compare the performance of ground classification algorithms across software packages [86] for dense laying and upright RDM canopies. Removing the need to calibrate remote sensing models with field data would reduce the cost and labor-intensive fieldwork associated with sensor calibration, in pursuit of the scalable mapping and monitoring of the RDM using UAV LiDAR.
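As a baseline for such single-flight ground retrieval, the simplest approach keeps the lowest return per horizontal grid cell. This naive sketch is only a stand-in for the robust filters named above (e.g., Cloth Simulation Filtering), which would be needed where dense thatch blocks ground returns; the cell size is an illustrative assumption.

```python
import numpy as np

def grid_minimum_dtm(points, cell=1.0):
    """Crude single-flight ground estimate: lowest return per grid cell.

    points : (n, 3) array of x, y, z returns from one LiDAR flight
    cell   : horizontal grid cell size (meters)
    Returns {(col, row): ground elevation} for occupied cells.
    """
    ij = np.floor(points[:, :2] / cell).astype(int)
    dtm = {}
    for key, z in zip(map(tuple, ij), points[:, 2]):
        if key not in dtm or z < dtm[key]:
            dtm[key] = float(z)
    return dtm
```

Subtracting such a DTM from a same-flight DSM would yield a CHM without the second (post-clipping) flight, at the cost of accuracy wherever no ground returns exist.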

6. Conclusions

Our results suggest that, with continuing refinements in the use of UAV LiDAR for extracting vegetation information and improvements in analysis software, UAV LiDAR can complement field sampling for fine-scale RDM mapping in Californian annual grasslands, especially where senescent herbaceous vegetation is standing, or earlier in the season before the vegetation begins to fall over and form dense thatch. In contrast, the models built using field spectra alone showed almost no predictive power for the RDM in these data. The goal of this study was to take the first step in exploring active (LiDAR) and passive (spectral) approaches to determine which sensor best characterizes the RDM variability. While UAV LiDAR systems show promise for RDM quantification where bare ground can be detected, their broad adoption by land managers will require reduced costs and processing complexity, and perhaps even automated models that calibrate, calculate, and output metrics like the aboveground biomass or RDM for the user. An essential next step following on from this study will be to compare UAV LiDAR results to those from airborne and spaceborne sensors to determine whether the methods that showed promise here are scalable across space and time.

Author Contributions

Conceptualization, B.M., H.S.B. and D.S.; methodology, B.M., H.S.B., J.F., L.C. and D.S.; software, B.M., L.C. and D.S.; validation, B.M., L.C. and D.S.; formal analysis, B.M., D.S. and J.F.; investigation, B.M., H.S.B., J.F., L.C. and D.S.; resources, D.S. and H.S.B.; data curation, B.M., L.C. and M.K.; writing—original draft preparation, B.M., H.S.B. and D.S.; writing—review and editing, B.M., H.S.B., J.F., L.C., M.K. and D.S.; visualization, B.M.; supervision, D.S., J.F. and H.S.B.; project administration, D.S. and H.S.B.; funding acquisition, D.S. and H.S.B. All authors have read and agreed to the published version of the manuscript.

Funding

B.M. gratefully acknowledges funding from the NASA FireSense program (Grant #80NSSC24K0145), The Oren Pollack Memorial Research Fund (The Nature Conservancy), The Master’s Research Scholarship (SDSU), The McFarland Geography Scholarship (SDSU), The Richard Wright Award in Cartography (SDSU), The Pitt and Virginia Warner Endowed Scholarship (SDSU), and The William and Vivian Finch Scholarship in Remote Sensing (SDSU). D.S. additionally acknowledges funding from the NASA FireSense Implementation Team (Grant #80NSSC24K1320), the NASA Land-Cover/Land Use Change program (Grant #NNH21ZDA001N-LCLUC), the EMIT Science and Applications Team program (Grant #80NSSC24K0861), the NASA Remote Sensing of Water Quality program (Grant #80NSSC22K0907), the NASA Applications-Oriented Augmentations for Research and Analysis Program (Grant #80NSSC23K1460), the NASA Commercial Smallsat Data Analysis Program (Grant #80NSSC24K0052), the USDA NIFA Sustainable Agroecosystems program (Grant #2022-67019-36397), the USDA AFRI Rapid Response to Extreme Weather Events Across Food and Agricultural Systems program (Grant #2023-68016-40683), the California Climate Action Seed Award Program, and the NSF Signals in the Soil program (Award #2226649).

Data Availability Statement

All the LiDAR data used in this study are publicly available in the Open Topography Community Dataspace: https://doi.org/10.5069/G9S180QV [87]. The residual dry matter weights, sampling locations, analysis data (containing all the indices and canopy height model values), and field spectra text files can be found in the Knowledge Network for Biocomplexity: doi:10.5063/F1M043WF [88].

Acknowledgments

We acknowledge the volunteers (Jon Witsell and Valerie Neale) and staff at the Jack and Laura Dangermond Preserve for their fieldwork assistance. We acknowledge Wes Ramos for his help with UAV piloting and fieldwork planning. We pay our respects to and acknowledge the traditional custodians of the land on which this research took place—the Chumash People. We understand the importance of recognizing this area’s rich history and culture, both past and present, as well as the vital role of Indigenous People in learning and research activities.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

Figure A1. Boxplot showing the distribution of RDM (g) across the study areas based on its ‘Field Weight’ and ‘Oven Weight’ after drying for 24 h at 65 °C.
Figure A2. Fieldwork workflow at the JLDP for each of the study areas in Figure 1. The first step was skipped in areas with no spectral data coverage due to the cloud cover preventing the accurate collection of spectral data.
Figure A3. Coregistration visualization of pre-clipping and post-clipping digital surface models derived for a roof skylight (stable reference object) from UAV LiDAR scans taken at 120 m above ground level (AGL) with a 3 cm spatial resolution and a 33.66 cm diameter circle representing the area of a residual dry matter (RDM) clipping hoop. (A) Orthophoto of the stable reference object. (B) Pre-clipping digital surface model (DSM). (C) Post-clipping DSM (D). False-color RGB composite combining pre- and post-clipping DSM data into a single color composite display. The pre-clipping DSM (Band 1 of the multi-layer color composite) is shown in red, while the post-clipping DSM (Band 2 of the multi-layer color composite) is shown in green and blue colors. Visual inspection revealed minimal misregistration along the edges of this roof’s skylight (misregistration was identified based on red and cyan coloration at the edges). The misregistration was in the order of a single 3 cm pixel or less. The overlapping DSMs demonstrate high-precision coregistration between the scans, which ensures the accurate computation of canopy height models at the scale of a clipping hoop.
Figure A4. Comparing linear regression (open circles) and random forest (filled circles) RDM prediction across spectral (left), LiDAR (middle), and combined (right) models.
Figure A5. LiDAR linear regression (LR) vs. random forest (RF) RDM prediction with 2 outliers removed from laying vegetation model shown in Figure 7. RF was less sensitive than LR to outliers. Open circles are LR and closed circles are RF.
Table A1. Overview of spectral and LiDAR predictors (independent variables) used for statistical analyses to predict plot-scale RDM across JLDP using LR and RF.
Model | Dependent Variable | Predictors
LiDAR (LR) | RDM (g): n = 72 | ‘chm_max’, ‘chm_range’, ‘chm_mean’, ‘chm_std’, ‘chm_median’, ‘chm_pct90’
LiDAR (RF) | RDM (g): n = 72 | ‘chm_max’, ‘chm_range’, ‘chm_mean’, ‘chm_std’, ‘chm_median’, ‘chm_pct90’
Spectral (LR) | RDM (g): n = 36 | ‘LCAI’, ‘NDLI’, ‘CAI’
Spectral (RF) | RDM (g): n = 36 | ‘LCAI’, ‘NDLI’, ‘CAI’
Combined (LR) | RDM (g): n = 36 | ‘chm_max’, ‘chm_range’, ‘chm_mean’, ‘chm_std’, ‘chm_median’, ‘chm_pct90’, ‘LCAI’, ‘NDLI’, ‘CAI’
Combined (RF) | RDM (g): n = 36 | ‘chm_max’, ‘chm_range’, ‘chm_mean’, ‘chm_std’, ‘chm_median’, ‘chm_pct90’, ‘LCAI’, ‘NDLI’, ‘CAI’
Table A2. Information on vegetation structure (whether RDM was predominantly standing, mixed, or flattened) and RDM management levels from Fall quarter of 2023 to Fall quarter of 2024.
Site Name | Management Zone | Vegetation Structure | Grazing Intensity (Based on RDM Levels Before Fall 2024 Field Campaign)
CMT Ungrazed | Cojo Coast | Standing | None
Steve’s Flat | Cojo Coast | Mixed | Med–High
Jalama Bull | Army Camp | Mixed | Low–Med
Jalachichi | Jalachichi | Mixed | Med–High
Jalama Horse | Tinta | Standing | Low–Med
East Tinta | Tinta | Mixed | Med–High
Cojo Cow | Cojo Ranch | Laying | Med–High
Jalama Mare | Army Camp | Laying | Med–High
Table A3. Linear regression table for the spectral-only, LiDAR-only, and combined models using the predictor sets listed in Table A1. The predictor, model set, β coefficients, Standard Error (STD Error), t, and p-values are listed.
Predictor | Model | β | STD Error | t | p-Value
(Intercept) | Spectral Only | 54.28 | 3.86 | 14.08 | 0
cai | Spectral Only | −18.73 | 6.71 | −2.79 | 0.01
lcai | Spectral Only | 9.42 | 5.41 | 1.74 | 0.09
ndli | Spectral Only | 13.13 | 5.44 | 2.41 | 0.02
(Intercept) | LiDAR Only | 49.11 | 2.34 | 20.99 | 0
chm_max | LiDAR Only | −0.21 | 17.55 | −0.01 | 0.99
chm_range | LiDAR Only | −25.48 | 21.79 | −1.17 | 0.25
chm_mean | LiDAR Only | 57.75 | 49.88 | 1.16 | 0.25
chm_std | LiDAR Only | 73.54 | 29.13 | 2.52 | 0.01
chm_median | LiDAR Only | 5.3 | 24.13 | 0.22 | 0.83
chm_pct90 | LiDAR Only | −70.26 | 35.22 | −1.99 | 0.05
(Intercept) | Spectral + LiDAR | 54.28 | 3.44 | 15.8 | 0
cai | Spectral + LiDAR | −8.6 | 8.21 | −1.05 | 0.3
lcai | Spectral + LiDAR | 8.22 | 5.72 | 1.44 | 0.16
ndli | Spectral + LiDAR | 3.88 | 5.62 | 0.69 | 0.5
chm_max | Spectral + LiDAR | −56.55 | 31.88 | −1.77 | 0.09
chm_range | Spectral + LiDAR | 36.21 | 29.19 | 1.24 | 0.23
chm_mean | Spectral + LiDAR | 113.58 | 51.84 | 2.19 | 0.04
chm_std | Spectral + LiDAR | 44.96 | 25.91 | 1.74 | 0.09
chm_median | Spectral + LiDAR | −40.34 | 25.83 | −1.56 | 0.13
chm_pct90 | Spectral + LiDAR | −84.86 | 43.68 | −1.94 | 0.06

References

  1. Allen-Diaz, B.H.; Jackson, R.D. Herbaceous Responses to Livestock Grazing in Californian Oak Woodlands: A Review for Habitat Improvement and Conservation Potential. USDA For. Serv. 2005, 127–144. [Google Scholar]
  2. Buckley Biggs, N.; Huntsinger, L. Managed Grazing on California Annual Rangelands in the Context of State Climate Policy. Rangel. Ecol. Manag. 2021, 76, 56–68. [Google Scholar] [CrossRef]
  3. Dass, P.; Houlton, B.Z.; Wang, Y.; Warlind, D. Grasslands May Be More Reliable Carbon Sinks than Forests in California. Environ. Res. Lett. 2018, 13, 074027. [Google Scholar] [CrossRef]
  4. Ferranto, S.; Huntsinger, L.; Getz, C.; Nakamura, G.; Stewart, W.; Drill, S.; Valachovic, Y.; DeLasaux, M.; Kelly, M. Forest and Rangeland Owners Value Land for Natural Amenities and as Financial Investment. Calif. Agric. 2011, 65, 184–191. [Google Scholar] [CrossRef]
  5. Huntsinger, L.; Oviedo, J.L. Ecosystem Services Are Social–Ecological Services in a Traditional Pastoral System: The Case of California’s Mediterranean Rangelands. Ecol. Soc. 2014, 19, art8. [Google Scholar] [CrossRef]
  6. Jantz, P.A.; Preusser, B.F.L.; Fujikawa, J.K.; Kuhn, J.A.; Bersbach, C.J.; Gelbard, J.L.; Davis, F.W. Regulatory Protection and Conservation. In California Grasslands: Ecology and Management; University of California Press: Oakland, CA, USA, 2007; pp. 297–318. [Google Scholar]
  7. Bardgett, R.D.; Bullock, J.M.; Lavorel, S.; Manning, P.; Schaffner, U.; Ostle, N.; Chomel, M.; Durigan, G.L.; Fry, E.; Johnson, D.; et al. Combatting Global Grassland Degradation. Nat. Rev. Earth Environ. 2021, 2, 720–735. [Google Scholar] [CrossRef]
  8. Bestelmeyer, B.T.; Okin, G.S.; Duniway, M.C.; Archer, S.R.; Sayre, N.F.; Williamson, J.C.; Herrick, J.E. Desertification, Land Use, and the Transformation of Global Drylands. Front. Ecol. Env. 2015, 13, 28–36. [Google Scholar] [CrossRef]
  9. Cameron, D.R.; Marty, J.; Holland, R.F. Whither the Rangeland?: Protection and Conversion in California’s Rangeland Ecosystems. PLoS ONE 2014, 9, e103468. [Google Scholar] [CrossRef]
  10. HilleRisLambers, J.; Yelenik, S.G.; Colman, B.P.; Levine, J.M. California Annual Grass Invaders: The Drivers or Passengers of Change? J. Ecol. 2010, 98, 1147–1156. [Google Scholar] [CrossRef]
  11. MacDonald, G.; Wall, T.; Enquist, C.A.F.; LeRoy, S.R.; Bradford, J.B.; Breshears, D.D.; Brown, T.; Cayan, D.; Dong, C.; Falk, D.A.; et al. Drivers of California’s Changing Wildfires: A State-of-the-Knowledge Synthesis. Int. J. Wildland Fire 2023, 32, 1039–1058. [Google Scholar] [CrossRef]
  12. Polley, H.W.; Briske, D.D.; Morgan, J.A.; Wolter, K.; Bailey, D.W.; Brown, J.R. Climate Change and North American Rangelands: Trends, Projections, and Implications. Rangel. Ecol. Manag. 2013, 66, 493–511. [Google Scholar] [CrossRef]
  13. Larson-Praplan, S. History of Rangeland Management in California. Rangelands 2014, 36, 11–17. [Google Scholar] [CrossRef]
  14. Butterfield, H.S.; Tsalyuk, M.; Schloss, C. Remote Sensing Increases the Cost Effectiveness and Long-Term Sustainability of the Nature Conservancy’s Residual Dry Matter Monitoring Program; The Nature Conservancy: San Francisco, CA, USA, 2014; p. 21. [Google Scholar]
  15. Bartolome, J.W. Guidelines for Residual Dry Matter on Coastal and Foothill Rangelands in California; University of California: Oakland, CA, USA, 2002; Publication 8092. [Google Scholar]
  16. Guenther, K.; Hayes, G. Monitoring Annual Grassland Residual Dry Matter: A Mulch Manager’s Guide for Monitoring Success, 2nd ed.; Wildland Solutions: Concord, CA, USA, 2008. [Google Scholar]
  17. Harris, N.R.; Frost, W.E.; McDougald, N.K.; George, M.R.; Nielsen, D.L. Long-term residual dry matter mapping for monitoring California hardwood rangelands. In Proceedings of the Fifth Symposium on Oak Woodlands: Oaks in California’s Challenging Landscape, San Diego, CA, USA, 22–25 October 2001; Standiford, R.B., Ed.; Pacific Southwest Research Station, Forest Service, U.S. Department of Agriculture: Albany, CA, USA, 2002. Gen. Tech. Rep. PSW-GTR-184. pp. 87–96. [Google Scholar]
  18. Fusco, E.J.; Finn, J.T.; Balch, J.K.; Nagy, R.C.; Bradley, B.A. Invasive Grasses Increase Fire Occurrence and Frequency across US Ecoregions. Proc. Natl. Acad. Sci. USA 2019, 116, 23594–23599. [Google Scholar] [CrossRef] [PubMed]
  19. Stechman, J. Fire Hazard Reduction Practices for Annual-Type Grassland. Rangelands 1983, 5, 2. [Google Scholar]
  20. Hulet, A.; Boyd, C.S.; Davies, K.W.; Svejcar, T.J. Prefire (Preemptive) Management to Decrease Fire-Induced Bunchgrass Mortality and Reduce Reliance on Postfire Seeding. Rangel. Ecol. Manag. 2015, 68, 437–444. [Google Scholar] [CrossRef]
  21. Larsen, R.E.; Shapero, M.W.K.; Striby, K.; Althouse, L.; Meade, D.E.; Brown, K.; Horney, M.R.; Rao, D.R.; Davy, J.S.; Rigby, C.W.; et al. Forage Quantity and Quality Dynamics Due to Weathering over the Dry Season on California Annual Rangelands. Rangel. Ecol. Manag. 2021, 76, 150–156. [Google Scholar] [CrossRef]
  22. Bartolome, J.W.; Allen-Diaz, B.H.; Barry, S.; Ford, L.D.; Hammond, M.; Hopkinson, P.; Ratcliff, F.; Spiegal, S.; White, M.D. Grazing for Biodiversity in Californian Mediterranean Grasslands. Rangelands 2014, 36, 36–43. [Google Scholar] [CrossRef]
  23. Ford, L.D.; Butterfield, H.S.; Van Hoorn, P.A.; Allen, K.B.; Inlander, E.; Schloss, C.; Schuetzenmeister, F.; Tsalyuk, M. Testing a Remote Sensing-Based Interactive System for Monitoring Grazed Conservation Lands. Rangelands 2017, 39, 123–132. [Google Scholar] [CrossRef]
  24. Tsalyuk, M.; Kelly, M.; Koy, K.; Getz, W.M.; Butterfield, H.S. Monitoring the Impact of Grazing on Rangeland Conservation Easements Using MODIS Vegetation Indices. Rangel. Ecol. Manag. 2015, 68, 173–185. [Google Scholar] [CrossRef]
  25. Butterfield, H.S.; Malmström, C.M. The Effects of Phenology on Indirect Measures of Aboveground Biomass in Annual Grasses. Int. J. Remote Sens. 2009, 30, 3133–3146. [Google Scholar] [CrossRef]
  26. Verrelst, J.; Halabuk, A.; Atzberger, C.; Hank, T.; Steinhauser, S.; Berger, K. A Comprehensive Survey on Quantifying Non-Photosynthetic Vegetation Cover and Biomass from Imaging Spectroscopy. Ecol. Indic. 2023, 155, 110911. [Google Scholar] [CrossRef]
  27. Rouse, J.W.; Haas, R.H.; Schell, J.A.; Deering, D.W. Monitoring Vegetation Systems in the Great Plains with ERTS. NASA Spec. Publ. 1974, 351, 309. [Google Scholar]
  28. Tucker, C.J.; Sellers, P.J. Satellite Remote Sensing of Primary Production. Int. J. Remote Sens. 1986, 7, 1395–1416. [Google Scholar] [CrossRef]
  29. Todd, S.W.; Hoffer, R.M.; Milchunas, D.G. Biomass Estimation on Grazed and Ungrazed Rangelands Using Spectral Indices. Int. J. Remote Sens. 1998, 19, 427–438. [Google Scholar] [CrossRef]
  30. Xue, J.; Su, B. Significant Remote Sensing Vegetation Indices: A Review of Developments and Applications. J. Sens. 2017, 2017, 1353691. [Google Scholar] [CrossRef]
  31. Fern, R.R.; Foxley, E.A.; Bruno, A.; Morrison, M.L. Suitability of NDVI and OSAVI as Estimators of Green Biomass and Coverage in a Semi-Arid Rangeland. Ecol. Indic. 2018, 94, 16–21. [Google Scholar] [CrossRef]
  32. Serbin, G.; Daughtry, C.S.T.; Hunt, E.R.; Reeves, J.B.; Brown, D.J. Effects of Soil Composition and Mineralogy on Remote Sensing of Crop Residue Cover. Remote Sens. Environ. 2009, 113, 224–238. [Google Scholar] [CrossRef]
  33. Daughtry, C.S.T.; Hunt, E.R.; McMurtrey, J.E. Assessing Crop Residue Cover Using Shortwave Infrared Reflectance. Remote Sens. Environ. 2004, 90, 126–134. [Google Scholar] [CrossRef]
  34. Nagler, P.L.; Daughtry, C.S.T.; Goward, S.N. Plant Litter and Soil Reflectance. Remote Sens. Environ. 2000, 71, 207–215. [Google Scholar] [CrossRef]
  35. Marsett, R.C.; Qi, J.; Heilman, P.; Biedenbender, S.H.; Carolyn Watson, M.; Amer, S.; Weltz, M.; Goodrich, D.; Marsett, R. Remote Sensing for Grassland Management in the Arid Southwest. Rangel. Ecol. Manag. 2006, 59, 530–540. [Google Scholar] [CrossRef]
  36. Nagler, P.L.; Inoue, Y.; Glenn, E.P.; Russ, A.L.; Daughtry, C.S.T. Cellulose Absorption Index (CAI) to Quantify Mixed Soil-Plant Litter Scenes. Remote Sens. Environ. 2003, 87, 310–325. [Google Scholar] [CrossRef]
  37. Daughtry, C.S.T.; Hunt, E.R.; Doraiswamy, P.C.; McMurtrey, J.E. Remote Sensing the Spatial Distribution of Crop Residues. Agron. J. 2005, 97, 864–871. [Google Scholar] [CrossRef]
  38. Serrano, L.; Peñuelas, J.; Ustin, S.L. Remote Sensing of Nitrogen and Lignin in Mediterranean Vegetation from AVIRIS Data. Remote Sens. Environ. 2002, 81, 355–364. [Google Scholar] [CrossRef]
  39. Numata, I.; Roberts, D.; Chadwick, O.; Schimel, J.; Galvao, L.; Soares, J. Evaluation of Hyperspectral Data for Pasture Estimate in the Brazilian Amazon Using Field and Imaging Spectrometers. Remote Sens. Environ. 2008, 112, 1569–1583. [Google Scholar] [CrossRef]
  40. Roberts, D.A.; Smith, M.O.; Adams, J.B. Green Vegetation, Nonphotosynthetic Vegetation, and Soils in AVIRIS Data. Remote Sens. Environ. 1993, 44, 255–269. [Google Scholar] [CrossRef]
  41. Ren, H.; Zhou, G. Estimating Aboveground Green Biomass in Desert Steppe Using Band Depth Indices. Biosyst. Eng. 2014, 127, 67–78. [Google Scholar] [CrossRef]
Figure 2. Example of the sample stratification using 90 × 90 m RDM plots in each study area. This stratification allowed for easy identification of the RDM hoops in the field and in the imagery for georeferencing. The map inset shows the 8 sampling locations (0.09 m² hoops) across the JLDP, as shown in Figure 1.
Figure 3. RDM reflectance spectra at 4 sites at JLDP (9 plots per site; n = 36). Reflectance (%) is plotted against wavelength (nm). Data in the 1850–1950 nm range are masked (whited out) and were excluded from calculations due to loss of signal from atmospheric water vapor absorption. The spectral regions used to compute the narrowband indices (LCAI, NDLI, CAI) are highlighted. A blue–red color ramp visualizes the variability in RDM mass (g) among plots.
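For readers unfamiliar with the narrowband indices named in this caption, their commonly cited literature formulations can be sketched in a few lines of Python. This is an illustrative sketch only, not the authors' code: it uses the standard CAI (Nagler et al.) and NDLI (Serrano et al.) definitions, and the exact band centers and the LCAI variant used in the paper may differ.

```python
import numpy as np

def cai(wl, refl):
    """Cellulose Absorption Index: 100 * [0.5 * (R2000 + R2200) - R2100].

    wl   : wavelengths in nm (ascending)
    refl : reflectance as a fraction (0-1) at each wavelength
    """
    r = lambda w: float(np.interp(w, wl, refl))  # reflectance at a band center
    return 100.0 * (0.5 * (r(2000) + r(2200)) - r(2100))

def ndli(wl, refl):
    """Normalized Difference Lignin Index using log(1/R) pseudo-absorbance."""
    r = lambda w: float(np.interp(w, wl, refl))
    a, b = np.log(1.0 / r(1754)), np.log(1.0 / r(1680))
    return (a - b) / (a + b)

# Featureless spectrum: both indices evaluate to ~0
wl = np.arange(350, 2501)
flat = np.full(wl.shape, 0.30)

# A cellulose-like absorption dip near 2100 nm drives CAI positive
dip = flat - 0.10 * np.exp(-(((wl - 2100) / 30.0) ** 2))
```

Dry plant material exhibits the ~2100 nm ligno-cellulose absorption that CAI targets, which is why such indices are candidates for RDM estimation; green vegetation and wet soil suppress the feature.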
Figure 4. Example of gridded LiDAR CHM pixels at 3 cm × 3 cm spatial resolution within the 13.25 in (33.66 cm) diameter RDM hoop. There were 100 pixels within each hoop (n = 100), from which zonal statistics were extracted for the statistical analyses. The sampling areas were visually inspected prior to analysis and adjusted to ensure that only the clipped vegetation was included and that the sampling hoop itself was excluded from the delineation.
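The pixel-extraction step described in this caption can be illustrated with a short Python sketch. This is not the authors' processing code: the CHM tile below is synthetic, and only the hoop geometry (33.66 cm diameter, 3 cm pixels, roughly 100 pixels per hoop) follows the caption.

```python
import numpy as np

def hoop_zonal_stats(chm, center_rc, radius_m=0.3366 / 2, res_m=0.03):
    """Zonal statistics for CHM pixels falling inside a circular RDM hoop.

    chm       : 2D array of canopy heights (m), one value per 3 cm cell
    center_rc : (row, col) of the hoop center in pixel coordinates
    """
    rows, cols = np.indices(chm.shape)
    r0, c0 = center_rc
    # Distance of each pixel from the hoop center, converted to meters
    dist_m = np.hypot(rows - r0, cols - c0) * res_m
    heights = chm[dist_m <= radius_m]
    return {
        "n_pixels": int(heights.size),
        "mean": float(heights.mean()),
        "max": float(heights.max()),
        "std": float(heights.std()),
    }

# Synthetic ~1 m x 1 m CHM tile with ~10 cm grass heights
rng = np.random.default_rng(0)
chm = np.clip(rng.normal(0.10, 0.03, (34, 34)), 0.0, None)
stats = hoop_zonal_stats(chm, center_rc=(17, 17))  # ~100 pixels inside hoop
```

In practice the hoop footprint would be delineated from the georeferenced point cloud or imagery and checked visually, as the caption notes.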
Figure 5. RF feature importance plots for (A) spectral-only data, (B) LiDAR-only data, and (C) combined data.
Figure 6. Moran’s I boxplots calculated using LR (gray) and RF (black) model residuals from each study area across JLDP, compared across spectral-only, LiDAR-only, and combined models.
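Global Moran's I, as summarized in this figure, measures whether model residuals at nearby plots are more similar than expected by chance (values near 0 indicate no spatial autocorrelation; positive values indicate clustering of errors). Below is a minimal NumPy sketch with hypothetical plot coordinates and an assumed inverse-distance weighting within a 100 m band; the weights specification actually used in the study may differ.

```python
import numpy as np

def morans_i(values, coords, band_m=100.0):
    """Global Moran's I with inverse-distance weights inside a distance band.

    values : 1D array, e.g., model residuals at each RDM plot
    coords : (n, 2) array of plot x/y positions in meters
    """
    z = values - values.mean()
    n = len(values)
    # Pairwise distances between all plots
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
    w = np.zeros_like(d)
    neighbors = (d > 0) & (d <= band_m)
    w[neighbors] = 1.0 / d[neighbors]
    return (n / w.sum()) * (w * np.outer(z, z)).sum() / (z ** 2).sum()

# Hypothetical 5 x 5 grid of plots spaced 30 m apart
xy = np.array([[30.0 * i, 30.0 * j] for i in range(5) for j in range(5)])
trend = xy[:, 0].copy()  # residuals increasing west-to-east: clustered
i_clustered = morans_i(trend, xy)  # clearly positive
```

Under no spatial autocorrelation the expected value is −1/(n−1), slightly below zero, which is why residual Moran's I values near zero suggest the model has captured the spatial structure in RDM.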
Figure 7. Scatterplots comparing RF and LR model predictions of plot-scale RDM across vegetation structure classes, color coded as red for standing, green for mixed, and blue for laying vegetation. Open circles denote linear regression (LR); closed circles denote random forest (RF).
Figure 8. RF variable importance for standing, mixed, and laying vegetation. Color scheme matches that in Figure 7.
Figure 9. Photographic depiction of standing (left) vs. laying (right) vegetation, used during the a posteriori stratification of plots into three classes: standing vegetation, mixed vegetation (sites with a combination of standing and laying vegetation), and laying vegetation (sites where vegetation was lying in piles of dense thatch).
Table 1. Cross-validation performance of linear regression (LR) and random forest (RF) RDM models using spectral-only, LiDAR-only, and combined predictor sets. Leave-one-out cross-validation (LOOCV) coefficient of determination (R²) and mean absolute error (MAE), p-values for the linear models, standard deviation (SD) of the errors, number of predictors, and sample size (N) are listed for each model.
Model          | LOOCV R² | LOOCV MAE | Error SD | p-Value | Predictors | N
Spectral (LR)  | 0.09     | 18.26     | 24.33    | <0.05   | 3          | 36
Spectral (RF)  | 0.06     | 18.70     | 24.47    | N/A     | 3          | 36
LiDAR (LR)     | 0.37     | 16.68     | 21.29    | <0.05   | 6          | 72
LiDAR (RF)     | 0.22     | 18.20     | 23.67    | N/A     | 6          | 72
Combined (LR)  | 0.15     | 20.54     | 25.69    | <0.05   | 9          | 36
Combined (RF)  | 0.07     | 18.45     | 24.44    | N/A     | 9          | 36
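The LOOCV statistics above come from refitting each model n times, holding out one sample per fold and predicting it with the model trained on the remainder. A minimal NumPy sketch for the linear-regression case follows; the data below are synthetic and the variable names illustrative, not the study's.

```python
import numpy as np

def loocv_scores(X, y):
    """Leave-one-out cross-validated R2 and MAE for ordinary least squares."""
    n = len(y)
    A = np.column_stack([np.ones(n), X])  # design matrix with intercept
    preds = np.empty(n)
    for i in range(n):
        keep = np.arange(n) != i          # drop the held-out sample
        coef, *_ = np.linalg.lstsq(A[keep], y[keep], rcond=None)
        preds[i] = A[i] @ coef            # predict the held-out sample
    resid = y - preds
    r2 = 1.0 - (resid ** 2).sum() / ((y - y.mean()) ** 2).sum()
    return r2, np.abs(resid).mean()

# Synthetic example: RDM (g) roughly proportional to mean canopy height (cm)
rng = np.random.default_rng(1)
height = rng.uniform(5.0, 40.0, 36)
rdm = 3.0 * height + rng.normal(0.0, 5.0, 36)
r2, mae = loocv_scores(height.reshape(-1, 1), rdm)
```

The same folds can be produced with scikit-learn's LeaveOneOut splitter; the hand-rolled loop is shown only to make the procedure explicit.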
Table 2. Performance of RF and LR models predicting RDM separately for 3 vegetation structure classes (standing, mixed, laying). R² and MAE after LOOCV are listed for each model and class, along with the sites and sample sizes (N) associated with each structural class.
Vegetation Structure | RF R² (LOOCV) | RF MAE (LOOCV) | LR R² (LOOCV) | LR MAE (LOOCV) | Sites                                             | N
Standing             | 0.81          | 7.70           | 0.63          | 11.23          | CMT Ungrazed, Jalama Horse                        | 18
Mixed                | 0.21          | 18.72          | 0.05          | 22.00          | Jalachichi, Steve's Flat, Jalama Bull, East Tinta | 36
Laying               | 0.01          | 17.78          | 0.16          | 26.42          | Cojo Cow, Jalama Mare                             | 18
Markman, B.; Butterfield, H.S.; Franklin, J.; Coulter, L.; Katkowski, M.; Sousa, D. Evaluating UAV LiDAR and Field Spectroscopy for Estimating Residual Dry Matter Across Conservation Grazing Lands. Remote Sens. 2025, 17, 2352. https://doi.org/10.3390/rs17142352
