Article

RUSH: Rapid Remote Sensing Updates of Land Cover for Storm and Hurricane Forecast Models

by Chak Wa (Winston) Cheang 1,*, Kristin B. Byrd 1, Nicholas M. Enwright 2, Daniel D. Buscombe 3, Christopher R. Sherwood 4 and Dean B. Gesch 5

1 U.S. Geological Survey Western Geographic Science Center, Moffett Field, CA 94035, USA
2 U.S. Geological Survey Wetland and Aquatic Research Center, Lafayette, LA 70506, USA
3 Applied Coastal Research and Engineering, Washington State Department of Ecology, Olympia, WA 98513, USA
4 U.S. Geological Survey Woods Hole Coastal and Marine Science Center, Woods Hole, MA 02543, USA
5 U.S. Geological Survey Earth Resources Observation and Science (EROS) Center, Sioux Falls, SD 57198, USA
* Author to whom correspondence should be addressed.
Remote Sens. 2025, 17(18), 3165; https://doi.org/10.3390/rs17183165
Submission received: 16 May 2025 / Revised: 22 August 2025 / Accepted: 29 August 2025 / Published: 12 September 2025

Abstract

Coastal vegetated ecosystems, including tidal marshes, vegetated dunes, and shrub- and forest-dominated wetlands, can mitigate hurricane impacts such as coastal flooding and erosion by increasing surface roughness and reducing wave energy. Land cover maps can be used as input to improve simulations of surface roughness in advanced hydro-morphological models. Consequently, there is a need for efficient tools to develop up-to-date land cover maps that include the accurate distribution of vegetation types prior to an extreme storm. In response, we developed the RUSH tool (Rapid remote sensing Updates of land cover for Storm and Hurricane forecast models). RUSH delivers high-resolution maps of coastal vegetation for near-real-time or historical conditions via a Jupyter Notebook application and a graphical user interface (GUI). The application generates 3 m spatial resolution land cover maps with classes relevant to coastal settings, especially along mainland beaches, headlands, and barrier islands, as follows: (1) open water; (2) emergent wetlands; (3) dune grass; (4) woody wetlands; and (5) bare ground. These maps are developed by applying one of two seasonal random-forest machine learning models to Planet Labs SuperDove multispectral imagery. Cool Season and Warm Season Models were trained on 665 and 594 reference points, respectively, located across study regions in the North Carolina Outer Banks, the Mississippi Delta in Louisiana, and a portion of the Florida Gulf Coast near Apalachicola. Cool Season and Warm Season Models were tested with 666 and 595 independent points, with an overall accuracy of 93% and 94%, respectively. The Jupyter Notebook application provides advanced users with a flexible platform for customization, whereas the GUI, designed with user-experience feedback, provides non-experts with access to remote sensing capabilities. This application can also be used for long-term coastal geomorphic and ecosystem change assessments.

1. Introduction

In recent years, an increasing number of hurricanes have caused extreme flooding and erosion in coastal regions [1]. For example, in 2024 Hurricane Helene produced a storm surge in Florida, USA, that was two meters above mean higher high water (MHHW) [2]. Hurricane Sandy, which struck the U.S. Atlantic Coast in 2012, generated large significant wave heights (Hs) and eroded four meters of sediment at certain breach locations [3]. These events endanger lives and impact property, infrastructure, and socio-economic activities in coastal regions. On average, since 1980, each hurricane event in the U.S. has caused USD 22.8 billion in damages [4]. Hurricanes pose a threat to the natural environment, as well as to the built environment. Due to the rising number of extreme hurricane events, coastal managers and researchers need a better understanding of how hurricanes can alter the coast and the role of the coastal environment in controlling their impacts.
Coastal vegetated ecosystems, including tidal marshes, vegetated dunes, and shrub- and forest-dominated wetlands (including mangroves), help to lessen the impacts of hurricanes and extreme storms. For several hurricanes, research has shown that vegetation cover has significantly reduced the likelihood and/or severity of coastal flooding, erosion, and barrier island breaching [5,6,7]. In addition to cover, the spatial distribution of vegetation also influences the impact of hurricanes [5,8]. However, these ecosystems are subject to degradation and change each year from extreme storms and human activity (e.g., conversion or loss due to development). Based on the National Oceanic and Atmospheric Administration (NOAA) Coastal Change Analysis Program (C-CAP) data, some of the most extensive coastal land cover changes have occurred in the southeastern U.S., mainly due to conversion of wetlands to open water and urban development along the Gulf Coast [9]. Therefore, spatially characterizing coastal land cover types is crucial for advancing our understanding of the potential impacts of hurricanes.
Vegetation can influence hydrological impacts by increasing ground surface roughness or friction, which reduces flow velocity. Surface roughness is often characterized by Manning’s n, a roughness coefficient used in coastal models [10,11]. In these models, different vegetation types are assigned Manning’s n values from a lookup table according to their typical structure, density, and height, and they have higher values than those assigned to bare surfaces. For example, the presence of woody wetlands and emergent wetlands increases the Manning’s n values relative to bare ground [8,10,12,13].
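To make the lookup-table step concrete, the sketch below shows one way a classified land cover grid could be translated into a Manning's n grid in Python; the numeric class codes and the roughness coefficients are illustrative placeholders rather than values taken from any particular model configuration or reference.

```python
import numpy as np

# Illustrative Manning's n lookup by land cover class code; placeholder values only.
MANNING_N = {
    1: 0.020,  # open water
    2: 0.050,  # emergent wetlands
    3: 0.040,  # dune grass
    4: 0.100,  # woody wetlands
    5: 0.025,  # bare ground
}

def landcover_to_manning(classified: np.ndarray) -> np.ndarray:
    """Map a 2-D array of land cover class codes to a grid of Manning's n coefficients."""
    roughness = np.full(classified.shape, np.nan, dtype="float32")
    for code, n_value in MANNING_N.items():
        roughness[classified == code] = n_value
    return roughness

# Example: a tiny 3 x 3 classified patch.
demo = np.array([[1, 1, 2], [2, 4, 4], [5, 3, 3]])
print(landcover_to_manning(demo))
```

A coastal model would then read the resulting grid as its spatially varying bed-friction input.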
Coastal numerical models that predict hurricane impacts, such as COAWST (Coupled-Ocean-Atmosphere-Wave-Sediment Transport Modeling System) [14], Delft3D [15], Xbeach [16], and SFINCS (Super-Fast INundation of CoastS) [17], utilize gridded data on vegetation type and characteristics that are then translated into surface roughness coefficients [5,12]. Modelers have often used existing land cover maps to derive roughness coefficient values. For example, Passeri et al. (2018) [5] and van der Lugt et al. (2019) [12] used 30 m land cover data from the U.S. Geological Survey National Land Cover Database (NLCD) to generate spatially explicit Manning’s n values, which significantly enhanced morphologic modeling results. Sherwood and others [8] also demonstrated how the geographic distribution of the Manning’s n coefficient derived from land cover type in Bolivar Peninsula, Texas, affected hydro-morphological change through simulations in Delft3D-4. Hegermiller and others [3] derived vegetation land cover data from 60 cm U.S. Department of Agriculture National Agriculture Imagery Program (NAIP) orthoimagery to demonstrate how three-dimensional drag created by dune grass and wetlands affects water movement in the COAWST vegetation module.
Recent advances in coastal models and computational power allow for grid sizes of less than 10 m. However, modelers are limited by the lack of up-to-date high-resolution coastal-vegetation-cover input data needed to generate near-real-time surface roughness model inputs. For example, 30 m NLCD is updated annually, and NAIP imagery is collected every two years. Development of a C-CAP product with a spatial resolution of 1 m is currently underway for coastal areas along the contiguous U.S., but the mapping frequency will be every five years [18]. Global products that are updated regularly, such as Dynamic World [19], do not have vegetation classes and training data specific to coastal regions. Additionally, for existing coastal products, dune grass is often not represented as a standalone class and may be classified as grassland, wetland, or developed. Finally, a major barrier to obtaining current vegetation cover data is the need for programming skills and remote sensing knowledge to generate useful products.
Prior coastal mapping efforts demonstrated how high-spatial-resolution coastal land cover mapping can be used to quantify how coastal vegetation and its habitats vary due to natural-disaster-induced coastal change and disturbance [18,20]. An increasing number of commercial small satellites (smallsats) with high spatial and temporal resolution create opportunities for generating these maps on demand through image classification techniques. For example, in 2020, Planet Labs launched its constellation of SuperDove multispectral satellites that provide 3 m near-daily coverage of the globe [21]. With high temporal frequency and eight spectral bands, these data provide the capability for rapid-repeat updates of vegetation maps.
To address the need for improved coastal model inputs on surface roughness, we developed a rapid-repeat, user-friendly, and open-source Jupyter Notebook (Jupyter version 2025.7.0) application and graphical user interface (GUI) that delivers high-resolution maps of coastal vegetation showing near-real-time conditions. Our application, the RUSH tool (Rapid remote sensing Updates of land cover for Storm and Hurricane forecast models), uses Planet SuperDove images to generate 3 m resolution near-daily coastal land cover maps for a user-defined Area of Interest (AOI) and timeframe. The RUSH tool generates land cover products that are tailored to the needs of advanced hydro-morphological models in forecasting coastal storm and hurricane impacts. In developing this tool, we addressed the following questions: (1) Can we develop a single, scalable remote sensing model of coastal land cover for multiple coastal regions? (2) How can we streamline the retrieval and processing of satellite images? (3) How can we best deliver the results that users need? To design an integrated tool that addresses these questions, we assessed the accuracy, efficiency, and interoperability of each tool component.

2. Materials and Methods

2.1. Study Areas

The RUSH tool was designed and tested in the following three regions: (1) the Outer Banks of North Carolina; (2) the Mississippi River Delta in Louisiana; and (3) a portion of the Florida Gulf Coast near Apalachicola, Florida (Figure 1). We selected these regions because of concurrent hurricane-impact modeling occurring there and the rich archive of land cover, vegetation, and imagery data available for these areas [22,23]. The North Carolina Outer Banks are composed of barrier islands that parallel the North Carolina coastline. The Mississippi River Delta in Louisiana includes diverse wetland types ranging from fresh to brackish to saline, which together represent approximately 40% of the coastal wetlands in the contiguous U.S. [24]. Coastal Louisiana experiences greater land area loss than all other states in the contiguous U.S. combined, with a net change of −4833 km2 from 1932 to 2016 [25]. The Florida site covers Saint Joseph Bay, the Saint Vincent Island National Wildlife Refuge (managed by the U.S. Fish and Wildlife Service), and several barrier islands along the east side of Apalachicola Bay. This region was a focus area for coastal numerical modeling and validation around Hurricane Michael, a Category 5 hurricane that made landfall in 2018 in the Florida Panhandle [26].
Within these AOIs, we developed masks from land cover classes in the 30 m 2016 C-CAP dataset and other region-specific datasets to constrain areas where the RUSH land cover maps would be generated. For the North Carolina and Florida sites, we restricted the AOIs to areas occupied by the following C-CAP natural vegetation classes: (1) Grassland/Herbaceous classes; (2) Forested (i.e., Deciduous Forest, Evergreen Forest, and Mixed Forest); (3) Scrub/Shrub classes; (4) Palustrine and Estuarine Wetlands (i.e., Palustrine Forested Wetland, Palustrine Scrub/Shrub Wetland, Palustrine Emergent Wetland (Persistent), Estuarine Forested Wetland, Estuarine Scrub/Shrub Wetland, and Estuarine Emergent Wetland); (5) Unconsolidated Shore class; (6) Barren Land class; and (7) Open Water classes (i.e., Open Water, Palustrine Aquatic Bed, and Estuarine Aquatic Bed). The state AOI extended 1 km from the shore into the C-CAP Open Water class. For our Louisiana site, we limited the mapping extent to wetland types classified as Open Water and Brackish, Intermediate, and Saline Wetlands in the 2021 Coastal Louisiana Vegetation Types map [24]. We excluded freshwater wetlands because we anticipated mapping challenges from the presence of floating aquatic vegetation and the reduced likelihood of hurricane impacts, given that these marshes are farther inland. Within the selected wetland types, we further limited the AOIs to areas mapped as the following C-CAP classes: (1) Grassland/Herbaceous classes; (2) Palustrine and Estuarine Wetlands (i.e., Palustrine Forested Wetland, Palustrine Scrub/Shrub Wetland, Palustrine Emergent Wetland (Persistent), Estuarine Forested Wetland, Estuarine Scrub/Shrub Wetland, and Estuarine Emergent Wetland); and (3) Open Water classes (i.e., Open Water, Palustrine Aquatic Bed, and Estuarine Aquatic Bed). The AOI extended 1 km from the shore into the C-CAP Open Water class.

2.2. Developing the Vegetation Classification Model

The workflow for the remote sensing model of coastal land cover is summarized in Figure 2. First, we used a wide range of high-resolution images to manually label the land cover types of the sampling points. Next, we used these labeled points to extract the Planet image reflectance corresponding to those land cover types to train and test the random forest model. Finally, we built a Jupyter Notebook application and a GUI that utilize the Planet API and pre-trained random forest model to generate land cover maps based on user inputs (Figure 2).

2.2.1. Land Cover Classes

We selected five general land cover classes for the RUSH tool, which included (1) open water; (2) dune grass; (3) emergent wetlands; (4) woody wetlands; and (5) bare ground (Figure 3). We selected these classes because they represent a broad range of Manning’s n values [11,12] and were present in each AOI. A goal for our study was to generate a single classification model that performed well in all three regions. With this goal in mind, we assumed that maintaining a small set of classes would increase the likelihood of achieving high map accuracy.

2.2.2. Planet Satellite Imagery

We selected PlanetScope SuperDove (PSB.SD) satellite images for the RUSH tool because of their 3 m spatial resolution, near-daily coverage, and eight spectral bands (red, green, blue, near-infrared, red edge, green 1, coastal blue, and yellow). The tool uses PlanetScope Ortho Scene products, which are orthorectified surface-reflectance image products [21]. Scenes include an eight-band usable data mask, which includes a cloud mask. SuperDove images are freely available for U.S.-government-funded scientific use through the National Aeronautics and Space Administration (NASA) Commercial Satellite Data Acquisition program, though imagery acquisition includes a one-month latency period (https://www.earthdata.nasa.gov/about/csda (accessed on 6 January 2023)).

2.2.3. Reference Data

We developed a dataset of labeled reference points for the RUSH tool that comprised 1331 points: 401 points in Florida, 469 points in North Carolina, and 461 points in Louisiana. Points were randomly generated using Esri ArcGIS Pro (Redlands, CA, USA) and stratified by C-CAP classes within each AOI. Planet SuperDove images have a 3 m resolution, so we retained datapoints if they were at least 10 m away from other points and the land cover was consistently homogeneous within a 10 m buffer around the datapoint. We assigned each datapoint to one of the RUSH model classes by visual interpretation of the most recent 0.6 m resolution NAIP imagery, a collection of high-resolution aerial orthoimagery provided by the U.S. Department of Agriculture (Table 1). A second reviewer verified these labels. We used high-resolution orthoimagery collected in North Carolina [23] to label dune-grass points on the Outer Banks. In Louisiana, we also used the Vegetation Types in Coastal Louisiana in 2021 dataset [24] to label the reference points.
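For illustration, the minimum-spacing rule can be enforced with a simple greedy thinning of candidate points, sketched below with scipy; the coordinates are assumed to be projected in meters, and this is only one possible implementation rather than the exact ArcGIS Pro procedure used here.

```python
import numpy as np
from scipy.spatial import cKDTree

def thin_points(xy: np.ndarray, min_dist: float = 10.0) -> np.ndarray:
    """Greedily keep candidate points so that no two retained points are closer than min_dist (m)."""
    kept_idx, kept_xy = [], []
    for i, p in enumerate(xy):
        if kept_xy:
            tree = cKDTree(np.vstack(kept_xy))
            if tree.query(p, k=1)[0] < min_dist:
                continue  # too close to an already retained point
        kept_idx.append(i)
        kept_xy.append(p)
    return np.array(kept_idx)

# Example: three candidate points; the second is only 5 m from the first and is dropped.
pts = np.array([[0.0, 0.0], [5.0, 0.0], [50.0, 50.0]])
print(thin_points(pts))  # -> [0 2]
```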

2.2.4. Model Training and Testing

We selected, trained, and tested our model using our reference points and Planet SuperDove 8-band, 3 m resolution, orthorectified surface-reflectance imagery with corresponding cloud masks for dates closest to the NAIP image dates (Table 1). We matched each labeled reference point with the corresponding Planet SuperDove image band reflectance values according to location. Points falling on cloud-masked pixels were discarded. For each reference point, we calculated several vegetation indices commonly used in wetland vegetation mapping and two-band vegetation indices (Table 2). The final attributed reference data were split in half, with half of the points used for training and half used for accuracy assessment.
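A minimal sketch of this point-attribution step is shown below. It samples SuperDove reflectance at each reference point with rasterio, derives two example indices (NDVI and SR), and splits the attributed points in half. The file names are hypothetical, only two of the indices in Table 2 are computed, and the assumed SuperDove band order should be verified against the product metadata.

```python
import geopandas as gpd
import numpy as np
import rasterio

# Hypothetical inputs: labeled reference points and a SuperDove surface-reflectance scene.
points = gpd.read_file("reference_points.geojson")
with rasterio.open("planet_superdove_sr.tif") as src:
    points = points.to_crs(src.crs)
    coords = [(geom.x, geom.y) for geom in points.geometry]
    bands = np.array(list(src.sample(coords)), dtype="float64")  # one 8-band vector per point

# Assumed PSB.SD band order (1-8): coastal blue, blue, green I, green, yellow, red, red edge, NIR.
red, nir = bands[:, 5], bands[:, 7]
points["NDVI"] = (nir - red) / (nir + red)
points["SR"] = nir / red

# Split the attributed points 50/50 into training and test sets.
train = points.sample(frac=0.5, random_state=42)
test = points.drop(train.index)
```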

2.2.5. Random Forest Model Description

The optimal machine learning model for image classification was selected after evaluating more than 20 model types in the Python lazypredict package [42]. We selected the random forest model due to its high accuracy and computational speed [43]. We used the random-forest machine learning model and training data attributed with Planet SuperDove band values and vegetation indices to classify the Planet SuperDove images into the RUSH tool land cover classes in the Python scikit-learn package [44]. We saved the trained random forest model in the pickle format using the Python joblib package [45]. We calculated the variable importance for each predictor variable, which evaluates the contribution of each random-forest input variable to an improved model fit [46]. We created a confusion matrix using the test reference points that included overall accuracy and the producer’s accuracy (1 − omission error) and the user’s accuracy (1 − commission error) for each class.
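The sketch below illustrates these training, persistence, and evaluation steps with scikit-learn and joblib, continuing from the attributed train and test tables above and assuming a numeric label column named class_code. The feature list is abbreviated (the full models use all eight bands plus fifteen indices), and the hyperparameters shown are assumptions rather than the tuned values used in RUSH.

```python
import joblib
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, confusion_matrix

# Abbreviated feature list; the RUSH models use all 8 SuperDove bands plus 15 indices.
feature_cols = ["NDVI", "SR"]

rf = RandomForestClassifier(n_estimators=500, random_state=42, n_jobs=-1)  # assumed settings
rf.fit(train[feature_cols], train["class_code"])

# Persist the trained model so the application can reuse it without retraining.
joblib.dump(rf, "rush_cool_season_model.pkl")

# Variable importance and accuracy assessment on the held-out test points.
importance = pd.Series(rf.feature_importances_, index=feature_cols).sort_values(ascending=False)
pred = rf.predict(test[feature_cols])
print(importance)
print("Overall accuracy:", accuracy_score(test["class_code"], pred))
print(confusion_matrix(test["class_code"], pred))
```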

2.2.6. Building a Year-Round Model

Our project’s goal was to develop a land-cover-classification modeling approach that could be applied at any time of the year. The initial random forest model (hereafter referred to as the “Cool Season Model”) was trained and tested using Planet SuperDove imagery collected in November for Louisiana, January for Florida, and April/May for North Carolina, which matched the timing of the most current NAIP imagery used to label the reference points (Table 1). Based on this initial effort, we extended our classification modeling to include a “Warm Season Model”, which was trained and tested with Planet SuperDove image data from August and September of the same year as the original images. Our intention for developing this second model was to increase the ability to map land cover across timepoints representing different vegetation phenology (greenness stages).
The Warm Season Model was developed by evaluating and, if necessary, relabeling reference points by visual interpretation of August and September Planet SuperDove images (Table 1) after eliminating 142 training and test points that fell into areas of cloud cover or uninterpretable pixels. There were a total of 1189 points in the new dataset, including 417 points in North Carolina, 414 points in Louisiana, and 358 points in Florida. We used the Warm Season reference points to train a new random-forest classification model using the variables and steps described above, and we tested its accuracy with the Warm Season testing points as described above.

2.2.7. Model Selection by Region and Season

While a single Cool Season Model and a single Warm Season Model were developed with data from all three regions, we developed a ruleset based on regional phenology differences to determine in which months RUSH applies each model within each state. For Louisiana and Florida, we initially assigned the Cool Season Model to the months of November through February based on expert input on when plants were senescent (Camille Stagg, Ph.D., Research Ecologist, USGS Wetland and Aquatic Research Center, Lafayette, LA, USA, written and oral communication, 6 November 2023; Michael Osland, Ph.D., Research Ecologist, USGS Wetland and Aquatic Research Center, Lafayette, LA, USA, written and oral communication, 31 October 2023) and literature on phenology and temperature trends [47,48]. The ruleset was grounded in a series of iterative accuracy assessments, in which we tested the regional accuracy of each model for different time periods. Based on these assessments, we assigned the RUSH tool to use the Warm Season Model in Louisiana all year round, and we assigned the tool to use the Cool Season Model in North Carolina all year round. For Florida, we assigned the tool to apply the Warm Season Model from March through October and the Cool Season Model from November through February.
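The resulting ruleset reduces to a small lookup, encoded directly in the sketch below; the function name and return values are illustrative and do not reflect the tool's internal API.

```python
def select_model(state: str, month: int) -> str:
    """Return which pre-trained RUSH model applies for a given state and month (1-12)."""
    state = state.strip().lower()
    if state == "louisiana":
        return "warm"  # Warm Season Model year-round
    if state == "north carolina":
        return "cool"  # Cool Season Model year-round
    if state == "florida":
        return "warm" if 3 <= month <= 10 else "cool"  # warm: Mar-Oct; cool: Nov-Feb
    raise ValueError(f"No ruleset defined for state: {state}")

print(select_model("Florida", 12))  # -> "cool"
```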

2.3. Software Components

We used Python for the development of the RUSH tool [49]. We provide access to the RUSH tool through either a Jupyter Notebook [50] or a graphical user interface (GUI) built using tkinter [51,52]. The RUSH tool has the following three main components: (1) an application programming interface (API) option module that handles Planet API communications via a Planet account; (2) an AOI analysis module that checks geometries of Planet images; and (3) a land cover map module that generates the land cover classification maps. The RUSH tool design was streamlined to maximize the functionality, usability, and outreach of the software based on feedback from users (Appendix A).
The first step (API option module) allows users to enter inputs including the following: (1) Sensing Date (Start/End); (2) Maximum Cloud Cover; (3) Planet API Key; (4) Folder Path; and (5) a user-defined AOI file (Figure 4B). The module offers the user the option to inspect one or more thumbnail images for cloud cover and image quality. The user then chooses images that suit their needs (Figure 4C). The second step (AOI analysis module), which is optional, offers users a custom-designed geospatial analytics tool that compares the extent of the selected Planet SuperDove images to the user-defined AOI (Figure 4D,E) and ensures selected Planet SuperDove images fully cover the user-defined AOI. The third step (Land Cover Map module) generates the raster data that represent the land cover map (Figure 4F) (refer to Appendix B for more information).
We compiled a 26-page user manual with detailed descriptions of every aspect of the GUI and Jupyter Notebook interfaces to accompany the tool [51]. The user manual is divided into the following two major parts: the GUI User Guide and the Jupyter Notebook Programming Guide. This ensures that users with different levels of programming knowledge can effectively utilize the software for their needs. The GUI portion of the User Guide is designed to allow anyone, at any level of coding experience, to operate the software and obtain results quickly and easily. The Jupyter Notebook Programming Guide is designed for advanced developers to access the source code of the software.

2.4. Hurricane Impact Case Study

We assessed the ability of the RUSH tool to detect coastal vegetation change with a case study for Hurricane Ida (2021). Hurricane Ida made landfall in Louisiana and resulted in significant damage [53]. On 29 August 2021, it made landfall as a Category 4 hurricane with a 130 kt wind speed on the Louisiana coast near the Mississippi River Delta. We selected Planet SuperDove images dated 9 August 2021 and 15 October 2021 to compare land cover between these dates on barrier islands northeast of the Mississippi River Delta. We selected the image dates given the similarity in tide-level measurements from the Grand Isle tide gauge (NOAA station identifier 8761724; https://tidesandcurrents.noaa.gov/stationhome.html?id=8761724 (accessed on 15 June 2024)) at the time of image acquisition.
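For this comparison, pre- and post-storm classifications produced on the same grid can be differenced into a simple from-to change map; the sketch below shows one hedged way to do this with rasterio, assuming hypothetical file names and that both rasters share extent, resolution, and class codes.

```python
import numpy as np
import rasterio

# Hypothetical pre- and post-Ida classification rasters on an identical grid.
with rasterio.open("landcover_2021-08-09.tif") as pre, \
     rasterio.open("landcover_2021-10-15.tif") as post:
    before = pre.read(1)
    after = post.read(1)
    profile = pre.profile

# Encode each from-to transition as before*10 + after (e.g., code 21 means class 2
# converted to class 1); unchanged pixels are set to 0.
change = np.where(before != after, before * 10 + after, 0).astype("uint8")

profile.update(dtype="uint8", count=1, nodata=0)
with rasterio.open("ida_change_map.tif", "w", **profile) as dst:
    dst.write(change, 1)
```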

3. Results

3.1. Model Accuracy and Variable Importance Assessments

The overall accuracy for both the Cool Season and Warm Season Models was 93% (Table 3) (refer to Appendix C for regional accuracy assessment tables). The producer’s and user’s accuracies of vegetated classes varied between 88% and 93%. The accuracy of non-vegetated classes varied between 93% and 98%. We also generated variable importance scores [43] for both models (Figure 5). The variable importance results indicate that NDVI has the highest importance in the Warm Season Model. For the Cool Season Model, the SR variable had the highest importance score.

3.2. Final Map Products

The RUSH tool delivers both intermediate and final outputs. The intermediate outputs contain the original Planet SuperDove assets and different levels of pre-processed Planet SuperDove images. These include individual images (or tiles), cloud masks associated with each of the Planet SuperDove images, a mosaicked image, a mosaicked cloud mask, a cloudless mosaicked image, two additional cloudless mosaicked images that are (1) clipped to the user-defined AOI and (2) clipped to both the user-defined AOI and the state AOI, and a cloudless mosaicked image that is clipped to both the state AOI and the user-defined AOI with 23 bands, including 8 bands from the Planet SuperDove images and 15 bands of vegetation indices. The final outputs consist of the following three main products: (1) a land cover classification map in raster format; (2) a land cover classification map in cloud-optimized raster format (refer to Figure 6 for examples); and (3) an accuracy assessment table.
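The cloud-optimized variant can be produced from the standard GeoTIFF with a single driver copy. The sketch below uses rasterio and the GDAL COG driver (available with GDAL 3.1 or later) with illustrative file names and compression settings; the exact conversion routine used in RUSH may differ.

```python
import rasterio.shutil

# Convert the classified GeoTIFF into a cloud-optimized GeoTIFF (COG);
# DEFLATE compression is one common choice for categorical rasters.
rasterio.shutil.copy("rush_landcover.tif", "rush_landcover_cog.tif",
                     driver="COG", compress="DEFLATE")
```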

3.3. Hurricane Impact Case Study

We used the RUSH tool to observe land cover changes, including changes in bare ground, water, and coastal vegetation, in a pre- and post-storm comparison. Our maps displayed the change from vegetated to non-vegetated land cover on the northeastern tip of Grand Isle after Hurricane Ida (Figure 7B,C). Shifts in bare ground were also mapped on the eastern side of the study area [54].

4. Discussion

This study used a combination of coastal ecology, remote sensing, and software design to enhance the usability of coastal remote sensing products for use in hurricane impact modeling. We developed the RUSH tool to generate accurate, on-demand, and high-resolution coastal vegetation maps for three U.S. coastal regions using Planet SuperDove satellite imagery. RUSH has multiple potential uses in the field of coastal ecology and hydro-morphology research, including long-term wetland ecological and geomorphic change assessments after hurricanes and storms.
The coastal environment is constantly changing due to storms, tides, waves, and human activities. Coastal monitoring techniques including lidar and cameras that monitor shoreline conditions help to reliably assess coastal processes [55,56]. Building on these techniques, satellite remote sensing capabilities for coastal monitoring are rapidly expanding [57]. However, coastal processes, including shoreline change, dune erosion and dune growth, overwash, outwash, vegetation changes, and other processes that reshape the coastal environment, have signatures that can occupy large areas (many square kilometers) but require high-resolution data (~10 m or finer) for identification. Existing satellite remote sensing products, including NLCD and C-CAP, and coastal monitoring techniques are either only suitable for site-specific measurements or have a spatial and temporal resolution too low to monitor these processes. The RUSH tool is a remote sensing solution that can be scaled to a large regional area to provide maps with high spatial and temporal resolution, which would help fulfill the need for monitoring fine-scale coastal processes (Figure 8).

4.1. Remote Sensing Model Performance

The accuracy assessment results indicated that our random forest models performed strongly across all three states (Appendix C). Tides, phenology, and wetland vegetation pose challenges in remote sensing of the coastal environment [58,59], but the flexibility of the random forest model allowed for good transferability of the model across sites. Vegetation classes were mapped with high accuracy, ranging from 82% to 95%. High map accuracy is important for our application given that coastal models are sensitive to higher surface roughness [8,13].
The lowest accuracy (82%) was associated with dune grass. Among the vegetation classes, dune grass was slightly more difficult to identify correctly. The dune grass in our study areas mostly occupied foredunes and was usually interspersed with, or surrounded by, sand with gradations in vegetation density. Thus, it was challenging to label dune-grass reference points. Our assigned reference points were intended to be homogeneous areas 10 m in diameter (3 × 3 Planet pixels). Because it was difficult to find dune-grass areas that entirely satisfied this requirement, we used dune-grass reference points where at least 50 percent of the land cover was visually identified as dune grass. Ultimately, our dune-grass classification accuracy was similar to that of other coastal mapping efforts (e.g., the Barrier Island Comprehensive Monitoring report [20]).
We compared the variable importance scores of each random-forest input variable for both the Cool Season and Warm Season Models. Based on model testing with location as a variable, the Cool Season Model’s overall accuracy was 92.6% (92.4% without the location variable), while the Warm Season Model’s overall accuracy was 93.4% (93.6% without the location variable). We did not include location as a random-forest variable input in the final models due to its low variable importance. The accuracies of the Cool Season and Warm Season Models were not significantly compromised by the lack of a location variable, which is a good indication that the random-forest remote sensing models are transferable across all three state AOIs and work reliably regardless of location.

4.2. Software Development Road Map and Next Steps

One of the primary goals of this project was to connect remote sensing, coastal modeling, and user experience to remove barriers to using high-resolution land cover data in coastal morphodynamic models. The RUSH tool was developed by remote sensing scientists working alongside coastal modelers to ensure that all functionalities were thoughtfully designed. The collaboration facilitated and improved project design at both the user and developer levels. At the user level, the workflow was designed to fulfill the land cover requirements of coastal models and to provide products that are analysis-ready for models that can incorporate spatial land cover parameters. At the developer level, the remote sensing and geospatial processing workflows were crafted to develop easily accessible and accurate products.
The land cover mapping and usability aspects of RUSH are modeled after CoastSat [60], a Python-based Google Earth Engine (GEE) [61] script that generates satellite-derived shorelines. Another closely related package, CoastSeg, also generates satellite-derived shorelines from Landsat and Sentinel images [62]. While the RUSH tool focuses on land cover classification rather than satellite-derived shorelines, both RUSH and CoastSeg are highly automated remote sensing packages that map the coastal environment. However, the RUSH tool uses high-spatial- and high-temporal-frequency Planet SuperDove images, instead of Landsat or Sentinel images, to enhance near-real-time mapping capability. Future enhancement of the RUSH tool could include the capability to map waterline and shoreline changes. An expanded classification model could improve the capability to distinguish wet and dry sand at the waterline under low-tide conditions. Detecting accurate waterlines near dry and wet sand is a challenge, as they look optically similar. Overcoming this challenge could significantly enhance satellite-derived shoreline accuracy [63].
The RUSH tool may benefit from an expansion in both spatial extent and spectral resolution. Its spatial extent has the potential to be expanded to other coastal regions by further training and testing the random forest model with new data. These new data would account for the variation in vegetation type, environmental conditions, and phenology at different coastal sites [64]. Further accuracy assessments would be needed to confirm the land cover classification accuracy of an updated model. The current framework of the API option module and the land cover map module could be edited to adopt the Sentinel API, such that users may also use Sentinel-2 images to generate land cover maps. Furthermore, adopting the use of deep learning models with high-resolution multispectral or even hyperspectral satellite imagery could help map vegetation species and characteristics relevant to coastal models, such as density and height, but ground truth data are needed to implement and validate these methods [65,66]. Deep learning models can also support submerged vegetation mapping with high-resolution multispectral satellite images [67,68].
In addition to providing land cover data for coastal morphodynamic models, the remote sensing tool could also be utilized for disaster response and operational purposes. The highly automated workflow, with high-resolution daily land cover classification maps, offers decision makers the information needed to rapidly assess impacts over large areas soon after a storm. However, land cover map availability is subject to cloud cover conditions.
Image pre-processing and machine learning classification require considerable computational power. Consequently, we needed to limit the maximum user-defined AOI to 100 km2 for commonly available computers because image pre-processing requires substantial computer memory to temporarily store image data. To resolve issues caused by computational limitations, the next steps in software development would be to move this tool to a cloud computing platform. GEE and Amazon Web Services (AWS) could be options for hosting and running the RUSH tool. GEE enables easy and fast access to Planet images, while AWS provides a flexible development and operating platform to host the RUSH tool, enhancing usability and computational capacity. These enhancements would significantly reduce the processing time and increase the spatial extent of the area of interest.

5. Conclusions

The goal of this project was to develop software to efficiently generate land cover maps for coastal areas for hurricane impact forecasting. Our study showed that the development of such software is feasible and that the finished product can produce maps that support a better understanding of coastal ecosystem resilience and hydro-morphological change due to hurricane and storm events. The accuracy assessments demonstrated that the random forest model was accurate and transferable across the three study areas. The all-season modeling approach accounted for changing phenology and ensured that accuracy remained consistent throughout the year. With Planet SuperDove images, we were able to compute near-daily land cover classification maps with a three-meter resolution. The automated geospatial and remote sensing workflow allows for seamless connections between processing steps and significantly reduces the time required for users to discover and process satellite imagery. By combining aspects of coastal ecology, remote sensing, and hydro-morphology, this software demonstrated the strength of multidisciplinary design and how the connection and collaboration among scientific fields fostered the development of this coastal remote sensing model.

Author Contributions

Conceptualization, C.W.C., K.B.B., N.M.E., D.B.G. and D.D.B.; methodology, C.W.C., K.B.B., N.M.E., D.D.B., C.R.S. and D.B.G.; software, C.W.C., D.D.B. and C.R.S.; validation, C.W.C., K.B.B. and N.M.E.; formal analysis, C.W.C., K.B.B. and N.M.E.; investigation, C.W.C., K.B.B., N.M.E., D.D.B., C.R.S. and D.B.G.; resources, K.B.B. and D.B.G.; data curation, C.W.C.; writing—original draft preparation, C.W.C., K.B.B. and N.M.E.; writing—review and editing, C.W.C., K.B.B., N.M.E., D.D.B., C.R.S. and D.B.G.; visualization, C.W.C.; supervision, K.B.B. and D.B.G.; project administration, K.B.B. and D.B.G.; funding acquisition, K.B.B. and D.B.G. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Oceanographic Partnership Program, which was funded by U.S. Department of the Navy Office of Naval Research, the USGS Community for Data Integration, the USGS Coastal/Marine Hazards and Resources Program Remote Sensing Coastal Change project, and USGS National Land Imaging Program. Planet satellite images were made available through NASA’s Commercial Satellite Data Acquisition Program.

Data Availability Statement

The RUSH tool, including the Jupyter Notebook and Graphical User Interface, referred to herein is publicly available on USGS GitLab (https://code.usgs.gov/western-geographic-science-center-public/a-tool-for-rapid-repeat-high-resolution-coastal-vegetation-maps-to-improve-forecasting-of-hurricane-impacts-and-coastal-resilience, accessed on 7 August 2025, [51]). The GitLab page includes a user manual that describes the specifications and procedures for using the software.

Acknowledgments

The authors are very thankful to the coastal modelers and partners who provided support and feedback throughout the research. The authors would like to thank Kathryn Smith, Sean Vitousek, Rachel Atkins, Mark Carson, and Aaron Murphy, who tested and offered feedback on the RUSH tool. The authors would also like to thank John Warner and Sean Vitousek, who wrote letters of recommendation on our behalf to the USGS Community for Data Integration (CDI)’s call for proposals, which granted the authors the funding to make the RUSH tool available. Any use of trade, firm, or product names is for descriptive purposes only and does not imply endorsement by the U.S. government.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
RUSH: Rapid Remote Sensing Updates of Land Cover for Storm and Hurricane Forecast Models
GUI: Graphical User Interface
MHHW: Mean Higher High Water
Hs: Significant Wave Height
NOAA: National Oceanic and Atmospheric Administration
C-CAP: Coastal Change Analysis Program
COAWST: Coupled-Ocean-Atmosphere-Wave-Sediment Transport Modeling System
SFINCS: Super-Fast INundation of CoastS
NLCD: National Land Cover Database
NAIP: National Agriculture Imagery Program
AOI: Area of Interest
PSB.SD: PlanetScope SuperDove
NIR: Near-Infrared
NDVI: Normalized Difference Vegetation Index
NDWI: Normalized Difference Water Index
WDRVI2: Wide Dynamic Range Vegetation Index
WDRVI5: Wide Dynamic Range Vegetation Index
SR: Simple Ratio
DVI: Difference Vegetation Index
GDVI: Green Difference Vegetation Index
GNDVI: Green Normalized Difference Vegetation Index
GRVI: Green Ratio Vegetation Index
IPVI: Infrared Percentage Vegetation Index
TBVI81: Two-Band Vegetation Index (band 8, band 1)
TBVI82: Two-Band Vegetation Index (band 8, band 2)
TBVI41: Two-Band Vegetation Index (band 4, band 1)
TBVI64: Two-Band Vegetation Index (band 6, band 4)
NDCI: Normalized Difference Chlorophyll Index
GEE: Google Earth Engine
AWS: Amazon Web Services
UX: User Experience
NOPP: National Oceanographic Partnership Program
NHCI: NOPP Hurricane Coastal Impacts
CRS: Coordinate Reference System
COG: Cloud-Optimized GeoTIFF

Appendix A. Developing a Flexible Software Interface

Instead of orienting the software toward a product-focused design, we employed a user-centric design that enables users with a range of experience levels to fully utilize the remote sensing models and obtain results intuitively. We prioritized the quality of the human–product interaction to deliver the optimal user experience during the system’s operations. Thus, we explored and documented the feasibility of using a GUI to run our remote sensing models. User experience (UX) surveys were conducted among USGS scientists to ensure that the UX was maximized and the functionalities were thoughtfully designed. We qualitatively evaluated UX feedback from eight USGS scientists, ranging from individuals new to Python to experienced coastal modelers with advanced Python skills. UX feedback was collected through a usability survey, in-person meetings, and moderated remote and in-person usability studies.
Usability survey: We queried testers about the level of convenience, usability, security, and efficiency of the software before the testing began. Testers evaluated the software and provided responses to the questions given. While usability surveys usually only allow one-way communication, they enabled us to widely recruit testers from different geographic regions and technical backgrounds.
In-person meetings: During the GUI design phase, we met testers in-person during the 2023 American Geophysical Union Annual Meeting and during a project meeting of the National Oceanographic Partnership Program (NOPP) Hurricane Coastal Impacts (NHCI). The NHCI project aims to enhance the understanding of hurricane impacts, and several experienced coastal modelers offered opinions on graphical renderings of alternative RUSH model interfaces.
Moderated remote and in-person usability studies: We set up multiple one-on-one in-person and remote meetings with our testers, and we observed their operation of both the GUI and the Jupyter Notebook application in real-time. To reduce the psychological influence on the performance (e.g., to avoid testers feeling anxious about someone watching them code or using the GUI), we used technical help and demo sessions to observe user behaviors. Based on users’ feedback, we included various error-detection mechanisms that monitor both the Planet API status and the user’s inputs. When an error occurred, the system was able to accurately locate the issue and provide users with a checklist to troubleshoot the issues. For instance, when the API failed to respond, the program analyzed the user inputs and the server status code at the same time so that it could pinpoint whether it was a user error or API failure. In addition, selected image previews are automatically saved in the backend so that users do not have to specify which images they choose during each run.
Throughout the user study sessions, we observed that users may accidentally select Planet SuperDove images outside of the user-defined AOI. Thus, we built the AOI Geometry Analysis Tool, which compares the geometries of the user-defined AOI and Planet SuperDove images. The AOI Geometry Analysis Tool also indicates what percentage of the user-defined AOI was covered by the Planet SuperDove image. In addition, we found that users might draw a user-defined AOI that was too large to be handled by the application. Thus, we added a function that calculates the user-defined AOI size and displays that in the AOI Geometry Analysis Tool.
Testers gave us positive feedback on the processing speed of the software and the graphic design of the user manual and felt that the instructions were easy to follow. In addition, testers also suggested that the user manual could provide more instructions about non-standardized steps, including the process of drawing user-defined AOIs and defining folder paths. Based on the above feedback, we added more instructions about defining folder paths and the requirements of user-defined AOIs.
During the NOPP in-person meeting, when the GUI was still in the design phase, users provided positive feedback on the renderings and were satisfied with the user interface layout design. During the user study phase, users also found the workflow intuitive and mentioned that the coupling of the GUI and geospatial processing workflow significantly enhanced the usability of remote sensing models.

Appendix B. RUSH Tool Modules

API Option Module: The first step (API option module) allows users to enter inputs including the following: (1) Sensing Date (Start/End); (2) Maximum Cloud Cover; (3) Planet API Key; (4) Folder Path; and (5) a user-defined AOI file. These inputs are used to select and download Planet SuperDove image previews (thumbnails) to the specified local folder. The user then chooses images that suit their needs, and the RUSH model uses the API key to submit a download request to Planet. If a request is too large, rate-limiting techniques are applied to avoid overwhelming the API.
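As an illustration, the kind of search the module submits can be expressed as a Planet Data API v1 quick-search request; the sketch below follows Planet's public Data API conventions (PSScene item type, date, cloud-cover, and geometry filters, basic authentication with the API key), but the exact request construction inside RUSH may differ, and the AOI coordinates and date window shown are placeholders.

```python
import requests

API_KEY = "YOUR-PLANET-API-KEY"  # entered by the user in step 1
aoi_geojson = {"type": "Polygon", "coordinates": [[[-89.2, 29.2], [-89.0, 29.2],
                                                   [-89.0, 29.3], [-89.2, 29.3],
                                                   [-89.2, 29.2]]]}

# Quick search for PSScene items inside the AOI, within a date window, below a cloud-cover limit.
search = {
    "item_types": ["PSScene"],
    "filter": {
        "type": "AndFilter",
        "config": [
            {"type": "GeometryFilter", "field_name": "geometry", "config": aoi_geojson},
            {"type": "DateRangeFilter", "field_name": "acquired",
             "config": {"gte": "2021-08-01T00:00:00Z", "lte": "2021-08-15T00:00:00Z"}},
            {"type": "RangeFilter", "field_name": "cloud_cover", "config": {"lte": 0.1}},
        ],
    },
}

resp = requests.post("https://api.planet.com/data/v1/quick-search",
                     auth=(API_KEY, ""), json=search, timeout=60)
resp.raise_for_status()
for feature in resp.json()["features"]:
    print(feature["id"], feature["properties"]["cloud_cover"])
```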
AOI Analysis Module: The second step (AOI analysis module), which is optional, compares the geometries of the user-defined AOI and the Planet SuperDove image footprints. The tool was developed to ensure that the selected images cover the area of the desired final land cover map. The tool shows the percentage of the user-defined AOI that is covered by the Planet SuperDove image geometries and visually depicts the geometries of both the user-defined AOI and Planet SuperDove images on a map through the contextily package [69]. This package streams a collection of basemaps with global coverage and is used by the tool to display and overlay the geometries of the user-defined AOI and Planet SuperDove images on a basemap. We used matplotlib [70] to display all maps. The final output is the percent overlap between the Planet SuperDove images and the user-defined AOI, calculated using the shapely package [71]. We use geopandas [72] to read the shapefile or GeoJSON files and make sure they are all converted to the WGS 84 coordinate system. In addition, the tool uses the area package [73] to calculate the user’s AOI size to ensure it is within the recommended range (~100 square kilometers) to run the tool on a desktop or laptop computer. Users with advanced computers can use larger AOIs. The tool retrieves the IDs of the images that the user selects and uses them to obtain the geometries of the Planet SuperDove images from the Planet API. This method reduces computation time by downloading only the geometry data rather than the entire Planet SuperDove images.
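A minimal sketch of the overlap computation follows, using shapely and geopandas as the module does; the file names are hypothetical, and because the geometries are kept in WGS 84, the ratio of planar areas is used only as an approximation of percent coverage for small AOIs.

```python
import geopandas as gpd
from shapely.ops import unary_union

# Hypothetical inputs: the user-defined AOI and the selected image footprints.
aoi = gpd.read_file("user_aoi.geojson").to_crs("EPSG:4326")
footprints = gpd.read_file("planet_footprints.geojson").to_crs("EPSG:4326")

aoi_geom = unary_union(list(aoi.geometry))
image_geom = unary_union(list(footprints.geometry))

covered = aoi_geom.intersection(image_geom)
percent_covered = 100.0 * covered.area / aoi_geom.area
print(f"{percent_covered:.1f}% of the user-defined AOI is covered by the selected images")
```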
Land Cover Map Module: The third step (Land Cover Map module) generates the land cover classification maps. The user input state name and date are used to choose either the Cool Season or Warm Season Model, and the coordinate reference system (CRS) code is used to georeference the raster output. As with the other modules, the RUSH model sends the user-defined requests to the Planet API and parses the responses. After checking that all response links have been downloaded and are free of rate limiting, the responses are decoded, and RUSH downloads the requested Planet SuperDove images and their cloud masks. The images and cloud masks are mosaicked separately using rasterio and rioxarray. Cloud masks are mosaicked so that the output retains the maximum number of flagged cloud pixels. The cloud-free images are then masked to the user-defined AOI and state AOI. The Cool Season or Warm Season Model is selected based on the date and state AOI using the ruleset provided above. The module then applies the random forest model to the band values and vegetation indices of the Planet SuperDove image pixels to generate a land cover map. The tool outputs the land cover map in traditional and cloud-optimized GeoTIFF (COG) formats. A regional confusion matrix is also provided based on the state where the user-defined AOI is located.
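The classification step can be sketched as follows: merge the downloaded scenes, build the predictor stack, and apply the pre-trained model pixel by pixel. This is an abbreviated illustration rather than the module's actual code; only NDVI is appended to the eight bands (the trained models expect all fifteen indices in their training order), and the scene and model file names are hypothetical.

```python
import joblib
import numpy as np
import rasterio
from rasterio.merge import merge

scene_paths = ["scene_a.tif", "scene_b.tif"]  # hypothetical downloaded SuperDove scenes
sources = [rasterio.open(p) for p in scene_paths]
mosaic, transform = merge(sources)            # array shape: (8 bands, rows, cols)
profile = sources[0].profile
profile.update(height=mosaic.shape[1], width=mosaic.shape[2],
               transform=transform, count=1, dtype="uint8")
for src in sources:
    src.close()

# Abbreviated predictor stack: 8 bands plus NDVI (assumed band order: red = band 6, NIR = band 8).
red, nir = mosaic[5].astype("float64"), mosaic[7].astype("float64")
ndvi = (nir - red) / np.maximum(nir + red, 1e-6)
features = np.vstack([mosaic.reshape(8, -1), ndvi.reshape(1, -1)]).T

rf = joblib.load("rush_warm_season_model.pkl")  # hypothetical pre-trained model file
classified = rf.predict(features).reshape(mosaic.shape[1], mosaic.shape[2]).astype("uint8")

with rasterio.open("rush_landcover.tif", "w", **profile) as dst:
    dst.write(classified, 1)
```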

Appendix C. Regional Accuracy Assessment Tables

The following tables present the Cool Season and Warm Season models’ regional confusion matrices, with user’s and producer’s accuracies, for the state areas of interest in Florida, Louisiana, and North Carolina.

Florida Warm Season Model Accuracy

| Prediction / Truth | Open Water | Emergent Wetlands | Dune Grass | Woody Wetlands | Bare Ground | Total | User’s Accuracy |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Open Water | 50 | 0 | 0 | 0 | 0 | 50 | 100% |
| Emergent Wetlands | 0 | 25 | 4 | 2 | 0 | 31 | 81% |
| Dune Grass | 0 | 0 | 25 | 0 | 1 | 26 | 96% |
| Woody Wetlands | 0 | 1 | 0 | 40 | 0 | 41 | 98% |
| Bare Ground | 0 | 0 | 0 | 0 | 31 | 31 | 100% |
| Total | 50 | 26 | 29 | 42 | 32 | 179 | |
| Producer’s Accuracy | 100% | 96% | 86% | 95% | 97% | | Overall Accuracy: 96% |

Florida Cool Season Model Accuracy

| Prediction / Truth | Open Water | Emergent Wetlands | Dune Grass | Woody Wetlands | Bare Ground | Total | User’s Accuracy |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Open Water | 49 | 0 | 0 | 0 | 0 | 49 | 100% |
| Emergent Wetlands | 0 | 27 | 1 | 3 | 0 | 31 | 87% |
| Dune Grass | 1 | 9 | 29 | 0 | 1 | 40 | 73% |
| Woody Wetlands | 0 | 0 | 0 | 42 | 0 | 42 | 100% |
| Bare Ground | 0 | 0 | 0 | 0 | 38 | 38 | 100% |
| Total | 50 | 36 | 30 | 45 | 39 | 200 | |
| Producer’s Accuracy | 98% | 75% | 97% | 93% | 97% | | Overall Accuracy: 93% |

Louisiana Warm Season Model Accuracy

| Prediction / Truth | Open Water | Emergent Wetlands | Dune Grass | Woody Wetlands | Bare Ground | Total | User’s Accuracy |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Open Water | 52 | 0 | 0 | 0 | 0 | 52 | 100% |
| Emergent Wetlands | 1 | 51 | 0 | 3 | 0 | 55 | 93% |
| Dune Grass | 0 | 0 | 24 | 0 | 5 | 29 | 83% |
| Woody Wetlands | 0 | 5 | 0 | 28 | 0 | 33 | 85% |
| Bare Ground | 1 | 0 | 5 | 1 | 31 | 38 | 82% |
| Total | 54 | 56 | 29 | 32 | 36 | 207 | |
| Producer’s Accuracy | 96% | 91% | 83% | 88% | 86% | | Overall Accuracy: 90% |

Louisiana Cool Season Model Accuracy

| Prediction / Truth | Open Water | Emergent Wetlands | Dune Grass | Woody Wetlands | Bare Ground | Total | User’s Accuracy |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Open Water | 56 | 0 | 0 | 0 | 2 | 58 | 97% |
| Emergent Wetlands | 0 | 59 | 0 | 2 | 0 | 61 | 97% |
| Dune Grass | 1 | 2 | 29 | 0 | 0 | 32 | 91% |
| Woody Wetlands | 0 | 7 | 0 | 31 | 0 | 38 | 82% |
| Bare Ground | 1 | 0 | 0 | 0 | 41 | 42 | 98% |
| Total | 58 | 68 | 29 | 33 | 43 | 231 | |
| Producer’s Accuracy | 97% | 87% | 100% | 94% | 95% | | Overall Accuracy: 94% |

North Carolina Warm Season Model Accuracy

| Prediction / Truth | Open Water | Emergent Wetlands | Dune Grass | Woody Wetlands | Bare Ground | Total | User’s Accuracy |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Open Water | 45 | 0 | 0 | 0 | 0 | 45 | 100% |
| Emergent Wetlands | 0 | 20 | 1 | 1 | 0 | 22 | 91% |
| Dune Grass | 0 | 0 | 60 | 0 | 0 | 60 | 100% |
| Woody Wetlands | 0 | 3 | 0 | 42 | 0 | 45 | 93% |
| Bare Ground | 0 | 0 | 4 | 0 | 33 | 37 | 89% |
| Total | 45 | 23 | 65 | 43 | 33 | 209 | |
| Producer’s Accuracy | 100% | 87% | 92% | 98% | 100% | | Overall Accuracy: 96% |

North Carolina Cool Season Model Accuracy

| Prediction / Truth | Open Water | Emergent Wetlands | Dune Grass | Woody Wetlands | Bare Ground | Total | User’s Accuracy |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Open Water | 48 | 2 | 0 | 0 | 3 | 53 | 91% |
| Emergent Wetlands | 1 | 22 | 0 | 2 | 0 | 25 | 88% |
| Dune Grass | 0 | 0 | 68 | 0 | 2 | 70 | 97% |
| Woody Wetlands | 0 | 3 | 0 | 42 | 0 | 45 | 93% |
| Bare Ground | 0 | 0 | 7 | 0 | 35 | 42 | 83% |
| Total | 49 | 27 | 75 | 44 | 40 | 235 | |
| Producer’s Accuracy | 98% | 81% | 91% | 95% | 88% | | Overall Accuracy: 91% |

References

  1. Radfar, S.; Moftakhari, H.; Moradkhani, H. Rapid Intensification of Tropical Cyclones in the Gulf of Mexico Is More Likely during Marine Heatwaves. Commun. Earth Environ. 2024, 5, 421. [Google Scholar] [CrossRef]
  2. Hagen, A.B.; Cangialosi, J.P.; Chenard, M.; Alaka, L.; Delgado, S. National Hurricane Center Tropical Cyclone Report; National Hurricane Center: Miami, FL, USA, 2024. [Google Scholar]
  3. Hegermiller, C.A.; Warner, J.C.; Olabarrieta, M.; Sherwood, C.R.; Kalra, T.S. Modeling of Barrier Breaching During Hurricanes Sandy and Matthew. J. Geophys. Res. Earth Surf. 2022, 127, e2021JF006307. [Google Scholar] [CrossRef]
  4. National Oceanic and Atmospheric Administration National Centers for Environmental Information. Hurricane Costs. Available online: https://coast.noaa.gov/states/fast-facts/hurricane-costs.html (accessed on 8 September 2025).
  5. Passeri, D.L.; Long, J.W.; Plant, N.G.; Bilskie, M.V.; Hagen, S.C. The Influence of Bed Friction Variability Due to Land Cover on Storm-Driven Barrier Island Morphodynamics. Coast. Eng. 2018, 132, 82–94. [Google Scholar] [CrossRef]
  6. Schambach, L.; Grilli, A.R.; Grilli, S.T.; Hashemi, M.R.; King, J.W. Assessing the Impact of Extreme Storms on Barrier Beaches along the Atlantic Coastline: Application to the Southern Rhode Island Coast. Coast. Eng. 2018, 133, 26–42. [Google Scholar] [CrossRef]
  7. Zhu, Z.; Vuik, V.; Visser, P.J.; Soens, T.; Van Wesenbeeck, B.; Van De Koppel, J.; Jonkman, S.N.; Temmerman, S.; Bouma, T.J. Historic Storms and the Hidden Value of Coastal Wetlands for Nature-Based Flood Defence. Nat. Sustain. 2020, 3, 853–862. [Google Scholar] [CrossRef]
  8. Sherwood, C.R.; Van Dongeren, A.; Doyle, J.; Hegermiller, C.A.; Hsu, T.-J.; Kalra, T.S.; Olabarrieta, M.; Penko, A.M.; Rafati, Y.; Roelvink, D.; et al. Modeling the Morphodynamics of Coastal Responses to Extreme Events: What Shape Are We In? Annu. Rev. Mar. Sci. 2022, 14, 457–492. [Google Scholar] [CrossRef]
  9. National Oceanic and Atmospheric Administration Coastal Land Cover Change Summary Report. 2024. Available online: https://Coast.Noaa.Gov/Data/Digitalcoast/Pdf/Landcover-Report-West-Coast.Pdf (accessed on 31 July 2025).
  10. De Vet, P.L.M.; Mccall, R.T.; Den Bieman, J.P.; Stive, M.J.F.; Van Ormondt, M. Modelling Dune Erosion, Overwash And Breaching At Fire Island (NY) During Hurricane Sandy. In The Proceedings of the Coastal Sediments 2015; World Scientific: San Diego, CA, USA, 2015. [Google Scholar]
  11. Mattocks, C.; Forbes, C. A Real-Time, Event-Triggered Storm Surge Forecasting System for the State of North Carolina. Ocean Model. 2008, 25, 95–119. [Google Scholar] [CrossRef]
  12. Van Der Lugt, M.A.; Quataert, E.; Van Dongeren, A.; Van Ormondt, M.; Sherwood, C.R. Morphodynamic Modeling of the Response of Two Barrier Islands to Atlantic Hurricane Forcing. Estuar. Coast. Shelf Sci. 2019, 229, 106404. [Google Scholar] [CrossRef]
  13. Wamsley, T.V.; Cialone, M.A.; Smith, J.M.; Atkinson, J.H.; Rosati, J.D. The Potential of Wetlands in Reducing Storm Surge. Ocean Eng. 2010, 37, 59–68. [Google Scholar] [CrossRef]
  14. Warner, J.C.; Armstrong, B.; He, R.; Zambon, J.B. Development of a Coupled Ocean–Atmosphere–Wave–Sediment Transport (COAWST) Modeling System. Ocean Model. 2010, 35, 230–244. [Google Scholar] [CrossRef]
  15. Roelvink, J.A.; van Banning, K.F.M. Design and Development of DELFT3D and Application to Coastal Morphodynamics. Oceanogr. Lit. Rev. 1994, 94, 451–456. [Google Scholar]
  16. Roelvink, D.; Reniers, A.; Van Dongeren, A.; van Thiel de Vries, J.; Lescinski, J.; McCall, R. XBeach Model Description and Manual; Unesco-IHE Institute for Water Education, Deltares and Delft University of Technology: Delft, The Netherlands, June 2010; Volume 21, p. 2. [Google Scholar]
  17. Leijnse, T.; Van Ormondt, M.; Nederhoff, K.; Van Dongeren, A. Modeling Compound Flooding in Coastal Systems Using a Computationally Efficient Reduced—Physics Solver: Including Fluvial, Pluvial, Tidal, Wind—And Wave-Driven Processes. Coast. Eng. 2021, 163, 103796. [Google Scholar] [CrossRef]
  18. National Oceanic and Atmospheric Administration Digital Coast Announcing High-Resolution Land Cover for the Coast. Available online: https://Coast.Noaa.Gov/Data/Digitalcoast/Pdf/Ccap-Highres-Products-Explained.Pdf (accessed on 6 November 2024).
  19. Brown, C.F.; Brumby, S.P.; Guzder-Williams, B.; Birch, T.; Hyde, S.B.; Mazzariello, J.; Czerwinski, W.; Pasquarella, V.J.; Haertel, R.; Ilyushchenko, S.; et al. Dynamic World, Near Real-Time Global 10 m Land Use Land Cover Mapping. Sci. Data 2022, 9, 251. [Google Scholar] [CrossRef]
  20. Enwright, N.M.; SooHoo, W.M.; Dugas, J.L.; Conzelmann, C.P.; Laurenzano, C.; Lee, D.M.; Mouton, K.; Stelly, S.J. Louisiana Barrier Island Comprehensive Monitoring Program: Mapping Habitats in Beach, Dune, and Intertidal Environments Along the Louisiana Gulf of Mexico Shoreline, 2008 and 2015–16; Open-File Report; US Geological Survey: Reston, VA, USA, 2020. [Google Scholar]
  21. Planet Labs Planet Imagery Product Specifications. 2023. Available online: https://Pubs.Usgs.Gov/of/2021/1030/f/Ofr20211030f.Pdf (accessed on 24 March 2025).
  22. Louisiana Coastal Protection and Restoration Authority Coastal Information Management System (CIMS). Available online: https://Cims.Coastal.Louisiana.Gov/ (accessed on 24 March 2025).
  23. Ritchie, A.C.; Over, J.R.; Kranenburg, C.J.; Brown, J.A.; Buscombe, D.D.; Sherwood, C.R.; Warrick, J.A.; Wernette, P.A. Aerial Photogrammetry Data and Products of the North Carolina Coast. 2022. Available online: https://www.Usgs.Gov/Data/Aerial-Photogrammetry-Data-and-Products-North-Carolina-Coast (accessed on 28 August 2023).
  24. Nyman, J.A.; Reid, C.S.; Sasser, C.E.; Linscombe, J.; Hartley, S.B.; Couvillion, B.R.; Villani, R.K. Vegetation Types in Coastal Louisiana in 2021 (Ver. 2.0, April 2023); US Geological Survey: Reston, VA, USA, 2022. [Google Scholar]
  25. Couvillion, B.R.; Beck, H.; Schoolmaster, D.; Fischer, M. Land Area Change in Coastal Louisiana (1932 to 2016); Scientific Investigations Map; US Geological Survey: Reston, VA, USA, 2017. [Google Scholar]
  26. Byrne, M.J. Monitoring Storm Tide from Hurricane Michael along the Northwest Coast of Florida, October 2018; Open-File Report; US Geological Survey: Reston, VA, USA, 2019. [Google Scholar]
  27. Nifosi, J.M. Sand-Flower. Available online: https://www.usgs.gov/media/images/sand-flower (accessed on 6 August 2025).
  28. U.S. Geological Survey Coastal Wetlands Near Port Fourchon, La., Northern Gulf of Mexico. Available online: https://www.usgs.gov/media/images/coastal-wetlands-near-port-fourchon-la-northern-gulf-mexico (accessed on 6 August 2025).
  29. Vargas-Babilonia, P. Elevation Data Collected on a Beach. Available online: https://www.usgs.gov/media/images/elevation-data-collected-a-beach (accessed on 6 August 2025).
  30. Sutton-Grier, A.E. Coastal Marsh in Louisiana. Available online: https://www.usgs.gov/media/images/coastal-marsh-louisiana (accessed on 6 August 2025).
  31. Rouse, J.W.; Haas, R.H.; Schell, J.A.; Deering, D.W. Monitoring Vegetation Systems in the Great Plains with ERTS. In Proceedings of the Third ERTS Symposium; NASA: Washington, DC, USA, 1974; pp. 309–317. [Google Scholar]
  32. Gao, B. NDWI—A Normalized Difference Water Index for Remote Sensing of Vegetation Liquid Water from Space. Remote Sens. Environ. 1996, 58, 257–266. [Google Scholar] [CrossRef]
  33. Jackson, T. Vegetation Water Content Mapping Using Landsat Data Derived Normalized Difference Water Index for Corn and Soybeans. Remote Sens. Environ. 2004, 92, 475–482. [Google Scholar] [CrossRef]
  34. Gitelson, A.A. Wide Dynamic Range Vegetation Index for Remote Quantification of Biophysical Characteristics of Vegetation. J. Plant Physiol. 2004, 161, 165–173. [Google Scholar] [CrossRef]
  35. Birth, G.S.; McVey, G.R. Measuring the Color of Growing Turf with a Reflectance Spectrophotometer1. Agron. J. 1968, 60, 640–643. [Google Scholar] [CrossRef]
  36. Tucker, C.J. Red and Photographic Infrared Linear Combinations for Monitoring Vegetation. Remote Sens. Environ. 1979, 8, 127–150. [Google Scholar] [CrossRef]
  37. Sripada, R.P. Determining In-Season Nitrogen Requirements for Corn Using Aerial Color-Infrared Photography. Ph.D. Thesis, North Carolina State University, Raleigh, NC, USA, 2005. [Google Scholar]
  38. Gitelson, A.A.; Merzlyak, M.N. Remote Sensing of Chlorophyll Concentration in Higher Plant Leaves. Adv. Space Res. 1998, 22, 689–692. [Google Scholar] [CrossRef]
  39. Sripada, R.P.; Heiniger, R.W.; White, J.G.; Meijer, A.D. Aerial Color Infrared Photography for Determining Early In-Season Nitrogen Requirements in Corn. Agron. J. 2006, 98, 968–977. [Google Scholar] [CrossRef]
  40. Crippen, R. Calculating the Vegetation Index Faster. Remote Sens. Environ. 1990, 34, 71–73. [Google Scholar] [CrossRef]
  41. Thenkabail, P.S.; Enclona, E.A.; Ashton, M.S.; Van Der Meer, B. Accuracy Assessments of Hyperspectral Waveband Performance for Vegetation Analysis Applications. Remote Sens. Environ. 2004, 91, 354–376. [Google Scholar] [CrossRef]
  42. Pandala, S.R. Lazypredict. 2019. Available online: https://lazypredict.readthedocs.io/en/latest/ (accessed on 7 March 2025).
  43. Breiman, L. Random Forests. Mach. Learn. 2001, 45, 5–32. [Google Scholar] [CrossRef]
  44. Pedregosa, F.; Varoquaux, G.; Gramfort, A.; Michel, V.; Thirion, B.; Grisel, O.; Blondel, M.; Prettenhofer, P.; Weiss, R.; Dubourg, V.; et al. Scikit-Learn: Machine Learning in Python. J. Mach. Learn. Res. 2011, 12, 2825–2830. [Google Scholar]
  45. Varoquaux, G. Joblib. 2009. Available online: https://joblib.readthedocs.io/en/stable/ (accessed on 7 March 2025).
  46. Belgiu, M.; Drăguţ, L. Random Forest in Remote Sensing: A Review of Applications and Future Directions. ISPRS J. Photogramm. Remote Sens. 2016, 114, 24–31. [Google Scholar] [CrossRef]
  47. Mo, Y.; Momen, B.; Kearney, M.S. Quantifying Moderate Resolution Remote Sensing Phenology of Louisiana Coastal Marshes. Ecol. Model. 2015, 312, 191–199. [Google Scholar] [CrossRef]
  48. Stagg, C.L.; Schoolmaster, D.R.; Piazza, S.C.; Snedden, G.; Steyer, G.D.; Fischenich, C.J.; McComas, R.W. A Landscape-Scale Assessment of Above- and Belowground Primary Production in Coastal Wetlands: Implications for Climate Change-Induced Community Shifts. Estuaries Coasts 2017, 40, 856–879. [Google Scholar] [CrossRef]
  49. Van Rossum, G.; Drake, F.L. Python Reference Manual; Centrum voor Wiskunde en Informatica: Amsterdam, The Netherlands, 1995. [Google Scholar]
  50. Kluyver, T.; Ragan-Kelley, B.; Perez, F.; Granger, B.; Bussonnier, M.; Frederic, J.; Kelley, K.; Hamrick, J.; Grout, J.; Corlay, S.; et al. A Publishing Format for Reproducible Computational Workflows. In Positioning and Power in Academic Publishing: Players, Agents and Agendas; IOS Press: Amsterdam, The Netherlands, 2016. [Google Scholar]
  51. Cheang, C.W.; Byrd, K.B.; Enwright, N.M.; Buscombe, D.D.; Gesch, D.B. RUSH: Rapid Remote Sensing Updates of Landcover for Storm and Hurricane Forecasts (Version 1.0.0); U.S. Geological Survey: Moffett Field, CA, USA, 2024. [Google Scholar]
  52. Lundh, F. An Introduction to Tkinter. 1999. Available online: https://docs.python.org/3/library/tkinter.html (accessed on 7 March 2025).
  53. Shen, J.; Murray-Tuite, P.; Wernstedt, K.; Guikema, S. Estimating Pre-Impact and Post-Impact Evacuation Behaviors—An Empirical Study of Hurricane Ida in Coastal Louisiana and Mississippi. J. Transp. Geogr. 2024, 118, 103925. [Google Scholar] [CrossRef]
  54. Yao, Q.; Cohen, M.C.L.; Liu, K.; De Souza, A.V.; Rodrigues, E. Nature versus Humans in Coastal Environmental Change: Assessing the Impacts of Hurricanes Zeta and Ida in the Context of Beach Nourishment Projects in the Mississippi River Delta. Remote Sens. 2022, 14, 2598. [Google Scholar] [CrossRef]
  55. Oades, E.M.; Mulligan, R.P.; Palmsten, M.L. Evaluation of Nearshore Bathymetric Inversion Algorithms Using Camera Observations and Synthetic Numerical Input of Surface Waves during Storms. Coast. Eng. 2023, 184, 104338. [Google Scholar] [CrossRef]
  56. Stockdon, H.F.; Sallenger, A.H., Jr.; List, J.H.; Holman, R.A. Estimation of Shoreline Position and Change Using Airborne Topographic Lidar Data. J. Coast. Res. 2002, 18, 502–513. [Google Scholar]
  57. Vitousek, S.; Buscombe, D.; Vos, K.; Barnard, P.L.; Ritchie, A.C.; Warrick, J.A. The Future of Coastal Monitoring through Satellite Remote Sensing. Camb. Prism. Coast. Futures 2023, 1, e10. [Google Scholar] [CrossRef]
  58. Miller, G.J.; Dronova, I.; Oikawa, P.Y.; Knox, S.H.; Windham-Myers, L.; Shahan, J.; Stuart-Haëntjens, E. The Potential of Satellite Remote Sensing Time Series to Uncover Wetland Phenology under Unique Challenges of Tidal Setting. Remote Sens. 2021, 13, 3589. [Google Scholar] [CrossRef]
  59. Stagg, C.L.; Osland, M.J.; Moon, J.A.; Hall, C.T.; Feher, L.C.; Jones, W.R.; Couvillion, B.R.; Hartley, S.B.; Vervaeke, W.C. Quantifying Hydrologic Controls on Local- and Landscape-Scale Indicators of Coastal Wetland Loss. Ann. Bot. 2019, 2, 365–376. [Google Scholar] [CrossRef] [PubMed]
  60. Vos, K.; Splinter, K.D.; Harley, M.D.; Simmons, J.A.; Turner, I.L. CoastSat: A Google Earth Engine-Enabled Python Toolkit to Extract Shorelines from Publicly Available Satellite Imagery. Environ. Model. Softw. 2019, 122, 104528. [Google Scholar] [CrossRef]
  61. Gorelick, N.; Hancher, M.; Dixon, M.; Ilyushchenko, S.; Thau, D.; Moore, R. Google Earth Engine: Planetary-scale geospatial analysis for everyone. Remote Sens. Environ. 2017, 202, 18–27. [Google Scholar] [CrossRef]
  62. Fitzpatrick, S.; Buscombe, D.; Warrick, J.A.; Lundine, M.A.; Vos, K. CoastSeg: An Accessible and Extendable Hub for Satellite-Derived-Shoreline (SDS) Detection and Mapping. J. Open Source Softw. 2024, 9, 6683. [Google Scholar] [CrossRef]
  63. Graffin, M.; Taherkhani, M.; Leung, M.; Vitousek, S.; Kaminsky, G.; Ruggiero, P. Monitoring Interdecadal Coastal Change along Dissipative Beaches via Satellite Imagery at Regional Scale. Camb. Prism. Coast. Futures 2023, 1, e42. [Google Scholar] [CrossRef]
  64. Byrd, K.B.; Ballanti, L.; Thomas, N.; Nguyen, D.; Holmquist, J.R.; Simard, M.; Windham-Myers, L. A Remote Sensing-Based Model of Tidal Marsh Aboveground Carbon Stocks for the Conterminous United States. ISPRS J. Photogramm. Remote Sens. 2018, 139, 255–271, Erratum in ISPRS J. Photogramm. Remote Sens. 2020, 166, 63–67. [Google Scholar] [CrossRef]
  65. Gong, P.; Pu, R.; Biging, G.S.; Larrieu, M.R. Estimation of Forest Leaf Area Index Using Vegetation Indices Derived from Hyperion Hyperspectral Data. IEEE Trans. Geosci. Remote Sens. 2003, 41, 1355–1362. [Google Scholar] [CrossRef]
  66. Holman, F.; Riche, A.; Michalski, A.; Castle, M.; Wooster, M.; Hawkesford, M. High Throughput Field Phenotyping of Wheat Plant Height and Growth Rate in Field Plot Trials Using UAV Based Remote Sensing. Remote Sens. 2016, 8, 1031. [Google Scholar] [CrossRef]
  67. Coffer, M.M.; Schaeffer, B.A.; Zimmerman, R.C.; Hill, V.; Li, J.; Islam, K.A.; Whitman, P.J. Performance across WorldView-2 and RapidEye for Reproducible Seagrass Mapping. Remote Sens. Environ. 2020, 250, 112036. [Google Scholar] [CrossRef]
  68. Islam, K.A.; Hill, V.; Schaeffer, B.; Zimmerman, R.; Li, J. Semi-Supervised Adversarial Domain Adaptation for Seagrass Detection Using Multispectral Images in Coastal Areas. Data Sci. Eng. 2020, 5, 111–125. [Google Scholar] [CrossRef]
  69. Arribas-Bel, D. Contextily 2016. Available online: https://github.com/geopandas/contextily (accessed on 7 March 2025).
  70. Hunter, J.D. Matplotlib: A 2D Graphics Environment. Comput. Sci. Eng. 2007, 9, 90–95. [Google Scholar] [CrossRef]
  71. Gillies, S. Shapely: Manipulation and Analysis of Geometric Objects. 2007. Available online: https://github.com/Toblerity/Shapely (accessed on 7 March 2025).
  72. Jordahl, K.; Van den Bossche, J.; Fleischmann, M.; Wasserman, J.; McBride, J.; Gerard, J.; Tratner, J.; Perry, M.; Badaracco, A.G.; Farmer, C.; et al. Geopandas. 2014. Available online: https://zenodo.org/records/3946761 (accessed on 7 March 2025).
  73. Alireza, J. Area 2015. Available online: https://github.com/scisco/area (accessed on 7 March 2025).
Figure 1. (A) Areas of interest (AOIs) used for development of the RUSH tool (Rapid remote sensing Updates of land cover for Storm and Hurricane forecast models). (B) North Carolina AOI that includes the Outer Banks. (C) Louisiana AOI that includes the Mississippi River Delta. (D) Florida AOI that includes part of the Gulf Coast. The basemap shown is the ESRI ArcGIS World Terrain Map.
Figure 2. Summary diagram of the RUSH tool (Rapid remote sensing Updates of land cover for Storm and Hurricane forecast models). The RUSH tool receives inputs, including area of interest, sensing date, cloud cover percentage, and state, from users and processes those inputs through a series of automated geospatial processing steps. Final outputs, including raster land cover classification maps and accuracy assessment tables, are generated based on the requirements defined in the user inputs. The tool also generates vegetation index values as an intermediate output. NAIP, U.S. Department of Agriculture National Agriculture Imagery Program; NC, North Carolina; LA, Louisiana; C-CAP, National Oceanic and Atmospheric Administration Coastal Change Analysis Program; API, application programming interface.
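As summarized in Figure 2, the RUSH tool filters Planet SuperDove scenes by area of interest, sensing date, and cloud cover before downloading imagery for classification. The snippet below is a minimal, illustrative sketch of such a filter expressed as a Planet Data API quick-search request; the AOI polygon, date range, cloud-cover threshold, and the PL_API_KEY environment variable are placeholder assumptions, and RUSH's own request construction may differ.

```python
import os
import requests

# Hedged sketch: query the Planet Data API "quick-search" endpoint for
# PSScene (SuperDove) items that intersect an AOI, fall within a date
# range, and meet a cloud-cover threshold. AOI_GEOJSON and the dates
# are placeholders, not values from the published workflow.
PLANET_API_KEY = os.environ["PL_API_KEY"]  # assumed to be set by the user
SEARCH_URL = "https://api.planet.com/data/v1/quick-search"

AOI_GEOJSON = {
    "type": "Polygon",
    "coordinates": [[[-89.2, 29.2], [-89.0, 29.2], [-89.0, 29.3],
                     [-89.2, 29.3], [-89.2, 29.2]]],
}

search_request = {
    "item_types": ["PSScene"],
    "filter": {
        "type": "AndFilter",
        "config": [
            {"type": "GeometryFilter", "field_name": "geometry",
             "config": AOI_GEOJSON},
            {"type": "DateRangeFilter", "field_name": "acquired",
             "config": {"gte": "2021-08-01T00:00:00Z",
                        "lte": "2021-08-15T00:00:00Z"}},
            {"type": "RangeFilter", "field_name": "cloud_cover",
             "config": {"lte": 0.10}},
        ],
    },
}

response = requests.post(SEARCH_URL, json=search_request,
                         auth=(PLANET_API_KEY, ""))
response.raise_for_status()

# Print the scene identifiers and cloud-cover values of matching images.
for feature in response.json()["features"]:
    print(feature["id"], feature["properties"]["cloud_cover"])
```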
Figure 3. Photos of different land cover types: (A) open water land cover; (B) emergent wetland land cover; (C) dune grass land cover; (D) woody wetlands land cover; (E) bare ground land cover. Land cover images courtesy of the U.S. Geological Survey [27,28,29,30].
Figure 4. The RUSH tool (Rapid remote sensing Updates of land cover for Storm and Hurricane forecast models) graphical user interface (GUI): (A) main menu; (B) Application Programming Interface (API) Options page that allows users to filter Planet SuperDove images based on user preferences, such as sensing dates, sensing time, cloud cover, and area of interest (AOI); (C) Select Image page that allows users to inspect and select Planet SuperDove images for download and to generate land cover maps; (D) the AOI Analysis tool calculates how well the Planet SuperDove image geometries overlap with the user-defined AOI geometries and notifies users when the selected images do not overlap with the user-defined AOI; (E) the results page of the AOI Analysis shows how well the user-defined AOI overlaps with the Planet SuperDove image geometries; (F) the Generate Maps page, the final step of the workflow, allows users to define the geographic requirements and the output directory for the final products.
Figure 5. Variable importance scores for the Warm Season and Cool Season Models. Refer to Table 2 for the predictor variables.
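Variable importance scores such as those in Figure 5 can be obtained from a fitted scikit-learn random forest (the library cited in [44]). The sketch below illustrates one way to extract them; the random training data are stand-ins for the Planet SuperDove predictors and reference labels, so the printed ranking is illustrative only.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

# Hedged sketch: derive per-variable importance scores from a fitted
# random forest. The predictor names follow Table 2; X and y are
# synthetic stand-ins for the real training points.
predictor_names = ["NDVI", "NDWI", "WDRVI2", "WDRVI5", "SR", "DVI",
                   "GDVI", "GNDVI", "GRVI", "IPVI", "TBVI81", "TBVI82",
                   "TBVI41", "TBVI64", "NDCI"]
rng = np.random.default_rng(0)
X = pd.DataFrame(rng.random((100, len(predictor_names))), columns=predictor_names)
y = rng.integers(1, 6, size=100)  # stand-in labels for classes 1-5

rf = RandomForestClassifier(n_estimators=500, random_state=0)
rf.fit(X, y)

# Impurity-based importances, ranked from most to least informative.
importances = pd.Series(rf.feature_importances_, index=predictor_names)
print(importances.sort_values(ascending=False))
```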
Figure 6. Sample outputs from the RUSH tool (Rapid remote sensing Updates of land cover for Storm and Hurricane forecast models) for each of the study AOIs. (A) shows the state area of interest locations. (B) shows the land cover classification map of the Buxton Woods Coastal Reserve, NC, USA, and its surrounding region. (C) shows the land cover classification map corresponding to the region highlighted by the green bounding box in (B). In (C), building pixels are displayed as empty pixels at the bottom left of the display window. (D) shows the land cover classification map of the Saint Vincent Island National Wildlife Refuge, FL, USA, and its surrounding region. (E) shows the land cover classification map corresponding to the region highlighted by the green bounding box in (D). (F) shows the land cover classification map of the southeastern tip of the Bird’s Foot in the Mississippi River Delta, Louisiana, USA. (G) shows the land cover classification map corresponding to the region highlighted by the green bounding box in (F). In (F), freshwater wetland pixels were excluded from the land cover map. The basemap shown is the ESRI ArcGIS World Topographic Map.
Figure 7. Pre- and post-hurricane conditions in Grand Isle State Park, LA: (A) event location; (B) Planet SuperDove RGB image captured on 9 August 2021; (C) model output showing land cover types of the barrier island on 9 August 2021; (D) Planet SuperDove RGB image captured on 15 October 2021; (E) model output showing land cover types of the barrier island on 15 October 2021. Images show the change in bare ground after the hurricane, which is highlighted by the green and black bounding boxes. The basemap shown in (A) is the ESRI ArcGIS World Topographic Map.
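A change summary such as the bare-ground increase highlighted in Figure 7 can be derived by differencing per-class areas between the pre- and post-storm classification rasters. The sketch below illustrates this calculation with rasterio and NumPy on two hypothetical RUSH output files; the file names are placeholders, and the 3 m pixel size reflects the tool's output resolution.

```python
import numpy as np
import rasterio

# Hedged sketch: tally per-class area in two land cover rasters and
# report the pre- to post-storm change. File names are hypothetical.
CLASS_NAMES = {1: "open water", 2: "emergent wetlands", 3: "dune grass",
               4: "woody wetlands", 5: "bare ground"}
PIXEL_AREA_M2 = 3.0 * 3.0  # 3 m classification pixels

def class_areas(path):
    """Return a dict of class code -> area (m^2) for one raster."""
    with rasterio.open(path) as src:
        data = src.read(1)
        nodata = src.nodata
    if nodata is not None:
        data = data[data != nodata]
    values, counts = np.unique(data, return_counts=True)
    return {int(v): c * PIXEL_AREA_M2 for v, c in zip(values, counts)}

pre = class_areas("grand_isle_2021-08-09_landcover.tif")
post = class_areas("grand_isle_2021-10-15_landcover.tif")

# Report the change for each class in hectares.
for code, name in CLASS_NAMES.items():
    change_ha = (post.get(code, 0) - pre.get(code, 0)) / 10_000.0
    print(f"{name}: {change_ha:+.1f} ha")
```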
Figure 8. Remote sensing observation products that capture the multi-scale nature of coastal change processes. The graph is modified from [57].
Table 1. Dates of image reference data for the RUSH tool (Rapid remote sensing Updates of land cover for Storm and Hurricane forecast models) random forest models. Reference data were primarily derived from visual interpretation of U.S. Department of Agriculture National Agriculture Imagery Program (NAIP) imagery. The Cool Season Model was developed with Planet SuperDove image data that matched the timing of the most current NAIP imagery used to label reference points. The Warm Season Model was developed to expand the capability of the RUSH tool to map land cover year-round.
| | NAIP | Planet SuperDove Cool Season Model | Planet SuperDove Warm Season Model |
|---|---|---|---|
| North Carolina | 16 May 2020, 18 October 2020 | April 2020 to May 2020 | July 2020 to September 2020 |
| Florida | 9 January 2022, 17 January 2022, 23 January 2022, 29 January 2022 | January 2022 | September 2022 |
| Louisiana | 15 November 2021, 16 November 2021, 15 November 2021, 23 November 2021, 29 November 2021, 30 November 2021, 1 December 2021, 2 December 2021, 3 January 2022, 22 January 2022, 18 January 2022, 22 January 2022, 23 January 2022, 30 January 2022 | November 2021 | September 2022 |
Table 2. Vegetation indices included in the RUSH tool (Rapid remote sensing Updates of land cover for Storm and Hurricane forecast models) random forest model. B1, coastal blue band; B2, blue band; B3, green 1; B4, green; B5, yellow; B6, red; B7, red edge; B8, near-infrared (NIR).
| Vegetation Index | Formula | References |
|---|---|---|
| NDVI (Normalized Difference Vegetation Index) | (B8 − B6)/(B8 + B6) | [31] |
| NDWI (Normalized Difference Water Index) | (B4 − B8)/(B4 + B8) | [32,33] |
| WDRVI2 (Wide Dynamic Range Vegetation Index) | (0.2 × B8 − B6)/(B8 + B6) | [34] |
| WDRVI5 (Wide Dynamic Range Vegetation Index) | (0.5 × B8 − B6)/(B8 + B6) | [34] |
| SR (Simple Ratio) | B8/B6 | [35] |
| DVI (Difference Vegetation Index) | B8 − B6 | [36] |
| GDVI (Green Difference Vegetation Index) | B8 − B4 | [37] |
| GNDVI (Green Normalized Difference Vegetation Index) | (B8 − B4)/(B8 + B4) | [38] |
| GRVI (Green Ratio Vegetation Index) | B8/B4 | [39] |
| IPVI (Infrared Percentage Vegetation Index) | B8/(B8 + B6) | [40] |
| TBVI81 (two-band vegetation index) | (B8 − B1)/(B8 + B1) | [41] |
| TBVI82 (two-band vegetation index) | (B8 − B2)/(B8 + B2) | [41] |
| TBVI41 (two-band vegetation index) | (B4 − B1)/(B4 + B1) | [41] |
| TBVI64 (two-band vegetation index) | (B6 − B4)/(B6 + B4) | [41] |
| NDCI (Normalized Difference Chlorophyll Index) | (B7 − B6)/(B7 + B6) | [41] |
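Each index in Table 2 is simple band arithmetic on the eight SuperDove bands, so the full predictor stack can be computed with vectorized NumPy operations. The sketch below illustrates a few of the Table 2 formulas on a stand-in reflectance array; the band ordering follows the caption, while the epsilon guard against zero denominators is an assumption rather than part of the published workflow.

```python
import numpy as np

# Hedged sketch of the Table 2 band math on an 8-band SuperDove array
# (assumed order: coastal blue, blue, green 1, green, yellow, red,
# red edge, NIR). `image` stands in for a surface reflectance GeoTIFF.
image = np.random.rand(8, 256, 256).astype("float32")
b1, b2, b3, b4, b5, b6, b7, b8 = image  # B1..B8 as in Table 2

eps = 1e-9  # guard against division by zero over dark water/bare pixels
ndvi = (b8 - b6) / (b8 + b6 + eps)
ndwi = (b4 - b8) / (b4 + b8 + eps)
wdrvi2 = (0.2 * b8 - b6) / (b8 + b6 + eps)
gndvi = (b8 - b4) / (b8 + b4 + eps)
ndci = (b7 - b6) / (b7 + b6 + eps)

# Stacking the indices yields predictor layers for the seasonal models.
predictors = np.stack([ndvi, ndwi, wdrvi2, gndvi, ndci], axis=0)
print(predictors.shape)  # (5, 256, 256)
```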
Table 3. Confusion matrices with user’s accuracy and producer’s accuracy for land cover classes in the Cool Season (a) and Warm Season Models (b).
(a) Cool Season Model (rows: prediction; columns: truth). Classes: 1 = open water; 2 = emergent wetlands; 3 = dune grass; 4 = woody; 5 = bare ground.

| Prediction | 1 | 2 | 3 | 4 | 5 | Total | User's accuracy |
|---|---|---|---|---|---|---|---|
| 1 | 153 | 2 | 0 | 0 | 5 | 160 | 96% |
| 2 | 1 | 108 | 1 | 7 | 0 | 117 | 92% |
| 3 | 2 | 11 | 126 | 0 | 3 | 142 | 89% |
| 4 | 0 | 10 | 0 | 115 | 0 | 125 | 92% |
| 5 | 1 | 0 | 7 | 0 | 114 | 122 | 93% |
| Total | 157 | 131 | 134 | 122 | 122 | 666 | |
| Producer's accuracy | 97% | 82% | 94% | 94% | 93% | | Overall accuracy: 92% |

(b) Warm Season Model (rows: prediction; columns: truth). Classes: 1 = open water; 2 = emergent wetlands; 3 = dune grass; 4 = woody; 5 = bare ground.

| Prediction | 1 | 2 | 3 | 4 | 5 | Total | User's accuracy |
|---|---|---|---|---|---|---|---|
| 1 | 147 | 0 | 0 | 0 | 0 | 147 | 100% |
| 2 | 1 | 96 | 5 | 6 | 0 | 108 | 89% |
| 3 | 0 | 0 | 109 | 0 | 6 | 115 | 95% |
| 4 | 0 | 9 | 0 | 110 | 0 | 119 | 92% |
| 5 | 1 | 0 | 9 | 1 | 95 | 106 | 90% |
| Total | 149 | 105 | 123 | 117 | 101 | 595 | |
| Producer's accuracy | 99% | 91% | 89% | 94% | 94% | | Overall accuracy: 94% |
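The statistics reported in Table 3 (user's accuracy, producer's accuracy, and overall accuracy) follow directly from the confusion matrix of predicted versus reference labels at the independent test points. The sketch below shows one way to compute them with scikit-learn; the simulated labels are placeholders for the actual test data.

```python
import numpy as np
from sklearn.metrics import confusion_matrix

# Hedged sketch: compute user's, producer's, and overall accuracy from
# predicted and reference labels. y_true and y_pred are stand-ins for
# the independent test points, not the published data.
rng = np.random.default_rng(0)
y_true = rng.integers(1, 6, size=666)
y_pred = np.where(rng.random(666) < 0.92, y_true, rng.integers(1, 6, size=666))

labels = [1, 2, 3, 4, 5]  # open water ... bare ground
cm = confusion_matrix(y_true, y_pred, labels=labels)

# Table 3 lists predictions as rows and truth as columns, so transpose
# scikit-learn's truth-by-prediction matrix to match that layout.
cm_pred_by_truth = cm.T
users_acc = np.diag(cm_pred_by_truth) / cm_pred_by_truth.sum(axis=1)
producers_acc = np.diag(cm_pred_by_truth) / cm_pred_by_truth.sum(axis=0)
overall_acc = np.diag(cm_pred_by_truth).sum() / cm_pred_by_truth.sum()

print("User's accuracy:", np.round(users_acc, 2))
print("Producer's accuracy:", np.round(producers_acc, 2))
print("Overall accuracy:", round(float(overall_acc), 2))
```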
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
