Article

Quantifying Seagrass Density Using Sentinel-2 Data and Machine Learning

1 Department of Geography & Geoinformation Science, Global Environment and Natural Resources Institute, George Mason University, 4400 University Dr, Fairfax, VA 22030, USA
2 Centennial High School, Ellicott City, MD 21042, USA
* Author to whom correspondence should be addressed.
Remote Sens. 2024, 16(7), 1165; https://doi.org/10.3390/rs16071165
Submission received: 17 February 2024 / Revised: 14 March 2024 / Accepted: 25 March 2024 / Published: 27 March 2024

Abstract

Seagrasses, rooted aquatic plants growing completely underwater, are extremely important for coastal ecosystems. They are an important component of the total carbon burial in the ocean, they provide food, shelter, and nursery habitat to many aquatic organisms in coastal ecosystems, and they improve water quality. Due to human activity, seagrass coverage has been rapidly declining, and there is an urgent need to monitor seagrasses consistently. Seagrass coverage has been closely monitored in the Chesapeake Bay since 1970 using aerial photos and ground samples, but these efforts are costly and time-consuming. Many studies have used remote sensing data to identify seagrass bed outlines, but few have mapped seagrass bed density. This study used Sentinel-2 satellite data and machine learning in Google Earth Engine, together with Chesapeake Bay Program field data, to map seagrass density. We used seagrass density data from Chincoteague and Sinepuxent Bays to train machine learning algorithms and evaluate their accuracies. Out of the four machine learning models tested (Naive Bayes (NB), Classification and Regression Trees (CART), Support Vector Machine (SVM), and Random Forest (RF)), the RF model outperformed the other three, with an overall accuracy of 0.874 and a Kappa coefficient of 0.777. The SVM and CART models performed similarly, and NB performed the poorest. We tested two different approaches to assess the models’ accuracy. When we used all the available ground samples to train the models, model performance was associated with seagrass density class: higher seagrass density classes had better consumer accuracy, producer accuracy, and F1 scores. However, this association disappeared when the same training data size was used for each class; the very sparse and dense seagrass classes then had higher accuracies than the sparse and moderate seagrass density classes. This finding suggests that training data impact machine learning model performance, and that uneven training data sizes across classes can bias assessment results. Selecting proper training data is as important as selecting the machine learning model when using machine learning and remote sensing data to map seagrass density. In summary, this study demonstrates the potential to map seagrass density using satellite data.

Graphical Abstract

1. Introduction

Seagrasses are rooted aquatic plants that grow completely underwater and are extremely important for coastal ecosystems. Seagrass meadows accumulate organic carbon at an annual rate of 83 gC m−2 yr−1, which is larger than what most terrestrial ecosystems accumulate [1]. Despite covering only a relatively small area of the coastal ocean, they are a significant component of the total carbon burial in the ocean, amounting to a global carbon burial rate of 27–44 TgC yr−1 (10–18% of total carbon burial in the ocean) [2]. A large-scale study of Bahamian seagrass carbon stocks and sequestration rates showed that the mapped Bahamian seagrass extent covers an area of up to 46,792 km2, translating into a carbon storage of 723 Mg C and a sequestration rate of 123 Mt CO2 annually, about 68 times the amount of CO2 emitted by the Bahamas in 2018 [3]. Seagrass meadows also provide important ecosystem services. Firstly, seagrass canopies act as filters, so organic matter from other sources accumulates in the meadow sediments [4]. Additionally, the plants efficiently remove excess nitrogen and phosphorus from coastal waters [5]. Furthermore, seagrass ecosystems reduce the water’s exposure to bacterial pathogens and improve water quality [6]. Finally, seagrasses provide food and shelter to many aquatic organisms in coastal ecosystems, from tiny invertebrates to large fish, crabs, turtles, marine mammals, and birds; changes in seagrass density and species composition therefore affect the habitats and populations of these animals [7]. For example, in the last decade, the Chesapeake Bay has experienced a shift in seagrass species from eelgrass to widgeon grass due to anthropogenic impacts. Because eelgrass emerges in early spring and widgeon grass in summer, this shift delays seagrass emergence, which impacts the population density of species such as blue crabs and black sea bass that need seagrass habitat when they migrate into the bay during the spring months [7].
Despite the high economic value of these ecosystem services, seagrass extent is in global decline due to a range of factors, including alterations in coastal habitats [8,9,10]. Nearshore marine ecosystems are experiencing increasing human impact, which is deteriorating water quality in coastal regions [9,11]. Across the globe, anthropogenic impacts have driven a dramatic decrease in seagrass extent and a corresponding loss of seagrass ecosystem services [12]. About 29% of the known areal extent has disappeared since seagrass areas were initially recorded in 1879, and seagrass meadow area is potentially being lost at an estimated average global rate of 1.5% per year [8]. A recent study across seven seagrass bioregions found an overall 19.1% loss of the total area surveyed since 1880 [9]. However, the true extent of seagrass loss remains uncertain because the global seagrass extent itself is poorly constrained [13]. Seagrass losses are expected to continue, emerging as a pressing challenge for coastal management. Active restoration efforts are being conducted in the US [14,15] and in Australia and New Zealand [13] with some success. Monitoring the loss and recovery of meadows globally is critical, yet in situ data are scarce.
The decline of seagrass density can lead to fragmentation and potentially permanent habitat loss. As seagrass density decreases, habitat complexity decreases, making seagrass beds less suitable for many animal life stages, including vulnerable ones such as juveniles seeking to avoid predation. For example, studies showed that juvenile crab density was a positive exponential function of seagrass density at a broad spatial scale in Chesapeake Bay [16,17]. Additionally, many marine species, such as dugongs, manatees, and sea turtles, depend directly on these habitats for food [18]. Seagrass density is also associated with fish abundance [19] and with the benthic biodiversity of bacteria and macroinvertebrates [18]. Faunal species diversity and richness were significantly higher in high-cover seagrass than in low-cover seagrass, indicating increasing habitat value as density increases [20]. In essence, maintaining and mapping seagrass density is a priority for conservation efforts.

1.1. Remote Sensing of Seagrass

To date, seagrass has primarily been monitored using in situ approaches, including ground surveys and hovercraft-based mapping. However, satellites offer a cheaper and more consistent approach to mapping seagrass distribution at high temporal resolution [21]. Currently, a range of high-resolution multispectral data is being used to map seagrass meadows [22]. The success of remote sensing methods depends strongly on the spatial and spectral resolution of the data and on the mapping methods. Higher spatial resolution data allow for higher mapping accuracy than lower resolution data [21]. However, high-resolution data often come at the cost of spatial coverage, as seen in the many local studies using aerial photography to map seagrass meadows [23,24]. Because of its low spatial coverage, aerial photography is too expensive to be conducted for all seagrass meadows. In the past few years, studies have used medium-resolution data such as Sentinel-2 and Landsat to map seagrass at regional and local scales due to their high spatial coverage [22,25,26,27,28,29,30,31]. Many of these studies map seagrass extent, but few map seagrass density.
Remote sensing data are used to detect seagrass with a variety of methods. Most are based on the spectral reflectance of chlorophyll and other leaf constituents in the visible and near-infrared spectrum. Seagrass typically contains chlorophyll-a, a pigment that generates a spectral response characterized by strong absorption at visible wavelengths and higher near-infrared reflection. Seagrass leaf reflectance characteristically exhibits a broad peak centered at 550 nm, a trough at 670–680 nm, and a sharp transition near 700 nm (the red edge), followed by a gently decreasing infrared plateau above 750 nm [32]. The Normalized Difference Vegetation Index (NDVI) and modified vegetation indices derived from airborne and satellite multispectral data have also been used to map seagrass distribution [33,34], and differences in visible spectral reflectance have likewise been exploited to detect seagrass [21]. For a satellite to detect seagrass on the seabed, incoming light first undergoes atmospheric scattering and is then scattered and absorbed by phytoplankton, suspended sediments, and dissolved organic substances in the water column before reaching the bottom. The light reflected by the seagrass then passes back through the water column and the atmosphere before being recorded by the satellite sensor. Thus, atmospheric correction is required to remove the atmospheric effect, and a further correction is required to remove underwater scattering and absorption in order to detect seagrasses correctly [22]. Complicated radiative transfer models are used to retrieve seagrass information [35,36,37,38]. Lastly, machine learning algorithms are readily available, and many studies have adopted various machine learning algorithms to map seagrass distribution [22,25,27,28,29].
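As a purely illustrative sketch (not this study's code), the red/NIR contrast described above can be expressed as NDVI for a Sentinel-2 surface reflectance image in the Google Earth Engine Python API; the image selection below is an arbitrary placeholder:

```python
import ee

ee.Initialize()

# Arbitrary placeholder image from the Sentinel-2 surface reflectance catalog.
img = (
    ee.ImageCollection("COPERNICUS/S2_SR")
    .filterDate("2020-06-01", "2020-10-30")
    .first()
)

# NDVI = (NIR - Red) / (NIR + Red), using B8 (NIR) and B4 (red) for Sentinel-2.
ndvi = img.normalizedDifference(["B8", "B4"]).rename("NDVI")
```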

1.2. Use of Machine Learning with Remote Sensing Data to Map Seagrass

As classification techniques have advanced, machine learning has been used to classify seagrass and non-seagrass pixels based on satellite reflectance measurements. The classification models include both supervised learning (Random Forest (RF), Gaussian Naïve Bayes (GNB), Support Vector Machine (SVM)) and unsupervised learning (K-Nearest Neighbors (KNN)) [39]. Supervised machine learning techniques have achieved better results [40]. Deep learning algorithms have also gained extraordinary attention [41]. Many studies have tested different machine learning algorithms for mapping seagrass density and found RF and SVM to be the most effective [32,41]. Some studies found that SVM performed best for seagrass mapping; however, the cause of these differences has not been identified.

1.3. Usage of Google Earth Engine for Monitoring Seagrass

Cloud computing platforms are a powerful and efficient tool for processing large volumes of medium- to high-resolution satellite data. Google Earth Engine (GEE) hosts a publicly available catalog of satellite imagery, including Landsat and Copernicus Sentinel-2 data, and geospatial datasets, together with planetary-scale analysis capabilities and machine learning algorithms [42]. GEE has been widely used in research communities for various applications monitoring changes in the land surface at a global scale [43]. Sentinel-2 data hosted on GEE have been used to map seagrass at a larger scale in the Mediterranean region through machine learning [25], and long-term Landsat data have been used in GEE to monitor seagrass change over the last two decades [44]. GEE has become a state-of-the-art platform for remote sensing application studies.

1.4. Chesapeake Bay Program

Seagrass has been closely monitored in the Chesapeake Bay since 1970 using RGB and near-infrared (NIR) aerial imagery and ground samples. Each year, multispectral (RGB and NIR) aerial imagery at 24 cm spatial resolution is collected, and concurrent ground surveys are conducted to confirm the existence of, and provide species data for, the seagrass beds. These efforts are very expensive and time-consuming and are unfeasible for larger regions [45]; the techniques are too localized to be applied at a larger scale. The goal of this study was to determine whether freely available medium-resolution Sentinel-2 data in Google Earth Engine (GEE) could be used with machine learning models to map seagrass density in the Chesapeake Bay region, and to assess the accuracy of those models.

2. Materials and Methods

2.1. Study Site

The study site is the Maryland Coastal Bays watershed, also known as the Delmarva (Delaware, Maryland, and Virginia) Bays. It includes five coastal bays (Chincoteague, Newport, Sinepuxent, Isle of Wight, and Assawoman Bays) and the St. Martin River. It covers 71,000 acres of water, 248 miles of shoreline, and 35,000 acres of wetlands. It is one of Maryland’s most ecologically diverse regions and is home to various wildlife, including 108 rare, endangered, or threatened species (https://mdcoastalbays.org/ (accessed on 24 March 2024)) (Figure 1). The mapped seagrass shows that the majority of the vegetated area is densely vegetated and a smaller portion is sparsely vegetated. The amount of seagrass in this area has fluctuated over the past decade, and a slight increase in seagrass vegetation was observed from 2019 to 2020, as reported by the Chesapeake Bay Program (https://www.vims.edu/research/units/programs/sav/access/charts/segments/ (accessed on 24 March 2024)).

2.2. Datasets

2.2.1. Satellite Data

This study uses Copernicus Sentinel-2 surface reflectance data from the GEE archive. Sentinel-2 has 13 spectral bands in the visible, near-infrared, and shortwave infrared parts of the spectrum, with varying spatial resolutions per band (see Table 1), and a 5-day temporal resolution at our study site. Surface reflectance data from June 2020 to October 2020 were used. All spectral bands except band 10 (the cirrus cloud band) were used in this study. These bands have 10 m, 20 m, and 60 m spatial resolution (see Table 1), but GEE reprojected all bands to 10 m spatial resolution in the WGS84 World Geodetic System (see Figure 2).
Table 1. Sentinel-2 spectral bands and associated spatial resolution.
| Sentinel-2 Bands | Wavelength (nm) | Bandwidth (nm) | Resolution (m) |
|---|---|---|---|
| Band 1—Coastal aerosol | 442.7 | 21 | 60 |
| Band 2—Blue | 492.4 | 66 | 10 |
| Band 3—Green | 559.8 | 36 | 10 |
| Band 4—Red | 664.6 | 31 | 10 |
| Band 5—Vegetation red edge | 704.1 | 15 | 20 |
| Band 6—Vegetation red edge | 740.5 | 15 | 20 |
| Band 7—Vegetation red edge | 782.8 | 20 | 20 |
| Band 8—NIR | 832.8 | 106 | 10 |
| Band 8A—Narrow NIR | 864.7 | 21 | 20 |
| Band 9—Water vapour | 945.1 | 20 | 60 |
| Band 10—SWIR—Cirrus | 1373.5 | 31 | 60 |
| Band 11—SWIR | 1613.7 | 91 | 20 |
| Band 12—SWIR | 2202.4 | 175 | 20 |

2.2.2. Field Data

The ground truth seagrass data were obtained from the Virginia Institute of Marine Science (VIMS) (https://www.vims.edu/index.php (accessed on 24 March 2024)). Aerial photography and in situ data were used to map the distribution, density, and species composition of seagrass in the mid-Atlantic region. The imagery is multispectral (RGB and NIR) and is acquired at an approximate altitude of 4023 m, with a ground sample distance of approximately 24 cm. Flight lines used to obtain the photography are positioned to include all areas known to have seagrass, as well as most areas that could potentially have seagrass (i.e., all areas where water depths are less than 2 m at mean low water). Flights are timed during the peak growing seasons of the species known to inhabit each area, and the timing is selected based on VIMS guidelines, which determine whether an image is acquired under nearly optimal conditions for detecting seagrass and can be accurately interpreted. The photos are interpreted by identifying and delineating all seagrass beds, using all available information, including knowledge of aquatic grass signatures on screen, the distribution of seagrass from prior years, and ground survey information.
Acting as supplemental information to the aerial photography, the in situ data consists of ground surveys in various regions of Chesapeake Bay to confirm the existence and density of some seagrass beds mapped from the aerial imagery, as well as seagrass beds that were too small to be visible on the imagery. However, not all areas of Chesapeake Bay are ground surveyed (https://www.vims.edu/research/units/programs/sav/methods/ (accessed on 24 March 2024)).
In addition to delineating seagrass bed boundaries, seagrass density is estimated within each bed by visually comparing it to an enlarged crown density scale similar to those developed for estimating the crown cover of forest trees from aerial photography [46]. Bed density is categorized into four classes based on a subjective comparison with the density scale. These are 1 (very sparse, <10% coverage); 2 (sparse, 10–40%); 3 (moderate, 40–70%); or 4 (dense, 70–100%). Either the entire bed or subsections of a bed are assigned a density number (1 to 4) corresponding to these density classes.
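For concreteness, a minimal sketch (our own illustration, not part of the VIMS protocol) of how these density codes can be encoded as integer labels for the classification step; class 0 is our assumed label for the manually drawn non-seagrass polygons described in Section 2.3.3:

```python
# Density codes used as class labels; 0 (non-seagrass) is an added background
# class for manually drawn clear-water polygons (see Section 2.3.3).
DENSITY_CLASSES = {
    0: "non-seagrass",
    1: "very sparse (<10% coverage)",
    2: "sparse (10-40% coverage)",
    3: "moderate (40-70% coverage)",
    4: "dense (70-100% coverage)",
}
```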

2.3. Methodology

This study uses Sentinel-2 satellite data hosted on the Google Earth Engine cloud and four supervised learning models available through the GEE platform to classify seagrass density in the Chesapeake Bay area. The field seagrass density class data collected through the Chesapeake Bay Program were used to train the models. A schematic representation of the methodology is shown in Figure 3. The following describes the preprocessing, the machine learning models used in this study, and the training and classification procedure.

2.3.1. Preprocessing Copernicus Sentinel 2 Satellite Data

Sentinel-2 surface reflectance images were filtered to the period of 1 June 2020 to 30 October 2020 and to scenes with cloud cover below 5%. The per-pixel median over all images was taken to produce one composite image most representative of the ground survey data, which were collected throughout this period. To make the machine learning step more efficient, land and ocean water pixels were masked out. The ocean water was manually clipped using the land bordering the coastal water, and land pixels were removed by keeping only pixels with a Normalized Difference Water Index (NDWI) value > 0.65. NDWI was calculated from the surface reflectance of the green (Sentinel-2 Band 3) and shortwave infrared (SWIR, Sentinel-2 Band 11) bands:

\[ \mathrm{NDWI} = \frac{B_{3} - B_{11}}{B_{3} + B_{11}} \]

where B3 and B11 are the Band 3 and Band 11 surface reflectance values. NDWI has often been used to identify coastlines [47]. NDWI values show clear differences between land and water bodies (Figure 4); the threshold (0.65) was selected based on the NDWI histogram (Figure 4) and visual comparison with the coastline to separate water bodies from land. Figure 4 shows the area of the clipped coastal water body.
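A minimal sketch of this preprocessing in the GEE Python API, assuming the COPERNICUS/S2_SR catalog entry; the study-area rectangle below is a rough placeholder, not the exact boundary used in this study:

```python
import ee

ee.Initialize()

# Rough placeholder for the Maryland Coastal Bays study area.
aoi = ee.Geometry.Rectangle([-75.5, 37.9, -75.0, 38.5])

# Low-cloud Sentinel-2 surface reflectance, June-October 2020, median composite.
composite = (
    ee.ImageCollection("COPERNICUS/S2_SR")
    .filterBounds(aoi)
    .filterDate("2020-06-01", "2020-10-30")
    .filter(ee.Filter.lt("CLOUDY_PIXEL_PERCENTAGE", 5))
    .median()
)

# NDWI = (B3 - B11) / (B3 + B11); pixels with NDWI > 0.65 are kept as water.
ndwi = composite.normalizedDifference(["B3", "B11"]).rename("NDWI")
composite_water = composite.updateMask(ndwi.gt(0.65)).clip(aoi)
```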
Many studies remove the scattering from the atmosphere and the absorption from the water column. The atmospheric effect has already been corrected by the Sentinel-2 team based on the libRadtran radiative transfer model [48]. The interference from light scattering and absorption in the water column was not corrected in this study due to a lack of bathymetry data; two studies showed that water column correction does not significantly improve classification results compared to using solely atmospherically corrected data [31,40].

2.3.2. Model

This study adopted four supervised machine learning algorithms available in GEE:
  • Naive Bayes (NB) is a probabilistic classification algorithm based on Bayes’ theorem. It is widely used for text classification tasks [49]. The algorithm assumes that the presence or absence of a particular feature is independent of the presence or absence of other features, which may not always be true in real-world scenarios.
  • Classification and Regression Trees (CART) is a decision tree algorithm used for both classification and regression tasks [50]. CART works by recursively partitioning the dataset into subsets based on the values of different features, creating a tree-like structure where each node represents a decision based on a specific feature.
  • Support Vector Machine (SVM) classifies data points by finding a hyperplane that best separates the different classes, making the margin between the sets of data points as large as possible [51]. A radial basis function (RBF) kernel was used.
  • Random Forest (RF) uses an ensemble of decision trees to vote on the outcome: the data are split across multiple trees, and the final decision is made by a vote over all the trees’ decisions [52]. This wisdom-of-crowds approach generally makes the ensemble more accurate than any single tree. This study used 15 decision trees with 12 variables per split (see the sketch after this list).
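As a rough illustration, these four classifiers can be instantiated in the GEE Python API as follows; the RF settings and the SVM kernel come from the text above, while all other arguments are left at GEE defaults (a sketch, not the authors' exact configuration):

```python
import ee

ee.Initialize()

nb = ee.Classifier.smileNaiveBayes()           # Naive Bayes
cart = ee.Classifier.smileCart()               # CART decision tree
svm = ee.Classifier.libsvm(kernelType="RBF")   # SVM with an RBF kernel
rf = ee.Classifier.smileRandomForest(
    numberOfTrees=15,       # 15 decision trees
    variablesPerSplit=12,   # 12 variables per split
)
```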

2.3.3. Training and Testing the Machine Learning Model and Seagrass Density Classification

The ground truth seagrass density data were used for training and testing the machine learning algorithms. In addition to the seagrass density data, non-seagrass polygons were manually drawn in clear water areas and merged with the seagrass density data for training and testing (Figure 5 and Table 2). The combined dataset’s pixels were randomly assigned to either training or validation data, with 70% of the pixels used for training and the remaining 30% used for validation. This study used all spectral bands except band 10.
A confusion matrix was created for each trained machine learning algorithm based on the validation data. The trained algorithms were evaluated by calculating the producer and consumer accuracies and F1-scores for each density class, the overall accuracy, and Cohen’s Kappa coefficient. Overall accuracy reports the overall proportion of pixels that are correctly mapped. The Kappa coefficient evaluates how well the classification performed after removing the agreement expected by chance. Producer accuracy is the probability that a given ground truth class is classified correctly; it is the accuracy from the point of view of the map producer. Consumer (user) accuracy is the probability that a pixel predicted to be in a certain class really is in that class; it is the accuracy from the point of view of map users. The F1-score blends precision (consumer accuracy) and recall (producer accuracy) using their harmonic mean, F1 = 2PR/(P + R); it is only high if both precision and recall are high, ensuring a good balance of both.
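A minimal sketch, not the authors' code, of the 70/30 split and these accuracy metrics using GEE's confusion-matrix tools, continuing from the earlier sketches: `composite_water` is the masked composite from the preprocessing sketch, and `labels` stands for a placeholder FeatureCollection of density polygons with an integer 'class' property:

```python
bands = ["B1", "B2", "B3", "B4", "B5", "B6",
         "B7", "B8", "B8A", "B9", "B11", "B12"]  # all bands except B10

# Sample the composite inside the labeled polygons and split 70/30 at random.
samples = (
    composite_water.select(bands)
    .sampleRegions(collection=labels, properties=["class"], scale=100)
    .randomColumn("rand")
)
training = samples.filter(ee.Filter.lt("rand", 0.7))
validation = samples.filter(ee.Filter.gte("rand", 0.7))

rf = ee.Classifier.smileRandomForest(numberOfTrees=15, variablesPerSplit=12)
trained = rf.train(features=training, classProperty="class", inputProperties=bands)

# Confusion matrix on the held-out 30% and the derived accuracy metrics.
matrix = validation.classify(trained).errorMatrix("class", "classification")
print(matrix.accuracy().getInfo())           # overall accuracy
print(matrix.kappa().getInfo())              # Cohen's Kappa
print(matrix.producersAccuracy().getInfo())  # per-class producer accuracy
print(matrix.consumersAccuracy().getInfo())  # per-class consumer (user) accuracy
print(matrix.fscore(1).getInfo())            # per-class F1 score
```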
This study used two different strategies to train and test the machine learning modules.
  • Using all ground truth data to train and test the models: To maximize the ground samples (Table 2), the models were trained using the full (unbalanced) set of ground samples. Due to the computational time limit on GEE, the models had to be trained and tested at 100 m spatial resolution. At 100 m, the number of training and testing samples was low for the very sparse seagrass density class (roughly 48 pixels). To remedy this, we tested the models in two additional scenarios: combining the very sparse and sparse seagrass density classes into one, and training the machine learning algorithms at 10 m spatial resolution but with a fraction of the available ground sample data.
  • Using a stratified sampling approach to train and test the models: The ground truth data are unevenly distributed between classes (see Table 2); there were fewer samples of the very sparse and sparse seagrass density classes than of the other classes, whereas machine learning algorithms typically assume an equal distribution of classes. The same number of ground samples for each class was therefore used to train and test the models, to assess model performance and the impact of uneven ground samples on model accuracies. We set the sample size for every class equal to that of the smallest (very sparse) density class (see the sketch after this list).
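A minimal sketch of this equal-size sampling with ee.Image.stratifiedSample, reusing the placeholders from the earlier sketches; rasterizing the polygons into a 'class' band is our own assumption about one convenient way to do it:

```python
# Rasterize the density polygons into an integer 'class' band.
label_img = (
    labels.reduceToImage(properties=["class"], reducer=ee.Reducer.first())
    .toInt()
    .rename("class")
)

# Draw the same number of pixels per class at the native 10 m grid.
stratified = (
    composite_water.select(bands)
    .addBands(label_img)
    .stratifiedSample(
        numPoints=4880,      # size of the smallest (very sparse) class
        classBand="class",
        region=aoi,
        scale=10,
    )
    .randomColumn("rand")    # then split 70/30 as before
)
```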

3. Results

3.1. Using All Unbalanced Ground Samples to Train and Test the Models

First, to maximize the ground samples (Table 2), the models were trained using the full set of ground samples at 100 m spatial resolution, the coarsest workable resolution given the computational limit on GEE. Second, due to the low number of samples at 100 m for the very sparse seagrass density class, the models were also trained and tested with the very sparse and sparse classes merged. Lastly, the models were trained and tested at the original 10 m spatial resolution but with a fraction of the available ground sample data. The results are discussed separately.

3.1.1. Using All Unbalanced Ground Samples at 100 m Resolution

Table 3 compares the overall accuracy and Kappa coefficient for the four machine learning models: Naive Bayes (NB), Classification and Regression Trees (CART), Support Vector Machine (SVM), and Random Forest (RF). Out of the four algorithms, NB performed the worst, with an overall accuracy and Kappa coefficient of 0.653 and 0.415, respectively, and CART performed second worst, with an overall accuracy and Kappa coefficient of 0.825 and 0.706. Both SVM and RF outperformed NB and CART. The performances of SVM and RF were very similar, with RF slightly better than SVM: the overall accuracy was 0.868 for SVM and 0.874 for RF (a difference of only 0.006), and the Kappa coefficients were 0.766 for SVM and 0.777 for RF (a difference of only 0.011).
Table 4 reports the same information as Table 3 except that the very sparse and sparse seagrass density classes were merged into one class. The models showed only slight improvement after merging. NB improved the most, with its overall accuracy increasing from 0.653 to 0.702 and its Kappa coefficient from 0.415 to 0.477. CART improved very little, with overall accuracy and Kappa coefficient increasing by 0.01 and 0.003, respectively. The overall accuracy for SVM increased slightly (0.868 to 0.872), while its Kappa coefficient was essentially unchanged (0.766 to 0.765). RF performed slightly worse: its overall accuracy and Kappa coefficient decreased by 0.003 and 0.017, respectively. Since SVM and RF performed best overall, only their consumer and producer accuracies are discussed below.
Table 3. Table of overall accuracy and Kappa coefficient of all four machine learning models using all available ground samples to train and test the models.
| ML Models | Overall Accuracy | Kappa Coefficient |
|---|---|---|
| NB | 0.653 | 0.415 |
| CART | 0.825 | 0.706 |
| SVM | 0.868 | 0.766 |
| RF | 0.874 | 0.777 |
Table 4. The same as Table 3 except with the merged very sparse and sparse seagrass density classes.
| ML Models | Overall Accuracy | Kappa Coefficient |
|---|---|---|
| NB | 0.702 | 0.477 |
| CART | 0.835 | 0.709 |
| SVM | 0.872 | 0.765 |
| RF | 0.871 | 0.760 |
  • SVM machine learning results: The SVM machine learning algorithm accuracies are shown in Table 5 (four seagrass density classes) and Table 6 (three seagrass density classes). For this study site, the sample size for each class increased with seagrass density, except for the non-seagrass class.
    The consumer accuracies for SVM were 0.994, 0.091, 0.163, 0.626, and 0.855, while the producer accuracies were 0.983, 1.0, 0.509, 0.605, and 0.754 for non-seagrass, very sparse (<10% cover), sparse (10–40% cover), moderate (40–70% cover), and dense (>70% cover) seagrass, respectively. The higher the seagrass density, the higher the consumer accuracies, except for the non-seagrass case, which had the highest accuracy. In this study, denser seagrass density classes had larger sample sizes than sparser seagrass density classes. The results also indicated that consumer accuracies were associated with training size. The more training data, the higher the consumer accuracies. F1-scores showed a strong dependence on sample size.
    The producer accuracy showed less dependence on training size than the consumer accuracy. The very sparse seagrass density class had the highest producer accuracy, followed by the non-seagrass class (0.983). The high producer accuracy for the very sparse class (<10% cover) likely carries large uncertainty due to its very small sample area (0.488 km2) for training and testing. For the sparse (10–40% cover), moderate (40–70% cover), and dense (>70% cover) classes, producer accuracy increased with seagrass density and sample size. The high non-seagrass accuracy was likely a result of having much more uniform data than the seagrass regions. Overall, producer accuracy varied less than consumer accuracy and was less dependent on seagrass density and sample size.
    Due to the limited ground samples for the very sparse seagrass density class, it was merged into the sparse seagrass density class. This eliminated the extreme accuracy value (1.0) and produced more reasonable results (Table 6). Overall, the new accuracies followed the same patterns as in Table 5: consumer accuracy increased with seagrass density (and training size), while producer accuracy was less dependent on either.
Table 5. Table of seagrass density class, sample area, consumer and producer accuracies, and F1-score for SVM using all available ground samples to train and test the models.
| Density | Area (km²) | Consumer Accuracy | Producer Accuracy | F1-Score |
|---|---|---|---|---|
| <10% | 0.488 | 0.091 | 1.0 | 0.167 |
| 10–40% | 4.800 | 0.163 | 0.509 | 0.248 |
| 40–70% | 10.516 | 0.626 | 0.605 | 0.615 |
| >70% | 18.651 | 0.855 | 0.754 | 0.801 |
| Non-seagrass | 53.014 | 0.994 | 0.983 | 0.987 |
Table 6. The same as Table 5 with the merged very sparse and sparse seagrass density classes.
| Density | Area (km²) | Consumer Accuracy | Producer Accuracy | F1-Score |
|---|---|---|---|---|
| <40% | 5.288 | 0.308 | 0.676 | 0.423 |
| 40–70% | 10.516 | 0.590 | 0.581 | 0.585 |
| >70% | 18.651 | 0.851 | 0.747 | 0.795 |
| Non-seagrass | 53.014 | 0.993 | 0.985 | 0.989 |
  • RF machine learning results: The accuracies for RF are shown in Table 7. The consumer accuracies were 0.996, 0.045, 0.386, 0.646, and 0.812, and the producer accuracies were 0.968, 0.167, 0.508, 0.670, and 0.795 for the non-seagrass, very sparse (<10% cover), sparse (10–40% cover), moderate (40–70% cover), and dense (>70% cover) seagrass density classes, respectively. The consumer and producer accuracies and F1-scores followed the same pattern as the SVM consumer accuracy: the higher the seagrass density, the better the accuracies, except for the non-seagrass class, which had the highest accuracy. The results also suggested that the consumer and producer accuracies and F1-scores were associated with training size. Producer accuracy showed slightly less dependence on seagrass density and training size: the lower-density classes (very sparse (<10% cover), sparse (10–40% cover), and moderate (40–70% cover)) had higher producer than consumer accuracy, and only the dense seagrass class (>70% cover) had slightly better consumer than producer accuracy. Merging the very sparse and sparse seagrass density classes did not improve the accuracies compared to the case without merging (Table 8), although the accuracy patterns were exactly the same.
    Compared to SVM, RF had slightly better producer accuracy but worse consumer accuracy for moderate (40–70% cover) and dense (>70% cover) seagrass classes. The opposite was true for sparse seagrass classes.

3.1.2. Using an Equal Fraction of Unbalanced Ground Samples to Train and Test the Models

The model performance was also assessed using an equal fraction of the available ground samples to train and test the machine learning models. The overall accuracies and Kappa coefficients showed a similar pattern to using all available ground samples (Table 9). Among the four models tested, RF outperformed the rest, CART was slightly better than SVM, and NB performed the worst. However, the overall accuracy values slightly decreased compared to using all the available data for training and testing. For example, the overall accuracy for RF changed from 0.874 to 0.835, while the Kappa coefficient did not change (0.777).
The consumer and producer accuracies and F1-score showed similar patterns as discussed before (Table 10 and Table 11). Consumer accuracy for SVM had the highest dependence on sample size. Producer accuracy and F1 scores showed less dependence on sample size.
Table 9. Table of overall accuracy and kappa coefficient for all machine learning models using an equal fraction of ground samples for each class to train and test the models.
| ML Models | Overall Accuracy | Kappa Coefficient |
|---|---|---|
| NB | 0.509 | 0.360 |
| CART | 0.769 | 0.688 |
| SVM | 0.755 | 0.666 |
| RF | 0.835 | 0.777 |

3.2. Using Stratified Ground Samples to Train and Test the Models

The stratified sampling approach identified the very sparse seagrass density class as having the smallest sample size, 4880 pixels at 10 m spatial resolution. Accordingly, 4880 pixels were randomly selected from each class’s ground samples to train and test the models. To assess the impact of sample size on model performance, the models were also trained and tested with roughly half that size (2500 pixels at 10 m spatial resolution). Table 12 and Table 13 show the overall accuracies and Kappa coefficients. Out of the four models tested, RF had the highest accuracy, the performance of SVM and CART was similar, and NB was the worst. Decreasing the sample size slightly decreased the accuracies for all models except SVM. For example, the overall accuracy for RF decreased from 0.818 to 0.783, and the Kappa coefficient decreased from 0.774 to 0.729. Compared to the accuracies using all ground samples, the overall accuracy for RF decreased from 0.874 to 0.818 and the Kappa coefficient from 0.777 to 0.774. This result suggests a tradeoff between using all the ground samples and using equal samples for each class to train the model: more training data can result in better overall model performance.
CART had a similar pattern, and NB was the worst model. The following discusses the consumer and producer accuracies and F1-scores for each class using the RF and SVM models only.
Table 12. Table of sample size (# of 10 m pixel), overall accuracy, and kappa coefficient for all four machine learning models using the equal sample size for each class to train and test the model.
| ML Models | Sample Size (# of 10 m Pixels) | Overall Accuracy | Kappa Coefficient |
|---|---|---|---|
| NB | 4880 | 0.464 | 0.333 |
| CART | 4880 | 0.743 | 0.679 |
| SVM | 4880 | 0.727 | 0.657 |
| RF | 4880 | 0.818 | 0.774 |
Table 13. Table of sample size (# of 10 m pixel), overall accuracy, and kappa coefficient for four machine learning models using the equal sample size for each class to train and test the model.
| ML Models | Sample Size (# of 10 m Pixels) | Overall Accuracy | Kappa Coefficient |
|---|---|---|---|
| NB | 2500 | 0.441 | 0.305 |
| CART | 2500 | 0.713 | 0.641 |
| SVM | 2500 | 0.752 | 0.690 |
| RF | 2500 | 0.783 | 0.729 |
Contrary to the results using all available ground samples to train and test the models, the results using an equal sample size do not show a monotonic dependence on seagrass density class for either the SVM or RF models (Table 14 and Table 15). The non-seagrass class still had the highest accuracy, and the very sparse seagrass class tended to have the second-highest accuracy, followed by the dense seagrass class; the moderate seagrass density classes had the lowest accuracy. Consumer accuracy, producer accuracy, and F1-score all showed consistent patterns, and the SVM and RF models had very similar patterns of accuracies (Table 14 and Table 15).

3.3. Comparison of Classification Maps

Figure 6 and Figure 7 compare the seagrass density classification maps for the four machine learning models. Figure 6 illustrates classification maps using all available ground samples for each class to train and test the models, and Figure 7 displays the maps using the equal sample size (4880 pixels at 10 m spatial resolution) for each class. Visual inspection indicates that the NB classification map differs markedly from the other three: NB tended to classify non-seagrass areas as very sparse seagrass. The CART and RF maps are very similar to each other, while the SVM model tended to classify more areas as dense seagrass rather than sparse seagrass compared to RF.
The classification maps differed substantially when the models were trained differently (Figure 6 and Figure 7). When the models were trained using all ground samples, they tended to map more of the dense seagrass density class than other classes, likely due to the low accuracy of the low seagrass density classes under this training approach. In contrast, when the models were trained using equal ground samples for each class, the classification maps contained more of the moderate and sparse seagrass density classes. The classification maps show more seagrass in the coastal areas and less in the deep water at the center of the bays, which is consistent with ground observations.

4. Discussion

Out of the four machine learning algorithms (Naive Bayes (NB), Classification and Regression Trees (CART), Support Vector Machine (SVM), and Random Forest (RF)) tested in this study, RF outperformed the other three models, SVM and CART had very similar performance, and NB performed the poorest.
Using all the available ground samples, both SVM and RF produced better consumer and producer accuracies for the dense and moderate seagrass classes than for the sparse and very sparse classes. Better performance for dense seagrass classes has been reported in previous studies. For example, a study using hyperspectral imagery to map seagrass, also with a decision tree approach, found better performance in high-density seagrass meadows [32]. Another study used Maxar’s WorldView-2 and WorldView-3 high-spatial-resolution commercial satellite imagery with a deep convolutional neural network (DCNN) and also found that the classification performed best in areas of dense, continuous seagrass compared to areas of sparse, discontinuous seagrass [53]. A similar study used hyperspectral data and a maximum likelihood classification method to map seagrass density and achieved higher consumer accuracy for denser seagrass than for sparser meadows, while producer accuracy was less dependent on seagrass density [33]; this agrees with our accuracy results using the SVM classifier. The conclusion seems logical: satellite measurements cannot capture the fine spatial patterns of plant distribution and biomass within seagrass meadows, particularly in sparsely vegetated areas, because the seagrass signal mixes with water column scattering [33].
The analysis showed that model performance for each class depends on the training data size used. More training data available for denser seagrass classes may cause better model accuracy for dense seagrass classes. Previous studies have shown that training set size can greatly impact classification accuracy and consequently has been a major focus of attention in research [54]. This research has generally found a positive relationship between classification accuracy and the size of the training set, following a power function for a wide range of classifiers [55].
Further analysis using a different training approach showed that when the same number of ground samples was used for each class to train the models, the model accuracies were no longer related to seagrass density class. This was evident for both the SVM and RF models (Table 14 and Table 15): the very sparse seagrass class tended to have the highest accuracy, followed by the dense seagrass class, and the moderate seagrass density classes had the lowest accuracy, with consumer accuracy, producer accuracy, and F1-score all showing consistent patterns. Our analysis demonstrates that the training data, and how the models are trained with them, can impact model performance dramatically; machine learning models must be implemented with care.
Using the same sample size for each class for training and testing has some tradeoffs. It will reduce the overall accuracy. For example, when comparing two training methods (using all the ground samples and the equal sample size for each class), the overall accuracy decreased for RF from 0.874 to 0.818 and the Kappa coefficient from 0.777 to 0.774. This result suggests that there is a tradeoff when using all the ground samples versus using an equal number of ground samples for each class to train the model. More training data can result in more accurate overall model performance. Collecting a large training data set for supervised classifiers can be challenging, especially for studies covering a large area, which may be typical of many real-world applied projects [56]. More innovative ideas, such as bootstrapping or Monte Carlo simulations, may help.
Several parts of the preprocessing could be improved. To generate a clear composite image, we used the median value; a previous study showed that the first quartile yields less noisy image composites because it filters out higher reflectances (clouds and sunlight) [25]. Using the first quartile to generate the composite might therefore improve the model’s performance.
Incorporating bathymetry data could improve the accuracy and make the model more transferable. The scattering and absorption in the water column impose additional interference on remotely sensed measurements of submerged habitats. Bathymetric data have been used to correct light attenuation by the water column in order to resolve bottom reflectance [26,32], and such data can be obtained from lidar and multispectral remote sensing [57]. Adding bathymetry data for this region could thus improve model performance. However, a few studies found that water column correction offered little improvement for mapping seagrass density, particularly when machine learning classifiers are applied to atmospherically corrected surface reflectance measurements [31,40]. The study in [40] found that water column correction models only provide minor improvements in calm, clear, and shallow waters; since the assumptions of the water column correction model are invalid in complex and deep areas, water column corrections are discouraged in more complex scenarios with challenging water depths (25 and 35 m). The study in [31], however, argued that both of their models (random forests) are data-driven and that their finding should be restricted to single-image analyses under optimal water conditions (calm surface, lowest turbidity). Accuracy indicates how well a model fits the data, and similar accuracies do not imply similar model transferability; they hypothesized that water-column-corrected data might be better suited to transferring a random forest model to other image acquisitions.
The water body may contain organic material other than seagrass, such as macroalgae and macrophytes, which also use chlorophyll for photosynthesis [58,59]. Many studies have classified seagrass and macroalgae as separate classes [3,32,60]. This study does not differentiate seagrass from other non-seagrass plant-like algae due to a lack of ground macroalgae data for training. Thus, the accuracy of identifying seagrass in this study could be overestimated if algae other than seagrass exist in Chesapeake Bay. Future work includes collecting ground macroalgae data and training the classifiers using ground samples of both seagrass and macroalgae.

5. Conclusions

This study introduced a proof-of-concept procedure to map seagrass density in the Chesapeake Bay region using freely available medium-high spatial resolution (10 m, 20 m, and 60 m) Sentinel-2 data and machine learning models on Google Earth Engine, a flexible, time- and cost-efficient cloud-based system. Out of the four machine learning algorithms tested (Naive Bayes (NB), Classification and Regression Trees (CART), Support Vector Machine (SVM), and Random Forest (RF)), RF outperformed the other three, with an overall accuracy of 0.874 and a Kappa coefficient of 0.777; CART and SVM were very similar, and NB performed the worst. We tested two different approaches to assess the models’ accuracy. When using all the available training data, our analysis suggested that model performance was associated with seagrass density class: the denser the seagrass class, the better the models performed. However, when using the same size of training data for each class, this association disappeared, implying that the apparent relationship between accuracy and seagrass density class is an artifact of using all available (unbalanced) ground samples to train and test the models. This finding suggests that, in addition to the model selected, an important factor determining overall accuracy is the training data and how the models are trained. This study demonstrates the potential to map seagrass density using satellite data. For future work, instead of randomly splitting pixels into training and testing sets, a new approach is needed to remove the spatial autocorrelation of satellite pixels and obtain the true accuracy of the machine learning models.

Author Contributions

Conceptualization, M.M. and J.J.Q.; methodology, M.M.and J.J.Q.; software, M.M.; validation, M.M.; formal analysis, M.M.; investigation, M.M. and J.J.Q.; resources, M.M. and J.J.Q.; data curation, M.M.; writing—original draft preparation, M.M.; writing—review and editing, M.M. and J.J.Q.; visualization, M.M.; supervision, J.J.Q.; project administration, J.J.Q. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Duarte, C.M.; Middelburg, J.J.; Caraco, N. Major role of marine vegetation on the oceanic carbon cycle. Biogeosciences 2005, 2, 1–8.
  2. Kennedy, H.; Beggins, J.; Duarte, C.M.; Fourqurean, J.W.; Holmer, M.; Marbà, N.; Middelburg, J.J. Seagrass sediments as a global carbon sink: Isotopic constraints. Glob. Biogeochem. Cycles 2010, 24, GB4026.
  3. Blume, A.; Pertiwi, A.P.; Lee, C.B.; Traganos, D. Bahamian seagrass extent and blue carbon accounting using Earth Observation. Front. Mar. Sci. 2023, 10, 1058460.
  4. Hendriks, I.E.; Sintes, T.; Bouma, T.J.; Duarte, C.M. Experimental assessment and modeling evaluation of the effects of the seagrass Posidonia oceanica on flow and particle trapping. Mar. Ecol. Prog. Ser. 2008, 356, 163–173.
  5. Su, F.; Li, Z.; Li, Y.; Xu, L.; Li, Y.; Li, S.; Chen, H.; Zhuang, P.; Wang, F. Removal of Total Nitrogen and Phosphorus Using Single or Combinations of Aquatic Plants. Int. J. Environ. Res. Public Health 2019, 16, 4663.
  6. Lamb, J.B.; van de Water, J.A.J.M.; Bourne, D.G.; Altier, C.; Hein, M.Y.; Fiorenza, E.A.; Abu, N.; Jompa, J.; Harvell, C.D. Seagrass ecosystems reduce exposure to bacterial pathogens of humans, fishes, and invertebrates. Science 2017, 355, 731–733.
  7. Hensel, M.; Patrick, C.; Orth, R.; Wilcox, D.; Dennison, W.; Gurbisz, C.; Hannam, M.; Landry, J.; Moore, K.; Murphy, R.; et al. Rise of Ruppia in Chesapeake Bay: Climate change–driven turnover of foundation species creates new threats and management opportunities. Proc. Natl. Acad. Sci. USA 2023, 120, e2220678120.
  8. Waycott, M.; Duarte, C.M.; Carruthers, T.J.B.; Orth, R.J.; Dennison, W.C.; Olyarnik, S.; Calladine, A.; Fourqurean, J.W.; Heck, K.L.; Hughes, A.R.; et al. Accelerating loss of seagrasses across the globe threatens coastal ecosystems. Proc. Natl. Acad. Sci. USA 2009, 106, 12377–12381.
  9. Dunic, J.C.; Brown, C.J.; Connolly, R.M.; Turschwell, M.P.; Côté, I.M. Long-term declines and recovery of meadow area across the world’s seagrass bioregions. Glob. Chang. Biol. 2021, 27, 4096–4109.
  10. Orth, R.J.; Carruthers, T.J.B.; Dennison, W.C.; Duarte, C.M.; Fourqurean, J.W.; Heck, K.L.; Hughes, A.R.; Kendrick, G.A.; Kenworthy, W.J.; Olyarnik, S.; et al. A Global Crisis for Seagrass Ecosystems. BioScience 2006, 56, 987.
  11. Zhang, Y.; Yu, X.; Chen, Z.; Wang, Q.; Zuo, J.; Yu, S.; Guo, R. A Review of Seagrass Bed Pollution. Water 2023, 15, 3754.
  12. Arias-Ortiz, A.; Serrano, O.; Masqué, P.; Lavery, P.S.; Mueller, U.; Kendrick, G.A.; Rozaimi, M.; Esteban, A.; Fourqurean, J.W.; Marbà, N.; et al. A marine heatwave drives massive losses from the world’s largest seagrass carbon stocks. Nat. Clim. Chang. 2018, 8, 338–344.
  13. Tan, Y.M.; Dalby, O.; Kendrick, G.A.; Statton, J.; Sinclair, E.A.; Fraser, M.W.; Macreadie, P.I.; Gillies, C.L.; Coleman, R.A.; Waycott, M.; et al. Seagrass Restoration Is Possible: Insights and Lessons From Australia and New Zealand. Front. Mar. Sci. 2020, 7, 617.
  14. Orth, R.J.; Lefcheck, J.S.; McGlathery, K.S.; Aoki, L.; Luckenbach, M.W.; Moore, K.A.; Oreska, M.P.J.; Snyder, R.; Wilcox, D.J.; Lusk, B. Restoration of seagrass habitat leads to rapid recovery of coastal ecosystem services. Sci. Adv. 2020, 6, eabc6434.
  15. Allan, J.D.; McIntyre, P.B.; Smith, S.D.P.; Halpern, B.S.; Boyer, G.L.; Buchsbaum, A.; Burton, G.A.; Campbell, L.M.; Chadderton, W.L.; Ciborowski, J.J.H.; et al. Joint analysis of stressors and ecosystem services to enhance restoration effectiveness. Proc. Natl. Acad. Sci. USA 2013, 110, 372–377.
  16. Ralph, G.; Seitz, R.; Orth, R.; Knick, K.; Lipcius, R. Broad-scale association between seagrass cover and juvenile blue crab density in Chesapeake Bay. Mar. Ecol. Prog. Ser. 2013, 488, 51–63.
  17. Hovel, K.A.; Lipcius, R.N. Effects of seagrass habitat fragmentation on juvenile blue crab survival and abundance. J. Exp. Mar. Biol. Ecol. 2002, 271, 75–98.
  18. Alsaffar, Z.; Pearman, J.K.; Cúrdia, J.; Ellis, J.; Calleja, M.L.; Ruiz-Compean, P.; Roth, F.; Villalobos, R.; Jones, B.H.; Morán, X.A.G.; et al. The role of seagrass vegetation and local environmental conditions in shaping benthic bacterial and macroinvertebrate communities in a tropical coastal lagoon. Sci. Rep. 2020, 10, 13550.
  19. Horinouchi, M.; Sano, M.; Taniuchi, T.; Shimizu, M. Effects of changes in leaf height and shoot density on the abundance of two fishes, Rudarius ercodes and Acentrogobius pflaumii, in a Zostera bed. Ichthyol. Res. 1999, 46, 49–56.
  20. McCloskey, R.M.; Unsworth, R.K. Decreasing seagrass density negatively influences associated fauna. PeerJ 2015, 3, e1053.
  21. Veettil, B.K.; Ward, R.D.; Lima, M.D.A.C.; Stankovic, M.; Hoai, P.N.; Quang, N.X. Opportunities for seagrass research derived from remote sensing: A review of current methods. Ecol. Indic. 2020, 117, 106560.
  22. Pu, R.; Bell, S.; Meyer, C. Mapping and assessing seagrass bed changes in Central Florida’s west coast using multitemporal Landsat TM imagery. Estuar. Coast. Shelf Sci. 2014, 149, 68–79.
  23. Orth, R.J.; Dennison, W.C.; Lefcheck, J.S.; Gurbisz, C.; Hannam, M.; Keisman, J.; Landry, J.B.; Moore, K.A.; Murphy, R.R.; Patrick, C.J.; et al. Submersed Aquatic Vegetation in Chesapeake Bay: Sentinel Species in a Changing World. BioScience 2017, 67, 698–712.
  24. Duffy, J.P.; Pratt, L.; Anderson, K.; Land, P.E.; Shutler, J.D. Spatial assessment of intertidal seagrass meadows using optical imaging systems and a lightweight drone. Estuar. Coast. Shelf Sci. 2018, 200, 169–180.
  25. Traganos, D.; Aggarwal, B.; Poursanidis, D.; Topouzelis, K.; Chrysoulakis, N.; Reinartz, P. Towards Global-Scale Seagrass Mapping and Monitoring Using Sentinel-2 on Google Earth Engine: The Case Study of the Aegean and Ionian Seas. Remote Sens. 2018, 10, 1227.
  26. Traganos, D.; Reinartz, P. Mapping Mediterranean seagrasses with Sentinel-2 imagery. Mar. Pollut. Bull. 2018, 134, 197–209.
  27. Krause, J.R.; Hinojosa-Corona, A.; Gray, A.B.; Burke Watson, E. Emerging Sensor Platforms Allow for Seagrass Extent Mapping in a Turbid Estuary and from the Meadow to Ecosystem Scale. Remote Sens. 2021, 13, 3681.
  28. Ha, N.T.; Manley-Harris, M.; Pham, T.D.; Hawes, I. A Comparative Assessment of Ensemble-Based Machine Learning and Maximum Likelihood Methods for Mapping Seagrass Using Sentinel-2 Imagery in Tauranga Harbor, New Zealand. Remote Sens. 2020, 12, 355.
  29. Topouzelis, K.; Makri, D.; Stoupas, N.; Papakonstantinou, A.; Katsanevakis, S. Seagrass mapping in Greek territorial waters using Landsat-8 satellite images. Int. J. Appl. Earth Obs. Geoinf. 2018, 67, 98–113.
  30. Chen, C.F.; Lau, V.K.; Chang, N.B.; Son, N.T.; Tong, P.H.S.; Chiang, S.H. Multi-temporal change detection of seagrass beds using integrated Landsat TM/ETM+/OLI imageries in Cam Ranh Bay, Vietnam. Ecol. Inform. 2016, 35, 43–54.
  31. Kuhwald, K.; von Deimling, J.S.; Schubert, P.; Oppelt, N. How can Sentinel-2 contribute to seagrass mapping in shallow, turbid Baltic Sea water? Remote Sens. Ecol. Conserv. 2021, 8, 328–346.
  32. Pe’eri, S.; Morrison, J.R.; Short, F.; Mathieson, A.; Lippmann, T. Eelgrass and Macroalgal Mapping to Develop Nutrient Criteria in New Hampshire’s Estuaries using Hyperspectral Imagery. J. Coast. Res. 2016, 76, 209–218.
  33. Valle, M.; Palà, V.; Lafon, V.; Dehouck, A.; Garmendia, J.M.; Borja, A.; Chust, G. Mapping estuarine habitats using airborne hyperspectral imagery, with special focus on seagrass meadows. Estuar. Coast. Shelf Sci. 2015, 164, 433–442.
  34. Kohlus, J.; Stelzer, K.; Müller, G.; Smollich, S. Mapping seagrass (Zostera) by remote sensing in the Schleswig-Holstein Wadden Sea. Estuar. Coast. Shelf Sci. 2020, 238, 106699.
  35. Sagawa, T.; Komatsu, T. Simulation of seagrass bed mapping by satellite images based on the radiative transfer model. Ocean Sci. J. 2015, 50, 335–342.
  36. Dekker, A.G.; Brando, V.E.; Anstee, J.M.; Fyfe, S.; Malthus, T.; Karpouzli, E. Remote Sensing of Seagrass Ecosystems: Use of Spaceborne and Airborne Sensors. In Seagrasses: Biology, Ecology and Conservation; Larkum, A.W.D., et al., Eds.; Springer: Berlin/Heidelberg, Germany, 2007; Volume 7, pp. 347–359.
  37. Hedley, J.D.; Russell, B.J.; Randolph, K.; Pérez-Castro, M.A.; Vásquez-Elizondo, R.M.; Enríquez, S.; Dierssen, H.M. Remote Sensing of Seagrass Leaf Area Index and Species: The Capability of a Model Inversion Method Assessed by Sensitivity Analysis and Hyperspectral Data of Florida Bay. Front. Mar. Sci. 2017, 4, 362.
  38. Hedley, J.D.; Velázquez-Ochoa, R.; Enríquez, S. Seagrass Depth Distribution Mirrors Coastal Development in the Mexican Caribbean—An Automated Analysis of 800 Satellite Images. Front. Mar. Sci. 2021, 8, 733169.
  39. Pham, T.D.; Xia, J.; Ha, N.T.; Bui, D.T.; Le, N.N.; Tekeuchi, W. A Review of Remote Sensing Approaches for Monitoring Blue Carbon Ecosystems: Mangroves, Seagrasses and Salt Marshes during 2010–2018. Sensors 2019, 19, 1933.
  40. Mederos-Barrera, A.; Marcello, J.; Eugenio, F.; Hernández, E. Seagrass mapping using high resolution multispectral satellite imagery: A comparison of water column correction models. Int. J. Appl. Earth Obs. Geoinf. 2022, 113, 102990.
  41. McKenzie, L.J.; Langlois, L.A.; Roelfsema, C.M. Improving Approaches to Mapping Seagrass within the Great Barrier Reef: From Field to Spaceborne Earth Observation. Remote Sens. 2022, 14, 2604.
  42. Gorelick, N.; Hancher, M.; Dixon, M.; Ilyushchenko, S.; Thau, D.; Moore, R. Google Earth Engine: Planetary-scale geospatial analysis for everyone. Remote Sens. Environ. 2017, 202, 18–27.
  43. Wang, L.; Diao, C.; Xian, G.; Yin, D.; Lu, Y.; Zou, S.; Erickson, T.A. A summary of the special issue on remote sensing of land change science with Google earth engine. Remote Sens. Environ. 2020, 248, 112002.
  44. Sebastian, T.; Sreenath, K.; Sreeram, M.P.; Ranith, R. Dwindling seagrasses: A multi-temporal analysis on Google Earth Engine. Ecol. Inform. 2023, 74, 101964.
  45. Lefcheck, J.S.; Wilcox, D.J.; Murphy, R.R.; Marion, S.R.; Orth, R.J. Multiple stressors threaten the imperiled coastal foundation species eelgrass (Zostera marina) in Chesapeake Bay. Glob. Chang. Biol. 2017, 23, 3474–3483.
  46. Paine, D.P. Aerial Photography and Image Interpretation for Resource Management; John Wiley & Sons: Hoboken, NJ, USA, 1981.
  47. Liu, Y.; Wang, X.; Ling, F.; Xu, S.; Wang, C. Analysis of Coastline Extraction from Landsat-8 OLI Imagery. Water 2017, 9, 816.
  48. Mayer, B.; Kylling, A. Technical note: The libRadtran software package for radiative transfer calculations—Description and examples of use. Atmos. Chem. Phys. 2005, 5, 1855–1877.
  49. Shobha, G.; Rangaswamy, S. Chapter 8—Machine Learning. In Handbook of Statistics; Gudivada, V.N., Rao, C., Eds.; Elsevier: Amsterdam, The Netherlands, 2018; Volume 38, pp. 197–228.
  50. Loh, W. Classification and regression trees. WIREs Data Min. Knowl. Discov. 2011, 1, 14–23.
  51. Cortes, C.; Vapnik, V. Support-vector networks. Mach. Learn. 1995, 20, 273–297.
  52. Breiman, L. Random Forest. Mach. Learn. 2001, 45, 5–32. [Google Scholar] [CrossRef]
  53. Coffer, M.M.; Graybill, D.D.; Whitman, P.J.; Schaeffer, B.A.; Salls, W.B.; Zimmerman, R.C.; Hill, V.; Lebrasse, M.C.; Li, J.; Keith, D.J.; et al. Providing a framework for seagrass mapping in United States coastal ecosystems using high spatial resolution satellite imagery. J. Environ. Manag. 2023, 337, 117669. [Google Scholar] [CrossRef] [PubMed]
  54. Foody, G.M.; Mathur, A.; Sanchez-Hernandez, C.; Boyd, D.S. Training set size requirements for the classification of a specific class. Remote Sens. Environ. 2006, 104, 1–14. [Google Scholar] [CrossRef]
  55. Adey, B. Investigating ML Model Accuracy as Training Size Increases. 27 April 2021. Telstra Purple. Available online: https://purple.telstra.com/blog/investigating-ml-model-accuracy-as-training-size-increases (accessed on 14 March 2024).
  56. Ramezan, C.A.; Warner, T.A.; Maxwell, A.E.; Price, B.S. Effects of Training Set Size on Supervised Machine-Learning Land-Cover Classification of Large-Area High-Resolution Remotely Sensed Data. Remote Sens. 2021, 13, 368. [Google Scholar] [CrossRef]
  57. Xu, N.; Wang, L.; Zhang, H.S.; Tang, S.; Mo, F.; Ma, X. Machine Learning Based Estimation of Coastal Bathymetry From ICESat-2 and Sentinel-2 Data. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2024, 17, 1748–1755. [Google Scholar] [CrossRef]
  58. Kutser, T.; Hedley, J.; Giardino, C.; Roelfsema, C.; Brando, V.E. Remote sensing of shallow waters—A 50 year retrospective and future directions. Remote Sens. Environ. 2020, 240, 111619. [Google Scholar] [CrossRef]
  59. Blanco, A.; Qu, J.J.; Roper, W.E. Spectral signatures of hydrilla from a tank and field setting. Front. Earth Sci. 2012, 6, 453–460. [Google Scholar] [CrossRef]
  60. Lee, C.B.; Martin, L.; Traganos, D.; Antat, S.; Baez, S.K.; Cupidon, A.; Faure, A.; Harlay, J.; Morgan, M.; Mortimer, J.A.; et al. Mapping the National Seagrass Extent in Seychelles Using PlanetScope NICFI Data. Remote Sens. 2023, 15, 4500. [Google Scholar] [CrossRef]
Figure 1. The Maryland Coastal Bays watershed, including five coastal bays (Chincoteague Bay, Newport Bay, Sinepuxent Bay, Isle of Wight Bay, and Assawoman Bay) and the St. Martin River.
Figure 2. True-color composite of the clearest Sentinel-2 scene of the study site, acquired between 1 June and 30 October 2020.
Figure 3. A schematic representation of the methodology. Blue arrows indicate data input.
Figure 4. NDWI image, its histogram, and the clipped coastal water for this study.
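To make the water-clipping step concrete, the following is a minimal Earth Engine Python sketch of NDWI-based masking, assuming the public COPERNICUS/S2_SR_HARMONIZED collection. The rectangle used for the area of interest and the 0.0 NDWI cutoff are illustrative placeholders; the study derives its threshold from the NDWI histogram shown in Figure 4.

```python
import ee

ee.Initialize()

# Placeholder area of interest near the Maryland coastal bays (illustrative only).
aoi = ee.Geometry.Rectangle([-75.5, 37.9, -75.0, 38.5])

# Clearest Sentinel-2 surface-reflectance scene in the study window.
scene = (ee.ImageCollection('COPERNICUS/S2_SR_HARMONIZED')
         .filterBounds(aoi)
         .filterDate('2020-06-01', '2020-10-30')
         .sort('CLOUDY_PIXEL_PERCENTAGE')
         .first())

# NDWI = (Green - NIR) / (Green + NIR), i.e., Sentinel-2 (B3 - B8) / (B3 + B8).
ndwi = scene.normalizedDifference(['B3', 'B8']).rename('NDWI')

# Keep water pixels only; the 0.0 cutoff is an assumed stand-in for the
# histogram-derived threshold used in the paper.
water_scene = scene.updateMask(ndwi.gt(0.0))
```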
Figure 5. Seagrass density training and testing datasets.
Figure 6. Seagrass density classification maps using all ground samples for training and testing.
Figure 7. Seagrass density classification maps using an equal sample size (4880 10 m pixels) for every ground-sample class for training and testing.
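Continuing the sketch above, the snippet below shows one way the equal-sample-size experiment could be wired up in Earth Engine: stratifiedSample draws the same number of labeled pixels from each density class, and a Random Forest is trained and applied. The labels image, the 'density' band name, the band list, the 70/30 split, and the 100-tree forest are all assumptions for illustration, not the authors' exact configuration.

```python
import ee

ee.Initialize()

# 'labels' is assumed to be an ee.Image with a 'density' band coding the five
# classes (0 = non-seagrass, 1-4 = increasing seagrass density).
stack = water_scene.addBands(labels)

# Draw 4880 10 m pixels per class, mirroring the equal-sample-size runs.
samples = stack.stratifiedSample(
    numPoints=4880,
    classBand='density',
    region=aoi,
    scale=10,
    seed=42,
    geometries=False,
)

# Assumed 70/30 train/test split via a uniform random column.
samples = samples.randomColumn('rand', 42)
train = samples.filter(ee.Filter.lt('rand', 0.7))
test = samples.filter(ee.Filter.gte('rand', 0.7))

# Random Forest with 100 trees (tree count assumed).
bands = ['B2', 'B3', 'B4', 'B8']
rf = ee.Classifier.smileRandomForest(100).train(train, 'density', bands)

# Classify the masked scene and build an error matrix on the held-out samples.
classified = water_scene.classify(rf)
matrix = test.classify(rf).errorMatrix('density', 'classification')
```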
Table 2. Table of sample areas of seagrass density classes.

Density         Area (km²)
<10%            0.488
10–40%          4.800
40–70%          10.516
>70%            18.651
Non-seagrass    53.014
Table 7. Table of seagrass density class, sample area, consumer accuracy, producer accuracy, and F1-score for RF using all available ground samples to train and test the models.

Density         Area (km²)   Consumer Accuracy   Producer Accuracy   F1-Score
<10%            0.488        0.045               0.167               0.071
10–40%          4.800        0.386               0.508               0.439
40–70%          10.516       0.646               0.670               0.658
>70%            18.651       0.812               0.795               0.803
Non-seagrass    53.014       0.996               0.968               0.982
Table 8. The same as Table 7, but with the very sparse and sparse seagrass density classes merged.

Density         Area (km²)   Consumer Accuracy   Producer Accuracy   F1-Score
<40%            5.288        0.388               0.561               0.460
40–70%          10.516       0.580               0.608               0.594
>70%            18.651       0.804               0.785               0.794
Non-seagrass    53.014       0.999               0.966               0.982
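For reference, consumer's (user's) accuracy is the precision of a mapped class, producer's accuracy is its recall, and the F1-score is their harmonic mean. The generic numpy sketch below, which is not the authors' code, computes the per-class values reported in these tables plus overall accuracy and the Kappa coefficient from a confusion matrix whose rows are reference classes and columns are mapped classes; in Earth Engine, the equivalent numbers come from ee.ConfusionMatrix methods such as consumersAccuracy(), producersAccuracy(), accuracy(), and kappa().

```python
import numpy as np

def per_class_metrics(cm):
    """Per-class accuracy metrics from a confusion matrix.

    cm[i, j] = number of test pixels with reference class i mapped to class j.
    Consumer's (user's) accuracy is precision; producer's accuracy is recall.
    """
    cm = np.asarray(cm, dtype=float)
    tp = np.diag(cm)
    consumer = tp / cm.sum(axis=0)   # diagonal / column totals
    producer = tp / cm.sum(axis=1)   # diagonal / row totals
    f1 = 2 * consumer * producer / (consumer + producer)
    return consumer, producer, f1

def overall_and_kappa(cm):
    """Overall accuracy and Cohen's Kappa from the same confusion matrix."""
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    p_observed = np.trace(cm) / n
    p_expected = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n ** 2
    return p_observed, (p_observed - p_expected) / (1 - p_expected)
```

For the five-class problem here, both functions take the same 5 × 5 matrix, so a single error matrix per model is enough to reproduce every column of the tables.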
Table 10. Table of seagrass density class, sample size, consumer accuracy, producer accuracy, and F1-score for SVM using an equal fraction of ground samples for each class to train and test the models.

Density         Sample Size   Consumer Accuracy   Producer Accuracy   F1-Score
<10%            1200          0.311               0.812               0.450
10–40%          5000          0.467               0.580               0.594
40–70%          7500          0.620               0.570               0.594
>70%            10,000        0.784               0.750               0.766
Non-seagrass    12,500        0.971               0.928               0.949
Table 11. Table of seagrass density class, sample size, consumer accuracy, producer accuracy, and F1-score for RF using an equal fraction of ground samples for each class to train and test the models.

Density         Sample Size   Consumer Accuracy   Producer Accuracy   F1-Score
<10%            1200          0.744               0.804               0.772
10–40%          5000          0.714               0.720               0.717
40–70%          7500          0.719               0.736               0.728
>70%            10,000        0.819               0.828               0.823
Non-seagrass    12,500        0.976               0.945               0.960
Table 14. Table of consumer accuracy, producer accuracy, and F1-score for SVM using an equal sample size for each class to train and test the model.

Density         Consumer Accuracy   Producer Accuracy   F1-Score
<10%            0.854               0.770               0.810
10–40%          0.617               0.601               0.609
40–70%          0.575               0.578               0.576
>70%            0.672               0.769               0.718
Non-seagrass    0.930               0.923               0.927
Table 15. Table of consumer accuracy, producer accuracy, and F1-score for RF using an equal sample size for each class to train and test the model.

Density         Consumer Accuracy   Producer Accuracy   F1-Score
<10%            0.975               0.862               0.915
10–40%          0.777               0.756               0.767
40–70%          0.669               0.730               0.700
>70%            0.735               0.803               0.768
Non-seagrass    0.961               0.936               0.948