Article

Open-Source Analysis of Submerged Aquatic Vegetation Cover in Complex Waters Using High-Resolution Satellite Remote Sensing: An Adaptable Framework

by Arthur de Grandpré 1,2,*, Christophe Kinnard 1,3 and Andrea Bertolo 1,2

1 Centre de recherche sur les interactions bassins versants-écosystèmes aquatiques (RIVE), Université du Québec à Trois-Rivières, 3351 Boul. des Forges, C.P. 500, Trois-Rivieres, QC G9A 5H7, Canada
2 Interuniversity Research Group in Limnology (GRIL), Université de Montréal, C.P. 6128, Succursale Centre-Ville, Montreal, QC H3C 3J7, Canada
3 Centre d’études nordiques (CEN), Université Laval, A.P. 1202, Quebec, QC G1V 0A6, Canada
* Author to whom correspondence should be addressed.
Remote Sens. 2022, 14(2), 267; https://doi.org/10.3390/rs14020267
Submission received: 12 November 2021 / Revised: 28 December 2021 / Accepted: 3 January 2022 / Published: 7 January 2022
(This article belongs to the Special Issue Advances in Remote Sensing of the Inland and Coastal Water Zones)

Abstract

Despite being recognized as a key component of shallow-water ecosystems, submerged aquatic vegetation (SAV) remains difficult to monitor over large spatial scales. Because of SAV’s structuring capabilities, high-resolution monitoring of submerged landscapes could generate highly valuable ecological data. Until now, high-resolution remote sensing of SAV has been largely limited to applications within costly image analysis software. In this paper, we propose an example of an adaptable open-source object-based image analysis (OBIA) workflow to generate SAV cover maps in complex aquatic environments. Using R, QGIS, and Orfeo Toolbox, we apply radiometric calibration, atmospheric correction, a de-striping correction, and a hierarchical iterative OBIA random forest classification to generate SAV cover maps from raw DigitalGlobe multispectral imagery. The workflow is applied to images taken over two spatially complex fluvial lakes in Quebec, Canada, using the Quickbird-02 and Worldview-03 satellites. Classification performance based on training sets reveals conservative SAV cover estimates with less than 10% error across all classes except for lower SAV growth forms in the most turbid waters. In light of these results, we conclude that it is possible to monitor SAV distribution using high-resolution remote sensing within an open-source environment with a flexible and functional workflow.


1. Introduction

The ecological importance of submerged aquatic vegetation (SAV; syn. hydrophytes or macrophytes) has long been recognized in both marine and freshwater sciences. It is acknowledged that SAV provides diverse and important functions at the local, ecosystem, and global scales. By generating productive habitats able to sustain multiple trophic levels [1,2,3] and playing major roles in biogeochemical cycles [4,5,6,7,8,9], SAV represents a major asset for both conservation and climate change research [10,11]. Because of the interactions between their engineering capabilities and their sensitivity to environmental stress, SAV species have been identified as good candidates for sentinel species [10]. On one hand, their role as ecological engineers is related to their ability to regulate water quality through density-dependent feedback mechanisms, acting as sediment traps, nutrient sinks, and biodiversity hotspots, all of which modulate the availability of resources to other species in the ecosystem [6,12,13,14,15]. On the other hand, at lower densities of SAV, individuals become more exposed to the impact of multiple ecological stressors and their distribution starts to depend on environmental factors related to ecological integrity, including nutrient and light availability, hydrology, and climate [10,16,17,18].
Despite the recognition of SAV’s importance, large-scale observations of their ecological integrity are lacking, and their monitoring is generally limited to qualitative point sampling [19]. While some efforts have been made to model SAV distribution at multiple scales [20,21,22,23], it remains difficult to generalize, transfer, and validate these results because calibration data are rare, costly, and difficult to obtain. This lack of data can be attributed to the inherent difficulty of observing and quantifying submerged landscapes. This is especially true in complex and dynamic systems such as fluvial lakes and estuaries. As a complement to classical field observations, medium- to high-resolution satellite remote sensing can be used to efficiently monitor SAV cover and canopies, given the right atmospheric and hydrological conditions [19,24]. Optical satellite remote sensing should be able to provide environmental scientists with some level of information regarding large-scale SAV landscapes. However, shallow, complex, and dynamic inland waters are usually excluded from most remote sensing products because their high variability generates large radiometry changes that can lead to modeling errors and biases. The inherent optical properties (IOPs) of water (turbidity, color, and dissolved organic matter), water depth, atmospheric and meteorological conditions, phytoplankton biomass and species, adjacency to land, as well as sensor noise all play a role in modulating the quantity and quality of the remote sensing signal of SAV [24,25]. These issues are widely documented in the remote sensing literature and impose limits on current aquatic remote sensing applications. For these reasons, classical remote sensing methods that would require precise quantification of SAV’s expected and observed reflectance spectra currently represent a very complex endeavor.
Regardless of these limitations, some of SAV’s ecological sentinel properties can be monitored without precise radiometric measurements. Indeed, a large part of SAV’s engineering capabilities can be attributed to its structuring effects within aquatic landscapes, which increase their spatial complexity [26,27,28]. While radiometric accuracy is of high importance for tasks such as species distribution or biomass studies, shifting the focus towards the quantification of the spatial structure of SAV landscapes using vegetation cover maps could reveal valuable information about ecosystem integrity. Some authors have even suggested that SAV landscape patterns can be linked with ecological status, and can even act as early warning signals of non-linear ecological transitions [29,30,31]. In this sense, methods capable of detecting the local reflectance gradients and contrasts generated by SAV landscapes would represent strong candidates for producing ecologically relevant SAV maps.
As an alternative to pixel-based modeling, object-based image analysis (OBIA) offers a framework that can be used to integrate more complex data than the raw remote sensing signal by including different levels of spatial information related to scale, shape, and texture [32,33,34,35,36]. This higher level of information offers a more robust comparative basis than pixel-based approaches when radiometric accuracy is uncertain, such as in complex water cases. Unfortunately, OBIA workflows in remote sensing frequently rely on proprietary software that diminishes analysis replicability, passing through “black box” functions that limit the interpretation of the results [37]. This is especially critical considering the delicate nature of remote sensing over complex inland waters where calibration and validation data are often limited. We suggest that, with sufficient understanding of the radiometric challenges and acknowledging some limitations, large-scale SAV distribution data can be obtained from optical remote sensing using OBIA in an open-source environment.
Because calibration and validation data are difficult to obtain (and often non-existent, especially for older satellite imagery), evaluating the accuracy of remotely sensed SAV metrics can be difficult. In some cases, it might be necessary to rely upon expert knowledge to train classifiers and determine whether the results depict an acceptable representation of reality [32,33,34]. This decision relies on factors such as appropriate knowledge of the study system, the sensitivity of the application of the results to type I and II statistical errors, and the capacity to interpret and correctly communicate the limitations of the data generating process (from sensing to classification). Since this process can be subjective, complete transparency is necessary when describing methods, discussing results, and transferring them. Based on this principle, open data and open-source solutions should be preferred when possible.
Here, we propose an open-source optical remote sensing workflow to detect SAV cover in high-resolution satellite imagery, applicable to optically complex waters and going from raw data processing to vegetation cover maps. The workflow is presented in the form of general and adaptable guidelines based on the use of the R, QGIS, and Orfeo Toolbox (OTB) software [38,39,40]. This is followed by real-world applications of the workflow on high-resolution DigitalGlobe imagery products in two large shallow fluvial lakes of the Saint-Lawrence River system, Canada, with the specific objective of generating binary vegetation cover maps from detectable SAV. The steps addressed here are pre-processing, including radiometric calibration, atmospheric correction, and image de-striping, followed by image segmentation, feature extraction, training set building, and finally, the application of a two-level hierarchical random forest image classification.

2. Materials and Methods

2.1. Workflow Overview

The suggested image processing workflow is based on the concept of geographic object-based image analysis (OBIA, or GEOBIA) where objects within an image are delineated by segmentation, described by feature extraction, zonal statistics and landscape indices, and identified by object classification [41] (Figure 1). As opposed to pixel-based approaches, objects can be described not only by zonal statistics generated by pixel values, but also by their shape, texture, and spatial relationship to one another. This allows for the generation of highly diverse descriptors which can then be used as inputs into machine learning methods to produce complex classification rules.
In the case of SAV remote sensing using high-resolution satellite imagery such as DigitalGlobe satellite products, pre-processing should at least include steps such as radiometric calibration, atmospheric correction, and de-striping of push-broom sensor artifacts. Following pre-processing, image segmentation, feature extraction, and a hierarchical random forest image classification are used to generate the OBIA results (Figure 2). This workflow has been scripted in R [39] in the R Markdown format and is available on GitHub (URL: https://github.com/arthurdegrandpre/Open_HRRS_W2 (accessed on 1 November 2021)). While the following workflow could be used as is, it should nonetheless be adapted to fit the user’s needs and resources, such as specific objectives, available imagery, available calibration and validation data, and access to computing power.

2.2. Pre-Processing

In this section, the process of preparing high-resolution multispectral imagery for OBIA is detailed, going from raw digital numbers to surface reflectance values using DigitalGlobe multispectral products. While OBIA is robust to radiometric errors, absolute radiometric calibration and atmospheric correction allow us to make images comparable and favor better interpretability of the data.

2.2.1. Metadata Extraction

In order to perform the subsequent steps, it is recommended to parse the image and satellite metadata to extract or generate the information required for radiometric calibration and atmospheric correction. In the case of an image collection with multiple data sources, this includes, for each image, the date and time of acquisition in the appropriate format, the product identification numbers and parts (in case the product was split) for mosaic building, and the identity of the satellite used for sensing and its orientation, including the solar zenith angle and the viewing angle. For every band, the name, effective bandwidth, absolute calibration factor, and time delay integration (TDI) value are used for radiometric calibration and image correction. Finally, for every satellite, the altitude, gain and offset for every band, as well as estimated exoatmospheric radiance values, are extracted (available from DigitalGlobe’s technical notes [42,43,44]).
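To illustrate this step, the following R sketch shows how band-level calibration parameters could be pulled from a DigitalGlobe XML metadata file using the xml2 package. It is a minimal example rather than the published workflow code: the tag names (BAND_B, ABSCALFACTOR, EFFECTIVEBANDWIDTH, TDILEVEL, MEANSUNEL, FIRSTLINETIME) follow DigitalGlobe conventions but are assumptions that should be verified against the actual product delivery.

library(xml2)

# Parse a DigitalGlobe image metadata file and return one row per band with the
# parameters needed for the calibration equations. Tag names are assumptions
# based on DigitalGlobe's metadata conventions and may differ between products.
read_dg_metadata <- function(xml_path) {
  x <- read_xml(xml_path)
  bands <- xml_find_all(x, ".//BAND_B | .//BAND_G | .//BAND_R | .//BAND_N")
  data.frame(
    band      = xml_name(bands),
    abscal    = as.numeric(xml_text(xml_find_first(bands, ".//ABSCALFACTOR"))),
    bandwidth = as.numeric(xml_text(xml_find_first(bands, ".//EFFECTIVEBANDWIDTH"))),
    tdi       = as.numeric(xml_text(xml_find_first(bands, ".//TDILEVEL"))),
    sun_elev  = as.numeric(xml_text(xml_find_first(x, ".//MEANSUNEL"))),
    datetime  = xml_text(xml_find_first(x, ".//FIRSTLINETIME")),
    stringsAsFactors = FALSE
  )
}

# meta <- read_dg_metadata("example_image_metadata.XML")  # hypothetical file name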

2.2.2. Radiometric Calibration

Since the raw sensor products are typically delivered as relative digital numbers (DN) rather than physical radiance units, radiometric calibration is necessary to generate absolute radiance values. Doing so is a first step towards obtaining radiometrically comparable values across different remote sensing products. DigitalGlobe’s technical notes [42,43] recommend the use of the following equation for radiometric calibration:
L = GAIN × DN × (abscal factor / effective bandwidth) + OFFSET   (1)
where L is the top-of-atmosphere radiance (W µm⁻¹ m⁻² sr⁻¹), DN is the raw digital number pixel value, and GAIN, OFFSET, abscal factor, and effective bandwidth are band-specific radiometric calibration parameters for the image and sensor, available within the metadata.
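As an illustration, Equation (1) can be applied band by band in R with the raster package. This is a minimal sketch rather than the exact workflow code: the file name is a placeholder, and the gain, offset, absolute calibration factor, and effective bandwidth vectors are example values that must be replaced by the band-specific values from the image metadata and DigitalGlobe’s technical notes [42,43].

library(raster)

dn <- brick("image_raw.tif")                      # raw digital numbers, one layer per band
gain      <- c(1.151, 0.988, 0.936, 0.952)        # example band-specific GAIN values
offset    <- c(-7.478, -5.736, -3.546, -11.608)   # example band-specific OFFSET values
abscal    <- c(0.0160, 0.0143, 0.0126, 0.0154)    # ABSCALFACTOR from the metadata (examples)
bandwidth <- c(0.0543, 0.0630, 0.0574, 0.0989)    # EFFECTIVEBANDWIDTH in micrometers (examples)

# Equation (1): L = GAIN * DN * (abscal factor / effective bandwidth) + OFFSET
toa_radiance <- stack(lapply(1:nlayers(dn), function(i) {
  gain[i] * dn[[i]] * (abscal[i] / bandwidth[i]) + offset[i]
}))
names(toa_radiance) <- c("blue", "green", "red", "nir")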

2.2.3. Atmospheric Correction and Mosaic Building

Atmospheric correction is a controversial topic over complex inland waters, even more so when shallow waters and submerged aquatic vegetation are expected. Since the state-of-the-art methods are very context-dependent, heavily debated, and often rely on large amounts of external cal/val inputs [45,46], an image-based empirical atmospheric correction was implemented, based on the dark object subtraction method (DOS) and the cosine of the solar zenith angle (COST) [47]. This correction is based on the assumptions that (i) a dark object exists within the image, and that its remote sensing signal can be attributed to atmospheric effects, (ii) the atmospheric signal is relatively homogeneous across the image, and (iii) the cosine of the solar zenith angle represents an acceptable approximation of the atmospheric transmittance. Because of these assumptions, it is recommended to build mosaics of compatible images, i.e., to group (tile) and correct together images from the same satellite overpass, thus providing more dark pixel candidates than smaller individual tiles would. Once the images are grouped accordingly, the following equation is used to apply the correction:
ρ(BOA)λ = ((Lλ − min(Lλ)) × d² × π) / (Eλ × cos(θS) × TAUz)   (2)
where ρ(BOA)λ is the bottom-of-atmosphere reflectance (ρ) at a given wavelength (λ), Lλ is the at-sensor radiance as previously calculated, d is the distance between the Earth and the Sun in astronomical units, Eλ is the exoatmospheric radiance (here the mean of the three values estimated by DigitalGlobe was used), θS is the solar zenith angle, and TAUz is the estimated atmospheric transmittance along the path from the sun to the surface, which is roughly equal to cos(θS) according to the COST model. More recent and promising methods such as dark spectrum fitting (DSF) are currently under development but have yet to be tested on DigitalGlobe products [48].

2.2.4. Image De-Striping

While push-broom optical scanners are very efficient tools for generating high-resolution multispectral imagery, they also bring some challenges. Over areas with low remote sensing signals such as water, even small sensor noise and irregularities can have large effects on the quality of the data. The push-broom architecture is composed of multiple side-by-side scanners that together generate a larger image, and while the image is corrected and calibrated as a whole, it is not perfectly calibrated for the radiometric differences that can occur between each parallel sensor, sometimes generating striping artifacts along the orbital line that can be quite significant. Correction of such artifacts over complex waters is difficult because homogeneity between stripes cannot be assumed. For this reason, we suggest the application of an empirical correction based on Marmorino and Chen’s work [49], where “jumps” between contiguous stripes are identified manually, and their height estimated by comparing homogeneous objects on both sides of the jumps. Using QGIS, jumps were identified as line vectors, and for every line, 3 to 5 homogeneous polygon objects were drawn on each side. In R, a function was built to retrieve the values on both sides of every jump and to apply an offset correcting the height difference, effectively removing most of the striping effect.
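A simplified R sketch of this correction is shown below for a single band and a single jump. The file names, the “side” attribute of the digitised polygons, and the stripe polygon used to select the affected pixels are all hypothetical; the published workflow handles several jumps and bands at once, so this is only an illustration of the principle.

library(raster)
library(sf)

r      <- raster("boa_nir.tif")                    # one atmospherically corrected band
polys  <- st_read("destriping_polygons.gpkg")      # homogeneous polygons with a 'side' field
stripe <- st_read("stripe_to_correct.gpkg")        # polygon covering the stripe to be adjusted

# Mean reflectance of the sample polygons on one side of the jump
mean_side <- function(side) {
  p <- as(polys[polys$side == side, ], "Spatial")
  mean(unlist(extract(r, p)), na.rm = TRUE)
}
jump_height <- mean_side("right") - mean_side("left")   # estimated height of the jump

# Subtract the jump height from the pixels of the affected stripe only
stripe_mask <- rasterize(as(stripe, "Spatial"), r, field = 1)
r_destriped <- overlay(r, stripe_mask,
                       fun = function(x, m) ifelse(!is.na(m), x - jump_height, x))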

2.3. Object-Based Image Analysis

Because pixel-based classification of submerged features is a very complex task in complex water environments, object-based approaches offer an attractive alternative, providing a wide variety of spatial descriptors that can inform on landscape composition. Object-oriented workflows depend on three major steps: image segmentation, feature extraction, and object classification. A wide variety of methods exists for each of these steps, and they should be adapted to fit the end-user’s needs. OBIA is often applied in a hierarchical workflow, defining broader regions first and then working towards more precise identifications. To apply it efficiently to SAV, two levels of OBIA classification were performed: land masking and vegetation mapping.

2.3.1. Image Segmentation

Segmentation separates the image into a mosaic of smaller objects based on various homogeneity criteria that vary greatly between algorithms. A great variety of methods exists, the broader categories being edge- and region-based algorithms, respectively detecting steep gradients or homogeneous regions [50]. For its ease of implementation, speed, scalability, support of multiband rasters, and open code, we used OTB’s region-based mean-shift segmentation algorithm [38] to perform image segmentation, calling it from R’s shell function. Segmentation parameters were tuned to fit the spatial and radiometric range of expected objects, with the spatial range set to 5 pixels (~10 to 12 m, depending on the sensor resolution), the radiometric range set between 0.001 and 0.003 depending on the intensity of the water-leaving reflectance, a spectral threshold of 0.0005, and a minimum size of 5 pixels.
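The sketch below shows how such a call to OTB’s Segmentation application could look from R, mirroring the parameter values reported above. Paths are placeholders, and the parameter names should be checked against the installed OTB version; on some systems the OTB environment must be initialised before otbcli_Segmentation is visible on the PATH.

# Call OTB's mean-shift segmentation from R through the system shell
otb_segmentation <- function(in_raster, out_vector,
                             spatialr = 5, ranger = 0.002,
                             thres = 0.0005, minsize = 5) {
  system2("otbcli_Segmentation",
          args = c("-in", in_raster,
                   "-filter", "meanshift",
                   "-filter.meanshift.spatialr", spatialr,
                   "-filter.meanshift.ranger", ranger,
                   "-filter.meanshift.thres", thres,
                   "-filter.meanshift.minsize", minsize,
                   "-mode", "vector",
                   "-mode.vector.out", out_vector))
}

# otb_segmentation("lsp_2009_boa.tif", "lsp_2009_segments.shp")  # hypothetical paths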

2.3.2. Feature Extraction

Feature extraction can be described as the generation of information describing the objects. Features can include radiometric indices such as the normalized difference vegetation index (NDVI), local statistical moments, zonal statistics, the shape and size of the object and its spatial relationships to its neighbors, textural information such as grey-level co-occurrence matrix (GLCM)-based metrics, etc. Due to the high diversity of possible features, many OBIA applications end up using very large arrays of features that could be useful when generating end results, relying on machine learning and computing capabilities to prune the less important candidates [32,33]. In this paper, we used a conservative array of features to favor the interpretability of the results, preventing a black-box effect (Table 1 and Table 2).
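As a simple illustration of object-level feature extraction, the sketch below computes per-segment zonal statistics of reflectance and NDVI with the raster and sf packages. Band order and file names are assumptions, and the published workflow relies on velox and additional texture and landscape metrics for speed and richness; this is only a minimal stand-in.

library(raster)
library(sf)

img  <- brick("lsp_2009_boa.tif")                       # blue, green, red, NIR (assumed order)
segs <- st_read("lsp_2009_segments.shp")                # segmentation output

ndvi <- (img[[4]] - img[[3]]) / (img[[4]] + img[[3]])   # NDVI from the NIR and red bands
stk  <- addLayer(img, ndvi)
names(stk) <- c("blue", "green", "red", "nir", "ndvi")

# Per-object mean and standard deviation of every layer (slow but simple)
segs_sp   <- as(segs, "Spatial")
feat_mean <- extract(stk, segs_sp, fun = mean, na.rm = TRUE)
feat_sd   <- extract(stk, segs_sp, fun = sd, na.rm = TRUE)

features <- cbind(st_drop_geometry(segs),
                  setNames(as.data.frame(feat_mean), paste0(names(stk), "_mean")),
                  setNames(as.data.frame(feat_sd), paste0(names(stk), "_sd")))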

2.3.3. Image Classification

Image classification needs to be able to deal with a high number of predictors (hundreds) over a high number of objects (millions). For these reasons, supervised machine learning algorithms are preferred, such as support vector machines (SVM), random forest classifiers (RF), or deep learning-based methods such as convolutional neural networks (CNN). The main difficulty of using such methods is the construction of an appropriate training set, which is often built upon expert knowledge and image interpretation. While the use of ground validation data is to be preferred, especially for validation purposes, such data is often non-existent or unreliable. Because of its ease of use, high interpretability and performance, and strong robustness to large numbers of predictors, we used the R package caret’s parallel random forest (parRF) algorithm. The training sets were built manually by expert image interpretation using the segmentation output and QGIS, with classes determined prior to the process. To reduce the number of classes and generate more relevant out-of-bag accuracy estimates, two levels of classification were performed. The first one masks terrestrial objects, including wetlands, forests, agriculture, and human structures, and the second discriminates objects within the aquatic part of the scene, such as canopy-forming or submerged macrophytes, shallow or deep water, and water masses. An additional 10-fold cross-validation was performed on the final training set for each image to better explore risks of overfitting.
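A minimal sketch of this classification step is shown below, assuming the object features and manually assigned training labels are stored in a single data frame ("features", with a "class" column that is NA for unlabelled objects). Column names are hypothetical; only the caret "parRF" method and the 10-fold cross-validation mirror the workflow described above.

library(caret)
library(doParallel)

cl <- parallel::makeCluster(4)         # parallel backend used by the parRF method
registerDoParallel(cl)

train_set <- features[!is.na(features$class), ]   # objects labelled by photointerpretation
train_set$class <- factor(train_set$class)

rf_fit <- train(class ~ ., data = train_set,
                method    = "parRF",
                trControl = trainControl(method = "cv", number = 10),  # 10-fold CV
                ntree     = 500)

parallel::stopCluster(cl)

# Predict a class for every segmented object in the scene
pred_cols <- setdiff(names(features), "class")
features$predicted <- predict(rf_fit, newdata = features[, pred_cols])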
To construct the training sets, object candidates were selected one class at a time, by manual photointerpretation by an observer familiar with the SAV distribution at the study sites. Objects were identified by overlaying the segmentation results on the color-composite image, alternating between true-color and false-color composites using blue, green, and NIR, and then assigning a class to the selected objects. A representative sample of high-confidence identifications was collected across the scene, looking for candidates of multiple sizes, shapes, and positions. To favor the independence of training objects, non-adjacent objects were preferred. While a single classification was usually satisfactory for level-1 classification, level-2 training sets were built iteratively, adding objects to the training sets in areas where obvious classification errors were observed. The sampling effort for every class should be proportional to its prevalence in the image and its importance relative to the classification objectives. In cases where adjacency effects were observed, such as for certain types of emergent vegetation, classes were split into two sub-classes, identifying “regular” object candidates and “adjacency affected” candidates. When relevant, distinct water masses were also separated in the training set to facilitate discrimination of SAV, especially in cases where high turbidity differences were expected.

2.4. Application to Real Ecosystems

The workflow was applied to produce vegetation cover maps for two fluvial lakes of the Saint-Lawrence River system, Quebec, Canada (Figure 3). The first scene classified is a Quickbird-02 four-band multispectral image of Lake Saint-Pierre (lat. 46.2°, lon. −72.8°), a shallow enlargement of the Saint-Lawrence River fed by multiple watersheds, creating a complex and dynamic mosaic of water masses varying in color and turbidity (Figure 3). The image was taken on 5 September 2009 at 15:50:00 GMT with a mean off-nadir view angle of 12.3° and a ground sampling distance of 2.4 m. Because of its complexity and high NIR signal near the shores, additional classes were created in the training set to favor narrower classes and diminish classification errors. The second scene is a Worldview-03 eight-band multispectral image of Lake Saint-François (lat. 45.15°, lon. −74.4°), a shallow fluvial lake mostly fed by clear water coming from the Great Lakes (Figure 3). It was taken on 2 August 2019 at 16:12:16 GMT with a mean off-nadir view angle of 7.3° and a ground sampling distance of 2 m, and shows traces of wave-generated sun glint and push-broom striping artifacts.

2.4.1. Level 1 Classification (Land Masking)

Land masking, while optional in principle, greatly facilitates subsequent classifications. The goal of this step is to exclude terrestrial objects, reducing the expected reflectance range and the total number of objects to classify for computation purposes. Because of the high reflectance differences between emerged and submerged objects, level 1 classification is relatively robust to poor segmentation optimization and training set calibration. Performance is evaluated by an expert observer based on random forest out-of-bag accuracy and visual validation of the results. For both test images, the same parameters were used for segmentation, feature selection, and object classes (see Table 1 for details).
Following visual validation of the results, land classes are excluded from the dataset, a land mask is built from the polygons, and a new raster image is extracted to perform the level 2 classification.
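The masking step itself can be written compactly in R. In the sketch below, segments predicted as land classes (l_ prefix) are dropped, the remaining geometries are dissolved into a water mask, and the image is masked for the level 2 classification; file and column names are placeholders.

library(raster)
library(sf)

segs_l1 <- st_read("lsp_2009_segments_l1.shp")           # segments with level 1 predictions
aquatic <- segs_l1[!grepl("^l_", segs_l1$predicted), ]   # drop land classes (l_ prefix)
water_mask <- st_union(aquatic)                          # dissolve into a single mask geometry

img <- brick("lsp_2009_boa.tif")
img_aquatic <- mask(img, as(st_sf(geometry = water_mask), "Spatial"))
writeRaster(img_aquatic, "lsp_2009_boa_aquatic.tif", overwrite = TRUE)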

2.4.2. Level 2 Classification (Vegetation Cover Mapping)

Vegetation mapping is a very sensitive step because reflectance spectra overlap between different classes of objects. In cases where direct validation data is non-existent or insufficient, performance is evaluated by an expert observer based on random forest out-of-bag accuracy and visual validation of the results. For this reason, we suggest an iterative modeling approach where the training set is manually adjusted three to six times, or until the end results are satisfactory. When adjusting the training set, the focus should be put on the most obvious type I and II errors concerning the classes of higher interest. Such obvious error cases include water objects wrongly classified as vegetation because of the presence of optical edges within the image (laminar water masses, steep bathymetry changes, etc.). To facilitate accuracy assessment, simple metrics are computed from the confusion matrices, called meaningful false positives (mFP) and meaningful false negatives (mFN). These metrics exclude confusion between classes belonging to the same group from the error rate calculations, thus giving a better representation of the actual performance relative to the objectives. As an example, confusion between submerged and emerged vegetation would not be considered a meaningful error in this case, since we are ultimately seeking to produce a vegetation cover map. In contrast to level 1 classification, segmentation parameters are important to capture the right spatial scale for the end-user. Hence, object classes should be defined in accordance with the context of the scene and their potential for differentiation, and special attention should be given to training set calibration, reducing bias in out-of-bag accuracy estimation. This may mean creating additional classes to account for steep turbidity gradients, different vegetation types, or in some cases zones of high environmental noise caused by wind. The parameters, features, classes, and training sets used are defined in Table 2.
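A sketch of how these metrics could be computed from a confusion matrix is given below. The interpretation of "meaningful" errors as errors counted only across class groups is taken from the description above; the group assignments in the example are hypothetical, and the matrix is assumed to have reference classes in columns and predicted classes in rows.

# Meaningful false negatives/positives for one target class, ignoring confusion
# between classes that belong to the same group (e.g., two vegetation classes).
meaningful_errors <- function(cm, groups, target_class) {
  same_group <- names(groups)[groups == groups[target_class]]
  other_rows <- setdiff(rownames(cm), same_group)
  other_cols <- setdiff(colnames(cm), same_group)
  mFN <- sum(cm[other_rows, target_class]) / sum(cm[, target_class])  # missed target objects
  mFP <- sum(cm[target_class, other_cols]) / sum(cm[target_class, ])  # wrongly gained objects
  c(mFN = mFN, mFP = mFP)
}

# Example usage with hypothetical class groups:
# groups <- c(v_sub_low = "veg", v_sub_high = "veg", v_em = "veg",
#             w_shallow = "water", w_deep = "water", w_dark = "water")
# meaningful_errors(conf_matrix, groups, "v_sub_low")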

2.5. R Libraries

Scripts used in the suggested R workflow are highly dependent on external libraries published by open source developers. At the moment of submitting this paper, the workflow is based on the following packages and their own dependencies: Caret, data.table, doParallel, foreach, gdalUtils, GeoLight, GSIF, kableExtra, landscapemetrics, plyr, raster, rgdal, satellite, sf, snowfall, sp, spatialEco, tidyverse, randomForest and velox [52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72].

3. Results

3.1. Radiometric Calibration and Atmospheric Correction

Concerning radiometric corrections and calibration, the radiometric spectra of deep-water pixels across all products (raw, absolute radiometric correction, top- and bottom-of-atmosphere reflectances) of both images behaved as expected across all pre-processing steps, resulting in similar bottom-of-atmosphere reflectances (Figure 4). The raw level 1 product displays high DN values optimized for product visualization. Absolute radiometric calibration performed with Equation (1) generates very similar radiance values for both images, which translates into top-of-atmosphere (TOA) reflectances that are high at the lower wavelengths due to aerosol scattering and lower at the higher wavelengths due to aerosol absorption. The COST method (Equation (2)) for atmospheric correction used to generate bottom-of-atmosphere reflectances appears to efficiently correct those biases, effectively flattening the signal of both images into a comparable range close to 0.05 reflectance units for deep-water pixels.

3.2. Empirical Image De-Striping

Because of the architecture of push-broom sensors, noticeable striping artifacts can sometimes appear over low reflectance areas (i.e., water). While no noticeable reflectance jumps were observed in the 2009 Quickbird-02 image, severe reflectance jumps were observed in the 2019 Worldview-03 image (Figure 5a). Those jumps happened approximately every 1110 m, and their height varied between 0.00001 and 0.00389 reflectance units, the worst offenders being in the green, red, and near-infrared wavelengths with mean jumps of 0.00127, 0.00092, and 0.00076, respectively. The method adapted from Marmorino and Chen [49] generated a product with noticeably fewer striping artifacts, equalizing the image to the mean of all objects used for jump height quantification (Figure 5b).

3.3. Level 1 OBIA, Land Masking

The first level of OBIA is a coarse classification of land and aquatic objects to narrow the scope of subsequent classifications. Using OTB’s mean-shift segmentation with the parameters described in Table 1, image 1, the 2009 Quickbird-02 image, generated 206,890 segments of (mean ± 1 S.D.) 83.3 ± 75.6 m² in size, while image 2, the 2019 Worldview-03 image, generated 450,941 segments of 39.3 ± 39.9 m². This difference is associated with the different image spatial resolutions, but also with different segmentation parameters. Because land and water reflectance are very different, a single random forest classification run based on a simple training set using the features from Table 1 was sufficient to generate a robust land mask for both images (Figure 6).
For image 1, the most important features used in the land masking random forest models were largely based on the NIR signal and vegetation index zonal statistics, with the most important features being mean EVI, NIR, NDAVI, and maximum WAVI (Figure S1a). On the other hand, image 2 used a more varied set of features, led by mean red reflectance and including NIR-based Haralick textures such as energy and entropy, green, NIR, blue, yellow, and red-edge zonal statistics, as well as the NDVI index (Figure S1b).
The random forest model for the 2009 image predicted the training set with an average accuracy of 98.17%, with the most confused classes being within the land and water-based classes, which is a non-issue for land masking purposes (Table S1). Boats (l_boats) were wrongly classified as emerged vegetation (v_em), but this was mainly linked to their low representation in the training set and the image. The 2019 training set was predicted with an average accuracy of 98.29%, again with low confusion between land and water classes (Table S2). The most confused classes were shallow water (w_shallow), deep water (w_deep), and boat waves (o_boatwave), all of which could be dealt with in the level 2 OBIA.

3.4. Level 2 OBIA, Vegetation Cover Mapping

The second level of OBIA is an iterative process of object classification targeted towards the identification of submerged aquatic vegetation. Using OTB’s mean-shift segmentation with the parameters described in Table 2, the 2009 Quickbird-02 image generated 112,615 segments of (mean ± 1 S.D.) 94 ± 98.2 m² in size, while the 2019 Worldview-03 image generated 1,152,826 segments of 42.8 ± 32.5 m² in size. Since water objects tend to have low reflectance across all wavelengths, differentiating between classes can represent a challenge both for the expert building the training set and for the algorithm building classification rules. For this reason, the training and predictions of object classes are performed multiple times, until the results are satisfactory (Figure 7). For images 1 and 2, four and six runs were required, respectively, to obtain results satisfactory to the expert observer (Figure 8).
The most important features used in the random forest for SAV cover mapping for the 2009 image were again largely based on the NIR signal and vegetation indices, with the addition of blue-based Haralick correlation (Figure S2a). The 2019 image again used a more varied set of features, led by minimum yellow, green, and blue reflectance, followed by some vegetation indices and blue-based Haralick inertia (Figure S2b).
The random forest model for the first image predicted the training set with an average OOB accuracy of 92.74% and a 10-fold cross-validation accuracy of 93.6%, which suggests a low probability of overfitting. While the global performance is very high, some classes of interest had large errors, with meaningful false negatives and positives up to 23.46% and 10.62%, respectively, for the low submerged vegetation class (v_sub_low), which was mainly confused with blue-green water (w_bg) (Figure 7, Table S3), suggesting an underestimation of the total vegetation cover. The second most confused classes were high submerged vegetation (v_sub_high) and dark water (w_dark), with comparable amounts of false negatives and positives (Table S3), suggesting a trade-off between both classes. As for the second image, the average random forest model OOB accuracy for SAV cover mapping was 96.87%, with 93.8% for the 10-fold cross-validation. Again, the highest error rates were associated with the low submerged vegetation class (v_sub_low), which was often confused with shallow (w_shallow) and deep water (w_deep) (Table S4). Nevertheless, meaningful false negatives were more frequent than meaningful false positives (8.86% vs. 5.88%) for low submerged vegetation, indicating conservative SAV cover estimates.

4. Discussion

The proposed workflow allowed the production of image-based submerged aquatic vegetation cover maps for two different high-resolution multispectral sensors. By using open-source OBIA methods, the need for precise radiometric measurements was circumvented, shifting the modeling effort towards the quantification of local gradients and textural information.
Pre-processing generated radiometrically comparable images, as shown by the bottom-of-atmosphere reflectance spectra (Figure 4b). Although the lack of field radiometric measurements prevented the validation of the observed reflectance values, the magnitude and general shape of the deep-water reflectance spectra are in accordance with values reported elsewhere for inland waters. Our deep-water spectra show fewer variations and a higher baseline, but the peak values appear at the expected wavelengths, with around 0.05 surface reflectance units in the 500–600 nm region [24,25,73]. The high reflectance baseline and the flatness of the reflectance spectra could be attributed to a combination of the simple atmospheric correction, adjacency effects, traces of bottom reflection, and the IOPs of the target water bodies, which are subject to high loads of suspended matter, especially in the case of Lake Saint-Pierre [74].
De-striping offered a visually satisfying result, despite minor imperfections on closer inspection. Because the distance between stripe jumps and their orientation are estimated and assumed constant, some pixels located very close to the jumps can remain problematic and be under- or overcorrected. Also, the jumps do not occur over a single pixel, but rather over a range of about 7 pixels [49], so an ideal correction would take this transition into account.
Level-1 OBIA random forest classification generated robust land masks despite low object counts within the training set and little attention paid to class balancing (large variations in the number of objects per class). As expected, NIR reflectance-based features were of great importance for distinguishing between land and water objects, although a more complex array of features was used in Lake Saint-François (Figure S1), including lower wavelength bands such as red, green, yellow, and blue. This might be related to higher amounts of noise within the 2019 Lake Saint-François image, partly due to boat waves and higher water transparency. In other studies, land masking is often not discussed and land is roughly excluded [33,75]. On the other hand, in a knowledge-based OBIA classification of SAV, Visser et al. [34] used NDVI, mean NIR, and red reflectance thresholds as classification rules to classify exposed bank and bank vegetation, generating rules similar to our random forest. The confusion matrices for both images show low confusion between land and water-based classes (respectively identified by the l_ and w_ or v_ prefixes; Tables S1 and S2). Looking at meaningful misclassifications, classes with low object counts are the most prone to errors, but they are also the least important in terms of cover, such as foam formations, boats, and boat waves, which could be compensated for in the level-2 OBIA.
Level-2 OBIA random forest classification performed relatively well across the classes of interest, with errors distributed as expected among classes and deeper growth forms displaying relatively high confusion with optically deep-water objects. The most important features for classification were quite different between the two images, being heavily reliant upon vegetation indices and NIR reflectance for the Lake Saint-Pierre image, and again much more diverse for Lake Saint-François, including zonal statistics from the yellow, green, and blue bands (Figure S2). This is probably related to the higher turbidity of Lake Saint-Pierre, which limits the growth of SAV species with short height and favors canopy-forming species that can reach the top of the water column. Accordingly, image exploration reveals a much higher proportion of SAV showing traces of NIR reflectance (higher when SAV is close to or at the surface) in Lake Saint-Pierre, especially within the brown and mixed water masses (see Figure 3a), as opposed to the blue-green waters from the main channel, which are similar to Lake Saint-François, where most SAV is truly submerged, reflecting almost no NIR signal. This suggests that the features fed to the random forest account for enough variation to be robust across optically different fluvial ecosystems. Although SAV cover mapping is more sensitive to error than land masking, the confusion matrices show satisfactory classification results, with less than 10% meaningful false positives and negatives for all vegetation classes, except low submerged vegetation in Lake Saint-Pierre. Understandably, low submerged vegetation has the weakest remote sensing signal and is expected to be the least performing class. Ecologically, this weakness is partially compensated by the fact that shorter SAV would be expected to perform less as an engineer (i.e., generating smaller environmental feedbacks) than taller SAV growth forms [76], making them less important as sentinels, but nonetheless valuable as indicators. This also means that it is acceptable in this context for parts of the SAV landscape to be completely undetectable because of high turbidity. For the Lake Saint-François image, floating/drifting vegetation generates large errors and is often confused with optically deep water. This is acceptable to us because drifting vegetation debris are not an object of interest for this classification despite their observed abundance in the field. Their large classification errors are associated with the size of the detached debris, which is often smaller than the minimum segmentation object size, behaving similarly to white caps or NIR remote sensing noise. Finally, and more importantly, false negatives appear to be more common than false positives for SAV classes (Figure 7, Tables S3 and S4), suggesting overall conservative estimates of SAV cover. Further examination of Figure 7 also shows that visual validation criteria can be of higher importance than prediction accuracy over the training set, where an actual decrease in accuracy estimates can result in more satisfactory cover maps, especially for submerged aquatic vegetation in low contrast regions. To further validate the performance of the classification for a specific application, a multi-observer approach should be used to either estimate the error of the expert training set object identification, or the variability of the resulting vegetation cover maps from multiple observers.
Despite acceptable classification results for routine monitoring of SAV, this paper represents a basic application of a very adaptable workflow, which could be improved and optimized in multiple ways. Furthermore, for the sake of the exercise, we used only image-based information and expert knowledge to calibrate and validate the model, which represents a worst-case scenario in terms of data availability. From calibration to validation, field observations should be capitalized upon whenever possible, although their availability and relevance rapidly decrease with time elapsed since image capture. For these reasons, the rest of this section will serve as application guidelines, discussing best practices and limits regarding the state-of-the-art methods in the context of highly limited data availability.
Starting with data acquisition, multiple factors can affect classification results and require further consideration. These factors include sensor noise, spatial resolution, spectral resolution, and off-nadir angle, as well as environmental factors such as sun elevation angle, atmospheric conditions, wind, waves, water level, turbidity, the progress of the macrophyte growing season, and terrain. Concerning sensor choice, Mouw et al. [45] offer an in-depth review of how and why most current water sensors are ill-fitted to study inland complex waters, leading researchers to use terrestrial earth observation missions such as Landsat, or in this case, Quickbird-02 and Worldview-03. The main issues with this approach are that those satellites were not built for low reflectance targets such as submerged objects, resulting in low signal-to-noise ratios (SNR) [25], and that their slow revisit time can be problematic for dynamic systems such as temperate fluvial lakes. Water level variations, water mass mixing, turbidity spikes, and growth phenology all affect how much plant canopy can be detected by satellite remote sensing. Nonetheless, methods such as multi-temporal, high-resolution OBIA-based remote sensing of SAV have been successfully applied in the past in coastal ecosystems using appropriate field validation and proprietary software [35]. Understanding the effects of such environmental conditions on the temporal variability of the remote sensing signal is crucial for the correct use of remotely sensed SAV cover maps and represents one of the major challenges of this method.
Remote sensing noise can originate from multiple sources, including sensor architecture (as seen in Figure 5), wind and waves generating localized glint, white caps, water depth, and surface angle variations, as well as atmospheric and adjacency effects [77]. In addition, different bands can have slightly different sensing times, as indicated by the time delay integration (TDI) in DigitalGlobe’s products, thus visualizing different waves at different wavelengths. An example of high noise can be seen by observing deep-water areas of the Lake Saint-François Worldview image, where a sub-scene can generate reflectance values ranging from 0.02 to 0.1 in the 725 nm band alone due to what appears to be dominantly wave action, despite the absence of identifiable submerged features (Figure 9). While glint can be corrected over optically deep waters, glint correction over shallow and turbid waters is very complex and challenging, and it is usually preferable to select images where the sun and off-nadir angles minimize glinting [78]. Because of the low SNR, atmospheric correction becomes disproportionately important over water pixels [25,46]. While this remains valid, the narrow swath width of high-resolution sensors allows the assumption that atmospheric conditions are homogeneous across a single scene, arguably allowing whole-image atmospheric corrections as opposed to pixel-based methods [48]. One last source of noise that should be considered is the presence of adjacency effects caused by nearby landmasses, which can have strong effects on the radiometry of close-by water objects because of the atmospheric scattering of upwelling signals. While some methods, such as dark spectrum fitting, tend to limit those effects [48], no generalized solution currently exists for adjacency effect correction. For the Lake Saint-Pierre image, additional object classes were thus included, such as “v_p_em” and “w_p_dark”, to represent emerging vegetation and dark water objects in proximity to land and diminish errors associated with adjacency effects and emergent vegetation.
Regarding the OBIA workflow, segmentation tuning, feature selection/engineering, training set building, and interpretation of results all add considerable flexibility to its applications, but also make it more vulnerable to inconsistent usage. Despite the growing interest towards OBIA in remote sensing, guidelines for the correct application of these steps are scarce and rarely generalizable across studies. We believe that further research is required to truly understand the importance of segmentation methods, spatial and spectral resolution, the choice of spectral, textural, and contextual indices included in the predictors, and the construction of robust and standardized training sets. In previous vegetation studies, different segmentation methods were used, including ENVI’s watershed segmentation [33] and eCognition’s multi-resolution segmentation [34,36,79], with little justification for the choice of segmentation solution despite their licensing fees. Following the same pattern, little information is available to guide feature selection in building the modeling data sets, with similar sets of features being widely used (zonal statistics plus vegetation and textural indices), despite little apparent research to understand the effects of the feature array, especially regarding their specific tuning (such as scale parameters when calculating Haralick texture indices) [33,36,79,80]. While machine learning-based models allow for some robustness despite wide arrays of inputs, widespread routine monitoring of SAV would highly benefit from a stronger understanding of feature engineering for its detection. In this paper, relative feature importance as estimated by the random forest algorithm provides some insight regarding feature selection in contrasting systems (Figure S4), where NIR-based metrics performed better in SAV-rich, turbid water, while shorter wavelengths and texture-based metrics were more appropriate in clear waters where SAV rarely reaches the surface.
In terms of modeling, the building of the training sets represents a critical task that would ideally require large amounts of field observations designed for training and validation of such methods, which are non-existent for most water bodies across time. While this paper suggests that it is possible to use Earth observation tools to estimate SAV coverage using only expert visual observation, this exercise would hardly be transferable to ecosystems unknown to the observer or to the study of more complex metrics such as community composition. Additionally, little is known about the importance of the number of end members for the random forest classifier, the balancing of the training sets, or the error associated with the expert training approach. Further investigations regarding the transferability of models between dates and ecosystems could give us access to proper tools for large-scale monitoring of SAV landscapes across time and space. Finally, attention should also be given to quantifying the impact of different observers on the classification results. Many of the issues inherent to expert or knowledge-based approaches are well discussed in the study by Visser et al. [34] on the knowledge-based mapping of SAV, which further reinforces the arguments for establishing open standards for SAV monitoring.
Given the many challenges and knowledge gaps yet to be addressed, we believe high-resolution remote sensing of SAV would highly benefit from more open-source-based research, allowing true testing and validation of methods. The workflow proposed in this study represents a useful step in this direction. By adapting generic methods to the specific needs of SAV remote sensing, it not only highlights the potential of OBIA for SAV mapping, but also the large amount of work remaining before a truly generalizable high-resolution remote sensing framework for SAV is achieved. Until then, this workflow offers a foundation for further open-source research, and argues for better SAV field monitoring programs in complex waters, built to accommodate calibration and validation of high-resolution remote sensing products.

5. Conclusions

In this paper, we propose an adaptable framework and an open-source workflow to map SAV cover in complex waters using OBIA. By shifting the focus of the classification efforts from precise radiometric measurements to local gradient quantification, we were able to conduct satisfactory SAV mapping in two contrasting fluvial lakes, using two different high-resolution multispectral sensors. By using an iterative expert approach, conservative estimates of SAV cover area could be extracted both for a recent image acquisition and for an image more than 10 years old, despite the absence of ground-truthing data, hinting at the large potential of similar methods for long-term monitoring of aquatic ecosystems. To better calibrate and validate this process, we argue for the importance of implementing and maintaining remote-sensing-appropriate field observations and call for more transparent research in aquatic remote sensing OBIA applications. By proposing a workflow for high-resolution remote sensing of SAV cover in complex systems, we hope to enable better monitoring of sensitive ecosystems and facilitate landscape-level research on SAV dynamics in a changing world.

Supplementary Materials

The following are available online at https://www.mdpi.com/article/10.3390/rs14020267/s1, Table S1: level 1 confusion matrix for image 1, Table S2: level 1 confusion matrix for image 2, Table S3: level 2 confusion matrix for image 1, Table S4: level 2 confusion matrix for image 2, Figure S1: most important features for level 1 classification, Figure S2: most important features for level 2 classification.

Author Contributions

Conceptualization, A.d.G., A.B. and C.K.; methodology, A.d.G. and C.K.; software, A.d.G.; validation, A.d.G.; formal analysis, A.d.G.; investigation, A.d.G., A.B. and C.K.; resources, A.B. and C.K.; data curation, A.d.G.; writing—original draft preparation, A.d.G.; writing—review and editing, A.d.G., A.B. and C.K.; visualization, A.d.G.; supervision, A.B. and C.K.; project administration, A.B. and C.K.; funding acquisition, A.d.G., A.B. and C.K. All authors have read and agreed to the published version of the manuscript.

Funding

This study was financially supported by the Canada Research Chair program (grant number 231380, Christophe Kinnard), the Natural Sciences and Engineering Research Council of Canada (NSERC discovery grant CRSNG-RGPIN-2017-05451, Andrea Bertolo), the Réseau Québec Maritime (UQAR-RQM-FRQNT 307819), the Interuniversity Research Group in Limnology (GRIL, ÉcoLac—NSERC CREATE training program, doctoral scholarship), the Centre de Recherche sur les Interactions Bassins Versants—Écosytèmes Aquatiques (RIVE, doctoral scholarships) and the Ministère des Forêts, de la Faune et des Parcs (MFFP, contracts 2019-1-SGHAPP-251 & 2020-1-SGHAPP-328).

Institutional Review Board Statement

Not applicable.

Data Availability Statement

The satellite imagery used in this paper was obtained in collaboration with the MFFP and licensed by DigitalGlobe, Inc. The imagery products are found in the DigitalGlobe catalog under the Catalog IDs A0100104D7CB2A00 and A0100104CD914100 for the 2009 Quickbird and 2019 Worldview images, respectively. The scripts for the workflow used to map vegetation cover are available on GitHub under the CC-BY-NC license at the following URL: https://github.com/arthurdegrandpre/Open_HRRS_W2 (accessed on 1 November 2021).

Acknowledgments

We gratefully acknowledge all of our funders for supporting our work. We would also like to acknowledge the MFFP for providing the DigitalGlobe satellite imagery with the appropriate licenses for us to be able to conduct this research. Also, we thank our anonymous reviewers for their knowledgeable and relevant comments that made this work stronger. Finally, we would like to thank all the open-source software and library developers that made this workflow possible.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Heck, K.L.; Able, K.W.; Roman, C.T.; Fahay, M.P. Composition, abundance, biomass, and production of macrofauna in a New England estuary: Comparisons among eelgrass meadows and other nursery habitats. Estuaries 1995, 18, 379–389. [Google Scholar] [CrossRef]
  2. Pang, S.; Zhang, S.; Lv, X.Y.; Han, B.; Liu, K.; Qiu, C.; Wang, C.; Wang, P.; Toland, H.; He, Z. Characterization of bacterial community in biofilm and sediments of wetlands dominated by aquatic macrophytes. Ecol. Eng. 2016, 97, 242–250. [Google Scholar] [CrossRef]
  3. Law, A.; Baker, A.; Sayer, C.; Foster, G.; Gunn, I.D.M.; Taylor, P.; Pattison, Z.; Blaikie, J.; Willby, N.J. The effectiveness of aquatic plants as surrogates for wider biodiversity in standing fresh waters. Freshw. Biol. 2019, 64, 1664–1675. [Google Scholar] [CrossRef]
  4. Caffrey, J.; Kemp, W. Nitrogen cycling in sediments with estuarine populations of Potamogeton perfoliatus and Zostera marina. Mar. Ecol. Prog. Ser. 1990, 66, 147–160. [Google Scholar] [CrossRef]
  5. Jeppesen, E.; Søndergaard, M.; Søndergaard, M.; Christoffersen, K. The Structuring Role of Submerged Macrophytes in Lakes; Ecological Studies; Springer: New York, NY, USA, 1998; Volume 131, ISBN 978-1-4612-6871-0. [Google Scholar]
  6. Marion, L.; Paillisson, J.M. A mass balance assessment of the contribution of floating-leaved macrophytes in nutrient stocks in an eutrophic macrophyte-dominated lake. Aquat. Bot. 2003, 75, 249–260. [Google Scholar] [CrossRef]
  7. Fourqurean, J.W.; Duarte, C.M.; Kennedy, H.; Marbà, N.; Holmer, M.; Mateo, M.A.; Apostolaki, E.T.; Kendrick, G.A.; Krause-Jensen, D.; McGlathery, K.J.; et al. Seagrass ecosystems as a globally significant carbon stock. Nat. Geosci. 2012, 5, 505–509. [Google Scholar] [CrossRef]
  8. Duarte, C.M.; Krause-Jensen, D. Export from Seagrass Meadows Contributes to Marine Carbon Sequestration. Front. Mar. Sci. 2017, 4, 13. [Google Scholar] [CrossRef] [Green Version]
  9. Xing, Y.; Xie, P.; Yang, H.; Wu, A.; Ni, L. The change of gaseous carbon fluxes following the switch of dominant producers from macrophytes to algae in a shallow subtropical lake of China. Atmos. Environ. 2006, 40, 8034–8043. [Google Scholar] [CrossRef]
  10. Orth, R.J.; Dennison, W.C.; Lefcheck, J.S.; Gurbisz, C.; Hannam, M.; Keisman, J.; Landry, J.B.; Moore, K.A.; Murphy, R.R.; Patrick, C.J.; et al. Submersed aquatic vegetation in chesapeake bay: Sentinel species in a changing world. Bioscience 2017, 67, 698–712. [Google Scholar] [CrossRef] [Green Version]
  11. Macreadie, P.I.; Anton, A.; Raven, J.A.; Beaumont, N.; Connolly, R.M.; Friess, D.A.; Kelleway, J.J.; Kennedy, H.; Kuwae, T.; Lavery, P.S.; et al. The future of Blue Carbon science. Nat. Commun. 2019, 10, 3998. [Google Scholar] [CrossRef] [Green Version]
  12. De Boer, W.F. Seagrass-sediment interactions, positive feedbacks and critical thresholds for occurrence: A review. Hydrobiologia 2007, 591, 5–24. [Google Scholar] [CrossRef]
  13. Barko, J.W.; James, W.F. Effects of Submerged Aquatic Macrophytes on Nutrient Dynamics, Sedimentation, and Resuspension. In The Structuring Role of Submerged Macrophytes in Lakes; Ecological Studies; Springer: New York, NY, USA, 1998; pp. 197–214. [Google Scholar] [CrossRef]
  14. Challen Hyman, A.; Frazer, T.K.; Jacoby, C.A.; Frost, J.R.; Kowalewski, M. Long-term persistence of structured habitats: Seagrass meadows as enduring hotspots of biodiversity and faunal stability. Proc. R. Soc. B Biol. Sci. 2019, 286, 20191861. [Google Scholar] [CrossRef] [PubMed]
  15. Jones, C.G.; Lawton, J.H.; Shachak, M. Organisms as Ecosystem Engineers. Oikos 1994, 69, 373. [Google Scholar] [CrossRef]
  16. Koch, E.W. Beyond light: Physical, geological, and geochemical parameters as possible submersed aquatic vegetation habitat requirements. Estuaries 2001, 24, 1–17. [Google Scholar] [CrossRef]
  17. Lacoul, P.; Freedman, B. Environmental influences on aquatic plants in freshwater ecosystems. Environ. Rev. 2006, 14, 89–136. [Google Scholar] [CrossRef]
  18. Bornette, G.; Puijalon, S. Response of aquatic plants to abiotic factors: A review. Aquat. Sci. 2011, 73, 1–14. [Google Scholar] [CrossRef]
  19. Madsen, J.D.; Wersal, R.M. A review of aquatic plant monitoring and assessment methods. J. Aquat. Plant Manag. 2017, 55, 1–12. [Google Scholar]
  20. Kemp, W.M.; Batiuk, R.; Bartleson, R.; Bergstrom, P.; Carter, V.; Gallegos, C.L.; Hunley, W.; Karrh, L.; Koch, E.W.; Landwehr, J.M.; et al. Habitat requirements for submerged aquatic vegetation in Chesapeake Bay: Water quality, light regime, and physical-chemical factors. Estuaries 2004, 27, 363–377. [Google Scholar] [CrossRef]
  21. Jayathilake, D.R.M.; Costello, M.J. A modelled global distribution of the seagrass biome. Biol. Conserv. 2018, 226, 120–126. [Google Scholar] [CrossRef]
  22. McKenzie, L.J.; Nordlund, L.M.; Jones, B.L.; Cullen-Unsworth, L.C.; Roelfsema, C.; Unsworth, R.K.F. The global distribution of seagrass meadows. Environ. Res. Lett. 2020, 15, 74041. [Google Scholar] [CrossRef]
  23. Jin, K.R.; Ji, Z.G. A long term calibration and verification of a Submerged aquatic vegetation model for Lake Okeechobee. Ecol. Process. 2013, 2, 23. [Google Scholar] [CrossRef] [Green Version]
  24. Rowan, G.S.L.; Kalacska, M. A review of remote sensing of submerged aquatic vegetation for non-specialists. Remote Sens. 2021, 13, 623. [Google Scholar] [CrossRef]
  25. Giardino, C.; Brando, V.E.; Gege, P.; Pinnel, N.; Hochberg, E.; Knaeps, E.; Reusen, I.; Doerffer, R.; Bresciani, M.; Braga, F.; et al. Imaging Spectrometry of Inland and Coastal Waters: State of the Art, Achievements and Perspectives. Surv. Geophys. 2019, 40, 401–429. [Google Scholar] [CrossRef] [Green Version]
  26. Bolduc, P.; Bertolo, A.; Pinel-Alloul, B. Does submerged aquatic vegetation shape zooplankton community structure and functional diversity? A test with a shallow fluvial lake system. Hydrobiologia 2016, 778, 151–165. [Google Scholar] [CrossRef]
  27. Madsen, J.D.; Chambers, P.A.; James, W.F.; Koch, E.W.; Westlake, D.F. The interaction between water movement, sediment dynamics and submersed macrophytes. Hydrobiologia 2001, 444, 71–84. [Google Scholar] [CrossRef]
  28. Koch, E.W.; Ackerman, J.D.; Verduin, J.; van Keulen, M. Fluid dynamics in seagrass ecology-from molecules to ecosystems. In Seagrasses: Biology, Ecology and Conservation; Springer: Dordrecht, The Netherlands, 2006; pp. 193–225. ISBN 140202942X. [Google Scholar]
  29. Ruiz-Reynés, D.; Gomila, D.; Sintes, T.; Hernández-García, E.; Marbà, N.; Duarte, C.M. Fairy circle landscapes under the sea. Sci. Adv. 2017, 3, e1603262. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  30. Rietkerk, M.; van de Koppel, J. Regular pattern formation in real ecosystems. Trends Ecol. Evol. 2008, 23, 169–175. [Google Scholar] [CrossRef] [PubMed]
  31. Wang, R.; Dearing, J.A.; Langdon, P.G.; Zhang, E.; Yang, X.; Dakos, V.; Scheffer, M. Flickering gives early warning signals of a critical transition to a eutrophic lake state. Nature 2012, 492, 419–422. [Google Scholar] [CrossRef] [PubMed]
  32. Husson, E.; Ecke, F.; Reese, H. Comparison of manual mapping and automated object-based image analysis of non-submerged aquatic vegetation from very-high-resolution UAS images. Remote Sens. 2016, 8, 724. [Google Scholar] [CrossRef] [Green Version]
  33. Chabot, D.; Dillon, C.; Shemrock, A.; Weissflog, N.; Sager, E.P.S. An object-based image analysis workflow for monitoring shallow-water aquatic vegetation in multispectral drone imagery. ISPRS Int. J. Geo-Inf. 2018, 7, 294. [Google Scholar] [CrossRef] [Green Version]
  34. Visser, F.; Buis, K.; Verschoren, V.; Schoelynck, J. Mapping of submerged aquatic vegetation in rivers from very high-resolution image data, using object-based image analysis combined with expert knowledge. Hydrobiologia 2018, 812, 157–175. [Google Scholar] [CrossRef] [Green Version]
  35. Roelfsema, C.M.; Lyons, M.; Kovacs, E.M.; Maxwell, P.; Saunders, M.I.; Samper-Villarreal, J.; Phinn, S.R. Multi-temporal mapping of seagrass cover, species and biomass: A semi-automated object based image analysis approach. Remote Sens. Environ. 2014, 150. [Google Scholar] [CrossRef]
  36. Mishra, N.B.; Crews, K.A. Mapping vegetation morphology types in a dry savanna ecosystem: Integrating hierarchical object-based image analysis with Random Forest. Int. J. Remote Sens. 2014, 35, 1175–1198. [Google Scholar] [CrossRef]
  37. Hay, G.J.; Castilla, G. Geographic object-based image analysis (GEOBIA): A new name for a new discipline. In Object-Based Image Analysis; Spatial Concepts for Knowledge-Driven Remote Sensing Applications; Blaschke, T., Lang, S., Hay, G.J., Eds.; Springer Science and Business Media Deutschland GmbH: Berlin/Heidelberg, Germany, 2008; pp. 75–89. ISBN 9783540770589. [Google Scholar]
  38. Grizonnet, M.; Michel, J.; Poughon, V.; Inglada, J.; Savinaud, M.; Cresson, R. Orfeo ToolBox: Open source processing of remote sensing images. Open Geospatial Data, Softw. Stand. 2017, 2, 15. [Google Scholar] [CrossRef] [Green Version]
  39. R Core Team. R: A Language and Environment for Statistical Computing; R Core Team: Vienna, Austria, 2021. [Google Scholar]
  40. QGIS Development Team. QGIS Geographic Information System. 2009. Available online: https://en.wikipedia.org/wiki/QGIS (accessed on 1 November 2021).
  41. Blaschke, T.; Hay, G.J.; Kelly, M.; Lang, S.; Hofmann, P.; Addink, E.; Queiroz Feitosa, R.; van der Meer, F.; van der Werff, H.; van Coillie, F.; et al. Geographic Object-Based Image Analysis—Towards a new paradigm. ISPRS J. Photogramm. Remote Sens. 2014, 87, 180–191. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  42. Updike, T.; Comp, C. Radiometric Use of WorldView-2 Imagery; DigitalGlobe: Westminster, CO, USA, 2010; pp. 1–17. [Google Scholar]
  43. Kuester, M. Radiometric Use of WorldView-3 Imagery; DigitalGlobe: Westminster, CO, USA, 2016; pp. 1–12. [Google Scholar]
  44. Kuester, M. Absolute Radiometric Calibration: 2016v0; DigitalGlobe: Westminster, CO, USA, 2017; pp. 1–8. [Google Scholar]
  45. Mouw, C.B.; Greb, S.; Aurin, D.; DiGiacomo, P.M.; Lee, Z.; Twardowski, M.; Binding, C.; Hu, C.; Ma, R.; Moore, T.; et al. Aquatic color radiometry remote sensing of coastal and inland waters: Challenges and recommendations for future satellite missions. Remote Sens. Environ. 2015, 160, 15–30. [Google Scholar] [CrossRef]
  46. Moses, W.J.; Sterckx, S.; Montes, M.J.; De Keukelaere, L.; Knaeps, E. Atmospheric Correction for Inland Waters. In Bio-Optical Modeling and Remote Sensing of Inland Waters; Elsevier Inc.: Amsterdam, The Netherlands, 2017; pp. 69–100. ISBN 9780128046548. [Google Scholar]
  47. Chavez, P.S. Image-based atmospheric corrections—Revisited and improved. Photogramm. Eng. Remote Sens. 1996, 62, 1025–1036. [Google Scholar]
  48. Vanhellemont, Q.; Ruddick, K. Atmospheric correction of metre-scale optical satellite data for inland and coastal water applications. Remote Sens. Environ. 2018, 216, 586–597. [Google Scholar] [CrossRef]
  49. Marmorino, G.; Chen, W. Use of WorldView-2 along-track stereo imagery to probe a Baltic Sea algal spiral. Remote Sens. 2019, 11, 865. [Google Scholar] [CrossRef] [Green Version]
  50. Hossain, M.D.; Chen, D. Segmentation for Object-Based Image Analysis (OBIA): A review of algorithms and challenges from remote sensing perspective. ISPRS J. Photogramm. Remote Sens. 2019, 150, 115–134. [Google Scholar] [CrossRef]
  51. Villa, P.; Mousivand, A.; Bresciani, M. Aquatic vegetation indices assessment through radiative transfer modeling and linear mixture simulation. Int. J. Appl. Earth Obs. Geoinf. 2014, 30, 113–127. [Google Scholar] [CrossRef]
  52. Kuhn, M. Caret: Classification and Regression Training. 2020. Available online: https://cran.r-project.org/web/packages/caret/caret.pdf (accessed on 1 November 2021).
  53. Dowle, M.; Srinivasan, A. Data.Table: Extension of ‘Data.Frame’. 2021. Available online: https://rdrr.io/cran/data.table/ (accessed on 1 November 2021).
  54. Corporation, M.; Weston, S. DoParallel: Foreach Parallel Adaptor for the “Parallel” Package. 2020. Available online: https://cran.r-project.org/web/packages/doParallel/index.html (accessed on 1 November 2021).
  55. Microsoft; Weston, S. Foreach: Provides Foreach Looping Construct. 2020. Available online: https://cran.r-project.org/web/packages/foreach/index.html (accessed on 1 November 2021).
  56. Greenberg, J.A.; Mattiuzzi, M. gdalUtils: Wrappers for the Geospatial Data Abstraction Library (GDAL) Utilities. 2020. Available online: https://rdrr.io/cran/gdalUtils/ (accessed on 1 November 2021).
  57. Lisovski, S.; Hahn, S. GeoLight—Processing and analysing light-based geolocation in R. Methods Ecol. Evol. 2012. [Google Scholar] [CrossRef]
  58. Hengl, T. GSIF: Global Soil Information Facilities. 2020. Available online: https://rdrr.io/rforge/GSIF/ (accessed on 1 November 2021).
  59. Zhu, H. kableExtra: Construct Complex Table with “kable” and Pipe Syntax. 2020. Available online: https://mran.microsoft.com/package/kableExtra (accessed on 1 November 2021).
  60. Hesselbarth, M.H.K.; Sciaini, M.; With, K.A.; Wiegand, K.; Nowosad, J. Landscapemetrics: An open-source R tool to calculate landscape metrics. Ecography 2019, 42, 1648–1657. [Google Scholar] [CrossRef] [Green Version]
  61. Wickham, H. The Split-Apply-Combine Strategy for Data Analysis. J. Stat. Softw. 2011, 40, 1–29. [Google Scholar] [CrossRef] [Green Version]
  62. Liaw, A.; Wiener, M. Classification and Regression by randomForest. R News 2002, 2, 18–22. [Google Scholar]
  63. Hijmans, R.J. Raster: Geographic Data Analysis and Modeling. 2020. Available online: https://cran.r-project.org/web/packages/raster/raster.pdf (accessed on 1 November 2021).
  64. Bivand, R.; Keitt, T.; Rowlingson, B. Rgdal: Bindings for the “Geospatial” Data Abstraction Library. 2020. Available online: https://cran.r-project.org/web/packages/rgdal/index.html (accessed on 1 November 2021).
  65. Nauss, T.; Meyer, H.; Detsch, F.; Appelhans, T. Manipulating Satellite Data with {Satellite}. 2019. Available online: https://cran.r-project.org/web/packages/satellite/satellite.pdf (accessed on 1 November 2021).
  66. Pebesma, E. Simple Features for R: Standardized Support for Spatial Vector Data. R J. 2018, 10, 439–446. [Google Scholar] [CrossRef] [Green Version]
  67. Knaus, J. Snowfall: Easier Cluster Computing (Based On Snow). 2015. Available online: https://rdrr.io/cran/snowfall/ (accessed on 1 November 2021).
  68. Evans, J.S. spatialEco. 2020. Available online: https://cran.r-project.org/web/packages/spatialEco/index.html (accessed on 1 November 2021).
  69. Wickham, H.; Averick, M.; Bryan, J.; Chang, W.; McGowan, L.D.; François, R.; Grolemund, G.; Hayes, A.; Henry, L.; Hester, J.; et al. Welcome to the {tidyverse}. J. Open Source Softw. 2019, 4, 1686. [Google Scholar] [CrossRef]
  70. Hunziker, P. velox: Fast Raster Manipulation and Extraction. 2018. Available online: https://www.r-bloggers.com/2016/09/velox-fast-raster-manipulation-and-extraction-in-r/ (accessed on 1 November 2021).
  71. Pebesma, E.J.; Bivand, R.S. Classes and methods for spatial data in {R}. R News 2005, 5, 9–13. [Google Scholar]
  72. Bivand, R.S.; Pebesma, E.; Gomez-Rubio, V. Applied Spatial Data Analysis with {R}, 2nd ed.; Springer: New York, NY, USA, 2013. [Google Scholar]
  73. Ma, S.; Zhou, Y.; Gowda, P.H.; Dong, J.; Zhang, G.; Kakani, V.G.; Wagle, P.; Chen, L.; Flynn, K.C.; Jiang, W. Application of the water-related spectral reflectance indices: A review. Ecol. Indic. 2019, 98, 68–79. [Google Scholar] [CrossRef]
  74. Rondeau, B.; Cossa, D.; Gagnon, P.; Bilodeau, L. Budget and sources of suspended sediment transported in the St. Lawrence River, Canada. Hydrol. Process. 2000, 14, 21–36. [Google Scholar] [CrossRef]
  75. Reshitnyk, L.; Costa, M.; Robinson, C.; Dearden, P. Evaluation of WorldView-2 and acoustic remote sensing for mapping benthic habitats in temperate coastal Pacific waters. Remote Sens. Environ. 2014, 153, 7–23. [Google Scholar] [CrossRef]
  76. Vilas, M.P.; Marti, C.L.; Adams, M.P.; Oldham, C.E.; Hipsey, M.R. Invasive Macrophytes Control the Spatial and Temporal Patterns of Temperature and Dissolved Oxygen in a Shallow Lake: A Proposed Feedback Mechanism of Macrophyte Loss. Front. Plant Sci. 2017, 8, 2097. [Google Scholar] [CrossRef] [Green Version]
  77. Ruddick, K.; Vanhellemont, Q.; Dogliotti, A.I.; Nechad, B.; Pringle, N.; Van der Zande, D. New opportunities and challenges for high resolution remote sensing of water colour. In Proceedings of the Ocean Optics 2016, Victoria, BC, Canada, 23–28 October 2016. [Google Scholar]
  78. Kay, S.; Hedley, J.; Lavender, S.; Kay, S.; Hedley, J.D.; Lavender, S. Sun Glint Correction of High and Low Spatial Resolution Images of Aquatic Scenes: A Review of Methods for Visible and Near-Infrared Wavelengths. Remote Sens. 2009, 1, 697–730. [Google Scholar] [CrossRef] [Green Version]
  79. Husson, E.; Reese, H.; Ecke, F. Combining spectral data and a DSM from UAS-images for improved classification of non-submerged aquatic vegetation. Remote Sens. 2017, 9, 247. [Google Scholar] [CrossRef] [Green Version]
  80. Peña-Barragán, J.M.; Ngugi, M.K.; Plant, R.E.; Six, J. Object-based crop identification using multiple vegetation indices, textural features and crop phenology. Remote Sens. Environ. 2011, 115, 1301–1316. [Google Scholar] [CrossRef]
Figure 1. Typical remote sensing object-based image analysis (OBIA) workflow, where a sequence of classifications is performed on image segments (objects) based on training sets and large arrays of features.
Figure 2. Detailed schematic representation of the high-resolution multispectral remote sensing workflow of submerged aquatic vegetation (SAV) proposed in this study. The associated scripts are available online, on Github, URL: https://github.com/arthurdegrandpre/Open_HRRS_W2 (accessed on 1 November 2021).
Figure 3. Images used in this study and their location. (a) Image 1: Quickbird-2 four-band imagery of lake Saint-Pierre (© 2022 Maxar Technologies). (b) Image 2: Worldview-03 eight-band imagery of lake Saint-François (© 2022 Maxar Technologies). (c) Location of the two fluvial lakes in southern Quebec, Canada (map data © 2022 Google); purple corresponds to (a), green to (b), and the red square in the inset shows the location within Canada.
Figure 4. Spectra of deep-water pixels from both images for all four product types. Solid lines and circles refer to image 1 (QB02); dotted lines and triangles refer to image 2 (WV3). (a) The raw image digital numbers (DN) and the absolute radiometric calibration (arc) at-sensor radiance product. (b) The top-of-atmosphere (toa) and bottom-of-atmosphere (boa) reflectance spectra.
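As a point of reference for how these four product types relate to one another, the sketch below shows the band-wise conversions for a single DigitalGlobe band in R, following the general logic of the absolute radiometric calibration notes [42,43,44] and a dark-object subtraction in the spirit of [47]. All numeric values (gain, offset, absCalFactor, effective bandwidth, solar irradiance, Earth-Sun distance, solar elevation) and the file name are illustrative placeholders, not values from this study.

# Minimal sketch (assumed values): DN to at-sensor radiance, TOA reflectance, and a
# dark-object-subtracted BOA estimate for one band.
library(raster)

dn <- raster("image_band4.tif")          # hypothetical NIR band, raw DN

# Absolute radiometric calibration (metadata values are per band and per image)
gain      <- 1.0                         # GAIN from the calibration note (assumed)
offset    <- 0.0                         # OFFSET from the calibration note (assumed)
abscal    <- 0.01260825                  # absCalFactor from the .IMD file (assumed)
bandwidth <- 0.0989                      # effective bandwidth in um (assumed)
radiance  <- gain * dn * (abscal / bandwidth) + offset   # W m-2 sr-1 um-1

# TOA reflectance
esun      <- 1071.98                     # band-averaged solar irradiance (assumed)
d_au      <- 1.0149                      # Earth-Sun distance in AU on image date (assumed)
sun_elev  <- 55.2                        # solar elevation from metadata, degrees (assumed)
theta_s   <- (90 - sun_elev) * pi / 180  # solar zenith angle in radians
toa       <- pi * radiance * d_au^2 / (esun * cos(theta_s))

# BOA estimate by dark object subtraction over deep, optically dark water
dark_value <- quantile(values(toa), 0.001, na.rm = TRUE)
boa        <- toa - dark_value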
Figure 5. RGB composite of the 2019 Worldview-03 image of lake Saint-François (© 2022 Maxar Technologies) before (a) and after (b) empirical de-striping.
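The exact de-striping procedure is implemented in the scripts linked in Figure 2; the fragment below is only a minimal sketch of one common empirical approach, in which each image column is rescaled so that its mean over a homogeneous deep-water reference area matches the overall mean of that area. The function and layer names are hypothetical.

# Minimal sketch of a column-wise empirical de-striping correction, assuming the
# stripes run along image columns and that a homogeneous deep-water mask is available;
# not the exact procedure used in the published workflow.
library(raster)

destripe_band <- function(band, reference_mask) {
  m   <- as.matrix(band)                       # rows x columns of pixel values
  ref <- as.matrix(reference_mask)             # 1 over deep water, NA elsewhere
  col_means   <- colMeans(m * ref, na.rm = TRUE)
  global_mean <- mean(m * ref, na.rm = TRUE)
  correction  <- global_mean / col_means       # one multiplicative factor per column
  corrected   <- sweep(m, 2, correction, `*`)
  setValues(band, as.vector(t(corrected)))     # back to a RasterLayer, row-major order
}

# usage (hypothetical layers): nir_destriped <- destripe_band(nir, deep_water_mask)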
Figure 6. Visualization of the coarse initial segmentation of the 2009 (a) and 2019 (b) images and their corresponding land-masked products (c,d) (© 2022, 2019 Maxar Technologies).
Figure 7. Visualization of the iterative classification process over image 1, the 2009 lake Saint-Pierre Quickbird-2 image (© 2022 Maxar Technologies). From top to bottom, panels show the first to the last classification attempt, where the main difficulty was preventing the overestimation of vegetated areas in low-contrast regions. On the left are the classes predicted for all objects by the random forest model, with orange boxes marking areas to be corrected in the next iteration by adjusting the training set. On the right are the proportions of meaningful classification errors (false negatives and false positives) for every class, together with the total out-of-bag (OOB) model accuracy. Class prefixes stand for vegetation (v) and water (w), and suffixes for emergent (em), submerged (sub), blue-green (bg), and proximity to land (p).
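For readers who want to reproduce one iteration of this loop, the sketch below shows a single random forest fit with the randomForest package [62] on a hypothetical data frame of per-object features (objects, with a class column filled only for labelled training objects); the variable names and layout are assumptions, not the exact structure used in the published scripts.

# Minimal sketch of one iteration of the object-based random forest classification.
library(randomForest)

feature_cols <- setdiff(names(objects), "class")   # all object features (assumed layout)
train <- objects[!is.na(objects$class), ]          # labelled objects only

rf <- randomForest(
  x = train[, feature_cols],                       # zonal statistics, textures, indices
  y = as.factor(train$class),
  ntree = 500,
  importance = TRUE
)

print(rf$confusion)                                # per-class OOB errors (class.error column)
oob_accuracy <- 1 - rf$err.rate[rf$ntree, "OOB"]   # overall out-of-bag accuracy
objects$predicted <- predict(rf, objects[, feature_cols])

In practice, the predicted map is inspected visually, corrections are digitized for the misclassified areas (such as low-contrast submerged vegetation), appended to the training set, and the model is refit until the map and the OOB error are judged acceptable.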
Figure 8. Final vegetation cover map as produced by the proposed workflow. Panel (a) shows the 2009 lake Saint-Pierre Quickbird-2 image (© 2022 Maxar Technologies) and panel (b) shows the 2019 lake Saint-François image (© 2022 Maxar Technologies).
Figure 9. Value range for all bands in a deep-water area affected by surface waves. (a) A seemingly homogeneous area of deep water in the eastern part of lake St-François. (b) Minimum, mean (dot), and maximum pixel values at every wavelength within this area.
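A quick way to reproduce this kind of diagnostic is to summarize the per-band minimum, mean, and maximum values inside a polygon digitized over apparently homogeneous deep water, as in the sketch below; the file names are placeholders.

# Minimal sketch: per-band min/mean/max within a deep-water polygon, to gauge the
# spectral noise introduced by surface waves (hypothetical file names).
library(raster)
library(sf)

img  <- brick("lsf_2019_boa.tif")                  # hypothetical multispectral raster
area <- st_read("deep_water_polygon.gpkg")         # hypothetical digitized polygon

vals <- extract(img, as(area, "Spatial"))[[1]]     # matrix of pixels x bands
band_range <- apply(vals, 2, function(v)
  c(min  = min(v, na.rm = TRUE),
    mean = mean(v, na.rm = TRUE),
    max  = max(v, na.rm = TRUE)))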
Table 1. Parameters, features and classes used for object-based image analysis level 1 classification (land masking).
Meanshift Segmentation and Haralick Texture Extraction Parameters
Meanshift
  • Spatial range = 5
  • Spectral range = 0.0025
  • Threshold = 0.001
  • Max. iterations = 100
  • Min. size = 5
Haralick (NIR band)
  • Min. = 0
  • Max. = 1
  • X radius = 5
  • Y radius = 5
  • X offset = 10
  • Y offset = 10
  • N bins = 64
Features per Object
Zonal statistics for all spectral bands (4 or 8)
  • Min/Max
  • Mean/Standard deviation
Simple Haralick textures for NIR band
  • Energy
  • Entropy
  • Correlation
  • Inverse difference moment
  • Inertia
  • Cluster shade
  • Cluster prominence
  • Haralick correlation
Vegetation indices 1
  • Normalized difference vegetation index (NDVI)
  • Submerged aquatic vegetation index (SAVI)
  • Enhanced vegetation index (EVI)
  • Normalized difference aquatic vegetation index (NDAVI)
  • Water adjusted vegetation index (WAVI)
Object Classes
Land
  • Anthropogenic (buildings/roads)
  • Boats
  • Wetlands (high NDVI, attached to land)
  • Shadows
  • Agriculture/grasslands
  • Forest
  • Sand/soil
Water
  • Blue/green-deep
  • Blue/green-shallow
  • Brown
  • Dark
Aquatic vegetation
  • Floating (debris in deep water)
  • Emergent (high NIR signal)
  • Submerged—high (NIR signal)
  • Submerged—low (no NIR signal)
Other
  • Boat waves
  • Floating foam
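The segmentation and texture parameters in Table 1 map directly onto the Orfeo Toolbox command-line applications, which can be called from R through system(). The sketch below shows one plausible way to do so, assuming the otbcli_* executables are on the system PATH; the input file name is a placeholder, and the exact calls in the published scripts may differ.

# Minimal sketch: Orfeo Toolbox calls from R with the Table 1 parameters.
img <- "lsp_2009_boa_masked.tif"   # hypothetical input raster

# Meanshift segmentation, vector output
system(paste(
  "otbcli_Segmentation -in", img,
  "-filter meanshift",
  "-filter.meanshift.spatialr 5",
  "-filter.meanshift.ranger 0.0025",
  "-filter.meanshift.thres 0.001",
  "-filter.meanshift.maxiter 100",
  "-filter.meanshift.minsize 5",
  "-mode vector -mode.vector.out segments_l1.shp"
))

# Simple Haralick textures on the NIR band (channel 4 for a 4-band image)
system(paste(
  "otbcli_HaralickTextureExtraction -in", img,
  "-channel 4 -texture simple",
  "-parameters.min 0 -parameters.max 1",
  "-parameters.xrad 5 -parameters.yrad 5",
  "-parameters.xoff 10 -parameters.yoff 10",
  "-parameters.nbbin 64",
  "-out haralick_nir.tif"
))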
Table 2. Parameters, features and classes used for object-based image analysis level 2 classification (vegetation cover mapping).
Meanshift Segmentation and Haralick Texture Extraction Parameters (Image 1/Image 2)
Meanshift
  • Spatial range = 5
  • Spectral range = (0.0025/0.001)
  • Threshold = 0.001
  • Max. iterations = 100
  • Min. size = 5
Haralick (NIR, blue and NDVI bands)
  • Min. = 0
  • Max. = 0.3
  • X radius = 5
  • Y radius = 5
  • X offset = 10
  • Y offset = 10
  • N bins = 64
Features per Object
Zonal statistics for all spectral bands (4 or 8)
  • Min/Max
  • Mean/Standard deviation
Simple Haralick textures for NIR, blue and NDVI bands
  • Energy
  • Entropy
  • Correlation
  • Inverse difference moment
  • Inertia
  • Cluster shade
  • Cluster prominence
  • Haralick correlation
Vegetation indices 1
  • Normalized difference vegetation index (NDVI)
  • Submerged aquatic vegetation index (SAVI)
  • Enhanced vegetation index (EVI)
  • Normalized difference aquatic vegetation index (NDAVI)
  • Water adjusted vegetation index (WAVI)
Object Classes, Image 1
Water
  • Blue/green-deep
  • Shallow
  • Brown
  • Dark
  • Dark—adjacency effect
Aquatic vegetation
  • Emergent (high NIR signal)
  • Emergent—adjacency effect (very high NIR signal due to land adjacency)
  • Submerged—high (NIR signal)
  • Submerged—low (no NIR signal)
Object Classes, Image 2
Water
  • Deep
  • Shallow
Aquatic vegetation
  • Floating (debris in deep water)
  • Submerged—high (NIR signal)
  • Submerged—low (no NIR signal)
1 See [51].
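The spectral indices listed in Tables 1 and 2 follow the formulations reviewed in [51]. As a convenience, the sketch below computes them in R from per-object mean reflectance in the blue, red, and NIR bands, assuming the common background-adjustment factor L = 0.5 for SAVI and WAVI; the column names are hypothetical.

# Minimal sketch of the vegetation indices used in the OBIA feature set.
vegetation_indices <- function(blue, red, nir, L = 0.5) {
  data.frame(
    ndvi  = (nir - red)  / (nir + red),
    ndavi = (nir - blue) / (nir + blue),
    savi  = (1 + L) * (nir - red)  / (nir + red  + L),
    wavi  = (1 + L) * (nir - blue) / (nir + blue + L),
    evi   = 2.5 * (nir - red) / (nir + 6 * red - 7.5 * blue + 1)
  )
}

# usage (hypothetical columns of per-object mean reflectance):
# idx <- vegetation_indices(objects$mean_blue, objects$mean_red, objects$mean_nir)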
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
