Article

Automatic Classification of Submerged Macrophytes at Lake Constance Using Laser Bathymetry Point Clouds

1 Department of Geodesy and Geoinformation, TU Wien, Wiedner Hauptstr. 8-10, 1040 Vienna, Austria
2 Institute of Landscape and Plant Ecology (320), University of Hohenheim, Ottilie-Zeller-Weg 2, 70599 Stuttgart, Germany
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Remote Sens. 2024, 16(13), 2257; https://doi.org/10.3390/rs16132257
Submission received: 15 May 2024 / Revised: 12 June 2024 / Accepted: 14 June 2024 / Published: 21 June 2024

Abstract

Submerged aquatic vegetation, also referred to as submerged macrophytes, provides important habitats and serves as a significant ecological indicator for assessing the condition of water bodies and for gaining insights into the impacts of climate change. In this study, we introduce a novel approach for classifying submerged vegetation captured with bathymetric LiDAR (Light Detection And Ranging) as a basis for monitoring its state and change, and we validate the results against established monitoring techniques. Employing full-waveform airborne laser scanning, which is routinely used for topographic mapping and forestry applications on dry land, we extended its application to the detection of underwater vegetation in Lake Constance. The primary focus of this research lies in the automatic classification of bathymetric 3D LiDAR point clouds using a decision-based approach, distinguishing three vegetation classes, (i) Low Vegetation, (ii) High Vegetation, and (iii) Vegetation Canopy, based on their height and other properties such as local point density. The results reveal detailed 3D representations of submerged vegetation, enabling the identification of vegetation structures and the inference of vegetation types with reference to pre-existing knowledge. While the results within the training areas demonstrate high precision and alignment with the comparison data, the findings in independent test areas exhibit certain deficiencies that are likely addressable through corrective measures in the future.

1. Introduction

Lakes are complex and dynamic ecosystems that support a diverse range of aquatic life. Submerged macrophytes play a critical role in maintaining the ecological balance of these systems. The significance of submerged aquatic vegetation lies in its ability to sustain clear-water conditions in shallow water [1], which in turn enhances habitat diversity by offering organic matter, producing shade and shelter, regulating temperature, and creating aquatic habitat structures [2]. At the same time, the dynamics of the occurrence of submerged macrophytes in inland waters are an important indicator for determining the ecological status of water bodies [3,4], which is influenced (directly or indirectly) by both anthropogenic interventions and climate change. Therefore, assessing the spatial distribution and growth of submerged macrophytes is an important tool for lake management and the conservation of inland waters [5,6] as well as for climate research.
Conventional manual monitoring methods for submerged macrophytes often require labor-intensive, time-consuming, and potentially destructive fieldwork [7]. Besides the manual in situ visual assessment of species distribution, comprehensive sampling plays an important role in traditional monitoring. The elevated error rate observed in manual monitoring stems from multiple contributing factors, including observer misidentification, imprecise estimation, and restricted accessibility of specific locations, which may lead to an incomplete representation of the ecosystem’s heterogeneity [8].
For decades, remote sensing has been gaining importance in the field of mapping the Earth’s land mass, and more recently, it has also become significant in surveying water bodies [9]. Nevertheless, working in aquatic environments presents new challenges, primarily due to the presence of a water column that weakens benthic reflectance signals and creates heterogeneity within the scene, making the analysis more intricate [10,11]. The focus in aquatic surveying lies on passive airborne and satellite-based techniques, including panchromatic, true-color, multispectral, and hyperspectral imaging, but active methods have also been gaining increasing attention recently [8,12,13]. In particular, Airborne Laser Scanning (ALS), in its bathymetric variant referred to as Airborne Laser Bathymetry (ALB), has found extensive application in shallow water surveying [14,15,16]. The technology of bathymetric Light Detection and Ranging (LiDAR), which works with laser pulses in the green range of the electromagnetic spectrum, has extraordinary potential here [17]. Unlike infrared light, green laser radiation is able to penetrate clear shallow water [18] and is reflected off the bottom surface, objects located in the water body, and the water column itself.
Airborne laser bathymetry is a laser scanning technique used to measure water body bottom topography [19,20,21,22] or to detect the presence of underwater objects [23]. LiDAR has been used occasionally for mapping submerged vegetation, focusing mainly on distinguishing between the presence and absence of vegetation [15,24,25,26,27]. Despite its potential to overcome some of the limitations of conventional methods, there is little research on the more detailed quantitative or qualitative detection and modeling of submerged macrophytes in shallow lakes with ALS, and consequently little research on analyzing the collected data. This especially applies to the classification of bathymetric ALS point clouds, which constitutes an imperative step for the 3D mapping of submerged vegetation, and possibly the differentiation of vegetation types.
In general, bathymetric LiDAR sensors are tuned to maximize the depth measurement performance, as light is strongly absorbed in the water medium [20]. This applies to clear water, but to an even higher extent to turbid water, where dissolved or solid particles contribute to scattering and signal absorption in the water column [28]. A side effect of the high sensitivity of the sensor is a high number of volume backscatter points in the water column. Many of these points are reflections neither from the water surface or the bottom nor from objects in between, such as submerged macrophytes. This is a typical situation for bathymetric 3D LiDAR point clouds in general, especially when the laser waveform analysis aims to retrieve very weak echoes [29].
Standard strategies for classifying 3D ALS point clouds into ground and low/medium/high vegetation start with filtering terrain and off-terrain points and classifying vegetation afterward based on height thresholds [30]. Examples of existing software working on this principle include SCOP++/Trimble [31], LAStools/rapidlasso [32], and TerraScan/Terrasolid. Using height thresholds above the terrain alone, however, is not suitable for the classification of submerged vegetation due to the high number of volume backscattering points.
Modern classification strategies utilize machine learning (ML) in general and deep learning (DL) in particular [33,34,35]. The problem with ML-based methods is their dependence on labeled ground truth data. While such data are increasingly available for topographic ALS via open government data from National Mapping Agencies [36], proper benchmark datasets are still missing for bathymetric LiDAR [37].
Therefore, tailored processing strategies are required for bathymetric LiDAR. This paper introduces a novel method for detecting and mapping submerged vegetation in the littoral zone of Lake Constance through the automatic point cloud classification of modern topo-bathymetric airborne laser scanning data, referred to as ALB in the following. The premise is that the accuracy of point cloud classification plays a vital role in the usability and potential of ALB for characterizing and quantifying submersed macrophytes. More specifically, this study focused on (1) the automatic classification of ALB point clouds to identify underwater vegetation and differentiate between three vegetation classes, Low Vegetation, High Vegetation, and Vegetation Canopy, (2) the creation of 3D digital surface models (DSMs) of submerged vegetation, and (3) the final classification of the point cloud for 3D vegetation mapping. The distinction between vegetation classes is based on characteristics that can be acquired through LiDAR surveys (volumetric density, reflectance, etc.), rather than height markers only. It must be highlighted that the inherent quality of the laser dataset is compromised by diverse biological and technical factors, resulting in an unusually dense point cloud with a high number of noise points. Consequently, alongside the laser point cloud, the data analysis incorporates reference datasets, which play an essential role in the processing chain.
The study aimed to contribute to the field of remote sensing and environmental monitoring by evaluating the potential of ALB as a cost-effective and efficient tool for collecting information on submerged vegetation through automatic point cloud classification. The work described in this article was conducted within the research project Seewandel, which comprises multiple research endeavors aimed at investigating various aspects of Lake Constance and seeks to gain a comprehensive understanding of the lake’s ecological, hydrological, and environmental characteristics [38].
The remainder of this article is structured as follows. Section 2 introduces the study area, the available ALB and reference data, and the software framework used. In Section 3, we provide a detailed description of the employed classification strategy and explain our quality assessment approach. We present the results in Section 4, critically validate them in Section 5, and discuss them in Section 6. This article ends with concluding remarks in Section 7.

2. Materials

2.1. Study Area and Research Project

Lake Constance, also known as Bodensee, is the second-largest pre-alpine European lake [39], with a surface area of 536 km² [40] and shorelines in Germany, Switzerland, and Austria (Figure 1a,b). The lake and its region are intensively influenced by local anthropogenic activities [39,41], including dramatic changes in submersed vegetation in recent decades [40], as well as by climate change, and it is considered one of the best-examined lakes, with limnological research dating back more than 100 years. In 2018, the research project SeeWandel was implemented by the IGKB (Internationale Gewässerschutzkommission für den Bodensee) [42] to further explore how Lake Constance responds to changing environmental conditions [38].
The research team specifically investigated the resilience dynamics of submerged macrophytes in the littoral zone of Lake Constance, focusing on recording the current macrophyte populations at the species level and conducting a spatio-temporal analysis of species composition and vegetation structure. In addition to conventional monitoring methods, a LiDAR underwater vegetation survey approach was also tested and is the focus of this paper. Figure 1 shows the locations of ten area-of-interest (AOI) tiles, each consisting of a regular hexagon with a side length of approx. 200 m. The data analysis was performed for these tiles and was subsequently validated against two larger test areas from the same dataset (T1 and T2).

2.2. Dataset

The ALB data used in this research were generated by the Austrian company Airborne Hydro Mapping (AHM) using a RIEGL VQ-880-G topo-bathymetric laser scanner. The VQ-880-G system utilizes a green laser operating at a wavelength of 532 nm and has an accuracy of 25 mm in the vertical and horizontal dimensions [43]. The scan pattern is circular (Palmer scanner) with a constant off-nadir angle of 20°. Data collection took place on 9 July 2019, a date expected to align with the peak of vegetation growth. The following environmental conditions were recorded during the flight campaign: the measured Secchi depth was 3.7 m, indicating a calcite precipitation event, and the water level at the gauge was 447 cm (gauge zero point at 391.89 m a.s.l.), i.e., 115 cm above the mean water level of 332 cm, indicating typical summer flood conditions. The data were stored as a point cloud, comprising data points in the 3D ETRS89/UTM zone 32N coordinate reference system, in compressed LAS format (LAZ). LAS/LAZ is a widely adopted industry standard for LiDAR [44]. Each point in the cloud corresponds to a unique measurement of the laser beam reflected from features such as the water surface, ground, aquatic vegetation, or particles in the water column [29].
In addition to the spatial information (i.e., 3D coordinates allowing for the accurate positioning and mapping of the features represented), additional attributes were recorded and stored for each point: the PointId (assignment to a specific flight strip); Reflectance (a measure of the amplitude, or strength, of the reflected signal, providing information on the reflectivity of the target from which the emitted signal was reflected); NumberOfReturns (the number of echoes received from a single transmitted laser pulse); and Pre-Classification (a prior differentiation of water, water surface, and noise conducted by AHM). These attributes play crucial roles in the data analysis process.
The unusual environmental factors at the time of recording—including a calcite precipitation event and flood conditions—coupled with the calculation approach employed by AHM, in which each deflection in the backscattered signal generates a point irrespective of its significance [29], result in a point cloud characterized by a high number of noise points resembling a dense “point fog” that is difficult to interpret. This makes the assistance of supplementary input data indispensable: the Digital Terrain Model (DTM), named Tiefenschärfe-DTM [45] and formatted in LAS, provides a comprehensive 3D depiction of Lake Constance’s bottom topography. (Tiefenschärfe was the name of a project aiming for a complete survey of the bathymetry of Lake Constance and the surrounding littoral terrain, with data collection in 2013 and 2014. The term stems from optics and literally translates to “depth of field”, but that is not the meaning intended here. The real meaning is revealed when separating the two words “Tiefe” (depth) and “Schärfe” (acuity/sharpness): the project aimed to provide a sharp geometric model of Lake Constance.) Additionally, the results of the aerial photo interpretation, also conducted as part of the SeeWandel project, are available (based on ground truth field survey data collected from a boat between June and September 2019, with additional mapping in July 2021). These manually generated polygonal representations of submerged macrophyte patches provide a general understanding of the distribution and density of aquatic vegetation. It is important to emphasize that this classification only shows the dominant vegetation class of a patch, and the presence of additional vegetation classes can never be excluded. Furthermore, due to the limited visibility of submersed vegetation in the aerial photos taken during the LiDAR campaign in July, the aerial photo interpretation was based on aerial photos of August 2019, resulting in a time shift of about one month. Table 1 lists the distinguished vegetation classes with the corresponding species. The aerial photo interpretation and the temporally corresponding orthophotos are compared with the classification results of the LiDAR point cloud for evaluation in Section 5.
In general, the time difference between laser data and the respective reference data must always be taken into account during the classification process, as well as during the subsequent validation and discussion of the results.

2.3. Software Framework

For the 3D mapping of submerged macrophytes based on dense ALB point clouds, we used the scientific laser scanning software OPALS [46]. The centerpiece of the system is the OPALS data manager [47], a component that provides (i) efficient spatial access to large point clouds and (ii) a dynamic system for managing user-defined point attributes. The software, available on Microsoft Windows and Linux, is composed of small components, referred to as modules. The individual modules can be freely combined into complex workflows using either shell or Python scripts. Our classification procedure was implemented on the basis of a batch script. Processing was carried out with OPALS version 2.5.0 on a standard desktop computer with Windows 10.

3. Methods

3.1. Airborne Laser Scanning Data Processing

The method for classifying full-waveform (FWF) airborne laser bathymetry point clouds mainly consists of (1) data preparation, (2) the classification of candidates (separately for each vegetation class), (3) digital surface model creation, and (4) the final point cloud classification. Figure 2 depicts a schematic representation of the data processing workflow. As described in Section 2.3, the processing pipeline was implemented using the modular program system OPALS [46] and Python 3.6.8 [48].
The original geo-referenced point clouds and the bathymetric Digital Terrain Model Tiefenschärfe-DTM [45] were used as primary input data. The DTM was assembled from SONAR (Sound Navigation And Ranging) and ALB data. While multibeam echo sounding (MBES) data served as base data for the 3D reconstruction of the pelagic (open water) and benthic (bottom) zones, ALB was used for the littoral (shallow water) area. The latter was acquired before the macrophytes’ growth season, i.e., under optimal conditions for mapping the lake bottom topography. However, in recent decades, some parts of the littoral zone show overwintering submerged vegetation [49], which may have impaired the accuracy of the DTM. In addition to the DTM, field survey data, aerial photo interpretation, and orthophotos were considered as comparative data to aid in the interpretation of properties such as reflectance and point density, which significantly contribute to the classification process. However, the comparative data were not directly incorporated into the classification workflow, which ensures that independent results are obtained.

3.1.1. Data Preparation

For the subsequent processing, the point clouds of multiple overlapping flight strips were merged, and the AOIs depicted in Figure 1 were cut out of the combined dataset. Each flight strip consists of two sets of point clouds composed of the points of the laser beams looking either backward or forward in the direction of flight (Figure 3). To filter out “noise points”, the existing Pre-Classification of the point cloud as well as the Tiefenschärfe-DTM were used.
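As an illustration of this preparation step, the following minimal Python sketch merges flight strips, drops pre-classified noise points, and clips the result to a hexagonal AOI. It assumes the laspy library (with a LAZ backend such as lazrs) and shapely; the file list, hexagon geometry, and noise class ID are placeholders, not values from the actual processing chain, which was implemented in OPALS.

```python
# Minimal sketch of the data preparation (assumed libraries: laspy with a
# LAZ backend such as lazrs, and shapely); the strip files, hexagon
# coordinates, and noise class ID are placeholders.
import numpy as np
import laspy
from shapely.geometry import Point, Polygon

NOISE_CLASS = 7  # hypothetical ID of the pre-classified noise points

def merge_and_clip(strip_files, aoi_hexagon: Polygon) -> np.ndarray:
    """Merge overlapping flight strips and keep non-noise points inside the AOI."""
    kept = []
    for path in strip_files:
        las = laspy.read(path)
        not_noise = np.asarray(las.classification) != NOISE_CLASS
        xyz = np.column_stack((las.x, las.y, las.z))[not_noise]
        inside = np.fromiter(
            (aoi_hexagon.contains(Point(x, y)) for x, y, _ in xyz),
            dtype=bool, count=len(xyz))
        kept.append(xyz[inside])
    return np.vstack(kept)
```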

3.1.2. Classification of Candidates

The actual point cloud classification is preceded by the classification of candidate points, which is the core of this research work. Unlike the final classification, which aims to assign a well-defined class to each point of the point cloud, we first identify some, but not necessarily all, points that characterize a certain vegetation class. These points are later used as representative points, or candidate points, in the further process. This step is conducted for each vegetation class individually; however, the same scheme is used for each class (Figure 4).
At this point, the classification process can be conceptualized as an iterative filtering process. Initially, an indicator variable (e.g., reflectance, average distance to neighboring points, etc.) is defined based on existing point attributes or newly computed attributes. We then examine the confounding variables and try to mitigate their effects by normalizing the indicator using empirical formulas. Next, we analyze the distributions of the calculated attributes and automatically determine suitable threshold values based on characteristic distribution patterns. Points exceeding or falling below the threshold with respect to the considered variables are filtered out.
A visualization and comparison with reference data are conducted to ascertain whether the remaining points accurately represent the corresponding vegetation class, or whether the iteration necessitates the recalculation of attributes. If the result is satisfactory, the remaining points are retained as candidates for the respective vegetation class. Note that the entire strategy is not based on hard-coded threshold values; rather, suitable values for class delimitation are derived from the analysis of attribute distributions.
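The following sketch illustrates one such filtering iteration for a density-based indicator. The choice of the distance to the fourth-nearest neighbor as the indicator and of Otsu's method as the automatic threshold rule are illustrative assumptions; the actual pipeline derives its thresholds from the specific attribute distributions within OPALS.

```python
# One filtering iteration of the candidate classification: a density
# indicator (distance to the k-th nearest neighbour) with an automatically
# derived threshold. Using k = 4 and Otsu's rule is an illustrative
# assumption, not the paper's exact procedure.
import numpy as np
from scipy.spatial import cKDTree
from skimage.filters import threshold_otsu

def dist_knn(xyz: np.ndarray, k: int = 4) -> np.ndarray:
    """Distance to the k-th nearest neighbour; small values mean high density."""
    d, _ = cKDTree(xyz).query(xyz, k=k + 1)  # first hit is the point itself
    return d[:, -1]

def low_vegetation_candidates(xyz: np.ndarray) -> np.ndarray:
    """Boolean mask of points denser than the automatic threshold."""
    indicator = dist_knn(xyz)
    threshold = threshold_otsu(indicator)  # data-driven split of the distribution
    return indicator < threshold
```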
To illustrate the application of this approach, the first step of the processing chain, i.e., the definition of an indicator variable, is shown for each of the three vegetation classes Low Vegetation, High Vegetation, and Vegetation Canopy in Figure 5, Figure 6 and Figure 7. In some tiles, a more precise distinction is made between Low Vegetation and Low Vegetation 2 if, within a processed point cloud, the class can be clearly separated into two sub-classes of different heights. No fixed height difference is set as a limit value; rather, if the evaluation process shows a clearly bimodal distribution of vegetation height within the Low Vegetation class, the limit value is calculated automatically based on the height distribution. In this way, the additional information of clear height differences is included in the classification scheme. The actual heights can be read in the 3D view.
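A possible realization of this data-driven sub-class split is sketched below: a two-component Gaussian mixture is fitted to the heights above the DTM, and a split value is returned only if the two-mode model clearly fits better. The BIC-based bimodality check and the midpoint split are assumptions for illustration, not the exact procedure used in the study.

```python
# Possible data-driven split of Low Vegetation into two height sub-classes:
# a two-component Gaussian mixture on the height above the DTM. The BIC
# bimodality test and the midpoint split are illustrative assumptions.
import numpy as np
from sklearn.mixture import GaussianMixture

def split_height(height_above_dtm: np.ndarray):
    """Return a split height if the distribution is clearly bimodal, else None."""
    h = height_above_dtm.reshape(-1, 1)
    gm1 = GaussianMixture(n_components=1, random_state=0).fit(h)
    gm2 = GaussianMixture(n_components=2, random_state=0).fit(h)
    if gm2.bic(h) >= gm1.bic(h):   # one mode explains the data just as well
        return None
    lo, hi = np.sort(gm2.means_.ravel())
    return 0.5 * (lo + hi)         # limit value between the two mode centres
```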

3.1.3. Calculation of Digital Surface Models

The DSM calculation utilizes the classified candidates of the vegetation and ground classes as inputs and performs an interpolation to derive the surface models overlying the respective candidate points, as shown in Figure 8. The definition of ground candidates is based on the available Tiefenschärfe-DTM. The interpolation algorithm used for calculating the surface models was designed to only compute values at locations where candidate points exist. This results in incomplete surface models that do not cover the entire area of interest (cf. Section 4.1, Figure 10b).
When creating the DSMs of the respective vegetation classes, a grid width of 0.3 m (0.6 m for the Vegetation Canopy class), 32 neighbors, a maximum search range of 0.2 m, and the interpolation method “mean” (i.e., the average height of all neighboring points) were used. This combination of parameters ensures that the vegetation structure is resolved at a relatively good spatial resolution while not being negatively influenced by individual outlier points, which would be a problem with fewer neighboring points.
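To make the interpolation concrete, the following sketch reproduces this scheme outside of OPALS using numpy/scipy: for each grid cell center, the mean height of up to 32 candidate points within the 0.2 m search radius is computed, and cells without neighbors remain void (NaN), so the model only covers areas with candidate points. This is a re-implementation under stated assumptions, not the OPALS module itself.

```python
# Sketch of the DSM interpolation outside OPALS: per 0.3 m grid cell, the
# mean height of up to 32 candidate points within a 0.2 m search radius;
# cells without neighbours stay NaN, so the DSM only covers candidate areas.
import numpy as np
from scipy.spatial import cKDTree

def candidate_dsm(xyz, grid_width=0.3, max_nn=32, search_radius=0.2):
    xy, z = xyz[:, :2], xyz[:, 2]
    (xmin, ymin), (xmax, ymax) = xy.min(axis=0), xy.max(axis=0)
    xs = np.arange(xmin, xmax, grid_width) + grid_width / 2
    ys = np.arange(ymin, ymax, grid_width) + grid_width / 2
    gx, gy = np.meshgrid(xs, ys)
    cells = np.column_stack((gx.ravel(), gy.ravel()))
    dist, idx = cKDTree(xy).query(cells, k=max_nn,
                                  distance_upper_bound=search_radius)
    dsm = np.full(len(cells), np.nan)
    for i in range(len(cells)):
        neighbours = idx[i][np.isfinite(dist[i])]  # hits inside the radius
        if neighbours.size:
            dsm[i] = z[neighbours].mean()
    return dsm.reshape(gx.shape)
```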
As the surface models are not continuous, they can easily be used to calculate the area coverage of the individual vegetation classes (and the ground class). This calculation relies solely on the number of valid grid points $p_{\mathrm{dsm},class}$ within each surface model, which is related to the polygon area $A_{\mathrm{polygon}}$ and the grid width of the model (Equation (1)):

$$\mathrm{coverage}_{class} = \frac{p_{\mathrm{dsm},class}}{A_{\mathrm{polygon}} / \mathrm{gridwidth}^{2}} \cdot 100 \; [\%] \qquad (1)$$
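In code, Equation (1) reduces to counting the valid (non-void) DSM cells, each representing an area of gridwidth², for example:

```python
# Equation (1) in code: valid DSM cells, each of area gridwidth²,
# relative to the polygon area.
import numpy as np

def coverage_percent(dsm: np.ndarray, grid_width: float, polygon_area: float) -> float:
    p_dsm = np.count_nonzero(np.isfinite(dsm))  # number of valid grid points
    return p_dsm * grid_width**2 / polygon_area * 100.0
```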

3.1.4. Classification of Point Cloud

For the actual classification of the entire point cloud, the point cloud and the DSMs are superimposed. Points beneath the surface model of a particular class that have not yet been classified are assigned the class ID of the respective surface model. The classification order is a critical factor and is visually represented in Figure 9. Additionally, classes for ground, water, and water surface are defined to complete the spatial representation.
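A schematic version of this overlay step could look as follows; the priority list stands in for the classification order of Figure 9, and the DSM lookup functions (returning NaN outside the model) are assumptions of this sketch:

```python
# Schematic overlay classification: unclassified points lying below a
# class's DSM inherit that class ID. `priority` stands in for the
# classification sequence of Figure 9; `dsms` maps class IDs to lookup
# functions returning the DSM height at (x, y), or NaN outside the model.
import numpy as np

UNCLASSIFIED = 0

def classify_by_dsm(xyz, labels, dsms, priority):
    for class_id in priority:
        surface = dsms[class_id](xyz[:, 0], xyz[:, 1])
        below = (labels == UNCLASSIFIED) & np.isfinite(surface) & (xyz[:, 2] <= surface)
        labels[below] = class_id   # assign the class of the overlying DSM
    return labels
```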

3.2. Processing of Additional Data for Quality Assessment

In order to better assess the quality of the automatic classification, two additional areas (T1 and T2) are included (Figure 1) in addition to the ten AOIs. The two test areas were surveyed at the same time as the AOIs. To match the data size of the training areas (ten hexagonal tiles), the test areas were segmented into squares 200 m in width and analyzed piece by piece.
The validation process adopts a qualitative approach, wherein results are manually compared with reference data—consisting of orthophotos and aerial photo interpretations—to assess the accuracy and consistency.

4. Results

4.1. Classification Results

The output of the data processing is the ten classified point clouds of the respective AOIs and the results of the two larger test areas, consisting of several separately processed, complementary point clouds. As an example, the top view of tile ETL4 with prominent vegetation features—including the corresponding DSMs—is visualized in Figure 10. Additional results—including the test areas—can be seen in Appendix A (Figure A1, Figure A3, Figure A5, Figure A7, Figure A9, Figure A11, Figure A13, Figure A15, Figure A17 and Figure A19). It should be noted that the fully classified point clouds are shown in the following illustration. For some applications, however, the representation of the candidate points is more suitable (e.g., 3D representations, cross sections, or the highlighting of vegetation density).
Figure 10. Results of the automatic classification of tile ETL4.
Table 2 shows the percentage coverage of the respective vegetation classes (and ground) in the individual tiles, calculated from the DSMs.

4.2. Comparison with Reference Data

While precise accuracy metrics are difficult to define without accurate ground truth data, the results are compared with the previously mentioned reference data—orthophotos and field survey-supported aerial photo interpretations. In general, the time lag of one month between the LiDAR recordings and the reference data must be taken into account in this comparison. This applies to both the orthophotos and the aerial photo interpretations. Figure 11 shows the comparison using the example of ETL4, while Appendix A also contains comparisons for the other AOIs (Figure A2, Figure A4, Figure A6, Figure A8, Figure A10, Figure A12, Figure A14, Figure A16, Figure A18 and Figure A19).
To enable the comparison of the automatic classification results (Figure 11a) with the available manually delineated 2D polygons based on orthophotos (Figure 11c), Figure 12 provides an overview comparison of the respective “dominant” classes. It is important to emphasize that no one-to-one comparison of the classes is possible, as the subjective delineation of patches from orthophotos is inherently more generalized.
The class defined as Low Vegetation denotes vegetation near the bottom with the identifying characteristic of a higher point density compared to the water body above. While no specific height threshold is defined, typically, Low Vegetation is classified up to approximately 1 m above ground level. This class is further subdivided into Low Vegetation and Low Vegetation 2 if there are two different areas of this class with a recognizable average height difference within a tile. This vegetation class can be compared to the small (≤30 cm high) and medium (30–60 cm) charophyte vegetation classes as well as to the small Elodeids (typical height ≤ 60 cm) used as categories in the aerial photo-based polygon classification (Figure 12).
The simplified class High Vegetation describes vegetation in the water column (excluding the water surface) that is characterized by a higher Reflectance than its surroundings. This class can be compared with the vegetation categories of tall (120–600 cm) Elodeids from the aerial photo interpretation. Generally, the classes overlap to a limited extent due to their fuzzy definitions.
The defined class Vegetation Canopy, which is reserved for plants reaching the water surface and which is additionally characterized by a low NumberOfReturns, can be assigned to the class of tall Elodeids in the polygons derived from aerial image interpretation. However, tall Elodeids with a height of 120–600 cm cover far more than what is included in the Vegetation Canopy class (Figure 12).

5. Validation

Figure 11 illustrates the overall satisfactory outcome of the automatic classification in relation to the comparative data. The discernible structures of vegetation areas are evident across all representations (cf. Figure 11a–c). Supplementary results in Appendix A further demonstrate the comparable effectiveness of the classification method, with notable exceptions in tiles ETN3 and ETN4 (cf. Figure A13 and Figure A15). Since quality deviations were observed for these two tiles, which may be due to sub-optimal data quality in the corresponding flight strips, they are not discussed further in the following sections. However, the Tiefenschärfe-DTM might also have its limitations, as overwintering submerged macrophytes may have impaired its accuracy, potentially influencing the classification process. Validation was performed separately for the individual vegetation classes, as the separate processing requires individual validation. This is followed by a summary of the classification process’s overall success, including the identification of its strengths and weaknesses.

5.1. Validation of Ground and Low Vegetation Class

The comparison with the polygon classification highlights that the Low Vegetation class shows a strong correlation with the Charophytes polygon class (cs, cm in Figure 12). This can be seen particularly well in Figure 11, Figure A2, Figure A4, Figure A6, Figure A10 and Figure A12 due to the high coverage percentage of the Low Vegetation class in these tiles (Table 2).
When comparing the reference data and classification outcomes, a favorable classification result for the Low Vegetation class is observed across all tiles. The classification quality is particularly noticeable in tiles consisting solely of Low Vegetation and sediment, as is evident in tiles ETL2 (Figure A3 and Figure A4) and ETN1 (Figure A9 and Figure A10). This also shows that the class can be detected in great detail and that even the smallest areas alternating between Low Vegetation and sediment are detected, providing insights into the high density of Low Vegetation.
The distinction between the sub-classes in Low Vegetation and Low Vegetation 2 also generally works well. Figure 13 shows the recognizable height difference of the sub-classes using a section view of tile ETN2, which is the only tile that shows a large coverage of this class (Table 2).
The classification of Low Vegetation is based on the limit value calculation of variables derived from the point density. However, this limit value calculation occasionally fails, as with tile ETN8 (Figure A17 and Figure A18). Here, an incorrect limit value was calculated in one of the two resulting flight strips (and thus point clouds), which led to Low Vegetation incorrectly not being recognized. It should be noted that in such cases, the rather simple limit value calculation routine is responsible for the error; the calculated variables themselves still provide a good basis for distinguishing Low Vegetation from its surroundings when checked manually.
Another source of error is the variation in water depth within a tile. Since the indicator variable is calculated using the distance to the nearest neighbors, a measure of the 3D density, and since the point density decreases with increasing water depth due to signal attenuation, the average point density depends on the water depth. If parts of a tile exhibit a water depth that deviates greatly from the average depth of the tile, the threshold will not be appropriate for these deviating areas. More precisely, this means that Low Vegetation is incorrectly classified in the nearshore areas of ETL1 (Figure A1 and Figure A2), because the threshold selected in a separate analysis of the nearshore areas would be significantly lower.
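One conceivable remedy, sketched below as an assumption rather than as part of the implemented workflow, is to remove an empirical depth trend from the indicator before thresholding, so that nearshore and deeper areas are treated consistently:

```python
# Illustrative depth correction (an assumption, not the implemented
# workflow): remove a linear depth trend from the density indicator so
# that a single threshold applies to nearshore and deeper areas alike.
import numpy as np

def depth_normalized(indicator: np.ndarray, water_depth: np.ndarray) -> np.ndarray:
    slope, intercept = np.polyfit(water_depth, indicator, deg=1)
    return indicator - (slope * water_depth + intercept)  # residual indicator
```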

5.2. Validation of High Vegetation Class

Comparison with the orthophoto (Figure 11) shows that High Vegetation was detected very well. The vegetation boundaries match almost perfectly with those of the orthophoto. Even smaller patches of vegetation as well as small gaps in the vegetation were detected by the classification method. It is striking, however, that the High Vegetation class in Figure 11 and Figure A8 (i.e., tiles with a high proportion of High Vegetation according to Table 2) unexpectedly corresponds not only with the class of tall Elodeids but also similarly well with the polygon classes of small and large-leaved Elodeids. In addition to the temporal difference between the two classifications, the fact that only the dominant vegetation class is shown in the aerial photo interpretation plays a role here, as the orthophoto, again, clearly shows high vegetation.
The three-dimensionality of the result is particularly important for this class. The structures of the vegetation within the water body can be recognized and displayed, as can be seen in the classification result of test area T2 (Figure 14).
In general, test area T2 (Figure A19) clearly shows that the classification of the High Vegetation is homogeneous (across tile boundaries) and agrees well with the orthophoto, which forms the basis for comparison. The biggest risk for misclassification is the threshold setting of the underlying variables, which can be corrected manually.

5.3. Validation of Vegetation Canopy Class

Figure 11 clearly shows that Vegetation Canopy was classified exactly where tall vegetation is visible in the orthophotos. The small vegetation areas, which often appear circular, are mostly located in the vicinity of High Vegetation.
While the classification results of the class Vegetation Canopy for the ten training tiles are quite satisfactory and even very small vegetation areas can be recognized by the algorithm and clearly distinguished from their surroundings, this is not the case to the same extent for the classification results of test area T2 (Figure A19). The individual classification results are not homogeneous, which leads to inconsistent results across the processing boundaries. This can be explained by the fact that most of the training areas contain little or no Vegetation Canopy (Table 2), and therefore, the algorithm for calculating the threshold value has not been sufficiently trained.
Figure 15 illustrates that this is merely an instability in the statistical analysis and that the actual classification method is nevertheless successful with regard to the underlying indicator variable.
It can be seen that the indicator variable dist4nn (based on NumberOfReturns) also behaves homogeneously across the tile boundaries; therefore, with a better threshold calculation, a result similarly homogeneous to that of the High Vegetation class and a better match with the orthophoto can be expected.

6. Discussion

6.1. Summary of the Validation

In general, the results suggest that the classification methodology successfully distinguishes between the various vegetation classes, as indicated by the correct detection of vegetation and the distinction between vegetation and its surroundings. The selected vegetation indicators, such as density for Low Vegetation, Reflectance for High Vegetation, and NumberOfReturns for Vegetation Canopy, are indicative of their respective vegetation classes. However, the conditions during the LiDAR data acquisition were not optimal. Due to a calcite precipitation event, the Secchi depth of 3.7 m was very low compared to the maximum achievable values of around 10 m measured during the vegetation period in 2019. Furthermore, the high lake level indicated summer flood conditions. It is important to emphasize that despite the moderate quality of the LiDAR data, a significant classification success was achieved. With regard to the data collection, we note that ALB flight campaigns based on crewed aircraft require long planning lead times. As a result, they can hardly react to short-term changes in conditions, as would, for instance, be possible for drone-based ALB [50].
The errors in classification primarily stem from undetected or miscalculated thresholds. As such, the issue is not so much with the classification workflow itself as with the mathematical or statistical procedures. It is important to note that, while a training area comprising ten individual analysis areas is sufficient for automatic threshold value computation, it does not cover the full range of distributions and distribution forms of a variable required to compute accurate thresholds for other AOIs with high reliability. It is worth mentioning that this study’s focus was not on mathematically evaluating the distributions, but rather on the general classification concept. Hence, a significant enhancement of the functions used for threshold calculation is possible, but beyond the scope of this study.
Another significant source of errors arises from heterogeneous analysis regions. A deeper penetration of the laser pulse into the water body causes a loss of signal strength, which strongly affects crucial attributes like the 3D point density, Reflectance, and NumberOfReturns. Although efforts have been made to adjust for water depth or distance to the water bottom when calculating the attributes, these effects cannot be entirely eliminated as confounding variables.
In general, it can be asserted that the classification method is successful, with the exception of tiles ETN3 and ETN4, as the classification results of all other training areas are good and match well with the comparison data. Classification boundaries in the classified point cloud match well with color changes in the orthophotos. However, it is difficult to determine the type, and impossible to estimate the height, of vegetation by solely examining the orthophotos. When comparing with the polygons classified from aerial photographs and field surveys, the point cloud classification results also reflect the structure of the polygons, but with a higher level of detail. As a result, the boundaries of the polygons only partially correspond to those of the classified point cloud. Furthermore, the temporal displacement of one month between the aerial photographs used for polygon classification and the LiDAR data led to some differences, such as with ETN1 (Figure A10), where the point cloud classification of High Vegetation is less consistent with the classified polygons due to temporal changes in vegetation. Some high-growing species such as P. pectinatus often start their senescence as early as the end of July and may be laid down on the ground by storm events, as happened in July 2019; they can then hardly be classified accurately in aerial photo interpretations. Nevertheless, the biggest challenge in comparing the two classification methods is the distinction and representation of different vegetation classes. While the point cloud classification distinguishes by vegetation height and a main indicator variable, the polygon classification, supported by field survey data, distinguishes by vegetation type, which limits their comparability. In addition, the polygon classification only shows the “dominant” vegetation class in the selected patch, which leads to a considerable loss of information if multiple vegetation types are present in one polygon.

6.2. Vertical Complexity of Macrophyte Stands

One advantage of LiDAR point cloud classification over orthophotos and polygon classification lies in the ability to provide three-dimensional results. This allows for the identification of clear structures of vegetation surfaces, beyond the mere presence or absence of vegetation classes. However, the three-dimensional nature of the result is constrained, as the vegetation classes may obscure each other.
The water current and associated orientation of vegetation in the water appear to play a significant role in these observations. For instance, tile ETL4 is subjected to a notable water flow. Concurrently, the High Vegetation largely obscures the Low Vegetation and the ground in the classification results (Figure 10). This current promotes an inclined position of the high vegetation, thereby impeding laser penetration and resulting in the inability to image the multiple layers of the vegetation structure. Nonetheless, the presence of Low Vegetation beneath is not entirely ruled out and is even likely based on reference data.
In contrast, tile ETL1 experiences minimal water flow, and upon examining the surface models (Figure A1), Low Vegetation is clearly defined beneath the High Vegetation. The calm water encourages a vertical orientation of the High Vegetation, facilitating laser penetration. The classification revealing gaps in Low Vegetation DSM can be attributed to the small area covered by Vegetation Canopy at the water surface, which can be explained by the horizontal alignment of leaves on the water surface.
In addition to the flow conditions, the physical constraints of the data acquisition also play a major role in the ability to recognize the entire three-dimensional structure of the vegetation and several vegetation layers on top of each other.

6.3. Potential for Improvement and Extensions

In addition to increasing the accuracy and robustness of the data processing, LiDAR data can also provide opportunities for further analysis beyond the classification of submerged macrophytes. These potential extensions include the following:
  • The calculation of the vegetation volume and biomass volume by combining the knowledge of vegetation densities.
  • The extension of data analysis for determining vegetation density.
  • The determination of leaf size could also be included in the analysis, following the aerial photo-based classification. The hypothesis is that plants with large leaves may allow less of the laser beam’s signal to penetrate than those with small leaves.
  • The most ambitious extension of LiDAR data analysis would be the development of an advanced classification process that allows for detailed vegetation class distinctions or even the identification of vegetation types by combining various indicator attributes. Instead of using only one main indicator for each vegetation class, a combination of several attributes such as vegetation height, vegetation area size, leaf size, vegetation density, water depth, Reflectance, NumberOfReturns, and other influencing variables could lead to a more precise classification. This idea could be further developed by incorporating additional knowledge about vegetation types and their characteristics.

6.4. Transferability

To evaluate the transferability of the research findings to the field of monitoring submerged macrophytes, it is crucial to estimate the method’s applicability to other inland water datasets. However, it is essential to note that the entire methodology was developed solely on the basis of the Lake Constance dataset collected on 9 July 2019, between 7:30 a.m. and 10:00 a.m. Moreover, Low Vegetation, for example, was not classified in general terms, but only Low Vegetation with a high local point density as its detection feature; the same holds analogously for the other classes. In other inland waters, Low Vegetation species may occur that are not identifiable by a high point density, which may impede the transfer of the method without prior adjustments. Nevertheless, the classification method could be applied to other inland waters where submerged aquatic vegetation similar to that in Lake Constance is expected, either based on previous research or due to similar climatic and environmental conditions. This could, for instance, apply to other Alpine lakes. However, for reliable vegetation classification in waters different from Lake Constance, a separate verification of the represented vegetation classes and their typical characteristics is necessary for a corresponding LiDAR data analysis.
Moreover, the algorithm might be more suitable for the temporal analysis of the same area than for application to different water bodies. This means that to analyze the temporal change in submerged vegetation in Lake Constance, the method can be applied to another dataset of the same area but at a different time. A prior check of the data’s similarity and quality is still necessary because even under the same external recording conditions (such as the same scanner, time of year, data preparation, etc.), external factors such as deviations in water quality can lead to significant differences in the dataset, requiring adjustments in the analysis.

6.5. Applications

The classification of LiDAR data results in bio-volume data of submersed vegetation. These can be parametrized by field measurements of vegetation biomass and thus serve as a measure of the littoral primary production, an important quantity used to characterize a lake ecosystem [51]. This applies, in particular, to Lake Constance, where the re-oligotrophication process leads to a shift of primary production from the pelagial to the littoral zone [40]. Furthermore, the classification results provide an accurate 3D representation of submersed vegetation structures, which serve as habitats for macro-invertebrates and fishes [52]. Thus, they provide a good basis for a quantitative assessment of habitats.

7. Conclusions

In this paper, we introduced a novel method for classifying 3D topo-bathymetric LiDAR point clouds into three main height-oriented vegetation classes—Low Vegetation, High Vegetation, and Vegetation Canopy. The point clouds were compared with reference data (orthophotos and field survey-supported aerial photo interpretations) to create a separate classification scheme for each vegetation class. These schemes consist of threshold value calculations for indicator attributes to classify representative points for each class. These candidate points were then used to create digital surface models, which in turn served as the basis for the final classification of the point clouds.
This research reveals that the automatic classification of LiDAR point clouds holds potential for detecting submerged vegetation in Lake Constance and differentiating between various categories of vegetation, namely, Low Vegetation, High Vegetation, and Vegetation Canopy. The detection capability surpasses the mere identification of the presence or absence of a vegetation class because it (i) provides insight into vegetation height and distribution, enabling 3D mapping, and (ii) also captures the density of the vegetation. It is noteworthy that the method exhibits a high level of precision in detecting vegetation, identifying even the smallest vegetated areas and effectively distinguishing them from their surroundings.
Generally, it can be stated that the field of monitoring submerged aquatic macrophytes through airborne LiDAR data is in its early stages, with the necessary technological developments currently underway. New surveying devices offering both better depth penetration and higher point density increase the potential of this field of research, as do developments in the domain of data science, where machine learning and deep learning will be of great significance in the future. The results of this study demonstrate the high quality achievable with the automatic point cloud classification of submerged vegetation, with enormous potential for the comprehensive surveying of littoral water zones through the further development of data processing methods.

Author Contributions

N.W. (TU Wien) was responsible for LiDAR data processing and analysis, for the conceptualization and software implementation of the point cloud classification pipeline, and for the validation of the results based on the reference data provided. N.W. also drafted main parts of the manuscript. G.M. (TU Wien) supervised the LiDAR data analysis, contributed to the conceptualization of the classification procedure, and sketched the overall structure of the article. G.F. (University of Hohenheim) was in charge of the field data analysis and supervised the creation of the reference polygon maps used for the validation. For the manuscript, G.F. and K.S. (University of Hohenheim) contributed text for the introduction, study area, dataset, validation, discussion, and conclusion sections. K.S. was also responsible for the subject-specific interpretation of the classification results. G.M. and K.S. were responsible for the overall project administration. All authors have read and agreed to the published version of the manuscript.

Funding

This study was supported by the grant SeeWandel: Life in Lake Constance—the past, present and future within the framework of the Interreg V programme Alpenrhein–Bodensee–Hochrhein (Germany/Austria/Switzerland/Liechtenstein), with funds provided by the European Regional Development Fund as well as the Swiss Confederation and cantons. The funders had no influence on the study design, data collection or analysis, decision to publish, or preparation of the manuscript.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available upon request from the corresponding author.

Acknowledgments

The authors express their kind acknowledgment to the IGKB for initiating the SeeWandel project and providing the Tiefenschärfe DTM data. Furthermore, we thank Gabriella Vives and Sophia Deinhardt for their support in the field survey, aerial photo interpretation, and digitization of vegetation patches. Special thanks go to Christian Mayr and Michael Cramer from the University of Stuttgart (IfP) for processing the digital orthophoto mosaic of the aerial photos.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A. Classification Results and Comparative Data

Figure A1. Results of the automatic classification of tile ETL1.
Figure A2. Comparison of classification results (only candidate points) (a) with orthophoto (b) and field survey-supported aerial photo interpretation (c) (legend presented in Figure 12) for ETL1.
Figure A3. Results of the automatic classification of tile ETL2.
Figure A4. Comparison of classification results (only candidate points) (a) with orthophoto (b) and field survey-supported aerial photo interpretation (c) (legend presented in Figure 12) for ETL2.
Figure A5. Results of the automatic classification of tile ETL3.
Figure A6. Comparison of classification results (only candidate points) (a) with orthophoto (b) and field survey-supported aerial photo interpretation (c) (legend presented in Figure 12) for ETL3.
Figure A7. Results of the automatic classification of tile ETL5.
Figure A8. Comparison of classification results (only candidate points) (a) with orthophoto (b) and field survey-supported aerial photo interpretation (c) (legend presented in Figure 12) for ETL5.
Figure A9. Results of the automatic classification of tile ETN1.
Figure A10. Comparison of classification results (only candidate points) (a) with orthophoto (b) and field survey-supported aerial photo interpretation (c) (legend presented in Figure 12) for ETN1.
Figure A11. Results of the automatic classification of tile ETN2.
Figure A12. Comparison of classification results (only candidate points) (a) with orthophoto (b) and field survey-supported aerial photo interpretation (c) (legend presented in Figure 12) for ETN2.
Figure A13. Results of the automatic classification of tile ETN3.
Figure A14. Comparison of classification results (only candidate points) (a) with orthophoto (b) and field survey-supported aerial photo interpretation (c) (legend presented in Figure 12) for ETN3.
Figure A15. Results of the automatic classification of tile ETN4.
Figure A16. Comparison of classification results (only candidate points) (a) with orthophoto (b) and field survey-supported aerial photo interpretation (c) (legend presented in Figure 12) for ETN4.
Figure A17. Results of the automatic classification of tile ETN8.
Figure A18. Comparison of classification results (only candidate points) (a) with orthophoto (b) and field survey-supported aerial photo interpretation (c) (legend presented in Figure 12) for ETN8.
Figure A19. Comparison of classification results (only candidate points) (a,b) with orthophoto (c,d) for T1 (a,c) and T2 (b,d). In each figure, the polygon boundaries of the field survey-supported aerial photo interpretation are depicted in the background.

References

  1. Carpenter, S.; Lodge, D. Effects of submersed macrophytes on ecosystem processes. Aquat. Bot. 1986, 26, 341–370.
  2. Yamasaki, T.N.; Jiang, B.; Janzen, J.G.; Nepf, H.M. Feedback between vegetation, flow, and deposition: A study of artificial vegetation patch development. J. Hydrol. 2021, 598, 126232.
  3. Coops, H.; Kerkum, F.C.M.; van den Berg, M.S.; van Splunder, I. Submerged macrophyte vegetation and the European Water Framework Directive: Assessment of status and trends in shallow, alkaline lakes in the Netherlands. In Shallow Lakes in a Changing World: Proceedings of the 5th International Symposium on Shallow Lakes, Dalfsen, The Netherlands, 5–9 June 2005; Springer: Amsterdam, The Netherlands, 2007; pp. 395–402.
  4. Schneider, S. Macrophyte trophic indicator values from a European perspective. Limnologica 2007, 37, 281–289.
  5. Zhang, T.; Ban, X.; Wang, X.; Li, E.; Yang, C.; Zhang, Q. Spatial relationships between submerged aquatic vegetation and water quality in Honghu Lake, China. Fresenius Environ. Bull. 2016, 25, 896–909.
  6. Lehmann, A.; Lachavanne, J.B. Changes in the water quality of Lake Geneva indicated by submerged macrophytes. Freshw. Biol. 1999, 42, 457–466.
  7. Espel, D.; Courty, S.; Auda, Y.; Sheeren, D.; Elger, A. Submerged macrophyte assessment in rivers: An automatic mapping method using Pleiades imagery. Water Res. 2020, 186, 116353.
  8. Rowan, G.S.L.; Kalacska, M. A Review of Remote Sensing of Submerged Aquatic Vegetation for Non-Specialists. Remote Sens. 2021, 13, 623.
  9. Luo, J.; Li, X.; Ma, R.; Li, F.; Duan, H.; Hu, W.; Qin, B.; Huang, W. Applying remote sensing techniques to monitoring seasonal and interannual changes of aquatic vegetation in Taihu Lake, China. Ecol. Indic. 2016, 60, 503–513.
  10. Nelson, S.A.C.; Cheruvelil, K.S.; Soranno, P.A. Satellite remote sensing of freshwater macrophytes and the influence of water clarity. Aquat. Bot. 2006, 85, 289–298.
  11. Schmieder, K. Littoral zone—GIS of Lake Constance: A useful tool in lake monitoring and autecological studies with submersed macrophytes. Aquat. Bot. 1997, 58, 333–346.
  12. Mandlburger, G. A Review of Active and Passive Optical Methods in Hydrography. Int. Hydrogr. Rev. 2022, 28, 8–52.
  13. Collin, A.; Ramambason, C.; Pastol, Y.; Casella, E.; Rovere, A.; Thiault, L.; Espiau, B.; Siu, G.; Lerouvreur, F.; Nakamura, N.; et al. Very high resolution mapping of coral reef state using airborne bathymetric LiDAR surface-intensity and drone imagery. Int. J. Remote Sens. 2018, 39, 5676–5688.
  14. Guo, K.; Li, Q.; Mao, Q.; Wang, C.; Zhu, J.; Liu, Y.; Xu, W.; Zhang, D.; Wu, A. Errors of Airborne Bathymetry LiDAR Detection Caused by Ocean Waves and Dimension-Based Laser Incidence Correction. Remote Sens. 2021, 13, 1750.
  15. Klemas, V.V. Remote Sensing of Submerged Aquatic Vegetation. In Seafloor Mapping along Continental Shelves: Research and Techniques for Visualizing Benthic Environments; Finkl, C., Makowski, C., Eds.; Springer International Publishing: Cham, Switzerland, 2016; Volume 13, pp. 125–140.
  16. Kinzel, P.J.; Legleiter, C.J.; Nelson, J.M. Mapping River Bathymetry With a Small Footprint Green LiDAR: Applications and Challenges. JAWRA J. Am. Water Resour. Assoc. 2013, 49, 183–204.
  17. Meneses, N.C.; Baier, S.; Geist, J.; Schneider, T. Evaluation of Green-LiDAR Data for Mapping Extent, Density and Height of Aquatic Reed Beds at Lake Chiemsee, Bavaria—Germany. Remote Sens. 2017, 9, 1308.
  18. Mandlburger, G.; Jutzi, B. On the Feasibility of Water Surface Mapping with Single Photon LiDAR. ISPRS Int. J. Geo-Inf. 2019, 8, 188.
  19. Guenther, G.; Cunningham, A.; LaRocque, P.; Reid, D. Meeting the accuracy challenge in airborne lidar bathymetry. In Proceedings of the EARSeL-SIG-Workshop LIDAR, Dresden, Germany, 16–17 June 2000.
  20. Philpot, W. (Ed.) Airborne Laser Hydrography II; Cornell University Library (eCommons): Ithaca, NY, USA, 2019; p. 289.
  21. Maas, H.G.; Mader, D.; Richter, K.; Westfeld, P. Improvements in LiDAR bathymetry data analysis. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2019, XLII-2/W10, 113–117.
  22. Gong, Z.; Liang, S.; Wang, X.; Pu, R. Remote Sensing Monitoring of the Bottom Topography in a Shallow Reservoir and the Spatiotemporal Changes of Submerged Aquatic Vegetation Under Water Depth Fluctuations. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 5684–5693.
  23. Wang, D.; Xing, S.; He, Y.; Yu, J.; Xu, Q.; Li, P. Evaluation of a New Lightweight UAV-Borne Topo-Bathymetric LiDAR for Shallow Water Bathymetry and Object Detection. Sensors 2022, 22, 1379.
  24. Parrish, C.E.; Dijkstra, J.A.; O’Neil-Dunne, J.P.M.; McKenna, L.; Pe’eri, S. Post-Sandy Benthic Habitat Mapping Using New Topobathymetric Lidar Technology and Object-Based Image Classification. J. Coast. Res. 2016, 76, 200–208.
  25. Fritz, C.; Dörnhöfer, K.; Schneider, T.; Geist, J.; Oppelt, N. Mapping submerged aquatic vegetation using RapidEye satellite data: The example of Lake Kummerow (Germany). Water 2017, 9, 510.
  26. Shuchman, R.A.; Sayers, M.J.; Brooks, C.N. Mapping and monitoring the extent of submerged aquatic vegetation in the Laurentian Great Lakes with multi-scale satellite remote sensing. J. Great Lakes Res. 2013, 39, 78–89.
  27. Luo, J.; Duan, H.; Ma, R.; Jin, X.; Li, F.; Hu, W.; Shi, K.; Huang, W. Mapping species of submerged aquatic vegetation with multi-seasonal satellite images and considering life history information. Int. J. Appl. Earth Obs. Geoinf. 2017, 57, 154–165.
  28. Richter, K.; Maas, H.G.; Westfeld, P.; Weiß, R. An Approach to Determining Turbidity and Correcting for Signal Attenuation in Airborne Lidar Bathymetry. PFG J. Photogramm. Remote Sens. Geoinf. Sci. 2017, 85, 31–40.
  29. Steinbacher, F.; Dobler, W.; Benger, W.; Baran, R.; Niederwieser, M.; Leimer, W. Integrated Full-Waveform Analysis and Classification Approaches for Topo-Bathymetric Data Processing and Visualization in HydroVISH. PFG J. Photogramm. Remote Sens. Geoinf. Sci. 2021, 89, 159–175.
  30. Pfeifer, N.; Mandlburger, G. LiDAR data filtering and Digital Terrain Model generation. In Topographic Laser Ranging and Scanning—Principles and Processing, 2nd ed.; Shan, J., Toth, C.K., Eds.; CRC Press: Boca Raton, FL, USA, 2018; pp. 349–378.
  31. Pfeifer, N.; Stadler, P.; Briese, C. Derivation of Digital Terrain Models in the SCOP++ Environment. In Proceedings of the OEEPE Workshop on Airborne Laserscanning and Interferometric SAR for Digital Elevation Models, Stockholm, Sweden, 1–3 March 2001.
  32. Isenburg, M. LAStools—Efficient LiDAR Processing Software, Version 141017. Available online: http://rapidlasso.com/LAStools (accessed on 18 June 2024).
  33. Widyaningrum, E.; Bai, Q.; Fajari, M.K.; Lindenbergh, R.C. Airborne Laser Scanning Point Cloud Classification Using the DGCNN Deep Learning Method. Remote Sens. 2021, 13, 859.
  34. Zhu, J.; Sui, L.; Zang, Y.; Zheng, H.; Jiang, W.; Zhong, M.; Ma, F. Classification of Airborne Laser Scanning Point Cloud Using Point-Based Convolutional Neural Network. ISPRS Int. J. Geo-Inf. 2021, 10, 444.
  35. Winiwarter, L.; Mandlburger, G.; Schmohl, S.; Pfeifer, N. Classification of ALS Point Clouds Using End-to-End Deep Learning. PFG J. Photogramm. Remote Sens. Geoinf. Sci. 2019, 87, 75–90.
  36. Walicka, A.; Pfeifer, N. Semantic Segmentation of Buildings Using Multisource ALS Data. In Recent Advances in 3D Geoinformation Science, Proceedings of the 18th 3D GeoInfo Conference, Munich, Germany, 13–14 September 2023; Kolbe, T.H., Donaubauer, A., Beil, C., Eds.; Springer: Cham, Switzerland, 2024; pp. 381–390.
  37. Calantropio, A.; Menna, F.; Skarlatos, D.; Balletti, C.; Mandlburger, G.; Agrafiotis, P.; Chiabrando, F.; Lingua, A.M.; Giaquinto, A.; Nocerino, E. Under and Through Water Datasets for Geospatial Studies: The 2023 ISPRS Scientific Initiative “NAUTILUS”. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2024, X-2-2024, 33–40.
  38. Spaak, P.; Alexander, J. Seewandel. 2018. Available online: https://seewandel.org (accessed on 18 June 2024).
  39. Muller, H. Lake Constance—A model for integrated lake restoration with international cooperation. Water Sci. Technol. 2002, 46, 93–98.
  40. Murphy, F.; Schmieder, K.; Baastrup-Spohr, L.; Pedersen, O.; Sand-Jensen, K. Five decades of dramatic changes in submerged vegetation in Lake Constance. Aquat. Bot. 2018, 144, 31–37.
  41. Wahl, B.; Peeters, F. Effect of climatic changes on stratification and deep-water renewal in Lake Constance assessed by sensitivity studies with a 3D hydrodynamic model. Limnol. Oceanogr. 2014, 59, 1035–1052.
  42. Internationale Gewässerschutzkommission für den Bodensee (IGKB). Available online: https://www.igkb.org (accessed on 18 June 2024).
  43. Rottman, H.; Auer, B.R.; Kamps, U. RIEGL VQ-880-G II Datasheet. 2022. Available online: http://www.riegl.com/uploads/tx_pxpriegldownloads/RIEGL_VQ-880-GII_Datasheet_2022-04-04.pdf (accessed on 18 June 2024).
  44. Isenburg, M. LASzip: Lossless Compression of Lidar Data. Photogramm. Eng. Remote Sens. 2013, 79, 209–217.
  45. Wessels, M.; Anselmetti, F.; Artuso, R.; Baran, R.; Daut, G.; Gaide, S.; Geiger, A.; Groeneveld, J.; Hilbe, M.; Möst, K. Bathymetry of Lake Constance—A High-Resolution Survey in a Large, Deep Lake. ZfV Z. Geodäsie Geoinf. Landmanag. 2015, 140, 204.
  46. Pfeifer, N.; Mandlburger, G.; Otepka, J.; Karel, W. OPALS—A framework for Airborne Laser Scanning data analysis. Comput. Environ. Urban Syst. 2014, 45, 125–136.
  47. Otepka, J.; Mandlburger, G.; Karel, W. The OPALS data manager—Efficient data management for large airborne laser scanning projects. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2012, I-3, 153–159.
  48. Python 3.6.8. 2018. Available online: https://www.python.org/downloads/release/python-368/ (accessed on 18 June 2024).
  49. Schmieder, K.; Werner, S.; Bauer, H.G. Submersed macrophytes as a food source for wintering waterbirds at Lake Constance. Aquat. Bot. 2006, 84, 245–250.
  50. Mandlburger, G.; Pfennigbauer, M.; Schwarz, R.; Floery, S.; Nussbaumer, L. Concept and Performance Evaluation of a Novel UAV-Borne Topo-Bathymetric LiDAR Sensor. Remote Sens. 2020, 12, 986.
  51. Vander Zanden, J.; Vadeboncoeur, Y.; Chandra, S. Fish Reliance on Littoral–Benthic Resources and the Distribution of Primary Production in Lakes. Ecosystems 2011, 14, 894–903.
  52. Walker, P.; Wijnhoven, S.; Van der Velde, G. Macrophyte presence and growth form influence macroinvertebrate community structure. Aquat. Bot. 2013, 104, 80–87.
Figure 1. Location of the study area at the country (a) and local level (b). Distribution of areas of interest (AOIs) and test areas T1 and T2 at the Lake Constance Lower Lake (c). Coordinate Reference System: ETRS89/UTM zone 32N.
Figure 2. Airborne laser scanning (ALS) processing chain applied for automatic classification of submerged macrophytes.
Figure 3. Coverage of an AOI polygon using different flight strips (PointIds).
Figure 4. Processing chain of vegetation candidate classification.
Figure 5. Measure of the local 3D point density (variable dist_all, the sum of the distances to the 20 nearest neighboring points, and therefore inversely proportional to the point density) as a detection feature for Low Vegetation in the LiDAR point cloud, demonstrated on a cross section.
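The dist_all feature of Figure 5 amounts to a k-nearest-neighbor query. The following is a minimal NumPy/SciPy sketch of that idea, not the OPALS-based implementation used in the study; the percentile threshold for flagging Low Vegetation candidates is an illustrative placeholder, not a value from the paper.

```python
import numpy as np
from scipy.spatial import cKDTree

def dist_all(xyz: np.ndarray, k: int = 20) -> np.ndarray:
    """Sum of 3D distances from each point to its k nearest neighbors.

    Large sums indicate sparse neighborhoods, so the feature is
    inversely proportional to the local point density (cf. Figure 5).
    """
    tree = cKDTree(xyz)
    # Query k + 1 neighbors because the closest hit is the point itself.
    dists, _ = tree.query(xyz, k=k + 1)
    return dists[:, 1:].sum(axis=1)

# Illustrative use: treat unusually dense neighborhoods as Low Vegetation
# candidates. The 25th-percentile threshold is a placeholder assumption.
xyz = np.random.rand(10_000, 3) * np.array([100.0, 100.0, 5.0])
feature = dist_all(xyz)
low_veg_candidates = feature < np.percentile(feature, 25)
```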
Figure 6. Reflectance values as a detection feature for High Vegetation in the LiDAR point cloud, demonstrated on a point cloud cross section.
Figure 7. NumberOfReturns values as a detection feature for Vegetation Canopy in the LiDAR point cloud, demonstrated on a point cloud cross section.
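Figures 6 and 7 suggest simple per-point predicates for the other two candidate classes. A minimal sketch follows; the reflectance threshold of −15 dB and the minimum echo count of 2 are assumed placeholder values, not thresholds reported in the paper.

```python
import numpy as np

def candidate_masks(reflectance: np.ndarray, number_of_returns: np.ndarray,
                    refl_threshold: float = -15.0, min_returns: int = 2):
    """Boolean candidate masks analogous to Figures 6 and 7.

    High Vegetation: strong echoes (reflectance above a dB threshold).
    Vegetation Canopy: echoes belonging to multi-return waveforms.
    """
    high_vegetation = reflectance > refl_threshold
    vegetation_canopy = number_of_returns >= min_returns
    return high_vegetation, vegetation_canopy
```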
Figure 8. Visualization of the DSM calculation principle for each vegetation class based on a cross section. Candidate points for Low Vegetation (green), High Vegetation (light green), and Vegetation Canopy (orange).
Figure 9. Classification order for automatic point cloud classification using DSMs.
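Figures 8 and 9 describe per-class DSMs and a fixed classification order. The sketch below illustrates only that general idea in plain NumPy, under stated assumptions: a max-z rasterization of each class's candidate points, and point labels assigned by the first class (in priority order) whose DSM matches the point height within a vertical tolerance. The paper's actual decision rules are more elaborate, and the tolerance here is a placeholder.

```python
import numpy as np

def max_z_dsm(xyz, cell, extent):
    """Rasterize the candidate points of one class into a max-z surface."""
    x0, y0, x1, y1 = extent
    ncols = int(np.ceil((x1 - x0) / cell))
    nrows = int(np.ceil((y1 - y0) / cell))
    dsm = np.full((nrows, ncols), np.nan)
    cols = np.clip(((xyz[:, 0] - x0) / cell).astype(int), 0, ncols - 1)
    rows = np.clip(((xyz[:, 1] - y0) / cell).astype(int), 0, nrows - 1)
    for r, c, z in zip(rows, cols, xyz[:, 2]):
        if np.isnan(dsm[r, c]) or z > dsm[r, c]:
            dsm[r, c] = z
    return dsm

def classify_by_dsm(xyz, class_dsms, cell, extent, tol=0.1):
    """Assign each point to the first class (in priority order) whose DSM
    covers its cell and matches the point height within tol (meters)."""
    x0, y0 = extent[0], extent[1]
    labels = np.full(len(xyz), "Ground", dtype=object)
    free = np.ones(len(xyz), dtype=bool)
    for name, dsm in class_dsms.items():  # dict insertion order = priority
        cols = np.clip(((xyz[:, 0] - x0) / cell).astype(int), 0, dsm.shape[1] - 1)
        rows = np.clip(((xyz[:, 1] - y0) / cell).astype(int), 0, dsm.shape[0] - 1)
        surf = dsm[rows, cols]
        hit = free & ~np.isnan(surf) & (np.abs(xyz[:, 2] - surf) <= tol)
        labels[hit] = name
        free &= ~hit
    return labels

# Hypothetical priority order mirroring Figure 9: Vegetation Canopy first,
# then High Vegetation, then Low Vegetation; leftovers stay Ground, e.g.:
# dsms = {"Vegetation Canopy": c_dsm, "High Vegetation": h_dsm,
#         "Low Vegetation": l_dsm}
# labels = classify_by_dsm(points, dsms, cell=1.0, extent=(x0, y0, x1, y1))
```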
Figure 11. Comparison of classification results (only candidate points) (a) with orthophoto (b) and field survey-supported aerial photo interpretation (c) for ETL4.
Figure 12. Legend for aerial photo-based classification compared to LiDAR data-based classification classes.
Figure 13. Results of the automatic classification of tile ETN2; (a) top view and (b) selected cross section.
Figure 14. Results of the automatic point cloud classification of test area T2 (a) and selected cross section (b) illustrating the structure of the class High Vegetation within the water column. Only candidate points are presented.
Figure 15. Indicator variable of the candidate classification of class Vegetation Canopy (dist4nn) (a) and orthophoto (b) of test area T2. Polygons of the aerial photo interpretation are superimposed on both.
Table 1. Vegetation classes of the Aerial Photo Interpretation with corresponding species.
Class | Height [cm] | Species
Charophytes small (cs) | 5–30 | Chara aspera Willd., Chara aspera var. subinermis Kütz., Chara tomentosa L., Chara virgata Kütz., Nitella hyalina (DC.) C. Agardh
Charophytes medium (cm) | 30–60 | Chara contraria A. Braun ex Kütz., Chara dissoluta A. Braun ex Leonhardi, Chara globularis Thuill., Nitella flexilis (L.) C. Agardh, Nitellopsis obtusa (Desv.) J. Groves
Elodeids tall, large-leaved (etl) | 120–600 | Potamogeton angustifolius J. Presl, Potamogeton crispus L., Potamogeton lucens L., Potamogeton perfoliatus L.
Elodeids tall, narrow-leaved (etn) | 120–600 | Ceratophyllum demersum L., Myriophyllum spicatum L., Potamogeton helveticus (G. Fisch.) W. Koch, Potamogeton pectinatus L., Potamogeton pusillus L., Potamogeton trichoides Cham. & Schltdl., Ranunculus circinatus Sibth., Ranunculus trichophyllus Chaix, Ranunculus fluitans Lam., Zannichellia palustris L. (tall)
Elodeids small, large-leaved (esl) | 30–60 | Elodea canadensis Michx., Elodea nuttallii (Planch.) H. St. John, Groenlandia densa (L.) Fourr.
Elodeids small, narrow-leaved (esn) | 30–60 | Alisma gramineum Lej., Alisma lanceolatum With., Najas marina subsp. intermedia (Wolfg. ex Gorski) Casper, Potamogeton friesii Rupr., Potamogeton gramineus L., Zannichellia palustris L. (small)
Other macroalgae (o) | no data | Cladophora sp. Kütz., Ulva (Enteromorpha) sp. L., Hydrodictyon sp. Roth, Spirogyra sp. Link, Vaucheria sp. A.P. de Candolle
Table 2. Area covered by vegetation class and tile in percentage as calculated using Equation (1).
Tile | Ground | Low Vegetation | Low Vegetation 2 | High Vegetation | Vegetation Canopy
ETL1 | 85.34 | 76.02 | 0.0 | 2.29 | 0.21
ETL2 | 68.89 | 64.18 | 0.0 | 0.0 | 0.0
ETL3 | 81.41 | 101.75 | 0.0 | 0.47 | 0.40
ETL4 | 75.30 | 60.69 | 0.0 | 38.66 | 1.56
ETL5 | 67.40 | 0.0 | 0.0 | 69.38 | 57.66
ETN1 | 39.53 | 46.55 | 0.0 | 0.0 | 0.82
ETN2 | 61.49 | 50.32 | 57.30 | 10.26 | 8.82
ETN3 | 91.49 | 70.0 | 3.10 | 0.01 | 0.0
ETN4 | 62.75 | 39.97 | 5.42 | 0.09 | 0.0
ETN8 | 56.35 | 44.50 | 0.0 | 22.09 | 3.59
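Equation (1) is defined in the main text and not reproduced here; a plausible reading, consistent with a per-tile percentage, is the class-covered area divided by the tile area times 100, evaluated on the per-class DSM rasters. The sketch below encodes that assumption only; values above 100% in the table (e.g., Low Vegetation in ETL3) indicate that the actual denominator or overlap handling differs from this naive form.

```python
import numpy as np

def covered_area_percent(class_dsm: np.ndarray, tile_mask: np.ndarray,
                         cell: float = 1.0) -> float:
    """Assumed form of Equation (1): percentage of tile area covered by one class.

    class_dsm: per-class DSM raster (NaN = no coverage), cf. Figure 8.
    tile_mask: boolean raster, True inside the tile polygon.
    """
    covered = ~np.isnan(class_dsm) & tile_mask   # cells with class coverage
    class_area = np.count_nonzero(covered) * cell ** 2
    tile_area = np.count_nonzero(tile_mask) * cell ** 2
    return 100.0 * class_area / tile_area
```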
