1. Introduction
Lakes are complex and dynamic ecosystems that support a diverse range of aquatic life. Submerged macrophytes play a critical role in maintaining the ecological balance of these systems. The significance of submerged aquatic vegetation lies in its ability to sustain clear-water conditions in shallow water [
1], which in turn enhances habitat diversity by offering organic matter, producing shade and shelter, regulating temperature and creating aquatic habitats [
2,
3]. At the same time, the dynamics of the occurrence of submerged macrophytes in inland waters are an important indicator for determining the ecological status of water bodies [
4,
5], which are influenced (directly or indirectly) by both anthropogenic interventions and climate change. Therefore, assessing the spatial distribution and growth of submerged macrophytes is an important tool for lake management and conservation of inland waters [
6,
7], as well as for climate research.
Conventional manual monitoring methods for submerged macrophytes often require labour-intensive, time-consuming and potentially destructive fieldwork [
8]. Besides manual in situ visual assessment of species distribution, comprehensive sampling plays an important role in traditional monitoring. The elevated error rate observed in manual monitoring can be explained by the presence of multiple contributing factors, including observer misidentification, imprecise estimation and restricted accessibility to specific locations, which may lead to an incomplete representation of the ecosystem’s heterogeneity [
9].
For decades, remote sensing has been gaining importance in the field of mapping the Earth’s land mass, and more recently, it has also become significant in surveying water bodies [
10]. Nevertheless, working in aquatic environments presents new challenges, primarily due to the presence of the water column that weakens benthic reflectance signals and creates heterogeneity within a scene, making the analysis more intricate [
11,
12]. The focus in aquatic surveying lies on passive techniques using airborne and satellite-based methods, including photography, multispectral and hyperspectral imaging, but active methods have also recently been gaining increasing attention [
9,
13,
14]. In particular, the active method of Airborne Laser Scanning (ALS) - or Airborne Laser Bathymetry (ALB) - has received extensive attention in shallow-water surveying [
15,
16,
17]. The technology of bathymetric LiDAR (Light Detection and Ranging) in particular, which works with laser pulses in the green range of the electromagnetic spectrum, has extraordinary potential here [
18]. The green laser light, unlike infrared light, is able to penetrate clear shallow water [
19], and is reflected off the bottom surface, objects located in the water body or the water column itself.
Airborne Laser Bathymetry is a laser scanning technique used to measure water body bottom topography [
20,
21,
22,
23] or to detect the presence of underwater objects [
24]. LiDAR has been used occasionally for mapping submerged vegetation, focusing mainly on distinguishing between the presence and absence of vegetation [
16,
25,
26,
27,
28]. Despite its potential to overcome some of the limitations of conventional methods, there is little to no experience with more detailed quantitative or qualitative detection and modelling of submerged macrophytes in shallow lakes with ALS, and correspondingly little experience in analysing the collected data. This especially applies to the classification of ALS bathymetric point clouds, which constitutes an imperative step for three-dimensional mapping of submerged vegetation and possibly the differentiation of vegetation types.
Therefore, the aim of this study is to assess the effectiveness of modern topo-bathymetric Airborne Laser Scanning, referred to as ALB in the following, for detecting and mapping submerged vegetation in the shore region of Lake Constance through automatic point cloud classification. The premise is that the accuracy of point cloud classification plays a vital role in the usability and potential of ALB for the task of characterizing and quantifying submersed macrophytes. More specifically, the article focuses on (1) the automatic classification of ALB point clouds to identify underwater vegetation and differentiate between three vegetation classes: Low Vegetation, High Vegetation, and Vegetation Canopy, (2) the creation of three-dimensional Digital Surface Models (DSM) of submerged vegetation and (3) the final classification of the point cloud for three-dimensional vegetation mapping. The distinction between vegetation classes is based on characteristics that can be acquired through LiDAR surveys (volumetric density, reflectance, etc.), rather than height markers only. It is important to note that the inherent quality of the laser dataset is compromised by diverse biological and technical factors, resulting in an unusually dense point cloud. Consequently, alongside the laser point cloud, the data analysis incorporates reference datasets, which play an essential role in the processing chain.
The study seeks to contribute to the field of remote sensing and environmental monitoring by evaluating the potential of ALS as a cost-effective and efficient tool for collecting information on submerged vegetation through automatic point cloud classification. The work described in this article was conducted within the research project "SeeWandel", which comprises multiple research endeavors aimed at investigating various aspects of Lake Constance and seeks to gain a comprehensive understanding of the lake’s ecological, hydrological, and environmental characteristics [
29].
The remainder of the article is structured as follows.
Section 2 introduces the study area and the available ALB and reference data. In
Section 3, we provide a detailed description of the employed classification strategy and explain our quality assessment approach. We present the results in
Section 4, validate them in Section 5, and critically discuss them in
Section 6. The article ends with concluding remarks in
Section 7.
2. Study Area and Data Set
2.1. Study Area and Research Project
Lake Constance, also known as Bodensee, is the second largest pre-alpine European lake [
30] with a surface area of 536 km²
[
31] and shorelines in Germany, Switzerland and Austria (
Figure 1 a) and b)). The lake and its region are intensively influenced by local anthropogenic activities [
30,
32] including dramatic changes of submersed vegetation in the last decades [
31] as well as by climate change. At the same time, it is considered one of the best-examined lakes, with limnological research dating back more than 100 years. In 2018, the research project SeeWandel was implemented by the IGKB (Internationale Gewässerschutzkommission für den Bodensee) [
33] to further explore how Lake Constance responds to changing environmental conditions [
29].
The research team led by Klaus Schmieder, consisting of Gunnar Franke, Gottfried Mandlburger, and Nike Wagner, investigated within its sub-project the resilience dynamics of submerged macrophytes in the littoral zone of Lake Constance, focusing on recording the current macrophyte populations at the species level and conducting a spatio-temporal analysis of species composition and vegetation structure [
34]. In addition to conventional monitoring methods, a LiDAR underwater vegetation survey approach was also tested and is the focus of this paper.
Figure 1 shows the locations of ten area-of-interest (AOI) tiles, each a regular hexagon with a side length of approx. 200 m. The data analysis was performed specifically for these tiles and was subsequently validated against two larger additional areas from the same data set.
2.2. Data Set
The bathymetric LiDAR data used in this research was generated by the Austrian company Airborne Hydro Mapping (AHM) using the
RIEGL VQ-880-G topo-bathymetric laser scanner. The VQ-880-G system utilizes a green laser operating at a wavelength of 532 nm and has an accuracy of 25 mm in the vertical and horizontal dimension [
35]. The scan pattern is circular (Palmer scanner) with a constant off-nadir angle of 20 degrees. Data collection took place on July 9th, 2019, a date expected to align with the peak of vegetation growth. The following environmental conditions prevailed during the flight campaign: the Secchi depth measured 3.7 m, indicating a calcite precipitation event, and the water level at gauge Constance was 447 cm (gauge zero is 250 cm, which is 391.89 m a.s.l.). The mean water level is 343 cm, which means that the water level was 104 cm above mean, indicating typical summer flood conditions. The data is stored as a point cloud, comprising data points in a three-dimensional ETRS89/UTM zone 32N coordinate system, in compressed LAS format (LAZ). LAS/LAZ is a widely adopted industry standard for LiDAR [
36]. Each point in the cloud corresponds to a unique measurement reflecting the laser beam from features such as the water surface, ground, aquatic vegetation, or particles in the water column.
In addition to the spatial information (three-dimensional coordinates allowing for accurate positioning and mapping of the represented features), further attributes are recorded and stored for each point. These attributes include
PointId (assignment to a specific flight strip),
Reflectance (measure for the amplitude, or strength, of the reflected signal - providing information on the reflectivity of the target from which the emitted signal was reflected),
NumberOfReturns (indicating the number of echoes received from a single transmitted signal for each data point) and
Pre-Classification (prior differentiation of water, water surface and noise conducted by AHM [
37]). These attributes play a crucial role in the data analysis process.
The unusual environmental factors - including the calcite precipitation event and flood conditions - present at the time of recording, coupled with the calculation approach employed by AHM, in which each deflection in the backscattered signal generates a point irrespective of its significance, result in a point cloud of poor data quality resembling a dense "point fog" that is difficult to interpret. This makes the assistance of supplementary input data indispensable: The Digital Terrain Model (DTM) named "
Tiefenschärfe-DTM"
[
38] formatted in LAS, furnishes a comprehensive three-dimensional depiction of Lake Constance’s bottom topography. Additionally, the results of
Aerial Photo Interpretation (based on ground truth field survey data from boat with recording period from June to September 2019 and additional mapping in July 2021) also conducted as part of the SeeWandel project are available. These manually generated polygonal representations of submerged macrophyte patches provide a general understanding of the distribution and density of aquatic vegetation. It is important to emphasize that this classification only shows the dominant vegetation class of a patch, and the presence of additional vegetation classes can never be excluded. Furthermore, due to the limited visibility of submersed vegetation in the aerial photos taken during the LiDAR campaign in July, aerial photo interpretation was based on aerial photos of August 2019, resulting in a time shift of about one month.
Table 1 lists the vegetation classes distinguished with the corresponding species. The Aerial Photo Interpretation with the temporally corresponding orthophotos is compared with the classification results of the LiDAR point cloud for evaluation in
Section 4.2.
In general, the time difference between laser data and the respective reference data must always be taken into account during the classification process, as well as during the subsequent validation and discussion of the results.
3. Methods
3.1. Airborne Laser Scanning Data Processing
The method for classifying Full Waveform (FWF) Airborne Laser Bathymetry point clouds mainly consists of (1) data preparation, (2) classification of candidates, carried out separately for each vegetation class, (3) digital surface model creation and (4) the final point cloud classification.
Figure 2 depicts a schematic representation of the data processing workflow. The process was executed using the modular program system OPALS [
39] and Python 3.6.8 [
40].
The original geo-referenced point clouds and the bathymetric Digital Terrain Model "Tiefenschärfe-DTM" [
41] function as primary input data. The DTM was assembled from SONAR (Sound Navigation And Ranging) and topo-bathymetric LiDAR data. While multibeam echo sounding (MBES) data served as base data for 3D reconstruction of the pelagic (open water) and benthic (bottom) zone, ALB was used for the littoral (shallow water) area. The latter was acquired before the macrophytes’ growth season, i.e. under optimal conditions for mapping the lake bottom topography. However, in the last decades some parts of the littoral zone show overwintering submerged vegetation [
42], which may have impaired the accuracy of the DTM. In addition to the DTM, field survey, aerial photo interpretation and orthophoto are considered as comparative data to aid in the interpretation of properties such as reflectance and point density, which significantly contribute to the classification process. However, comparative data are not directly incorporated into the classification workflow, which ensures that independent results are obtained.
3.1.1. Data Preparation
For the subsequent processing, the point clouds of multiple overlapping flight strips are merged and certain AOIs are cut out of the combined data set. Each flight strip consists of two sets of point clouds composed of either the points of the laser beams looking back or forward in the direction of flight (
Figure 3). To filter out "noise points", an existing
pre-Classification of the point cloud as well as the "Tiefenschärfe-DTM" are used.
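The data preparation step can be sketched in Python/NumPy. This is an illustrative sketch only, not the original OPALS workflow: the field names (`x`, `y`, `z`, `cls`), the noise class id and the DTM tolerance are assumptions.

```python
import numpy as np

def prepare_point_cloud(strips, aoi_bbox, dtm_height, noise_class=7, tol=0.3):
    """Merge flight strips, clip to an AOI bounding box and drop noise points
    using a pre-classification field and a DTM height function (all assumed)."""
    pts = np.concatenate(strips)                       # merge overlapping flight strips
    xmin, ymin, xmax, ymax = aoi_bbox
    in_aoi = ((pts['x'] >= xmin) & (pts['x'] <= xmax) &
              (pts['y'] >= ymin) & (pts['y'] <= ymax))
    pts = pts[in_aoi]                                  # cut the AOI out of the merged cloud
    keep = pts['cls'] != noise_class                   # drop pre-classified noise points
    keep &= pts['z'] >= dtm_height(pts['x'], pts['y']) - tol  # drop points below lake bottom
    return pts[keep]
```

In the actual workflow, the pre-classification by AHM and the "Tiefenschärfe-DTM" take the roles of the `cls` field and the `dtm_height` function, respectively.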
3.1.2. Classification of Candidates
The actual point cloud classification is preceded by the classification of the candidate points, which is the core of this research work. Unlike in the final classification, aiming at assigning a well-defined class to each point of the point cloud, we first identify some but not necessarily all points that characterize a certain vegetation class. These points are later used as representative points or candidate points for the further process. This step is conducted for each vegetation class individually. However, the same scheme is used for each vegetation class (
Figure 4).
At this point, the classification process can be conceptualized as an iterative filtering process. Initially, an indicator variable (e.g., reflectance, average distance to neighboring points, etc.) is defined based on existing point attributes or newly computed attributes. Next, we examine confounding variables and mitigate their effects through normalization with empirical formulas. We then plot the distributions of the calculated attributes and automatically determine suitable threshold values based on characteristic distribution patterns. Points exceeding or falling below the threshold with respect to the considered variables are filtered out.
A visualisation and comparison with reference data ascertain whether the remaining points accurately represent the corresponding vegetation class, or if the iteration necessitates the recalculation of attributes. If the result is satisfactory, the remaining points are retained as candidates for the respective vegetation class.
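One iteration of this filtering can be sketched as follows. The linear depth normalization and the percentile-based threshold are stand-ins for the empirical formulas and distribution analysis described above, not the actual functions used in the study.

```python
import numpy as np

def select_candidates(indicator, depth, keep_above=True, pct=75):
    """Normalize an indicator attribute for a confounding variable (here: water
    depth, by removing a linear trend - an assumption) and derive a threshold
    automatically from the distribution of the normalized values."""
    slope, intercept = np.polyfit(depth, indicator, 1)
    normalized = indicator - (slope * depth + intercept)   # mitigate depth effect
    threshold = np.percentile(normalized, pct)             # distribution-based threshold
    mask = normalized >= threshold if keep_above else normalized <= threshold
    return mask, threshold
```

Points where `mask` is true would then be retained as candidates for the respective vegetation class, or the attributes recomputed if visual inspection is unsatisfactory.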
To illustrate the application of this approach, the first step of the processing chain, i.e. defining an indicator variable, is illustrated for each of the three vegetation classes
Low Vegetation,
High Vegetation and
Vegetation Canopy in
Figure 5,
Figure 6 and
Figure 7.
(The indicator variable for Low Vegetation denotes the sum of the distances to the 20 nearest neighboring points and is therefore inversely proportional to the local point density.)
3.1.3. Calculation of Digital Surface Models
The DSM calculation utilizes the classified candidates of the vegetation and ground classes as inputs and performs interpolation to determine the surface models situated above the respective candidate points as shown in
Figure 8. The interpolation algorithm used for calculating the surface models is designed to only compute values at locations where candidate points exist. This results in incomplete surface models that do not cover the entire area of interest (Figure 10 b).
When creating the DSMs of the respective vegetation classes, a grid width of 0.3 m (with the exception of 0.6 m for the vegetation canopy class), a number of 32 neighbors, a maximum search range of 0.2 m and the interpolation method "mean" (i.e. average height of all neighbor points) were used. This combination of parameters ensures that the structure of the vegetation has a relatively good spatial resolution while not being negatively influenced by individual outlier points, which would be a problem with fewer neighboring points.
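A simplified version of this gridding step is sketched below. Per-cell averaging stands in for the OPALS neighborhood interpolation (32 neighbors, 0.2 m search radius); cells without candidate points remain empty (NaN), reproducing the deliberately incomplete surface models.

```python
import numpy as np

def grid_dsm(x, y, z, cell=0.3):
    """Mean-height gridding of candidate points; cells without points stay NaN."""
    ix = np.floor((x - x.min()) / cell).astype(int)    # column index per point
    iy = np.floor((y - y.min()) / cell).astype(int)    # row index per point
    nx, ny = ix.max() + 1, iy.max() + 1
    sums = np.zeros((ny, nx))
    counts = np.zeros((ny, nx))
    np.add.at(sums, (iy, ix), z)                       # accumulate heights per cell
    np.add.at(counts, (iy, ix), 1)
    dsm = np.full((ny, nx), np.nan)                    # empty cells remain NaN
    filled = counts > 0
    dsm[filled] = sums[filled] / counts[filled]        # "mean" interpolation method
    return dsm
```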
As the surface models are not continuous, they can easily be used to calculate the area coverage of the individual vegetation classes (and ground class). This calculation relies solely on the point count (n) within each surface model, which is compared against the polygon area (A) and grid width (w) of the model (Formula 1): coverage = n · w² / A.
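This coverage calculation translates directly into code; the variable names n, w and A follow our reconstruction of Formula 1 above.

```python
import numpy as np

def coverage_percent(dsm, w, A):
    """Percentage coverage from an incomplete surface model: n filled grid
    cells of area w² each, relative to the polygon area A of the model."""
    n = np.count_nonzero(~np.isnan(dsm))   # cells with an interpolated height
    return 100.0 * n * w**2 / A
```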
3.1.4. Classification of Point Cloud
For the actual classification of the entire point cloud, the point cloud and the DSMs are superimposed. Points beneath the surface model of a particular class that have not yet been classified are assigned the class ID of the respective surface model. The classification order is a critical factor and is visually represented in
Figure 9. Additionally, classes for ground, water and water surface are defined to complete the spatial representation.
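The superposition step can be sketched as follows. The class ids and their order are illustrative assumptions; the essential point, as described above, is that each still-unclassified point beneath a class's surface model receives that class id, so the iteration order is decisive.

```python
import numpy as np

def classify_below_surfaces(z, surfaces, class_order):
    """z: point heights; surfaces: dict class_id -> surface height sampled at
    each point (NaN where that DSM has no value); class_order: assignment order."""
    labels = np.zeros(z.shape, dtype=int)              # 0 = not yet classified
    for cid in class_order:                            # order is a critical factor
        s = surfaces[cid]
        take = (labels == 0) & ~np.isnan(s) & (z <= s) # unclassified points below surface
        labels[take] = cid
    return labels
```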
3.2. Processing of Additional Data for Quality Assessment
To better assess the quality of the automatic classification, two additional areas (T1 and T2) are included (
Figure 1) alongside the ten AOIs, which were surveyed concurrently. To match the training area’s data size (ten hexagonal tiles), the test areas are segmented into squares with a width of 200 m and analyzed piece by piece.
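The segmentation of a larger test area into 200 m squares can be sketched as a simple bounding-box tiling; edge tiles are clipped to the area extent (an assumption about how partial tiles are handled).

```python
import numpy as np

def square_tiles(xmin, ymin, xmax, ymax, width=200.0):
    """Return (xmin, ymin, xmax, ymax) tuples covering the area with squares
    of the given width, matching the data volume of one training tile."""
    tiles = []
    for x in np.arange(xmin, xmax, width):
        for y in np.arange(ymin, ymax, width):
            tiles.append((x, y, min(x + width, xmax), min(y + width, ymax)))
    return tiles
```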
The validation process adopts a qualitative approach, wherein results are manually compared with reference data - consisting of orthophotos and aerial photo interpretations - to assess accuracy and consistency.
4. Results
4.1. Classification Results
The output of the data processing consists of the ten classified point clouds of the respective AOIs and the results of the two larger test areas, each comprising several separately processed, complementary point clouds. As an example, the top view of the tile ETL4 with prominent vegetation features - including the corresponding DSMs - is visualised in
Figure 10. Additional results - including the test areas - can be seen in the appendix (
Figure A1,
Figure A3,
Figure A5,
Figure A7,
Figure A9,
Figure A11,
Figure A13,
Figure A15,
Figure A17, and
Figure A19). It should be noted that the fully classified point clouds are shown in the following presentation, although for some applications the representation of the candidate points is better suited (e.g. 3D representations, cross-sections or emphasis of vegetation density).
Table 3 shows the calculations of the percentage coverage by the respective vegetation classes (and ground) in the individual tiles based on the DSMs.
4.2. Comparison with Reference Data
While precise accuracy metrics are challenging to define without accurate ground truth data, the results are compared with the previously mentioned reference data - orthophotos and field-survey-supported aerial photo interpretations. In general, the time lag of one month between the LiDAR recordings and the reference data must be taken into account in this comparison. This applies to both the orthophotos and the aerial photo interpretation.
Figure 11 shows the comparison using the example of ETL4, while the appendix also contains comparisons of the other AOIs (
Figure A2,
Figure A4,
Figure A6,
Figure A8,
Figure A10,
Figure A12,
Figure A14,
Figure A16,
Figure A18, and
Figure A19).
To enable the comparison of the automatic classification results (
Figure 11 (a)) with the available manually delineated 2D polygons based on orthophotos (
Figure 11 (c)),
Figure 12 provides an overview comparison of the respective "dominant" classes. It is important to emphasize that no one-to-one comparison of the classes is possible, as the subjective delineation of patches from orthophotos is inherently more generalised.
The class defined as
Low Vegetation denotes vegetation near the bottom with the identifying characteristic of higher point density compared to the water body above. While no specific height threshold is defined, typically
Low Vegetation is classified up to approximately 1 m above ground level. The class is further subdivided into
Low Vegetation and
Low Vegetation 2 if there are two different areas of this class with a recognizable average height difference within a tile. This vegetation class can be compared to the small (≤ 30 cm high) and medium (30-60 cm) Charophytes vegetation classes as well as to the small Elodeids (typical height ≤ 60 cm) used as a category in the aerial photo-based polygon classification (
Figure 12).
The simplified class High Vegetation describes vegetation in the water column (excluding the water surface) that is characterized by higher Reflectance than its surroundings. This class can be compared with the vegetation category of tall (120-600 cm) Elodeids from the aerial photo interpretation. Generally, the classes overlap only to a limited extent due to their fuzzy definitions.
The defined class
Vegetation Canopy, which is reserved for plants reaching the water surface and which is additionally characterized by low
NumberOfReturns, can be assigned to the class of tall Elodeids in the polygons derived from aerial image interpretation. However, the described tall Elodeids of 120-600 cm encompass far more than the
Vegetation Canopy class describes (
Figure 12).
5. Validation
Figure 11 illustrates the overall satisfactory outcome of the automatic classification in relation to the comparative data. The discernible structures of vegetation areas are evident across representations
Figure 11a, b, and c. Supplementary results in the appendix further demonstrate comparable effectiveness for the classification method, with notable exceptions in tiles ETN3 and ETN4 (cf.
Figure A13 and
Figure A15). Since quality deviations were observed for these two tiles, which may be due to suboptimal data quality in the corresponding flight strips, they are not discussed further in the following section. However, the Tiefenschärfe-DTM might also have limitations, since overwintering submerged macrophytes may have impaired its accuracy, potentially influencing the classification process. Validation is performed separately for the individual vegetation classes, as the separate processing requires individual validation. This is followed by a summary of the classification process’s overall success, including identification of its strengths and weaknesses.
5.1. Validation of Ground and Low Vegetation Class
The comparison with the polygon classification highlights that the
Low Vegetation class shows a strong correlation with the Charophytes polygon class (
cs,
cm in
Figure 12). This can be seen particularly well in
Figure 11,
Figure A2,
Figure A4,
Figure A6,
Figure A10, and
Figure A12 due to the high percentage coverage of class
Low Vegetation in these tiles (
Table 3).
When comparing the reference data and classification outcomes, a favorable classification result for the
Low Vegetation class is observed across all tiles. The classification quality is particularly noticeable in tiles consisting solely of
Low Vegetation and sediment, as is evident in the tiles ETL2 (
Figure A3 and
Figure A4) and ETN1 (
Figure A9 and
Figure A10). The results also show that the class can be detected in great detail, and that even the smallest areas alternating between
Low Vegetation and sediment are detected, providing insights into the high density of
Low Vegetation.
The distinction between the sub-classes in
Low Vegetation and
Low Vegetation 2 also generally works well.
Figure 13 shows the recognizable height difference of the sub-classes using a section view of tile ETN2, which is the only tile that shows a large coverage of this class (
Table 3).
The classification of
Low Vegetation is based on the threshold calculation of variables derived from the point density. However, this threshold calculation occasionally fails, as with tile ETN8 (
Figure A17 and
Figure A18). Here, an incorrect threshold was calculated in one of the two resulting flight strips (and thus point clouds), which led to
Low Vegetation incorrectly not being recognized. It should be noted that in such cases the relatively unsophisticated threshold-calculation algorithm is responsible for the error, while the calculated variables still provide a good basis for distinguishing
Low Vegetation and its surroundings when checked manually.
Another source of error is the variation of water depth within a tile. Since the indicator variable is calculated using the distance to nearest neighbors, a measure of three-dimensional density, and the density of points decreases with increasing water depth due to decreasing signal strength, the average point density depends on water depth. If parts of a tile exhibit a water depth that deviates greatly from the tile’s average depth, the threshold will not be appropriately chosen for these deviating areas. More precisely, this means that
Low Vegetation is incorrectly classified in ETL1 in the nearshore areas (
Figure A1 and
Figure A2), because the selected threshold in the nearshore areas would be significantly lower in a separate analysis.
5.2. Validation of High Vegetation Class
By comparison to the orthophoto,
Figure 11 shows that the
High Vegetation was detected very well. The vegetation boundaries match almost perfectly with the vegetation boundaries of the orthophoto. Even smaller patches of vegetation as well as small gaps in the vegetation are detected by the classification method. It is striking, however, that the
High Vegetation class in
Figure 11 and
Figure A8 (i.e. tiles with a high proportion of
High Vegetation according to
Table 3) unexpectedly corresponds not only with the class of tall Elodeids but also similarly well with the polygon classes of small, large-leaved Elodeids. Besides the temporal difference between the two classifications, an explanation is that only the dominant vegetation class is shown in the aerial photo interpretation, while the orthophoto again clearly shows high vegetation.
The three-dimensionality of the result is particularly important for this class. Structures of the vegetation within the water body can be recognized and displayed, as can be seen in the classification result of test area T2 (
Figure 14).
In general, test area T2 (
Figure A19) shows clearly that the classification of the
High Vegetation is homogeneous (across tile boundaries) and agrees well with the orthophoto, which forms the basis for comparison. The biggest risk for misclassification is the threshold setting of the underlying variables, which can be corrected manually.
5.3. Validation of Vegetation Canopy Class
Figure 11 clearly shows that
Vegetation Canopy was classified exactly where tall vegetation is visible in the orthophotos. The small vegetation areas, which often appear circular, are mostly located in the vicinity of
High Vegetation.
While the classification results of the class
Vegetation Canopy for the ten training tiles are quite satisfactory and even very small vegetation areas can be recognized by the algorithm and clearly distinguished from their surroundings, this is not the case to the same extent for the classification results of test area T2 (
Figure A19). The individual classification results are not homogeneous, which leads to inconsistent results across the process boundaries. This can be explained by the fact that most of the training areas have little or no
Vegetation Canopy (
Table 3) and therefore the algorithm for calculating the threshold value has not been sufficiently trained.
Figure 15 illustrates that this is merely an instability in the statistical analysis and that the actual classification method is nevertheless successful with regard to the underlying indicator variable.
It can be seen that the indicator variable dist4nn (based on NumberOfReturns) also behaves homogeneously across the tile boundaries; therefore, with a better threshold calculation, a result as homogeneous as for the class High Vegetation and a better match with the orthophoto could be expected.
6. Discussion
6.1. Summary of the Validation
In general, the results suggest that the classification methodology successfully distinguishes between various vegetation classes as indicated by the correct detection of vegetation and the distinction between vegetation and its surroundings. The selected vegetation indicators, such as density for
Low Vegetation,
Reflectance for
High Vegetation, and
NumberOfReturns for
Vegetation Canopy, appear to be indicative of their respective vegetation classes. However, conditions during LiDAR data acquisition were not optimal. Due to the calcite precipitation event, the Secchi depth of 3.7 m was very low compared to maximum achievable values of around 10 m measured during the vegetation period in 2019. Furthermore, the high lake level indicated summer flood conditions. It is important to emphasize that this significant classification success was achieved despite the limited quality of the LiDAR data.
The errors in classification primarily stem from undetected or miscalculated thresholds. As such, the issue is not so much with the classification workflow itself, but with the mathematical or statistical procedures. It is important to note that, while a training area comprising ten individual analysis areas is sufficient for automatic threshold value computations, it does not cover the full range of distributions and distribution forms of a variable required to compute accurate thresholds for other AOIs with high reliability. It is worth mentioning that this study’s focus is not on evaluating the distributions mathematically, but rather on the general classification concept. Hence, significant enhancement of the functions used for threshold calculation is possible, but it is beyond the scope of this study.
Another significant source of errors arises from heterogeneous analysis regions. Deeper penetration of the laser pulse into the water body causes a loss of signal strength, which strongly affects crucial attributes like three-dimensional point density, Reflectance, and NumberOfReturns. Although efforts have been made to adjust for water depth or distance to the water bottom when calculating attributes, they cannot be entirely eliminated as confounding variables.
In general, the classification method can be considered successful, with the exception of tiles ETN3 and ETN4: the classification results of all other training areas are good and match the comparison data well. Class boundaries in the classified point cloud coincide well with colour changes in the orthophotos, although the type or height of vegetation can hardly be determined from the orthophotos alone. Compared with the polygons classified from aerial photographs and field surveys, the point cloud classification also reflects the structure of the polygons, but at a higher level of detail; consequently, the polygon boundaries only partially correspond to those of the classified point cloud. Furthermore, the temporal offset of one month between the aerial photographs used for polygon classification and the LiDAR data leads to some differences, for example in ETN1 (Figure A10), where the point cloud classification of High Vegetation is less consistent with the classified polygons due to temporal changes in the vegetation. Since some tall-growing species such as P. pectinatus often begin senescence as early as the end of July and may be flattened onto the ground by storm events, as happened in July 2019, they can hardly be classified accurately by aerial photo interpretation. Nevertheless, the biggest challenge in comparing the two classification methods is the distinction and representation of the vegetation classes: the point cloud classification distinguishes by vegetation height and a main indicator variable, whereas the polygon classification, supported by field survey data, distinguishes by vegetation type, which limits their comparability. Additionally, the polygon classification only displays the dominant vegetation class of each patch, resulting in a considerable loss of information where multiple vegetation types are present in one polygon.
6.2. Vertical Complexity of Macrophyte Stands
One advantage of LiDAR point cloud classification over orthophoto and polygon classification lies in its ability to provide three-dimensional results, allowing clear vegetation surface structures to be identified beyond the mere presence or absence of a vegetation class. However, the three-dimensional nature of the result is constrained, as the vegetation classes may obscure each other. The water current and the associated orientation of the vegetation in the water appear to play a significant role here. Tile ETL4, for instance, is subjected to a notable water flow, and in the classification results the High Vegetation largely obscures the Low Vegetation and the ground (Figure 10). The current promotes an inclined position of the high vegetation, impeding laser penetration and preventing the multiple layers of the vegetation structure from being imaged. Nonetheless, the presence of Low Vegetation beneath is not ruled out and is even likely based on the reference data.
In contrast, tile ETL1 experiences minimal water flow, and in the surface models (Figure A1) Low Vegetation is clearly defined beneath the High Vegetation. The calm water likely encourages a vertical orientation of the High Vegetation, facilitating laser penetration. That the classification reveals gaps in the Low Vegetation DSM can be attributed to the small area covered by Vegetation Canopy at the water surface, which in turn is explained by the horizontal alignment of leaves on the water surface.
In addition to the flow conditions, the physical constraints of data acquisition also play a major role in the ability to recognize the entire three-dimensional structure of the vegetation and several vegetation layers on top of each other.
6.3. Potential for Improvement and Extensions
In addition to increasing the accuracy and robustness of the data processing, LiDAR data also offer opportunities for further analysis beyond the classification of submerged macrophytes. Potential extensions include:
(1) Calculation of vegetation volume and biomass by combining the classification results with knowledge of vegetation densities.
(2) Extension of the data analysis for determining vegetation density.
(3) Determination of leaf size, following the aerial photo based classification. The hypothesis is that plants with large leaves let less of the laser signal penetrate than plants with small leaves.
(4) The most ambitious extension of LiDAR data analysis would be the development of an advanced classification process that allows for detailed vegetation class distinctions or even identification of vegetation types by combining various indicator attributes. Instead of using only one main indicator for each vegetation class, a combination of several attributes such as vegetation height, vegetation area size, leaf size, vegetation density, water depth, Reflectance, NumberOfReturns, and other influencing variables could lead to a more precise classification. This idea could be further developed by incorporating additional knowledge about vegetation types and their characteristics.
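As a toy illustration of extension (4), several per-point attributes can vote on class membership instead of relying on a single main indicator. All attribute names and threshold values below are invented for illustration and are not derived from the study's data.

```python
def classify_point(height, reflectance, n_returns, point_density):
    """Combine several indicator attributes into one class decision.
    Thresholds are hypothetical placeholders, not calibrated values."""
    votes_high = [
        height > 1.2,          # tall plants reach well above the ground
        reflectance > -12.0,   # strong return from dense foliage
        n_returns >= 2,        # multiple echoes within the canopy
    ]
    votes_low = [
        height <= 0.6,         # short stands close to the bottom
        point_density > 50.0,  # locally dense point cluster
    ]
    if sum(votes_high) >= 2:   # majority of high-vegetation indicators
        return "High Vegetation"
    if all(votes_low):
        return "Low Vegetation"
    return "Ground/Unclassified"

# A tall, bright, multi-echo point is assigned to High Vegetation
label = classify_point(height=2.0, reflectance=-10.0, n_returns=3, point_density=20.0)
```

Such a rule set could later be replaced by a learned classifier once enough labelled data from several water bodies are available.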
6.4. Transferability
To evaluate the transferability of the research findings to the monitoring of submerged macrophytes in general, it is crucial to estimate the method’s applicability to other inland water data sets. It is essential to note that the entire methodology was developed solely on the Lake Constance data set collected on July 9, 2019, between 7:30 a.m. and 10:00 a.m. Moreover, Low Vegetation, for example, was not classified in general, but only Low Vegetation exhibiting a high local point density as its detection feature; the same holds analogously for the other classes. In other inland waters, Low Vegetation species may occur that are not identifiable by a high point density, so the method cannot be transferred without prior adjustments. Nevertheless, the classification method could be applied to other inland waters where submerged aquatic vegetation similar to that of Lake Constance is expected, either based on previous research or due to similar climatic and environmental conditions. For reliable vegetation classification in waters other than Lake Constance, however, a separate verification of the represented vegetation classes and their typical characteristics is necessary for the corresponding LiDAR data analysis.
Moreover, the algorithm may be better suited to temporal analysis of the same area than to application to different water bodies: to analyse the temporal change of submerged vegetation in Lake Constance, the method can be applied to another data set of the same area acquired at a different time. A prior check of the data’s similarity and quality is still necessary, because even under identical recording conditions (same scanner, time of year, data preparation, etc.), external factors such as deviations in water quality can lead to significant differences in the data set, requiring adjustments to the analysis.
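A prior similarity check of this kind could, for instance, compare the attribute distributions of the two acquisition epochs. The sketch below uses a two-sample Kolmogorov–Smirnov test from SciPy as one possible criterion; the samples and the tolerance are hypothetical and not taken from the study.

```python
import numpy as np
from scipy import stats

# Hypothetical reflectance samples from two acquisition epochs of the same AOI
rng = np.random.default_rng(1)
epoch_a = rng.normal(-14.0, 2.0, 5_000)
epoch_b = rng.normal(-14.2, 2.1, 5_000)  # slightly different water quality

# Two-sample KS test: a small statistic means similar distributions, so
# thresholds derived from epoch A are more likely to transfer to epoch B.
res = stats.ks_2samp(epoch_a, epoch_b)
comparable = res.statistic < 0.1  # illustrative tolerance, not from the study
```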
6.5. Applications
The classification of LiDAR data yields bio-volume data of submersed vegetation. These can be parametrized by field measurements of vegetation biomass and thus serve as a measure of littoral primary production, an important characteristic of a lake ecosystem [43]. This applies in particular to Lake Constance, where the re-oligotrophication process shifts primary production from the pelagic towards the littoral zone [34]. Furthermore, the classification results provide an accurate 3D representation of submersed vegetation structures, which serve as habitats for macro-invertebrates and fishes [44]. Thus, they provide a good basis for a quantitative assessment of habitats.
7. Conclusion
In this paper, we introduced a novel method for classifying 3D topo-bathymetric LiDAR point clouds into three main height-oriented vegetation classes: Low Vegetation, High Vegetation and Vegetation Canopy. The point clouds were compared with reference data (orthophotos and field-survey-supported aerial photo interpretations) to create a separate classification scheme for each vegetation class. Each scheme computes threshold values of indicator attributes to classify representative candidate points for its class. These candidate points were then used to create digital surface models, which in turn served as the basis for the final classification of the point clouds.
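The candidate-to-DSM step summarised above can be sketched in strongly simplified form as rasterising candidate points into grid cells and keeping the highest point per cell; cell size, data and function name are invented for illustration.

```python
import numpy as np

def candidate_dsm(xyz, cell=1.0):
    """Sparse DSM from candidate points: maps a grid cell index to the
    highest candidate z in that cell (illustrative simplification)."""
    dsm = {}
    for x, y, z in xyz:
        key = (int(np.floor(x / cell)), int(np.floor(y / cell)))
        if key not in dsm or z > dsm[key]:
            dsm[key] = float(z)
    return dsm

# Three hypothetical candidate points (x, y, z); two fall into cell (0, 0)
pts = np.array([[0.2, 0.3, 1.1], [0.8, 0.4, 1.6], [2.1, 0.1, 0.4]])
dsm = candidate_dsm(pts)  # {(0, 0): 1.6, (2, 0): 0.4}
```

In the final step, every point of the original cloud would be compared against such per-class surfaces to receive its label.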
The research reveals that the automatic classification of LiDAR point clouds holds the potential to detect submerged vegetation in Lake Constance and to differentiate between the vegetation categories Low Vegetation, High Vegetation and Vegetation Canopy. The detection capability goes beyond the mere identification of the presence or absence of a vegetation class: it provides insight into vegetation height, enabling three-dimensional mapping, and a rough indication of vegetation density. Notably, the method detects vegetation with high precision, identifying even the smallest vegetated areas and effectively distinguishing them from their surroundings.
Generally, the monitoring of submerged aquatic macrophytes through airborne LiDAR data is still in its early stages, with the necessary technological development currently underway. New surveying devices offering both better depth penetration and higher point density increase the potential of this field of research, as does progress in data science, where machine learning and deep learning will be of great significance in the future. The results of this study demonstrate high-quality automatic point cloud classification of submerged vegetation, with enormous potential for the complete surveying of littoral zones through further development of the data processing.
Author Contributions
Nike Wagner (TU Wien) was responsible for LiDAR data processing and analysis, for the conceptualization and software implementation of the point cloud classification pipeline, and for the validation of the results based on the provided reference data. Nike Wagner also drafted major parts of the manuscript. Gottfried Mandlburger (TU Wien) supervised the LiDAR data analysis, contributed to the conceptualization of the classification procedure, and sketched the overall structure of the article. Gunnar Franke (University of Hohenheim) was in charge of the field data analysis and supervised the creation of the reference polygon maps used for validation. For the manuscript, Gunnar Franke and Klaus Schmieder (University of Hohenheim) contributed text for the introduction, study area and data set, validation, discussion, and conclusions sections. Klaus Schmieder was also responsible for the subject-specific interpretation of the classification results. Gottfried Mandlburger and Klaus Schmieder were responsible for the overall project administration. All authors contributed equally to reviewing, editing, and proofreading the manuscript.
Funding
This study was supported by the grant “SeeWandel: Life in Lake Constance - the past, present and future” within the framework of the Interreg V programme “Alpenrhein-Bodensee-Hochrhein (Germany/Austria/Switzerland/Liechtenstein)”, whose funds are provided by the European Regional Development Fund as well as the Swiss Confederation and cantons. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Institutional Review Board Statement
Not applicable
Informed Consent Statement
Not applicable
Data Availability Statement
The data presented in this study are available on request from the corresponding author.
Acknowledgments
The authors express their kind acknowledgements to the IGKB for initiating the SeeWandel project and providing the "Tiefenschärfe" DTM data. Furthermore, we thank Gabriella Vives and Sophia Deinhardt for their support in the field survey, aerial photo interpretation and digitizing of vegetation patches. Special thanks go to Christian Mayr and Dr. Michael Cramer from the University of Stuttgart (IfP) for processing the digital orthophoto mosaic of the aerial photos.
Conflicts of Interest
The authors declare no conflict of interest.
Appendix A
Appendix A.1. Classification Results and Comparative Data
Figure A1. Results of the automatic classification of tile ETL1.
Figure A2. Comparison of classification results (only candidate points) (a) with orthophoto (b) and field survey supported aerial photo interpretation (c) (legend presented in Figure 12) for ETL1.
Figure A3. Results of the automatic classification of tile ETL2.
Figure A4. Comparison of classification results (only candidate points) (a) with orthophoto (b) and field survey supported aerial photo interpretation (c) (legend presented in Figure 12) for ETL2.
Figure A5. Results of the automatic classification of tile ETL3.
Figure A6. Comparison of classification results (only candidate points) (a) with orthophoto (b) and field survey supported aerial photo interpretation (c) (legend presented in Figure 12) for ETL3.
Figure A7. Results of the automatic classification of tile ETL5.
Figure A8. Comparison of classification results (only candidate points) (a) with orthophoto (b) and field survey supported aerial photo interpretation (c) (legend presented in Figure 12) for ETL5.
Figure A9. Results of the automatic classification of tile ETN1.
Figure A10. Comparison of classification results (only candidate points) (a) with orthophoto (b) and field survey supported aerial photo interpretation (c) (legend presented in Figure 12) for ETN1.
Figure A11. Results of the automatic classification of tile ETN2.
Figure A12. Comparison of classification results (only candidate points) (a) with orthophoto (b) and field survey supported aerial photo interpretation (c) (legend presented in Figure 12) for ETN2.
Figure A13. Results of the automatic classification of tile ETN3.
Figure A14. Comparison of classification results (only candidate points) (a) with orthophoto (b) and field survey supported aerial photo interpretation (c) (legend presented in Figure 12) for ETN3.
Figure A15. Results of the automatic classification of tile ETN4.
Figure A16. Comparison of classification results (only candidate points) (a) with orthophoto (b) and field survey supported aerial photo interpretation (c) (legend presented in Figure 12) for ETN4.
Figure A17. Results of the automatic classification of tile ETN8.
Figure A18. Comparison of classification results (only candidate points) (a) with orthophoto (b) and field survey supported aerial photo interpretation (c) (legend presented in Figure 12) for ETN8.
References
- Carpenter, S.; Lodge, D. Effects of submersed macrophytes on ecosystem processes. Aquatic Botany 1986, 26, 341–370. [Google Scholar] [CrossRef]
- Yamasaki, T.N.; Jiang, B.; Janzen, J.G.; Nepf, H.M. Feedback between vegetation, flow, and deposition: A study of artificial vegetation patch development. Journal of Hydrology 2021, 598. [Google Scholar] [CrossRef]
- Jeppesen, E.; Søndergaard, M.; Søndergaard, M.; Christoffersen, K. The Structuring Role of Submerged Macrophytes in Lakes; Springer New York, NY, 1998.
- Coops, H.; Kerkum, F.C.M.; van den Berg, M.S.; van Splunder, I. Submerged macrophyte vegetation and the European Water Framework Directive: assessment of status and trends in shallow, alkaline lakes in the Netherlands. Hydrobiologia 2007, 584, 395–402. [Google Scholar] [CrossRef]
- Schneider, S. Macrophyte trophic indicator values from a European perspective. Limnologica 2007, 37, 281–289. [Google Scholar] [CrossRef]
- Zhang, T.; Ban, X.; Wang, X.; Li, E.; Yang, C.; Zhang, Q. Spatial relationships between submerged aquatic vegetation and water quality in Honghu Lake, China. Fresenius Environmental Bulletin 2016, 25, 896–909. [Google Scholar]
- Lehmann, A.; Lachavanne, J.B. Changes in the water quality of Lake Geneva indicated by submerged macrophytes. Freshwater Biology 1999, 42, 457–466. [Google Scholar] [CrossRef]
- Espel, D.; Courty, S.; Auda, Y.; Sheeren, D.; Elger, A. Submerged macrophyte assessment in rivers: An automatic mapping method using Pleiades imagery. Water Research 2020, 186. [Google Scholar] [CrossRef] [PubMed]
- Rowan, G.S.L.; Kalacska, M. A Review of Remote Sensing of Submerged Aquatic Vegetation for Non-Specialists. Remote Sensing 2021, 13. [Google Scholar] [CrossRef]
- Luo, J.; Li, X.; Ma, R.; Li, F.; Duan, H.; Hu, W.; Qin, B.; Huang, W. Applying remote sensing techniques to monitoring seasonal and interannual changes of aquatic vegetation in Taihu Lake, China. Ecological Indicators 2016, 60, 503–513. [Google Scholar] [CrossRef]
- Nelson, S.A.C.; Cheruvelil, K.S.; Soranno, P.A. Satellite remote sensing of freshwater macrophytes and the influence of water clarity. Aquatic Botany 2006, 85, 289–298. [Google Scholar] [CrossRef]
- Schmieder, K. Littoral zone — GIS of Lake Constance: a useful tool in lake monitoring and autecological studies with submersed macrophytes. Aquatic Botany 1997, 58, 333–346. [Google Scholar] [CrossRef]
- Mandlburger, G. A Review of Active and Passive Optical Methods in Hydrography. The International Hydrographic Review 2022, 28, 8–52. [Google Scholar] [CrossRef]
- Collin, A.; Ramambason, C.; Pastol, Y.; Casella, E.; Rovere, A.; Thiault, L.; Espiau, B.; Siu, G.; Lerouvreur, F.; Nakamura, N.; Hench, J.L.; Schmitt, R.J.; Holbrook, S.J.; Troyer, M.; Davies, N. Very high resolution mapping of coral reef state using airborne bathymetric LiDAR surface-intensity and drone imagery. International Journal of Remote Sensing 2018, 39, 5676–5688. [Google Scholar] [CrossRef]
- Guo, K.; Li, Q.; Mao, Q.; Wang, C.; Zhu, J.; Liu, Y.; Xu, W.; Zhang, D.; Wu, A. Errors of Airborne Bathymetry LiDAR Detection Caused by Ocean Waves and Dimension-Based Laser Incidence Correction. Remote Sensing 2021, 13. [Google Scholar] [CrossRef]
- Klemas, V.V. Remote Sensing of Submerged Aquatic Vegetation. In Seafloor Mapping along Continental Shelves: Research and Techniques for Visualizing Benthic Environments; Finkl, C.; Makowski, C., Eds.; 2016; Vol. 13, Coastal Research Library, pp. 125–140. [CrossRef]
- Kinzel, P.J.; Legleiter, C.J.; Nelson, J.M. Mapping River Bathymetry With a Small Footprint Green LiDAR: Applications and Challenges. JAWRA Journal of the American Water Resources Association 2013, 49, 183–204. [Google Scholar] [CrossRef]
- Meneses, N.C.; Baier, S.; Geist, J.; Schneider, T. Evaluation of Green-LiDAR Data for Mapping Extent, Density and Height of Aquatic Reed Beds at Lake Chiemsee, Bavaria, Germany. Remote Sensing 2017, 9. [Google Scholar] [CrossRef]
- Mandlburger, G.; Jutzi, B. On the Feasibility of Water Surface Mapping with Single Photon LiDAR. ISPRS International Journal of Geo-Information 2019, 8. [Google Scholar] [CrossRef]
- Guenther, G.; Cunningham, A.; LaRocque, P.; Reid, D. Meeting the accuracy challenge in airborne lidar bathymetry. Proceedings of the 20th EARSeL Symposium: Workshop on Lidar Remote Sensing of Land and Sea, 2000.
- Philpot, W. (Ed.) Airborne Laser Hydrography II; Cornell University Library (eCommons): Cornell, 2019; p. 289. [Google Scholar] [CrossRef]
- Maas, H.G.; Mader, D.; Richter, K.; Westfeld, P. Improvements in LiDAR bathymetry data analysis. In Underwater 3D Recording and Modelling; Skarlatos, D.; Agrafiotis, P.; Nocerino, E.; Menna, F.; Bruno, F.; Vlachos, M., Eds.; International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Vol. 42-2; 2019; pp. 113–117. [Google Scholar] [CrossRef]
- Gong, Z.; Liang, S.; Wang, X.; Pu, R. Remote Sensing Monitoring of the Bottom Topography in a Shallow Reservoir and the Spatiotemporal Changes of Submerged Aquatic Vegetation Under Water Depth Fluctuations. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing 2021, 14, 5684–5693. [Google Scholar] [CrossRef]
- Wang, D.; Xing, S.; He, Y.; Yu, J.; Xu, Q.; Li, P. Evaluation of a New Lightweight UAV-Borne Topo-Bathymetric LiDAR for Shallow Water Bathymetry and Object Detection. Sensors 2022, 22. [Google Scholar] [CrossRef]
- Parrish, C.E.; Dijkstra, J.A.; O’Neil-Dunne, J.P.M.; McKenna, L.; Pe’eri, S. Post-Sandy Benthic Habitat Mapping Using New Topobathymetric Lidar Technology and Object-Based Image Classification. Journal of Coastal Research. [CrossRef]
- Fritz, C.; Dörnhöfer, K.; Schneider, T.; Geist, J.; Oppelt, N. Mapping submerged aquatic vegetation using RapidEye satellite data: the example of Lake Kummerow (Germany). Water 2017, 9, 510. [Google Scholar] [CrossRef]
- Shuchman, R.A.; Sayers, M.J.; Brooks, C.N. Mapping and monitoring the extent of submerged aquatic vegetation in the Laurentian Great Lakes with multi-scale satellite remote sensing. Journal of Great Lakes Research 2013, 39, 78–89. [Google Scholar] [CrossRef]
- Luo, J.; Duan, H.; Ma, R.; Jin, X.; Li, F.; Hu, W.; Shi, K.; Huang, W. Mapping species of submerged aquatic vegetation with multi-seasonal satellite images and considering life history information. International Journal of Applied Earth Observation and Geoinformation 2017, 57, 154–165. [Google Scholar] [CrossRef]
- Spaak, P.; Alexander, J. Seewandel. https://seewandel.org, 2018.
- Muller, H. Lake Constance - a model for integrated lake restoration with international cooperation. Water Science and Technology 2002, 46, 93–98. [Google Scholar] [CrossRef]
- Murphy, F.; Schmieder, K.; Baastrup-Spohr, L.; Pedersen, O.; Sand-Jensen, K. Five decades of dramatic changes in submerged vegetation in Lake Constance. Aquatic Botany 2018, 144, 31–37. [Google Scholar] [CrossRef]
- Wahl, B.; Peeters, F. Effect of climatic changes on stratification and deep-water renewal in Lake Constance assessed by sensitivity studies with a 3D hydrodynamic model. Limnology and Oceanography 2014, 59, 1035–1052. [Google Scholar] [CrossRef]
- Internationale Gewässerschutzkommission für den Bodensee (IGKB). https://www.igkb.org.
- Franke, G.; Deinhardt, S.; Buchmann, C.; Schmieder, K. Spatial heterogeneity in Lake Constance: Changing in underwater lakescape shaped by submerged macrophyte composition in the last decades 2024.
- Rottman, H.; Auer, R.; Kamps, U. VQ-880-G II Datasheet. http://www.riegl.com/uploads/tx_pxpriegldownloads/RIEGL_VQ-880-GII_Datasheet_2022-04-04.pdf, 2022. [Google Scholar]
- Isenburg, M. LASzip: Lossless Compression of Lidar Data. Photogrammetric Engineering & Remote Sensing 2013, 79, 209–217. [Google Scholar] [CrossRef]
- Steinbacher, F.; Dobler, W.; Benger, W.; Baran, R.; Niederwieser, M.; Leimer, W. Integrated Full-Waveform Analysis and Classification Approaches for Topo-Bathymetric Data Processing and Visualization in HydroVISH. PFG – Journal of Photogrammetry, Remote Sensing and Geoinformation Science 2021, 89, 159–175. [Google Scholar] [CrossRef]
- Wessels, M.; Anselmetti, F.; Baran, R.; Gessler, M.H.S.; Wintersteller, P. Tiefenschärfe – Hochauflösende Vermessung Bodensee. https://www.igkb.org/forschungsprojekte/tiefenschaerfe, 2015.
- Pfeifer, N.; Mandlburger, G.; Otepka, J.; Karel, W. OPALS - A framework for Airborne Laser Scanning data analysis. Computers, Environment and Urban Systems 2014, 45, 125–136. [Google Scholar] [CrossRef]
- Python 3.6.8. https://www.python.org/downloads/release/python-368/, 2018.
- Wessels, M.; Anselmetti, F.; Artuso, R.; Baran, R.; Daut, G.; Gaide, S.; Geiger, A.; Groeneveld, J.; Hilbe, M.; Möst, K.; Klauser, B.; Niemann, S.; Roschlaub, R.; Steinbacher, D.; Wintersteller, P.; Zahn, E. Bathymetry of Lake Constance – a High-Resolution Survey in a Large, Deep Lake. ZFV - Zeitschrift fur Geodasie, Geoinformation und Landmanagement 2015, 140, 203. [Google Scholar] [CrossRef]
- Schmieder, K.; Werner, S.; Bauer, H.G. Submersed macrophytes as a food source for wintering waterbirds at Lake Constance. Aquatic Botany 2006, 84, 245–250. [Google Scholar] [CrossRef]
- Vander Zanden, J.; Vadeboncoeur, Y.; Chandra, S. Fish Reliance on Littoral–Benthic Resources and the Distribution of Primary Production in Lakes. Ecosystems 2011, 14, 894–903. [Google Scholar] [CrossRef]
- Walker, P.; Wijnhoven, S.; Van der Velde, G. Macrophyte presence and growth form influence macroinvertebrate community structure. Aquatic Botany 2013, 104, 80–87. [Google Scholar] [CrossRef]
Notes
1. "Tiefenschärfe" was the name of a project aiming at a complete survey of the bathymetry of Lake Constance and the surrounding littoral terrain, with data collection in 2013 and 2014. The term stems from optics and literally translates to "depth of field", but is not used in that sense here. The real meaning is revealed when separating the two words "Tiefe" (depth) and "Schärfe" (acuity/sharpness): the project aimed at providing a sharp geometric model of Lake Constance.
2. In some tiles, a more precise distinction is made between Low Vegetation and Low Vegetation 2 if, within a processed point cloud, the class can be clearly distinguished into two sub-classes of different heights.
3. Indicates the sum of the distances to the next 20 neighboring points and is therefore inversely proportional to the point density.
4. The definition of ground candidates is based on the available Tiefenschärfe DTM.
5. With regard to data collection, aircraft-based laser flights are less flexible than drone flights and can hardly react to short-term changes in conditions, as crewed flights require longer advance planning.
Figure 1. Location of the study area at country (a) and local level (b). Distribution of Areas of Interest (AOIs) at the Lake Constance Lower Lake (c). Coordinate system is ETRS89/UTM zone 32N.
Figure 2. Airborne Laser Scanning (ALS) processing chain applied for automatic classification of submerged macrophytes.
Figure 3. Coverage of an AOI polygon by different flight strips (PointIds).
Figure 4. Processing chain of vegetation candidate classification.
Figure 5. Measure of the local three-dimensional point density (variable 3) as detection feature for Low Vegetation in the LiDAR point cloud, demonstrated on a cross section.
Figure 6. Reflectance values as detection feature for High Vegetation in the LiDAR point cloud, demonstrated on a point cloud cross section.
Figure 7. NumberOfReturns values as detection feature for Vegetation Canopy in the LiDAR point cloud, demonstrated on a point cloud cross section.
Figure 8. Visualisation of the DSM calculation principle for each vegetation class based on a cross section. Candidate points for Low Vegetation (green), High Vegetation (light green) and Vegetation Canopy (orange).
Figure 9. Classification order for automatic point cloud classification using DSMs.
Figure 10. Results of the automatic classification of tile ETL4.
Figure 11. Comparison of classification results (only candidate points) (a) with orthophoto (b) and field survey supported aerial photo interpretation (c) for ETL4.
Figure 12. Legend for aerial photo based classification compared to LiDAR data based classification classes.
Figure 13. Results of the automatic classification of tile ETN2; (a) top view and (b) selected cross section.
Figure 14. Results of the automatic point cloud classification of test area T2 (a) and selected cross section (b) illustrating the structure of the class High Vegetation within the water column. Only candidate points are presented.
Figure 15. Indicator variable of the candidate classification of class Vegetation Canopy (dist4nn) in (a) and orthophoto in (b) of the test area T2. Polygons of aerial photo interpretation are superimposed to both.
Table 1. Vegetation classes of the Aerial Photo Interpretation with corresponding species.

| Class | Height [cm] | Species |
| --- | --- | --- |
| Charophytes small (cs) | 5–30 | Chara aspera Willd., Chara aspera var. subinermis Kütz., Chara tomentosa L., Chara virgata Kütz., Nitella hyalina (DC.) C. Agardh |
| Charophytes medium (cm) | 30–60 | Chara contraria A. Braun ex Kütz., Chara dissoluta A. Braun ex Leonhardi, Chara globularis Thuill., Nitella flexilis (L.) C. Agardh, Nitellopsis obtusa (Desv.) J. Groves |
| Elodeids tall, large-leaved (etl) | 120–600 | Potamogeton angustifolius J. Presl, Potamogeton crispus L., Potamogeton lucens L., Potamogeton perfoliatus L. |
| Elodeids tall, narrow-leaved (etn) | 120–600 | Ceratophyllum demersum L., Myriophyllum spicatum L., Potamogeton helveticus (G. Fisch.) W. Koch, Potamogeton pectinatus L., Potamogeton pusillus L., Potamogeton trichoides Cham. & Schltdl., Ranunculus circinatus Sibth., Ranunculus trichophyllus Chaix, Ranunculus fluitans Lam., Zannichellia palustris L. (tall) |
| Elodeids small, large-leaved (esl) | 30–60 | Elodea canadensis Michx., Elodea nuttallii (Planch.) H. St. John, Groenlandia densa (L.) Fourr. |
| Elodeids small, narrow-leaved (esn) | 30–60 | Alisma gramineum Lej., Alisma lanceolatum With., Najas marina subsp. intermedia (Wolfg. ex Gorski) Casper, Potamogeton friesii Rupr., Potamogeton gramineus L., Zannichellia palustris L. (small) |
| Other macroalgae (o) | no data | Cladophora sp. Kütz., Ulva (Enteromorpha) sp. L., Hydrodictyon sp. Roth, Spirogyra sp. Link, Vaucheria sp. A.P. de Candolle |
Table 3. Covered area by vegetation class and tile in percent as calculated by Formula 1.

| Tile | Ground | Low Vegetation | Low Vegetation 2 | High Vegetation | Vegetation Canopy |
| --- | --- | --- | --- | --- | --- |
| ETL1 | 85.34 | 76.02 | 0.0 | 2.29 | 0.21 |
| ETL2 | 68.89 | 64.18 | 0.0 | 0.0 | 0.0 |
| ETL3 | 81.41 | 101.75 | 0.0 | 0.47 | 0.40 |
| ETL4 | 75.30 | 60.69 | 0.0 | 38.66 | 1.56 |
| ETL5 | 67.40 | 0.0 | 0.0 | 69.38 | 57.66 |
| ETN1 | 39.53 | 46.55 | 0.0 | 0.0 | 0.82 |
| ETN2 | 61.49 | 50.32 | 57.30 | 10.26 | 8.82 |
| ETN3 | 91.49 | 70.0 | 3.10 | 0.01 | 0.0 |
| ETN4 | 62.75 | 39.97 | 5.42 | 0.09 | 0.0 |
| ETN8 | 56.35 | 44.50 | 0.0 | 22.09 | 3.59 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content. |
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).