Preprint
Article

Classification of River Sediment Fractions over a Wide Range Including Shallow Water Areas Based on Aerial Images from UAV with Convolutional Neural Network

A peer-reviewed article of this preprint also exists. This version is not peer-reviewed.

Submitted: 30 November 2023; Posted: 30 November 2023
Abstract
River bed materials serve multiple environmental functions, providing habitat for aquatic invertebrates and fishes. At the same time, the particle size of the bed material reflects the tractive force of the flow regime during floods and thus provides useful information for flood control. Traditional river bed particle size surveys, such as sieving, require considerable time and labor. The authors previously proposed a method to classify aerial images taken by unmanned aerial vehicle (UAV) using convolutional neural networks (CNN) and showed that terrestrial riverbed material could be classified with high accuracy. In this study, we attempted to classify riverbed materials distributed in shallow waters where the bottom is visible from UAVs. After training the CNN to classify images of the same grain size as the same class, even when the surface flow types overlying the riverbed material differed, the total accuracy reached 90.3%. Moreover, the proposed method was applied to a wide-ranging area to determine the distribution of particle size. In parallel, the microtopography was surveyed using LiDAR-UAV, and the relationship between the microtopography and the particle size distribution was discussed. In steep sections, coarse particles were distributed and formed rapids. Fine particles were deposited on the upstream side of those rapids, where the slope had become gentler due to the damming. The microtopographical trends and the grain size distribution were in good agreement.
Keywords: 
Subject: Environmental and Earth Sciences - Remote Sensing

1. Introduction

Mapping the particle size of river bed materials is an important part of defining river habitat. Riverbed materials fulfill multifaceted environmental functions as feeding grounds, spawning grounds, and shelters for aquatic invertebrates and fish [1,2,3]. Their particle size reflects the tractive force during floods [4,5]. In addition, in flood simulations the roughness of the riverbed is determined based on the particle size of the material, so the evaluated particle size affects the reproducibility of the simulation [6]. Furthermore, if vegetation adapted to the grain size characteristics develops, it regulates flow regimes and induces new sediment transport characteristics and river morphology [7,8]. The distribution of river bed material particle size is therefore essential information for flood control and the environmental management of river channels.
Conventional field methods for riverbed material investigation, such as the grid-by-number and volumetric methods [9,10], require a great deal of labor and time for sieving or direct measurement of particle size. Conducting such surveys over a wide area at high density would therefore be prohibitively expensive and is rarely carried out. In addition, spatial representativeness is an issue, because sampling is confined to a quadrat of less than 1 m² while the site extends over several square kilometers.
In recent years, unmanned aerial vehicles (UAVs), which are used for a variety of purposes, have made it possible to photograph wide areas comprehensively. If particle size can be measured by automated extraction of individual particles from a large number of captured images, the particle size distribution of vast river areas can be mapped. Evaluation of particle size distribution by image analysis is not limited to aerial images taken from UAVs; there are several applications in fields other than river bed material surveys. Yokota et al. [11] used stereo image analysis to evaluate rock fragmentation by blasting during tunnel excavation; they developed a technique to calculate particle size accumulation curves with high accuracy, and its effectiveness has been demonstrated. Igathinathane et al. [12] photographed airborne dust particles of wood and bark pellets using a document scanner and performed dimensional measurements and size distribution analysis using ImageJ (machine vision plug-in). The results showed that dimensions larger than 4 μm could be measured with an accuracy of 98.9% in about 8 seconds per image, making the method as effective as sieving.
In river management, there are several applications of image recognition to identify information on river channels and other features. For river bed materials, Detert et al. [13,14] developed the software BASEGRAIN, which automatically identifies each particle in an image of a riverbed, measures its major and minor axes, and calculates the particle size distribution. Harada et al. [15] verified the analytical accuracy of BASEGRAIN against results obtained with the volumetric method, showed that the two particle size distributions agree well, and mapped the particle size of river bed materials over a wide area. However, although BASEGRAIN is effective for obtaining the particle size distribution of the materials in a single image, it requires manual operations that depend on the brightness and shadows of each outdoor image, in contrast to the image analyses with largely fixed shooting conditions introduced above. In cases where the shadows appearing on a single stone surface are counted as multiple particles, the obtained particle size distribution may differ significantly unless the analysis conditions are adjusted and corrections are applied to the detected population [16]. Such tuning increases the effort and time required. On the other hand, in practical river management, or when providing a distribution of riverbed roughness for a two-dimensional shallow water flow simulation, it is often sufficient to map a representative grain size such as D50. Therefore, if the purpose is limited to mapping, there is no need to extract individual particles from the images; it is sufficient to divide the image into a large number of grids and determine the representative particle size for each grid.
A similar analysis is land cover classification using remote sensing data and machine learning, in which pixel values of satellite and aerial photographs are compared with ground truth data [17,18,19,20]. This approach is effective when a wide area can be photographed under the same conditions, but it fails when pixel values change due to differences in photographing time or weather. In contrast, image recognition using deep learning captures higher-level image features rather than simple pixel values, so it can be expected to achieve classification that is robust to differences in brightness and similar factors [21,22].
Since deep learning was extended to semantic segmentation [23], which classifies regions of an entire image, it has been applied in various fields of manufacturing and resource management. The accuracy of semantic segmentation has been evaluated using benchmark datasets, including datasets for cityscapes and road scenes, and in recent years more accurate models have been announced every year for these datasets. Research on applying deep learning to satellite images and aerial photography is progressing [24], covering the detection not only of land cover [31,32,33] but also of features such as buildings [25,26,27] and roads [28,29,30]. Most examples of semantic segmentation applied to rivers concern river channel detection and river width measurement using satellite images [34,35,36]. Examples related to river management include attempts to detect estuary sandbars [37], monitor water levels during floods [38,39,40,41], perform binary water segmentation to aid autonomous navigation in fluvial scenes [42], and segment fine-grained river ice [43]. Research on semantic segmentation for river scenes is scarcer than for terrestrial areas because the benchmark datasets are smaller. All of the cases mentioned above are based on the analysis of two-dimensional images, such as satellite images, UAV aerial images, and surveillance videos.
In recent years, not only 2-dimensional image information but also 3-dimensional point cloud data obtained from SfM analysis [44,45] and terrestrial laser scanning (TLS) [46] have been analyzed to investigate riverbed morphology, roughness, and surface sedimentation.
However, the point cloud density obtained by SfM and TLS is not homogeneous; it depends on the characteristics of the ground surface pattern and the distance from the instrument. If the main purpose is mapping that requires high precision, such as roughness or surface sedimentology, careful attention must be paid to data acquisition and selection. Furthermore, with SfM it is possible to analyze underwater river bed materials in rivers with high transparency and shallow water depth, but with TLS such measurements are impossible because the laser is absorbed at the water surface.
Onaka et al. used convolutional neural networks (CNN) to determine the presence or absence of clam habitat [47] and to classify sediment materials in tidal flat water areas [48]. These targets are stagnant water areas with no current and no diffuse reflection of waves from the water surface. In river channels, on the other hand, as Hedger et al. [49] pointed out, automated extraction of information on river habitats from remote sensing imagery is difficult due to a large number of confounding factors. There have been attempts to apply artificial intelligence techniques to map surface cover types [50], hydromorphological features [51], mesohabitats [52], salmon redds [53], etc., but misclassification can occur due to water surface waves [54]. The authors previously attempted to automatically classify the particle size of riverbed material in terrestrial areas using UAV aerial images taken during normal water conditions and image recognition using artificial intelligence (AI) [55]. The popular pre-trained CNN, GoogLeNet, was retrained to perform the new task using 70 riverbed images via transfer learning, and the overall accuracy of the image classification reached 95.4%.
In this study, we applied the same method to shallow-water images taken from UAVs over river channels when the water was highly transparent and attempted the classification. The representative particle size, which serves as the reference for the training and test data, was determined from bed material samples taken from underwater quadrats surrounded by fences to prevent advection during collection. Furthermore, the network trained by transfer learning was used to map particle sizes over the river channel. Trends in particle size were compared with the detailed longitudinal topographical changes of the river channel measured by LiDAR-UAV, and the relationship between particle size and flow regime was discussed.

2. Materials and Methods

2.1. Study Area

The study area, which is the same as in our previous study [55], is the Mimikawa River in Miyazaki Prefecture, Japan, with a length of 94.8 km and a watershed area of 884.1 km². It has 7 dams, developed between the 1920s and the 1960s, whose sole purpose is power generation, not flood control (Figure 1). Sedimentation in the reservoirs interrupts the natural transport of solid matter along the channel [56,57], although sediment supplied from upstream areas contributes greatly to the conservation of the river channel and coastal ecosystems [58,59]. Moreover, the southern part of Kyushu, Japan, where the basin of the Mimikawa River is located, was severely flooded as a result of a typhoon in 2005. These old dams, which did not have sufficient discharge capacity, became obstacles during floods.
As a solution to these two disparate issues, renovation of the dams and lowering of the dam bodies to enhance sediment sluicing were planned [60,61]. The three dams constructed in succession in the downstream reach of the Mimikawa River were remodeled, adding a function to pass sediment through the dams at the time of flooding. The retrofitting of Saigo Dam, the second dam from the river mouth, was finished in 2017, and the new operation then started. Remodeling an operating dam requires advanced, world-leading techniques, and sediment in Japanese rivers is actively monitored [62]. In the revised operation, the water level during floods is lowered to enhance the scouring and transport of sediment settled on the upstream side of the dam body. Owing to these operations, sand bars with diverse grain sizes have appeared in recent years, especially in the section downstream of Saigo Dam.
In our previous study [55], the image classification method based on fraction categories using CNN was applied only to the terrestrial area above the water surface on the day of photographing; it did not include the area under the water surface, where the color is affected by water absorption and wetting of the material surface, and where visibility may be affected by waves or reflections on the water surface. This study extends the target of image recognition to the shallow underwater area. Fortunately, the water was very transparent during our survey, and when there were no waves, the bottom was visible at depths of about 1 meter.

2.2. Aerial Photography and Sieving for Validation

The UAV used for aerial photography was a Phantom 4 Pro (DJI). Aerial photography from the UAV was conducted at 108 locations (76 in terrestrial areas and 32 in shallow waters) in parallel with the sieving and classification work described below. Pictures were taken from a flight altitude of 10 m, the same as in our previous study, which showed that the resolution of pictures taken at this altitude is sufficient to distinguish the smallest particle size class we want to classify from the next class (Table 1).
At the same 108 points, the particle size distribution of the bed material was measured by sieving and weighing. The mesh sizes of the sieves were 75, 53, 37.5, 26.5, 19, 9.5, 4.75, and 2 mm (JIS Z 8801-1976), with intervals chosen in view of the logarithmic axis of the cumulative particle size curve. Riverbed material was sampled in a 0.5 m × 0.5 m quadrat to a depth of around 10 cm, and sieving and weighing were carried out in the field. The riverbed material in the shallow water area was sampled after setting up fences surrounding the target area to prevent washout by river currents. Because the collected materials were wet, after sieving them and measuring their wet weights on site, subsamples were sealed and brought back to the laboratory, where they were dried and the moisture content was determined. The dry weight was obtained by subtracting the mass of the contained water from the wet weight measured on-site.
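As a minimal numerical sketch of this correction (assuming, purely as an illustration, that the moisture content is expressed as the water mass fraction of the wet sample; the variable names and values are ours, not taken from the survey):

```matlab
% Moisture correction of field-sieved sediment weights (illustrative sketch).
% Assumption: w is the water mass fraction of the wet sample (wet basis),
% determined by oven-drying a sealed subsample in the laboratory.
wetMassPerSieve = [1.85 3.40 2.10 0.95];      % kg per sieve class, hypothetical values
w = 0.12;                                     % hypothetical water mass fraction
dryMassPerSieve = wetMassPerSieve * (1 - w);  % remove the mass of the contained water
fprintf('Total dry mass: %.2f kg\n', sum(dryMassPerSieve));
```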
Although the above sampling was performed in a 0.5 m × 0.5 m square by the volumetric method, the application area of the proposed photographing was set to a 1.0 m × 1.0 m square centered on the 0.5 m × 0.5 m quadrat. This is because stones of about 20 cm were visible at some points, and a 50 cm square image may not be large enough. The camera specifications were as follows: image size 5472 × 3648 pixels, lens focal length 8.8 mm, and sensor size 13.2 mm × 8.8 mm. When photographing at an altitude of 10 m with this camera, the image resolution is about 2.74 mm/pixel. Therefore, 365 × 365 pixels (equivalent to a 1.0 m × 1.0 m square) were trimmed from the original image.
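The quoted resolution and trimming size follow directly from the camera geometry; a minimal check using the specifications given above:

```matlab
% Ground sampling distance (GSD) from the camera specifications in the text.
sensorWidth = 13.2;       % sensor width in mm
imageWidth  = 5472;       % image width in pixels
focalLength = 8.8;        % lens focal length in mm
altitude    = 10 * 1000;  % flight altitude in mm
gsd = sensorWidth * altitude / (focalLength * imageWidth);   % mm per pixel
fprintf('GSD = %.2f mm/pixel\n', gsd);                       % about 2.74 mm/pixel
cropPx = round(1000 / gsd);                                  % pixels covering 1.0 m
fprintf('1.0 m x 1.0 m = %d x %d pixels\n', cropPx, cropPx); % about 365 x 365
```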
Only 108 data points of particle size distribution were obtained by the volumetric method. This number was insufficient both as training data for calibrating the CNN parameters and as test data for evaluating the accuracy of the CNN tuned in this study. We therefore increased the number of images available for training and testing by analyzing the surrounding area with BASEGRAIN.
For materials in the terrestrial area, the images of adjacent grids in 8 directions from the sieving quadrat were clipped, as in the previous study. If BASEGRAIN determined that the particle size distributions of the sieving quadrat image and the surrounding 8 images were approximately the same, the surrounding 8 images were assigned the same particle size class as the sieving quadrat. For materials located under shallow water, on the other hand, the images around the quadrat were clipped as shown in Figure 2, because the particle size class persists in the flow direction while it changes drastically in the width and depth directions.
When determining the representative particle size from the BASEGRAIN results, we took into account the fact that BASEGRAIN has difficulty distinguishing fine particles, as in previous studies. When the number of particles recognized by BASEGRAIN is very small, the image is considered to belong to the finest particle class. Furthermore, considering the correlation between the results of the volumetric method and BASEGRAIN, the threshold between Class 1 (medium) and Class 2 (small) was set to 28.2 mm for BASEGRAIN, corresponding to the threshold of 24.5 mm for the volumetric method [55].
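A sketch of this labeling rule as described above (the particle-count threshold `minParticles` is a hypothetical placeholder; only the 28.2 mm class boundary is taken from the text):

```matlab
% Assign a particle size class to a clipped image from BASEGRAIN output (sketch).
%   nParticles: number of grains BASEGRAIN recognized in the image
%   d50:        representative grain size (mm) from BASEGRAIN
function cls = classFromBasegrain(nParticles, d50)
    minParticles = 20;       % hypothetical threshold; BASEGRAIN misses fine grains
    if nParticles < minParticles
        cls = "Class 3";     % too few detected grains -> finest class
    elseif d50 >= 28.2       % BASEGRAIN value corresponding to 24.5 mm by sieving [55]
        cls = "Class 1";     % medium gravel
    else
        cls = "Class 2";     % small gravel
    end
end
```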
Figure 3. Image preprocessing and how to increase the image data for images in the terrestrial area (red: volumetric method and BASEGRAIN; yellow: BASEGRAIN only).
Figure 4. Image preprocessing and how to increase the image data for images in shallow water (red: volumetric method and BASEGRAIN; yellow: BASEGRAIN only).

2.3. Training and Testing of the CNN

The image recognition code was built with MATLAB®, and the CNN was incorporated into the code as a module. A CNN consists of input and output layers as well as several hidden layers. The hidden part of a CNN combines convolutional layers and pooling layers, which extract visual features, with a fully connected classifier, a perceptron that processes the features obtained in the previous layers [63].
In our previous study [55], GoogLeNet (2014) [64] showed the best performance among the major networks pre-trained on ImageNet [65] in differentiating river sediment fractions in images of terrestrial areas, so the following results were obtained using GoogLeNet as the CNN. Pre-trained networks can be retrained to perform a new task using transfer learning [66]. Fine-tuning a network with transfer learning is usually much faster and easier than training a network from scratch with randomly initialized weights, because the previously learned features can be transferred to the new task using a smaller number of training images.
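As an illustration, a minimal MATLAB transfer-learning sketch in the spirit of the procedure described here (the folder layout and learning rate are our assumptions; the layer names follow standard Deep Learning Toolbox usage for GoogLeNet; the batch size of 10 and 21 epochs are the values reported with the tables below):

```matlab
% Transfer learning of GoogLeNet for riverbed image classification (sketch).
% Assumes training images sorted into one subfolder per class under 'trainData'.
imds = imageDatastore('trainData', 'IncludeSubfolders', true, ...
                      'LabelSource', 'foldernames');
[imdsTrain, imdsVal] = splitEachLabel(imds, 0.8, 'randomized');

net      = googlenet;                       % pre-trained on ImageNet
lgraph   = layerGraph(net);
nClasses = numel(categories(imdsTrain.Labels));
% Replace the ImageNet-specific final layers with layers for our classes.
lgraph = replaceLayer(lgraph, 'loss3-classifier', ...
                      fullyConnectedLayer(nClasses, 'Name', 'fc_sediment'));
lgraph = replaceLayer(lgraph, 'output', ...
                      classificationLayer('Name', 'out_sediment'));

inputSize = net.Layers(1).InputSize;        % 224 x 224 x 3 for GoogLeNet
augTrain  = augmentedImageDatastore(inputSize(1:2), imdsTrain);
augVal    = augmentedImageDatastore(inputSize(1:2), imdsVal);

opts = trainingOptions('sgdm', ...
    'MiniBatchSize', 10, ...                % batch size reported in this study
    'MaxEpochs', 21, ...                    % epochs reported in this study
    'InitialLearnRate', 1e-4, ...           % assumed value
    'ValidationData', augVal, 'Verbose', false);
trainedNet = trainNetwork(augTrain, lgraph, opts);
```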
In the previous study, we set 3 classes of particle size (Table 1) and classified images of materials in terrestrial areas according to these criteria. In this study, materials in water are also subject to classification, so we vary the number of classes and search for the class definition that yields the highest accuracy. For example, if images of bed materials with the same particle size are divided into two conditions, terrestrial or underwater, six classes are defined by combining the three particle size classes listed above with the two conditions.

2.4. Projection of the Classification Results and Microtopographic Survey

The UAV was navigated automatically to photograph the two study sites continuously; each image was divided into small meshes, and the classification results for each mesh were projected onto a map at the end of this study. However, the study sites were in a narrow canyon, and the UAV flight altitude was relatively low to obtain sufficient photographic resolution. As a result, during some flight periods the signal acquisition from GPS satellites was insufficient, making it impossible to capture images along the planned automatic navigation route. In the second observation (11th November 2023), we could predict this defect in advance, so after completing the automatic navigation we identified the missing areas and manually flew the UAV over them to take supplementary photographs. In the first observation (20th September 2023), when we did not notice the problem on site, some photographs are missing.
When taken from an altitude of 10 m, each image captures a ground surface of approximately 9 m × 14 m. This was divided into 1 m meshes, and the surface condition of each mesh was classified using the CNN. The position of each mesh is expressed in relative coordinates with the center of the image as the origin, and the relative coordinate values were transformed based on the UAV coordinates and the camera yaw angle extracted from the XMP metadata. The results of the classification with the CNN were projected onto the map.
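A sketch of this projection for the grid centers of one image (a planar approximation; `eastOrigin`, `northOrigin`, and `yawDeg` stand for the projected UAV position and camera yaw read from the XMP metadata, and all names and the sign convention are our assumptions):

```matlab
% Project 1 m x 1 m grid centers from image-local to map coordinates (sketch).
% Local frame: origin at the image center, axes in meters along the image edges.
% yawDeg: camera yaw from the XMP metadata; the sign convention depends on the
% metadata definition and should be checked against ground control.
function [east, north] = localToMap(xLocal, yLocal, eastOrigin, northOrigin, yawDeg)
    R = [cosd(yawDeg) -sind(yawDeg);    % 2-D rotation by the yaw angle
         sind(yawDeg)  cosd(yawDeg)];
    p = R * [xLocal(:)'; yLocal(:)'];   % rotate local offsets into the map frame
    east  = eastOrigin  + p(1, :)';
    north = northOrigin + p(2, :)';
end
```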
In addition, to discuss the relationship between the grain size map and the microtopography, the microtopography at the study sites was surveyed using a LiDAR-UAV (Matrice 300 RTK with Zenmuse L1, DJI).

3. Results

3.1. Classification of Terrestrial and Underwater Samples

First, we attempted to determine particle size from photographs of the underwater samples using the network trained only on terrestrial samples in the previous study [55]. As previously reported, that network achieved a total accuracy of 95.4% for terrestrial samples. Here, the underwater samples were used only as test data. The accuracy is shown as a confusion matrix in Table 2. Various confounding factors cause such a reduction in accuracy [49]; in this case, the decrease is a natural consequence of classifying underwater samples with a network trained on terrestrial samples.
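The scores in the tables follow the standard definitions; a minimal sketch computing them from a confusion matrix, using Table 2 as the example and assuming the convention that rows hold the true classes and columns the predicted classes:

```matlab
% Per-class recall/precision/F-score and overall accuracy from a confusion
% matrix C (assumed convention: rows = true classes, columns = predicted).
C = [67 8 0; 13 58 18; 0 10 22];         % Table 2
recall    = diag(C) ./ sum(C, 2);        % 89.3%, 65.2%, 68.8%
precision = diag(C) ./ sum(C, 1)';       % 83.8%, 76.3%, 55.0%
fscore    = 2 * (precision .* recall) ./ (precision + recall);  % 86.4%, 70.3%, 61.1%
overallAcc = sum(diag(C)) / sum(C(:));   % 75.0%
macroPrec  = mean(precision);            % 71.7%
macroRec   = mean(recall);               % 74.4%
```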
Next, we divided the images into six classes (three particle size ranges × terrestrial/underwater) and applied transfer learning to the network in an attempt to classify the test data into the six classes (Table 3). Since the same total numbers of training and test images were spread over more classes, the number of images per class was reduced and the results cannot be compared on exactly the same footing as before; nevertheless, the discrimination accuracy is clearly much lower. The underwater images could hardly be classified at all; in particular, images of Class 1 and 2 underwater samples were classified as Class 3, a different particle size. Figure 5 shows examples of images in each class. Between the underwater and terrestrial images of Class 3, the underwater images have a bluish tinge from the overlying water, but both appear similar, with little shading and high brightness. In Classes 1 and 2, on the other hand, the brightness decreases due to the wetness of the stone surface. Consistent with these visual trends, misclassification was concentrated on Class 3 images, which are highly similar between underwater and terrestrial conditions, and the error rate also increased for Class 1 and 2 underwater images.

3.2. A Unified Class for Class 3 Particle Size

To prevent overfitting, we conducted training and testing with the highly similar Class 3 terrestrial and underwater images treated as a single class. The results are shown in Table 4. By training the terrestrial and underwater Class 3 samples as the same class, we obtained the same classification accuracy as in the previous research for Class 1 and 2 terrestrial samples, and at the same time the classification accuracy for Class 1 and 2 underwater samples and for Class 3 improved compared with Table 3.

4. Discussion

4.1. Reduction of Error Factors through Diversity of Training Data

Hedger et al. [54] proposed using a CNN to classify water surface waves into smooth, rippled, and standing waves and to combine them with water surface slope information to delineate hydromorphological units (HMU) for each mesohabitat type. Their results suggest that a CNN is capable of recognizing and classifying water waves in images. In other words, in the classification of riverbed material particle size targeted in this research, the water surface waves overlaid on the image are also subject to feature extraction and are a source of error in determining the bed material particle size. There are two possible ways to reduce this error. The first is to attempt learning and classification using 20 classes in total (the 5 particle size classes shown so far in this study × 4 types of water surface waves). However, as the number of classes increases, the amount of training data per class decreases in this research, where the total number of images available for training is limited; in addition, as seen in Table 3, splitting highly similar images into separate classes reduces the accuracy. The other method classifies particle size into the 5 classes mentioned above based solely on particle size information, and trains the network with images of different water surface wave conditions placed in the same class in the training data. This method is expected to force the CNN to ignore waves on the water surface and to focus on the bed condition for classification.
In this study, the number of classes is fixed at the five mentioned above, and we evaluate how the classification accuracy changes depending on whether the training data include various types of water surface waves. Excluding the cases where the river bed material cannot be photographed because of bubbles or white water from the Surface Flow Types (SFT) described by Hedger [54] and Newson et al. [67] leaves (i) unbroken standing waves, (ii) ripples, (iii) broken standing waves in shallow water (rapids) (Figure 6), and a smooth water surface. The total accuracy of the network trained only with "smooth" images without surface waves serves as the control for the following comparison. Networks trained by sequentially adding images containing waves (i) to (iii) were then prepared, as sketched below. The same test data set as in Section 3.2 was used for all verifications, and the results were compared in terms of total accuracy.
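One way to assemble such training sets is to label images only by particle size class while the SFT merely selects which images enter the training set; a minimal sketch assuming a hypothetical folder hierarchy trainData/&lt;class&gt;/&lt;SFT&gt;/image.jpg:

```matlab
% Pool training images across Surface Flow Types under one particle-size label
% (sketch; assumes folders like trainData/Class1/rippled/...).
imds = imageDatastore('trainData', 'IncludeSubfolders', true);
labels = strings(numel(imds.Files), 1);
for k = 1:numel(imds.Files)
    sftDir    = fileparts(imds.Files{k});  % .../ClassX/<SFT>
    classDir  = fileparts(sftDir);         % .../ClassX
    [~, name] = fileparts(classDir);
    labels(k) = name;                      % label by class folder; SFT is ignored
end
imds.Labels = categorical(labels);
% To test the effect of one SFT, exclude its subfolder from the training set:
keep = ~contains(imds.Files, [filesep 'rapid' filesep]);  % e.g. drop SFT (iii)
imdsWithoutRapid = subset(imds, find(keep));
```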
Figure 7 compares the total accuracy for all five classes. When training with one of the three types of water surface waves (i)-(iii) added (the three bars adjacent to the control in Figure 7), a significant improvement in accuracy was seen when (i) unbroken standing waves were added. When trained without images containing unbroken standing waves, wave patterns with wavelengths similar to the riverbed material were not recognized as surface waves, and misclassifications occurred; accordingly, including (i) in the training data strongly tends to improve the accuracy. Similarly, for the networks trained with two of the three SFT types, whether (i) is included has a large influence. On the other hand, whether (iii) broken standing waves in shallow water (rapids) are included contributes little to the accuracy. A broken standing wave in shallow water indicates a rapid, where fine material such as Class 3 never settles on the bed; in fact, among the pictures taken in this research, there was no Class 3 image with wave type (iii). Therefore, whether wave type (iii) is learned does not affect the accuracy of fine material discrimination. In addition, visual inspection of the broken standing waves formed over the comparatively coarse bed materials of Classes 1 and 2 shows that the particle diameter and the wavelength are similar, so including Class 1 and 2 images with wave type (iii) also has little effect on the accuracy.
Finally, the total accuracy reached 90.3% when the training included all three SFT types (i) to (iii). This is higher than the results shown in Table 4, because several more SFT images of type (i) were added to the training data, allowing Class 2 under underwater conditions to be discriminated more accurately. As mentioned above, many factors that affect the brightness and pattern of images taken under natural conditions need to be taken into account. By including images taken under a variety of conditions in the training data, their influence can be reduced.

4.2. Mapping of the Wide-Ranging Area

A spatial distribution map of particle size can be created by photographing a wide area with the automatic navigation system of the UAV and applying the proposed image classification. However, when the UAV was flown at low altitude to obtain sufficient photographic resolution in a canyon area with steep slopes on both sides, GPS reception was insufficient, and image coordinates were missing where the UAV was operated automatically. In this study, we manually photographed the missing points and placed the classification results on the map based on information on the photographing angle.
Figure 8 shows an example of the shooting locations during an automatic navigation flight at the study site. To avoid crashes, the automated flight plan keeps the UAV away from cliffs with dense vegetation. The area missed because of the loss of GPS signals is shown as orange hatching. Immediately after the automatic navigation flights, we extracted the coordinates of each image, identified the areas where no images had been obtained for the above reason, and then flew and photographed those areas manually.
Each picture, taken either automatically or manually, was divided into small grids (1 m × 1 m), as shown in Figure 9, and classified with the network. The classification result of each grid was assigned local coordinates on the picture, with the picture center as the origin. The results were projected onto a map by converting the local coordinate values into map coordinates based on the GPS position and yaw angle of each photo extracted from its XMP metadata.
The classification with the CNN described above was aimed at validation, so the classes focused only on particle size and the terrestrial or underwater condition. In practical mapping, the channel surface is not covered only with sediment; at our study site there are vegetated areas and deep pools where the bed material is not visible. The classes "Grass" and "Deep pool" were therefore added to the original 5 classes used in the previous chapter. Among the resulting 7 classes, misclassification between the terrestrial and underwater conditions of the same particle size class poses no practical problem, so the test result was evaluated with a confusion matrix in which the terrestrial and underwater conditions of each particle size class were combined into one class (Table 5). The 17 images of Class 3 (fine gravel and coarse sand) misclassified as Class 2 reduced the accuracy, but the scores were generally acceptable. This network was then applied to mapping the wide-ranging area of our study site.
To discuss the relationship between the distribution of bed material and hydraulic features, topographic point clouds were obtained via a UAV (Matrice 300 RTK, DJI Japan) with a mounted LiDAR (Zenmuse L1, DJI Japan). The LiDAR-UAV can acquire point clouds from an altitude of 100 m with an accuracy of 3 cm; this accuracy is insufficient to distinguish individual bed material particles at the study site. The LiDAR was operated in the repetitive scanning pattern with triple return mode to acquire the topography under the trees. The flight route was set at an altitude of about 80 m, with one survey line in the middle of the river channel. The topographic survey was conducted only on 11th November 2023 owing to apparatus procurement, while the sediment particle size was monitored on 4th September and 11th November 2023, before and after Typhoon No. 14 on 20th September 2023.
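As an indication of how a longitudinal bed profile can be derived from such a point cloud, a minimal sketch assuming the cloud is exported as a LAS file and MATLAB's Lidar Toolbox is available (the file name and centerline coordinates are hypothetical; in practice the centerline follows the digitized channel):

```matlab
% Longitudinal bed profile from a LiDAR point cloud (sketch, Lidar Toolbox).
lasReader = lasFileReader('site1_lidar.las');  % hypothetical file name
pc  = readPointCloud(lasReader);
xyz = pc.Location;                             % N x 3: [east, north, elevation]
cl  = [0 0; 400 120];                          % hypothetical centerline endpoints (m)
u   = (cl(2,:) - cl(1,:)) / norm(cl(2,:) - cl(1,:));  % unit vector along the line
d   = (xyz(:,1:2) - cl(1,:)) * u';             % along-channel distance of each point
edges = 0:5:ceil(max(d));                      % 5 m longitudinal bins
[~, ~, bin] = histcounts(d, edges);
valid   = bin > 0;                             % drop points outside the bins
bedElev = accumarray(bin(valid), xyz(valid, 3), [numel(edges)-1, 1], @min, NaN);
plot(edges(1:end-1) + 2.5, bedElev);           % lowest return per bin ~ bed level
xlabel('Distance along channel (m)'); ylabel('Elevation (m)');
```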
Figure 10 and Figure 11 show the spatial distribution maps of river bed grain size and the longitudinal cross-sections created from the point cloud data for Site 1 and Site 2, respectively. The spatial distribution maps show particle size, water, and plants as points in five colors, and the water edges on both sides at the time of the low-water observation are shown as black lines. The flow direction is indicated by the light blue arrow: in Figure 10 the water flows from upper left to lower right, and in Figure 11 from upper right to lower left. In the longitudinal profiles, the vertical axis represents elevation and the horizontal axis represents the distance along the shoreline from the upstream boundary of the measurements. The numbers on the longitudinal profiles and the numbers on the spatial distribution maps indicate the same points.
Upstream of the channel bend in Site 1 (around the center of Figure 10b), fine Class 3 sediment is widely distributed within the low-water channel after the flood. When the flow channel gradually narrowed toward the left bank during the falling stage of the flood, coarse sediment first accumulated on the right bank on the inside of the bend. As the water level fell, relatively large grains were deposited in front of the bend, forming a rapid (sections 3 to 5). The upstream side was dammed up and became a slow-flowing area, and fine sediment transported during the falling stage was deposited widely on the flow path. In the longitudinal profile, the right-bank channel of the two branched low-water channels has a steep slope between 3 and 5, and it can be confirmed that relatively coarse sediment was deposited in that section. In addition, the grain size of the sediment along the left-bank channel is uniform; in the longitudinal section, the left-bank side shows an almost uniform slope without pools and falls, and the channel width is also almost uniform. This slope is consistent with the absence of fine sediment deposition. The topography before the flood was not obtained; however, the accumulation of coarse material forming a rapid at the bend and the fine sediment accumulation on its upstream side are also seen in the upper half of the distribution map before the flood.
Comparing the distribution maps before and after the typhoon, the deep pool near the downstream end of the observation area has shrunk, and the fine-grained sediment on the right bank side of the deep pool has disappeared and been replaced by Class 2 sediment. Before the typhoon, the area near the downstream end was dammed up by sediment accumulated outside the photographed area, and fine sediment was deposited in the slow flow of the deep pool. It is presumed that the rapid outside the area was washed away by the flood, eliminating the backwater, which resulted in the accumulation of Class 2 sediment. The longitudinal profile after the typhoon shows an almost uniform slope in the downstream part of the photographed area.
At Site 2, which is somewhat farther from the dam, Classes 2 and 3 are distributed along the low-water channel on the right bank downstream in the distribution maps both before and after the typhoon. The reason differs from that for the fine sediment deposited at Site 1 in the slow reach dammed behind the small rapid. The longitudinal profile after the typhoon shows that the slope between 4 and 7 is generally gentle, but there is an alternation of coarse (Class 1) and fine (Class 3) sediment. The channel at Site 2 is wider than at Site 1 and other upstream areas, resulting in lower flow velocity and deposition.
Comparing the distribution maps before and after the flood, before the flood Class 2 was deposited downstream and there was no branch channel on the right bank on low-water days. After the flood, a branch channel appeared on the right bank, and its bed shows an alternation of Classes 1 and 3; in particular, the extent of Class 1 (coarse) expanded. Sluicing operations had been carried out before 2023 only at Saigo Dam, but in 2023 Yamasubaru Dam also started sediment sluicing after completing its retrofitting, so larger particles may have been transported from the upstream dam.

5. Conclusions

In this study, the particle size classification method using a CNN with UAV aerial images, which in our previous study targeted only terrestrial bed materials, was extended to the classification of underwater materials in shallow waters where the bottom is visible. When the network was applied while completely ignoring whether a sample was terrestrial or underwater, the classification accuracy dropped significantly, as could be predicted even without the trial. The accuracy was reduced further when the number of classes was doubled so that similar images had to be assigned to different classes; if the similarity between classes is high, increasing the number of classes is not a good idea. We were able to improve accuracy by using a single class for the fine grains, which are highly similar between terrestrial and underwater conditions. On the other hand, if the particles are large enough to be identified in the image, their colors differ between terrestrial and underwater conditions, so classifying them into separate classes resulted in higher classification accuracy.
In addition, although waves on the water surface were thought to be a source of error, increasing the number of classes accordingly was not considered a good idea, so we did not add wave-type classes. Since the purpose of this research is to classify river bed images based on particle size, sufficient accuracy was obtained by setting classes based only on particle size, without distinguishing wave types. In particular, it was shown that accuracy could be improved by including a variety of SFTs in the training data.
This suggests that classification based on the main target (particle size in this study), without being distracted by other environmental factors that change the image, provides a certain level of accuracy, and this is not limited to SFT. Furthermore, if the training data include images taken under conditions as diverse as possible with respect to other environmental factors, the biases caused by those factors will be reduced and higher accuracy will be obtained. In this study, photographs were taken in two relatively close reaches of the river, so environmental conditions other than waves on the water surface were relatively similar. Therefore, to make the method generally applicable to other rivers, it is preferable to prepare more verified training data photographed under various environmental conditions with different flow rates, current speeds, parent rocks, water colors, particle shape characteristics, riverbed gradients, and so on.
In addition, the validity of this method was supported by the reasonable relationship between the gradient of the microtopography observed by LiDAR and the trend of the wide-area distribution of particle size. Images of river channels taken from UAVs combine various kinds of environmental information, and it is difficult to classify them by considering multiple environmental conditions at the same time. However, if classification focuses on a single item and the training data are provided without subdividing by other environmental items, sufficient accuracy can be obtained. This holds not only for classification based on riverbed particle size but also for creating environmental classification maps based on image classification in general.

Author Contributions

Conceptualization, M.I.; methodology, S.A., T.S., and M.I.; software, S.A., T.S., and M.I.; validation, S.A., T.S., and M.I.; formal analysis, S.A., T.S., and M.I.; investigation, M.I.; resources, M.I.; data curation, S.A., T.S., and M.I.; writing—original draft preparation, M.I.; writing—review and editing, M.I.; visualization, S.A., T.S.; supervision, M.I.; project administration, M.I.; funding acquisition, M.I. All authors have read and agreed to the published version of the manuscript.

Funding

This study was funded by JSPS KAKENHI Grant Number 17H03314.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. David, A.J.; Castillo, M.M. Stream Ecology: Structure and Function of Running Waters, 2nd ed.; Springer: Dordrecht, The Netherlands, 2007; pp. 1–436.
  2. Alexandre, C.M.; Ferreira, T.F.; Almeida, P.R. Fish assemblages in non-regulated and regulated rivers from permanent and temporary Iberian systems. River Res. Appl. 2013, 29, 1042-1058. [CrossRef]
  3. Almeida, D.; Merino-Aguirre, R.; Angeler, D.G. Benthic invertebrate communities in regulated Mediterranean streams and least-impacted tributaries. Limnologica 2013, 43, 34-42. [CrossRef]
  4. Eekhout, J.P.C.; Hoitink, A.J.F. Chute cutoff as a morphological response to stream reconstruction: The possible role of backwater, Water Resour. Res. 2015, 51, 3339–3352. [CrossRef]
  5. Tsubaki, R.; Baranya, S.; Muste, M.; Toda, Y. Spatio-temporal patterns of sediment particle movement on 2D and 3D bedforms. Exp. Fluids 2018, 59, 93. [CrossRef]
  6. Liu L. Drone-based photogrammetry for riverbed characteristics extraction and flood discharge modeling in Taiwan’s mountainous rivers. Measurement 2023, 220. [CrossRef]
  7. Zhu R.; Tsubaki R.; Toda Y. Effects of vegetation distribution along river transects on the morphology of a gravel bed braided river. Acta Geophysica 2023. [CrossRef]
  8. Kang, T.; Kimura, I.; Shimizu, Y. Responses of bed morphology to vegetation growth and flood discharge at a sharp river bend. Water 2018, 10, 223. [CrossRef]
  9. Bunte, K.; Abt, S.R. Sampling Surface and Subsurface Particle-Size Distributions in Wadable Gravel- and Cobble-Bed Streams for Analyses in Sediment Transport, Hydraulics, and Streambed Monitoring; General Technical Report RMRS-GTR-74; U. S. Department of Agriculture: 2001; pp. 166–170. [CrossRef]
  10. Kellerhals, R.; Bray, D. Sampling procedure for Coarse Fluvial Sediments. J. Hydraul. Div. ASCE 1971, 97, 1165–1180. [CrossRef]
  11. Yokota, Y.; Date, K.; Yamamoto, T.; Akoshima, M.; Takahashi, H.; Mizuguchi, Y. Measurement technique of particle size distribution of rock fragmentation by the blasting using stereo image processing, Proc. 13th Jp Symp. Rock Mech. 2012. https://avsc.jp/images/pdf/Particle_size_Distribution_Measure.pdf.
  12. Igathinathane, C.; Melin, S.; Sokhansanj, S.; Bi, X.; Lim, C.J.; Pordesimo, L.O.; Columbus, E.P. Machine vision based particle size and size distribution determination of airborne dust particles of wood and bark pellets. Powder Technol. 2009, 196, 202–212. [CrossRef]
  13. Detert, M.; Weitbrecht, V. Automatic object detection to analyze the geometry of gravel grains—A free stand-alone tool. In River Flow 2012; Muños, R.M., Ed.; Taylor & Francis Group: London, UK, 2012; 595–600.
  14. Detert, M.; Weitbrecht, V. User guide to gravelometric image analysis by BASEGRAIN, Advances in River Sediment Research, Advances in River Sediment Res. 2013, 1789-1796.
  15. Harada, M.; Arakawa, T.; Ooi, T.; Suzuki, H.; Sawada, K. Development of bed topography survey technique by underwater imaging progress for UAV photogrammetry. Proc. River Eng. 2016, 22, 67–72. (In Japanese).
  16. Hirao, S.; Azumi T.; Yoshimura M., Nishiguchi Y.; Kawai S.; Fundamental study on grain size distribution of river bed surface by analyzing UAV photograph, Proc. River Eng. 2018, 24, 263–266. (In Japanese). [CrossRef]
  17. Rogan J.; Chen D. Remote sensing technology for mapping, and monitoring land-cover and land-use change. Prog. Plan. 2004, 61, 301–325. [CrossRef]
  18. Mochizuki S.; Murakami, T. Vegetation map using the object-oriented image classification with ensemble learning. J Forest Plan. 2013, 18, 127-134. [CrossRef]
  19. Mtibaa, S.; Irie, M. Land cover mapping in cropland dominated area using information on vegetation phenology and multi-seasonal Landsat 8 images. Euro-Mediterr. J. Environ. Integr. 2016, 1. [CrossRef]
  20. Maurya, K.; Mahajan, S.; Chaube, N. Remote sensing techniques: mapping and monitoring of mangrove ecosystem—a review. Complex Intell. Syst. 2021, 7, 2797–2818. [CrossRef]
  21. Gilcher, M.; Udelhoven, T. Field Geometry and the Spatial and Temporal Generalization of Crop Classification Algorithms—a randomized approach to compare pixel based and convolution based methods. Remote Sens. 2021, 13, 775. [CrossRef]
  22. Taravat, A.; Wagner, M.P.; Bonifacio, R.; Petit, D. Advanced Fully Convolutional Networks for Agricultural Field Boundary Detection. Remote Sens. 2021, 13, 722. [CrossRef]
  23. Long, J.; Shelhamer, E.; Darrell, T. Fully convolutional networks for semantic segmentation. In Proc. IEEE conf. on computer vision and pattern recognition (CVPR), 2015. Boston, MA, USA, 2015, pp. 3431-3440. [CrossRef]
  24. DeepGlobe CVPR 2018 - Satellite Challenge. http://deepglobe.org.
  25. Golovanov S.; Kurbanov R.; Artamonov A.; Davydow A.; Nikolenko S. Detection from Satellite Imagery Using a Composite Loss Function, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Salt Lake City, UT, USA, 2018, 219-2193. [CrossRef]
  26. Dickenson M.; Gueguen L. Rotated Rectangles for Symbolized Building Footprint Extraction, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Salt Lake City, UT, USA, 2018, 215-2153. [CrossRef]
  27. Iglovikov, V. I., Seferbekov, S., Buslaev, A. V., & Shvets, A. TernausNetV2: Fully Convolutional Network for Instance Segmentation. 2018, ArXiv. /abs/1806.00844.
  28. Buslaev A.; Seferbekov S.; Iglovikov V.; Shvets A. Fully Convolutional Network for Automatic Road Extraction from Satellite Imagery, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Salt Lake City, UT, USA, 2018, 197-1973. [CrossRef]
  29. Costea D.; Marcu A.; Slusanschi E.; Leordeanu M. Roadmap Generation using a Multi-stage Ensemble of Deep Neural Networks with Smoothing-Based Optimization, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Salt Lake City, UT, USA, 2018, pp. 210-2104. [CrossRef]
  30. Doshi J. Residual Inception Skip Network for Binary Segmentation, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Salt Lake City, UT, USA, 2018, pp. 206-2063. [CrossRef]
  31. Ghosh A.; Ehrlich M.; Shah S.; Davis L.; Chellappa R. Stacked U-Nets for Ground Material Segmentation in Remote Sensing Imagery, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Salt Lake City, UT, USA, 2018, pp. 252-2524. [CrossRef]
  32. Samy M.; Amer K.; Eissa K.; Shaker M.; ElHelw M. NU-Net: Deep Residual Wide Field of View Convolutional Neural Network for Semantic Segmentation, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Salt Lake City, UT, USA, 2018, pp. 267-2674. [CrossRef]
  33. Tian C.; Li C.; Shi J. Dense Fusion Classmate Network for Land Cover Classification, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Salt Lake City, UT, USA, 2018, pp. 262-2624. [CrossRef]
  34. Verma, U.; Chauhan, A.; M.M., M. P.; Pai, R. DeepRivWidth: Deep learning based semantic segmentation approach for river identification and width measurement in SAR images of Coastal Karnataka. Computers & Geosciences 2021, 154, 104805. [CrossRef]
  35. Ling, F.; Boyd, D.; Ge, Y.; Foody, G. M.; Li, X.; Wang, L.; Zhang, Y.; Shi, L.; Shang, C.; Li, X.; Du, Y. Measuring River Wetted Width From Remotely Sensed Imagery at the Subpixel Scale With a Deep Convolutional Neural Network. Water Resour Res. 2019, 55, 5631-5649. [CrossRef]
  36. Nama, A.H.; Abbas, A.S.; Maatooq J. S., Field and Satellite Images-Based Investigation of Rivers Morphological Aspects, Civ. Eng. J. 2022, 8. [CrossRef]
  37. Yamawaki, M.; Matsui, T.; Kawahara, M.; Yasumoto, Y.; Ueyama, K.; Kawazoe, Y.; Matsuda, K.; Hara, F.; Study on enhancement of sandbar monitoring by deep learning -towards enhancement of management in Hojo-river, J. Jp. Soc. Civ. Eng. Ser. B2 2021, 77, I_511-I_516, (In Japanese). [CrossRef]
  38. Akiyama, T. S.; Marcato Junior, J.; Gonçalves, W. N.; Bressan, P. O.; Eltner, A.; Binder, F.; Singer, T. Deep learning applied to water segmentation, Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci. 2020, XLIII-B2-2020, 1189–1193. [CrossRef]
  39. Muhadi, N. A.; Abdullah, A. F.; Bejo, S. K.; Mahadi, M. R.; Mijic, A. Deep Learning Semantic Segmentation for Water Level Estimation Using Surveillance Camera. Applied Sciences 2020, 11, 9691. [CrossRef]
  40. Lopez-Fuentes L.; Rossi C.; Skinnemoen, H. River segmentation for flood monitoring, 2017 IEEE International Conference on Big Data (Big Data), Boston, MA, USA, 2017, pp. 3746-3749. [CrossRef]
  41. Inoue, H.; Katayama, T.; Song, T.; Shimamoto T. Semantic Segmentation of River Video for Efficient River Surveillance System, 2023 Int. Technical Conf. on Circuits/Sys., Comput. Commun. (ITC-CSCC), Jeju, Korea, 2023, 1-5. [CrossRef]
  42. Lambert, R.; Li, J.; Chavez-Galaviz J.; Mahmoudian N. A Survey on the Deployability of Semantic Segmentation Networks for Fluvial Navigation, 2023 IEEE/CVF Winter Conf. Appl. of Comput. Vision Workshops (WACVW), Waikoloa, HI, USA, 2023, 255-264. [CrossRef]
  43. Zhang, X.; Zhou, Y.; Jin, J.; Wang, Y.; Fan, M.; Wang, N.; Zhang, Y. ICENETv2: A Fine-Grained River Ice Semantic Segmentation Network Based on UAV Images. Remote Sens. 2020, 13, 633. [CrossRef]
  44. Ren, B.; Pan, Y.; Lin, X.; Yang, K. Statistical Roughness Properties of the Bed Surface in Braided Rivers. Water 2022, 15, 2612. [CrossRef]
  45. Piton, G.; Recking, A.; Le Coz, J.; Bellot, H.; Hauet, A.; Jodeau, M. Reconstructing depth-averaged open-channel flows using image velocimetry and photogrammetry. Water Resour. Res. 2018, 54, 4164–4179. [CrossRef]
  46. Brasington J.; Vericat D.; Rychkov I. Modeling river bed morphology, roughness, and surface sedimentology using high resolution terrestrial laser scanning, Water Resour. Res. 2012, 48. [CrossRef]
  47. Onaka, N.; Akamatsu, Y.; Mabu, S.; Inui, R.; Hanaoka T. Development of a habitat prediction method for ruditapes philippinarum based on image analysis using deep learning, J. Jp. Soc. Civ. Eng. Ser. B1 2020, 76, I_1279-I_1284, (In Japanese). [CrossRef]
  48. Onaka, N.; Akamatsu, Y.; Koyama, A.; Inui, R.; Saito, M.; Mabu, S., Development of a sediment particle size prediction method for tidal flat based on image analysis using deep learning. J. Jp. Soc. Civ. Eng. Ser. B1 2022, 78, I_1117-I_1122, (In Japanese). [CrossRef]
  49. Hedger, R. D.; Sundt-Hansen, L.; Foldvik, A. Evaluating the suitability of aerial photo surveys for assessing Atlantic salmon habitat in Norway. NINA Report 2105, 2022. https://brage.nina.no/nina-xmlui/bitstream/handle/11250/2975990/ninarapport2105.pdf?sequence=5&isAllowed=y.
  50. Carbonneau, P. E.; Dugdale, S. J.; Breckon, T. P.; Dietrich, J. T.; Fonstad, M. A.; Miyamoto, H.; Woodget, A. S. Adopting deep learning methods for airborne RGB fluvial scene classification. Remote Sens. of Environ. 2020, 251, 112107. [CrossRef]
  51. Casado, M. R.; Gonzalez, R. B.; Kriechbaumer, T.; Veal, A. Automated identification of river hydromorphological features using UAV high resolution aerial imagery. Sensors 2015, 15, 27969–27989.2015. [CrossRef]
  52. Milan, D. J.; Heritage, G. L.; Large, A. R. G.; Entwistle, N. S. Mapping hydraulic biotopes using terrestrial laser scan data of water surface properties. Earth Surface Processes and Landforms 2010, 35, 918–931. [CrossRef]
  53. Harrison, L. R.; Legleiter, C. J.; Overstreet, B. T.; Bell, T. W.; Hannon, J. Assessing the potential for spectrally based remote sensing of salmon spawning locations. River Res. Appl. 2020, 36, 1618–1632. [CrossRef]
  54. Hedger, R. D.; Gosselin, P. Automated fluvial hydromorphology mapping from airborne remote sensing. River Res. Appl. 2023. [CrossRef]
  55. Takechi, H.; Aragaki, S.; Irie, M. Differentiation of River Sediments Fractions in UAV Aerial Images by Convolution Neural Network. Remote Sens. 2021, 13, 3188. [CrossRef]
  56. Nukazawa, K.; Shirasaka, K.; Kajiwara, S.; Saito, T.; Irie, M.; Suzuki, Y. Gradients of flow regulation shape community structures of stream fishes and insects within a catchment subject to typhoon events, Sci. Total Environ. 2020, 748. [CrossRef]
  57. Nakano, D.; Nakane, Y.; Kajiwara, S.; Sakada, K.; Nishimura; K. Fukaike, M.; Honjo, T. Macrozoobenthos distribution after flood events offshore the Mimi River estuary, Japan, Plankton and Benthos Res. 2022, 17, 277-289. [CrossRef]
  58. Ito K.; Matsunaga M.; Itakiyo T.; Oishi H.; Nukazawa K.; Irie M.; Suzuki Y. Tracing sediment transport history using mineralogical fingerprinting in a river basin with dams utilizing sediment sluicing. Intl. J. Sediment Res. 2022, 38, 469-480. [CrossRef]
  59. Nukazawa, K.; Kajiwara, S.; Saito, T.; Suzuki Y., Preliminary assessment of the impacts of sediment sluicing events on stream insects in the Mimi River, Jp, Ecol. Eng. 2020, 145, 105726. [CrossRef]
  60. Sumi, T.; Yoshimura, T.; Asazaki, K.; Kaku, M.; Kashiwai, J.; Sato, T. Retrofitting and change in operation of cascade dams to facilitate sediment sluicing in the Mimikawa river basin. In Proceedings of the 25th Congress of International Commission on Large Dams, Stavanger, Norway, 14–20 June 2015; Q99-R45, 597–616.
  61. Yoshimura, T.; Shinya, H. Environmental impact assessment plan due to sediment sluicing at dams along Mimikawa river system, J. Disaster Res. 2018, 13, 709-719. [CrossRef]
  62. Landwehr, T.; Kantoush, S.A; Pahl-Wostl, C.; Sumi, T.; Irie M. The effect of optimism bias and governmental action on siltation management within Japanese reservoirs surveyed via artificial neural network, Big Earth Data 2020, 4:1, 68-89. [CrossRef]
  63. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. In Proceedings of the 25th Intl. Conference on Neural Inform. Process. Sys. (NIPS’12), Lake Tahoe, NV, USA, 3–6 December 2012; pp. 1097–1105.
  64. Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015; pp. 1–9. [CrossRef]
  65. ImageNet Website. Available online: http://image-net.org/ (accessed on 28 September 2023).
  66. Zamir, A.R.; Sax, A.; Shen, W.; Guibas, L.; Malik, J.; Savarese, S. Taskonomy: Disentangling Task Transfer Learning. In Proceedings of the 2018 IEEE/CVF Conf. on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 3712–3722. [CrossRef]
  67. Newson, M.D.; Newson, C.L. Geomorphology, ecology and river channel habitat: mesoscale approaches to basin-scale challenges. Prog. Phys. Geogr. 2000, 24, 195–217. [CrossRef]
Figure 1. Location of the study site on the Mimikawa River in Miyazaki, Japan.
Figure 5. Image samples of the 3 classes of particle size under terrestrial and underwater conditions.
Figure 6. Overlaying Surface Flow Types (SFT) on the bed materials.
Figure 7. Comparison of the total accuracies between the learning cases with different wave types.
Figure 8. Shooting points by an automatic flight plan.
Figure 9. Process for mapping.
Figure 10. Distribution map of grain size (a) before the sluicing, (b) after the sluicing, and (c) longitudinal cross-section after the sluicing at Site 1.
Figure 11. Distribution map of grain size (a) before the sluicing, (b) after the sluicing, and (c) longitudinal cross-section after the sluicing at Site 2.
Table 1. Classification criteria in this study.

Classification                Particle Size (mm)   Class Name in This Study
Medium gravel                 64-24.5              Class 1
Small gravel                  24.5-2               Class 2
Fine gravel and coarse sand   2-0.5                Class 3
Table 2. Confusion matrix of the result of classifying all the images with the network trained only with terrestrial sample images (batch size 10, epoch 21).

            Class 1   Class 2   Class 3   Recall   F-Score
Class 1     67        8         0         89.3%    86.4%
Class 2     13        58        18        65.2%    70.3%
Class 3     0         10        22        68.8%    61.1%
Precision   83.8%     76.3%     55.0%

Micro precision: 75.0%; macro precision: 71.7%; micro recall: 75.0%; macro recall: 74.4%; overall accuracy: 75.0%; average accuracy: 71.7%.
Table 3. Confusion matrix with the 6 classes: 3 particle size classes × terrestrial/underwater (batch size 10, epoch 21).

                         Terrestrial                  Underwater
                         Class 1  Class 2  Class 3    Class 1  Class 2  Class 3   Recall   F-Score
Terrestrial  Class 1     56       0        0          0        0        0         100.0%   62.9%
             Class 2     16       122      1          0        1        2         85.9%    92.4%
             Class 3     50       0        58         80       65       19        21.3%    35.0%
Underwater   Class 1     0        0        0          0        0        0         -        -
             Class 2     0        0        0          0        0        0         -        -
             Class 3     0        0        0          0        6        19        76.0%    58.5%
Precision                45.9%    100.0%   98.3%      0.0%     0.0%     47.5%

Micro precision: 51.5%; macro precision: 48.6%; micro recall: 51.5%; macro recall: 69.1%; overall accuracy: 51.5%; average accuracy: 48.6%.
Table 4. Confusion matrix with the 5 classes: Classes 1 and 2 of particle size × terrestrial/underwater, plus Class 3 (batch size 10, epoch 21).

                         Terrestrial        Underwater         Both
                         Class 1  Class 2   Class 1  Class 2   Class 3   Recall   F-Score
Terrestrial  Class 1     113      6         0        1         0         94.2%    93.3%
             Class 2     5        115       0        2         3         92.0%    93.1%
Underwater   Class 1     4        1         78       18        11        69.6%    81.2%
             Class 2     0        0         1        48        12        78.7%    68.1%
Both         Class 3     0        0         1        11        94        88.7%    83.1%
Precision                92.6%    94.3%     97.5%    60.0%     78.3%

Micro precision: 85.5%; macro precision: 84.5%; micro recall: 85.5%; macro recall: 84.5%; overall accuracy: 85.5%; average accuracy: 84.6%.
Table 5. Confusion matrix classifying with the 3 classes of particle size, Deep pool, and Grass (batch size 26, epoch 21).

            Class 1   Class 2   Class 3   Deep pool   Grass   Recall   F-Score
Class 1     134       1         0         0           0       99.3%    93.7%
Class 2     16        144       1         1           0       88.9%    88.3%
Class 3     1         17        56        2           0       73.7%    81.2%
Deep pool   0         2         5         73          0       91.3%    93.6%
Grass       0         0         0         0           80      100%     100%
Precision   88.7%     87.8%     90.3%     96.1%       100%

Overall accuracy: 91.3%.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.