Preprint
Article

Enhanced Mapping for Ecosystem Management: Evaluating the Accuracy of Sentinel-1 and Sentinel-2 Data Fusion Compared to Sole Sentinel-2 Using Random Forest Classification

This version is not peer-reviewed.
Submitted: 13 August 2024
Posted: 15 August 2024
Abstract
Recent advances in satellite technology offer enormous potential for ecosystem mapping, one of the fundamental components of environmental studies. In this paper, a Random Forest classifier is applied to rigorously assess ecosystem mapping performance through a detailed comparative analysis of fused Sentinel-1 and Sentinel-2 data against stand-alone Sentinel-2 imagery over three priority ecosystems in Bangladesh: wetlands, riverine areas, and mangroves. The working hypothesis is that collocated images, which integrate Sentinel-1 with Sentinel-2 data, outperform Sentinel-2 imagery alone across these ecosystems. The study focuses on the Hakaluki Haor area for wetlands, the Padma-Jamuna River confluence for the riverine ecosystem, and the Sundarbans forest for mangroves. Leveraging C-band dual-polarization Synthetic Aperture Radar (SAR) data from Sentinel-1 and four spectral bands (blue, green, red, and near-infrared) from Sentinel-2, the study analyzes imagery acquired between December 2022 and February 2023. A 5% cloud masking filter is applied to the optical data to enhance accuracy. In the methodology, 70% of the reference signatures are used for training the classification model and the remaining 30% for testing. The results show that the fused data yield markedly higher classification accuracy: overall accuracies of 94.17% for mangroves, 87.30% for riverine, and 85.96% for wetland ecosystems. In contrast, Sentinel-2 imagery alone yields lower accuracies of 91.56%, 85.21%, and 82.51% for the respective ecosystems. The integration of radar data provides critical information, especially in environments with dense vegetation or cloud cover, where optical data alone may be insufficient. These findings underline the limitations of relying on Sentinel-2 imagery alone to capture the complex details of diverse ecosystems and highlight the value of including Sentinel-1 data for a more holistic analysis. The improved accuracy afforded by this fusion not only deepens ecological knowledge but also underpins more effective conservation strategies.
Keywords: 
Subject: Environmental and Earth Sciences - Remote Sensing

1. Introduction

Understanding the composition, health, and distribution of ecosystems is critical to effective environmental management, planning, and conservation. All of these goals rely on one cornerstone process: ecosystem mapping, the spatial identification and delineation of distinct ecological units [1]. The accuracy of ecosystem maps greatly affects their utility in practical applications; it is a key element in understanding the dynamics of a natural environment and thus plays a major role in conservation, resource management, and policy formulation [2]. Traditionally, remote sensing technologies have been used extensively to understand and manage ecosystems by giving an overview of large landscapes. Over the years, satellite imaging has undergone revolutions that enable researchers to capture increasingly detailed and comprehensive data. Among the newest developments is the Sentinel satellite constellation, a new frontier in Earth observation that provides high-resolution optical and radar imagery through Sentinel-1 and Sentinel-2 [3]. The present work therefore addresses ecosystem mapping and strives to improve its accuracy through the integration of Sentinel-1 and Sentinel-2 data, allied to advanced classification techniques.
Even with the benefits that optical imagery delivers, a number of inherent limitations in this data source affect the accuracy of ecosystem maps produced from it alone [4]. Perhaps the most important of these is spectral saturation: under dense vegetation cover, very little light penetrates the canopy, so the signal carries little or no information about the underlying land cover characteristics [3]. This problem is significant when distinguishing between different vegetation types, especially within intensively forested ecosystems. Other challenges include the impact of atmospheric conditions on optical data. Cloud cover may obscure the ground surface from view, making retrieval of any useful information impossible, and even partial cloud cover may introduce artifacts and inconsistencies in the data that lead to misclassification of ecosystem components [5].
In this regard, the literature also points to the need for fusing multi-sensor data for accurate ecosystem mapping [6]. Previous research shows that optical and radar data complement each other, with each modality uniquely providing additional insight into surface characteristics, land cover types, and ecosystem dynamics [7]. Research efforts have also demonstrated efficient machine learning algorithms, such as Random Forest, for classifying complex landscapes from remote sensing data [6]. Nevertheless, against the backdrop of these developments, gaps persist in understanding the full potential of machine learning based data fusion techniques and their application across diverse ecosystems [8].
The inadequacies of single-sensor optical imagery in ecosystem mapping have prompted researchers to explore several avenues. The first has been the development of sophisticated image processing techniques and more advanced classification algorithms [6]. These algorithms help leverage the spectral information within optical data to extract the subtle signatures associated with different ecosystem constituents. However, their effectiveness is still potentially constrained by intrinsic data quality problems and the complex spectral characteristics of various ecosystems [9]. A second strategy is the integration of radar imagery and other ancillary data sets that complement optical data. Ancillary data can also provide information on topographic features, which is very useful in distinguishing wetlands from terrestrial ecosystems [10]. Radar imagery, particularly synthetic aperture radar (SAR), is especially valuable because it can partially penetrate cloud cover and vegetation canopies. On the other hand, interpreting SAR data often requires special expertise and can be more difficult than interpreting optical imagery [11].
Even though the integration of multi-sensor data shows bright prospects for improving accuracy in ecosystem mapping, many knowledge gaps and uncertainties remain [7]. For certain ecosystem types, the optimal combination of data sources and the most efficient data fusion techniques are not yet well defined [2]. The synergies between optical and radar data are still being researched for various ecosystems, and studies are underway to develop standardized protocols [11]. Further investigation is also required into the impact of the varying spatial and temporal resolutions of different sensor data on the accuracy of ecosystem maps [12]. Optical imagery normally has finer spatial resolution than SAR data, which may be coarser [13]; effectively integrating such disparate resolutions has long been one of the challenges in reaching optimal mapping accuracy. Two data sources are central here. Sentinel-1 is an ESA constellation of radar satellites that provides C-band SAR data with high temporal resolution [14]. Sentinel-2, another ESA constellation, provides high-resolution multispectral imagery acquired over a broad range of spectral bands [15]. Confident of the potential of these two data sources, this study explores how far their combination can better delineate critical habitats such as wetlands, riverine systems, and mangrove forests, compensating for the limitations of one sensor type with the strengths of the other.
This knowledge gap regarding the effectiveness of Sentinel-1 and Sentinel-2 data fusion in enhancing ecosystem mapping accuracy is what this study seeks to address. In pursuit of this broad aim, the study has been designed around specific objectives. First, it evaluates the effectiveness of Sentinel-1 and Sentinel-2 data fusion in enhancing the delineation and characterization of ecosystems. The study targets three of the most important ecosystems in Bangladesh: wetlands, riverine systems, and mangrove forests, all of which are critical to global biodiversity and climate regulation. Second, a comprehensive comparison is drawn between the ecosystem maps produced from the integrated Sentinel-1 and Sentinel-2 datasets and those produced using only Sentinel-2 imagery, with the objective of quantifying the improvements in accuracy and detail attributable to the data fusion approach. The choice of the Support Vector Machine (SVM) based Random Forest (RF) classifier, implemented in this study for classifying the ecosystem classes, was based on several factors.
First, SVMs are strong at handling high-dimensional datasets and efficient at discriminating complex, nonlinear trends in data [17]. Integrating SVM into the RF framework employs the discriminative power of the SVM classifier alongside the ensemble learning of RF to gain better performance and accuracy; the strengths of the two algorithms act as a synergistic combination of SVM's discriminative power with RF's versatility and efficiency [18]. In addition, an SVM-based RF classifier offers flexibility in parameter tuning, so the classification parameters can be optimized for the specifics of the ecosystem mapping task. SVM-based RF classifiers have already proven effective in various remote sensing applications, including land cover classification and vegetation mapping, making them well suited to ecosystem classification tasks [19]. Adopting this classifier is in line with the pursuit of precise and reliable classification outcomes, which are essential to a full understanding of ecosystem dynamics and help inform decisions on conservation and management initiatives. The study reports several metrics, including overall classification accuracy, class-specific accuracy metrics such as producer's and user's accuracies, and the impact of the fusion techniques on the classification outcome.
In addition, environmental factors such as cloud cover, terrain roughness, and seasonality are used to contextualize the comparative analyses. Such research may also aid in formulating standardized protocols for using remote sensing data in ecological assessment, thereby furthering scientific knowledge and environmental stewardship. If these objectives are met, this study is expected to set a baseline standard in ecosystem mapping.
Such integration is designed not only to further our capabilities for global environmental monitoring but also to inform more effective conservation and management of the world's most vulnerable ecosystems. We therefore expect this research to demonstrate the significant benefits of data fusion between Sentinel-1 and Sentinel-2, delivering a robust scientific basis for its adoption in environmental science and policy making.

2. Data and Methodology

2.1. Study Area

The current study concentrates on three of the most ecologically varied and critical ecosystems in Bangladesh: the Hakaluki Haor wetland, the confluence region of the Padma and Jamuna Rivers, and the Sundarbans mangrove forest. These study sites represent a broad range of the ecosystem types and environmental issues associated with ecosystem management and mapping exercises.
Situated in the north-eastern part of Bangladesh, Hakaluki Haor represents one of the largest freshwater wetland ecosystems in the country. Its intricate ecological fabric is composed of a mosaic of permanent and seasonal water bodies, inextricably interwoven with various vegetation communities and agricultural land [20]. Seasonal fluctuations in the water level are characteristic features of the wetland. Beginning from the monsoon season, it inundates vast areas and turns into seasonally flooded grassland [21]. This complex interplay of water bodies, vegetation types, and land-use patterns suggests a requirement for highly accurate mapping to inform proper strategies of wetland management and conservation.
The Padma-Jamuna River confluence region is one of the most dynamic and complex riverine ecosystems in central Bangladesh. The zone is formed by the confluence of the country's two largest rivers, the Padma and the Jamuna, and undergoes continuous geomorphological change through erosion, sedimentation, and sandbar formation. This ecosystem comprises a great variety of fluvial features: active channels, meandering river courses, point bars, and seasonally inundated floodplains [22]. Accurate mapping of this dynamism is required to understand riverine processes and channel migration patterns and to mitigate risks related to riverbank erosion and flooding.
The Sundarbans, along the southern coastline of Bangladesh, is the largest single block of tidal mangrove forest in the world. This unique ecosystem is characterized by a dense network of mangrove trees and shrubs adapted to thrive in saline environments [23]. It provides effective coastal protection against cyclonic winds and storm surges [10]. Effective conservation and management of this important ecosystem requires detailed mapping of its network of waterways, intertidal mudflats, and varying densities of mangrove vegetation.
Figure 1. Mapping the Ecosystems Study Area: An Overview of the Sundarbans Mangrove Forest, Hakaluki Haor Wetland, and the Padma-Jamuna Riverine Confluence in Bangladesh.

2.2. Classification Schemes

A robust and ecologically relevant classification scheme is required to ensure that ecosystem mapping can be done with the best possible degree of accuracy. Building on existing knowledge, this paper consolidates classification schemes already established for the target ecosystems so as to comprehensively represent their main components.
For the Hakaluki Haor wetland ecosystem, a classification scheme was adopted in the present research, drawing on past experience and a careful literature review. It categorizes the haor into five classes: water bodies, dense vegetation, cropland, bare land, and human settlements [21]. This classification captures the essential components of a wetland system, distinguishing permanent from seasonal water bodies, different vegetation communities, and agricultural lands from human settlement areas.
For the Padma-Jamuna confluence area, a new classification scheme was established, drawing on a study of the Brahmaputra River basin. This scheme includes three vital fluvial features of the confluence region: water, sandbars with vegetation, and unvegetated sandbars [24]. These categories are incorporated to ensure appropriate mapping of the confluence zone's dynamic geomorphology, its complex network of channels and developing sandbar formations, and the extent of seasonally flooded areas.
For the Sundarbans mangrove forest, we adopt a classification scheme informed by earlier mangrove ecosystem mapping research [10]. Five of its nine classification features are used in our scheme: water, mudflat zone, bare land, sparse vegetation, and dense vegetation. This effectively maps the spatial distribution of the different water bodies, the density variation of the mangrove vegetation, and the unvegetated mudflats that characterize the Sundarbans ecosystem.
Table 1 summarizes the classification schemes adopted for each ecosystem, with a description of each class and the corresponding references. The classification schemes are largely standardized and consequently provide an ecologically meaningful representation of the target ecosystems, supporting accurate delineation of their key components from remote sensing analyses.

2.3. Datasets

In this study, various datasets were used to enable comprehensive analysis and accurate ecosystem classification of the study sites. The main datasets were Sentinel-1 and Sentinel-2 satellite imagery, which provided the high-resolution radar and optical data essential to ecological mapping [25]. The Sentinel-1 dataset, consisting of C-band synthetic aperture radar data, provided information on surface properties related to soil moisture and vegetation structure [26]. The Sentinel-2 dataset, which provides multispectral imagery across several spectral bands, enabled discrimination of the land cover classes [15]. These datasets were obtained for the period from December 2022 to February 2023 to capture seasonal changes and environmental dynamics. Ancillary data, such as existing land cover maps from Google Earth, were also used for validation to guarantee the accuracy and reliability of the classification results. This paper thus leverages the complementarity between optical and radar remote sensing through this integration of datasets, improving the accuracy of ecosystem mapping and enhancing ecological assessment.

2.3.1. Satellite Imageries

The study mapped the ecosystems using imagery from Sentinel-1 and Sentinel-2. The integration of SAR with optical data allows many physical and spectral characteristics of land cover to be identified, and such integration can yield precise classification results [6]. Sentinel-1 is a European SAR satellite that acquires data in all weather conditions with a repeat cycle of six days; both ascending and descending ground range detected (GRD) images were used, with a spatial resolution of 10 m [27]. Sentinel-2 carries the Multispectral Instrument sensor, capturing reflectance information on the Earth's surface in thirteen spectral bands ranging from the visible to the SWIR, at spatial resolutions between 10 m and 60 m [15]. Only the four bands of blue, green, red, and near-infrared light, with a spatial resolution of 10 m, were used in this work. Although higher-spatial-resolution satellite imagery can aid mangrove ecosystem mapping, only bands with a spatial resolution of 10 m were used in this study.
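As a concrete illustration, the sketch below shows how these two collections can be assembled in the Google Earth Engine Python API for the study period. The study-area rectangle, variable names, and exact filter set are illustrative assumptions, not the authors' published script.

```python
import ee

ee.Initialize()

# Hypothetical study-area rectangle (placeholder coordinates near the Sundarbans).
aoi = ee.Geometry.Rectangle([89.0, 21.6, 89.9, 22.4])

# Sentinel-1 C-band GRD scenes, IW mode, dual polarization (VV + VH);
# no orbit filter, so both ascending and descending passes are kept.
s1 = (ee.ImageCollection('COPERNICUS/S1_GRD')
      .filterBounds(aoi)
      .filterDate('2022-12-01', '2023-02-28')
      .filter(ee.Filter.eq('instrumentMode', 'IW'))
      .filter(ee.Filter.listContains('transmitterReceiverPolarisation', 'VV'))
      .filter(ee.Filter.listContains('transmitterReceiverPolarisation', 'VH'))
      .select(['VV', 'VH']))

# Sentinel-2 TOA reflectance; the four 10 m bands used in the study
# (B2 blue, B3 green, B4 red, B8 NIR) plus QA60 for cloud masking later.
s2 = (ee.ImageCollection('COPERNICUS/S2')
      .filterBounds(aoi)
      .filterDate('2022-12-01', '2023-02-28')
      .select(['B2', 'B3', 'B4', 'B8', 'QA60']))
```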

2.3.2. Pre-processing and Cloud Masking

Before data analysis, a series of careful preprocessing steps were performed to prepare the Sentinel-1 and Sentinel-2 image data for rigorous examination. These preparatory measures were indispensable for guaranteeing that the data were complete and reliable. The preprocessing workflow began with the Sentinel-1 data, where radiometric calibration was applied to ensure that acquisitions at different times and by different sensors delivered consistent radiometric values. Speckle reduction techniques were then applied, which reduced the inherent noise in the SAR imagery and improved its clarity and interpretability [8]. Geometric correction procedures were also applied to remove geometric distortions present in the images and ensure accurate spatial representation.
In parallel, several preprocessing operations were performed on the Sentinel-2 imagery to deal with the peculiarities of optical remote sensing data. Atmospheric correction was essential here, removing atmospheric effects such as haze and scattering to obtain accurate radiometric values; correcting for atmospheric distortions preserves the true surface reflectance values and increases the effectiveness of LULC analysis [28]. Similarly, geometric correction was applied to rectify distortions introduced during image acquisition by a variety of factors, especially terrain relief and sensor viewing angles. The geometric accuracy of imagery is important in all spatial analyses and mapping applications, since it ensures that features are represented accurately in their real geographic locations [29].
Cloud cover is one of the major problems in optical remote sensing applications; it can mask underlying features and complicate accurate analysis. To reduce cloud impacts on the analysis, a strict 5% cloud filter was applied to the Sentinel-2 imagery [30]. This cloud mask excludes areas with high cloud coverage, ensuring that only clear, cloud-free pixels are used for further analysis. For both Sentinel-1 and Sentinel-2, the extensive preprocessing workflow was driven by the objective of achieving optimum data quality and consistency [31].
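Continuing the sketch above, the 5% scene-level cloud filter maps directly onto the collection's CLOUDY_PIXEL_PERCENTAGE property; the per-pixel QA60 bitmask step shown here is a common complementary measure and is our assumption rather than a step the paper spells out.

```python
def mask_s2_clouds(image):
    """Mask pixels flagged as cloud (bit 10) or cirrus (bit 11) in QA60."""
    qa = image.select('QA60')
    cloud_free = (qa.bitwiseAnd(1 << 10).eq(0)
                  .And(qa.bitwiseAnd(1 << 11).eq(0)))
    return image.updateMask(cloud_free)

# Keep only scenes with < 5% cloud cover, then mask residual cloudy pixels
# and drop the QA60 band from the analysis stack.
s2_clear = (s2
            .filter(ee.Filter.lt('CLOUDY_PIXEL_PERCENTAGE', 5))
            .map(mask_s2_clouds)
            .select(['B2', 'B3', 'B4', 'B8']))
```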

2.3.3. Reference Samples

The visual interpretation in this work relied on reference samples obtained from Google Earth's high-resolution satellite imagery. In addition, an existing mangrove ecosystem map and false-color composite satellite images were used. Homogeneous sites were chosen as references to avoid fragmented regions and reduce the difficulty posed by mixed pixels. All classes received suitable reference samples with an appropriate spatial distribution. The reference samples were then divided into two groups: 70% training samples and 30% test samples.
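A minimal sketch of the 70/30 split, assuming the reference samples live in a feature collection whose points carry an integer class label; the asset path, property name, and seed are placeholders.

```python
# Hypothetical asset of digitized reference points with a 'class' property.
samples = ee.FeatureCollection('users/example/reference_points')

# Attach a uniform random number to each point, then split 70/30.
samples = samples.randomColumn('random', 42)
training = samples.filter(ee.Filter.lt('random', 0.7))
testing = samples.filter(ee.Filter.gte('random', 0.7))
```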

2.4. Sentinel Data Processing

In the current study, a number of preparatory steps were needed to ensure that the Sentinel datasets used were appropriate and accurate for ecosystem mapping. For the radar-based analysis, the data source was Sentinel-1 GRD data accessed through the Google Earth Engine (GEE) platform. These data are readily available within GEE's image collection with the ID COPERNICUS/S1_GRD and have been partly pre-processed by the GEE developers.
Figure 2. Overall Framework of the Collocation Process.
In parallel, Sentinel-2 top of atmosphere reflectance data, obtained from the COPERNICUS/S2 image collection within GEE, were utilized for optical analysis. These data underwent radiometric calibration to derive Top of Atmosphere (TOA) reflectance values, enabling accurate quantification of surface properties. This Sentinel data preparation process effectively eliminates noisy, dark, and overly bright pixels, enhancing the quality of the optical data for classification purposes [28].
For ecosystem mapping, a combination of SAR (Synthetic Aperture Radar) and optical features was employed to leverage the complementary strengths of both data types. Eight SAR features (4 VV + 4 VH) and 16 optical features (4 blue + 4 green + 4 red + 4 NIR bands) were used simultaneously to provide a comprehensive dataset for the classification tasks. By incorporating both SAR and optical data, the study aimed to capitalize on the synergistic advantages of these datasets for improved precision and accuracy in ecosystem mapping.
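The paper does not list how the 8 SAR and 16 optical features are derived, so the sketch below assumes they are median composites over four equal sub-periods of the December 2022 to February 2023 window (4 periods x VV/VH = 8 SAR bands; 4 periods x 4 optical bands = 16), concatenated into a single collocated image.

```python
def period_composites(collection, periods):
    """Median composite per sub-period; returns a list of ee.Image objects."""
    return [collection.filterDate(start, end).median()
            for start, end in periods]

# Assumed sub-periods spanning the study window.
periods = [('2022-12-01', '2022-12-23'), ('2022-12-23', '2023-01-14'),
           ('2023-01-14', '2023-02-05'), ('2023-02-05', '2023-02-28')]

s1_feats = period_composites(s1, periods)        # 4 x (VV, VH) = 8 SAR bands
s2_feats = period_composites(s2_clear, periods)  # 4 x 4 bands = 16 optical bands

# 24-band collocated stack; ee.Image.cat renames duplicate band names
# automatically (VV, VV_1, ...).
stack = ee.Image.cat(s1_feats + s2_feats).clip(aoi)
```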

2.5. Classification Process

In ecosystem mapping from satellite images, the choice of classifier plays a very important role in obtaining accurate and reliable classification results. Random Forest (RF) stands out as an efficient tool for complex classification cases and was chosen in this study because it has proven effective and robust in a number of ecosystem mapping projects. The classification method was pixel-based and supervised, which requires extensive training samples representative of the satellite image's idiosyncrasies [32]. On GEE, the data were split into a 70% training set and a 30% test set, allowing rigorous validation of the classification results. RF was the primary algorithm due to its ability to combine a collection of Classification and Regression Trees (CART) into powerful ensembles [17]. Among its parameters, the number of trees is one of the most important in determining how well the Random Forest classifier performs and how complex it becomes.
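Training and applying the Random Forest on the collocated stack then takes only a few lines; the tree count of 100 is an assumed value, since the paper does not report its final setting.

```python
# Extract the 24 band values at the reference points (10 m scale).
train_data = stack.sampleRegions(collection=training,
                                 properties=['class'], scale=10)
test_data = stack.sampleRegions(collection=testing,
                                properties=['class'], scale=10)

# Random Forest ensemble of CART trees; numberOfTrees is an assumption.
rf = (ee.Classifier.smileRandomForest(numberOfTrees=100)
      .train(features=train_data, classProperty='class',
             inputProperties=stack.bandNames()))

classified = stack.classify(rf)  # per-pixel ecosystem map
```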
Figure 3. Methodological Flowchart Depicting the Pre-processing and Classification Workflow for Sentinel-1 and Sentinel-2 Satellite Imagery with Collocation and Subsequent Accuracy Assessment.
Random Forest classification offers several advantages, including its ability to handle high-dimensional data, accommodate non-linear relationships between variables, and mitigate overfitting [32]. Furthermore, the ensemble nature of RF enables robustness against noise and outliers, thereby enhancing the overall reliability of classification outcomes [33]. By harnessing the capabilities of the Random Forest Classifier within the GEE environment, this study aspired to achieve precise and reliable classification outcomes for ecosystem mapping. These outcomes are poised to contribute significantly to the understanding of ecological dynamics and facilitate informed decision-making for planning, conservation, and management endeavors.
Figure 4. Model Architecture of SVM based RF.
The combination of SVM and RF provides a strong classification algorithm that is effective with complex, high-dimensional data [6]. In essence, it inherits SVM's strength in finding the optimal hyperplane for class separation and couples it with RF's ensemble strength, yielding improved classification accuracy and more robust results than either model achieves alone [34]. The workflow of the proposed SVM-based RF model begins with rigorous preprocessing to guarantee compatibility of the input data. The model then extracts relevant spectral and spatial features from the satellite images: optical bands (blue, green, red, and near-infrared) plus radar backscatter values from Sentinel-1, all of which serve as input variables for classification [29]. The SVM-based RF classification algorithm is implemented inside Google Earth Engine, parameterized with an RBF kernel and a regularization parameter for the SVM, and with the number of trees and maximum tree depth for the RF. The workflow follows common machine learning practice: first, an SVM model is trained to derive class probabilities; second, the RF classifier is trained on these class probabilities together with the original spectral and spatial features, producing a powerful combined model.
During training, the model uses a dataset of labeled pixels representing the different land cover classes to learn the intricate relationships between the input features and the land cover categories. The model thus harnesses the discriminative learning of SVM and the ensemble learning of RF in a streamlined manner to classify land cover with high precision. Once trained, the SVM-based RF model is applied over the complete study area to produce detailed land cover classification maps: every pixel is classified using the learned rules and feature relationships, providing comprehensive spatial information on the distribution of land cover.
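Because the exact GEE implementation of the hybrid is not reproduced in the paper, the two-stage idea can be sketched with scikit-learn as follows: an RBF-kernel SVM first produces per-class probabilities, which are appended to the original spectral and backscatter features before the Random Forest is trained. The hyperparameter values are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def train_svm_rf(X_train, y_train):
    """Stage 1: RBF SVM with probability outputs. Stage 2: RF trained on the
    original features augmented with the SVM class probabilities."""
    scaler = StandardScaler().fit(X_train)
    svm = SVC(kernel='rbf', C=10.0, gamma='scale',  # assumed hyperparameters
              probability=True).fit(scaler.transform(X_train), y_train)
    proba = svm.predict_proba(scaler.transform(X_train))
    rf = RandomForestClassifier(n_estimators=100, random_state=0)
    rf.fit(np.hstack([X_train, proba]), y_train)
    return scaler, svm, rf

def predict_svm_rf(model, X):
    """Apply the combined model to new pixels."""
    scaler, svm, rf = model
    proba = svm.predict_proba(scaler.transform(X))
    return rf.predict(np.hstack([X, proba]))
```

Here X_train would hold the per-pixel feature values (the 24-band stack sampled at the training points) and y_train the integer class labels.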

2.6. Accuracy Assessment

Accuracy assessment was conducted through a confusion matrix, using the following equations.
$$\text{Overall accuracy} = \frac{\text{Number of correctly classified pixels}}{\text{Total number of validation pixels}} \times 100 \tag{1}$$

$$\text{Feature-specific accuracy} = \frac{\text{Number of correctly classified pixels of a class}}{\text{Number of validation pixels of that class}} \times 100 \tag{2}$$

$$\text{Kappa} = \frac{\text{total accuracy} - \text{random accuracy}}{1 - \text{random accuracy}} \tag{3}$$
One of the key tools in the map classification validation phase, particularly for LULC categorization, is the confusion matrix. It provides a detailed account of classification performance by comparing the predicted classes against reference, or ground truth, data. The confusion matrix gives information not only on overall accuracy but also on class-specific accuracy, generally expressed as user's accuracy and producer's accuracy. The overall accuracy expresses how well the classifier performs across all classes; it is simply the number of correctly classified pixels divided by the total number of validation pixels, as shown in equation (1) [37], giving the percentage of the total area classified correctly. Feature-specific accuracy is the accuracy of individual classes or features within the map. It is computed in the same way as overall accuracy but considers only the pixels of a particular class, as indicated in equation (2) [8]. This metric is particularly useful for gaining clear insight into how well the classifier works for each class and for identifying which classes are classified with higher or lower accuracy.
Another key statistic in classification validation is the kappa coefficient, equation (3) [38]. It measures the agreement between the classification output and the reference data, adjusting for the agreement that might occur by chance. In the formula, total accuracy is the overall accuracy from equation (1), and random accuracy is the accuracy that would be expected if the classification had been carried out randomly. A kappa of 1 indicates perfect agreement; a kappa of 0 indicates no agreement beyond chance. In the present work, confusion matrices and kappa coefficients are calculated to evaluate the accuracy of classification algorithms such as Random Forest, whose performance has been compared in various studies of LULC classification using remote sensing data and Google Earth Engine. These metrics form the basis for assessing the reliability of a classification result and for making informed decisions based on a classified map.
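Equations (1) to (3) can be computed directly from a confusion matrix; the helper below is a small self-contained sketch that derives overall accuracy, per-class producer's accuracy, and kappa from label arrays. Within GEE, the same values are available from the classified test set via errorMatrix('class', 'classification') and its accuracy() and kappa() methods.

```python
import numpy as np

def classification_metrics(y_true, y_pred):
    """Overall accuracy, per-class producer's accuracy, and kappa,
    computed from the confusion matrix as in equations (1)-(3)."""
    labels = np.unique(np.concatenate([y_true, y_pred]))
    cm = np.zeros((labels.size, labels.size), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[np.searchsorted(labels, t), np.searchsorted(labels, p)] += 1

    n = cm.sum()
    overall = np.trace(cm) / n                                  # equation (1)
    producers = np.diag(cm) / cm.sum(axis=1)                    # per class, equation (2)
    expected = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2   # chance agreement
    kappa = (overall - expected) / (1 - expected)               # equation (3)
    return overall * 100, producers * 100, kappa
```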

3. Results

3.1. Ecosystem Exploring through Mapping of Sentinel-1 and Sentinel-2 Fusion versus Single Sentinel-2 Imagery

3.1.1. Mangrove Ecosystem

In this detailed study, the mangrove ecosystem was mapped into five ecological classes: water bodies, mudflats, bare earth, dense vegetation, and sparse vegetation. To ensure classification accuracy, a rigorous dataset of 806 training points was used. The classification results from the two datasets, the collocated Sentinel-1 radar and Sentinel-2 optical image and the stand-alone Sentinel-2 optical image, are presented in Figure 5. Figure 5(a) shows the mangrove ecosystem map interpreted from the solitary Sentinel-2 image; it gives a granular view of the landscape, detailing the unique spectral signatures captured by the optical sensor. Conversely, Figure 5(b) presents the enhanced ecosystem map achieved through the synergistic fusion of the Sentinel-1 and Sentinel-2 datasets, in which the combination of radar and optical data gives a more complete picture of the Bangladesh mangrove ecosystem. The comparative visualizations in Figure 5 highlight the worth of multi-sensor data integration in environmental mapping for extracting information on the spatial distribution and health of mangrove ecosystems.
Table 2 breaks down the area of each land cover class captured by both methods. Overall, there were notable differences between the two classification outputs. The combined image extracted a larger water body area, about 9.64 km², versus about 7.91 km² for the single Sentinel-2 image. The combined image also identified a larger area of sparse vegetation, 15.79 km², compared with 12.28 km² for the single Sentinel-2 image. However, it identified smaller areas of mudflat (0.05 km²), bare land (2.71 km²), and dense vegetation (24.65 km²) than the single Sentinel-2 image (0.14 km², 4.48 km², and 28.05 km², respectively).

3.1.2. Riverine Ecosystem

This work deals with detailed mapping of the riverine ecosystem, targeting three main classes: waterbody, sandbar with vegetation, and sandbar. The classification was based on 622 training points to ensure accuracy. The classified images obtained from the collocated dataset and the single Sentinel-2 dataset are shown in Figure 6: Figure 6(a) presents the ecosystem map of the riverine area from the single Sentinel-2 image, while Figure 6(b) presents the ecosystem map obtained from the fusion of Sentinel-1 and Sentinel-2 data for the riverine ecosystem of Bangladesh.
Table 3 compares the land cover area of each class between the two classification methods, listing the area in km² covered by each class in both the Sentinel-2 image and the collocated image. The collocated image indicates a larger waterbody area of 2.65 km², against 2.47 km² for the single Sentinel-2 image. On the other hand, the single Sentinel-2 image identified more sandbar with vegetation, covering 2.57 km², while the collocated image covered 2.41 km² of this class. For the sandbar class, both methods gave comparable results, with the Sentinel-2 image covering 2.14 km² and the collocated image 2.12 km². These findings, detailed in Figure 6 and Table 3, underscore the differences in classification outcomes between the single Sentinel-2 image and the fused Sentinel-1 and Sentinel-2 data.

3.1.3. Haor Ecosystem

This research focuses on the detailed mapping of the wetland ecosystem of Hakaluki Haor in Bangladesh. Five major classes, namely waterbody, dense vegetation, cropland, built-up area, and bare land, were chosen for detailed study of this ecosystem. The classification, a central part of this research, used 470 selected training points to ensure accurate results.
Classified images resulting from the single Sentinel-2 dataset and fusion of Sentinel-1 and Sentinel-2 data are presented in Figure 7. Figure 7(a) shows the ecosystem map of Hakaluki Haor using a single Sentinel-2 image, while Figure 7(b) shows the ecosystem map using the fusion data that provided the overall view of the wetland ecosystem.
Table 4 compares the area of each class between the two classification methods, listing the area in km² covered by every class for both the Sentinel-2 image and the collocated image. Interestingly, the two classifications show clear differences. For instance, the waterbody class covered 2.15 km² in the collocated image, against 2.73 km² in the single Sentinel-2 image. In addition, the collocated image better identified dense vegetation (1.20 km²), cropland (0.64 km²), built-up areas (0.12 km²), and bare land (0.59 km²) compared with the single Sentinel-2 image.

3.2. Classification Performance Evaluation of Ecosystem Mapping

3.2.1. Mangrove Ecosystem

The comparative analysis of the mangrove ecosystem's classification accuracy between the Sentinel-2 image and the collocated image, which integrates Sentinel-1 and Sentinel-2 data, provides valuable insights into the performance of these methods for different ecosystem classes (Figure 8).
Figure 8 shows a comparative analysis between the classification results of the Sentinel-2 imagery and the collocated image (a fusion of Sentinel-1 and Sentinel-2 data), focusing on their ability to delineate waterbodies within the study area.
The results indicate that the collocated image clearly outperformed the Sentinel-2 image in detecting finer details of the waterbodies, demonstrating a synergistic effect from combining radar with optical data. This gain in accuracy for waterbody delineation illustrates the added value of data fusion techniques in environmental monitoring. By playing off the strengths of the two sensing technologies, the collocated image delivers insights that support informed strategies for preserving crucial environmental features, sustaining sound management practices, and underpinning the use of integrated data sources to improve the reliability of remote sensing applications.
In Figure 9, the accuracy of the water body classification from Sentinel-2 imagery alone is already high at 94.52%. With the radar data integrated, this accuracy increases to 98.18%. This increase underlines the crucial role of data fusion in accurately identifying water bodies within mangrove landscapes. Accurate mapping of these aquatic systems supports ecological studies and improves understanding of the complex aquatic ecosystems typical of mangrove environments.
Class-wise accuracy shows that the integrated dataset achieves high precision, exceeding the already strong performance of Sentinel-2 imagery alone in identifying mudflats, which ensures reliable mapping of mudflat regions for complete ecosystem analysis. Bare land areas are classified very well from Sentinel-2 imagery, with an accuracy of 94.16%, making it effective at distinguishing open land classes within the mangrove ecosystem; integrating Sentinel-1 with Sentinel-2 further increases accuracy in this class, underpinning the role of integrated data in improving land use mapping for land management strategies and for explaining the impacts of human activities on mangrove ecosystems. Dense vegetation is classified with 98.14% accuracy on the integrated dataset, compared with a slightly lower 96.50% from Sentinel-2 imagery. Similarly, the integrated dataset gives 98.28% accuracy for sparse vegetation, against 94.04% using Sentinel-2 data only. Overall, the integration of Sentinel-1 and Sentinel-2 data consistently improves the accuracy of mangrove ecosystem classification across all classes: classifications using Sentinel-2 imagery alone reach an overall accuracy of 91.56%, while the integrated approach reaches 94.17%. The improved overall accuracy obtained with the integrated dataset in Figure 9(b) clearly shows the benefits of fusing complementary satellite data sources, yielding a more detailed and accurate depiction of mangrove ecosystems, which naturally possess complex and varying terrain. Although an increase in overall accuracy of 2.61% may appear modest, it can be important in large-scale mapping projects, providing more reliable data for ecological research, conservation efforts, and land management strategies.

3.2.2. Riverine Ecosystem

The comparison of accuracy between the Sentinel-2 imagery and the collocated imagery for the riverine ecosystem classes, as depicted in Figure 10, offers crucial insights into the effectiveness of these methods in accurately classifying the various elements of this habitat.
Regarding the outlining of sandbars, Figure 10 shows that the collocated image had a clear advantage, achieving an accuracy of 81.63% against 76.53% for the Sentinel-2 image, indicating a substantial gap in the ability to detect this terrain feature.
Taken holistically, the collocated image is the more accurate modality, with an overall accuracy of 87.30%, higher than the 85.21% of the Sentinel-2 image. However, the differences between the two for individual ecosystem classes reveal clear variations in the performance characteristics of these imaging modalities and suggest complementary strengths that could be exploited to optimize overall performance.
Figure 11 shows a detailed quantitative comparison of the classification outcomes for the two datasets: Sentinel-2 and the collocated image combining Sentinel-1 and Sentinel-2 data. The collocated image provides more precise delimitation of the water bodies and sandbars in the case study, with features much more clearly distinguished than in the Sentinel-2 image.
Improved delineation from this collocated data fusion approach helps represent the fine details of the riverine ecosystem. Regarding overall accuracy, Sentinel-2 imagery provided a baseline level of precision in classifying the riverine ecosystem components, but the collocated image outperformed it, improving accuracy by 2.09%. This increase is considerable, especially for large-scale ecological studies. The higher accuracy of the collocated image thus indicates a more reliable dataset on which researchers and policymakers can base well-informed decisions concerning conservation and land management in riverine landscapes [1]. For this reason, the collocated image consistently outperforms the Sentinel-2 image in classifying the components of the riverine ecosystem.

3.2.3. Wetland Ecosystem

The accuracy analysis for the Haor ecosystem, comparing the Sentinel-2 image against the collocated image, provides insights into the precision of these methods in classifying the diverse features of this wetland habitat (Figure 12).
This figure presents a detailed comparative analysis between the classification results of Sentinel-2 and the collocated image within the wetland ecosystem. Visual examination shows that the collocated image provides better delineation of the water bodies in this wetland setting, underscoring the accuracy and efficiency of the collocated data fusion technique in identifying water features across varied and ecologically sensitive environments.
Figure 13 presents a comparative assessment of the classification accuracies for the various ecosystem classes in the Hakaluki Haor wetland mapping case study using the two imaging modalities. Both modalities, Sentinel-2 and the collocated image fusing Sentinel-1 and Sentinel-2, yield an identical accuracy of 94.31% for the waterbody class, showing equal efficiency in identifying this class.
On the other hand, performance diverges for dense vegetation mapping (Figure 13a): Sentinel-2 imagery has the edge, returning an accuracy of 88.16% against the collocated image's 86.84%. For cropland, both imaging modalities perform equally, returning an accuracy of 86.14%, suggesting that the two techniques are on par and equally reliable for this class.

3.3. Significant Ecosystem-Specific Differences in Synergizing Image Accuracy

The overall accuracy of the collocated images varies across the distinct ecosystems, as highlighted in Figure 14. These accuracy values reflect the effectiveness of integrating Sentinel-1 and Sentinel-2 data in differentiating and classifying diverse habitats.
In the Mangrove Ecosystem, the collocated image demonstrates an exceptional overall accuracy of 94.17%. This high accuracy signifies the robust capability of the integrated data in precisely delineating complex mangrove landscapes, capturing nuances in waterbodies, mudflats, dense and sparse vegetation, and bare lands. Accurate mapping in mangrove ecosystems is crucial for understanding ecological transitions and supporting conservation efforts, making the accuracy particularly significant.
Moving to the riverine ecosystem, the overall accuracy remains high at 87.30% for the collocated image, as shown in Figure 14. Though marginally lower than in the mangrove ecosystem, this level is commendable considering the challenges the riverine landscape poses, and it underscores the reliability of the collocated image in capturing the dynamics of riverine environments, providing essential data for ecological studies and environmental monitoring. For the wetland ecosystem, the accuracy is 85.96%. Wetland habitats are composed of varied features, as observed here in the Hakaluki Haor of Bangladesh, so this result shows that the collocated image is effective at distinguishing and classifying the variable components of wetland environments. While these marginal differences occur, all three ecosystem types still show high accuracy rates, underlining the robustness and versatility of the synergized imaging approach used in this work. At the same time, the variations hint at potential for optimizing and tailoring the imaging techniques to better capture the unique features of each ecosystem type, increasing overall accuracy and reliability in mapping efforts.

3.3.1. Accuracy and Kappa Coefficients Findings from Confusion Matrix

The comparative analysis in this study underlines the major improvements in ecosystem mapping accuracy brought about by fusing Sentinel-1 and Sentinel-2 image data, compared with using Sentinel-2 alone. Using a Random Forest classifier, the study rigorously assessed the comparative accuracy and reliability of the ecosystem classifications for the three ecosystems under investigation: mangrove, riverine, and wetland areas.
The accuracy of the mangrove ecosystem classification improved from 92.32% using Sentinel-2 alone to 94.17% when integrating Sentinel-1 and Sentinel-2 satellite image data, and the Kappa Coefficient improved from 90.05% to 92.99%. This can largely be attributed to the complementarity of Sentinel-1 radar data with Sentinel-2 optical imagery, providing a richer dataset that captures the complexity of mangrove structures and the surrounding surface water bodies more effectively.
In the riverine ecosystem, the accuracy increase was even larger, rising from 85.21% to 92.19% with the fusion approach, while the Kappa Coefficient jumped from 77.40% to 87.55%. This dramatic improvement may stem from the fusion technique's improved ability to distinguish the intricate interfaces of water bodies and land in riverine landscapes.
Table 5. Confusion Matrix: A comparative analysis of the accuracy and Kappa Coefficient for ecosystem mapping in Mangrove, Riverine, and Wetland areas using Sentinel-2 imagery and collocated fusion imagery. The table is organized to facilitate direct comparison of the two imaging techniques within each area/region.
Wetland mapping also benefited from the fusion approach, with accuracy increasing from 82.51% to 85.96% and the Kappa Coefficient from 78.33% to 82.42%. The enhanced spectral and spatial resolution provided by the fusion of Sentinel-1 and Sentinel-2 imagery likely facilitated a more accurate classification of the diverse vegetation and water coverage characteristic of wetland ecosystems.

4. Discussion

The methodology developed in this study, which integrates SAR and optical data, opens the door for remote sensing applications in ecosystem mapping. This will not only increase the accuracy of classification in such tasks but will also provide a repeatable model for similar studies to be taken elsewhere. Such future research should also consider including other data sources, such as LiDAR and high-resolution satellite image integration, in an effort to further increase the accuracy and detail of ecosystem classification. On the other hand, enhancing the current algorithms for machine learning may be one way to achieve new discoveries relevant to the complex interactions occurring within and between the ecosystems.
Although the results from this pilot study are very promising, the classification methodology applied has some intrinsic limitations. The proposed SVM-based RF model may introduce variability into the classification results, since the SVM algorithm is sensitive to the choice of kernel functions and hyperparameter tuning. Future studies should assess advanced machine learning techniques, for example deep learning architectures recently reported to be more robust and adaptable to the complex, nonlinear relationships present in ecosystem data. Additional supporting information sources, such as digital elevation models, soil maps, and climatic data, could also be integrated to improve the accuracy and interpretability of the ecosystem classifications. Adding these extra layers of information on the underlying biophysical and environmental drivers would be key to gaining better, more cogent insights into the ecosystems in question.
The results of this study demonstrate a high level of accuracy in ecosystem mapping, which has significant implications for environmental conservation and management. It provides better ways to assess the extent of ecosystems and a more detailed representation to support decision-making for habitat protection, land use planning, and biodiversity conservation. This increased mapping capacity helps policymakers and conservation practitioners monitor ecological changes and track the efficiency of conservation interventions, fostering prioritization for conservation and restoration.
In a nutshell, the integration of Sentinel-1 and Sentinel-2 data has high potential to increase accuracy in ecosystem mapping. The findings of the present study highlight the synergistic benefits of integrating SAR and optical sensors in complex environments such as mangroves, riverine systems, and wetlands. The methodological novelties and enhanced classification accuracy from data fusion carry important consequences for the conservation and management of ecosystems, providing a more solid underpinning for environmental monitoring, habitat assessment, and conservation planning. Such advances are likely to be extended in the coming years with the integration of new data sources, advanced analysis methods, and complementary environmental data, deepening current knowledge and stewardship of the natural world.

5. Conclusion

The current study investigates the potential of integrating Sentinel-1 and Sentinel-2 satellite data to improve the accuracy of ecosystem mapping for three different ecosystems: wetlands, riverine areas, and mangrove forests. The Random Forest classification method is used to evaluate the effectiveness of the strategy in efficiently distinguishing these key habitat types. The results strongly confirm that combined Sentinel-1 and Sentinel-2 data outperform single-sensor Sentinel-2 imagery for highly accurate ecosystem mapping across all the studied ecosystems.
Comparing the fusion of Sentinel-1 and Sentinel-2 images against a single-sensor Sentinel-2 image, both classified by a Random Forest classifier, makes a significant contribution to enhancing precision in ecosystem mapping. The results show that the fusion approach consistently outperforms single-sensor imagery in classification accuracy across these very different ecosystems, whether mangroves, riverine areas, or wetlands. The analysis indicates that the collocated imagery, resulting from the fusion of Sentinel-1 and Sentinel-2 data, excels at capturing features of complex ecosystems, particularly water bodies, vegetation density, and land cover types. This confirms that multi-sensor data fusion is highly relevant to improving classification results in densely vegetated or cloud-covered areas.
The present research also underlines the need to integrate radar and optical data sources in pursuit of more accurate and complete ecosystem mapping. It further highlights the power of machine learning algorithms, of which Random Forest is one example, at harnessing fused data for accurate classification of ecosystem components. This paper presents multi-sensor data fusion as a potent method for improving accuracy in ecosystem mapping: by fusing the complementary strengths of Sentinel-1 and Sentinel-2 data, more detail can be captured across diverse ecosystems, mitigating the limitations of single-sensor approaches.
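As a hedged illustration of the workflow summarized above, the sketch below stacks hypothetical co-registered Sentinel-2 (blue, green, red, NIR) and Sentinel-1 (VV, VH) arrays into a collocated feature set, applies a 70/30 train/test split mirroring the study's protocol, and reports overall accuracy with a Random Forest classifier. All array names, shapes, and labels are illustrative assumptions rather than the study's actual data or code.

```python
# Minimal sketch of the fusion pipeline: band stacking ("collocation"),
# 70/30 split, Random Forest classification, and overall accuracy.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rows, cols = 200, 200
rng = np.random.default_rng(0)
s2 = rng.normal(size=(4, rows, cols))    # hypothetical B2, B3, B4, B8 reflectance
s1 = rng.normal(size=(2, rows, cols))    # hypothetical VV, VH backscatter (dB)

# Collocation here is simple band stacking of co-registered scenes.
stack = np.concatenate([s2, s1], axis=0)      # shape (6, rows, cols)
features = stack.reshape(6, -1).T             # shape (pixels, 6)

# Labeled pixels would normally come from digitized signatures;
# a random subset with synthetic labels stands in for them here.
idx = rng.choice(features.shape[0], size=2000, replace=False)
labels = rng.integers(0, 5, size=idx.size)

X_train, X_test, y_train, y_test = train_test_split(
    features[idx], labels, test_size=0.3, random_state=1
)
rf = RandomForestClassifier(n_estimators=300, random_state=1)
rf.fit(X_train, y_train)
print("Overall accuracy:", round(accuracy_score(y_test, rf.predict(X_test)), 3))

# Classify the full scene and restore the image grid for mapping.
class_map = rf.predict(features).reshape(rows, cols)
```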

Acknowledgments

We wish to extend our sincere appreciation to the European Space Agency for making available the Sentinel-1 and Sentinel-2 data used in this study.

References

  1. J. Maes et al., “Mapping ecosystem services for policy support and decision making in the European Union,” Ecosyst. Serv., vol. 1, no. 1, pp. 31–39, Jul. 2012. [CrossRef]
  2. D. Rocchini et al., “Uncertainty in ecosystem mapping by remote sensing,” Comput. Geosci., vol. 50, pp. 128–135, Jan. 2013. [CrossRef]
  3. P. J. Tanis, O. E. Nieweg, R. A. Valdés Olmos, E. J. Th Rutgers, and B. B. Kroon, “History of sentinel node and validation of the technique,” Breast Cancer Res., vol. 3, no. 2, p. 109, Apr. 2001. [CrossRef]
  4. H. P. Forghani-zadeh and G. A. Rincon-Mora, “An Accurate, Continuous, and Lossless Self-Learning CMOS Current-Sensing Scheme for Inductor-Based DC-DC Converters,” IEEE J. Solid-State Circuits, vol. 42, no. 3, pp. 665–679, Mar. 2007. [CrossRef]
  5. R. G. Congalton and K. Green, Assessing the Accuracy of Remotely Sensed Data: Principles and Practices, 3rd ed. CRC Press, 2019. Accessed: Mar. 27, 2024. [Online]. Available: https://www.taylorfrancis.com/books/9780429629358.
  6. J. Dong, D. Zhuang, Y. Huang, and J. Fu, “Advances in Multi-Sensor Data Fusion: Algorithms and Applications,” Sensors, vol. 9, no. 10, pp. 7771–7784, Sep. 2009. [CrossRef]
  7. A. Naboureh, A. Li, J. Bian, G. Lei, and M. Amani, “A Hybrid Data Balancing Method for Classification of Imbalanced Training Data within Google Earth Engine: Case Studies from Mountainous Regions,” Remote Sens., vol. 12, no. 20, p. 3301, Oct. 2020. [CrossRef]
  8. A. Asokan, J. Anitha, M. Ciobanu, A. Gabor, A. Naaji, and D. J. Hemanth, “Image Processing Techniques for Analysis of Satellite Images for Historical Maps Classification—An Overview,” Appl. Sci., vol. 10, no. 12, p. 4207, Jun. 2020. [CrossRef]
  9. M. J. Campbell et al., “A multi-sensor, multi-scale approach to mapping tree mortality in woodland ecosystems,” Remote Sens. Environ., vol. 245, p. 111853, Aug. 2020. [CrossRef]
  10. A. Ghorbanian, S. Zaghian, R. M. Asiyabi, M. Amani, A. Mohammadzadeh, and S. Jamali, “Mangrove Ecosystem Mapping Using Sentinel-1 and Sentinel-2 Satellite Images and Random Forest Algorithm in Google Earth Engine,” Remote Sens., vol. 13, no. 13, p. 2565, Jun. 2021. [CrossRef]
  11. A. P. Cracknell, “The development of remote sensing in the last 40 years,” Int. J. Remote Sens., vol. 39, no. 23, pp. 8387–8427, Dec. 2018. [CrossRef]
  12. F. A. Al-Wassai and N. V. Kalyankar, “Major Limitations of Satellite images,” 2013. [CrossRef]
  13. A. Naboureh, A. Li, J. Bian, G. Lei, and M. Amani, “A Hybrid Data Balancing Method for Classification of Imbalanced Training Data within Google Earth Engine: Case Studies from Mountainous Regions,” Remote Sens., vol. 12, no. 20, p. 3301, Oct. 2020. Accessed: Mar. 27, 2024. [Online]. Available: https://www.mdpi.com/2072-4292/12/20/3301.
  14. D. Geudtner, R. Torres, P. Snoeij, M. Davidson, and B. Rommen, “Sentinel-1 System capabilities and applications,” in 2014 IEEE Geoscience and Remote Sensing Symposium, Jul. 2014, pp. 1457–1460. [CrossRef]
  15. D. Phiri, M. Simwanda, S. Salekin, V. Nyirenda, Y. Murayama, and M. Ranagalage, “Sentinel-2 Data for Land Cover/Use Mapping: A Review,” Remote Sens., vol. 12, no. 14, p. 2291, Jul. 2020. [CrossRef]
  16. D. Geudtner, R. Torres, P. Snoeij, M. Davidson, and B. Rommen, “Sentinel-1 System capabilities and applications,” in 2014 IEEE Geoscience and Remote Sensing Symposium, Jul. 2014, pp. 1457–1460. [CrossRef]
  17. M. Sheykhmousa, M. Mahdianpari, H. Ghanbari, F. Mohammadimanesh, P. Ghamisi, and S. Homayouni, “Support Vector Machine Versus Random Forest for Remote Sensing Image Classification: A Meta-Analysis and Systematic Review,” IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens., vol. 13, pp. 6308–6325, 2020. [CrossRef]
  18. I. Ahmad, M. Basheri, M. J. Iqbal, and A. Rahim, “Performance Comparison of Support Vector Machine, Random Forest, and Extreme Learning Machine for Intrusion Detection,” IEEE Access, vol. 6, pp. 33789–33795, 2018. [CrossRef]
  19. A. Ghosh, R. Sharma, and P. K. Joshi, “Random forest classification of urban landscape using Landsat archive and ancillary data: Combining seasonal maps with decision level fusion,” Appl. Geogr., vol. 48, pp. 31–41, Mar. 2014. [CrossRef]
  20. Ahmed, B. J. Deaton, R. Sarker, and T. Virani, “Wetland ownership and management in a common property resource setting: A case study of Hakaluki Haor in Bangladesh,” Ecol. Econ., vol. 68, no. 1, pp. 429–436, Dec. 2008. [CrossRef]
  21. G. Polash, M. Islam, Md. M. Alam, and A. Q. Al-Amin, “Dynamics of changes in land use and land cover and perceived causes in Hakaluki Haor, Bangladesh,” J. Environ. Plan. Manag., vol. 66, no. 6, pp. 1209–1228, May 2023. [CrossRef]
  22. M. R. Rahman, “River dynamics – a geospatial analysis of Jamuna (Brahmaputra) River in Bangladesh during 1973–2019 using Landsat satellite remote sensing data and GIS,” Environ. Monit. Assess., vol. 195, no. 1, p. 96, Jan. 2023. [CrossRef]
  23. S. M. D.-U. Islam and M. A. H. Bhuiyan, “Sundarbans mangrove forest of Bangladesh: causes of degradation and sustainable management options,” Environ. Sustain., vol. 1, no. 2, pp. 113–131, Jun. 2018. [CrossRef]
  24. G. Talukdar, A. K. Sarma, and R. K. Bhattacharjya, “Assessment of land use change in riverine ecosystem and utilizing it for socioeconomic benefit,” Environ. Monit. Assess., vol. 194, no. 11, p. 841, Nov. 2022. [CrossRef]
  25. S. Abdikan, F. B. Sanli, M. Ustuner, and F. Calò, “Land Cover Mapping Using Sentinel-1 SAR Data,” Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci., vol. XLI-B7, pp. 757–761, Jun. 2016. [CrossRef]
  26. F. Filipponi, “Sentinel-1 GRD Preprocessing Workflow,” in 3rd International Electronic Conference on Remote Sensing, MDPI, Jun. 2019, p. 11. [CrossRef]
  27. J. Haas and Y. Ban, “Sentinel-1A SAR and Sentinel-2A MSI data fusion for urban ecosystem service mapping,” Remote Sens. Appl. Soc. Environ., vol. 8, pp. 41–53, Nov. 2017. [CrossRef]
  28. D. Frantz, “FORCE—Landsat + Sentinel-2 Analysis Ready Data and Beyond,” Remote Sens., vol. 11, no. 9, p. 1124, May 2019. [CrossRef]
  29. J. Schiewe, “Integration of multi-sensor data for landscape modeling using a region-based approach,” ISPRS J. Photogramm. Remote Sens., vol. 57, no. 5–6, pp. 371–379, Apr. 2003. [CrossRef]
  30. S. Sarker and M. S. G. Adnan, “Evaluating multi-hazard risk associated with tropical cyclones using the fuzzy analytic hierarchy process model,” Nat. Hazards Res., vol. 4, no. 1, pp. 97–109, Mar. 2024. [CrossRef]
  31. A. H. Sanchez et al., “Comparison of Cloud Cover Detection Algorithms on Sentinel-2 Images of the Amazon Tropical Forest,” Remote Sens., vol. 12, no. 8, p. 1284, Apr. 2020. [CrossRef]
  32. A. Ghosh, R. Sharma, and P. K. Joshi, “Random forest classification of urban landscape using Landsat archive and ancillary data: Combining seasonal maps with decision level fusion,” Appl. Geogr., vol. 48, pp. 31–41, Mar. 2014. [CrossRef]
  33. M. Pal, “Random forest classifier for remote sensing classification,” Int. J. Remote Sens., vol. 26, no. 1, pp. 217–222, Jan. 2005. [CrossRef]
  34. W. M. Brown, “Synthetic Aperture Radar,” IEEE Trans. Aerosp. Electron. Syst., vol. AES-3, no. 2, pp. 217–229, Mar. 1967. [CrossRef]
  35. H. P. Forghani-zadeh and G. A. Rincon-Mora, “An Accurate, Continuous, and Lossless Self-Learning CMOS Current-Sensing Scheme for Inductor-Based DC-DC Converters,” IEEE J. Solid-State Circuits, vol. 42, no. 3, pp. 665–679, Mar. 2007. [CrossRef]
  36. R. G. Congalton and K. Green, Assessing the Accuracy of Remotely Sensed Data: Principles and Practices, 3rd ed. CRC Press, 2019. [CrossRef]
  37. H. Ismail and K. Jusoff, “Satellite Data Classification Accuracy Assessment Based from Reference Dataset,” 2008.
  38. S. Jog and M. Dixit, “Supervised classification of satellite images,” in 2016 Conference on Advances in Signal Processing (CASP), Pune, India: IEEE, Jun. 2016, pp. 93–98. [CrossRef]
Figure 5. Comparative Analysis of Mangrove Ecosystem Mapping. Figure (a) displays the Sentinel-2 image classification, while figure (b) shows the collocated image classification, both highlighting land cover types.
Figure 6. Comparative Analysis of Riverine Ecosystem Mapping Using: (a) Sentinel-2 image; (b) collocated image. Panel (a) shows the Sentinel-2 satellite image interpretation, with water bodies, sandbars, and vegetation-rich areas distinctly classified; panel (b) displays the collocated image's enhanced accuracy, revealing a more nuanced distribution of these key ecological features.
Figure 7. Comparative Analysis of Wetland Ecosystem Mapping in Hakaluki Haor: (a) depicts the ecosystem distribution using a Sentinel-2 satellite image, while (b) shows the distribution using a collocated image.
Figure 8. Comparative study showcasing the effectiveness of Sentinel-2 satellite imagery against collocated ground-truth data for identifying mangrove vegetation and waterbody classes.
Figure 9. Accuracy Assessment of Mangrove Ecosystem Classification (a) showing the class-wise accuracy for different land cover types, and (b) depicting the overall accuracy percentage of the Sentinel-2 and collocated data methods.
Figure 10. Comparative Analysis of Riverine Ecosystem Classification Accuracy: (a) demonstrates the class-wise accuracy for the riverine ecosystem, (b) presents the overall accuracy of the ecosystem delineation, comparing the performance of Sentinel-2 data against the collocated data source.
Figure 11. Comparative analysis of the classification outcome of the two different images. The left panel displays a classified Sentinel-2 image, the center panel shows the corresponding Google Earth image for reference, and the right panel presents the classification results using a collocated image.
Figure 12. Comparative analysis of the classification outcome of the two different images.
Figure 13. (a) Class-wise accuracy comparison of wetland ecosystem classification using Sentinel-2 and collocated imagery, (b) Overall accuracy comparison, highlighting the superior performance of collocated imagery over Sentinel-2.
Figure 14. Bar chart displaying the accuracy of image synthesis for Mangrove, Riverine, and Wetland ecosystems, with the highest accuracy observed in Mangrove ecosystems.
Table 1. Description of Ecosystem Classes for Remote Sensing Analysis.

| Ecosystem Type | LULC Class | Description |
|---|---|---|
| Haor Basin ecosystem | Waterbody | Land covered by water in the form of rivers, ponds, and beels. |
| Haor Basin ecosystem | Dense Vegetation | Areas covered by evergreen trees that grow naturally on the land and along the river. |
| Haor Basin ecosystem | Crop Land | Land normally used for producing crops. |
| Haor Basin ecosystem | Bare Land | Land with no vegetation and abandoned crops. |
| Haor Basin ecosystem | Human Settlement | Land occupied by human-built settlements. |
| Floodplain ecosystem | Waterbody | Land covered by water in the form of a river. |
| Floodplain ecosystem | Sandbar with Vegetation | Sandbars with vegetation cover. |
| Floodplain ecosystem | Sandbar | Sandbars with no vegetation, abandoned. |
| Mangrove ecosystem | Waterbody | Land covered by water in the form of a river. |
| Mangrove ecosystem | Mudflat | A stretch of muddy land. |
| Mangrove ecosystem | Bare Land | Land with no vegetation and abandoned crops. |
| Mangrove ecosystem | Dense Vegetation | Areas covered by evergreen trees that grow naturally on the land and along the river. |
| Mangrove ecosystem | Shrubs | Small mangrove trees. |
Table 2. Comparison of the Area Captured by the Two Methods (Mangrove Ecosystem).

| Classes | Sentinel-2 Image Area (km²) | Collocated Image Area (km²) |
|---|---|---|
| Waterbody | 7.91 | 9.64 |
| Mudflat | 0.14 | 0.05 |
| Bare Land | 4.48 | 2.71 |
| Dense Vegetation | 28.05 | 24.65 |
| Sparse Vegetation | 12.28 | 15.79 |
Table 3. Comparison of the Area Captured by the Two Methods (Riverine Ecosystem).

| Classes | Sentinel-2 Image Area (km²) | Collocated Image Area (km²) |
|---|---|---|
| Waterbody | 2.47 | 2.65 |
| Sandbar with Vegetation | 2.57 | 2.41 |
| Sandbar | 2.14 | 2.12 |
Table 4. Comparison of the Area Captured by the Two Methods (Wetland Ecosystem, Hakaluki Haor).

| Classes | Sentinel-2 Image Area (km²) | Collocated Image Area (km²) |
|---|---|---|
| Waterbody | 2.73 | 2.15 |
| Dense Vegetation | 0.41 | 1.20 |
| Cropland | 0.67 | 0.64 |
| Built-up Area | 0.57 | 0.12 |
| Bare Land | 0.32 | 0.59 |
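The per-class areas reported in Tables 2-4 can be reproduced from any classified raster by counting pixels per class and multiplying by the pixel footprint. The short sketch below assumes 10 m Sentinel pixels (100 m² each) and a hypothetical class_map array; the class names are borrowed from Table 2 purely for illustration.

```python
# Hedged sketch: per-class area tabulation from a classified raster,
# assuming 10 m x 10 m pixels. The class map here is synthetic.
import numpy as np

class_map = np.random.default_rng(3).integers(0, 5, size=(2000, 2000))
class_names = ["Waterbody", "Mudflat", "Bare Land",
               "Dense Vegetation", "Sparse Vegetation"]

pixel_area_km2 = (10 * 10) / 1e6          # 100 m^2 expressed in km^2
codes, counts = np.unique(class_map, return_counts=True)
for code, count in zip(codes, counts):
    print(f"{class_names[code]}: {count * pixel_area_km2:.2f} km^2")
```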
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.