Preprint Article, Version 1 (not peer-reviewed)

SDF2Net: Shallow to Deep Feature Fusion Network for PolSAR Image Classification

Version 1 : Received: 3 July 2024 / Approved: 3 July 2024 / Online: 4 July 2024 (14:46:48 CEST)

How to cite: Alkhatib, M. Q.; Zitouni, M. S.; Al-Saad, M.; Abura'ed, N.; Al-Ahmad, H. SDF2Net: Shallow to Deep Feature Fusion Network for PolSAR Image Classification. Preprints 2024, 2024070385. https://doi.org/10.20944/preprints202407.0385.v1

Abstract

Polarimetric Synthetic Aperture Radar (PolSAR) images encompass valuable information that can facilitate extensive land cover interpretation and generate diverse output products. Extracting meaningful features from PolSAR data poses challenges distinct from those encountered in optical imagery. Deep Learning (DL) methods offer effective solutions for overcoming these challenges in PolSAR feature extraction. Convolutional Neural Networks (CNNs) play a crucial role in capturing PolSAR image characteristics by leveraging kernel capabilities to exploit local information and the complex-valued nature of PolSAR data. In this study, a novel three-branch complex-valued CNN fusion network, named Shallow to Deep Feature Fusion Network (SDF2Net), is proposed for PolSAR image classification. To validate the performance of the proposed method, classification results are compared against multiple state-of-the-art approaches using the Airborne Synthetic Aperture Radar (AIRSAR) datasets of Flevoland and San Francisco, and the ESAR Oberpfaffenhofen dataset. The results indicate that the proposed approach improves overall accuracy (OA), with gains of 1.3% and 0.8% on the AIRSAR datasets and 0.5% on the ESAR dataset. The analyses conducted on the Flevoland data underscore the effectiveness of the SDF2Net model, revealing a promising OA of 96.01% even with only a 1% sampling ratio.
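The two core ideas named in the abstract, complex-valued convolution and a shallow-to-deep three-branch fusion, can be illustrated with a minimal PyTorch sketch. The layer counts, channel widths, magnitude-based fusion, patch size, and class count below are illustrative assumptions, not the authors' exact SDF2Net configuration, and the attention mechanism mentioned in the keywords is omitted for brevity.

```python
# Minimal sketch (assumptions, not the published SDF2Net architecture):
# (1) a complex-valued 2D convolution, and (2) three branches of increasing
# depth whose pooled features are fused before classification.
import torch
import torch.nn as nn


class ComplexConv2d(nn.Module):
    """Complex convolution: (A + iB)*(x + iy) = (Ax - By) + i(Ay + Bx)."""

    def __init__(self, in_ch, out_ch, kernel_size, padding=0):
        super().__init__()
        self.conv_re = nn.Conv2d(in_ch, out_ch, kernel_size, padding=padding)
        self.conv_im = nn.Conv2d(in_ch, out_ch, kernel_size, padding=padding)

    def forward(self, x_re, x_im):
        out_re = self.conv_re(x_re) - self.conv_im(x_im)
        out_im = self.conv_re(x_im) + self.conv_im(x_re)
        return out_re, out_im


class ThreeBranchFusion(nn.Module):
    """Shallow, medium, and deep branches; pooled magnitudes are concatenated."""

    def __init__(self, in_ch=6, ch=16, n_classes=15):
        super().__init__()
        self.b1 = nn.ModuleList([ComplexConv2d(in_ch, ch, 3, padding=1)])   # shallow
        self.b2 = nn.ModuleList([ComplexConv2d(in_ch, ch, 3, padding=1),
                                 ComplexConv2d(ch, ch, 3, padding=1)])      # medium
        self.b3 = nn.ModuleList([ComplexConv2d(in_ch, ch, 3, padding=1),
                                 ComplexConv2d(ch, ch, 3, padding=1),
                                 ComplexConv2d(ch, ch, 3, padding=1)])      # deep
        self.head = nn.Linear(3 * ch, n_classes)

    def run_branch(self, branch, x_re, x_im):
        for layer in branch:
            x_re, x_im = layer(x_re, x_im)
            x_re, x_im = torch.relu(x_re), torch.relu(x_im)
        # complex magnitude, then global average pooling to a feature vector
        mag = torch.sqrt(x_re ** 2 + x_im ** 2 + 1e-8)
        return mag.mean(dim=(2, 3))

    def forward(self, x_re, x_im):
        feats = [self.run_branch(b, x_re, x_im) for b in (self.b1, self.b2, self.b3)]
        return self.head(torch.cat(feats, dim=1))  # fused shallow-to-deep features


# Usage on a batch of 12x12 PolSAR patches with 6 complex channels
# (e.g., the upper triangle of the coherency matrix T); sizes are arbitrary.
model = ThreeBranchFusion()
x_re = torch.randn(4, 6, 12, 12)
x_im = torch.randn(4, 6, 12, 12)
print(model(x_re, x_im).shape)  # torch.Size([4, 15])
```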

Keywords

Complex-Valued Convolutional Neural Network (CV-CNN); Polarimetric Synthetic Aperture Radar (PolSAR) Image Classification; Attention Mechanism; Feature Fusion

Subject

Environmental and Earth Sciences, Remote Sensing
