Preprint Article · Version 1 (this version is not peer-reviewed)

Enhanced Prototypical Network with Customized Region-Aware Convolution for Few-Shot SAR ATR

Version 1 : Received: 26 July 2024 / Approved: 29 July 2024 / Online: 29 July 2024 (09:48:38 CEST)

How to cite: Yu, X.; Yu, H.; Liu, Y.; Ren, H. Enhanced Prototypical Network with Customized Region-Aware Convolution for Few-Shot SAR ATR. Preprints 2024, 2024072260. https://doi.org/10.20944/preprints202407.2260.v1

Abstract

With the rapid development and successful application of deep learning in remote sensing, numerous deep-learning-based methods have emerged for synthetic aperture radar (SAR) automatic target recognition (ATR) over the past few years. Most of these methods achieve outstanding recognition performance only when abundant labeled samples are available for training. In real-world scenarios, however, acquiring and annotating large numbers of SAR images is difficult and costly because of the SAR imaging mechanism, which poses a major challenge to existing SAR ATR methods. Therefore, few-shot SAR target recognition, where only a small number of labeled samples are available, is a fundamental problem that needs to be solved. In this paper, we propose a new method, the enhanced prototypical network with customized region-aware convolution (CRCEPN), to tackle few-shot SAR ATR tasks. Specifically, we first develop a feature extraction network built on a customized, region-aware convolution that adaptively adjusts the convolutional kernels and their receptive fields according to each SAR image's own characteristics and the semantic similarity among spatial regions, thereby strengthening its ability to extract informative and discriminative features. To achieve accurate and robust target identification under the few-shot condition, we then propose an enhanced prototypical network that improves the representation ability of the class prototypes by jointly exploiting training and test samples, thereby effectively raising the classification accuracy. In addition, a new hybrid loss is designed to learn a feature space with as much inter-class separability and intra-class tightness as possible, further improving recognition performance. Experiments on the moving and stationary target acquisition and recognition (MSTAR) dataset and the OpenSARShip dataset demonstrate that the proposed method is competitive with state-of-the-art methods for few-shot SAR ATR tasks.
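To make the prototype-based classification and hybrid-loss idea concrete, the following is a minimal PyTorch sketch, not the authors' implementation: it shows a transductive prototypical classifier in which class prototypes are refined with soft assignments of the unlabeled query features, and a hybrid loss that combines cross-entropy (inter-class separability) with a prototype-distance penalty (intra-class tightness). The refinement rule, the function names, and the weight `lam` are illustrative assumptions, since the paper's exact formulation is not given in the abstract.

```python
# Minimal sketch of a transductive prototypical classifier with a hybrid loss.
# All names, the refinement rule, and the weight `lam` are illustrative assumptions.
import torch
import torch.nn.functional as F

def refine_prototypes(protos, query_feats, num_steps=1):
    """Refine class prototypes with soft assignments of (unlabeled) query features."""
    for _ in range(num_steps):
        # Soft assignment of each query feature to the current prototypes.
        logits = -torch.cdist(query_feats, protos)            # [Q, C]
        weights = logits.softmax(dim=1)                        # [Q, C]
        # Blend the support prototypes with the weighted query features per class.
        weighted = weights.t() @ query_feats                   # [C, D]
        protos = (protos + weighted) / (1.0 + weights.sum(0, keepdim=True).t())
    return protos

def episode_loss(support_feats, support_labels, query_feats, query_labels,
                 num_classes, lam=0.1):
    """Hybrid loss on one few-shot episode: cross-entropy + intra-class compactness."""
    # Initial prototypes: per-class mean of the support features.
    protos = torch.stack([support_feats[support_labels == c].mean(0)
                          for c in range(num_classes)])        # [C, D]
    protos = refine_prototypes(protos, query_feats)

    # Nearest-prototype classification of the query set.
    logits = -torch.cdist(query_feats, protos)                 # [Q, C]
    ce = F.cross_entropy(logits, query_labels)

    # Compactness term: pull each query feature toward its own class prototype.
    compact = (query_feats - protos[query_labels]).pow(2).sum(1).mean()
    return ce + lam * compact, logits
```

In practice, `support_feats` and `query_feats` would be the embeddings produced by the region-aware feature extraction network for one episode; the sketch only illustrates how a refined prototype and a two-term loss can be combined.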

Keywords

convolutional neural network (CNN); synthetic aperture radar (SAR); automatic target recognition (ATR); few-shot learning (FSL)

Subject

Engineering, Electrical and Electronic Engineering
