Preprint Article

This version is not peer-reviewed.

Submitted: 15 August 2023; Posted: 16 August 2023


A peer-reviewed article of this preprint also exists.

Abstract
To realize fast and effective synthetic aperture radar (SAR) deception jamming, a high-quality SAR deception jamming template library can be generated by performing sample augmentation on SAR deception jamming templates. However, current sample augmentation schemes for SAR deception jamming templates face certain problems. First, the authenticity of the templates is low due to the lack of speckle noise. Second, the generated templates have low similarity to the target and shadow areas of the input templates. To solve these problems, this study proposes a sample augmentation scheme based on generative adversarial networks, which can generate a high-quality library of SAR deception jamming templates with shadows. The proposed scheme addresses the two aforementioned problems as follows. First, the influence of speckle noise is considered in the network to avoid reduced authenticity in the generated images. Second, a channel attention mechanism module is used to improve the network's ability to learn shadow features, which increases the similarity between the generated template and the shadow area in the input template. Finally, the proposed scheme and the SinGAN scheme are compared regarding the equivalent numbers of looks and the structural similarity between the target and shadow in the sample augmentation results. The comparison results demonstrate that, compared to the templates generated by the SinGAN scheme, those generated by the proposed scheme have target and shadow features similar to those of the original image and incorporate the speckle noise characteristics, resulting in higher authenticity, which helps to achieve fast and effective SAR deception jamming.
Keywords: 
Subject: Computer Science and Mathematics - Artificial Intelligence and Machine Learning

1. Introduction

Synthetic aperture radar (SAR) deception jamming technology is effective in concealing important military facilities and operational equipment [1,2], enabling covert military operations [3,4]. SAR deception jamming technology has the advantage of a low power requirement, making it a popular research topic in SAR jamming [5,6,7,8]. At present, software-level methods for SAR deception jamming include using a SAR deception jamming template library and using electromagnetic scattering models. Among the existing techniques, using a SAR deception jamming template library costs less time and manpower than using electromagnetic scattering models for deception jamming, which facilitates the rapid implementation of deception jamming in practical scenarios. The effect of using a SAR deception jamming template library depends on the refinement level of the deception jamming templates [9,10,11]. Deception jamming templates with low authenticity can be easily detected by an enemy, which reduces the effectiveness of deception jamming. In the SAR imaging of side-looking radar, real targets exhibit shadow features. Therefore, using SAR deception jamming templates with shadows is more deceptive than using templates without shadows. By augmenting the existing templates, an efficient library of SAR deception jamming templates with shadows can be established.
Currently, there are two types of sample augmentation schemes for SAR deception jamming templates with shadows: traditional schemes and deep learning-based schemes. The first type uses traditional techniques, such as translation, rotation, and scaling, to obtain augmented SAR deception jamming template libraries with shadows [12]. However, these schemes do not fundamentally alter the internal information in the images; they only change the shape of the image targets at a geometric level. The processed shadow parts often lose their authentic correspondence with the targets, which limits the utility of deception jamming. The second type uses deep learning-based approaches for image sample augmentation, where deep learning models effectively capture the complex data distribution and features in SAR deception jamming templates with shadows, thus enabling the generation of more realistic and diverse templates [13]. Deep learning schemes can be further divided into those that generate from a single template and those that generate from a dataset. Because individual templates are easier to obtain, single-template generation has advantages in terms of usability [14,15].
However, neither type of scheme considers the influence of SAR's inherent speckle noise, leading to lower similarity with the input template and a significant decrease in the authenticity of the deception jamming templates. Therefore, it is necessary to consider the characteristics of SAR speckle noise and design a fast and accurate sample augmentation network specifically for SAR deception jamming templates with shadows. This will enable the acquisition of a library of shadowed SAR deception jamming templates that is highly adaptable, diverse, and authentic within a short period of time.
The remainder of this paper is organized as follows: Section 2 describes the features of the input SAR deception jamming template with shadows and the training process and structure of the proposed generative adversarial network model. Section 3 evaluates the generated images and compares the proposed method with the SinGAN method. Finally, Section 4 concludes this paper.

2. Materials and Methods

2.1. Characteristics of the input template

During training, the network proposed in this scheme requires a SAR target deceptive jamming template with shadows as input. Based on the speckle noise characteristics and shadow features of the input template, the network performs sample augmentation on the SAR target deceptive jamming template with shadows.
Speckle noise refers to the granular speckle patterns that appear in SAR images due to the interaction of different echo phases during the SAR imaging process [16,17,18]. This noise is an inherent characteristic of SAR images. Coherent speckle noise is a multiplicative noise in SAR images, and a SAR image can be mathematically modeled as follows [19]:
$I(x, y) = A(x, y) \times Z(x, y),$
where $I(x, y)$ represents the observed SAR image, $A(x, y)$ is the ideal image without speckle noise, and $Z(x, y)$ denotes the speckle noise generated during SAR system imaging. The amplitude of speckle noise in a SAR image follows a Rayleigh distribution, which is expressed by [20]:

$P(Z(x, y)) = \dfrac{Z(x, y)}{\sigma^2} \exp\!\left(-\dfrac{Z(x, y)^2}{2\sigma^2}\right),$
where $\sigma^2$ represents the variance.
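As an illustration, the multiplicative model above can be sketched in a few lines of NumPy; the function name and the scale parameter are illustrative choices, not part of the scheme:

```python
import numpy as np

def add_speckle(ideal, sigma=1.0, rng=None):
    """Multiply an ideal amplitude image A(x, y) by Rayleigh-distributed
    speckle Z(x, y), following the multiplicative model I = A * Z."""
    rng = np.random.default_rng(rng)
    # Rayleigh-distributed amplitude speckle, one draw per pixel
    z = rng.rayleigh(scale=sigma, size=ideal.shape)
    return ideal * z
```

Applying this to a noise-free template yields the granular texture characteristic of SAR imagery; in the proposed scheme, simulated speckle of this kind serves as one component of the mixed noise input.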
Given the side-looking imaging geometry of SAR, certain areas of a target may be occluded and not illuminated by the radar, resulting in no echo being generated. As a result, in the image domain, unilluminated areas appear as dark regions, known as shadows [21,22,23]. Since the shadow regions are not illuminated by the radar, a receiver does not receive any echo signals from these areas; thus, there is no interaction between different echo phases, and speckle noise is absent from the shadow regions of a SAR deceptive jamming template. As the pixel intensity values of shadow regions should be minimal in the entire image, shadow points can be identified by searching for the subregion with the minimum average intensity. In a SAR deceptive jamming template with shadows, a sliding window of size $(2C+1) \times (2C+1)$ is moved across the image, and the average intensity value within each window is computed. The shadow point $(x_s, y_s)$ corresponds to the center of the subregion with the minimum average intensity, which can be expressed as follows:
$(x_s, y_s) = \arg\min\limits_{C < x \le B-C,\; C < y \le B-C} \left( \sum\limits_{i=x-C}^{x+C} \sum\limits_{j=y-C}^{y+C} I(i, j) \Big/ (2C+1)^2 \right),$

where $B$ represents the image side length, and $C < (B-1)/2$.
The shadow region can be expressed by:
$I_s = \{ I(x, y) \mid x = x_s,\; y = y_s \}.$
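The sliding-window shadow search can be sketched as a brute-force reference implementation, assuming a square, single-channel image with 0-indexed coordinates (function and variable names are illustrative):

```python
import numpy as np

def find_shadow_point(img, C=2):
    """Slide a (2C+1) x (2C+1) window over the image and return the centre
    (x_s, y_s) of the sub-region with the minimum average intensity."""
    B = img.shape[0]
    best, best_xy = np.inf, None
    for x in range(C, B - C):           # keep the window inside the image
        for y in range(C, B - C):
            m = img[x - C:x + C + 1, y - C:y + C + 1].mean()
            if m < best:
                best, best_xy = m, (x, y)
    return best_xy
```

For production use, the per-window mean would normally be computed with a uniform filter rather than an explicit double loop, but the logic is identical.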

2.2. Proposed Scheme

2.2.1. Scheme Overview

To realize rapid and realistic sample augmentation of SAR deception jamming templates with shadows, it is necessary to design a network that considers the speckle noise and shadow features of the templates. This design aims to generate SAR deception jamming templates with shadows that have higher similarity and accuracy with respect to the input template. Since speckle noise is an inherent noise in SAR images, simulating the speckle noise and using it as one of the noise inputs is essential to enhance the deceptive nature of the generated SAR deception jamming templates with shadows. Through processing by a GAN, the speckle noise can be preserved in the generated templates. Moreover, in SAR deception jamming templates with shadows, the background brightness is usually only slightly lighter than the shadowed areas, resulting in minimal brightness differences between the shadow regions and their surroundings. This makes it challenging for a GAN to learn the shadow features effectively. Therefore, it is necessary to enhance the network's ability to extract shadow features. The specific workflow of the proposed scheme is as follows.
The network proposed in this scheme employs a pyramid-like multiscale structure as an overall framework to capture the internal information on SAR deception jamming templates with shadows. Each level of the pyramidal structure has a GAN responsible for generating and discriminating samples of SAR deception jamming templates with shadows at the current scale[14]. The specific network architecture is shown in Figure 1. This approach requires capturing only structural data of a single SAR deception jamming template with shadows at different scales and using it as a training set. Namely, this approach focuses on capturing both the global information and detailed local information on SAR deception jamming templates with shadows.
In Figure 1, $N$ represents the number of scales in the pyramid, which is defined by the size of the input SAR deception jamming template with shadows; $\{u_N, u_{N-1}, \ldots, u_0\}$ represents the set of downsampled versions of the original input SAR deception jamming template with shadows at different scales, with a downsampling factor of $r^n$, where $n \in \{0, 1, 2, \ldots, N\}$ and $r > 1$. The pyramid model starts training from the coarsest scale, and the first generator $G_N$ generates augmented samples $\tilde{u}_N$ from the input mixed noise $z_N$, which is a combination of Gaussian white noise $z_g$ and speckle noise $z_s$. The specific process can be expressed as follows:
$\tilde{u}_N = G_N(z_N),$
After sample $\tilde{u}_N$ is generated from the mixed noise $z_N$, the upsampled result of $\tilde{u}_N$ and Gaussian white noise are both input to the next scale's generator $G_{N-1}$, which generates a new sample $\tilde{u}_{N-1}$. This process is performed iteratively, and the output of each scale's generator can be represented as follows:
$\tilde{u}_{n-1} = G_{n-1}\left[ z_{n-1},\; (\tilde{u}_n)\!\uparrow^r \right], \quad n \le N,$

where $\tilde{u}_{n-1}$ represents the generator output at the $(n-1)$th scale, $G_{n-1}$ is the generator at that scale, $z_{n-1}$ is the mixed noise input at the $(n-1)$th scale, and $(\tilde{u}_n)\!\uparrow^r$ is the upsampled output of the $n$th-scale generator.
The noise $z_n$ at the $n$th scale and the upsampled output $(\tilde{u}_{n+1})\!\uparrow^r$ of the generator at the $(n+1)$th scale are simultaneously input into generator $G_n$. The main function of the generator is to generate the details missing from $(\tilde{u}_{n+1})\!\uparrow^r$ and incorporate them into it. This process yields a new sample of the SAR deception jamming template with shadows, denoted by $\tilde{u}_n$, which can be expressed as follows:
$\tilde{u}_n = (\tilde{u}_{n+1})\!\uparrow^r + f_n\left[ z_n + (\tilde{u}_{n+1})\!\uparrow^r \right],$

where $f_n$ represents the mapping from the upsampled output $(\tilde{u}_{n+1})\!\uparrow^r$ and noise $z_n$ to the details generated at the $n$th scale.
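A minimal sketch of this coarse-to-fine sampling chain, with placeholder generator functions standing in for the trained networks and nearest-neighbour interpolation standing in for the upsampling operator (all names are illustrative):

```python
import numpy as np

def upsample(img, r=4/3):
    """Nearest-neighbour upsampling by factor r (a stand-in for the
    upsampling operator written as (.)^r in the text)."""
    h, w = img.shape
    H, W = int(round(h * r)), int(round(w * r))
    yi = np.minimum((np.arange(H) / r).astype(int), h - 1)
    xi = np.minimum((np.arange(W) / r).astype(int), w - 1)
    return img[np.ix_(yi, xi)]

def generate_pyramid(generators, noises, r=4/3):
    """Coarse-to-fine sampling: the coarsest generator maps mixed noise z_N
    to u_N; each finer generator adds residual detail to the upsampled
    previous output, mirroring u_n = (u_{n+1})^r + f_n[z_n + (u_{n+1})^r].

    `generators` and `noises` are ordered fine-to-coarse (index 0 = scale 0).
    """
    u = generators[-1](noises[-1])                 # scale N: u_N = G_N(z_N)
    for G, z in zip(reversed(generators[:-1]), reversed(noises[:-1])):
        up = upsample(u, r)                        # (u_{n+1})^r
        u = up + G(z + up)                         # residual refinement
    return u
```

The residual form means each generator only has to learn the detail missing at its scale, which is the key to training one GAN per pyramid level.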
Generators at all scales share a similar structure. The entire training process progresses from bottom to top, from coarse to fine scales. At each scale, the output $\tilde{u}_n$ of generator $G_n$, in addition to being passed to the generator at the next scale, is fed to the discriminator $D_n$. The discriminator $D_n$ compares the generated output $\tilde{u}_n$ of the $n$th-scale generator with $u_n$, the input SAR deception jamming template with shadows downsampled to the current scale. This process continues until discriminator $D_n$ is unable to distinguish between real and fake samples. A SAR deception jamming template with shadows $u_n$ is composed of three regions with different features: the target region $I_t$, the shadow region $I_s$, and the background region $I_b$, as given below. The target region $I_t$ contains complex and bright detailed information with regular shapes; the shadow region $I_s$ is characterized by darker areas and relatively clean content; the background region $I_b$ usually exhibits distinct texture details and lacks clear geometric shapes, making it easier for the network to learn.
$u_n = I_t + I_s + I_b,$

2.2.2. Specific Description of the Scheme

1. Generator Structure
A generator $G_N$ uses the Gaussian white noise $z_g$ as its original input and incorporates the speckle noise $z_s$ to form a mixed noise input. Between the five fully convolutional networks composed of convolutional layers (Conv) [24], batch normalization layers (BN), and leaky rectified linear units (Leaky ReLU), a spatial attention mechanism (SAM) block is introduced [25,26,27,28]. This mechanism aims to enhance the network's ability to learn the target and shadow regions. At the image region level, the SAM block helps the network capture the high-response areas in a feature map, particularly the regions corresponding to the shadows $I_s$, and facilitates the processing of the shadow features in the SAR deception jamming template $u_n$. Since the target and its shadow are crucial during the learning process, and the extraction of shadow features is challenging, an attention mechanism is adopted to improve the network's capability of extracting the shape and contour features of the shadow region $I_s$ in a SAR deception jamming template $u_n$. The inception block [29], which is placed in front of generator $G_N$, operates at multiple scales, thus enabling a more detailed extraction of the shape contours and internal details of a SAR target and its shadow, thereby enhancing the authenticity of the generated samples. This block also reduces redundant information and accelerates the convergence speed. The residual dense block with the attention mechanism fully uses the features extracted by convolutional layers at various levels [30], further improving the shadow feature extraction capability. Additionally, it prevents the gradient vanishing problem that may occur in deep networks and enhances network stability. The structure of generator $G_N$ is shown in Figure 2.
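As a toy illustration of how a spatial attention gate re-weights feature maps, the following sketch pools across channels and rescales the features; the pooling choices and the fixed weights are illustrative assumptions, not the exact SAM block of the scheme:

```python
import numpy as np

def spatial_attention(feat, w_avg=0.5, w_max=0.5):
    """Toy spatial-attention gate on a (channels, H, W) feature map:
    pool across channels, combine average- and max-maps, squash to (0, 1)
    with a sigmoid, and rescale the features so that high-response regions
    (e.g. shadow areas) are emphasised."""
    avg = feat.mean(axis=0)                 # (H, W) channel-average map
    mx = feat.max(axis=0)                   # (H, W) channel-max map
    att = 1.0 / (1.0 + np.exp(-(w_avg * avg + w_max * mx)))  # sigmoid gate
    return feat * att                       # broadcast gate over channels
```

In the actual network, the combination of the pooled maps would be a learned convolution rather than fixed weights, but the gating principle is the same.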
2. Discriminator Structure
Discriminator $D_N$ adopts the patch-GAN approach [31,32,33], which is inspired by the Markovian discriminator concept. Discriminator $D_N$ consists of five fully convolutional layers that use a downsampling scheme to capture the data distribution at the current scale, as shown in Figure 3. The discriminator takes both the input SAR deception jamming template with shadows $u_n$ and the generated template $\tilde{u}_n$ as input data. The fully convolutional network learns the internal distribution information of the two input templates at the same scale and, by calculating the loss function, discriminates between the real template $u_n$ and the generated template $\tilde{u}_n$ at the current scale. In the adversarial game between generator $G_N$ and discriminator $D_N$, the generated SAR deception jamming templates with shadows become increasingly realistic. The comparison and loss calculation contrast the template generated at the current scale with the input template downsampled to that scale.
3. Loss Function
The model training starts from the coarsest scale and follows the multiscale structure shown in Figure 1. After a scale is trained, the corresponding GAN for that scale is fixed. The training loss of the $n$th GAN includes the adversarial loss $S_{\mathrm{loss\_adv}}$ and the reconstruction loss $S_{\mathrm{loss\_rec}}$, which can be expressed as follows:
$S_{\mathrm{loss\_train}} = \min\limits_{G_n} \max\limits_{D_n} S_{\mathrm{loss\_adv}}(G_n, D_n) + \alpha S_{\mathrm{loss\_rec}}(G_n),$
where α represents the weight of the reconstruction loss in the training loss.
Adversarial loss: Each scale's generator $G_n$ is accompanied by a Markovian discriminator $D_n$, which discriminates the authenticity of the generated shadowed SAR deception jamming templates at that scale. The adversarial loss, evaluated over the mean of the discriminator's output map, can be expressed by:
$S_{\mathrm{loss\_adv}} = \mathrm{adv}\left\{ \mathbb{E}_{u_n \sim p_{\mathrm{data}}}[D(u_n)] + \mathbb{E}_{\tilde{u}_n \sim p_g}[D(\tilde{u}_n)] \right\} + \lambda\, \mathbb{E}_{\hat{u} \sim p_{\hat{u}}}\left[ \left( \left\| \nabla D(\hat{u}) \right\| - 1 \right)^2 \right],$

where $\mathrm{adv}\{\cdot\}$ is the average value function; $p_{\mathrm{data}}$ is the distribution of the real image; $p_g$ is the distribution of the generated image; $u_n \sim p_{\mathrm{data}}$ means $u_n$ obeys $p_{\mathrm{data}}$; $\tilde{u}_n \sim p_g$ means $\tilde{u}_n$ obeys $p_g$; $u_{\mathrm{data}}$ is the concentration area of the real sample; $u_g$ is the concentration area of the generated sample; $\hat{u}$ is randomly interpolated between $u_{\mathrm{data}}$ and $u_g$, i.e., $\hat{u} = \varepsilon u_{\mathrm{data}} + (1 - \varepsilon) u_g$ with $\varepsilon \in [0, 1]$; $\hat{u} \sim p_{\hat{u}}$ means $\hat{u}$ obeys $p_{\hat{u}}$; $D(u_n)$ represents the discriminator output for the input shadowed SAR deception jamming template; $D(\tilde{u}_n)$ represents the discriminator output for the generated shadowed SAR deception jamming template; $\mathbb{E}$ denotes the expectation; $\nabla$ is the gradient operator; $\|\cdot\|$ denotes the $L_1$ norm; $\lambda$ represents the weight of the gradient penalty.
Reconstruction loss: To ensure that a specific noise map reproduces the original image $u_n$, assume that $\{z_N^{\mathrm{rec}}, z_{N-1}^{\mathrm{rec}}, \ldots, z_0^{\mathrm{rec}}\} = \{z^*, 0, \ldots, 0\}$, where $z_n^{\mathrm{rec}}$ is the reconstruction noise at the $n$th scale, $n \in \{0, 1, 2, \ldots, N\}$, and $z^*$ is the fixed noise map; $\tilde{u}_n^{\mathrm{rec}}$ is the image generated at the $n$th scale using this noise. When $n < N$, the reconstruction loss can be expressed by:
$S_{\mathrm{loss\_rec}} = \left\| G_n\!\left( 0,\; (\tilde{u}_{n+1}^{\mathrm{rec}})\!\uparrow^r \right) - u_n \right\|^2,$
When n = N , the reconstruction loss can be expressed by:
$S_{\mathrm{loss\_rec}} = \left\| G_N(z_g^* + z_s^*) - u_N \right\|^2,$
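Given precomputed discriminator outputs and gradient norms at the interpolated points, the two loss terms can be sketched as follows; the autograd machinery needed to obtain the gradient norms is omitted, and the function names and the weight `lam` are illustrative:

```python
import numpy as np

def adversarial_loss(d_real, d_fake, grad_norms, lam=10.0):
    """WGAN-GP-style critic objective as described in the text: a
    Wasserstein term over the patch-discriminator outputs plus
    lam * E[(||grad D(u_hat)|| - 1)^2] at the interpolates."""
    wass = -np.mean(d_real) + np.mean(d_fake)           # critic loss
    gp = lam * np.mean((np.asarray(grad_norms) - 1.0) ** 2)  # gradient penalty
    return wass + gp

def reconstruction_loss(generated_rec, u_n):
    """Squared reconstruction error ||G_n(...) - u_n||^2 between the
    fixed-noise reconstruction and the downsampled input template."""
    return float(np.sum((generated_rec - u_n) ** 2))
```

The reconstruction term anchors each scale's generator to the single input template, while the adversarial term drives diversity; the weight alpha in the training loss balances the two.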

3. Results

3.1. Data Description

The experimental data included images from the MSTAR dataset [34,35,36]. The MSTAR data were collected using the Sandia National Laboratories X-band SAR sensor platform, with a resolution of 0.3 m in the spotlight mode. The publicly available MSTAR dataset consists of ten different categories of ground targets, including armored vehicles (BMP-2, BRDM-2, BTR-60, and BTR-70), tanks (T-62, T-72), rocket launchers (2S1), anti-aircraft units (ZSU-234), trucks (ZIL-131), and bulldozers (D7). Furthermore, the MSTAR dataset covers various depression angles and orientations and has been widely used for testing and performance comparison of SAR automatic target recognition algorithms.
In the experiment, the MSTAR data on a T72 tank with shadows were used as input data for training the network proposed in this scheme. The goal was to generate a library of SAR deception jamming templates with shadows.

3.2. Experimental Result

The main objective of the proposed network is to extract multilevel features of a target and generate SAR deception jamming templates with shadows that have a high similarity to the input shadowed SAR deception jamming templates of the T72 tank. The 50 shadowed SAR deception jamming templates of the T72 tank generated by the proposed network are presented in Figure 4. The comparison between a real shadowed SAR deception jamming template of the T72 tank and one generated by the proposed network is presented in Figure 5.
The SAR deception jamming templates with shadows of the T72 tank generated by the proposed network exhibited prominent speckle noise features, indicating a high level of authenticity. Moreover, the shadow contour of the T72 tank was realistic and well defined, and the edges and internal details of the tank were accurately represented. The visual comparison of the real and generated templates demonstrated that the generated template had a high level of authenticity.

3.3. Effectiveness Analysis of Scheme

3.3.1. Quantitative Analysis of Image Quality

To evaluate the quality of the generated SAR deception jamming templates with shadows, their targets, shadows, and speckle noise were assessed.
The equivalent number of looks (ENL) was used to measure the relative strength of speckle noise in SAR images[37,38,39]. A lower ENL value indicated a greater presence of speckle noise. The ENL was calculated by:
$ENL = \left( \dfrac{\mu}{\sigma} \right)^2,$
where μ represents the mean value of a SAR image, and σ represents the standard deviation of the SAR image. The ENL was calculated for the background regions in each image.
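A direct implementation of the ENL over a selected (e.g. background) region is only one line:

```python
import numpy as np

def enl(region):
    """Equivalent number of looks: (mean / std)^2 over a homogeneous
    region; a lower ENL indicates stronger speckle noise."""
    return (region.mean() / region.std()) ** 2
```

In the experiments the region would be a background crop of each template, where the scene itself is roughly homogeneous and the variation is dominated by speckle.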
The structural similarity (SSIM) index[40,41,42], which reflects the similarity between two images in terms of luminance, contrast, and structure, was also used. A higher SSIM value indicated a higher similarity of images. The SSIM was obtained by:
$SSIM(u_n, \tilde{u}_n) = \dfrac{(2\mu_1\mu_2 + k_1 l^2)(2\sigma_{12} + k_2 l^2)}{(\mu_1^2 + \mu_2^2 + k_1 l^2)(\sigma_1^2 + \sigma_2^2 + k_2 l^2)},$

where $\mu_1$ represents the average value of image $u_n$; $\mu_2$ is the average value of image $\tilde{u}_n$; $\sigma_1$ and $\sigma_2$ represent the standard deviations of images $u_n$ and $\tilde{u}_n$, respectively; $\sigma_{12}$ is the covariance between $u_n$ and $\tilde{u}_n$; $l$ is the dynamic range of the pixel values; $k_1$ and $k_2$ are constants set to default values of 0.0001 and 0.0009, respectively. The SSIM was calculated for the red rectangular regions in each image.
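A minimal single-window SSIM sketch, using the covariance in the numerator and writing the stabilising constants as $k l^2$ terms to match the values quoted in the text (practical SSIM implementations additionally average over local windows, which is omitted here):

```python
import numpy as np

def ssim(a, b, k1=1e-4, k2=9e-4, L=255.0):
    """Global (single-window) SSIM between two images of equal shape;
    sigma12 is the covariance, and c1 = k1*L^2, c2 = k2*L^2 stabilise
    the ratio for near-zero means and variances."""
    mu1, mu2 = a.mean(), b.mean()
    s1, s2 = a.var(), b.var()
    s12 = ((a - mu1) * (b - mu2)).mean()     # covariance
    c1, c2 = k1 * L ** 2, k2 * L ** 2
    return ((2 * mu1 * mu2 + c1) * (2 * s12 + c2)) / \
           ((mu1 ** 2 + mu2 ** 2 + c1) * (s1 + s2 + c2))
```

An identical pair of images gives an SSIM of exactly 1, and the value decreases as luminance, contrast, or structure diverge.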
The evaluation results of the first three images in Figure 4 regarding the ENL and SSIM metrics are presented in Table 1.
Table 1 shows that the original image in Figure 5(a) had an ENL value of 2.5604, while the generated SAR deception jamming templates with shadows had ENL values of 1.9764, 1.9090, and 1.9092, with an average of 1.9315. The difference between the ENL values of the generated templates and the original image was therefore small, corresponding to a similarity of 75.44%. This result suggested that the generated SAR deception templates have high authenticity.
For the original image and the generated SAR deception templates, the SSIM values of the target and shadow regions were calculated. The values were 0.9617, 0.9620, and 0.9643, with an average of 0.9627, indicating a high similarity of 96.27% between the generated and original images. Therefore, the proposed method could generate SAR deception jamming templates with shadows that exhibit high similarity to the original image and have high authenticity.
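The reported ENL averages and the similarity percentage can be checked with a few lines of arithmetic:

```python
# Sanity check of the reported ENL similarity: the ratio of the average
# generated-template ENL to the original image's ENL.
enl_orig = 2.5604
enl_gen = [1.9764, 1.9090, 1.9092]
avg = sum(enl_gen) / len(enl_gen)          # average ENL of generated templates
similarity = avg / enl_orig                # fraction of the original ENL retained
print(round(avg, 4), round(similarity * 100, 2))
```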

3.3.2. Comparison with SinGAN Method

This experiment was performed on shadowed T72 tank samples from the MSTAR dataset. Since SinGAN is currently one of the few methods that can perform sample augmentation on shadowed SAR deception jamming templates with good results, it was employed for image sample augmentation of SAR deception templates with shadows, yielding a dataset of 50 T72 tank SAR deception templates, as shown in Figure 6. The comparison between the real shadowed SAR deception jamming template of the T72 tank and the template generated by the SinGAN method is presented in Figure 7.
Table 2 presents the evaluation results of the first three images regarding the ENL and SSIM metrics.
The results in Table 2 show that the ENL value of the original image in Figure 7(a) was 2.5604, while the ENL values of the generated SAR deception jamming templates with shadows were 1.6677, 1.7150, and 0.5637, with an average of 1.3155. The average ENL of the templates generated by SinGAN differed significantly from the original image's ENL value, indicating a low similarity of only 51.38%. This result suggested that the templates generated by SinGAN exhibited weaker speckle noise than those generated by the proposed approach, resulting in lower authenticity.
For the original image and the templates generated by SinGAN, the SSIM values of the target and shadow regions were calculated. The values were 0.3846, 0.3794, and 0.3925, with an average of 0.3855, indicating a 38.55% similarity between the generated templates and the original image. These values were much lower than those achieved by the proposed approach for the target and shadow regions. Therefore, the proposed approach could generate SAR deception jamming templates with shadows that exhibit higher similarity to the original image and higher authenticity.

4. Discussion

To achieve fast and effective SAR deception jamming, it is necessary to perform sample augmentation on SAR deception jamming templates to generate a high-quality template dataset. Currently, the existing sample augmentation methods for SAR deception jamming templates face two main problems: low authenticity due to the absence of speckle noise and low similarity between the shadow regions in the generated templates and the original image. Therefore, this paper proposes a sample augmentation method based on GANs that can generate high-quality SAR deception jamming templates with shadows.
The proposed method adopts a pyramid-style multiscale structure as an overall framework to capture the internal information of SAR deception jamming templates with shadows. Each level of the pyramidal structure has a GAN responsible for generating and discriminating SAR deception jamming samples at that level. The generator uses residual dense blocks with attention mechanisms, multiscale modules, and region attention modules to enhance the network's ability to learn shadow features. In addition, the speckle noise is introduced as input data to the generator, ensuring that the generated images contain the characteristic features of speckle noise. The discriminator adopts a patch-GAN approach with five fully convolutional layers used to assess the quality of the generated images and compute the corresponding loss function, which iteratively improves both the generator and the discriminator so that increasingly realistic images are produced.
The effectiveness of the proposed method is demonstrated by comparing its results with those of the SinGAN method regarding two evaluation metrics, the ENL and SSIM values. The comparison results show that the proposed approach achieves a 75.44% similarity between the generated and real images in terms of speckle noise, which is significantly higher than the 51.38% achieved by the SinGAN method. Moreover, the SSIM between the generated and real images for targets and shadows reaches 96.27% with the proposed method, surpassing the 38.55% of the SinGAN approach. This validates the effectiveness of the proposed approach in generating SAR deception jamming templates with shadows.
In future research, more complex inception modules, such as Inception V3, could be considered to improve the computational efficiency of the generator further.

Author Contributions

Conceptualization, S.L.; methodology, Q.Z.; validation, Q.Z., W.L. and G.L.; formal analysis, Y.L.; investigation, K.C.; data curation, W.L.; writing—original draft preparation, G.L.; writing—review and editing, S.L.; supervision, K.C.; project administration, Y.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Stable-Support Scientific Project of the China Research Institute of Radiowave Propagation, grant number A132003W02.

Data Availability Statement

Not applicable.

Acknowledgments

This experiment was supported by the Aerospace Information Innovation Institute of the Chinese Academy of Sciences and the China Radio Propagation Institute, and we would like to express our heartfelt thanks!

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Brown, W.M. Synthetic Aperture Radar. IEEE Trans. Aerosp. Electron. Syst. 1967, AES-3, 217–229.
  2. Doerry, A.W.; Dickey, F.M. Synthetic aperture radar. Opt. Photonics News. 2004, 15, 28–33. [Google Scholar] [CrossRef]
  3. Qin, J.; Liu, Z.; Ran, L.; Xie, R.; Tang, J.; Zhu, H. An SAR Image Automatic Target Recognition Method Based on the Scattering Parameter Gaussian Mixture Model. Remote Sens. 2023, 15, 3800. [Google Scholar] [CrossRef]
  4. Pei, J.; Huo, W.; Wang, C.; Huang, Y.; Zhang, Y.; Wu, J.; Yang, J. Multiview deep feature learning network for SAR automatic target recognition. Remote Sens. 2021, 13, 1455. [Google Scholar] [CrossRef]
  5. Zhou F, Zhao B, Tao M, et al. A large scene deceptive jamming method for space-borne SAR. IEEE Transactions on Geoscience and Remote Sensing, 2013, 51(8): 4486-4495. [CrossRef]
  6. Long S, Hong-rong Z, Yue-sheng T, et al. Research on deceptive jamming technologies against SAR. In Proceedings of the 2009 2nd Asian-Pacific Conference on Synthetic Aperture Radar. IEEE, China, 26-29 October 2009; pp. 521-525. [CrossRef]
  7. Wang H, Zhang S, Wang W Q, et al. Multi-scene deception jamming on SAR imaging with FDA antenna. IEEE Access, 2019, 8: 7058-7069. [CrossRef]
  8. Sun Q, Shu T, Yu K B, et al. Efficient deceptive jamming method of static and moving targets against SAR. IEEE Sensors Journal, 2018, 18(9): 3610-3618. [CrossRef]
  9. Tian T, Zhou F, Bai X, et al. A partitioned deceptive jamming method against TOPSAR. IEEE Transactions on Aerospace and Electronic Systems, 2019, 56(2): 1538-1552. [CrossRef]
  10. Zhao B, Huang L, Li J, et al. Deceptive SAR jamming based on 1-bit sampling and time-varying thresholds. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2018, 11(3): 939-950. [CrossRef]
  11. Zhao B, Huang L, Li J, et al. Target reconstruction from deceptively jammed single-channel SAR. IEEE Transactions on Geoscience and Remote Sensing, 2017, 56(1): 152-167. [CrossRef]
  12. Vlahakis V, Ioannidis M, Karigiannis J, et al. Archeoguide: an augmented reality guide for archaeological sites. IEEE Computer Graphics and Applications, 2002, 22(5): 52-60. [CrossRef]
  13. Wenzel M. Generative Adversarial Networks and Other Generative Models. In Machine Learning for Brain Disorders. New York, NY: Springer US, 2023: 139-192.
  14. Shaham T R, Dekel T, Michaeli T. Singan: Learning a generative model from a single natural image. In Proceedings of the IEEE/CVF international conference on computer vision, Korea, 27 October-2 November 2019; pp. 4570-4580.
  15. Fan W, Zhou F, Zhang Z, et al. Deceptive jamming template synthesis for SAR based on generative adversarial nets. Signal processing, 2020, 172: 107528. [CrossRef]
  16. Goodman J W. Some fundamental properties of speckle. JOSA, 1976, 66(11): 1145-1150. [CrossRef]
  17. Lee J S, Grunes M R, De Grandi G. Polarimetric SAR speckle filtering and its implication for classification. IEEE Transactions on Geoscience and remote sensing, 1999, 37(5): 2363-2373. [CrossRef]
  18. Raney R K, Wessels G J. Spatial considerations in SAR speckle consideration. IEEE Transactions on Geoscience and Remote Sensing, 1988, 26(5): 666-672. [CrossRef]
  19. Mullissa A G, Marcos D, Tuia D, et al. DeSpeckNet: Generalizing deep learning-based SAR image despeckling. IEEE Transactions on Geoscience and Remote Sensing, 2020, 60: 1-15. [CrossRef]
  20. Lee J S, Jurkevich L, Dewaele P, et al. Speckle filtering of synthetic aperture radar images: A review. Remote sensing reviews, 1994, 8(4): 313-340. [CrossRef]
21. Tang X, Zhang X, Shi J, et al. SAR deception jamming target recognition based on the shadow feature. In Proceedings of the 2017 25th European Signal Processing Conference (EUSIPCO). IEEE, Greece, 28 August-2 September 2017; pp. 2491-2495. [CrossRef]
  22. Papson S, Narayanan R M. Classification via the shadow region in SAR imagery. IEEE Transactions on Aerospace and Electronic Systems, 2012, 48(2): 969-980. [CrossRef]
23. Cui J, Gudnason J, Brookes M. Radar shadow and superresolution features for automatic recognition of MSTAR targets. In Proceedings of the International Radar Conference, 2005. IEEE, USA, 9-12 May 2005; pp. 534-539. [CrossRef]
  24. Nebauer C. Evaluation of convolutional neural networks for visual recognition. IEEE transactions on neural networks, 1998, 9(4): 685-696. [CrossRef]
25. Zhu X, Cheng D, Zhang Z, et al. An empirical study of spatial attention mechanisms in deep networks. In Proceedings of the IEEE/CVF international conference on computer vision, Korea, 27 October-2 November 2019; pp. 6688-6697. [CrossRef]
  26. Chun M M, Jiang Y. Contextual cueing: Implicit learning and memory of visual context guides spatial attention. Cognitive psychology, 1998, 36(1): 28-71. [CrossRef]
  27. Hoffman J E, Subramaniam B. The role of visual attention in saccadic eye movements. Perception & psychophysics, 1995, 57(6): 787-795. [CrossRef]
  28. Deubel H, Schneider W X. Saccade target selection and object recognition: Evidence for a common attentional mechanism. Vision research, 1996, 36(12): 1827-1837. [CrossRef]
29. Szegedy C, Vanhoucke V, Ioffe S, et al. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE conference on computer vision and pattern recognition, USA, 26 June-1 July 2016; pp. 2818-2826.
  30. Zhang Y, Tian Y, Kong Y, et al. Residual dense network for image super-resolution. In Proceedings of the IEEE conference on computer vision and pattern recognition, USA, 18-22 June 2018; pp. 2472-2481.
  31. Wang Y, Yan X, Guan D, et al. Cycle-snspgan: Towards real-world image dehazing via cycle spectral normalized soft likelihood estimation patch gan. IEEE Transactions on Intelligent Transportation Systems, 2022, 23(11): 20368-20382. [CrossRef]
  32. Leihong Z, Zhai Y, Xu R, et al. An End-to-end Computational Ghost Imaging Method that Suppresses. Phys. Rev. Lett, 2002, 89(11): 113601. [CrossRef]
  33. Saypadith S. A Study on Anomaly Detection in Surveillance. neural networks, 2006, 313(5786): 504-507.
  34. Lin C, Peng F, Wang B H, et al. Research on PCA and KPCA self-fusion based MSTAR SAR automatic target recognition algorithm. Journal of Electronic Science and Technology, 2012, 10(4): 352-357.
  35. Keydel E R, Lee S W, Moore J T. MSTAR extended operating conditions: A tutorial. Algorithms for Synthetic Aperture Radar Imagery III, 1996, 2757: 228-242.
  36. Yang Y, Qiu Y, Lu C. Automatic target classification-experiments on the MSTAR SAR images. In Proceedings of the Sixth international conference on software engineering, artificial intelligence, networking and parallel/distributed computing and first ACIS international workshop on self-assembling wireless network. IEEE, USA, 20-22 June 2005; pp. 2-7. [CrossRef]
  37. Vespe M, Greidanus H. SAR image quality assessment and indicators for vessel and oil spill detection. IEEE Transactions on Geoscience and Remote Sensing, 2012, 50(11): 4726-4734. [CrossRef]
  38. Tang Z, Yu C, Deng Y, et al. Evaluation of Deceptive Jamming Effect on SAR Based on Visual Consistency. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2021, 14: 12246-12262. [CrossRef]
  39. Cui Y, Zhou G, Yang J, et al. Unsupervised estimation of the equivalent number of looks in SAR images. IEEE Geoscience and remote sensing letters, 2011, 8(4): 710-714. [CrossRef]
  40. Wang S, Rehman A, Wang Z, et al. SSIM-motivated rate-distortion optimization for video coding. IEEE Transactions on Circuits and Systems for Video Technology, 2011, 22(4): 516-529. [CrossRef]
  41. Hore A, Ziou D. Image quality metrics: PSNR vs. SSIM. In Proceedings of the 2010 20th international conference on pattern recognition. IEEE, Turkey, 23-26 August 2010; pp. 2366-2369. [CrossRef]
  42. Al-Najjar Y, Chen D. Comparison of image quality assessment: PSNR, HVS, SSIM, UIQI. International Journal of Scientific and Engineering Research, 2012, 3(8): 1-5.
Figure 1. Structure of network proposed in this scheme.
Figure 2. Block diagram of the generator.
Figure 3. Structural diagram of the discriminator.
Figure 4. 50 SAR deception jamming templates with shadows of the T72 tank generated by the proposed network.
Figure 5. Real image and three generated samples. (a) Real image; (b) Sample 1; (c) Sample 2; (d) Sample 3.
Figure 6. 50 SAR deception jamming templates with shadows of the T72 tank generated by SinGAN.
Figure 7. Real image and three generated samples. (a) Real image; (b) Sample 1; (c) Sample 2; (d) Sample 3.
Table 1. Comparison results of different images regarding the two assessment indexes.

Image                                ENL      SSIM
Figure 5(a) (the original image)     2.5604   -
Figure 5(b) (the first sample)       1.9764   0.9617
Figure 5(c) (the second sample)      1.9090   0.9620
Figure 5(d) (the third sample)       1.9092   0.9643
Average of samples                   1.9315   0.9627
Table 2. Comparison results of different images regarding the two evaluation metrics.

Image                                ENL      SSIM
Figure 7(a) (the original image)     2.5604   -
Figure 7(b) (the first sample)       1.6677   0.3846
Figure 7(c) (the second sample)      1.7150   0.3794
Figure 7(d) (the third sample)       0.5637   0.3925
Average of samples                   1.3155   0.3855
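The two metrics in Tables 1 and 2 can be reproduced from first principles. The sketch below, assuming a global-statistics (single-window) variant of SSIM rather than the locally windowed mean-SSIM commonly used in practice [40,41], computes ENL as the squared mean over the variance of a homogeneous intensity region [39]; the function names `enl` and `ssim_global` are illustrative, not from the paper.

```python
import numpy as np

def enl(region):
    """Equivalent number of looks: mean^2 / variance, estimated over a
    homogeneous intensity region (higher ENL = weaker speckle)."""
    region = np.asarray(region, dtype=np.float64)
    return region.mean() ** 2 / region.var()

def ssim_global(x, y, data_range=1.0):
    """Simplified SSIM using global image statistics. The windowed
    mean-SSIM of Wang et al. averages this expression over local
    patches; the global form is enough to illustrate the index."""
    x = np.asarray(x, dtype=np.float64)
    y = np.asarray(y, dtype=np.float64)
    c1 = (0.01 * data_range) ** 2  # stabilizing constants from the
    c2 = (0.03 * data_range) ** 2  # standard SSIM definition
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

As a sanity check, an L-look multilook intensity image is gamma-distributed with mean 1 and variance 1/L, so `enl` applied to such a simulated region should recover approximately L, and `ssim_global(x, x)` is 1 for any image.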
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits the free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.