Preprint Article, Version 1 (this version is not peer-reviewed)

DCFF-Net: Deep Context Feature Fusion Network for High-Precision Classification of Hyperspectral Image

Version 1: Received: 2 July 2024 / Approved: 2 July 2024 / Online: 2 July 2024 (11:21:06 CEST)

How to cite: Zhijie, C.; Yu, C.; Yuan, W.; Xiaoyan, W.; Xinsheng, W.; Zhouru, X. DCFF-Net: Deep Context Feature Fusion Network for High-Precision Classification of Hyperspectral Image. Preprints 2024, 2024070199. https://doi.org/10.20944/preprints202407.0199.v1

Abstract

Hyperspectral images (HSI) contain abundant spectral information, and efficiently extracting and exploiting this information for image classification remains a prominent research topic. Earlier hyperspectral classification techniques relied primarily on statistical attributes and mathematical models of the spectral data. More recently, deep learning has been widely applied to hyperspectral classification, with promising results. This study proposes a deep learning approach that classifies spectral feature maps obtained through a polar coordinate transformation. First, the polar coordinate transformation converts the spectral information of every pixel in the image into a spectral feature map. The proposed Deep Context Feature Fusion Network (DCFF-Net) then classifies these feature maps. The model is validated on three open-source hyperspectral datasets: Indian Pines, Pavia University, and Salinas. On these datasets, the proposed method achieves an overall accuracy (OA) of 86.68%, 94.73%, and 95.14% with the pixel-based method, and 98.15%, 99.86%, and 99.98% with the pixel-patch-based method.
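The abstract does not specify how the polar coordinate transformation is implemented, so the following Python sketch only illustrates the general idea: each band index is mapped to an angle, its normalized reflectance to a radius, and the resulting curve is rasterized into a small 2D feature map that a CNN such as DCFF-Net could consume. The function name spectrum_to_polar_map, the map size, and the min-max normalization are assumptions for illustration, not the authors' implementation.

import numpy as np

def spectrum_to_polar_map(spectrum, size=64):
    """Rasterize one pixel's spectrum into a 2D polar feature map.

    Hypothetical sketch: band index -> angle, normalized reflectance
    -> radius, points drawn onto a size x size grid.
    """
    spectrum = np.asarray(spectrum, dtype=np.float32)

    # Min-max normalize reflectance to [0, 1] so the radius fits the grid.
    rng = spectrum.max() - spectrum.min()
    radius = (spectrum - spectrum.min()) / (rng + 1e-8)

    # Spread the bands evenly over the full circle [0, 2*pi).
    theta = np.linspace(0.0, 2.0 * np.pi, num=spectrum.size, endpoint=False)

    # Polar -> Cartesian, then shift/scale into pixel coordinates.
    x = radius * np.cos(theta)
    y = radius * np.sin(theta)
    cols = np.clip(((x + 1.0) / 2.0 * (size - 1)).round().astype(int), 0, size - 1)
    rows = np.clip(((y + 1.0) / 2.0 * (size - 1)).round().astype(int), 0, size - 1)

    # Mark the spectral curve on an otherwise empty grid.
    feature_map = np.zeros((size, size), dtype=np.float32)
    feature_map[rows, cols] = 1.0
    return feature_map

# Example: a 200-band spectrum becomes a 64 x 64 feature map.
demo = spectrum_to_polar_map(np.random.rand(200))
print(demo.shape)  # (64, 64)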

Keywords

deep learning; hyperspectral images; classification; hyperspectral feature map

Subject

Environmental and Earth Sciences, Remote Sensing
