Preprint Article · Version 1 · Preserved in Portico · This version is not peer-reviewed

LLE-Fuse: Lightweight Infrared and Visible Light Image Fusion Based on Low-Light Image Enhancement

Version 1 : Received: 27 June 2024 / Approved: 28 June 2024 / Online: 28 June 2024 (08:36:21 CEST)

How to cite: Qian, S.; Yang, L.; Xue, Y.; Tian, J.; Yiming, G.; Yang, J. LLE-Fuse: Lightweight Infrared and Visible Light Image Fusion Based on Low-Light Image Enhancement. Preprints 2024, 2024062023. https://doi.org/10.20944/preprints202406.2023.v1

Abstract

Infrared and visible light image fusion combines feature information from two different modalities to generate a single image that contains complementary information from both sources. In low-light scenarios, however, illumination degradation in visible light images prevents existing fusion methods from effectively extracting texture detail from the scene, and the salient-target information provided by infrared images alone is far from sufficient. To address this challenge, this paper proposes a lightweight infrared and visible light image fusion method based on low-light enhancement, named LLE-Fuse. First, the method improves the MobileOne Block by embedding the Sobel operator, yielding an Edge-MobileOne Block that performs feature extraction and downsampling on the source images. Intermediate features at different scales are then fused through a cross-modal attention fusion module, and the image is reconstructed by a decoder with upsampling. Next, the CLAHE algorithm is applied to enhance the infrared and visible light images, and an enhancement loss guides the network to learn low-light enhancement capability. Finally, after training, the Edge-MobileOne Block is converted into the direct-connection structure of MobileNetV1 through structural re-parameterization, effectively reducing the model's computational resource consumption. Extensive experimental comparisons show that the proposed method performs well in both visual quality and evaluation metrics, producing satisfactory fusion results even in low-light scenarios.
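The abstract states that the Edge-MobileOne Block embeds the Sobel operator into feature extraction. As a rough illustration of the underlying edge prior (not the paper's implementation — the function names and padding choice here are assumptions), the Sobel operator can be applied as a pair of fixed 3x3 convolution kernels whose gradient magnitude highlights texture edges:

```python
import numpy as np

# Fixed Sobel kernels for horizontal and vertical gradients.
SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float32)
SOBEL_Y = SOBEL_X.T

def conv2d(img, kernel):
    """2D correlation with reflect padding, output same size as input."""
    kh, kw = kernel.shape
    padded = np.pad(img, ((kh // 2,), (kw // 2,)), mode="reflect")
    out = np.zeros(img.shape, dtype=np.float32)
    for i in range(kh):
        for j in range(kw):
            out += kernel[i, j] * padded[i:i + img.shape[0], j:j + img.shape[1]]
    return out

def sobel_edges(img):
    """Gradient magnitude of a single-channel image (edge prior)."""
    gx = conv2d(img, SOBEL_X)
    gy = conv2d(img, SOBEL_Y)
    return np.sqrt(gx ** 2 + gy ** 2)
```

In a network such as the one described, a fixed branch like this would run alongside learnable convolutions, and structural re-parameterization would later fold the parallel branches into a single convolution for inference.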

Keywords

Image Fusion; Infrared and Visible Images; Low-Light Image Enhancement; Structural Re-parameterization; Lightweight Network.

Subject

Computer Science and Mathematics, Computer Vision and Graphics


