Preprint Article · Version 1 · This version is not peer-reviewed

Segmentation of Low-Grade Brain Tumors Using Mutual Attention Multimodal MRI

Version 1 : Received: 25 October 2024 / Approved: 28 October 2024 / Online: 28 October 2024 (14:48:35 CET)

How to cite: Seshimo, H.; Rashed, E. A. Segmentation of Low-Grade Brain Tumors Using Mutual Attention Multimodal MRI. Preprints 2024, 2024102183. https://doi.org/10.20944/preprints202410.2183.v1

Abstract

Early detection and precise characterization of brain tumors play a crucial role in improving patient outcomes and extending survival. Among neuroimaging modalities, magnetic resonance imaging (MRI) is the gold standard for brain tumor diagnostics due to its ability to produce high-contrast images across a variety of sequences, each highlighting distinct tissue characteristics. This study leverages multimodal MRI sequences to advance the automatic segmentation of low-grade astrocytomas, a challenging task due to their diffuse and irregular growth patterns. A novel mutual-attention deep learning framework is proposed, which integrates complementary information from multiple MRI sequences, including T2-weighted (T2w) and fluid-attenuated inversion recovery (FLAIR) sequences, to enhance segmentation accuracy. Unlike conventional segmentation models, which treat each modality independently or simply concatenate them, our model introduces mutual attention mechanisms that allow the network to dynamically focus on salient features across modalities by jointly learning interdependencies between imaging sequences, leading to more precise boundary delineation even in regions with subtle tumor signals. The proposed method is validated on 35 astrocytoma cases from the UCSF-PDGM dataset, a realistic and clinically challenging benchmark. The results demonstrate that the T2w and FLAIR modalities contribute most significantly to segmentation performance. The mutual-attention model achieves an average Dice coefficient of 0.87, outperforming traditional approaches. This study provides an innovative pathway toward improving segmentation of low-grade tumors by enabling context-aware fusion across imaging sequences. Furthermore, it demonstrates the clinical relevance of integrating AI with multimodal MRI, potentially improving non-invasive tumor characterization and guiding future research in radiological diagnostics.
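To make the fusion idea concrete, the following is a minimal PyTorch-style sketch of a mutual cross-attention block between two modality feature maps, as the abstract describes (each modality querying the other rather than being concatenated). It is not the paper's implementation; all class, parameter, and variable names here are hypothetical, and the actual architecture may differ.

import torch
import torch.nn as nn

class MutualAttentionFusion(nn.Module):
    """Sketch: fuse two modality feature maps by letting each attend to the other."""

    def __init__(self, channels: int, num_heads: int = 4):
        super().__init__()
        # Cross-attention in both directions: T2w queries FLAIR and vice versa.
        self.t2_to_flair = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        self.flair_to_t2 = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        self.norm_t2 = nn.LayerNorm(channels)
        self.norm_fl = nn.LayerNorm(channels)
        self.fuse = nn.Linear(2 * channels, channels)

    def forward(self, feat_t2: torch.Tensor, feat_fl: torch.Tensor) -> torch.Tensor:
        # feat_*: (batch, channels, H, W) feature maps from per-modality encoders.
        b, c, h, w = feat_t2.shape
        t2 = feat_t2.flatten(2).transpose(1, 2)  # (batch, H*W, channels)
        fl = feat_fl.flatten(2).transpose(1, 2)

        # Each modality queries the other, so salient FLAIR regions can
        # re-weight T2w features and vice versa; residual connections
        # preserve the original per-modality signal.
        t2_att, _ = self.t2_to_flair(query=t2, key=fl, value=fl)
        fl_att, _ = self.flair_to_t2(query=fl, key=t2, value=t2)
        t2 = self.norm_t2(t2 + t2_att)
        fl = self.norm_fl(fl + fl_att)

        # Concatenate both attended streams and project back to a single
        # fused feature map for a segmentation decoder.
        fused = self.fuse(torch.cat([t2, fl], dim=-1))
        return fused.transpose(1, 2).reshape(b, c, h, w)

# Usage example: fuse 64-channel encoder features from the two modalities.
fusion = MutualAttentionFusion(channels=64)
out = fusion(torch.randn(2, 64, 32, 32), torch.randn(2, 64, 32, 32))
print(out.shape)  # torch.Size([2, 64, 32, 32])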

Keywords

image segmentation; low-grade brain tumor; MRI; mutual attention

Subject

Computer Science and Mathematics, Artificial Intelligence and Machine Learning
