Version 1: Received: 13 August 2024 / Approved: 14 August 2024 / Online: 14 August 2024 (08:20:54 CEST)
How to cite:
Anari, S.; Oliveira, G. G. D.; Ranjbarzadeh, R.; Alves, A. M.; Vaz, G. C.; Bendechache, M. EfficientUNetViT: Efficient Breast Tumor Segmentation utilizing U-Net Architecture and Pretrained Vision Transformer. Preprints 2024, 2024081015. https://doi.org/10.20944/preprints202408.1015.v1
APA Style
Anari, S., Oliveira, G. G. D., Ranjbarzadeh, R., Alves, A. M., Vaz, G. C., & Bendechache, M. (2024). EfficientUNetViT: Efficient Breast Tumor Segmentation utilizing U-Net Architecture and Pretrained Vision Transformer. Preprints. https://doi.org/10.20944/preprints202408.1015.v1
Chicago/Turabian Style
Anari, S., Gabriel Caumo Vaz, and Malika Bendechache. 2024. "EfficientUNetViT: Efficient Breast Tumor Segmentation utilizing U-Net Architecture and Pretrained Vision Transformer." Preprints. https://doi.org/10.20944/preprints202408.1015.v1
Abstract
This study introduces a neural network architecture for breast tumor segmentation that combines a pretrained Vision Transformer (ViT) with a U-Net framework. The U-Net architecture, widely used for biomedical image segmentation, is enhanced with depthwise separable convolutional blocks that reduce computational complexity and parameter count, improving efficiency and mitigating overfitting. The Vision Transformer, known for its strong feature extraction via self-attention, captures the global context of an image more effectively than conventional convolutional networks. By using a pretrained ViT as the encoder of the U-Net, the model benefits from rich feature representations learned on large-scale datasets, substantially improving generalization and training efficiency. The proposed model performs strongly at segmenting breast tumors in medical images, demonstrating the advantages of pairing transformer-based encoders with efficient U-Net designs. This hybrid approach underscores the potential of transformers in medical image processing and sets a new benchmark for accuracy and efficiency in tumor segmentation tasks.
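The parameter savings that the abstract attributes to depthwise separable convolutions can be illustrated with a short calculation. A standard convolution learns one k×k kernel per input/output channel pair, while a depthwise separable convolution factorizes this into a per-channel k×k depthwise step followed by a 1×1 pointwise step. The layer sizes below are hypothetical, chosen only for illustration; the paper does not specify the exact channel widths used.

```python
def conv_params(c_in: int, c_out: int, k: int) -> int:
    """Parameter count of a standard k x k convolution (bias omitted):
    one k x k kernel for every (input channel, output channel) pair."""
    return c_in * c_out * k * k

def depthwise_separable_params(c_in: int, c_out: int, k: int) -> int:
    """Parameter count of a depthwise separable convolution (bias omitted):
    a k x k depthwise kernel per input channel, plus a 1 x 1 pointwise
    convolution that mixes channels."""
    depthwise = c_in * k * k
    pointwise = c_in * c_out
    return depthwise + pointwise

# Hypothetical block: 64 input channels, 128 output channels, 3 x 3 kernels.
standard = conv_params(64, 128, 3)                 # 64 * 128 * 9 = 73728
separable = depthwise_separable_params(64, 128, 3)  # 576 + 8192 = 8768
print(standard, separable, round(standard / separable, 2))
```

For this example the separable variant uses roughly 8x fewer parameters, which is the kind of reduction that motivates its use in the modified U-Net blocks.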
Keywords
Breast cancer; U-Net; Vision Transformer; Depthwise separable convolution
Subject
Public Health and Healthcare, Other
Copyright:
This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.