Version 1
Received: 30 October 2024 / Approved: 31 October 2024 / Online: 31 October 2024 (15:04:34 CET)
How to cite:
Yao, G.; Wang, Z.; Wei, G.; Zhu, F.; Fu, Q.; Yu, Q.; Wei, M. Multi-View 3D Reconstruction Based on FEWO-MVSNet. Preprints 2024, 2024102566. https://doi.org/10.20944/preprints202410.2566.v1
APA Style
Yao, G., Wang, Z., Wei, G., Zhu, F., Fu, Q., Yu, Q., & Wei, M. (2024). Multi-View 3D Reconstruction Based on FEWO-MVSNet. Preprints. https://doi.org/10.20944/preprints202410.2566.v1
Chicago/Turabian Style
Yao, G., Z. Wang, G. Wei, F. Zhu, Q. Fu, Q. Yu, and M. Wei. 2024. "Multi-View 3D Reconstruction Based on FEWO-MVSNet." Preprints. https://doi.org/10.20944/preprints202410.2566.v1
Abstract
Existing multi-view stereo reconstruction methods adapt poorly to repetitive patterns and weakly textured regions in multi-view images. To address this, this paper proposes a three-dimensional (3D) reconstruction algorithm based on a feature-enhancement and weight-optimization MVSNet (FEWO-MVSNet). First, to obtain accurate and detailed global and local features, we develop an adaptive feature enhancement approach that extracts multi-scale information from the images. Second, we introduce an attention mechanism and a spatial feature capture module to enable highly sensitive detection of weak texture features. Third, a 3D convolutional neural network predicts a fine depth map for the multi-view images, from which the complete 3D model is subsequently reconstructed. Finally, we trained and evaluated FEWO-MVSNet on the DTU, BlendedMVS, and Tanks & Temples datasets. The results demonstrate that our method is markedly effective for 3D reconstruction from multi-view images, ranking first in accuracy and second in completeness among the representative methods compared.
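To make the pipeline sketched in the abstract more concrete, the following minimal PyTorch sketch illustrates the general MVSNet-style flow it describes: a 2D feature extractor with an attention block (standing in for the feature-enhancement and spatial-capture modules), a 3D CNN that regularizes a cost volume, and soft-argmin depth regression. The module names (EnhancedFeatureNet, CostRegNet, regress_depth), tensor shapes, and depth range are illustrative assumptions, not the authors' implementation.

```python
# Minimal MVSNet-style sketch (assumed structure, not the FEWO-MVSNet code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class EnhancedFeatureNet(nn.Module):
    """2D feature extractor with a simple channel-attention block, standing in
    for the paper's adaptive feature enhancement / spatial capture modules."""
    def __init__(self, out_ch=32):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, out_ch, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # Squeeze-and-excitation style channel attention (assumption).
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(out_ch, out_ch // 4, 1), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch // 4, out_ch, 1), nn.Sigmoid(),
        )

    def forward(self, img):
        feat = self.backbone(img)
        return feat * self.attn(feat)

class CostRegNet(nn.Module):
    """3D CNN that regularizes the cost volume before depth regression."""
    def __init__(self, in_ch=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(in_ch, 8, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(8, 1, 3, padding=1),
        )

    def forward(self, volume):                # (B, C, D, H, W)
        return self.net(volume).squeeze(1)    # (B, D, H, W)

def regress_depth(cost, depth_values):
    """Soft-argmin over depth hypotheses, as in the original MVSNet."""
    prob = F.softmax(cost, dim=1)                                  # (B, D, H, W)
    return torch.sum(prob * depth_values.view(1, -1, 1, 1), dim=1)  # (B, H, W)

if __name__ == "__main__":
    B, D, H, W = 1, 32, 64, 80
    ref_img = torch.rand(B, 3, H * 4, W * 4)          # reference view
    feat = EnhancedFeatureNet()(ref_img)               # (B, 32, H, W)
    # Placeholder cost volume; real code warps source-view features into the
    # reference frustum for each depth hypothesis and aggregates their variance.
    volume = feat.unsqueeze(2).repeat(1, 1, D, 1, 1) + 0.01 * torch.randn(B, 32, D, H, W)
    cost = CostRegNet()(volume)
    depth_values = torch.linspace(425.0, 935.0, D)     # DTU-like depth range (assumption)
    depth = regress_depth(cost, depth_values)
    print(depth.shape)                                 # torch.Size([1, 64, 80])
```

The soft-argmin step is what turns the per-hypothesis matching costs into a sub-pixel-accurate depth estimate; the paper's feature-enhancement and weight-optimization contributions would replace the placeholder attention block and cost-volume construction shown here.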
Keywords
multi-view; MVSNet; Transformer; depth estimation; 3D reconstruction
Subject
Computer Science and Mathematics, Computer Vision and Graphics
Copyright:
This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.