Xiao, Y.; Dou, Y.; Yang, S. PointBLIP: Zero-Training Point Cloud Classification Network Based on BLIP-2 Model. Remote Sensing 2024, 16, 2453, doi:10.3390/rs16132453.
Abstract
Leveraging the open-world understanding capacity of large-scale visual-language pre-trained models has become a hotspot in point cloud classification. Recent approaches rely on transferable visual-language pre-trained models, classifying point clouds by projecting them into 2D images and evaluating their consistency with textual prompts. These methods benefit from the robust open-world understanding of visual-language pre-trained models and require no additional training. However, they face several challenges, which we summarize as prompt ambiguity, image domain gap, view weights confusion, and feature deviation. In response to these challenges, we propose PointBLIP, a zero-training point cloud classification network based on the recently introduced BLIP-2 visual-language model. PointBLIP is adept at computing similarities between multiple images and multiple prompts. We introduce separate novel methods for zero-shot and few-shot point cloud classification, each of which compares multiple features to achieve effective classification. We also enhance the quality of the input data on both the image and text sides of PointBLIP. In zero-shot point cloud classification, we outperform state-of-the-art methods on three benchmark datasets. For few-shot classification, to the best of our knowledge, we present the first zero-training few-shot point cloud method; it surpasses previous works under the same conditions and performs comparably to fully trained methods.
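To make the projection-and-matching pipeline described above concrete, the following is a minimal sketch in Python: a point cloud is orthographically projected into several depth images, each image is scored against every class prompt, and the per-view, per-prompt scores are averaged into a class prediction. The `project_depth`, `image_text_score`, and `classify` functions are illustrative assumptions, not the authors' implementation; in particular, `image_text_score` is a hypothetical stand-in for a real BLIP-2 image-text matching score.

```python
import numpy as np

def project_depth(points, view, res=64):
    """Orthographic depth map of a point cloud from one of three
    axis-aligned views. `points` is (N, 3), normalized to [-1, 1]."""
    axes = [(1, 2, 0), (0, 2, 1), (0, 1, 2)]  # (u, v, depth) per view
    u, v, d = axes[view]
    img = np.zeros((res, res))
    cols = np.clip(((points[:, u] + 1) / 2 * (res - 1)).astype(int), 0, res - 1)
    rows = np.clip(((points[:, v] + 1) / 2 * (res - 1)).astype(int), 0, res - 1)
    depth = (points[:, d] + 1) / 2
    np.maximum.at(img, (rows, cols), depth)  # keep the closest point per pixel
    return img

def image_text_score(image, prompt):
    """Hypothetical stand-in for a BLIP-2 image-text matching score;
    a real system would query the vision-language model here."""
    rng = np.random.default_rng(abs(hash(prompt)) % 2**32)
    return float(image.mean() + 0.1 * rng.standard_normal())

def classify(points, class_prompts, n_views=3):
    """Average image-text scores over all views and prompts per class."""
    views = [project_depth(points, v) for v in range(n_views)]
    scores = {
        cls: np.mean([image_text_score(img, p) for img in views for p in prompts])
        for cls, prompts in class_prompts.items()
    }
    return max(scores, key=scores.get)

if __name__ == "__main__":
    cloud = np.random.uniform(-1, 1, size=(2048, 3))  # toy point cloud
    prompts = {"chair": ["a depth map of a chair", "a photo of a chair"],
               "table": ["a depth map of a table", "a photo of a table"]}
    print(classify(cloud, prompts))
```

Averaging over both views and prompts is one simple aggregation choice; the challenges the abstract names (view weights confusion, prompt ambiguity) are precisely about doing better than this uniform average.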
Computer Science and Mathematics, Computer Vision and Graphics
Copyright:
This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.