Article
Preserved in Portico. This version is not peer-reviewed.
Contrastive Learning for 3D Point Clouds Classification and Shape Completion
Version 1: Received: 31 August 2021 / Approved: 6 September 2021 / Online: 6 September 2021 (18:00:28 CEST)
A peer-reviewed article of this Preprint also exists.
Nazir, D.; Afzal, M.Z.; Pagani, A.; Liwicki, M.; Stricker, D. Contrastive Learning for 3D Point Clouds Classification and Shape Completion. Sensors 2021, 21, 7392.
Abstract
In this paper, we apply self-supervised learning to the shape completion and classification of point clouds. Most 3D shape completion pipelines use autoencoders to extract features from point clouds, which are then used in downstream tasks such as classification, segmentation, and detection. Our idea is to add contrastive learning to the autoencoder so that it learns both global and local feature representations of point clouds, using a combination of triplet loss and Chamfer distance. To evaluate the learned embeddings for classification, we use a PointNet classifier, and we extend the evaluation from 4 to 10 classes to show the generalization ability of the learned features. In our experiments, the embeddings generated by the contrastive autoencoder enhance both shape completion and classification of point clouds, improving accuracy from 84.2% to 84.9% and achieving state-of-the-art results with 10 classes.
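To make the loss construction concrete, the following is a minimal PyTorch sketch of how a triplet loss on global embeddings can be combined with a Chamfer distance on the completed cloud, as the abstract describes. This is not the authors' released code: the encoder/decoder modules, the margin, and the weighting factor `lam` are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def chamfer_distance(p, q):
    """Symmetric Chamfer distance between point sets p and q of shape (B, N, 3)."""
    d = torch.cdist(p, q) ** 2                      # pairwise squared distances (B, N, M)
    return d.min(dim=2).values.mean(dim=1) + d.min(dim=1).values.mean(dim=1)

def contrastive_ae_loss(encoder, decoder, anchor, positive, negative, target, lam=1.0):
    """Triplet loss pulls the anchor embedding toward the positive and away from
    the negative (global features); Chamfer distance supervises the completed
    cloud against the ground truth (local geometry)."""
    z_a, z_p, z_n = encoder(anchor), encoder(positive), encoder(negative)
    triplet = F.triplet_margin_loss(z_a, z_p, z_n, margin=1.0)
    completion = decoder(z_a)                       # (B, M, 3) completed point cloud
    return triplet + lam * chamfer_distance(completion, target).mean()

# Toy usage with dummy encoder/decoder standing in for the point cloud networks.
if __name__ == "__main__":
    B, N, M, D = 4, 256, 512, 128
    encoder = nn.Sequential(nn.Flatten(), nn.Linear(N * 3, D))
    decoder = nn.Sequential(nn.Linear(D, M * 3), nn.Unflatten(1, (M, 3)))
    clouds = [torch.randn(B, N, 3) for _ in range(3)]
    loss = contrastive_ae_loss(encoder, decoder, *clouds, target=torch.randn(B, M, 3))
    loss.backward()
```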
Keywords
3D Point Cloud Classification, 3D Point Cloud Shape Completion, Auto-Encoders, Contrastive Learning, Self-Supervised Learning
Subject
Engineering, Marine Engineering
Copyright: This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.