Article
Version 1
This version is not peer-reviewed
Augmented Feature Diffusion on Sparsely Sampled Subgraph
Version 1: Received: 18 July 2024 / Approved: 21 July 2024 / Online: 22 July 2024 (05:44:53 CEST)
How to cite: Wu, X.; Chen, H. Augmented Feature Diffusion on Sparsely Sampled Subgraph. Preprints 2024, 2024071674. https://doi.org/10.20944/preprints202407.1674.v1
Abstract
Link prediction is a fundamental problem on graphs. Currently, SubGraph Representation Learning (SGRL) methods provide state-of-the-art solutions for link prediction by transforming the task into a graph classification problem. However, existing SGRL solutions suffer from high computational costs and limited scalability. In this paper, we propose a novel SGRL framework called Augmented Feature Diffusion on Sparsely Sampled Subgraph (AFD3S). AFD3S first uses a conditional variational autoencoder to augment the local features of the input graph, effectively improving the expressive power of downstream Graph Neural Networks. Then, based on a random-walk strategy, sparsely sampled subgraphs are extracted around the target node pairs, reducing computational and storage overhead. Graph diffusion is then performed on each sampled subgraph to assign structure-aware weights to its nodes. Finally, the subgraph's diffusion matrix and its augmented feature matrix are combined in a feature-diffusion step to obtain operator-level node representations, which serve as inputs for SGRL-based link prediction. Feature diffusion effectively simulates the message-passing process, simplifying subgraph representation learning and thus accelerating training and inference. AFD3S achieves superior prediction performance on several benchmark datasets, with significantly reduced storage and computational costs.
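The two computational steps named in the abstract — random-walk subgraph sampling around a target node pair and truncated feature diffusion on the sampled subgraph — can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the walk parameters, the row-normalized transition matrix T = D⁻¹A, and the geometric decay weights θᵏ are illustrative assumptions standing in for whatever diffusion operator and weighting the paper actually uses.

```python
import numpy as np

def sample_subgraph(adj, targets, walk_len=4, num_walks=10, rng=None):
    """Collect nodes hit by short random walks started from each target node.

    `adj` is a dense 0/1 adjacency matrix; the walk length and count are
    hypothetical hyperparameters, not values from the paper.
    """
    rng = rng or np.random.default_rng(0)
    nodes = set(targets)
    for start in targets:
        for _ in range(num_walks):
            v = start
            for _ in range(walk_len):
                nbrs = np.nonzero(adj[v])[0]
                if len(nbrs) == 0:
                    break
                v = rng.choice(nbrs)
                nodes.add(int(v))
    return np.sort(np.array(list(nodes)))

def feature_diffusion(adj, feats, hops=3, theta=0.5):
    """Truncated feature diffusion: X' = sum_{k=0..hops} theta^k * T^k X.

    T = D^{-1} A is one common diffusion operator; computing the sum as
    repeated sparse-style matrix-vector products precomputes message
    passing, so no GNN propagation is needed at training time.
    """
    deg = adj.sum(axis=1, keepdims=True).clip(min=1)
    T = adj / deg                       # row-normalized transition matrix
    out = np.zeros_like(feats, dtype=float)
    h = feats.astype(float)
    for k in range(hops + 1):
        out += (theta ** k) * h         # accumulate the k-hop term
        h = T @ h                       # advance one diffusion step
    return out
```

A typical use would sample a subgraph around a candidate link `(u, v)`, diffuse the (augmented) features restricted to that subgraph, and feed the resulting node representations to a downstream link classifier; because the diffusion is a fixed linear operator, it can be computed once per subgraph rather than re-run at every epoch.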
Keywords
efficiency; scalability; subgraph; graph neural network
Subject
Computer Science and Mathematics, Artificial Intelligence and Machine Learning
Copyright: This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.