Preprint Article, Version 1 (not peer-reviewed)

Augmented Feature Diffusion on Sparsely Sampled Subgraph

Version 1 : Received: 18 July 2024 / Approved: 21 July 2024 / Online: 22 July 2024 (05:44:53 CEST)

How to cite: Wu, X.; Chen, H. Augmented Feature Diffusion on Sparsely Sampled Subgraph. Preprints 2024, 2024071674. https://doi.org/10.20944/preprints202407.1674.v1

Abstract

Link prediction is a fundamental problem on graphs. Currently, SubGraph Representation Learning (SGRL) methods provide state-of-the-art solutions for link prediction by transforming the task into a graph classification problem. However, existing SGRL solutions suffer from high computational costs and poor scalability. In this paper, we propose a novel SGRL framework called Augmented Feature Diffusion on Sparsely Sampled Subgraph (AFD3S). AFD3S first uses a conditional variational autoencoder to augment the local features of the input graph, effectively improving the expressive power of downstream Graph Neural Networks. Then, based on a random walk strategy, sparsely sampled subgraphs are obtained around the target node pairs, reducing computational and storage overhead. Graph diffusion is then performed on each sampled subgraph to obtain task-specific edge weights. Finally, the diffusion matrix of the subgraph and its augmented feature matrix are combined via feature diffusion to obtain operator-level node representations, which serve as inputs for SGRL-based link prediction. Feature diffusion effectively simulates the message-passing process, simplifying subgraph representation learning and thus accelerating the training and inference of subgraph models. Our proposed AFD3S achieves superior prediction performance on several benchmark datasets, with significantly reduced storage and computational costs.
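The core acceleration idea in the abstract, replacing trained message passing with a precomputed diffusion of the feature matrix over the sampled subgraph, can be sketched as follows. This is a minimal illustration under assumed details (symmetric normalization, a fixed number of diffusion steps); the paper's actual diffusion matrix and weighting scheme may differ.

```python
import numpy as np

def feature_diffusion(adj, feats, steps=3):
    """Propagate node features with a symmetrically normalized
    diffusion matrix, simulating message passing without training."""
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.where(deg > 0, deg ** -0.5, 0.0)
    # S = D^{-1/2} A D^{-1/2}: the diffusion matrix of the subgraph
    s = d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]
    out = feats.copy()
    for _ in range(steps):
        out = s @ out  # each step mixes features along subgraph edges
    return out

# Toy sampled subgraph: a path of 3 nodes, one-hot features.
adj = np.array([[0., 1., 0.],
                [1., 0., 1.],
                [0., 1., 0.]])
feats = np.eye(3)
h = feature_diffusion(adj, feats, steps=2)
```

Because the diffusion is a fixed linear operator, `h` can be computed once per sampled subgraph and cached, so the downstream classifier never repeats the propagation during training, which is the source of the claimed speedup.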

Keywords

efficiency; scalability; subgraph; graph neural network

Subject

Computer Science and Mathematics, Artificial Intelligence and Machine Learning

