Part of Advances in Neural Information Processing Systems 36 (NeurIPS 2023) Main Conference Track
Seiyun Shin, Ilan Shomorony, Han Zhao
Graph Neural Networks (GNNs) are a powerful class of machine learning models with applications in recommender systems, drug discovery, social network analysis, and computer vision. One challenge with their implementation is that GNNs often take large-scale graphs as inputs, which imposes significant computational and storage costs in both the training and testing phases. In particular, the message passing operations of a GNN require multiplication of the graph adjacency matrix A ∈ ℝ^{n×n} and the data matrix X ∈ ℝ^{n×d}, and the O(n^2 d) time complexity can be prohibitive for large n. Thus, a natural question is whether it is possible to perform the GNN operations in (quasi-)linear time by avoiding the full computation of AX. To study this question, we consider the setting of a regression task on a two-layer Linear Graph Convolutional Network (GCN). We develop an efficient training algorithm based on (1) performing node subsampling, (2) estimating the leverage scores of AX based on the subsampled graph, and (3) performing leverage score sampling on AX. We show that our proposed scheme learns the regression model while observing only O(nd ϵ^{-2} log n) entries of A in time O(nd^2 ϵ^{-2} log n), with the guarantee that the learned weights deviate by at most ϵ in the ℓ_2 norm from the model learned using the entire adjacency matrix A. We present empirical results for regression problems on real-world graphs and show that our algorithm significantly outperforms other baseline sampling strategies that exploit the same number of observations.
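
To make the leverage-score step concrete, below is a minimal sketch of leverage score sampling for the least-squares problem on AX. For illustration it computes exact leverage scores from the fully formed product AX; the proposed algorithm instead estimates these scores from a node-subsampled graph so that AX is never computed in full. The function name and interface are hypothetical and not the authors' implementation.

```python
import numpy as np

def leverage_score_sample_regression(A, X, y, num_samples, seed=None):
    """Illustrative sketch: solve min_w ||A X w - y||_2 approximately by
    sampling rows of A X with probability proportional to their leverage
    scores and solving the reweighted subproblem."""
    rng = np.random.default_rng(seed)
    M = A @ X                               # n x d feature matrix (full product, for illustration only)
    # Exact leverage scores: squared row norms of the left singular vectors of M.
    U, _, _ = np.linalg.svd(M, full_matrices=False)
    scores = np.sum(U**2, axis=1)           # one score per node/row; they sum to rank(M)
    probs = scores / scores.sum()
    # Sample rows with probability proportional to their leverage scores.
    idx = rng.choice(M.shape[0], size=num_samples, replace=True, p=probs)
    # Rescale sampled rows so the subsampled least-squares objective is unbiased.
    weights = 1.0 / np.sqrt(num_samples * probs[idx])
    M_s = M[idx] * weights[:, None]
    y_s = y[idx] * weights
    w, *_ = np.linalg.lstsq(M_s, y_s, rcond=None)
    return w
```

In this sketch the SVD itself costs O(nd^2) on the full product, which defeats the purpose; the point of the paper's pipeline is that steps (1) and (2) replace the exact scores with estimates obtained from a small subsample of the graph, so that only O(nd ϵ^{-2} log n) entries of A ever need to be observed.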