Abstract:
|
Graph convolutional networks (GCNs) have recently achieved great success on various graph tasks. However, training GCNs on large graphs is computationally intensive. Full-batch GCN training recursively aggregates the neighbors of each node in every GCN layer. Because nodes are interdependent, a linear increase in GCN depth can lead to exponential growth in the number of neighbors involved. To address this huge computational cost, sampling-based methods have been proposed. Among them, subgraph sampling is sensitive to the graph structure; node-wise sampling still suffers from exponential neighbor growth; layer-wise sampling addresses the neighbor-explosion issue via layer-wise importance sampling. We apply sketching as a layer-wise sampling method. The accuracy of sketching-GCN is comparable to that of the original full-batch GCN, while our method is more efficient in both time and memory. Furthermore, existing sampling strategies, including FastGCN and LADIES, can be viewed as special cases of the sketching framework.
|