A reading list for deep graph learning acceleration, including but not limited to related research at the software and hardware levels. The list covers related papers, conferences, tools, books, blogs, courses, and other resources. A team of maintainers keeps the list up to date, and we welcome contributions from anyone.
The literature on this page is organized by topic.
You can also find Related Conferences, Graph Learning Tools, Learning Materials on GNNs, and Other Resources in General Resources.
- [HPCA 2024] MEGA: A Memory-Efficient GNN Accelerator Exploiting Degree-Aware Mixed-Precision Quantization.
Zeyu Zhu, Fanrong Li, et al. [Paper]
- [TC 2023] Accelerating graph convolutional networks through a PIM-accelerated approach.
Jin H, Chen D, Zheng L, et al. [Paper]
- [TPDS 2023] GraphAGILE: An FPGA-Based Overlay Accelerator for Low-Latency GNN Inference.
Zhang, Bingyi, Hanqing Zeng, and Viktor Prasanna. [Paper]
- [TC 2023] Accelerating GNN Training by Adapting Large Graphs to Distributed Heterogeneous Architectures.
Lizhi Zhang, Dongsheng Li, et al. [Paper]
- [CASES 2023] MaGNAS: A Mapping-Aware Graph Neural Architecture Search Framework for Heterogeneous MPSoC Deployment.
M Odema, H Bouzidi, H Ouarnoughi, et al. [Paper]
- [DAC 2023] Lift: Exploiting Hybrid Stacked Memory for Energy-Efficient Processing of Graph Convolutional Networks.
Jiaxian Chen, Zhaoyu Zhong, et al. [Paper]
- [DAC 2023] ReRAM-based Graph Attention Network with Node-Centric Edge Searching and Hamming Similarity.
Ruibin Mao, Xia Sheng, et al. [Paper]
- [arXiv 2023] HitGNN: High-throughput GNN Training Framework on CPU+Multi-FPGA Heterogeneous Platform.
Yi-Chien Lin, Viktor Prasanna, et al. [Paper]
- [TCAD 2023] CoGNN: An Algorithm-Hardware Co-Design Approach to Accelerate GNN Inference With Minibatch Sampling.
Zhong, Kai, Shulin Zeng, Wentao Hou, et al. [Paper]
- [TCAD 2023] Algorithm/Hardware Co-optimization for Sparsity-Aware SpMM Acceleration of GNNs.
Gao Y, Gong L, Wang C, et al. [Paper]
- [GLSVLSI 2023] IMA-GNN: In-Memory Acceleration of Centralized and Decentralized Graph Neural Networks at the Edge.
Morsali M, Nazzal M, Khreishah A, et al. [Paper]
- [JCST 2023] GShuttle: Optimizing Memory Access Efficiency for Graph Convolutional Neural Network Accelerators.
Li J J, Wang K, Zheng H, et al. [Paper]
- [arXiv 2023] Dynasparse: Accelerating GNN Inference through Dynamic Sparsity Exploitation.
Bingyi Zhang, Viktor Prasanna. [Paper]
- [arXiv 2023] GNNBuilder: An Automated Framework for Generic Graph Neural Network Accelerator Generation, Simulation, and Optimization.
Abi-Karam S, Hao C. [Paper]
- [ICCD 2022] CoDG-ReRAM: An Algorithm-Hardware Co-design to Accelerate Semi-Structured GNNs on ReRAM.
Luo Y, Behnam P, Thorat K, et al. [Paper]
- [HPCA 2022] Accelerating Graph Convolutional Networks Using Crossbar-based Processing-In-Memory Architectures.
Huang Y, Zheng L, Yao P, et al. [Paper]
- [HPCA 2022] GCoD: Graph Convolutional Network Acceleration via Dedicated Algorithm and Accelerator Co-Design.
- [HPCA 2022] ReGNN: A Redundancy-Eliminated Graph Neural Networks Accelerator.
Chen C, Li K, Li Y, et al. [Paper]
- [ISCA 2022] DIMMining: pruning-efficient and parallel graph mining on near-memory-computing.
Dai G, Zhu Z, Fu T, et al. [Paper]
- [ISCA 2022] Hyperscale FPGA-as-a-service architecture for large-scale distributed graph neural network.
Li S, Niu D, Wang Y, et al. [Paper]
- [DAC 2022] Improving GNN-Based Accelerator Design Automation with Meta Learning.
Bai Y, Sohrabizadeh A, Sun Y, et al. [Paper]
- [CICC 2022] StreamGCN: Accelerating Graph Convolutional Networks with Streaming Processing.
Sohrabizadeh A, Chi Y, Cong J. [Paper]
- [IPDPS 2022] Model-Architecture Co-Design for High Performance Temporal GNN Inference on FPGA.
Zhou H, Zhang B, Kannan R, et al. [Paper]
- [TPDS 2022] SGCNAX: A Scalable Graph Convolutional Neural Network Accelerator With Workload Balancing.
Li J, Zheng H, Wang K, et al. [Paper]
- [TCSI 2022] A Low-Power Graph Convolutional Network Processor With Sparse Grouping for 3D Point Cloud Semantic Segmentation in Mobile Devices.
Kim S, Kim S, Lee J, et al. [Paper]
- [JAHC 2022] DRGN: a dynamically reconfigurable accelerator for graph neural networks.
Yang C, Huo K B, Geng L F, et al. [Paper]
- [JSA 2022] Algorithms and architecture support of degree-based quantization for graph neural networks.
Guo Y, Chen Y, Zou X, et al. [Paper]
- [JSA 2022] QEGCN: An FPGA-based accelerator for quantized GCNs with edge-level parallelism.
Yuan W, Tian T, Wu Q, et al. [Paper]
- [FCCM 2022] GenGNN: A Generic FPGA Framework for Graph Neural Network Acceleration.
- [FAST 2022] Hardware/Software Co-Programmable Framework for Computational SSDs to Accelerate Deep Learning Service on Large-Scale Graphs.
Kwon M, Gouk D, Lee S, et al. [Paper]
- [arXiv 2022] DFG-NAS: Deep and Flexible Graph Neural Architecture Search.
Zhang W, Lin Z, Shen Y, et al. [Paper]
- [arXiv 2022] GROW: A Row-Stationary Sparse-Dense GEMM Accelerator for Memory-Efficient Graph Convolutional Neural Networks.
Kang M, Hwang R, Lee J, et al. [Paper]
- [arXiv 2022] Enabling Flexibility for Sparse Tensor Acceleration via Heterogeneity.
Qin E, Garg R, Bambhaniya A, et al. [Paper]
- [arXiv 2022] FlowGNN: A Dataflow Architecture for Universal Graph Neural Network Inference via Multi-Queue Streaming.
- [arXiv 2022] Low-latency Mini-batch GNN Inference on CPU-FPGA Heterogeneous Platform.
Zhang B, Zeng H, Prasanna V. [Paper]
- [arXiv 2022] SmartSAGE: Training Large-scale Graph Neural Networks using In-Storage Processing Architectures.
Lee Y, Chung J, Rhu M. [Paper]
- [MICRO 2021] AWB-GCN: A Graph Convolutional Network Accelerator with Runtime Workload Rebalancing.
Geng T, Li A, Shi R, et al. [Paper]
- [MICRO 2021] Point-X: A Spatial-Locality-Aware Architecture for Energy-Efficient Graph-Based Point-Cloud Deep Learning.
Zhang J F, Zhang Z. [Paper]
- [HPCA 2021] GCNAX: A Flexible and Energy-efficient Accelerator for Graph Convolutional Neural Networks.
Li J, Louri A, Karanth A, et al. [Paper]
- [DAC 2021] DyGNN: Algorithm and Architecture Support of Dynamic Pruning for Graph Neural Networks.
Chen C, Li K, Zou X, et al. [Paper]
- [DAC 2021] BlockGNN: Towards Efficient GNN Acceleration Using Block-Circulant Weight Matrices.
Zhou Z, Shi B, Zhang Z, et al. [Paper]
- [DAC 2021] GNNerator: A Hardware/Software Framework for Accelerating Graph Neural Networks.
Stevens J R, Das D, Avancha S, et al. [Paper]
- [DAC 2021] PIMGCN: A ReRAM-Based PIM Design for Graph Convolutional Network Acceleration.
Yang T, Li D, Han Y, et al. [Paper]
- [TCAD 2021] Rubik: A Hierarchical Architecture for Efficient Graph Neural Network Training.
Chen X, Wang Y, Xie X, et al. [Paper]
- [TCAD 2021] Cambricon-G: A Polyvalent Energy-efficient Accelerator for Dynamic Graph Neural Networks.
Song X, Zhi T, Fan Z, et al. [Paper]
- [ICCAD 2021] DARe: DropLayer-Aware Manycore ReRAM architecture for Training Graph Neural Networks.
Arka A I, Joardar B K, Doppa J R, et al. [Paper]
- [DATE 2021] ReGraphX: NoC-Enabled 3D Heterogeneous ReRAM Architecture for Training Graph Neural Networks.
Arka A I, Doppa J R, Pande P P, et al. [Paper]
- [FCCM 2021] BoostGCN: A Framework for Optimizing GCN Inference on FPGA.
Zhang B, Kannan R, Prasanna V. [Paper]
- [SCIS 2021] Towards efficient allocation of graph convolutional networks on hybrid computation-in-memory architecture.
Chen J, Lin G, Chen J, et al. [Paper]
- [EuroSys 2021] Tesseract: distributed, general graph pattern mining on evolving graphs.
Bindschaedler L, Malicevic J, Lepers B, et al. [Paper]
- [EuroSys 2021] Accelerating Graph Sampling for Graph Machine Learning Using GPUs.
Jangda A, Polisetty S, Guha A, et al. [Paper]
- [ATC 2021] GLIST: Towards In-Storage Graph Learning.
Li C, Wang Y, Liu C, et al. [Paper]
- [CAL 2021] Hardware Acceleration for GCNs via Bidirectional Fusion.
Li H, Yan M, Yang X, et al. [Paper]
- [arXiv 2021] GNNIE: GNN Inference Engine with Load-balancing and Graph-Specific Caching.
Mondal S, Manasi S D, Kunal K, et al. [Paper]
- [arXiv 2021] LW-GCN: A Lightweight FPGA-based Graph Convolutional Network Accelerator.
Tao Z, Wu C, Liang Y, et al. [Paper]
- [arXiv 2021] VersaGNN: a Versatile accelerator for Graph neural networks.
Shi F, Jin A Y, Zhu S C. [Paper]
- [arXiv 2021] ZIPPER: Exploiting Tile- and Operator-level Parallelism for General and Scalable Graph Neural Network Acceleration.
Zhang Z, Leng J, Lu S, et al. [Paper]
- [HPCA 2020] HyGCN: A GCN Accelerator with Hybrid Architecture.
Yan M, Deng L, Hu X, et al. [Paper]
- [DAC 2020] Hardware Acceleration of Graph Neural Networks.
Auten A, Tomei M, Kumar R. [Paper]
- [ICCAD 2020] DeepBurning-GL: an automated framework for generating graph neural network accelerators.
Liang S, Liu C, Wang Y, et al. [Paper]
- [TC 2020] EnGN: A High-Throughput and Energy-Efficient Accelerator for Large Graph Neural Networks.
Liang S, Wang Y, Liu C, et al. [Paper]
- [SC 2020] GE-SpMM: General-Purpose Sparse Matrix-Matrix Multiplication on GPUs for Graph Neural Networks.
- [CCIS 2020] GNN-PIM: A Processing-in-Memory Architecture for Graph Neural Networks.
Wang Z, Guan Y, Sun G, et al. [Paper]
- [FPGA 2020] GraphACT: Accelerating GCN Training on CPU-FPGA Heterogeneous Platforms.
- [ICPADS 2020] S-GAT: Accelerating Graph Attention Networks Inference on FPGA Platform with Shift Operation.
Yan W, Tong W, Zhi X. [Paper]
- [ASAP 2020] Hardware Acceleration of Large Scale GCN Inference.
Zhang B, Zeng H, Prasanna V. [Paper]
- [ICA3PP 2020] Towards a Deep-Pipelined Architecture for Accelerating Deep GCN on a Multi-FPGA Platform.
Cheng Q, Wen M, Shen J, et al. [Paper]
- [Access 2020] FPGAN: An FPGA Accelerator for Graph Attention Networks With Software and Hardware Co-Optimization.
Yan W, Tong W, Zhi X. [Paper]
- [arXiv 2020] GRIP: A Graph Neural Network Accelerator Architecture.
Kiningham K, Re C, Levis P. [Paper]
- [ASICON 2019] An FPGA Implementation of GCN with Sparse Adjacency Matrix.
Ding L, Huang Z, Chen G. [Paper]
- [IPDPS 2024] ARGO: An Auto-Tuning Runtime System for Scalable GNN Training on Multi-Core Processor.
Yi-Chien Lin, Yuyang Chen, et al. [Paper]
- [SC 2023] BLAD: Adaptive Load Balanced Scheduling and Operator Overlap Pipeline for Accelerating the Dynamic GNN Training.
Fu, Kaihua, et al. [Paper]
- [SOSP 2023] gSampler: General and Efficient GPU-based Graph Sampling for Graph Learning.
Gong, Ping, et al. [Paper]
- [ICDE 2023] InferTurbo: A Scalable System for Boosting Full-graph Inference of Graph Neural Network over Huge Graphs.
Zhang, Dalong, et al. [Paper]
- [IPDPS 2023] Communication Optimization for Distributed Execution of Graph Neural Networks.
Kurt, Süreyya Emre, et al. [Paper]
- [IPDPS 2023] Betty: Enabling Large-Scale GNN Training with Batch-Level Graph Partitioning.
Yang, Shuangyan, et al. [Paper]
- [LOG 2023] PyTorch-Geometric Edge - a Library for Learning Representations of Graph Edges.
Bielak, Piotr, and Tomasz Jan Kajdanowicz. [Paper]
- [LOG 2023] FreshGNN: Reducing Memory Access via Stable Historical Embeddings for Graph Neural Network Training.
Kezhao Huang, Haitian Jiang, et al. [Paper]
- [PPoPP 2023] PiPAD: Pipelined and Parallel Dynamic GNN Training on GPUs.
Wang, Chunyang, Desen Sun, and Yuebin Bai. [Paper]
- [PPoPP 2023] DSP: Efficient GNN Training with Multiple GPUs.
Cai Z, Zhou Q, et al. [Paper]
- [ICS 2023] BitGNN: Unleashing the Performance Potential of Binary Graph Neural Networks on GPUs.
Chen, Jou-An, Hsin-Hsuan Sung, Xipeng Shen, et al. [Paper]
- [JSAC 2022] GNN at the Edge: Cost-Efficient Graph Neural Network Processing Over Distributed Edge Servers.
Zeng, Liekang, Chongyu Yang, Peng Huang, et al. [Paper]
- [TC 2023] TurboGNN: Improving the End-to-End Performance for Sampling-Based GNN Training on GPUs.
Wenchao Wu, Xuanhua Shi, et al. [Paper]
- [TPDS 2023] TurboMGNN: Improving Concurrent GNN Training Tasks on GPU With Fine-Grained Kernel Fusion.
Wenchao Wu, Xuanhua Shi, et al. [Paper]
- [IPDPS 2023] HyScale-GNN: A Scalable Hybrid GNN Training System on Single-Node Heterogeneous Architecture.
Yi-Chien Lin, Viktor Prasanna, et al. [Paper]
- [INFOCOM 2023] Two-level Graph Caching for Expediting Distributed GNN Training.
Ziyue Luo et al. [Paper]
- [NSDI 2023] BGL: GPU-Efficient GNN Training by Optimizing Graph Data I/O and Preprocessing.
Liu T, Chen Y, Li D, et al. [Paper]
- [arXiv 2023] GraphTensor: Comprehensive GNN-Acceleration Framework for Efficient Parallel Processing of Massive Datasets.
Jang J, Kwon M, Gouk D, et al. [Paper]
- [arXiv 2022] DistGNN-MB: Distributed Large-Scale Graph Neural Network Training on x86 via Minibatch Sampling.
Vasimuddin M, Mohanty R, Misra S, et al. [Paper]
- [VLDB 2022] ByteGNN: efficient graph neural network training at large scale.
Zheng C, Chen H, Cheng Y, et al. [Paper]
- [EuroSys 2022] GNNLab: a factored system for sample-based GNN training over GPUs.
Yang J, Tang D, Song X, et al. [Paper]
- [PPoPP 2022] Rethinking graph data placement for graph neural network training on multiple GPUs.
Song S, Jiang P. [Paper]
- [TC 2022] Multi-node Acceleration for Large-scale GCNs.
Sun, Gongjian, et al. [Paper]
- [ISCA 2022] Graphite: optimizing graph neural networks on CPUs through cooperative software-hardware techniques.
Gong Z, Ji H, Yao Y, et al. [Paper]
- [PPoPP 2022] QGTC: accelerating quantized graph neural networks via GPU tensor core.
Wang Y, Feng B, Ding Y. [Paper]
- [SIGMOD 2022] NeutronStar: Distributed GNN Training with Hybrid Dependency Management.
Wang Q, Zhang Y, Wang H, et al. [Paper]
- [MLSys 2022] Accelerating Training and Inference of Graph Neural Networks with Fast Sampling and Pipelining.
Kaler T, Stathas N, Ouyang A, et al. [Paper]
- [KDD 2022] Distributed Hybrid CPU and GPU training for Graph Neural Networks on Billion-Scale Heterogeneous Graphs.
Zheng D, Song X, Yang C, et al. [Paper]
- [FPGA 2022] SPA-GCN: Efficient and Flexible GCN Accelerator with Application for Graph Similarity Computation.
Sohrabizadeh A, Chi Y, Cong J. [Paper]
- [HPDC 2022] TLPGNN: A Lightweight Two-Level Parallelism Paradigm for Graph Neural Network Computation on GPU.
Fu Q, Ji Y, Huang H H. [Paper]
- [Concurrency and Computation 2022] BRGraph: An efficient graph neural network training system by reusing batch data on GPU.
Ge K, Ran Z, Lai Z, et al. [Paper]
- [arXiv 2022] Improved Aggregating and Accelerating Training Methods for Spatial Graph Neural Networks on Fraud Detection.
Zeng Y, Tang J. [Paper]
- [arXiv 2022] Marius++: Large-scale training of graph neural networks on a single machine.
Waleffe R, Mohoney J, Rekatsinas T, et al. [Paper]
- [HPCA 2021] DistGNN: scalable distributed training for large-scale graph neural networks.
Md V, Misra S, Ma G, et al. [Paper]
- [CLUSTER 2021] 2PGraph: Accelerating GNN Training over Large Graphs on GPU Clusters.
Zhang L, Lai Z, Li S, et al. [Paper]
- [APSys 2021] Accelerating GNN training with locality-aware partial execution.
Kim T, Hwang C, Park K S, et al. [Paper]
- [JPDC 2021] Accurate, efficient and scalable training of Graph Neural Networks.
- [JPDC 2021] High performance GPU primitives for graph-tensor learning operations.
Zhang T, Kan W, Liu X Y. [Paper]
- [OSDI 2021] Dorylus: Affordable, Scalable, and Accurate GNN Training with Distributed CPU Servers and Serverless Threads.
- [OSDI 2021] GNNAdvisor: An Adaptive and Efficient Runtime System for GNN Acceleration on GPUs.
- [EuroSys 2021] DGCL: an efficient communication library for distributed GNN training.
Cai Z, Yan X, Wu Y, et al. [Paper]
- [EuroSys 2021] FlexGraph: a flexible and efficient distributed framework for GNN training.
- [EuroSys 2021] Seastar: vertex-centric programming for graph neural networks.
Wu Y, Ma K, Cai Z, et al. [Paper]
- [TPDS 2021] Efficient Data Loader for Fast Sampling-Based GNN Training on Large Graphs.
Bai Y, Li C, Lin Z, et al. [Paper]
- [GNNSys 2021] FedGraphNN: A Federated Learning System and Benchmark for Graph Neural Networks.
He C, Balasubramanian K, Ceyani E, et al. [Paper] [Poster] [GitHub]
- [GNNSys 2021] Graphiler: A Compiler for Graph Neural Networks.
- [GNNSys 2021] IGNNITION: A framework for fast prototyping of Graph Neural Networks.
Pujol Perich D, Suárez-Varela Maciá J R, Ferriol Galmés M, et al. [Paper] [Poster]
- [GNNSys 2021] Load Balancing for Parallel GNN Training.
- [IPDPS 2021] FusedMM: A Unified SDDMM-SpMM Kernel for Graph Embedding and Graph Neural Networks.
- [IPCCC 2021] Accelerate graph neural network training by reusing batch data on GPUs.
Ran Z, Lai Z, Zhang L, et al. [Paper]
- [arXiv 2021] PyTorch Geometric Temporal: Spatiotemporal Signal Processing with Neural Machine Learning Models.
- [arXiv 2021] QGTC: Accelerating Quantized GNN via GPU Tensor Core.
- [arXiv 2021] TC-GNN: Accelerating Sparse Graph Neural Network Computation Via Dense Tensor Core on GPUs.
- [ICCAD 2020] fuseGNN: accelerating graph convolutional neural network training on GPGPU.
- [VLDB 2020] AGL: a scalable system for industrial-purpose graph machine learning.
Zhang D, Huang X, Liu Z, et al. [Paper]
- [SC 2020] FeatGraph: A Flexible and Efficient Backend for Graph Neural Network Systems.
- [MLSys 2020] Improving the Accuracy, Scalability, and Performance of Graph Neural Networks with Roc.
Jia Z, Lin S, Gao M, et al. [Paper]
- [CVPR 2020] L2-GCN: Layer-Wise and Learned Efficient Training of Graph Convolutional Networks.
You Y, Chen T, Wang Z, et al. [Paper]
- [TPDS 2020] EDGES: An Efficient Distributed Graph Embedding System on GPU Clusters.
Yang D, Liu J, Lai J. [Paper]
- [AccML 2020] GIN: High-Performance, Scalable Inference for Graph Neural Networks.
Fu Q, Huang H H. [Paper]
- [SoCC 2020] PaGraph: Scaling GNN training on large graphs via computation-aware caching.
Lin Z, Li C, Miao Y, et al. [Paper]
- [IPDPS 2020] PCGCN: Partition-centric processing for accelerating graph convolutional network.
Tian C, Ma L, Yang Z, et al. [Paper]
- [arXiv 2020] Deep Graph Library optimizations for Intel(R) x86 architecture.
Avancha S, Md V, Misra S, et al. [Paper]
- [IA3 2020] DistDGL: Distributed Graph Neural Network Training for Billion-Scale Graphs.
Zheng D, Ma C, Wang M, et al. [Paper]
- [arXiv 2019] Deep Graph Library: Towards Efficient and Scalable Deep Learning on Graphs.
Wang M, Yu L, et al. [Paper] [GitHub] [Home Page]
- [ICLR 2019] Fast Graph Representation Learning with PyTorch Geometric.
Fey M, Lenssen J E. [Paper] [GitHub] [Documentation]
- [KDD 2019] AliGraph: a comprehensive graph neural network platform.
- [SysML 2019] PyTorch-BigGraph: A Large-scale Graph Embedding System.
- [ATC 2019] NeuGraph: Parallel Deep Neural Network Computation on Large Graphs.
Ma L, Yang Z, Miao Y, et al. [Paper]
- [arXiv 2018] Relational inductive biases, deep learning, and graph networks.
Battaglia P W, Hamrick J B, Bapst V, et al. [Paper] [GitHub]
- [TC 2023] Approximation- and Quantization-Aware Training for Graph Neural Networks.
Novkin R, Klemme F, Amrouch H. [Paper]
- [TC 2023] SUGAR: Efficient Subgraph-level Training via Resource-aware Graph Partitioning.
Xue Z, Yang Y, Marculescu R. [Paper]
- [ASC 2023] Imbalanced node classification with Graph Neural Networks: A unified approach leveraging homophily and label information.
Lv D, Xu Z, Zhang J, et al. [Paper]
- [arXiv 2023] AdaptGear: Accelerating GNN Training via Adaptive Subgraph-Level Kernels on GPUs.
Yangjie Zhou, Jingwen Leng, et al. [Paper]
- [DAC 2023] Hardware-Aware Graph Neural Network Automated Design for Edge Computing Platforms.
Ao Zhou, Jianlei Yang, et al. [Paper]
- [arXiv 2023] Provably Convergent Subgraph-wise Sampling for Fast GNN Training.
Wang Jie, Shi Zhihao, et al. [Paper]
- [arXiv 2023] LazyGNN: Large-Scale Graph Neural Networks via Lazy Propagation.
Rui Xue, Haoyu Han, et al. [Paper]
- [TPDS 2022] Accelerating Backward Aggregation in GCN Training With Execution Path Preparing on GPUs.
Shaoxian Xu, Zhiyuan Shao, et al. [Paper]
- [AAAI 2022] Early-Bird GCNs: Graph-Network Co-Optimization Towards More Efficient GCN Training and Inference via Drawing Early-Bird Lottery Tickets.
- [ICLR 2022] Adaptive Filters for Low-Latency and Memory-Efficient Graph Neural Networks.
- [ICLR 2022] Graph-less Neural Networks: Teaching Old MLPs New Tricks Via Distillation.
- [ICLR 2022] EXACT: Scalable Graph Neural Networks Training via Extreme Activation Compression.
Liu Z, Zhou K, Yang F, et al. [Paper]
- [ICLR 2022] IGLU: Efficient GCN Training via Lazy Updates.
Narayanan S D, Sinha A, Jain P, et al. [Paper]
- [ICLR 2022] PipeGCN: Efficient full-graph training of graph convolutional networks with pipelined feature communication.
- [ICLR 2022] Learn Locally, Correct Globally: A Distributed Algorithm for Training Graph Neural Networks.
Ramezani M, Cong W, Mahdavi M, et al. [Paper]
- [ICML 2022] Efficient Computation of Higher-Order Subgraph Attribution via Message Passing.
Xiong et al. [Paper]
- [ICML 2022] Generalization Guarantee of Training Graph Convolutional Networks with Graph Topology Sampling.
Li H, Weng M, Liu S, et al. [Paper]
- [ICML 2022] Scalable Deep Gaussian Markov Random Fields for General Graphs.
- [ICML 2022] GraphFM: Improving Large-Scale GNN Training via Feature Momentum.
- [SC 2022] CoGNN: Efficient Scheduling for Concurrent GNN Training on GPUs.
Sun Q, Liu Y, Yang H, et al. [Paper]
- [MLSys 2022] BNS-GCN: Efficient Full-Graph Training of Graph Convolutional Networks with Boundary Node Sampling.
- [MLSys 2022] Graphiler: Optimizing Graph Neural Networks with Message Passing Data Flow Graph.
Xie Z, Wang M, Ye Z, et al. [Paper]
- [MLSys 2022] Sequential Aggregation and Rematerialization: Distributed Full-batch Training of Graph Neural Networks on Large Graphs.
- [WWW 2022] Fograph: Enabling Real-Time Deep Graph Inference with Fog Computing.
Zeng L, Huang P, Luo K, et al. [Paper]
- [WWW 2022] PaSca: A Graph Neural Architecture Search System under the Scalable Paradigm.
Zhang W, Shen Y, Lin Z, et al. [Paper]
- [WWW 2022] Resource-Efficient Training for Large Graph Convolutional Networks with Label-Centric Cumulative Sampling.
Lin M, Li W, Li D, et al. [Paper]
- [FPGA 2022] DecGNN: A Framework for Mapping Decoupled GNN Models onto CPU-FPGA Heterogeneous Platform.
Zhang B, Zeng H, Prasanna V K. [Paper]
- [FPGA 2022] HP-GNN: Generating High Throughput GNN Training Implementation on CPU-FPGA Heterogeneous Platform.
Lin Y C, Zhang B, Prasanna V. [Paper]
- [arXiv 2022] SUGAR: Efficient Subgraph-level Training via Resource-aware Graph Partitioning.
Xue Z, Yang Y, Yang M, et al. [Paper]
- [CAL 2022] Characterizing and Understanding Distributed GNN Training on GPUs.
Lin H, Yan M, Yang X, et al. [Paper]
- [ICLR 2021] Degree-Quant: Quantization-Aware Training for Graph Neural Networks.
Tailor S A, Fernandez-Marques J, Lane N D. [Paper]
- [ICLR 2021 Open Review] FGNAS: FPGA-Aware Graph Neural Architecture Search.
Lu Q, Jiang W, Jiang M, et al. [Paper]
- [ICML 2021] GraphNorm: A Principled Approach to Accelerating Graph Neural Network Training.
Cai T, Luo S, Xu K, et al. [Paper]
- [ICML 2021] Optimization of Graph Neural Networks: Implicit Acceleration by Skip Connections and More Depth.
Xu K, Zhang M, Jegelka S, et al. [Paper]
- [KDD 2021] DeGNN: Improving Graph Neural Networks with Graph Decomposition.
Miao X, Gürel N M, Zhang W, et al. [Paper]
- [KDD 2021] Performance-Adaptive Sampling Strategy Towards Fast and Accurate Graph Neural Networks.
Yoon M, Gervet T, Shi B, et al. [Paper]
- [KDD 2021] Global Neighbor Sampling for Mixed CPU-GPU Training on Giant Graphs.
Dong J, Zheng D, Yang L F, et al. [Paper]
- [CVPR 2021] Binary Graph Neural Networks.
Bahri M, Bahl G, Zafeiriou S. [Paper]
- [CVPR 2021] Bi-GCN: Binary Graph Convolutional Network.
- [NeurIPS 2021] Graph Differentiable Architecture Search with Structure Learning.
Qin Y, Wang X, Zhang Z, et al. [Paper]
- [ICCAD 2021] G-CoS: GNN-Accelerator Co-Search Towards Both Better Accuracy and Efficiency.
Zhang Y, You H, Fu Y, et al. [Paper]
- [GNNSys 2021] Efficient Data Loader for Fast Sampling-based GNN Training on Large Graphs.
- [GNNSys 2021] Efficient Distribution for Deep Learning on Large Graphs.
- [GNNSys 2021] Adaptive Filters and Aggregator Fusion for Efficient Graph Convolutions.
- [GNNSys 2021] Adaptive Load Balancing for Parallel GNN Training.
Su Q, Wang M, Zheng D, et al. [Paper]
- [ICML 2021] A Unified Lottery Ticket Hypothesis for Graph Neural Networks.
Chen T, Sui Y, Chen X, et al. [Paper]
- [PVLDB 2021] Accelerating Large Scale Real-Time GNN Inference using Channel Pruning.
- [SC 2021] Efficient scaling of dynamic graph neural networks.
Chakaravarthy V T, Pandian S S, Raje S, et al. [Paper]
- [RTAS 2021] Optimizing Memory Efficiency of Graph Neural Networks on Edge Computing Platforms.
- [ICDM 2021] GraphANGEL: Adaptive aNd Structure-Aware Sampling on Graph NEuraL Networks.
Peng J, Shen Y, Chen L. [Paper]
- [GLSVLSI 2021] Co-Exploration of Graph Neural Network and Network-on-Chip Design Using AutoML.
Manu D, Huang S, Ding C, et al. [Paper]
- [arXiv 2021] Edge-featured Graph Neural Architecture Search.
Cai S, Li L, Han X, et al. [Paper]
- [arXiv 2021] GNNSampler: Bridging the Gap between Sampling Algorithms of GNN and Hardware.
- [KDD 2020] TinyGNN: Learning Efficient Graph Neural Networks.
Yan B, Wang C, Guo G, et al. [Paper]
- [ICLR 2020] GraphSAINT: Graph Sampling Based Inductive Learning Method.
- [NeurIPS 2020] GCN meets GPU: Decoupling “When to Sample” from “How to Sample”.
Ramezani M, Cong W, Mahdavi M, et al. [Paper]
- [SC 2020] Reducing Communication in Graph Neural Network Training.
- [ICTAI 2020] SGQuant: Squeezing the Last Bit on Graph Neural Networks with Specialized Quantization.
Feng B, Wang Y, Li X, et al. [Paper]
- [arXiv 2020] Learned Low Precision Graph Neural Networks.
Zhao Y, Wang D, Bates D, et al. [Paper]
- [arXiv 2020] Distributed Training of Graph Convolutional Networks using Subgraph Approximation.
Angerd A, Balasubramanian K, Annavaram M. [Paper]
- [IPDPS 2019] Accurate, efficient and scalable graph embedding.
Zeng H, Zhou H, Srivastava A, et al. [Paper]
- [CAL 2023] Architectural Implications of GNN Aggregation Programming Abstractions.
Qi Y, Yang J, Zhou A, et al. [Paper]
- [arXiv 2023] A Survey on Graph Neural Network Acceleration: Algorithms, Systems, and Customized Hardware.
S Zhang, A Sohrabizadeh, C Wan, et al. [Paper]
- [arXiv 2022] A Comprehensive Survey on Distributed Training of Graph Neural Networks.
Lin H, Yan M, Ye X, et al. [Paper]
- [arXiv 2022] Distributed Graph Neural Network Training: A Survey.
Shao Y, Li H, Gu X, et al. [Paper]
- [CAL 2022] Characterizing and Understanding HGNNs on GPUs.
Yan M, Zou M, Yang X, et al. [Paper]
- [arXiv 2022] Parallel and Distributed Graph Neural Networks: An In-Depth Concurrency Analysis.
Besta M, Hoefler T. [Paper]
- [IJCAI 2022] Survey on Graph Neural Network Acceleration: An Algorithmic Perspective.
Liu X, Yan M, Deng L, et al. [Paper]
- [ACM Computing Surveys 2022] A Practical Tutorial on Graph Neural Networks.
Ward I R, Joyner J, Lickfold C, et al. [Paper]
- [CAL 2022] Characterizing and Understanding Distributed GNN Training on GPUs.
Lin H, Yan M, Yang X, et al. [Paper]
- [Access 2022] Analyzing GCN Aggregation on GPU.
Kim I, Jeong J, Oh Y, et al. [Paper]
- [GNNSys 2021] Analyzing the Performance of Graph Neural Networks with Pipe Parallelism.
- [IJCAI 2021] Automated Machine Learning on Graphs: A Survey.
Zhang Z, Wang X, Zhu W. [Paper]
- [PPoPP 2021] Understanding and bridging the gaps in current GNN performance optimizations.
Huang K, Zhai J, Zheng Z, et al. [Paper]
- [ISCAS 2021] Characterizing the Communication Requirements of GNN Accelerators: A Model-Based Approach.
Guirado R, Jain A, Abadal S, et al. [Paper]
- [ISPASS 2021] GNNMark: A Benchmark Suite to Characterize Graph Neural Network Training on GPUs.
Baruah T, Shivdikar K, Dong S, et al. [Paper]
- [ISPASS 2021] Performance Analysis of Graph Neural Network Frameworks.
Wu J, Sun J, Sun H, et al. [Paper]
- [CAL 2021] Making a Better Use of Caches for GCN Accelerators with Feature Slicing and Automatic Tile Morphing.
Yoo M, Song J, Lee J, et al. [Paper]
- [arXiv 2021] Understanding GNN Computational Graph: A Coordinated Computation, IO, and Memory Perspective.
Zhang H, Yu Z, Dai G, et al. [Paper]
- [arXiv 2021] Understanding the Design Space of Sparse/Dense Multiphase Dataflows for Mapping Graph Neural Networks on Spatial Accelerators.
Garg R, Qin E, Muñoz-Martínez F, et al. [Paper]
- [arXiv 2021] A Taxonomy for Classification and Comparison of Dataflows for GNN Accelerators.
Garg R, Qin E, Martínez F M, et al. [Paper]
- [arXiv 2021] Graph Neural Networks: Methods, Applications, and Opportunities.
Waikhom L, Patgiri R. [Paper]
- [arXiv 2021] Sampling methods for efficient training of graph convolutional networks: A survey.
Liu X, Yan M, Deng L, et al. [Paper]
- [KDD 2020] Deep Graph Learning: Foundations, Advances and Applications.
Rong Y, Xu T, Huang J, et al. [Paper]
- [TKDE 2020] Deep Learning on Graphs: A Survey.
Zhang Z, Cui P, Zhu W. [Paper]
- [CAL 2020] Characterizing and Understanding GCNs on GPU.
Yan M, Chen Z, Deng L, et al. [Paper]
- [arXiv 2020] Computing Graph Neural Networks: A Survey from Algorithms to Accelerators.
Abadal S, Jain A, Guirado R, et al. [Paper]