GreatX is great!
"Reliability" on graphs refers to robustness against the following threats:
- Inherent noise
- Distribution shift
- Adversarial attacks

For more details, please refer to our paper Recent Advances in Reliable Deep Graph Learning: Inherent Noise, Distribution Shift, and Adversarial Attack.
- November 2, 2022: We are planning to release GreatX 0.1.0 this month, stay tuned!
- June 30, 2022: GraphWar has been renamed to GreatX.
- June 9, 2022: GraphWar v0.1.0 has been released. We also provide the documentation along with numerous examples.
- May 27, 2022: GraphWar has been refactored with PyTorch Geometric (PyG); the old code based on DGL can be found here. We will soon release the first version of GreatX, stay tuned!
NOTE: GreatX is still in the early stages of development, and the API is likely to keep changing. If you are interested in this project, don't hesitate to contact me or open a PR directly.
Please make sure you have installed PyTorch and PyTorch Geometric (PyG).

Install from PyPI:

```bash
# Coming soon
pip install -U greatx
```

or install from source:

```bash
# Recommended
git clone https://github.com/EdisonLeeeee/GreatX.git && cd GreatX
pip install -e . --verbose
```

where `-e` means "editable" mode so you don't have to reinstall every time you make changes.
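To verify the installation, a quick sanity check (this assumes `greatx` exposes a `__version__` attribute, which is common practice but not guaranteed):

```python
# Sanity check: import the package and print its version
# (assumes a `__version__` attribute exists).
import greatx
print(greatx.__version__)
```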
Assume that you have a `torch_geometric.data.Data` instance `data` that describes your graph. Take `GCN` as an example:
```python
from greatx.nn.models import GCN
from greatx.training import Trainer
from torch_geometric.datasets import Planetoid

# Any PyG dataset is available!
dataset = Planetoid(root='.', name='Cora')
data = dataset[0]

model = GCN(dataset.num_features, dataset.num_classes)
trainer = Trainer(model, device='cuda:0')  # or 'cpu'
trainer.fit(data, mask=data.train_mask)
trainer.evaluate(data, mask=data.test_mask)
```
Performing a targeted attack, e.g., with `RandomAttack`:

```python
from greatx.attack.targeted import RandomAttack

attacker = RandomAttack(data)
attacker.attack(1, num_budgets=3)  # attack target node `1` with a budget of `3` edges

attacked_data = attacker.data()
edge_flips = attacker.edge_flips()
```
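To quantify the attack's effect, one can re-evaluate the trained model on the perturbed graph. A minimal sketch, reusing the `trainer` fitted in the quick start above (the `target_mask` built here is purely illustrative):

```python
import torch

# Illustrative: build a boolean mask selecting only the attacked node `1`;
# any node-level mask accepted by `evaluate` would work.
target_mask = torch.zeros(data.num_nodes, dtype=torch.bool)
target_mask[1] = True

# Compare the model's performance on the clean vs. the attacked graph.
trainer.evaluate(data, mask=target_mask)
trainer.evaluate(attacked_data, mask=target_mask)
```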
Performing an untargeted attack:

```python
from greatx.attack.untargeted import RandomAttack

attacker = RandomAttack(data)
attacker.attack(num_budgets=0.05)  # attack the graph by perturbing 5% of its edges

attacked_data = attacker.data()
edge_flips = attacker.edge_flips()
```
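Likewise, the global impact of an untargeted attack can be measured by comparing test performance before and after the perturbation, again reusing the `trainer` from the quick start (a sketch; the exact return value of `evaluate` is not specified here):

```python
# Test performance on the clean graph vs. the perturbed graph.
clean_metrics = trainer.evaluate(data, mask=data.test_mask)
attacked_metrics = trainer.evaluate(attacked_data, mask=data.test_mask)
print(clean_metrics, attacked_metrics)
```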
In detail, the following methods are currently implemented:
Targeted attacks:

| Methods | Descriptions | Examples |
| --- | --- | --- |
| RandomAttack | A simple random method that chooses edges to flip randomly. | [Example] |
| DICEAttack | Waniek et al. Hiding Individuals and Communities in a Social Network, Nature Human Behaviour'16 | [Example] |
| Nettack | Zügner et al. Adversarial Attacks on Neural Networks for Graph Data, KDD'18 | [Example] |
| FGAttack | Chen et al. Fast Gradient Attack on Network Embedding, arXiv'18 | [Example] |
| GFAttack | Chang et al. A Restricted Black-box Adversarial Framework Towards Attacking Graph Embedding Models, AAAI'20 | [Example] |
| IGAttack | Wu et al. Adversarial Examples on Graph Data: Deep Insights into Attack and Defense, IJCAI'19 | [Example] |
| SGAttack | Li et al. Adversarial Attack on Large Scale Graph, TKDE'21 | [Example] |
| PGDAttack | Xu et al. Topology Attack and Defense for Graph Neural Networks: An Optimization Perspective, IJCAI'19 | [Example] |
Untargeted attacks:

| Methods | Descriptions | Examples |
| --- | --- | --- |
| RandomAttack | A simple random method that chooses edges to flip randomly. | [Example] |
| DICEAttack | Waniek et al. Hiding Individuals and Communities in a Social Network, Nature Human Behaviour'16 | [Example] |
| FGAttack | Chen et al. Fast Gradient Attack on Network Embedding, arXiv'18 | [Example] |
| Metattack | Zügner et al. Adversarial Attacks on Graph Neural Networks via Meta Learning, ICLR'19 | [Example] |
| IGAttack | Wu et al. Adversarial Examples on Graph Data: Deep Insights into Attack and Defense, IJCAI'19 | [Example] |
| PGDAttack | Xu et al. Topology Attack and Defense for Graph Neural Networks: An Optimization Perspective, IJCAI'19 | [Example] |
Injection attacks:

| Methods | Descriptions | Examples |
| --- | --- | --- |
| RandomInjection | A simple random method that chooses nodes to inject randomly. | [Example] |
| AdvInjection | The 2nd place solution of KDD Cup 2020, team: ADVERSARIES. | [Example] |
Backdoor attacks:

| Methods | Descriptions | Examples |
| --- | --- | --- |
| LGCBackdoor | Chen et al. Neighboring Backdoor Attacks on Graph Convolutional Network, arXiv'22 | [Example] |
| FGBackdoor | Chen et al. Neighboring Backdoor Attacks on Graph Convolutional Network, arXiv'22 | [Example] |
Supervised GNN models:

| Methods | Descriptions | Examples |
| --- | --- | --- |
| GCN | Kipf et al. Semi-Supervised Classification with Graph Convolutional Networks, ICLR'17 | [Example] |
| SGC | Wu et al. Simplifying Graph Convolutional Networks, ICLR'19 | [Example] |
| GAT | Veličković et al. Graph Attention Networks, ICLR'18 | [Example] |
| DAGNN | Liu et al. Towards Deeper Graph Neural Networks, KDD'20 | [Example] |
| APPNP | Klicpera et al. Predict then Propagate: Graph Neural Networks meet Personalized PageRank, ICLR'19 | [Example] |
| JKNet | Xu et al. Representation Learning on Graphs with Jumping Knowledge Networks, ICML'18 | [Example] |
| TAGCN | Du et al. Topology Adaptive Graph Convolutional Networks, arXiv'17 | [Example] |
| SSGC | Zhu et al. Simple Spectral Graph Convolution, ICLR'21 | [Example] |
| DGC | Wang et al. Dissecting the Diffusion Process in Linear Graph Convolutional Networks, NeurIPS'21 | [Example] |
| NLGCN, NLMLP, NLGAT | Liu et al. Non-Local Graph Neural Networks, TPAMI'22 | [Example] |
| SpikingGCN | Zhu et al. Spiking Graph Convolutional Networks, IJCAI'22 | [Example] |
Unsupervised/self-supervised models:

| Methods | Descriptions | Examples |
| --- | --- | --- |
| DGI | Veličković et al. Deep Graph Infomax, ICLR'19 | [Example] |
| GRACE | Zhu et al. Deep Graph Contrastive Representation Learning, ICML'20 | [Example] |
| CCA-SSG | Zhang et al. From Canonical Correlation Analysis to Self-supervised Graph Neural Networks, NeurIPS'21 | [Example] |
| GGD | Zheng et al. Rethinking and Scaling Up Graph Contrastive Learning: An Extremely Efficient Approach with Group Discrimination, NeurIPS'22 | [Example] |
More details on the literature and the official implementations can be found at Awesome Graph Adversarial Learning.
Defense strategies:

| Methods | Descriptions | Examples |
| --- | --- | --- |
| DropEdge | Rong et al. DropEdge: Towards Deep Graph Convolutional Networks on Node Classification, ICLR'20 | [Example] |
| DropNode | You et al. Graph Contrastive Learning with Augmentations, NeurIPS'20 | [Example] |
| DropPath | Li et al. MaskGAE: Masked Graph Modeling Meets Graph Autoencoders, arXiv'22 | [Example] |
| FeaturePropagation | Rossi et al. On the Unreasonable Effectiveness of Feature Propagation in Learning on Graphs with Missing Node Features, LoG'22 | [Example] |
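As a rough illustration of the DropEdge idea (randomly removing a fraction of edges during training to regularize message passing), here is a sketch using PyG's own `dropout_edge` utility rather than GreatX's implementation:

```python
from torch_geometric.utils import dropout_edge  # PyG >= 2.1

# Randomly drop 20% of the edges; `edge_mask` marks the kept edges.
# During training this is re-sampled every epoch; at inference time
# the full graph is used (pass training=False or skip the dropout).
edge_index, edge_mask = dropout_edge(data.edge_index, p=0.2)
```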
Miscellaneous:

| Methods | Descriptions | Examples |
| --- | --- | --- |
| Centered Kernel Alignment (CKA) | Nguyen et al. Do Wide and Deep Networks Learn the Same Things? Uncovering How Neural Network Representations Vary with Width and Depth, ICLR'21 | [Example] |
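For background, linear CKA between two representation matrices can be written in a few lines; the helper below is an illustrative standalone sketch following the standard definition, not GreatX's API:

```python
import torch

def linear_cka(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """Linear CKA between representations of shape (num_nodes, dim);
    values close to 1 indicate highly similar representations."""
    x = x - x.mean(dim=0, keepdim=True)  # center each feature column
    y = y - y.mean(dim=0, keepdim=True)
    numerator = (y.t() @ x).norm(p='fro') ** 2
    denominator = (x.t() @ x).norm(p='fro') * (y.t() @ y).norm(p='fro')
    return numerator / denominator
```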
Known issue: untargeted attacks suffer from performance degradation, as they also do in DeepRobust, when a validation set is used during training with model picking. This phenomenon has also been reported in Black-box Gradient Attack on Graph Neural Networks: Deeper Insights in Graph-based Attack and Defense.