On the Experimental Transferability of Spectral Graph Convolutional Networks
Master’s project presentation, 6/7/2020, Axel Nilsson
Outline
1. Introduction
- Spectral graph convolutional networks
- ChebNet
2. Benchmarking
Examples of graphs: meshes, social networks, molecules, the World Wide Web.

Notation:
- G: graph
- N: set of nodes
- E: set of edges
- A: adjacency matrix
- D: degree matrix
- h: node features
- e: edge features
- g: graph features
Convolutional neural networks do not translate well to graphs:
The Laplacian operator:
$$\Delta = I - D^{-1/2} A D^{-1/2}$$
Spectral decomposition: $\Delta = \Phi \Lambda \Phi^\top$, with $n$ eigenvalues $\lambda$ and eigenvectors $\Phi$.

Vanilla spectral GCN:
$$h^{\ell+1} = \xi\left(\Phi\,\hat\theta(\Lambda)\,\Phi^\top h^{\ell}\right) = \xi\left(\hat\theta(\Delta)\,h^{\ell}\right)$$

- $h$: node features
- $\xi$: non-linear activation function
- $\hat\theta$: matrix of learnable weights
- $\Phi$: eigenvectors of the Laplacian
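A minimal NumPy sketch of this vanilla layer (the function names, the ReLU choice for $\xi$, and the reduction of $\hat\theta$ to one learnable weight per eigenvalue are illustrative assumptions):

```python
import numpy as np

def normalized_laplacian(A):
    """Delta = I - D^{-1/2} A D^{-1/2} for a dense adjacency matrix A."""
    d = A.sum(axis=1)
    d_inv_sqrt = np.zeros_like(d, dtype=float)
    d_inv_sqrt[d > 0] = d[d > 0] ** -0.5
    return np.eye(A.shape[0]) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]

def vanilla_spectral_layer(A, h, theta):
    """One vanilla spectral GCN layer on node features h of shape (n, f),
    with theta holding one learnable weight per eigenvalue, shape (n,)."""
    lam, phi = np.linalg.eigh(normalized_laplacian(A))  # spectral decomposition
    h_hat = phi.T @ h                        # graph Fourier transform
    h_filt = phi @ (theta[:, None] * h_hat)  # filter each frequency separately
    return np.maximum(h_filt, 0.0)           # xi = ReLU (an assumption)
```

The eigendecomposition makes this layer expensive and ties the learned weights to one specific graph's spectrum, which is what ChebNet addresses next.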
Michaël Defferrard, Xavier Bresson, and Pierre Vandergheynst. Convolutional Neural Networks on Graphs with Fast Localized Spectral Filtering. arXiv:1606.09375.
Learned filters:
$$\hat\theta(\tilde\Delta) = \sum_{j=0}^{k} \theta_j\, T_j(\tilde\Delta)$$

- Re-normalised Laplacian: $\tilde\Delta = \frac{2}{\lambda_{\max}} \Delta - I$ re-scales the eigenvalues to $[-1, 1]$.
- Chebyshev polynomials: $T_j(x) = 2x\,T_{j-1}(x) - T_{j-2}(x)$, with $T_0 = 1$ and $T_1 = x$, recursively compute a basis for the corresponding order $k$.
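Under the assumption of one scalar $\theta_j$ per order (ChebNet proper learns a weight matrix per order), the filter can be sketched with the recursion alone, reusing normalized_laplacian from the sketch above and needing no eigendecomposition:

```python
def chebnet_filter(A, h, thetas, lam_max=2.0):
    """sum_{j=0}^{k} theta_j T_j(Delta_tilde) h, for k = len(thetas) - 1 >= 1.
    lam_max = 2.0 is the generic upper bound on normalized-Laplacian eigenvalues."""
    delta_tilde = (2.0 / lam_max) * normalized_laplacian(A) - np.eye(A.shape[0])
    t_prev, t_curr = h, delta_tilde @ h        # T_0 h and T_1 h
    out = thetas[0] * t_prev + thetas[1] * t_curr
    for theta_j in thetas[2:]:                 # T_j = 2 x T_{j-1} - T_{j-2}
        t_prev, t_curr = t_curr, 2.0 * (delta_tilde @ t_curr) - t_prev
        out = out + theta_j * t_curr
    return out
```

Because $T_j(\tilde\Delta)\,h$ only mixes features within $j$ hops, the learned filter is localised to a $k$-hop neighbourhood.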
Levie et al., 2019. Transferability of Spectral Graph Convolutional Networks.
The work of Levie et al. debunked the prejudices against vanilla spectral GCNs: “If two graphs discretise the same continuous metric space, then a spectral GCN has approximately the same repercussion on both.”
Spectral GCNs should work well on sets of graphs
Data of the MNIST Superpixels dataset (label: 0).
Each image is converted into a graph of superpixels by the SLIC transform (a construction sketch follows below).
ChebNet is compared against similar models on this task.
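A hedged sketch of such a superpixel-graph construction (the benchmark ships pre-built graphs; n_segments, the k-NN rule, and the mean-intensity node features below are assumptions):

```python
import numpy as np
from skimage.segmentation import slic
from scipy.spatial import cKDTree

def image_to_superpixel_graph(image, n_segments=75, k=8):
    """SLIC-segment a greyscale image, then connect each superpixel
    centroid to its k nearest neighbours."""
    seg = slic(image, n_segments=n_segments, compactness=10.0, channel_axis=None)
    feats, coords = [], []
    for lab in np.unique(seg):
        ys, xs = np.nonzero(seg == lab)
        coords.append((ys.mean(), xs.mean()))        # superpixel centroid
        feats.append((image[seg == lab].mean(),))    # mean intensity feature
    coords = np.asarray(coords)
    _, idx = cKDTree(coords).query(coords, k=k + 1)  # nearest incl. the node itself
    A = np.zeros((len(coords), len(coords)))
    for i, neigh in enumerate(idx[:, 1:]):           # skip the self-match
        A[i, neigh] = A[neigh, i] = 1.0              # symmetric k-NN edges
    return A, np.asarray(feats), coords
```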
Data of the ZINC dataset; node colours are related to atom type (label: -0.2070).
The task is to regress the solubility of each molecule.
ChebNet achieves good performance despite learning isotropic filters, even though molecular graphs are not representative of any underlying continuous space.
Data of the SBM Cluster dataset; the colours of the nodes represent their labels.
Graphs are generated by a stochastic block model: communities of various sizes, where each node has a probability p of being connected to the other nodes of its community and q to the rest (a generation sketch follows below).
As with ZINC, these graphs have no continuous underlying manifold.
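A generation sketch with NetworkX; the community sizes and the values of p and q below are placeholders, not the actual parameters of the CLUSTER dataset:

```python
import networkx as nx

sizes = [25, 40, 35]   # communities of various sizes (placeholder values)
p, q = 0.5, 0.05       # intra-/inter-community edge probabilities (placeholders)
probs = [[p if i == j else q for j in range(len(sizes))]
         for i in range(len(sizes))]

G = nx.stochastic_block_model(sizes, probs, seed=0)
labels = [G.nodes[v]["block"] for v in G]   # community label of each node
```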
Link: https://ogb.stanford.edu/docs/leader_graphprop/
ChebNet shows greater performance than the classical models GCN and GIN on both tasks.
MNIST image on a 4-NN lattice; the same image on a 4-NN lattice with structural edge dropout.
Structural augmentations are particular to graphs: cut a random set of edges, at a variable rate between 0 and r% of all the edges, for every graph during training (a sketch follows below).
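A minimal sketch of this augmentation on a dense adjacency matrix; the uniform draw of the rate in [0, r] follows the description above, while the function name and signature are assumptions:

```python
import numpy as np

def drop_edges(A, r, rng=None):
    """Cut a random set of edges at a rate drawn uniformly in [0, r],
    expressed as a fraction of all edges; applied per graph during training."""
    if rng is None:
        rng = np.random.default_rng()
    A = A.copy()
    iu, ju = np.triu_indices_from(A, k=1)   # each undirected edge counted once
    edges = np.flatnonzero(A[iu, ju])       # positions of existing edges
    drop = rng.choice(edges, size=int(rng.uniform(0.0, r) * edges.size),
                      replace=False)
    A[iu[drop], ju[drop]] = 0.0
    A[ju[drop], iu[drop]] = 0.0             # keep A symmetric
    return A
```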
Figure: accuracy (%) as a function of the percentage of dropped edges, comparing ChebNet trained on the 4-NN lattice with ChebNet trained with edge dropout on the 4-NN lattice.