Stochastic Blockmodels meet Graph Neural Networks
Nikhil Mehta Lawrence Carin Piyush Rai
International Conference on Machine Learning (ICML) 2019, Long Beach, CA
Problem Statement
Goal: learn sparse node embeddings for graphs, given an adjacency matrix $A \in \{0,1\}^{N \times N}$.
Background: Stochastic Blockmodels
SBM (latent class model): $z_i \sim \mathrm{Multinoulli}(\pi)$, $A_{ij} \sim \mathrm{Bernoulli}(z_i^\top W z_j)$.
LFRM (latent feature model, overlapping communities): $Z \sim \mathrm{IBP}(\alpha)$; $\Lambda_{k,k'} \sim \mathcal{N}(0, \sigma_\lambda^2)$; $A_{ij} \sim \mathrm{Bernoulli}(\sigma(z_i^\top \Lambda z_j))$.
Tasks: link prediction via $\sigma(z_i^\top z_j)$, or node classification: $\mathrm{softmax}(g(z))$.
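As a concrete illustration of the LFRM-style likelihood above, here is a minimal NumPy sketch that samples a graph from overlapping binary community memberships. The sizes ($N = 30$, $K = 4$), the membership probability 0.3, and the Gaussian scale for $\Lambda$ are illustrative assumptions, not values from the slides.

```python
import numpy as np

rng = np.random.default_rng(0)

N, K = 30, 4  # nodes, communities (illustrative sizes)

# Overlapping memberships: each node may belong to several communities.
Z = (rng.random((N, K)) < 0.3).astype(float)   # binary membership matrix, N x K
Lam = rng.normal(0.0, 1.0, (K, K))             # community-interaction weights

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Edge probabilities: sigma(z_i^T Lambda z_j), as in the LFRM likelihood.
P = sigmoid(Z @ Lam @ Z.T)
A = (rng.random((N, N)) < P).astype(int)
np.fill_diagonal(A, 0)                         # no self-loops
```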
Proposed Model: DGLFRM
Node embeddings are inferred via a Graph Neural Network encoder, and each embedding is the element-wise product of two other latent variables: $z_n = b_n \odot r_n$.
$b_n \in \{0,1\}^K$ encodes the binary community memberships (cluster assignments); the IBP prior allows the model to infer the "active communities" for a given truncation level $K$.
$r_n \in \mathbb{R}^K$ defines the node-community membership strength.
Generative model:
$v_k \sim \mathrm{Beta}(\alpha, 1)$, $\pi_k = \prod_{j=1}^{k} v_j$ (stick-breaking construction), $b_{nk} \sim \mathrm{Bernoulli}(\pi_k)$.
$r_n \sim \mathcal{N}(0, I)$, with $r_n \in \mathbb{R}^K$.
$z_n = b_n \odot r_n$.
$f_n = f(z_n)$, where $f$ is a multi-layered perceptron.
$p(A_{nm} \mid f_n, f_m) = \sigma(f_n^\top f_m)$.
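The generative process above can be sketched end-to-end in NumPy. The truncation $K$, the IBP parameter $\alpha$, and the placeholder MLP weights are illustrative assumptions, not the authors' settings.

```python
import numpy as np

rng = np.random.default_rng(1)
N, K, alpha = 20, 10, 2.0  # illustrative truncation level K and IBP parameter

# Stick-breaking IBP prior over community activations.
v = rng.beta(alpha, 1.0, K)
pi = np.cumprod(v)                             # pi_k = prod_{j<=k} v_j, decreasing in k
b = (rng.random((N, K)) < pi).astype(float)    # b_nk ~ Bernoulli(pi_k)
r = rng.normal(0.0, 1.0, (N, K))               # membership strengths r_n ~ N(0, I)
z = b * r                                      # sparse embeddings z_n = b_n ⊙ r_n

# A one-hidden-layer MLP f (weights are placeholders) and the edge likelihood.
W1 = rng.normal(0, 0.1, (K, 16))
W2 = rng.normal(0, 0.1, (16, 8))
f = np.tanh(z @ W1) @ W2
P = 1.0 / (1.0 + np.exp(-(f @ f.T)))           # p(A_nm = 1) = sigma(f_n^T f_m)
A = (rng.random((N, N)) < P).astype(int)
```

Note how the stick-breaking weights $\pi_k$ decay with $k$, so higher-indexed communities are active for fewer nodes, which is what makes the embeddings sparse.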
Inference
The exact posterior is intractable, so we approximate the true posterior with a variational posterior, amortized via the GNN encoder.
Mean-field: $q_\phi(v, b, r) = \prod_{k=1}^{K} \prod_{n=1}^{N} q_\phi(v_{nk})\, q_\phi(b_{nk})\, q_\phi(r_{nk})$.
$q_\phi(r_n) = \mathcal{N}(\mu_n, \mathrm{diag}(\sigma_n^2))$.
For the Beta, the Kumaraswamy distribution is a reasonable (reparameterizable) approximation. For the Bernoulli, we use a continuous relaxation (the Concrete distribution).
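A minimal sketch of the two reparameterized surrogates mentioned above: Kumaraswamy inverse-CDF sampling as the Beta stand-in, and the Binary Concrete relaxation of the Bernoulli. The parameter values and temperature are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def kumaraswamy_sample(a, b, size, rng):
    """Reparameterized Kumaraswamy(a, b) draw via its inverse CDF."""
    u = rng.random(size)
    return (1.0 - (1.0 - u) ** (1.0 / b)) ** (1.0 / a)

def binary_concrete_sample(logit, temperature, rng):
    """Continuous relaxation of Bernoulli(sigmoid(logit)): Binary Concrete."""
    u = np.clip(rng.random(np.shape(logit)), 1e-9, 1.0 - 1e-9)
    g = np.log(u) - np.log(1.0 - u)            # logistic noise
    return 1.0 / (1.0 + np.exp(-(logit + g) / temperature))

v = kumaraswamy_sample(2.0, 1.0, 10, rng)            # relaxed stick draws in [0, 1)
b = binary_concrete_sample(np.zeros(10), 0.5, rng)   # relaxed b_nk draws in (0, 1)
```

Both samplers are differentiable functions of their parameters, which is what allows gradients of the ELBO to flow through the discrete-looking latent variables.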
We also considered a Structured Mean-Field posterior: $q_\phi(v, b, r) = \prod_{k=1}^{K} q_\phi(v_k) \prod_{n=1}^{N} q_\phi(b_{nk} \mid v)\, q_\phi(r_{nk})$.
Here the sticks $v_k$ are global (shared across all nodes), and $b_{nk} \mid v \sim \mathrm{Bernoulli}(\pi_k)$.
ELBO:
$\mathcal{L} = \sum_{n=1}^{N} \sum_{m=1}^{N} \mathbb{E}\big[\log p_\theta(A_{nm} \mid z_n, z_m)\big] + \sum_{n=1}^{N} \mathbb{E}\big[\log p_\theta(X_n \mid z_n)\big] - \sum_{n=1}^{N} \Big( \mathrm{KL}\big[q_\phi(b_n \mid v_n) \,\|\, p_\theta(b_n \mid v_n)\big] + \mathrm{KL}\big[q_\phi(r_n) \,\|\, p(r_n)\big] + \mathrm{KL}\big[q_\phi(v_n) \,\|\, p(v_n)\big] \Big)$
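The reconstruction and Gaussian KL terms of such an ELBO can be sketched in NumPy as follows. Only the $r$-related KL (Gaussian vs. standard normal) is shown; the Beta and Bernoulli KL terms are omitted, and all sizes are illustrative.

```python
import numpy as np

def gaussian_kl(mu, log_sigma2):
    """KL( N(mu, sigma^2) || N(0, I) ), summed over dimensions."""
    return 0.5 * np.sum(np.exp(log_sigma2) + mu**2 - 1.0 - log_sigma2)

def bernoulli_loglik(A, P, eps=1e-9):
    """Sum_nm [ A log P + (1 - A) log(1 - P) ]: the edge reconstruction term."""
    return np.sum(A * np.log(P + eps) + (1 - A) * np.log(1 - P + eps))

rng = np.random.default_rng(4)
N, K = 6, 3
A = (rng.random((N, N)) < 0.3).astype(int)            # toy adjacency matrix
mu = rng.normal(size=(N, K))                           # encoder means
log_sigma2 = np.zeros((N, K))                          # encoder log-variances
z = mu + np.exp(0.5 * log_sigma2) * rng.normal(size=(N, K))  # reparameterized r_n
P = 1.0 / (1.0 + np.exp(-(z @ z.T)))                   # sigma(z_n^T z_m)
elbo = bernoulli_loglik(A, P) - gaussian_kl(mu, log_sigma2)
```

Maximizing this quantity with respect to the encoder and decoder parameters is the standard VAE-style training loop; the full objective would add the Kumaraswamy and Concrete KL terms from the previous slide.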
Results
[Figure: training network vs. generated network, and the learned sparse latent space.]
Performance on the link-prediction task on five datasets.