CS224W: Machine Learning with Graphs Jure Leskovec, Stanford University
http://cs224w.stanford.edu
Example machine learning task on graphs: node classification (predict the missing labels of nodes in a network).
(Supervised) Machine Learning Lifecycle: Raw Data → Structured Data → Learning Algorithm → Model → Downstream task.
This lifecycle requires feature engineering every single time! Instead, we want to automatically learn the features.
Goal: Efficient task-independent feature learning for machine learning with graphs!
Map each node u to a d-dimensional vector z_u ∈ ℝ^d: its feature representation, or embedding.
Task: we map each node in a network into a low-dimensional space:
- Distributed representations for nodes
- Similarity of embeddings between nodes indicates their network similarity
- Encodes network information and generates node representations
Example: 2D embeddings of nodes of Zachary's Karate Club network.
Image from: Perozzi et al. DeepWalk: Online Learning of Social Representations. KDD 2014.
The modern deep learning toolbox is designed for simple sequences or grids:
- CNNs for fixed-size images/grids
- RNNs or word2vec for text/sequences
But networks are far more complex!
- Complex topological structure (i.e., no spatial locality like grids)
- No fixed node ordering or reference point (i.e., the isomorphism problem)
- Often dynamic and with multimodal features
Setup: assume we have a graph G, where:
- V is the vertex set
- A is the adjacency matrix (assumed binary)
- No node features or extra information is used
Goal: encode nodes so that similarity in the embedding space (e.g., the dot product) approximates similarity in the original network:
similarity(u, v) ≈ z_v^T z_u
where the left-hand side is the similarity of u and v in the original network (which we need to define!) and the right-hand side is the similarity of their embeddings.
1. Define an encoder (i.e., a mapping from nodes to embeddings).
2. Define a node similarity function (i.e., a measure of similarity in the original network).
3. Optimize the parameters of the encoder so that:
similarity(u, v) ≈ z_v^T z_u
where the left-hand side is similarity in the original network and the right-hand side is the similarity of the embeddings.
- Encoder: maps each node to a low-dimensional vector: enc(v) = z_v, where v is a node in the input graph and z_v is its d-dimensional embedding.
- Similarity function: specifies how the relationships in vector space map to the relationships in the original network: similarity(u, v) ≈ z_v^T z_u, i.e., the similarity of u and v in the original network is approximated by the dot product between their node embeddings.
Simplest encoding approach: the encoder is just an embedding-lookup:
enc(v) = Z v
where Z ∈ ℝ^{d×|V|} is a matrix whose columns are the node embeddings (this is what we learn!), and v ∈ 𝕀^{|V|} is an indicator vector: all zeroes except a one in the position indicating node v.
In matrix form, Z is the embedding matrix: it has one column per node (the embedding vector for that specific node), and its number of rows is the dimension/size of the embeddings.
With this simplest encoding approach (an embedding-lookup), each node is assigned a unique embedding vector. Many methods take this approach: DeepWalk, node2vec, TransE.
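For concreteness, here is a minimal sketch of such an embedding lookup in Python/NumPy; the graph size, embedding dimension, and node indexing are toy values chosen for illustration, not part of the lecture:

```python
import numpy as np

num_nodes, dim = 100, 16                       # |V| nodes, d-dimensional embeddings (toy values)
Z = 0.01 * np.random.randn(dim, num_nodes)     # embedding matrix: one column per node

def encode(v):
    """Return z_v; equivalent to Z @ one_hot(v), implemented as a column lookup."""
    return Z[:, v]

def similarity(u, v):
    """Dot-product similarity between the embeddings of nodes u and v."""
    return float(encode(u) @ encode(v))
```

The columns of Z are the free parameters that methods such as DeepWalk and node2vec optimize.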
The key design choice of these methods is how they define node similarity. E.g., should two nodes have similar embeddings if they...
- are connected?
- share neighbors?
- have similar "structural roles"?
- ...?
Material based on:
- Perozzi et al. 2014. DeepWalk: Online Learning of Social Representations. KDD.
- Grover and Leskovec. 2016. node2vec: Scalable Feature Learning for Networks. KDD.
Given a graph and a starting point, we select a neighbor of it at random, and move to this neighbor; then we select a neighbor of this point at random, and move to it, etc. The (random) sequence of points selected this way is a random walk on the graph.
Random-walk embeddings: z_u^T z_v ≈ the probability that u and v co-occur on a random walk over the network.
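A minimal sketch of an unbiased fixed-length random walk of this kind, assuming the graph is stored as an adjacency-list dictionary (the representation and function name are illustrative, not from the lecture):

```python
import random

def random_walk(adj, start, length):
    """Unbiased walk: at each step, move to a uniformly random neighbor of the current node."""
    walk = [start]
    for _ in range(length):
        neighbors = adj[walk[-1]]
        if not neighbors:              # dead end: stop the walk early
            break
        walk.append(random.choice(neighbors))
    return walk

# Toy usage on a 4-node graph
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
print(random_walk(adj, start=0, length=5))
```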
1. Estimate the probability of visiting node v on a random walk starting from node u, using some random walk strategy R.
2. Optimize embeddings to encode these random-walk statistics: similarity in the embedding space (here: the dot product, i.e., cos(θ) between embeddings up to their norms) encodes random-walk "similarity".
Why random walks?
1. Expressivity: a flexible stochastic definition of node similarity that incorporates both local and higher-order neighborhood information.
2. Efficiency: we do not need to consider all node pairs when training; we only need to consider pairs that co-occur on random walks.
Intuition: find an embedding of nodes in d dimensions that preserves similarity.
Idea: learn node embeddings such that nearby nodes are close together in the network.
Given a node u, how do we define nearby nodes?
- N_R(u) ... the neighbourhood of u, obtained by some strategy R
Given G = (V, E), our goal is to learn a mapping z : u → ℝ^d.
Log-likelihood objective:
max_z Σ_{u∈V} log P(N_R(u) | z_u)
where N_R(u) is the neighborhood of node u obtained by strategy R.
Given node u, we want to learn feature representations that are predictive of the nodes in its neighborhood N_R(u).
1. Run short fixed-length random walks starting from each node on the graph using some strategy R.
2. For each node u, collect N_R(u), the multiset* of nodes visited on random walks starting from u.
3. Optimize embeddings according to: given node u, predict its neighbors N_R(u):
max_z Σ_{u∈V} log P(N_R(u) | z_u)
*N_R(u) can have repeat elements since nodes can be visited multiple times on random walks.
Intuition: optimize embeddings to maximize the likelihood of random-walk co-occurrences.
Parameterize P(v | z_u) using the softmax:
L = Σ_{u∈V} Σ_{v∈N_R(u)} −log P(v | z_u)
P(v | z_u) = exp(z_u^T z_v) / Σ_{n∈V} exp(z_u^T z_n)
Why the softmax? We want node v to be most similar to node u (out of all nodes n). Intuition: Σ_i exp(x_i) ≈ max_i exp(x_i).
Putting it all together:
L = Σ_{u∈V} Σ_{v∈N_R(u)} −log( exp(z_u^T z_v) / Σ_{n∈V} exp(z_u^T z_n) )
The outer sum runs over all nodes u; the inner sum runs over the nodes v seen on random walks starting from u; the term inside the log is the predicted probability of u and v co-occurring on a random walk.
Optimizing random-walk embeddings = finding the embeddings z_u that minimize L.
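As a concrete (and deliberately naive) illustration, here is how this loss could be computed in NumPy; the layout of `Z` (one embedding per row) and the name `walk_neighbors` for the multisets N_R(u) are assumptions made for the example:

```python
import numpy as np

def walk_loss(Z, walk_neighbors):
    """Naive random-walk loss: sum_u sum_{v in N_R(u)} -log softmax(z_u^T z_v)."""
    loss = 0.0
    for u, neighbors in walk_neighbors.items():
        scores = Z @ Z[u]                        # z_u^T z_n for every node n in V
        log_denom = np.log(np.exp(scores).sum()) # softmax normalization over all nodes
        for v in neighbors:                      # multiset of walk co-occurrences N_R(u)
            loss += log_denom - scores[v]        # -log P(v | z_u)
    return loss
```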
But doing this naively is too expensive! The nested sum over nodes gives O(|V|^2) complexity.
The normalization term from the softmax is the culprit... can we approximate it?
Solution: negative sampling. Instead of normalizing with respect to all nodes, just normalize against k random "negative samples" n_i:
log( exp(z_u^T z_v) / Σ_{n∈V} exp(z_u^T z_n) ) ≈ log σ(z_u^T z_v) − Σ_{i=1}^{k} log σ(z_u^T z_{n_i}),   n_i ∼ P_V
Here σ is the sigmoid function (it makes each term a "probability" between 0 and 1) and P_V is a random distribution over all nodes.
Why is the approximation valid? Technically, this is a different objective. But negative sampling is a form of Noise Contrastive Estimation (NCE), which approximately maximizes the log probability of the softmax. The new formulation corresponds to using logistic regression (the sigmoid function) to distinguish the target node v from nodes n_i sampled from the background distribution P_V.
More at https://arxiv.org/pdf/1402.3722.pdf
- Sample the k negative nodes with probability proportional to their degree.
- Two considerations for k (the number of negative samples):
  1. Higher k gives more robust estimates.
  2. Higher k corresponds to higher bias toward negative events.
In practice, k = 5-20.
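A small sketch of this estimator for one (u, v) pair, with degree-proportional negative sampling; the array names and the choice of NumPy are assumptions made for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def neg_sampling_logprob(Z, u, v, degrees, k=5):
    """Approximate log P(v | z_u) as on the slide:
    log sigma(z_u^T z_v) - sum_{i=1..k} log sigma(z_u^T z_{n_i}), n_i ~ P_V (degree-proportional).
    Z holds one d-dimensional embedding per row; degrees is the array of node degrees."""
    p_v = degrees / degrees.sum()                      # background distribution over nodes
    negatives = rng.choice(len(degrees), size=k, p=p_v)
    positive_term = np.log(sigmoid(Z[u] @ Z[v]))
    negative_term = np.log(sigmoid(Z[negatives] @ Z[u])).sum()
    return positive_term - negative_term
```

(This follows the slide's formula literally; word2vec-style implementations typically use log σ(−z_u^T z_{n_i}) for the negative terms.)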
Summary of the procedure:
1. Run short fixed-length random walks starting from each node on the graph using some strategy R.
2. For each node u, collect N_R(u), the multiset of nodes visited on random walks starting from u.
3. Optimize embeddings using stochastic gradient descent:
L = Σ_{u∈V} Σ_{v∈N_R(u)} −log P(v | z_u)
We can efficiently approximate this using negative sampling!
So far we have described how to optimize embeddings given random-walk statistics. What strategies should we use to run these random walks?
- Simplest idea: just run fixed-length, unbiased random walks starting from each node (i.e., DeepWalk from Perozzi et al., 2014).
- The issue is that this notion of similarity is too constrained.
- How can we generalize it?
node2vec. Goal: embed nodes with similar network neighborhoods close together in the feature space.
- We frame this goal as a maximum-likelihood optimization problem, independent of the downstream prediction task.
- Key observation: a flexible notion of the network neighborhood N_R(u) of node u leads to rich node embeddings.
- Idea: develop a biased 2nd-order random walk R to generate the network neighborhood N_R(u) of node u.
Idea: use flexible, biased random walks that can trade off between local and global views of the network (Grover and Leskovec, 2016).
[Figure: a starting node u with nearby nodes s1, ..., s9; BFS explores close to u, DFS explores farther away.]
Two classic strategies to define a neighborhood N_R(u) of a given node u, using a walk of length 3 (so N_R(u) has size 3):
- N_BFS(u) = {s1, s2, s3}: local, microscopic view
- N_DFS(u) = {s4, s5, s6}: global, macroscopic view
BFS gives a micro-view of the neighbourhood; DFS gives a macro-view of the neighbourhood.
Use a biased fixed-length random walk R that, given a node u, generates the neighborhood N_R(u). Two parameters:
- Return parameter p: return back to the previous node.
- In-out parameter q: moving outwards (DFS) vs. inwards (BFS); intuitively, q is the "ratio" of BFS vs. DFS.
Biased 2nd-order random walks explore network neighborhoods:
- Suppose the random walk just traversed edge (s1, w) and is now at w.
- Idea: remember where the walk came from.
- Insight: the neighbors of w can only be (i) back at s1, (ii) at the same distance from s1, or (iii) farther from s1.
The walker came over edge (s1, w) and is at w. Where to go next?
p and q model the transition probabilities:
- p ... return parameter
- q ... "walk away" parameter
1/p, 1/q, and 1 are the unnormalized transition probabilities.
- BFS-like walk: low value of p.
- DFS-like walk: low value of q.
N_R(u) are then the nodes visited by the biased walk.
Unnormalized transition probabilities from w, segmented by distance from s1:

Target t | Prob. | Dist(s1, t)
s1 | 1/p | 0
s2 | 1 | 1
s3 | 1/q | 2
s4 | 1/q | 2
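A minimal sketch of one such biased 2nd-order step, turning the table above into sampling weights; the adjacency-list representation and function name are assumptions made for illustration:

```python
import random

def biased_step(adj, prev, curr, p, q):
    """One node2vec-style step from `curr`, given the previous node `prev`:
    weight 1/p to return to prev, 1 for nodes at distance 1 from prev, 1/q for distance 2."""
    prev_neighbors = set(adj[prev])
    weights = []
    for nxt in adj[curr]:
        if nxt == prev:                  # distance 0 from prev: return
            weights.append(1.0 / p)
        elif nxt in prev_neighbors:      # distance 1 from prev: same distance
            weights.append(1.0)
        else:                            # distance 2 from prev: move farther away
            weights.append(1.0 / q)
    return random.choices(adj[curr], weights=weights, k=1)[0]
```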
The node2vec algorithm:
1) Compute the random walk probabilities.
2) Simulate r random walks of length l starting from each node u.
3) Optimize the node2vec objective using stochastic gradient descent.
This gives linear-time complexity, and all 3 steps are individually parallelizable.
How to use the embeddings z_i of nodes:
- Clustering/community detection: cluster the points z_i.
- Node classification: predict the label f(z_i) of node i based on z_i.
- Link prediction: predict edge (i, j) based on f(z_i, z_j), where we can concatenate, average, take a product of, or take a difference between the embeddings:
  - Concatenate: f(z_i, z_j) = g([z_i, z_j])
  - Hadamard: f(z_i, z_j) = g(z_i ⊙ z_j) (per-coordinate product)
  - Sum/Avg: f(z_i, z_j) = g(z_i + z_j)
  - Distance: f(z_i, z_j) = g(||z_i − z_j||_2)
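A quick sketch of these four edge-feature operators in NumPy (the downstream classifier g that consumes the features is omitted; names are illustrative):

```python
import numpy as np

def edge_features(z_i, z_j, op="hadamard"):
    """Combine two node embeddings into a single edge-feature vector for link prediction."""
    if op == "concat":
        return np.concatenate([z_i, z_j])
    if op == "hadamard":
        return z_i * z_j                                 # per-coordinate product
    if op == "avg":
        return (z_i + z_j) / 2.0
    if op == "distance":
        return np.array([np.linalg.norm(z_i - z_j)])     # L2 distance as a 1-dim feature
    raise ValueError(f"unknown operator: {op}")
```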
Summary so far. Basic idea: embed nodes so that distances in the embedding space reflect node similarities in the original network.
Different notions of node similarity:
- Adjacency-based (i.e., similar if connected)
- Multi-hop similarity definitions
- Random-walk approaches (covered today)
So what method should I use? No one method wins in all cases...
- E.g., node2vec performs better on node classification while multi-hop methods perform better on link prediction (Goyal and Ferrara, 2017 survey).
- Random-walk approaches are generally more efficient.
- In general: you must choose a definition of node similarity that matches your application!
TransE: An Application of Embeddings to Knowledge Graphs
Bordes, Usunier, Garcia-Duran et al. Translating Embeddings for Modeling Multi-relational Data. NeurIPS 2013.
[KG illustration credit: Pierre-Yves Vandenbussche]
In a knowledge graph, nodes are referred to as entities and edges as relations. A knowledge graph is composed of facts/statements about inter-related entities. In KGs, edges can be of many types!
KG incompleteness (missing relations) can substantially affect the efficiency of systems relying on it!
Intuition: we want a link prediction model that learns from local and global connectivity patterns in the KG, taking into account entities and relationships of different types at the same time.
Downstream task: relation predictions are performed by using the learned patterns to generalize observed relationships between an entity of interest and all the other entities.
In TransE, relationships between entities are represented as triplets: h (head entity), l (relation), t (tail entity) => (h, l, t).
- Entities are first embedded in an entity space ℝ^k, similarly to the previous methods.
- Relations are represented as translations: h + l ≈ t if the given fact is true, else h + l ≠ t.
Training: entities and relations are initialized uniformly and normalized. Negative sampling produces corrupted triplets that do not appear in the KG. A contrastive (margin-based) loss favors lower distance values for valid triplets (positive samples) and higher distance values for corrupted ones (negative samples).
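A compact sketch of the TransE idea: score a triplet by ||h + l − t|| and train with a margin loss against a corrupted triplet (the margin value and function names are illustrative assumptions):

```python
import numpy as np

def transe_score(h, l, t):
    """TransE distance: small when h + l ≈ t, i.e., when the fact (h, l, t) is plausible."""
    return np.linalg.norm(h + l - t)

def margin_loss(h, l, t, h_neg, t_neg, margin=1.0):
    """Favor the valid triplet over a corrupted one by at least `margin`."""
    return max(0.0, margin + transe_score(h, l, t) - transe_score(h_neg, l, t_neg))
```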
Embedding entire graphs. Goal: we want to embed an entire graph G into a single vector z_G. Example tasks:
- Classifying toxic vs. non-toxic molecules
- Identifying anomalous graphs
Approach 1, a simple idea:
- Run a standard node embedding technique on the (sub)graph G.
- Then just sum (or average) the node embeddings in the (sub)graph G:
z_G = Σ_{v∈G} z_v
- Used by Duvenaud et al., 2016 to classify molecules based on their graph structure.
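As a one-line illustration (assuming node embeddings are stacked as rows of a NumPy array, an arbitrary choice for this sketch):

```python
import numpy as np

def graph_embedding(node_embeddings, reduce="sum"):
    """Approach 1: aggregate the node embeddings of a (sub)graph into one vector z_G."""
    return node_embeddings.sum(axis=0) if reduce == "sum" else node_embeddings.mean(axis=0)
```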
Approach 2, idea: introduce a "virtual node" to represent the (sub)graph and run a standard node embedding technique.
- Proposed by Li et al., 2016 as a general technique for subgraph embedding.
Approach 3: anonymous walk embeddings (Anonymous Walk Embeddings, ICML 2018, https://arxiv.org/pdf/1805.11921.pdf).
States in an anonymous walk correspond to the index of the first time we visited the node in a random walk.
The number of anonymous walks grows exponentially with their length. There are 5 anonymous walks a_i of length 3: a_1 = 111, a_2 = 112, a_3 = 121, a_4 = 122, a_5 = 123.
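A small helper, written as an illustration, that converts a concrete random walk into its anonymous walk by replacing each node with the index of its first appearance:

```python
def anonymize(walk):
    """Map a walk such as ['A', 'B', 'A'] to its anonymous walk (1, 2, 1)."""
    first_seen = {}
    anon = []
    for node in walk:
        if node not in first_seen:
            first_seen[node] = len(first_seen) + 1   # 1-based index of the first visit
        anon.append(first_seen[node])
    return tuple(anon)

print(anonymize(["A", "B", "A"]))   # (1, 2, 1)
print(anonymize(["C", "D", "E"]))   # (1, 2, 3)
```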
Idea 1: enumerate all possible anonymous walks a_i of l steps, record their counts, and represent the graph as a probability distribution over these walks.
For example:
- Set l = 3.
- Then we can represent the graph as a 5-dimensional vector, since there are 5 anonymous walks a_i of length 3: 111, 112, 121, 122, 123.
- z_G[i] = probability of anonymous walk a_i in G.
Idea 2: sampling anonymous walks. Completely counting all anonymous walks in a large graph may be infeasible. Sampling approach to approximating the true distribution: independently generate a set of m random walks and calculate the corresponding empirical distribution of anonymous walks.
How many random walks m do we need? We want the distribution to have error of more than ε with probability less than δ:
m = ⌈ (2/ε²) (log(2^η − 2) − log δ) ⌉
where η is the total number of anonymous walks of length l.
For example: there are η = 877 anonymous walks of length l = 7. If we set ε = 0.1 and δ = 0.01, then we need to generate m = 122500 random walks.
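A quick check of this bound in Python (natural log; reproduces the m = 122500 example above):

```python
import math

def num_walks_needed(eta, eps, delta):
    """Number of sampled random walks m so that P(distribution error > eps) < delta."""
    return math.ceil((2.0 / eps**2) * (math.log(2**eta - 2) - math.log(delta)))

print(num_walks_needed(eta=877, eps=0.1, delta=0.01))   # 122500
```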
Idea 3: learn an embedding z_i of every anonymous walk a_i.
- The embedding of a graph G is then the sum/avg/concatenation of the walk embeddings z_i.
How to embed walks? Idea: embed walks such that the next walk can be predicted:
- Set the z_i so that we maximize P(w_t | w_{t−Δ}, ..., w_{t−1}), where w_t is the t-th random walk starting at node u.
Run T different random walks from node u, each of length l:
N_R(u) = {w_1^u, w_2^u, ..., w_T^u}
Let a_i be the anonymous version of walk w_i.
Learn to predict walks that co-occur in a Δ-size window. Estimate the embedding z_i of the anonymous walk a_i of walk w_i:
max (1/T) Σ_{t=Δ}^{T} log P(w_t | w_{t−Δ}, ..., w_{t−1})
where:
- Δ is the context window size
- P(w_t | w_{t−Δ}, ..., w_{t−1}) = exp(y(w_t)) / Σ_i exp(y(w_i))
- y(w_t) = b + U · (1/Δ Σ_{i=1}^{Δ} z_i)
- b ∈ ℝ and U ∈ ℝ^D are learned, and z_i is the embedding of a_i (the anonymized version of walk w_i)
(Anonymous Walk Embeddings, ICML 2018, https://arxiv.org/pdf/1805.11921.pdf)
We discussed 3 ideas for graph embeddings:
- Approach 1: embed nodes and sum/avg their embeddings.
- Approach 2: create a super-node that spans the (sub)graph and then embed that node.
- Approach 3: anonymous walk embeddings.
  - Idea 1: represent the graph via the distribution over all anonymous walks.
  - Idea 2: sample the walks to approximate the distribution.
  - Idea 3: embed the anonymous walks.