Belief Propagation for Spatial Network Embeddings

Andrew Frank, Alex Ihler, Padhraic Smyth
Department of Computer Science, UC Irvine
August 25, 2009
Outline

1. Graphical Models: Markov Random Fields; Inference
2. Self-Localization: Problem Description; Model Formulation; Experimental Results
3. Latent Space Embeddings of Social Networks: Problem Description; Model Formulation; Preliminary Results
Graphical Models: Markov Random Fields
What Are Graphical Models?

Concise representations of probabilistic models. Several types:
- Bayesian networks (DAGs)
- Markov random fields (undirected graphs)
- Factor graphs (bipartite graphs)
- ... and others!

[Figure: a graph on nodes A, B, C, D, E; the nodes are suspects with states {innocent, guilty}, and edges connect friends.]
Nodes = random variables. Edges = dependencies between variables.
Representing Conditional Independencies

Interpreting a Markov random field (MRF): if all paths from X to Y pass through Z, then X and Y are conditionally independent given Z.

Graphically, with an MRF: edges A-B, A-C, C-D, C-E.
Textually, through enumeration:
A ⊥ D, E | C
B ⊥ C, D, E | A
C ⊥ B | A
D ⊥ A, B, E | C
E ⊥ A, B, D | C
...
Factorization

Conditional independence lets us factor a distribution. By the chain rule,

p(A, B, C, D, E) = p(A) p(B|A) p(C|A, B) p(D|A, B, C) p(E|A, B, C, D).

Applying the conditional independencies above (C ⊥ B | A, D ⊥ A, B | C, E ⊥ A, B, D | C):

p(A, B, C, D, E) = p(A) p(B|A) p(C|A) p(D|C) p(E|C).

The largest factor involves only 2 variables!
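This factorization can be checked numerically. Below is a minimal sketch, not from the slides: the conditional probability tables for the five binary variables are made up for illustration, and the factored product still sums to 1 over all assignments.

```python
# Factored joint for the 5-variable example: p(A)p(B|A)p(C|A)p(D|C)p(E|C).
# All table values are illustrative assumptions, not from the talk.
import itertools

pA = {0: 0.7, 1: 0.3}
pB = {(0, 0): 0.9, (1, 0): 0.1, (0, 1): 0.4, (1, 1): 0.6}  # key: (b, a)
pC = {(0, 0): 0.5, (1, 0): 0.5, (0, 1): 0.2, (1, 1): 0.8}  # key: (c, a)
pD = {(0, 0): 0.6, (1, 0): 0.4, (0, 1): 0.3, (1, 1): 0.7}  # key: (d, c)
pE = {(0, 0): 0.8, (1, 0): 0.2, (0, 1): 0.1, (1, 1): 0.9}  # key: (e, c)

def joint(a, b, c, d, e):
    # The largest factor touches only two variables.
    return pA[a] * pB[b, a] * pC[c, a] * pD[d, c] * pE[e, c]

total = sum(joint(*v) for v in itertools.product((0, 1), repeat=5))
assert abs(total - 1.0) < 1e-12   # a valid factorization still sums to 1
```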
Hammersley-Clifford Theorem

General factorization property of all MRFs:

Hammersley-Clifford Theorem. Every MRF factors as the product of potential functions defined over cliques of the graph.

Potential functions are...
- Strictly positive
- Unnormalized

For the example graph (edges A-B, A-C, C-D, C-E):

p(·) ∝ f_A(A) f_B(B) f_C(C) f_D(D) f_E(E) f_AB(A, B) f_AC(A, C) f_CD(C, D) f_CE(C, E)
Specifying a Markov Random Field Model

Define the potential functions. Let our domain be 0 = innocent, 1 = guilty.

f_B(B) = 0.4 if B = 0, 0.6 if B = 1.   (Suspect B is acting suspicious.)
f_AB(A, B) = 2 if A = B, 1 if A ≠ B.   (Suspects A and B are friends.)
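A sketch of the suspect example as an unnormalized model: only f_B and f_AB are taken from the slide, the remaining potentials are set to 1, and the normalizing constant is computed by brute-force enumeration.

```python
# Unnormalized MRF over five binary suspects; only f_B and f_AB are
# non-trivial (from the slide), all other potentials are 1.
import itertools

states = (0, 1)                     # 0 = innocent, 1 = guilty

def fB(b):
    return 0.4 if b == 0 else 0.6   # B is acting suspicious

def fAB(a, b):
    return 2.0 if a == b else 1.0   # A and B are friends

def score(a, b, c, d, e):           # unnormalized product of potentials
    return fB(b) * fAB(a, b)        # remaining potentials = 1

Z = sum(score(*v) for v in itertools.product(states, repeat=5))
pB1 = sum(score(*v) for v in itertools.product(states, repeat=5) if v[1] == 1) / Z
# The coupling f_AB is symmetric in B here, so the marginal matches f_B:
assert abs(pB1 - 0.6) < 1e-12
```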
Graphical Models: Inference
Marginalization with MRFs

Query p(A):

p(A) = Σ_{B,C,D,E} p(A, B, C, D, E)

Naively this sum has O(d^n) terms (n variables, d states each). Using the graph structure, we can compute p(A) in O(n d²) time.
Belief Propagation (Sum-Product Algorithm)

View marginalization as a "message-passing" algorithm:
- Variables are computational nodes.
- Intermediate results are "messages" between nodes.

For the example graph (edges A-B, A-C, C-D, C-E), start from the full factorization:

p(A) ∝ Σ_{B,C,D,E} f(A) f(B) f(C) f(D) f(E) f(A,B) f(A,C) f(C,D) f(C,E)

Sum out E, producing the message m_EC(C) = Σ_E f(E) f(C,E):

= Σ_{B,C,D} f(A) f(B) f(C) f(D) f(A,B) f(A,C) f(C,D) m_EC(C)

Sum out D, producing m_DC(C) = Σ_D f(D) f(C,D):

= Σ_{B,C} f(A) f(B) f(C) f(A,B) f(A,C) m_EC(C) m_DC(C)

Sum out C, producing m_CA(A) = Σ_C f(C) f(A,C) m_EC(C) m_DC(C):

= Σ_B f(A) f(B) f(A,B) m_CA(A)

Finally, sum out B, producing m_BA(A) = Σ_B f(B) f(A,B):

f(A) m_CA(A) m_BA(A) ∝ p(A)
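The elimination order above (E, then D, then C, then B) can be sketched directly. The potentials here are made-up stand-ins; the point is that the message-passing result matches brute-force enumeration.

```python
# Message passing on the tree A-B, A-C, C-D, C-E, computing p(A).
# Singleton and pairwise potentials are illustrative assumptions.
import itertools

states = (0, 1)
f1 = {0: 1.0, 1: 1.0}                       # uniform singleton potentials
fB = {0: 0.4, 1: 0.6}                       # B's singleton potential
def fpair(x, y):                            # made-up attractive potential
    return 2.0 if x == y else 1.0

# Messages, in the order shown on the slides.
mEC = {c: sum(f1[e] * fpair(c, e) for e in states) for c in states}
mDC = {c: sum(f1[d] * fpair(c, d) for d in states) for c in states}
mCA = {a: sum(f1[c] * fpair(a, c) * mEC[c] * mDC[c] for c in states) for a in states}
mBA = {a: sum(fB[b] * fpair(a, b) for b in states) for a in states}

belief = {a: f1[a] * mCA[a] * mBA[a] for a in states}
Z = sum(belief.values())
pA = {a: belief[a] / Z for a in states}

# Brute-force check over all 2^5 assignments.
def joint(a, b, c, d, e):
    return (f1[a] * fB[b] * f1[c] * f1[d] * f1[e]
            * fpair(a, b) * fpair(a, c) * fpair(c, d) * fpair(c, e))

tot = {a: 0.0 for a in states}
for a, b, c, d, e in itertools.product(states, repeat=5):
    tot[a] += joint(a, b, c, d, e)
Ztot = sum(tot.values())
assert all(abs(pA[a] - tot[a] / Ztot) < 1e-9 for a in states)
```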
Belief Propagation (Sum-Product Algorithm)

Message update equation for pairwise MRFs:

m_st(x_t) = Σ_{x_s} f(x_s) f(x_s, x_t) Π_{x_u ∈ N(x_s)\x_t} m_us(x_s)

Exact for tree-structured graphs. What about on graphs with loops? Use the same equation! ("Loopy" BP)
- No longer exact
- May not converge
- Often does quite well
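A minimal sketch of loopy BP on the smallest loopy graph, a triangle, with illustrative potentials. The same update equation is iterated until the messages stop changing; on this weakly coupled loop it converges, though the resulting beliefs are only approximate marginals.

```python
# "Loopy" BP: the tree update run unchanged on a graph with a cycle.
# Graph, potentials, and coupling strength are illustrative assumptions.
import math

states = (0, 1)
nbrs = {0: (1, 2), 1: (0, 2), 2: (0, 1)}              # triangle: one loop
g = {0: {0: 0.4, 1: 0.6}, 1: {0: 0.5, 1: 0.5}, 2: {0: 0.5, 1: 0.5}}
def f(x, y):                                           # weak attractive coupling
    return 1.5 if x == y else 1.0

m = {(s, t): {x: 0.5 for x in states} for s in nbrs for t in nbrs[s]}
delta = 1.0
for sweep in range(100):
    new = {}
    delta = 0.0
    for (s, t) in m:
        msg = {xt: sum(g[s][xs] * f(xs, xt)
                       * math.prod(m[u, s][xs] for u in nbrs[s] if u != t)
                       for xs in states)
               for xt in states}
        z = sum(msg.values())
        new[s, t] = {x: v / z for x, v in msg.items()}
        delta = max(delta, max(abs(new[s, t][x] - m[s, t][x]) for x in states))
    m = new
    if delta < 1e-8:
        break

beliefs = {}
for s in nbrs:
    b = {x: g[s][x] * math.prod(m[u, s][x] for u in nbrs[s]) for x in states}
    z = sum(b.values())
    beliefs[s] = {x: v / z for x, v in b.items()}
assert delta < 1e-8                                    # converged here
```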
Related Algorithms

[Figure: estimated marginals on an easy and a hard problem, comparing Exact, BP, Mean Field, and TRW-BP.]
Self-Localization: Problem Description
Localization Scenario

Nodes (people, mobile sensors, ...) are distributed throughout a planar region. Nodes that are "close enough" can estimate the distance between them.

Local measurements, as node → {(neighbor, distance)} pairs:
1 → {(2,3), (3,2), (4,4), (5,3)}
2 → {(1,3), (3,1), (5,3)}
3 → {(1,2), (2,1)}
4 → {(5,1), (1,4)}
5 → {(4,1), (1,3), (2,3)}

Task: recover node locations.
Self-Localization: Model Formulation
Local Detection Model

Variables:
- x_s, location in R² of node s
- o_st, indicates whether nodes s and t detect each other
- d_st, noisy observation of ||x_s − x_t||

[Figure: detection noise p(o_st | ||x_s − x_t||), decaying with distance, and distance-sensor noise p(d_st | ||x_s − x_t|| = 2), peaked near 2.]
Joint Model

p(x, o, d) = Π_{(s,t)} p(o_st | x_s, x_t) · Π_{(s,t): o_st = 1} p(d_st | x_s, x_t) · Π_s p(x_s)

This is a pairwise MRF over the locations x_1, ..., x_5 (solid edges where o_st = 1, dashed edges where o_st = 0), with pairwise potentials

f_st(x_s, x_t) = p(o_st = 1 | x_s, x_t) p(d_st | x_s, x_t)   if o_st = 1
               = 1 − p(o_st = 1 | x_s, x_t)                  if o_st = 0
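A sketch of this pairwise potential in code. The slides only plot the detection and sensor-noise curves, so the functional forms below (an exponential detection falloff and Gaussian distance noise, with made-up scale parameters) are assumptions for illustration.

```python
# Pairwise localization potential f_st; functional forms are assumed.
import math

def p_detect(dist, scale=3.0):
    # Assumed detection model: probability decays with distance.
    return math.exp(-dist / scale)

def p_distance(d_obs, dist, sigma=0.5):
    # Assumed Gaussian sensor noise around the true distance.
    return math.exp(-0.5 * ((d_obs - dist) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def f_st(xs, xt, o_st, d_st=None):
    dist = math.hypot(xs[0] - xt[0], xs[1] - xt[1])
    if o_st == 1:
        return p_detect(dist) * p_distance(d_st, dist)
    return 1.0 - p_detect(dist)

# A detected pair is most plausible when the reading matches the true distance.
assert f_st((0, 0), (2, 0), 1, 2.0) > f_st((0, 0), (2, 0), 1, 4.0)
```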
Handling Continuous Variables

Variable domains are continuous (locations in R²) ⇒ replace sums with integrals:

m_st(x_t) = ∫ f(x_s) f(x_s, x_t) Π_{x_u ∈ N(x_s)\x_t} m_us(x_s) dx_s

The theory holds, but now we must compute the integrals.
Particle Belief Propagation (PBP)

Draw weighted particles from each variable's domain, then run (importance-corrected) discrete BP over these particles:

m_st(x_t) = ∫ f(x_s, x_t) f(x_s) Π_{x_u ∈ N(x_s)\x_t} m_us(x_s) dx_s
          = E_{x_s ∼ W} [ (f(x_s, x_t) f(x_s) / W(x_s)) Π_{x_u ∈ N(x_s)\x_t} m_us(x_s) ]

With particles x_s^(1), ..., x_s^(n) drawn from W:

m̂_st(x_t^(i)) ≈ (1/n) Σ_{k=1}^{n} (f(x_s^(k), x_t^(i)) f(x_s^(k)) / W(x_s^(k))) Π_{x_u ∈ N(x_s)\x_t} m_us(x_s^(k))
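The particle estimate can be sketched as a plain importance-weighted average. The potentials and the standard-normal proposal W below are illustrative stand-ins, and there are no other incoming messages in this one-edge example.

```python
# PBP message estimate m_hat(x_t) as an importance-weighted average over
# particles of x_s. Potentials and proposal are illustrative assumptions.
import math
import random

random.seed(0)

def f_pair(xs, xt):                  # made-up pairwise potential
    return math.exp(-abs(xs - xt))

def f_node(x):                       # made-up singleton potential
    return math.exp(-0.5 * x * x)

def W(x):                            # proposal density: standard normal
    return math.exp(-0.5 * x * x) / math.sqrt(2 * math.pi)

n = 500
particles_s = [random.gauss(0, 1) for _ in range(n)]

def m_hat(xt):
    # (1/n) * sum_k f(x_s^(k), x_t) f(x_s^(k)) / W(x_s^(k)); the product of
    # other incoming messages is empty (= 1) in this one-edge example.
    return sum(f_pair(xk, xt) * f_node(xk) / W(xk) for xk in particles_s) / n

# The message puts more mass where the source particles lie.
assert m_hat(0.0) > m_hat(3.0)
```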
Self-Localization: Experimental Results
Results

Synthetic example (legend: anchor, mobile, and target nodes):

[Figures: estimated node locations from "exact" inference, Particle BP, and TRW Particle BP.]
Latent Space Embeddings of Social Networks: Problem Description
Main Idea

Intuition: actors live in a latent, d-dimensional "social space", and proximity in social space increases the likelihood of a link.

Hoff, Raftery, and Handcock. Latent space approaches to social network analysis. JASA, 2002.
Connection to Localization

Localization:
- Geographic space
- Detection ⇒ physical proximity
- Location is the end goal

Latent space embedding:
- Social space
- Network link ⇒ proximity in latent space
- Latent location is indirectly useful
Latent Space Embeddings of Social Networks: Model Formulation
Local Model

Variables:
- z_s, location in latent space of node s
- y_st, social network link indicator

p(y_st | z_s, z_t) = σ(α − ||z_s − z_t||)

[Figure: link probability p(y_st | ||z_s − z_t||), decreasing with latent distance.]
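The link model above is a logistic function of latent distance and is easy to state directly in code; the choice α = 1 below is an arbitrary illustration.

```python
# Link probability from the latent space model:
# p(y_st = 1 | z_s, z_t) = sigmoid(alpha - ||z_s - z_t||).
import math

def link_prob(z_s, z_t, alpha=1.0):   # alpha = 1 is an arbitrary choice
    dist = math.dist(z_s, z_t)
    return 1.0 / (1.0 + math.exp(-(alpha - dist)))

# Closer pairs in latent space link more often.
assert link_prob((0, 0), (0, 0)) > link_prob((0, 0), (3, 0))
```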
Joint Model

[Figure: a 5-node social network and the corresponding MRF over z_1, ..., z_5, with solid edges where y_st = 1 and dashed edges where y_st = 0.]

f_st(z_s, z_t) = p(y_st | z_s, z_t)
Latent Space Embeddings of Social Networks: Preliminary Results
Test Data Set

Sampson's monk data:
- 18 monks living in a monastery
- Links indicate a "liking" relation
- Well-studied data set

MLE embedding (Hoff, Raftery, Handcock '02).
PBP Embedding of Monk Data

[Figures: monk embedding showing marginal modes for monks 1-18, and the full marginal for monk 16.]
Conclusions

- BP is a generic inference method for computing marginals.
- PBP can estimate marginals in the self-localization problem.
- Could BP be useful for latent space network modeling?