Belief Propagation for Spatial Network Embeddings

SLIDE 1

Belief Propagation for Spatial Network Embeddings

Andrew Frank, Alex Ihler, Padhraic Smyth

Department of Computer Science, UC Irvine

August 25, 2009

SLIDE 2

Outline

1. Graphical Models: Markov Random Fields; Inference
2. Self-Localization: Problem Description; Model Formulation; Experimental Results
3. Latent Space Embeddings of Social Networks: Problem Description; Model Formulation; Preliminary Results

SLIDES 4-7
Graphical Models / Markov Random Fields

What Are Graphical Models?

Concise representations of probabilistic models. Several types:

- Bayesian networks (DAGs)
- Markov random fields (undirected graphs)
- Factor graphs (bipartite graphs)
- ...and others!

[Figure: an undirected graph on nodes A, B, C, D, E (edges A-B, A-C, C-D, C-E). The nodes are suspects with domain {innocent, guilty}; edges connect friends.]

Nodes = random variables
Edges = dependencies between variables

SLIDE 8
Graphical Models / Markov Random Fields

Representing Conditional Independencies

Interpreting a Markov Random Field: if all paths from X to Y pass through Z, then X and Y are conditionally independent given Z.

Graphically, with a Markov Random Field (MRF): the suspect graph above (edges A-B, A-C, C-D, C-E).

Textually, through enumeration:

A ⊥ D, E | C
B ⊥ C, D, E | A
C ⊥ B | A
D ⊥ A, B, E | C
E ⊥ A, B, D | C
...

SLIDES 9-20
Graphical Models / Markov Random Fields

Factorization

Conditional independence lets us factor a distribution. Starting from the chain rule and applying the independencies above, one factor at a time:

p(A, B, C, D, E) = p(A) p(B|A) p(C|A, B) p(D|A, B, C) p(E|A, B, C, D)
                 = p(A) p(B|A) p(C|A) p(D|C) p(E|C)

Largest factor involves 2 variables!

SLIDES 21-22
Graphical Models / Markov Random Fields

Hammersley-Clifford Theorem

General factorization property of all MRFs:

Hammersley-Clifford Theorem: every MRF factors as the product of potential functions defined over cliques of the graph.

Potential functions are...
- Strictly positive
- Unnormalized

For the suspect graph on A, B, C, D, E:

p(·) ∝ fA(A) fB(B) fC(C) fD(D) fE(E) fAB(A, B) fAC(A, C) fCD(C, D) fCE(C, E)

SLIDES 23-25
Graphical Models / Markov Random Fields

Specifying a Markov Random Field Model

Define the potential functions, e.g., with domain 0 = innocent, 1 = guilty (a runnable sketch follows below):

fB(B) = { 0.4 if B = 0
          0.6 if B = 1 }      (Suspect B is acting suspicious.)

fAB(A, B) = { 2 if A = B
              1 if A ≠ B }    (Suspects A and B are friends.)

SLIDES 27-28
Graphical Models / Inference

Marginalization with MRFs

Query p(A):

p(A) = Σ_{B,C,D,E} p(A, B, C, D, E)    ...O(d^n) by direct enumeration.

Instead, use the graph structure to compute p(A) in O(nd^2). (A brute-force version is sketched below for contrast.)
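
To make the O(d^n) cost concrete, this short continuation of the sketch after Slides 23-25 enumerates all 2^5 joint assignments to get p(A); BP, introduced next, gets the same answer without the exponential sweep.

```python
# Continuing the earlier sketch: brute-force marginalization enumerates
# all d^n = 2^5 joint assignments.
def marginal(var):
    p = np.zeros(2)
    for xs in itertools.product([0, 1], repeat=5):
        assign = dict(zip(variables, xs))
        p[assign[var]] += unnormalized_p(assign)
    return p / p.sum()

print("p(A) =", marginal("A"))
```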

SLIDES 29-43
Graphical Models / Inference

Belief Propagation (Sum-Product Algorithm)

View marginalization as a "message-passing" algorithm:
- Variables are computational nodes.
- Intermediate results are "messages" between nodes.

Pushing each sum inward over the suspect graph, one variable at a time (a worked elimination appears in the sketch below):

p(A) ∝ Σ_{B,C,D,E} f(A) f(B) f(C) f(D) f(E) f(A,B) f(A,C) f(C,D) f(C,E)
     = Σ_{B,C,D} f(A) f(B) f(C) f(D) f(A,B) f(A,C) f(C,D) m_EC(C)    where m_EC(C) = Σ_E f(E) f(C,E)
     = Σ_{B,C} f(A) f(B) f(C) f(A,B) f(A,C) m_EC(C) m_DC(C)          where m_DC(C) = Σ_D f(D) f(C,D)
     = Σ_B f(A) f(B) f(A,B) m_CA(A)                                  where m_CA(A) = Σ_C f(C) f(A,C) m_EC(C) m_DC(C)
     = f(A) m_CA(A) m_BA(A)                                          where m_BA(A) = Σ_B f(B) f(A,B)

The final product is p(A) up to normalization.
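
The derivation above runs in a few lines. This sketch continues the illustrative potentials from the earlier blocks and computes the four messages in the same order; it is a toy continuation, not the talk's code.

```python
# Worked elimination from the derivation above, using the illustrative
# potentials defined earlier (uniform unaries except f_B, "friends"
# potential on every edge). Each message is a length-2 vector.
m_EC = friends @ unary["E"]                   # m_EC(C) = Σ_E f(C,E) f(E)
m_DC = friends @ unary["D"]                   # m_DC(C) = Σ_D f(C,D) f(D)
m_CA = friends @ (unary["C"] * m_EC * m_DC)   # m_CA(A) = Σ_C f(A,C) f(C) m_EC m_DC
m_BA = friends @ unary["B"]                   # m_BA(A) = Σ_B f(A,B) f(B)

p_A = unary["A"] * m_CA * m_BA
p_A /= p_A.sum()                              # normalize
print("p(A) via messages =", p_A)             # matches the brute-force marginal
```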

SLIDES 44-45
Graphical Models / Inference

Belief Propagation (Sum-Product Algorithm)

Message update equation for pairwise MRFs (a generic implementation is sketched below):

m_st(x_t) = Σ_{x_s} [ f(x_s) f(x_s, x_t) Π_{u ∈ N(s)\t} m_us(x_s) ]

Exact for tree-structured graphs.

What about on graphs with loops? Use the same equation! ("Loopy" BP)
- No longer exact
- May not converge
- Often does quite well
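
A generic implementation of this update is short. The sketch below runs synchronous loopy BP with normalized messages; on a tree it reproduces the exact marginals, on loopy graphs it gives the usual approximation. The data layout (dicts of vectors and matrices) is an illustrative assumption.

```python
import numpy as np

# Minimal sketch of synchronous (loopy) BP with the update above.
def loopy_bp(unary, pairwise, neighbors, iters=50):
    """unary:     {s: length-d vector f(x_s)}
       pairwise:  {(s, t): d x d matrix f(x_s, x_t)}, one entry per direction
       neighbors: {s: list of neighbors of s}"""
    d = len(next(iter(unary.values())))
    msgs = {(s, t): np.ones(d) / d              # start from uniform messages
            for s in neighbors for t in neighbors[s]}
    for _ in range(iters):
        new = {}
        for (s, t) in msgs:
            # m_st(x_t) = Σ_xs f(x_s) f(x_s,x_t) Π_{u∈N(s)\t} m_us(x_s)
            prod = unary[s].copy()
            for u in neighbors[s]:
                if u != t:
                    prod = prod * msgs[(u, s)]
            m = pairwise[(s, t)].T @ prod       # sum over x_s
            new[(s, t)] = m / m.sum()           # normalize for stability
        msgs = new
    # Beliefs: b(x_s) ∝ f(x_s) Π_{u∈N(s)} m_us(x_s)
    beliefs = {}
    for s in neighbors:
        b = unary[s].copy()
        for u in neighbors[s]:
            b = b * msgs[(u, s)]
        beliefs[s] = b / b.sum()
    return beliefs
```

On the suspect tree from earlier (with pairwise[(s, t)] = friends in both directions), the returned beliefs match the brute-force marginals.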

SLIDE 46
Graphical Models / Inference

Related Algorithms

[Figure: estimated marginals on an easy problem and a hard problem, comparing Exact, BP, Mean Field, and TRW-BP.]

SLIDES 48-53
Self-Localization / Problem Description

Localization Scenario

Nodes distributed throughout a planar region (people, mobile sensors, ...).

Nodes that are "close enough" can estimate the distance between them. Local measurements:

node   {(neighbor, distance)}
1      {(2,3), (3,2), (4,4), (5,3)}
2      {(1,3), (3,1), (5,3)}
3      {(1,2), (2,1)}
4      {(5,1), (1,4)}
5      {(4,1), (1,3), (2,3)}

Task: recover node locations.

SLIDE 55
Self-Localization / Model Formulation

Local Detection Model

Variables:
- x_s, location in R^2 of node s
- o_st, indicates whether nodes s and t detect each other
- d_st, noisy observation of ||x_s − x_t||

[Figures: "Detection Noise", p(o_st | ||x_s − x_t||) as a function of distance; "Distance Sensor Noise", p(d_st | ||x_s − x_t|| = 2).]

SLIDE 56
Self-Localization / Model Formulation

Joint Model

p(x, o, d) = Π_{(s,t)} p(o_st | x_s, x_t) · Π_{(s,t): o_st = 1} p(d_st | x_s, x_t) · Π_s p(x_s)

[Figure: MRF over x_1, ..., x_5, with edges drawn differently for pairs with o_st = 1 and o_st = 0.]

Pairwise potentials (a sketch follows below):

f_st(x_s, x_t) = { p(o_st = 1 | x_s, x_t) p(d_st | x_s, x_t)   if o_st = 1
                   1 − p(o_st = 1 | x_s, x_t)                  if o_st = 0 }
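
A minimal sketch of this pairwise potential in Python. The slides do not give the noise models, so a logistic detection probability and Gaussian distance noise are assumed purely for illustration, as are the parameter values.

```python
import numpy as np
from scipy.stats import norm

R, SIGMA_DET, SIGMA_DIST = 3.0, 0.5, 0.3   # assumed model parameters

def p_detect(dist):
    """Assumed detection model: probability falls off around range R."""
    return 1.0 / (1.0 + np.exp((dist - R) / SIGMA_DET))

def pairwise_potential(x_s, x_t, o_st, d_st=None):
    """f_st from the slide: detection likelihood, times the distance
    likelihood when a distance measurement was observed."""
    dist = np.linalg.norm(np.asarray(x_s) - np.asarray(x_t))
    if o_st == 1:
        return p_detect(dist) * norm.pdf(d_st, loc=dist, scale=SIGMA_DIST)
    return 1.0 - p_detect(dist)

# Example: two nodes detected each other and measured distance 3.
print(pairwise_potential([0.0, 0.0], [3.0, 0.5], o_st=1, d_st=3.0))
```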

SLIDES 57-58
Self-Localization / Model Formulation

Handling Continuous Variables

Variable domains are continuous (locations in R^2) ⇒ replace sums with integrals?

m_st(x_t) = ∫ f(x_s) f(x_s, x_t) Π_{u ∈ N(s)\t} m_us(x_s) dx_s

The theory holds, but now we must compute the integrals.

SLIDES 59-61
Self-Localization / Model Formulation

Particle Belief Propagation (PBP)

Draw weighted particles from each variable's domain, then run (importance-corrected) discrete BP over these particles (see the sketch below):

m_st(x_t) = ∫ f(x_s, x_t) f(x_s) Π_{u ∈ N(s)\t} m_us(x_s) dx_s

          = E_{x_s ∼ W} [ f(x_s, x_t) (f(x_s) / W(x_s)) Π_{u ∈ N(s)\t} m_us(x_s) ]

With particles x_s^(1), ..., x_s^(n) drawn from W, the estimate at particle x_t^(i) is

m̂_st(x_t^(i)) ≈ (1/n) Σ_{k=1}^n f(x_s^(k), x_t^(i)) (f(x_s^(k)) / W(x_s^(k))) Π_{u ∈ N(s)\t} m_us(x_s^(k))
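
A single PBP message update is a small amount of code. The sketch below follows the sample estimate above; the data layout (particle arrays, callables for the potentials) and the final normalization are illustrative assumptions, not the talk's implementation.

```python
import numpy as np

def pbp_message(parts_s, W_s, parts_t, f_node, f_pair, incoming):
    """Estimate m_st at each particle of node t.

    parts_s:  (n, 2) particles for node s, drawn from proposal W
    W_s:      (n,) proposal density W(x_s^(k)) at each particle
    parts_t:  (m, 2) particles for node t
    f_node:   unary potential f(x_s), vectorized over particles
    f_pair:   pairwise potential f(x_s, x_t), vectorized over particles of s
    incoming: list of (n,) message vectors m_us evaluated at parts_s,
              from all neighbors u of s except t
    """
    n = len(parts_s)
    # Importance weights f(x_s^(k)) / W(x_s^(k)) times incoming messages.
    w = f_node(parts_s) / W_s
    for m_us in incoming:
        w = w * m_us
    # m_st(x_t^(i)) ≈ (1/n) Σ_k f(x_s^(k), x_t^(i)) w_k
    msg = np.array([np.mean(f_pair(parts_s, x_t) * w) for x_t in parts_t])
    return msg / msg.sum()   # normalize, as is standard in practice
```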

SLIDES 63-65
Self-Localization / Experimental Results

Results

Synthetic example with anchor, mobile, and target nodes.

[Figures: estimated locations from "exact" inference, from Particle BP, and from TRW Particle BP.]

SLIDE 67
Latent Space Embeddings of Social Networks / Problem Description

Main Idea

Intuition: actors live in a latent, d-dimensional "social space". Proximity in social space increases the likelihood of a link.

Hoff, Raftery, Handcock. Latent space approaches to social network analysis. JASA, 2002.

SLIDES 68-70
Latent Space Embeddings of Social Networks / Problem Description

Connection to Localization

Localization:
- Geographic space
- Detection ⇒ physical proximity
- Location is the end goal

Latent space embedding:
- Social space
- Network link ⇒ proximity in latent space
- Latent location indirectly useful

SLIDE 72
Latent Space Embeddings of Social Networks / Model Formulation

Local Model

Variables:
- z_s, location in latent space of node s
- y_st, social network link indicator

p(y_st | z_s, z_t) = σ(α − ||z_s − z_t||)

[Figure: link probability p(y_st | ||z_s − z_t||), decreasing in latent distance.]

SLIDE 73
Latent Space Embeddings of Social Networks / Model Formulation

Joint Model

[Figure: a 5-node social network and the corresponding MRF over z_1, ..., z_5, with edges marked y_st = 1 or y_st = 0.]

f_st(z_s, z_t) = p(y_st | z_s, z_t)

(A sketch of this local model follows below.)
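
The local model is a one-liner in code. Below is a minimal Python sketch; α and the example positions are illustrative assumptions, since the slides give no concrete values.

```python
import numpy as np

ALPHA = 2.0   # assumed intercept: link probability 0.5 at latent distance 2

def link_prob(z_s, z_t, alpha=ALPHA):
    """p(y_st = 1 | z_s, z_t) = sigmoid(alpha - ||z_s - z_t||)."""
    dist = np.linalg.norm(np.asarray(z_s) - np.asarray(z_t))
    return 1.0 / (1.0 + np.exp(-(alpha - dist)))

def edge_potential(z_s, z_t, y_st, alpha=ALPHA):
    """Pairwise MRF potential f_st(z_s, z_t) = p(y_st | z_s, z_t)."""
    p = link_prob(z_s, z_t, alpha)
    return p if y_st == 1 else 1.0 - p

# Nearby actors are likely linked; distant actors are not.
print(link_prob([0, 0], [0.5, 0]))   # high
print(link_prob([0, 0], [5.0, 0]))   # low
```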

SLIDE 75
Latent Space Embeddings of Social Networks / Preliminary Results

Test Data Set

Sampson's monk data:
- 18 monks living in a monastery
- Links indicate a "liking" relation
- Well-studied data set

[Figure: MLE embedding (Hoff, Raftery, Handcock '02).]

SLIDES 76-77
Latent Space Embeddings of Social Networks / Preliminary Results

PBP Embedding of Monk Data

[Figures: "Monk Embedding, Marginal Modes", the embedding of all 18 monks at their marginal modes; "Monk 16 Marginal", the full marginal distribution for monk 16.]

SLIDES 78-79
Conclusions

- BP is a generic inference method for computing marginals.
- PBP can estimate marginals in the self-localization problem.
- Could BP be useful for latent space network modeling?

Thank you!