SLIDE 1

Introduction Model Example General Networks Trees Network Structure Imperfect Information The End

Learning Dynamics in Social Networks

Simon Board Moritz Meyer-ter-Vehn

UCLA

August 18, 2018

SLIDE 2

BMW i3

SLIDE 3

Motivation

How do societies learn about innovations?

◮ New products, e.g. electric cars.
◮ New production techniques, e.g. pineapples.
◮ New sources of finance, e.g. microfinance.

Two sources of information

◮ Social information acquired from neighbors.
◮ Private information acquired by inspecting the innovation.

How does diffusion depend on the network?

◮ Is diffusion faster in more interconnected societies?
◮ Is diffusion faster in more centralized societies?

SLIDE 4

The Social Purchasing Funnel

Develop Need → Consideration (Social Info) → Inspection (Private Info) → Adoption

SLIDE 5

Social Learning Curves

Modeling approach

◮ Agents learn private information after inspection.
◮ Characterize social learning curves for any network via ODEs.

Social learning in tree networks

◮ Learning from neighbors, and neighbors’ neighbors.
◮ Learning from direct vs. indirect links.

Network structure

◮ Learning from backward and correlating links.
◮ Characterize agent’s favorite network.
◮ Compare centralized and decentralized networks.

SLIDE 6

Literature

Diffusion on networks

◮ Bass (1969)
◮ Morris (2000)
◮ Campbell (2013), Sadler (2017)

Social learning on networks

◮ Banerjee (1992), Bikhchandani, Hirshleifer and Welch (1992)
◮ Smith and Sorensen (1996), Acemoglu et al (2011)
◮ Mueller-Frank and Pai (2016), Ali (2017), Lomys (2017)

Social Learning and Adoption

◮ Guarino, Harmgart and Huck (2011)
◮ Hendricks, Sorensen and Wiseman (2012)
◮ Herrera and Horner (2013)

SLIDE 7

“A significant gap in our knowledge concerns short-run dynamics and rates of learning in these models. . . . The complexity of Bayesian updating in a network makes this difficult, but even limited results would offer a valuable contribution to the literature.”

Golub and Sadler, in the Oxford Handbook, 2016

SLIDE 8

Model

SLIDE 9

Model

Players and Products

◮ I players i on an exogenous, directed network G.
◮ Product quality θ ∈ {L, H}, where Pr(H) = π₀.

Timing: Player i

◮ . . . enters at iid “time” tᵢ ∼ U[0, 1].
◮ . . . observes which of her neighbors Nᵢ adopt the product by tᵢ.
◮ . . . can inspect the product at iid cost cᵢ ∼ F.
◮ . . . adopts the product iff she inspected and θ = H.

Payoffs

◮ A player gets 1 if she adopts and 0 otherwise, net of the inspection cost cᵢ.
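The timing above can be sketched in a few lines of simulation. This is a hypothetical illustration under assumed primitives (cᵢ ∼ U[0, 1], and the no-adoption posterior crudely replaced by the prior π₀ rather than the equilibrium posterior derived later), not the paper’s solution; the function name is invented.

```python
import random

def simulate_adoption(adjacency, theta_is_high, pi0=0.5, rng=random):
    """One run of the adoption game on a directed network (sketch).

    adjacency[i] lists the neighbors N_i that agent i observes. Entry times
    t_i ~ U[0,1] and inspection costs c_i ~ U[0,1] (an assumed F). An agent
    inspects when her posterior that quality is high exceeds her cost, and
    adopts iff she inspects and theta = H. Seeing an adoption reveals
    theta = H; seeing none, the prior pi0 is a crude stand-in for the
    equilibrium posterior.
    """
    n = len(adjacency)
    entry = sorted((rng.random(), i) for i in range(n))  # move in entry order
    adopted = [False] * n
    for t_i, i in entry:
        saw_adoption = any(adopted[j] for j in adjacency[i])
        posterior = 1.0 if saw_adoption else pi0
        c_i = rng.random()
        inspects = c_i <= posterior
        if inspects and theta_is_high:
            adopted[i] = True
    return sum(adopted) / n  # fraction of adopters

```

Averaging many runs with θ = H traces out a crude social learning curve; with θ = L no one ever adopts, since adoption requires inspecting a high-quality product.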

SLIDE 10

The Inference Problem

i sees j has adopted

◮ Quality is high, θ = H

i sees j has not adopted

◮ j tried the product, but quality is low, θ = L?
◮ j chose not to try the product (maybe k did not adopt)?
◮ j has not yet entered, tⱼ ≥ t?

SLIDE 11

Example

SLIDE 12

Directed Pair

Definition: Adoption rate

x_{i,t}: probability that i adopts product H by time t

Leader, j:

ẋ_{j,t} = Pr(j inspects) = F(π₀)

Follower, i:

ẋ_{i,t} = Pr(i inspects)

SLIDE 13

Directed Pair

Definition: Adoption rate

x_{i,t}: probability that i adopts product H by time t

Leader, j:

ẋ_{j,t} = Pr(j inspects) = F(π₀)

Follower, i:

ẋ_{i,t} = 1 − Pr(i does not inspect)

SLIDE 14

Directed Pair

Definition: Adoption rate

x_{i,t}: probability that i adopts product H by time t

Leader, j:

ẋ_{j,t} = Pr(j inspects) = F(π₀)

Follower, i:

ẋ_{i,t} = 1 − Pr(j does not adopt) × Pr(cᵢ high)

SLIDE 15

Directed Pair

Definition: Adoption rate

x_{i,t}: probability that i adopts product H by time t

Leader, j:

ẋ_{j,t} = Pr(j inspects) = F(π₀)

Follower, i:

ẋ_{i,t} = 1 − (1 − x_{j,t})(1 − F(π^∅_t))

with posterior π^∅_t = π₀(1 − x_{j,t}) / (π₀(1 − x_{j,t}) + 1 − π₀)

SLIDE 16

Directed Pair

Definition: Adoption rate

x_{i,t}: probability that i adopts product H by time t

Leader, j:

ẋ_{j,t} = Pr(j inspects) = F(π₀)

Follower, i:

ẋ_{i,t} = 1 − (1 − x_{j,t})(1 − F̃(1 − x_{j,t})), where F̃(1 − x) := F(π₀(1 − x) / (π₀(1 − x) + 1 − π₀))
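Under the deck’s running example (c ∼ U[0, 1], so F(c) = c, and π₀ = 1/2), the leader/follower ODEs can be integrated numerically. A sketch with invented function names:

```python
def directed_pair_curves(pi0=0.5, steps=2000):
    """Euler-integrate the directed-pair ODEs:
    leader:   x_j' = F(pi0)
    follower: x_i' = 1 - (1 - x_j)(1 - Ftilde(1 - x_j)),
    assuming c ~ U[0,1] so F(c) = c on [0, 1]."""
    def F(c):
        return min(max(c, 0.0), 1.0)
    def F_tilde(z):  # F evaluated at the no-adoption posterior
        return F(pi0 * z / (pi0 * z + 1 - pi0))
    dt = 1.0 / steps
    x_j = x_i = 0.0
    for _ in range(steps):
        x_j_new = x_j + F(pi0) * dt                          # leader
        x_i += (1 - (1 - x_j) * (1 - F_tilde(1 - x_j))) * dt  # follower
        x_j = x_j_new
    return x_j, x_i

```

At t = 1 the leader reaches F(π₀) = 1/2, while the follower integrates to 2·ln(4/3) ≈ 0.575: social information lets the follower overtake the leader.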

SLIDE 17

Agent i’s Social Learning Curve

[Figure: agent i’s social learning curve over time, plotting x_{i,t} = Pr(i adopts | H), x_{j,t} = Pr(i observes adoption | H), Pr(i inspects | j adopted), and Pr(i inspects | j did not adopt).]

Assumptions: c ∼ U[0, 1], π₀ = 1/2.

SLIDE 18

General Networks: Preliminaries

SLIDE 19

Individual Adoption Rates . . . are not enough

A general formula for individual adoption rates

◮ x⁻ⁱ_{Nᵢ,t}: probability some of i’s neighbors adopt H by t ≤ tᵢ.

ẋᵢ = 1 − (1 − x⁻ⁱ_{Nᵢ})(1 − F̃(1 − x⁻ⁱ_{Nᵢ}))

But the joint distribution x_{Nᵢ} (or x⁻ⁱ_{Nᵢ}) cannot be recovered from the marginals xⱼ.

SLIDE 20

The Social Learning Curve

Definition: i’s social learning curve

◮ x⁻ⁱ_{Nᵢ,t}: probability some of i’s neighbors adopts H by t ≤ tᵢ.

Fact: i’s information is Blackwell-increasing in x⁻ⁱ_{Nᵢ}

◮ i’s signal structure:

           ≥ 1 adopt       0 adopt
θ = H      x⁻ⁱ_{Nᵢ}        1 − x⁻ⁱ_{Nᵢ}
θ = L      0               1

◮ A signal x < x′ is equivalent to “losing” the adopt signal with probability (x′ − x)/x′.

SLIDE 21

Social Learning and Adoption

Assumption:

Costs have a bounded hazard rate (BHR) if f(c)/(1 − F(c)) ≤ 1/((1 − c)c) for c ∈ [0, π₀].  (1)

◮ Satisfied if f(c) is weakly increasing, e.g. c ∼ U[0, 1].
◮ At the bottom, when c ≈ 0, always satisfied as the RHS → ∞.
◮ At the top, holds with equality when f(c) ∝ 1/c².
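Condition (1) is easy to check numerically on a grid; a sketch assuming the density f and CDF F are supplied as Python callables (function name mine):

```python
def satisfies_bhr(f, F, pi0, grid=1000, tol=1e-9):
    """Check the BHR condition f(c)/(1 - F(c)) <= 1/((1 - c) c) on a grid
    over (0, pi0]. A numerical sanity check, not a proof."""
    for k in range(1, grid + 1):
        c = pi0 * k / grid
        if F(c) >= 1.0:
            continue  # hazard rate undefined once all cost mass is exhausted
        if f(c) / (1 - F(c)) > 1 / ((1 - c) * c) + tol:
            return False
    return True

```

With c ∼ U[0, 1] the hazard is 1/(1 − c) ≤ 1/((1 − c)c), so BHR holds; with the counterexample distribution F ∼ U[0, π₀] from the later slide, the hazard explodes near π₀ and the check fails.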

SLIDE 22

Social Learning and Adoption

Assumption:

Costs have a bounded hazard rate (BHR) if f(c)/(1 − F(c)) ≤ 1/((1 − c)c) for c ∈ [0, π₀].  (1)

Lemma 1.

Assume BHR. Adoption (x_{i,t})_t rises in information (x⁻ⁱ_{Nᵢ,t})_t.

Idea

◮ Recall that adoption probabilities are conditional on θ = H.
◮ Hence E[π_t|H] exceeds π₀ and increases in information x⁻ⁱ_{Nᵢ,t}.
◮ Compare: the increase in adoption given a neighbor adopts with the decrease in adoption given no neighbor adopts.

SLIDE 23

Social Learning and Adoption

Assumption:

Costs have a bounded hazard rate (BHR) if f(c)/(1 − F(c)) ≤ 1/((1 − c)c) for c ∈ [0, π₀].  (1)

Lemma 1.

Assume BHR. Adoption (x_{i,t})_t rises in information (x⁻ⁱ_{Nᵢ,t})_t.

Counterexample

◮ Suppose F ∼ U[0, π₀].
◮ Adoption is maximized with zero social learning.

SLIDE 24

Social Learning Improves over Time

Lemma 2.

In any network, agent i’s information Blackwell-improves over time. Under BHR, her adoption probability increases over time.

Idea

◮ Over time more people adopt, so x⁻ⁱ_{Nᵢ,t} increases in t.
◮ Apply Lemma 1.

SLIDE 25

Information Aggregation in Complete Networks

◮ Lowest cost type, c̲ := sup{c | F(c) = 0}.

Lemma 3 (HSW ’12, HH ’13).

In a complete network with I → ∞ agents:
(a) Bad products fail: Pr^L_I(i inspects) → 0.
(b) Good products succeed: Pr^H_I(i inspects) → 1, iff c̲ = 0.

Proof

◮ Adoption: for all t > 0, as I → ∞, x⁻ⁱ_{Nᵢ,t} converges to

x̄ := inf{x : F̃(1 − x) = 0}

◮ By definition π₀(1 − x̄) / (1 − π₀x̄) = c̲, and so x̄ = 1 iff c̲ = 0.
◮ Inspection: if θ = L, at tᵢ = t:

Pr^L_I(i inspects) = F̃(1 − x⁻ⁱ_{Nᵢ,t}) → 0

SLIDE 26

Information Aggregation in Complete Networks

◮ Lowest cost type, c̲ := sup{c | F(c) = 0}.

Lemma 3 (HSW ’12, HH ’13).

In a complete network with I → ∞ agents:
(a) Bad products fail: Pr^L_I(i inspects) → 0.
(b) Good products succeed: Pr^H_I(i inspects) → 1, iff c̲ = 0.

Proof

◮ Adoption: for all t > 0, as I → ∞, x⁻ⁱ_{Nᵢ,t} converges to

x̄ := inf{x : F̃(1 − x) = 0}

◮ By definition π₀(1 − x̄) / (1 − π₀x̄) = c̲, and so x̄ = 1 iff c̲ = 0.
◮ Inspection: if θ = H, at tᵢ = t:

Pr^H_I(i inspects) = 1 − (1 − x⁻ⁱ_{Nᵢ,t})(1 − F̃(1 − x⁻ⁱ_{Nᵢ,t})) → x̄
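For c̲ > 0 the fixed point π₀(1 − x̄)/(1 − π₀x̄) = c̲ can be solved in closed form; the function name is mine:

```python
def adoption_ceiling(pi0, c_min):
    """Long-run adoption ceiling x-bar in a large complete network, from
    pi0 (1 - x) / (1 - pi0 x) = c_min. Assumes 0 <= c_min < pi0 so the
    solution lies in (0, 1]."""
    return (pi0 - c_min) / (pi0 * (1 - c_min))

```

x̄ = 1 exactly when c̲ = 0, matching part (b): good products fully succeed only if some agents have arbitrarily low inspection costs.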

SLIDE 27

General Networks: Characterization

SLIDE 28

A Larger State Space

State of the network λ ∈ {∅, a, b}^I

◮ λᵢ = ∅: i hasn’t moved yet, t ≤ tᵢ.
◮ λᵢ = a: i has moved, tried, and adopted the product.
◮ λᵢ = b: i has moved, but not adopted the product.

Agent i’s knowledge in state λ

Λ(i, λ) := {λ′ : λ′ᵢ = λᵢ, and λⱼ = a iff λ′ⱼ = a for all j ∈ Nᵢ}

Additional notation

◮ Distribution z = (z^θ_λ), and z^θ_Λ := Σ_{λ∈Λ} z^θ_λ for sets Λ.
◮ For λ with λᵢ ∈ {a, b}, write λ−i for the same state with λᵢ = ∅.

SLIDE 29

State Transitions

Three agents (i, j, k), state λ = (λᵢ, λⱼ, λₖ)

[Diagram: transitions between the states λ−j = (∅, ∅, b), λ−k = (∅, a, ∅), λ = (∅, a, b), and onward to (a, a, b) and (b, a, b).]

SLIDE 30

ODE for General Networks

Theorem 1.

Given quality θ ∈ {L, H}, the state evolves according to the ODE:

ż^H_λ = − (1/(1 − t)) · Σ_{i: λᵢ=∅} z^H_λ
      + (1/(1 − t)) · Σ_{i: λᵢ=a} z^H_{λ−i} · F̃(z^H_{Λ(i,λ−i)} / z^L_{Λ(i,λ−i)})
      + (1/(1 − t)) · Σ_{i: λᵢ=b} z^H_{λ−i} · (1 − F̃(z^H_{Λ(i,λ−i)} / z^L_{Λ(i,λ−i)}))

z^L_λ = (1 − t)^{#{i: λᵢ=∅}} · t^{#{i: λᵢ=b}} · 0^{#{i: λᵢ=a}}

Implications

◮ Existence, uniqueness, discrete-time approximation, . . .
◮ But: the ODE cannot be computed in practice, since it is 3^I-dimensional.

SLIDE 31

Tree Networks

SLIDE 32

Trees

◮ Abstract from self-reference and correlation problems.
◮ Approximate large random networks with finite degree.
◮ Resemble hierarchies seen in firms or on Twitter.

Network G is ...

◮ . . . a tree if there is at most one path i → . . . → j.
◮ . . . regular with degree d if every node has out-degree d.

SLIDE 33

Adoption in Trees

Conditional independence

◮ (xⱼ)_{j∈Nᵢ} are independent of λᵢ = ∅.
◮ Neighbors’ adoptions (xⱼ)_{j∈Nᵢ} are conditionally independent.

Probability some of i’s neighbors Ni adopts:

x⁻ⁱ_{Nᵢ} = x_{Nᵢ} = 1 − Π_{j∈Nᵢ} (1 − xⱼ)

Individual adoption rates

ẋᵢ = 1 − (1 − x_{Nᵢ})(1 − F̃(1 − x_{Nᵢ}))

◮ I-dimensional ODE.

SLIDE 34

Adoption in Regular Trees

Probability some neighbor adopts

1 − (1 − x)^d

Evolution of individual adoption rates

ẋ = 1 − (1 − x)^d (1 − F̃((1 − x)^d))

◮ 1-dimensional ODE.
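This one-dimensional ODE is easy to integrate. A sketch under the deck’s running assumptions c ∼ U[0, 1] and π₀ = 1/2 (function name mine):

```python
def regular_tree_curve(d, pi0=0.5, steps=2000):
    """Euler-integrate x' = 1 - (1-x)^d (1 - Ftilde((1-x)^d)) on [0, 1],
    where Ftilde(z) = pi0 z / (pi0 z + 1 - pi0) comes from c ~ U[0,1]."""
    def F_tilde(z):
        return pi0 * z / (pi0 * z + 1 - pi0)
    x, dt = 0.0, 1.0 / steps
    for _ in range(steps):
        z = (1 - x) ** d  # Pr(no neighbor has adopted | H)
        x += (1 - z * (1 - F_tilde(z))) * dt
    return x

```

For d = 1 the ODE solves in closed form to x_t = 2 − √(4 − 2t), so x₁ = 2 − √2 ≈ 0.586; a higher degree shifts the whole curve up, previewing Theorem 2.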

SLIDE 35

Comparative Statics in Tree Networks

SLIDE 36

Social Learning Curves: More Informed Neighbors

[Figure: Pr(observe adopt | H) over time for a solo agent, one link, and a chain of links.]

Assumptions: c ∼ U[0, 1], π₀ = 1/2.

SLIDE 37

Social Learning Curves: More Neighbors

[Figure: Pr(observe adopt | H) over time in regular trees with d = 1, 5, 20.]

Regular tree with d neighbors, c ∼ U[0, 1], π₀ = 1/2.

SLIDE 38

Social Learning Improves in Links

◮ Consider a tree Ĝ with subtree G ⊆ Ĝ. Adoption rates: x̂ᵢ, xᵢ.

Theorem 2.

Assume BHR. Social learning improves in links: for any agent i,

x_{Nᵢ} ≤ x̂_{N̂ᵢ}   (*)

Prove (*) by induction

◮ Leaves i of G: x_{Nᵢ} = 0 ≤ x̂_{N̂ᵢ}.
◮ Fix any i and assume (*) holds for all j ∈ Nᵢ.
◮ By BHR, agent j adopts more: xⱼ ≤ x̂ⱼ.
◮ Additionally, i has more neighbors, Nᵢ ⊆ N̂ᵢ. Thus:

x_{Nᵢ} = 1 − Π_{j∈Nᵢ} (1 − xⱼ) ≤ 1 − Π_{j∈N̂ᵢ} (1 − x̂ⱼ) = x̂_{N̂ᵢ}

SLIDE 39

Direct vs Indirect Links

SLIDE 40

Direct vs Indirect Links

ẋ = 1 − (1 − x)(1 − F̃(1 − x)) ≤ 1 − (1 − x)(1 − F̃(1))

SLIDE 41

Direct vs Indirect Links

x_t ≤ (F̃(1) / (1 − F̃(1))) · (exp((1 − F̃(1)) t) − 1)

SLIDE 42

Direct vs Indirect Links

x_t ≤ (F̃(1) / (1 − F̃(1))) · (exp((1 − F̃(1)) t) − 1)

x̌_t = x_{{j,k},t} = 1 − (1 − F̃(1) t)²

SLIDE 43

Direct vs Indirect Links

[Figure: Pr(observe adopt | H) over time for two direct links vs. a chain of links.]

Theorem 3.

Two direct links are superior to a chain of indirect ones: x̌_t > x_t.
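The two closed forms can be compared directly. With c ∼ U[0, 1] and π₀ = 1/2 we get F̃(1) = F(π₀) = 1/2; the names are mine, and chain_bound is the upper bound from the previous slides, not the chain’s exact curve:

```python
import math

def chain_bound(t, a=0.5):
    """Upper bound for the infinite chain: x_t <= a/(1-a) (e^{(1-a)t} - 1),
    with a = Ftilde(1) = F(pi0)."""
    return a / (1 - a) * (math.exp((1 - a) * t) - 1)

def two_direct_links(t, a=0.5):
    """Probability at least one of two leader neighbors adopted by t:
    x_t = 1 - (1 - a t)^2."""
    return 1 - (1 - a * t) ** 2

```

Since two_direct_links(t) beats even the chain’s upper bound on (0, 1], it certainly beats the chain itself, which is the content of Theorem 3.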

SLIDE 44

Beyond Trees

SLIDE 45

Are All Links Beneficial?

Rationale for Theorem 2

◮ Indirect links induce neighbors to inspect.
◮ Learn from neighbors’ inspections and adoptions.

How about correlating and backward links?

[Diagrams: a correlating link among agents i, j, k, and a backward link between agents i and j.]

SLIDE 46

Adding a Correlating Link

[Diagram: i observes j and k, before and after adding the correlating link j → k.]

The correlating link lowers i’s information and utility

◮ Agent i only learns from j if k has not adopted.
◮ In this event, adding j → k reduces j’s adoption.

SLIDE 47

Adding a Backward Link

[Diagram: network i → j, before and after adding the backward link j → i.]

The backward link lowers i’s information and utility

◮ Before tᵢ, agent j never observes adoption by i.
◮ x⁻ⁱ_{j,t}: probability j adopts product H by t ≤ tᵢ.

ẋ⁻ⁱ_{j,t} = Pr(j inspects | i has not adopted)

SLIDE 48

Adding a Backward Link

[Diagram: network i → j, before and after adding the backward link j → i.]

The backward link lowers i’s information and utility

◮ Before tᵢ, agent j never observes adoption by i.
◮ x⁻ⁱ_{j,t}: probability j adopts product H by t ≤ tᵢ.

ẋ⁻ⁱ_{j,t} = F̃(1 − x⁻ʲ_{i,t}) ≤ F̃(1)

x⁻ⁱ_{j,t} ≤ F̃(1) t = x_{j,t}, where x_{j,t} is j’s adoption probability in the network i → j.

SLIDE 49

Self-Referential and Correlating Links

G is an i-tree iff . . .

1. i has no backward links B := {j → i}.
2. i’s neighbors j, j′ ∈ Nᵢ are independent: Sⱼ ∩ Sⱼ′ = ∅; in particular, there are no correlating links C := {j → j′}.

Adding self-referential and correlating links to an i-tree

Ĝ with G ⊊ Ĝ ⊆ G ∪ C ∪ B.

Theorem 4.

Backward and correlating links harm i’s learning: x̂⁻ⁱ_{Nᵢ} < x_{Nᵢ}.

Idea:

Links C ∪ B only matter when they convey bad news.

SLIDE 50

Optimality of the Star Network

The i-Star

Theorem 5.

The i-star maximizes i’s learning: for any G ≠ G*, x*_{N*ᵢ} > x⁻ⁱ_{Nᵢ}.

In the i-star

◮ If i observes no adoption at tᵢ, then cⱼ > π for all j with tⱼ < tᵢ. (*)

In an arbitrary network G, if (*) holds

◮ j with the lowest tⱼ observes no adoption ⇒ does not inspect.
◮ j′ with the next-lowest tⱼ′ observes no adoption ⇒ does not inspect, and so on.

SLIDE 51

Centralized Networks vs. Decentralized Networks

Theorem 6.

Assume BHR, and that all agents have d neighbors. All agents prefer a large random network over the complete network.

Idea

◮ Agent i’s optimal network is the i-star.
◮ The complete network is worse: it adds correlating and backward links.
◮ A random network is better under BHR: it adds new information.

SLIDE 52

Imperfect Information from Adoption

SLIDE 53

Imperfect Learning

Agents have idiosyncratic preferences

◮ Adopt with probability q^θ in state θ.

Social learning curves

◮ May see multiple adoptions.
◮ Let {x⁻ⁱ_{A,t}, y⁻ⁱ_{A,t}} be the probability that exactly the neighbors A ⊆ Nᵢ adopt if θ ∈ {H, L}.

Adoption rates in general network

ẋᵢ = q^H Σ_{A⊆Nᵢ} x⁻ⁱ_A · F̃(x⁻ⁱ_A / y⁻ⁱ_A)

SLIDE 54

Imperfect Learning

Agents have idiosyncratic preferences

◮ Adopt with probability q^θ in state θ.

Social learning curves

◮ May see multiple adoptions.
◮ Let {x⁻ⁱ_{A,t}, y⁻ⁱ_{A,t}} be the probability that exactly the neighbors A ⊆ Nᵢ adopt if θ ∈ {H, L}.

Adoption rates in tree

ẋᵢ = q^H Σ_{A⊆Nᵢ} x_A · F̃(x_A / y_A), for

x_A = Π_{j∈A} xⱼ · Π_{j∈Nᵢ∖A} (1 − xⱼ)

SLIDE 55

Imperfect Learning

Agents have idiosyncratic preferences

◮ Adopt with probability q^θ in state θ.

Social learning curves

◮ May see multiple adoptions.
◮ Let {x⁻ⁱ_{A,t}, y⁻ⁱ_{A,t}} be the probability that exactly the neighbors A ⊆ Nᵢ adopt if θ ∈ {H, L}.

Adoption rates in regular tree of degree d

ẋ = q^H Σ_{ν=0}^{d} x_{(ν,d)} · F̃(x_{(ν,d)} / y_{(ν,d)}), for

x_{(ν,d)} := C(d, ν) x^ν (1 − x)^{d−ν}
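The right-hand side of this ODE is straightforward to evaluate. A sketch assuming c ∼ U[0, 1], so F̃(ℓ) = π₀ℓ/(π₀ℓ + 1 − π₀) with ℓ the H-to-L likelihood ratio; names are mine:

```python
from math import comb

def imperfect_x_dot(x, y, d, qH, pi0=0.5):
    """Evaluate x' = qH * sum_nu x_(nu,d) Ftilde(x_(nu,d)/y_(nu,d)) for a
    regular tree of degree d, where x, y are a neighbor's adoption
    probabilities given H and L, and x_(nu,d) = C(d,nu) x^nu (1-x)^(d-nu)."""
    def F_tilde(lr):
        return pi0 * lr / (pi0 * lr + 1 - pi0)
    total = 0.0
    for nu in range(d + 1):
        px = comb(d, nu) * x ** nu * (1 - x) ** (d - nu)  # x_(nu,d)
        py = comb(d, nu) * y ** nu * (1 - y) ** (d - nu)  # y_(nu,d)
        if px == 0.0:
            continue
        # an infinite likelihood ratio (py = 0) reveals theta = H, so F -> 1
        total += px * (1.0 if py == 0.0 else F_tilde(px / py))
    return qH * total

```

With perfect information (q^H = 1, y = 0) only ν = 0 gives a positive y_{(ν,d)}, and the sum collapses to the earlier formula 1 − (1 − x)^d (1 − F̃((1 − x)^d)).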
SLIDE 56

Comparative Statics: Blackwell Sufficiency

Experiment (x̂, ŷ) is more informative than (x, y) iff

x̂/ŷ ≥ x/y and (1 − x̂)/(1 − ŷ) ≤ (1 − x)/(1 − y)

[Figure: in the (y, x) square from (0, 0) to (1, 1), the experiments (x̂, ŷ) dominating (x, y) lie between the “Adopt” constraint and the “Not” constraint.]

SLIDE 57

Comparative Statics for Trees

Lemma 2.

Assume BHR. If the social learning curves of Nᵢ are more informative, then i’s adoption is more informative.

This implies that on trees

◮ Adoption is more informative over time.
◮ Adoption is more informative in direct and indirect links.
◮ Adoption is more informative if q^H rises or q^L falls.

Also in some examples

◮ Self-referential links lower informativeness of adoption.

SLIDE 58

Imperfect Information of Networks

SLIDE 59

Imperfect Information: Poisson Trees

Known neighbors: i observes both A and Nᵢ

◮ Probability of ι neighbors: P(ι|k) := e^{−k} k^ι / ι!

ẋ = Σ_{ι≥0} P(ι|k) [1 − (1 − x)^ι (1 − F̃((1 − x)^ι))]

Unknown neighbors: i observes only A, not Nᵢ

◮ Probability no neighbor adopts: e^{−kx}

ẋ = 1 − e^{−kx} (1 − F̃(e^{−kx}))

Under BHR, more social learning with known neighbors.

◮ The number of neighbors Nᵢ is directly informative.
◮ This compounds and increases everyone’s information.
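Both ODEs can be integrated with the same Euler loop. A sketch assuming c ∼ U[0, 1], with the Poisson sum truncated; names are mine:

```python
import math

def poisson_curves(k=3.0, pi0=0.5, steps=2000, max_deg=60):
    """Euler-integrate the known- and unknown-neighbor Poisson-tree ODEs,
    with Ftilde(z) = pi0 z / (pi0 z + 1 - pi0) from c ~ U[0,1]."""
    def F_tilde(z):
        return pi0 * z / (pi0 * z + 1 - pi0)
    pmf = [math.exp(-k) * k ** i / math.factorial(i) for i in range(max_deg)]
    x_known = x_unknown = 0.0
    dt = 1.0 / steps
    for _ in range(steps):
        x_known += dt * sum(
            p * (1 - (1 - x_known) ** i * (1 - F_tilde((1 - x_known) ** i)))
            for i, p in enumerate(pmf))
        z = math.exp(-k * x_unknown)  # Pr(no neighbor has adopted)
        x_unknown += dt * (1 - z * (1 - F_tilde(z)))
    return x_known, x_unknown

```

Numerically x_known ends above x_unknown, illustrating the slide’s claim that observing Nᵢ adds information.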

SLIDE 60

Deterministic vs Random Trees

SLIDE 61

Deterministic vs Random Trees

◮ Random tree with D links, known neighbors:

dx̂/dt = E[1 − (1 − x̂)^D (1 − F̃((1 − x̂)^D))]

◮ Deterministic tree with d = E[D] links:

dx/dt = 1 − (1 − x)^d (1 − F̃((1 − x)^d)) =: φ(x, d)

If π₀ ≤ 1/2 and c ∼ U[0, 1], there is more social learning in the deterministic tree.

◮ φ(x, d) is concave in d, and so E[φ(x, D)] < φ(x, d).
◮ Hence x̂ ≤ x, and so E[1 − (1 − x̂)^D] ≤ 1 − (1 − x)^d.
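The Jensen step is easy to verify numerically. Here D is uniform on {1, 3} with mean d = 2, an assumed example distribution; c ∼ U[0, 1] and π₀ = 1/2 as in the deck:

```python
def phi(x, d, pi0=0.5):
    """phi(x, d) = 1 - (1-x)^d (1 - Ftilde((1-x)^d)) with c ~ U[0,1]."""
    z = (1 - x) ** d
    return 1 - z * (1 - pi0 * z / (pi0 * z + 1 - pi0))

x = 0.3
mixed = 0.5 * phi(x, 1) + 0.5 * phi(x, 3)   # E[phi(x, D)]
deterministic = phi(x, 2)                   # phi(x, E[D])

```

Here mixed ≈ 0.666 < deterministic ≈ 0.671: the strict inequality E[φ(x, D)] < φ(x, d) driving the comparison.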

SLIDE 62

Undirected Poisson Networks

Two Complications

◮ i’s neighbors j have ι + 1 neighbors, where ι ∼ P(·|k).
◮ Before tᵢ, j conditions on λᵢ = ∅.

Known neighbors

◮ j has ι ∼ P(·|d) neighbors (plus i, who has not adopted)

ẋ = Σ_{ι≥0} P(ι|d) [1 − (1 − x)^ι (1 − F̃((1 − x)^{ι+1}))]

Unknown neighbors

◮ The complications cancel, since j can’t see i before tᵢ.
◮ j observes ν ∼ P(·|dx) adoptions from ι ∼ P(·|d) neighbors.
◮ Same adoption rates x_t as in the directed Poisson network.

SLIDE 63

Conclusion

A tractable model of learning in networks

◮ Agents learn private information after inspection.
◮ Exogenous network, independent of timing.

Social learning curves

◮ Describe full dynamics via ODEs.
◮ “Value function” of links in trees.
◮ Effects of undirected learning and correlation.
◮ Optimality of the star network.

Future work

◮ Impact of the network on aggregates: welfare, diffusion.
◮ Policies: pricing, advertising, and seeding.
◮ And much more . . .