Naive Bayesian Learning in Social Networks (Jerry Anunrojwong) - PowerPoint PPT Presentation



SLIDE 1

Naive Bayesian Learning in Social Networks

Jerry Anunrojwong (Harvard)

joint with Nat Sothanaphan (MIT)

EC’18

SLIDE 2

Social Learning

  • State of the world is unknown to the agents.
  • Rule: agents can only talk to their neighbors.
  • Prior works: Bayesian learning, naive learning.

SLIDE 3

Bayesian Learning (Rational)

  • Beliefs are distributions.
  • Perfectly rational and Bayesian.

SLIDE 4

Bayesian Learning (Rational)

  • Beliefs are distributions.
  • Perfectly rational and Bayesian.
  • Weigh confidence in beliefs.

SLIDE 5

Bayesian Learning (Rational)

“I need to subtract other people’s beliefs from yours. But how? I need superhuman reasoning & knowledge.”

  • Beliefs are distributions.
  • Perfectly rational and Bayesian.
  • Weigh confidence in beliefs.
  • Do very sophisticated Bayesian reasoning.
  • Network structure is common knowledge.

SLIDE 6

Naive Learning (DeGroot)

  • Beliefs are scalars.
  • Update beliefs by taking a (weighted) average of neighbors’ beliefs.
  • “I don’t need to know beyond my neighbors!”

SLIDE 7

Naive Learning (DeGroot)

  • Beliefs are scalars.
  • Update beliefs by taking a (weighted) average of neighbors’ beliefs.
  • Simple and intuitive belief update rule.
  • Only need to know neighbors: “I don’t need to know beyond my neighbors!”

SLIDE 8

Naive Learning (DeGroot)

  • Beliefs are scalars.
  • Update beliefs by taking a (weighted) average of neighbors’ beliefs.
  • Simple and intuitive belief update rule.
  • Only need to know neighbors: “I don’t need to know beyond my neighbors!”
  • But: no notion of confidence in beliefs.
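The DeGroot averaging rule described above fits in a few lines. The weight matrix below is an arbitrary illustration (a path network 1 - 2 - 3), not an example taken from the slides:

```python
import numpy as np

# A minimal sketch of DeGroot learning: scalar beliefs, each step replaced
# by a weighted average of neighbors' beliefs, x_{t+1} = W x_t, where W is
# row-stochastic and W[i, j] > 0 only if j is a neighbor of i (or i itself).

W = np.array([
    [0.50, 0.50, 0.00],   # agent 1 averages itself and agent 2
    [0.25, 0.50, 0.25],   # agent 2 averages all three agents
    [0.00, 0.50, 0.50],   # agent 3 averages itself and agent 2
])

x = np.array([0.0, 1.0, 2.0])  # initial scalar beliefs
for _ in range(200):
    x = W @ x                  # one DeGroot averaging step

print(x)  # all entries are (approximately) the same consensus value
```

The consensus weights each initial belief by the corresponding entry of the left unit eigenvector of W, which is the DeGroot counterpart of the centrality weights discussed later in the deck.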

SLIDE 9

Question: How can we combine the pros of naive and Bayesian learning?

  • Simple and intuitive belief update rule (naive).
  • Only need to know neighbors (naive).
  • Weigh confidence in beliefs (Bayesian).

Naive Bayesian Learning: Beliefs are distributions. Agents use Bayes’ rule. Agents treat neighbors as independent. The belief update rule only depends on neighbors.

SLIDE 10

Naive Bayesian Learning (Our paper)

Beliefs are distributions. Update beliefs by Bayes’ rule, assuming naively that neighbors are independent information sources.

[Diagram of my mental model: an unknown state of the world generates my signal and my neighbors’ signals as independent signals; a posterior update from the common prior yields my belief and my neighbors’ beliefs.]

SLIDE 11

Naive Bayesian Learning (Our paper)

Beliefs are distributions. Update beliefs by Bayes’ rule, assuming naively that neighbors are independent information sources.

[Diagram of my mental model: an unknown state of the world generates my signal and my neighbors’ signals as independent signals; a posterior update from the common prior yields my belief and my neighbors’ beliefs.]

My update rule, each time step:
  • I have access to my and my neighbors’ beliefs.

SLIDE 12

Naive Bayesian Learning (Our paper)

Beliefs are distributions. Update beliefs by Bayes’ rule, assuming naively that neighbors are independent information sources.

[Diagram of my mental model: an unknown state of the world generates my signal and my neighbors’ signals as independent signals; a posterior update from the common prior yields my belief and my neighbors’ beliefs.]

My update rule, each time step:
  • I have access to my and my neighbors’ beliefs.
  • I infer my and my neighbors’ signals, assuming their beliefs arise from my mental model.

SLIDE 13

Naive Bayesian Learning (Our paper)

Beliefs are distributions. Update beliefs by Bayes’ rule, assuming naively that neighbors are independent information sources.

[Diagram of my mental model: an unknown state of the world generates my signal and my neighbors’ signals as independent signals; a posterior update from the common prior yields my belief and my neighbors’ beliefs.]

My update rule, each time step:
  • I have access to my and my neighbors’ beliefs.
  • I infer my and my neighbors’ signals, assuming their beliefs arise from my mental model.
  • I update my beliefs from the common prior by conditioning on my and my neighbors’ inferred signals.
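The three steps above can be sketched concretely. The Gaussian parametrization here (beliefs N(mu, 1/tau), common prior N(mu0, 1/tau0)) and the exact formulas are my reconstruction of the rule just described, not code from the paper:

```python
import numpy as np

# One naive Bayesian update step under an assumed Gaussian mental model.

def infer_signal(mu_j, tau_j, mu0, tau0):
    """Invert the mental model: if belief N(mu_j, 1/tau_j) arose from the
    common prior plus one independent Gaussian signal, that signal has
    precision tau_j - tau0 and mean s solving
    tau_j * mu_j = tau0 * mu0 + (tau_j - tau0) * s."""
    return (tau_j * mu_j - tau0 * mu0) / (tau_j - tau0)

def naive_bayes_step(i, neighbors, mu, tau, mu0, tau0):
    """Agent i conditions the common prior on the signals inferred from its
    own and its neighbors' beliefs, treated (naively) as independent."""
    group = [i] + neighbors
    new_tau = tau0 + sum(tau[j] - tau0 for j in group)
    new_mu = (tau0 * mu0
              + sum((tau[j] - tau0) * infer_signal(mu[j], tau[j], mu0, tau0)
                    for j in group)) / new_tau
    return new_mu, new_tau

# Path network 1 - 2 - 3 (0-indexed here), with a nearly flat common prior:
mu0, tau0 = 0.0, 0.01
mu = np.array([0.0, 1.0, 2.0])
tau = np.array([1.0, 1.0, 1.0])
print(naive_bayes_step(1, [0, 2], mu, tau, mu0, tau0))
```

The middle agent's updated precision exceeds any single belief's precision because the three inferred signals are treated as independent, which is exactly the naive part of the model.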

SLIDE 14

Naive Bayesian Update Rule: Example

[Diagram: a path network of agents 1 - 2 - 3, showing beliefs at t=0, the inferred signals, and beliefs at t=1.]

SLIDE 15

Naive Bayesian Update Rule: Example

[Diagram: the same path network 1 - 2 - 3, showing beliefs at t=1, the inferred signals, and beliefs at t=2.]

SLIDE 16

Naive Bayesian Update Rule: Example

[Diagram: the same path network 1 - 2 - 3, showing beliefs at t=1, the inferred signals, and beliefs at t=2.]

Copies of signals “flow” through the network; the mental model assumes beliefs are “fresh”.

SLIDE 17

Main Result (Informal)

We analytically characterize the consensus, and the formula for the consensus says:

influence on consensus = centrally located (as in naive learning) + confident beliefs (as in Bayesian learning)

SLIDE 18

Main Result (Informal)

We analytically characterize the consensus, and the formula for the consensus says:

influence on consensus = centrally located (as in naive learning) + confident beliefs (as in Bayesian learning)

Eigenvector centrality: an eigenvector of the adjacency matrix.

SLIDE 19

Main Result (Informal)

We analytically characterize the consensus, and the formula for the consensus says:

influence on consensus = centrally located (as in naive learning) + confident beliefs (as in Bayesian learning)

Eigenvector centrality: an eigenvector of the adjacency matrix. An agent is central if it connects to other central agents.

SLIDE 20

Main Result (Informal)

We analytically characterize the consensus, and the formula for the consensus says:

influence on consensus = centrally located (as in naive learning) + confident beliefs (as in Bayesian learning)

Eigenvector centrality: an eigenvector of the adjacency matrix. An agent is central if it connects to other central agents. Eigenvector centrality also appears in DeGroot learning, but there it arises from different dynamics.
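Eigenvector centrality as defined above (the leading eigenvector of the adjacency matrix) can be computed by power iteration. The graph here is a made-up example, not one from the talk:

```python
import numpy as np

# Power iteration for eigenvector centrality: repeatedly multiply by the
# adjacency matrix and renormalize; the iterate converges to the leading
# eigenvector, so an agent's score grows with its neighbors' scores,
# i.e. an agent is central if it connects to other central agents.

A = np.array([          # undirected graph with edges 0-1, 0-2, 1-2, 2-3
    [0, 1, 1, 0],
    [1, 0, 1, 0],
    [1, 1, 0, 1],
    [0, 0, 1, 0],
])

v = np.ones(A.shape[0])         # start from a uniform vector
for _ in range(1000):
    v = A @ v                   # multiply by the adjacency matrix ...
    v = v / np.linalg.norm(v)   # ... and renormalize

print(v)  # agent 2, the best-connected node, gets the highest score
```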

SLIDE 21

Main Result (Formal)

DEFINITION (weighted log-likelihood function): for each state θ,

ℓ(θ) = Σ_i v_i log( f_i(θ) / f_0(θ) )

Theorem: Every agent’s belief converges to the point distribution at θmax, the maximizer of ℓ(θ). This θmax is the consensus.

SLIDE 22

Main Result (Formal)

DEFINITION (weighted log-likelihood function): for each state θ,

ℓ(θ) = Σ_i v_i log( f_i(θ) / f_0(θ) )

  • f_i: agent i’s initial belief.
  • f_0: common prior.
  • log( f_i(θ) / f_0(θ) ): “confidence of beliefs at θ”, i.e. how much agent i believes in θ compared to the prior baseline.

Theorem: Every agent’s belief converges to the point distribution at θmax, the maximizer of ℓ(θ). This θmax is the consensus.

SLIDE 23

Main Result (Formal)

DEFINITION (weighted log-likelihood function): for each state θ,

ℓ(θ) = Σ_i v_i log( f_i(θ) / f_0(θ) )

  • f_i: agent i’s initial belief.
  • f_0: common prior.
  • log( f_i(θ) / f_0(θ) ): “confidence of beliefs at θ”, i.e. how much agent i believes in θ compared to the prior baseline.
  • ℓ(θ) is the centrality-weighted average of these confidences, with weights v_i.

Theorem: Every agent’s belief converges to the point distribution at θmax, the maximizer of ℓ(θ). This θmax is the consensus.

SLIDE 24

Understanding Main Result Intuitively

  • Agents treat a lot of signals as independent
    → beliefs converge to a point.

SLIDE 25

Understanding Main Result Intuitively

  • Agents treat a lot of signals as independent
    → beliefs converge to a point.

  • Initial beliefs come from independent signals
    → “confident beliefs” = “informative signals”.

SLIDE 26

Example: Gaussian Beliefs

Agent i’s initial belief: N(μi, 1/τi).

Interpretation: scalar belief μi with confidence τi.

SLIDE 27

Example: Gaussian Beliefs

Agent i’s initial belief: N(μi, 1/τi).

Interpretation: scalar belief μi with confidence τi.

A scenario: at the beginning, agent i receives the signal μi = θ + εi with independent Gaussian noise εi of precision τi.

SLIDE 28

Example: Gaussian Beliefs

Agent i’s initial belief: N(μi, 1/τi).

Interpretation: scalar belief μi with confidence τi.

A scenario: at the beginning, agent i receives the signal μi = θ + εi with independent Gaussian noise εi of precision τi.

Consensus: θmax = (Σ_i v_i τ_i μ_i) / (Σ_i v_i τ_i).

SLIDE 29

Example: Gaussian Beliefs

Agent i’s initial belief: N(μi, 1/τi).

Interpretation: scalar belief μi with confidence τi.

A scenario: at the beginning, agent i receives the signal μi = θ + εi with independent Gaussian noise εi of precision τi.

Consensus: θmax = (Σ_i v_i τ_i μ_i) / (Σ_i v_i τ_i); agent i’s influence is v_i τ_i.

influence on consensus = centrally located (v_i) + informative signals (τ_i)
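The Gaussian case can be checked numerically. Assuming Gaussian initial beliefs N(mu_i, 1/tau_i), an (approximately) uninformative common prior, and centrality weights v_i, the centrality-weighted log-likelihood reduces to a weighted sum of Gaussian log-densities, and its maximizer is the centrality- and precision-weighted mean of the belief means. The numbers below are made up for illustration:

```python
import numpy as np

# Brute-force maximization of the centrality-weighted Gaussian log-likelihood
# l(theta) = sum_i v_i * (-tau_i / 2) * (theta - mu_i)**2 (constants dropped),
# compared against the closed-form weighted mean.

v = np.array([0.5, 0.3, 0.2])    # eigenvector centralities (made-up numbers)
mu = np.array([0.0, 1.0, 3.0])   # initial belief means
tau = np.array([1.0, 4.0, 0.5])  # initial belief precisions

def ell(theta):
    # Centrality-weighted Gaussian log-likelihood, up to additive constants.
    return np.sum(v * (-tau / 2.0) * (theta - mu) ** 2)

grid = np.linspace(-2.0, 4.0, 60001)            # brute-force search grid
theta_max = grid[np.argmax([ell(t) for t in grid])]

closed_form = np.sum(v * tau * mu) / np.sum(v * tau)
print(theta_max, closed_form)  # the two agree up to the grid resolution
```

The closed form makes the slide's slogan concrete: each agent's pull on the consensus is proportional to v_i (centrality) times tau_i (informativeness).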

SLIDE 30

Policy Implication I: how to seed opinion leaders

“I am a centrally located leader, and I am confident enough to dump my uninformed belief on you all, isolated minions!”

If social planners want to seed opinion leaders, they must make those leaders well informed; otherwise you get the speaker above. Learning quality = the precision of the consensus, viewed as a random variable; it suffers when an agent is central but poorly informed.

SLIDE 31

Policy Implication II: how to solve clustered seeding

Information loss from clustered seeding occurs in their model (BBCM) but not in ours.

  • Our model: optimal information aggregation; the three agents get weights ⅓, ⅓, ⅓.
  • BBCM: the middle agent is “blocked”; the weights are ~½, ~½, ~0.

Key point: their model has no notion of “confidence in beliefs”.

SLIDE 32

Conclusion

  • We propose a model that combines the pros of naive and Bayesian learning.
  • Consensus = maximizer of the weighted log-likelihood function.
  • Centrally located + confident beliefs = influence on consensus.
  • Two policy implications: how to seed opinion leaders, and how to solve clustered seeding.

SLIDE 33

Gaussian Beliefs: Quality of Learning

The consensus is a random variable; its precision Q captures learning quality.

Comparative statics (formula on slide): Q improves, unless v_k is large and τ_k is small.

POLICY IMPLICATION: If social planners want to seed opinion leaders, they must make those leaders well informed.
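The formula for Q is only shown in a slide figure, so the following is a reconstruction, not the slide's stated result: assuming the consensus is the centrality- and precision-weighted mean Σ v_i τ_i μ_i / Σ v_i τ_i, with μ_i = θ + independent Gaussian noise of precision τ_i, its precision works out to Q = (Σ v_i τ_i)² / Σ v_i² τ_i. A Monte Carlo sketch checks that arithmetic (all numbers made up):

```python
import numpy as np

# Monte Carlo sanity check of a reconstructed learning-quality formula:
# draw many realizations of the noisy initial means, form the weighted
# consensus, and compare its empirical precision to
# Q = sum(v*tau)**2 / sum(v**2 * tau).

rng = np.random.default_rng(0)
v = np.array([0.5, 0.3, 0.2])    # eigenvector centralities
tau = np.array([1.0, 4.0, 0.5])  # signal precisions
theta = 2.0                      # true state

eps = rng.normal(0.0, 1.0 / np.sqrt(tau), size=(200_000, 3))
consensus = (theta + eps) @ (v * tau) / np.sum(v * tau)

Q_empirical = 1.0 / consensus.var()
Q_formula = np.sum(v * tau) ** 2 / np.sum(v ** 2 * tau)
print(Q_empirical, Q_formula)  # close for a large sample
```

Under this reconstruction, a very central agent (large v_k) with a poor signal (small τ_k) drags Q down, matching the policy implication about central but poorly informed leaders.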