

  1. Naive Bayesian Learning in Social Networks. Jerry Anunrojwong (Harvard), joint work with Nat Sothanaphan (MIT). EC’18

  2. Social Learning. The state of the world is unknown to the agents. Rule: agents can only talk to their neighbors. Prior work: Bayesian learning and naive learning.

  3. Bayesian Learning (Rational). Beliefs are distributions; agents are perfectly rational and Bayesian.

  4. Bayesian Learning (Rational). Beliefs are distributions; agents are perfectly rational and Bayesian. Pro: agents weigh confidence in beliefs.

  5. Bayesian Learning (Rational). Beliefs are distributions; agents are perfectly rational and Bayesian. Pro: agents weigh confidence in beliefs. Cons: agents must do very sophisticated Bayesian reasoning, and the network structure must be common knowledge. (Agent: “I need to subtract other people’s beliefs from yours. But how? I need superhuman reasoning & knowledge.”)

  6. Naive Learning (DeGroot). Beliefs are scalars. Update beliefs by taking a (weighted) average of neighbors’ beliefs. (Agent: “I don’t need to know beyond my neighbors!”)

  7. Naive Learning (DeGroot). Beliefs are scalars. Update beliefs by taking a (weighted) average of neighbors’ beliefs. (Agent: “I don’t need to know beyond my neighbors!”) Pros: a simple and intuitive belief update rule; agents only need to know their neighbors.

  8. Naive Learning (DeGroot). Beliefs are scalars. Update beliefs by taking a (weighted) average of neighbors’ beliefs. (Agent: “I don’t need to know beyond my neighbors!”) Pros: a simple and intuitive belief update rule; agents only need to know their neighbors. Con: no notion of confidence in beliefs.
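For concreteness, a minimal sketch of the DeGroot dynamic in Python; the function name, the weight matrix, and the initial beliefs are illustrative choices, not taken from the talk:

```python
import numpy as np

def degroot_step(beliefs, weights):
    """One DeGroot step: each agent replaces its scalar belief with a
    weighted average of its own and its neighbors' current beliefs.
    `weights` is row-stochastic: weights[i, j] is the weight that
    agent i puts on agent j."""
    return weights @ beliefs

# Three agents on a line, each averaging uniformly over itself and its neighbors.
W = np.array([[1/2, 1/2, 0.0],
              [1/3, 1/3, 1/3],
              [0.0, 1/2, 1/2]])
b = np.array([1.0, 0.0, 0.0])
for _ in range(50):
    b = degroot_step(b, W)
print(b.round(3))  # all three agents end up at the same consensus value
```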

  9. Question: how can we combine the pros of naive and Bayesian learning? Naive Bayesian Learning: beliefs are distributions and agents use Bayes’ rule (Bayesian); agents treat neighbors as independent (naive); the belief update rule only depends on neighbors (naive). This keeps the pros of both: weighing confidence in beliefs, a simple and intuitive belief update rule, and only needing to know neighbors.

  10. Naive Bayesian Learning (our paper). Beliefs are distributions. Agents update beliefs by Bayes’ rule, naively assuming that neighbors are independent information sources. My mental model: an unknown state of the world emits independent signals, my signal and my neighbors’ signals; starting from a common prior, each signal updates the prior to a posterior, producing my belief and my neighbors’ beliefs.

  11. Naive Bayesian Learning (our paper): my update rule. Each time step, I have access to my own and my neighbors’ beliefs.

  12. Naive Bayesian Learning (our paper): my update rule. Each time step: (1) I have access to my own and my neighbors’ beliefs; (2) I infer my own and my neighbors’ signals, assuming their beliefs arise from my mental model.

  13. Naive Bayesian Learning (our paper): my update rule. Each time step: (1) I have access to my own and my neighbors’ beliefs; (2) I infer my own and my neighbors’ signals, assuming their beliefs arise from my mental model; (3) I update my belief from the common prior by conditioning on my own and my neighbors’ inferred signals.
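Slides 10-13 pin the rule down enough to sketch it for a finite state space. Below is one possible reading in Python: each observed belief is treated as the common prior updated on one independent signal, so the inferred signal log-likelihood is log(belief) - log(prior). The function and variable names and the synchronous timing are my illustration, not the paper’s code:

```python
import numpy as np

def naive_bayesian_step(beliefs, adjacency, prior):
    """One synchronous naive Bayesian step over a finite state space.

    beliefs:   (n_agents, n_states), rows are current beliefs (sum to 1)
    adjacency: (n_agents, n_agents) 0/1 symmetric matrix, no self-loops
    prior:     (n_states,) common prior

    Each agent treats every belief it sees (its own and each neighbor's)
    as the common prior updated on one independent signal, infers those
    signals, and conditions the common prior on all of them at once.
    """
    log_lik = np.log(beliefs) - np.log(prior)       # inferred signals
    who = adjacency + np.eye(len(beliefs))          # self plus neighbors
    log_post = np.log(prior) + who @ log_lik        # Bayes' rule in logs
    post = np.exp(log_post - log_post.max(axis=1, keepdims=True))
    return post / post.sum(axis=1, keepdims=True)   # renormalize rows
```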

  14. Naive Bayesian Update Rule: Example. [Figure: three agents on a line, 1-2-3; rows show beliefs at t=0, the inferred signals, and beliefs at t=1.]

  15. Naive Bayesian Update Rule: Example (continued). [Figure: beliefs at t=1, the inferred signals, and beliefs at t=2.]

  16. Naive Bayesian Update Rule: Example (continued). Copies of signals “flow” through the network, because each agent’s mental model assumes the beliefs it sees are “fresh”. [Figure: beliefs at t=1, the inferred signals, and beliefs at t=2.]
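Running the sketch above on the three-agent line reproduces this “flow”: only agent 1 starts informed, yet its signal reaches agent 3 through agent 2 and then keeps being re-counted as fresh. The numbers below assume the hypothetical naive_bayesian_step from the previous sketch:

```python
# Path 1 - 2 - 3, binary state, flat common prior.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]])
prior = np.array([0.5, 0.5])
beliefs = np.array([[0.8, 0.2],   # only agent 1 starts with information
                    [0.5, 0.5],
                    [0.5, 0.5]])
for t in range(1, 4):
    beliefs = naive_bayesian_step(beliefs, A, prior)
    print(t, beliefs.round(3))
# Copies of agent 1's signal propagate and get double-counted, so all
# beliefs drift toward a point mass on state 0.
```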

  17. Main Result (Informal). We analytically characterize the consensus, and the formula for the consensus says: centrally located + confident beliefs = influence on the consensus. The “centrally located” ingredient echoes naive learning; the “confident beliefs” ingredient echoes Bayesian learning.

  18. Main Result (Informal). Centrally located + confident beliefs = influence on the consensus. Centrality here is eigenvector centrality: the principal eigenvector of the adjacency matrix.

  19. Main Result (Informal). Centrally located + confident beliefs = influence on the consensus. Centrality here is eigenvector centrality, the principal eigenvector of the adjacency matrix: an agent is central if it connects to other central agents.

  20. Main Result (Informal). Centrally located + confident beliefs = influence on the consensus. Centrality here is eigenvector centrality, the principal eigenvector of the adjacency matrix: an agent is central if it connects to other central agents. Eigenvector centrality also appears in DeGroot learning, but there it arises from different dynamics.
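Eigenvector centrality is easy to compute by power iteration. A small sketch (my code, with an identity shift added so the iteration also converges on bipartite graphs):

```python
import numpy as np

def eigenvector_centrality(adjacency, iters=200):
    """Power iteration for the principal eigenvector of the adjacency
    matrix: an agent is central in proportion to the summed centrality
    of the agents it connects to. The identity shift keeps the
    iteration from oscillating on bipartite graphs."""
    M = adjacency + np.eye(len(adjacency))
    v = np.ones(len(adjacency))
    for _ in range(iters):
        v = M @ v
        v /= np.linalg.norm(v)
    return v / v.sum()          # normalize so centralities sum to 1

# On the path 1 - 2 - 3 the middle agent is the most central.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]])
print(eigenvector_centrality(A).round(3))  # approx. [0.293, 0.414, 0.293]
```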

  21. Main Result (Formal). Definition: a weighted log-likelihood function ℓ(θ) over states θ; the consensus is the θ that maximizes ℓ. Theorem: every agent’s belief converges to the point distribution at the maximizer of ℓ(θ).

  22. Main Result (Formal). The definition of ℓ(θ) involves agent i’s initial belief and the common prior: for each agent i, the “confidence of beliefs at θ” measures how much agent i believes in θ compared to the prior baseline. Theorem: every agent’s belief converges to the point distribution at the maximizer of ℓ(θ).

  23. Main Result (Formal). ℓ(θ) is the centrality-weighted average of the agents’ confidences at θ, where the “confidence of beliefs at θ” measures how much agent i’s initial belief favors θ compared to the common-prior baseline. Theorem: every agent’s belief converges to the point distribution at the maximizer of ℓ(θ).
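The formula itself is lost from the transcript; putting the annotations on slides 21-23 together, the definition is presumably of the form below (my notation, a reconstruction rather than the paper’s exact statement):

```latex
\ell(\theta) \;=\; \sum_{i=1}^{n} v_i \,\log\frac{p_i(\theta)}{p_0(\theta)},
\qquad
\theta^{*} \;=\; \arg\max_{\theta}\, \ell(\theta)
```

Here p_0 is the common prior, p_i is agent i’s initial belief, and v_i is agent i’s eigenvector centrality; the theorem then says every agent’s belief converges to the point distribution at θ*.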

  24. Understanding the Main Result Intuitively. Agents treat many copies of signals as independent → beliefs converge to a point.

  25. Understanding the Main Result Intuitively. Agents treat many copies of signals as independent → beliefs converge to a point. Initial beliefs come from independent signals → “confident beliefs” = “informative signals”.

  26. Example: Gaussian Beliefs. Agent i’s initial belief is Gaussian; interpretation: a scalar belief μ_i held with confidence (precision) τ_i.

  27. Example: Gaussian Beliefs. Agent i’s initial belief is Gaussian: a scalar belief μ_i held with confidence (precision) τ_i. A scenario: at the beginning, agent i receives a signal about the state with independent noise.

  28. Example: Gaussian Beliefs (continued). In this scenario the consensus has a closed form (formula on slide).

  29. Example: Gaussian Beliefs (continued). The consensus has a closed form, and agent i’s influence on it reads: centrally located + informative signals = influence on the consensus.
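Under the reconstructed ℓ(θ) above, with an assumed flat common prior and Gaussian initial beliefs p_i = N(μ_i, 1/τ_i) (my assumptions for the sketch), the consensus comes out in closed form:

```latex
\ell(\theta) \;=\; -\tfrac{1}{2}\sum_{i} v_i \tau_i (\theta - \mu_i)^2 + \text{const},
\qquad
\theta^{*} \;=\; \frac{\sum_i v_i \tau_i \mu_i}{\sum_i v_i \tau_i}
```

So agent i’s influence on the consensus is proportional to v_i τ_i: centrality times confidence.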

  30. Policy Implication I: how to seed opinion leaders. Learning quality = the precision of the consensus, viewed as a random variable. More information helps, unless an agent is central but poorly informed. If social planners want to seed opinion leaders, they must make those leaders well informed; otherwise you get a leader announcing “I am a centrally located leader, and I am confident enough to dump my uninformed belief on you all,” and everyone else becomes isolated minions.

  31. Policy Implication II: how to solve clustered seeding. Information loss from clustered seeding occurs in their model (BBCM) but not in ours; the key point is that their model has no notion of “confidence in beliefs”. [Figure: a three-agent example; in our model the weights are ⅓, ⅓, ⅓ (optimal information aggregation), while in BBCM they are roughly ½, ½, 0 (the middle agent’s aggregation is “blocked”).]

  32. Conclusion. - We propose a model that combines the pros of naive and Bayesian learning. - Consensus = the maximizer of the weighted log-likelihood function. - Centrally located + confident beliefs = influence on the consensus. - Two policy implications: how to seed opinion leaders and how to handle clustered seeding.

  33. Gaussian Beliefs: Quality of Learning. The consensus is a random variable; its precision Q captures learning quality. Comparative statics: more information raises Q, unless v_k is large and τ_k is small. Policy implication (as on slide 30): if social planners want to seed opinion leaders, they must make those leaders well informed, lest a centrally located leader confidently dump an uninformed belief on everyone.
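If, as the scenario on slides 27-29 suggests, each initial belief is centered on an independent noisy signal, μ_k = θ + ε_k with ε_k ~ N(0, 1/τ_k), then the consensus θ* above is itself Gaussian, and a short computation (mine, so treat it as a reconstruction of Q) gives:

```latex
\theta^{*} \sim \mathcal{N}\!\left(\theta,\; \frac{\sum_k v_k^2 \tau_k}{\bigl(\sum_k v_k \tau_k\bigr)^{2}}\right),
\qquad
Q \;=\; \frac{\bigl(\sum_k v_k \tau_k\bigr)^{2}}{\sum_k v_k^2 \tau_k}
```

Differentiating, ∂Q/∂τ_k > 0 exactly when v_k ∑_j v_j τ_j < 2 ∑_j v_j² τ_j, so raising an agent’s precision improves learning quality unless that agent is very central (large v_k) while still poorly informed (small τ_k), which matches the slide’s caveat.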
