
Peer Prediction Mechanisms and their Connections to Machine Learning



  1. Peer Prediction Mechanisms and their Connections to Machine Learning. Jens Witkowski, ETH Zurich. Game Theory Meets Computational Learning Theory, Dagstuhl, June 22, 2017. Joint work with David C. Parkes (Harvard) and Rafael Frongillo (Boulder).

  2. Proper Scoring Rules. A forecaster with belief p ∈ [0, 1] reports a belief y ∈ [0, 1]. On March 1, pay R(y, ω), where ω = 1 if the event occurs and ω = 0 if not. If R(y, ω) is proper, then E_p[R(y, ω)] is maximized by reporting y = p. Example: the Quadratic Scoring Rule [Brier, 1950], R_q(y, ω) = 1 − (y − ω)^2.
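The following short Python sketch (not part of the original slides) checks this properness claim numerically for the quadratic rule: with belief p = 0.7, the expected score over a grid of reports is maximized at y = p.

```python
# Minimal sketch: verify that the quadratic scoring rule
# R_q(y, w) = 1 - (y - w)^2 is proper, i.e. the expected score under
# belief p is maximized by the truthful report y = p.
import numpy as np

def quadratic_score(y, outcome):
    """Brier/quadratic score for report y and binary outcome in {0, 1}."""
    return 1.0 - (y - outcome) ** 2

def expected_score(y, p):
    """Expected score when the true belief is p and the report is y."""
    return p * quadratic_score(y, 1) + (1 - p) * quadratic_score(y, 0)

p = 0.7
grid = np.linspace(0, 1, 101)
best_report = grid[np.argmax([expected_score(y, p) for y in grid])]
print(best_report)  # approximately 0.7: truthful reporting maximizes expected score
```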

  3. "Effective" Scoring Rules. What if the allowed reports y are restricted? True belief: p = 0.7. Allowed reports: y_1 = 0.6 or y_2 = 0.79. Quadratic Rule: report the y "closest" to p! Theorem [Friedman, 1983]: the Quadratic Rule is effective: if Y ⊆ [0, 1], reporting arg min_{y ∈ Y} |y − p| maximizes the expected score.
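A small sketch of the slide's example (again not from the original deck): with p = 0.7 and allowed reports {0.6, 0.79}, the allowed report closest to p also earns the higher expected quadratic score.

```python
# Minimal sketch of Friedman's effectiveness property for this example:
# the allowed report closest to the true belief p earns the highest
# expected quadratic score.
p = 0.7
allowed_reports = [0.6, 0.79]

def expected_quadratic(y, p):
    # E_p[1 - (y - w)^2] for a binary outcome w with Pr(w = 1) = p
    return p * (1 - (y - 1) ** 2) + (1 - p) * (1 - y ** 2)

for y in allowed_reports:
    print(y, abs(y - p), expected_quadratic(y, p))
# y = 0.79 is both closer to p and scores higher (about 0.782 vs 0.780)
```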

  4. Motivation. Objective: truthful elicitation of opinions or experiences.

  5. Peer Prediction. Elicit an informative signal (e.g., "high" or "low" experience); the ground truth is never observed. Pay agent i based on her report x_i and the report x_j of another agent. Objective: truthful reporting of the signal is a Bayes-Nash equilibrium. Key: agent i's signal is informative about agent j's signal.

  6. Belief Model. The world is "good" with probability 0.7 or "bad" with probability 0.3; in the good state each agent observes signal h with probability 0.1 (and l with probability 0.9), in the bad state h with probability 0.6 (and l with probability 0.4). This gives the prior p(h) = 0.25 and the posterior beliefs that S_j = h of p(h | h) = 0.46 and p(h | l) = 0.18.
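As a sketch (my reading of the slide's diagram, so the good/bad labels and edge probabilities should be treated as assumptions), the following snippet derives the prior and posteriors above by Bayes' rule.

```python
# Two-state belief model: "good" world with prior 0.7 where each agent sees h
# with probability 0.1, "bad" world with prior 0.3 where each agent sees h
# with probability 0.6 (assumed reading of the slide's diagram).
prior_world = {"good": 0.7, "bad": 0.3}
p_h_given_world = {"good": 0.1, "bad": 0.6}

# Prior belief that the other agent's signal is h.
p_h = sum(prior_world[w] * p_h_given_world[w] for w in prior_world)

def posterior_h(own_signal):
    """Belief that S_j = h after observing one's own signal S_i."""
    likelihood = {w: p_h_given_world[w] if own_signal == "h" else 1 - p_h_given_world[w]
                  for w in prior_world}
    z = sum(prior_world[w] * likelihood[w] for w in prior_world)
    return sum(prior_world[w] * likelihood[w] / z * p_h_given_world[w]
               for w in prior_world)

print(round(p_h, 2), round(posterior_h("h"), 2), round(posterior_h("l"), 2))
# 0.25 0.46 0.18 -- matching p(h), p(h|h), p(h|l) on the slide
```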

  7. Simplest Mechanism: Output Agreement. Compare two agents' reports and pay $2 if the reports agree, $0 otherwise. Example with S_i = h: the belief that S_j = h is 0.46 and that S_j = l is 0.54, so E[payment | x_i = h] = $0.92 and E[payment | x_i = l] = $1.08. Output Agreement is not truthful for this belief model!
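A two-line sketch of the calculation above (not from the slides):

```python
# Expected output-agreement payments for an agent who observed S_i = h.
p_h_given_h = 0.46   # her belief that the peer's signal (and truthful report) is h
reward = 2.0         # paid only when the two reports agree

print(reward * p_h_given_h)        # about 0.92: expected payment for reporting h
print(reward * (1 - p_h_given_h))  # about 1.08: expected payment for misreporting l
# The misreport pays more in expectation, so output agreement is not truthful here.
```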

  8. Classical Peer Prediction [Miller et al., 2005]. With posteriors p(h | h) = 0.46 and p(h | l) = 0.18, an agent reporting x_i = h is paid R(0.46, x_j): her signal report is mapped to the implied posterior and scored against agent j's report with a proper scoring rule. Truthful if p(h | h) ≠ p(h | l)! Intuition: (1) define agent j's signal report as the event; (2) restrict the possible belief reports to the possible posteriors. Crucial: the mechanism knows the belief model!
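A sketch of this incentive check for the running example (the quadratic rule stands in for a generic proper scoring rule; the numbers are the slide's posteriors):

```python
# Classical peer prediction: score the posterior implied by agent i's signal
# report against agent j's (truthful) report with a proper scoring rule.
posterior = {"h": 0.46, "l": 0.18}   # p(h|h) and p(h|l) from the belief model

def quadratic_score(y, outcome):
    return 1.0 - (y - outcome) ** 2

def expected_payment(reported_signal, true_belief_in_h):
    y = posterior[reported_signal]
    return (true_belief_in_h * quadratic_score(y, 1)
            + (1 - true_belief_in_h) * quadratic_score(y, 0))

# Agent i observed h, so her true belief that x_j = h is p(h|h) = 0.46.
print(expected_payment("h", 0.46))  # about 0.752 for the truthful report
print(expected_payment("l", 0.46))  # about 0.673 for the misreport
```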

  9. Shadowing Method [W. and Parkes, 2012]. Assumption: only some y ∈ (p(h | l), p(h | h)) is known. An agent reporting x_i = h is paid R_q(y + δ, x_j); a report of x_i = l is scored with the shadow y − δ instead. Truthful: the agent prefers the "shadow posterior" closer to her true belief! Crucial: the quadratic scoring rule R_q!

  10. Shadowing Method: Key Idea. Challenge: no knowledge of the posteriors p(h | h) or p(h | l)! Compute the "shadow posteriors" y + δ and y − δ around y, which lies between p(h | l) and p(h | h). Observe: S_i = h ⇒ y + δ is closer to p(h | h) than y − δ; S_i = l ⇒ y − δ is closer to p(h | l) than y + δ.
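A sketch of the Shadowing Method from the last two slides on the running example; y = 0.3 and δ = 0.05 are illustrative choices, not values from the deck.

```python
# Shadowing Method sketch: shadow a signal report to y + delta (for h) or
# y - delta (for l) and score the shadow posterior against the peer's report
# with the quadratic rule.
def quadratic_score(y, outcome):
    return 1.0 - (y - outcome) ** 2

def expected_payment(reported_signal, true_belief_in_h, y=0.3, delta=0.05):
    shadow = y + delta if reported_signal == "h" else y - delta
    return (true_belief_in_h * quadratic_score(shadow, 1)
            + (1 - true_belief_in_h) * quadratic_score(shadow, 0))

# An agent with signal h (posterior 0.46) prefers reporting h ...
print(expected_payment("h", 0.46), expected_payment("l", 0.46))  # ~0.740 > ~0.708
# ... and an agent with signal l (posterior 0.18) prefers reporting l.
print(expected_payment("h", 0.18), expected_payment("l", 0.18))  # ~0.824 < ~0.848
```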

  11. 1. Learning Mechanisms from Reports. Can y be learned from data? Idea sketch: it is known that p(h) ∈ (p(h | l), p(h | h)). Consider other questions/tasks with the same prior; the empirical frequency of x = h reports on those will be close to p(h) with high probability. Main Result [W. and Parkes, 2013]: the Shadowing Method with y = p̂_i(h) is strictly truthful given enough samples, where this number depends on a lower bound on the belief change from prior to posterior.
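A minimal sketch of the idea (the reports below are made-up data, not from the talk): estimate y as the empirical frequency of h reports on other tasks and hand it to the Shadowing Method.

```python
# Estimate the shared prior p(h) from an agent's reports on other questions
# with the same prior (hypothetical data, for illustration only).
reports_on_other_tasks = ["h", "l", "l", "h", "l", "l", "l", "h"]
y_hat = sum(r == "h" for r in reports_on_other_tasks) / len(reports_on_other_tasks)
print(y_hat)
# With enough samples, y_hat lies strictly between p(h|l) and p(h|h) with high
# probability, which is exactly what the Shadowing Method needs for its y.
```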

  12. 2. Peer Prediction Mechanisms are Loss Functions. Each classification loss (rows: prediction sign(w^T x_i); columns: label y_i ∈ {+1, −1}) is paired with a peer prediction mechanism (rows: report x_i; columns: peer report x_j ∈ {h, l}):

0/1 Loss                 Output Agreement
  +1:  0  1                h:  1  0
  −1:  1  0                l:  0  1

Cost-sensitive Loss      Shadowing with y = 2/3
  +1:  0  2                h:  2  0
  −1:  1  0                l:  1  2
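The matrices below are my reading of the slide's flattened tables; as a sketch, the snippet checks the pattern that each displayed payment matrix is a constant minus its paired loss matrix (payments reward what losses penalize).

```python
import numpy as np

# Loss matrices (rows: prediction/report, columns: label/peer report).
zero_one_loss    = np.array([[0, 1], [1, 0]])
cost_sensitive   = np.array([[0, 2], [1, 0]])

# Paired peer prediction payment matrices as displayed on the slide.
output_agreement = np.array([[1, 0], [0, 1]])
shadowing_2_3    = np.array([[2, 0], [1, 2]])   # shadowing with y = 2/3, normalized

print(np.array_equal(output_agreement, 1 - zero_one_loss))  # True
print(np.array_equal(shadowing_2_3, 2 - cost_sensitive))    # True
```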

  13. Scoring Rules for Properties. Theorem [Frongillo and W., 2017]: peer prediction mechanisms are equivalent to scoring rules for properties of probability distributions. Example: Output Agreement elicits the mode, arg max_{x_i} Pr(S_j = x_i | S_i = s_i), just as 0/1 loss elicits arg max_{y_i} Pr(Y_i = y_i | x_i, w).
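A tiny sketch of what "elicits the mode" means in the running example (not from the slides):

```python
# Under output agreement, the best response is the mode of the agent's belief
# about the peer's signal, argmax_x Pr(S_j = x | S_i = s_i).
belief_about_peer = {"h": 0.46, "l": 0.54}  # agent who observed S_i = h
best_response = max(belief_about_peer, key=belief_about_peer.get)
print(best_response)  # 'l': the mode differs from the true signal h,
                      # which is why output agreement is not truthful here
```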

  14. Conclusion. Peer prediction mechanisms truthfully elicit private opinions or experiences. Connections between PP mechanisms and ML: (1) learn truthful mechanisms using reports on other items; (2) mechanisms are equivalent to loss functions eliciting a property of the conditional label probability. THANK YOU!

  15. References I. Brier, G. W. (1950). Verification of Forecasts Expressed in Terms of Probability. Monthly Weather Review, 78(1):1–3. Friedman, D. (1983). Effective Scoring Rules for Probabilistic Forecasts. Management Science, 29(4):447–454. Frongillo, R. and Witkowski, J. (2017). A Geometric Perspective on Minimal Peer Prediction. ACM Transactions on Economics and Computation (TEAC). Forthcoming.

  16. References II. Miller, N., Resnick, P., and Zeckhauser, R. (2005). Eliciting Informative Feedback: The Peer-Prediction Method. Management Science, 51(9):1359–1373. Witkowski, J. (2014). Robust Peer Prediction Mechanisms. PhD thesis, Department of Computer Science, Albert-Ludwigs-Universität Freiburg. Witkowski, J. and Parkes, D. C. (2012). A Robust Bayesian Truth Serum for Small Populations. In Proceedings of the 26th AAAI Conference on Artificial Intelligence (AAAI'12), pages 1492–1498.
