SLIDE 1

Incentives in Crowdsourcing: A Game-theoretic Approach

ARPITA GHOSH

Cornell University

NIPS 2013 Workshop on Crowdsourcing: Theory, Algorithms, and Applications

Incentives in Crowdsourcing: A Game-theoretic Approach 1 / 26

SLIDE 5

Users on the Web: Online collective effort

Contributions online from the crowds:
Reviews (Amazon, Yelp), online Q&A sites (Y! Answers, Quora, StackOverflow), discussion forums
Wikipedia
Social media: Blogs, YouTube, . . .
Crowdsourcing: Paid and unpaid; microtasks and challenges

Amazon Mechanical Turk, Citizen Science (GalaxyZoo, FoldIt), Games with a Purpose, contests (Innocentive, Topcoder)

Online education: Peer-learning, peer-grading

Incentives in Crowdsourcing: A Game-theoretic Approach 2 / 26

SLIDE 9

Incentives and collective effort

Quality and participation vary widely across systems
How to incentivize high participation and effort?
Two components to designing incentives:

Social psychology: What constitutes a reward?
Rewards are limited: How to allocate among self-interested users?

A game-theoretic framework for incentive design

Incentives in Crowdsourcing: A Game-theoretic Approach 3 / 26

SLIDE 13

The game-theoretic approach to incentive design

System design induces rules specifying the allocation of rewards
Self-interested users choose actions to maximize their own payoff

Participation (‘Endogenous entry’)
Revealing information truthfully (ratings, opinions, . . . )
Effort:

Quality of content (UGC sites)
Output accuracy (crowdsourcing)
Quantity: Number of contributions, attempted tasks
Speed of response (Q&A forums), . . .

Incentive design: Allocate rewards to align agents’ incentives with the system’s

Incentives in Crowdsourcing: A Game-theoretic Approach 4 / 26

SLIDE 17

Incentive design for crowdsourcing

Reward allocation problem varies across systems: Why?
Constraints, reward regimes, vary with the nature of the reward:

Monetary; social-psychological (attention, status, . . . )
Attention rewards: Diverging [GM11, GH11]; subset constraints [GM12]
Money-like rewards: Bounded; sum constraints [GM12]

Observability of (value of) agents’ output

Can only reward what you can see
Perfect rank-ordering: Contests [. . . ]
Imperfect: Noisy votes in UGC [EG13, GH13]
Unobservable: Judgement elicitation [DG13]

Incentives in Crowdsourcing: A Game-theoretic Approach 5 / 26

SLIDE 22

Learning & incentives in user-generated content

Joint work with Patrick Hummel, ITCS’13

The setting: User-generated content

(Reviews, Q&A forums, comments, videos, articles, . . . )

Quality of contributions varies widely: Sites want to display the best contributions
But quality is not directly observable: Infer quality from viewer votes
How to display contributions to optimize overall viewer experience?

Incentives in Crowdsourcing: A Game-theoretic Approach 6 / 26

SLIDE 27

A multi-armed bandit problem

Learning contribution qualities: Multi-armed bandit problem

Arms: Contributions
Success probability: Contribution’s ‘quality’

Contributors: Agents with a cost to quality, a benefit from views
Arms are endogenous!

Contributors choose whether to participate, content quality

What is a good learning algorithm in this setting?

Incentives in Crowdsourcing: A Game-theoretic Approach 7 / 26

SLIDE 28

Overview

Strategic contributors: Decide participation, quality
Viewers vote on displayed contributions
Mechanism: Decides which contribution to display
Metric: Equilibrium regret

Incentives in Crowdsourcing: A Game-theoretic Approach 8 / 26

SLIDE 32

Model: Content and feedback

Contribution quality q: Probability of viewer upvote
Stream of T viewers: Each viewer is shown, and votes on, one contribution
Viewers need not vote ‘perfectly’: q ∈ [0, γ]

Mechanism should be robust to γ < 1

Incentives in Crowdsourcing: A Game-theoretic Approach 9 / 26

SLIDE 38

Contributors

K = K(T) potential contributors
Contributors are strategic agents, choosing

Whether or not to participate: Probability β_i
Contribution quality: q_i

Actual number of contributions (arms): k(T) ≤ K(T)

Incentives in Crowdsourcing: A Game-theoretic Approach 10 / 26

SLIDE 44

Contributor utilities

Cost: Quality q incurs cost c(q)

c(q) increasing, continuously differentiable

Benefit from views

Psychological ([Huberman et al 09, . . . ]) or monetary benefit
n_i^t: Views allocated to i until period t
Total benefit to i: n_i^T

Mechanism: Decides which contribution to display at t
Utility: u_i = E[ n_i^T(q_i, q_{−i}, k(T)) ] − c(q_i)

Incentives in Crowdsourcing: A Game-theoretic Approach 11 / 26
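The contributor's problem above can be sketched numerically: given some model of expected views as a function of own quality, a contributor picks the q maximizing u(q) = E[views(q)] − c(q). The `views_of_quality` model below is a hypothetical placeholder for illustration, not a result from the talk:

```python
def best_response_quality(views_of_quality, cost, grid):
    """Pick the quality q on `grid` maximizing u(q) = E[views(q)] - c(q),
    the contributor utility defined on this slide."""
    return max(grid, key=lambda q: views_of_quality(q) - cost(q))

# Hypothetical example: linear benefit in quality, convex cost c(q) = 50 q^2.
grid = [i / 100 for i in range(101)]                # candidate qualities in [0, 1]
q_star = best_response_quality(lambda q: 100 * q,   # assumed view model
                               lambda q: 50 * q ** 2,
                               grid)
```

With these placeholder functions the benefit is linear and the cost convex, so the maximizer sits where marginal benefit equals marginal cost (here, at the top of the quality range).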

SLIDE 46

Mapping to MAB

K(T) potential contributors, or arms
Viewer t: Pull of an arm at time t
T: Time horizon, or total number of viewers
Content quality q_i: Success probability of arm i

Actual number of arms k(T), and qualities q_i, determined endogenously in response to the learning algorithm

Incentives in Crowdsourcing: A Game-theoretic Approach 12 / 26

SLIDE 56

How good is a learning algorithm as a mechanism?

Performance measure: Equilibrium regret

Recall contributors choose q ∈ [0, γ]

Strong regret of mechanism M: Regret wrt γ, in symmetric mixed-strategy Bayes-Nash equilibrium (β, F(q)):

R(T) = γT − E[ ∑_{t=1}^{T} q_t ]

Strong sublinear equilibrium regret: lim_{T→∞} R(T)/T = 0 in every symmetric equilibrium of M

Incentives in Crowdsourcing: A Game-theoretic Approach 13 / 26
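In code, the strong-regret benchmark is just γT against the realized displayed qualities — a direct transcription of the definition, where q_t is the quality of the contribution shown to viewer t:

```python
def strong_regret(displayed_qualities, gamma):
    """R(T) = gamma*T - sum_t q_t: shortfall against a benchmark in which
    every one of the T viewers sees a contribution of quality gamma."""
    T = len(displayed_qualities)
    return gamma * T - sum(displayed_qualities)

# 50 viewers see quality 0.9 and 50 see quality 0.3, with gamma = 0.9:
R = strong_regret([0.9] * 50 + [0.3] * 50, gamma=0.9)
avg_regret = R / 100   # per-viewer regret
```

Sublinear strong regret means this per-viewer average goes to 0 as T grows.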

SLIDE 58

The UCB algorithm, as a mechanism

q̂_i^t: Estimated quality of i at time t

UCB algorithm M_UCB:

Display all arms once, then display i = arg max_i [ q̂_i^t + √(2 ln T / n_i^t) ]

Theorem: Mechanism M_UCB always has a symmetric mixed-strategy equilibrium (β, F(q))

Incentives in Crowdsourcing: A Game-theoretic Approach 14 / 26
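The index rule above is the standard UCB1 index. A minimal simulation of M_UCB, with Bernoulli(q_i) upvotes standing in for viewer votes (a modeling assumption taken from these slides, not production code):

```python
import math
import random

def ucb_display(qualities, T, seed=0):
    """Simulate M_UCB: show each contribution once, then show the one
    maximizing q_hat_i^t + sqrt(2 ln T / n_i^t), where each view yields
    an upvote with probability equal to the contribution's quality."""
    rng = random.Random(seed)
    k = len(qualities)
    n = [0] * k          # n_i^t: views allocated to contribution i
    wins = [0] * k       # upvotes received by contribution i
    shown = []
    for t in range(T):
        if t < k:
            i = t        # display every arm once
        else:
            i = max(range(k),
                    key=lambda j: wins[j] / n[j]
                    + math.sqrt(2 * math.log(T) / n[j]))
        n[i] += 1
        wins[i] += rng.random() < qualities[i]
        shown.append(i)
    return n, shown

# With fixed qualities, the best contribution should attract most views:
n, _ = ucb_display([0.2, 0.5, 0.9], T=5000)
best_share = n[2] / 5000
```

This illustrates the purely algorithmic side; the talk's point is that the qualities themselves are chosen strategically in response to this view-allocation rule.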

SLIDE 63

UCB as a mechanism: The good news

Theorem. If K(T) is such that lim_{T→∞} T/(K(T) ln T) = ∞:

β = 1 in any equilibrium of M_UCB for sufficiently large T
For any fixed q∗ < γ, the probability of choosing quality q ≤ q∗ in any equilibrium goes to 0 as T → ∞
M_UCB achieves strong sublinear equilibrium regret

Incentives in Crowdsourcing: A Game-theoretic Approach 15 / 26

SLIDE 65

UCB as a mechanism: The bad news

Theorem. Suppose lim_{T→∞} T/K(T) = r < ∞. Then for sufficiently large T, any equilibrium has the property that no agent chooses quality greater than q̄ = c⁻¹(r(1 + c(0))).

Incentives in Crowdsourcing: A Game-theoretic Approach 16 / 26

SLIDE 68

Improving equilibrium regret: A modified UCB mechanism

M_UCB−MOD: Run UCB on a random subset of min{⌊√T⌋, k(T)} arms

Exploring a random subset: M_1−FAIL [Berry et al ’97] achieves strong sublinear regret as an algorithm for large K(T), but not as a mechanism

Theorem. M_UCB−MOD achieves strong sublinear equilibrium regret for all γ ≤ 1 and cost functions c, for all K(T) ≤ T.

Incentives in Crowdsourcing: A Game-theoretic Approach 17 / 26
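A sketch of M_UCB−MOD's view allocation — the subset rule from this slide, with the same UCB index as before and Bernoulli votes assumed:

```python
import math
import random

def ucb_mod(qualities, T, seed=0):
    """M_UCB-MOD sketch: run UCB on a uniformly random subset of
    min(floor(sqrt(T)), k(T)) of the k(T) submitted contributions;
    contributions outside the subset are never displayed."""
    rng = random.Random(seed)
    k = len(qualities)
    m = min(math.isqrt(T), k)             # floor(sqrt(T)), capped at k(T)
    subset = rng.sample(range(k), m)      # the explored arms
    n = {i: 0 for i in subset}            # views per explored arm
    wins = {i: 0 for i in subset}
    for t in range(T):
        unseen = [i for i in subset if n[i] == 0]
        if unseen:                        # display every explored arm once
            i = unseen[0]
        else:
            i = max(subset, key=lambda j: wins[j] / n[j]
                    + math.sqrt(2 * math.log(T) / n[j]))
        n[i] += 1
        wins[i] += rng.random() < qualities[i]
    return subset, n
```

The design point: restricting exploration to ~√T arms makes each explored contribution's expected view count large enough that exerting quality pays off in equilibrium.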

SLIDE 69

Extensions of result

M_UCB−MOD retains strong sublinear equilibrium regret if:

Each viewer is shown multiple contributions
Explore min{G(T), k(T)} arms, for any G(T) → ∞ with G(T) = o(T / ln T)
Heterogeneous types: Cost functions c_τ(q)
Qualities q ∈ [δ, γ], δ > 0

Incentives in Crowdsourcing: A Game-theoretic Approach 18 / 26

SLIDE 71

Open questions

Multi-armed bandits with endogenous arms: Strong sublinear equilibrium regret achievable with the modified-UCB mechanism
Many unanswered questions: Models, mechanisms

Probabilistic feedback
Sequential contributions
Quality–participation tradeoffs with G(T)

What learning algorithms make good mechanisms when arms are endogenous?

Incentives in Crowdsourcing: A Game-theoretic Approach 19 / 26

SLIDE 75

Incentives in crowdsourcing: Unobservable output

Crowdsourced evaluation: Replace expert by aggregated evaluation from ‘crowd’

Image classification & labeling; content rating; abuse detection; MOOC peer grading, . . .

How to aggregate evaluations from the crowd?

Workers have different proficiencies; possibly unknown to the system: Learn, weight to maximize accuracy

Input to the aggregation problem comes from self-interested agents
How to incentivize good evaluations from the crowd?

Incentives in Crowdsourcing: A Game-theoretic Approach 20 / 26

SLIDE 79

Incentives in crowdsourced evaluation

Incentivizing accurate evaluations, truthful reporting:

(i) Unobservable ground truth (ii) Effort-dependent accuracy
(Information elicitation with endogenous proficiency)
Direct monitoring infeasible: Reward ‘agreement’
Problem: Undesirable low-effort/second-guessing equilibria (e.g. always say ‘H’)

Mechanism [Dasgupta-Ghosh, WWW’13]: Maximum-effort truthful reporting is the highest-payoff equilibrium!

(Assuming no task-specific collusions)
Use multiple tasks and ratings: Reward agreement, but also identify and penalize blind agreement

Incentives in Crowdsourcing: A Game-theoretic Approach 21 / 26
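The agreement-with-penalty idea can be sketched as follows — a simplified reading of the [Dasgupta-Ghosh, WWW’13] payment for binary reports (the exact mechanism, statistics, and guarantees are in the paper): reward agreement with a peer on a shared task, minus the rate at which the two workers would agree ‘blindly’, estimated from their reports on disjoint, non-shared tasks.

```python
def dg_payment(report_i, report_j, other_i, other_j):
    """Simplified agreement-based payment in the spirit of
    [Dasgupta-Ghosh WWW'13] for binary reports: reward agreement on a
    shared task, penalize the agreement rate i and j would achieve
    blindly, estimated from their reports on disjoint non-shared tasks."""
    agree = 1.0 if report_i == report_j else 0.0
    fi = sum(other_i) / len(other_i)   # i's empirical rate of reporting 1
    fj = sum(other_j) / len(other_j)   # j's empirical rate of reporting 1
    blind_agree = fi * fj + (1 - fi) * (1 - fj)
    return agree - blind_agree

# Two workers who always report 1 ('blind agreement') net zero:
p_blind = dg_payment(1, 1, other_i=[1, 1, 1, 1], other_j=[1, 1, 1, 1])
# Workers whose reports vary across tasks keep a positive reward for agreeing:
p_informed = dg_payment(1, 1, other_i=[1, 0, 1, 0], other_j=[0, 1, 0, 1])
```

This shows why the low-effort equilibrium (always say ‘H’) is no longer profitable: constant reporting drives the penalty term up to exactly cancel the agreement reward.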

SLIDE 81

Moving beyond single tasks: Incentivizing overall contribution

Problems so far: Incentives for a single contribution/task
Rewarding contributors for overall identity:

Badges, leaderboards, reputations, . . .
Virtual rewards for cumulative contribution

Gamification rewards valued by agents; contributing to earn a reward is costly
Badges induce mechanisms!

Design affects participation, effort from users

Incentives in Crowdsourcing: A Game-theoretic Approach 22 / 26


SLIDE 84

Badges and incentive design

Different badge designs online:

Absolute ‘milestone’ badges (StackOverflow, Foursquare), versus competitive ‘top-contributor’ badges (Y!Answers, Tripadvisor)
Information about badge winners (StackOverflow vs Y! Answers)

What incentives do different badge designs create for participation and effort?

Game-theoretic analysis of badge design (Easley & Ghosh, ACM EC’13):
‘Absolute’ or ‘competitive’ badges?
‘Competitive’ badges: Fixed number or fraction of participants?
Visibility of information: Transparent or not?

Incentives in Crowdsourcing: A Game-theoretic Approach 23 / 26

SLIDE 88

Wrapping up

Incentive design for crowdsourcing

Data and inputs to algorithms come from self-interested agents
Mechanisms rather than algorithms: A game-theoretic framework

Lots more to understand!

Sequential decision making
Eliciting effort with strategic contributors and raters
Overall contributor rewards; sustaining contribution
Learning and incentives: Designing reputations
Experimental and empirical: What do agents value, and how?

Incentives in Crowdsourcing: A Game-theoretic Approach 24 / 26

SLIDE 89

Why MUCB−MOD works

Lemma. Any arm with quality q_i ≤ q_max(T) − δ receives Θ(ln T) attention in expectation, for all δ > 0 (q_max(T): the highest-quality explored contribution). A purely algorithmic statement; proof by contradiction.

Theorem. For any fixed q∗ < γ, the probability that there is some agent explored by M_UCB−MOD who chooses quality q > q∗ goes to 1 as T → ∞, in every equilibrium of M_UCB−MOD. Proof by contradiction: demonstrate a profitable deviation (involves strategic reasoning, not purely algorithmic).

Incentives in Crowdsourcing: A Game-theoretic Approach 25 / 26

SLIDE 90

Badges and incentive design: An economic framework

(Easley & Ghosh, ACM EC’13) Design recommendations from analysis:

Competitive badges: Reward a fixed number, not a fraction, of competitors
Absolute versus competitive badges are ‘equivalent’ if population parameters are known
With uncertainty, or unknown parameters, competitive badges are more ‘robust’
Sharing information about other users’ performance: Depends on convexity of value as a function of winners

Incentives in Crowdsourcing: A Game-theoretic Approach 26 / 26