CS6501: Topics in Learning and Game Theory (Fall 2019): How Can Classifiers Induce Right Efforts? (PowerPoint PPT Presentation)



SLIDE 1

CS6501: Topics in Learning and Game Theory (Fall 2019)
How Can Classifiers Induce Right Efforts?

Instructor: Haifeng Xu

SLIDE 2

Outline

• Motivations and Model
• Examples and Results

SLIDE 3

Decisions and Incentives

Often today, ML is used to assist decisions about human beings

SLIDE 4

Decisions and Incentives

• Education

Often today, ML is used to assist decisions about human beings

SLIDE 5

Decisions and Incentives

• Education
• When a measure becomes a target, gaming behaviors happen (Goodhart’s Law)

Often today, ML is used to assist decisions about human beings

SLIDE 6

Decisions and Incentives

• Education
• When a measure becomes a target, gaming behaviors happen (Goodhart’s Law)
• Many other applications: recommender systems, hiring, finance…
  • E.g., restaurants can game Yelp’s ranking metric by paying for positive reviews or check-ins

Often today, ML is used to assist decisions about human beings

SLIDE 7

Decisions and Incentives

• Education
• When a measure becomes a target, gaming behaviors happen (Goodhart’s Law)
• Many other applications: recommender systems, hiring, finance…
  • E.g., restaurants can game Yelp’s ranking metric by paying for positive reviews or check-ins
• Particularly an issue when transparency is required

Often today, ML is used to assist decisions about human beings

Chief scientist of Obama 2012 Campaign

SLIDE 8

Education as a Running Example

[Figure: strategic behaviors → goal/score (determined by some measure)]

SLIDE 9

Education as a Running Example

[Figure: strategic behaviors → goal/score; a desirable behavior highlighted]

SLIDE 10

Education as a Running Example

[Figure: strategic behaviors → goal/score; an undesirable behavior highlighted]

SLIDE 11

Education as a Running Example

• Some strategic behaviors are desirable, and some are not

“I think it’s best to… distinguish between seven different types of test preparation: working more effectively; teaching more; working harder; reallocation; alignment; coaching; cheating. The first three are what proponents of high-stakes testing want to see.”

  - Daniel M. Koretz, Measuring Up

SLIDE 12

Education as a Running Example

• Some strategic behaviors are desirable, and some are not

The Main Question: How to design decision rules to induce desirable strategic behaviors?

• Usually not possible to keep the rule confidential
• Should not simply use a rule that cannot be affected at all
• So, this requires careful design

SLIDE 13

The Mathematical Model

• n available actions (e.g., studying hard, cheating)
• m different features (e.g., HW grade, midterm grade)
• Each unit of effort on action k results in a β_{kj} (≥ 0) increase in feature j

[Figure: bipartite graph with actions 1, …, k, …, n on one side and features G_1, …, G_j, …, G_m on the other; the edge from action k to feature j is labeled β_{kj}]

SLIDE 14

A Game between Agent and Principal

• Agent’s action: an allocation (y_1, ⋯, y_n) of 1 unit of effort to the n actions

[Figure: the bipartite effort graph from the previous slide]

SLIDE 15

A Game between Agent and Principal

• Agent’s action: an allocation (y_1, ⋯, y_n) of 1 unit of effort to the n actions, with ∑_k y_k ≤ 1
  • The effort profile y (≥ 0) determines the feature values: G_j = g_j(∑_k y_k β_{kj}), where each g_j is an increasing concave function

[Figure: the bipartite effort graph, with effort y_k feeding each action k]

SLIDE 16

A Game between Agent and Principal

• Agent’s action: an allocation (y_1, ⋯, y_n) of 1 unit of effort to the n actions, with ∑_k y_k ≤ 1
  • The effort profile y (≥ 0) determines the feature values: G_j = g_j(∑_k y_k β_{kj}), where each g_j is an increasing concave function
• Principal’s action: design the evaluation rule I(G_1, ⋯, G_m)
  • I is increasing in every feature, and publicly known (e.g., a grading rule)

[Figure: the bipartite effort graph, with the features G_1, ⋯, G_m feeding the evaluation rule I]
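To make the model concrete, here is a minimal numeric sketch (all rates hypothetical, with each g_j taken to be the square root as one example of an increasing concave function):

```python
import math

# Hypothetical instance: n = 3 actions, m = 2 features.
# beta[k][j] is the per-unit increase of feature j from effort on action k.
beta = [[2.0, 0.0],   # action 1
        [1.0, 1.0],   # action 2
        [0.0, 1.5]]   # action 3

def features(y, g=math.sqrt):
    """G_j = g_j(sum_k y_k * beta[k][j]) for an effort profile y."""
    return [g(sum(y[k] * beta[k][j] for k in range(len(beta))))
            for j in range(len(beta[0]))]

def evaluation(y, gamma):
    """A linear evaluation rule I = sum_j gamma_j * G_j."""
    return sum(w * G for w, G in zip(gamma, features(y)))

y = [0.0, 1.0, 0.0]               # all effort on action 2
print(features(y))                # [1.0, 1.0]
print(evaluation(y, [0.4, 0.6]))  # 0.4 + 0.6 = 1.0
```

The weights (0.4, 0.6) anticipate the linear rules used in the classroom example later in the deck.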

SLIDE 17

A Game between Agent and Principal

• Agent’s action: an allocation (y_1, ⋯, y_n) of 1 unit of effort to the n actions, with ∑_k y_k ≤ 1
  • The effort profile y (≥ 0) determines the feature values: G_j = g_j(∑_k y_k β_{kj}), where each g_j is an increasing concave function
• Principal’s action: design the evaluation rule I(G_1, ⋯, G_m)
  • I is increasing in every feature, and publicly known (e.g., a grading rule)
• Principal has a desirable effort profile y* (e.g., y* = “work hard”)
• Agent’s goal: choose y to maximize I

Q: Can the principal design I to induce her desirable y*?

SLIDE 18

A Game between Agent and Principal

Relation to problems we studied before

• This is a Stackelberg game
  • First, the principal announces the evaluation rule I
  • Second, the agent best responds to I by picking an effort profile y
• This is a mechanism design problem
  • Want to design the evaluation rule I to induce the desirable response y*
• More generally, this is a principal-agent mechanism design problem
  • Rich literature in economics, explosive recent interest in EconCS

Q: Can the principal design 𝐼 to induce her desirable 𝑦∗?
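The agent's second-stage best response can be sketched by brute force (hypothetical instance; identity g_j, under which I is linear in y, so the optimum concentrates all effort on the action maximizing ∑_j γ_j β_{kj}):

```python
from itertools import product

# Hypothetical instance with identity g_j and a linear evaluation rule.
beta  = [[2.0, 0.0], [1.0, 1.0], [0.0, 1.5]]
gamma = [0.4, 0.6]

def I(y):
    """I = sum_j gamma_j * (sum_k y_k * beta[k][j]) under identity g_j."""
    return sum(gamma[j] * sum(y[k] * beta[k][j] for k in range(3))
               for j in range(2))

# Agent best responds by searching effort allocations on a grid of the simplex.
step = 100
best = max(((a / step, b / step, (step - a - b) / step)
            for a, b in product(range(step + 1), repeat=2) if a + b <= step),
           key=I)
print(best, I(best))  # (0.0, 1.0, 0.0) 1.0: all effort on action 2
```

With a strictly concave g_j the best response can be interior, which is why the grid search (rather than a vertex check) is the honest generic sketch.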

SLIDE 19

Outline

• Motivations and Model
• Examples and Results

SLIDE 20

Example: Classroom Setting

[Figure: actions cheating, studying, copying with efforts y_1, y_2, y_3; features G_E (exam) and G_H (HW); edge rates: cheating→exam 1, studying→exam 2, studying→HW 2, copying→HW 1]

I = 0.6 G_E + 0.4 G_H

y* = (0, 1, 0)

Q: Can the principal induce the desirable y* = (0, 1, 0)?

SLIDE 21

Example: Classroom Setting

[Figure: cheating→exam 1, studying→exam 2, studying→HW 2, copying→HW 1]

I = 0.6 G_E + 0.4 G_H

y* = (0, 1, 0)

Q: Can the principal induce the desirable y* = (0, 1, 0)?

• Ans: Yes
  • For any unit of effort on cheating or copying, the agent would rather spend it on studying

SLIDE 22

Example: Classroom Setting

[Figure: cheating→exam 2, studying→exam 1, studying→HW 1, copying→HW 1.5]

I = 0.6 G_E + 0.4 G_H

y* = (0, 1, 0)

Q: What about this setting?

SLIDE 23

Example: Classroom Setting

[Figure: cheating→exam 2, studying→exam 1, studying→HW 1, copying→HW 1.5]

I = 0.6 G_E + 0.4 G_H

y* = (0, 1, 0)

Q: What about this setting?

• Ans: No
  • Spending 1 unit on studying → I = 1
  • Spending 1 unit on cheating → I = 1.2
  • Problem: the weight of the exam is too large

SLIDE 24

Example: Classroom Setting

[Figure: cheating→exam 2, studying→exam 1, studying→HW 1, copying→HW 1.5]

I = 0.4 G_E + 0.6 G_H

y* = (0, 1, 0)

Q: What about changing I to our class’s rule?

SLIDE 25

Example: Classroom Setting

[Figure: cheating→exam 2, studying→exam 1, studying→HW 1, copying→HW 1.5]

I = 0.4 G_E + 0.6 G_H

y* = (0, 1, 0)

Q: What about changing I to our class’s rule?

• Ans: Yes
  • Spending 1 unit on studying → I = 1
  • Shifting any amount of effort to copying or cheating only decreases I
  • Whether we can induce y* depends on our design of I
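The two grading rules of this example can be checked numerically (identity g_j; rates and weights as in the slides):

```python
# Classroom instance: actions (cheating, studying, copying),
# features (exam, HW); identity g_j.
beta = [[2.0, 0.0], [1.0, 1.0], [0.0, 1.5]]

def I(y, gamma):
    return sum(gamma[j] * sum(y[k] * beta[k][j] for k in range(3))
               for j in range(2))

pure = ([1, 0, 0], [0, 1, 0], [0, 0, 1])
for gamma in ([0.6, 0.4], [0.4, 0.6]):        # first rule, then the class's rule
    payoffs = [I(e, gamma) for e in pure]
    print(gamma, payoffs[1] == max(payoffs))  # is studying a best response?
# Since I is linear in y under identity g_j, checking the three pure
# allocations suffices: the first rule rewards cheating (1.2 > 1.0),
# while the class's rule makes studying optimal (1.0 > 0.9 > 0.8).
```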

SLIDE 26

Example: Classroom Setting

[Figure: cheating→exam 3, studying→exam 1, studying→HW 1, copying→HW 3]

I = 0.4 G_E + 0.6 G_H

y* = (0, 1, 0)

Q: What about these effort transition values?

SLIDE 27

Example: Classroom Setting

[Figure: cheating→exam 3, studying→exam 1, studying→HW 1, copying→HW 3]

I = 0.4 G_E + 0.6 G_H

y* = (0, 1, 0)

Q: What about these effort transition values?

• Ans: No, regardless of what I you choose
  • For whatever (y_1, y_2, y_3), the profile (y_1 + y_2/3, 0, y_3 + y_2/3) is better for the agent
  • There are cases where y* just cannot be induced, regardless of I
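The dominance argument here can be verified numerically (identity g_j; the starting profile is arbitrary):

```python
# Rates: cheating→exam 3, studying→exam 1, studying→HW 1, copying→HW 3.
beta = [[3.0, 0.0], [1.0, 1.0], [0.0, 3.0]]

def feats(y):
    return [sum(y[k] * beta[k][j] for k in range(3)) for j in range(2)]

y   = [0.1, 0.6, 0.1]                          # any profile using studying
dev = [y[0] + y[1] / 3, 0.0, y[2] + y[1] / 3]  # reroute studying's effort

print(feats(y), sum(y))      # features approx [0.9, 0.9] at total effort 0.8
print(feats(dev), sum(dev))  # same features at total effort approx 0.6
# The deviation matches both feature values while saving 2*y_2/3 units of
# effort, which the agent can respend to strictly increase any increasing I.
```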

SLIDE 28

Example: Classroom Setting

[Figure: general rates β_{1E} (cheating→exam), β_{2E} and β_{2H} (studying→exam, HW), β_{3H} (copying→HW)]

I = γ_E G_E + γ_H G_H

y* = (0, 1, 0)

Q: In general, when would it be impossible to induce y*?

SLIDE 29

Example: Classroom Setting

[Figure: general rates β_{1E}, β_{2E}, β_{2H}, β_{3H}]

I = γ_E G_E + γ_H G_H

y* = (0, 1, 0)

Q: In general, when would it be impossible to induce y*?

• With C = 1 unit of effort on studying, we get (G_E, G_H) = (β_{2E}, β_{2H})
• If ∃ (y_1, y_2, y_3) such that: (1) y_1 + y_2 + y_3 < 1; but (2) y_1 β_{1E} + y_2 β_{2E} ≥ β_{2E} and y_2 β_{2H} + y_3 β_{3H} ≥ β_{2H}, then we cannot induce full effort on studying
  • This condition does not depend on I

SLIDE 30

Which Effort Profile Can Be Incentivized, and How?

• Let’s focus on the special case y* = f_k for some k (all effort on action k)
• The previous argument shows a necessary condition:

There is no (y_1, ⋯, y_n) ≥ 0 such that: 1. ∑_i y_i < 1; 2. y ⋅ β ≥ β(k, ⋅). Note: y here is a row vector

SLIDE 31

Which Effort Profile Can Be Incentivized, and How?

• Let’s focus on the special case y* = f_k for some k
• The previous argument shows a necessary condition:

There is no (y_1, ⋯, y_n) ≥ 0 such that: 1. ∑_i y_i < 1; 2. y ⋅ β ≥ β(k, ⋅). Note: y here is a row vector

Define λ_k ≔ min_y ∑_i y_i subject to (1) y ⋅ β ≥ β(k, ⋅); (2) y ≥ 0. A necessary condition is λ_k ≥ 1.

SLIDE 32

Which Effort Profile Can Be Incentivized, and How?

• Let’s focus on the special case y* = f_k for some k
• The previous argument shows a necessary condition:

Define λ_k ≔ min_y ∑_i y_i subject to (1) y ⋅ β ≥ β(k, ⋅); (2) y ≥ 0. A necessary condition is λ_k ≥ 1.

Note: λ_k ≤ 1 always, because y = f_k is feasible

SLIDE 33

Which Effort Profile Can Be Incentivized, and How?

• Let’s focus on the special case y* = f_k for some k
• The previous argument shows a necessary condition:

Define λ_k ≔ min_y ∑_i y_i subject to (1) y ⋅ β ≥ β(k, ⋅); (2) y ≥ 0. A necessary condition is λ_k = 1.

Note: λ_k ≤ 1 always, because y = f_k is feasible

SLIDE 34

Which Effort Profile Can Be Incentivized, and How?

• Let’s focus on the special case y* = f_k for some k
• The previous argument shows a necessary condition:

Define λ_k ≔ min_y ∑_i y_i subject to (1) y ⋅ β ≥ β(k, ⋅); (2) y ≥ 0. A necessary condition is λ_k = 1.

Theorem: (1) There is a way to incentivize f_k if and only if λ_k = 1. (2) Whenever f_k can be incentivized, there is a linear I of the form I = ∑_j γ_j G_j that incentivizes f_k.
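The quantity λ_k can be approximated on the classroom instances by brute force (a grid-search sketch; a real implementation would solve the LP with a proper solver):

```python
from itertools import product

def lam_k(beta, k, step=50):
    """Approximate lambda_k = min sum(y) s.t. y . beta >= beta[k], y >= 0,
    by grid search over [0, 1]^n (enough here, since lambda_k <= 1)."""
    n, m = len(beta), len(beta[0])
    grid = [i / step for i in range(step + 1)]
    best = float("inf")
    for y in product(grid, repeat=n):
        if all(sum(y[i] * beta[i][j] for i in range(n)) >= beta[k][j]
               for j in range(m)):
            best = min(best, sum(y))
    return best

ok  = [[2.0, 0.0], [1.0, 1.0], [0.0, 1.5]]   # rates where studying can be induced
bad = [[3.0, 0.0], [1.0, 1.0], [0.0, 3.0]]   # rates where it cannot

print(lam_k(ok, 1))   # 1.0: studying (k = 2) can be incentivized
print(lam_k(bad, 1))  # approx 2/3 < 1: studying cannot be incentivized
```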

SLIDE 35

Which Effort Profile Can Be Incentivized, and How?

• Let’s focus on the special case y* = f_k for some k
• The previous argument shows a necessary condition:

Define λ_k ≔ min_y ∑_i y_i subject to (1) y ⋅ β ≥ β(k, ⋅); (2) y ≥ 0. A necessary condition is λ_k = 1.

Theorem: (1) There is a way to incentivize f_k if and only if λ_k = 1. (2) Whenever f_k can be incentivized, there is a linear I of the form I = ∑_j γ_j G_j that incentivizes f_k.

Proof

• We know that if λ_k < 1, we cannot incentivize f_k, so λ_k = 1 is necessary
• To prove sufficiency, we construct a linear I that indeed induces f_k when λ_k = 1

SLIDE 36

Linear I That Induces f_k

• Consider I = ∑_j γ_j G_j; the agent’s optimization problem is

max_{y ≥ 0, ∑_i y_i ≤ 1}  I = ∑_j γ_j ⋅ g_j(∑_i y_i β_{ij})

where g_j(∑_i y_i β_{ij}) is the value of feature j

SLIDE 37

Linear I That Induces f_k

• Consider I = ∑_j γ_j G_j; the agent’s optimization problem is

max_{y ≥ 0, ∑_i y_i ≤ 1}  I = ∑_j γ_j ⋅ g_j(∑_i y_i β_{ij})

• When would the optimal solution be y* = f_k?
  • Ans: when ∂I/∂y_k |_{y=y*} ≥ ∂I/∂y_{k′} |_{y=y*} for all k′ (verify it after class)
  • Spelling the derivatives out:

∑_j γ_j ⋅ β_{kj} ⋅ g_j′(∑_i y_i* β_{ij}) ≥ ∑_j γ_j ⋅ β_{k′j} ⋅ g_j′(∑_i y_i* β_{ij}), ∀k′    Eq.(1)

SLIDE 38

Linear I That Induces f_k

• Consider I = ∑_j γ_j G_j; the agent’s optimization problem is

max_{y ≥ 0, ∑_i y_i ≤ 1}  I = ∑_j γ_j ⋅ g_j(∑_i y_i β_{ij})

• When would the optimal solution be y* = f_k?
  • Ans: when ∂I/∂y_k |_{y=y*} ≥ ∂I/∂y_{k′} |_{y=y*} for all k′ (verify it after class)
  • Spelling the derivatives out:

∑_j γ_j ⋅ β_{kj} ⋅ g_j′(∑_i y_i* β_{ij}) ≥ ∑_j γ_j ⋅ β_{k′j} ⋅ g_j′(∑_i y_i* β_{ij}), ∀k′    Eq.(1)

Q: Given λ_k = 1, do there exist γ ≠ 0 so that Eq. (1) holds?
• Eq. (1) is also a set of linear constraints on γ
• Ans: yes, through an elegant duality argument
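Eq. (1) can be checked directly on the classroom instance (identity g_j, so every g_j′ = 1 and the coefficients reduce to β; the candidate weights γ = (0.4, 0.6) are from the earlier slide):

```python
# Check Eq.(1) for gamma = (0.4, 0.6) on the classroom instance.
beta  = [[2.0, 0.0], [1.0, 1.0], [0.0, 1.5]]   # cheating, studying, copying
gamma = [0.4, 0.6]
k = 1                                          # target y* = f_2 (studying)

# Marginal payoff of each action at y*: sum_j gamma_j * beta[k'][j] * g_j'
# (with g_j' = 1 under identity g_j).
marginal = [sum(gamma[j] * beta[kp][j] for j in range(2)) for kp in range(3)]
print(all(marginal[k] >= marginal[kp] for kp in range(3)))  # True: Eq.(1) holds
```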

SLIDE 39

Choosing the γ

• Goal: ∑_j γ_j ⋅ β_{kj} ⋅ g_j′(∑_i y_i* β_{ij}) ≥ ∑_j γ_j ⋅ β_{k′j} ⋅ g_j′(∑_i y_i* β_{ij}), ∀k′
• Let B_{k,j} = β_{kj} ⋅ g_j′(∑_i y_i* β_{ij}), which is a constant (y* is given)
  • Let B(k, ⋅) denote the k’th row
• Need to check the linear system: ∃ γ ≥ 0, γ ≠ 0 such that B(k, ⋅) ⋅ γᵀ ≥ B(k′, ⋅) ⋅ γᵀ, ∀k′


SLIDE 40

Choosing the γ

• Goal: ∑_j γ_j ⋅ β_{kj} ⋅ g_j′(∑_i y_i* β_{ij}) ≥ ∑_j γ_j ⋅ β_{k′j} ⋅ g_j′(∑_i y_i* β_{ij}), ∀k′
• Let B_{k,j} = β_{kj} ⋅ g_j′(∑_i y_i* β_{ij}), which is a constant (y* is given)
  • Let B(k, ⋅) denote the k’th row
• Need to check the linear system: ∃ γ ≥ 0, γ ≠ 0 such that B(k, ⋅) ⋅ γᵀ ≥ B(k′, ⋅) ⋅ γᵀ, ∀k′

Primal LP: max_γ B(k, ⋅) ⋅ γᵀ subject to B(l, ⋅) ⋅ γᵀ ≤ 1 ∀l, and γ ≥ 0. The system above is solvable iff this LP obtains opt ≥ 1.

SLIDE 41

Choosing the γ

• Goal: ∑_j γ_j ⋅ β_{kj} ⋅ g_j′(∑_i y_i* β_{ij}) ≥ ∑_j γ_j ⋅ β_{k′j} ⋅ g_j′(∑_i y_i* β_{ij}), ∀k′
• Let B_{k,j} = β_{kj} ⋅ g_j′(∑_i y_i* β_{ij}), which is a constant (y* is given)
  • Let B(k, ⋅) denote the k’th row
• Need to check the linear system: ∃ γ ≥ 0, γ ≠ 0 such that B(k, ⋅) ⋅ γᵀ ≥ B(k′, ⋅) ⋅ γᵀ, ∀k′

Primal LP: max_γ B(k, ⋅) ⋅ γᵀ subject to B(l, ⋅) ⋅ γᵀ ≤ 1 ∀l, and γ ≥ 0. The system above is solvable iff this LP obtains opt ≥ 1.

Dual LP: min_z 𝟏 ⋅ zᵀ subject to z ⋅ B ≥ B(k, ⋅), and z ≥ 0.

SLIDE 42

Choosing the γ

• Goal: ∑_j γ_j ⋅ β_{kj} ⋅ g_j′(∑_i y_i* β_{ij}) ≥ ∑_j γ_j ⋅ β_{k′j} ⋅ g_j′(∑_i y_i* β_{ij}), ∀k′
• Let B_{k,j} = β_{kj} ⋅ g_j′(∑_i y_i* β_{ij}), which is a constant (y* is given)
  • Let B(k, ⋅) denote the k’th row
• Need to check the linear system: ∃ γ ≥ 0, γ ≠ 0 such that B(k, ⋅) ⋅ γᵀ ≥ B(k′, ⋅) ⋅ γᵀ, ∀k′

Primal LP: max_γ B(k, ⋅) ⋅ γᵀ subject to B(l, ⋅) ⋅ γᵀ ≤ 1 ∀l, and γ ≥ 0. The system above is solvable iff this LP obtains opt ≥ 1.

Dual LP: min_z 𝟏 ⋅ zᵀ subject to z ⋅ B ≥ B(k, ⋅), and z ≥ 0.

• The dual constraint is ∑_i z_i β_{ij} ⋅ g_j′ ≥ β_{kj} ⋅ g_j′ ∀j, i.e., ∑_i z_i β_{ij} ≥ β_{kj} ∀j

SLIDE 43

Choosing the γ

• Goal: ∑_j γ_j ⋅ β_{kj} ⋅ g_j′(∑_i y_i* β_{ij}) ≥ ∑_j γ_j ⋅ β_{k′j} ⋅ g_j′(∑_i y_i* β_{ij}), ∀k′
• Let B_{k,j} = β_{kj} ⋅ g_j′(∑_i y_i* β_{ij}), which is a constant (y* is given)
  • Let B(k, ⋅) denote the k’th row
• Need to check the linear system: ∃ γ ≥ 0, γ ≠ 0 such that B(k, ⋅) ⋅ γᵀ ≥ B(k′, ⋅) ⋅ γᵀ, ∀k′

Primal LP: max_γ B(k, ⋅) ⋅ γᵀ subject to B(l, ⋅) ⋅ γᵀ ≤ 1 ∀l, and γ ≥ 0. The system above is solvable iff this LP obtains opt ≥ 1.

Dual LP: min_z 𝟏 ⋅ zᵀ subject to z ⋅ B ≥ B(k, ⋅), and z ≥ 0.

• The dual constraint is ∑_i z_i β_{ij} ⋅ g_j′ ≥ β_{kj} ⋅ g_j′ ∀j, i.e., ∑_i z_i β_{ij} ≥ β_{kj} ∀j
• The dual optimum is exactly the definition of λ_k (= 1)

SLIDE 44

Choosing the γ

• Goal: ∑_j γ_j ⋅ β_{kj} ⋅ g_j′(∑_i y_i* β_{ij}) ≥ ∑_j γ_j ⋅ β_{k′j} ⋅ g_j′(∑_i y_i* β_{ij}), ∀k′
• Let B_{k,j} = β_{kj} ⋅ g_j′(∑_i y_i* β_{ij}), which is a constant (y* is given)
  • Let B(k, ⋅) denote the k’th row
• Need to check the linear system: ∃ γ ≥ 0, γ ≠ 0 such that B(k, ⋅) ⋅ γᵀ ≥ B(k′, ⋅) ⋅ γᵀ, ∀k′

Primal LP: max_γ B(k, ⋅) ⋅ γᵀ subject to B(l, ⋅) ⋅ γᵀ ≤ 1 ∀l, and γ ≥ 0. The system above is solvable iff this LP obtains opt ≥ 1.

Dual LP: min_z 𝟏 ⋅ zᵀ subject to z ⋅ B ≥ B(k, ⋅), and z ≥ 0.

• The dual constraint is ∑_i z_i β_{ij} ⋅ g_j′ ≥ β_{kj} ⋅ g_j′ ∀j, i.e., ∑_i z_i β_{ij} ≥ β_{kj} ∀j
• The dual optimum is exactly the definition of λ_k (= 1)
• By LP duality, primal opt = 1, and such a γ can be easily constructed
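The primal LP can be brute-forced exactly on the classroom instance using rational arithmetic (identity g_j, so B = β; k = 2 is studying; the grid search stands in for an LP solver):

```python
from fractions import Fraction as F

# Primal LP: max B(k,.) . gamma  s.t.  B(l,.) . gamma <= 1 for all l,
# gamma >= 0, on the classroom instance with identity g_j (so B = beta).
B = [[F(2), F(0)], [F(1), F(1)], [F(0), F(3, 2)]]
k, step = 1, 100
grid = [F(i, step) for i in range(step + 1)]

best = F(0)
for gE in grid:
    for gH in grid:
        if all(B[l][0] * gE + B[l][1] * gH <= 1 for l in range(3)):
            best = max(best, B[k][0] * gE + B[k][1] * gH)
print(best)  # 1: the primal optimum equals lambda_k, as the duality argument says
```

Any γ attaining the optimum (for instance γ = (0.4, 0.6), scaled to the feasible region) then satisfies Eq. (1) and incentivizes f_k.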


SLIDE 46

General y*

• A similar conclusion holds, with a similar proof
• It turns out that the condition depends on T*, the support of y*

Theorem: (1) There is a way to incentivize y* if and only if λ_{T*} = 1 for some suitably defined λ_{T*}. (2) Whenever y* can be incentivized, there is a linear I that incentivizes y*.

SLIDE 47

Optimization Version of the Problem

• Previously, the principal had a single y* to induce
  • Some y* can be incentivized, and some cannot
• A natural optimization version of the problem:
  • Among all incentivizable y*, how can the principal incentivize the “best” one?
  • Assume a utility function h(y) over y

SLIDE 48

Optimization Version of the Problem

• Previously, the principal had a single y* to induce
  • Some y* can be incentivized, and some cannot
• A natural optimization version of the problem:
  • Among all incentivizable y*, how can the principal incentivize the “best” one?
  • Assume a utility function h(y) over y
• Problem: maximize h(y) subject to y being incentivizable

Theorem: The above problem is NP-hard, even when h is concave.

Open questions:
• What kind of h can be optimized? Linear?
• What kind of effort transition graph makes the problem more tractable?

SLIDE 49

Thank You

Haifeng Xu
University of Virginia
hx4ad@virginia.edu