SLIDE 1

Announcements

- HW 3 is out, due Nov 5th
- Project instruction is out
  • Format: proposal (5') + presentation (10') + report (25')
  • Proposal due Nov 7th -- mainly to check that you formed a team and have some ideas about what to do
  • We have some suggested topics, but you are encouraged to find your own

SLIDE 2

CS6501: Topics in Learning and Game Theory (Fall 2019)

Scoring Rules

Instructor: Haifeng Xu

SLIDE 3

Outline

- Recap: Prediction Markets
- Scoring Rule and its Characterization
- Connection to Prediction Markets

SLIDE 4

Prediction Markets

A prediction market is a financial market that is designed for event prediction via information aggregation.
- Payoffs of the traded contracts are determined by outcomes of future events
- We design a market maker by specifying the payment for bundles of contracts

(figure: $n$ contracts, "$1 iff outcome 1", ..., "$1 iff outcome $n$")

SLIDE 5

Example: Logarithmic Market Scoring Rule (LMSR [Hanson 03, 06])

- Define the value function ($r = (r_1, \dots, r_n)$ is the current sales quantity):

  $V(r) = c \log \sum_{k \in [n]} e^{r_k / c}$

- Price function:

  $p_j(r) = \frac{e^{r_j / c}}{\sum_{k \in [n]} e^{r_k / c}} = \frac{\partial V(r)}{\partial r_j}$

- To buy $y \in \mathbb{R}^n$ amount, a buyer pays $V(r + y) - V(r)$
  • Negative $y_j$'s mean selling contracts to the MM
  • Negative payment means the market maker pays the buyer
  • Market starts with $V(0) = c \log n$

Parameter $c$ adjusts liquidity.
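The LMSR mechanics above can be sketched in a few lines. This is a minimal illustration (not code from the lecture), assuming the value function $V(r) = c \log \sum_k e^{r_k/c}$, the price $p_j(r) = e^{r_j/c} / \sum_k e^{r_k/c}$, and the payment rule $V(r+y) - V(r)$:

```python
import math

def value(r, c=1.0):
    """Value function V(r) = c * log(sum_k e^{r_k / c})."""
    return c * math.log(sum(math.exp(rk / c) for rk in r))

def price(r, j, c=1.0):
    """Instantaneous price p_j(r) = e^{r_j/c} / sum_k e^{r_k/c} = dV/dr_j."""
    z = sum(math.exp(rk / c) for rk in r)
    return math.exp(r[j] / c) / z

def cost_to_buy(r, y, c=1.0):
    """Buying bundle y (y_j < 0 means selling to the MM) costs V(r + y) - V(r)."""
    return value([ri + yi for ri, yi in zip(r, y)], c) - value(r, c)

# Market with n = 2 outcomes, starting at r = 0: prices are uniform
r0 = [0.0, 0.0]
print(price(r0, 0))                  # 0.5
print(cost_to_buy(r0, [1.0, 0.0]))   # positive payment for buying outcome-0 shares
```

Note that the prices always form a probability distribution, since they are a softmax of the sales quantities.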

SLIDE 6

Properties of LMSR

- Market efficiency
  • Fact. The optimal amount an expert purchases is the amount that moves the market price to her belief $\mu$; i.e., she should purchase $y^*$ such that

    $\frac{\partial V(r + y^*)}{\partial y_j} = \mu_j$

  • Her expected utility from purchasing this amount is always non-negative.
- Fact. The worst-case market maker loss is $c \log n$ (i.e., bounded).
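The bounded-loss fact can be checked numerically. Sketch under the following accounting (an assumption spelled out here, not stated in this form on the slide): if outcome $j^*$ realizes, the MM redeems $r_{j^*}$ dollars of shares and has received $V(r) - V(0)$ in payments, so its loss is $r_{j^*} - (V(r) - V(0)) \le c \log n$:

```python
import math, random

def value(r, c=1.0):
    """LMSR value function V(r) = c * log(sum_k e^{r_k / c})."""
    return c * math.log(sum(math.exp(rk / c) for rk in r))

random.seed(0)
n, c = 4, 1.0
bound = c * math.log(n)   # claimed worst-case loss: c * log n
for _ in range(1000):
    r = [random.uniform(-5, 5) for _ in range(n)]
    money_received = value(r, c) - value([0.0] * n, c)
    worst_loss = max(r) - money_received   # worst case: the most-sold outcome realizes
    assert worst_loss <= bound + 1e-9
print("worst-case MM loss never exceeded c*log(n) =", bound)
```

The bound follows analytically because $V(r) \ge c \log e^{r_j/c} = r_j$ for every $j$, so the loss is at most $V(0) = c \log n$.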
SLIDE 7

Price Curve as a Function of Share Quantities

SLIDE 8

Examples of LMSR in Practice

- Has been implemented by several prediction markets
  • E.g., InklingMarkets, Washington Stock Exchange, BizPredict, Net Exchange, and (reportedly) at YooNew

SLIDE 9

- Big ongoing project: "replication market" for the DARPA SCORE project
- Markets can potentially be a very effective forecasting tool

SLIDE 10

Connection between LMSR and Exponential Weight Updates (EWU)

SLIDE 11

Recap: Exponential Weight Update

- Played for $T$ rounds; each round selects an action $j \in [n]$
- Maintains weights over the $n$ actions: $w_t(1), \dots, w_t(n)$
- Observe cost vector $c_t$, and update

  $w_{t+1}(j) = w_t(j) \cdot e^{-\epsilon c_t(j)}, \quad \forall j \in [n]$

(figure: actions $1, \dots, n$ with weights $w_t(1), \dots, w_t(n)$)

SLIDE 12

Recap: Exponential Weight Update

- Unrolling the update (with initial weights $w_1(j) = 1$):

  $w_{t+1}(j) = w_t(j) \cdot e^{-\epsilon c_t(j)} = [w_{t-1}(j) \cdot e^{-\epsilon c_{t-1}(j)}] \cdot e^{-\epsilon c_t(j)} = \cdots = e^{-\epsilon C_t(j)}$, where $C_t(j) = \sum_{\tau \le t} c_\tau(j)$

SLIDE 13

Recap: Exponential Weight Update

- At round $t + 1$, select action $j$ with probability

  $\frac{w_t(j)}{W_t} = \frac{e^{-\epsilon C_t(j)}}{\sum_{k \in [n]} e^{-\epsilon C_t(k)}}$, where $C_t = \sum_{\tau \le t} c_\tau$ is the accumulated cost vector

SLIDE 14

Recap: Exponential Weight Update

- At round $t + 1$, select action $j$ with probability

  $\frac{w_t(j)}{W_t} = \frac{e^{-\epsilon C_t(j)}}{\sum_{k \in [n]} e^{-\epsilon C_t(k)}}$

This looks very much like the price function in LMSR ($r$ is the accumulated sales quantity):

  $p_j = \frac{e^{r_j / c}}{\sum_{k \in [n]} e^{r_k / c}}$
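The parallel can be made concrete in code. A minimal sketch (assuming the update rule $w_{t+1}(j) = w_t(j)\,e^{-\epsilon c_t(j)}$ from the recap): after $t$ rounds the selection distribution is exactly a softmax of $-\epsilon$ times the cumulative costs, the same functional form as the LMSR price:

```python
import math

def ewu_distribution(costs, eps=0.5):
    """costs: list of per-round cost vectors c_1..c_t over n actions.
    Returns the round-(t+1) selection probabilities."""
    n = len(costs[0])
    C = [sum(ct[j] for ct in costs) for j in range(n)]   # cumulative cost C_t(j)
    w = [math.exp(-eps * Cj) for Cj in C]                # w_{t+1}(j) = e^{-eps * C_t(j)}
    W = sum(w)
    return [wj / W for wj in w]

costs = [[1.0, 0.0, 0.5], [0.5, 1.0, 0.0]]   # cumulative costs: [1.5, 1.0, 0.5]
probs = ewu_distribution(costs)
print(probs)  # the action with the smallest cumulative cost gets the largest probability
```

Here $-\epsilon C_t(j)$ plays the role of $r_j / c$: low cumulative cost corresponds to a heavily bought contract.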

SLIDE 15

EWU vs LMSR

LMSR:
  • $n$ contracts (i.e., outcomes)
  • Maintain prices $p_j$
  • Total shares sold $r(j)$
  • Price of contract $j$: $p_j = \frac{e^{r_j / c}}{\sum_{k \in [n]} e^{r_k / c}}$
  • Prices reflect how probable an event is
  • Care about worst-case MM loss: ($ received) $- \max_j r(j)$

Exponential Weight Update:
  • $n$ actions
  • Maintain weights $w_t(j)$
  • Total cost $C_T(j) = \sum_{t \le T} c_t(j)$
  • Select $j$ with probability $\frac{e^{-\epsilon C_T(j)}}{\sum_{k \in [n]} e^{-\epsilon C_T(k)}}$
  • Weights reflect how good an action is
  • Care about worst-case regret: $C_T(\mathrm{Alg}) - \min_j C_T(j)$

SLIDE 16

- LMSR is just one particular automatic MM
  • A similar relation holds for other market makers and no-regret learning algorithms (see [Chen and Vaughan 2010])
- Next: we will study other "good" scoring rules, and see why they work

SLIDE 17

Outline

- Recap: Prediction Markets
- Scoring Rule and its Characterization
- Connection to Prediction Markets

SLIDE 18

Consider a Simpler Setting

- We (the designer) want to learn the distribution of a random variable $E \in [n]$
  • $E$ will be sampled in the future
- We have no samples of $E$; instead, we have an expert/predictor who has a predicted distribution $\mu \in \Delta_n$
- We want to incentivize the expert to truthfully report $\mu$

SLIDE 19

Consider a Simpler Setting

Example:
- $E$ is whether UVA will win the NCAA title in 2020
- The expert is the UVA coach
- The expert's prediction does not need to be perfect
  • But it is better than that of the designer, who knows nothing
- Assume the expert will not give you truthful info for free

SLIDE 20

Idea: "Score" Expert's Report

We will reward the expert the amount $S(j; p)$, where:
(1) $p$ is the expert's report (it does not have to equal $\mu$);
(2) $j \in [n]$ is the event realization.

Not like a prediction market yet, but we will see later that they are related.

SLIDE 21

Idea: "Score" Expert's Report

Q: What is the expert's expected utility?
- The expert believes $j \sim \mu$
- Expected utility: $\mathbb{E}_{j \sim \mu} S(j; p) = \sum_{j \in [n]} \mu_j \cdot S(j; p) =: S(\mu; p)$

SLIDE 22

Idea: "Score" Expert's Report

Q: What $S(j; p)$ function can elicit the truthful report $\mu$?
- One where the expert finds that $\mu = \arg\max_{p \in \Delta_n} \sum_{j \in [n]} \mu_j \cdot S(j; p) = \arg\max_{p \in \Delta_n} S(\mu; p)$
- Ideally, $\mu$ is the unique maximizer

SLIDE 23

Proper Scoring Rules

- Definition. The scoring rule $S(j; p)$ is [strictly] proper if the truthful report $p = \mu$ [uniquely] maximizes the expected utility $S(\mu; p)$.
- The expert is incentivized to report truthfully iff $S(j; p)$ is proper
- Thus, strict properness is typically desired

Observations:
1. $S(j; p) = 0$ is a trivial proper scoring function
2. Proper scoring rules are closed under positive affine transformation
  • I.e., if $S(j; p)$ is [strictly] proper, so is $\beta \cdot S(j; p) + \gamma$ for any constants $\beta > 0$ and $\gamma$

SLIDE 24

Examples of Scoring Rules

Example 1 [Log Scoring Rule]
- $S(j; p) = \log p_j$
- $S(\mu; p) = \sum_{j \in [n]} \mu_j \cdot \log p_j$
- Negative, but okay: we can always add a constant
- Properness requires $\mu = \arg\max_{p \in \Delta_n} S(\mu; p)$

SLIDE 25

Examples of Scoring Rules

Example 1 [Log Scoring Rule], continued:

  $S(\mu; p) = \sum_{j \in [n]} \mu_j \cdot \log p_j = \sum_{j \in [n]} \mu_j (\log p_j - \log \mu_j) + \sum_{j \in [n]} \mu_j \log \mu_j = -\sum_{j \in [n]} \mu_j \log \frac{\mu_j}{p_j} - \mathrm{Entropy}(\mu)$

SLIDE 26

Examples of Scoring Rules

Example 1 [Log Scoring Rule]:

  $S(\mu; p) = \sum_{j \in [n]} \mu_j \cdot \log p_j = -\sum_{j \in [n]} \mu_j \log \frac{\mu_j}{p_j} - \mathrm{Entropy}(\mu)$

The first sum is the KL-divergence $\mathrm{KL}(\mu; p)$ (a.k.a. relative entropy):
  • Measures the distance between two distributions
  • Always non-negative, and equals 0 only when $p = \mu$
SLIDE 27

Examples of Scoring Rules

Example 1 [Log Scoring Rule]: since $S(\mu; p) = -\mathrm{KL}(\mu; p) - \mathrm{Entropy}(\mu)$ and the entropy term does not depend on $p$:
  • The report $p$ should minimize the distance $\mathrm{KL}(\mu; p)$, which is achieved at $p = \mu$
  • The log scoring rule is strictly proper
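The KL argument can be sanity-checked numerically. A sketch (assuming the log rule $S(j; p) = \log p_j$ and an example belief $\mu$ chosen here for illustration): search a grid of reports in the simplex and confirm the expected score is maximized at $p = \mu$:

```python
import math

def expected_log_score(mu, p):
    """S(mu; p) = sum_j mu_j * log(p_j)."""
    return sum(m * math.log(q) for m, q in zip(mu, p) if m > 0)

mu = [0.5, 0.3, 0.2]           # illustrative belief
best_p, best_val = None, -float("inf")
steps = 50
for a in range(1, steps):
    for b in range(1, steps - a):
        p = [a / steps, b / steps, (steps - a - b) / steps]  # grid point in the simplex
        v = expected_log_score(mu, p)
        if v > best_val:
            best_p, best_val = p, v
print(best_p)  # [0.5, 0.3, 0.2], i.e. the grid maximizer is exactly mu
```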

SLIDE 28

Examples of Scoring Rules

Example 2 [Quadratic Scoring Rule]
- $S(j; p) = 2 p_j - \sum_{k \in [n]} p_k^2$
- $S(\mu; p) = \sum_{j \in [n]} \mu_j [2 p_j - \sum_{k \in [n]} p_k^2]$

SLIDE 29

Examples of Scoring Rules

Example 2 [Quadratic Scoring Rule]

  $S(\mu; p) = \sum_{j \in [n]} \mu_j [2 p_j - \sum_{k \in [n]} p_k^2] = \sum_{j \in [n]} 2 \mu_j p_j - \sum_{j \in [n]} \mu_j \cdot \sum_{k \in [n]} p_k^2 = \sum_{j \in [n]} 2 \mu_j p_j - \sum_{j \in [n]} p_j^2 = -\sum_{j \in [n]} (p_j - \mu_j)^2 + \sum_{j \in [n]} \mu_j^2$

  • The prediction $p$ should minimize the $\ell_2$-distance between $p$ and $\mu$
  • $p_j = \mu_j$ is the unique maximizer of $S(\mu; p)$
  • The quadratic scoring rule is also strictly proper
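The algebraic identity above is easy to verify mechanically. A sketch (the rule and identity are from the slide; the random test distributions are illustrative):

```python
import random

def quad_score(j, p):
    """Quadratic scoring rule S(j; p) = 2*p_j - sum_k p_k^2."""
    return 2 * p[j] - sum(q * q for q in p)

def expected_score(mu, p):
    """S(mu; p) = sum_j mu_j * S(j; p)."""
    return sum(m * quad_score(j, p) for j, m in enumerate(mu))

random.seed(1)
for _ in range(100):
    x = [random.random() for _ in range(3)]
    mu = [v / sum(x) for v in x]           # random belief in the simplex
    y = [random.random() for _ in range(3)]
    p = [v / sum(y) for v in y]            # random report in the simplex
    lhs = expected_score(mu, p)
    rhs = -sum((pj - mj) ** 2 for pj, mj in zip(p, mu)) + sum(mj ** 2 for mj in mu)
    assert abs(lhs - rhs) < 1e-9           # the identity from the derivation
print("identity verified on 100 random (mu, p) pairs")
```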
SLIDE 30

Examples of Scoring Rules

Example 3 [Linear Scoring Rule]
- $S(j; p) = p_j$
- $S(\mu; p) = \sum_{j \in [n]} \mu_j p_j$
  • The linear scoring rule turns out not to be proper (verify it after class)
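A hint toward the exercise, as a numeric sketch (the belief $\mu$ below is an illustrative choice, not from the slides): because $\sum_j \mu_j p_j$ is linear in $p$, it is maximized by putting all mass on $\arg\max_j \mu_j$ rather than by reporting $p = \mu$:

```python
# Linear rule S(j; p) = p_j; expected utility is sum_j mu_j * p_j.
mu = [0.6, 0.4]
truthful = sum(m * p for m, p in zip(mu, mu))             # report p = mu: 0.36 + 0.16 = 0.52
degenerate = sum(m * p for m, p in zip(mu, [1.0, 0.0]))   # all mass on outcome 0: 0.6
print(truthful, degenerate)  # 0.52 < 0.6, so truthful reporting is not optimal
```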
SLIDE 31

What $S(j; p)$ Are Proper?

- Theorem. The scoring rule $S(j; p)$ is (strictly) proper if and only if there exists a (strictly) convex function $G: \Delta_n \to \mathbb{R}$ such that

  $S(j; p) = G(p) + \nabla G(p) \cdot (e_j - p)$,

  where $e_j$ is the $j$'th basis vector.

SLIDE 32

What $S(j; p)$ Are Proper?

- Theorem. The scoring rule $S(j; p)$ is (strictly) proper if and only if there exists a (strictly) convex function $G: \Delta_n \to \mathbb{R}$ such that $S(j; p) = G(p) + \nabla G(p) \cdot (e_j - p)$, where $e_j$ is the $j$'th basis vector.

Recall that $G(p)$ is convex if for any $\beta \in [0, 1]$:

  $\beta G(p) + (1 - \beta) G(r) \ge G(\beta p + (1 - \beta) r)$

SLIDE 33

What $S(j; p)$ Are Proper?

Proof of "⇐":

  $S(\mu; p) = \mathbb{E}_{j \sim \mu} [G(p) + \nabla G(p) \cdot (e_j - p)] = G(p) + \nabla G(p) \cdot (\mu - p) \le G(\mu) = S(\mu; \mu)$

The inequality holds by convexity: $G(p) + \nabla G(p) \cdot (\mu - p)$ is the tangent (linear) approximation of $G$ at $p$, which lies below $G(\mu)$.

(figure: convex function $G$ with its tangent at $p$, showing the gap to $G(\mu)$)

SLIDE 34

What $S(j; p)$ Are Proper?

Proof of "⇒":
- $S(\mu; p) = \sum_{j \in [n]} \mu_j S(j; p)$ is a linear function of $\mu$ for any fixed $p$
- By properness, $S(\mu; \mu) = \max_{p \in \Delta_n} \sum_{j \in [n]} \mu_j S(j; p)$; denote it $G(\mu)$
  • $G(\mu)$ is convex in $\mu$, being a pointwise maximum of linear functions of $\mu$

SLIDE 35

What $S(j; p)$ Are Proper?

Proof of "⇒", continued:
- $S(\mu; p) = \sum_{j \in [n]} \mu_j S(j; p)$ is a linear function of $\mu$ for any fixed $p$
- By properness, $G(\mu) := S(\mu; \mu) = \max_{p \in \Delta_n} \sum_{j \in [n]} \mu_j S(j; p)$, which is convex in $\mu$
- The gradient of $G(\mu)$ is the gradient of $\sum_{j \in [n]} \mu_j S(j; p)$ at the maximizing $p = \mu$ (envelope theorem)
  • I.e., $\nabla G(\mu) = S(\cdot; \mu)$
- Thus,

  $S(j; p) = S(p; p) + [S(j; p) - S(p; p)] = G(p) + S(\cdot; p) \cdot (e_j - p) = G(p) + \nabla G(p) \cdot (e_j - p)$
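The "⇐" direction of the characterization is constructive, and that construction can be sketched in code. As an illustration (the choice of $G$ is mine, not the slides'), take the strictly convex negative entropy $G(p) = \sum_k p_k \log p_k$ and form $S(j; p) = G(p) + \nabla G(p) \cdot (e_j - p)$; working through the algebra, this recovers exactly the log scoring rule $S(j; p) = \log p_j$:

```python
import math

def S_from_G(j, p):
    """S(j; p) = G(p) + grad G(p) . (e_j - p), with G(p) = sum_k p_k log p_k."""
    G = sum(q * math.log(q) for q in p)
    grad = [math.log(q) + 1 for q in p]               # dG/dp_k = log p_k + 1
    e_minus_p = [(1.0 if k == j else 0.0) - q for k, q in enumerate(p)]
    return G + sum(g * d for g, d in zip(grad, e_minus_p))

p = [0.5, 0.3, 0.2]
for j in range(3):
    print(S_from_G(j, p), math.log(p[j]))  # the two columns agree
```

Other convex choices of $G$ yield other proper rules; e.g., $G(p) = \sum_k p_k^2$ yields the quadratic rule.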

SLIDE 36

Outline

- Recap: Prediction Markets
- Scoring Rule and its Characterization
- Connection to Prediction Markets

SLIDE 37

What If There Are Many Experts?

- One idea: elicit their predictions privately/separately
- Drawbacks:
  1. May be expensive or wasteful: if the experts all agree, we pay many times for the same prediction
  2. Not clear how to aggregate these predictions (the average or geometric mean would not work)
  3. In fact, it may require the experts' knowledge to correctly aggregate predictions

(figure: experts with predictions $\mu_1, \mu_2, \mu_3, \dots$)

SLIDE 38

Sequential Elicitation

- Ask the experts to make predictions in sequence
- The reward for expert $k$'s prediction $p_k$ will be

  $S(j; p_k) - S(j; p_{k-1})$,

  where $p_{k-1}$ is the prediction of expert $k - 1$
  • I.e., experts are paid based on how much they improved the prediction
SLIDE 39

Sequential Elicitation

- Theorem. If $S$ is a proper scoring rule and each expert can predict only once, then each expert maximizes utility by reporting her true belief given her own knowledge.
- Proof: since $S(j; p_{k-1})$ is not under expert $k$'s control, she maximizes her reward by maximizing $S(j; p_k)$.
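A useful side observation, sketched below: the payments $S(j; p_k) - S(j; p_{k-1})$ telescope, so the designer's total payout is $S(j; p_{\text{last}}) - S(j; p_0)$ no matter how many experts participate. This sketch assumes the log scoring rule and illustrative reports (neither is specified for this example on the slide):

```python
import math

def S(j, p):
    """Log scoring rule S(j; p) = log p_j (assumed for illustration)."""
    return math.log(p[j])

p0 = [0.5, 0.5]                                  # designer's initial prediction
reports = [[0.7, 0.3], [0.6, 0.4], [0.8, 0.2]]   # experts' reports in sequence
j = 0                                            # realized outcome

prev, total = p0, 0.0
for p in reports:
    total += S(j, p) - S(j, prev)   # each expert's reward: improvement over predecessor
    prev = p

# Telescoping: total payout depends only on the first and last predictions
print(total)  # = log(0.8) - log(0.5)
```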

SLIDE 40

Sequential Elicitation

Remarks:
- Expert $k$ may see the previous reports and then update his prediction
  • The experts will aggregate predictions automatically
SLIDE 41

Sequential Elicitation

Remarks:
- The theorem is not true if an expert can predict multiple times
  • She may manipulate her initial report to mislead others' predictions so that she has the opportunity to significantly improve her own prediction later
  • We will see an example in the next lecture

SLIDE 42

Equivalence to Prediction Markets Described Previously

- It turns out that sequential elicitation is equivalent (in incentives) to the prediction market (PM) for buying and selling contracts
- Each expert moves the prediction to his own belief
  • Recall that in PMs, an expert will buy shares until the prices hit his own belief
- Any strictly proper scoring rule can be used to implement a PM, and any PM corresponds to some proper scoring rule

SLIDE 43

Remarks

Mechanism design for prediction tasks:
- ML is one way, but not the only way, of making predictions
- In some settings, aggregating predictions from experts is more desirable

SLIDE 44

Thank You

Haifeng Xu

University of Virginia hx4ad@virginia.edu