Motivational Ratings

Johannes Hörner¹, Nicolas Lambert²

¹Yale and CEPR  ²Stanford and Microsoft Research

Frontiers of Economic Theory and Computer Science Becker Friedman Institute, August 2016

SLIDE 4

Focus: Ratings that incentivize effort (moral hazard).

Hospitals, physicians; schools, teachers; companies, executives, etc.

Goal: What is the best rating system?

SLIDE 5

[Model diagram, built up over slides 5–14:]

  • Agent: unknown skill θt, private effort At; forward-looking.
  • Effort and skill drive the output process dXt and the ancillary statistics dSt, each perturbed by noise {dWk,t}k.
  • Market: competitive, with rational expectations; pays the agent upfront via the transfer dπt.
  • Rater: non-strategic, committed, and maximizes effort; observes (dXt, dSt) and computes the rating Yt = ft({Xs, Ss}s≤t). The history {Ys}s≤t is the public rating.
  • Under a non-exclusive rating, the market also observes dXt, dSk,t, . . . directly.

SLIDE 15

Information structures:

                 Confidential                     Public
Exclusive        Yt                               {Ys}s≤t
Non-exclusive    Yt, {Xs, Sk,s}s≤t, k≤K0         {Ys, Xs, Sk,s}s≤t, k≤K0

SLIDE 17

Model

SLIDE 18

Continuous time t ≥ 0, infinite horizon. The relevant processes are:

  • Effort: At ∈ R+; privately known by the agent.
  • Ability: θt ∈ R; unknown to all.
  • Output: Xt ∈ R.
  • Ancillary statistics: St ∈ R^(K−1).

SLIDE 22

Ability Process:

dθt = −θt dt + γ dWθ,t,

where −θt dt is the mean-reversion term and γ dWθ,t the innovation, with θ0 ∼ N(0, γ²/2), γ > 0, and Wθ a standard B.M. Rate of mean-reversion: 1.
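The ability dynamics are easy to sanity-check numerically. The sketch below (an illustration, not part of the paper; the step size, horizon, and seed are arbitrary choices) simulates dθt = −θt dt + γ dWθ,t by Euler-Maruyama and checks that the long-run variance of the path is close to the stationary value γ²/2:

```python
import math
import random
import statistics

def simulate_theta(gamma=1.0, dt=0.01, steps=500_000, seed=0):
    """Euler-Maruyama path of d(theta) = -theta dt + gamma dW."""
    rng = random.Random(seed)
    theta = rng.gauss(0.0, gamma / math.sqrt(2))  # theta_0 ~ N(0, gamma^2/2)
    sqrt_dt = math.sqrt(dt)
    path = []
    for _ in range(steps):
        theta += -theta * dt + gamma * sqrt_dt * rng.gauss(0.0, 1.0)
        path.append(theta)
    return path

path = simulate_theta()
var_hat = statistics.pvariance(path)
# The stationary variance of this Ornstein-Uhlenbeck process is gamma^2/2 = 0.5,
# so var_hat should land near 0.5 (up to sampling and discretization error).
print(round(var_hat, 2))
```

The mean-reversion rate of 1 means the process forgets its past on a time scale of one unit, which is why a horizon of several thousand time units gives a stable variance estimate.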

SLIDE 26

Output Process: dXt = (At + θt) dt + σ1 dW1,t, with X0 = 0, σ1 > 0, and W1 a standard B.M. (W1 ⊥ Wθ).

SLIDE 27

Signal Processes, k = 2, . . . , K: dSk,t = (αk At + βk θt) dt + σk dWk,t, with Sk,0 = 0, σk > 0, αk, βk ∈ R, and Wk a standard B.M. If αk = βk = 0, the signal is “white noise.” We also write S1 := X (α1 := 1, β1 := 1).

SLIDE 30

Payoffs

Given a (cumulative) transfer process π, realized payoffs are:

Market: ∫₀^∞ e−rt (dXt − dπt), i.e., revenue on [t, t + dt) minus the transfer on [t, t + dt);

Agent: ∫₀^∞ e−rt (dπt − c(At) dt), i.e., the transfer on [t, t + dt) minus the effort cost on [t, t + dt);

where the discount rate is r > 0, and c(0) = c′(0) = 0 and c′′ > 0. Recall that E[dXt] = At dt. Hence, efficiency requires c′(At) = 1 for all t.
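Since E[dXt] = At dt, the marginal social benefit of effort is 1, so efficient effort solves the first-order condition c′(A) = 1. A minimal sketch (illustrative only; the quadratic cost below is an assumption, not from the slides) solves this by bisection, which works for any convex cost because c′ is increasing:

```python
def efficient_effort(marginal_cost, lo=0.0, hi=10.0, tol=1e-9):
    """Bisection for c'(A) = 1; valid because c'' > 0 makes c' increasing."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if marginal_cost(mid) < 1.0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Example: c(a) = a^2 / 2, so c'(a) = a and the efficient effort is 1.
a_star = efficient_effort(lambda a: a)
print(round(a_star, 6))  # 1.0
```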

SLIDE 36

The Optimization Program

The market has some expectation A∗ about the agent’s effort. Given its information Mt, the market forms a belief µt = E∗[θt | Mt]. The agent maximizes over all processes (At):

E[ ∫₀^∞ e−rt (µt − c(At)) dt ].

The rater’s goal: choose the information structure M so as to maximize the effort (At) that this program induces.

SLIDE 41

Why is Transparency Suboptimal?

Consider: dXt = At dt + dW1,t, dS2,t = θt dt + dW2,t.

If {Xs, S2,s}s≤t is disclosed, then {As}s≤t doesn’t affect µt (here output carries no information about θt, so the belief ignores it): At = 0.

If instead the market only observes the sum, d(Xt + S2,t) = (At + θt) dt + dW1,t + dW2,t, career concerns arise: At > 0.

One can do better still than disclosing the sum of the signals.
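The logic shows up already in a one-shot Gaussian analogue (an illustrative sketch with made-up variances, not the paper's model): with separate disclosure the belief loads only on the θ-bearing signal, which effort cannot move; with only the sum disclosed, the belief loads on a statistic that effort shifts.

```python
def belief_weights(var_theta, var_e1, var_e2):
    """Posterior weights of E[theta | signals] under two disclosure regimes.

    Separate: market sees x = a + e1 and s = theta + e2. Under the market's
    conjecture a* = 0, x is pure noise about theta, so the belief is w_s * s
    and effort has zero marginal effect on it.
    Sum: market sees z = x + s = a + theta + e1 + e2, so the belief is
    w_z * z and effort moves it at rate w_z > 0 (career concerns).
    """
    w_x = 0.0                                         # x carries no theta
    w_s = var_theta / (var_theta + var_e2)
    w_z = var_theta / (var_theta + var_e1 + var_e2)
    return {"separate": (w_x, w_s), "sum": w_z}

w = belief_weights(var_theta=1.0, var_e1=1.0, var_e2=1.0)
effect_separate = w["separate"][0]   # d(belief)/d(effort) = 0
effect_sum = w["sum"]                # d(belief)/d(effort) = 1/3 > 0
print(effect_separate, effect_sum)
```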

SLIDE 45

Rating Processes

A rating process is an I-adapted process Y, where It = σ({Ss}s≤t). A rating process defines an information structure via Mt = σ(Yt) (the confidential case). Throughout, we impose:

  • 1. For all ∆, (Yt, St − St−∆) is normal and stationary.
  • 2. The map ∆ ↦ Cov[Yt, St−∆] is absolutely continuous, with integrable and square-integrable Radon-Nikodym derivative.
  • 3. The mean rating is zero: E∗[Yt] = 0.

We usually work with scalar Yt = E∗[θt | Mt] (“direct ratings”).

SLIDE 50

Deterministic Information Quality Implies Normality

Lemma (Normal Representation). Let Y be a progressively measurable process on I such that:

  • 1. ∀ T > t + τ > t, Cov[YT, St+τ | It] is a function of (t, T, τ), differentiable in τ, with uniformly Lipschitz continuous derivative in t.
  • 2. ∀ T > t, Cov[YT, θt | It] is a function of (t, T).
  • 3. ∀ t, E[Yt²] < ∞ and E[Yt] = 0.

Then, for all ∆ ≥ 0, (Yt, St − St−∆) is normally distributed.

SLIDE 51

Methods that Qualify:

Exponential smoothing. (Business Week’s b-school ranking.)

Yt = ∫s≤t e−a(t−s) dXs.

Special case: transparency (a = κ := √(1 + γ²/σ1²)).

Moving window. (Consumer credit ratings, BBB grades.)

Yt = ∫[t−∆, t] dXs = Xt − Xt−∆.

Methods that Don’t: Coarse ratings.
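Both qualifying methods are one-line recursions over output increments. A discrete-time sketch (the drift, smoothing rate, and step size are arbitrary illustrative choices): with a constant drift dXt = m dt, exponential smoothing converges to m/a, and a moving window of length ∆ reports m∆.

```python
import math

def exponential_smoothing(increments, a, dt):
    """Y_t = integral of exp(-a (t - s)) dX_s, computed via the recursion
    Y_{t+dt} = exp(-a dt) * Y_t + dX_t."""
    decay = math.exp(-a * dt)
    y = 0.0
    for dx in increments:
        y = decay * y + dx
    return y

def moving_window(increments, window_steps):
    """Y_t = X_t - X_{t-Delta}: the sum of the last `window_steps` increments."""
    return sum(increments[-window_steps:])

m, a, dt = 2.0, 0.5, 1e-3
increments = [m * dt] * 200_000                        # deterministic drift
y_exp = exponential_smoothing(increments, a, dt)
y_win = moving_window(increments, window_steps=1_000)  # Delta = 1.0
print(round(y_exp, 3), round(y_win, 3))  # close to m/a = 4.0 and m*Delta = 2.0
```

With noisy increments the same recursions apply unchanged; the deterministic drift is used here only so the limits can be checked by hand.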

SLIDE 55

What Can the Rater Do? More Generally:

[Diagram, built up over slides 55–62: the full array of past signal increments {dSk,s}s≤t across signals k = 1, . . . , K (columns dXs, dSk,s, dSK,s for s ≤ t), with each increment weighted by a kernel uk(t − s), e.g. exponential discounting e−δ(t−s), and summed into the rating.]

SLIDE 63

Lemma (Analytic Representation). Fix a rating process Y. Given a conjectured A∗, there exist unique vector-valued functions uk, k = 1, . . . , K, such that, for all t,

Yt = Σk ∫s≤t uk(t − s) (dSk,s − αk A∗ ds).

SLIDE 64

Main Results for the Confidential/Exclusive Case

SLIDE 65

The unique optimal confidential rating system is

uk(t) = dk (√r/λ) e−rt + (βk/σk²) e−κt,

where

dk := (κ² − r²) mβ (αk/σk²) − (κ² − 1) mαβ (βk/σk²),
λ := (κ − 1) √r (1 + r) mαβ + (κ − r) √∆,
mβ := Σk βk²/σk²,   mαβ := Σk αk βk/σk²,   mα := Σk αk²/σk²,
∆ := (κ + r)² (mα mβ − mαβ²) + (1 + r)² mαβ²,
κ := √(1 + γ² Σk βk²/σk²).

SLIDE 66

That is,

Yt = ∫s≤t Σk [ dk (√r/λ) e−r(t−s) + (βk/σk²) e−κ(t−s) ] (dSk,s − αk A∗ ds),

where the first term in brackets is the incentive term and the second the belief term. The system is a (two-state) mixture Markov rating system: the rating can be written as the sum of two Markov processes.

One state is the rater’s belief νt := E∗[θt | It]:

dνt = −κ νt dt + (γ²/(κ + 1)) Σk (βk/σk²) (dSk,t − αk A∗ dt).

The other is an incentive state It:

dIt = −r It dt + (√r/λ) Σk dk (dSk,t − αk A∗ dt).
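Both state variables follow linear filters, so they can be simulated with elementary recursions. A sketch with made-up parameter values (the constants κ, r, γ, λ, dk below are set arbitrarily for illustration, not computed from the theorem): feeding in constant signal surprises, each state converges to the steady state of its linear ODE.

```python
import math

def simulate_states(surprises, kappa, r, gamma, lam, betas, sigmas, ds, dt, steps):
    """Euler recursions for the belief state nu and the incentive state I:
       d(nu) = -kappa*nu dt + (gamma^2/(kappa+1)) * sum_k (beta_k/sigma_k^2) dZ_k
       d(I)  = -r*I dt      + (sqrt(r)/lam)      * sum_k d_k              dZ_k
    where dZ_k = dS_k - alpha_k A* dt is the signal surprise (here a
    constant rate dZ_k/dt, given in `surprises`)."""
    nu = inc = 0.0
    load_nu = (gamma ** 2 / (kappa + 1)) * sum(
        b / s ** 2 * z for b, s, z in zip(betas, sigmas, surprises))
    load_inc = (math.sqrt(r) / lam) * sum(d * z for d, z in zip(ds, surprises))
    for _ in range(steps):
        nu += (-kappa * nu + load_nu) * dt
        inc += (-r * inc + load_inc) * dt
    return nu, inc

params = dict(surprises=[1.0, 0.5], kappa=2.0, r=0.5, gamma=1.0, lam=1.5,
              betas=[1.0, 0.8], sigmas=[1.0, 2.0], ds=[0.3, -0.2],
              dt=1e-3, steps=50_000)
nu, inc = simulate_states(**params)
# Linear ODEs: the steady states are load_nu/kappa and load_inc/r.
print(round(nu, 4), round(inc, 4))
```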
SLIDE 71

Two states are needed: keeping track of νt isn’t enough.

[Diagram, built up over slides 71–75: the rating Yt is driven by the pair of states (It, νt), not by νt alone.]

The rating process Y = I + ν isn’t Markov.

SLIDE 76

Reality Check

Ratings are not Markov: widely documented for credit ratings.

Altman and Kao (1992), Carty and Fons (1993), Altman (1998), Nickell et al. (2000), Bangia et al. (2002), Lando and Skødeberg (2002), Hamilton and Cantor (2004), etc.

Mixture rating models: shown to explain economic differences.

Two-state: Frydman and Schuermann (2008); HMM: Giampieri et al. (2005); rating momentum: Stefanescu et al. (2006).

SLIDE 78

Implication: Benchmarking

As an example, suppose there is one signal (output): αk = α, βk = β, σk = σ. Then the optimal confidential rating simplifies to

u(t) = (β/σ²) [ ((1 − √r)/(κ − √r)) √r e−rt + e−κt ].

So the incentive state isn’t always “added.” It may be subtracted.

SLIDE 81

[Plot of the weight u(t) for t ∈ [0, 0.10], with (r, κ) = (14, 15).]

SLIDE 82

Reality Check

Benchmarking: prior-year performance is widely used for incentives.

“When standards are based on prior-year performance, managers might avoid unusually positive performance outcomes, since good current performance is penalized in the next period through an increased standard.” (Murphy, 2001)

SLIDE 83

Implementation

What about other effort levels, given alternative goals of the rater?

  • Maximum effort Ā: induced by the two-state mixture Markov rating Y.
  • Minimum effort 0: induced by “pure noise” W, a Brownian motion.
  • Any A ∈ [0, Ā] can be induced by λY + (1 − λ)W, for some λ ∈ [0, 1].

Two-state mixture Markov ratings (plus noise) are wlog.

SLIDE 86

In Conclusion

Our analysis shows why:

  • Insisting on transparency, or even on publicness, isn’t optimal.

And, more surprisingly:

  • Two-state mixture Markov models are “robust.”
  • Ratings aren’t Markovian.
  • Benchmarking can be optimal.

SLIDE 89

Technical Aspects

SLIDE 90

Focus on scalar ratings (wlog).

Lemma. The effort A∗ induced by a confidential process Y solves

c′(A∗) ∝ Corr[Y, θ] · ( Σk ∫t≥0 αk uk(t) e−rt dt ) / Var[Y],

where Corr[Y, θ] is the belief term, the sum is the incentive term, and Var[Y] is the normalization. The rater maximizes this expression over {uk}k.

SLIDE 95

Optimal Ratings: Proof Overview

We first guess what optimal ratings look like.

  • 1. Write the agent’s marginal cost as a function of {uk}k.
  • 2. Get a set of FOCs by adding small perturbations. (= Calculus of variations.)
  • 3. Derive systems of differential equations that the uk’s must satisfy. (Yields exponential ratings.)

Main difficulties:

◮ Non-standard calculus of variations. (Multidimensional objective with single-dimensional input, time-delayed controls.)

◮ The set of FOCs is a continuum.

SLIDE 99

We then verify that the guess is correct. To do so, we define an auxiliary principal-agent problem. The agent is as in the main model. The principal pays the agent, as the market does in the main model, but her payoff includes the objective of the intermediary in the main model.

SLIDE 100

As in the main model, the agent maximizes

E[ ∫s≥t e−r(s−t) (µs − c(As)) ds | Mt ],

but now µ is an arbitrary transfer rate. The principal maximizes

E[ ∫₀^∞ ρ e−ρt ( c′(At) − φ µt (µt − νt) ) dt ],

where ρ, φ > 0, c′(At) is the objective term, φ µt(µt − νt) the penalty term, and νt is the belief under “full transparency.”

Note: if E[µ] = 0, then E[µt (µt − νt)] = 0 ⇔ Var[µt] = Cov[µt, νt].

SLIDE 105

Why the penalty? We want µ to be a belief process. It is a belief process iff µt = E[θt | µt] (= E[νt | µt]). For any Gaussian (µ, ν):

E[νt | µt] = E[νt] + (Cov[µt, νt]/Var[µt]) (µt − E[µt]).

⇒ A (mean-normalized) process is a market belief iff Var[µt] = Cov[µt, νt].

SLIDE 108

With a carefully chosen φ, as the principal becomes increasingly patient, E[µt(µt − νt)] → 0 and c′(A) → c′(A∗), where A∗ is the conjectured optimal effort level derived from the FOCs, and µ converges to the conjectured optimal rating.

SLIDE 109

Overview of the Other Cases

SLIDE 110

Public Ratings

The unique optimal public rating system is

uk(t) = d̃k (√r/λ) e−√r t + (βk/σk²) e−κt,

where

d̃k := ((κ − √r)/(κ − r)) dk + λ ((√r − 1)/(κ − r)) (βk/σk²).

The incentive term now decays at rate √r = (1 · r)^(1/2), the geometric mean of the rate of mean-reversion (1) and the discount rate (r).

SLIDE 114

In common:

  • A two-state rating system.
  • One state is the belief.
  • No signal gets discarded.
  • Benchmarking can arise.

Differences:

  • The incentive state’s impulse response decays at the geometric mean of the discount rate and the rate of mean-reversion.
  • With homogeneous signals, d̃k = 0: transparency is best.

SLIDE 116

Exclusive vs. Non-Exclusive Information

Suppose some (but not all) signals are openly available to the market.

SLIDE 117

In common (with homogeneous signals): a two-state rating system, with

  • private signals weighted by uk(t) = d̂k e−rt + (βk/σk²) e−κt,
  • public signals weighted by uk(t) = ďk e−δt + (βk/σk²) e−κt.

New features: a better-informed market; public information and ratings can be substitutes.

SLIDE 118

Multi-Dimensional Actions

The analysis extends to multi-dimensional actions (separable cost). As an example:

dS1,t = a1,t dt + σ1 dW1,t,   dS2,t = (a2,t + θt) dt + σ2 dW2,t,

with cost c(a1, a2) = c · (a1² + a2²). The best confidential system is

u1(t) = (√r/σ1) e−rt,   u2(t) = e−κt/σ2²,

and effort solves

c′(a1) = (κ − 1)/(4√r σ1),   c′(a2) = (κ − 1)/(2(r + κ) σ2²).

SLIDE 121

How Do Different Signals Get Weighted?

The confidential process can be rewritten as

uk(t) = (βk/σk²) [ ( (κ² − r²)(αk/βk) − (κ² − 1)(mαβ/mβ) ) (√r mβ/λ) e−rt + e−κt ].

Fixing the signal-to-noise ratio βk/σk², signals are ordered according to the ratio αk/βk: the higher the ratio, the larger the weight (whether positive or not).

SLIDE 122

Precision

How well-informed is the market? (As measured by Var[µ].)

  • Always better informed with public ratings.
  • But it isn’t simply a trade-off between effort and information: fixing precision, effort is higher under the best confidential rating system.
  • The variance is non-monotone in r.

SLIDE 126

Limits in the Confidential Case

Recall that uk(t) = dk (√r/λ) e−rt + (βk/σk²) e−κt.

When r tends to 0: the coefficient dk/λ tends to a nonzero limit. Effort diverges, and the market is less informed than under transparency.

When the rater’s signals become arbitrarily informative: unless αk/βk is independent of k, effort diverges, and the limit rating process is non-degenerate. No transparency.

When mean-reversion tends to 0: effort converges to a finite limit; no transparency.