Motivational Ratings
Johannes Hörner¹, Nicolas Lambert²
1Yale and CEPR 2Stanford and Microsoft Research
Frontiers of Economic Theory and Computer Science Becker Friedman Institute, August 2016
Focus: Ratings that incentivize effort (moral hazard).
Hospitals, physicians; schools, teachers; companies, executives, etc.
Goal: What is the best rating system?
[Setup diagram] The Agent is forward-looking, has unknown skill θt, and chooses private effort At; together with the Brownian shocks {dWk,t}k, these generate output dXt and ancillary statistics dSt. The Market is competitive, holds rational expectations, and pays the agent upfront (transfers dπt). The Rater observes dXt and dSt, and publishes a rating Yt = ft({Xs, Ss}s≤t). The rater is non-strategic, committed, and maximizes effort.
Information available to the market at time t:

                Confidential                     Public
Exclusive       Yt                               {Ys}s≤t
Non-exclusive   Yt, {Xs, Sk,s}s≤t, k≤K0          {Ys, Xs, Sk,s}s≤t, k≤K0
Model
Continuous time t ≥ 0, infinite horizon. The relevant processes are:
- Effort: At ∈ R+; privately known by the agent.
- Ability: θt ∈ R; unknown to all.
- Output: Xt ∈ R.
- Ancillary statistics: St ∈ R^(K−1).
Ability process: dθt = −θt dt (mean-reversion) + γ dW^θ_t (innovation), with θ0 ∼ N(0, γ²/2), γ > 0, and W^θ a standard B.M. Rate of mean-reversion: 1.
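As a sanity check on the stationary law, here is a minimal Euler–Maruyama simulation of the ability process (illustrative only; γ, the step size, and the horizon are arbitrary choices, not taken from the talk):

```python
import numpy as np

def simulate_ability(gamma=1.0, dt=0.01, n_steps=300_000, seed=0):
    """Euler-Maruyama path of dθ_t = -θ_t dt + γ dW_t (mean-reversion rate 1)."""
    rng = np.random.default_rng(seed)
    shocks = gamma * np.sqrt(dt) * rng.standard_normal(n_steps)
    theta = np.empty(n_steps)
    x = 0.0
    for i in range(n_steps):
        x += -x * dt + shocks[i]   # θ ← θ - θ dt + γ √dt ε
        theta[i] = x
    return theta

theta = simulate_ability()
# The stationary law is N(0, γ²/2); with γ = 1 the long-run variance is ≈ 0.5.
print(np.var(theta[30_000:]))
```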
Output Process: dXt = (At + θt)dt + σ1dW1,t, with X0 = 0, σ1 > 0, and W1 a standard B.M. (W1 ⊥ W θ).
Signal processes, k = 2, …, K: dSk,t = (αkAt + βkθt) dt + σk dWk,t, with Sk,0 = 0, σk > 0, αk, βk ∈ R, and Wk a standard B.M. If αk = βk = 0, the signal is "white noise." We also write S1 := X (α1 := 1, β1 := 1).
Payoffs

Given a (cumulative) transfer process π, realized payoffs are:

Market: ∫₀^∞ e^(−rt) ( dXt − dπt )   (revenue minus transfer in [t, t + dt)),

Agent: ∫₀^∞ e^(−rt) ( dπt − c(At) dt )   (transfer minus effort cost in [t, t + dt)),

where the discount rate is r > 0, and c(0) = c′(0) = 0 and c′′ > 0. Recall that E[dXt] = At dt. Hence, efficiency requires c′(At) = 1 ∀t.
The Optimization Program

The market has some expectation A∗ about the agent's effort. Given its information Mt, the market forms a belief µt = E∗[θt | Mt]. The agent maximizes over all processes (At):

E[ ∫₀^∞ e^(−rt) (µt − c(At)) dt ].
The rater's goal: choose the information structure M so as to maximize the induced effort (At).
Why is Transparency Suboptimal?

Consider: dXt = At dt + dW1,t, dS2,t = θt dt + dW2,t.
- If {Xs, S2,s}s≤t is disclosed, then {As}s≤t doesn't affect µt: At = 0.
- If instead the market only observes d(Xt + S2,t) = (At + θt) dt + dW1,t + dW2,t, career concerns arise: At > 0.
- One can do better than disclosing the sum of the signals.
Rating Processes

A rating process is an I-adapted Y, where It = σ({Ss}s≤t). A rating process defines an information structure via Mt = σ(Yt) (confidential). Throughout, we impose:
- 1. For all ∆, (Yt, St − St−∆) is normal and stationary.
- 2. The map ∆ ↦ Cov[Yt, St−∆] is absolutely continuous, with integrable and square-integrable Radon–Nikodym derivative.
- 3. The mean rating is zero: E∗[Yt] = 0.

We usually work with scalar Yt = E∗[θt | Mt] ("direct ratings").
Deterministic Information Quality Implies Normality

Lemma (Normal Representation). Let Y be a progressively measurable process on I such that:
- 1. ∀ T > t + τ > t, Cov[YT, St+τ | It] is a function of (t, T, τ), differentiable in τ, with uniformly Lipschitz continuous derivative in t.
- 2. ∀ T > t, Cov[YT, θt | It] is a function of (t, T).
- 3. ∀ t, E[Yt²] < ∞ and E[Yt] = 0.

Then, for all ∆ ≥ 0, (Yt, St − St−∆) is normally distributed.
Methods that Qualify:
- Exponential smoothing. (Business Week's b-school ranking.) Yt = ∫_{s≤t} e^(−a(t−s)) dXs. Special case: transparency (a = κ := √(1 + γ²/σ1²)).
- Moving window. (Consumer credit ratings, BBB grades.) Yt = ∫_{t−∆}^{t} dXs.

Methods that Don't: Coarse ratings.
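Exponential smoothing is convenient in practice because it needs only one state: Yt = ∫_{s≤t} e^(−a(t−s)) dXs satisfies dY = −aY dt + dX. A discretized sketch (all parameter values are illustrative, not from the talk):

```python
import numpy as np

def smoothed_rating(dX, a, dt):
    """Y_t = sum of e^{-a(t-s)} dX_s via the one-state recursion Y ← e^{-a dt} Y + dX."""
    decay = np.exp(-a * dt)
    Y = np.empty(len(dX))
    y = 0.0
    for i, inc in enumerate(dX):
        y = decay * y + inc
        Y[i] = y
    return Y

rng = np.random.default_rng(1)
dt, a, n = 0.01, 2.0, 1000
dX = 0.5 * dt + np.sqrt(dt) * rng.standard_normal(n)   # output increments, drift 0.5

Y = smoothed_rating(dX, a, dt)

# Cross-check against the kernel form: weight e^{-a(t-s)} on each past increment.
t = dt * np.arange(1, n + 1)
direct = np.array([np.sum(np.exp(-a * (t[i] - t[:i + 1])) * dX[:i + 1])
                   for i in range(n)])
assert np.allclose(Y, direct)
```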
What can the Rater Do? More Generally:

[Diagram] The rating can put an arbitrary weight uk(t − s) on each past increment dSk,s of each signal (k = 1, …, K, s ≤ t); exponential discounting e^(−δ(t−s)) is a special case.
Lemma (Analytic Representation). Fix a rating process Y. Given a conjectured A∗, there exist unique vector-valued functions uk, k = 1, …, K, such that, for all t,

Yt = Σk ∫_{s≤t} uk(t − s)(dSk,s − αkA∗ ds).
Main Results for the Confidential/Exclusive Case

The unique optimal confidential rating system is

uk(t) = dk (√r/λ) e^(−rt) + (βk/σk²) e^(−κt),

with

dk := (κ² − r²) mβ (αk/σk²) − (κ² − 1) mαβ (βk/σk²),
λ := (κ − 1)√r (1 + r) mαβ + (κ − r) √∆,
mβ := Σk βk²/σk²,  mαβ := Σk αkβk/σk²,  mα := Σk αk²/σk²,
∆ := (κ + r)² (mα mβ − mαβ²) + (1 + r)² mαβ²,
κ := √(1 + γ² Σk βk²/σk²).
That is,

Yt = ∫_{s≤t} Σk [ dk (√r/λ) e^(−r(t−s)) (incentive term) + (βk/σk²) e^(−κ(t−s)) (belief term) ] (dSk,s − αkA∗ ds).

The system is a (two-state) mixture Markov rating system: the rating can be written as the sum of two Markov processes.
- One state is the rater's belief νt := E∗[θt | It]:
  dνt = −κ νt dt + (γ²/(κ + 1)) Σk (βk/σk²)(dSk,t − αkA∗ dt).
- The other is an incentive state It:
  dIt = −r It dt + (√r/λ) Σk dk (dSk,t − αkA∗ dt).
Two states are needed: keeping track of νt isn't enough. The rating process Y = I + ν isn't Markov, but the pair (It, νt) is.
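A sketch of why two states suffice: with a two-exponential kernel, the rating is exactly the sum of one process decaying at rate r (the incentive state) and one decaying at rate κ (the belief state). The coefficients, rates, and surprise process below are illustrative stand-ins of mine, not the paper's dk and λ:

```python
import numpy as np

def two_state_rating(dZ, c_inc, c_bel, r, kappa, dt):
    """Track Y = I + ν with the recursions I ← e^{-r dt} I + c_inc dZ and
    ν ← e^{-κ dt} ν + c_bel dZ, where dZ stands for the surprise dS - α A* dt."""
    I = nu = 0.0
    Y = np.empty(len(dZ))
    for n, dz in enumerate(dZ):
        I = np.exp(-r * dt) * I + c_inc * dz
        nu = np.exp(-kappa * dt) * nu + c_bel * dz
        Y[n] = I + nu
    return Y

rng = np.random.default_rng(2)
dt, r, kappa, n = 0.01, 0.1, 1.5, 500
dZ = np.sqrt(dt) * rng.standard_normal(n)
Y = two_state_rating(dZ, c_inc=-0.3, c_bel=1.0, r=r, kappa=kappa, dt=dt)

# Same rating computed directly from the mixture kernel
# u(τ) = c_inc e^{-r τ} + c_bel e^{-κ τ}:
t = dt * np.arange(1, n + 1)
direct = np.array([
    np.sum((-0.3 * np.exp(-r * (t[i] - t[:i + 1]))
            + 1.0 * np.exp(-kappa * (t[i] - t[:i + 1]))) * dZ[:i + 1])
    for i in range(n)
])
assert np.allclose(Y, direct)   # Y alone isn't Markov; the pair (I, ν) is.
```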
Reality Check

Ratings are not Markov: widely documented for credit ratings.
Altman and Kao (1992), Carty and Fons (1993), Altman (1998), Nickell et al. (2000), Bangia et al. (2002), Lando and Skødeberg (2002), Hamilton and Cantor (2004), etc.
Mixture rating models: shown to explain economic differences.
Two-state: Frydman and Schuermann (2008); HMM: Giampieri et al. (2005); Rating momentum: Stefanescu et al. (2006).
Implication: Benchmarking

As an example, suppose there is one signal (output): αk = α, βk = β, σk = σ. Then the optimal confidential rating simplifies to

u(t) = (β/σ²) [ ((1 − √r)/(κ − √r)) √r e^(−rt) + e^(−κt) ].

So the incentive state isn't always "added." It may be subtracted.
[Figure: u(t) against t for (r, κ) = (14, 15).]
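Plugging in the plotted parameters confirms the sign change (β = σ = 1 here, an arbitrary normalization of mine): the weight on very recent output is positive, while the weight on slightly older output turns negative, i.e. past good performance is benchmarked away.

```python
import numpy as np

def u(t, r, kappa, beta=1.0, sigma=1.0):
    """One-signal optimal confidential weight:
    u(t) = (β/σ²) [ ((1-√r)/(κ-√r)) √r e^{-rt} + e^{-κt} ]."""
    sr = np.sqrt(r)
    return (beta / sigma**2) * ((1 - sr) / (kappa - sr) * sr * np.exp(-r * t)
                                + np.exp(-kappa * t))

r, kappa = 14.0, 15.0
print(u(0.02, r, kappa))   # > 0: very recent performance raises the rating
print(u(0.20, r, kappa))   # < 0: the incentive state is subtracted at longer lags
```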
Reality Check
Benchmarking: Prior-year performance widely used for incentives. "When standards are based on prior-year performance, managers might avoid unusually positive performance outcomes, since good current performance is penalized in the next period through an increased standard." —Murphy, 2001.
Implementation

What about other effort levels, given alternative goals of the rater?
- Maximum effort Ā induced by the two-state mixture Markov rating Y.
- Minimum effort 0 induced by "pure noise" W, a B.M.
- Any A ∈ [0, Ā] can be induced by λY + (1 − λ)W, for some λ ∈ [0, 1].

Two-state mixture Markov ratings (plus noise) are wlog.
In Conclusion

Our analysis shows why insisting on transparency, or even publicness, isn't optimal.

And, more surprisingly:
- Two-state mixture Markov models are "robust."
- Ratings aren't Markovian.
- Benchmarking can be optimal.
Technical Aspects

Focus on scalar ratings (wlog).

Lemma. The effort A∗ induced by a confidential process Y solves

c′(A∗) ∝ Corr[Y, θ] (belief term) · Σk ∫_{t≥0} αk uk(t) e^(−rt) dt (incentive term) / √Var[Y] (normalization),

which the rater maximizes over {uk}k.
Optimal Ratings: Proof Overview

We first guess what optimal ratings look like.
- 1. Write the agent's marginal cost as a function of {uk}k.
- 2. Get a set of FOCs by adding small perturbations. (= Calculus of variations.)
- 3. Derive systems of differential equations that the uk's must satisfy. (Yields exponential ratings.)

Main difficulties:
- Non-standard calculus of variations. (Multidimensional objective with single-dimensional input, time-delayed controls.)
- The set of FOCs is a continuum.
We then verify that the guess is correct. To do so, we define an auxiliary principal–agent problem. The agent is as in the main model. The principal pays the agent, as the market does in the main model, but her payoff includes the objective of the intermediary in the main model.
As in the main model, the agent maximizes

E[ ∫_{s≥t} e^(−r(s−t)) (µs − c(As)) ds | Mt ],

but now µ is an arbitrary transfer rate. The principal maximizes

E[ ∫₀^∞ ρ e^(−ρt) ( c′(At) (objective term) − φ µt(µt − νt) (penalty term) ) dt ],

where ρ, φ > 0 and νt is the belief under "full transparency."

Note: if E[µ] = 0, then E[µt(µt − νt)] = 0 ⇔ Var[µt] = Cov[µt, νt].
Why the penalty? We want µ to be a belief process. It is a belief process iff µt = E[θt | µt] (= E[νt | µt]). For any Gaussian (µ, ν):

E[νt | µt] = E[νt] + (Cov[µt, νt] / Var[µt]) (µt − E[µt]).

⇒ A (mean-normalized) process is a market belief iff Var[µt] = Cov[µt, νt].
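The fixed-point condition is easy to check on simulated Gaussians. In this toy construction of mine (not the paper's process), ν is split into a piece the market learns and a piece it does not, and µ is the conditional expectation given the first piece, so the identity should hold:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200_000
a = rng.standard_normal(n)           # component of ν revealed to the market
b = 0.7 * rng.standard_normal(n)     # component kept hidden
nu = a + b                           # "full transparency" belief ν
mu = a                               # candidate market belief µ = E[ν | a]

# Belief-process condition: Var[µ] = Cov[µ, ν] ...
print(np.var(mu), np.cov(mu, nu)[0, 1])   # both ≈ 1

# ... equivalently E[ν | µ] = µ: the regression of ν on µ has slope 1.
slope = np.cov(mu, nu)[0, 1] / np.var(mu)
print(slope)                               # ≈ 1
```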
With a carefully chosen φ, as the principal becomes increasingly patient, E [µt(µt − νt)] → 0 and c′(A) → c′(A∗) where A∗ is the conjectured optimal effort level derived from the FOC, and µ converges to the conjectured optimal rating.
Overview of the Other Cases
Public Ratings

The unique optimal public rating system is

uk(t) = d̃k (√r/λ) e^(−√r t) + (βk/σk²) e^(−κt),

where

d̃k := ((κ − √r)/(κ − r)) dk + λ ((√r − 1)/(κ − r)) (βk/σk²).

Note the decay rate of the incentive term: √r = 1^(1/2) r^(1/2), combining the rate of mean-reversion (1) and the discount rate (r).
In common:
- A two-state rating system.
- One state is the belief.
- No signal gets discarded.
- Benchmarking can arise.

Differences:
- The incentive impulse response decays at √r, the geometric mean of the discount rate and the rate of mean-reversion.
- With homogeneous signals, d̃k = 0: transparency is best.
Exclusive vs. Non-Exclusive Information
Suppose some (not all) signals are openly available to the market.
In common: (with homogeneous signals) a two-state rating system:
- Private signals: uk = d̂k e^(−rt) + (βk/σk²) e^(−κt).
- Public signals: uk = ďk e^(−δt) + (βk/σk²) e^(−κt).

New features:
- Better informed market.
- Public information and ratings can be substitutes.
Multi-Dimensional Actions

The analysis extends to multi-dimensional actions (separable cost). As an example:

dS1,t = a1,t dt + σ1 dW1,t,
dS2,t = (a2,t + θt) dt + σ2 dW2,t,

with cost c(a1, a2) = c · (a1² + a2²). The best confidential system is

u1(t) = (√r/σ1) e^(−rt),  u2(t) = e^(−κt)/σ2²,

and effort satisfies

c′(a1) = (κ − 1)/(4√r σ1),  c′(a2) = (κ − 1)/(2(r + κ) σ2²).
How do Different Signals get Weighted?

The confidential process can be rewritten as

uk(t) = (βk/σk²) [ ((κ² − r²)(αk/βk) − (κ² − 1)(mαβ/mβ)) (√r mβ/λ) e^(−rt) + e^(−κt) ].

Fixing the SNR βk/σk², signals are ordered according to the ratio αk/βk: the higher the ratio, the larger the weight (whether positive or not).
Precision

How well-informed is the market? (As measured by Var[µ].)
- Always better informed with public ratings.
- But it isn't simply a trade-off between effort and information: fixing precision, effort is higher under the best confidential rating system.
- Variance is non-monotone in r.
Limits in the Confidential Case

Recall that uk(t) = dk (√r/λ) e^(−rt) + (βk/σk²) e^(−κt).

When r tends to 0: the coefficient dk/λ tends to a nonzero limit; effort diverges, and the market is less informed than under transparency.

When the rater's signals become arbitrarily informative: unless αk/βk is independent of k, effort diverges, and the limit rating process is non-degenerate. No transparency.