Statistics 498, Summer 2009: Summer Practicum in Statistics and Financial Risk (PowerPoint presentation)



SLIDE 1

Statistics 498, Summer 2009: Summer Practicum in Statistics and Financial Risk

Professor Peter Bloomfield
Email: bloomfield@stat.ncsu.edu
Course home page: http://www4.stat.ncsu.edu/bloomfld/courses/498/

SLIDE 2

Topic 1: Dynamics of Credit Ratings

  • Investors who buy bonds, such as insurers and pension funds, must evaluate the risk that the issuer of the bond may not make the required payments (periodic interest, and return of principal).

  • The credit rating agencies Standard & Poor’s, Moody’s, and Fitch assign ratings to bonds, as a service to the investors.

SLIDE 3

Rating Scales

  S&P and Fitch | Moody’s | Category
  AAA           | Aaa     | Investment Grade
  AA            | Aa      | Investment Grade
  A             | A       | Investment Grade
  BBB           | Baa     | Investment Grade
  BB            | Ba      | Speculative, High Yield, “Junk”
  B             | B       | Speculative, High Yield, “Junk”
  CCC           | Caa     | Speculative, High Yield, “Junk”
  D             | D       | Default

SLIDE 4
  • All ratings below AAA/Aaa are split into three minor categories; for example:
    – AA+, AA, AA-;
    – Aa1, Aa2, Aa3.

SLIDE 5

Statistical Approach

  • Investors are interested in questions like:
    – If a bond is A-rated today, what is the chance that it will be in default within 5 years?
    – What is the chance that it will stay investment grade until maturity?

  • To provide answers, we need a probability model for changes in ratings, and estimates of its parameters.

SLIDE 6

The Markov Assumption

  • Write X(t) for the rating at time t.

  • The Markov assumption is that future ratings X(u), u > t, are affected by X(t) but not by X(s) for any s < t.

  • More specifically: suppose that A_F is an event in the future, defined by X(u), u > t, and that A_P is an event in the past, defined by X(s), s < t.

  • Then

      P[A_F | X(t) = i and A_P] = P[A_F | X(t) = i].

SLIDE 7
  • In particular, if u > t, then

      P[X(u) = j | X(t) = i and A_P] = P[X(u) = j | X(t) = i].

  • P[X(u) = j | X(t) = i] is called a transition probability.

  • The transition probabilities are the most important characteristics of the distribution of X(·).

SLIDE 8

Time Homogeneity

  • In many situations, we assume that the transition probability P[X(u) = j | X(t) = i] depends on t and u only through the difference u − t.

  • For h > 0, we write

      p_{i,j}(h) = P[X(t + h) = j | X(t) = i],

    which under this assumption does not depend on t.

  • Read this as “the probability of making the transition from state i to state j in the time increment h.”
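Time homogeneity can be checked by simulation. Here is a minimal sketch, assuming a made-up two-state matrix (not ratings data): simulating a long path of a time-homogeneous chain and tabulating one-step moves recovers the p_{i,j}(1) entries.

```python
import numpy as np

# Illustrative two-state time-homogeneous chain; this P is made up
# for the demo, not estimated from ratings data.
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])

rng = np.random.default_rng(0)
n_steps = 100_000
x = np.zeros(n_steps, dtype=int)
for t in range(1, n_steps):
    # The next state depends only on the current state (Markov),
    # through the same P at every t (time homogeneity).
    x[t] = rng.choice(2, p=P[x[t - 1]])

# Empirical one-step transition frequencies: count each i -> j move.
counts = np.zeros((2, 2))
for i, j in zip(x[:-1], x[1:]):
    counts[i, j] += 1
P_hat = counts / counts.sum(axis=1, keepdims=True)
```

With enough steps, P_hat approaches P, because p_{i,j}(1) does not depend on t.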

SLIDE 9
  • Because of the Markov property, transition probabilities satisfy a core set of equations known as the Chapman-Kolmogorov equations.

  • We derive them using two basic parts of the calculus of probabilities:

    1. Conditional probability:

       P[A | B] = P[A ∩ B] / P[B]  ⇔  P[A ∩ B] = P[A | B] × P[B].

    2. Total probability: if B_1, B_2, . . . , B_J is a partition, then

       P[A] = Σ_j P[A ∩ B_j].

SLIDE 10
  • Now if h and h′ are both > 0,

      p_{i,k}(h + h′)
        = P[X(t + h + h′) = k ∩ X(t) = i] / P[X(t) = i]
        = Σ_j P[X(t + h + h′) = k ∩ X(t + h) = j ∩ X(t) = i] / P[X(t) = i]
        = Σ_j P[X(t + h + h′) = k | X(t + h) = j ∩ X(t) = i] P[X(t + h) = j ∩ X(t) = i] / P[X(t) = i]
        = Σ_j P[X(t + h + h′) = k | X(t + h) = j] P[X(t + h) = j ∩ X(t) = i] / P[X(t) = i]
        = Σ_j P[X(t + h + h′) = k | X(t + h) = j] P[X(t + h) = j | X(t) = i]
        = Σ_j p_{i,j}(h) p_{j,k}(h′).
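The Chapman-Kolmogorov sum is easy to verify numerically. A small sketch, using an illustrative 3-state matrix (the entries are made up) and taking h = h′ = 1 step:

```python
import numpy as np

# Illustrative 3-state one-step transition matrix (entries are made up).
P1 = np.array([[0.90, 0.08, 0.02],
               [0.05, 0.85, 0.10],
               [0.00, 0.00, 1.00]])

# Two-step probability p_{i,k}(2) from the Chapman-Kolmogorov sum,
# summing over every intermediate state j at the halfway time.
i, k = 0, 2
p_ik_2 = sum(P1[i, j] * P1[j, k] for j in range(3))

# The same quantity is the (i, k) entry of the matrix product P(1) P(1).
P2 = P1 @ P1
```

Here p_ik_2 equals 0.90 × 0.02 + 0.08 × 0.10 + 0.02 × 1.00 = 0.046, matching P2[0, 2].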

SLIDE 11
  • When the number of states is finite, say N, we can put these transition probabilities in a matrix

      P(h) = [ p_{1,1}(h)  p_{1,2}(h)  . . .  p_{1,N}(h) ]
             [ p_{2,1}(h)  p_{2,2}(h)  . . .  p_{2,N}(h) ]
             [    . . .       . . .     . . .    . . .   ]
             [ p_{N,1}(h)  p_{N,2}(h)  . . .  p_{N,N}(h) ]

  • The Chapman-Kolmogorov equations may then be written more compactly as

      P(h + h′) = P(h) P(h′).

  • Note that each row of P(h) is a conditional distribution, and therefore sums to 1.
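Both matrix-form properties can be illustrated directly. A sketch with a hypothetical 3-state matrix (the entries are illustrative only, not estimated from ratings data):

```python
import numpy as np

# Hypothetical 3-state one-year matrix over coarse rating classes
# (investment grade, speculative, default); entries are illustrative.
P_1 = np.array([[0.95, 0.04, 0.01],
                [0.06, 0.88, 0.06],
                [0.00, 0.00, 1.00]])

# Each row is a conditional distribution given the current state,
# so every row sums to 1.
row_sums = P_1.sum(axis=1)

# Chapman-Kolmogorov in matrix form: P(h + h') = P(h) P(h').
P_2 = P_1 @ P_1      # P(2) = P(1) P(1)
P_3 = P_2 @ P_1      # P(3) = P(2) P(1)
```

Matrix products of stochastic matrices are again stochastic, so each P(h) built this way also has rows summing to 1.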

SLIDE 12
  • In discrete time, that is when t is an integer, the one-step transition matrix P(1) = P is the key information, because

      P(h) = P(1 + (h − 1))
           = P(1) × P(h − 1)
           = P × P(h − 1)
           = P² × P(h − 2)
           = . . .
           = P^h.

  • Once we have an estimate of P, we can then easily get estimates of all other transition matrices.
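This is how the earlier question about 5-year default probability would be answered in practice. A sketch with a hypothetical one-year matrix (the numbers are made up for illustration):

```python
import numpy as np

# Hypothetical one-year transition matrix for states (A, B, Default);
# the numbers are made up, and default is treated as absorbing.
P = np.array([[0.92, 0.07, 0.01],
              [0.08, 0.86, 0.06],
              [0.00, 0.00, 1.00]])

# The h-step transition matrix is P(h) = P^h.
P5 = np.linalg.matrix_power(P, 5)

# If a bond is A-rated today, the chance that it is in default
# within 5 years is the (A, Default) entry of P(5).
p_default_5y = P5[0, 2]
```

Because default is absorbing in this sketch, P5[0, 2] accumulates every path into default over the 5 years, not just the paths ending there by chance.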

SLIDE 13
  • We shall work with discrete time data, with a focus on one-year transition probabilities, but ratings are in fact changed in (almost) continuous time.

  • In a continuous time model, the key information consists of transition rates.

  • We assume that for small h and for i ≠ j,

      p_{i,j}(h) = h q_{i,j} + o(h),

    where o(h) means an error term that is small compared with h, and q_{i,j} is the rate of transitions from i to j.

SLIDE 14
  • That is,

      q_{i,j} = lim_{h↓0} p_{i,j}(h) / h.

  • Because Σ_j p_{i,j}(h) = 1, it follows that

      p_{i,i}(h) = 1 − h Σ_{j≠i} q_{i,j} + o(h).

  • In matrix terms,

      P(h) = I + hQ + o(h),

    where the diagonal entry q_{i,i} = −Σ_{j≠i} q_{i,j}.
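The structure of Q is easy to make concrete. A sketch with made-up rates: off-diagonal entries are the non-negative transition rates, each diagonal entry is minus the rest of its row, so every row of Q sums to 0, and for small h the matrix I + hQ is itself (approximately) a transition matrix.

```python
import numpy as np

# Illustrative rate matrix Q (made-up rates): off-diagonal entries
# q_{i,j} >= 0, and each diagonal entry is minus the sum of the rest
# of its row, so every row of Q sums to 0.
Q = np.array([[-0.10,  0.08,  0.02],
              [ 0.05, -0.15,  0.10],
              [ 0.00,  0.00,  0.00]])   # last state absorbing: no exits

h = 1e-4
# First-order approximation P(h) = I + hQ + o(h) for small h.
P_h_approx = np.eye(3) + h * Q
```

Because the rows of Q sum to 0, the rows of I + hQ sum to 1 exactly, and for small enough h all of its entries are non-negative.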

SLIDE 15
  • Now

      dP(h)/dh = lim_{h′↓0} [P(h′ + h) − P(h)] / h′
               = lim_{h′↓0} [P(h′) P(h) − P(h)] / h′
               = lim_{h′↓0} [P(h′) − I] P(h) / h′
               = lim_{h′↓0} [I + h′Q + o(h′) − I] P(h) / h′
               = Q P(h).

  • This system of differential equations has the solution

      P(h) = exp(hQ) = I + hQ + (1/2) h² Q² + . . .
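The series solution can be computed directly. A sketch, assuming the same kind of made-up rate matrix as above: a truncated power series for exp(hQ), checked against the Chapman-Kolmogorov property exp(2Q) = exp(Q)².

```python
import numpy as np

def expm_series(A, n_terms=60):
    """Truncated power series exp(A) = I + A + A^2/2! + A^3/3! + ..."""
    n = A.shape[0]
    result = np.eye(n)
    term = np.eye(n)
    for k in range(1, n_terms):
        term = term @ A / k      # term is now A^k / k!
        result = result + term
    return result

# Illustrative rate matrix (rows sum to 0; the rates are made up).
Q = np.array([[-0.10,  0.08,  0.02],
              [ 0.05, -0.15,  0.10],
              [ 0.00,  0.00,  0.00]])

P1 = expm_series(1.0 * Q)   # P(1) = exp(Q)
P2 = expm_series(2.0 * Q)   # P(2) = exp(2Q)
```

Each exp(hQ) is a proper transition matrix (non-negative entries, rows summing to 1), and P2 equals P1 @ P1, as P(h + h′) = P(h) P(h′) requires. In practice a library routine such as a matrix-exponential solver would replace the hand-rolled series.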

SLIDE 16
  • So in the continuous time case, we need only an estimate of the rate matrix Q, and from it we can construct all the transition matrices.

  • When we work with discrete time data, but we know that they arise from a continuous time process, we should remember that

      P(1) = exp(Q),

    which imposes restrictions on the form of P(1).

  • That is, not every one-step transition matrix can be written in this form for non-negative rates q_{i,j}, i ≠ j.
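One way to see the restriction, sketched with a made-up matrix: take a one-step matrix with p_{1,3}(1) = 0 even though state 3 is reachable in two steps. Any candidate Q = log P(1) then has a negative off-diagonal entry, so it is not a valid rate matrix. The hand-rolled series below computes the matrix logarithm; it applies here because the spectral radius of P − I is 0.5.

```python
import numpy as np

def logm_series(P, n_terms=400):
    """Matrix logarithm via log(I + M) = M - M^2/2 + M^3/3 - ...,
    with M = P - I; valid when the spectral radius of M is below 1."""
    n = P.shape[0]
    M = P - np.eye(n)
    term = np.eye(n)
    result = np.zeros((n, n))
    for k in range(1, n_terms):
        term = term @ M          # term is now M^k
        result = result + ((-1) ** (k + 1)) * term / k
    return result

# One-step matrix with p_{1,3}(1) = 0, although 1 -> 2 -> 3 happens
# with probability 0.5 * 0.5 = 0.25 in two steps (illustrative only).
P = np.array([[0.5, 0.5, 0.0],
              [0.0, 0.5, 0.5],
              [0.0, 0.0, 1.0]])

Q_candidate = logm_series(P)
# Q_candidate has row sums 0, but its (1, 3) entry is negative, so no
# rate matrix with non-negative off-diagonal entries satisfies
# exp(Q) = P: this one-step matrix cannot arise from a continuous
# time model of this form.
```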
