SLIDE 1

EMLYON, 07 April 2016

FILTERING WITH MULTIVARIATE COUNTING PROCESSES AND AN APPLICATION TO CREDIT RISK (*)

Ragnar Norberg
London School of Economics & Université Lyon 1
Homepage: http://isfa.univ-lyon1.fr/~norberg

(*) Based on joint work with Areski Cousin, Université Lyon 1

SLIDE 2

(Ω, F, F = (F_t)_{t∈[0,T]}, P).
Θ = (Θ_t)_{t∈[0,T]}: F-adapted, non-observable.
N = (N_t)_{t∈[0,T]}: F-adapted, the only observable object.

F^N = (F^N_t)_{t∈[0,T]}: the flow of statistical information.

Estimator of Θ_t based on the information available at time t:

Θ̂_t = E[Θ_t | F^N_t]

(optimal in the mean squared error (MSE) sense).

Notation: for any process X = (X_t)_{t∈[0,T]},  X̂_t = E[X_t | F^N_t].

SLIDE 3

EXAMPLE. N is a simple counting process:

N_t = Σ_{0<s≤t} ΔN_s < ∞,  ΔN_s ∈ {0, 1}.

Assume it is intensity driven:

E[dN_t | F_{t−}] = P[dN_t = 1 | F_{t−}] = 1 − P[dN_t = 0 | F_{t−}] = y_t Θ_t dt,

all equalities up to negligible o(dt). Then

E[dN_t | F^N_{t−}] = E[ E[dN_t | F_{t−}] | F^N_{t−} ] = y_t Θ̂_t dt.   (1)

SLIDE 4

y_t Θ̂_t dt = P[dN_t = 1 | N_{t−} = n, dN_{t_i} = 1, i = 1,…,n]

= P[dN_t = 1, N_{t−} = n, dN_{t_i} = 1, i = 1,…,n] / P[N_{t−} = n, dN_{t_i} = 1, i = 1,…,n]

= E[ P[dN_t = 1, N_{t−} = n, dN_{t_i} = 1, i = 1,…,n | F^Θ_t] ] / E[ P[N_{t−} = n, dN_{t_i} = 1, i = 1,…,n | F^Θ_t] ]

= E[ e^{−∫_0^t y_u Θ_u du} Π_{i=1}^n y_{t_i} Θ_{t_i} dt_i · y_t Θ_t dt ] / E[ e^{−∫_0^t y_u Θ_u du} Π_{i=1}^n y_{t_i} Θ_{t_i} dt_i ].

Cancel the factor y_t dt appearing on both sides of the equation, and cancel the common factors y_{t_i} dt_i in the numerator and denominator on the right.

SLIDE 5

Need to calculate, or compute numerically,

Θ̂_t = E[ e^{−∫_0^t y_u Θ_u du} Θ_{t_1} ⋯ Θ_{t_n} Θ_t ] / E[ e^{−∫_0^t y_u Θ_u du} Θ_{t_1} ⋯ Θ_{t_n} ].   (2)

Simplest case: Θ_t ≡ Θ ∼ Gamma(α, β) with density

(β^α / Γ(α)) θ^{α−1} e^{−βθ},  θ > 0.

Then

Θ̂_t = (N_t + α) / (∫_0^t y_s ds + β).

Exercise: Calculate the second and third moments!
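The Gamma case is just a Bayesian posterior-mean update of the prior level by the observed jump count and the accumulated exposure, which is easy to check by simulation. A minimal sketch (the helper name and the constant exposure y_t ≡ 1 are illustrative assumptions):

```python
import numpy as np

def gamma_filter(n_t, exposure_integral, alpha, beta):
    """Filtered estimate when Θ ~ Gamma(alpha, beta) is constant in time:
    Θ̂_t = (N_t + alpha) / (∫_0^t y_s ds + beta)."""
    return (n_t + alpha) / (exposure_integral + beta)

# Monte Carlo sanity check with y_t ≡ 1 on [0, T]: averaging Θ over paths
# with the same jump count should reproduce the filter value.
rng = np.random.default_rng(0)
alpha, beta, T = 2.0, 4.0, 10.0
thetas = rng.gamma(alpha, 1.0 / beta, size=200_000)   # rate parametrization
counts = rng.poisson(thetas * T)                      # N_T given Θ

n = 3
mc_estimate = thetas[counts == n].mean()              # empirical E[Θ | N_T = 3]
print(gamma_filter(n, T, alpha, beta), mc_estimate)   # both ≈ 5/14 ≈ 0.357
```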


SLIDE 6

If Θ is a process, there is usually no explicit expression for Θ̂_t. The numerator and denominator in (2) can be computed numerically, typically as solutions to backward ODEs. The entire computational scheme needs to be repeated when we move forward in time: there is no recycling of previously computed values.

SLIDE 7

THE FILTERING APPROACH seeks to express Θ̂ as the solution to a forward SDE that allows computation of Θ̂ by a simple recursive updating formula. Classics: Brémaud (1981), Karr (1991), van Schuppen (1977).

THEOREM (classic). Let Θ be of the form dΘ_t = a_t dt + dM_t, where M is an F-martingale with no jump times in common with N. The process Θ̂ is the solution to the forward SDE

dΘ̂_t = â_t dt + η_t (dN_t − ν̂_t dt),   (3)

η_t = E[Θ_{t−} ν_{t−} | F^N_{t−}] / ν̂_{t−} − Θ̂_{t−},

where ν is the F-intensity of N.

SLIDE 8

The forward dynamics (3) suggests an algorithm for recursive updating of the process Θ̂.

Predicament arising from (3): the dynamics involves the filtered product (Θν), the dynamics of which involves (Θν²), and so on indefinitely.

CLUE: If Θ = ν and ν is 0 or 1, then Θ_t ν_t = (ν_t)² = ν_t, and the infinity predicament is resolved. The same holds if Θ and ν are finite-valued. This comes later.

SLIDE 9

MULTIVARIATE COUNTING PROCESS: N^j = (N^j_t)_{t∈[0,T]}, j = 1,…,p, are simple counting processes with F-intensities ν^j = (ν^j_t)_{t∈[0,T]}, j = 1,…,p. The compensated counting processes

N^j_t − ∫_0^t ν^j_s ds   (4)

are F-martingales: for t < u,

E[ N^j_u − ∫_0^u ν^j_s ds | F_t ]
= N^j_t − ∫_0^t ν^j_s ds + E[ ∫_t^u E[dN^j_s − ν^j_s ds | F_{s−}] | F_t ]
= N^j_t − ∫_0^t ν^j_s ds.

SLIDE 10

Assume the N^j have no common jumps: dN^j_t dN^k_t = δ_{jk} dN^j_t, hence

E[dN^j_t dN^k_t | F_{t−}] = δ_{jk} ν^j_t dt.

This entails orthogonality of the martingales in (4):

E[ (dN^j_t − ν^j_t dt)(dN^k_t − ν^k_t dt) | F_{t−} ]
= E[dN^j_t dN^k_t | F_{t−}] − ν^j_t dt E[dN^k_t | F_{t−}] − E[dN^j_t | F_{t−}] ν^k_t dt + ν^j_t dt ν^k_t dt
= δ_{jk} ν^j_t dt + o(dt).   (5)

SLIDE 11

Consequence: for g and h predictable (essentially left-continuous),

E[ ∫_t^T g_r (dN^j_r − ν^j_r dr) ∫_t^T h_s (dN^k_s − ν^k_s ds) | F_t ] = δ_{jk} E[ ∫_t^T g_s h_s ν^j_s ds | F_t ].   (6)

Follows from Fubini and iterated expectations:

E[ ∫_t^T ∫_t^T E[ g_r (dN^j_r − ν^j_r dr) h_s (dN^k_s − ν^k_s ds) | F_{max(r,s)−} ] | F_t ].

Diagonal terms (r = s) under the integral reduce to δ_{jk} g_s h_s ν^j_s ds due to (5), and off-diagonal terms vanish: e.g., for any r < s,

g_r h_s (dN^j_r − ν^j_r dr) ( E[dN^k_s | F_{s−}] − ν^k_s ds ) = 0.

SLIDE 12

INNOVATION THEOREM: The natural filtration of N is

F^N = (F^N_t)_{t∈[0,T]},  F^N_t = σ{N^j_s; s ∈ [0,t], j = 1,…,p}.

F^N_0 is trivial, hence E[X | F^N_0] is constant for any integrable random variable X.

Innovation theorem: the F^N-intensities of the N^j are

ν̂^j_t = E[ν^j_t | F^N_t].

Follows from the tower property of conditional expectations applied to the definition (1) of the intensity:

E[dN^j_t | F^N_{t−}] = E[ E[dN^j_t | F_{t−}] | F^N_{t−} ] = E[ν^j_t dt | F^N_{t−}] = E[ν^j_t | F^N_t] dt

(left-limits are of no significance in the presence of the factor dt).

SLIDE 13

THEOREM (Filtering with multivariate counting processes). Let Θ be of the form

Θ_t = ∫_0^t a_s ds + M_t,   (7)

where M is an F-martingale with no jump times in common with N. Then Θ̂ is the solution to the forward SDE

dΘ̂_t = â_t dt + Σ_j η^j_t (dN^j_t − ν̂^j_t dt),   (8)

η^j_t = E[Θ_{t−} ν^j_{t−} | F^N_{t−}] / ν̂^j_{t−} − Θ̂_{t−},

with initial condition

Θ̂_0 = E[Θ_0].   (9)

SLIDE 14

Proof: Rewrite (7) as

Θ_t = ∫_0^t â_s ds + L_t + M_t,   (10)

where L_t = ∫_0^t (a_s − â_s) ds. Take conditional expectation, given F^N_t, in (10):

Θ̂_t = ∫_0^t â_s ds + L̂_t + M̂_t.

SLIDE 15

L̂ is an F^N-martingale: for r < t,

E[L̂_t − L̂_r | F^N_r] = E[ ∫_r^t (a_s − E[a_s | F^N_s]) ds | F^N_r ] = ∫_r^t ( E[a_s | F^N_r] − E[a_s | F^N_r] ) ds = 0.

M̂ is an F^N-martingale:

E[M̂_t | F^N_r] = E[ E[M_t | F^N_t] | F^N_r ] = E[M_t | F^N_r] = E[ E[M_t | F_r] | F^N_r ] = E[M_r | F^N_r] = M̂_r.

SLIDE 16

Introduce

K_t = L_t + M_t = Θ_t − ∫_0^t â_s ds,

so that K̂ is an F^N-martingale. Predictable representation:

K̂_t = γ + Σ_j ∫_0^t η^j_s (dN^j_s − ν̂^j_s ds),   (11)

where γ = K̂_0 is F^N_0-measurable (hence constant), and the η^j are F^N-predictable processes independent of t.

SLIDE 17

Any integrable F^N_t-measurable random variable has a representation

g + Σ_j ∫_0^t h^j_s (dN^j_s − ν̂^j_s ds),

with g constant and the h^j F^N-predictable.

Since K̂_t is the L² projection of K_t onto the space of square integrable F^N_t-measurable random variables, the coefficients in the representation (11) are uniquely determined by the normal equations

E[ ( K_t − γ − Σ_j ∫_0^t η^j_s (dN^j_s − ν̂^j_s ds) ) ( g + Σ_j ∫_0^t h^j_s (dN^j_s − ν̂^j_s ds) ) ] = 0

for all constants g and all F^N-predictable h^j. For details, see the paper.
SLIDE 18

The infinity predicament persists. The next section offers a resolution to this problem when Θ and the ν^j are driven by a process with a finite state space. The clue is that such a process is a linear combination of indicator processes, each of which is binary and therefore identical to its square and all higher powers.

SLIDE 19

MARKOV MODULATED MULTIPLICATIVE INTENSITIES. Assume the latent process Θ governs the intensities ν^j. Assume Θ is a Markov chain with finite state space T = {1,…,m} and constant transition rates κ_{hi}, i ≠ h, and define κ_{hh} = −Σ_{i; i≠h} κ_{hi}.

Introduce the indicator processes

I^h_t = 1[Θ_t = h]

and the counting processes

K^{hi}_t = #{s ∈ (0,t]; Θ_{s−} = h, Θ_s = i}.

Then

Θ_t = Σ_{h∈T} h I^h_t.   (12)
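A path of such a finite-state chain Θ can be simulated directly from its transition rates: wait an exponential time with rate −κ_{hh}, then jump to state i with probability κ_{hi}/(−κ_{hh}). A minimal sketch (the function name is an illustrative assumption):

```python
import numpy as np

def simulate_chain(kappa, theta0, T, rng):
    """Simulate a path of a Markov chain with generator kappa on [0, T].
    Returns jump times (starting at 0) and the states entered at those times."""
    times, states = [0.0], [theta0]
    t, h = 0.0, theta0
    while True:
        rate = -kappa[h, h]                    # total jump rate out of state h
        if rate <= 0:                          # absorbing state
            break
        t += rng.exponential(1.0 / rate)
        if t > T:
            break
        probs = kappa[h].clip(min=0.0) / rate  # jump distribution over i != h
        h = rng.choice(len(probs), p=probs)
        times.append(t)
        states.append(h)
    return np.array(times), np.array(states)

# two-state example with illustrative rates
times, states = simulate_chain(np.array([[-0.5, 0.5], [1.0, -1.0]]),
                               0, 10.0, np.random.default_rng(7))
```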


SLIDE 20

Assume the intensities of the N^j are of the multiplicative form

ν^j_t = Y^j_t ℓ^{Θ_t, j} = Y^j_t Σ_h ℓ^{h,j} I^h_t,   (13)

where the ℓ^{h,j} are constants and Y^j_t is the "exposure to risk of a jump of type j just before time t". The Y^j depend on N and possibly on censoring mechanisms that are non-informative and can therefore be included in F^N_0. Thus, assume the Y^j are F^N-predictable.

By (12),

Θ̂_t = Σ_h h Î^h_t.   (14)

Filtering of Θ reduces to filtering of the I^h_t, which goes by recursive updating because the indicators are binary processes. Details follow.

SLIDE 21

To apply the Theorem, we need the martingale representation (7) for I^h_t. The starting point is

I^h_t = I^h_0 + Σ_{i; i≠h} (K^{ih}_t − K^{hi}_t).   (15)

The K^{hi} have intensities I^h_{t−} κ_{hi}. Reshaping (15) as

I^h_t = I^h_0 + ∫_0^t Σ_{i; i≠h} (I^i_{s−} κ_{ih} − I^h_{s−} κ_{hi}) ds
+ ∫_0^t Σ_{i; i≠h} [ (dK^{ih}_s − I^i_{s−} κ_{ih} ds) − (dK^{hi}_s − I^h_{s−} κ_{hi} ds) ]
= ∫_0^t a^h_s ds + M^h_t,

SLIDE 22

a^h_t = Σ_{i; i≠h} (I^i_{t−} κ_{ih} − I^h_{t−} κ_{hi}) = Σ_i κ_{ih} I^i_{t−}.   (16)

M^h is a martingale commencing at M^h_0 = I^h_0, and it has no jumps in common with N.

SLIDE 23

Let I^h take the role of Θ in (8). The role of a_t is taken by a^h_t in (16), and the role of Θ_t ν^j_t is taken by (recall (13))

I^h_t Y^j_t Σ_i ℓ^{i,j} I^i_t = Y^j_t ℓ^{h,j} I^h_t,

while

ν̂^j_t = Y^j_t Σ_i ℓ^{i,j} Î^i_t.   (17)

Inserting these into (8)-(9) gives

dÎ^h_t = Σ_i κ_{ih} Î^i_{t−} dt + Σ_j ( ℓ^{h,j} / Σ_i ℓ^{i,j} Î^i_{t−} − 1 ) Î^h_{t−} (dN^j_t − Y^j_t Σ_i ℓ^{i,j} Î^i_{t−} dt),   (18)

which is an explicit recursive updating formula from which all the Î^h can be computed forwards, starting from

Î^h_0 = E[I^h_0].
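Between jumps, (18) is an ODE for the Î^h; at a jump of type j it prescribes the multiplicative (Bayes-type) update Î^h ← Î^h ℓ^{h,j} / Σ_i ℓ^{i,j} Î^i. A minimal Euler sketch of one step of the recursion (the function name and the final renormalization safeguard are illustrative assumptions, not part of the slides):

```python
import numpy as np

def filter_step(I_hat, kappa, ell, Y, dt, jump_type=None):
    """One step of the forward recursion (18).

    I_hat : current filtered indicator probabilities Î^h, shape (m,)
    kappa : generator, kappa[h, i] = rate h -> i, rows sum to 0
    ell   : ell[h, j] = intensity level ℓ^{h,j}; Y : exposures Y^j, shape (p,)
    """
    nu_bar = I_hat @ ell                                  # Σ_i ℓ^{i,j} Î^i
    # drift of (18) with dN = 0: chain dynamics + intensity correction
    drift = kappa.T @ I_hat - I_hat * ((ell - nu_bar) @ Y)
    I_new = I_hat + drift * dt
    if jump_type is not None:                             # Bayes-type jump update
        j = jump_type
        I_new = I_new * ell[:, j] / (I_new @ ell[:, j])
    return I_new / I_new.sum()                            # renormalize for stability

# illustrative two-state, one-jump-type example
kappa = np.array([[-0.5, 0.5], [0.5, -0.5]])
ell = np.array([[0.1], [2.0]])                            # state 1 jumps far more often
I_hat = filter_step(np.array([0.5, 0.5]), kappa, ell, np.ones(1), 0.01)
```

Observing a jump pushes mass toward the high-intensity state, while a jump-free interval pushes it away, which is exactly the qualitative behaviour (18) encodes.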


SLIDE 24

This initial condition requires that E[I^h_0], h ∈ T, be specified. A natural choice is the stationary distribution (π_1,…,π_m) of Θ, which is assumed to exist and is the solution to

Σ_h π_h κ_{hi} = 0, i ∈ T,  Σ_h π_h = 1.

With this choice the initial conditions become

Î^h_0 = π_h, h ∈ T.   (19)

Once the Î^h_t have been computed, Θ̂_t is computed from (14), ν̂^j_t is computed from (17), and similarly for any function of Θ_t. Of particular interest are the occurrence rates per unit of risk exposed,

λ^j_t = Σ_h ℓ^{h,j} I^h_t:   λ̂^j_t = Σ_h ℓ^{h,j} Î^h_t.
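Finding the stationary distribution is a small linear-algebra exercise: the balance equations Σ_h π_h κ_{hi} = 0 are linearly dependent, so replace one of them by the normalization constraint. A sketch (the function name is an illustrative assumption):

```python
import numpy as np

def stationary_distribution(kappa):
    """Solve Σ_h pi_h * kappa[h, i] = 0 for all i, with Σ pi_h = 1."""
    m = kappa.shape[0]
    A = np.vstack([kappa.T[:-1], np.ones(m)])  # drop one redundant balance equation
    b = np.zeros(m)
    b[-1] = 1.0                                # normalization Σ pi_h = 1
    return np.linalg.solve(A, b)

print(stationary_distribution(np.array([[-1.0, 1.0], [2.0, -2.0]])))  # ≈ [2/3, 1/3]
```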


SLIDE 25

APPLICATION TO CREDIT RISK. Consider a bond market comprising a finite number of individual bonds. All bonds are affected by randomly varying market conditions, represented by a hidden Markov chain Θ as described above. Bond no. q enters the market at time s_q and is observed until time u_q, 0 ≤ s_q < u_q ≤ T. At any time the bond is in (only) one of a finite set of possible states, J = {1,…,n}, representing distinct credit ratings in descending order, n = "default". For instance, long term investment grades by Standard & Poor's translate as AAA = 1, AA = 2, A = 3, BBB = 4, …, D = 10 (default).

SLIDE 26

Let Z^q_t be the state of the bond at time t. Introduce the indicator and counting processes

J^{q,j}_t = 1[Z^q_t = j],  N^{q,jk}_t = #{s ∈ [0,t]; Z^q_{s−} = j, Z^q_s = k}.

Z^q is intensity driven:

P[Z^q_{t+dt} = k | Θ_t = h, Z^q_t = j] = ℓ^{h,jk} dt + o(dt).

The Z^q are conditionally independent, given F^Θ_T.

Censoring (s_q, u_q) is non-informative, hence part of F^N_0.

SLIDE 27

Inferences about Θ can be based on the amalgamated counting processes

N^{jk}_t = Σ_q N^{q,jk}_t,

driven by the F-intensities

ν^{jk}_t = Y^j_t Σ_h ℓ^{h,jk} I^h_t,  Y^j_t = Σ_{q; s_q ≤ t < u_q} J^{q,j}_{t−}.

The exposure process Y^j is left-continuous. It increases by 1 upon a jump in any N^{kj}, k ≠ j, or upon the entry of a new bond with rating j, and it decreases by 1 upon a jump in any N^{jk}, k ≠ j, or upon the expiry of a bond with rating j.
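The exposure bookkeeping just described is simple event-driven counting. A minimal sketch (the function name and the event-tuple encoding are illustrative assumptions):

```python
import numpy as np

def apply_event(Y, event):
    """Update exposures Y[j] = number of bonds currently rated j."""
    kind = event[0]
    if kind == "transition":        # a jump in N^{jk}: one bond moves j -> k
        _, j, k = event
        Y[j] -= 1
        Y[k] += 1
    elif kind == "entry":           # a new bond enters with rating j
        _, j = event
        Y[j] += 1
    elif kind == "expiry":          # a bond rated j leaves observation
        _, j = event
        Y[j] -= 1
    return Y

# two entries at rating 0, one downgrade 0 -> 1, one expiry at rating 0
Y = np.zeros(3, dtype=int)
for ev in [("entry", 0), ("entry", 0), ("transition", 0, 1), ("expiry", 0)]:
    Y = apply_event(Y, ev)
```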


SLIDE 28

The situation fits into the framework of filtering with multivariate counting processes, the only new feature being that the indices j are replaced by double indices jk in the counting processes. The filtering dynamics (18) becomes

dÎ^h_t = Σ_i κ_{ih} Î^i_{t−} dt + Σ_{j≠k} ( ℓ^{h,jk} / Σ_i ℓ^{i,jk} Î^i_{t−} − 1 ) Î^h_{t−} (dN^{jk}_t − Y^j_t Σ_i ℓ^{i,jk} Î^i_{t−} dt),   (20)

and the initial conditions (19) remain unchanged.

SLIDE 29

The conditional probability that a bond rated j at time t will be rated k ≠ j by time t + dt, given the observed rating histories, is

Σ_h ℓ^{h,jk} Î^h_t dt.

This is the estimated instantaneous rate of transfer. Usually one would be interested in other functionals, like the probability of default in a certain time period or the expected discounted cash-flow (price) of a bond.

SLIDE 30

As an example, suppose a generic bond yields a return (coupon) d_j per time unit when it is rated j, and that its principal b_j will be paid out at term u if it is then rated j. Normally d_j and b_j are 0 in the default state j = n, and fixed and constant in all other states. The present value of future payoffs at time t ∈ (s, u] is

V_t = ∫_t^u e^{−r(τ−t)} Σ_j J^j_τ d_j dτ + e^{−r(u−t)} Σ_j J^j_u b_j,   (21)

where r is the interest rate, assumed to be constant. The value (price) of the bond at time t is

v_t = E[V_t | F_t],

where the expectation is taken under a pricing measure, which is assumed to be P (modelling is under the pricing measure).

SLIDE 31

The pair (Θ, Z) is a process with state space T × J, and it is Markov since its intensities depend only on its current state:

P[(Θ, Z)_{t+dt} = (i, k) | F_t] = P[(Θ, Z)_{t+dt} = (i, k) | (Θ, Z)_t],

and

P[(Θ, Z)_{t+dt} = (i, k) | (Θ, Z)_t = (h, j)] =
  κ_{hi} dt + o(dt)    if i ≠ h, k = j,
  ℓ^{h,jk} dt + o(dt)  if i = h, k ≠ j,
  o(dt)                if i ≠ h, k ≠ j.

SLIDE 32

By the Markov property,

v_t = E[V_t | (Θ, Z)_t] = Σ_{h,j} I^h_t J^j_t v_{h,j}(t),   (22)

where the v_{h,j}(t) are the state-wise values at time t:

v_{h,j}(t) = E[V_t | (Θ, Z)_t = (h, j)].

SLIDE 33

Decompose the present value in (21) into payments in the small time interval (t, t + dt] and payments in (t + dt, u]:

V_t = Σ_j J^j_t d_j dt + e^{−r dt} V_{t+dt} + o(dt).

Use the direct backward argument, conditioning on what happens in the small time interval [t, t + dt):

v_{h,j}(t) = (1 − Σ_{i; i≠h} κ_{hi} dt − Σ_{k; k≠j} ℓ^{h,jk} dt) (d_j dt + e^{−r dt} v_{h,j}(t + dt))
+ Σ_{i; i≠h} κ_{hi} dt (O(dt) + e^{−r dt} v_{i,j}(t + dt))
+ Σ_{k; k≠j} ℓ^{h,jk} dt (O(dt) + e^{−r dt} v_{h,k}(t + dt)) + o(dt).

SLIDE 34

Multiply out, subtract v_{h,j}(t + dt) on both sides, divide through by dt, and let dt → 0 to arrive at the backward ODEs

(d/dt) v_{h,j}(t) = v_{h,j}(t) r − d_j − Σ_{i; i≠h} κ_{hi} (v_{i,j}(t) − v_{h,j}(t)) − Σ_{k; k≠j} ℓ^{h,jk} (v_{h,k}(t) − v_{h,j}(t)),

with terminal conditions v_{h,j}(u) = b_j, h ∈ T, j ∈ J (the principal due at term by (21)). Solve numerically by e.g. Runge-Kutta to produce the values v_{h,j}(t) for all h and j and all t on some fine time grid. IN ONE GO!
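The backward sweep can be sketched with classical RK4, using the terminal condition v_{h,j}(u) = b_j implied by (21). The function name, array layout, and step count below are illustrative assumptions:

```python
import numpy as np

def bond_values(kappa, ell, d, b, r, u, n_steps=1000):
    """Backward RK4 sweep for the state-wise bond values v_{h,j}(t).

    kappa : (m, m) generator of the hidden chain Θ (rows sum to 0)
    ell   : (m, n, n) rating transition rates ℓ^{h,jk} (off-diagonal entries used)
    d, b  : (n,) coupon rates and principals; r : constant interest rate
    Returns grid t and v with v[i, h, j] ≈ v_{h,j}(t_i).
    """
    m, n = kappa.shape[0], d.shape[0]
    L = ell.copy()
    idx = np.arange(n)
    L[:, idx, idx] = 0.0
    L[:, idx, idx] = -L.sum(axis=2)        # make each ℓ^{h,j·} row sum to 0

    def rhs(vt):                           # right-hand side of the backward ODE
        return r * vt - d[None, :] - kappa @ vt - np.einsum('hjk,hk->hj', L, vt)

    dt = u / n_steps
    t = np.linspace(0.0, u, n_steps + 1)
    v = np.empty((n_steps + 1, m, n))
    v[-1] = b[None, :]                     # terminal condition v_{h,j}(u) = b_j
    for i in range(n_steps - 1, -1, -1):   # classical RK4, stepping backwards
        vt = v[i + 1]
        k1 = rhs(vt)
        k2 = rhs(vt - 0.5 * dt * k1)
        k3 = rhs(vt - 0.5 * dt * k2)
        k4 = rhs(vt - dt * k3)
        v[i] = vt - dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
    return t, v
```

One sweep produces the whole table v_{h,j}(t_i), matching the slide's "IN ONE GO" remark; with no hidden-state or rating transitions the scheme reduces to deterministic discounting of coupon and principal.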


SLIDE 35

Once these values have been computed, one has determined the value process in (22). Finally, at any given time t one obtains the value based on the observed rating histories as

E[v_t | F^N_t] = Σ_{h,j} Î^h_t J^j_t v_{h,j}(t),

where the Î^h_t are the filtered estimates of the I^h_t computed by (20).
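This final combination is just a bilinear form in the filtered regime probabilities and the observed rating indicators. A toy numerical sketch (all numbers are illustrative assumptions):

```python
import numpy as np

I_hat = np.array([0.7, 0.3])            # filtered Î^h_t for two hidden regimes
J = np.array([0.0, 1.0, 0.0])           # observed rating: bond currently in state 2
v = np.array([[102.0, 97.0, 0.0],       # state-wise values v_{h,j}(t); default worth 0
              [100.0, 93.0, 0.0]])

price = I_hat @ v @ J                   # Σ_{h,j} Î^h_t J^j_t v_{h,j}(t)
print(price)                            # 0.7 * 97 + 0.3 * 93 ≈ 95.8
```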


SLIDE 36

REFERENCES

Andersen PK, Borgan Ø, Gill RD, Keiding N (1993): Statistical Models Based on Counting Processes. Springer.

Brémaud P (1981): Point Processes and Queues. Springer-Verlag.

Cariboni J, Schoutens W (2009): Jumps in intensity models: investigating the performance of Ornstein-Uhlenbeck processes in credit risk modeling. Metrika 69, 173-198.

Frey R, Runggaldier W (2010): Pricing credit derivatives under incomplete information: a nonlinear-filtering approach. Finance and Stochastics 14, 495-526.

SLIDE 37

Leijdekker V, Spreij P (2011): Explicit Computations for a Filtering Problem with Point Process Observations with Applications to Credit Risk. Probability in the Engineering and Informational Sciences 25, 393-418.

Karr A (1991): Point Processes and their Statistical Inference, 2nd edition. Marcel Dekker.

van Schuppen JH (1977): Filtering, prediction, and smoothing for counting process observations - a martingale approach. SIAM J. Appl. Math. 32, 552-570.