CFP Risk EMEA Conference, 24th May 2016. Brandon Davies.

SLIDE 1

CFP Risk EMEA Conference 24th May 2016 Brandon Davies

SLIDE 2

} It is often stated that you cannot manage what you cannot measure.
} I personally have, many times in a 40-year career in banking, had to do just that.
} I am, however, more than willing to accept that it is far easier to manage something if you can measure it.
} Though this does have one major problem: you need to be sure you do indeed have an appropriate measure of whatever it is you are measuring.

SLIDE 3

} Note I use the word appropriate, not accurate. Accuracy is always better than inaccuracy, but if the measure is not appropriate you may simply be left with a very accurate wrong answer.
} The appropriateness of a measure can be gauged by how well it does the job of describing the thing to be measured, so we come to the crux of this presentation:
} How well do our current measures do in measuring risk?

SLIDE 4

} The answer is not immediately obvious.
} Many models of risk clearly did not work well in the current series of financial crises that seem to have become a permanent feature of the financial world since the collapse of Lehman Brothers in 2008.
} Does this point to problems with the measure as well as with its application through the models to which it was applied?

SLIDE 5

} I believe it does. Indeed, I believe our problems with the measure come from a very early problem in developing a suite of measures, models and market products (tools) for managing risk.
} To support my argument I show below (Figure 1) a slide that is now some 30 years old.
} It was used to explain to a major bank's board committee some of the problems the treasury (it pre-dates a dedicated risk department) was having with measuring risk.

SLIDE 6

GPD = Generalised Pareto Distribution

SLIDE 7

} The slide focused on the problems of describing an appropriate distribution of prices that we should use to fit the relatively sparse historic data we then had in relation to the assets being measured.
} It also covered different risk measures: it shows both the Mean/Variance measure of risk and Expected Shortfall as a measure of risk (which is now expected to be a required measure for market risk as a result of the Fundamental Review of the Trading Book).
} Why did the slide show two measures of risk?

SLIDE 8

} At the time we were not entirely sure how we should define risk, and with risk we felt the definition and the measure were particularly closely connected.
} Define risk one way and you need one measure; define it another way and you need a different measure. The two definitions can be described as:
  • What is the maximum loss within a given probability that we could suffer as a result of holding the given asset portfolio over a given time period?
  • What is the maximum loss that we could suffer as a result of holding the given asset portfolio over a given time period?

SLIDE 9

} The difference is subtle but very important.
} Should we measure risk whilst constraining the results within a certain probability of outcome, OR
} Should we measure risk as an absolute number?
} Subsequently we became very sure about how we wished to define risk, and therefore how we would measure it: we opted for the first definition above and Value at Risk (VaR) as the measure.
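The two definitions map directly onto two sample statistics. As a minimal sketch (simulated normal P&L, purely illustrative; the function names are my own), VaR answers the first question, while Expected Shortfall moves towards the second by averaging the tail that VaR ignores:

```python
import random

random.seed(42)

# Hypothetical daily losses on a single book (positive = loss), purely
# illustrative: real P&L is not normally distributed, as later slides argue.
losses = sorted(random.gauss(0.0, 1.0) for _ in range(10_000))

def value_at_risk(ordered_losses, level=0.99):
    """Definition 1: the loss at a chosen percentile, i.e. the maximum
    loss within a given probability."""
    return ordered_losses[int(level * len(ordered_losses))]

def expected_shortfall(ordered_losses, level=0.99):
    """A step towards definition 2: the average loss beyond the VaR
    percentile, a measure of the tail itself."""
    tail = ordered_losses[int(level * len(ordered_losses)):]
    return sum(tail) / len(tail)

var_99 = value_at_risk(losses)
es_99 = expected_shortfall(losses)
```

ES at a given level is always at least as large as VaR at the same level, because it averages everything beyond the VaR point.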

SLIDE 10

} VaR is a constrained measure; as such, it looks at risk as variance measured at some percentile from the mean (average) outcome.
} Given this, the constraint on the outcome made for a very much simpler measure of risk than we would need were we to look for the most extreme outcomes.
} In many ways, however, it was a much less intellectually defensible definition and measure than the alternative.

SLIDE 11

} When anyone thinks of risk they usually think of some absolutely bad outcome, epitomized say by the risk of death, a pretty absolute measure of risk!
} So why did we settle for measuring financial risk at some point less than the worst possible outcome?

SLIDE 12

} Firstly, risk of financial loss is difficult to measure even if we are looking at a relatively simple portfolio of assets.
} As Figure 1 shows, the measurement problem we frequently encountered was uncertainty over the shape of the distribution of bad outturns that occur in the tails of distributions.
} The data is sparse and will inevitably need to be cleaned. It may fit a number of different distributions which, whilst we can agree they will have a fat tail, differ in just how fat the tail is; the answer often results from a number of assumptions about which data to use and how to "clean" it.
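That sensitivity to data choices can be made concrete with a simple tail-index estimate. This sketch is my own illustration, not from the slide: it draws from an exact Pareto distribution and applies the Hill estimator at two different thresholds. Even on clean synthetic data the threshold matters; on real, messy loss data the two answers typically diverge far more:

```python
import math
import random

random.seed(0)

# Heavy-tailed sample: an exact Pareto with tail index alpha = 3,
# via inverse-transform sampling (an illustrative stand-in for loss data).
alpha_true = 3.0
data = [(1.0 - random.random()) ** (-1.0 / alpha_true) for _ in range(20_000)]

def hill_estimate(sample, k):
    """Hill estimator of the tail index from the k largest observations.

    The choice of k is exactly the 'how much of the tail do we trust'
    assumption discussed above: different k, different fatness."""
    ordered = sorted(sample, reverse=True)
    top, threshold = ordered[:k], ordered[k]
    return k / sum(math.log(x / threshold) for x in top)

tail_index_narrow = hill_estimate(data, k=100)   # trust only the far tail
tail_index_wide = hill_estimate(data, k=2_000)   # use a much wider tail
```

A lower estimated tail index means a fatter tail, so the cleaning and threshold assumptions directly drive the measured riskiness.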

SLIDE 13

} Secondly, we are really not interested in the loss from a single asset or portfolio of similar assets; we are really interested in the absolute maximum of losses we could face from holding the entirety of our asset base.
} This means we are really not interested in measuring from the mean to the chosen percentile, but rather from the furthest point of the distribution back to the chosen percentile.
} Measuring risk in this way is usually represented by the Expected Shortfall measure.

SLIDE 14

} Once we start looking at large and complex portfolios of assets we run into a very serious problem in looking at maximum losses.
} The losses are not additive; that is to say, the losses will depend not only on the losses of each portfolio of like assets but also on the correlation between each portfolio.
} This is called Dynamic Conditional Correlation (DCC) and is a particular property of extreme outcomes in financial markets.

SLIDE 15

} What we are saying with DCC is that extremely bad outcomes can be much more likely than would be the case if we assumed that the correlations between different asset portfolios in our overall balance sheet were stable.
} We are also saying that the correlations between individual asset portfolios change (are dynamic), and change differently depending on how the circumstances in which we find ourselves are changing (are conditional).
} This is not the same as fat tails, but in a very real sense it is a story of fatter and fatter tails, i.e. the tail risk ceases to be static and becomes dynamic.
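The non-additivity shows up in even a two-book example. In this sketch (illustrative numbers and correlation values of my own choosing), the same two sub-portfolios produce a materially larger total 99% loss when their correlation shifts up, as DCC says it does in stress:

```python
import math
import random

random.seed(1)

def total_losses(rho, n=20_000):
    """Simulate total losses on two unit-risk books whose standardised
    losses are correlated at rho."""
    out = []
    for _ in range(n):
        z1 = random.gauss(0.0, 1.0)
        z2 = rho * z1 + math.sqrt(1.0 - rho * rho) * random.gauss(0.0, 1.0)
        out.append(z1 + z2)  # total loss is the sum; its risk is not
    return out

def percentile_loss(losses, level=0.99):
    """Loss at the chosen percentile of the simulated distribution."""
    return sorted(losses)[int(level * len(losses))]

calm_var = percentile_loss(total_losses(rho=0.3))    # normal conditions
stress_var = percentile_loss(total_losses(rho=0.9))  # correlations jump in a crisis
```

Each book on its own is unchanged between the two runs; only the correlation moved, yet the combined tail loss grows.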

SLIDE 16

SLIDE 17

} To put this in the language of statistics, we might find in our dynamic world that a 7 standard deviation event is almost inevitable given that a 5 standard deviation event has happened, whereas looking at a static distribution in similar circumstances the 7 standard deviation event is still very unlikely.
} The challenges to those looking to use the measure are:
} To describe the event or events that will start this dynamic correlation process, and
} To measure how the correlations will change given a certain set of, often evolving, events.
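The "7-sigma given 5-sigma" point can be made precise. This sketch (my own illustration) compares P(X > 7 | X > 5) under a static normal distribution and under a fat-tailed Student-t with 3 degrees of freedom, obtaining the t tail by numerical integration of its density:

```python
import math

def normal_tail(x):
    """P(Z > x) for a standard normal, via the complementary error function."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def student_t_tail(x, nu=3, upper=1000.0, steps=200_000):
    """P(T > x) for a Student-t with nu degrees of freedom, by trapezoidal
    integration of the density from x out to a far cutoff."""
    const = math.gamma((nu + 1) / 2) / (math.sqrt(nu * math.pi) * math.gamma(nu / 2))

    def density(t):
        return const * (1.0 + t * t / nu) ** (-(nu + 1) / 2)

    h = (upper - x) / steps
    area = 0.5 * (density(x) + density(upper))
    for i in range(1, steps):
        area += density(x + i * h)
    return area * h

# Conditional probability of exceeding 7 given that 5 has been exceeded
cond_normal = normal_tail(7.0) / normal_tail(5.0)
cond_fat_tail = student_t_tail(7.0) / student_t_tail(5.0)
```

Under the normal, the larger event remains essentially impossible even after the smaller one has occurred; under the fat-tailed t it becomes a realistic next step, which is the behaviour the slide describes.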

SLIDE 18

Source: MPI Stylus, Absolute Return Partners LLP

SLIDE 19

} One set of events which has been connected to DCC is dramatic changes in the liquidity of markets, known as liquidity black holes.
} Recent events in international markets do seem to validate that liquidity is an important factor in driving changes in asset portfolio correlations.

SLIDE 20

} Fat tails and dynamic conditional correlation are related concepts, as they are the result of relationships between the observed parameters (say losses or asset prices) that are not normally distributed.
} Characteristically these non-normal observations are recorded in the tails of distributions; that is, they are characteristic of extreme values.

SLIDE 21

} The measuring of these extreme values has taken on more importance across a wide range of problems in finance, notably in credit, options pricing and risk modeling.
} It has become increasingly understood that in all these areas many risk and pricing models assumed linearity of results, whereas by observation historic results were not linear.
} In credit, both Structural (Merton) and Reduced Form (Jarrow) models produced theoretical values that differed from those observed.
} In options pricing, the "Smile" effect clearly showed that there were effects on options prices that did not conform directly to models based on completeness of markets and the application of arbitrage-free conditions.

SLIDE 22

} To address these problems there was a growing trend towards the use of copula mathematics.
} First developed in the 1950s, copulas could be used to look at the dynamics of the underlying asset (or assets).
} Technically, copula functions enable us to express a joint probability distribution as a function of the marginal probability distributions.
} This gets us away from the problem of using (static) correlations, which are an effective way to represent co-movements between variables if they are linked by linear relationships but not if the co-movements are non-linear.
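A minimal sketch of that idea, using only the standard library (the construction and parameter values are my own illustration): sample a Gaussian copula, then push the resulting uniforms through whatever marginals you like, here exponential loss severities. The marginals and the dependence structure are specified completely separately:

```python
import math
import random
from statistics import NormalDist

random.seed(7)
std_normal = NormalDist()

def gaussian_copula_sample(rho):
    """One draw (u1, u2) of uniforms whose dependence is a Gaussian
    copula with parameter rho: correlate two normals, then map each
    through the normal CDF."""
    z1 = random.gauss(0.0, 1.0)
    z2 = rho * z1 + math.sqrt(1.0 - rho * rho) * random.gauss(0.0, 1.0)
    return std_normal.cdf(z1), std_normal.cdf(z2)

def exponential_inverse_cdf(u, rate=1.0):
    """The marginal of our choice: exponential loss severities."""
    return -math.log(1.0 - u) / rate

pairs = [gaussian_copula_sample(rho=0.8) for _ in range(10_000)]
losses = [(exponential_inverse_cdf(u1), exponential_inverse_cdf(u2))
          for u1, u2 in pairs]

# Each marginal is still exponential (mean ~1), but the copula makes joint
# large losses far more common than independence would allow.
mean_first = sum(a for a, _ in losses) / len(losses)
joint_exceed = sum(1 for a, b in losses if a > 2.0 and b > 2.0) / len(losses)
independent_exceed = math.exp(-2.0) ** 2
```

The same copula could be reused with entirely different marginals, which is precisely the flexibility that made the approach attractive in finance.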

SLIDE 23

} One of the most common uses of copula mathematics in finance resulted from the work of David X. Li, which is generally held to have resulted in the use by rating agencies of copulas to price Collateralised Debt Obligations (CDO), including Mortgage Backed Securities (MBS).
} In the crisis that resulted from the collapse of Lehman Bros. on 15th September 2008 (the date of its Chapter 11 filing), these models proved to significantly underestimate the joint default probabilities of the mortgage assets within the individual MBS.

SLIDE 24

} The problem with these models lay in the choice of a Gaussian copula to join the marginal probability distributions.
} Whilst we commonly associate Gaussian distributions with randomness, this is a very constrained form of randomness (in contrast to, say, a Cauchy view of randomness) in that the Gaussian distribution is normal.
} In practice this proved disastrous, as the defaults proved to be very fat-tailed; indeed, the default process appeared to feed upon itself as defaults mounted and the liquidity of markets dried up.
} So prices of MBS fell further as defaults mounted; the correlations between the defaulting assets appeared to be both dynamic and conditional, meaning in this case that the assumed correlations increased as the liquidity of markets decreased.
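The consequence of the copula choice can be sketched directly. This illustration (my own, with hypothetical parameters) simulates two borrowers in a one-factor model, once with a Gaussian copula and once with a Student-t copula (the Gaussian pair divided by a shared chi-square shock), and compares joint default rates at the same 5% single-name default probability:

```python
import math
import random

random.seed(3)
N, RHO, NU = 50_000, 0.3, 3  # hypothetical: asset correlation 0.3, t with 3 d.o.f.

def chi_square(nu):
    """Chi-square draw as a sum of squared standard normals."""
    return sum(random.gauss(0.0, 1.0) ** 2 for _ in range(nu))

gauss_pairs, t_pairs = [], []
for _ in range(N):
    market = random.gauss(0.0, 1.0)  # common factor
    x1 = math.sqrt(RHO) * market + math.sqrt(1 - RHO) * random.gauss(0.0, 1.0)
    x2 = math.sqrt(RHO) * market + math.sqrt(1 - RHO) * random.gauss(0.0, 1.0)
    gauss_pairs.append((x1, x2))
    shared = math.sqrt(chi_square(NU) / NU)  # shared shock -> t copula
    t_pairs.append((x1 / shared, x2 / shared))

def joint_default_rate(pairs, pd=0.05):
    """Set each name's default threshold empirically so it defaults with
    probability pd; return the fraction of paths where both default."""
    t1 = sorted(a for a, _ in pairs)[int(pd * len(pairs))]
    t2 = sorted(b for _, b in pairs)[int(pd * len(pairs))]
    return sum(1 for a, b in pairs if a < t1 and b < t2) / len(pairs)

jd_gaussian = joint_default_rate(gauss_pairs)
jd_student_t = joint_default_rate(t_pairs)
```

With independent names the joint rate would be 0.25%; the Gaussian copula raises it, but the t copula's tail dependence raises it much further, which is the direction in which the CDO models erred.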

SLIDE 25

} What we should learn from this is that whilst copulas offer a way of measuring non-linear relationships, they also present problems in that the choice of the copula is vital to the outcome.
} In practice, therefore, it is akin to the problem of the choice between using parametric or non-parametric distributions common in VaR-based market risk models.
} If the choice is made to use a parametric distribution, the choice of the specific parametric distribution (Normal distribution, Student's t-distribution, Generalised Pareto etc.) will largely determine the outcome.

SLIDE 26

} In the case of copula choice, however, only a very limited number of copulas are usually considered for use in finance, not least because the outcomes of models using some copulas can be very difficult to interpret.
} The use of copulas in measuring large portfolios of differing risks is thus in its infancy, and whilst it is possible to understand the issues produced by dynamic conditional correlation in banks' asset portfolios, we are still far from having a reliable statistical methodology for measuring that risk.

SLIDE 27

} There are other possible ways to statistically model tail risk.
} One way is to assign subjective probabilities using Bayesian net technology.
} This has the advantage of polling experts for their judgment to specify the relationships between asset portfolios in exceptional circumstances.
} As a result a debate is created, though of course with no guarantee of agreement.
} Moreover, there is a substantial problem in choosing who the experts are: too alike and they may simply represent one view of the world, too dissimilar and there may be no agreement.
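A toy version of such a net (entirely hypothetical numbers, standing in for elicited expert judgments) shows the mechanics: experts supply conditional probabilities, and the net turns them into joint and marginal probabilities that can then be debated and revised:

```python
# Hypothetical expert-elicited probabilities for a two-node Bayesian net:
# a liquidity-crisis node feeding a "large correlated losses" node.
p_crisis = 0.05                           # P(liquidity black hole)
p_loss_given = {True: 0.60, False: 0.02}  # P(large loss | crisis state)

# Marginalise over the crisis node to get P(large loss).
p_large_loss = (p_loss_given[True] * p_crisis
                + p_loss_given[False] * (1.0 - p_crisis))

# Bayes' rule: if a large loss occurs, how likely was a crisis the cause?
p_crisis_given_loss = p_loss_given[True] * p_crisis / p_large_loss
```

The numbers are the debate: two expert panels disagreeing about p_loss_given[True] will produce visibly different answers, which is exactly the structured argument the approach is meant to create.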

SLIDE 28

} Given the problems with statistical techniques, scenarios as a basis for looking at extreme outcomes have become the main methodology embedded in regulation (e.g. stress tests).
} Whilst scenarios lack any statistical validity, as there is no way of accurately ascribing a statistical probability to any particular assumed scenario, they do have some benefits:
} Scenarios have an intuitive meaning to senior executives, boards and regulators; it is easy to see the thinking behind any particular scenario and to modify any assumptions to fit a particular bank or economy (or both).

SLIDE 29

} Scenarios can be developed that are 'tailored' to the risk profile and/or business model of the individual bank; this is particularly useful when tying the scenario into the bank's Business Plan, Individual Capital Adequacy Assessment Process (ICAAP) or Individual Liquidity Adequacy Assessment Statement (ILAAS).
} Scenarios can be bank-specific and/or system-wide, as regulators require for both the ICAAP and ILAAS.
} Scenario reports can be extended to show how, for example, a bank's board and management will react to specific risks thrown up by a scenario.
} In a similar way they can also address opportunities that may arise from a specific scenario.

SLIDE 30

} As a scenario does not have any statistical probability, it is a guess at what could happen and how this would affect the risk profile of the organisation.
} Focusing on how the organisation would react to the events projected in the scenario can, however, lead to the organisation learning some important and useful lessons.
} Scenarios are, however, often poorly presented to boards and as a result can cause a lot of debate but remarkably little insight.
} A disciplined process for developing and analysing scenarios is vital to get the best from scenario-based analysis.

SLIDE 31

} A disciplined approach might, say, involve:
  • A replication of changes in market variables resulting from extreme historic events, e.g. the Oct 1987 stock market crash in the US, the UK's forced withdrawal from the ERM in Sept 1992, the Asian currency crisis of 1997/8, and the Lehman's crisis of Sept 2008.
  • An examination of the existing portfolio in a systematic way to define its particular vulnerabilities, and then construction of a scenario that exploits these.
  • Asking senior management and consultants to construct imagined scenarios that represent their views of the worst that could happen to the business and the economy.
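The first of those steps, historical replication, reduces to revaluing today's book under shocks taken from past crises. In this sketch both the book and the shock sizes are hypothetical round numbers for illustration, not the actual historical moves:

```python
# Hypothetical exposures (in millions) and illustrative scenario shocks;
# a real exercise would take the shocks from the historical market data.
book = {"us_equity": 400.0, "gilts": 350.0, "fx_usd": 250.0}

scenarios = {
    "Oct 1987 crash": {"us_equity": -0.23, "gilts": 0.02, "fx_usd": -0.01},
    "ERM exit 1992": {"us_equity": -0.04, "gilts": 0.03, "fx_usd": -0.14},
}

def scenario_pnl(positions, shocks):
    """Revalue each position under the scenario's percentage shocks;
    assets with no shock specified are left unchanged."""
    return sum(size * shocks.get(asset, 0.0) for asset, size in positions.items())

results = {name: scenario_pnl(book, shocks) for name, shocks in scenarios.items()}
worst_scenario = min(results, key=results.get)
```

The same revaluation function then serves the other two steps: a constructed scenario is simply another entry in the scenarios dictionary, built to exploit the book's particular vulnerabilities.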

SLIDE 32

} Modeling systemic risk is perhaps the greater challenge.
} The current process relies heavily on banks' portfolio models overlaid with scenario modeling of extreme market events.
} Whilst this has the same advantage of being reasonably easy to understand, in this case even the regulatory authorities seem to be concerned at the range of outcomes the banks are producing and their seeming incoherence.

SLIDE 33

} It seems to me the starting point here needs changing.
} Systemic risk is risk derived from the system as a whole, so modeling it from an individual bank's perspective seems to be starting at the wrong place.
} A systemic approach to systemic risk might more usefully start with systemic banks working with the central banks, taking as a starting point the Dynamic Stochastic General Equilibrium models that national treasuries and central banks use for economic analysis.

SLIDE 34

} This would result in the macroeconomic variables being defined for expected economy-wide outturns and would allow all the commercial banks to start from the same place in terms of the likely market outturns from the model.
} They could then fit their own asset portfolios and liability structures around these common outturns.
SLIDE 35

} Network theory, which addresses the non-linear behaviour of the financial system in situations of stress, is increasingly being put in the spotlight by researchers looking at systemic risks across markets.
} Financial network systems are complex and the interconnections between them are in many instances not well understood, but the importance of network theory can be appreciated from the simple diagrams below.

SLIDE 36

} Diagram 1 shows two independent market systems with no connections. An institution collapses and there is no contagion in this scenario.
} On the other hand, Diagram 2 shows what happens in the same circumstances, but in this diagram there are just two interconnections between institutions in each market system. The same institution defaults, but this now results in all the institutions in both systems collapsing.
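The two diagrams can be reproduced in a few lines. This sketch (my own illustration, with made-up institution names) uses the deliberately extreme assumption that any institution exposed to a defaulted counterparty defaults in turn, which is what the diagrams depict:

```python
def cascade(exposures, first_failure):
    """Propagate a default: a bank fails if any bank it has lent to fails.

    `exposures` maps each bank to the counterparties it has lent to;
    the any-exposure-is-fatal rule is an extreme simplification."""
    defaulted, frontier = {first_failure}, [first_failure]
    while frontier:
        failed = frontier.pop()
        for bank, counterparties in exposures.items():
            if bank not in defaulted and failed in counterparties:
                defaulted.add(bank)
                frontier.append(bank)
    return defaulted

# Diagram 1: two market systems with no cross-links.
separate = {"A1": ["A2"], "A2": ["A3"], "A3": ["A1"],
            "B1": ["B2"], "B2": ["B3"], "B3": ["B1"]}

# Diagram 2: the same two systems plus just two cross-exposures.
linked = {bank: list(cps) for bank, cps in separate.items()}
linked["A1"].append("B1")
linked["B3"].append("A3")

contained = cascade(separate, "A2")  # failure stays inside system A
systemic = cascade(linked, "A2")     # both systems collapse
```

Two extra edges are enough to turn a contained failure into a system-wide one, which is the whole point of the diagrams.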

SLIDE 37

SLIDE 38

SLIDE 39

SLIDE 40

} Handle with care

} So, do we understand how our financial markets are connected? Can we know what effect current changes in regulation are likely to have on the number and extent of these connections? The answer is, of course, no, but understanding this does matter greatly.
} When we look at the Lehman Brothers collapse and the seemingly endless reverberations of this on the global economy, it is very clear that the effects are not a direct result of a massive shock, as if Lehman were some Fukushima-like event that overwhelmed the global financial system's defenses. It is rather that the network effects of interconnected markets amplified the initial shock and created new shocks that the global financial system was unprepared for.

SLIDE 41

} If this is correct, then we need to be far more vigilant when looking for catastrophic events, as the initial event may well not look catastrophic at all. It is not the event itself that we should focus on, but rather the feedback and feed-forward repercussions and the so-called spillover effects of the event within, and across, different financial markets; something that no individual institution, central bank or otherwise, is well set up to understand.
} We also need to be very careful in assessing the effects of actions that drain liquidity from markets, as this will drive correlations more closely together. If it becomes difficult to sell one asset class in a crisis, the inevitable result is that other asset classes are sold, spreading contagion across seemingly independent financial systems. Volatile and highly correlated asset markets are exactly where liquidity strains are likely to show.

SLIDE 42

} We are thus still left with our two definitions of risk:
  • What is the maximum loss within a given probability that we could suffer as a result of holding the given asset portfolio over a given time period?
  • What is the maximum loss that we could suffer as a result of holding the given asset portfolio over a given time period?
} But we have no really satisfactory methodology for measuring risk.

SLIDE 43

} We know that VaR is widely used to measure our first definition of risk, but we also know that VaR is a very incomplete measure of risk, as it fails to deal with the worst-outcomes issue altogether.
} Both regulators and regulated are increasingly focusing their energies on how to measure worst outcomes, and whilst we may have a metric, ES, we have only a few insights (Liquidity Black Holes, Dynamic Conditional Correlation, Cauchy Randomness) into understanding the behaviour of the tail data.
} To guide us in our choice of measure, the best available statistical tools (copulas) have at best a chequered history in providing accurate answers.

SLIDE 44

} For our second definition we are thus left with Scenario Analysis, which whilst intuitively appealing to many is no more than a "best guess", and one highly likely to turn out to be the wrong guess, as the future is not simply unknown, it is unknowable.
} We must therefore adopt processes and procedures to allow us to manage as best we can what we cannot measure.
} The day the practical banker is replaced by the quantitative analyst is further off than we once thought.

SLIDE 45

} The use of copula based on a modified Student's t distribution: Boryana Racheva-Iotova and Dr Stoyan Stoyanov of FinAnalytica, "Going Beyond Normal: A Case for more Sophisticated Models to Capture Fat Tails in Risk Drivers", 9th May 2009.
} Bayesian net technology: Rebonato and Denev, "Coherent Asset Allocation and Diversification in the Presence of Stress Events", 27th April 2011.
} Scenario modeling: Stephen Wright, "Risk Management from an Organisational Dynamics Perspective", 2013; and David M. Rowe, "Risk Management Beyond VaR", 10th April 2013.
} Systemic risk: Scott Roger and Jan Vlcek, "Macrofinancial Modeling at Central Banks: Recent Developments and Future Directions", IMF Working Paper, January 2011.
} Liquidity black holes: Stephen Morris & Hyun Song Shin, "Liquidity Black Holes", November 2003.

SLIDE 46