# Exam FRM Level 2 Ultimate Guide (Books 1 & 2)

## Introduction

We continue our in-depth coverage of how to pass the Level 2 FRM exam, based on summaries of Books 1 and 2.

You may download this entire content on our shop page, free of charge.

## Estimating Market Risk Measures

This segment provides a brief introduction and overview of the main issues in market risk measurement

The main concerns are:

• Preliminary data issues: How to deal with data in profit/loss form, rate-of-return form, etc.
• Basic methods of VaR estimation
• How to estimate coherent risk measures
• How to gauge the precision of our risk measure estimators

The first and most important decision is to choose the type of risk measure: Do we want to estimate VaR, ES, etc.?

The second issue is the level of analysis: Do we estimate our risk measure at the portfolio level or at the level of individual positions?

## Selection methodology

Having chosen our risk measure and level of analysis, we then choose a suitable estimation method

To decide on this, we would usually think in terms of the classic ‘VaR trinity’:

• Non-parametric methods
• Parametric methods
• Monte Carlo simulation methods

When confronted with a new data set, we should never proceed straight to estimation without some preliminary analysis to get to know our data

In risk measurement, we are particularly interested in any non-normal features of our data: Skewness, excess kurtosis, outliers in our data, etc.
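These non-normal features can be checked directly from the sample moments. A minimal sketch (the function name is illustrative, and it uses the basic biased moment estimators):

```python
def sample_moments(returns):
    """Moment-based checks for non-normal features in a return sample.
    Uses the basic (biased) moment estimators; excess kurtosis is 0
    for a normal distribution."""
    n = len(returns)
    mu = sum(returns) / n
    m2 = sum((r - mu) ** 2 for r in returns) / n   # variance
    m3 = sum((r - mu) ** 3 for r in returns) / n
    m4 = sum((r - mu) ** 4 for r in returns) / n
    skew = m3 / m2 ** 1.5
    excess_kurt = m4 / m2 ** 2 - 3.0
    return skew, excess_kurt
```

Large positive excess kurtosis or pronounced skewness is a warning against relying on normality-based parametric methods.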

## Non-Parametric Approaches

All non-parametric approaches to risk are based on two underlying assumptions:

• That the near future will be sufficiently like the recent past
• That we can therefore use data from the recent past to forecast risks over the near future

Non-parametric methods are widely used and in many respects are highly attractive approaches to the estimation of financial risk measures

They have a reasonable track record and are often superior to parametric approaches

Wherever possible, we should complement non-parametric methods with stress testing to gauge vulnerability to ‘what if’ events

We should never rely on non-parametric methods alone
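The simplest non-parametric method, historical simulation, reads VaR off the sorted loss history and ES as the average loss beyond it. A minimal sketch (the function name and the order-statistic convention for the quantile are illustrative choices, not a fixed standard):

```python
def hs_var_es(losses, alpha=0.95):
    """Historical-simulation VaR and ES from a sample of loss
    observations (positive = loss). VaR is taken as the empirical
    alpha-quantile order statistic; ES is the mean loss beyond it."""
    srt = sorted(losses)
    n = len(srt)
    k = int(alpha * n)          # index of the alpha-quantile order statistic
    var = srt[k - 1]            # e.g. the 95th of 100 sorted losses
    tail = srt[k:] or [var]     # losses beyond the VaR observation
    es = sum(tail) / len(tail)
    return var, es
```

With 100 loss observations and alpha = 0.95, VaR is the 95th sorted loss and ES averages the five losses above it.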

## The OS approach

The order-statistics (OS) approach provides an ideal method for estimating confidence intervals for VaRs and ESs

The OS approach is:

• Completely general, in that it can be applied to any parametric or non-parametric VaR or ES
• Reasonable even for relatively small samples, because it is not based on asymptotic theory
• Easy to implement in practice

The OS approach is also superior to conventional confidence-interval estimation methods

This is so because it does not rely on asymptotic theory

It does not force estimated confidence intervals to be symmetric
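The key observation behind the OS approach is that the number of observations falling below the true p-quantile is Binomial(n, p), which yields distribution-free confidence bounds from the sorted sample. A minimal sketch under that assumption (the function name and the symmetric splitting of tail probability are illustrative choices):

```python
from math import comb

def var_ci_order_stats(n, p, conf=0.90):
    """Distribution-free confidence interval for the p-quantile (a VaR)
    of a sample of size n, via order statistics. Returns 1-based indices
    (r, s) of the sorted sample so that [X(r), X(s)] covers the true
    quantile with probability >= conf."""
    pmf = [comb(n, j) * p ** j * (1 - p) ** (n - j) for j in range(n + 1)]
    tail = (1 - conf) / 2
    # largest r with P(count <= r-1) <= tail: lower bound index
    r, acc = 0, 0.0
    while r < n and acc + pmf[r] <= tail:
        acc += pmf[r]
        r += 1
    # smallest s with P(count >= s) <= tail: upper bound index
    s, acc = n + 1, 0.0
    while s > 1 and acc + pmf[s - 1] <= tail:
        acc += pmf[s - 1]
        s -= 1
    return max(r, 1), min(s, n)
```

Note that the resulting interval is generally asymmetric around the point estimate, which is exactly the property highlighted above.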

## Parametric Approaches

Extreme events are events that are unlikely to occur, but can be very costly when they do

These events are often referred to as low-probability, high-impact events

They include large market falls, the failures of major institutions, the outbreak of financial crises and natural catastrophes

Given the importance of such events, the estimation of extreme risk measures is a key concern for risk managers

Unfortunately, the inherent lack of data accompanying extreme events always complicates their analysis

Practitioners can only respond by relying on assumptions to make up for lack of data

Unfortunately, the assumptions made are often questionable

## Extreme Value Theory

Extreme Value Theory provides a tailor-made approach to the estimation of extreme probabilities and quantiles

• It is intuitive and plausible
• It is relatively easy to apply in its basic forms
• It gives us practical guidance on what we should estimate
• It has a relatively good track record
• It also guides us on what not to do
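In the common peaks-over-threshold (POT) form of EVT, exceedances over a threshold u are fitted with a Generalized Pareto distribution, which gives closed-form extreme VaR and ES. A minimal sketch of the standard GPD quantile formulas (parameter names are illustrative; it assumes a fitted shape xi with xi != 0 and xi < 1 so the ES exists):

```python
def pot_var_es(u, beta, xi, n, n_u, alpha):
    """VaR and ES from a peaks-over-threshold GPD fit: u is the
    threshold, beta and xi the fitted GPD scale and shape, n the sample
    size, n_u the number of exceedances, alpha the confidence level."""
    var = u + (beta / xi) * (((n / n_u) * (1 - alpha)) ** (-xi) - 1)
    es = (var + beta - xi * u) / (1 - xi)
    return var, es
```

Because the formulas extrapolate beyond the observed data, small changes in the fitted xi can move the estimates materially, which is the model-risk point made below.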

## Limitations of EV approaches

EV problems are intrinsically difficult, because we always have relatively few extreme-value observations to work with

EV estimates are subject to considerable model risk. We have to make various assumptions in order to carry out extreme-value estimations

Because we have so little data and the theory we have is (mostly) asymptotic, EV estimates can be very sensitive to small sample effects, biases, non-linearities, and other problems

## Backtesting VaR

VaR models are only useful insofar as they can be demonstrated to be reasonably accurate

This is why the application of these models should always be accompanied by validation

## Model validation

Model validation is the general process of checking whether a model is adequate

This can be done with backtesting, stress testing, and independent review

Backtesting is a formal statistical framework that consists of verifying that actual losses are in line with projected losses

This involves systematically comparing the history of VaR forecasts with their associated portfolio returns

## Backtesting and the Basel Committee

Backtesting is central to the Basel Committee’s ground-breaking decision to allow internal VaR models for capital requirements

The backtesting framework is designed to maximize the probability of catching banks that willfully understate their risk

However, the system should also avoid unduly penalizing banks whose VaR is exceeded simply because of bad luck

## Backtesting and errors

Backtesting involves balancing two types of errors:

• Rejecting a correct model
• Accepting an incorrect model
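A standard way to balance these two errors is Kupiec's proportion-of-failures test, which compares the observed number of VaR exceptions to the number expected under a correct model. A minimal sketch (assuming exceptions are Binomial under the null; the 3.84 cutoff is the 5% chi-squared critical value with one degree of freedom):

```python
from math import log

def kupiec_lr(exceptions, days, p):
    """Kupiec proportion-of-failures likelihood-ratio statistic.
    Under a correct model the exception count x over T days is
    Binomial(T, p); the LR is asymptotically chi-squared(1), so
    LR > 3.84 rejects the model at the 5% level."""
    x, T = exceptions, days
    if x == 0:
        return -2 * T * log(1 - p)
    if x == T:
        return -2 * T * log(p)
    phat = x / T
    return -2 * ((T - x) * log(1 - p) + x * log(p)
                 - (T - x) * log(1 - phat) - x * log(phat))
```

With a 99% VaR over 250 days, about 2.5 exceptions are expected; 10 exceptions produce an LR well above 3.84 and lead to rejection.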

## VaR Mapping

Whichever value-at-risk (VaR) method is used, the risk measurement process needs to simplify the portfolio by mapping the positions onto the selected risk factors

Mapping is the process by which the current values of the portfolio positions are replaced by exposures to the risk factors

Mapping arises because of the fundamental nature of VaR, which is portfolio measurement at the highest level

Choosing the appropriate set of risk factors, however, is part of the art of risk management

• Too many risk factors would be unnecessary, slow, and wasteful
• Too few risk factors could create blind spots in the risk measurement system

For some instruments, the allocation into general-market risk factors is exhaustive and there is no specific risk left

• This is typically the case with derivatives, which are tightly priced in relation to their underlying risk factor

For other positions, such as individual stocks or corporate bonds, some risk remains, called specific risk

In large, well-diversified portfolios, this remaining risk tends to “wash away”
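Once positions are mapped, a diversified delta-normal VaR follows directly from the vector of factor exposures and the factor covariance matrix. A minimal sketch (the function name, exposure units, and the 99% z-value are illustrative assumptions):

```python
def mapped_var(exposures, cov, z=2.326):
    """Diversified delta-normal VaR from mapped positions:
    VaR = z * sqrt(x' Sigma x), where x is the vector of dollar
    exposures to the chosen risk factors and Sigma their return
    covariance matrix (z = 2.326 for 99% confidence under normality)."""
    n = len(exposures)
    variance = sum(exposures[i] * cov[i][j] * exposures[j]
                   for i in range(n) for j in range(n))
    return z * variance ** 0.5
```

The diversified VaR is less than the sum of the individual VaRs whenever the factors are imperfectly correlated.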

## Risk Management for the Trading Book

This report summarizes the findings of a working group that surveyed the academic literature that is relevant to a fundamental review of the regulatory framework of the trading book

The review was complemented by feedback from academic experts at a workshop hosted by the Deutsche Bundesbank in April 2010

They address fundamental issues of a sometimes highly technical nature in current VaR-based approaches to risk measurement

They give an overview of implementation issues including questions on the necessity of including time-variation in volatility, the appropriate time horizon over which risk is measured and backtesting of VaR

Capturing market liquidity in a VaR framework is the key question addressed

The last section looks at the relationships among risk measurement, systemic risk, and potential pro-cyclical effects of risk measurement

## Exogenous and endogenous liquidity

Both exogenous and endogenous liquidity risks are important

Endogenous liquidity risk is particularly relevant for exotic/complex trading positions

Exogenous liquidity is partially incorporated in the valuation of trading portfolios

Endogenous liquidity is typically not, even though its impact may be substantial

Endogenous liquidity risk is especially relevant under stress conditions

## Correlation basics

Correlation risk can be defined as the risk of financial loss due to adverse movements in correlation between two or more variables

These variables can be financial in nature such as defaulting debtors or non-financial such as political tension

Correlation risk relates to other risks in finance such as market risk, credit risk, systemic risk, and concentration risk

## Types of financial correlations

There are two types of financial correlations:

Static correlations, which measure how two or more financial assets are associated within a given time period (e.g. 1 year)

Dynamic financial correlations, which measure how two or more financial assets move together in time
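The static/dynamic distinction can be made concrete: a single full-sample Pearson correlation is static, while a rolling-window version traces how the association moves through time. A minimal sketch (function names and the window choice are illustrative):

```python
def pearson(x, y):
    """Full-sample (static) Pearson correlation of two series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

def rolling_corr(x, y, window):
    """Dynamic correlation: Pearson correlation over a moving window."""
    return [pearson(x[i:i + window], y[i:i + window])
            for i in range(len(x) - window + 1)]
```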

Correlation risk can be non-monotonic:

Meaning: the dependent variable can either increase or decrease as the correlation parameter increases

## Correlations and systemic crisis

Correlations also play a key role in a systemic crisis:

Correlations typically increase and can lead to high unexpected losses

Market risk and credit risk are highly sensitive to changing correlations

Correlation risk is also closely related to concentration risk, as well as systemic risk, since correlations typically increase in a systemic crisis

## Empirical Properties of Correlation

Contrary to common beliefs, financial correlations display statistically significant and fairly predictable properties

The worse the state of the economy, the higher equity correlations are

## Equity correlations

Equity correlations were extremely high during the 2007–09 recession and reached 97% in February 2009

Equity correlation volatility is lowest in an expansionary period and higher in normal and recessionary economic periods

Equity correlation levels and equity correlation volatility are positively related

Equity correlations show very strong mean reversion

The Dow correlations from 1972 to 2017 showed a monthly mean reversion of 79%

Since equity correlations display strong mean reversion, they display low autocorrelation
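Mean reversion of this kind is commonly estimated by regressing the change in the correlation on its lagged level; the reversion rate is minus the slope, so the quoted 79% monthly mean reversion corresponds to a slope of -0.79. A minimal OLS sketch (the function name is illustrative):

```python
def mean_reversion_rate(series):
    """Mean-reversion rate from the regression
    S_t - S_(t-1) = a + b * S_(t-1); the rate is -b.
    Plain OLS of the change on the lagged level."""
    x = series[:-1]
    y = [series[i + 1] - series[i] for i in range(len(series) - 1)]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((a - mx) * (c - my) for a, c in zip(x, y)) \
        / sum((a - mx) ** 2 for a in x)
    return -b
```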

## Bond correlation levels

Bond correlation levels and bond correlation volatilities are generally higher in economic bad times

Default probability correlation levels are slightly lower than equity correlation levels, whilst default probability correlation volatilities are slightly higher than equity correlation volatilities

## Financial Correlation Modeling, Empirical Approaches to Risk Metrics

Central to the DV01-style metrics and the multifactor metrics are implicit assumptions about how rates of different terms change relative to one another

Empirical models do not always describe the data very precisely

A principal component analysis is an empirical description of how rates move together across the curve
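Such an empirical description can be computed by eigen-decomposing the covariance matrix of daily curve changes; the leading components are conventionally read as level, slope, and curvature. A minimal sketch using NumPy (the function name and input layout are illustrative assumptions):

```python
import numpy as np

def curve_pca(rate_changes):
    """PCA of yield-curve changes (rows = days, columns = maturities).
    Returns the principal components (as columns) and the share of
    total variance each explains."""
    X = rate_changes - rate_changes.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)      # eigenvalues in ascending order
    order = np.argsort(vals)[::-1]        # re-sort descending
    vals, vecs = vals[order], vecs[:, order]
    return vecs, vals / vals.sum()
```

Empirically the first component typically explains the large majority of the variance of curve changes, which is what makes low-dimensional hedging metrics workable.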

## The nature of empirical relationships

Empirical relationships are far from static

Hedges estimated over one period of time may not work very well over subsequent periods

Option-Adjusted Spreads (OAS) are the most popular measure of deviations of market prices from those predicted by models

## The Evolution of Short Rates & the shape of the term structure

This segment presents a framework for understanding the shape of the term structure

It is shown that spot or forward rates are determined by expectations of future short-term rates, the volatility of short-term rates, and an interest rate risk premium

We can derive a risk-neutral process that can be used to price all fixed income securities by arbitrage

## Arbitrage-free models

Arbitrage-free models take the initial term structure as given

For equilibrium models, an understanding of the relationships between the model assumptions and the shape of the term structure is important in order to make reasonable assumptions

For arbitrage-free models, an understanding of the relationships reveals the assumptions implied by the market through the observed term structure

## The Art of Term Structure Models

Previous readings show that assumptions about the risk-neutral short-term rate processes determine the term structure of interest rates and the prices of fixed income derivatives

This segment describes the most common building blocks of short-term rate models

How close are the market prices of options to those predicted by the Black–Scholes–Merton model?

Traders use the Black–Scholes–Merton model—but not in exactly the way that Black, Scholes, and Merton originally intended

This is because they allow the volatility used to price an option to depend on its strike price and time to maturity

The Black–Scholes–Merton model assumes that the probability distribution of the underlying asset at any given future time is lognormal

• Traders, by contrast, assume the probability distribution of an equity price has a heavier left tail and a less heavy right tail than the lognormal distribution
• Traders also assume that the probability distribution of an exchange rate has a heavier right tail and a heavier left tail than the lognormal distribution

## Volatility Smile

A plot of the implied volatility of an option with a certain life as a function of its strike price is known as a volatility smile

Traders use volatility smiles to allow for non-lognormality

The volatility smile defines the relationship between the implied volatility of an option and its strike price
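Implied volatility itself is obtained by inverting the pricing model numerically; repeating the inversion across strikes traces out the smile. A minimal sketch using bisection on the Black–Scholes–Merton call price (no dividends; the search bounds and function names are illustrative choices, relying on the price being monotone in sigma):

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal CDF."""
    return 0.5 * (1 + erf(x / sqrt(2)))

def bs_call(S, K, T, r, sigma):
    """Black-Scholes-Merton call price (no dividends)."""
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

def implied_vol(price, S, K, T, r, lo=1e-4, hi=5.0, tol=1e-8):
    """Implied volatility by bisection: the sigma that reproduces the
    observed option price."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if bs_call(S, K, T, r, mid) < price:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)
```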

## Volatility Smile and equity options

For equity options, the volatility smile tends to be downward sloping

• Thus out-of-the-money puts and in-the-money calls tend to have high implied volatilities
• And out-of-the-money calls and in-the-money puts tend to have low implied volatilities

## Volatility Smile and foreign currency options

For foreign currency options, the volatility smile is U-shaped

Both out-of-the-money and in-the-money options have higher implied volatilities than at-the-money options

The implied volatility of an option depends on its life

## Volatility surface

When volatility smiles and volatility term structures are combined, they produce a volatility surface

This defines implied volatility as a function of both the strike price and the time to maturity

## Fundamental Review of the Trading Book

In May 2012, the Basel Committee on Banking Supervision issued a document proposing major revisions to the way regulatory capital for market risk is calculated

This is referred to as the “Fundamental Review of the Trading Book” (FRTB)

The final version of the rules was published by the Basel Committee in January 2016

FRTB’s approach to determining capital for market risk proved to be a lot more complex than the approaches previously used by regulators

FRTB is a major change to the way capital is calculated for market risk

• We previously had 20 years of using VaR with a 10-day time horizon and 99% confidence to determine market risk capital
• Now, regulators are switching to using ES with a 97.5% confidence level and varying time horizons
• The time horizons, which can be as high as 120 days, are designed to incorporate liquidity considerations into the capital calculations

The Basel Committee has specified a standardized approach and an internal model approach

Even when they have been approved by their supervisors to use the internal model approach, banks must also implement the standardized approach

## Special notes for regulatory capital

Regulatory capital under the standardized approach is based on formulas involving the delta, vega, and gamma exposures of the trading book

Regulatory capital under the internal model approach is based on the calculation of stressed expected shortfall. Calculations are carried out separately for each trading desk

## The Credit Decision

A definition of credit is the realistic expectation that funds advanced will be repaid in full in accordance with the agreement made between the party lending the funds and the party borrowing the funds

Bank credit analysis provides the means to avoid fragile banks

Banks are different in that they are highly regulated and their assessment is intrinsically highly qualitative

## Bank credit analysis

Bank credit analysis and corporate credit analysis are more alike than they are dissimilar

A bank does not have to fail for it to cause damage to a counterparty or creditor

The costs of repairing a banking crisis typically far outweigh the costs of taking prudent measures to prevent one

Governments therefore actively monitor, regulate, and ultimately function as lenders of last resort through the national central bank, or an equivalent agency

## The Credit Analyst

The approach to credit evaluation is contingent upon the type of entity being evaluated

The scope and nature of credit evaluation will depend upon the functional role occupied by the analyst

Credit analysis and credit analysts are classified in three different ways:

1. By function
2. By the type of entity analyzed
3. By the category of employer

## CAMEL

CAMEL is an acronym that stands for the five most important attributes of bank financial health

The five elements of CAMEL are:

• C : Capital
• A : Asset Quality
• M : Management
• E : Earnings
• L : Liquidity

All but the assessment of the quality of “management” are amenable to ratio analysis

But it must be emphasized that “liquidity” is very difficult to quantify

## Using CAMEL

Although sometimes termed a model, the CAMEL system is really more of a checklist of the attributes of a bank that are viewed as critical in evaluating its financial performance and condition

As used by bank regulators in the United States, the CAMEL system functions as a scoring model

Institutions are assigned a score between 1 (best) and 5 (worst) by bank examiners for each letter in the acronym

CAMEL scores on each attribute are aggregated to form composite scores

Scores of 3 or higher are viewed as unsatisfactory and draw regulatory scrutiny

## Capital Structure in Banks

Credit risk is the risk that arises from any non-payment or rescheduling of any promised payments

Since credit losses are a predictable element of the lending business, it is useful to distinguish between expected losses and unexpected losses when attempting to quantify the risk of a credit portfolio

Despite the beauty and simplicity of the bottom-up (total) risk measurement approach, there are a number of caveats that need to be addressed:

• This approach assumes that credits are illiquid assets
• It measures only the risk contribution (i.e., the internal “betas”)
• It does not measure the correlation with risk factors as priced in liquid markets

## Rating Assignment Methodologies

The event of default is one of the most significant sources of losses in a bank’s profit and loss statement

Rating supports credit pricing and capital provisions to cover unexpected credit losses

Internal rating has to be as ‘objective’ as possible

Statistical methods are well suited to manage quantitative data

However, useful information for assessing probability of default is not only quantitative

Other types of information are also highly relevant such as: the firm’s competitive strengths and weaknesses, management quality, stability of owners, managerial reputation, etc.

## Credit approval categories

These judgment-based approaches to credit approval can be classified in three categories:

1. Efficiency and effectiveness of internal processes (production, administration, marketing, post-marketing)
2. Investment, technology, and innovation
3. Human resource management, talent valorization, key resources retention, and motivation

## Credit Risks and Credit Derivatives

Credit derivatives are one of the newest and most dynamic growth areas in the derivatives industry

At the end of 2000, the total notional amount of credit derivatives was estimated to be \$810 billion; it was only \$180 billion two years before

Following Black and Scholes (1973), option pricing theory has been used to evaluate default risky debt in many different situations

The basic model to value risky debt using option pricing theory is the Merton (1974) model

## The Merton model and others

The Merton model allows us to price risky debt by viewing it as risk-free debt minus a put written on the firm issuing the debt

The Merton model is practical mostly for simple capital structures with one debt issue that has no coupons
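The Merton valuation can be sketched directly with Black–Scholes machinery: equity is a European call on firm value struck at the debt's face value, risky debt is firm value minus equity, and N(-d2) is the risk-neutral default probability. A minimal sketch of this stylized one-period balance sheet (function and parameter names are illustrative):

```python
from math import log, sqrt, exp, erf

def _N(x):
    """Standard normal CDF."""
    return 0.5 * (1 + erf(x / sqrt(2)))

def merton_debt(V, F, T, r, sigma_v):
    """Merton (1974) sketch: firm value V, one zero-coupon debt issue
    of face value F maturing at T, asset volatility sigma_v. Equity is
    a call on V struck at F; risky debt is V minus equity (equivalently,
    risk-free debt minus a put on V); N(-d2) is the risk-neutral
    default probability."""
    d1 = (log(V / F) + (r + 0.5 * sigma_v ** 2) * T) / (sigma_v * sqrt(T))
    d2 = d1 - sigma_v * sqrt(T)
    equity = V * _N(d1) - F * exp(-r * T) * _N(d2)
    debt = V - equity
    pd_rn = _N(-d2)
    return equity, debt, pd_rn
```

The risky debt is always worth less than the risk-free bond with the same face value; the difference is the value of the implicit put.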

Other approaches to pricing risky debt model the probability of default and then discount the risky cash flows from debt using a risk-neutral distribution of the probability of default

Credit risk models such as the CreditRisk+ model, the CreditMetricsTM model, and the KMV model provide approaches to estimating the VaR for a portfolio of credits

Credit derivatives can be used to hedge credit risks

## Spread Risk and Default Intensity Models

Credit spreads are the compensation the market offers for bearing default risk (They are not pure expressions of default risk)

Apart from the probability of default over the life of the security, credit spreads also contain compensation for risk

The spread must induce investors to put up with:

• The uncertainty of credit returns
• Liquidity risk
• The extremeness of loss in the event of default
• Extent of recovery payments
• Legal risks

Spread risk encompasses both the market’s expectations of credit risk events and the credit spread it requires in equilibrium to put up with credit risk

The most common way of measuring spread risk is via the spread volatility or “spread vol,” the degree to which spreads fluctuate over time

Spread vol is the standard deviation—historical or expected—of changes in spread, generally measured in basis points per day

## Portfolio Credit Risk

A typical portfolio may contain many different obligors, but may also contain exposures to different parts of one obligor’s capital structure, such as preferred shares and senior debt

In the CreditMetrics approach, this model is used to compute the distribution of credit migrations as well as default

An advantage of the CreditMetrics approach is that factors can be related to real-world phenomena, such as equity prices, providing an empirical anchor for the model

The model is also tractable

## Structured Credit Risk

This segment focuses on a class of credit-risky securities called securitizations and structured credit products

These securities play an important role in contemporary finance, and had a major role in the subprime crisis of 2007

## The waterfall rules

The waterfall refers to the rules about how the cash flows from the collateral are distributed to the various securities in the capital structure

The term “waterfall” arose because generally the capital structure is paid in sequence:

That is: “top down,” with the senior debt receiving all of its promised payments before any lower tranche receives any monies

## Default correlation

Default correlation is the correlation concept most directly related to portfolio credit risk

We formally define the default correlation of two firms over a given future time period as:

The correlation coefficient of the two random variables describing the firms’ default behavior over a given time period
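For two Bernoulli default indicators, this correlation has a closed form in the individual and joint default probabilities. A minimal sketch (the function name is illustrative):

```python
def default_correlation(p1, p2, p12):
    """Default correlation of two firms over a horizon: the correlation
    of the Bernoulli default indicators, where p1 and p2 are the
    individual default probabilities and p12 the joint default
    probability."""
    num = p12 - p1 * p2
    den = (p1 * (1 - p1) * p2 * (1 - p2)) ** 0.5
    return num / den
```

Independence (p12 = p1 * p2) gives zero correlation; a joint probability above the product gives positive default correlation.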

## Asset return correlation

Asset return correlation is the correlation of logarithmic changes in two firms’ asset values

In practice, portfolio credit risk measurement of corporate obligations often relies on asset return correlations

The asset return correlation in a factor model is driven by each firm’s factor loading

Equity return correlation is the correlation of logarithmic changes in the market value of two firms’ equity prices

Asset correlation is not directly observable

In practice, asset correlations are often proxied by equity correlations

## Copula correlations

Copula correlations are the values entered into the off-diagonal cells of the correlation matrix of the distribution used in the copula approach to measuring credit portfolio risk

Unlike the other correlation concepts, the copula correlations have no direct economic interpretation

They depend on which family of statistical distributions is used in the copula-based risk estimate

## Counterparty Risk

Counterparty credit risk is the risk that the entity with whom one has entered into a financial contract (the counterparty to the contract) will fail to fulfil their side of the contractual agreement

Counterparty risk is typically defined as arising from two broad classes of financial products:

1. OTC derivatives (e.g. interest rate swaps) and
2. Securities financing transactions (e.g. repos)

The OTC derivatives category is the more significant due to the size and diversity of the OTC derivatives market and the fact that a significant amount of risk is not collateralized

## Netting

Netting is a traditional way to mitigate counterparty risk where there may be a large number of transactions of both positive and negative value with a given counterparty

## Payment netting

Payment netting allows offsetting cash flows to be combined into a single amount and reduces settlement risk

## Close-out

Close-out refers to the process of terminating and settling contracts with a defaulted counterparty

## Close-out netting

Close-out netting is a crucial way to control exposure by being legally able to offset transactions with positive and negative MTM values in the event a counterparty does default
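The effect of close-out netting on exposure can be shown in one line: without netting, exposure is the sum of the positive MTM values; with netting, only the positive part of the net sum. A minimal sketch (the function name is illustrative):

```python
def netting_effect(mtm_values):
    """Exposure to one counterparty with and without close-out netting.
    Gross exposure sums the positive MTM values; net exposure is the
    positive part of the net sum."""
    gross = sum(max(v, 0.0) for v in mtm_values)
    net = max(sum(mtm_values), 0.0)
    return gross, net
```

For MTM values of +10, -4, and -3, gross exposure is 10 but netted exposure is only 3.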

## Compression

Compression reduces gross notional and improves efficiency, although the associated net exposure is not materially reduced

## Reset features

Reset features allow the periodic resetting of an exposure

## Collateral

This segment explains the role of collateral (or “margin”) in reducing counterparty risk beyond the benefit achieved with netting and other methods

Collateral is an asset supporting a risk in a legally enforceable way

## Fundamentals of OTC derivatives collateralization

The fundamental idea of OTC derivatives collateralization is that:

Cash or securities are passed from one party to another primarily as a means to reduce counterparty risk

Whilst break clauses and resets can provide some risk-mitigating benefit in these situations, collateral is a more dynamic and generic concept

A break clause can be seen as a single payment of collateral and cancellation of the transaction

A reset feature is essentially the periodic (typically infrequent) payment of collateral to neutralize an exposure

## Credit Exposure and Funding

This segment is concerned with defining exposure in more detail and explaining the key characteristics

Exposure is the key determinant in xVA because it represents the core value that may be at risk in default scenarios and that otherwise needs to be funded

There is a link between credit exposure and funding costs that are driven by similar components but have some different features

This is especially true when aspects such as segregation are involved

Funding exposure is similar to credit exposure but has some distinct differences

A defining feature of counterparty risk arises from the asymmetry of potential losses with respect to the value of the underlying transaction

A key feature of counterparty risk is that it is bilateral:

Both parties to a transaction can default and therefore both can experience losses

## Counterparty Risk Intermediation

This segment concerns counterparty risk mitigation via entities acting as intermediators and/or offering guarantees or insurance in relation to default events

We consider the role of central counterparties (CCPs) in mitigating OTC derivative counterparty risk

This is a key element due to the regulatory mandate regarding clearing of standardized OTC derivatives

Exchange-traded derivatives have long used CCPs to control counterparty risk

The concept of counterparty risk intermediation is where a third-party guarantor intermediates and guarantees the performance of one or both counterparties with the aim of reducing counterparty risk

Clearly the guarantor will need an enhanced credit quality for this to be beneficial

## Credit and Debt Value Adjustments

This section introduces: CVA (credit or counterparty value adjustment) and DVA (debt or debit value adjustment)

Under fairly standard assumptions, CVA and DVA can be defined in a straightforward way via credit exposure and default probability

CVA has become a key topic for banks in recent years due to the volatility of credit spreads and the associated accounting (e.g. IFRS 13) and capital requirements (Basel III)

Whilst CVA calculations are a major concern for banks, they are also relevant for other financial institutions and corporations that have significant amounts of OTC derivatives to hedge their economic risks

CVA and DVA should only be ignored for financial reporting if they are immaterial, which is not the case for any significant OTC derivative user

A key and common assumption made in this section is that credit exposure and default probability are independent

This segment describes the calculation and computation of CVA under the commonly made simplification of no wrong-way risk, which assumes that the credit exposure, default of the counterparty and recovery rate are not related
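Under those simplifying assumptions, CVA reduces to a discounted sum of expected exposure times marginal default probability, scaled by loss given default. A minimal sketch (the time grid, the 60% LGD default, and the names are illustrative):

```python
def cva(ee, pd_marginal, df, lgd=0.6):
    """Unilateral CVA with no wrong-way risk:
    CVA = LGD * sum_t EE(t) * PD(t) * DF(t),
    with expected exposure EE, marginal default probability PD per
    period, and discount factors DF, assuming exposure and default
    are independent."""
    return lgd * sum(e * p * d for e, p, d in zip(ee, pd_marginal, df))
```

With a flat expected exposure of 100 over two periods, a 1% marginal default probability per period, no discounting, and 60% LGD, the CVA is 1.2.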

## Wrong-way Risk

WWR is the phrase generally used to indicate an unfavorable dependence between exposure and counterparty credit quality

The exposure is high when the counterparty is more likely to default and vice versa

WWR is difficult to identify, model and hedge due to the often subtle macro-economic and structural effects that cause it

Whilst it may often be a reasonable assumption to ignore WWR, its manifestation can be potentially dramatic

## Right-way Risk

In contrast, “right-way” risk can also exist in cases where the dependence between exposure and credit quality is a favorable one

Right-way situations will reduce counterparty risk and CVA

## The Evolution of Stress Testing Counterparty Exposures

The call for better stress testing of counterparty credit risk exposures has been a common occurrence from both regulators and industry in response to various financial crises experienced over the decades

Unfortunately, statistical measures have progressed more rapidly than stress testing

This segment shows how stress testing may be improved by building off the development of the statistical measures

## Counterparty risk measurement

The measurement of counterparty risk has developed by viewing the risk as a credit risk and as a market risk

Methods used to stress-test counterparty risk can be described from both a credit risk perspective and from a market risk perspective

These stress tests are considered from both a portfolio perspective and individual counterparty perspective

The measurement and management of counterparty credit risk (CCR) has evolved rapidly since the late 1990s

CCR may well be the fastest-changing part of financial risk management over this period

In the wake of the Long-Term Capital Management crisis, the Counterparty Risk Management Policy Group cited deficiencies in these areas and also called for use of better measures of CCR

As computer technology advanced, the ability to model CCR developed quickly and allowed assessments of how the risk would change in the future

In today’s world, a counterparty credit risk manager has a multiplicity of stress tests to consider

## Too many stress tests can hide the risk of a portfolio, but…

A fair number of stresses are important to develop a comprehensive view of the risks in the portfolio

Both the credit risk and market risk views are important since both fair-value losses and default losses can occur no matter how a financial institution manages its CCR

## Generating integrated stress tests

More integrated stress tests can be generated by: Combining the credit risk view with the loan portfolio, or combining the market risk view of CCR with the trading book

The true difficulty remains in combining the default stresses and the fair-value stresses to get a single comprehensive stress test

## Credit Scoring and Retail Credit Risk Management

This section examines credit risk in retail banking

Retail banking has been transformed over the last few years by innovations in products, marketing, and risk management

However, poorly controlled sub-prime lending in the U.S. mortgage market provided the fuel for the disastrous failures of the U.S. securitization industry in the run-up to the financial crisis of 2007–2009

Credit scoring is now a widespread technique, not only in banking but also in many other sectors
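To illustrate the mechanics, a points-based scorecard can be sketched as follows. The attributes, categories, and point values below are invented purely for illustration and do not come from any real scorecard.

```python
# Hypothetical scorecard: each borrower attribute maps to a point value,
# and the score is the base score plus the sum of attribute points.
SCORECARD = {
    "payment_history": {"clean": 120, "one_late": 60, "several_late": 0},
    "utilization": {"low": 80, "medium": 40, "high": 0},
    "years_on_file": {"10+": 60, "3-9": 30, "<3": 10},
}
BASE_SCORE = 400

def score_applicant(attributes: dict) -> int:
    """Return the total score for an applicant's attribute categories."""
    return BASE_SCORE + sum(SCORECARD[k][v] for k, v in attributes.items())

print(score_applicant({"payment_history": "clean",
                       "utilization": "low",
                       "years_on_file": "10+"}))  # 660
```

In practice the point values are estimated statistically (e.g., from a logistic regression of default outcomes on borrower characteristics), not set by hand as here.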

Fixed-rate mortgages and adjustable-rate mortgages (ARMs) are secured by the residential properties financed by the loan

The loan-to-value ratio (LTV) represents the proportion of the property value financed by the loan and is a key risk variable
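The LTV calculation itself is simple, as the sketch below shows; the loan and property figures are hypothetical.

```python
# Loan-to-value ratio: proportion of the property value financed by the loan.
def loan_to_value(loan_amount: float, property_value: float) -> float:
    if property_value <= 0:
        raise ValueError("property value must be positive")
    return loan_amount / property_value

# A $240,000 loan against a $300,000 property gives an LTV of 80%.
ltv = loan_to_value(240_000, 300_000)
print(f"LTV = {ltv:.0%}")  # LTV = 80%
```

A higher LTV means a thinner equity cushion for the borrower, which is why LTV is a key risk variable: the closer LTV is to (or above) 100%, the more likely the borrower is to be underwater if property values fall.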

## HELOC loans

Home equity loans are sometimes called home equity line of credit (HELOC) loans

• These can be considered a hybrid between a consumer loan and a mortgage loan
• They are secured by residential properties

## The purpose of analytical methods

Throughout loan servicing, analytical methods are used to:

• Anticipate consumer behavior or payment patterns
• Determine opportunities for cross-selling
• Assess prepayment risk
• Identify any fraudulent transactions
• Optimize customer relationship management
• Prioritize collection efforts

## Finding the optimal strategy

Risk-based pricing can be used to analyze trade-offs and to determine the “optimal” multitier, risk-based pricing strategy
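A multitier, risk-based pricing strategy can be sketched as a schedule of rate spreads keyed to credit-score bands. The bands and spreads below are invented for illustration and are not taken from any actual pricing model.

```python
# Hypothetical multi-tier pricing schedule: (minimum score, spread over
# the funding rate). Ordered from the best band down.
PRICING_TIERS = [
    (720, 0.015),  # score >= 720: 150 bp spread
    (660, 0.030),  # 660-719: 300 bp
    (600, 0.055),  # 600-659: 550 bp
    (0,   0.090),  # below 600: 900 bp
]

def risk_based_rate(score: int, funding_rate: float = 0.04) -> float:
    """Return the offered lending rate for a given credit score."""
    for floor, spread in PRICING_TIERS:
        if score >= floor:
            return funding_rate + spread
    return funding_rate + PRICING_TIERS[-1][1]

print(f"Score 700 -> {risk_based_rate(700):.1%}")  # Score 700 -> 7.0%
```

The trade-off analysis then weighs, for each tier, the extra margin earned against the expected losses and the volume of applicants lost to competitors at the higher rate.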

Every new product or marketing technology introduces the danger that a systematic risk will be introduced into the credit portfolio

That is, a common risk factor that causes losses to rise unexpectedly high once the economy or consumer behavior moves into a new configuration

Scoring models are a tool that must be applied with a considerable dose of judgment, based on a deep understanding of each consumer product

## The Credit Transfer Markets and their implications

Securitization is a financing technique whereby a company, the originator, arranges for the issuance of securities whose cash flows are based on the revenues of a segregated pool of assets

Examples include: corporate investment-grade loans, leveraged loans, mortgages, and other asset-backed securities (ABS) such as auto loans and credit card receivables

## Asset origination

Assets are originated by the originator(s) and funded on the originator’s balance sheet

• A suitably large portfolio of assets will be originated
• The assets are analyzed as a portfolio
• They are then sold or assigned to a bankruptcy-remote company (i.e., a special purpose vehicle (SPV) company formed for the specific purpose of funding the assets)

In the aftermath of the 2007 crisis, the picture going forward for credit transfer markets and strategies is mixed:

• New credit risk transfer strategies are appearing
• There is a trend for insurance companies to purchase loans from banks to build asset portfolios that match their long-term liabilities
• The high capital costs associated with post-crisis reforms (e.g., Basel III) suggest that the “buy and hold” model of banking will remain a relatively inefficient way for banks to manage the credit risk that lending and other banking activities generate

## Tranching

Tranching is the process of creating notes of various seniorities and risk profiles, including senior and mezzanine tranches and an equity (or first loss) piece

As a result of the prioritization scheme (i.e. the “waterfall”) used in the distribution of cash flows to the tranche holders, the most senior tranches are far safer than the average asset in the underlying pool

## Senior tranches

Senior tranches are insulated from default risk up to the point where credit losses deplete the more junior tranches

Losses on the mortgage loan pool are first applied to the most junior tranche until the principal balance of that tranche is completely exhausted

Then losses are allocated to the most junior tranche remaining, and so on
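The bottom-up allocation described above can be sketched as a simple loss waterfall; the tranche names and sizes below are hypothetical.

```python
# Minimal sketch of the loss "waterfall": pool losses hit the most junior
# tranche first and move up only once that tranche is exhausted.
def allocate_losses(tranches, pool_loss):
    """tranches: list of (name, principal), ordered senior to junior.
    Returns a dict mapping tranche name to the loss it absorbs."""
    losses = {}
    remaining = pool_loss
    for name, principal in reversed(tranches):  # junior tranches first
        hit = min(principal, remaining)
        losses[name] = hit
        remaining -= hit
    return losses

tranches = [("senior", 80.0), ("mezzanine", 15.0), ("equity", 5.0)]
print(allocate_losses(tranches, 12.0))
# equity absorbs 5.0, mezzanine absorbs 7.0, senior absorbs 0.0
```

With a 12% pool loss, the 5% equity piece is wiped out, the mezzanine tranche absorbs the remaining 7%, and the senior tranche is untouched, which is why senior tranches are far safer than the average asset in the pool.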

The performance of these securities is directly linked to the performance of the assets and, in principle, there is no recourse back to the originator

## Tools for transferring credit risk

Credit derivatives and securitization are key tools for the transfer and management of credit risk and for the provision of bank funding

Basel III and the Dodd-Frank Act are likely to raise the cost of capital for banks

Banks may, in the longer term, have no alternative other than to adopt the originate-to-distribute business model and use credit derivatives and other risk transfer techniques to redistribute and repackage credit risk

## Securitization

Securitization is a well-established practice in the global debt capital markets

It refers to the sale of assets, which generate cash flows from the institution that owns the assets, to another company that has been specifically set up for the purpose of acquiring them, and the issuing of notes by this second company

## Synthetic securitization

Synthetic securitization is a generic term covering structured financial products that use credit derivatives in their construction

The motivations behind the origination of synthetic structured products sometimes differ from those of cash flow ones

Both product types are aimed at institutional investors

Securitization allows institutions such as banks and corporations to convert assets that are not readily marketable, such as residential mortgages or car loans, into rated securities that are tradable in the secondary market

The investors that buy these securities gain exposure to these types of original assets that they would not otherwise have access to

## Why securitization?

The driving force behind securitization has been the need for banks to realize value from the assets on their balance sheet

Typically, these assets are residential mortgages, corporate loans and retail loans such as credit card debt

A bank typically securitizes part of its balance sheet for one or more of the following reasons:

• Funding the assets it owns
• Balance sheet capital management
• Risk management
• Credit risk transfer

## Securitization of Subprime Mortgage Credit

The typical subprime trust has the following structural features designed to protect investors from losses on the underlying mortgage loans:

• Subordination
• Shifting interest
• Performance triggers
• Interest rate swap

Credit rating agencies play an important role in resolving or at least mitigating several of these frictions

However, it is important to understand that repairing the securitization process does not end with the rating agencies

The incentives of investors and investment managers need to be aligned

Structured investments should be evaluated relative to an index of structured products in order to give the manager appropriate incentives to conduct his own due diligence