The power of value-at-risk lies in its generality. Unlike market risk metrics such as the Greeks, duration and convexity, or beta, which are applicable to only certain asset categories or certain sources of market risk, value-at-risk is general. It is based on the probability distribution for a portfolio’s market value. All liquid assets have uncertain market values, which can be characterized with probability distributions. All sources of market risk contribute to those probability distributions. Being applicable to all liquid assets and encompassing, at least in theory, all sources of market risk, value-at-risk is a broad metric of market risk.

The generality of value-at-risk poses a computational challenge. In order to measure market risk in a portfolio using value-at-risk, some means must be found for determining the probability distribution of that portfolio’s market value. Obviously, the more complex a portfolio is—the more asset categories and sources of market risk it is exposed to—the more challenging that task becomes.

It is worth distinguishing two concepts:

- A **value-at-risk measure** is an algorithm with which we calculate a portfolio’s value-at-risk.
- A **value-at-risk metric** is our interpretation of the output of the value-at-risk measure.

A value-at-risk metric, such as one-day 90% USD VaR, is specified with three items:

- a time horizon;
- a probability;
- a currency.

A value-at-risk measure calculates an amount of money, measured in *that* currency, such that there is *that* probability of the portfolio not losing *that* amount of money over *that* horizon. In the terminology of mathematics, this is called a quantile, so one-day 90% USD VaR is just the .90-quantile of a portfolio’s one-day loss.

This is worth emphasizing: value-at-risk is *a quantile of loss*. The task of a value-at-risk measure is to calculate such a quantile.
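As a minimal numerical sketch (the simulated loss data and use of NumPy are assumptions for illustration, not from the text), the .90-quantile of a sample of one-day losses can be computed directly:

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# Hypothetical one-day losses (in USD) for a portfolio, e.g. output of
# some simulation; positive values are losses, negative are profits.
losses = rng.normal(loc=0.0, scale=100_000.0, size=10_000)

# One-day 90% USD VaR is the .90-quantile of the loss distribution.
var_90 = np.quantile(losses, 0.90)
print(f"one-day 90% USD VaR: {var_90:,.0f} USD")
```

With normally distributed losses of standard deviation 100,000 USD, the result lands near 128,000 USD, the theoretical .90-quantile.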

For a given value-at-risk metric, measure time in units of the value-at-risk horizon. Let time 0 be now, so time 1 represents the end of the horizon. We know a portfolio’s current market value ^{0}*p*. Its market value ^{1}*P* at the end of the horizon is unknown. Define portfolio loss ^{1}*L* as

^{1}*L* = ^{0}*p* – ^{1}*P*

[1]

If ^{0}*p* exceeds ^{1}*P*, the loss will be positive. If ^{0}*p* is less than ^{1}*P*, the loss will be negative, which is another way of saying the portfolio makes a profit.

Because we don’t know the portfolio’s future value ^{1}*P*, we don’t know its loss ^{1}*L*. Both are random variables, and we can assign them probability distributions. That is exactly what a value-at-risk measure does. It assigns a distribution to ^{1}*P* and/or ^{1}*L*, so it can calculate the desired quantile of ^{1}*L*. Most typically, value-at-risk measures work directly with the distribution of ^{1}*P* and use that to infer the quantile of ^{1}*L*. This is illustrated in Exhibit 1 for a 90% VaR metric.

Exhibit 1 shows how the .90-quantile of ^{1}*L* (the portfolio’s value-at-risk) can be obtained as the .10-quantile of ^{1}*P* minus the portfolio’s current value ^{0}*p*. Other value-at-risk metrics can be valued similarly. So if we know the distribution for ^{1}*P*, calculating value-at-risk is easy. The challenge for any value-at-risk measure is constructing that distribution of ^{1}*P*. Value-at-risk measures do so in various ways, but all practical value-at-risk measures share certain features described below.
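The equivalence described above can be checked numerically. In this sketch (the portfolio value and its distribution are hypothetical), the .90-quantile of loss equals the current value ^{0}*p* minus the .10-quantile of ^{1}*P*:

```python
import numpy as np

rng = np.random.default_rng(seed=7)

p_0 = 1_000_000.0  # current portfolio value 0p, in USD (hypothetical)

# Hypothetical distribution of the portfolio's end-of-horizon value 1P,
# e.g. as constructed by a value-at-risk measure.
p_1 = p_0 * (1.0 + rng.normal(loc=0.0, scale=0.02, size=50_000))

# Two equivalent calculations of 90% VaR:
var_from_p = p_0 - np.quantile(p_1, 0.10)   # from the distribution of 1P
var_from_l = np.quantile(p_0 - p_1, 0.90)   # as the .90-quantile of 1L
print(var_from_p, var_from_l)
```

The two numbers agree, illustrating why a measure can work with the distribution of ^{1}*P* and still deliver a quantile of ^{1}*L*.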

Because value-at-risk measures are probabilistic, they deal with various random financial variables. Three types are particularly significant and are given standard notation:

- a portfolio value ^{1}*P*;
- asset values ^{1}*S*_{i}; and
- key factors ^{1}*R*_{i}.

We have already discussed portfolio value ^{1}*P*, which is the portfolio’s market value at time 1—the end of the value-at-risk horizon. It has current value ^{0}*p*. Mathematically, a portfolio is defined as an ordered pair (^{0}*p*,^{1}*P*).

**Asset values** ^{1}*S*_{i} represent the accumulated value at time 1 of individual assets held by the portfolio. Individual assets might be stocks, bonds, futures, options or other instruments. Current asset values are denoted ^{0}*s*_{i}. Mathematically, we define an asset as an ordered pair (^{0}*s*_{i}, ^{1}*S*_{i}). The *m* asset values ^{1}*S*_{i} comprise an ordered set (or “vector”) called the **asset vector**, which we denote ^{1}**S**:

^{1}**S** = (^{1}*S*_{1}, ^{1}*S*_{2}, … , ^{1}*S*_{m})

[2]

Its current value ^{0}**s** is the ordered set of current asset values ^{0}*s*_{i}.

**Key factors** ^{1}*R*_{i} represent values at time 1 of financial variables that can be used to value the assets. Depending on the composition of the portfolio, key factors might represent exchange rates, interest rates, commodity prices, spreads, implied volatilities, etc. The *n* key factors ^{1}*R*_{i} comprise an ordered set called the **key vector**, which we denote ^{1}**R**:

^{1}**R** = (^{1}*R*_{1}, ^{1}*R*_{2}, … , ^{1}*R*_{n})

[3]

Value-at-risk measures utilize not only the current value ^{0}**r** of the key vector but also other historical values ^{–1}**r**, ^{–2}**r**, ^{–3}**r**, … , ^{–α}**r**.

Where are we going with this? The quantities ^{1}*P*, ^{1}*S*_{i} and ^{1}*R*_{i} are all random. But the portfolio’s value ^{1}*P* is a function of the values ^{1}*S*_{i} of the assets it holds. Those in turn are functions of the key factors ^{1}*R*_{i}. For example, a bond portfolio’s value ^{1}*P* is a function of the values ^{1}*S*_{i} of the individual bonds it holds. Their values are in turn functions of applicable interest rates ^{1}*R*_{i}. Because a function of a function is a function, ^{1}*P* is a function θ of ^{1}**R**:

^{1}*P* = θ(^{1}**R**)

[4]

Value-at-risk measures apply time series analysis to historical data ^{0}**r**, ^{–1}**r**, ^{–2}**r**, … , ^{–α}**r** to construct a joint probability distribution for ^{1}**R**. They then exploit the functional relationship θ between ^{1}*P* and ^{1}**R** to convert that joint distribution into a distribution for ^{1}*P*. From that distribution for ^{1}*P*, value-at-risk is calculated, as illustrated in Exhibit 1 above.

Let’s formalize this. Exhibit 2 summarizes the components common to all practical value-at-risk measures:

A value-at-risk measure accepts two inputs:

- historical data ^{0}**r**, ^{–1}**r**, ^{–2}**r**, … , ^{–α}**r** for ^{1}**R**, and
- the portfolio’s holdings **ω**.

The portfolio holdings comprise a row vector **ω** whose components indicate the number of units held of each asset. For example, if a portfolio holds 1000 shares of IBM stock, 5000 shares of Google stock and a short position of 3000 shares of Microsoft stock, its holdings are

**ω** = (1000 5000 –3000)

[5]

The two inputs—historical data and portfolio holdings—are processed separately by two procedures within the value-at-risk measure:

- An inference procedure applies methods of time series analysis to the historical data ^{0}**r**, ^{–1}**r**, ^{–2}**r**, … , ^{–α}**r** to construct a joint distribution for ^{1}**R**.
- A mapping procedure uses the portfolio’s holdings **ω** to construct a function θ such that ^{1}*P* = θ(^{1}**R**).

The mapping procedure uses a set of pricing functions φ_{i} that value each asset ^{1}*S*_{i} in terms of ^{1}**R**:

^{1}*S*_{i} = φ_{i}(^{1}**R**)

[6]

For example, if asset ^{1}*S*_{1} is a bond, pricing formula φ_{1} will be a bond pricing formula. If asset ^{1}*S*_{2} is an equity option, pricing formula φ_{2} will be an equity option pricing formula. A functional relationship ^{1}*P* = θ(^{1}**R**) is then defined as a weighted sum of the pricing formulas φ_{i}, with the weights being the holdings **ω**_{i}:

^{1}*P* = **ω**_{1} ^{1}*S*_{1} + **ω**_{2} ^{1}*S*_{2} + … + **ω**_{m} ^{1}*S*_{m}

[7]

= **ω**_{1}φ_{1}(^{1}**R**) + **ω**_{2}φ_{2}(^{1}**R**) + … + **ω**_{m}φ_{m}(^{1}**R**)

[8]
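A toy sketch of a primary mapping may help. The pricing functions and holdings below are hypothetical, chosen only to show the structure of a weighted sum of pricing formulas:

```python
import numpy as np

# Hypothetical pricing functions phi_i: each values one asset in terms
# of the key vector 1R = (equity price, one-year interest rate).
def phi_1(r):
    # A stock: its value is the equity price itself.
    return r[0]

def phi_2(r):
    # A zero-coupon bond paying 100 in one year, discounted at r[1].
    return 100.0 / (1.0 + r[1])

# Holdings vector omega: long 1000 shares, short 3000 bonds (hypothetical).
holdings = np.array([1000.0, -3000.0])

def theta(r):
    """Primary mapping: 1P = omega_1*phi_1(1R) + omega_2*phi_2(1R)."""
    return holdings[0] * phi_1(r) + holdings[1] * phi_2(r)

r = np.array([105.0, 0.04])  # one hypothetical realization of 1R
print(theta(r))
```

Given a realization of ^{1}**R**, the mapping prices each asset and sums the weighted results, exactly as in the formula above.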

This is called a primary mapping. If a portfolio is large or holds complex instruments, such as derivatives or mortgage-backed securities, a primary mapping may be computationally expensive to value. Many mapping procedures replace a primary mapping θ with a simpler approximation. Such approximations are called remappings. They can take many forms. Two common examples are remappings that are constructed, using the method of least squares, as either a linear polynomial or quadratic polynomial approximation of θ. Such remappings are called, respectively, linear remappings and quadratic remappings.
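As an illustration of a linear remapping (the mapping, the scenario distribution, and the use of NumPy’s least-squares routine are all assumptions for this sketch), one can fit a linear polynomial to θ over sampled key-vector scenarios:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def theta(r):
    """Hypothetical (nonlinear) primary mapping 1P = theta(1R)."""
    return 50.0 * r[:, 0] - 0.4 * r[:, 0] ** 2 + 20.0 * r[:, 1]

# Sample key-vector scenarios around a hypothetical current value 0r.
scenarios = rng.normal(loc=[100.0, 5.0], scale=[2.0, 0.5], size=(1000, 2))
values = theta(scenarios)

# Least-squares fit of a linear polynomial a + b1*R1 + b2*R2 to theta.
design = np.column_stack([np.ones(len(scenarios)), scenarios])
coef, *_ = np.linalg.lstsq(design, values, rcond=None)

approx = design @ coef  # the linear remapping, evaluated on the scenarios
rmse = np.sqrt(np.mean((values - approx) ** 2))
print(coef, rmse)
```

The fitted linear polynomial is far cheaper to evaluate than θ, at the cost of the residual error measured by the RMSE; a quadratic remapping would add squared and cross terms to the design matrix.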

Most of the literature on value-at-risk is either elementary or theoretical, so remappings receive little mention. This is unfortunate. As a practical tool for making production value-at-risk measures tractable, remappings can be indispensable.

Returning to Exhibit 2, we have discussed the two inputs to a value-at-risk measure as well as the inference procedure and mapping procedure that process these. If you think about it, the two outputs of those procedures correspond to the two components of risk. As explained by Holton (2004), every risk has two components:

- uncertainty
- exposure

In the context of market risk, we are *uncertain* if we don’t know what will happen in the markets. We are *exposed* if we have holdings in instruments traded in those markets. A value-at-risk measure characterizes uncertainty with the joint distribution for ^{1}**R** constructed by its inference procedure. It characterizes exposure with the portfolio mapping θ constructed by its mapping procedure. A value-at-risk measure must combine those two components to measure a portfolio’s market risk, and it does so with a transformation procedure.

A transformation procedure accepts as inputs

- a joint distribution for ^{1}**R**, and
- a portfolio mapping θ, which can be either a primary mapping or a remapping.

It uses these to construct a distribution for ^{1}*P* from which it calculates the portfolio’s value-at-risk.

Transformation procedures take various forms, but there are essentially three types:

- Linear transformation procedures apply if the portfolio mapping θ is a linear polynomial. They employ a standard formula from probability theory for calculating the variance of a linear polynomial of a random vector. For certain asset categories, such as equities or futures, primary mappings can be linear polynomials. Alternatively, θ may be a linear remapping.
- Quadratic transformation procedures apply if the portfolio mapping θ is a quadratic polynomial and the joint distribution of ^{1}**R** is joint-normal. Primary mappings are almost never quadratic polynomials, so quadratic transformations assume use of a quadratic remapping.
- Monte Carlo transformation procedures employ the Monte Carlo method and are applicable to all portfolio mappings. This advantage comes with potentially significant computational expense, as Monte Carlo transformation procedures entail revaluing the portfolio under numerous scenarios. A subcategory of Monte Carlo transformation procedures do not randomly generate scenarios but instead construct them directly from historical data for ^{1}**R**. These are called historical transformation procedures.
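A compressed sketch of a Monte Carlo transformation procedure follows. The joint-normal distribution for ^{1}**R**, the toy mapping θ, and the approximation of ^{0}*p* by θ evaluated at the mean are all assumptions for illustration, not a production measure:

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Joint distribution for 1R, as an inference procedure might construct
# (here simply assumed joint-normal with a given mean and covariance).
mean = np.array([100.0, 0.05])
cov = np.array([[4.0, 0.0],
                [0.0, 0.0001]])

def theta(r):
    """Hypothetical portfolio mapping 1P = theta(1R)."""
    return 1000.0 * r[:, 0] + 500_000.0 * np.exp(-r[:, 1])

# Approximate current value 0p by valuing at the mean key vector.
p_0 = float(theta(mean[np.newaxis, :])[0])

# Monte Carlo transformation: generate scenarios for 1R, revalue the
# portfolio in each, then take the .90-quantile of loss 1L = 0p - 1P.
scenarios = rng.multivariate_normal(mean, cov, size=20_000)
p_1 = theta(scenarios)
var_90 = np.quantile(p_0 - p_1, 0.90)
print(var_90)
```

A historical transformation procedure would differ only in the scenario-generation step, drawing the scenarios from the historical data ^{0}**r**, ^{–1}**r**, … rather than from a fitted distribution.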

Elementary treatments of value-at-risk often mention “methods” for calculating value-at-risk. Mostly, these reference the transformation procedures used. For example, the terms “parametric method” or “variance-covariance method” refer to value-at-risk measures that employ a linear transformation procedure. The “delta-gamma method” refers to those that use a quadratic transformation procedure. The “Monte Carlo method” and “historical method” refer, of course, to value-at-risk measures that use Monte Carlo or historical transformation procedures.

For a deeper discussion of value-at-risk, or for worked examples of actual value-at-risk measures, see my book *Value-at-Risk: Theory and Practice*. I distribute the latest edition free online at http://value-at-risk.net.

## References

- Holton, Glyn A. (2004). Defining risk, *Financial Analysts Journal*, 60 (6), 19–25.
- Holton, Glyn A. (2014). *Value-at-Risk: Theory and Practice*, 2nd ed., e-book at http://value-at-risk.net.
