The power of value-at-risk lies in its generality. Unlike market risk metrics such as the Greeks, duration or beta, which are applicable to only certain asset categories or certain sources of market risk, value-at-risk is general. It is based on the probability distribution for a portfolio’s market value. All liquid assets have uncertain market values, which can be characterized with probability distributions. All sources of market risk contribute to those probability distributions. Being applicable to all liquid assets and encompassing, at least in theory, all sources of market risk, value-at-risk is a broad metric of market risk.

The generality of value-at-risk poses a computational challenge. In order to measure market risk in a portfolio using value-at-risk, some means must be found for determining the probability distribution of that portfolio’s market value. Obviously, the more complex a portfolio is—the more asset categories and sources of market risk it is exposed to—the more challenging that task becomes.

## Value-at-Risk as a Quantile of Loss

It is worth distinguishing two concepts:

- A **value-at-risk measure** is an algorithm with which we calculate a portfolio’s value-at-risk.
- A **value-at-risk metric** is our interpretation of the output of the value-at-risk measure.

A value-at-risk metric, such as one-day 90% USD VaR, is specified with three items:

- a time horizon;
- a probability;
- a currency.

A value-at-risk measure calculates an amount of money, measured in *that* currency, such that there is *that* probability of the portfolio not losing more than *that* amount of money over *that* time horizon. In the terminology of mathematics, this is called a quantile, so one-day 90% USD VaR is just the 90% quantile of a portfolio’s one-day loss in US dollars.

This is worth emphasizing: value-at-risk is *a quantile of loss*. The task of a value-at-risk measure is to calculate such a quantile.
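As a toy illustration (all numbers here are made up, not from the text), the 90% quantile of loss can be read directly off a sample of simulated one-day losses:

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Hypothetical sample of 100,000 simulated one-day portfolio losses in USD
# (positive values are losses, negative values are profits); the normal
# distribution and its scale are invented for illustration.
losses = rng.normal(loc=0.0, scale=1_000_000.0, size=100_000)

# One-day 90% USD VaR is the 90% quantile of loss.
var_90 = np.quantile(losses, 0.90)
print(f"one-day 90% USD VaR: {var_90:,.0f}")
```

With these invented numbers the result lands near 1.28 million USD, the 90% quantile of a normal distribution with a one-million standard deviation.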

## Value-at-Risk: Preliminary Definitions

For a given value-at-risk metric, measure time in units—days, weeks, months, etc.—equal to the time horizon. Let time 0 be now, so time 1 represents the end of the horizon. We know a portfolio’s current market value ^{0}*p*. Its market value ^{1}*P* at the end of the horizon is unknown.

Here, as in other contexts, I use the convention that unknown (i.e. random) quantities are capitalized while known quantities are lower-case. Preceding superscripts indicate time, so ^{0}*p* is the portfolio’s known current value, and ^{1}*P* is its unknown market value at the end of the horizon – at time *t* = 1.

Define portfolio loss ^{1}*L* as

^{1}*L* = ^{0}*p* – ^{1}*P*

[1]

If ^{0}*p* exceeds ^{1}*P*, the loss will be positive. If ^{0}*p* is less than ^{1}*P*, the loss will be negative, which is another way of saying the portfolio makes a profit.

## Calculating Value-at-Risk as a Quantile of Loss

Because we don’t know the portfolio’s future value ^{1}*P*, we don’t know its loss ^{1}*L*. Both are random variables, and we can assign them probability distributions. That is exactly what a value-at-risk measure does. It assigns a distribution to ^{1}*P* and/or ^{1}*L*, so it can calculate the desired quantile of ^{1}*L*. Most typically, value-at-risk measures work directly with the distribution of ^{1}*P* and use that to infer the quantile of ^{1}*L*.

This is illustrated in Exhibit 1 for a 90% VaR metric. Working with the probability distribution of ^{1}*P*, first the 10% quantile of ^{1}*P* is found. Then, subtracting this from the portfolio’s current market value ^{0}*p* gives the 90% quantile of ^{1}*L*. This is the portfolio’s value-at-risk – the amount of money such that there is a 90% probability that the portfolio will either make a profit or lose less than that amount.

**Exhibit 1:** A portfolio’s 90% VaR is the amount of money such that there is a 90% probability of the portfolio losing less than that amount of money—the 90% quantile of ^{1}*L*. This exhibit illustrates how that quantity can be calculated as the portfolio’s current value ^{0}*p* minus the 10% quantile of ^{1}*P*. Other value-at-risk metrics can be valued similarly.

So if we know the distribution for ^{1}*P*, calculating value-at-risk is easy. The challenge for any value-at-risk measure is constructing that distribution of ^{1}*P*. Value-at-risk measures do so in various ways, but all practical value-at-risk measures share certain features described below.
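A minimal sketch of the calculation in Exhibit 1, assuming a made-up simulated distribution for ^{1}*P*:

```python
import numpy as np

rng = np.random.default_rng(seed=7)

p0 = 10_000_000.0   # current portfolio value 0p in USD (made up)

# Hypothetical simulated distribution for the portfolio's value 1P at the
# end of the horizon; the return distribution is invented for illustration.
p1 = p0 * (1.0 + rng.normal(0.0005, 0.01, size=200_000))

q10 = np.quantile(p1, 0.10)   # 10% quantile of 1P
var_90 = p0 - q10             # 90% quantile of loss 1L = 0p - 1P
print(f"90% VaR: {var_90:,.0f}")
```

Equivalently, the same number is obtained by taking the 90% quantile of the loss sample ^{0}*p* – ^{1}*P* directly.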

## Risk Factors

Because value-at-risk measures are probabilistic, they deal with various random financial variables. Three types are particularly significant and are given standard notation:

- a portfolio value ^{1}*P*;
- asset values ^{1}*S _{i}*; and
- key factors ^{1}*R _{i}*.

We have already discussed portfolio value ^{1}*P*, which is the portfolio’s market value at time 1—the end of the value-at-risk horizon. The portfolio has current value ^{0}*p*.

**Asset values** ^{1}*S _{i}* represent the accumulated value at time 1 of individual assets that might be held by the portfolio at time 0. Individual assets might be stocks, bonds, futures, options or other instruments. Let *m* be the total number of assets to be modeled. The *m* asset values ^{1}*S _{i}* comprise an ordered set (an *m*-dimensional vector) called the **asset vector**, which we denote ^{1}**S**. Its current value ^{0}**s** is the ordered set of asset current values ^{0}*s _{i}*. I am using the notation convention of making multivariate quantities – vectors or matrices – bold.

^{1}**S** = (^{1}*S*_{1}, ^{1}*S*_{2}, … , ^{1}*S _{m}*)

[2]

**Key factors** ^{1}*R _{i}* represent values at time 1 of financial variables that can be used to value the assets. Depending on the composition of the portfolio, key factors might represent exchange rates, interest rates, commodity prices, spreads, implied volatilities, etc. The *n* key factors ^{1}*R _{i}* comprise an ordered set called the **key vector**, which we denote ^{1}**R**. This has current value ^{0}**r**:

^{1}**R** = (^{1}*R*_{1}, ^{1}*R*_{2}, … , ^{1}*R _{n}*)

[3]

Past values of the key vector are also required:

^{–1}**r**, ^{–2}**r**, … , ^{–α}**r**

[4]

Together, current and past values for the key vector, ^{0}**r**, ^{–1}**r**, ^{–2}**r**, … , ^{–α}**r**, are called **historical market data**.

## Calculating Value-at-Risk: The Big Picture

Where are we going with this? The quantities ^{1}*P*, ^{1}*S _{i}* and ^{1}*R _{i}* are all random. But the portfolio’s value ^{1}*P* is a function of the values ^{1}*S _{i}* of the assets it holds, and those in turn are functions of the key factors ^{1}*R _{i}*. For example, a Treasury bond portfolio’s value ^{1}*P* is a function of the values ^{1}*S _{i}* of the individual bonds it holds, and their values are in turn functions of applicable interest rates ^{1}*R _{i}*. Because a function of a function is a function, ^{1}*P* is a function θ of ^{1}**R**:

^{1}*P* = θ(^{1}**R**)

[5]

Value-at-risk measures apply time series analysis to historical data ^{0}**r**, ^{–1}**r**, ^{–2}**r**, … , ^{–α}**r** to construct a joint probability distribution for ^{1}**R**. They then exploit the functional relationship θ between ^{1}*P* and ^{1}**R** to convert that joint distribution into a distribution for ^{1}*P*. From that distribution for ^{1}*P*, value-at-risk is calculated, as illustrated in Exhibit 1 above.

Exhibit 2 summarizes the components common to all practical value-at-risk measures. We describe those components next.

**Exhibit 2:** All practical value-at-risk measures accept portfolio holdings and historical market data as inputs. They process these with a mapping procedure, inference procedure, and transformation procedure. Output comprises the value of a value-at-risk metric. That value is the value-at-risk measurement.

## Value-at-Risk Inputs

A value-at-risk measure accepts two inputs:

- historical data ^{0}**r**, ^{–1}**r**, ^{–2}**r**, … , ^{–α}**r** for ^{1}**R**, and
- the portfolio’s holdings **ω**.

The portfolio holdings comprise a row vector **ω** whose components indicate the number of units held of each asset. For example, if a portfolio holds 1000 shares of IBM stock, 5000 shares of Google stock and a short position of 3000 shares of Microsoft stock, its holdings are

**ω** = (1000 5000 –3000)

[6]
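For instance, with made-up share prices, the portfolio’s current value ^{0}*p* is obtained by applying the holdings vector of equation [6] to current asset values:

```python
import numpy as np

# Holdings vector ω from the example above: long IBM, long Google,
# short Microsoft.
omega = np.array([1000.0, 5000.0, -3000.0])

# Hypothetical current share prices 0s in USD (illustrative numbers only).
s0 = np.array([150.0, 120.0, 300.0])

# The portfolio's current value 0p is the holdings applied to asset values.
p0 = omega @ s0
print(f"0p = {p0:,.0f}")   # 1000*150 + 5000*120 - 3000*300
```

With these invented prices the short Microsoft position outweighs the long positions, so ^{0}*p* is negative.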

## Inference and Mapping Procedures

The two inputs—historical data and portfolio holdings—are processed separately by two procedures within the value-at-risk measure:

- An inference procedure applies methods of time series analysis to the historical data ^{0}**r**, ^{–1}**r**, ^{–2}**r**, … , ^{–α}**r** to construct a joint distribution for ^{1}**R**.
- A mapping procedure uses the portfolio’s holdings **ω** to construct a function θ such that ^{1}*P* = θ(^{1}**R**).

The mapping procedure uses a set of pricing functions φ_{i} that value each asset ^{1}*S _{i}* in terms of ^{1}**R**:

^{1}*S _{i}* = φ_{i}(^{1}**R**)

[7]

For example, if asset ^{1}*S*_{1} is a bond, pricing formula φ_{1} will be a bond pricing formula. If asset ^{1}*S*_{2} is an equity option, pricing formula φ_{2} will be an equity option pricing formula.

A functional relationship ^{1}*P* = θ(^{1}**R**) is then defined as a weighted sum of the pricing formulas φ_{i}, with the weights being the holdings *ω _{i}*:

^{1}*P* = *ω*_{1}^{1}*S*_{1} + *ω*_{2}^{1}*S*_{2} + … + *ω _{m}*^{1}*S _{m}*

[8]

= *ω*_{1}φ_{1}(^{1}**R**) + *ω*_{2}φ_{2}(^{1}**R**) + … + *ω _{m}*φ _{m}(^{1}**R**)

[9]

This is called a **primary mapping**. If a portfolio is large or holds complex instruments, such as derivatives or mortgage-backed securities, a primary mapping may be computationally expensive to value. Many mapping procedures therefore replace a primary mapping θ with a simpler approximation. Such approximations are called **remappings**. They can take many forms. Two common examples are remappings constructed, using the method of least squares, as either a linear polynomial or a quadratic polynomial approximation of θ. Such remappings are called, respectively, **linear remappings** and **quadratic remappings**.
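A minimal sketch of a primary mapping of the form in equations [8] and [9]; the pricing functions, holdings and numbers here are all invented for illustration:

```python
import numpy as np

# Hypothetical pricing functions φi, each valuing one asset from the key
# vector 1R = (stock price, one-period interest rate). The functional
# forms, holdings and numbers are all invented for illustration.
def phi_1(r):
    return r[0]                     # a stock: its value is the stock price

def phi_2(r):
    return 100.0 / (1.0 + r[1])     # a one-period zero-coupon bond, face 100

def theta(r, omega):
    """Primary mapping: 1P as the holdings-weighted sum of pricing formulas."""
    return omega[0] * phi_1(r) + omega[1] * phi_2(r)

omega = np.array([200.0, 50.0])     # holdings (made up)
r = np.array([45.0, 0.03])          # one realization of the key vector
print(theta(r, omega))              # 200*45 + 50*100/1.03
```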

Most of the literature on value-at-risk is either elementary or theoretical, so remappings receive little mention. This is unfortunate. As a practical tool for making production value-at-risk measures tractable, remappings can be indispensable.
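For concreteness, a linear remapping fit by least squares might be sketched like this; the toy mapping θ and the scenario distribution are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

def theta(r):
    """A toy non-linear portfolio mapping (invented): a call-like payoff
    plus a linear position in the underlying."""
    return np.maximum(r - 100.0, 0.0) + 0.5 * r

# Key-factor scenarios sampled around an assumed current level of 100.
r = rng.normal(100.0, 5.0, size=5_000)
p = theta(r)

# Least-squares fit of a linear polynomial p ≈ a + b*r: a linear remapping.
X = np.column_stack([np.ones_like(r), r])
(a, b), *_ = np.linalg.lstsq(X, p, rcond=None)

approx = a + b * r
print("max abs error of the remapping:", np.max(np.abs(approx - p)))
```

The fitted linear polynomial is cheap to evaluate, at the cost of an approximation error that grows in the tails, which is exactly the trade-off remappings make.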

## Transformation Procedures

Returning to Exhibit 2, we have discussed the two inputs to a value-at-risk measure as well as the inference procedure and mapping procedure that process these. If you think about it, the two outputs of those procedures correspond to the two components of risk. As explained by Holton (2004), every risk has two components:

- uncertainty
- exposure

In the context of market risk, we are *uncertain* if we don’t know what will happen in the markets. We are *exposed* if we have holdings in instruments traded in those markets. A value-at-risk measure characterizes uncertainty with the joint distribution for ^{1}**R** constructed by its inference procedure. It characterizes exposure with the portfolio mapping θ constructed by its mapping procedure. A value-at-risk measure must combine those two components to measure a portfolio’s market risk, and it does so with a transformation procedure.

A transformation procedure accepts as inputs

- a joint distribution for ^{1}**R**, and
- a portfolio mapping θ, which can be either a primary mapping or a remapping.

It uses these to construct a distribution for ^{1}*P* from which it calculates the portfolio’s value-at-risk.

Transformation procedures take various forms, but there are essentially three types:

- Linear transformation procedures apply if the portfolio mapping θ is a linear polynomial. They employ a standard formula from probability theory for calculating the variance of a linear polynomial of a random vector. For certain asset categories, such as equities or futures, primary mappings can be linear polynomials. Alternatively, θ may be a linear remapping.
- Quadratic transformation procedures apply if the portfolio mapping θ is a quadratic polynomial and the joint distribution of ^{1}**R** is joint-normal. Primary mappings are almost never quadratic polynomials, so quadratic transformations assume use of a quadratic remapping.
- Monte Carlo transformation procedures employ the Monte Carlo method and are applicable to all portfolio mappings. This advantage comes with potentially significant computational expense, as Monte Carlo transformation procedures entail revaluing the portfolio under numerous scenarios. A subcategory of Monte Carlo transformation procedures do not randomly generate scenarios but instead construct them directly from historical data for ^{1}**R**. These are called historical transformation procedures.
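A Monte Carlo transformation procedure can be sketched as follows; the joint distribution for ^{1}**R**, the mapping θ, and all numbers are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(11)

# Assumed joint-normal distribution for a two-component key vector 1R:
# a stock price and a one-period interest rate (numbers invented).
mean = np.array([100.0, 0.03])
cov = np.array([[25.0, 0.0],
                [0.0, 1e-6]])

def theta(r):
    """Toy portfolio mapping (invented): 200 shares plus 50 zero-coupon
    bonds of face 100; r has one scenario per row."""
    return 200.0 * r[:, 0] + 50.0 * 100.0 / (1.0 + r[:, 1])

# Revalue the portfolio under many randomly generated scenarios for 1R ...
scenarios = rng.multivariate_normal(mean, cov, size=100_000)
p1 = theta(scenarios)

# ... then take VaR as 0p minus the 10% quantile of 1P. Here the value of
# the portfolio at the assumed current key-factor levels stands in for 0p.
p0 = theta(mean[None, :])[0]
var_90 = p0 - np.quantile(p1, 0.10)
print(f"90% VaR: {var_90:,.0f}")
```

The expense is visible even in this sketch: the portfolio is revalued 100,000 times, once per scenario.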

Elementary treatments of value-at-risk often mention “methods” for calculating value-at-risk. Mostly, these reference the transformation procedures used. For example, the terms “parametric method” or “variance-covariance method” refer to value-at-risk measures that employ a linear transformation procedure. The “delta-gamma method” refers to those that use a quadratic transformation procedure. The “Monte Carlo method” and “historical method” refer, of course, to value-at-risk measures that use Monte Carlo or historical transformation procedures.

## More Value-at-Risk Resources

For a deeper discussion of value-at-risk, or for worked examples of actual value-at-risk measures, see my book *Value-at-Risk: Theory and Practice*. I distribute the latest edition free online at http://value-at-risk.net. The book contains about 160 exercises you can practice on, with solutions provided right on this website.

Also explore this website. The blog in particular offers plenty of information on market risk management and value-at-risk.

## References

- Holton, Glyn A. (2004). Defining risk, *Financial Analysts Journal*, 60 (6), 19–25.
- Holton, Glyn A. (2014). *Value-at-Risk: Theory and Practice*, 2^{nd} ed. e-book at http://value-at-risk.net.