The generality of value-at-risk poses a computational challenge. In order to measure market risk in a portfolio using value-at-risk, some means must be found for determining the probability distribution of that portfolio’s market value. Obviously, the more complex a portfolio is—the more asset categories and sources of market risk it is exposed to—the more challenging that task becomes.
Value-at-Risk as a Quantile of Loss
It is worth distinguishing two concepts:
- A value-at-risk measure is an algorithm with which we calculate a portfolio’s value-at-risk.
- A value-at-risk metric is our interpretation of the output of the value-at-risk measure.
A value-at-risk metric, such as one-day 90% USD VaR, is specified with three items:
- a time horizon;
- a probability;
- a currency.
A value-at-risk measure calculates an amount of money, measured in that currency, such that there is that probability of the portfolio not losing more than that amount of money over that time horizon. In the terminology of mathematics, this is called a quantile, so one-day 90% USD VaR is just the 90% quantile of a portfolio’s one-day loss in US dollars.
This is worth emphasizing: value-at-risk is a quantile of loss. The task of a value-at-risk measure is to calculate such a quantile.
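As a minimal numerical sketch of this idea, using hypothetical loss figures and NumPy's default linear-interpolation quantile:

```python
import numpy as np

# Hypothetical sample of one-day portfolio losses in USD (positive = loss,
# negative = profit), already sorted for readability.
losses = np.array([-5.0, -2.0, -1.0, 0.0, 1.0, 2.0, 3.0, 4.0, 6.0, 9.0])

# One-day 90% USD VaR is the 90% quantile of the loss distribution.
var_90 = np.quantile(losses, 0.90)
```

With these ten figures the 90% quantile falls between the 6.0 and 9.0 losses, so there is a 90% probability of losing less than that amount.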
Value-at-Risk: Preliminary Definitions
For a given value-at-risk metric, measure time in units—days, weeks, months, etc.—equal to the time horizon. Let time 0 be now, so time 1 represents the end of the horizon. We know a portfolio’s current market value 0p. Its market value 1P at the end of the horizon is unknown.
Here, as in other contexts, I use the convention that unknown (i.e. random) quantities are capitalized while known quantities are lower-case. Preceding superscripts indicate time, so 0p is the portfolio’s known current value, and 1P is its unknown market value at the end of the horizon – at time t = 1.
Define portfolio loss 1L as
1L = 0p – 1P
If 0p exceeds 1P, the loss will be positive. If 0p is less than 1P, the loss will be negative, which is another way of saying the portfolio makes a profit.
Calculating Value-at-Risk as a Quantile of Loss
Because we don’t know the portfolio’s future value 1P, we don’t know its loss 1L. Both are random variables, and we can assign them probability distributions. That is exactly what a value-at-risk measure does. It assigns a distribution to 1P and/or 1L, so it can calculate the desired quantile of 1L. Most typically, value-at-risk measures work directly with the distribution of 1P and use that to infer the quantile of 1L.
This is illustrated in Exhibit 1 for a 90% VaR metric. Working with the probability distribution of 1P, first the 10% quantile of 1P is found. Then, subtracting this from the portfolio’s current market value 0p gives the 90% quantile of 1L. This is the portfolio’s value-at-risk – the amount of money such that there is a 90% probability that the portfolio will either make a profit or lose less than that amount.
Exhibit 1: A portfolio’s 90% VaR is the amount of money such that there is a 90% probability of the portfolio losing less than that amount of money—the 90% quantile of 1L. This exhibit illustrates how that quantity can be calculated as the portfolio’s current value 0p minus the 10% quantile of 1P. Other value-at-risk metrics can be valued similarly.
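The equivalence illustrated here, that the 90% quantile of 1L equals 0p minus the 10% quantile of 1P, can be checked numerically (all figures hypothetical):

```python
import numpy as np

rng = np.random.default_rng(42)

p0 = 100.0                                    # current portfolio value 0p
P1 = p0 + rng.normal(0.0, 5.0, size=10_000)   # simulated future values 1P

L1 = p0 - P1                                  # losses 1L = 0p - 1P

# The 90% quantile of loss equals 0p minus the 10% quantile of 1P.
var_from_loss = np.quantile(L1, 0.90)
var_from_value = p0 - np.quantile(P1, 0.10)
```

Because 1L is just 0p minus 1P, the two calculations agree exactly.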
So if we know the distribution for 1P, calculating value-at-risk is easy. The challenge for any value-at-risk measure is constructing that distribution of 1P. Value-at-risk measures do so in various ways, but all practical value-at-risk measures share certain features described below.
Because value-at-risk measures are probabilistic, they deal with various random financial variables. Three types are particularly significant and are given standard notation:
- a portfolio value 1P;
- asset values 1Si; and
- key factors 1Ri.
We have already discussed portfolio value 1P, which is the portfolio’s market value at time 1—the end of the value-at-risk horizon. The portfolio has current value 0p.
Asset values 1Si represent the accumulated value at time 1 of individual assets that might be held by the portfolio at time 0. Individual assets might be stocks, bonds, futures, options or other instruments. Let m be the total number of assets to be modeled. The m asset values 1Si comprise an ordered set (an m-dimensional vector) called the asset vector, which we denote 1S. Its current value 0s is the ordered set of asset current values 0si. I am using the notation convention of making multivariate quantities – vectors or matrices – bold.
Key factors 1Ri represent values at time 1 of financial variables that can be used to value the assets. Depending on the composition of the portfolio, key factors might represent exchange rates, interest rates, commodity prices, spreads, implied volatilities, etc. The n key factors 1Ri comprise an ordered set called the key vector, which we denote 1R. This has current value 0r. Past values of the key vector are also required. Together, current and past values for the key vector, 0r, –1r, –2r, … , –αr, are called historical market data.
Calculating Value-at-Risk: The Big Picture
Where are we going with this? The quantities 1P, 1Si and 1Ri are all random. But the portfolio’s value 1P is a function of the values 1Si of the assets it holds. Those in turn are a function of the key factors 1Ri. For example, a Treasury bond portfolio’s value 1P is a function of the values 1Si of the individual bonds it holds. Their values are in turn functions of applicable interest rates 1Ri. Because a function of a function is a function, 1P is a function θ of 1R:
1P = θ(1R)
Value-at-risk measures apply time series analysis to historical data 0r, –1r, –2r, … , –αr to construct a joint probability distribution for 1R. They then exploit the functional relationship θ between 1P and 1R to convert that joint distribution into a distribution for 1P. From that distribution for 1P, value-at-risk is calculated, as illustrated in Exhibit 1 above.
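This pipeline can be sketched end-to-end. Everything below is hypothetical: two key factors, a toy linear mapping θ, and simulated historical data standing in for real market data.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical historical market data: rows are key vectors 0r, -1r, -2r, ...
r_hist = rng.normal([100.0, 50.0], [2.0, 1.0], size=(250, 2))

# Inference: fit a joint-normal distribution to day-over-day changes.
changes = np.diff(r_hist, axis=0)
mean, cov = changes.mean(axis=0), np.cov(changes, rowvar=False)

# Hypothetical portfolio mapping theta: 1P = theta(1R).
def theta(R):
    return 3.0 * R[:, 0] + 2.0 * R[:, 1]

# Transformation: simulate 1R, map each scenario to 1P, read off the quantile.
R1 = r_hist[-1] + rng.multivariate_normal(mean, cov, size=10_000)
P1 = theta(R1)
p0 = theta(r_hist[-1][None, :])[0]
var_90 = np.quantile(p0 - P1, 0.90)   # one-day 90% VaR
```

The three steps, inference, mapping, and transformation, are exactly the components described next.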
Exhibit 2 summarizes the components common to all practical value-at-risk measures. We describe those components next.
Exhibit 2: All practical value-at-risk measures accept portfolio holdings and historical market data as inputs. They process these with a mapping procedure, inference procedure, and transformation procedure. Output comprises the value of a value-at-risk metric. That value is the value-at-risk measurement.
A value-at-risk measure accepts two inputs:
- historical data 0r, –1r, –2r, … , –αr for 1R, and
- the portfolio’s holdings ω.
The portfolio holdings comprise a row vector ω whose components indicate the number of units held of each asset. For example, if a portfolio holds 1000 shares of IBM stock, 5000 shares of Google stock and a short position of 3000 shares of Microsoft stock, its holdings are
ω = (1000 5000 –3000)
Inference and Mapping Procedures
The two inputs—historical data and portfolio holdings—are processed separately by two procedures within the value-at-risk measure:
- An inference procedure applies methods of time series analysis to the historical data 0r, –1r, –2r, … , –αr to construct a joint distribution for 1R.
- A mapping procedure uses the portfolio’s holdings ω to construct a function θ such that 1P = θ(1R).
The mapping procedure uses a set of pricing functions φi that value each asset 1Si in terms of 1R:
1Si = φi(1R)
For example, if asset 1S1 is a bond, pricing formula φ1 will be a bond pricing formula. If asset 1S2 is an equity option, pricing formula φ2 will be an equity option pricing formula.
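For instance, a pricing function for a hypothetical zero-coupon bond might value the asset in terms of a single key factor, the applicable interest rate (maturity and face value chosen arbitrarily here):

```python
# Hypothetical pricing function phi for a zero-coupon bond maturing in
# tau years with face value 100, valued off the key factor R, here the
# applicable annually compounded interest rate.
def phi_bond(R, tau=5.0, face=100.0):
    return face / (1.0 + R) ** tau

price = phi_bond(0.04)   # value of the bond when the 5-year rate is 4%
```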
A functional relationship 1P = θ(1R) is then defined as a weighted sum of the pricing formulas φi, with the weights being the holdings ωi:
1P = ω11S1 + ω21S2 + … + ωm1Sm
= ω1φ1(1R) + ω2φ2(1R) + … + ωmφm(1R)
This is called a primary mapping. If a portfolio is large or holds complex instruments, such as derivatives or mortgage-backed securities, a primary mapping may be computationally expensive to value. Many mapping procedures replace a primary mapping θ with a simpler approximating function. Such approximations are called remappings. They can take many forms. Two common examples are remappings constructed, using the method of least squares, as either a linear polynomial or quadratic polynomial approximation of θ. Such remappings are called, respectively, linear remappings and quadratic remappings.
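The primary mapping 1P = Σ ωi φi(1R) can be sketched using the holdings example above (1000 IBM, 5000 Google, short 3000 Microsoft). For equities, each pricing function φi can simply pick the corresponding share price out of the key vector; the time-1 prices below are hypothetical:

```python
import numpy as np

# Holdings row vector omega: IBM, Google, Microsoft shares.
omega = np.array([1000.0, 5000.0, -3000.0])

# Primary mapping: 1P = sum_i omega_i * phi_i(1R). Here the key vector 1R
# holds the three share prices, so each phi_i(R) is just R[i], and the
# weighted sum collapses to a dot product.
def theta(R):
    return omega @ R

# Hypothetical share prices at time 1.
R1 = np.array([150.0, 120.0, 40.0])
p1 = theta(R1)   # 1000*150 + 5000*120 - 3000*40
```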
Most of the literature on value-at-risk is either elementary or theoretical, so remappings receive little mention. This is unfortunate. As a practical tool for making production value-at-risk measures tractable, remappings can be indispensable.
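As an illustration, a linear remapping can be constructed by sampling a primary mapping θ and fitting a linear polynomial by least squares. The mapping and the sampling distribution below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical nonlinear primary mapping theta of a single key factor.
def theta(R):
    return 100.0 * np.exp(-0.5 * R)

# Sample key-factor values around the current value 0r = 0.05.
R_samples = rng.normal(0.05, 0.01, size=500)

# Linear remapping: fit a linear polynomial a + b*R to theta by least squares.
b, a = np.polyfit(R_samples, theta(R_samples), deg=1)

def theta_tilde(R):
    return a + b * R

# The remapping should closely approximate theta near 0r.
err = abs(theta_tilde(0.05) - theta(0.05))
```

The linear polynomial is cheap to evaluate, which is the point: downstream procedures can work with it instead of the expensive θ.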
Returning to Exhibit 2, we have discussed the two inputs to a value-at-risk measure as well as the inference procedure and mapping procedure that process them. If you think about it, the outputs of those two procedures correspond to the two components of risk. As explained by Holton (2004), every risk has two components: exposure and uncertainty.
In the context of market risk, we are uncertain if we don’t know what will happen in the markets. We are exposed if we have holdings in instruments traded in those markets. A value-at-risk measure characterizes uncertainty with the joint distribution for 1R constructed by its inference procedure. It characterizes exposure with the portfolio mapping θ constructed by its mapping procedure. A value-at-risk measure must combine those two components to measure a portfolio’s market risk, and it does so with a transformation procedure.
A transformation procedure accepts as inputs
- a joint distribution for 1R, and
- a portfolio mapping θ, which can be either a primary mapping or a remapping.
It uses these to construct a distribution for 1P from which it calculates the portfolio’s value-at-risk.
Transformation procedures take various forms, but there are essentially three types:
- Linear transformation procedures apply if the portfolio mapping θ is a linear polynomial. They employ a standard formula from probability theory for calculating the variance of a linear polynomial of a random vector. For certain asset categories, such as equities or futures, primary mappings can be linear polynomials. Alternatively, θ may be a linear remapping.
- Quadratic transformation procedures apply if the portfolio mapping θ is a quadratic polynomial and the joint distribution of 1R is joint-normal. Primary mappings are almost never quadratic polynomials, so quadratic transformations assume use of a quadratic remapping.
- Monte Carlo transformation procedures employ the Monte Carlo method and are applicable to all portfolio mappings. This advantage comes with potentially significant computational expense, as Monte Carlo transformation procedures entail revaluing the portfolio under numerous scenarios. A subcategory of Monte Carlo transformation procedures does not randomly generate scenarios but instead constructs them directly from historical data for 1R. These are called historical transformation procedures.
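A minimal sketch of a linear transformation procedure, assuming the mapping is the linear polynomial 1P = b·1R + c and 1R is joint-normal (all numbers are hypothetical, and for simplicity the current value 0p is taken equal to E[1P]):

```python
import numpy as np
from statistics import NormalDist

# Hypothetical linear portfolio mapping 1P = b . 1R + c.
b = np.array([3.0, -2.0])
c = 10.0

# Hypothetical joint-normal distribution for 1R from the inference procedure.
mean_R = np.array([100.0, 50.0])
cov_R = np.array([[4.0, 1.0],
                  [1.0, 9.0]])

# Standard formulas for the mean and variance of a linear polynomial of a
# random vector: E[1P] = b . mean + c, var(1P) = b Sigma b'.
mean_P = b @ mean_R + c
std_P = np.sqrt(b @ cov_R @ b)

# 90% VaR: 0p minus the 10% quantile of the normal 1P, with 0p = E[1P].
z10 = NormalDist().inv_cdf(0.10)          # about -1.2816
var_90 = mean_P - (mean_P + z10 * std_P)  # reduces to 1.2816 * std_P
```

No simulation is needed; the normal assumption turns the quantile into a closed-form expression, which is why linear procedures are so fast.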
Elementary treatments of value-at-risk often mention “methods” for calculating value-at-risk. Mostly, these reference the transformation procedures used. For example, the terms “parametric method” or “variance-covariance method” refer to value-at-risk measures that employ a linear transformation procedure. The “delta-gamma method” refers to those that use a quadratic transformation procedure. The “Monte Carlo method” and “historical method” refer, of course, to value-at-risk measures that use Monte Carlo or historical transformation procedures.
More Value-at-Risk Resources
For a deeper discussion of value-at-risk, or for worked examples of actual value-at-risk measures, see my book Value-at-Risk: Theory and Practice. I distribute the latest edition free online at http://value-at-risk.net. The book contains about 160 exercises you can practice on, with solutions provided right on this website.
Also explore this website. The blog in particular offers plenty of information on market risk management and value-at-risk.
- Holton, Glyn A. (2004). Defining risk, Financial Analysts Journal, 60 (6), 19–25.
- Holton, Glyn A. (2014). Value-at-Risk: Theory and Practice, 2nd ed. e-book at http://value-at-risk.net.