How to Calculate Value-at-Risk – Step by Step

The power of value-at-risk lies in its generality. Unlike market risk metrics such as the Greeks, duration and convexity, or beta, which are applicable to only certain asset categories or certain sources of market risk, value-at-risk is general. It is based on the probability distribution for a portfolio’s market value. All liquid assets have uncertain market values, which can be characterized with probability distributions. All sources of market risk contribute to those probability distributions. Being applicable to all liquid assets and encompassing, at least in theory, all sources of market risk, value-at-risk is a broad metric of market risk.

The generality of value-at-risk poses a computational challenge. In order to measure market risk in a portfolio using value-at-risk, some means must be found for determining the probability distribution of that portfolio’s market value. Obviously, the more complex a portfolio is—the more asset categories and sources of market risk it is exposed to—the more challenging that task becomes.

It is worth distinguishing two concepts:

  • A value-at-risk measure is an algorithm with which we calculate a portfolio’s value-at-risk.
  • A value-at-risk metric is our interpretation of the output of the value-at-risk measure.

A value-at-risk metric, such as one-day 90% USD VaR, is specified with three items:

  • a time horizon;
  • a probability;
  • a currency.

A value-at-risk measure calculates an amount of money, measured in that currency, such that there is that probability of the portfolio not losing more than that amount of money over that horizon. In the terminology of mathematics, this is called a quantile, so one-day 90% USD VaR is just the .90-quantile of a portfolio’s one-day loss.

This is worth emphasizing: value-at-risk is a quantile of loss. The task of a value-at-risk measure is to calculate such a quantile.

For a given value-at-risk metric, measure time in units of the value-at-risk horizon. Let time 0 be now, so time 1 represents the end of the horizon. We know a portfolio’s current market value 0p. Its market value 1P at the end of the horizon is unknown. Define portfolio loss 1L as

1L = 0p – 1P


If 0p exceeds 1P, the loss will be positive. If 0p is less than 1P, the loss will be negative, which is another way of saying the portfolio makes a profit.
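To make the sign convention concrete, here is a minimal sketch (the numbers are purely illustrative):

```python
# Hypothetical current portfolio value 0p (illustrative number).
p0 = 100.0

# If the portfolio falls to 93, the loss 1L = 0p - 1P is positive.
loss_down = p0 - 93.0   # 7.0, a loss

# If the portfolio rises to 108, the loss is negative: a profit.
loss_up = p0 - 108.0    # -8.0, a profit
```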

Because we don’t know the portfolio’s future value 1P, we don’t know its loss 1L. Both are random variables, and we can assign them probability distributions. That is exactly what a value-at-risk measure does—it assigns a distribution to 1P and/or 1L, so it can calculate the desired quantile of 1L. Most typically, value-at-risk measures work directly with the distribution of 1P and use that to infer the quantile of 1L. This is illustrated in Exhibit 1 for a 90% VaR metric.


Exhibit 1: A portfolio’s 90% VaR is the amount of money such that there is a 90% probability of the portfolio losing less than that amount of money—the .90-quantile of 1L. This exhibit illustrates how that quantity can be calculated as the portfolio’s current value 0p minus the .10-quantile of 1P. Other value-at-risk metrics can be calculated similarly.

Exhibit 1 shows how the .90-quantile of 1L (the portfolio’s value-at-risk) can be obtained as the portfolio’s current value 0p minus the .10-quantile of 1P. Other value-at-risk metrics can be calculated similarly. So if we know the distribution for 1P, calculating value-at-risk is easy. The challenge for any value-at-risk measure is constructing that distribution of 1P. Value-at-risk measures do so in various ways, but all practical value-at-risk measures share certain features described below.
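As a sketch of that calculation, suppose we somehow had the distribution of 1P—here simulated, purely for illustration, as normal with NumPy. The two routes to 90% VaR agree: the .90-quantile of 1L equals 0p minus the .10-quantile of 1P.

```python
import numpy as np

rng = np.random.default_rng(0)

p0 = 100.0                              # current portfolio value 0p (illustrative)
p1 = rng.normal(p0, 5.0, size=100_000)  # simulated end-of-horizon values 1P (assumed normal)

loss = p0 - p1                          # 1L = 0p - 1P

var_from_loss = np.quantile(loss, 0.90)        # .90-quantile of 1L
var_from_value = p0 - np.quantile(p1, 0.10)    # 0p minus .10-quantile of 1P
# The two quantities coincide.
```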

Because value-at-risk measures are probabilistic, they deal with various random financial variables. Three types are particularly significant and are given standard notation:

  • a portfolio value 1P;
  • asset values 1Si; and
  • key factors 1Ri.

We have already discussed portfolio value 1P, which is the portfolio’s market value at time 1—the end of the value-at-risk horizon. It has current value 0p. Mathematically, a portfolio is defined as an ordered pair (0p,1P).

Asset values 1Si represent the accumulated value at time 1 of individual assets held by the portfolio. Individual assets might be stocks, bonds, futures, options or other instruments. Current asset values are denoted 0si. Mathematically, we define an asset as an ordered pair (0si,1Si). The m asset values 1Si comprise an ordered set (or “vector”) called the asset vector, which we denote 1S. Its current value 0s is the ordered set of asset current values 0si.


Key factors 1Ri represent values at time 1 of financial variables that can be used to value the assets. Depending on the composition of the portfolio, key factors might represent exchange rates, interest rates, commodity prices, spreads, implied volatilities, etc. The n key factors 1Ri comprise an ordered set called the key vector, which we denote 1R. Value-at-risk measures utilize not only the current value 0r for the key vector but also other historical values –1r, –2r, –3r, … , –αr.


Where are we going with this? The quantities 1P, 1Si and 1Ri are all random. But the portfolio’s value 1P is a function of the values 1Si of the assets it holds. Those in turn are a function of the key factors 1Ri. For example, a bond portfolio’s value 1P is a function of the values 1Si of the individual bonds it holds. Their values are in turn functions of applicable interest rates 1Ri. Because a function of a function is a function, 1P is a function θ of 1R:

1P = θ(1R)


Value-at-risk measures apply time series analysis to historical data 0r, –1r, –2r, … , –αr to construct a joint probability distribution for 1R. They then exploit the functional relationship θ between 1P and 1R to convert that joint distribution into a distribution for 1P. From that distribution for 1P, value-at-risk is calculated, as illustrated in Exhibit 1 above.
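A toy version of the inference step, assuming (for illustration only) that the key factors are joint-normal and the hypothetical history can be fit directly, might look like:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical historical observations 0r, -1r, ..., -ar of two key factors
# (say, two interest rates), one row per observation.
historical_r = rng.normal([0.03, 0.05], [0.002, 0.003], size=(250, 2))

# A minimal inference procedure: characterize 1R as joint-normal by
# estimating a mean vector and covariance matrix from the history.
mu = historical_r.mean(axis=0)
sigma = np.cov(historical_r, rowvar=False)

# The fitted distribution can then be sampled to generate scenarios for 1R.
scenarios = rng.multivariate_normal(mu, sigma, size=10_000)
```

Real inference procedures apply time series models (for example, to returns rather than raw levels); this shows only the shape of the step.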

Let’s formalize this. Exhibit 2 summarizes the components common to all practical value-at-risk measures:


Exhibit 2: All practical value-at-risk measures accept portfolio holdings and historical market data as inputs. They process these with a mapping procedure, inference procedure, and transformation procedure. Output comprises the value of a value-at-risk metric. That value is the value-at-risk measurement.

A value-at-risk measure accepts two inputs:

  • historical data 0r, –1r, –2r, … , –αr for 1R, and
  • the portfolio’s holdings ω.

The portfolio holdings comprise a row vector ω whose components indicate the number of units held of each asset. For example, if a portfolio holds 1000 shares of IBM stock, 5000 shares of Google stock and a short position of 3000 shares of Microsoft stock, its holdings are

ω = (1000  5000  –3000)
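With hypothetical current share prices 0s (the prices below are made up), the holdings ω determine the portfolio’s current value 0p as a weighted sum:

```python
import numpy as np

# Holdings w from the example: long 1000 IBM, long 5000 Google, short 3000 Microsoft.
omega = np.array([1000.0, 5000.0, -3000.0])

# Hypothetical current share prices 0s (illustrative numbers only).
s0 = np.array([150.0, 120.0, 40.0])

# Current portfolio value 0p = w . 0s.
p0 = omega @ s0   # 630000.0
```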


The two inputs—historical data and portfolio holdings—are processed separately by two procedures within the value-at-risk measure:

  • An inference procedure applies methods of time series analysis to the historical data 0r, –1r, –2r, … , –αr to construct a joint distribution for 1R.
  • A mapping procedure uses the portfolio’s holdings ω to construct a function θ such that 1P = θ(1R).

The mapping procedure uses a set of pricing functions φi that value each asset 1Si in terms of 1R:

1Si = φi(1R)


For example, if asset 1S1 is a bond, pricing formula φ1 will be a bond pricing formula. If  asset 1S2 is an equity option, pricing formula φ2 will be an equity option pricing formula. A functional relationship 1P = θ(1R) is then defined as a weighted sum of the pricing formulas φi, with the weights being the holdings ωi:

1P = ω11S1 + ω21S2 + … + ωm1Sm

   = ω1φ1(1R) + ω2φ2(1R) + … + ωmφm(1R)
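As a sketch, with two hypothetical pricing functions (a stock priced directly off a key factor, and a one-year zero-coupon bond priced off an interest rate—both invented for illustration), the mapping θ is just the holdings-weighted sum:

```python
import numpy as np

# Hypothetical pricing functions phi_i: each values one asset 1Si from the
# key vector 1R. Here 1R = (stock price, interest rate), purely illustrative.
def phi_stock(r):
    return r[0]                       # 1S1: one share of the stock

def phi_zero_coupon_bond(r):
    return 100.0 / (1.0 + r[1])       # 1S2: a one-year zero paying 100

holdings = np.array([2000.0, 500.0])  # w: units held of each asset

def theta(r):
    """Mapping: 1P = w1*phi1(1R) + w2*phi2(1R)."""
    prices = np.array([phi_stock(r), phi_zero_coupon_bond(r)])
    return holdings @ prices

r = np.array([45.0, 0.04])
p1 = theta(r)   # portfolio value under this realization of 1R
```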


This is called a primary mapping. If a portfolio is large or holds complex instruments, such as derivatives or mortgage-backed securities, a primary mapping may be computationally expensive to value. Many mapping procedures replace a primary mapping θ with a simpler approximation. Such approximations are called remappings. They can take many forms. Two common examples are remappings that are constructed, using the method of least squares, as either a linear polynomial or quadratic polynomial approximation of θ. Such remappings are called, respectively, linear remappings and quadratic remappings.
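A quadratic remapping can be sketched with the method of least squares. The primary mapping below is invented (an option-like payoff of a single key factor); the fit replaces it with a cheap quadratic polynomial:

```python
import numpy as np

# Hypothetical "expensive" primary mapping of a single key factor 1R:
# an option-like, nonlinear payoff (illustrative only).
def theta(r):
    return np.maximum(r - 100.0, 0.0) + 0.1 * r

# Fit a quadratic remapping a + b*r + c*r^2 by least squares over a grid
# of realizations spanning the region of interest.
grid = np.linspace(80.0, 120.0, 41)
basis = np.column_stack([np.ones_like(grid), grid, grid**2])
coeffs, *_ = np.linalg.lstsq(basis, theta(grid), rcond=None)

def theta_tilde(r):
    """Quadratic remapping: cheap to evaluate, approximates theta near the grid."""
    return coeffs[0] + coeffs[1] * r + coeffs[2] * r**2
```

In production, the grid of realizations would be chosen to cover the region where 1R is likely to fall, and the fit would span many key factors at once.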

Most of the literature on value-at-risk is either elementary or theoretical, so remappings receive little mention. This is unfortunate. As a practical tool for making production value-at-risk measures tractable, remappings can be indispensable.

Returning to Exhibit 2, we have discussed the two inputs to a value-at-risk measure as well as the inference procedure and mapping procedure that process these. If you think about it, the two outputs of those procedures correspond to the two components of risk. As explained by Holton (2004), every risk has two components:

  • uncertainty
  • exposure

In the context of market risk, we are uncertain if we don’t know what will happen in the markets. We are exposed if we have holdings in instruments traded in those markets. A value-at-risk measure characterizes uncertainty with the joint distribution for 1R constructed by its inference procedure. It characterizes exposure with the portfolio mapping θ constructed by its mapping procedure. A value-at-risk measure must combine those two components to measure a portfolio’s market risk, and it does so with a transformation procedure.

A transformation procedure accepts as inputs

  • a joint distribution for 1R, and
  • a portfolio mapping θ, which can be either a primary mapping or a remapping.

It uses these to construct a distribution for 1P from which it calculates the portfolio’s value-at-risk.

Transformation procedures take various forms, but there are essentially three types:

  • Linear transformation procedures apply if the portfolio mapping θ is a linear polynomial. They employ a standard formula from probability theory for calculating the variance of a linear polynomial of a random vector. For certain asset categories, such as equities or futures, primary mappings can be linear polynomials. Alternatively, θ may be a linear remapping.
  • Quadratic transformation procedures apply if the portfolio mapping θ is a quadratic polynomial and the joint distribution of 1R is joint-normal. Primary mappings are almost never quadratic polynomials, so quadratic transformations assume use of a quadratic remapping.
  • Monte Carlo transformation procedures employ the Monte Carlo method and are applicable to all portfolio mappings. This advantage comes with potentially significant computational expense, as Monte Carlo transformation procedures entail revaluing the portfolio under numerous scenarios. A subcategory of Monte Carlo transformation procedures does not randomly generate scenarios but instead constructs them directly from historical data for 1R. These are called historical transformation procedures.
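The simplest case, a linear transformation procedure, can be sketched as follows. Here the asset values themselves serve as the key factors (as can be done for equities), they are assumed joint-normal, and every number below is illustrative:

```python
import numpy as np
from statistics import NormalDist

# Holdings w and an assumed joint-normal distribution for the asset values 1S
# (mean vector and covariance matrix are invented for illustration).
omega = np.array([1000.0, 5000.0, -3000.0])
mu = np.array([150.0, 120.0, 40.0])
sigma = np.array([[4.00, 1.00, 0.50],
                  [1.00, 9.00, 1.25],
                  [0.50, 1.25, 2.00]])

# Standard formula: variance of a linear polynomial of a random vector,
# Var(1P) = w Sigma w'.
port_var = omega @ sigma @ omega
port_std = np.sqrt(port_var)

# 90% VaR for a normal 1P: the .90 standard-normal quantile times the
# standard deviation (assuming 0p equals the mean of 1P, so expected loss is zero).
z = NormalDist().inv_cdf(0.90)
var_90 = z * port_std
```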

Elementary treatments of value-at-risk often mention “methods” for calculating value-at-risk. Mostly, these reference the transformation procedures used. For example, the terms “parametric method” or “variance-covariance method” refer to value-at-risk measures that employ a linear transformation procedure. The “delta-gamma method” refers to those that use a quadratic transformation procedure. The “Monte Carlo method” and “historical method” refer, of course, to value-at-risk measures that use Monte Carlo or historical transformation procedures.

This article provides a broad introduction to value-at-risk measures. If you plan to implement a value-at-risk measure, you will need more depth. See Holton (2014) for detailed explanations of how to design, implement and test production value-at-risk measures.



11 Responses to How to Calculate Value-at-Risk – Step by Step

  1. ONEX March 28, 2015 at 10:21 am #

    Would it be too much to ask that you provide an example of the above mathematical process?

    • Glyn Holton March 28, 2015 at 10:23 am #

      Sure! I provide three introductory examples, which get progressively more detailed and practical, here.

      • ONEX March 28, 2015 at 10:25 am #

        Okay, I located your online book which contains the information I requested earlier. I did a quick scan of the information and I must admit I was quite impressed with the amount of thought you presented along with various examples of what you discussed within each section. Thank you for all your hard work.

        • ONEX March 28, 2015 at 10:28 am #

          I have one question before I delve into your book. What factors determine the method [Monte Carlo, Black Scholes, quadratic, et al] deployed?

          Example: You indicated in one of your examples of a company holding reserve metals and I believe you utilized the Monte Carlo method. Why did you utilize the Monte Carlo as opposed to another mathematical representation to calculate VaR?


          • Glyn Holton March 28, 2015 at 10:31 am #

            The simple answer is that, if your portfolio holds only (and will always hold only) instruments with linear or almost linear payoffs — forwards, futures, cash, stocks, physical commodities — use a linear transformation. Otherwise, use Monte Carlo. Quadratic tends not to be good stand-alone, but is useful for facilitating variance reduction for a Monte Carlo transformation procedure, as discussed later in the book. Historical is a simplistic form of Monte Carlo that I do not recommend.

      • ONEX July 8, 2016 at 6:16 pm #

        Ah, very nice! Thanks!! :-) You rock!

  2. M.bilal July 24, 2015 at 8:15 am #

    I would like to use VaR in the commodity market. I mean, I want to do research on the commodity market using VaR.
    Can you help me with this?

    • Glyn Holton July 24, 2015 at 9:48 am #

      See my book. In addition to explaining how to calculate value-at-risk, it offers plenty of worked examples involving commodities: industrial metals, coffee, energies, flaxseed, lumber, etc.

  3. dinesh January 6, 2016 at 3:09 pm #

    Dear Glyn,
    I was reading a lot about VaR recently and your article just came by. I went through it and need your help to clarify my doubts. If we use Monte Carlo to calculate the future price of, let's say, a vanilla stock, then we plot a histogram of future price returns and finally calculate the VaR.
    To calculate the future price, as per the geometric Brownian motion model,
    we know, as you mentioned, that the future price has 2 factors:
    1. Drift -> this we try to make the expected price/return.
    2. Shock -> this we make a factor of some standard deviation and random numbers.
    So essentially the future price = present price * function of drift and shock.
    And in simple Microsoft Excel terms I found that
    Price today = Price on previous day * (1 + NORMINV(RAND(), expected return on price, standard deviation of price returns))
    And then plot the histogram of the future price changes and then calculate VaR.
    Am I talking about some preliminary stuff which you mentioned, or am I talking about a completely different method to predict the stock price and eventually calculate the VaR?
    Please advise.
    Thanks in advance.

    warm regards

    • Glyn Holton January 6, 2016 at 4:47 pm #

      VaR measures can differ according to various details. Nowhere do I describe the specific model you propose, but I have described many VaR measures similar to it. What you propose is fairly standard. However, because you are assuming a normal distribution, you don’t need Monte Carlo. You know your distribution is normal, so why approximate that normal distribution with a Monte Carlo generated histogram?

      If you have not already stumbled on it, my book on value-at-risk offers much more detail.

  4. Lawrence K. Danso-Boadu March 28, 2016 at 1:12 pm #

    Dear Glyn,
    I am a graduate student writing my thesis on the topic “Evaluation of Value-at-Risk as a measure of foreign exchange risk in Ghana”.
    I want to ask which methodology will be appropriate for my work and the kind of test I can run?
    Thanks very much.
