On Friday, I wrote a lighthearted piece challenging criticisms of value-at-risk. The Swiss had abandoned their currency peg; people had lost money; and VaR was being blamed—*again*.

My message was essentially this:

> If your VaR measure wasn’t flashing red alerts prior to the peg’s abandonment, don’t worry. It wasn’t supposed to.

I neglected to provide a comment form with the piece. *I won’t make that mistake again!* You won’t believe the emails I received.

Some correspondents understood what I was saying. Others did not and criticized me for condoning ineffective VaR measures. For the latter group, I clearly touched too lightly on a deep and interesting topic.

Measuring value-at-risk for a pegged currency is an atypical situation. Because it is atypical, it affords insights unavailable through more routine discussions of value-at-risk.

So let’s delve more deeply.

What I am about to explain is important.

# How Not To Calculate Value-at-Risk

Here is a graph of the Swiss franc’s performance against the euro over the five years leading up to the peg’s abandonment.

You can clearly see the period of stability from September 2011 to January 2015, while the peg was in place. To the left, in 2011, is a period of heightened volatility leading up to the peg. On the far right, you can see the currency’s one day surge upon the peg’s abandonment.

The unfortunate article from *The Economist* that I cited last week explains how, in its view, this data would correctly be used to calculate value-at-risk during two periods: 1) while the peg was in place, and 2) now that the peg has been abandoned:

> … “value-at-risk” models … are based on past volatility. Since the Swiss franc’s daily movements since 2011 had been artificially low, the maximum predicted losses would have been negligible—or so banks would have assured regulators and shareholders …
>
> The big movements of 2011 should have raised the bar to trading francs using value-at-risk models. The latest spike should in theory make it difficult for banks to deal in Swissies at all. It probably won’t: models can be tweaked with “overlays”, risk-manager talk for “ignoring data points you don’t like”. With luck, regulators will now give such methods greater scrutiny.

This is nonsense piled on nonsense.

In the aftermath of the peg’s abandonment, the article advises that volatility estimates include the data point for the day the peg was abandoned. That was the day the currency soared. This would supposedly so elevate VaR estimates as to “make it difficult for banks to deal in Swissies at all.”

Mathematically, I doubt including that data point would have such an enormous impact. It would depend on the length of the data window you use in estimating VaR.
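A rough sketch makes the window-length point concrete. All the numbers below are made up for illustration, not actual franc data: quiet daily moves of about 0.1%, plus a single 15% jump as the most recent observation. The jump inflates a short estimation window far more than a long one.

```python
import statistics

# Illustrative only: ~0.1% daily moves while the peg holds, plus one
# extreme 15% move (a hypothetical abandonment day) as the latest observation.
quiet_days = [0.001 * ((-1) ** i) for i in range(499)]
returns = quiet_days + [0.15]

vols = {}
for window in (30, 250, 500):
    vols[window] = statistics.pstdev(returns[-window:])
    print(f"{window:>3}-day window: estimated daily volatility = {vols[window]:.4f}")
```

With a 30-day window, the one jump dominates the estimate; with a 500-day window, it is diluted roughly fourfold. How "enormous" the impact is depends entirely on that choice.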

From both practical and theoretical standpoints, the data point should not be included. It represents a one-time event arising from circumstances that no longer exist. The Swiss can’t abandon their peg a second time. The risk no longer exists, just as the risk of a bridge collapsing no longer exists once the bridge has in fact collapsed.

What about VaR estimates while the peg was in place?

While the peg was in place, the exchange rate fluctuated little. VaR estimates based just on data since the peg’s establishment would have been low. Should those estimates have been based on larger data windows to include the high volatility of 2011? The article says “yes”, but the correct answer is “no”.

Here is why:

# The Correct Approach

Calculating value-at-risk is about constructing a profit & loss distribution—you take a quantile of that distribution to determine VaR.

For a pegged currency position, the task is interesting because the distribution is bimodal. On any given trading day, while the peg holds, there is a very high probability that the peg will continue to hold through the day, in which case the exchange rate will fluctuate little. There is a very slight probability that the peg will be abandoned during the day, in which case the exchange rate will move dramatically.

This bimodal distribution is illustrated in the following exhibit. To keep numbers simple, I assume there is a 1% probability of the peg being abandoned on any given trading day (although the actual probability might be a fraction of that). In the exhibit, the distribution has two “humps”. The one on the left contains probability of 1% and corresponds to extreme losses that might occur in the event of peg abandonment. The right hump contains probability of 99% and corresponds to profits or losses in the event that the peg continues to hold.
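The mixture is easy to simulate. In this sketch I keep the 1%/99% split from the exhibit; the specific loss sizes (an abandonment-day loss centered near −15%, quiet-day volatility of 0.1%) are my own illustrative assumptions.

```python
import random

def simulate_daily_return(rng):
    """One draw from the two-hump mixture (illustrative parameters only)."""
    if rng.random() < 0.01:
        # Left hump (1% probability): peg abandoned, large loss near -15%
        return rng.gauss(-0.15, 0.02)
    # Right hump (99% probability): peg holds, tiny moves around zero
    return rng.gauss(0.0, 0.001)

rng = random.Random(42)
sample = [simulate_daily_return(rng) for _ in range(100_000)]
left_hump_share = sum(r < -0.05 for r in sample) / len(sample)
print(f"share of draws in the left hump: {left_hump_share:.2%}")
```

About 1% of draws land in the left hump, matching the assumed abandonment probability.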

The wrong approach to calculating VaR, described earlier, captures none of this detail. It assumes a normal distribution, perhaps with an elevated standard deviation to crudely account for possible peg abandonment.

OK. How do we calculate VaR based on the correct bimodal distribution?

We do so the same way we always calculate VaR.

Suppose we are calculating one-day 95% VaR. Then we find that loss such that 95% of probability is to the right of that loss, and 5% is to the left. This is illustrated in Exhibit 3:

The next important question is: how do we statistically construct the two humps of the correct distribution?

For the right hump, we do what *The Economist* tells us not to do: The right hump reflects behavior while the peg persists, so we construct it using low-volatility data following the peg’s establishment. High-volatility data from prior to the peg’s establishment is irrelevant.

What about the smaller left hump? It reflects the behavior of the exchange rate on the one day the peg is abandoned. Prior to the peg’s abandonment—the period during which we are trying to calculate VaR—there is no relevant data whatsoever. Even after the peg’s abandonment, we have just one data point, which is insufficient for statistical analysis.

We cannot model the left hump statistically.

Here is the good news: *We don’t have to.*

Take another look at Exhibit 3, in which we calculated VaR as the loss such that 95% of probability is to the right of that loss. Suppose, in that graph, we shifted the smaller left hump to the left. Or we shifted it to the right. Or we made it more peaked. Or we flattened it out.

Would any of these changes alter our VaR measurement?

*Not at all!*

This is because the left hump encompasses just 1% of probability, and we are calculating 95% VaR. The loss corresponding to 95% VaR falls within the right hump. It is the loss such that 95% of probability is to its right. It will not change at all no matter how we model the left hump to its left, so long as that left hump contains just 1% of probability.
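The invariance is easy to verify numerically. In this sketch (same assumed mixture parameters as before), only the left hump's location changes, and the 95% VaR does not move:

```python
import random

def var_95(left_hump_center, seed=0, n=200_000):
    """95% VaR of the two-hump mixture; only the left hump's location varies."""
    rng = random.Random(seed)
    def draw():
        if rng.random() < 0.01:
            return rng.gauss(left_hump_center, 0.02)   # abandonment-day hump
        return rng.gauss(0.0, 0.001)                   # peg-holds hump
    sample = sorted(draw() for _ in range(n))
    return -sample[int(0.05 * n)]

results = {c: var_95(c) for c in (-0.10, -0.15, -0.30)}
for center, v in results.items():
    print(f"left hump centered at {center:+.0%}: 95% VaR = {v:.4%}")
```

Sliding the left hump from −10% to −30% leaves the VaR untouched, because the 5% quantile never leaves the right hump.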

So we can measure VaR by modeling just the right hump, and we should construct that right hump using just low-volatility data from since the peg’s establishment.

# Significance

Based on its definition, 95% VaR does not reflect the most extreme 1% of outcomes. VaR is not intended to—and should not be criticized for failing to—warn that the Swiss might abandon their peg.

Some people won’t like this conclusion. Like *The Economist*, they may complain that VaR is flawed and needs to be fixed.

Some people would only be satisfied with a market risk measure that consistently indicates high risk immediately prior to losses and low risk otherwise. VaR measures do not do that, but there is a risk measure that does. It is called a crystal ball. So far, no one has implemented one.

Essentially, critics of VaR complain that it is not a crystal ball.

They are like critics complaining that screwdrivers are flawed because they don’t drive nails.

Yes, screwdrivers don’t drive nails. Carpenters supplement their screwdrivers with another tool that does drive nails.

Yes, value-at-risk does not address all market risks. Risk managers supplement VaR with another tool that does address those other risks.

# A Role for Stress Testing

You have probably heard this caution before:

> Value-at-risk should always be supplemented with stress testing.

Ever ask yourself why? We don’t provide similar cautions for other risk measures. Why does value-at-risk need to be supplemented?

To shed light on this question, consider the following two possible cautions that might be given in the context of fire safety:

1. Every home should have two smoke detectors.

2. Every home should have a smoke detector and a carbon monoxide detector.

The first implies smoke detectors are unreliable and require redundancy. The second implies smoke detectors address only one of several aspects of fire risk.

Value-at-risk should be supplemented with stress testing, not because VaR measures are unreliable and require redundancy—correctly implemented, they are reliable—but because they are intended to address only certain aspects of market risk. Correctly implemented, stress tests supplement VaR by addressing aspects of market risk that VaR does not address.

Primarily, these are high-impact, low-probability risks, such as the market risk of a currency peg being abandoned.
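A stress test for such a risk can be as simple as revaluing the position under hypothetical abandonment scenarios. This minimal sketch uses entirely made-up numbers; the linear revaluation assumes a plain spot position with no options:

```python
# A minimal stress-test sketch (all numbers hypothetical): revalue a
# long EUR/CHF position under assumed peg-abandonment scenarios.
position_eur = 10_000_000  # notional exposure, long EUR against CHF

scenarios = {
    "peg holds": 0.00,
    "peg abandoned": -0.15,      # franc surges ~15% against the euro
    "severe abandonment": -0.25,
}

stress_pnl = {name: position_eur * shock for name, shock in scenarios.items()}
for name, pnl in stress_pnl.items():
    print(f"{name:>20}: P&L = EUR {pnl:+,.0f}")
```

Unlike VaR, the output is not a probability-weighted quantile; it simply answers "what would we lose if this happened?"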

I have written before that financial risk management should be more qualitative. I have even proposed that risk managers incorporate narrative discussions into risk reports.

What should go into those narrative discussions?

If your firm has exposures to a pegged currency, that would be worth highlighting. And mention that VaR measurements are not intended to—and will not—reflect the risk.

Create awareness. Get a dialogue going. As part of that dialogue, stress testing should be performed.

Don’t rush to do the stress testing! I would prefer that senior management respond to your narrative warnings in the risk report by *requesting* that you conduct a stress test. That way they will take the results more seriously—because it was their idea.

But if senior management won’t take the bait, conduct the stress test anyway, and include the results in a future risk report.

Risk management is not about numbers. It is about people.
