A 2008 paper, “How Unlucky is 25-Sigma?”, describes an interview in which Goldman Sachs’s then-CFO David Viniar referred to the firm’s flagship fund’s 27% loss in August 2007 as a “25-sigma event”. The paper goes on to analyze this statement and quickly shows that, under a normal distribution, a 25-sigma event would be expected to occur once in 1.309E+135 years, a span so large that the paper aptly describes it as “being on par with Hell freezing over”.
Mr. Viniar’s implied waiting time exceeded the age of the universe many times over. Clearly, no measure of market risk (i.e., sigma, the standard deviation of returns) could provide a clue to what actually happened. What was missing?
The vast majority of investors measure and compare the quality of their investments by calculating risk-adjusted returns using any number of readily available ratios, such as the Sharpe and Sortino ratios. The denominator in these ratios is always risk, which is conveniently measured as the standard deviation of returns, an observable metric. Even if Viniar exaggerated and the event that occurred was “only” a 15-sigma event (a 27% monthly loss divided by the fund’s monthly standard deviation of ~1.8%), this would imply a one-in-1.308E+49 chance of such a loss, longer odds than winning the Powerball lottery five times in a row.
The only logical conclusion is that the models Goldman (and most other investors in 2007–2008) were using were inadequate. The true risk could not be explained by the volatility of historical returns. Another source of risk had crept into the portfolio, resulting in a massive over-allocation of risk capital.
Let’s reverse engineer the true risk that was embedded in Goldman’s portfolio at that time. If we assume that a loss of this size might occur once every 1,000 years (= 12,000 months), a normal distribution would make it roughly a 3.8-sigma event, implying a monthly standard deviation of about 7.2%, four times the market risk implied by the volatility of returns. Assuming the loss recurs once every 100 years, the implied total risk rises to about 8.6%, nearly five times the market risk.
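This reverse-engineering step can be sketched as follows (a minimal illustration assuming a one-tailed normal distribution; the 27% loss and the once-per-1,000-years and once-per-100-years recurrence intervals are the assumptions discussed above):

```python
from statistics import NormalDist

def implied_sigma(loss: float, months_between: float) -> float:
    """Back out the volatility that would make a loss of this size
    an expected event at the assumed recurrence frequency,
    under a one-tailed normal assumption."""
    # z-score whose upper-tail probability is 1 / months_between.
    z = NormalDist().inv_cdf(1 - 1 / months_between)
    return loss / z

# Hypothetical recurrence assumptions from the text.
for years in (1_000, 100):
    sigma = implied_sigma(0.27, years * 12)
    print(f"once per {years:>5} years -> implied monthly sigma {sigma:.1%}")
```

More frequent assumed recurrence implies a larger hidden sigma, which is the direction of the argument: the “true” volatility consistent with the loss is a multiple of the ~1.8% observed from returns.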
Clearly, traditional measures of risk failed to predict the true size of the problem. Proper due diligence could have uncovered the fat tail not visible through the lens of market analysis.
What was the source of that additional risk? In Goldman’s case, it was something called “model risk”: the likelihood that internally developed investment models did not properly account for all aspects of complex investments. In other cases, infrastructural risks such as a cyber-security breach, unauthorized cash movements, trade errors, or outright fraud could decimate an investment that may otherwise seem relatively safe.
Estimating non-market risk must be a critical component of any investment analysis. The challenge is that these risks are incredibly difficult to quantify ex ante. As the example above shows, we typically can only do so ex post, by which time it is too late.
One of the problems in quantifying non-market risk is that it has no clear scale or standard measure. Most investors who perform thorough due diligence do so using their own proprietary metrics, so no common yardstick is available for investors to use in estimating their true risk.
The DDX blockchain protocol aims to change this. With multiple participants sharing their findings on a trusted, secure distributed network, a quantifiable measure can potentially be established. It will not be as clear-cut or formulaic as the standard deviation of returns. But in a world where no measure exists at all, consensus-based risk metrics can make risk management and capital-allocation decisions far more efficient.
To learn about our project, or to apply to join the group of large financial institutions helping us develop the blockchain due diligence protocol, visit our website: https://ddx.exchange
 Probability of winning a single Powerball lottery drawing is 1 in 292 million, according to an Allstate report. Winning it 5 times in a row is therefore (1/292 million)^5, or roughly 1 in 2.1E+42, considerably better odds than the one-in-1.308E+49 chance of a 15-sigma event.
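As a quick sanity check of the footnote’s arithmetic (assuming the quoted 1-in-292-million single-drawing odds):

```python
# Single-ticket Powerball odds quoted in the footnote.
single = 1 / 292_000_000

# Probability of winning five consecutive drawings with one ticket each.
five_in_a_row = single ** 5

print(f"five in a row: 1 in {1 / five_in_a_row:.3e}")
```

The result is on the order of 10^42, several orders of magnitude more likely than the quoted 15-sigma probability, so the lottery comparison understates how extreme a true 15-sigma event would be.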