How is Risk Traditionally Measured?
Are we equipped with a flawed understanding of risk? In business schools all over the world, students are drilled to think of risk in financial securities as volatility, measured by the standard deviation σ. As inputs for calculating the standard deviation (Stdev), we are taught to rely on observable asset prices. Such data is easy to gather: historical equity prices, for example, can be downloaded from Yahoo Finance over time frames sometimes as long as 50 years. We then use this data to calculate the Stdev of, say, our portfolios or single equity positions. A low standard deviation indicates that the returns of a stock are usually very close to their mean. From this we infer that on any given day in the future there is only a small chance of highly negative or highly positive returns. We are told that such a position has low risk. The opposite, a high Stdev, means that many returns fall far from the mean, both highly positive and highly negative. In textbooks such positions are described as very risky. In other words, a high-risk position's price fluctuates more than a low-risk position's price. Finance then tells us to require a higher return from wildly fluctuating positions than from less oscillating ones. Intuitively this makes sense, since we despise volatility in our portfolios (especially the downside half of it). We would rather see a steadily increasing upward slope, where we do not have to worry too much about tomorrow. But is this the right path for everyone?
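The textbook calculation described above can be sketched in a few lines of Python. The prices below are made-up illustrative figures, not real market data; in practice they would come from a source like the downloadable Yahoo Finance histories mentioned above.

```python
import statistics

# Hypothetical daily closing prices (illustrative only).
prices = [100.0, 101.5, 99.8, 102.3, 103.0, 101.2, 104.1]

# Simple daily returns: (p_t / p_{t-1}) - 1
returns = [p1 / p0 - 1 for p0, p1 in zip(prices, prices[1:])]

# Sample standard deviation of daily returns -- the textbook "risk" measure.
daily_stdev = statistics.stdev(returns)

# A common convention: annualize with the square root of ~252 trading days.
annualized_stdev = daily_stdev * 252 ** 0.5

print(f"Daily Stdev:      {daily_stdev:.4%}")
print(f"Annualized Stdev: {annualized_stdev:.4%}")
```

A position whose price series produces a small `annualized_stdev` would be labeled "low risk" by this convention, regardless of what the underlying business is doing.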
Where Does This Approach Make Sense?
Of course, using Stdev as a measure of risk is justified in some places but not in others. Banks, for example, rely, and have to rely, on the volatility of asset prices to measure their risk exposures. Several regulations require banks to hold a specified minimum amount of capital against the risky positions they incur. Stdev helps them judge the probability that on a given day, or during a given time frame, their portfolio of risky positions will suffer a loss that would lead to a capital shortfall. This is especially important for banks, which, due to their business model, operate with very high leverage. The assets that banks hold are financed mainly by liabilities such as customer deposits or funds borrowed from other banks. The tiny amount of their own capital provides a small buffer that absorbs potential losses on a huge asset portfolio. If this buffer is wiped out by short-term fluctuations in asset prices, the bank cannot meet its obligations and risks bankruptcy. Hence, standard deviation incorporated into risk management models such as Value at Risk enables banks to judge the probability of losing a significant amount of capital over a short period of time.
Surely this is not the only application where measuring risk as Stdev makes sense, but we can infer that whenever a large amount of debt financing is involved, measuring risk as Stdev makes perfect sense, since payments to third parties have to be met at specific points in time and Stdev can tell us how likely it is that they cannot be met.
Where Does This Approach Not Make Sense?
For portfolios such as mine and those of many other private investors, which are usually fully equity financed, Stdev as a measure of risk is seriously flawed. I do not have to meet any kind of obligation next Tuesday, next month or next year. Portfolios financed by one's own capital are here to stay. They can sit out temporary declines and wait for recovery without fear of receiving margin calls from a broker. Hence, for me it is completely immaterial what the price of a position does over the short term, and whether I reach my return goals with low-Stdev or high-Stdev positions. As long as the fundamentals justify a higher value than the security is currently trading for, a buy decision is justified, no matter how much the position has fluctuated in the past.
A Better Measure of Risk!
After all, even private investors need some instrument to determine the riskiness of their positions. I would propose the following: do not measure risk by the Stdev of the asset or stock price, but by the probability that your investment suffers a permanent impairment caused by a deterioration of the underlying operating business. Granted, I am not the first person to highlight this; Warren Buffett and other value-minded investors constantly emphasize the same.
To act reliably on this better risk measure, I have decided to always value businesses according to Greenwald's strategy, namely to determine the reproduction value of the assets, which provides a reliable floor for the value of an investment. The beauty of this strategy is that I do not have to fall back on shaky predictions of future earnings but can work with the assets the company owns today! If I then buy the equity for a price lower than said reproduction value, I can be sufficiently confident that the value of my investment will not permanently fall below that threshold, even if my earnings forecasts are wrong!
That means the higher the discount to the reproduction value of the assets, the less risky the investment. Likewise, the higher the premium over reproduction value, the riskier the investment.
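This rule of thumb reduces to simple arithmetic. The helper function and the figures below are hypothetical illustrations of the idea, not part of Greenwald's own notation:

```python
def discount_to_reproduction_value(market_cap, reproduction_value):
    """Fraction by which the market price sits below reproduction value.

    Positive result = discount (less risky under this measure).
    Negative result = premium (riskier under this measure).
    """
    return (reproduction_value - market_cap) / reproduction_value

# Hypothetical figures, in millions:
discount = discount_to_reproduction_value(80.0, 100.0)    # bought below floor
premium = discount_to_reproduction_value(120.0, 100.0)    # paid above floor

print(f"Discount case: {discount:+.0%}")   # +20% -> margin of safety
print(f"Premium case:  {premium:+.0%}")    # -20% -> paying up for earnings
```

A positive result means the asset floor alone covers the purchase price, which is precisely why a forecasting error in future earnings need not cause a permanent impairment.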
So, “Is our Understanding of Risk Seriously Flawed?” I would not go so far as to say we are on the completely wrong track. However, I would argue that people misuse the risk measures provided by modern portfolio theory. Volatility as risk is critical to highly levered institutions such as banks and insurance businesses, but it leads to wrong decisions for private, unlevered investors.