News Release 1999-95 | October 15, 1999
Two millennia ago, the leaders of ancient Greece and Rome would consult an oracle to help them assess the potential losses associated with critical decisions of statecraft. In other words, they looked to the oracle to provide insight into the magnitude and probability of risk.
The oracle's research generally consisted of disemboweling a goat and sifting for clues in the entrails. Not surprisingly, the revelations gained by this method proved to be of limited value. But without any better alternatives, this ritual survived for a thousand years, to the dismay of the goats.
Today the supply of seers trained in the art of examining entrails is rather limited. Fortunately, with the advent of the scientific method, we now have advanced tools to aid in decision making. Modern information and communication technologies have increased the emphasis on data and the speed at which some decisions can be made. In the financial industries and elsewhere, quantification abounds. Indeed, the use of vast amounts of data and analytic tools to measure risks is one of the defining, if less noticed, characteristics of our financial age.
At financial institutions the modern-day analogue to the oracle is a modeler who holds advanced degrees in economics, finance or even physics. Those modelers have sophisticated computer hardware and software and extensive data upon which to base their analyses. They use modern mathematical techniques, sometimes borrowed from the physical sciences. And emerging techniques hold out promise for further advances in risk assessment. While some may recoil at the complexity of such measures, at least the goats have won a reprieve.
Despite the application of increasingly sophisticated techniques by well-trained people, we are frequently reminded that risk measurement is an imperfect discipline. Analysts, even the best and the brightest, are still caught by surprise by sudden movements in interest rates, foreign exchange rates, and asset prices. Perhaps the situation is best summarized by the plaintive question we hear so often: how many more times will we see the financial equivalent of the thousand-year flood?
Given the demonstrated limitations of risk measurement, there are those who would dispense with it and restore the primacy of instinct in risk assessments. This school of thought claims that risk can be sensed but not accurately measured. Thus the lines are drawn.
Our conference was stimulated by the significant strides that have been made in risk assessment and by the divergent views about the outlook for progress in this area. In effect, we have come together to ask this question: is risk measurement an advanced science — or a pseudoscience to be ignored? The answer, of course, lies somewhere in between. Financial risk measurement has evolved and its evolution continues. Quantification is an essential ingredient of any risk management system, but it's no panacea.
In order to understand where we are in terms of risk measurement, it is helpful to see where we have been and how we got here. Obviously, the sophisticated risk measurement tools that we think of today are of fairly recent vintage. Yet, asset manager and financial writer Peter Bernstein reminds us that European bankers were controlling risk before they even had a rudimentary understanding of fractions. Bernstein observes that Western Civilization did not even embrace the Hindu-Arabic numbering system—which allowed for fractions—until late in the 15th century and it took another three centuries before modern probability theory evolved. Clearly, banking and financial risk taking predated advanced mathematics.
In the absence of analytical risk measurement tools, banks, like other businesses, practiced sensible risk management principles such as diversification and risk shifting through rudimentary insurance and futures markets. But demand eventually led to important advances, especially in mathematics, and they were catalysts for still further improvements in our ability to analyze and measure risk.
In recent centuries, the interplay between the demands of business and the supply of knowledge combined to give us modern, market-based economies. Science and technology accelerated production and commerce. The harnessing of power, advances in engineering and rapid transportation combined to create complex economies. The linkages inherent in complex market economies brought both abundant rewards and new risks.
More recently—basically in the post World War II world—we have seen the development of modern financial theory. Modern portfolio theory, which provides a rational basis for the principle of diversification, dates to the 1950s. Option pricing theory, in the form of the now famous Black-Scholes formula, dates to the early 1970s. That date, more than any other, marks the beginning of the modern era of financial derivatives.
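For reference, that now-famous formula, in its standard textbook form, prices a European call option on a non-dividend-paying stock as

```latex
C = S\,N(d_1) - K e^{-rT} N(d_2),
\qquad
d_1 = \frac{\ln(S/K) + \left(r + \tfrac{\sigma^2}{2}\right)T}{\sigma\sqrt{T}},
\qquad
d_2 = d_1 - \sigma\sqrt{T}
```

where S is the current stock price, K the strike price, r the risk-free interest rate, sigma the volatility of the stock's returns, T the time to expiration, and N the cumulative standard normal distribution function.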
Over the last two decades, advances in computers and telecommunications have changed the economy again. At the same time, advances in the use of those technologies have changed risk analysis. It is now routine for financial firms to employ experts with advanced degrees in mathematics, statistics, computer science, physics, chemistry and other scientific disciplines, as well as finance. We enter the new century with a financial system that is dependent on data, models, modelers, and fluid markets.
It is axiomatic that banks are in the business of managing risk. It gives that axiom a modern twist to say that banks are engaged in risk measurement, presumably using sophisticated tools. Interest in risk measurement in banking has now become so broad and deep that even corporate boardrooms are not safe from references to probability distributions or "Value at Risk" numbers. Journals devoted solely to the measurement of risk have begun to appear on coffee tables in bank reception areas. Given the extent to which the lingo of risk measurement has permeated routine financial discourse, one might easily conclude that it is simply a matter of time before we will be able to quantify all of the risks that banks face!
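To make the jargon concrete, here is a minimal sketch of how a one-day "Value at Risk" number can be read off a distribution of daily profit and loss. The simulated series, the dollar scale, and the confidence level are illustrative assumptions only, not figures from any bank.

```python
# Minimal historical-simulation Value at Risk sketch on hypothetical data.
# The simulated P&L series below is an assumption for illustration only.
import numpy as np

rng = np.random.default_rng(seed=0)
daily_pnl = rng.normal(loc=0.0, scale=1_000_000, size=250)  # one hypothetical year of daily P&L, in dollars

confidence = 0.99
# The 99% VaR is the loss threshold that daily losses exceed on roughly 1% of days.
var_99 = -np.percentile(daily_pnl, 100 * (1 - confidence))
print(f"1-day 99% VaR: ${var_99:,.0f}")
```

A real bank would, of course, build that profit-and-loss distribution from its actual positions and market data rather than from simulated numbers.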
However, if you scratch below the surface—if you actually read the journals or talk to the practitioners—you quickly discover that there are many unanswered questions about how to measure risk. There are serious disagreements about the fundamental issues of how to define risk and about whether risk measurement is a realistic goal. That is why we are delighted to offer this conference—so that we can learn from one another and get a better sense of how each of us defines and attempts to measure risk.
Experts have advanced different views as to what "risk" actually is and how it should be measured. Yesterday, we had two sessions devoted to defining risk, and there was more agreement than some might have expected. We heard that the development of risk measures is at different stages for different types of risk. The measurement of market risk in traded instruments is more advanced than the measurement of interest rate risk in the non-traded portions of bank portfolios. The measurement of credit risk lags further. And the measurement — let alone the definition — of operating risk lags further still. We also heard that banks are at different stages in their implementation of risk measures.
Perhaps the most important point that has emerged from our conference is that measuring risk is devilishly difficult work. It seeks to measure what's not actually known. Indeed, an important and nettlesome component of risk measurement is determining the probabilities of various outcomes. And the fact that those probabilities are not directly observable means that risk measurement must rely on models that attempt to represent those probabilities. In other words, we might be better off if we called it risk estimation instead of risk measurement. This is more than a difference in semantics. The distinction drives home the importance of the definitional discussion we heard yesterday. Modeling and measuring risks require clear definitions.
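A small numerical sketch may help show why "estimation" is the apter word. Under an assumed heavy-tailed loss distribution (both the distribution and the sample size below are illustrative choices, not anything presented at the conference), the tail percentile estimated from a single year of data varies noticeably from sample to sample.

```python
# Sketch: repeatedly estimate a 99th-percentile loss from one year of
# hypothetical daily data and observe how much the estimates vary.
# The Student-t loss distribution and sample size are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(seed=1)
estimates = []
for _ in range(1000):
    losses = rng.standard_t(df=3, size=250)       # one hypothetical year of daily losses
    estimates.append(np.percentile(losses, 99))   # estimated 99th-percentile loss

estimates = np.array(estimates)
print(f"99th-percentile loss estimate: mean {estimates.mean():.2f}, "
      f"standard deviation {estimates.std():.2f}")
```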
Which risk one focuses on reflects one's objectives. Among my duties as Comptroller of the Currency are maintaining the safety and soundness of the banking system and monitoring the industry to identify those banks taking "excessive risks." Thus I focus attention on the financial risk to banks—the chance that future losses will eradicate capital and that future earnings will be extremely low, perhaps even low enough to cause failure. Obviously, that risk concept is of interest to bank creditors, bank managers, and other stakeholders in banks. However, the financial risk that I focus on differs, for example, from the risk on which an investor in a bank's stock might focus. While stock investors are, of course, interested in the risk that the bank might fail, that is not their only concern; they are also interested in the risk the stock adds to their diversified portfolios.
The second reason to distinguish between the measurement and the estimation of risk is that terminology affects perceptions. When we are told that banks use risk measurement tools, it provides the comforting connotation of precision. But when we then read that a bank has been surprised by large losses, we blame the risk measurement model. Both reactions are exaggerated. Models are not exact. No single event can prove or disprove their validity.
This leads me to the third reason it is important to recognize the distinction between measurement and estimation. The validation of risk estimation models is a difficult exercise. There is no absolute standard by which we can judge a risk model. All models are, by their nature, imperfect — yet hopefully valuable — approximations of reality. Thus, the relevant question is whether one model is better than the next. But because different models seek to do different things, determining their accuracy and reliability is often difficult to do.
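One practical way to ask whether one model is better than the next is to backtest: compare each model's daily 99 percent loss forecasts against realized outcomes and count the exceptions. The sketch below uses hypothetical data and two made-up forecast series purely to illustrate the comparison; it is not a description of any bank's or the OCC's procedure.

```python
# Sketch of comparing two risk models by backtesting their 99% loss forecasts.
# Realized losses and both forecast series are hypothetical assumptions.
import numpy as np

rng = np.random.default_rng(seed=2)
realized_losses = rng.standard_t(df=4, size=500)   # hypothetical daily losses
forecasts = {
    "Model A": np.full(500, 2.3),                  # a tighter 99% loss forecast
    "Model B": np.full(500, 3.8),                  # a more conservative forecast
}

for name, var_forecast in forecasts.items():
    exceptions = int((realized_losses > var_forecast).sum())
    expected = 0.01 * realized_losses.size
    print(f"{name}: {exceptions} exceptions vs. roughly {expected:.0f} expected at the 99% level")
```

Even a comparison of that kind is only as good as the period of data behind it, which is one reason validation takes both time and judgment.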
Thus, the complexity and inherently forward-looking nature of banking has led to the use of risk models. While risk modeling requires precise definitions, those models cannot deliver precise measurements. Furthermore, only with the passage of time can we validate and improve those models. Together, these factors go a long way toward explaining why, despite the impressive advances in computer science, mathematics, and financial theory, we have still not finished the task of building risk measurement models in banking.
The OCC employs an approach to bank supervision called supervision by risk. An important component of that approach is that banks are expected to know what risks they are taking, understand the implications of those risks, and be able to manage them. Since risk measurement relies on models, we expect banks to be aware that they face model risk—the imprecision associated with the use of any model. We expect banks to approach model risk in a prudent and systematic fashion, as they would any other risk that they face.
We expect banks to understand any model that they use. That means understanding that models are not calculators and do not give precise answers. That means understanding not only how to operate a model, but also its limitations. And we expect banks to have in place a process for validating the models they build and for critically evaluating the validation of models that they buy. That means educating or hiring staff who understand the models. That means investing in infrastructure that will allow monitoring and reporting on the operation of those models.
Model validation, in this context, means making the determination that a model is appropriate for a particular use. We understand that model validation is difficult. Since every model has its limitations, a model cannot be held to any absolute standard of performance. Instead, a model is validated by comparing it to an alternative. Such validation requires expertise in modeling and experience in judging models. At the OCC we use experts in model validation to review the process that banks have in place to validate their models.
In short, while models can be quite sophisticated and complex and useful, banks cannot place undue reliance on the output they produce.
I can tell you with some confidence that developing improved risk measurements will continue to be a priority for banks and regulators. The information revolution, technological advances, and intense competition that accompany any industry deregulation mandate it. Better risk measurement will inevitably lead to better risk management. Those banks most capable of incorporating good risk measurement strategies into their decision-making functions will ultimately realize greater financial returns.
However, financial risk measurement has a long way to go before it lives up to its notices. Until we adjust expectations about what risk models can realistically deliver, some who use them will inevitably be disappointed.
This conference has provided a forum to begin some reconciliation of risk measurement issues and differences. Over time, some of the approaches discussed during these two days will prove superior to others. The market will ensure that those approaches that provide the most valuable information will remain in use. Those approaches that prove less useful will fall by the wayside. But, given its importance, neither bankers nor regulators can afford to sit on their hands and watch this process run its course.
Even the most advanced current techniques and procedures will inevitably outlive their usefulness. Better procedures will continue to replace less useful ones. The business of banking will continue to change, and risk measurement will change with it. We've got some tough miles ahead. At least it should be easier on the goats.
Media Contact: Robert M. Garsson (202) 874-5770