Table of Contents:
- a. A Brief Digression into the Field of Meteorology (A Bright Spot in the World of Forecasting)
- b. Endless Optimism in the Price of Real Estate
- c. The Folly of the Ratings Agencies
- d. Leveraged to High Heaven
- e. The Government’s Failed Forecast of the Great Recession
- f. Not Everyone Got the Great Recession Wrong (A Prelude to Part II)
Making decisions based on an assessment of future outcomes is a natural and inescapable part of the human condition. Indeed, as Nate Silver points out, “prediction is indispensable to our lives. Every time we choose a route to work, decide whether to go on a second date, or set money aside for a rainy day, we are making a forecast about how the future will proceed–and how our plans will affect the odds for a favorable outcome” (loc. 285). Over and above these private decisions, prognosticating also bleeds into the public realm: whole industries, from weather forecasting to sports betting to financial investing, are built on the premise that predictions of future outcomes are not only possible but can be made reliably. As Silver points out, though, there is a wide discrepancy across industries, and between individuals, in just how accurate these predictions are. In his new book The Signal and the Noise: Why So Many Predictions Fail–but Some Don’t, Silver attempts to get to the bottom of all this prediction-making to uncover what separates the accurate from the misguided.
In doing so, the author first takes us on a journey through financial crashes, political elections, baseball games, weather reports, earthquakes, disease epidemics, sports bets, chess matches, poker tables, and the good ol’ American economy, as we explore what goes into a well-made prediction and its opposite. The key teaching of this journey is that wise predictions come out of self-awareness, humility, and attention to detail: lack of self-awareness causes us to make predictions that tell us what we’d like to hear, rather than what is true (or most likely the case); lack of humility causes us to feel more certain than is warranted, leading us to rash decisions; and lack of attention to detail (in conjunction with self-serving bias and rashness) leads us to miss the key variables that make all the difference. Attention to detail is what we need to capture the signal in the noise (the key variable[s] in the sea of data and information that are integral in determining future outcomes), but without self-awareness and humility, we don’t even stand a chance.
While self-awareness requires us to make an honest assessment of our particular biases, humility requires us to take a probabilistic approach to our predictions. Specifically, Silver advises a Bayesian approach. Bayes’ theorem has it that when it comes to making a prediction, the most prudent way to proceed is to first come up with an initial probability of a particular event occurring (rather than a black-and-white prediction of the form ‘I believe x will occur’). Next, we must continually adjust this initial probability as new information filters in.
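The Bayesian procedure described above can be sketched in a few lines of code. This is a minimal illustration, not anything from the book: the recession scenario and all of the probabilities in it are hypothetical numbers chosen for the example.

```python
# A minimal Bayesian update, per Bayes' theorem:
# P(event | evidence) = P(evidence | event) * P(event) /
#                       [P(evidence | event) * P(event) +
#                        P(evidence | no event) * P(no event)]

def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Return the revised probability of an event given new evidence."""
    numerator = p_evidence_if_true * prior
    denominator = numerator + p_evidence_if_false * (1 - prior)
    return numerator / denominator

# Hypothetical initial estimate: a 20% chance of a recession next year.
prior = 0.20

# New information arrives: a leading indicator turns negative. Suppose
# (assumed figures) it does so in 70% of pre-recession years but only
# 10% of normal years.
posterior = bayes_update(prior, 0.70, 0.10)
print(round(posterior, 3))  # 0.636
```

The point of the exercise is that the forecast is never a flat ‘a recession will occur’: it is a probability that starts somewhere defensible and is revised, piece of evidence by piece of evidence, as the world reports back.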
The level of certainty that we can place on our initial estimate of the probability of a particular event (and the degree to which we can accurately refine it moving forward) is limited by the complexity of the field in which we are making our prediction, and also by the amount and quality of the information that we have access to. For instance, in a field like baseball, where wins and losses mostly come down to two variables (the skill of the pitchers and the skill of the hitters), and where there is an enormous wealth of precise data, prediction is relatively straightforward (but still not easy). On the other hand, in a dynamic and evolving field such as the American economy, where outcomes are influenced by an enormous number of variables, and where the interactions between these variables are both incredibly complex (due to things like positive and negative feedback) and subject to change over time, precise probabilities become a whole lot more difficult to pin down (though general and/or long-term estimates often remain possible).
It is also important to recognize that while additional information can help us no matter what field we are trying to make our prediction in, we must be careful not to think that information can stand on its own. Indeed, additional information (when it is not met with insightful analysis) often does nothing more than draw our attention away from the key variables that truly make a difference. In other words, it creates more noise, which can make it more difficult to identify the signal. It is for this reason that predictive models that rely on statistics and statistics alone are often not very effective (though they do often help a seasoned expert who is able to apply insightful analysis to them).
In the final stage of the book Silver explores how the lessons that he lays out can be applied to such fields as global warming, terrorism and financial markets. Unfortunately, each of these fields is a lot noisier than many of us would like to think (thus making them very difficult to predict precisely). Nevertheless, the author argues, within each there are certain signals that can help us make better predictions regarding them, and which should help make the world a safer and more livable place.
What follows is a full executive summary of The Signal and the Noise: Why So Many Predictions Fail—but Some Don’t by Nate Silver.
*It should be noted that while Silver covers a very wide range of subjects in his book, I eschew many of these subjects, and focus mostly on the economy (and a couple of political and social issues) here. There are many reasons for this. For one, the main argument of the book is repeated through the various subjects that are discussed, and I felt it advisable to focus on how the argument is developed in one subject in detail, rather than glance at it in less depth (and repeatedly) in the multitude of subjects. The economy is the field that receives the most attention in the book, which explains why I chose it, and not another field. Finally, at 500 pages, the book is simply too long for every topic to find its way into a 15 to 20 page summary (without drastically over-simplifying them).
**On a related note, I do not discuss Silver’s approach to political prediction-making here, as it is a relatively minor theme in the book. Information on this is readily available at his blog: http://fivethirtyeight.blogs.nytimes.com/
PART I: THE TROUBLE WITH PREDICTIONS: WHY SO MANY PREDICTIONS FAIL (AS SEEN THROUGH THE EYES OF THE ECONOMY)
If there is one field where we might expect the art and science of prediction to have evolved to a fairly high level of precision, it is the economy. To begin with, understanding where the economy is heading is of deep interest to many individuals, organizations, and whole nations, and therefore garners a great deal of attention from experts. In addition, there is a wealth of economic data available. Indeed, as Silver points out, “the government produces data on literally 45,000 economic indicators each year. Private data providers track as many as four million statistics” (loc. 3127). And finally, the progress of the economy is monitored very closely (loc. 3099), which means that prognosticators can easily gain a very good sense of how their predictions are faring, and thus make needed adjustments as they go.
Despite all of this, though, experts have a very poor track record of predicting the course of the economy. To begin with, the simplest and most accurate measure of an economy is commonly thought to be its Gross Domestic Product (GDP); and therefore, most economic forecasts come in the form of a GDP projection. For instance, an economist may project that the American economy will grow by 2.4% in the following year. These precise projections are derived from broader prediction intervals (though they are not often publicized as such). As Silver explains, “a prediction interval is a range of the most likely outcomes that a forecast provides for, much like the margin of error in a poll” (loc. 3070). So, for instance, the 2.4% growth projection mentioned above may be derived from a prediction interval holding it 90% certain that the economy will grow somewhere between 1.0% and 3.9% (leaving only a 10% chance that the economy’s growth will lie outside this range).
Now, for Silver, it is already unfortunate that economic forecasts most often come in the form of a precise number (rather than a prediction interval), since this gives the impression that the economy can be predicted to a high level of certainty, while the evidence indicates that this is simply not the case (loc. 3003). Even worse, the evidence indicates that the prediction intervals that economists come up with are themselves not very accurate. Take, for instance, the Survey of Professional Forecasters, “a quarterly poll put out by the Federal Reserve Bank of Philadelphia” (loc. 3048). In order for this poll’s 90% prediction intervals to be considered accurate, the actual growth rate of the economy should fall outside the prediction interval only 10% of the time, or on average 1 in every 10 years. However, going back to 1968, it has been found that “the actual figure for GDP fell outside the prediction interval almost half the time” (loc. 3079). As Silver points out, “there is almost no chance that the economists have simply been unlucky; they fundamentally overstate the reliability of their predictions” (loc. 3079).
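The calibration test being applied here is mechanical enough to sketch in code. The data below is made up for illustration (it is not Silver’s series going back to 1968); only the logic of the check matters: a well-calibrated 90% interval should miss only about 10% of the time.

```python
# Calibration check for 90% prediction intervals: how often did the
# realized GDP growth fall outside the forecasters' stated range?

def miss_rate(actuals, intervals):
    """Fraction of years in which the actual value fell outside [lo, hi]."""
    misses = sum(1 for a, (lo, hi) in zip(actuals, intervals)
                 if a < lo or a > hi)
    return misses / len(actuals)

# Hypothetical history: realized growth vs. the 90% interval issued.
actuals   = [2.5, -0.3, 4.1, 1.8, 3.0, -3.3]
intervals = [(1.0, 3.9), (0.5, 3.5), (1.2, 4.0),
             (0.8, 3.6), (1.5, 4.2), (1.0, 3.9)]

print(miss_rate(actuals, intervals))  # 0.5 -- far above the claimed 10%
```

A miss rate near 0.5 against a claimed 0.1 is exactly the pattern Silver describes: the forecasts are not merely wrong, they are systematically overconfident about how wrong they might be.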
Not only, then, do economists have a poor track record of predicting the economy, they also have a poor track record of predicting how reliable their own predictions are. Now, it is one thing to make poor predictions. It may just be the case that the phenomenon you are trying to predict eludes accurate forecasting. But there is really no excuse for egregiously misjudging how certain you are of your predictions, for you can check to see how they’ve done, and adjust your level of certainty accordingly. When it comes to economic forecasts, as Silver points out, “the true 90 percent prediction interval—based on how these forecasts have actually performed and not on how accurate the economists claim them to be—spans about 6.4 points of GDP (equivalent to a margin of error of plus or minus 3.2 percent). When you hear on the news that GDP will grow by 2.5 percent next year, that means it could easily grow at a spectacular rate of 5.7 percent instead. Or it could fall by 0.7 percent—a fairly serious recession. Economists haven’t been able to do any better than that, and there isn’t much evidence that their forecasts are improving” (loc. 3088).
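Silver’s arithmetic in that passage is worth making explicit. Using his empirically calibrated interval of 6.4 points of GDP (a margin of plus or minus 3.2 points) around a 2.5% headline forecast:

```python
# The historically honest 90% interval around a headline GDP forecast,
# using Silver's figures: a span of 6.4 points, i.e. +/- 3.2.
forecast = 2.5
margin = 3.2

print(round(forecast + margin, 1))  # 5.7  -> a spectacular boom
print(round(forecast - margin, 1))  # -0.7 -> a fairly serious recession
```

In other words, a single headline number like “2.5% growth” quietly spans everything from boom to recession once the forecasters’ actual track record is taken into account.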
To take just the most recent glaring example, the 2.4% growth projection mentioned above (with the 90% prediction interval between 1.0% and 3.9%) was the actual forecast that the Survey of Professional Forecasters issued in November 2007 for the 2008 fiscal year (loc. 3062), the very year that the American economy fell by 3.3 percent (loc. 3064). It gets worse: as Silver notes, the Survey “assigned only a 3 percent chance to the economy’s shrinking by any margin over the whole of 2008. And they gave it only about a 1-in-500 chance of shrinking by at least 2 percent, as it did” (loc. 3068).
And on the topic of downturns, if you thought that economists’ ability to predict the GDP was bad, it looks downright spectacular when compared to their track record in predicting recessions. Indeed, as Silver points out, “one actual statistic is that in the 1990’s, economists predicted only 2 of the 60 recessions around the world a year ahead of time” (loc. 3088). What’s worse, economists often fail to ‘predict’ recessions even when they’ve already begun. As Silver notes, “a majority of economists did not think we were in one when the three most recent recessions, in 1990, 2001, and 2007, were later determined to have begun” (loc. 3006).
And when economists aren’t busy failing to ‘predict’ recessions that have already begun, they are busy predicting ones that never arrive. To take just one example, “in September 2011, [the forecasting firm] ECRI predicted a near certainty of a ‘double dip’ recession. ‘There’s nothing policy makers can do to head it off,’ it advised. ‘If you think this is a bad economy, you haven’t seen anything yet’” (loc. 3322). Not only did a double-dip recession fail to materialize, but “the S&P gained 21 percent in the five months after ECRI announced its recession call, while GDP growth registered at a fairly healthy clip of 3.0 percent in the last quarter of 2011 instead of going into recession” (loc. 3342).
The problem with all of this is not just that economists are getting their predictions (and their predictions of their predictions) wrong, but that their overconfidence leads others astray, which, as Silver notes, “tends to leave us less prepared when a deluge hits” (loc. 3011). Of course, economists are not alone in being overconfident with their predictions. As the author notes, “this property of overconfident predictions has been identified in many other fields, including medical research, political science, finance, and psychology” (loc. 3092).
Now, while economists may have no good excuses for their overconfidence, there are plenty of good reasons (and a few bad ones) as to why they have failed when it comes to making accurate predictions. Understanding these factors is integral to understanding prudent prediction-making, so we will turn our attention to them now.
To begin with, the number of factors that influence the American economy is simply staggering, and truly global in scope. As Silver explains, “a tsunami in Japan or a longshoreman’s strike in Long Beach can affect whether someone in Texas finds a job” (loc. 3290). This puts the American economy in stark contrast to a field such as baseball, where the number of factors that influence outcomes is very limited. Indeed, wins and losses in baseball mostly come down to two factors, the skill of the pitchers and the skill of the hitters: “although baseball is a team sport, it proceeds in a highly orderly way: pitchers take their turn in the rotation, hitters take their turn in the batting order, and they are largely responsible for their own statistics… That makes life easy for a baseball forecaster” (loc. 1423).
*For prospective buyers: To get a good indication of how this (and other) articles look before purchasing, I’ve made several of my past articles available for free. Each of my articles follows the same form and is similar in length (15-20 pages). The free articles are available here: Free Articles