Table of Contents:
PART I: THE TROUBLE WITH PREDICTIONS: WHY SO MANY PREDICTIONS FAIL (AS SEEN THROUGH THE EYES OF THE ECONOMY)
1. The Trouble with Predicting the Economy
2. The Complicating Factors
- A Brief Digression into the Field of Meteorology (A Bright Spot in the World of Forecasting)
3. The Statistical Solution (and its Problems)
4. Biased Thinking
5. A Case Study in Failed Prediction-Making: The Great Recession
- a. Endless Optimism in the Price of Real Estate
- b. The Folly of the Ratings Agencies
- c. Leveraged to High Heaven
- d. The Government’s Failed Forecast of the Great Recession
- e. Not Everyone Got the Great Recession Wrong (A Prelude to Part II)
PART II: TOWARDS BETTER PREDICTION-MAKING
Section 1: Strategies to Help Us with Our Predictions
6. The Bayesian Approach
7. Hedgehogs and Foxes: Be Foxy
Section 2: Applying the Prediction Strategies to Different Fields
8. Predicting the Stock Market
9. Predicting Climate Change
10. Predicting Terrorism
Making decisions based on an assessment of future outcomes is a natural and inescapable part of the human condition. Indeed, as Nate Silver points out, “prediction is indispensable to our lives. Every time we choose a route to work, decide whether to go on a second date, or set money aside for a rainy day, we are making a forecast about how the future will proceed–and how our plans will affect the odds for a favorable outcome” (loc. 285). And over and above these private decisions, prognosticating of course bleeds over into the public realm: whole industries, from weather forecasting to sports betting to financial investing, are built on the premise that predictions of future outcomes are not only possible but can be made reliable. As Silver points out, though, there is a wide discrepancy across industries, and between individuals, regarding just how accurate these predictions are. In his new book The Signal and the Noise: Why So Many Predictions Fail–but Some Don’t, Silver attempts to get to the bottom of all of this prediction-making to uncover what separates the accurate from the misguided.
In doing so, the author first takes us on a journey through financial crashes, political elections, baseball games, weather reports, earthquakes, disease epidemics, sports bets, chess matches, poker tables, and the good ol’ American economy, as we explore what goes into a well-made prediction and its opposite. The key teaching of this journey is that wise predictions come out of self-awareness, humility, and attention to detail: lack of self-awareness causes us to make predictions that tell us what we’d like to hear, rather than what is true (or most likely the case); lack of humility causes us to feel more certain than is warranted, leading us to rash decisions; and lack of attention to detail (in conjunction with self-serving bias and rashness) leads us to miss the key variables that make all the difference. Attention to detail is what we need to capture the signal in the noise (the key variable[s] in the sea of data and information that are integral in determining future outcomes), but without self-awareness and humility, we don’t even stand a chance.
While self-awareness requires us to make an honest assessment of our particular biases, humility requires us to take a probabilistic approach to our predictions. Specifically, Silver advises a Bayesian approach. Bayes’ theorem has it that when it comes to making a prediction, the most prudent way to proceed is to first come up with an initial probability of a particular event occurring (rather than a black and white prediction of the form `I believe x will occur’). Next, we must continually adjust this initial probability as new information filters in.
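The Bayesian procedure described above can be sketched in a few lines of code. This is a minimal illustration with invented numbers (the prior, and the two likelihoods for the evidence, are assumptions for the sake of the example, not figures from the book):

```python
# A minimal sketch of Bayesian updating: start with a prior
# probability for an event, then revise it via Bayes' theorem
# as each new piece of evidence arrives.

def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Return the posterior P(event | evidence)."""
    numerator = prior * p_evidence_if_true
    denominator = numerator + (1 - prior) * p_evidence_if_false
    return numerator / denominator

# Hypothetical example: we initially judge a recession next year
# to be 20% likely. A leading indicator then turns negative -- a
# signal we assume appears in 70% of pre-recession years but only
# 25% of normal years.
prior = 0.20
posterior = bayes_update(prior, p_evidence_if_true=0.70, p_evidence_if_false=0.25)
print(round(posterior, 3))  # 0.412
```

The point of the exercise is the shape of the process, not the particular numbers: the forecast is a probability that moves with the evidence, never a flat “x will occur.”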
The level of certainty that we can place on our initial estimate of the probability of a particular event (and the degree to which we can accurately refine it moving forward) is limited by the complexity of the field in which we are making our prediction, and also the amount and quality of the information that we have access to. For instance, in a field like baseball, where wins and losses mostly come down to two variables (the skill of the pitchers, and the skill of the hitters), and where there is an enormous wealth of precise data, prediction is relatively straightforward (but still not easy). On the other hand, in a dynamic and evolving field such as the American economy, where the outcomes are influenced by an enormous number of variables, and where the interactions between these variables are both incredibly complex (due to things like positive and negative feedback), and also subject to change over time, probabilities become a whole lot more difficult to pin down precisely (though they often remain possible on a general and/or long-term scale).
It is also important to recognize that while additional information can help us no matter what field we are trying to make our prediction in, we must be careful not to think that information can stand on its own. Indeed, additional information (when it is not met with insightful analysis) often does nothing more than draw our attention away from the key variables that truly make a difference. In other words, it creates more noise, which can make it more difficult to identify the signal. It is for this reason that predictive models that rely on statistics and statistics alone are often not very effective (though they do often help a seasoned expert who is able to apply insightful analysis to them).
In the final stage of the book Silver explores how the lessons that he lays out can be applied to such fields as global warming, terrorism and financial markets. Unfortunately, each of these fields is a lot noisier than many of us would like to think (thus making them very difficult to predict precisely). Nevertheless, the author argues, within each there are certain signals that can help us make better predictions regarding them, and which should help make the world a safer and more livable place.
What follows is a full executive summary of The Signal and the Noise: Why So Many Predictions Fail—but Some Don’t by Nate Silver.
*It should be noted that while Silver covers a very wide range of subjects in his book, I eschew many of these subjects, and focus mostly on the economy (and a couple of political and social issues) here. There are many reasons for this. For one, the main argument of the book is repeated through the various subjects that are discussed, and I felt it advisable to focus on how the argument is developed in one subject in detail, rather than glance at it in less depth (and repeatedly) in the multitude of subjects. The economy is the field that receives the most attention in the book, which explains why I chose it, and not another field. Finally, at 500 pages, the book is simply too long for every topic to find its way into a 15 to 20 page summary (without drastically over-simplifying them).
**In a related note, I do not discuss Silver’s approach to political prediction-making here, as it is a relatively minor theme in the book. Information on this is readily available at his blog: http://fivethirtyeight.blogs.nytimes.com/
PART I: THE TROUBLE WITH PREDICTIONS: WHY SO MANY PREDICTIONS FAIL (AS SEEN THROUGH THE EYES OF THE ECONOMY)
1. The Trouble with Predicting the Economy
If there is one field where we might expect the art and science of prediction to have evolved to a fairly high level of precision it is the economy. To begin with, understanding where the economy is heading is of deep interest to many individuals, organizations and whole nations, and therefore garners a great deal of attention from experts. In addition, there is a wealth of economic data available. Indeed, as Silver points out, “the government produces data on literally 45,000 economic indicators each year. Private data providers track as many as four million statistics” (loc. 3127). And finally, the progress of the economy is monitored very closely (loc. 3099), which means that prognosticators can easily gain a very good sense of how their predictions are faring, and thus make needed adjustments as they go.
Despite all of this, though, experts have a very poor track record of predicting the course of the economy. To begin with, the simplest and most accurate measure of an economy is commonly thought to be its Gross Domestic Product (GDP); and therefore, most economic forecasts come in the form of a GDP projection. For instance, an economist may project that the American economy will grow by 2.4% in the following year. These precise projections are derived from broader prediction intervals (though they are not often publicized as such). As Silver explains, “a prediction interval is a range of the most likely outcomes that a forecast provides for, much like the margin of error in a poll” (loc. 3070). So, for instance, the 2.4% growth projection mentioned above may be derived from a prediction interval which has it as 90% certain that the economy will grow somewhere between 1.0% and 3.9% (leaving only a 10% chance that the economy’s growth will lie outside of this range).
Now, for Silver, it is already unfortunate that economic forecasts most often come in the form of a precise number (rather than a prediction interval), since this gives off the impression that the economy is something that can be predicted to a high level of certainty, while the evidence indicates that this is simply not the case (loc. 3003). Even worse, though, the evidence indicates that the prediction intervals that economists come up with are themselves not very accurate. So, for instance, take the Survey of Professional Forecasters, “a quarterly poll put out by the Federal Reserve Bank of Philadelphia” (loc. 3048). In order for the 90% prediction intervals that this poll comes up with to be considered accurate, the actual growth rate of the economy should fall outside of the prediction interval only 10% of the time, or on average 1 in every 10 years. However, going back to 1968, it has been found that “the actual figure for GDP fell outside the prediction interval almost half the time” (loc. 3079). As Silver points out, “there is almost no chance that the economists have simply been unlucky; they fundamentally overstate the reliability of their predictions” (loc. 3079).
Not only, then, do economists have a poor track record of predicting the economy, they also have a poor track record of predicting how reliable their own predictions are. Now, it is one thing to make poor predictions. It may just be the case that the phenomenon you are trying to predict eludes accurate forecasting. But there is really no excuse for egregiously misjudging how certain you are of your predictions, for you can check to see how they’ve done, and adjust your level of certainty accordingly. When it comes to economic forecasts, as Silver points out, “the true 90 percent prediction interval—based on how these forecasts have actually performed and not on how accurate the economists claim them to be—spans about 6.4 points of GDP (equivalent to a margin of error of plus or minus 3.2 percent). When you hear on the news that GDP will grow by 2.5 percent next year, that means it could easily grow at a spectacular rate of 5.7 percent instead. Or it could fall by 0.7 percent—a fairly serious recession. Economists haven’t been able to do any better than that, and there isn’t much evidence that their forecasts are improving” (loc. 3088).
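The calibration test Silver describes is easy to run for yourself once you have a forecaster's track record: count how often the actual outcome landed outside the stated 90% interval, and compare that miss rate to the claimed 10%. Here is a toy version of the check; the interval/outcome tuples below are invented for illustration, not the real Survey of Professional Forecasters data:

```python
# Calibration check for 90% prediction intervals: a well-calibrated
# forecaster's actual outcomes should fall outside the interval
# only about 10% of the time.

def miss_rate(forecasts):
    """forecasts: list of (low, high, actual) tuples, e.g. GDP growth in %."""
    misses = sum(1 for low, high, actual in forecasts
                 if not (low <= actual <= high))
    return misses / len(forecasts)

# Invented history of four annual forecasts (low, high, actual):
history = [
    (1.0, 3.9, -3.3),   # badly missed (a 2008-style year)
    (0.5, 3.5, 2.1),    # hit
    (1.2, 4.0, 4.5),    # missed high
    (0.0, 2.8, 1.9),    # hit
]
print(miss_rate(history))  # 0.5 -- far above the claimed 10%
```

A miss rate near one-half, which is roughly what Silver reports for the real data going back to 1968, is the signature of systematic overconfidence rather than bad luck.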
To take just the most recent glaring example, the 2.4% growth projection mentioned above (with the 90% prediction interval between 1.0% and 3.9%) was the actual forecast that the Survey of Professional Forecasters issued in November 2007 for the 2008 fiscal year (loc. 3062), which is the very year that the American economy fell by 3.3 percent (loc. 3064). It gets worse. As Silver notes, the Survey “assigned only a 3 percent chance to the economy’s shrinking by any margin over the whole of 2008. And they gave it only about a 1-in-500 chance of shrinking by at least 2 percent, as it did” (loc. 3068).
And on the topic of downturns, if you thought that economists’ ability to predict the GDP was bad, it looks downright spectacular when compared to their track record in predicting recessions. Indeed, as Silver points out, “one actual statistic is that in the 1990s, economists predicted only 2 of the 60 recessions around the world a year ahead of time” (loc. 3088). What’s worse, economists often fail to ‘predict’ recessions even when they’ve already begun. As Silver notes, “a majority of economists did not think we were in one when the three most recent recessions, in 1990, 2001, and 2007, were later determined to have begun” (loc. 3006).
And when economists aren’t busy failing to ‘predict’ recessions that have already begun, they are busy predicting ones that never arise. To take just one example, “in September 2011, [the forecasting firm] ECRI predicted a near certainty of a ‘double dip’ recession. ‘There’s nothing policy makers can do to head it off,’ it advised. ‘If you think this is a bad economy, you haven’t seen anything yet’” (loc. 3322). Not only did a double dip recession fail to materialize, though, but “the S&P gained 21 percent in the five months after ECRI announced its recession call, while GDP growth registered at a fairly healthy clip of 3.0 percent in the last quarter of 2011 instead of going into recession” (loc. 3342).
The problem with all of this is not just that economists are getting their predictions (and their predictions of their predictions) wrong, but that their overconfidence leads others astray, which, as Silver notes, “tends to leave us less prepared when a deluge hits” (loc. 3011). Of course, economists are not alone in being overconfident with their predictions. As the author notes, “this property of overconfident predictions has been identified in many other fields, including medical research, political science, finance, and psychology” (loc. 3092).
Now, while economists may have no good excuses for their overconfidence, there are plenty of good reasons (and a few bad ones) as to why they have failed when it comes to making accurate predictions. Understanding these factors is integral to understanding prudent prediction-making, so we will turn our attention to them now.
2. The Complicating Factors
To begin with, the number of factors that influence the American economy is simply staggering, and truly global in scope. As Silver explains, “a tsunami in Japan or a longshoreman’s strike in Long Beach can affect whether someone in Texas finds a job” (loc. 3290). This puts the American economy in stark contrast to a field such as baseball, where the number of factors that influence outcomes is very limited. Indeed, wins and losses in baseball mostly come down to two factors: the skill of the pitchers, and the skill of the hitters: “although baseball is a team sport, it proceeds in a highly orderly way: pitchers take their turn in the rotation, hitters take their turn in the batting order, and they are largely responsible for their own statistics… That makes life easy for a baseball forecaster” (loc. 1423).
What’s more, not only are there very few salient factors in baseball, but it is also very straightforward to tease out cause and effect. As Silver puts it, “there are relatively few problems involving complexity and nonlinearity. The causality is easy to sort out” (loc. 1427). In the economy, by contrast, it can be extremely difficult to separate out cause and effect. Take the unemployment rate, for instance. The unemployment rate is often considered an effect of economic health, for the underlying health of the economy will influence whether businesses are hiring or not. However, the unemployment rate also influences consumer demand which then has a causal effect on the health of the economy (loc. 3177). Or take consumer confidence. Consumer confidence has the capacity to influence the health of the economy, but consumer confidence also depends on the health of the economy, and it can fluctuate depending on where the economy is in the business cycle, “thus economists debate whether consumer confidence is a leading or lagging indicator, and the answer may be contingent on the point in the business cycle the economy finds itself at” (loc. 3180).
And over and above the fact that it is difficult to tease apart cause and effect in the economy, the situation is further complicated by causal feedback loops, both positive feedback loops, which are self-reinforcing (like the relationship between sales and unemployment [loc. 3180]), and negative feedback loops, which are self-negating (like the relationship between supply and demand, and that between fear and greed [loc. 689, 699]).
On top of this, forecasts of the economy may themselves affect outcomes, since they can influence behavior in their own right. Depending on the situation, then, an economic forecast can either turn itself into a self-fulfilling prediction, or undermine itself (loc. 3693).
In addition, economic policies and other underlying conditions can influence the cause and effect process. For instance, under normal circumstances housing prices may be a good indicator of economic health, but “if the government artificially takes steps to inflate housing prices, they might well increase, but they will no longer be good measures of overall economic health” (loc. 3196).
Similarly, the global economy may evolve in such a way that past trends may become obsolete. For example, as Silver explains, “historically… there has been a reasonably strong correlation between GDP growth and job growth. Economists refer to this as Okun’s law… But its dynamics seem to have changed. After each of the last couple of recessions, considerably fewer jobs were created than would have been expected during the Long Boom years. In the year after the stimulus package was passed in 2009, for instance, GDP was growing fast enough to create about two million jobs according to Okun’s law. Instead, an additional 3.5 million jobs were lost during the period” (loc. 3209). Many theories have been advanced to explain this phenomenon, but one of the leading ones has it that the global economy has simply changed in such a way that Okun’s law no longer holds (loc. 3213).
In short, then, the economy is not only an extremely complex system, but an evolving one, such that what influences it in one way at one time may influence it in another way (and even an opposite one) in another (loc. 3202). What’s more, as you might expect, it is virtually impossible to predict when conditions may change that will upset the causal process: “you never know when the next paradigm shift will occur, and whether it will tend to make the economy more volatile or less so, stronger or weaker” (loc. 3265). All of these factors, then, make it extremely difficult to pin down the causal processes at play in the economy, thus making it very difficult to predict.
But even if this causal labyrinth could be untangled, predicting the future course of the economy would still not be a straightforward matter, for it is also complicated by the fact that we do not even have a precise understanding of what is going on in this field from moment to moment. This proves to be the case because, as Silver points out, “most economic data series are subject to revision, a process that can go on for months and even years after the statistics are first published. The revisions are sometimes enormous” (loc. 3271). For example, America’s GDP at any given time is only known to an approximate degree, and even these approximations are sometimes way off. “One somewhat infamous example”, Silver mentions, “was the government’s estimate of GDP growth in the last quarter of 2008. Initially reported as ‘only’ a 3.8 percent rate of decline, the economy is now believed to have been declining at almost 9 percent” (loc. 3271). As you can well imagine, when you combine uncertainty in initial conditions, with causal uncertainty, you are left with a prediction nightmare.
A Brief Digression into the Field of Meteorology (A Bright Spot in the World of Forecasting)
Interestingly, insofar as the initial conditions in the economy are difficult to pin down precisely, this field is akin to another field wherein prediction plays a key part, and that is the field of meteorology. Indeed, the uncertainty in initial conditions in meteorology is precisely why the predictions in this field are couched in terms of probabilities (like 20% chance of rain) (loc. 2127) (Silver argues that economists should do the same). The fact is that meteorological instruments are only so precise (loc. 2116), and very small divergences in initial conditions can lead to vastly different outcomes over the long term (loc. 2111).
Where meteorology differs from the economy, though, is that the causal process here is very well understood (loc. 2013). For this reason, meteorologists have been able to devise highly effective weather models. Still, these models are incredibly complex, and require supercomputers that fill whole rooms in order to run (loc. 1950). It is for this reason that the field of meteorology has witnessed such vast improvements over the past 40 years, as computers have become ever faster. As Silver explains, “in the mid-‘70s, the jokes about weather forecasters had some grounding in truth. On average, for instance, the NWS [National Weather Service] was missing the high temperature by about 6 degrees when trying to forecast it three days in advance… Today, the average miss is about 3.5 degrees, meaning that almost half the inaccuracy has been stripped out” (loc. 2197).
Nevertheless, the computer models themselves aren’t foolproof, and they can be substantially improved upon when analyzed by an experienced meteorologist. As Silver explains, “humans improve the accuracy of precipitation forecasts by about 25 percent over the computer guidance alone, and temperature forecasts by about 10 percent” (loc. 2189). The reason for this is that, as sophisticated as they are, the computer models are still prone to being misled by statistical anomalies and outliers, whereas a human expert is not (loc. 2164, 2170). This truth holds in many fields—including the economy, as we shall now see.
3. The Statistical Solution (and its Problems)
Given that the causal process is so difficult to pin down in the economy, many economists have attempted to approach the problem of prediction in a purely statistical way. That is, rather than attempting to pick out the important causes and effects directly, by way of analysis, they have attempted to do so by way of looking at statistical patterns alone. This is particularly tempting to do in a field like the economy since, as mentioned above, so much statistical information is kept (loc. 3127). The idea is this: “ ‘just as you do not need to know exactly how a car engine works in order to drive safely… you do not need to understand all the intricacies of the economy to accurately read the gauges’” (loc. 3331).
As Silver points out, though, the statistical approach really hasn’t fared all that well. And in fact, Silver contends that this approach often leads to worse results. To take just one example, the failed prediction mentioned above by ECRI regarding the supposed imminent double-dip recession to occur in 2011 was based on one of these data models (loc. 3331) (the quote above that draws an analogy between driving a car and reading the economy was itself made by a representative of ECRI [loc. 3331]).
The problem with the statistical approach, according to Silver, is that it is susceptible to identifying variables as significant by coincidence alone. This is because correlation does not necessarily imply causation, and a statistical model, on its own, cannot tell the difference. In the field of statistics, the misleading variables that are churned out as a result of this error are called false positives. A (once) famous false positive relating to the economy was the winner of the Super Bowl. As Silver explains, “from Super Bowl I in 1967 through Super Bowl XXXI in 1997, the stock market gained an average of 14 percent for the rest of the year when a team from the original National Football League (NFL) won the game. But it fell by almost 10 percent when a team from the original American Football League (AFL) won instead. Through 1997, this indicator had correctly ‘predicted’ the direction of the stock market in twenty-eight of thirty-one years. A standard test of statistical significance, if taken literally, would have implied that there was only about a 1-in-4,700,000 possibility that the relationship had emerged from chance alone” (loc. 3137).
Of course, this incredible correlation did occur by chance alone (since 1998 the correlation has in fact reversed itself [loc. 3145]), and so this is a very glaring example of just how badly statistical models can be tripped up by coincidences.
Now, the Super Bowl winner, of course, plays no part in economic statistics, so it did not foul up economists. However, as mentioned above, upwards of four million economic statistics are kept; and, as Silver points out, “of the millions of statistical indicators in the world, a few will have happened to correlate especially well with stock prices or GDP or the unemployment rate. If not the winner of the Super Bowl, it might be chicken production in Uganda. But the relationship is merely coincidental” (loc. 3148). And it is just this type of coincidental correlation that is liable to contaminate a statistical model, and send it down a misguided path of prediction.
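The arithmetic behind this point is worth making explicit. A 1-in-4,700,000 fluke sounds impossible for any single indicator, but when you scan millions of candidate series, the expected number of such flukes is simply the number of indicators times the per-indicator chance. Using the two figures the book itself supplies (four million tracked statistics, and the Super Bowl indicator's nominal odds):

```python
# Why a sea of indicators guarantees flukes: a pattern with a
# 1-in-4,700,000 nominal p-value is unsurprising when millions of
# candidate series are scanned for correlations.

n_indicators = 4_000_000   # "as many as four million statistics"
p_fluke = 1 / 4_700_000    # the Super Bowl indicator's nominal odds

# Expected number of equally "impossible" coincidences by chance alone:
expected_flukes = n_indicators * p_fluke
print(round(expected_flukes, 2))   # 0.85 -- nearly one expected

# Probability of seeing at least one such coincidence somewhere
# (treating the indicators as independent, an assumption made here
# purely for the back-of-the-envelope calculation):
p_at_least_one = 1 - (1 - p_fluke) ** n_indicators
print(round(p_at_least_one, 2))    # 0.57 -- more likely than not
```

In other words, with four million gauges on the dashboard, it would be more surprising *not* to find a Super Bowl-grade coincidence than to find one.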
Now, this is not to say that Silver is outright opposed to statistical models. On the contrary, he strongly believes that they can be of great value. And, in fact, the author has himself been responsible for creating statistical models in a professional capacity, both in the field of baseball (loc. 200, 1323) and political forecasting (loc. 200, 1099). For Silver, though, the problem comes when we start thinking that statistical models can stand on their own, unaided by human analysis and insight. This is simply not the case. And if we think otherwise, our statistical models are more likely to become a hindrance than a help. As Silver puts it, “technology is beneficial as a labor-saving device, but we should not expect our machines to do our thinking for us” (loc. 4496).
Unfortunately, many of us tend to think that, as more and more information is made available, the statistical approach will become more and more accurate. Indeed, as the author points out, this belief is becoming ever more widespread, and is captured by the following rhetorical question: “who needs theory when you have so much information?” (loc. 3333). According to Silver, though, just the opposite is true. For as more information is kept, the volume of noise increases, thereby making it harder to pick out the signal: “if the quantity of information is increasing by 2.5 quintillion bytes per day, the amount of useful information almost certainly isn’t. Most of it is just noise, and the noise is increasing faster than the signal” (loc. 272). The truth of this has been displayed in economic forecasting repeatedly, as time and time again over the past 30 years “promising-seeming models failed badly at some point or another and were consigned to the dustbin” (loc. 3360).
The importance of applying analysis and insight to statistics and statistical models is a recurring theme throughout the book, for it applies to almost every field where prediction plays a part, from baseball, to political elections, to meteorology, to disease epidemics, to sports betting, to you name it. Even in the economy, while it may be extremely difficult to accurately assess the causal processes at play, there are signs that at least some analysts can produce accurate forecasts through clear and insightful thinking (loc. 3112) (There will be more on this below, in the section on the Great Recession).
4. Biased Thinking
Now, there is one final factor that complicates economic forecasting, but this one has less to do with economic variables, and more to do with the forecasters themselves. Here we enter the realm of human bias.
To begin with, studies have been done comparing the economic predictions made by anonymous forecasters with those made by forecasters who disclosed their identity (loc. 3374). Now, you might think that the forecasters who revealed their identity would have more incentive to be accurate in their predictions, and would therefore perform better. However, this is not what happened. In order to understand why, we must first appreciate that there are different incentives for economic forecasters depending on whether they have a well-established reputation and work at a large firm, or not. As Silver explains, “the less reputation you have, the less you have to lose by taking a big risk when you make a prediction. Even if you know that the forecast is dodgy, it might be rational for you to go after the big score. Conversely, if you have already established a good reputation, you might be reluctant to step too far out of line even when you think the data demands it” (loc. 3382). Both of these approaches tend to draw one away from accuracy in one’s predictions, and indeed the results of the studies reflect this: “although the differences are modest, historically the anonymous participants in the Survey of Professional Forecasters have done slightly better at predicting GDP and unemployment than the reputation-minded Blue Chip panelists” (loc. 3385).
And, of course, bias in economic forecasting does not end with concerns over reputation. Indeed, there are often far more powerful considerations that sway a forecast, including economic and political interests. As Silver points out, there is always the temptation that “you may make the forecast that happens to fit your economic incentives or your political beliefs” (loc. 3363). And it doesn’t seem to matter what your political beliefs are: “it turns out that the economic forecasts produced by the White House, for instance, have historically been among the least accurate of all, regardless of whether it’s a Democrat or a Republican in charge” (loc. 3391).
5. A Case Study in Failed Prediction-Making: The Great Recession
As you might well imagine, breakdowns in prediction not only plague economic forecasting, they can also plague the economy itself. The housing and financial crash of 2008 is a good case in point. Silver outlines four key breakdowns in prediction that contributed to the crash and to the length and depth of the resulting Great Recession.
a. Endless Optimism in the Price of Real Estate
To begin with, there was the housing bubble itself, which was caused by the overoptimistic belief that the price of real estate would continue to keep on rising. This despite clear evidence that meteoric run-ups in housing prices like the one the US experienced before the crash (in conjunction with the record low savings [loc. 567]) had historically led to “results [that] had been uniformly disastrous” (loc. 571).
Of course, it is difficult to listen to history when your pockets are being lined. As Silver observes, “prices had become untethered from supply and demand, as lenders, brokers, and the ratings agencies—all of whom profited in one way or another from every home sale—strove to keep the party going” (loc. 571).
b. The Folly of the Ratings Agencies
And speaking of the ratings agencies, it is to them specifically that Silver attributes the second key breakdown in prediction that led to the crash. In particular, he points to the role that these agencies played in rating the notorious collateralized debt obligations (CDOs), each of which consisted of a package of mortgage debts (loc. 369). As Silver notes, “Standard & Poor’s, for instance, told investors that when it rated a particularly complex type of security known as a collateralized debt obligation… at AAA, there was only a 0.12 percent probability—about 1 chance in 850—that it would fail to pay out over the next five years… In fact, around 28 percent of the AAA-rated CDOs defaulted, according to S&P’s internal figures… This is just about as complete a failure as it is possible to make in a prediction: trillions of dollars in investments that were rated as being almost completely safe instead turned out to be almost completely unsafe” (loc. 379).
For Silver, there are two factors in particular that led to this egregious miscalculation. For one, CDOs were completely new, and the ratings agencies had no track record with them (loc. 388). This being the case, the ratings agencies had to rely on statistical models alone to determine their level of security (loc. 392). Second, the statistical models that the ratings agencies developed to help rate the CDOs assumed that each mortgage debt was completely independent of the others, and therefore that the chance of all of the debts in a package defaulting at the same time would be very low (loc. 520). It is this that allowed the ratings agencies to rate the CDOs so highly (loc. 520). What this assumption failed to account for, though, was the possibility of a major housing crash in which real estate prices would drop across the board, and in which many homeowners would be forced to default on their mortgages all at once—which is precisely what occurred (loc. 520).
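The flaw in the independence assumption can be seen with a toy Monte Carlo simulation. All of the numbers below (a 5% baseline default rate, a 50% default rate in a crash, a 10% chance of a crash) are hypothetical, chosen only to illustrate the mechanism Silver describes, not taken from the agencies' actual models:

```python
import random

random.seed(42)

def pool_default_rate(n_mortgages=100, n_trials=10_000, crash_prob=0.0):
    """Estimate the probability that more than half the pool defaults.

    Each mortgage defaults with probability 5% in normal times. With
    probability crash_prob, a housing crash hits every mortgage at once,
    raising each default probability to 50%. (Hypothetical figures.)
    """
    severe = 0
    for _ in range(n_trials):
        p = 0.50 if random.random() < crash_prob else 0.05
        defaults = sum(random.random() < p for _ in range(n_mortgages))
        if defaults > n_mortgages // 2:
            severe += 1
    return severe / n_trials

# Independent mortgages: mass default is essentially impossible.
print(pool_default_rate(crash_prob=0.0))
# Add a common shock (the housing market itself) and mass default
# becomes a live possibility.
print(pool_default_rate(crash_prob=0.1))
```

The only thing the second run changes is a single shared risk factor, yet the probability of a catastrophic pool-wide default jumps from roughly zero to a few percent, which is the gap between the agencies' models and reality.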
Now, while the possibility of a housing crash had apparently failed to dawn on the ratings agencies, it was not something that had missed the attention of other observers. Indeed, as Silver points out, “discussion of the bubble was remarkably widespread. Instances of the two-word phrase ‘housing bubble’ had appeared in just eight news accounts in 2001 but jumped to 3,447 references by 2005. The housing bubble was discussed about ten times per day in reputable newspapers and periodicals” (loc. 413).
How could the ratings agencies—“whose job it is to measure risk in financial markets” (loc. 417)—have failed to account for the possibility of a housing crash? Well, when you consider how much money these agencies were making off of their ratings (loc. 445-66), the whole conundrum becomes a little easier to unravel. As Silver puts it, “the possibility of a housing bubble, and that it might burst… represented a threat to the ratings agencies’ gravy train. Human beings have an extraordinary capacity to ignore risks that threaten their livelihood, as though this will make them go away” (loc. 466).
c. Leveraged to High Heaven
Compounding the danger in this scenario was the fact that American financial institutions were, at the time, very highly leveraged. As Silver notes, “Lehman Brothers, in 2007, had a leverage ratio of about 33 to 1, meaning that it had about $1 in capital for every $33 in financial positions that it held. This meant that if there was just a 3 to 4 percent decline in the value of its portfolio, Lehman Brothers would have negative equity and would potentially face bankruptcy. Lehman was not alone in being highly levered: the leverage ratio for other major U.S. banks was about 30 and had been increasing steadily in the run up to the financial crisis” (loc. 633). In maintaining this amount of leverage, these financial institutions were essentially banking on the idea that a recession simply was not possible, which was more or less the same idea that the ratings agencies were banking on. History would suggest otherwise. Again, though, financial institutions were making a killing, so it is perhaps understandable why they weren’t so interested in looking in the rear-view mirror.
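Silver's leverage arithmetic is easy to verify. A minimal sketch (the 33:1 ratio is from the quote above; the rest is simple bookkeeping):

```python
def equity_after_decline(leverage_ratio, decline):
    """Return equity remaining per $1 of capital after a portfolio decline.

    At 33:1 leverage, the firm holds $33 of positions per $1 of capital,
    so a loss of decline * 33 comes straight out of that $1 of equity.
    """
    positions = leverage_ratio          # dollars of positions per $1 of capital
    return 1.0 - positions * decline    # remaining equity per $1 of capital

print(round(equity_after_decline(33, 0.02), 2))  # 0.34: a 2% decline eats most of the cushion
print(round(equity_after_decline(33, 0.04), 2))  # -0.32: a 4% decline leaves negative equity
```

A mere 1/33 (about 3 percent) decline is the break-even point, which is exactly why Silver calls a 3 to 4 percent drop potentially fatal at this leverage.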
As you may have surmised, there is something that connects each of the aforementioned breakdowns in prediction. Essentially, the key characters played their predictions as though the world was the way they wanted it to be, rather than the way it really was. According to Silver, “the most calamitous failures of prediction usually have a lot in common” (loc. 360), and the factor of self-deception heads the list (loc. 360).
d. The Government’s Failed Forecast of the Great Recession
The final breakdown in prediction that Silver associates with the Great Recession was the government’s failure to accurately forecast the depth and severity of the downturn. As Silver explains, in formulating the stimulus package of 2009, President Obama’s economic team relied on a model which understood the recession to be of a regular, garden variety sort (loc. 736). On this assumption, unemployment was expected to bounce back within a year—or two at most (loc. 735). However, this was very far from the case. Indeed, that particular recession had been triggered by a financial crash, and history shows that these types of recessions tend to be unusually severe, with “unemployment that persist[s] for four to six years” (loc. 718).
Of course, in the end this may have been a moot point, for while the government predicted that a robust stimulus would help the economy (and might have proposed a more robust one had they correctly predicted how severe the downturn would be), studies are divided on just how effective stimulus spending is. As Silver mentions, “estimates of the multiplier effect—how much each dollar in stimulus spending contributes to growth—vary radically from study to study, with some claiming that $1 in stimulus spending returns as much as $4 in GDP growth and others saying the return is just 60 cents on the dollar” (loc. 764). The effect of stimulus spending, it seems, is just one more aspect of the economy that proves very hard to predict.
e. Not Everyone Got the Great Recession Wrong (A Prelude to Part II)
While the Great Recession fooled many, it didn’t fool everyone. One of the heroes here is Jan Hatzius, chief economist at Goldman Sachs (loc. 3109). Hatzius has a very good track record with his analysis, and in fact foresaw both the recession (in 2007) and its depth and severity (in 2009) (loc. 3112). What’s Hatzius’ secret? Mostly a combination of unbiased thinking, humility, and attention to detail (all of which we will learn more about in a moment).
PART II: TOWARDS BETTER PREDICTION-MAKING
Section 1: Strategies to Help Us with Our Predictions
We have now seen some of the major difficulties involved with prediction (our discussion has focused mainly on the economy, but the lessons here can be applied to virtually any field). Essentially, the difficulties can be reduced to three major themes: bias, overconfidence, and lack of attention to detail. Bias enters the picture when our particular interests (economic, political, reputational, self-image, etc.) color our predictions. Overconfidence enters the scene when we fail to account for the difficulties inherent in the practice of prediction. These inherent difficulties differ from field to field, and are multiplied by things like the number of variables involved, the complexity of the relationships between these variables, and the degree to which these relationships change over time. Finally, lack of attention to detail enters the scene when we fail to recognize the factors that have an especially strong influence on outcomes, and how these factors enter the causal process. One easy way to fall prey to this pitfall (especially in the age of information) is to operate on the assumption that data and statistics can speak for themselves, and do not need to be supplemented with human analysis and insight.
Having diagnosed the problem, we are now ready to learn the solution. For Silver, a big part of the solution involves shifting the way we think to reflect a Bayesian attitude.
6. The Bayesian Approach
Thomas Bayes was an English minister who lived in the 18th century. Though Bayes was elected a Fellow of the Royal Society and did publish during his lifetime, he achieved little influence until after his death; today, his influence is stronger than ever. That influence comes mainly from a paper published posthumously, ‘An Essay towards Solving a Problem in the Doctrine of Chances,’ which “concerned how we formulate probabilistic beliefs about the world when we encounter new data” (loc. 4123).
The paper was intended as a response to the famous philosopher and skeptic David Hume, who argued that we could not truly predict anything with any amount of certainty. This is the case, according to Hume, because all of our information about the world comes from past experience, and just because something happened in the past (even with great frequency) does not mean we can logically deduce that it will happen again in the future. For instance, our knowledge that the sun rises in the morning is derived from the fact that on all previous occasions the sun has risen in the morning. However, because our sample size is necessarily limited, we have no way of knowing whether this is a matter of necessity or simply chance (loc. 4135). This being the case, Hume “argued that since we could not be certain that the sun would rise again, a prediction that it would was inherently no more rational than one that it wouldn’t” (loc. 4135).
Bayes agreed with Hume that we can never predict anything with absolute certainty. However, he disagreed that this effectively made all prediction an irrational process. Instead, Bayes contended that prediction could be made rational by way of treating it as a matter of probability rather than certainty (loc. 4135). For instance, when it comes to the sun rising in the morning, we may never be able to predict with certainty that it will, but the more it happens, the more we are justified in raising the probability that it will: “Gradually, through this purely statistical form of inference, the probability [we] assign to [our] prediction that the sun will rise again tomorrow approaches (although never exactly reaches) 100 percent” (loc. 4128).
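The sunrise inference Silver describes here is commonly formalized as Laplace's "rule of succession" (a detail not spelled out in the summary itself): starting from a uniform prior over the unknown probability, after observing s successes in n trials, the probability of another success is (s + 1)/(n + 2). A quick sketch:

```python
from fractions import Fraction

def rule_of_succession(successes, trials):
    """Laplace's rule of succession: P(next success) = (s + 1) / (n + 2).

    This is the Bayesian posterior prediction starting from a uniform
    prior over the unknown success probability.
    """
    return Fraction(successes + 1, trials + 2)

# After the sun has risen on every one of n consecutive mornings, the
# probability assigned to tomorrow's sunrise creeps toward 1 but never
# reaches it, just as Silver describes.
for n in (1, 10, 1_000, 1_000_000):
    print(n, float(rule_of_succession(n, n)))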
Later, the French mathematician and astronomer Pierre-Simon Laplace decided to bring a little math to this philosophical position (loc. 4141). Laplace deduced that Bayes’ argument could be expressed mathematically, yielding the formula that we now know as Bayes’ theorem.
Here’s how the theorem works. First, you must formulate an initial probability for a particular event occurring. Take the (somewhat provocative) event of your spouse cheating on you. What is the probability that this is happening? This is not something that’s easy to pin down, or be objective about, but, as Silver points out, “studies have found… that about 4 percent of married partners cheat on their spouses in any given year” (loc. 4186), so let’s say you put the probability at 4%. This is what Bayesians call a ‘prior probability’ (or just a ‘prior’) (loc. 4181). Your prior is subject to change when new and salient information enters the scene.
For instance, let’s say you’re a woman, and one day you come home after your husband has returned from a business trip, and you find a pair of mysterious panties in your dresser drawer. This is certainly new and salient information that pertains to your belief about whether or not your husband is cheating on you. So your prior definitely stands to be modified. What is the probability that your husband is cheating on you given this new bit of information? Under Bayes’ theorem, you must do two things (and apply a bit of math) in order to find out.
To begin with, you must “estimate the probability of the underwear’s appearing as a condition of the hypothesis being true—that is, you are being cheated upon” (loc. ). In plain terms, if he is cheating on you, what are the chances that he would be so careless as to allow a pair of his lover’s panties to find its way into your dresser drawer? If he is extremely absentminded you might say 100%. However, if he is normally an extremely careful and fastidious person, you may say only about 50%. Let’s assume the latter and go with 50%.
Second, “you need to estimate the probability of the underwear’s appearing conditional on the hypothesis being false” (loc. 4178). That is, if he is not cheating on you, what are the chances that a pair of mysterious panties would nevertheless show up in your drawer? At first blush, you may think zero. But realistically, there are other things that might explain your find. For instance, “they could be his panties. It could be that his luggage got mixed up. It could be that a platonic female friend of his, whom you trust, stayed over one night. The panties could be a gift to you that he forgot to wrap up” (loc. 4181). Still, you might think that all of these scenarios together are still quite remote, so you peg the overall probability of an explanation like this at 5%.
Next you crunch your numbers according to the following formula: xy / (xy + z(1 – x)), where x is your prior, y is your positive conditional, and z is your negative conditional. When you run the aforementioned percentages through this equation you come up with the conclusion that the chance that your husband is cheating on you “is still fairly low: 29 percent” (loc. 4190). Perhaps a little too low? It is possible that we may have misjudged one of the initial probabilities in our scenario. These probabilities were, after all, very general approximations. Nevertheless, this is not necessarily the case, as Bayes’ theorem will often enough spit out probabilities that seem counterintuitive but which are nonetheless accurate.
A famous example here is one involving breast cancer. About 1.4% of women develop breast cancer when they are in their 40s (loc. 4196). One way to detect breast cancer is with a mammogram, but these tests are not foolproof. Specifically, if a woman has breast cancer, a mammogram will detect it about 75% of the time. If, on the other hand, she does not have breast cancer, a mammogram will still come up positive 10% of the time (loc. 4199). Let’s say a woman in her 40s has a mammogram and it comes up positive. What are the chances that she has breast cancer? The answer is a lot less than what you might think. It’s actually about 10% (a number that Bayes’ theorem accurately comes up with [loc. 4201]).
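Both worked examples can be checked against the formula Silver presents, xy / (xy + z(1 − x)). A minimal sketch:

```python
def posterior(prior, p_given_true, p_given_false):
    """Bayes' theorem in the form Silver uses: x*y / (x*y + z*(1 - x)),
    where x is the prior, y is the probability of the evidence if the
    hypothesis is true, and z is the probability if it is false."""
    x, y, z = prior, p_given_true, p_given_false
    return (x * y) / (x * y + z * (1 - x))

# The underwear example: 4% prior, 50% if cheating, 5% if not -> about 29%
print(round(posterior(0.04, 0.50, 0.05), 2))

# The mammogram example: 1.4% prior, 75% detection rate, 10% false
# positives -> about 10%, despite the positive test
print(round(posterior(0.014, 0.75, 0.10), 2))
```

Note how in the mammogram case the low prior (1.4%) dominates the strong-sounding test result, which is exactly the intuition most people lack.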
If you badly misjudged the probability here, you’re not alone. As Silver explains, “a recent study that polled the statistical literacy of Americans presented this breast cancer example to them—and found that just 3 percent of them came up with the right probability estimate” (loc. 4204). The reason most of us get problems like this wrong is that most of us just aren’t very good at intuitively recognizing how new information interacts with previously established probabilities to yield new probabilities. Our problem is that we tend to “focus on the newest or most immediately available information, and the bigger picture gets lost” (loc. 4210). Applying Bayes’ theorem prevents us from falling prey to this tendency, which is one reason why approaching life through the lens of the theorem can be helpful. However, the benefits do not stop here.
Also of value is the fact that the theorem requires us to think in terms of probabilities rather than certainties. This is beneficial because we are, after all, fallible creatures, who can never be absolutely certain about the future, and acknowledging this is very important in placing a reality check on our predictions, and preventing us from becoming overconfident in them (loc. 4240).
Next, formulating priors forces us to confront and evaluate the particular reasons why we assign a certain probability to any given event. And this forces us to confront the biases behind our reasoning, which can help us overcome these biases (loc. 4415). A particularly effective strategy here is to treat your prior as “the odds at which you are willing to make a bet” (loc. 4365). Sometimes putting something on the line can help us identify whether we are really being entirely honest with ourselves, or whether some biases are slipping into our thinking (Silver identifies gamblers—those who think in terms of odds, and are willing to put their money where their mouth is—as epitomizing the Bayesian approach).
Of course, there are no guarantees that our forming a prior will allow us to eradicate our biases, but the Bayesian approach doesn’t depend on our being entirely accurate right off the bat. Rather, the approach is designed to get us closer and closer to the truth as new evidence filters in. This being the case, no matter what our biases are coming into a situation, shifting our prior as new evidence comes in should allow us to whittle away our biases as we proceed (loc. 4426). For Silver, this is essentially the same way that science progresses, and so we may think of the Bayesian approach as adding a little science to our everyday lives.
In addition to helping us overcome our biases and our tendency towards overconfidence, the Bayesian approach can also help us focus in on important details, for it puts us on the lookout for information that is pertinent to updating our prior probabilities. Nevertheless, we cannot rely on Bayes’ theorem alone to allow us to identify the signal in the noise. For this, a few other strategies are needed.
We have, in fact, already seen one of these strategies. And this is that of avoiding the temptation to think that statistical patterns alone can allow us to identify the key variables that affect outcomes. Again, though, this strategy will only take us so far. It may help in preventing us from getting lost in the noise, but it does little to help us find the signal. For this, something a little extra is needed.
7. Hedgehogs and Foxes: Be Foxy
In order to get at this little something extra, Silver points to the work of the psychology and political science professor Philip Tetlock. As Silver explains, “beginning in 1987, Tetlock started collecting predictions from a broad array of experts in academia and government on a variety of topics in domestic politics, economics, and international relations” (loc. 905). While Tetlock found that, on aggregate, the predictions of the experts were quite poor, he also found that some experts did better than others. When Tetlock looked into the cognitive styles and personality traits of the various experts he found that a clear pattern (we might call it a signal) emerged. Specifically, the more accurate predictors tended to have a particular set of cognitive strategies and personality traits that differed from the less accurate ones.
Tetlock organized his subjects along a spectrum with what he called ‘foxes’ on one end, and ‘hedgehogs’ on the other. The difference between foxes and hedgehogs can be summed up in the following way: “‘the fox knows many little things, but the hedgehog knows one big thing’” (loc. 949). More specifically, “hedgehogs are type A personalities who believe in Big Ideas—in governing principles about the world that behave as though they were physical laws and undergird virtually every interaction in society. Think Karl Marx and class struggle, or Sigmund Freud and the unconscious. Or Malcolm Gladwell and the ‘tipping point’” (loc. 955). Foxes, by contrast, “are scrappy creatures who believe in a plethora of little ideas and in taking a multitude of approaches toward a problem. They tend to be more tolerant of nuance, uncertainty, complexity, and dissenting opinion. If hedgehogs are hunters, always looking out for the big kill, then foxes are gatherers” (loc. 958).
While hedgehogs tend to be bold and brash, and express singular confidence in their predictions, foxes are much more cautious, as they consider numerous perspectives, carefully weighing their pros and cons. This being the case, foxes can often seem dithering and unsure of themselves (loc. 1006). As you might expect, then, hedgehogs make for much better television than foxes; and indeed, Tetlock found that the former garnered a lot more media attention than the latter (loc. 991-1006). However, when it came to the quality of their predictions, the hedgehogs were well outperformed by their foxier counterparts. As Silver notes, Tetlock found that “whereas the hedgehogs’ forecasts were barely any better than random chance, the foxes’ demonstrated predictive skill” (loc. 962).
Aside from being less susceptible to black and white thinking, and also overconfidence, the foxes had one other quality that allowed them to make better predictions than the hedgehogs. This was the fact that they tended to be less ideological in outlook, and to rely more on empirical evidence to help shape their opinions (loc. 980). Again, this harkens back to the fact that bias (here in the form of ideology) tends to interfere with the activity of formulating accurate predictions.
Having learned a few strategies that can help us improve our predictions, we are now in a position to see how we might be able to use these lessons in a few different fields (and also learn a few more tricks along the way).
Section 2: Applying the Prediction Strategies to Different Fields
8. Predicting the Stock Market
One aspect of the economy that is particularly difficult to predict is the stock market, at least in the short term; over the long term, the stock market shows fairly consistent returns. The goal of most traders, though, is not just to ride the stock market up over time, but to beat it in both the short term and the long. So, how are they doing? According to the studies, not very well. As Silver explains, when this issue “has been studied over the long term… the aggregate forecast has often beaten even the very best individual forecast. A study of the Blue Chip Economic Indicators survey, for instance, found that the aggregate forecast was better over a multiyear period than the forecasts issued by any one of the seventy economists that made up the panel” (loc. 5671).
Eugene Fama, an academic who has spent much of his career formulating strategies designed to beat the stock market, and studying the strategies of others, has found much the same. Indeed, while Fama has consistently failed to come up with a strategy that beats the market, he has found that no one else has really been able to do so either. To take just one example, “studying the returns of dozens of mutual funds in a ten-year period from 1950 to 1960, Fama found that funds that performed well in one year were no more likely to beat their competition the next time around” (loc. 5713). And the very same pattern holds when it comes to hedge funds (loc. 5740).
The fact is that the stock market is (at most times) an extremely efficient system. And it is so because the price of stocks is set by the collective actions of an enormous number of individuals, where most of the important players are very bright, very well-informed, and highly motivated. As Silver explains, “in the stock market, the competition is fierce. The average trader, particularly in today’s market, in which trading is dominated by institutional investors, is someone who will have ample credentials, a high IQ, and a fair amount of experience” (loc. 6135). Henry Blodget, a former financial investor (now under a lifetime ban from stock trading), and currently CEO of the web-based conglomerate Business Insider puts it this way: “‘Everybody thinks they have this supersmart mutual fund manager… He went to Harvard and has been doing it for twenty-five years. How can he not be smart enough to beat the market? The answer is: Because there are nine million of him and they all have a fifty-million-dollar budget and computers that are collocated in the New York Stock Exchange. How can you possibly beat that?’” (loc. 6139).
Well, one way you can beat that is if you happen to have insider information, or you can otherwise manipulate the market, which one group of people in particular is especially well-positioned to do, and that is politicians. As Silver explains, “one particularly disturbing example is that members of Congress, who often gain access to inside information about a company while they are lobbied and who also have some ability to influence the fate of companies through legislation, return a profit on their investments that beats market averages by 5 to 10 percent per year, a remarkable rate that would make even Bernie Madoff blush” (loc. 5771).
In any event, while the market may be extremely efficient at most times, there is one occasion when it is not, and that is when a bubble forms. To begin with, while there may be no foolproof way to detect a bubble, there are signals that do appear to have some predictive value. First, a very sharp increase in the stock market in itself often indicates that a bubble is forming. As Silver explains, “simply looking at periods when the stock market has increased at a rate much faster than its historical average can give you some inkling of a bubble. Of the eight times in which the S&P 500 increased in value by twice its long-term average over a five-year period, five cases were followed by a severe and notorious crash, such as the Great Depression, the dot-com bust, or the Black Monday crash of 1987” (loc. 5859).
In addition to an over-exuberant market, there is another indicator that is even more accurate in predicting a bubble, and that is the price-to-earnings ratio (or P/E ratio) of company shares. The P/E ratio refers to the price of a share in a company in relation to the company’s annual profit. As Silver explains, “this calculation… has gravitated toward a value of about 15 over the long run, meaning that the market price per share is generally about fifteen times larger than a company’s annual profits” (loc. 5867).
If a particular company (or entire industry) is growing quickly, and its prospects for continued growth look good, this may justify a P/E ratio that is higher than 15 (loc. 5871). However, emerging companies and industries should, in theory, be balanced out by dying ones, so “the market P/E ratio should be fairly constant over time” (loc. 5874). This has not always been the case, though. Indeed, as the economist Robert Shiller has found, “the P/E ratio for all companies in the S&P 500 [has] ranged everywhere from about 5 (in 1921) to 44 (… in 2000)” (loc. 5874). As you might expect, investors have done much better when buying at a time when the P/E ratio is below 15, and much worse when buying above this ratio (at least in the long term [loc. 5880]). For example, “when the P/E ratio is 10, meaning that stocks are cheap compared with earnings, they have historically produced a real return of about 9 percent per year, meaning that a $10,000 investment would be worth $22,000 ten years later. When the P/E ratio is 25, on the other hand, a $10,000 investment in the stock market has historically been worth just $12,000 ten years later. And when they are very high, above about 30—as they were in 1929 or 2000—the expected return has been negative” (loc. 5880). In other words, a P/E ratio this high indicates that a bubble is forming, and history suggests that the bubble will eventually pop.
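The compounding behind these historical figures is straightforward to reproduce. A quick sketch (the 9 percent return and the $10,000/$12,000/$22,000 figures are from the quote above; the fixed annual compounding convention is an assumption):

```python
def value_after(initial, annual_return, years):
    """Compound an initial investment at a fixed annual real return."""
    return initial * (1 + annual_return) ** years

# Buying cheap (P/E ~ 10): about 9% real return per year.
print(round(value_after(10_000, 0.09, 10)))  # ~ $23,700, in line with the ~$22,000 Silver cites

# Buying dear (P/E ~ 25): $10,000 -> $12,000 over ten years implies a
# real return of only about 1.8% per year.
implied = (12_000 / 10_000) ** (1 / 10) - 1
print(round(implied * 100, 1))  # ~ 1.8 percent per year
```

The spread between roughly 9 percent and roughly 2 percent per year, compounded over a decade, is what makes the starting P/E ratio such a consequential signal.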
Wait a second, though, if markets are supposed to be so efficient, how do bubbles form and inflate in the first place? The answer to this is really quite interesting, and very instructive. To begin with, it’s no secret that emerging and otherwise hot industries tend to attract a great deal of attention and interest. Most people show a marked tendency towards wanting to buy when the market is hot. As Silver explains, “Americans tend to think it’s a good time to buy when P/E ratios are inflated and stocks overpriced. The highest figure that Gallup ever recorded in their survey was in January 2000, when a record high of 67 percent of Americans thought it was a good time to invest. Just two months later, the NASDAQ and other stock indices began to crash. Conversely, only 26 percent of Americans thought it was a good time to buy stocks in February 1990—but the S&P 500 almost quadrupled in value over the next ten years” (loc. 6146).
Now, savvy investors recognize this, of course, and so they are particularly well placed to take advantage of over-exuberance when it does occur by short-selling the market. The thing is, though, that it is never quite certain when a bubble is going to burst. In the short term, the odds are actually quite small. As Silver points out, “historically, even when the P/E ratio in the market has been above 30—meaning that stock valuations are twice as high as they are ordinarily—the odds of a crash over the next ninety days have been only about 4 percent” (loc. 5950). The further out in time you go, the higher the chances that a crash will occur, so the investor knows that if he keeps his money in the game, eventually he’s going to get torched (loc. 5950). But the way Wall Street works, it may just be better for him to keep right on buying until the whole house of cards comes tumbling down (loc. 5950).
To begin with, it’s important to note that most big-time investors aren’t playing with their own money (loc. 5990); and, as Silver points out, “when it’s not your money on the line but someone else’s, your incentives may change” (loc. 5990). Indeed! Now, if the market is overheated and an investor decides to sell, the best case scenario is that the market crashes, because then he’ll look like a genius. With this, “there’s a chance that he’ll get a significantly better job—as a partner at a hedge fund, for instance” (loc. 5960). Still, though, this is no guarantee, as a bottomed-out market means contraction all around, and “even geniuses aren’t always in demand after the market crashes and capital is tight” (loc. 5960). The worst case scenario is that the market continues to rise, since the investor will now look like an idiot: “not only will the trader have significantly underperformed his peers—he’ll have done so after having stuck his neck out and screaming that they were fools. It is extremely likely that he will be fired. And he will not be well-liked, so his prospects for future employment will be dim” (loc. 5973). What’s more, even if the market does eventually crash, he may be long forgotten by this time, or remembered only as a contrarian brat.
Now consider what happens if the investor keeps on buying. The best case scenario is that the market continues to rise: “it’s business as usual. Everyone is happy when the stock market makes money. The trader gets a six-figure bonus and uses it to buy a new Lexus” (loc. 5957). The worst case scenario is that the market crashes. This is bad, but just how bad? It’s true, “he’s lost his firm a lot of money and there will be no big bonus and no new Lexus. But since he’s stayed with the herd, most of his colleagues will have made the same mistake. Following the last three big crashes on Wall Street, employment at securities firms decreased by about 20 percent. That means there is an 80 percent chance the trader keeps his job and comes out okay; the Lexus can wait until the next bull market” (loc. 5970).
When you compare the best and worst case scenarios in both of these hypothetical situations, it begins to look a little bit like a no-brainer. As Silver puts it, “even if these firms know that the party is coming to an end, it may nevertheless be in their best interest to prolong it for as long as they can” (loc. 5983). Prepare yourself accordingly.
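The asymmetry Silver describes can be made concrete with a toy expected-payoff comparison. All payoff values and the crash probability below are purely illustrative inventions; only the 80 percent job-retention figure after a crash comes from Silver's account:

```python
P_CRASH = 0.2  # hypothetical chance the market crashes this year

def expected_payoff(strategy):
    """Expected career payoff (arbitrary units) for a trader in a bubble."""
    if strategy == "sell":
        genius, pariah = 2.0, -3.0  # right and celebrated vs. wrong and fired
        return P_CRASH * genius + (1 - P_CRASH) * pariah
    else:  # keep buying with the herd
        bonus = 1.0                       # business as usual: bonus and a Lexus
        crash = 0.8 * 0.0 + 0.2 * (-2.0)  # 80% keep the job, 20% laid off
        return P_CRASH * crash + (1 - P_CRASH) * bonus

print(expected_payoff("sell"))  # negative: contrarianism doesn't pay
print(expected_payoff("buy"))   # positive: herding is individually rational
```

The exact numbers don't matter; the point is structural. Because the downside of selling early is severe and personal while the downside of riding the bubble is shared with the herd, buying dominates selling over a wide range of assumptions, and the bubble keeps inflating.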
We will now turn our attention away from the economic realm, and towards the social and political, and end by way of discussing two hot-button issues here wherein prediction plays a key role: climate change and terrorism.
9. Predicting Climate Change
By some estimates, climate change is the single biggest issue of our time. Should the climate shift as much as some believe, the results could be catastrophic. Nevertheless, the topic remains very controversial, with some going so far as to deny that the phenomenon even exists. Where does the truth lie in all of this disagreement, and what can we reasonably expect out of our climate in the future?
To begin with, it must be noted that climate statistics are themselves very noisy. From what scientists can tell, global temperatures do tend to cycle in deep trends over time. And, as Silver points out, “these cycles long predate the dawn of industrial civilization” (loc. 6271). But in the short term, climate statistics from year to year, and even decade to decade, can fluctuate to a high degree (loc. 6271). This makes models that use past statistics to forecast future change somewhat unreliable. What’s more, even models that take many factors that affect climate into account (such as “sulfur and ENSO [El Niño and La Niña cycles] and sunspots and everything else”) are prone to errors. For example, the climate prediction made by the Intergovernmental Panel on Climate Change (IPCC) in 1990 was based on one such complicated model. The prediction that this model yielded “at the high end of the range was a catastrophic temperature increase of 5°C over the course of the next one hundred years. At the low end was a more modest increase of 2°C per century, with a 3°C increase representing the most likely case” (loc. 6687). In fact, this prediction missed the trend observed to this point by a good margin. As Silver explains, “temperatures increased by an average of 0.015°C per year from the time the IPCC forecast was issued in 1990 through 2011, or at a rate of 1.5°C per century. This is about half the IPCC’s most likely case, of 3°C warming per century, and also slightly less than the low end of their range at 2°C” (loc. 6691).
Given the problems inherent in climate forecasting, and also the track record of certain models, many scientists are skeptical of just how accurate these models can be (loc. 6469-94). For example, “a survey of climate scientists conducted in 2008 found that almost all (94 percent) were agreed that climate change is occurring now, and 84 percent were persuaded that it was the result of human activity. But there was much less agreement about the accuracy of the climate computer models. The scientists held mixed views about the ability of these models to predict global temperatures, and generally skeptical ones about their capacity to model other potential effects of climate change. Just 19 percent, for instance, thought they did a good job of modeling what sea-rise levels will look like fifty years hence” (loc. 6484).
Nevertheless, amidst all of the noise in climate statistics there is one clear signal cutting through: the level of CO2 in the atmosphere. Indeed, models that look only at temperature trends in conjunction with current and projected levels of CO2 have done very well, and much better than more complicated simulation models. For example, as Silver explains, “if you had placed the temperature record from 1850 through 1989 into a simple linear regression equation, along with the level of CO2 as measured in Antarctic ice cores and at the Mauna Loa Observatory in Hawaii, it would have predicted a global temperature increase at the rate of 1.5°C per century through 1990, exactly in line with the actual figure” (loc. 6766).
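A minimal sketch of the simple-regression approach Silver describes might look like the following: fit a straight line relating atmospheric CO2 to temperature, then use it to project warming. Note that the data points below are illustrative stand-ins, not the actual ice-core or Mauna Loa record.

```python
# Sketch of the "simple linear regression" climate forecast: regress
# temperature anomalies on CO2 concentration and extrapolate.
# The numbers are made up for illustration.

def least_squares(xs, ys):
    """Ordinary least-squares fit of y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return a, b

# Illustrative CO2 concentrations (ppm) and temperature anomalies (degrees C)
co2  = [290, 295, 300, 310, 325, 340, 355]
temp = [-0.30, -0.25, -0.20, -0.10, 0.05, 0.20, 0.35]

a, b = least_squares(co2, temp)
projected = a + b * 400  # anomaly implied by a future 400 ppm level

print(f"slope: {b:.4f} degrees C per ppm of CO2")
print(f"implied anomaly at 400 ppm: {projected:.2f} degrees C")
```

The appeal of this approach, on Silver’s telling, is precisely its simplicity: one strong causal driver, one fitted line, and no accumulation of errors from dozens of interacting simulated subsystems.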
The climate scientist James Hansen has been involved in producing climate forecasts for decades, though his methodology has changed over the years. In 1981 Hansen produced a forecast that relied almost exclusively on temperature trends and levels of CO2 in the atmosphere, and, as Silver notes, this model “did quite a bit better at predicting current temperatures than his 1988 forecast, which relied on simulated models of the climate” (loc. 6777).
As convincing as the correlation between CO2 levels and global temperatures is, the case becomes a whole lot more convincing when we recognize that there is a clear and well-established causal connection from the former to the latter. The causal connection is popularly known as the greenhouse effect, and it consists in “the process by which certain atmospheric gases—principally water vapor, carbon dioxide (CO2), methane, and ozone—absorb solar energy that has been reflected from the earth’s surface” (loc. 6283), preventing it from leaving the atmosphere, and thus resulting in higher overall global temperatures (loc. 6287).
Given this clear causal connection, we can predict with great confidence that as CO2 (and other greenhouse gas) levels rise, global temperatures will also rise. As Silver explains, “the prediction relies on relatively simple chemical reactions that were identified in laboratory experiments many years ago. The greenhouse effect was first proposed by the French physicist Joseph Fourier in 1824 and is usually regarded as having been proved by the Irish physicist John Tyndall in 1859” (loc. 6318).
So, the greenhouse effect is well established, which means that as CO2 (and other greenhouse gas) levels continue to rise, temperatures will also continue to rise. As Silver points out, “the impacts of this are uncertain, but are weighted toward unfavorable outcomes” (loc. 6899). Still, it is one thing to establish this, and quite another to get governments around the world to agree to do something about it. This is because getting any one country to commit to act often requires near-unanimous international agreement about precisely what to do, and such agreement seems almost impossible to achieve (loc. 6384, 6905). This is especially the case given that so many countries still stand to gain so much economically from the industries that are doing the most damage (loc. 6377-81).
Given that this is the case, it may very well be that the only practicable solution will be to find a way to remove carbon from the atmosphere (loc. 6902).
10. Predicting Terrorism
When you look back at some of the events leading up to the attacks of September 11, 2001, the attacks themselves can seem almost inevitable. As Silver notes, “there had been at least a dozen warnings about the potential for aircraft to be used as weapons, including a 1994 threat by Algerian terrorists to crash a hijacked jet into the Eiffel Tower, and a 1998 plot by a group linked to Al Qaeda to crash an explosives-laden airplane into the World Trade Center; The World Trade Center had been targeted by terrorists before…; Al Qaeda was known to be an exceptionally dangerous and inventive terrorist organization…; National Security Adviser Condoleezza Rice had been warned in July 2001 about heightened Al Qaeda activity—and that the group was shifting its focus from foreign targets to the United States itself…; An Islamic fundamentalist named Zacarias Moussaoui had been arrested on August 16, 2001, less than a month before the attacks, after an instructor at a flight training school in Minnesota reported he was behaving suspiciously. Moussaoui, despite having barely more than 50 hours of training and having never flown solo, had sought training in a Boeing 747 simulator, an unusual request for someone who was nowhere near obtaining his pilot’s license” (loc. 7098).
Again, all of these events can make the 9/11 attacks seem almost inevitable. And indeed, at least some are convinced that these signs are so clear that the U.S. government must have known about the attacks beforehand; and therefore, must somehow have been involved in them (loc. 7009). The fact of the matter is, though, that events like this often seem predictable after the fact because we know exactly what to look for, whereas beforehand the signals are lost in a sea of noise (loc. 7017). As Silver notes, “our national security agencies have to sort through literally tens of thousands or even hundreds of thousands of potential warnings to find useful nuggets of information. Most of them amount to nothing” (loc. 7102).
Still, it is worth asking whether the U.S. government might have been better prepared for the attacks, and to what degree we can prepare ourselves for future ones. With regard to the former question, the answer would appear to be a resounding ‘yes’. To begin with, it was later determined (by the 9/11 Commission Report) that a major reason why the attacks came off as successfully as they did was that the American government had lacked the imagination to consider an attack of the kind and scope of that which occurred (loc. 7109). Indeed, as Silver points out, “the North American Aerospace Defense Command (NORAD) had actually proposed running a war game in which a hijacked airliner crashed into the Pentagon. But the idea was dismissed as being ‘too unrealistic’” (loc. 7113).
Now, it is one thing to fail to imagine something that is entirely without precedent, or without evidence to indicate that it may occur, but neither of these can be said of the September 11 attacks. To begin with, while the FAA recognized the possibility of a terrorist hijacking an airplane, its protocols centered on a scenario whereby there would be a prolonged and tense standoff (loc. 7116). Meanwhile, suicide attacks by this time already had a long and storied history (loc. 7119), and they “had become much more common in the years immediately preceding September 11; one database of terrorist incidents documented thirty-nine of them in 2000 alone… up from thirty-one in the 1980s” (loc. 7124). What’s more, as mentioned above, plots to use airplanes to crash into buildings had already been uncovered.
As for the size of the September 11 attacks, it is true that a terrorist attack of this size had never been successfully executed, but the statistics indicated that it was a clear possibility. This is because terrorist attacks have actually been found to follow a distinct statistical pattern known as a power law, which appears as a straight line when plotted on a double-logarithmic scale (loc. 7227). When it comes to terrorist attacks, the power law draws a relationship between the size of an attack and its relative frequency. Given that this is the case, the size and frequency of previous attacks can be used to predict how often a terrorist attack of a particular size will occur in the future. When it comes to an attack of the size of 9/11, the power law predicted (even before the attack ever occurred) that we could expect to see one “about once every eighty years in a NATO country, or roughly once in our lifetimes” (loc. 7243). In other words, “what this data does suggest is that an attack on the scale of September 11 should not have been unimaginable” (loc. 7247).
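A hypothetical sketch of this kind of power-law extrapolation, in the spirit of the Clauset analysis Silver draws on, might run as follows. If attacks killing at least x people occur at an annual rate proportional to x to the power of minus alpha, then one known baseline point lets us estimate the frequency of far larger attacks. The alpha and baseline rate here are illustrative assumptions, not Clauset’s fitted values.

```python
# Illustrative power-law extrapolation: estimate how often attacks of a
# given size occur, from a smaller-attack baseline. alpha, base_rate,
# and base_deaths are made-up values for demonstration only.

def annual_rate(deaths, alpha, base_rate, base_deaths):
    """Expected attacks per year with at least `deaths` fatalities,
    extrapolated from a baseline point on the power law."""
    return base_rate * (deaths / base_deaths) ** (-alpha)

# Suppose (illustratively) attacks killing >= 10 people occur ~5 times a year.
rate = annual_rate(3000, alpha=1.0, base_rate=5.0, base_deaths=10)
print(f"one attack of 9/11 scale expected every {1 / rate:.0f} years")
```

On a double-logarithmic plot this relationship is a straight line with slope minus alpha, which is why the distribution is described in those terms: the rare, enormous attacks sit on the same line as the common, small ones.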
But if the data suggests that a terrorist attack that kills thousands of people should not be considered unimaginable, what of attacks on an even larger scale? Frighteningly, the power law distribution of terrorist attacks “gives us reason to believe that attacks that might kill tens of thousands or hundreds of thousands of people are a possibility to contemplate as well” (loc. 7251). And, of course, we can well imagine the means through which an attack of this size would be accomplished: weapons of mass destruction, particularly nuclear and biological weapons (loc. 7253, 7335).
Nevertheless, the good news is that terrorist attacks can be curbed through human intervention, and there is reason to believe that prudent intervention can buck the power-law trend. The evidence comes out of Israel, where terrorism is much more a part of everyday life than it is in America. As Silver explains, Israel takes a unique approach to terrorism whereby prevention is aimed primarily at large-scale attacks: “small-scale terrorism is treated more like crime than an existential threat. What Israel certainly does not tolerate is the potential for large-scale terrorism (as might be made more likely, for instance, by one of their neighbors acquiring weapons of mass destruction)” (loc. 7429).
And the evidence that the approach is working is clear. As Silver explains, “Israel is the one country that has been able to bend Clauset’s curve. If we plot the fatality tolls from terrorist incidents in Israel using the power-law method, we find that there have been significantly fewer large-scale terror attacks than the power-law would predict; no incident since 1979 has killed more than two hundred people” (loc. 7432). This is extremely significant, for it suggests that the power law is not cast in stone, and that “our strategic choices do make some difference” (loc. 7432).
Perhaps this should not come as so much of a surprise, though. The fact that there have been no significant terrorist attacks on American soil since 9/11 (and obviously not for lack of trying) goes to show that prevention does work (loc. 7475). Here’s hoping that this trend continues into the future.
The future may be fundamentally uncertain, but by applying a few simple strategies, we can substantially improve our powers of predicting what it is likely to look like. Specifically, by paying closer attention to our biases, acknowledging our limitations, and focusing on identifying the key variables that influence outcomes, some of the fog that obscures the future can be lifted.
The Book Reporter