If you want to know who's going to be our next president, ask Nate Silver. Back in 2008, the 30-year-old statistician who runs the FiveThirtyEight blog at The New York Times predicted the winner in 49 of 50 states—and all 35 Senate races. But politics is just one manifestation of Silver's prophetic prowess. He developed an innovative baseball forecasting system so accurate that the folks behind the sport's stats bible, Baseball Prospectus, bought it from him in 2003. He has also applied his talents to professional poker (his winnings peaked at $400,000). In his new book, The Signal and the Noise: Why So Many Predictions Fail—But Some Don't, Silver explores our attempts at forecasting stocks, storms, sports, and anything else not set in stone.
WIRED: What makes for a good forecaster?
Nate Silver: You have to be comfortable with probability and uncertainty. When you say, "We know for sure," that kind of prediction has a really poor track record, even though it makes headlines. We have a culture where the guy who goes on CNBC and is sure that we're headed into a recession is going to get noticed. But certainty is actually correlated with making worse forecasts.
WIRED: Why do so many predictions fail?
Silver: One issue is a set of what I call out-of-sample problems, which means the event you're concerned about didn't even exist in your data set. In a 2007 forecast, the Federal Reserve didn't think there was much chance of a recession, and one started a month later. They built their model on an analysis of data from 1986 to 2006, when there were just two very minor recessions.
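That out-of-sample trap is easy to reproduce. The sketch below is my own illustration with invented growth figures, not the Fed's data or model: a frequency estimate built only on a calm stretch of years puts the probability of a downturn at roughly zero, even when the fuller history says otherwise.

```python
# A hedged sketch with invented numbers, not the Fed's data or model: a risk
# estimate built only on calm years cannot see a regime it never observed.
import numpy as np

rng = np.random.default_rng(1)
calm_years = rng.normal(3.0, 1.0, size=20)     # stand-in for 1986-2006 growth rates
crisis_years = rng.normal(-1.5, 2.0, size=5)   # the regime missing from the sample
full_history = np.concatenate([calm_years, crisis_years])

def recession_risk(growth):
    # Naive frequency estimate: share of years with negative growth.
    return np.mean(growth < 0.0)

print(f"risk estimated from the calm window only: {recession_risk(calm_years):.0%}")
print(f"risk over the full history:               {recession_risk(full_history):.0%}")
```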
WIRED: Can prediction models be too complex?
Silver: There's a problem called overfitting. You find patterns, but the patterns don't actually have any predictive power. They're just descriptive, and often they're descriptive of the noise. There's so much data out there, but there's a finite amount of truth. Take earthquake prediction. People try to apply complex equations with eight different variables to a very noisy data set. It creates the equivalent of a program that thinks it can predict coin flips. Overfitting also comes up in economic and political forecasting. The government publishes 45,000 economic statistics, and you can always find something that happens to coincide with a couple of recessions. But that doesn't make those things predictors.
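The coin-flip analogy can be made concrete. The sketch below is my own illustration, not an example from the book: fit a very flexible curve to pure noise and it will describe the sample closely while predicting nothing about fresh draws from the same process.

```python
# A minimal sketch (mine, not Silver's): fit pure noise with a very flexible
# model and it "finds" a pattern that has no predictive power at all.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 20)
y = rng.normal(size=20)                                # pure noise: no signal to find

overfit = np.polynomial.Polynomial.fit(x, y, deg=12)   # flexible model
simple = np.polynomial.Polynomial.fit(x, y, deg=1)     # basic model

y_new = rng.normal(size=20)                            # fresh draws from the same process

def mse(model, targets):
    # Mean squared error of the fitted curve against a set of targets.
    return np.mean((model(x) - targets) ** 2)

print(f"degree-12 fit, in-sample error:     {mse(overfit, y):.2f}")      # looks impressive
print(f"degree-12 fit, out-of-sample error: {mse(overfit, y_new):.2f}")  # typically much worse
print(f"degree-1 fit, out-of-sample error:  {mse(simple, y_new):.2f}")
```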
WIRED: How do we avoid spinning a narrative out of noise?
Silver: If you're prone to overreact to new data, you should stick to basic models. Without a good framework for weighing information, having more can backfire.
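One minimal version of such a framework, offered to illustrate the point rather than to describe Silver's own method: fold each new poll into a running average so that no single data point can drag the estimate around. The poll numbers below are invented.

```python
# A hedged sketch of a basic model for weighing new information (invented poll
# numbers, not FiveThirtyEight's method): fold each poll into a running mean
# instead of treating the latest number as the whole story.
def update(estimate, n_seen, new_value):
    # The new observation gets weight 1 / (n_seen + 1); the prior keeps the rest.
    return (estimate * n_seen + new_value) / (n_seen + 1)

polls = [52.0, 48.0, 51.0, 44.0, 50.0]   # hypothetical poll results
estimate, n_seen = 50.0, 1               # start from a prior of 50 with unit weight
for poll in polls:
    estimate = update(estimate, n_seen, poll)
    n_seen += 1
    print(f"after a poll at {poll:.0f}: estimate {estimate:.1f}")
# The outlier at 44 only nudges the running estimate; an overreactive
# forecaster would have chased it.
```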
WIRED: What's the best way to refine a prediction?
Silver: Run a lot of natural experiments instead of trying to find the perfect idea. There's no substitute for testing your ideas out in an environment where you don't know what's going to happen. For instance, Google can predict how useful changes to its search algorithm will be and then test those hypotheses on real customers.
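Here is a hedged sketch of that kind of test, with invented click-through rates rather than anything from Google: randomize users between the old and new ranking, then compare what actually happened instead of trusting the prediction alone.

```python
# Hedged sketch of a randomized test (invented rates, not Google's data):
# split traffic between the old and new algorithm and compare real outcomes.
import random

random.seed(42)
TRUE_RATES = {"old": 0.100, "new": 0.104}    # hypothetical click-through rates

clicks = {"old": 0, "new": 0}
views = {"old": 0, "new": 0}
for _ in range(200_000):
    variant = random.choice(["old", "new"])  # the randomization step
    views[variant] += 1
    if random.random() < TRUE_RATES[variant]:
        clicks[variant] += 1

for variant in ("old", "new"):
    print(f"{variant}: {clicks[variant] / views[variant]:.3%} click-through")
```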
WIRED: You write that poker refines probabilistic judgment. Should more people play?
Silver: Yeah, probably. Of course, now some hedge fund will enter its employees in the World Series of Poker. In poker you learn to deal with luck, with uncertainty, and with accounting for new information. And if you're actually playing for money, there are consequences to the decisions. I can't think of any substitute that will train you quite as quickly, or that is as accessible.
WIRED: Where's your money on the 2012 election—as of late July?
Silver: Our model has Obama as a 65-to-35 favorite. Those numbers can change, but they have been pretty steady so far.