Ah, I see that Nerdfight has flared up again (be sure to look at comments on the first of those). I'll just mention that I think Nate Silver really overstates his case about the failures of the objective indicator models; I make the case for that here (see points 5-10). My key point? If you look at the three strongest prediction systems from the group that Silver collects, they do quite well indeed. And there's a pretty good argument, in my view, that that's not just cherry-picking the best ones.
At any rate: it's a Friday afternoon, so I'd rather have fun.
Here's what we can take a first crack at thanks to the prediction models: who did best beyond the fundamentals? If fundamentals-based predictors say one candidate should win by 5 points and he wins by only one, then we can suppose (although many caveats would have to apply) that it's the things beyond the fundamentals -- candidates, campaigns, events that matter but for whatever reason are not in the models -- that hurt that candidate and helped his opponent. I'm not even going to bother spelling out the (necessary!) caveats here; I just want to have a little fun.
So: Silver collected predictor systems from 1992 through 2008. I'm only going to look at three systems, however, and not all of them show up for each election. The ones that have performed well overall are Abramowitz, Wlezien & Erikson, and Hibbs. I'll take a straight average of any of their forecasts that show up, and compare that to the election results, specifically to the incumbent party's percentage of the two-party vote (all data from Silver).
1992: Predictors average 47.6; Bush gets 46.6. Clinton does a bit better than expected.
1996: Predictors 55.4; Clinton gets 54.7. This time, Clinton does a bit worse than expected.
2000: Predictors 54.4; Gore gets only 50.3. A big miss; Bush overachieves a lot (I've always thought the big thing was late-breaking economic weakness, some of which showed up only in after-the-election revisions...but that could be totally wrong).
2004: Predictors 52.9; Bush actually comes in at 51.2. I've done this one before: Kerry did better than the fundamentals models predicted (and, for what it's worth, the models I'm looking at here lean far more towards Kerry than the ones I'm not using).
2008: Predictors 47.3; McCain gets 46.3. Obama does a little better than expected.
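The comparison above is simple enough to sketch in a few lines of code. This is my own illustration, not anything from Silver or the modelers: it takes the predictor averages and actual incumbent-party vote shares quoted in the post and prints each year's gap.

```python
# year: (predictor average, actual incumbent-party share of two-party vote)
# Figures are those quoted above, originally drawn from Silver's collection.
results = {
    1992: (47.6, 46.6),
    1996: (55.4, 54.7),
    2000: (54.4, 50.3),
    2004: (52.9, 51.2),
    2008: (47.3, 46.3),
}

for year, (predicted, actual) in sorted(results.items()):
    gap = actual - predicted  # negative: incumbent party underperformed the models
    print(f"{year}: predicted {predicted:.1f}, actual {actual:.1f}, gap {gap:+.1f}")
```

Running it shows the pattern one of the commenters below picks up on: the gap is negative in all five years, with 2000 by far the largest miss.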
Comments:
First of all, the only real miss here is 2000. The predictors are pretty good (remembering, of course, that I picked which ones to look at).
Second: the only one that's way off, 2000, had Bush beating expectations and Gore falling short of them.
Third: Bob Dole's losing campaign and John Kerry's losing campaign did a bit better, not worse, than what the predictors thought would happen. Again, that doesn't mean they really had great campaigns after all, but I'd say it's at least a bit of evidence in that direction.
I recall one of the modelers at the time -- don't remember which -- explaining the gap in 2000 as follows: The prediction model assumes that the incumbent-party candidate will take credit for the good fundamentals, not try to distance himself from them as Gore arguably did.
Dole made major gains in the last two weeks of the 1996 campaign, emphasizing Clinton's questionable fundraising practices; he was trailing by 15 points on average in the polls with two weeks to go and lost by only 8 1/2 points (his rallying of the Republican base in the last two weeks likely saved the Republican majorities in both houses of Congress). Kerry won all three of his debates with Bush, two of them decisively. There never was much evidence in the campaigns that Dole and Kerry were below average candidates; they got knocked by their co-partisans after the fact because they lost.
Seems like the models tended to overstate support for the incumbent/incumbent party. Is there anything to this? Is it just coincidence in a sample size of five? Or do the models themselves overvalue the incumbency advantage? Has that advantage waned in modern election cycles?