Tuesday, July 10, 2012
Continuing my reaction to yesterday's discussion of whether a result in this year's election could mean that political scientists have been wrong about presidential elections. In this part: what it might mean if the models do get this year's election wrong.
Noam Scheiber at one point in the twitter thread says that "If Obama wins, it will not be a very good day for political scientists who study US presidential races." Now, I agree with the political scientists in the thread who note that in fact Scheiber is just plain wrong about the prediction models (remember, Larry Bartels has a model which strongly favors Obama, and the general impression I get is that the models overall predict a close race).
But beyond that: look, there are lots of reasons that a relationship which appeared to exist in the past would not hold up in new events. One is, to be sure, a possibility that the relationship was never there in the first place and that scholars got it wrong. Another possibility, however, is that the relationship held in some circumstances and not in others. For example: it's possible that the relationship between economic performance and vote might turn out to be more complex than previously understood, with new and unexpected results during economic conditions we rarely encounter. If that turns out to be the case (and I'm certainly not saying that it is!), it wouldn't fit Scheiber's put-down: "a theory that says structural factors shld predominate, except when they dont, isnt so impressive." Indeed, that situation would tell us nothing new about the balance between fundamentals and campaigns; it would just say that it's harder to understand the effects of fundamentals. (Indeed, it might be impossible; there could be an underlying real but complex relationship which is impossible to understand from our limited observations, at least beyond a narrow range of typical situations).
And that's not all. Another possibility would be that the world has changed. It's possible, although in my view unlikely at least in the short term, that voters could change over time, becoming more (or less) responsive to campaigns (or that campaigns changed, and become more or less effective at getting to voters). If that's the case (and again, I don't actually see any evidence for it from this campaign), then once again that doesn't reflect badly on political scientists who had the earlier findings; it's just something new to study and understand.
So what would reflect badly on political scientists? Other than that first one (the relationship was never there and people got it wrong), what would in fact reflect badly on us is if we made claims that were too strong based on what we had. I think most of us try to be careful not to do that, but I'm sure I've fallen short at times, and I suspect others have as well; Nate Silver in a post a while ago pulled up several claims about prediction models (not necessarily made by actual political scientists, however) which were way over the top, so obviously there are some mistakes, and it's absolutely right for us to be called on them. But it's not always easy. I've referred to this in the past...when you're writing for a larger audience it's difficult to include all the proper caveats without sounding wishy-washy and, even worse, so inconclusive that no one is going to read it. Hey, political scientists: if you read something that I write, or John writes, or Seth Masket writes, or one of the others of us writes that sounds too certain, please let us know! I know I definitely want to hear it -- as I always want to hear if I say something based on out-of-date research, or if I just bungle applying perfectly good research to some event.
I think that's it. As I said, I think John's discussion of this is very good, so if you're interested in the substance here you should head over to his post.