I ordered Carlin and Louis’s 2009 “Bayesian Methods for Data Analysis” yesterday, and the first chapter has left me with some questions that I want to raise here, because it reinforces my belief that there’s an unfortunate lacuna in this and many similar books. Specifically, is there a distinctively Bayesian approach to political science beyond what we think of as the Bayesian approach to statistical inference? I believe that our discipline conducts empirical testing primarily as a convincing attempt to falsify our own theories. If a tool is confusing to the median political scientist, can it ever be convincing? And for Bayesian statistics in particular, is an effort to fit the best model really an effort to falsify the theory?
The Bayesian approach to statistical inference is relatively well-known and at least moderately well-regarded in contemporary political science. There’s a fair number of articles in the top journals that use Bayesian approaches to data analysis, and courses on the topic are common at ICPSR, EITM, and PhD programs at multiple universities. My interest in the methodology comes less from any of these sources, though, than from its philosophical appeal – it seems to match my intuition. I think people have subjective beliefs about probability, and that those prior beliefs are often not updated in light of new evidence – climate change, a particular passion of mine, serves as a nice example of this. Beyond this, taking the data as fixed and the underlying parameter of interest as variable is also a structure that makes sense to me. We have the data – I can bring it to you on a plate – but I don’t believe that a ‘true parameter’ ever exists in the social sciences; we just want to make inferences about its probability distribution. It’s these fundamentals that attracted me to Bayesian approaches to statistics, and the approach eventually consumed a large slice of my time.
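The mechanics I’m describing can be sketched in a few lines. This is a minimal, hypothetical Beta-Binomial example (none of the numbers come from any real analysis): the data are fixed counts we can “bring you on a plate,” and what gets updated is a distribution over the unknown parameter.

```python
# Conjugate Beta-Binomial updating: a Beta(alpha, beta) prior over an
# unknown proportion theta, combined with binomial data, yields a
# Beta(alpha + successes, beta + failures) posterior.

def update_beta(alpha, beta, successes, failures):
    """Return the posterior Beta parameters after observing the data."""
    return alpha + successes, beta + failures

# A mildly skeptical prior centered on theta = 0.5
alpha, beta = 2.0, 2.0

# Hypothetical data: 7 successes in 10 trials
alpha, beta = update_beta(alpha, beta, successes=7, failures=3)

# Posterior mean: (2 + 7) / (2 + 2 + 10) = 9/14, about 0.64
posterior_mean = alpha / (alpha + beta)
```

The point of the sketch is only the structure: the counts never change, while the distribution over theta does.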
But there are also fundamental problems. None of the books I’ve read on it deal with how the philosophy of inference intersects with the philosophy of science, and so I want to raise a few of the questions that seem important about that. First, Bayesian stats is hard. I’m a history BA who came to grad school with no experience of stats and no idea about philosophy of science. Reading my first empirical articles was a long and unpleasant campaign against my instinct to skip the tables and get the intuition. The more our field shifts to advanced methods, the harder it will be for new graduate students to catch up. There’s obviously a balance here, because I don’t think we should abandon theories that require advanced methods, and this is just a special case of a wider communication problem. But without a clear disciplinary agreement on our inferential approach, there will always be those who are unconvinced by Bayesian analysis. And articles that don’t address this, I think, are making a mistake.

Second, there needs to be more discussion of what replication means for Bayesian analysis. WinBUGS is notoriously fickle, so in a practical sense replication is difficult, especially with the speed of computing advances. But what does it mean to replicate a prior?
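On the replication point, one partial answer seen in practice is a prior-sensitivity check: re-run the same model under several priors and report how far the conclusions move. Here’s a toy conjugate sketch of that idea, with invented priors and data:

```python
# Prior-sensitivity check for a Beta-Binomial model: the same
# (hypothetical) data analyzed under three different priors.
data_successes, data_failures = 12, 8

priors = {
    "flat": (1.0, 1.0),        # uniform over theta
    "skeptical": (10.0, 10.0), # strongly centered on 0.5
    "optimistic": (8.0, 2.0),  # centered on 0.8
}

posterior_means = {
    name: (a + data_successes) / (a + b + data_successes + data_failures)
    for name, (a, b) in priors.items()
}
# flat: 13/22 ~ 0.59; skeptical: 22/40 = 0.55; optimistic: 20/30 ~ 0.67
```

If conclusions survive across this kind of range, replicating “the prior” matters less; if they don’t, the prior is doing real work and a replicator needs to know exactly what it was.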
As well as this, there’s the issue of falsifiability. We aren’t (usually) statistical consultants trying to maximize the amount of information we can wring out of a dataset, and so priors don’t help us in the same way they can perhaps help financial experts understand outliers (for example). The problem is understanding what falsification means in a Bayesian sense. If the data contradict the theory, how do we update? Do we stop believing what we used to believe? I think there are some interesting avenues for discussion with regard to what falsification means for these models, but again it’s hard to find existing work that deals with this, at least in the major textbooks and articles cited by contemporary Bayesian work in political science. I think we need to explore what the key terms in philosophy of science – replication and falsification – mean for Bayesian inference.
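One way to make the updating question concrete – not the only one – is to track how much posterior probability a theory retains once the data arrive. A sketch, under a hypothetical theory claiming “theta > 0.5” and invented data that contradict it:

```python
import math

def prob_theta_above(a, b, x, n=20000):
    """P(theta > x) under a Beta(a, b) distribution, computed by a
    crude midpoint-rule integration of the density on (x, 1)."""
    log_norm = math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
    step = (1.0 - x) / n
    total = 0.0
    for i in range(n):
        t = x + (i + 0.5) * step
        total += math.exp(log_norm + (a - 1) * math.log(t)
                          + (b - 1) * math.log(1.0 - t)) * step
    return total

# Under a flat Beta(1, 1) prior, the theory "theta > 0.5" starts at 0.5
prior_support = prob_theta_above(1.0, 1.0, 0.5)

# After observing 2 successes in 20 trials, the posterior is Beta(3, 19),
# and almost no probability mass remains above 0.5
posterior_support = prob_theta_above(3.0, 19.0, 0.5)
```

The theory is never declared false in the Popperian sense; its probability just collapses toward zero. Whether that counts as falsification is exactly the kind of question I’d like the textbooks to take up.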