Bayesian Approaches to Political Science

I ordered Carlin and Louis’s 2009 book “Bayesian Methods for Data Analysis” yesterday, and the first chapter has left me with some questions that I want to raise here, because it reinforces my belief that there’s an unfortunate lacuna in this and many similar books. Specifically, is there a distinctively Bayesian approach to political science, beyond what we think of as the Bayesian approach to statistical inference? I believe that our discipline conducts empirical testing primarily to make a convincing attempt at falsifying our own theories. If a tool is confusing to the median political scientist, can it ever be convincing? And for Bayesian statistics in particular, is an effort to fit the best model really an effort to falsify the theory?

The Bayesian approach to statistical inference is relatively well known and at least moderately well regarded in contemporary political science. A fair number of articles in the top journals use Bayesian approaches to data analysis, and courses on the topic are common at ICPSR, at EITM, and in PhD programs at multiple universities. My interest in the methodology comes less from any of these sources, though, than from its philosophical appeal – it seems to match my intuition. I think people have subjective beliefs about probability, and that those prior beliefs are often not updated in light of new evidence – climate change, a particular passion of mine, is a nice example of this. Beyond this, taking the data as fixed and the underlying parameter of interest as variable is also a structure that makes sense to me. We have the data – I can bring it to you on a plate – but I don’t believe that a ‘true parameter’ ever exists in the social sciences; we just want to make inferences about its probability distribution. These fundamentals are what attracted me to Bayesian approaches to statistics, and they eventually consumed a large slice of my time.
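
To make that last intuition concrete, here is a minimal sketch of a Bayesian update in Python – the beta-binomial setup, the prior, and the poll numbers are all invented for illustration:

```python
# A toy Bayesian update: inference about a parameter's distribution,
# with the data treated as fixed. All numbers here are invented.
from scipy import stats

prior_a, prior_b = 2, 2          # weakly informative Beta(2, 2) prior
successes, trials = 62, 100      # hypothetical poll: 62 of 100 in favor

# Beta is conjugate to the binomial, so the posterior has a closed form.
posterior = stats.beta(prior_a + successes, prior_b + trials - successes)

print(f"Posterior mean: {posterior.mean():.3f}")
lo, hi = posterior.ppf([0.025, 0.975])   # 95% credible interval
print(f"95% credible interval: ({lo:.3f}, {hi:.3f})")
```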

But there are also fundamental problems. None of the books I’ve read on it deal with how the philosophy of inference intersects with the philosophy of science, and so I want to raise a few of the questions that seem important about that. First, Bayesian statistics is hard. I’m a history BA who came to grad school with no experience of statistics and no idea about philosophy of science. Reading my first empirical articles was a long and unpleasant campaign against my instinct to skip the tables and get the intuition. The more our field shifts to advanced methods, the harder it will be for new graduate students to catch up. There’s obviously a balance here, because I don’t think we should abandon theories that require advanced methods, and this is just a special case of a wider communication problem. But without a clear disciplinary agreement on our inferential approach, there will always be those who are unconvinced by Bayesian analysis, and articles that don’t address this are, I think, making a mistake. Second, there needs to be more discussion of what replication means for Bayesian analysis. WinBUGS is notoriously fickle, so replication is difficult in a practical sense, especially given how quickly computing environments change. But what does it mean to replicate a prior?
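
The practical and the conceptual halves of the replication problem can be separated with a sketch. Below is a minimal hand-rolled Metropolis sampler in Python (invented data; not WinBUGS itself): fixing the seed makes the chain exactly reproducible, but reproducing the numbers says nothing about whether the choice of prior can be ‘replicated’:

```python
# A hand-rolled Metropolis sampler for a normal mean (known sd = 1) with a
# vague normal prior. Data are invented; the seed makes the run reproducible.
import numpy as np

rng = np.random.default_rng(2014)
data = rng.normal(loc=1.0, scale=1.0, size=50)

def log_posterior(mu, prior_sd=10.0):
    log_lik = -0.5 * np.sum((data - mu) ** 2)   # normal likelihood, sd = 1
    log_prior = -0.5 * (mu / prior_sd) ** 2     # vague Normal(0, 10) prior
    return log_lik + log_prior

mu, chain = 0.0, []
for _ in range(5000):
    proposal = mu + rng.normal(scale=0.5)       # symmetric random-walk step
    if np.log(rng.uniform()) < log_posterior(proposal) - log_posterior(mu):
        mu = proposal
    chain.append(mu)

print(f"Posterior mean of mu: {np.mean(chain[1000:]):.3f}")  # drop burn-in
```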

As well as this, there’s the issue of falsifiability. We aren’t (usually) statistical consultants trying to maximize the amount of information we can wring out of a dataset, so priors don’t help us in the same way they might help, for example, financial experts understand outliers. The problem is understanding what falsification means in a Bayesian sense. If the data contradict the theory, how do we update? Do we stop believing what we used to believe? I think there are some interesting avenues for discussion with regard to what falsification means for these models, but again it’s hard to find existing work that deals with this, at least in the major textbooks and articles cited by contemporary Bayesian work in political science. I think we need to explore what the key terms in philosophy of science – replication and falsification – mean for Bayesian inference.
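
One toy way of cashing out ‘updating away from a theory’ is posterior model probabilities. The sketch below uses invented numbers and two point hypotheses, purely to illustrate the ambiguity – belief in the theory shrinks rather than the theory being refuted outright:

```python
# Toy belief revision: two point hypotheses about a proportion, equal prior
# belief in each, and data that favor the rival. All numbers are invented.
from scipy import stats

successes, trials = 48, 100          # hypothetical observed data
prior_theory = prior_rival = 0.5     # equal prior belief in each model

lik_theory = stats.binom.pmf(successes, trials, 0.7)  # theory says p = 0.7
lik_rival = stats.binom.pmf(successes, trials, 0.5)   # rival says p = 0.5

post_theory = (lik_theory * prior_theory) / (
    lik_theory * prior_theory + lik_rival * prior_rival
)
print(f"Posterior probability of the theory: {post_theory:.4f}")
# The theory is never 'falsified' outright; belief in it just shrinks,
# which is exactly the ambiguity raised above.
```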

About Ben Farrer

Ben is currently an assistant professor in the Department of Environmental Studies at Knox College. He received his PhD in Political Science from Binghamton University in 2014. He was previously a visiting assistant professor in the Department of Political Science at Hobart and William Smith Colleges, and before that held a research position in the Department of Political Science at Fordham University. His research and teaching interests center on parties and interest groups, particularly those from under-represented constituencies. A great deal of his work deals with the political organizations of the environmental movement. He studies both American and comparative politics.

3 Replies to “Bayesian Approaches to Political Science”

  1. Hi,
    this is my first time here at your blog.
    I am a grad student myself and I am learning Bayesian inference as well.
    I have only two comments:
    1. Take a look at Andrew Gelman’s blog.
    2. Popper is nice (falsification and so on), but it seems to me that the power of the Bayesian approach lies in fitting models to the data better than the usual approach does. It is not about philosophical accounts of probability and so on. Rather, it is a more suitable tool for our analysis.
    Maybe Gelman’s books can convince you of that (or his blog).
    regards
    Manoel

  2. Hi,
    thanks for the comments, I’m a big fan of Gelman’s blog too. I also agree that Bayesian inference often allows for a deeper exploration of the data, but my worry here is that we’re not setting ourselves and our theories a harder test – which, in my opinion, is what our empirical sections should be doing. It may be a more suitable tool in a statistical sense, but are there ways we could be using it to make it a more suitable ‘social scientific’ tool too?

  3. Hi,
    I understand your concern. On one hand, we are not used to testing theories (for example, game-theoretic predictions or mechanisms). On the other hand, hierarchical models are very good for getting a better understanding of our data. I mean, in the social sciences, I don’t think average effects (ATEs) are so important. Actually, interactions are almost always present and there is usually some structure in the data. Multilevel models allow us to address these issues (see the sketch just below).
    All in all, I think you are right that we do not test our theories (mechanisms), but this is not related to Bayesian or classical inference; it is related to the general approach of our field. Formal theorists do not bother to test theories, and empiricists do not bother to properly test formal theories.
    Manoel
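
A minimal varying-intercept sketch of the multilevel models Manoel describes – the data are simulated, and the code assumes the PyMC library is available, purely for illustration:

```python
# A minimal Bayesian varying-intercept (multilevel) model, illustrating how
# group-level structure is modeled directly. Data and settings are invented.
import numpy as np
import pymc as pm

rng = np.random.default_rng(7)
n_groups, n_per = 20, 30
group = np.repeat(np.arange(n_groups), n_per)
x = rng.normal(size=n_groups * n_per)
true_intercepts = rng.normal(1.0, 0.8, size=n_groups)   # real group structure
y = true_intercepts[group] + 0.5 * x + rng.normal(size=n_groups * n_per)

with pm.Model():
    mu_a = pm.Normal("mu_a", 0, 5)             # population-level mean intercept
    sigma_a = pm.HalfNormal("sigma_a", 1)      # spread of group intercepts
    a = pm.Normal("a", mu_a, sigma_a, shape=n_groups)   # one intercept per group
    b = pm.Normal("b", 0, 5)                   # common slope
    sigma = pm.HalfNormal("sigma", 1)
    pm.Normal("y_obs", a[group] + b * x, sigma, observed=y)
    trace = pm.sample(1000, tune=1000, random_seed=7)

print(trace.posterior["sigma_a"].mean().item())   # how much group structure?
```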
