History vs. Political Science? Temporally Constrained Studies and Generalizability.

So this is my first
post back from a prolonged break. As I mentioned in a previous—albeit
brief—entry, I’ve had a busy but enjoyable summer. I got married, defended my
dissertation at the beginning of July, and my wife and I have since relocated
to Tuscaloosa, Alabama, where I’ve accepted a position as a post-doc. We’re both
pretty excited about the move and we’re really enjoying ourselves. Having grown
up in the Adirondacks in upstate New York, summers that extend beyond a
two-month window have a certain appeal. And now that we’re getting settled in down
here, I’m slowly getting back into a routine for work. I’ve been sitting on
some papers for a while now that I’m finally turning my attention to, and it’s
my experience with one of these papers that motivated me to write this
particular post.

I’ve been thinking recently about how we view the place that
history holds in our discipline. Now, I suspect there is an initial reaction
that many people (particularly quantitative political scientists) are likely to
have to the word “history,” and I’m guessing that is probably one of derision.
It’s not that political scientists necessarily dislike history or historians, but
this reaction is conceivable given that these disciplines are marked by some
important epistemological differences, and many quantitative political
scientists are often taught from the outset to avoid relying
on single cases when drawing more generalizable lessons and conclusions (with
good reason). When we think of historians, it is often in the context of
someone who provides an excruciating level of detail about one particular event,
and then tries to explain its origins and broader implications/lessons. My
issue is not in fact with this particular point—as social scientists, we want
to be able to assess just how generalizable the relationships that we’re looking
at really are. We are typically interested in exploring systematic relationships,
trends, and patterns that hold over broader periods of time, and that apply to a
wider range of actors, than many historical studies do. So to be clear from the
outset, I am not advocating that quantitative political science should attempt
to emulate the professional approach or methods that many historians employ.

My issue, really, stems more from the assertion by many quantitative
political scientists that a study that is time bound in some way is “history”. This
does not apply to all political scientists, but it’s been my experience that reviewers
will often look at the temporal range of a study and dismiss it if it is
explicitly bound to a particular period of time that does not overlap with the
present (allowing a few years of slack on just what constitutes the “present”,
that is). Indeed, some seem to be under the impression that it’s not even
political science if the gap between the time period covered by a study and the
present is sufficiently large. For example, let’s say I’m interested in
understanding the domestic determinants of pre-hegemonic US foreign policy
behavior. In this particular instance, there are theoretical reasons to suppose
that the behavior of the US could be different in this time period as compared
to later periods. Let’s further suppose that I have theoretical expectations
regarding the relationship between X and Y, and I write a paper on this
relationship for the period between 1900 and 1945. I would not at all be
surprised to receive reviews rejecting the paper at least in part because the
reviewer(s) questioned the study’s broader relevance, given that it’s bound to
a 46-year time period.  

I think these sorts of reactions expose some problems and some
important assumptions that we often make as political scientists. First, not
every study necessarily needs to neatly map onto the present time period—or any
other arbitrarily chosen time period. There is nothing inherently unscientific
about the notion that certain relationships or phenomena can only be found in a
particular temporal context. I’m fairly certain people have not abandoned the
study of dinosaurs simply because we no longer see T-Rex roaming around the
countryside. And our field is rife with examples of research, the temporal
context of which is fundamentally limited in some way. Scholars of American
politics provide perhaps the clearest example of an entire sub-field that
cannot be held to apply to a period extending back beyond 1789. The sub-field
of international relations is similarly dependent in many ways upon the
existence of the modern nation-state, which we typically trace back to 1648,
and some of the most widely used datasets only go back to 1815. Do we consider our
endeavors in these areas “history” because our studies are bound to these time
periods?

Also on this point, there seems to be a double standard when
we consider the broader implications of our research. Taking my example from
earlier, reviewers will commonly ask how the study of US foreign policy between
1900 and 1945 is relevant for today, but rarely do we consider how a study of US
foreign policy from 1945–2013 informs our understanding of the 1900–1945
period. It strikes me that this (1) perpetually moves the goalpost, and (2)
may be the wrong goalpost. This may seem like an odd point, but what is
“relevant” by current standards is something that obviously changes on a
day-to-day basis, and it’s a standard that says nothing about the quality of a
paper as a piece of social science, or whether or not a paper helps us to
understand a particular question about a given set of relationships. Really,
this only represents our own innate temporal biases, but it says nothing about
how scientific a piece of research is. If the goal is theoretical and empirical
knowledge that is truly generalizable in a temporal sense, then this kind of
consideration should apply just as much
as thinking about how a study informs our understanding of the present. Conducting
a study with the purpose of expanding our knowledge of systematic relationships
between societal actors is not synonymous with expanding our knowledge of
systematic relationships between societal actors for the purposes of informing
our understanding of the present.

And it’s not as though we don’t attempt to deal with “unique”
time periods and cases in our current research. However, it’s often the case
that the manner in which we deal with these cases is fairly crude. For example,
we might include a dummy variable in a model to control for a time period (or characteristics
of a time period) that we believe to be unique in some way, such as bipolarity
during the Cold War. Similarly, we might include a dummy variable in our
model to control for states that we believe to be unique—depending on our
topic, it might be a state like Israel, Egypt, the US, or maybe a group of
states like the “Great Powers”. Sometimes we might also include an interaction
term to account for how the effect of one variable might be conditional upon
another. 
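
To make the contrast concrete, here is a minimal sketch of what these two approaches look like in a regression model. It uses Python with statsmodels, and the data and variable names (conflict, democracy, trade, coldwar) are entirely hypothetical; the point is the model structure, not any substantive result.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical country-year-style data; the variables are illustrative only.
rng = np.random.default_rng(42)
n = 1000
df = pd.DataFrame({
    "conflict":  rng.normal(size=n),          # stand-in outcome measure
    "democracy": rng.normal(size=n),          # stand-in regime-type score
    "trade":     rng.normal(size=n),          # stand-in economic-interest measure
    "coldwar":   rng.integers(0, 2, size=n),  # 1 = Cold War years, 0 = after
})

# Approach 1: a period dummy. This only shifts the intercept for Cold War
# observations; every slope is forced to be identical across periods.
m1 = smf.ols("conflict ~ democracy + trade + coldwar", data=df).fit()

# Approach 2: an interaction term. Now the slope on democracy may differ by
# period, but the slope on trade is still constrained to be the same.
m2 = smf.ols("conflict ~ democracy * coldwar + trade", data=df).fit()

print(m1.params)
print(m2.params)
```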

But these approaches are not always appropriate methods for
dealing with the questions that we want to answer. Dummying out a particular
time period is only going to tell us whether a given time period or
group has a higher or lower intercept than the alternative. This approach is
also often atheoretical: we might believe that a time period is somehow
different, but be unable to fully articulate why or how. Similarly, interaction
terms with a variable capturing a particular time period implicitly suggest
that a given relationship
is time-bound in some way. However, these approaches don’t allow us to examine
whether or not the remaining variables in our models also have different
effects in the context of a particular time period. For example, maybe we’re
interested in whether or not both regime type and economic interests have a
different effect on conflict propensity during the Cold War as compared to
after. This, then, would suggest that maybe splitting our sample into two time
periods is the more appropriate means of addressing our question. In fact, the
notion that we would dummy out a particular time period because we suspect that
it’s “different” in some way, but don’t exactly know how, is exactly the reason
why we would want to conduct a study that is temporally bound in the first
place.
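
Continuing the illustration, a split-sample version of the same exercise might look like the sketch below. Again, the data and variable names are hypothetical; the point is that estimating the model separately on each subsample frees every coefficient, not just the intercept or a single slope, to differ across periods.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Same hypothetical data-generating setup as the sketch above.
rng = np.random.default_rng(42)
n = 1000
df = pd.DataFrame({
    "conflict":  rng.normal(size=n),
    "democracy": rng.normal(size=n),
    "trade":     rng.normal(size=n),
    "coldwar":   rng.integers(0, 2, size=n),
})

# Split-sample alternative: fit the same specification separately on each
# period, so that every coefficient is free to differ across periods.
m_cold = smf.ols("conflict ~ democracy + trade",
                 data=df[df["coldwar"] == 1]).fit()
m_post = smf.ols("conflict ~ democracy + trade",
                 data=df[df["coldwar"] == 0]).fit()

# Comparing the two sets of estimates addresses the question directly: do
# regime type and economic interests both work differently during the Cold
# War than after it?
print(m_cold.params)
print(m_post.params)
```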

This points to another issue. I think these biases are
somewhat rooted in, and reinforced by, our relatively limited access to “good”
data. Almost every journal article contains passages wherein the authors
attempt to assert the broader relevance of their work. In the case of
international relations, it is also quite common for these articles to then
proceed to test their arguments using data that is only available for the
post-World War II period. Sometimes, these tests will use data for a single
country—often the US. Yet the arguments the authors make are often asserted as
applicable to a broader set of countries than just the US, and rarely do such
papers even address their own temporal limitations. We implicitly accept the
generalizability of papers in which the tests of broader theoretical arguments
rely on data from an incredibly narrow and often unrepresentative set of
states, but push back when a study openly acknowledges its more narrow temporal
confines. Why should we automatically assume that such studies inform our
understanding of international relations and state behavior in the 1800s?

This is understandable. Particularly in the field of
international relations, the availability of data is vastly greater in
the post-World War II period than before. Commonly used indicators like GDP and
trade either don’t exist, are often missing, or are highly inaccurate for
earlier time periods. Accordingly, many of our studies focus on this 50–60 year
time period—not because it somehow matters more, or because we are interested
only in this particular time period, but because this is the period for which
we have access to relatively abundant data sources. But even in the post-World
War II time period some of the data we use can still be of questionable
reliability. Accordingly, when we see an article that focuses on a much earlier
time period it sticks out like a sore thumb, and reviewers will often proceed
to subject that paper to a different standard than other papers—a standard that
really has nothing to do with the execution of the paper or the soundness of
its argument.

This knee-jerk reaction against studies that are temporally
bound in some way can also have deleterious consequences for our ability to
understand the world. Finding that a particular relationship between two
variables holds only for a given time period can reveal new and interesting
questions. For example, we have evidence
that Republicans and Democrats have switched their positions on military
spending over the course of the Cold War. If we were looking for a relationship
between Republicans and higher military spending over the entirety of this time
period, we might erroneously conclude that there is no relationship.
Alternatively, the finding that this relationship is temporally bound in some
way raises new questions: Why did the parties switch? When did the change occur, and what drove it?
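
A quick simulation makes the point. The numbers below are made up (this is not actual spending data), but they show how pooling across a sign switch can produce a near-zero estimate even though the relationship is strong within each period:

```python
import numpy as np
import statsmodels.api as sm

# Made-up illustration: suppose the "Republican" effect on military spending
# is +1 in the early period and -1 in the later period.
rng = np.random.default_rng(0)
n = 500
republican = rng.integers(0, 2, size=2 * n)
early = np.repeat([1, 0], n)                 # 1 = early period, 0 = later
effect = np.where(early == 1, 1.0, -1.0)     # sign of the effect flips
spending = effect * republican + rng.normal(size=2 * n)

# Pooled model: the two opposite-signed effects cancel, so the estimated
# coefficient on republican is close to zero ("no relationship").
pooled = sm.OLS(spending, sm.add_constant(republican)).fit()
print("pooled:", pooled.params)

# Period-specific models recover the sign flip.
for mask, label in [(early == 1, "early"), (early == 0, "late")]:
    m = sm.OLS(spending[mask], sm.add_constant(republican[mask])).fit()
    print(label, m.params)
```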

If our goal is to continuously develop and refine our
understanding of how the world works then we must think carefully about the
standards we set. If that standard is that we must only examine relationships
that hold for centuries at a time, then we are imposing some very serious limitations
on ourselves as researchers. These kinds of “big” systematic relationships are
clearly important, but the march of scientific progress is not marked
exclusively in these terms. 

About Michael Flynn

Michael Flynn is an associate professor in the Department of Political Science at Kansas State University. He received his Ph.D. in Political Science from Binghamton University in 2013. His research focuses on the political and economic determinants of foreign economic and security policy, security issues, and state repression.
