At the Duck of Minerva, Josh Busby has a post on the gap between political science/IR research and the policymaking community. I don't have a whole lot to say about the specific content of Busby's post, other than that I think this is an interesting and important debate to have. (Also, please note that the links in the quotations were provided in Busby's original post.)
The one quick and dirty thought I do have concerns an assumption that seems to underlie these sorts of debates. The general tone of academic-policy gap discussions suggests that the problem, insofar as such a problem exists, lies with academics not making their work more user-friendly or accessible to policymakers. I get this: as Busby points out, there are barriers to entry in terms of lingo, the length of academic articles, methodology, etc. However, I can't help but feel that the discussion is often one-sided. For example, it seems slightly contradictory to bemoan the fact that academic research is often too long and complicated to be of any real use, while simultaneously complaining that such research just can't provide the kind of detailed predictive power or nuance that you, as a policymaker, are looking for. Speaking purely as an outsider, doesn't it seem funny for policymakers to ask researchers to distill their analyses down to a page or two, and then argue that they need more nuance? An average research paper might contain somewhere between 10 and 15 pages of actual data analysis (thinking in terms of my drafts, not necessarily finished published work), apart from the theoretical argument and research design, and there are probably a lot of "ifs" and "buts" located in the midst of that analysis. I'm sure the actual process itself is even more nuanced, but to what extent?
Along these lines, this passage in particular also caught my eye:
Steve Krasner, who served as Director of Policy Planning in the George W. Bush Administration, also downplayed the potential policy-relevance of our work. Referencing Fearon and Laitin's findings on the contributions of mountainous terrain to civil war, he wrote that such structural factors are "not something policymakers can do much about."
Moreover, Krasner noted the challenges for policymakers to know how to deal with central tendencies in particular circumstances: "A statistically significant general finding may often be of little help for a policymaker dealing with a specific problem."
I'm not saying that we, as a discipline, have great and accurate predictive powers, but I feel the degree to which we explore more nuanced relationships between variables, and examine the applicability of our theoretical arguments to subsets of broader samples, is a bit undersold. And if I may riff a bit more off of this passage, I think political scientists have also done some interesting work to move away from this fixation on central tendencies. Modeling the variance component, as in a heteroskedastic probit, adds an additional layer of nuance to our studies and helps us explore how much variation we might expect around that measure of central tendency.
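To make that a bit more concrete, here is a minimal sketch of what a heteroskedastic probit looks like in practice, fit by maximum likelihood on simulated data. The variable names and the single mean and variance predictors (x, z, beta, gamma) are purely illustrative assumptions on my part, not drawn from any particular study.

```python
# A minimal, illustrative heteroskedastic probit on simulated data.
# All names (x, z, beta, gamma) are assumptions for the sketch, not from any study.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 2000

x = rng.normal(size=n)          # predictor in the mean (choice) equation
z = rng.normal(size=n)          # predictor in the variance (scale) equation

beta_true, gamma_true = 1.0, 0.5
scale = np.exp(gamma_true * z)  # error scale varies with z: heteroskedasticity
y = (beta_true * x + scale * rng.normal(size=n) > 0).astype(int)

def neg_loglik(params):
    """Negative log-likelihood of P(y=1) = Phi(x*beta / exp(z*gamma))."""
    beta, gamma = params
    p = norm.cdf(x * beta / np.exp(z * gamma))
    p = np.clip(p, 1e-12, 1 - 1e-12)   # guard against log(0)
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

fit = minimize(neg_loglik, x0=np.zeros(2), method="BFGS")
beta_hat, gamma_hat = fit.x
print(f"beta ~ {beta_hat:.2f}, gamma ~ {gamma_hat:.2f}")
# gamma_hat captures how the spread around the central tendency shifts with z,
# which is exactly the extra information a standard probit throws away.
```

A standard probit would force gamma to zero; letting the data estimate it is one way, among several, that quantitative work already speaks to the variation around a "statistically significant general finding," and not just the finding itself.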
And so this prompts me to ask: what can policymakers do to facilitate the transmission of useful academic research into the policymaking process in a constructive way? A lot of ink has been spilled on the professional and institutionalized habits and incentives that make it difficult for us as academics to be more policy-relevant, but what factors of this sort are at work for policymakers? What habits could they change? This element of the debate is not completely ignored; Busby himself notes that elements of our training could be useful for informing the way policymakers approach questions:
For someone sympathetic to bridging the policy-academe gap, you might expect me to defend what contributions we can make. As Bob Jervis (who delivered the IPSI keynote) argued, I do think that our training may help us resist the temptations of bias, particularly the inappropriate use of decision short-cuts like historical analogies. In the best of circumstances, our critical thinking skills force us to entertain alternative explanations and to look for observable implications of our argument. Those kinds of approaches may be useful in policymaking, forcing decisionmakers to surface their assumptions about the likely consequences of their actions: "and then what?"
Still, arguing that policymakers could learn from our approach(es) and training is not necessarily the same thing as asking what policymakers can change about their own approach to better bridge the academic-policy gap.
I've not worked in a policymaking capacity, so it could be that I am just moderately to hopelessly naive on this subject, and other folks might have a different read on the situation. Still, it feels like the debate is a bit skewed with respect to where the onus falls to bridge this gap.