This past weekend, I had the privilege of participating, with a number of friends and colleagues, in the Sussex International Relations Department’s 50-year celebration: What’s the Point of IR?
The conference was interesting in a lot of ways – just go look at the (annoyingly long, but effective) hashtag: #whatsthepointofIR. There was a lot of very important (and very diverse) discussion of what we do and how we do it, both in practice and normatively.
In this post, I want to highlight a part of the conversation I found particularly interesting: a discussion about whether IR scholars have an individual or collective normative accountability for the product of their/the discipline’s work. This conversation took place alongside the conference, on Twitter, inspired by Patrick Thaddeus Jackson’s talk on the pedagogical value of IR – largely between Patrick and me, focusing on the question of moral responsibility but also engaging whether there is an IR, who is in it, and what it is for. We intend to expand on/continue the conversation, but I figured that it’d be interesting to share:
In a new article in the Millennium Special Issue on Quo Vadis IR: Method, Methodology, and Innovation, Sammy Barkin and I make the argument that IR’s “methods matching game” is fundamentally flawed – the scholarly equivalent of a dysfunctional relationship. On the one hand, the dating metaphor (made in the article and played up here) is trite. On the other hand, the suggestion that IR scholars’ choices of methods are often “matched” to people, projects, and paradigms in a haphazard and problematic way is meant seriously, and at the heart of our argument.
Our article, “Calculating Critique: Thinking Outside the Methods Matching Game,” makes the argument that IR scholars of all stripes often assume that certain methods belong with certain paradigmatic or substantive approaches to the field, so choosing a research approach or research subject chooses the methods that scholars are trained in and go on to use. There tends to be a linear path: ontology –> epistemology –> methodology –> method. We argue that this pattern is simple, and often easily accepted across the field, even without reflection or when reflection might produce a different result. We also argue that it is completely wrong.
The substance of the article, and of the edited volume that it is meant to introduce (Interpretive Quantification, which is under contract and about to be sent for review), is the use of quantitative, computational, and formal methods to explore questions in constructivist and critical IR research – that is, traditionally positivist methods being used for traditionally non-positivist work.
But this is not an attempt to bridge the positivist/post-positivist divide (whatever that is) or the qualitative/quantitative divide. It is, instead, the promotion of two arguments: 1) the methodology by which IR scholars choose methods is fundamentally flawed; and 2) quantitative methods are interpreted too narrowly, and often incorrectly, in IR.