Tag Archives: research

A Conversation from “What’s the Point of IR?”

This past weekend, I had the privilege of participating, with a number of friends and colleagues, in the Sussex International Relations Department’s 50-year celebration: What’s the Point of IR?

The conference was interesting in a lot of ways – just go look at the (annoyingly long, but effective) hashtag: #whatsthepointofIR. There was a lot of very important (and very diverse) discussion of what we do and how we do it, both in practice and normatively.

In this post, I want to highlight a part of the conversation I found particularly interesting: a discussion about whether IR scholars have an individual or collective normative accountability for the product of their/the discipline’s work. This conversation happened alongside the conference, on Twitter, inspired by Patrick Thaddeus Jackson’s talk on the pedagogical value of IR – largely between Patrick and me, focusing on the question of moral responsibility but also engaging with whether there is an IR, who is in it, and what it is for. We intend to expand on/continue the conversation, but I figured it’d be interesting to share:


Why I Don’t Give a Shit about My H-Index

My scholarship is a politics. I did not start out interested in an academic career, then narrow down my research interest in graduate school to focus on gender/feminism in IR. Patrick Thaddeus Jackson once called scholarship and teaching a vocation – it may be for him, but it never has been for me. For me, it is feminist politics and the search for global justice that is a vocation, and scholarship the vehicle to follow that vocation. In Marysia Zalewski’s sense, I see theory as practice, as activism. I had an interest in one graduate program, in one dissertation topic, and in one research program – and it’s not because that is what I like to research. It’s because gender studies/global justice is what I am drawn to do, and research and teaching are how I do it.

Don’t get me wrong. I know I’m fortunate to be paid, and paid well, to follow my politics. And I’m not pretending that I do not follow my politics, and navigate my job, strategically. I do all of those things. But if I were putting professional strategy first and politics second, my career would look a lot different. Put into the context of recent discussions, the only way my h-index would measure how well I do what I do is if what I do is look to maximize the attention that my research gets in the academic social sciences. That’s not what I do, and I don’t give a shit about my h-index.
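For readers unfamiliar with the metric being argued about: an author’s h-index is simply the largest number h such that h of their papers have each been cited at least h times. A minimal sketch of that computation (ignoring the differences in how particular databases count citations) looks like this:

```python
def h_index(citations):
    """Return the largest h such that h papers have >= h citations each."""
    cites = sorted(citations, reverse=True)  # most-cited papers first
    h = 0
    for rank, count in enumerate(cites, start=1):
        if count >= rank:  # the rank-th paper still has at least `rank` citations
            h = rank
        else:
            break
    return h

# e.g. papers cited [10, 8, 5, 4, 3] times -> h-index of 4
print(h_index([10, 8, 5, 4, 3]))
```

Note what the metric rewards: it is entirely a function of how many people cite the work, which is exactly the point of contention below.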

If what I did was try to maximize the attention of my research, I would do research using cutting-edge statistical techniques addressing questions that are of direct interest to a significant percentage of prolific scholars in the field. I do not say that because it is as easy as that sentence makes it sound – compiling a great h-index, even if one sets out to do it, is very difficult and takes a significant amount of skill. And I don’t say it criticizing the people who take such a career path – to each his/her own, and I know a lot of great people who see this profession as an end in itself, or a means to an end of living comfortably.

At the end of the day, what I’m saying is that choosing to research gender, sexuality, and security in global politics is likely not an h-index maximizing choice. H-index maximization would involve paying attention to, following, and developing disciplinary trends, and citation seeking. My chosen research instead looks to buck and alter disciplinary trends writ large. That doesn’t mean it gets no attention – but it does mean that it gets attention differently, and there is a limit to the amount of attention it gets. That is a career externality to the political choice that I’ve made.

There are some who would argue that the h-index tradeoff is a personal choice that I have made – akin to other personal or career choices people make that have various costs and benefits. That argument might be worth considering if IR research as a whole benefitted from selecting for h-index maximization. I argue that it does not.

My argument does not come from the position that IR research is an unmeasurable art, or the contention that there is some intangible quality that makes scholarship good. Both might be true, but I think my issue is more fundamental. In the provision of data about, and discussion around, IR scholars’ h-index in the last couple of weeks, there has been discussion of whether h-index is a good indicator (whether it captures ‘productive researchers’) and whether what it indicates (‘research productivity’) is what the best Departments should be built around. In those conversations, the suggestion that the h-index measurement has biases has come up several times, especially in Facebook conversations with friends and colleagues.

I reject the notion that h-index measurements are biased. Bias implies that there is some achievable, objective standard out there that h-indexes just fail at measuring – a je ne sais quoi of good scholarship that is either intangible or poorly measured. I disagree. I think that using h-indexes as metrics is a combination of reifications of the political status quo and popularity contests – but I don’t think that is a bias. It is a politics, a direction, and a disciplining move.

The politics is that we like where the discipline is right now, and want to honor innovative, high-quality, attention-grabbing work at its center. The politics suggests that the majority of ‘research productive’ scholars in IR currently study desirable subject areas from desirable theoretical perspectives using desirable methods. The work that is at the margins is appropriately at the margins, and taking theoretical, empirical, and methodological risks is unlikely to pay off. The direction, then, is the perpetuation of the status quo. The disciplining move is to tie professional success to this status-quo mainstreaming and call it an objective metric. If you can measure quality by influence, and influence by the number of people who pay attention to the work, then IR scholarship has an incentive to run towards the middle and find popular niches.

To me, that does not work for whatever IR is and/or should be. It stifles macro-innovation in favor of micro-innovation, encouraging stagnation. It reifies the marginality of disciplinary margins. And it does so in a more formalized way than the social structural exclusions of the discipline do currently.

My h-index exists, like everyone else’s. I even know it. But I don’t give a shit about it. There are those who will judge the quality of my scholarship by my h-index, and/or see it as a good indicator thereof. I cannot stop that. But I can think it’s both misapplied to me and a bad move for IR scholarship. It being misapplied to me may be my issue – but I’d wager I’m not the only one who doesn’t see citation as a primary purpose of my work. That it’s a bad move for IR scholarship is an argument that I think merits further consideration.