
Stop the Madness: Reviewer Demands and Quantitative Models


The madness has to stop.  Things are getting out of hand for those who do statistical research and try to publish their results.  Reviewers for quantitative political science articles have gone off the deep end in their requests for model revisions.  A recent Journal of Peace Research article had a table with 37 different models as robustness checks.  It is not unheard of to see other articles with hundreds of different models in an appendix or footnotes.  New variables, robustness checks, new methods: the requests never seem to end and only escalate.

What is the source of the problem?  For one, reviewers may worry that the scholar is fudging the modeling to make the results meet expectations or is using inappropriate methods.  Modeling to meet expectations is a horrible practice, and efforts of this sort are always discovered.  We know of many debates where this has happened, but the benefit of the doubt should rest with the scholar in the first place.  Using inappropriate methods is another legitimate concern, but new methods are not always better, or even useful.  Regardless, the findings should be central, not the model.

The reasonableness of the request must be judged against the payoff for the change.  Is it worth it to request dozens of new specifications based on your own inclinations?  Are you making comments and suggestions because you want to be able to say something about a model at the review stage?  Is it really necessary?

Will someone think of the authors?  Demands for new specifications and variables often require a vastly expanded model, learning new methods, or restructuring an entire dataset.  Since careers often hang in the balance, such requests are usually granted, but at what price?

At the review stage, we need to be better at utility calculations.  Will our request make the article better?  Do we have some special knowledge about the model or the question suggesting that adding a new variable will greatly improve the outcome?  Having gone through this process many times, I rarely ask for new models or specifications in reviews because I know the burden such demands place on the author.  Basically, the question is simple: is the article good enough or not?  No number of additional specifications will change the answer to that question.

The real issue is civility.  Are we being civil to our colleagues, or just presenting them with a list of tasks to clear our personal bar?  Looking at recent publications and having reviewed many articles, I believe we have gone too far in our demands at the article revision stage.  Are we asking for changes because they will make the article better, or because we are creating unreasonable publication standards?

PS – Reflections (9/9/2014)

It seems this post struck a nerve for some.  That was the intention, since I gather many who work in quantitative IR feel this way, while the statistics wing of political science argues for more robustness and higher standards.  My main point was to think about the nature of the article and the burden on the author when making such requests during the review stage.  For me, the issue really struck home after a coauthor spent weeks modifying a model to satisfy a reviewer, with no substantive change in the findings despite mountains of work.

My JPR example was perhaps ill chosen.  I picked the article because it started a conversation on Facebook that made me aware of the collective angst regarding these issues.

I did want to add that at no point did I blame the editors of journals.  They are only working with what the reviewers give them and have to uphold standards.  To me, this issue is more about the nature of reviewers' demands than about some structural problem.  As always, be civil, unless it's really funny not to be…


Author: Brandon Valeriano

Brandon Valeriano is the Donald Bren Chair of Armed Politics at the Marine Corps University.