Peer review can also be very political – a paper that supports your scientific worldview might get a softer, friendlier treatment than one which casts doubt on your life’s work. This can be subconscious or entirely deliberate. I’ve recently heard about a phenomenon of “networking” through review, where early and mid-career scientists give big-name authors soft, signed reviews in the hope that the author might reach out for their next grant proposal or similar. It’s also getting more and more difficult to get peer reviewers as the number of publications per year per scientist goes up, and as a result the quality of reviews is going down (see this very interesting AGU editorial on the subject).
So grim stuff all round with regard to peer review. But some scientists say that we can improve things by making publishers pay us - I’ll address that first, before reflecting on my own reviewing behaviour.
Why publishers shouldn’t pay me for peer review (but I get paid for it anyway)
The most common criticism of peer review that I hear is “we do it for free”. This is often followed with reference to profit-making publishers with big margins (looking at you Elsevier). Many think that their lucrative business model is predicated on not paying scientists for their services, and I think that’s probably true. But here is a brief critique of “publisher pays”. Once I’ve made it, I’ll reflect on my own reviewing activities, which is something I think we should all be more transparent about.
My premise is that peer review is part of my/your job as a scientist, for which you and I are paid! In my mind, we are therefore paid to do peer review. I work about 45 hours per week in my job at UiT, and I get paid a salary to do that. My contract says that my role is full-time “research and development”, and I think it’s a fairly straightforward argument that peer review is part of the R&D process that I’m paid to do. If you’re a PhD student, it should definitely be part of your research process, and if you’re a professor it should be part of yours too. I think that if you’re paid to do research, you’re paid to do peer review. In fact, if I were to be paid for peer review by publishers, then I would be getting paid twice for the same work, which is a bit... dubious.
Some people seem to imply that because peer review is not mentioned specifically in my contract, I’m essentially doing it for free. This argument doesn’t wash with me because most of the things I do aren’t listed in my contract. Throwing my trash in the bin isn’t explicitly listed as a responsibility, but I do it “for free” anyway. If I worked in a department where we all only did things that appeared explicitly in our contracts, it would be an extremely dysfunctional department.
Others advocate that we should shift the burden of payment from our academic institutions to publishers, since some publishers (MDPI, Frontiers, Springer-Nature, Elsevier) make huge profits at the expense of our institutions. I fully agree with this criticism, but not the proposed solution of “publisher pays”. If you dislike the way in which a publisher operates, just don’t publish with or review for them! I’m convinced it’s possible to make a decent career writing for not-for-profit publishers: examples in my field would be EGU journals (TC, GMD etc), IEEE journals (TGARS, JSTARS etc), and CUP journals (AoG, JoG etc). I hinted at the need for this in a recent PLoS Climate editorial.
In fact, I think “publisher pays” is a road to ruin; peer review is already being commodified and metricised via services such as Publons. Some scientists have responded by proudly reviewing one or two papers per day, totalling thousands of reviews over a few years. I can’t get on board with this – I think that the only way to review manuscripts at such a rate is to provide a superficial appraisal that will inevitably lead to bad analyses entering the literature (adding to what seems to be an uncontrollable fire-hose of lit that nobody can keep up with). We need reviewers to take time over their reviews, and work with diligence and care. Linking individual reviews to a financial payoff is not the way to do that. That’s not to mention the possibility of reviewers getting paid for purely ChatGPT-generated reviews!
There’s a related proposal that suggests publishers could offer a reduced article processing charge (APC) on subsequent submissions in exchange for reviewing service (as opposed to paying scientists in real money). To my mind this also doesn’t wash in the modern world of science, with Plan S and whatnot. Firstly, this would only work for open access publishers. That’s probably fine, because I think OA is the future. But secondly, the issue is that I’m very insensitive to the cost of publishing OA, and so would be insensitive to these incentives to review as well. All the APCs for my articles have been paid centrally by UCL or UiT – I’ve never handled the invoices or made a decision about where to submit based on the size of those fees, and in most cases I don’t even check them. So I just can’t imagine subsidised submission as a reward for reviewing being an effective incentive.
My Peer Reviewing Track Record
One way to get people to spend more time reviewing (and not just do more reviews) is to talk about it more – that’s what motivated this blog. I thought I’d look back on how much I’ve reviewed over the last five years, and what journals I’ve done that reviewing for. There’s naturally an element of caution in this: much of my reviewing is anonymous, and I don’t want to reveal my identity by being overly specific with the stats.
You might now ask “why do you review anonymously; aren’t you prepared to stand behind your views?”. I think this is an understandable and respectable position, and if you hold it I certainly envy you! The reason I don’t sign around 2/3 of my reviews is this: people in science can take criticism extremely personally. Even the most objective critique of a paper’s analysis can be construed as a personal attack. This matters to me because I live and work in a single-blind world where much of my work is reviewed with my name on it, by anonymous peers: my journal submissions, funding proposals and so on. Unfortunately I don’t have full confidence that everyone whose work I review will have a healthy, detached perspective, so I often opt for anonymity to limit blowback from bruised egos. One of the reasons I know people take things personally is that I also have a science ego, and can take things very personally despite trying my best not to.
So here are my stats as of October 2024. Since my first invitation in early 2020, I’ve reviewed eighteen manuscripts. Eight of these have been for The Cryosphere – I’m not including two unsolicited community comments that I’ve made in this count. Of the remaining ten, three have been for GRL, with the seven other reviews split across six journals that I won’t name in defence of my anonymity. Of my eighteen reviews, four have resulted in rejection, and ten have been major revisions leading to subsequent publication. I haven’t tallied my rate of invitation acceptance, but I’d say it’s probably around 70/30 accept/reject.
Reading how common it is for me to recommend rejection and major revisions, you might conclude that I’m a jerk – but hold on. My (arguably) high rate of “reject” & “major revs” decisions results from a strong selection bias. I disproportionately accept invitations to review in cases where I have already identified something in the preprint/abstract that I’m concerned about. This also partly explains why I review so much for TC – as well as liking the journal a lot, TC requires preprinting, which allows me to read a paper in full before deciding whether to accept a review invitation. If I can’t see anything that I’m really concerned about, I’ll often decline the invitation to review. If I can spot a showstopper, then I feel bound to review and help get it fixed. My greatest fear is having bad analysis escape into the lit when I could have stopped it! This is a much greater fear than having a decent manuscript move slowly through review because I declined to review – see Blackstone’s ratio. I know that my attitude is annoying for editors who might struggle to get reviewers for good manuscripts. I recognise this, and my only defence is that I review quite a lot (I think? See below) so will just pay for the sin with some general review-karma.
As stated above, I think I review more than most but not radically so. I’m not sure where in the distribution I would fall – perhaps somewhere in the top quarter, as a guess? Perhaps you, dear reader, could speculate on where I fall in the comments at the bottom of the blog. My first-order approach is that one should review at least as many manuscripts as one has received reviews on as lead author. By that metric I should only have reviewed twelve manuscripts over the course of my career, but my suspicion is that I have a slightly elevated ratio of coauthorship to lead-authorship, which warrants a bit more reviewing. But again, that’s me speculating.
To conclude, I would encourage the readers who’ve made it this far to go a little further: count up the number of manuscripts you’ve reviewed. If your career is long and storied, perhaps just do this over the last five years. See how you compare to mine, and how your figure compares to the number of reviews you’ve received as lead author in that time. If you’re brave enough, post the stats in the comments!