I rarely publish anything related to my day job on this blog, reserving it instead for discussing crime fiction. I'm making an exception for this piece as it's too long for a Facebook post and I've no other suitable place to post it.
I’m finding it increasingly difficult to manage my workload. A big part of the issue is an endless stream of requests for work beyond my usual duties. Since January I’ve been keeping a list of such requests. Here’s a summary table of the number and type of requests – the list doesn’t include university business/committees, circular spam requests/calls, already existing board commitments, follow-on requests to re-review, requests for copies of papers/info, requests for reviews of novels, my own journal editing work, and student references.
In total I’ve received 194 new requests in the first 26 weeks of 2017, varying between four and ten a week.
42 – paper review
16 – grant review (one of which was to undertake 12 site reviews of 2 days each)
3 – book proposal review
4 – book endorsement
2 – book manuscript review
5 – academic reference/tenure request
3 – PhD external examiner
38 – research or media interview/survey/seeking advice
12 – contribute paper/chapter
4 – write a book
6 – work on a project
8 – appoint to advisory board
49 – speak at workshop/conference
1 – be a journal editor
1 – visiting professorship
That’s a lot of potential additional tasks. In fact, taking on all of them would be a full-time job (and even then would involve overtime!). And I do take on quite a few – certainly all of the reference and external examining requests and book endorsements, most of the interviews/advice, and about a third of the paper reviews, grant reviews, speaking invitations, paper/chapter contributions, and advisory board memberships. That adds up to a lot of service work. And all 194 need some correspondence, even if it’s just to say ‘sorry, I can’t at present’.
This weight of expectation raises two main questions. First, what is a reasonable acceptance rate for these requests – in total and across types of request? Second, how are such requests distributed across all academics – and if the distribution is as asymmetrical as my anecdotal evidence from asking colleagues suggests, what can be done to re-balance the workload?
With respect to the first, undertaking a number of reviews commensurate with the work one submits seems like a reasonable minimum expectation. So if one submits two articles in a year, each typically requiring three referees, one should expect to review six papers in return. Ditto for grant applications, etc. But is there an upper threshold beyond reciprocity at which it’s acceptable to flatly turn down additional review work (say 20 papers per year)? Or if one is a journal editor managing 50+ papers a year, can one forego reviewing for other journals? I certainly generate a lot of reviewing work through submitting papers and grant applications, and I’m often asking people to review for Dialogues in Human Geography or contribute chapters to a book. I have a strong sense of obligation to undertake such work in return. Nonetheless, I’m certainly doing a lot more at present than I’m generating, and to make things manageable I’m considering adopting an upper limit on reviews and only reviewing papers/grants that closely fit my expertise and interests. I’m not quite sure, though, what a reasonable upper limit might be – suggestions welcome.
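The arithmetic of reciprocity is easy to sketch. The following is a rough illustration only – the three-referees-per-paper figure matches the example above, while the editor-credit and annual-cap parameters are assumptions of mine, not any agreed standard:

```python
REFEREES_PER_PAPER = 3  # each submission typically generates three reviews


def reviews_owed(papers_submitted, grants_submitted=0,
                 reviews_managed_as_editor=0, annual_cap=None):
    """Reviews one 'owes' the system under simple reciprocity.

    Each submission consumes REFEREES_PER_PAPER reviews, so fairness
    suggests supplying the same number back; editorial work is credited
    against the total, and an optional cap keeps the figure manageable.
    """
    owed = (papers_submitted + grants_submitted) * REFEREES_PER_PAPER
    owed = max(0, owed - reviews_managed_as_editor)
    if annual_cap is not None:
        owed = min(owed, annual_cap)
    return owed


print(reviews_owed(papers_submitted=2))  # 6 – two articles, six reviews in return
```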
My own experience as an editor and of asking colleagues chimes with Stuart Elden’s observations* concerning the ‘exchange economy of peer review’; that is, there is a strong asymmetry both in the requests for reviews and in the acceptance of review work (indeed, my anecdotal evidence is that the ‘decline to review’ rate has increased substantially over the past twenty years). Some people get asked a lot, others infrequently; some do a large number of requested reviews, others decline most requests. Of course, there are reasons for the asymmetry in requests and for declining. Editors and agencies tend to favour established networks and those with an established profile. Academics are under increasing pressure within their workplaces with respect to teaching, research and admin. Yet the entire academic system of evaluation relies on reciprocal peer review.
My sense is that journals and grant agencies need to get much better at spreading this additional academic work around. There were nearly 10,000 attendees at this year’s Association of American Geographers conference, probably 6,000+ of whom were post-PhD academics. There are large numbers of academic geographers across Europe, Asia, Africa and Australasia. There is no shortage of potential experts to review work and to participate on boards, etc. One ‘rule’ we used to apply when I was editing Social and Cultural Geography was that the three referees had to come from two or more continents and at least one had to be an early career scholar. We would often also look for a reviewer outside of the discipline. A useful addition might be at least one reviewer located outside of Anglo-America, and to also consider issues of gender and race. It would also help to diversify editors beyond Anglo-America, thus gaining their networks of scholars (as well as encouraging a more diverse set of submissions and tackling the hegemony of Anglo-American scholarship in leading journals^). Another ‘rule’ some journals apply is that if you submit an article there is an obligation to undertake three reviews in return; if you don’t fulfil this you cannot submit another article to the same journal. In terms of talk and board invitations, I try to pass many of these on to postdocs and early career scholars to help build their profiles. Again, it would be particularly useful to spread invitations with respect to gender and race.
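To make such selection ‘rules’ concrete, here is a rough sketch of how a referee panel could be checked against them; the Reviewer fields and the particular checks are illustrative assumptions of mine (and considerations like gender and race clearly don’t reduce to a simple flag):

```python
from dataclasses import dataclass


@dataclass
class Reviewer:
    name: str
    continent: str
    early_career: bool
    anglo_american: bool  # e.g. based in the UK, Ireland or North America


def panel_ok(panel):
    """True if a three-referee panel satisfies the spreading rules above."""
    spans_continents = len({r.continent for r in panel}) >= 2
    has_early_career = any(r.early_career for r in panel)
    outside_anglo_america = any(not r.anglo_american for r in panel)
    return (len(panel) == 3 and spans_continents
            and has_early_career and outside_anglo_america)


panel = [
    Reviewer("A", "Europe", early_career=True, anglo_american=False),
    Reviewer("B", "North America", early_career=False, anglo_american=True),
    Reviewer("C", "Asia", early_career=False, anglo_american=False),
]
print(panel_ok(panel))  # True
```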
It would be interesting to see data from journals on the status, location, etc. of those being invited to review, the decline rates (and who is more likely to decline), and the extent to which people submitting to a journal are being used as referees; also what policies journals and grant agencies have for recruiting referees and tackling decline rates.
While my list of requests might be a few standard deviations from the norm, there are certainly a number of colleagues who are also dealing with a large number of requests and doing more than their fair share of additional academic labour. My feeling is that we’re well past the point at which we should be proactively tackling who gets asked to do what. I’ll continue to do my share, but I’m going to try to manage requests better. My hope is that I won’t need to say ‘no’ more often because the load is being shared around more effectively. But I suspect that will only happen if there’s a concerted attempt to modify selection procedures for invitations.
Many thanks to folks who wrote comments on a Facebook post I posted a couple of weeks ago about this issue. I’d welcome more feedback – please post a comment.
For a related post on how to deal with requests and when to say 'yes' and 'no', see 'Rules of thumb for making decisions on requests for academic work'.
* Elden, S. (2008) The exchange economy of peer review. Environment and Planning D: Society and Space 26: 951-953. http://journals.sagepub.com/doi/abs/10.1068/d2606eda
^ Kitchin, R. (2003) Cuestionando y desestabilizando la hegemonia angloamericana y del ingles en geografia. Documents d'Anàlisi Geogràfica 42: 17-36. Reprinted as Disrupting and destabilising Anglo-American and English-language hegemony in Geography in Social and Cultural Geography (2005): 6(1): 1-16. http://eprints.maynoothuniversity.ie/3878/1/RK__Disrupting_and_destabilizing.pdf
6 comments:
Thank you for airing this issue, Rob. Your case, with its meticulous record keeping and dramatic numbers, definitely highlights just how out of hand things can get!
As a journal editor myself, I'm aware that it has gotten harder to secure reviews -- you have to ask more people, and you have to nag them more often to get the review in a "timely" manner. Most people remain good-natured and willing to help, but some are so overloaded that they have to decline, or say yes to too much and end up late.
You're right that explicitly working to expand the pool of reviewers is wise, and it does help. But another part of it is the nature of scholarship, where "key" ideas are engaged by certain authors and those authors end up in high demand. Like you, Rob!
Since the pressures on our research and teaching have not diminished, how can we balance these additional service pressures -- and still do the other service (and community engagement) work we'd like to be doing, and still have lives? I think many of us are struggling with these issues.
The challenge is that when one person makes the wise decision to say No, it passes the work along the chain. So it solves the problem only locally -- and of course it does not stop the number of requests, which, as you rightly point out, take time even to politely decline.
I think a lot of academics share these concerns and I'd love to discuss them further. Perhaps at an AAG session in New Orleans next year?
Dydia DeLyser
Last year our department adopted a Google spreadsheet for workload allocation -- every staff member's teaching obligations were visible for everyone else to see, and it transformed the way that I go about finding people to staff my complex methods module: instead of going to my 'usual suspects', people I know well and who I knew would be willing to help me out, I started looking at what people were actually teaching, how many hours they were signed up for, and when. It gave me much more traction when asking for contributions and, as a result, greater depth on the bench.
Perhaps what's needed is a 'simple' (insert long, arduous process here) database that uses ORCID and other UIDs to assemble a joined-up profile of submitters and reviewers: each journal uploads data on its reviews and submissions on a quarterly basis, so that anyone can see how many articles a person has submitted against how many they've reviewed (with 'asked' in column 3 and 'declined' in column 4?). It wouldn't even be necessary to see to which journals the submissions or reviews had been made. Individual reports could be suppressed entirely until someone reached a minimum 'k' in both columns.
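A minimal sketch of such a ledger, assuming ORCID-keyed records and an illustrative suppression threshold k – the field names and numbers are purely illustrative, not any real ORCID or journal API:

```python
from collections import defaultdict
from dataclasses import dataclass


@dataclass
class Ledger:
    submitted: int = 0   # column 1: articles submitted
    reviewed: int = 0    # column 2: reviews completed
    asked: int = 0       # column 3: reviews requested
    declined: int = 0    # column 4: requests declined


MIN_K = 5  # illustrative suppression threshold

ledgers = defaultdict(Ledger)  # keyed by ORCID; no journal names stored


def quarterly_upload(orcid, submitted=0, reviewed=0, asked=0, declined=0):
    """A journal adds its quarterly counts against a person's ORCID."""
    ledger = ledgers[orcid]
    ledger.submitted += submitted
    ledger.reviewed += reviewed
    ledger.asked += asked
    ledger.declined += declined


def report(orcid):
    """Return a ledger only once both key columns reach the minimum k."""
    ledger = ledgers[orcid]
    if ledger.submitted < MIN_K or ledger.reviewed < MIN_K:
        return None  # suppressed until the record is meaningful
    return ledger


quarterly_upload("0000-0000-0000-0001", submitted=6, reviewed=2, asked=9, declined=7)
print(report("0000-0000-0000-0001"))  # None -- fewer than k reviews on record
```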
I suspect we'd find that there are some 'serial offenders' who submit far more than they review, and also some 'serial samaritans' who do the reverse. Or perhaps I'm completely wrong about that; either way, this fits with the growing 'openness' of academic work in many fields and I'd welcome the chance to be proved wrong by data.
Dydia, thanks for the comments. I think a session to discuss is a good idea. We had a seminar the other week about the 'slow university' and there are other related issues that need a bit of thought and action. Not at all easy to solve mind.
Jon, I can see the logic in your proposal, but I think it would be a very large and costly undertaking, and I'd also be worried that it would provide Elsevier, Thomson, etc. with another set of metrics to sell (at exorbitant prices) to universities/governments with which to beat academics - it won't be an open process in that sense. It's also not just a numbers/allocation problem, but one of matching expertise to paper content, etc., which would be more tricky.
I suspect that many people are not serial offenders who submit more than they review while turning down refereeing, but rather that they are simply not asked to review (e.g., they submit one paper that needs 3 reviews, but are asked to review only one paper in return). The system does at present rely on good samaritans. However, as someone pointed out to me on Twitter, these can act as gatekeepers - just because someone does the work doesn't mean they do it well or without an agenda. Which is a whole other bugbear with refereeing!
I wonder if part of the problem is that increasing pressure to publish and bid, from various sources, means on average academics are producing more 'outputs' than ever. I'd be interested to see data on the average number of articles per academic over the last 50 years.
A slow approach would not only reduce the amount of reviewing that needs doing, but it might also spread around the opportunities to do research in the first place, e.g. research funding might be more evenly spread across the sector.
I argued that radical scholars (incl. geographers) can demonstrate commitment by refusing neoliberal pressures to advance personal research above all else, and actually pulling their weight on tasks that are not self-interested and careerist. This includes refereeing and so on. A rather unconventional position. https://thewinnower.com/papers/327-who-are-the-radical-academics-today
In accord with this, I have a huge 'service' workload, although I'm certainly less in demand than Rob, and I do less personal research as a result. It can peak at a day a week, plus managing and laying out 30-40 articles for the Journal of Political Ecology (JPE), plus all of those promotion/tenure-type tasks. I still see people who don't do any of these things, though, which seems unjust, especially when they are full profs. Generally they are unpleasant types that I cannot work with.
On refereeing for the JPE, junior faculty have generally proven more helpful, as have previous journal authors. The worst for not even replying to referee requests have been academics from China, Italy, France and Scandinavia; the most thorough reviews come from the US (n ≈ 500 over 15 years, but I am still not sure if there is a real pattern). One way to 'count' refereeing is to log it on Publons, although this startup company was recently purchased.
I would set a non-quantified limit on service activity. Everybody has their limits.