Talking peer review series #1: On kindness in scholarly evaluation practices – a guest post
Peer review is a central scholarly practice, one that carries enormous weight in terms of gatekeeping: it shapes disciplines, publication patterns and power relations. It governs the (re)distribution of resources such as research grants, promotions, tenure and even larger institutional budgets. As such, it is crucial to understand deeply how it works in situated evaluation practices, and to continually rethink it in order to strive for its best, least imperfect (or at least reasonably imperfect) instances. This new series on the blog, Talking Peer Review, is a means to do just that. You will find here reflections, reports, opinion pieces and, to bring in some metascience, reviews of the current state of the art in the arts and humanities disciplines. In the first episode, Anne Baillot, Professor in German Studies at Le Mans Université and guest researcher at ICAR (CNRS/ENS de Lyon), shares the views of a group of colleagues on the value of kindness in peer review and on possibilities for bringing more transparency to the process. Enjoy!
This text is a slightly edited version of the French article “Comment l’évaluation ouverte renouvelle-t-elle la conversation scientifique?” (“How does open evaluation renew the scientific conversation?”) by Anne Baillot, Céline Barthonnat, Julie Giovacchini, Anthony Pecqueux and Cédric Poivret, published on February 3rd, 2022 in The Conversation-FR (https://theconversation.com/debat-comment-levaluation-ouverte-renouvelle-t-elle-la-conversation-scientifique-175771).
On kindness in scholarly evaluation practices
The Covid crisis shed new light on the impact of access to knowledge: sharing medical and biological studies was essential to the rapid development of treatments and vaccines. It is in this context of global emergency that a relatively recent form of publication, the preprint, suddenly met with wide interest. Preprints are scientific papers that are made available online before they have been officially evaluated and accepted by an academic journal. While they allow for the rapid dissemination of ideas, they present novel, partly unverified information that becomes part of the ongoing scientific conversation. The reliability of preprints has become a polarizing topic in the wake of the pandemic. The public health issues at stake were so massive that even the general press took to dedicating articles to preprints in and of themselves (see: https://www.liberation.fr/sciences/2020/05/29/avec-le-covid-19-la-folie-des-prepublications-scientifiques_1789429/ or https://www.timeshighereducation.com/opinion/covid-19-outbreak-highlights-potential-preprints).
But this surge of interest did not necessarily foster a better understanding of reviewing and evaluation processes, or of their role in the way scientific communities work and produce knowledge.
How research evaluation usually works
In most disciplines, there is a push towards increased rigor and transparency when it comes to the evaluation of scholarly outputs, and the rules that evaluation follows are explicitly intended to promote these goals. To reach them, papers go through the mill of a complex editorial circuit involving different actors.
The authors (usually professional academic staff) submit their draft articles to entities called “journals”. Journals can emanate from learned societies constituted as associations, from research teams, from university presses or from independent commercial publishers whose degree of specialization and/or control over a scientific domain can vary. Journals are organized in committees (editorial and scientific) which are in charge of supervising two essential stages in transforming a personal text into a published scientific article: the evaluation phase (“reviewing”) and the article production stage (editing the text according to the standards and layout of the journal, correcting spelling, improving wording, etc.).
The reviewing process determines whether an article is accepted or rejected by the journal; it is therefore of fundamental importance. Journals have the responsibility to guarantee that submitted texts will be treated with objectivity, avoiding conflicts of interest and personal vendettas. The process should also be designed to offer authors an opportunity to significantly improve their texts, i.e. not only to make them publishable, but also to advance scientific knowledge on a given topic.
In the context of academia (a highly competitive, occasionally toxic sphere), the framework usually favored for evaluation is so-called double-blind peer review, in which, theoretically at least, neither the author nor the reviewer knows whom they are evaluating or being evaluated by. The expert is chosen by the journal’s editorial committees on the basis of their own work and experience on the topic of the submitted article. This general anonymity, combined with confidentiality regarding the evaluation’s wording (expert reports are communicated to the author, but not made public), is considered by most of the academic world to be the best guarantee of a sound and efficient evaluation (see for instance: Mayden, Kelley D. 2012. “Peer Review: Publication’s Gold Standard”. Journal of the Advanced Practitioner in Oncology 3 (2): 117–22).
Double-blind peer review is also a core asset in the “economy of prestige” at play in the dissemination and discussion of scientific publications. Recent studies such as https://zenodo.org/record/5017705 have shown how this reputational leverage empowers publishing houses to exert significant control over scientific research.
Yet double-blind peer review has its drawbacks. It has not always been the favored modus operandi; in fact, it only established itself in the scientific community as late as the 1970s (Bazin and Magne, “De la République des Lettres à l’évaluation en double aveugle: une archéologie des revues académiques”. Revue internationale de psychosociologie et de gestion des comportements organisationnels, 2020/64, vol. XXVI, p. 123-144). In today’s practice, it easily leads to terse, violent criticism, obviously facilitated by anonymity (see Tennant, Jonathan P., and Tony Ross-Hellauer. 2020. “The Limitations to Our Understanding of Peer Review”. Research Integrity and Peer Review 5 (April). https://doi.org/10.1186/s41073-020-00092-1). Even in less confrontational cases, authors and reviewers may feel frustrated by a system that prevents them from exchanging views as the responsible adults and professionals they are.
Open peer review – and kindness
Other forms of peer review are gaining traction in the scientific community (see Ross-Hellauer, Tony. 2017. “What Is Open Peer Review? A Systematic Review”. F1000Research 6 (August): 588. https://doi.org/10.12688/f1000research.11369.2). They follow two principles that double-blind peer review does not provide: transparency and distribution. Transparency means that authors and reviewers know each other’s names and can actually enter into a dialogue. Alternatively, the review reports can be published alongside the articles without revealing the identity of the reviewers, an option that the OPERAS study found particularly attractive for SSH scholars (ibid.). Distribution opens up the reviewing process: reviewers are no longer a restricted circle of experts appointed by a journal; anyone who wishes to delve into the article and comment on it can do so.
These new forms of evaluation, called open peer review, have given rise to the development of platforms such as hypothes.is or Peer Community In. The FOSTER portal even offers a complete online course for training in open evaluation: https://www.fosteropenscience.eu/learning/open-peer-review/#/id/5a17e150c2af651d1e3b1bce
Open peer review formats have several inherent benefits. The articles submitted for publication are of better quality to begin with: since the text is made public prior to evaluation, authors generally take better care of content and form. Additionally, reviews tend to be formulated in a more benevolent way. Arguments and counter-arguments can be made public, and thus contribute to the public scientific debate. Finally, the reviewer pool is potentially much wider than that of journals working with and within their own networks. But open peer review still has its downsides. Due to the extremely hierarchical structure of the academic world, open scientific dialogue is not equally advantageous to everyone involved. For example, young researchers or non-permanent staff may feel better protected by the anonymity of double-blind reviewing than by having their names published or even their evaluations openly accessible online. Even open evaluation requires safeguards to achieve its community-building goal without putting the most vulnerable at risk.
An evaluation that is designed as open from the outset allows for better collective work. Each person can gain recognition for their contribution to scientific research (the annual or ongoing publication of reviewer lists in journals allows this necessary but invisible step to be acknowledged), and the work of the professionals who take on the second step of the editorial process, i.e. copyediting, is also facilitated. Texts that have been thoroughly drafted, crafted, reread by several people, and improved following constructive exchanges among peers are of better quality in terms of both content and form. Several studies present numerical analyses of the positive or negative impact of the different models, blind or open review; they unanimously underline the overall better output of open peer review (for example, Walsh et al. 2000, doi:10.1192/bjp.176.1.47; Ross-Hellauer 2017, https://f1000research.com/articles/6-588/v2; Besançon et al. 2020, https://doi.org/10.1186/s41073-020-00094-z).
It is not just articles…
Quality assessment issues in the academic community do not only arise when it comes to publishing scientific articles. Reviewing and evaluation have become a massive part of researchers’ daily routine: in addition to the publication of articles in journals, there is the publishing of book chapters or books, applications for permanent positions, prizes, medals and other distinctions, participation in conferences and congresses, and the allocation of funding for travel or for research projects; in short, all research activities involve third-party assessment. Reviewers and reviewees are caught in a reward system that has less and less to offer, and that increasingly slips out of the hands of the scientific community (see also the take of https://dariahopen.hypotheses.org/1172 on the need to reassess the value of evaluation in academic careers).
Rethinking the human values of dialogue, openness and kindness as core virtues of the scientific community may allow for a more serene approach to these issues. Valuing kindness does not mean making scientific practice a matter of “good feelings”. It invites us to reassess the web of relationships that makes research possible. Reflecting on these relationships rather than hiding them helps us better see the interdependencies between the different actors who actually carry out scientific practice, beyond individualization, star systems and hierarchy-building. Journals are a good place to try out forms of transparency and pluralization.
What we propose here is conceived as an invitation to debate. As authors, editors and reviewers, we have experienced first-hand what we analyze here. There should be more empirical studies on these topics, more recognition, and more incentives to move to novel models. Some pioneering experiments offer food for thought (such as the journal Vertigo, cf. Bordier 2016, https://hal.archives-ouvertes.fr/hal-01283582); we hope that the research community as a whole will consider and embrace them, and be open to this debate.
————-
English translation: Anne Baillot
Intro block: Erzsébet Tóth-Czifra
Cite this article as: Anne Baillot, Céline Barthonnat, Julie Giovacchini, Anthony Pecqueux, Cédric Poivret, “Talking peer review series #1: On kindness in scholarly evaluation practices – a guest post,” in DARIAH Open, 02/03/2022, https://dariahopen.hypotheses.org/1228.