
Peer Review Week 2020

Here at the BPS, we understand that Peer Review may not always be perfect, but it is vital in helping us protect and promote scientific quality and rigour.

05 March 2022

By BPS Policy Unit

The Covid-19 pandemic reinforces the need for trustworthy and reliable research and publications as the scientific community endeavours to find solutions to some of the world's most pressing problems.

So how does Peer Review play its role? And how can we trust in this process?

In support of Peer Review Week 2020, we want to know how people feel about Peer Review.

Is it viewed as important? Do people trust in Peer Review?

We've asked our BPS members, across a range of experience levels and disciplines, some of these burning questions to find out more.

Q1. Why is peer review important?

"Peer review is the cornerstone of science and it is important because it should help safeguard scientific quality and rigor.

When it is working well, it should ensure that peers evaluate their peers in a constructive, balanced and fair manner that provides expert and careful evaluation of new research, findings and ideas"

Professor Daryl O'Connor - Health Psychologist, Professor of Psychology at University of Leeds, Chair of BPS Research Board

"Peer review is important, but it is not the be-all-and-end-all of the research process. It is an important means of communicating research to fellow scientists. It is (or should be) a helpful way of testing out theorising and interpretation with fellow experts.

Sadly, in days of metric-led performance, getting grants and publications is often a matter of career development. So, often, peer reviewers need to be pleased!

Today, as before, there is much great science, and the great papers usually get through the system. But peer reviewers have their agendas and perspectives which can act as an obstacle to the effective creation of knowledge.

Peer review is important, but it is by no means perfect"

Professor Patrick Leman - Developmental Psychologist, Chair of BPS Editorial Advisory Group (Journals)

"To ensure that expertise is involved in establishing quality in research publication. Unfortunately this is not always a condition through which this is established - people recommend their "friends" to review, editors cannot always guarantee that those who agree to review will be the ones with the greatest subject expertise, editors may have their own agenda"

Dr Linda Kaye - Cyberpsychologist, Senior Lecturer in Psychology at Edge Hill University, Chair of BPS Cyberpsychology Section

"I'm not entirely convinced that it is important. I think that it can be important and useful when applied at the right time, which is before research is undertaken.

As journal editor I have overseen hundreds of peer reviews for regular articles and Registered Reports, and I would say the review process, on average, only adds value when peers have the opportunity to constructively evaluate a study's rationale and design while the research can still be modified.

Peer review applied after studies are finished is of limited value in addressing flaws because the horse has left the barn long ago and, as often as not, it digresses to a subjective judgment of whether a study's results are "interesting" or "compelling".

Post-study review for regular articles is a catalyst for outcome bias and, at least within the life and social sciences, is likely to be a significant cause of publication bias"

Professor Chris Chambers - Cognitive Neuroscientist, Head of Brain Stimulation at Cardiff University, Winner of the BPS Book Award 2018 for The Seven Deadly Sins of Psychology: A Manifesto for Reforming the Culture of Scientific Practice

"Peer review helps ensure that only high-quality work is published, whilst at the same time, assisting in improving the quality of that work. Whilst peer review is not perfect, a peer-reviewed published article indicates a seal of approval"

Dr Daniel Jolley - Social Psychologist, Senior Lecturer in Psychology at Northumbria University in Newcastle, Executive Committee Member of BPS Social Psychology Section, Member of the BPS Early Career Network

Q2. What do you think is the most common misperception about peer review?

"For those who have not been through it, there could be the perception that it is an easy process – but reviews can be challenging to write (as a reviewer) and respond to (as an author).

Sometimes reviews are short, but at other times they are long, with multiple points that can be contradictory when there are several reviewers."

Dr Daniel Jolley - Social Psychologist, Senior Lecturer in Psychology at Northumbria University in Newcastle, Executive Committee Member of BPS Social Psychology Section, Member of the BPS Early Career Network

"That it is accurate or, rather, always an accurate evaluation work. When a paper or grant proposal is rejected it is more likely the result of a failure of communication or, particularly in grant reviews panels which are notoriously capricious, influenced by the predilections or even prejudices of the panel.

And communication is a two-way process, so either the significance (or some other aspects of the work) was not clearly written, or it was not clearly understood, or a mixture of the two.

Reviewers often differ, authors often submit to the wrong sort of journal. There is an art to writing and there is an art to successfully publishing research and over the course of an academic career you learn to do it (better), to calibrate your writing against the system."

Professor Patrick Leman - Developmental Psychologist, Chair of BPS Editorial Advisory Group (Journals)

"That all journals treat peer review as a quality assurance exercise. I am aware of some journals and publishers than use questionable reviewers and practices and seem to think this is acceptable.

I have received editor notes back before with reviews which have specifically asked me to include a few articles published in that journal solely as a way of inflating impact factors.

Because of this, peer review is not a consistent indicator of research quality unfortunately"

Dr Linda Kaye - Cyberpsychologist, Senior Lecturer in Psychology at Edge Hill University, Chair of BPS Cyberpsychology Section

"This is an interesting question. A couple things come to mind. First, that scientists are paid to review other scientists' papers.

They are not (as a general rule, but there are some exceptions). Second, that findings reported in a paper are completely robust and replicable just because they have been published in a peer-reviewed journal"

Professor Daryl O'Connor - Health Psychologist, Professor of Psychology at University of Leeds, Chair of BPS Research Board

"That it provides anything beyond the thinnest veneer of quality control.

In most fields, reviewers rarely reanalyse data or code, and the review process is not effective at detecting fraud.

To say that a study is of high quality because it was peer reviewed is, in my view, to misunderstand the capabilities and limitations of the review process"

Professor Chris Chambers - Cognitive Neuroscientist, Head of Brain Stimulation at Cardiff University, Winner of the BPS Book Award 2018 for The Seven Deadly Sins of Psychology: A Manifesto for Reforming the Culture of Scientific Practice

Q3. What is your top tip for someone about to do their first peer review?

"If I am allowed only one tip (and a top one) it is to remember there is at least one human being on the receiving end of your review.

So, unless you actively cultivate a sociopathic personality, be kind, constructive, and open-minded.

We do not demonstrate our intelligence by being harsh, patronising, or hyper-critical. Quite the opposite in fact.

Work out what the author(s) are saying and why they are saying it. If you find errors, point them out and suggest alternative, more appropriate approaches"

Professor Patrick Leman - Developmental Psychologist, Chair of BPS Editorial Advisory Group (Journals)

"If you are an educator, you will be aware of the importance of constructive feedback.

For those who are not educators, provide feedback that can be actionable and not simply criticism (e.g., "the manuscript could benefit from..." rather than "you should have done...").

Recommend ways to improve rather than simply stating what is wrong with the work. Separate your feedback into the following types:

  1. Things that can readily be changed, e.g. presentational issues such as how the manuscript is written and structured, or its use of language.
  2. Conceptual issues, such as how the research was actually done, what measures were used or the methodology"

Dr Linda Kaye - Cyberpsychologist, Senior Lecturer in Psychology at Edge Hill University, Chair of BPS Cyberpsychology Section

"If you are reviewing a regular empirical article, thoroughly evaluate the theory, rationale, methods, and interpretation, but try as hard as possible not to form a judgment of the paper based on the results themselves.

Comments such as "results are unconvincing" or "not compelling" or "impressive" or "novel" are likely to lead editors to accept or reject manuscripts for reasons that should be beyond the control of authors.

Do not punish or reward authors for aspects of their research that should be beyond their control.

In addition, under no circumstances instruct authors to change the rationale or hypotheses in their study, as stated in the introduction.

If the hypotheses don't make sense, then either recommend that these issues are addressed in the Discussion or, if the problems are sufficiently serious, recommend outright rejection.

Encouraging authors to change their hypotheses in the light of the results is a form of hindsight bias known as HARKing that distorts the scientific record"

Professor Chris Chambers - Cognitive Neuroscientist, Head of Brain Stimulation at Cardiff University, Winner of the BPS Book Award 2018 for The Seven Deadly Sins of Psychology: A Manifesto for Reforming the Culture of Scientific Practice

"Take your time – for me, I read the paper, and then after some time, I re-read but taking some notes.

For me, reading it, as I would do any piece of writing, helps me think about the papers as a whole."

Dr Daniel Jolley - Social Psychologist, Senior Lecturer in Psychology at Northumbria University in Newcastle, Executive Committee Member of BPS Social Psychology Section, Member of the BPS Early Career Network


"Ask a senior or more experienced colleague for some recent examples of their reviews as this will provide you with valuable insights into the structure and nature of a scientific review. It will also help allay your fears of what is involved."

Professor Daryl O'Connor - Health Psychologist, Professor of Psychology at University of Leeds, Chair of BPS Research Board

Q4. What would be your top tip for dealing with a negative review?

"Read it, leave it for a day (a week if it's really bad), and then return to it later.

Learn from it: work out what led to the negative comments and, with sufficient distance from the natural, visceral response of disappointment, identify objectively what you agree and don't agree with.

Discuss it with others. Console yourself that the people who never know or acknowledge failure are, in my experience at least, insufferable!

Do not let it knock your self-esteem. Not all editorial decisions turn out to be good: the physicist Enrico Fermi's seminal paper on weak interaction was initially rejected for publication on the grounds that it was "too remote from reality to be of interest to the reader".

He went on to win the Nobel Prize in Physics at the age of 37"

Professor Patrick Leman - Developmental Psychologist, Chair of BPS Editorial Advisory Group (Journals)

"If you have the opportunity to revise the manuscript, copy and paste the reviews into a document and write some quick, unpolished comments in response. These are for your own later reference and can be as instinctive and emotional as you like.

Then take a few days and, when writing the actual response to reviewers, pick out the logical and evidential arguments and discard your emotional reactions.

Always respond politely and dispassionately to reviewers. Give ground where it makes scientific sense to do so, but be polite and firm in rebuttal.

Thank reviewers but don't be effusive. Ensure that all responses are directly linked to the specific review comments.

And last but not least be careful using "reply-all" to tell your co-authors how awful a reviewer is — sometimes your reply will also go to the editor."

Professor Chris Chambers - Cognitive Neuroscientist, Head of Brain Stimulation at Cardiff University, Winner of the BPS Book Award 2018 for The Seven Deadly Sins of Psychology: A Manifesto for Reforming the Culture of Scientific Practice

"You need to assume that the reviewer is on your side, who is trying to make the paper stronger.

Read the review, and then take some time to adjust to the feeling of rejection – which is never nice!

I usually find with fresh eyes, you can find points that are indeed useful, and can start your process of re-editing the paper before a resubmission."

Dr Daniel Jolley - Social Psychologist, Senior Lecturer in Psychology at Northumbria University in Newcastle, Executive Committee Member of BPS Social Psychology Section, Member of the BPS Early Career Network

"Do not take it personally and remember two things:

  1. All scientists, irrespective of career stage and experience, receive negative reviews of their papers and also have their papers rejected.
  2. Nearly all papers are vastly improved following peer review, and that includes negative as well as positive reviews"

Professor Daryl O'Connor - Health Psychologist, Professor of Psychology at University of Leeds, Chair of BPS Research Board

"Dealing with rejections and feelings of failures are unfortunately commonplace within academia."

Dr Linda Kaye - Cyberpsychologist, Senior Lecturer in Psychology at Edge Hill University, Chair of BPS Cyberpsychology Section

Check out Linda's blog, where she writes about journal rejections, gives an overview of the process, and suggests ways to bolster feelings of efficacy in situations where failure is a common experience.

Q5. How do you think more trust in peer review can be achieved? 

"To turn the question around: When all things are balanced, is there a high level of mistrust in peer review?  I'm not sure.

The process is not perfect, but I do think it serves a purpose."

Dr Daniel Jolley - Social Psychologist, Senior Lecturer in Psychology at Northumbria University in Newcastle, Executive Committee Member of BPS Social Psychology Section, Member of the BPS Early Career Network

"Rather than being concerned about trust we should focus on trustworthiness, and at present there is little I find trustworthy about traditional closed review.

Open review is a large part of the answer, where all reviews and decision letters of accepted manuscripts are published alongside the article, with the reviews either signed or anonymous.

Open review, in my opinion, is an essential and long overdue reform that will be effective in educating people about the limitations of peer review, increasing the accountability of journals, and triggering more radical changes to the review process to improve the quality of evaluation."

Professor Chris Chambers - Cognitive Neuroscientist, Head of Brain Stimulation at Cardiff University, Winner of the BPS Book Award 2018 for The Seven Deadly Sins of Psychology: A Manifesto for Reforming the Culture of Scientific Practice

"Overall, I feel there is a good level of trust in peer review. Nevertheless, I recognize that this is an issue that has been subject to a great deal of debate over the last couple of years.

For example, there has been much discussion about the pros and cons of whether reviewers should sign their reviews or not, and/or whether reviewers should be blinded to authorship.  Both of these issues are likely to influence trust.

However, in my view, I think trust will be improved further if editorial decisions, particularly desk rejected papers, are made purely on scientific merit and not on whether the study findings are really exciting, newsworthy or because they are statistically significant."

Professor Daryl O'Connor - Health Psychologist, Professor of Psychology at University of Leeds, Chair of BPS Research Board

"Let's get things in perspective. The publishing world is changing, and the available forums (and formats) for communicating research findings and research ideas are expanding.

That, at first glance, might seem like only a good thing; but more opportunities for publishing research do not necessarily amount to wider access to science.

For many people, research in science (and in other disciplines) is often viewed as inaccessible, elite, and of little personal relevance or practical use.

For psychology in particular, a celebrity can reach a million viewers by expounding an untested theory, whereas the average number of non-self citations of a paper in our field is around one.

So publishing or getting a grant is a means to an end, but good academics realise that the end is to effect positive social or scientific change, not an individual's personal or career advancement.

Peer review is the "in house" system of evaluation, but there's a lot of world outside the house!

Academia forgets at its peril that the purpose of research is not to create a vehicle for intelligent people to demonstrate how much more intelligent than others they are! More of a sense of this common good, and pride in what psychology (and other disciplines) have achieved, would be a good place to start.

That could reduce negative reviewer behaviour and, in turn, generate more trust. Beating ourselves up as a discipline, or as scientists, or trash-talking others' work (either directly or behind the curtain of anonymous peer review) risks playing into the hands of those with anti-science, anti-justice agendas"

Professor Patrick Leman - Developmental Psychologist, Chair of BPS Editorial Advisory Group (Journals)
