Open-access (OA) science journals such as PLoS ONE operate under a different model of editorial and peer review than traditional subscription journals. Perhaps the biggest difference is that, at traditional journals, reviewers and editors are usually encouraged to weigh the perceived impact and importance of a study when deciding whether it merits publication. In contrast, such judgments are irrelevant at many OA journals, where review focuses on whether the science itself is technically sound. At these journals, whether a given study is important is a determination made by research consumers themselves rather than by editorial boards. This difference in focus has led some scientists to view OA journals with skepticism and to perceive their review process as “watered down.” However, I would argue that by not focusing on perceived impact and importance, OA journals remove much of the subjectivity from the review process, and this is ultimately beneficial to science.
The problem with emphasizing impact and importance in publication decisions is that these are purely subjective judgments. What is important to one person may be trivial to the next. And what is it that makes a scientific finding impactful or important anyway? Something that will generate a lot of citations? Something that will attract a lot of media attention? Something that challenges the status quo in the field? Different people have different standards of perceived impact and importance, so introducing these factors into the review process leads to arbitrary publication decisions. Let me illustrate this point with an example from my own experience.
A few years ago, I submitted a paper to Psychological Science, a journal generally regarded as a high-profile, high-impact outlet in psychology. Big names in the field publish there, and it’s a journal name many covet having on their resumes. However, I have stopped submitting to this journal because I was not comfortable with how it appeared to make its publication decisions.
My first submission there was a paper that sought to explore some of the psychology behind people’s opposition to legalized same-sex marriage (the study was later published in a different journal; you can read a summary of it here). One of the editors at Psychological Science sent me a rejection notice stating the following:
This is an interesting, and certainly highly topical exploration…but I don't think there is anything particularly surprising here that one wouldn't glean from the daily news. Indeed, a conjoint Google search of "homosexual marriage" or "same-sex marriage" and "threat" gathers at least 45,000 hits. And "same-sex marriage" and "threat to heterosexuals" gathers 16 hits. In other words, these ideas are clearly out there in society and in the ether. Make no mistake; I think this study certainly belongs in the literature, but I don't think most of our 14,000 subscribers would be surprised by these results. Because of this, I don't think the paper is appropriate for the journal.
My paper was desk-rejected, meaning the handling editor refused to even send it out for external review. Why? Not because the paper was unimportant or technically flawed (indeed, not a single negative comment was made about our methods and the editor mentioned that our paper deserved to be published...just not in their journal). Rather, it was because the results weren’t “surprising” enough (notice how the word “surprise” was used twice in the letter). It was almost as though the editor was saying that because our findings weren’t surprising, we didn’t really even need to write them up because we could just rely on common sense instead!
Oh, and apparently Google was the dreaded Reviewer #2 in this case. In addition to our unsurprising results, Google revealed that some of the keywords from our article had previously appeared on the internet. I did not realize exactly how much power Google wields over the scientific review process until I received this decision letter! In the future, I must be more careful to study topics with which Google is less familiar.
To be perfectly clear, I am not griping or whining because Psychological Science rejected this paper. Rejection is a basic part of life in academic publishing and, like every other academic, I have had my fair share of experience with it. Rather, my concern in this particular case is that my paper was rejected based entirely on a Google search and on not being surprising enough. Maybe it’s just me, but I’m not comfortable with these being the main criteria we use to judge which science is "worthy" of publication. "Be more surprising" doesn't constitute constructive criticism either.
Please note that I named the journal in this case only to make the point that totally arbitrary publication decisions occur at some very high-profile outlets; however, I did not name the handling editor because this isn't a personal issue--it's a broader issue with the culture of scientific publishing. I should also mention, again, that this review occurred a few years ago. I no longer really follow this journal, and not just because of my own review experiences, but because I kept seeing so many extremely underpowered studies squeak through the review process, seemingly because they offered counterintuitive or surprising findings. As a result, I cannot speak to its current direction or editors.
That said, a seeming demand for surprising and counterintuitive findings as a prerequisite for publication is pervasive throughout psychology and other fields—I’ve encountered reviewers and editors at various journals who seem to think that science is not important unless it is surprising, and I find this attitude to be very troubling. Science doesn’t have to be surprising in order to make a valuable contribution. For instance, if surprise is our main threshold for publication, then successful replications (which are extremely important to scientific progress) would pretty much never make it into the literature.
It is because of experiences like this that I have begun submitting most of my research to OA journals. I would rather have my work evaluated on the quality of the science than on whether I have sufficiently surprised my editors and reviewers. There are numerous other reasons I have switched to OA publishing, which I have written about extensively here (not the least of which is that my research is made freely available to all, instead of being given away to the big publishing houses, which gouge libraries and taxpayers for access to it). I have also documented what my experience has been like publishing in OA journals (including the review process) here.
Of course, this is not to say that editorial and peer review at OA journals is perfect—it isn’t. There are occasional review failures at OA journals too (see here for a recent and epically bad example--in this case, the editor never should have let this review see the light of day). No journal exists in which all reviews and editorial decisions are completely free of subjectivity and bias. On balance, however, a review process that prioritizes competently conducted and technically sound science over pure shock and awe is one that likely contains less subjectivity and is ultimately more helpful to scientific progress.
Want to learn more about Sex and Psychology? Click here for previous articles or follow the blog on Facebook (facebook.com/psychologyofsex), Twitter (@JustinLehmiller), or Reddit (reddit.com/r/psychologyofsex) to receive updates.
Image Source: 123RF.com