Peer review has become a fundamental pillar of modern science. However, it is a mechanism that has undergone few revisions. Now many researchers want to adapt it to the 21st century.
To understand the ins and outs of peer review, we should first be familiar with the process. Peer review is a form of quality control applied to scientific articles (the texts in which scientists report the methods and results of their research), the ones that appear in journals such as Nature, Science, PNAS, and a very long list of other publications.
How exactly does this process work? When a research team has a finding to report, it sends the text of the article (along with as much additional information as necessary) to the journal in which it wishes to publish. An editor reviews the manuscript and, if they consider that the text meets the journal's requirements, sends it to a group of reviewers (usually three, although there can be more).
These reviewers analyze the text and issue their verdict: they can accept it outright, accept it with conditions (asking the authors to revise certain aspects) or reject it. In the second case, an exchange of reviews and revisions begins that can last for months. The final changes are made by the journal itself, but these are typographical, not technical.
And here lies one of the great problems of this mechanism: its slowness. Reviewers can slow down or even stall the process (depending also on the journals' protocols, some of which set limits on response times for authors and reviewers).
Most scientific articles are not urgent, but delays carry costs that aggravate the budgetary constraints of research institutions. To this we must add that neither reviewers nor authors are paid per publication; on the contrary, some publications charge the authors, who usually have dedicated budgets for this purpose.
But sometimes science must be urgent. Situations like the pandemic made it necessary to change the rules of the game and streamline the process without sacrificing the quality of the research, something that was not always achieved.
In contexts like this, open repositories such as ArXiv, BioRxiv or MedRxiv come into play: portals where authors can publish unreviewed manuscripts. Very often these repositories host manuscripts that have already been submitted to a journal and are at some point in the peer review process.
As a result of the review, the final article will include some changes with respect to the manuscript, but scientists around the world can already begin to consider whether the work they are reading might in some way influence their own.
To err is human, and very often an outside view is the best remedy. But as if errors were not enough, there is also malice. Whether for the sake of glory or precisely to expose the shortcomings of the scientific publishing ecosystem, many researchers have submitted fake articles to journals, and some have managed to get them published.
The review of scientific work does not end with peer review. The methodology is a key section in scientific studies and one of its functions is to allow other scientists to replicate the experiments to validate or falsify the results. This is a different process, not to be confused with peer review, but even more fundamental to science.
There are alternatives
To say that the peer review process has remained unchanged in recent decades would be inaccurate, and proof of this is the appearance of alternative or complementary mechanisms: some on the margins, others already established in what we might call the scientific mainstream.
One example of this border between the alternative and the established is the repositories mentioned above (ArXiv, BioRxiv or MedRxiv), or portals such as ResearchGate, halfway between a repository and a social network for researchers.
The science writer Richard Sprague proposed in an article for the magazine neo.life that the scientific community take note of technology companies, which are much faster at publishing their progress in the form of products.
He compares science's peer review process to alpha and beta releases, in which unfinished versions of software are distributed to a limited number of users who provide feedback.
“What if science had its own bottom-up evaluation system, not just for the big final discoveries but for everything that researchers generate? That is the idea behind some new proposed peer review systems,” explains Sprague.
Sprague cites some of these new alternative systems, the first being the "open peer review" of the journal eLife. The journal began publishing the manuscripts of articles still under review so that the community could make its own small contribution to the process. A few months ago it announced that it would stop relying on reviewers to decide whether an article gets published, focusing instead on this open peer review.
The blockchain environment also plays a role in this process for some platforms. One example, also included by Sprague in his article, is The Longevity Decentralized Review (TLDR). This platform, intended for publishing content related to longevity research, aims to use a system of open comments that can in turn be rated positively or negatively.
A similar platform, in that it combines user contributions with blockchain, is ResearchHub. Beyond the usefulness of the feedback itself, TLDR and ResearchHub offer users their own tokens as a way to compensate them for their work, work that today, let's remember, usually goes unpaid.
These are not fail-safe systems, Sprague admits: "Hyper-competitive people will find a way to cheat, but in the radical transparency of the blockchain, it's hard." Malicious reviewing is also theoretically possible, with little to lose "except reputation".
In contrast to the decentralization that blockchain implies, there are also proposals that lean the opposite way, toward recentralization. In another article, the science writer Gemma Goldie proposed, as an example, the use of centralized platforms that would make it easier to find researchers with the availability and expertise to carry out reviews. "This is important because lack of time or experience are the top two reasons researchers cite for refusing to review research."
The process of evaluating science is changing, but the margins always move faster than the consensus, and we will have to wait and see which mechanisms end up prevailing. The journal ecosystem is dominated by large companies that control dozens of titles, from general publications to those aimed at a particular, narrow scientific niche. These will probably be the companies that have the final say in how scientific achievements are evaluated.
Images | monstera / Mikhail Nilov / sigmund