Mètode
The aspiration of most researchers is to publish in the elite science journals: Nature, Cell and Science. An article among their pages allows a research team to play in the major leagues of science. It is not surprising, then, that critical comments from the 2013 Nobel laureate in Medicine, Randy Schekman, who denounced the «tyranny of the luxury journals», caused rivers of ink to flow. In an article for The Guardian, Schekman attacked these journals, claiming they «aggressively curate their brands, in ways more conducive to selling subscriptions than to stimulating the most important research». The scientist opposes a system of assessment of scientific production in which the place of publication matters more than the quality of the work. In fact, the Nobel laureate refused to keep publishing in these journals and encouraged other scientists to do the same.

A Known Debate

The argument on the role of science journals, turned into an assessment tool for scientific production, is by no means new. The impact factor measures the number of citations an article receives after its publication in a science journal. If a paper is good, it is supposed to attract many citations. Theoretically, that is. As Schekman claims, a paper can be cited because it is good «or because it is eye-catching, provocative or wrong». Studies show that articles with media coverage are later cited more by scientists. It is not surprising, then, that the journals with the highest impact factor increasingly operate powerful public relations departments, issuing press releases to catch the media’s attention. Maybe it is too tempting to publish a catchier, or even wrong, paper instead of a better one. «I don’t think they publish wrong information knowingly, even if the number of retracted papers is surprising», said José Pío Beltrán, CSIC Institutional Coordinator in Valencia. The researcher, despite agreeing with Schekman that those journals could improve on a number of things, believes criticism should not be taken to extremes: «It is true that they set trends, select themes, and that worthy research can be left out, but the respect of the scientific community towards those journals is even more true». José Enrique Pérez Ortín, full professor of Biochemistry and Molecular Biology at the University of Valencia, likewise believes that the eagerness to attract the attention of science journals (and, for the journals, the attention of the media) may have increased the interest in publishing in high-impact publications: «Probably when he [Schekman] started, the trend he talks about was not so marked. And it is not that what is published is bad, but that the paper with the most impact is selected from a pool of good papers».
«The argument on the role of science journals, turned into assessment tool for science production, is by no means new» |
Publishing in Open Access

Schekman defended open access journals as the way to end the reign of impact factor publications. Open access journals do not artificially limit the number of papers they accept «and have no expensive subscriptions to promote». He is the editor of one of them: eLife. One of his arguments in favour of open access journals is that scientific research, mainly financed with public funds, must be accessible to society from the first moment. In this journal model, which offers its contents for free, the author or research team usually pays to publish, whereas in traditional journals the journal assumes the costs. This means that only those with a budget for it can consider publishing there. On the other hand, whenever we hear about open access, the issue of quality arises. Are subscription publications more rigorous than open access publications? For José Pío Beltrán, the prestige of journals such as Cell, Nature and Science is taken for granted, although he points out that «prestige is not everlasting and, therefore, it is necessary to keep a high level of rigour. In the case of open access journals, we would have to check every case». Pérez Ortín, whose team has published both in Cell and in Plos One (open access), claims it depends on the journal: «I don’t think high quality open access publications are less strict. Plos One, for instance, keeps the fundamental good-work standards and publishes documents that are maybe not so relevant or catchy at the moment, but are still good».
«It is true that they set trends, select themes, and that worthy research can be left out, but the respect of the scientific community towards those journals is even more true» José Pío Beltrán |
|
Rigour and Quality

Some months ago, Science published a special issue on science communication that included the results of an investigation by biologist and journalist John Bohannon. He had written and sent a fake paper, full of mistakes, to 304 open access journals: 157 accepted the manuscript despite its inconsistencies. Although the most critical voices asked whether the results would have been any different had the fake article been sent to subscription journals, the fact remains that more than a hundred open access journals accepted results with serious mistakes. Even so, the most prestigious journals are not immune to error and sometimes have to retract papers. Plagiarism, methodological inconsistencies or excessively surprising results are some examples of what we find behind a retracted paper. There is even a blog, Retraction Watch, where we can follow news concerning scientific retractions. Controversy, and even error and retraction, is part of science. But so are rigour and verification. Is everything, then, the journals’ fault? Or can the pressure to publish lead researchers to present unverified research? Ethical questions aside, José Pío Beltrán thinks it is absurd to act like that, because «anything in Nature, Cell or Science will be instantly replicated by other research teams, and any error will be discovered».
«It is not that what is published is bad, but that the paper with the most impact is selected from a pool of good papers» José Enrique Pérez Ortín |
|
Imperfection in a Good System?

Another of Randy Schekman’s points is the excessive weight that scientific committees give to the impact factor of these journals when they assess the scientific production of a scientist or team. The fact is acknowledged by the publications themselves. Nature, which responded to the Nobel laureate’s attacks by arguing that its selection of papers is based only on their importance, accepts that the scientific community is «over-dependent» on the journal impact factor system. José Pío Beltrán also criticises scientists: «we make a mistake when we pass judgement taking more into account the publication than the quality of the paper». In this regard, Pérez Ortín believes that «what we call impact is a subjective factor because a topic that seems not relevant or interesting, could change the situation some years from now». In addition, many scientists have argued in recent years that the impact factor is not a perfect system, but it is the best one we have. In a letter published in Allergy in 1998, C. Hoeffel claimed: «Impact factor is not a perfect tool to measure the quality of articles, but there is nothing better […] Experience has shown that in each speciality the best journals are those in which it is most difficult to have an article accepted, and these are the journals that have a high impact factor. These journals existed long before the impact factor was devised». Even Eugene Garfield, one of the fathers of the Science Citation Index, acknowledged in a 2006 article that «the use of journal impacts in evaluating individuals has its inherent dangers». And he added: «In an ideal world, evaluators would read each article and make personal judgments». Will it be necessary to wait for that ideal world? Perhaps reflection and public debate, such as that triggered by Schekman, are a good way to advance in that direction.

Lucía Sapiña. The Two Cultures Observatory. Mètode. University of Valencia