Should impact factor stay or go?

November 3, 2016

Dr. Janice Nigro, Filipodia Editor

 

Science is an art. After reading an article, we might actually be moved to declare “what a magnificent piece of work”, as if we were standing in front of a Picasso. We instinctively feel the power of the results, and yet, we feel compelled to reduce our accomplishments in science to a series of cold numbers for professional assessment and advancement.

 

One of those numbers, the journal impact factor, has been a long-standing topic of discussion [1,2]. Because of an experience early in my career, I never thought much about it. In graduate school, one of the most important papers ever to be reported in my field of research (p53) was rejected by several high impact journals [3]. And guess what, it didn’t matter. Scientists were still able to find and read the article, even in an era of print-only journals. It became one of the most highly cited articles ever in the field because the results were accurate and groundbreaking.

 

Throughout my career at the bench, I was frankly more worried about what to publish than where to publish. If I came up with the right kind of question, the papers would come, and they would fit where they fit. I wanted to tell a scientific story that was interesting (at least to me) and that deepened over time.

 

Impact factor has become increasingly important, however, because today the number itself has somehow come to symbolize the quality of the scientific work that an individual or institution conducts. But does it really? And is there perhaps a better or more transparent metric for evaluating the importance of our published studies?

 

Impact factor is not so scientific. Even though we are so quick to use it to evaluate science and scientists, impact factor, ironically, is not so scientifically sound. The impact factor is a ratio: the total number of citations a journal receives in a given year to the items it published in the previous two years, divided by the number of “citable or source items” it published over those two years. The impact factor thus represents the average number of times an article in a particular journal is likely to be cited. The current impact factor of Nature, for example, is ~38, meaning that an article published in the 2013-2014 window was, on average, cited ~38 times in 2015.
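
As a back-of-the-envelope sketch of that arithmetic (the numbers below are invented for a hypothetical journal, not Nature’s actual figures), the calculation boils down to a single division:

    # Hypothetical journal; the figures are made up purely to illustrate the arithmetic.
    citations_in_2015_to_2013_2014_items = 31000  # citations received in 2015 to items published in 2013-2014
    citable_items_2013_2014 = 815                 # "citable or source items" published in 2013-2014

    impact_factor = citations_in_2015_to_2013_2014_items / citable_items_2013_2014
    print(round(impact_factor, 1))  # 38.0 -- the "average" article was cited about 38 times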

 

The impact factor is simply a mean, but it is a mean reported without a standard deviation. A few heavily cited articles can therefore significantly influence the final number. In 2005, Nature took a closer look at this issue and found that only 50 of the ~1,800 so-called citable or source items published in the two-year window were cited more than 100 times, while the majority of articles were cited fewer than 20 times [4].
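
A toy example (with invented citation counts) shows how easily a handful of blockbuster papers can pull the mean far away from what a typical article experiences:

    import statistics

    # Invented citation counts for ten articles in a hypothetical journal.
    citations = [2, 3, 4, 5, 6, 7, 8, 9, 150, 300]

    print(statistics.mean(citations))    # 49.4 -- the impact-factor-style average
    print(statistics.median(citations))  # 6.5  -- what the typical article actually receives
    print(statistics.stdev(citations))   # ~99  -- the spread that never gets reported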

 

The second major issue with the equation is how the numerator and denominator are determined. Citations to any type of written item count in the numerator, including letters to the editor, commentaries, and even retractions as well as basic research articles, yet not all of those item types are necessarily included in the denominator. Furthermore, journals can more directly influence impact factor by playing with how many and what type of articles they publish; reviews, for example, typically accrue more citations. More on top and less on the bottom, however you manage it, will increase the impact factor. PLoS Medicine reported in 2006 that its impact factor could swing from 3 to 11.3 depending on what was counted in the denominator [5]. At best, we can call impact factor an “in the ballpark” calculation.
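
Just how sensitive the number is to the denominator is easy to see with toy figures (again invented, chosen only to reproduce a swing of roughly that size):

    # Same invented numerator, different judgments about what counts as a "citable item".
    citations = 1130

    broad_denominator = 377    # editorials, letters, etc. counted as citable items
    narrow_denominator = 100   # only primary research articles counted

    print(round(citations / broad_denominator, 1))   # 3.0
    print(round(citations / narrow_denominator, 1))  # 11.3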

 

Finally, however high it is, impact factor is not a fair assessment of the quality of the science presented in the individual articles published in any journal. Retractions occur in even the most prestigious of journals, and the most highly cited basic research articles are not necessarily the cleverest. Exome sequencing articles are highly revealing and necessary in the cancer field, but no one can deny that the real work is yet to come in terms of functional studies and effective treatment strategies. Meanwhile, articles that are not as highly cited might report novel findings in niche fields, or findings that have simply been discovered before their time [6].

 

New metrics. To address some of the deficiencies of impact factor, new metrics have recently been developed. In the first of two recent articles, the authors investigate the use of citation distribution curves to evaluate journals [7]. In this type of analysis, the number of articles (Y-axis) achieving a given citation count (X-axis) is plotted. Interestingly, the distribution curves for all journals look much the same: a peak at low citation counts, where most articles sit, followed by a sharp drop into a long tail of articles cited to varying, greater degrees. Such figures make it plain that a minority of articles drive the impact factor of every journal.
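
A minimal sketch of such a plot, assuming nothing more than a list of per-article citation counts (invented here), might look like this:

    import collections
    import matplotlib.pyplot as plt

    # Invented per-article citation counts for a hypothetical journal's two-year window.
    citations_per_article = [0, 1, 1, 2, 2, 2, 3, 3, 4, 5, 5, 6, 8, 12, 40, 95, 210]

    counts = collections.Counter(citations_per_article)
    x = sorted(counts)                  # citation count
    y = [counts[c] for c in x]          # number of articles with that count

    plt.plot(x, y, marker="o")
    plt.xlabel("Citations per article")
    plt.ylabel("Number of articles")
    plt.title("Citation distribution (toy data): a peak at low counts, then a long tail")
    plt.show()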

 

In a second paper, the authors present a strategy for assessing the impact of individual articles. They have developed the Relative Citation Ratio (RCR), which is based on the number of citations a paper receives relative to its co-citation network (the set of articles that are referenced alongside it; [8]). For example, article 1 has in its reference list the article of interest as well as articles 2 and 3. Each of these articles has its own reference list, but because the article of interest, article 2, and article 3 appear in the same reference list, they belong to one co-citation network. The RCR is the number of citations of the article of interest divided by an expected number of citations (based on the citation rates of the co-cited articles), where 0 = no citations, 1 = the average, and ≥ 2 = at least twice the average [9]. Ultimately, the impact of a paper can be evaluated independently of its field.
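
As a simplified sketch of the idea (not the NIH’s exact algorithm, which works with citation rates per year and field-normalized benchmarks; all numbers below are invented), the ratio might be computed like this:

    # Simplified sketch of the RCR idea: compare an article's citation rate
    # with the average rate of the articles it is co-cited with.

    article_citations_per_year = 9.0

    # Citation rates (citations per year) of the co-cited articles.
    co_cited_rates = [3.0, 4.5, 5.0, 6.5, 4.0, 7.0]
    expected_rate = sum(co_cited_rates) / len(co_cited_rates)   # 5.0

    rcr = article_citations_per_year / expected_rate
    print(round(rcr, 1))  # 1.8 -- cited nearly twice as often as its peers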

 

But are metrics really necessary? I have to admit that after reading several of these articles, including those on the new approaches, I did not feel particularly enlightened. Impact factor alone is a metric without much depth, and the newer metrics have not convinced me, not because they are not valid or informative, but because numbers still tell a limited story. We should all find it a bit maddening that, after all of the education, the hours at the lab bench, and the efforts to collaborate, we want to spend time developing a set of metrics that makes it so simple to evaluate ourselves and each other. “Why should it be so simple?” I ask, because we do not approach our work that simply. No equation, for example, accounts for the challenges a candidate had to overcome to achieve similar goals in a very different environment.

 

Action. In 2013, the Nobel Prize-winning scientist Randy Schekman wrote a piece about his frustration with the system and the negative effect he felt it was having on scientific progress [10]. In it, he declared that he would no longer pursue publication in so-called high impact journals. He has also championed a document, the San Francisco Declaration on Research Assessment (DORA; [11]), which outlines alternative ways to evaluate science and scientists. It has since been signed by thousands of scientists worldwide (anyone can sign it). These two documents emphasize that we need to shift our focus to content, and to what we can do in our roles as scientists, employers, reviewers, and so on to make that happen.

 

Most high impact journals have been around for decades, some for centuries, and they have thus been historically important (mostly in the West) across the disciplines of science. Our scientific world has expanded tremendously since their establishment, for diverse reasons, political and otherwise, and we now also have the Internet and Open Access. Today we are far less limited in what is available for us, for anyone, to read, and in when and where we have the opportunity to discuss it. We have the chance to be truly academic. So let’s get started!

 

  1. Gowrishankar J, Divakar P (1999) Sprucing up one’s impact factor. Nature 401: 321-322.
  2. Baylis M, Gravenor M, Kao R (1999) Sprucing up one’s impact factor. Nature 401: 322.
  3. Kastan MB, Onyekwere O, Sidransky D, Vogelstein B, Craig RW (1991) Participation of p53 protein in the cellular response to DNA damage. Cancer Res 51: 6304-6311.
  4. (2005) Not-so-deep impact. Nature 435: 1003-1004.
  5. (2006) The impact factor game. It is time to find a better way to assess the scientific literature. PLoS Med 3: e291.
  6. McClintock B (1950) The origin and behavior of mutable loci in maize. Proc Natl Acad Sci U S A 36: 344-355.
  7. Larivière V, Kiermer V, MacCallum CJ, McNutt M, Patterson M, et al. (2016) A simple proposal for the publication of journal citation distributions. bioRxiv: https://dx.doi.org/10.1101/062109
  8. Hutchins BI, Yuan X, Anderson JM, Santangelo GM (2016) Relative Citation Ratio (RCR): A New Metric That Uses Citation Rates to Measure Influence at the Article Level. PLoS Biol 14: e1002541.
  9. Lauer M (2016) Measuring Impact of NIH-supported Publications with a New Metric: the Relative Citation Ratio. https://nexus.od.nih.gov/all/2016/09/08/nih-rcr/
  10. Schekman R (2013) How journals like Nature, Cell and Science are damaging science. The Guardian. https://www.theguardian.com/commentisfree/2013/dec/09/how-journals-nature-science-cell-damage-science
  11. (2013) The San Francisco Declaration on Research Assessment. https://www.ascb.org/dora/