The International Federation of Operational Research Societies (IFORS) recently reposted an article by Richard Monastersky that first appeared in The Chronicle. When I looked up the original article, I realized it is now six years old, but many of the issues remain very timely. The article describes the use and overuse of impact factors, and the techniques that some less scrupulous people employ to game the system.
First, my non-academic readers might ask: what are impact factors? They are the brainchild of a Philadelphia-based researcher named Eugene Garfield, who in the late 1950s came up with "a grading system for journals, that could help him pick out the most important publications from the ranks of lesser titles." The system relies on "tallying up the number of citations an average article in each journal received." Fair enough. But the tool now plays an important role in tenure and hiring decisions: universities want a simple metric to evaluate the quality of research, and papers accepted by journals with high impact factors seem likely to be of high quality. This, predictably, has led to some abuse.
The calculation of impact factors, by the company ISI (owned by Thomson Corporation), goes as follows: "To calculate the most recent factor for the journal Nature, for example, the company tallied the number of citations in 2004 to all of the articles that Nature published in 2002 and 2003. Those citations were divided by the number of articles the journal published in those two years, yielding an impact factor of 32.182 — the ninth-highest of all journals."
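The arithmetic behind that figure is simple enough to sketch in a few lines of Python. Only the formula comes from the article; the sample numbers below are invented for illustration.

```python
def impact_factor(citations, articles):
    """Two-year impact factor: citations received this year to articles
    published in the two preceding years, divided by the number of
    articles published in those two years."""
    return citations / articles

# Hypothetical journal: 1,800 articles published over the two-year
# window, drawing 9,000 citations in the measurement year.
print(impact_factor(9000, 1800))  # 5.0
```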
But trying to draw conclusions about the quality of individual papers from an average is bound to generate misconceptions and errors. "For example, a quarter of the articles in Nature last year drew 89 percent of the citations to that journal, so a vast majority of the articles received far fewer than the average of 32 citations reflected in the most recent impact factor." If a paper is accepted by a top-ranked journal, does its impact on the field reflect the importance of the journal, or is it instead among the half of accepted papers whose quality falls below the journal's median?
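The skew Monastersky describes is easy to illustrate with made-up numbers: a handful of heavily cited papers can pull the mean far above what a typical article receives.

```python
from statistics import mean, median

# Illustrative (invented) citation counts for ten articles in a journal:
# two heavily cited papers dominate the total.
counts = [120, 40, 10, 5, 3, 2, 1, 1, 0, 0]

print(mean(counts))    # 18.2 -- what an impact-factor-style average reflects
print(median(counts))  # 2.5  -- what a typical article actually receives
```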
The following caught my attention: "In China, scientists get cash bonuses for publishing in high-impact journals, and graduate students in physics at some universities must place at least two articles in journals with a combined impact factor of 4 to get their Ph.D.’s." Is putting so much pressure on scientists really a good idea? Maybe such high stakes explain some egregious cases of plagiarism over the past few years, such as the one of which Prof. Alice Agogino of UC Berkeley was recently a victim. You can find her paper (from 2004) here, and the other paper (from 2007) there.
But what should impact factors really be blamed for? I find the claim that they are "threatening to skew the course of scientific research" and that "investigators are now more likely to chase after fashionable topics" because they want to get published in high-impact journals frankly outlandish. Investigators chase after fashionable topics because they need grants to support their research groups and get promoted. Journals in my field (operations research) publish contributions across many areas, from theory to logistics to transportation, and no one needs impact factors to know that the hot topics right now are health care and energy. Besides, the fact that your research focuses on a hot topic doesn't mean you'll make the sort of meaningful contribution that warrants publication in a top journal.
Another dynamic that was left unanalyzed in the article is that between journals and senior researchers. The article spends a lot of time and space discussing the idea that editors might reject a paper they feel won't be cited enough and thus would be detrimental to the journal's impact factor, but if the paper is co-authored by a top researcher and falls within that researcher's core area of expertise, it is hard to imagine that the paper would not be cited many, many times.
So a more relevant question would be: among the papers published by those top journals, how many are authored by junior researchers without a senior co-author? How many by junior researchers with other junior researchers? How many by junior researchers with students? If many highly-ranked journals publish mostly the work of senior researchers, then it is not reasonable to ask junior researchers, who are supposed to demonstrate their own research potential without the guidance of senior staff, to publish in those journals too in order to benefit from the golden aura of the high impact factor. (Some junior researchers certainly will be able to, but that shouldn't be a widespread standard.) This reminds me of those statistics about average salaries per profession, where journalists forget to mention that the averages are computed over employees aged 25 to 64, giving the impression that any college graduate can hope to make $80k a year within a few short years of graduating. Maybe high impact factors are due to the predominance of articles by well-established scientists with decades of experience in the profession.
This being said, a lot of the article rang true. For instance: "Journal editors have learned how to manipulate the system, sometimes through legitimate editorial choices and other times through deceptive practices that artificially inflate their own rankings. Several ecology journals, for example, routinely ask authors to add citations to previous articles from that same journal, a policy that pushes up its impact factor."
I had heard of the practice before, although I had never experienced it. I have, though, been in a situation where an anonymous reviewer requested that a citation be added to my (and my student's) paper, and when I looked up the reference, it didn't seem to have anything in common with our paper's topic. If, as seems likely, the reviewer was one of the authors of that other paper, then having it cited increases his citation count, a metric universities take into consideration at promotion time. I could have complained and risked leaving my paper in limbo for several more weeks or months; I decided I didn't have the time to wage that fight and simply made the citation as inconspicuous as possible.
Self-citations, where previous papers published in a journal are cited in that same journal, are certainly a potential issue. The article mentions the case of a researcher who "sent a manuscript to the Journal of Applied Ecology and received this e-mail response from an editor: “I should like you to look at some recent issues of the Journal of Applied Ecology and add citations to any relevant papers you might find. This helps our authors by drawing attention to their work, and also adds internal integrity to the Journal’s themes.”"
The editor who sent the email later said that "he never intended the request to be read as a requirement," which is disingenuous: the paper had not yet been accepted, and researchers can't submit their paper to other journals until it has been withdrawn or rejected, which puts them at the editors' mercy.
Recently, ISI has developed an alternative methodology, called the eigenfactor, to address the flaws of the impact factor: citations from high-impact journals are weighted more heavily. To evaluate a scientist's contribution to his field, people should use the H-index rather than draw inferences from journal-level metrics, although the H-index suffers from drawbacks of its own. And as it becomes more widely used, it is sure to be more widely misused too.
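For readers unfamiliar with it, the H-index is the largest number h such that the author has h papers cited at least h times each. A minimal sketch, with an invented publication record:

```python
def h_index(citation_counts):
    """Largest h such that h papers have at least h citations each."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # the paper at this rank still clears the bar
        else:
            break
    return h

# Invented record: five papers with these citation counts.
print(h_index([10, 8, 5, 4, 3]))  # 4: four papers cited at least 4 times
```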