
Opinion

When I use a word . . . Academic publishing: the impact factor

BMJ 2025; 388 doi: https://doi.org/10.1136/bmj.r333 (Published 14 February 2025) Cite this as: BMJ 2025;388:r333
Jeffrey K Aronson
Centre for Evidence Based Medicine, Nuffield Department of Primary Care Health Sciences, University of Oxford, Oxford, UK
Follow Jeffrey on X: @JKAronson

The citation index, first seeded as an embryo by the information scientist Eugene Garfield in the 1950s, was delivered in the 1960s as a fully formed child, the Science Citation Index. Its twin brother was the journal impact factor. Defined as the average number of citations that a journal’s citable papers receive, the impact factor was originally intended by Garfield to be a tracking device and a retrieval tool, particularly so that fraudulent, incomplete, or obsolete data could be more easily detected. However, since then it has repeatedly been used to assess the quality of authors’ research, particularly for purposes of funding and awarding tenure, contrary to Garfield’s advice that this was not a legitimate use. This has led to a regrettable “publish or perish” culture in academic research, with knock-on effects, such as paper mills and predatory journals, and neglect of other academic contributions, such as teaching and mentoring, reviewing and editing scholarly texts, contributions to policy, and, in medical practice, work at the bedside or in the clinic, all of which have suffered as a result.

“Your four best papers”

It’s the year 2000. The next UK Research Assessment Exercise (RAE) is soon to be conducted, and my university is preparing for the ordeal. Everyone in my faculty receives a message from a colleague who has been deputed to collect data on our publications. He writes asking each of us to nominate our four best papers. But he doesn’t explain what “best” means.

Perhaps at that time some thought that they should nominate the four papers that described what they considered to be their best research. I simply assumed that “best” meant published in journals with high impact factors. It turned out that that was what was wanted—the aim was to discover how many of us had published papers in four journals whose collective impact factors totalled at least 20. By chance I had recently been a co-author on a paper that had been published in The Lancet,1 which at the time had an impact factor of around 15. I didn’t have to try hard to find three other papers published in journals whose impact factors took me over the line, although I didn’t know at the time where the line was being drawn.

But The Lancet’s impact factor today is 98.4, and according to Clarivate Web of Science my co-authored paper has been cited only 46 times, while Google Scholar gives it 69. Not particularly impressive.

And if the editor of the day had taken against our paper, it would have ended up in a journal with a much lower impact factor, and I would probably have received a stiff letter on headed notepaper telling me to do better or else.

Impact

Impact is all the rage these days. The word comes from the IndoEuropean root PAG or PAK, to fix, fasten, or bind. Nasalise it and you get the Latin verb pangere, to insert firmly or to fix by driving into, for example, the ground. Add the prefix in-, and you get impingere, to dash against or cause to collide with, which gives us “impinge.” And the supine form of impingere, impactum, gives us “impact.”

The noun “impact” is defined in the Oxford English Dictionary (OED) as “The act of impinging; the striking of one body against another; collision.”2 Its first attested use is from 1781, and a few years later it acquired a figurative use: “the effective action of one thing or person upon another; the effect of such action; influence; impression.”

The OED also gives an instance of the attributive use of “impact,” i.e. using the noun as an adjective. It comes from a book titled Unseen Universe by Balfour Stewart and Peter Guthrie Tait: “Now, all attempts as yet made to connect it with the luminiferous ether, or the medium required to explain electric and magnetic distance-action, have completely failed; so that we are apparently driven to the impact theory as the only tenable one.”3 The OED gives the date of this as 1878, although it should be 1875, a very minor antedating. However, “impact theory” is the only instance it gives of the attributive use; it does not, for example, mention “impact factor.”

In recent years the derived adjective “impactful” has also been used with increasing frequency, so it is surprising to discover from the OED that it was coined in the 1930s.4 And the verb “impact” is increasingly replacing the simpler and often preferable verb “affect.”

Factor

The origin of the word “factor” is more complicated. It starts with a simple IndoEuropean root, DHE, to set or put, which I have discussed in detail before.5 Briefly, if you take the zero-grade form of the root and add the letter K as a suffix, you get DHƏ-K, from which words such as fact, faction, and factitious eventually derive, via the Latin verbal derivative facere, to do. The English word “factor” is taken directly from the Latin noun factor, one who does something, in many senses, such as an author, perpetrator, or player.

Among the many meanings of “factor” in English is the general meaning “an element or constituent, esp. one which contributes to or influences a process or result.”6 This in turn has many applications, in, for example, mathematics, medicine, and genetics. More specifically, it can mean “with [a] modifying word: the influence or significance of something specified, in the context of a larger situation; the specified element as a (usually important) component affecting the outcome, nature, or perception of something.”

Examples of this use that the OED gives include the human factor, race factor, excitement factor, and cringe factor. But not impact factor.

Impact factor

An impact factor was originally an engineering concept, meaning a measure of the intensity of a physical impact. Here, for example, is an instance from 1918, in a description of the landing gear of a US day bomber: “On the basis of this landing weight, the dynamic or impact factor for the shock absorbers would be ... 5.02.”7 It sounds like a bumpy ride.

The term “impact factor,” referring to a measure of the average number of citations garnered by citable publications in a given journal, was introduced in the 1950s by the information scientist Eugene Garfield (1925–2017). Garfield wrote, “I propose a bibliographic system for science literature that can eliminate the uncritical citation of fraudulent, incomplete, or obsolete data by making it possible for the conscientious scholar to be aware of criticisms of earlier papers.”8 That was how he introduced the idea of a citation index, which he defined as “an ordered list of cited articles, each accompanied by a list of citing articles.”9 He claimed that such an index would make it easy for scholars to “check all papers that have cited or criticized papers [containing fraudulent, incomplete or obsolete data], if they could be located quickly.”
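
Garfield’s definition maps naturally onto a simple data structure. The sketch below, in Python with purely hypothetical article identifiers, builds such an index from pairs of citing and cited articles; it is an illustration of the idea, not Garfield’s own implementation.

```python
from collections import defaultdict

# (citing article, cited article) pairs, as might be extracted from
# reference lists; the identifiers are invented for illustration.
citation_pairs = [
    ("Smith 1962", "Jones 1955"),
    ("Brown 1963", "Jones 1955"),
    ("Brown 1963", "Lee 1950"),
]

# "An ordered list of cited articles, each accompanied by a list of
# citing articles."
citation_index = defaultdict(list)
for citing, cited in citation_pairs:
    citation_index[cited].append(citing)

for cited in sorted(citation_index):
    print(f"{cited} is cited by: {', '.join(citation_index[cited])}")
```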

A citation index, Garfield also asserted, “would clearly be particularly useful in historical research, when one is trying to evaluate the significance of a particular work and its impact on the literature and thinking of the period. Such an ‘impact factor’ may be much more indicative than an absolute count of the number of a scientist's publications.” Garfield’s use of inverted commas around each of the three instances of the phrase that he included in his paper8 suggests that it was newly minted.

Garfield later used his citation index to forecast future science awards, such as Lasker Awards and Nobel Prizes.9 10 He noted that those who won such prizes tended to be very highly cited. However, it was also the case that many highly cited scientists did not win such awards. Garfield suggested that this was not because such individuals had made less of an impact than those who won the prizes, but because of the ways in which prize winners were chosen by committees. Of course, an alternative explanation would have been that highly cited scientists might be influential without necessarily producing outputs of the kind that win major prizes, instead providing tools that others could use or ideas that they could develop. An excellent example is Lowry and colleagues, who came top of the list of the 50 most cited authors in the citation index for 1967 on the strength of a paper describing a method for measuring protein.11

This illustrates the fact that methods papers are often highly cited and potentially influential, but rarely win major prizes. Another example is a 1979 paper by Hills and Armitage describing how to design a two-period crossover clinical trial, one of the most highly cited papers ever published in the British Journal of Clinical Pharmacology,12 with 1677 citations according to Google Scholar.

Having introduced the term “impact factor,” Garfield seems to have been suggesting that the impact factor of a paper would simply be the number of citations it had accrued.8 Later, however, he defined what he called the journal impact factor as the “average citations per published item,” which he illustrated using 1969 citation data, dividing the number of citations to papers published in 1967 and 1968 in each of 152 journals by the number of articles each journal published in those years.13 For example, in 1969 the journal Pharmacological Reviews garnered 448 citations to its 20 publications during 1967 and 1968, giving it an impact factor of 448/20, or 22.4, the third highest in Garfield’s 1969 list.
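
To make the arithmetic concrete, here is a minimal sketch of that calculation in Python, using the Pharmacological Reviews figures quoted above; the function name is mine, not Garfield’s.

```python
def journal_impact_factor(citations: int, citable_items: int) -> float:
    """Garfield's journal impact factor: average citations per published item."""
    return citations / citable_items

# 448 citations in 1969 to the 20 items Pharmacological Reviews
# published in 1967 and 1968.
print(journal_impact_factor(448, 20))  # 22.4
```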

A final thought

In a long footnote to his 1972 paper13 Garfield discussed the limitations of the impact factor. He noted that it would be adversely affected by papers that had not been cited at all and favourably affected by papers with unusually high numbers of citations. Nowhere in the paper did he discuss the distribution of citations, but his comments make it clear that the impact factor would be a highly unsatisfactory measure when the distribution of citations of individual papers in a journal was not normal. And it generally is not: citation counts are highly skewed, and a high percentage of papers are never cited at all, not even by their own authors.
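
A small, entirely hypothetical illustration of the problem: when a journal’s citation counts are skewed, the mean, which is what the impact factor reports, sits far above what a typical paper in that journal achieves.

```python
from statistics import mean, median

# Invented citation counts for ten papers in a hypothetical journal:
# most are barely cited, one is cited heavily.
citations_per_paper = [0, 0, 0, 0, 1, 1, 2, 3, 5, 120]

print(mean(citations_per_paper))    # 13.2, the "impact factor" view
print(median(citations_per_paper))  # 1.0, the typical paper's experience
```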

Garfield’s purpose in creating the Science Citation Index (SCI), which finally appeared in the early 1960s, was to provide a tracking device and a retrieval tool, no more. In later years he often emphasised that a citation index or impact factor was not meant to be used to judge the quality of individuals’ research, a warning that has repeatedly been ignored ever since. The idea that the only thing that matters in academic practice is the apparent impact of one’s research has been damaging. It has led to the “publish or perish” culture, with knock-on effects, such as paper mills and predatory journals, the so-called “reproducibility crisis,”14 and neglect of other academic contributions, such as teaching and mentoring, reviewing and editing scholarly texts, contributions to policy, and, in medical practice, work at the bedside or in the clinic, all of which have suffered as a result.

Footnotes

  • Competing interests: None declared.

  • Provenance and peer review: Not commissioned; not externally peer reviewed.

References