Journal Impact Factor

Optical Engineering editor says journal impact factor falls short as single measure of quality.

01 January 2017
Michael T. Eismann

At face value, journal impact factor is a straightforward metric: the average number of citations, in a given year, to the papers a journal published over the preceding two years. The underlying assumption is that impact on a scientific field is appropriately measured by the number of citations.
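For concreteness, the standard two-year calculation can be sketched as follows; the year 2016 is used purely as an illustration:

\[
\mathrm{IF}_{2016} = \frac{\text{citations received in 2016 by items published in 2014 and 2015}}{\text{citable items published in 2014 and 2015}}
\]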

Recent discussions about impact factor have focused on the efficacy of the metric and the appropriateness of its widespread use. For instance, recent editorials in Nature and other journals use adjectives such as crude, misleading, and invidious to describe the metric and its effects on science. The American Society for Microbiology has decided to discontinue advertising the impact factors of its journals.

Some critics argue that the arithmetic mean used for impact factor is a statistically poor measure because it can be dominated by a few very highly cited papers. They recommend the median or the full citation distribution as more statistically sound alternatives.
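A small numerical sketch makes the point. The citation counts below are hypothetical, not drawn from any real journal; with one exceptional paper in an otherwise ordinary distribution, the mean (the impact-factor-style statistic) tells a very different story than the median:

from statistics import mean, median

# Hypothetical two-year citation counts for 20 papers (illustrative only).
citations = [0, 0, 1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 5, 5, 6, 7, 8, 9, 250]

print(f"mean citations per paper:   {mean(citations):.1f}")    # 15.8, pulled up by one paper
print(f"median citations per paper: {median(citations):.1f}")  # 3.0, the typical paper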

Others assert that impact factor is a poor relative measure between journals in different scientific fields, favoring ones with high current interest and undervaluing ones lacking the same level of popularity, even as contributions in the latter fields may ultimately yield higher and more lasting scientific impact.

The increasingly common practice of assessing a scientist’s or engineer’s performance based on the impact factors of the journals in which they have published is also called into question as a misuse of the metric.

Even the co-creator of the metric, Eugene Garfield, is said to have drawn a parallel between the unintended consequences of his creation and those of nuclear technology. It is interesting how this deceptively simple number has stirred up such widespread controversy.

CRITICISMS ARE VALID

In my opinion, these are all legitimate criticisms. As I analyzed recent citation data for Optical Engineering papers to understand what factors drive impactful papers, it was apparent to me that the data were not well behaved, making simple statistical measures potentially misleading. Especially for an engineering-oriented journal such as ours, I question the inherent assumption that citations appropriately measure the scientific, commercial, or intellectual impact of journal papers.

As an example, analytical and experimental methods are often adopted from published papers and put into practice by engineers whose output is not journal publications but ultimately a new design, test methodology, production process, or other engineering product. This type of impact addresses a core industry-oriented constituency of SPIE and is extremely important for Optical Engineering.

Despite these and other shortfalls of the impact factor metric, I find citation rates to be an important — albeit not singularly important — metric for gauging journal quality. Unfortunately, there just are not many other options for quantitative assessment.

We have the number of paper downloads and the number of citations as gauges of interest and impact, respectively, but most of the other attributes of journal quality are somewhat intangible or not readily measured.

CONSIDER MORE FACTORS

With all due respect to Garfield’s nuclear analogy, I would like to offer an example that raises similar concerns.

We all have experienced somewhere in our academic careers expending many ounces of blood, sweat, and tears in a very challenging class, with the end result captured by a single letter on our academic transcripts. Did our professors really feel the letter grade was a complete statistic, fully representing our mastery of the class material with all its complexities?

And do we feel comfortable with future admissions officers, employers, and others making decisions on our career opportunities based solely on grade point average?

No, we hope that they give consideration to a broader set of factors reflecting the full extent of our knowledge and capabilities, even as we are compelled to agree that grades do provide a useful measure.

In my opinion, the shortcomings of journal impact factor are similar to those of grade point average. It is a useful measure, and unfortunately one of the few that we have at our disposal.

If we over-generalize its efficacy for the sake of simplicity, however, we are selling ourselves short. As scientists and engineers, we should appreciate that, since dealing appropriately with the complexities of the real world is what our profession is about.

SPIE Fellow Michael T. Eismann is editor-in-chief of Optical Engineering and a member of the SPIE Publications Committee. This is an edited excerpt from Eismann’s editorial in the October 2016 issue of the journal.





