A recent issue of Spectra: A publication of the National Communication Association featured an article by Dr. Michael Beatty, Professor of Communication Studies. In the article, he explores the recent phenomenon of journal impact factors, investigating how accurate they are and how relevant they are to advancing the discipline.
The entire article, including references, footnotes, and citations, is available here. The article is co-authored with Thomas Hugh Feeley, Ph.D., a professor of communication at the University at Buffalo, The State University of New York.
Here are some excerpts from the article, entitled “Journal Impact Factors: Uses and Misuses”:
Although studies tracking research productivity and citation patterns in general have appeared in major communication journals over the years, emphasis on journal impact factors (JIFs) in particular is a fairly recent phenomenon. Most colleagues we talk to in the discipline are unaware of how impact factors are calculated, what they do and don’t represent, and how they should and should not be used; even fewer of our colleagues are familiar with the rather large body of research literature that has accumulated regarding impact factors.
Impact factors were initially developed as a metric to assist librarians in decision-making about which journals should be stored electronically, which should be kept in paper form, and which should be dropped altogether. However, like many metrics, they can be misused. This article provides a primer of sorts and a cursory overview of some of the important issues related to journal impact factors.
How Are Journal Impact Factors Calculated?
Impact factors are based on two databases, the Science Citation Index (SCI) and the Social Science Citation Index (SSCI), compiled by Thomson Scientific, formerly the Institute for Scientific Information (ISI). Roughly speaking, the impact factor associated with a journal represents the number of times articles published in a journal are cited during a specified interval of time (typically 1 to 5 years) relative to the number of articles appearing in the journal during the same period.
Simply, journal impact factors represent the average number of citations to a journal per “citable” item (usually a research article) published in the journal for a specified period. Based on this algorithm, a journal that publishes many articles can have a smaller impact factor than another even though the former is cited much more frequently. Criticism of this approach abounds.
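The calculation described above can be made concrete with a short sketch. This is an illustration only, using the conventional two-year citation window (the article notes that windows of one to five years are used in practice); the function name and all journal counts below are hypothetical, not taken from the article.

```python
# Minimal sketch of a journal impact factor (JIF) calculation:
# citations received in a given year to a journal's articles from the
# preceding two years, divided by the number of "citable" items the
# journal published in those two years. All figures are invented
# illustration data, not real journal statistics.

def impact_factor(citations_to_prior_two_years: int,
                  items_published_prior_two_years: int) -> float:
    """Average citations per citable item over the two-year window."""
    return citations_to_prior_two_years / items_published_prior_two_years

# Journal A publishes many articles and attracts many citations overall...
jif_a = impact_factor(citations_to_prior_two_years=400,
                      items_published_prior_two_years=500)

# ...yet Journal B, cited far less in total but publishing fewer items,
# ends up with the higher impact factor. This is the point made above:
# a journal cited much more frequently can still have the smaller JIF.
jif_b = impact_factor(citations_to_prior_two_years=90,
                      items_published_prior_two_years=60)

print(jif_a, jif_b)  # Journal A: 0.8, Journal B: 1.5
```

The per-item averaging is exactly why total citation volume and impact factor can rank journals differently, which is the source of much of the criticism the article surveys.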
Do Journal Impact Factors Necessarily Correspond to Influence on Scholarship?
In the calculation of impact factors, a cited article’s degree of influence is not taken into consideration. Each cited article counts equally, regardless of how it did or did not influence the discipline or field. However, when we consider the impact of a scholar’s work, we usually mean something different than merely how many times it has been cited. Differences in impact seem evident in the way different citations explicitly influence scholars’ thinking; some citations clearly play a greater role than others in shaping approaches to communication inquiry. A citation has more impact on the direction of a discipline if, for example, an entire theoretical perspective is based on another’s work, or if the measure used was developed in the cited study, than if citations are included at the insistence of journal reviewers long after the study was conceptualized, selected to curry favor with potential reviewers, or copied from other scholars’ reference lists merely to acknowledge previous research tangentially related to the citer’s study.
Indeed, a small cluster of articles, or even a single essay, can sometimes profoundly influence large numbers of scholars’ thinking and approach to inquiry because, at times, quality and quantity are not the same things. Just as the number of publications by a scholar can be different from the scholar’s influence on a discipline or field, the relative influence of particular journals ought to depend more on how articles published in them shape theoretical and methodological development than on mere citation counts.
In spite of these arguments, some contend that the mere fact that an essay has been cited means it had an impact on the citing scholar.
Whether the journals with the largest impact factors are in fact the ones with the greatest influence on conceptual and methodological progress in the discipline remains open to speculation.
In an effort to shed some light on the debate, Beatty, Feeley, and Dodd (in press) content-analyzed Communication Monographs (CM) and Human Communication Research (HCR) for 2007 through 2009. Every citation (f = 579) in either journal was coded with respect to the function it served. Some citations made substantial theoretical or methodological contributions to the essay (e.g., the citation was the basis for the study’s rationale, the justification for controlling certain effects, or a key feature of the methodology).
Should JIFs Be Used to Evaluate Individual Faculty Members?
Although the literature indicates that JIFs are inappropriate measures for the evaluation of individuals, committees at the university level are beginning to use them. In a 2008 report, Feeley observed, “Journal impact rankings provide objective data for tenure, promotion, and, possibly, grant review committees on the quality of scholars’ works. Publication in higher impact journals is often equated with quality of scholarship.” Similarly, Seglen noted that “ideally, published scientific results should be scrutinized by true experts in the field and given scores for quality according to established rules,” but reviews conducted by committees composed of faculty members from outside disciplines often rely heavily on “secondary criteria like crude publication counts, journal prestige, the reputation of authors and institutions, and estimation of importance and relevance of the research field.”
Indeed, numerous scholars across multiple disciplines have observed the widespread and growing use of journal impact factors as proxies for research quality. Although the allure of an objective, single-number proxy for journal quality is understandable, the validity of journal impact factors as indices of quality should be a central concern to our discipline, because recognition for scholarly accomplishment and contribution is foundational to the society of scholars.
What About the Research That Is Cited in Support of the Validity of JIFs?
Support for the validity of impact factors as a metric for assessing the quality of research journals is most frequently derived from Kurmis’ literature review. Among the conclusions Kurmis reached after reviewing the research, however, was that impact factors have proven “invaluable for researchers and librarians in the selection and management of journals” but that “extension of the impact factor to the assessment of journal quality or individual authors is inappropriate.” Kurmis further argues, “Individuals and governing bodies that use the impact factor for these purposes demonstrate poor understanding of a tool that should perhaps more appropriately be termed the ‘journal citation ratio’ or the ‘journal citation index.’” The reservations expressed by Kurmis rest on the widely recognized fact that impact factors are shaped by numerous considerations other than the quality or intellectual influence of the articles a journal publishes.
How Accurate and Representative Are the Data on Which JIFs Are Based?
Several scholars have identified crucial shortcomings in the Thomson Scientific (formerly ISI) database. Communication scholars have noted that alternative spellings of authors’ names across citations introduce errors into the database, and that the database omits many communication journals in which communication research is cited, producing an underestimate of the number of citations that should be attributed to a particular communication journal. Moreover, Rosser, Van Epps, and Hill purchased the database for several journals from Thomson Scientific and discovered that “first, there were numerous incorrect article-type designations.… Second, the numbers did not add up. The total number of citations for each journal was substantially fewer than the number published on the Thomson Scientific Journal Citation Reports (JCR) website.” In fact, Rosser, Van Epps, and Hill found that the numbers for some journals were off by as much as 19 percent. They then requested from Thomson Scientific the database actually used to calculate published impact factors and received one that “still did not match the published impact factor data. The database appeared to have been assembled in an ad hoc manner to create a facsimile of the published data that might appease us. It did not.” Concerned that important decisions such as those regarding promotion, tenure, research funding, and which journals to stock in libraries were being based on erroneous data, Rosser, then executive director of the Rockefeller University Press, Van Epps, then executive editor of the Journal of Experimental Medicine, and Hill, then executive editor of the Journal of Cell Biology, concluded that “just as scientists would not accept the findings in a scientific paper without seeing the primary data, so should they not rely on Thomson Scientific’s impact factor.”
Concluding Remarks
The implications of this article are worth considering when evaluating faculty members’ research, journal quality, or disciplines. Beatty, Feeley, and Dodd’s (in press) content analysis confirmed that many of the criticisms of journal impact factors recorded in the literature apply to the comparison of Communication Monographs and Human Communication Research. If differences in impact factors are supposed to correspond to differences in the influence that work published in journals has on conceptual and methodological development, they failed to do so from 2007 to 2009 for the journals studied.
It is important to recognize that although journal impact factors can provide useful information to professional associations regarding journal recognition and the number of citations per article published, the metric is generally uninformative about how the work published in a journal affects scientific progress in a discipline. While it is certainly true, as many proponents of impact factors suggest, that a never-cited article has little impact, it does not follow that a widely cited piece necessarily has great impact, nor does it follow that an average article appearing in one journal has more influence on a discipline than an article published in a journal with a lower impact factor. Rather than relying on a single metric, which Martin refers to as the “lazy” approach, evaluating the quality of a particular article or a research program can be better accomplished, as Seglen argued, by actually reading the work in light of the discipline’s conventions or standards for excellence. Likewise, reading the work in which the article in question is cited, to determine how, if at all, the piece influenced its citer, would likely lead to a more accurate estimate of influence.
Regardless of how frequently this article is cited, perhaps its most meaningful influence would be as a source that can be used to prevent our journals and the work of colleagues publishing in them from being marginalized by the application of a superficial metric of dubious validity that can be distorted by factors besides quality and actual intellectual influence.
Read the full article: http://www.natcom.org/uploadedFiles/Publications/Spectra/Spectra_March2012_Vol48Iss1.pdf