SNIP Journal Impact Indicator Accounts for Differences in Citation Characteristics and Database Coverage Between Properly Defined Subject Fields

 by Henk F. Moed (Elsevier, Radarweg 29, 1043 NX Amsterdam, the Netherlands) h.moed@elsevier.com

Henk F. Moed is Senior Scientific Advisor at Elsevier Science, where he is responsible for ensuring the reliability of the company’s bibliometric tools, metrics, and techniques. Prior to his current appointment he spent many years as a senior staff member in the Centre for Science and Technology Studies in the Department of Social Sciences at Leiden University.

1. Introduction

The journal impact measure most widely used in the scientific community is the journal impact factor, presented in Thomson Reuters’ Journal Citation Reports (JCR). Nowadays it is used as a direct reflection of a journal’s prestige or quality. Journal editors and publishers communicate the values of the impact factors of their journals to reading audiences. Impact factors are not only used to rank journals, but also to evaluate individual scholars and research groups or departments according to the journals they select for publication, even in decisions about salaries or promotion.

On the other hand, the great majority of practitioners in library and information science, quantitative science studies, and evaluative bibliometrics agree that journal performance is a complex, multi-dimensional concept that cannot be fully captured in a single metric; that in the construction and interpretation of journal citation measures it is crucial to take into account differences in communication and citation practices between research fields; and finally that journal impact measures should not be used as surrogates of actual citation impact of a group’s publications, even though journal quality is an aspect of research performance in its own right (e.g., Moed, 2005; Glanzel, 2009).

In recent years, numerous alternative approaches to the measurement and ranking of journal impact have been explored. Important approaches include a percentile-ranking method for scientific journals developed by Pudovkin and Garfield (2004); weighting citations according to the prestige of the citing journal (Pinski and Narin, 1976; Bollen, Rodriguez, and Van de Sompel, 2006; Bergstrom, 2007; West et al., 2008; González-Pereira, Guerrero-Bote, and Moya-Anegón, 2009); calculation of a journal’s Hirsch index (e.g., Hirsch, 2005; Braun, Glanzel, and Schubert, 2005); and finally calculation of indicators based on modelling of citation distributions as approximately normal (Stringer, Sales-Pardo, and Amaral, 2008) or as negative binomial distributions (Glanzel, 2009). The approaches indicated above are all based on citation counts. But more and more studies explore the analysis of data on downloads of papers in full-text format from electronic publication archives (e.g., Bollen and Van de Sompel, 2008). Journal “usage” factors are constructed and calculated, and their correlation with citation-based measures is examined.

This paper presents a new indicator of journal citation impact that builds further upon Eugene Garfield’s ideas presented in many of his early and later publications (e.g., Garfield, 1972; 1996; 2005). It was developed at the Centre for Science and Technology Studies (CWTS) at Leiden University (Moed, 2010a). Its acronym is SNIP, which stands for Source Normalized Impact per Paper. As of January 2010 SNIP is included in Scopus, together with the SJR indicator developed by the Scimago Research Group (González-Pereira, Guerrero-Bote, and Moya-Anegón, 2009). SNIP and a series of related indicators are freely available for all Scopus journals, and for the past ten years, at a Website created and hosted by CWTS (CWTS, 2010).

Section 2 of this paper explains why a new citation-based indicator of journal impact is needed, while Section 3 describes, in general terms, the way this new indicator is calculated. For further technical details and theoretical background the reader is referred to Moed (2010a; 2010b). The main features and specific outcomes of the methodology are presented in Section 4. Finally, Section 5 contains some concluding remarks.

2. Why a New Indicator of Journal Citation Impact?

Large differences exist between subject fields in the frequency with which authors cite other documents. For instance, research papers in molecular biology may easily have 60 or more cited references, whereas a typical mathematical paper may contain only ten. As a result, molecular biological papers are cited on average much more frequently than mathematical papers. Large differences also exist between subject fields in the extent to which authors prefer to cite recently published documents over older ones, for instance, documents that are 1 to 3 years old rather than papers that are 5, 10, or 20 years old. In molecular biological papers a large majority of cited references are 1-3 years old, whereas in mathematical papers the fraction of 1-3 year old cited references is much smaller, and authors also cite relatively many works that are 10 or 20 years old. As a result, recently published articles in molecular biology tend to be cited more frequently than recent mathematical papers.

When these two factors, differences in the frequency and in the age distribution of cited references across subject fields, are combined, journals in biomedical research tend to have higher impact factors than journals in mathematics, engineering, social sciences, or humanities. Researchers from the latter fields are often confronted with statements suggesting that the low impact factors of journals in their fields reflect a lack of quality of those journals. But this type of reasoning is invalid. From the point of view of a balanced development of science and scholarship as a whole, it would therefore be good to have journal impact indicators that correct for differences in frequency and immediacy of citation between subject fields.

In view of the large differences in citation characteristics between subject fields, one can arrange journals by subject field and compare a journal with other periodicals in the same subject field. Several subject field classifications of journals are available. In Scopus there is a categorization of journals into 27 main fields and one into over 300 subfields. But indicators calculated by using such systems of journal categories — for instance, a journal’s rank position in a subject category — depend upon the classification applied. Changes in the subject classification may easily lead to changes in the values of the indicators. Moreover, there are many general or multidisciplinary journals that cover several journal categories rather than one. How should one deal with such journals? Finally, differences in citation practices do not only occur between, but also within journal subject categories. The conclusion is that a journal’s subject field must be defined in a more appropriate way, taking into account its scope and content.

Apart from differences in frequency and immediacy of citation across subject fields, there is a third factor that needs to be taken into account: the extent to which the database used in the calculation of the indicators covers a subject field. The currently available large citation indexes do not cover all subject fields equally well. An important reason is that these indexes cover mainly scientific-scholarly journals. Books are hardly indexed, and this affects citation levels in social sciences and humanities. Moreover, the coverage of the conference proceedings literature, though increasing over time, is still limited, which has a negative influence on journal impact factors in engineering and applied sciences.

A further factor that must be taken into account relates to the fact that journals may contain many types of documents. Apart from documents presenting original research findings or thorough reviews, which are normally peer-reviewed, they also publish more informal material such as editorials and letters to the editor. The latter two categories are not usually peer-reviewed. It is appropriate to include in the calculation of citation impact indicators only peer-reviewed publications, not only as cited documents (targets), but also as sources of citations. In other words, the citations that are counted should come from peer-reviewed research articles.

3. How is SNIP Calculated?

The main features of SNIP are summarized in Figure 1. SNIP takes into account all five factors highlighted in the previous section. It is a ratio of two indicators: a journal’s raw impact per paper, and the citation potential in the subject field covered by that journal. Details of the methodology are presented in Moed (2010a).

Raw impact per paper is similar to the Journal Impact Factor presented in Thomson Reuters’ Journal Citation Reports. It is defined as the average citation rate in a particular year (the citing or impact factor year) of papers published in a journal during the three preceding years. Because three preceding (cited) years are considered rather than two, as in the ISI Impact Factor, results for journals in fields with a slowly maturing impact that does not peak after one or two years can be expected to be more reliable.
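To make this definition concrete, the following minimal Python sketch computes a raw impact per paper from hypothetical publication counts and citation records. The data layout (a per-year paper count and a list of citing-year/cited-year pairs) is an assumption made here for illustration, not the Scopus data model.

```python
def raw_impact_per_paper(citing_year, papers_per_year, citations):
    """Citations received in `citing_year` by a journal's peer-reviewed papers
    from the three preceding years, divided by the number of such papers.

    papers_per_year -- dict mapping publication year to the journal's paper count
    citations       -- iterable of (citing_year, cited_publication_year) pairs
    """
    window = range(citing_year - 3, citing_year)  # the three preceding years
    n_papers = sum(papers_per_year.get(year, 0) for year in window)
    n_citations = sum(1 for cited_in, published_in in citations
                      if cited_in == citing_year and published_in in window)
    return n_citations / n_papers if n_papers else 0.0

# Hypothetical example: 375 papers published in 2004-2006 that together receive
# 1,500 citations in 2007 yield a raw impact per paper of 4.0.
rip = raw_impact_per_paper(
    2007,
    {2004: 120, 2005: 125, 2006: 130},
    [(2007, 2005)] * 900 + [(2007, 2006)] * 600,
)
print(round(rip, 1))  # 4.0
```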

A journal’s subject field is defined as the collection of papers citing the journal. In this way the subject field is defined by the (formal) users of a journal, whose citing behaviour can be expected to properly reflect the journal’s scope. Peer-reviewed papers comprise the Scopus document types article, review, and proceedings paper. Informal, non-peer-reviewed communications such as editorials and letters to the editor are discarded both as cited and as citing sources.
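A minimal sketch of this delimitation is given below: it collects the peer-reviewed papers that cite a given journal. The record fields (doc_type, references) and the document-type labels are illustrative assumptions, not Scopus field names.

```python
# Document types counted as peer-reviewed (labels are assumptions for illustration).
PEER_REVIEWED_TYPES = {"article", "review", "proceedings paper"}

def subject_field(journal_title, papers):
    """Return the journal's subject field: all peer-reviewed papers in the
    database that cite the journal at least once."""
    return [
        paper for paper in papers
        if paper["doc_type"] in PEER_REVIEWED_TYPES
        and any(ref.get("journal") == journal_title for ref in paper["references"])
    ]
```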

Citation potential in a subject field captures how frequently authors in that field cite other documents in their reference lists. The simplest expression of this would be the average number of cited references in articles covering a field. But the SNIP methodology takes into account three additional factors.

  • It counts only cited references that are 1 to 3 years old. In this way, raw impact per paper and citation potential relate to the same time window. This is appropriate, because the probability for a 1-3 year old article in a journal to be cited is proportional to the average number of 1-3 year old cited references contained in papers in the journal’s subject field.
  • SNIP’s measure of citation potential only takes into account cited references published in sources that are indexed in the database (e.g., Scopus). For instance, humanities papers may have long reference lists mainly because they cite books; such references are “lost” in a citation analysis of indexed journals, and a high share of them indicates limited database coverage of the field.
  • Citation potential is itself normalized by calculating a relative database citation potential. In a first step, citation potential is calculated for all journals in the database. Next, the journal with the median citation potential is identified, and each journal’s citation potential is divided by that of the median journal. A sketch of this computation, combined with the final SNIP ratio, follows this list.
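The sketch below follows the three points above: it counts only 1-3 year old references to indexed sources and normalizes by the median journal’s citation potential, after which the raw impact per paper is divided by the resulting relative value. The record fields are the same illustrative assumptions used in the earlier sketches.

```python
from statistics import median

def citation_potential(field_papers, citing_year):
    """Mean number of 1-3 year old cited references per paper in the subject
    field, counting only references to sources indexed in the database."""
    if not field_papers:
        return 0.0
    counts = [
        sum(1 for ref in paper["references"]
            if ref.get("indexed") and 1 <= citing_year - ref["year"] <= 3)
        for paper in field_papers
    ]
    return sum(counts) / len(counts)

def snip(raw_impact, journal_citation_potential, all_citation_potentials):
    """SNIP = raw impact per paper / relative database citation potential,
    where the relative value divides by the median journal's citation potential."""
    relative_potential = journal_citation_potential / median(all_citation_potentials)
    return raw_impact / relative_potential
```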

4. General Features and Examples

Comparing a journal’s SNIP with its raw impact per paper (which, in its turn, is to some extent similar to the Thomson Reuters impact factor), the following features can be noted.

  • If a journal covers a subject field in which the citation potential is higher than that for the median journal in the database (in other words, the relative citation potential is above one), its SNIP value is lower than that of its raw impact per paper. For instance, for journals in the field of molecular biology SNIP tends to be lower than the raw impact per paper.
  • On the other hand, for journals in subject fields in which citation potentials are lower than that for the median journal, the SNIP value exceeds that of the raw impact per paper. For example, for journals in the field of mathematics SNIP tends to be higher than the raw impact per paper.
  • SNIP is so constructed that, by definition, for 50 percent of journals SNIP is higher than the raw impact per paper, while for the other 50 percent it is lower. In other words, taking the raw impact per paper as the norm, moving to SNIP raises the impact of 50 percent of journals and lowers that of the other 50 percent.

Legend to Table 1: Data relate to the citing year 2007 and are obtained from a bibliometric version of Scopus created at the Centre for Science and Technology Studies (CWTS) at Leiden University, the Netherlands, based on raw data extracted from Elsevier’s Scopus in September 2008.

Table 1 presents outcomes for selected journals. The outcomes for the journals Inventiones Mathematicae and Molecular Cell illustrate the SNIP methodology quite clearly. The raw impact per paper of the latter journal is almost nine times that of the former (13.0 against 1.5), but the SNIP values are nearly equal (4.0 versus 3.8), because the citation potential of the molecular biological journal is eight times that of the mathematical periodical (3.2 versus 0.4).
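Reading the reported figures back through the SNIP ratio confirms the pattern; the published values are rounded, so the quotients agree only approximately:

```python
# Molecular Cell: raw impact per paper 13.0, citation potential 3.2
print(round(13.0 / 3.2, 2))  # 4.06 -- reported SNIP of 4.0 (computed from unrounded data)

# Inventiones Mathematicae: raw impact per paper 1.5, citation potential 0.4
print(round(1.5 / 0.4, 2))   # 3.75 -- reported SNIP of 3.8
```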

Table 1 lists two journals from the journal subject category Behavioural Neuroscience. Behaviour tends to publish mainly research on animals. The journals most frequently citing this periodical are in fact Animal Behavior, Ethology, and Behavioural Ecology and Sociobiology. Physiology & Behavior is more focused on human brain research and is frequently cited in journals such as Behavioural Brain Research, Hormones and Behavior, and American Journal of Physiology. The subject fields of the two listed journals have different citation potentials (1.5 against 2.4) and raw impacts per paper (1.8 versus 2.9). Correcting for these differences, their SNIP values are equal (1.2 for both). This shows that the subject category is rather heterogeneous in terms of topics and approaches.

The journal pair in the subject category Applied Mathematics again illustrates large differences among journals within the same subject category. The International Journal of Nonlinear Science & Numerical Simulation can be said to cover a more specialized, topical subject, whereas Communications on Partial Differential Equations is a more general journal. The Impact Factor of the former is almost four times that of the latter, but the citation potential in its subfield is also about four times as high (equivalently, the citation potential in the latter’s subfield is only about one-fourth of that in the former’s), so the SNIP values of the two journals are almost identical.

5. Concluding Remarks

The article introducing the SNIP indicator (Moed, 2010a) provides a list of what the author believes to be strong points of SNIP, a list of issues that should be taken into account when interpreting SNIP values, and problems that have yet to be further analyzed. These points are not repeated here in detail. In summary, the strong points of the SNIP methodology are as follows: the delimitation of a journal’s subject field does not depend upon a pre-defined categorization of journals into subject categories; SNIP can be properly calculated for general or multi-disciplinary journals; it corrects for differences in the frequency and immediacy of citation and in database coverage, not only between journal subject categories but also between periodicals from the same category; and it takes into account only peer-reviewed articles.

Important points to keep in mind are that SNIP values tend to be higher for journals publishing review articles or showing a high journal self-citation rate. Moreover, the source normalization applied in SNIP does not take into account the growth of the literature in a field, nor the extent to which papers in a field are cited from other fields. More sophisticated methods to define subject fields using citation analysis can be explored, together with any biases they may cause. Finally, the relationship between rankings of journals based on SNIP and peer judgements on these journals should be further analysed.

References

Bergstrom, C. (2007). Eigenfactor: Measuring the value and prestige of scholarly journals. College & Research Libraries News, 68(5). Retrieved April 24, 2008, from www.ala.org/ala/acrl/acrlpubs/crlnews/backissues2007/may07/eigenfactor.cfm.

Bollen, J., Rodriguez, M. A., and Van de Sompel, H. (2006). Journal status. Scientometrics, 69, 669-687.

Bollen, J., and Van de Sompel, H. (2008). Usage impact factor: The effects of sample characteristics on usage-based impact metrics. Journal of the American Society for Information Science and Technology, 59, 1-14.

Braun, T., Glanzel, W., and Schubert, A. (2005). A Hirsch-type index for journals. The Scientist, 19, 8.

CWTS (2010). CWTS Journal Indicators Website. www.journalindicators.com. Last accessed: 17 June 2010.

Garfield, E. (1972). Citation Analysis as a tool in journal evaluation. Science, 178, 471–479.

Garfield, E. (1996). How can impact factors be improved? British Medical Journal, 313, 411–413.

Garfield, E. (2005) “The agony and the ecstasy: the history and meaning of the Journal Impact Factor,” International Congress on Peer Review and Biomedical Publication, Chicago, September 16, 2005. www.garfield.library.upenn.edu/papers/jifchicago2005.pdf.

Glanzel, W. (2009). The multi-dimensionality of journal impact. Scientometrics, 78, 355-374.

González-Pereira, B., Guerrero-Bote, V. P., and Moya-Anegón, F. (2009). The SJR indicator: A new indicator of journals’ scientific prestige. arxiv.org/pdf/0912.4141. Last retrieved 26 May 2010.

Hirsch, J. E. (2005). An index to quantify an individual’s scientific research output. Proceedings of the National Academy of Sciences, 102, 16569–16572.

Moed, H. F. (2005). Citation Analysis in Research Evaluation. Dordrecht (Netherlands): Springer. ISBN 1-4020-3713-9, 346 pp.

Moed, H. F. (2010a). Measuring contextual citation impact of scientific journals. Journal of Informetrics, DOI:10.1016/j.joi.2010.01.002.

Moed, H. F. (2010b). A new journal citation impact measure that compensates for disparities in citation potential among research areas. Annals of Library and Information Studies, Special Issue celebrating Eugene Garfield’s 85th birthday, to be published.

Pinski, G., and Narin, F. (1976). Citation influence for journal aggregates of scientific publications: theory, with application to the literature of physics. Information Processing and Management, 12, 297–312.

Pudovkin, A. I., and Garfield, E. (2004). Rank-normalized impact factor. A way to compare journal performance across subject categories. Paper presented at the ASIST meeting, November 17, 2004. Available at: http://www.garfield.library.upenn.edu/papers/asistranknormalization2004.pdf.

Stringer, M. J., Sales-Pardo, M., and Amaral, L.A.N. (2008). Effectiveness of Journal Ranking Schemes as a Tool for Locating Information. PLoS ONE 3(2): e1683. DOI:10.1371/journal.pone.0001683.

West, J., Althouse, B., Rosvall, M., Bergstrom, T., and Bergstrom, C. (2008). Eigenfactor: Detailed methods. Retrieved April 24, 2008, from www.eigenfactor.org/methods.pdf.
