Abstract
The journal impact factor (IF) is the leading method of scholarly assessment in today’s research world. An important question is whether this remains a constructive method. For a specific journal, the IF is the number of citations in a given year to publications from the previous 2 years, divided by the total number of citable publications in those years (the citation window). Although this simplicity works to the method’s advantage, complications arise because questions such as ‘What is included in the citation window?’ or ‘What makes a good journal impact factor?’ have ambiguous answers. In this review, we discuss whether the IF should still be considered the gold standard of scholarly assessment in view of the many recent changes and the emergence of new publication models, and we outline its advantages and disadvantages. The advantages of the IF include promoting the author while giving readers a sense of the rigor of review. Its disadvantages include reflecting the journal’s quality more than the author’s work, the fact that it cannot be compared across research disciplines, and the difficulties it faces in the world of open access. Alternatives to the IF have recently emerged, such as the SCImago Journal & Country Rank, the Source Normalized Impact per Paper and the Eigenfactor Score, among others. However, all alternatives proposed thus far carry their own limitations. In conclusion, although the IF has its drawbacks, until better alternatives are proposed it remains one of the most effective methods for assessing scholarly activity.
Definition
The journal impact factor (IF) is formulated as a simple ratio measuring the citation frequency of articles in a journal within a year: the number of citations to a specific journal divided by the total number of ‘citable items’ published by that journal over the past 2 years.1 The IF has significance beyond this definition. It was originally used by librarians in the 1970s to guide journal subscription purchases, after Eugene Garfield first developed it as a method of assessing the importance of journals.1 Since then, the method has become widely used and recognized in research, and is accepted by the scientific community as a reflection of a journal’s ranking and reputation. Journals with a higher IF are regarded as more reputable, since a high IF implies more competitive research designs and a more rigorous review process for the articles they publish. Although widely used and beneficial, the IF has also attracted considerable controversy over the years, as discussed below.
Calculation of the IF
First, it is important to understand how a journal’s IF is calculated. In any given year, a journal’s IF is the number of citations that year to the journal’s publications from the previous 2 years, divided by the total number of ‘citable items’ the journal published in those 2 years.1 The total citable items over the past 2 years are also known as the citation window. The exact formula is shown in figure 1A. For example, suppose that in 2012 there were 320 citations to items published in Journal X in 2010 and 225 citations to items published in Journal X in 2011, and that Journal X published 38 items eligible for citation in 2010 and 73 such items in 2011. The IF of Journal X in 2012 is then (320 + 225)/(38 + 73) = 4.91 (figure 1B).
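The arithmetic above can be reproduced in a few lines. Python is used here purely for illustration; the figures are the worked Journal X example from the text:

```python
def impact_factor(citations_y1, citations_y2, citable_y1, citable_y2):
    """Two-year journal impact factor: citations received this year to
    items from the two prior years, divided by the citable items
    published in those two years."""
    return (citations_y1 + citations_y2) / (citable_y1 + citable_y2)

# Journal X, IF for 2012 (numbers from the example above)
if_2012 = impact_factor(320, 225, 38, 73)
print(round(if_2012, 2))  # 4.91
```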
The citation window
The formula in figure 1 begs the following question: what counts as a ‘citable item’ and is thus included in the citation window? This issue is significant and can make a radical difference, as citable items form the denominator of the IF calculation. The definition of ‘citable item’ differs across online scientific citation indexing databases. The Web of Science database, for example, relies chiefly on the document type recorded in the index field: any document typed as ‘Article’, ‘Review’ or ‘Proceedings Paper’ is considered citable. Other databases, such as the Centre for Science and Technology Studies (CWTS) and SCImago, apply their own criteria.2 Because of these classification discrepancies, from 2011 to 2013 SCImago reported more citable items than CWTS, which in turn reported more than Web of Science.2
What is considered a ‘good’ IF?
There is no set definition of what exactly makes a good journal IF. Table 1 and figure 2 show an arbitrary stratification of journals into groups based on IF. Only about 2% of journals have an IF of 10 or higher. However, once again, the definition of a ‘good impact factor’ is unclear. It should also be noted that the IF of each journal is a moving target that can change from one year to the next.
The Journal Citation Reports (JCR) is an online database that tracks the IFs of scientific journals. JCR first began in 1997 and is updated annually. It is diverse, covering more than 11,000 indexed journals and 2.2 million articles from 81 countries; as of 2016, the IFs of 12,061 journals are recorded. It comprises two editions: Sciences, for biological science journals, and Social Sciences, for social science journals.3 However, as diverse as JCR may seem, some journals are not accounted for, especially those with a short publication history.
Advantages of the IF
Every year since the 1960s, the number of scientific journal publications has increased substantially. In 1996, there were an estimated 126,000 scientific journals, and that number has since grown significantly. Given this volume, a universal and simple method is needed to guide readers and authors to the most relevant journals, and the journal IF does just that.3
The advantages of the IF are summarized in box 1. It is a useful guide for the research community to the highest quality journals. One of its biggest attractions is its simplicity: it is easily understood by both scholars and junior trainees. Table 2 shows a partial list of journals with top IFs. Another advantage is that the calculation spans the previous 2 years and thus captures the dynamic, fluctuating status of the journal; this is more accurate than a calculation covering only the previous year.3
Brief summary of the advantages and benefits of impact factor (IF)
With the increasing number of journals, journal IF gives a universal method of assessing journals.
Gives the author an idea of the rejection rate.
Guides readers and authors to the most relevant journal.
Simple calculation.
Calculation involves the previous 2 years.
Gives the readers and the author a sense about the intensity of the review process.
Promotes the authors.
The IF also gives both readers and authors a sense of the review process. A higher IF usually indicates that a journal’s articles went through more intensive review than those of a lower-factor journal, so the information in the articles is implied to be more reliable. The IF can also give authors an idea of where to submit their manuscript with a realistic expectation of acceptance. Finally, the journal IF is taken into consideration when applying for grants or academic promotions.
Factors that can affect IF
It should be noted, however, that a number of factors can affect the IF. If a journal is open access, its articles are more available for citation than those of non-open-access journals, which can boost the IF. Conversely, a specialized journal, such as one focusing on the VHL gene or hypoxia, may have a smaller audience than a general journal such as the Journal of the American Medical Association; it may therefore be cited less, decreasing its overall IF.
Limitations of IF
Despite being a useful tool for years, the IF has its limitations and disadvantages, and it has drawn criticism for more than 60 years, since its introduction. The major limitations are summarized in box 2. The IF relies on which article types Thomson Scientific counts as ‘citable’. This has resulted in biased acceptance by journals according to article type. For example, research papers are always welcomed for being highly citable. On the other hand, journal editors may discourage certain article types, such as case reports, manuscripts with negative findings, or research on specialized topics of limited public interest (such as rare diseases), because they attract fewer citations. Such selectivity can itself artificially inflate a journal’s IF, with subsequent positive impact on the journal as subscriptions, submissions and advertisements increase.4
Brief summary of the limitations and disadvantages of impact factor (IF)
Selectivity of publishing articles which are predicted to be highly citable.
Boosting IF by self-citation.
Dominance of English-language journals.
Reflects the journal’s quality more than the author’s work.
Augmenting the numerator with non-citable publications.
Cannot compare between disciplines.
Potential bias toward more citation for open access journals.
Affected by the date of release of publications.
Potential bias against specialized journals with limited citation chances.
Also, the IF measures average citations per journal, usually rewarding modest productivity and punishing high productivity. It also tends to inflate over time regardless of the journal’s actual performance.5 6 In addition, the IF can be skewed by self-citation, leading to artificial boosting; this practice negatively affects the overall field of scientometrics.5 6 The move toward journal self-citation raises concerns about transparency and about the extent to which the IF can be influenced by editorial policies.7
Another important issue that needs to be considered is discipline bias. IFs vary among research and specialty fields; for instance, publications in breast cancer research are more likely to be cited compared with thymoma research. Thus, in most cases, it is not reasonable to compare IFs between journals from different disciplines. Citations also vary according to the nature of research focus within the same discipline.8
Additionally, there is a preference for English-language science journals.8 The calculation of the IF does not count non-English-language publications.9 Another concern is the timing of publication: papers published at the end of the year contribute less to IF calculations than those published at the beginning, simply because they have been available for a shorter time.8 Also, when citable articles are accessible online before printing (prepublication online access), their citations are added to the numerator before the articles are counted in the denominator, which can falsely increase the IF.10
It is also to be noted that the journal’s IF does not reflect the quality of the author’s individual work but rather reflects the journal’s overall quality.11 Questioning whether an article is judged by the quality of the manuscript or by the IF of the journal has been recently discussed in detail.4 12 13
Additionally, the IF does not reflect the impact of the journal outside the scientific community.13 It does not reflect the significance of the content on the public, including patients and non-government organizations. For the purpose of assessing public impact, IF is a substandard measure.14 Finally, there is a risk that misapplication of the IF may negatively affect scientific advancement.15 Focusing on improving the IF by journals may block the opportunity for good research to be published, which in turn can affect knowledge improvement as well as funding.16 17
IF challenges in the era of open access and social media
New styles of publication can seriously affect the credibility of the IF.18 One of these is the concept of ‘open access reviews’. An interesting example is bioRxiv, hosted by the Cold Spring Harbor Laboratory (http://biorxiv.org). This is a free online archive and distribution service for unpublished articles. Articles are not peer reviewed, edited or typeset before being posted online, but once posted on bioRxiv they are citable and cannot be removed. The claim is that by posting preprints on bioRxiv, authors can make their findings immediately available to the scientific community and receive feedback before submitting their work to journals. The question is how the IF can be calculated in these cases.
Another challenging model is postpublication review. For instance, F1000Research, launched by the founders of BioMed Central, offers immediate publication after rapid scientific checks, with fully transparent postpublication review.19 20 In a third model, Collabra, published by the University of California, the reviewers are paid and can be identified. Debate continues over whether these models will yield a better and more transparent way of publishing.
Alternatives and modifications of IF
The success and the limitations of the IF have elicited the development of alternative models to evaluate the quality and importance of articles. It is now clear that scientific journal content and impact are too complicated to be assessed by a single metric.21 A number of bibliometric measures have been developed, but none has gained enough popularity to replace the IF. These include the Eigenfactor Score (ES). Unlike the IF, the ES takes into account 5 years of publication instead of 2 and excludes self-citations from the calculation. Furthermore, an incoming citation to a particular journal is weighted by the ES of the citing journal. The ES is considered a measure of a journal’s actual importance to the scientific community.22 Another metric is CiteScore, launched by Elsevier, in which citation counts in the current year are divided by the documents published in the prior 3 years, extending the citation window. All documents, including abstracts, editorials and letters, are counted as citable in its denominator. Although this sounds appropriate, journals that do not publish such documents will score higher than those that do. In addition, CiteScore is available without charge, unlike the IF.23–25
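To illustrate how the extended window and broader denominator change the result, the following sketch contrasts a 2-year IF-style ratio with a 3-year CiteScore-style ratio for a hypothetical journal. All figures are invented for illustration, not taken from either database:

```python
def two_year_ratio(citations, citable):
    """IF-style ratio: citations to the previous 2 years divided by
    the 'citable items' (articles, reviews) published in those years."""
    return sum(citations[:2]) / sum(citable[:2])

def three_year_ratio(citations, all_docs):
    """CiteScore-style ratio: citations to the previous 3 years divided
    by ALL documents (including editorials, letters) in those years."""
    return sum(citations[:3]) / sum(all_docs[:3])

# Hypothetical journal: citations to years Y-1, Y-2, Y-3, the citable
# items in Y-1..Y-2, and every document type in Y-1..Y-3.
citations = [300, 250, 200]
citable   = [100, 90]
all_docs  = [130, 120, 110]

print(round(two_year_ratio(citations, citable), 2))    # 2.89
print(round(three_year_ratio(citations, all_docs), 2)) # 2.08
```

The broader denominator lowers the score of a journal that publishes many editorials and letters, which is exactly the bias noted above.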
In a third model, the Source Normalized Impact per Paper (SNIP) measures citation impact by weighting citations against the total number of citations within the same field, allowing comparison of sources in different subject areas.26 Accordingly, in fields with fewer citations, a single citation carries more weight, overcoming a significant limitation of the IF. The citation window is also 3 years, compared with 2 for the IF.27
The SCImago Journal & Country Rank (SJR) metric defines journal impact by including both the quantity and the quality of the citations received. The reputation of the citing source, the discipline area and a wide database covering different languages are its most important differences from the IF. Self-citations are also excluded from its calculation, which covers a 3-year window. However, the size of the database raises questions about the accuracy and transparency of the SJR.10 28
Other examples include Google Scholar Metrics, including the h-index and its variants. The h-index measures the scientific output of an individual author rather than a journal, considering both the number of papers published (quantity) and the number of citations received per article (quality). It is, however, not free of limitations.29 It can be influenced by self-citation, it favors specific disciplines, and its method of calculation favors researchers with long-standing experience over new researchers.30
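The h-index described above is straightforward to compute: an author has index h if h of their papers have at least h citations each. A minimal sketch, with citation counts invented for illustration:

```python
def h_index(citation_counts):
    """h-index: the largest h such that the author has h papers
    with at least h citations each."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for i, c in enumerate(counts, start=1):
        if c >= i:
            h = i       # the i-th most-cited paper has >= i citations
        else:
            break
    return h

# Five papers with these citation counts: four of them have at least
# 4 citations each, but not five with at least 5, so h = 4.
print(h_index([10, 8, 5, 4, 3]))  # 4
```

Note how a researcher with a few highly cited papers and a newcomer with many lightly cited papers can both end up with a low h, which is the seniority bias mentioned above.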
Altmetrics and social media influence
With research now being shared through routes other than scientific publications, including social media, tools are needed to measure social media impact. Altmetrics track the effect on publications of social media such as Twitter and Facebook, as well as scientific blogs.31 They measure the frequency with which a specific article is mentioned, shared, downloaded and discussed in the media, thus gauging the amount of attention the article has drawn. Sources of data include Google, Wikipedia, scientific blogs and Twitter. Altmetrics can evaluate the flow of research into the community and offer a rapid view of the impact of social media on science. Their disadvantages include measuring interest rather than quality; unlike a citation count, an Altmetric count aggregates heterogeneous forms of attention; and counts can still be manipulated by creating fake accounts.32 33 Nonetheless, altmetrics have so far proven a useful complementary assessment tool.34
Conclusion
‘Journal metrics should always be accompanied by health warnings that are at least as prominent as the ones you see on cigarette packets’, says Stephen Curry, a structural biologist at Imperial College London. ‘Such metrics are at the root of many of the current evils in research assessment’.24 25 Until we agree on a good alternative to the IF, editors obsessed with numbers should reset their priorities and focus on publishing papers that will have real impact on the future of science and on reducing health problems.35
Footnotes
Contributors Project planning: GY, MK, SM. Literature collection: MK, SM, JH. Literature summarization: SM, JH, MK. Manuscript drafting: MK, SM, JH. Manuscript review: MY, JH. Manuscript approval: GY, MK, SM, JH. Manuscript structural preparation and submission: JH.
Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.
Competing interests None declared.
Provenance and peer review Not commissioned; externally peer reviewed.
Patient consent for publication Not required.