Many people will have heard the oft-cited slogan “publish or perish”. It captures the fact that scientists are increasingly assessed by where and how much they publish; appointment committees, too, set great store by an extensive list of publications. In other words, articles in a discipline’s top journals are the currency the academic community deals in.
But how can the impact of a journal within the expert community be gauged? In recent decades, the Journal Impact Factor (JIF) has become established as the classical reference value: it gauges a journal’s “influence” as the average number of citations received by the articles published in it.
Figure 1: Calculating the Journal Impact Factor. Image source: http://library.buffalo.edu/scholarly/
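The calculation sketched in Figure 1 can be illustrated in a few lines of Python. The figures used here are invented purely for illustration:

```python
def journal_impact_factor(citations, citable_items):
    """JIF for year Y: citations received in year Y to articles
    published in the two preceding years (Y-1 and Y-2), divided by
    the number of citable articles published in those two years."""
    return citations / citable_items

# Invented example: 240 citations in 2014 to articles from 2012/2013,
# of which 120 citable articles were published in 2012/2013.
print(journal_impact_factor(240, 120))  # 2.0
```

Note that the whole calculation happens at journal level: every article in the journal inherits the same value, regardless of how often it is cited itself.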
However, this value tells us little about the actual influence of an individual article and its resonance in expert circles. It is therefore hardly surprising that, given the increasing availability of academic articles on the internet, criticism of the JIF and its omnipresence in the assessment of academic performance is also mounting. In the San Francisco Declaration on Research Assessment (DORA), published in 2012, a prominent group of scientists and publishers advocates a change in thinking: under today’s conditions, a form of impact measurement calculated solely at journal level seems behind the times. Moreover, a series of studies has shown that the JIF calculation is prone to error and manipulation.
Consequently, so-called altmetrics have emerged as a response to this criticism of the JIF. The term can be read as both “alternative metrics” and “article-level metrics”: both concepts aim to gauge the influence of an academic publication at the level of the individual item (article, book, chapter etc.), and instead of focusing solely on citation counts, they primarily consider other signals such as mentions on Twitter, bookmarks on Mendeley or download figures. Altmetrics therefore measure not just the reach of publications within the expert community, but also, in particular, their reach among a broader public.
Figure 2: One of many altmetric tools, the altmetric donut by www.altmetric.com: it offers a glimpse into how well an article was received in social media and social bookmarking services. Image source: www.altmetric.com.
Some publishing houses have jumped on the bandwagon and integrated various altmetric applications into their platforms. The open access publisher Public Library of Science (PLOS) is regarded as a pioneer in this field: it offers extensive usage data on each published article (see an example) and also makes the collected data available for re-use by third parties.
But this also opens up a field of activity for libraries: altmetric applications can be integrated into repositories, university libraries or discovery systems. ETH-Bibliothek, for instance, offers download statistics for individual documents on its document server ETH E-Collection. The service does not just give authors a measure of how their documents are used; the repository operator can also deploy it deliberately as a marketing instrument: detailed usage figures increase a publication platform’s appeal and motivate university members to publish their documents there.
Figure 3: Download statistics in the ETH E-Collection
But what does a mention on Twitter or the number of downloads actually say about the academic relevance of an article? Not much, you might think. And indeed, the significance of the individual tools needs to be assessed in a differentiated way. For the time being, however, the merit of altmetrics lies primarily in the fact that the discussion about these new tools can, in the medium term, lead us to a more differentiated view of what we understand by impact.
Figure 4: Criteria for academic impact. Image source: http://altmetrics.org/manifesto/
Ultimately, there is no clear, generally accepted definition of the term “academic impact”. Assessing such a category on the basis of a single indicator such as the JIF therefore cannot work. Instead, the aim is to determine the impact of an article from a combination of different criteria: usage figures (e.g. downloads), expert opinions (via a peer review process), reception among the wider public (e.g. on social media) and classical citation analyses.
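One conceivable way to combine such criteria is a weighted score. The following is a purely hypothetical sketch: the indicator names, the normalization to a 0–1 range and the weights are inventions for illustration, not an established standard:

```python
def composite_impact(indicators, weights):
    """Weighted combination of impact indicators.

    indicators: metric name -> value normalized to the range 0..1
    weights:    metric name -> weight (weights should sum to 1)
    """
    if set(indicators) != set(weights):
        raise ValueError("indicators and weights must cover the same metrics")
    return sum(weights[m] * indicators[m] for m in indicators)

# Hypothetical article: strong downloads, average citations,
# positive peer reviews, modest social media echo.
score = composite_impact(
    indicators={"citations": 0.5, "downloads": 0.8,
                "peer_review": 0.7, "social_media": 0.3},
    weights={"citations": 0.4, "downloads": 0.3,
             "peer_review": 0.2, "social_media": 0.1},
)
print(round(score, 2))  # 0.61
```

The hard part, of course, is not the arithmetic but choosing and justifying the weights, which is precisely where the discussion about what “academic impact” should mean comes in.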
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International Public License.
DOI Link: 10.16911/ethz-ib-1141-en