The Internet is changing how we measure research impact. Iain Hrynaszkiewicz from the Faculty of 1000 said that the Impact Factor (IF) has long been used as a measurement metric, but it is falling out of favor, as several quotations he presented showed.
The IF has both advantages and disadvantages. On the plus side, one does not need a database subscription to use it, and for a long time it was the only way we could measure impact. But it comes from a pre-digital age.
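For context, the standard two-year IF is just a ratio: citations received in a given year to a journal's articles from the previous two years, divided by the number of citable items the journal published in those two years. A minimal sketch of that arithmetic, using made-up numbers for a hypothetical journal, might look like:

```python
def impact_factor(citations_to_prior_two_years: int,
                  citable_items_prior_two_years: int) -> float:
    """Two-year Impact Factor: citations received in year Y to items
    published in years Y-1 and Y-2, divided by the number of citable
    items published in those two years."""
    return citations_to_prior_two_years / citable_items_prior_two_years

# Hypothetical journal: 450 citations in 2013 to its 2011-2012 articles,
# of which 300 were citable items.
print(impact_factor(450, 300))  # 1.5
```

Note that everything hinges on how "citable items" is counted, which is one reason the single journal-level number tells us so little about any individual article.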
We now hear much about alternative metrics (“altmetrics”). They have become important; the NSF now asks researchers to list “products” rather than “publications” in grant applications. Individual article-based citations are more useful and are becoming part of policy. We need to consider how science is done today and where its research results appear. A variety of tools work with altmetrics:
ImpactStory builds profiles for individual researchers and can handle not just articles but other products such as databases, videos, etc. Altmetric.com serves publishers and measures the “buzz” around an article from citations, social media, Mendeley, and the like, which lets us look at what the impact means. Altmetrics have their own advantages and disadvantages:
The meaning of the data, its context, is often missing. Faculty of 1000 provides not only numbers but human opinions as well: every paper it covers is considered a good one, and the rating explains the variations. Results are shown in the familiar Amazon or Yelp format. Studies have been conducted of what these non-traditional measures actually mean.
We are not just trying to recreate something based on a citation system that we already know is flawed. Some things still cannot be measured. What is impact? Why do scientists do research? They want their work to be read, but high readership is not necessarily high impact.
We now have many good new ways to measure the impact of papers, and we need to keep thinking about what all this data actually means. These new measures are beginning to be taken into account in institutional policy. All metrics have advantages and disadvantages, so let’s drop “alternative” and simply call everything “metrics”.
Beth Bernhardt from the University of North Carolina at Greensboro discussed why librarians need to know about metrics. It is important that they understand these new measurements, and creating a profile on ImpactStory is a good way to start learning. Librarians might also want to set up ORCID accounts for their researchers. Then they need to educate their users. (At UNC, Beth organized a talk entitled “Is Your Research Reaching Its Audience?”)
These metrics are still in their infancy and are not well known; faculty generally do not know about them. Educate faculty by showing them metrics for their own articles.
Individual conversations with researchers are the best way to educate faculty. Try to do things at non-busy times of the year (OA Week in mid-October this year was a bad time). When faculty are trying for tenure, they will be more receptive to hearing these concepts. Show them why metrics are important and what the future may hold for them. Support your Scholarly Communications librarians.
Don Hawkins blogs about conferences for Information Today and Against The Grain. He also maintains the Conference Calendar on the Information Today website and is the Editor of Personal Archiving: Preserving Our Digital Heritage, published by Information Today in 2013, and Co-Editor of Public Knowledge: Access and Benefits, published by Information Today in 2016. He received his Ph.D. degree from the University of California, Berkeley, and has worked in the information industry for over 45 years.