Measure for Measure: The Role of Metrics in Assessing Research Performance

Jun 7, 2013


Melinda Kenneway

Melinda Kenneway of TBI Communications moderated this session on metrics. The old metrics of citation counts and impact factors are being replaced by a number of new ones. In her introduction, Kenneway said that we are moving towards a world where measurement predominates, with a wide spread of approaches to research assessment. Peer review is still important, but the metrics are becoming increasingly interesting. Models are still evolving, and assessment now spans academic, social, and economic impact.

Metrics can be used for individual researcher assessment, which in turn underpins institutional performance. Publication performance remains a key indicator. There is a move towards assessing impact at the author and article level, and we now have access to a wide range of data that provides a much richer picture.

Should publishers be interested in metrics? Kenneway offered several reasons why they should.

William Gunn

William Gunn, Head of Academic Outreach at Mendeley, discussed some of the problems of alternative metrics. Is the work advancing the field? Does an institution have an effective research program? How do we measure advancement, and in what units? We need the numbers, and people like them. They may be genuinely objective, but it is important to be aware of their context. We must let research be research: the point of metrics is not to constrain people to a rigid path. But numbers are still important.

The opportunity in metrics is to help researchers make better decisions. We need to find out what researchers need and try to meet those needs. What do the different metrics tell us? We still don't know which metric is appropriate for a given application: for example, what does it mean for an article to have 200 tweets versus 20 citations? We know that citations do not correlate with reproducibility. If we have other metrics for credit and reward, we can learn things that citations alone cannot tell us.
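Whether something like 200 tweets tracks the same underlying signal as 20 citations is ultimately an empirical question. As a minimal sketch (in Python, with made-up numbers; the session itself included no code), one could check how strongly two metrics agree across the same set of articles using rank correlation:

    from scipy.stats import spearmanr

    # Hypothetical per-article counts; real data would come from an
    # altmetrics provider and a citation index.
    tweets = [200, 15, 0, 42, 7, 120, 3]
    citations = [20, 12, 1, 30, 2, 8, 0]

    # Spearman rank correlation: do articles that rank high on one
    # metric also tend to rank high on the other?
    rho, p_value = spearmanr(tweets, citations)
    print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")

A low correlation would suggest the two metrics capture different kinds of attention, which is exactly why a single number cannot serve every application.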

Sharing data is important. One study estimated that articles whose authors make the underlying data available are cited about twice as frequently as articles with no data. And Tim Berners-Lee said, “The state of knowledge of the human race is sitting in scientists’ computers and is currently not shared…We need to get it unlocked so we can tackle those huge problems.” Tim O’Reilly said, “Preservation should be baked into the tools that we use.”

We must understand the context of the data and get better at understanding researcher needs. Some practical tips: collect lots of data, build expertise in analysis, and make that data openly available (a sketch of the last tip follows).
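To make the last tip concrete, here is a minimal sketch (in Python; the record fields and file name are hypothetical) of publishing collected metrics as a plain CSV file that others can reuse:

    import csv

    # Hypothetical collected metrics; in practice these would come
    # from a publisher's own logs or an external API.
    records = [
        {"doi": "10.1000/example.1", "downloads": 512, "readers": 87},
        {"doi": "10.1000/example.2", "downloads": 129, "readers": 14},
    ]

    # Write the records to a CSV file so the data can be shared openly.
    with open("metrics.csv", "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["doi", "downloads", "readers"])
        writer.writeheader()
        writer.writerows(records)

A flat, well-documented format like CSV keeps the barrier to reuse low.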

Kristi Holmes

Kristi Holmes from the Becker Medical Library at Washington University asked why we measure things. What things do people typically count, and what should we measure? How do we measure what matters?

Citation analysis is the traditionally used metric in academia, resting on the assumption that significant effort will result in a high citation count. But do the numbers tell the true story? The Research Excellence Framework (REF) in the UK complements traditional output measures of scientific research with indicators of social and economic impact. The Becker model involves tracking research output to locate indicators that demonstrate evidence of research impact; more information is available on the Becker project website. A number of issues and challenges have arisen in this work.

Michael Habib

Michael Habib, Product Manager for Scopus, said that it is sometimes difficult to decide which metric to use. Considerations include what level and type of impact you are assessing and what methods are available to measure it. Citations are still important, but they are not the same as altmetrics.

The impact factor and h-index are the most widely known metrics in use by researchers. Publishers should greatly reduce their use of the journal impact factor as a promotional tool and instead use a variety of journal-based metrics to provide a richer view of journal performance. Two newer metrics, SNIP (Source Normalized Impact per Paper) and SJR (SCImago Journal Rank), both developed in Europe, are showing advantages over the traditional impact factor; they should be embedded on journal home pages. Elsevier has made this easy by providing code snippets that can be added to a page’s HTML.
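For readers unfamiliar with the h-index, here is a minimal sketch (in Python; the talk itself presented no code) of its standard definition: a researcher has index h if h of their papers have at least h citations each.

    def h_index(citations):
        """Return the largest h such that at least h papers
        have h or more citations each."""
        ranked = sorted(citations, reverse=True)
        h = 0
        for rank, count in enumerate(ranked, start=1):
            if count >= rank:
                h = rank
            else:
                break
        return h

    # Five papers with these citation counts yield an h-index of 3.
    print(h_index([10, 8, 5, 2, 1]))  # -> 3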

Snowball Metrics is an initiative that provides agreed metrics to support universities’ decision-making processes. Its “recipe book” is freely available.

