The Hyde Park Debate—Resolved: Altmetrics Are Overrated

The Hyde Park Debate has become one of the highlights of the Charleston Conference and is always thought-provoking and entertaining. This year’s debaters were Derek Law, Professor Emeritus, University of Strathclyde, Scotland, who was in favor of the resolution; and Maria Bonn, Senior Lecturer, Graduate School of Library and Information Science, University of Illinois, who took the opposing view.  Rick Anderson, Associate Dean for Collections and Scholarly Communication, University of Utah (and a debater at previous conferences) was the Moderator.

First, the audience was asked to vote on the resolution, following which the debaters each made 10-minute presentations and then 3-minute rebuttals, followed by a question-and-answer period with the audience and the moderator. A final vote was then taken, and according to the rules of the debate, the winner was the one who moved the most people to his or her position.

Hyde Park Debaters (L-R): Rick Anderson (Moderator), Maria Bonn, Derek Law

The vote before the debate was almost even, with 44 people in favor and 43 opposed (the closest opening poll in the history of this event).

Note: The full debate was live-streamed, and a recording is available online.

Here is an edited transcript of the debaters’ presentations:

Derek Law:

In considering a vote, it is not necessary to believe that altmetrics are wrong or a ridiculous delusion, only that they are overrated. The fundamental problem is that they focus on what is measurable, not on what is important. They make quantitative, not qualitative, measurements. On the Altmetrics Manifesto website, there is uncertainty about what the term even means. The most contentious issue seems to be whether there is a dash between “alt” and “metrics”!

Much of the world’s literature is ignored by altmetrics; they rely on a very small data set of about 3.5 million articles that excludes books, conference papers, chapters, etc. Altmetrics rely on crowdsourcing rather than expert opinion. After 5 years, we still don’t have a clear idea of what we are measuring; different measurements give different results for the same scholarly papers. There are already numerous publishers willing to publish papers popularizing altmetrics rather than addressing real topics that matter. Metrics can be manipulated; the web is awash with gaming and advice on how to get a higher ranking in search results, and Amazon is taking action against people hired to write high-scoring reviews for a fee. The same thing will happen with altmetrics. There is only a very limited correlation between altmetrics and citations, but there is a high correlation between a high altmetric score and the appearance of a paper in Retraction Watch. It is therefore unclear what altmetrics are; they measure what can be measured, not what is important; they rely on populism rather than quality and are aimed at bureaucrats and funding agencies; and they can be manipulated to influence outcomes and one’s personal status. They are indeed overrated.

Maria Bonn:

The problem with the resolution is not its absurdity but its phrasing.  Altmetrics are not overrated; all metrics are overrated. Altmetrics arose in response to simplified citation counting and impact factors. They are a symptom of the same cultural disease they are seeking to cure. In our desire for numbers by which we can measure the contribution of scholarship, we believe that truth only emerges from numbers, and classification becomes almost a form of religious expression. We place our faith in counting, but numbers without interpretation have only a limited value in understanding. If you want a good argument or to tell a good story, you need to substantiate your claims. As an information humanist, I value interpretation, critique, and critical approaches to data rather than the data itself. Like any good knowledge worker, I want to understand the fields in which I work and share that understanding with others by telling good stories about the value and reach of scholarship. I need material, and metrics of all kinds are good material.

According to its website, ImpactStory is “an open source web-based tool that helps scientists share the impact of their research. By helping scientists tell data-driven stories about their impact, we are building a new scientific reward system that values and encourages native scholarship.” We are finally arriving at the narrative of claims needed to measure scholarly impact, but we have yet to hear many of the resulting stories. In the summer of 2014, the Journal of Electronic Publishing (which uses altmetrics) published a special issue on “Metrics for Evaluating Publishing Value, Alternative or Otherwise”. In response to the call for papers, we saw a variety of ways of measuring value or telling stories about the attention received by publications and about the use to which scholarship is put. All are ways of making an impact case, and all require interpretation to be meaningful. The scenarios may be seductive, but where are the numbers? We must substantiate the stories and allow others to judge what stories the numbers are telling. I want the words and the numbers: lots of numbers of lots of kinds.

Altmetrics are not overrated but they are underused. By all means, establish and use altmetrics, and then help our scholars to use them. Analyze, interpret, and argue—with supporting evidence. Let us make use of the tools we have at hand and become good storytellers.

The final vote was 77 in favor and 51 opposed, so the winner of the debate was Derek Law.

Don Hawkins
Charleston 2015 Conference Blogger and Against The Grain Columnist

