v23 #5 A Proposal to Improve and Expand Access to Electronic Resources through Per-Use Pricing

by Peter McCracken  (ShipIndex.org)  <peter@shipindex.org>

Now is as good a time as any to introduce a fundamental shift in how libraries purchase electronic content.  The combination of a dramatic increase in patron-driven acquisition, the rapid uptake of Web-scale discovery layers, the continuing push toward electronic delivery of most content, and the persistently bleak outlook for library budgets makes this the right moment for such a change, especially one that benefits nearly every member of the electronic content supply chain: content providers, discovery layer vendors, library budgets, and, most especially, library patrons.

Two years ago I left Serials Solutions, which I’d co-founded in 2000, to turn to a project that predates Serials Solutions and is even closer to my heart than electronic resource management.  I’ve always had an interest in maritime history: I studied at Mystic Seaport as an undergraduate, completed a Master’s in Maritime History at East Carolina University after finishing my MSLS, and eventually worked at, presented at conferences at, and even got married at Mystic Seaport.  I find maritime history incredibly important in the development of the human experience, and the history of vessels is, in my opinion, as important as the history of any other mainstream subject.  But I knew, from my own experience working in a small maritime museum library, that it was very hard to research the history of a particular vessel, because there was no index to maritime resources.  I started building a Website that tells users which books, journals, Websites, databases, CD-ROMs, and more mention specific ships.  It’s essentially a Biography & Genealogy Master Index for ships.

Alas, I soon discovered that public and academic librarians were not as enamored with the site, now called ShipIndex.org, as I was.  Because patrons had not been clamoring for this type of index, even libraries with strong genealogy or maritime collections have been slow to adopt the service, or have ignored it completely.  The continuing economic downturn, which has obviously hurt most library budgets, has not made it any easier for a library to take a risk on an unknown service that fills a need it doesn’t feel needs to be filled.  I understand the concerns and objections librarians raise when considering this product.  I don’t, however, accept those objections.  I believe that, given the opportunity, a meaningful number of patrons will find this resource invaluable, and that it will help them, and the librarians who assist them, make important new discoveries that simply wouldn’t otherwise occur.  As I wondered how to make this service more acceptable and available to librarians, I realized that a combination of many different factors has created an opportunity for dramatically improved access for library patrons.  Conversations with a range of librarians, database vendors, and discovery layer vendors have further convinced me that this would be a huge improvement for nearly everyone in the electronic resources supply chain.

Up to now, essentially all library resources have been purchased through a “buffet” approach, in which an institution pays a set annual fee and has unlimited access to a database.  Some services, most notably DIALOG and Lexis/Nexis, were pay-per-use, but the shift to an annual fee for unlimited access has been the dominant approach over the past decade.  I believe that a limited shift in the opposite direction would be beneficial to all.

An “a-la-carte,” or pay-per-use, approach to selected electronic resources would be a great boon to both library budgets and the patrons they serve.  A library might feel that a certain database is not particularly useful, or that it is paying more than the database is worth, but still not want to lose the database completely; consider, for example, a database that’s being purchased solely to appease one or two faculty members’ egos.  Pay-per-use access resolves that tension, across many different databases, while freeing up funds for more efficient use.

The rapid introduction of Web-scale discovery layers, such as Serials Solutions’ Summon,1 EBSCO’s EBSCO Discovery Service, Ex Libris’ Primo Central, OCLC’s WorldCat Local, and others, provides the perfect layer for applying an a la carte approach across large swaths of data.  All of the technology needed to make this work is already available; all that remains is some modification to discovery layers’ administrative interfaces, some additional data tracking and reporting, and, most importantly, a willingness among libraries and content providers to try something new.

In this new paradigm, the discovery layer’s administrative interface offers libraries three choices for each database it indexes: a library can indicate that it subscribes to the database, that it wants to offer pay-per-use access to the database, or that it doesn’t want to offer the database to its patrons.  When the library has a direct subscription to the database, its contents are made available to anyone from that library accessing the library’s collections through the discovery layer, as is done today.  When the library does not want its patrons to see a database’s contents, those contents are hidden from patrons using the discovery layer, as is also done today.  (Patrons can, of course, choose to see all results in the discovery layer’s index, even if they cannot access that content through their library.)  But when a library wants to offer content from a database that it thinks might be useful, without subscribing to it directly, it could, in this model, offer access to the database on a pay-per-use basis.  Any database for which the library feels it is not getting full value from its subscription dollars can be switched to this model as well.  In many cases, the library will find that the cumulative annual cost of per-use access is far lower than the cost of the annual “buffet” subscription.
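To make those three options concrete, here is a minimal sketch of how a discovery layer might represent the per-database setting for one library.  The database names, the default behavior for unconfigured databases, and the simple visibility and billing checks are illustrative assumptions, not any vendor’s actual implementation.

```python
from enum import Enum

class AccessMode(Enum):
    SUBSCRIBED = "subscribed"      # annual "buffet" subscription; no per-use charges
    PAY_PER_USE = "pay_per_use"    # a la carte; charged per view and per click
    HIDDEN = "hidden"              # not offered to this library's patrons

# Hypothetical per-library settings, as they might be stored after a librarian
# makes these choices in the discovery layer's administrative interface.
library_config = {
    "Core Periodicals Index": AccessMode.SUBSCRIBED,
    "Niche Maritime Database": AccessMode.PAY_PER_USE,
    "Out-of-Scope Technical Database": AccessMode.HIDDEN,
}

def is_visible(database: str) -> bool:
    """Results are shown unless the library has chosen to hide the database."""
    return library_config.get(database, AccessMode.HIDDEN) != AccessMode.HIDDEN

def is_billable_per_use(database: str) -> bool:
    """Per-use (PPV/PPC) charges apply only to databases offered a la carte."""
    return library_config.get(database) == AccessMode.PAY_PER_USE
```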

Since libraries pay on a per-use basis, tracking that usage is critical, but also straightforward: standard Web advertising software will allow the discovery layer to track how often results from a given database are presented to a library’s patrons.  Results pages will show a combination of buffet and pay-per-use results.  For the pay-per-use results, a library will be charged a small fee each time citations from one of those databases appear in the results set: a PPV, or pay-per-view, charge.  When a patron clicks on one of those citations, the discovery layer will count a PPC, or pay-per-click, charge against the library’s account.  On a monthly or quarterly basis, all of those PPV and PPC charges, plus a percentage-based service charge for the discovery layer (perhaps around 4%?), will be billed to the library.  There is no charge, beyond the standard fee for subscribing to the discovery layer, for results from databases to which the library already subscribes directly.  Overall, I predict that libraries will most likely spend less money, and almost certainly get far more relevant and useful results, when a certain percentage of the library’s databases are converted from buffet access to a la carte access.
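As a rough illustration of that billing arithmetic, the sketch below totals a month’s PPV and PPC charges and adds the percentage-based service fee.  All prices and usage counts are invented for the example, and the 4% rate is simply the figure suggested above.

```python
# Illustrative only: prices and usage counts below are invented for the example.
PPV_PRICE = {"Niche Maritime Database": 0.05}   # per citation displayed in a results set
PPC_PRICE = {"Niche Maritime Database": 1.50}   # per citation clicked by a patron
SERVICE_FEE_RATE = 0.04                         # the roughly 4% discovery layer service charge

def monthly_invoice(views: dict, clicks: dict) -> float:
    """Total one month's PPV and PPC charges for a-la-carte databases, plus the service fee."""
    usage = sum(count * PPV_PRICE.get(db, 0.0) for db, count in views.items())
    usage += sum(count * PPC_PRICE.get(db, 0.0) for db, count in clicks.items())
    return round(usage * (1 + SERVICE_FEE_RATE), 2)

# Example: 2,000 displayed citations and 40 click-throughs in a month:
# 2,000 * $0.05 + 40 * $1.50 = $160.00, plus the 4% service fee = $166.40
print(monthly_invoice({"Niche Maritime Database": 2000},
                      {"Niche Maritime Database": 40}))
```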

Libraries will be protected from unlimited per-use costs by a cap of, say, 120% of the database’s annual cost for the library in question.  If the library discovers that its patrons are making significant use of a particular database, it can choose to subscribe directly.  If it does subscribe directly, the database’s content will continue to be available through the library’s discovery layer, but the library will no longer pay each time the content appears.  The library will also have access through the database vendor’s interface, which, in most cases, will provide additional functionality that isn’t available through the discovery layer.  Usage will have shown that it is in the library’s best financial interest to subscribe directly, and the library will benefit from the additional functionality offered by direct access.  (The vendor will also benefit: usage data from the discovery layer will help the content provider find its ideal price point, based on its usage expectations.)
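A minimal sketch of how that cap might be enforced, assuming the 120% figure suggested above and a hypothetical annual list price:

```python
CAP_RATE = 1.20  # per-use charges stop at 120% of the database's annual price for that library

def billable_amount(year_to_date: float, new_charges: float, annual_price: float) -> float:
    """Return how much of a period's new per-use charges can still be billed under the cap."""
    remaining = max(0.0, annual_price * CAP_RATE - year_to_date)
    return min(new_charges, remaining)

# With a hypothetical $5,000 annual price, the cap is $6,000: a library that has
# already accrued $5,900 in charges is billed only $100 of a further $250.
print(billable_amount(year_to_date=5900.0, new_charges=250.0, annual_price=5000.0))  # 100.0
```

Once the cap is reached, the library has effectively demonstrated to itself that a direct subscription is the cheaper option.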

Smaller libraries will benefit greatly from this arrangement.  They will be able to offer more databases to their patrons without investing more money.  Since smaller libraries will see lower usage of the less-valuable databases, they can take more risks in which ones they offer.  While they could switch many databases to a pay-per-use model as soon as it’s offered, it might make more sense for them to review the databases they currently offer, determine which have the highest cost per use, compare that with the costs charged via the discovery layer, and slowly switch databases over to a pay-per-use model.  At the same time, they can review their “wish list” and add many new databases as well.  Since they will be saving money on the rarely-used databases, they’ll have more dollars available to spend on new ones.
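One way to run that review is sketched below: compare each subscribed database’s current cost per use with what the same year’s usage would have cost a la carte.  The dollar figures and per-use prices here are hypothetical.

```python
def buffet_cost_per_use(annual_cost: float, annual_uses: int) -> float:
    """Current cost per use under an annual 'buffet' subscription."""
    return annual_cost / annual_uses if annual_uses else float("inf")

def a_la_carte_estimate(views: int, clicks: int, ppv_price: float, ppc_price: float) -> float:
    """What the same year's usage would cost if billed per view and per click."""
    return views * ppv_price + clicks * ppc_price

# Hypothetical example: a $3,000 database clicked through 150 times a year costs $20
# per use as a subscription; billed a la carte (1,200 views at $0.05 and 150 clicks
# at $1.50), the same usage would cost about $285, making it a strong candidate to switch.
print(buffet_cost_per_use(3000.0, 150))              # 20.0
print(a_la_carte_estimate(1200, 150, 0.05, 1.50))    # 285.0
```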

Large libraries will similarly benefit.  While they generally purchase access to many more databases than small libraries, those databases are often used much less.  Large libraries tend to pursue more of a “just-in-case” model than smaller libraries do, so they’ll be able to find significant cost savings among those less-used databases without giving up access to them.  Using the a la carte model suggested here, large libraries can, like their smaller cousins, use their database dollars more effectively, spending only when individuals use each database.

Large libraries would lose access to these databases via the native interfaces, but the content would still be available through the discovery layer.  Apart from losing access through the native interface, patrons would not know how their libraries pay for access.  The results screens in the discovery layer, for instance, would give no indication at all as to which databases a library receives via buffet access and which are paid for on a per-use basis.

The discovery layer vendors benefit greatly from this proposal: not only do they gain a new recurring income stream in the service fees that cover the costs of managing the pay-per-use databases, but they also further solidify their role as the de facto core search engine for the library.

While it is perhaps less certain, I believe that most niche content providers will benefit from this service as well.  High-use, core databases, such as the standard EBSCO, ProQuest, Gale, and JSTOR databases, would likely not be offered on a pay-per-use basis (unless their subject area is far outside a given library’s focus).  Databases with a narrower focus, however, would benefit in the long term.  In the short term, they may see a significant loss of buffet subscriptions as libraries move such databases to the pay-per-use model.  Over time, however, they will find more uptake among libraries willing to try out a database with minimal financial risk.  Sales efforts, for example, will focus not on getting a library to subscribe directly to a database and pay a set amount to the content provider, but on getting a library to add the database to its discovery layer’s low-risk pay-per-use collection.  If the database is not used, there will be no cost to the library.  If the content is good, and the database’s results appear often and are clicked on often, the content provider will benefit greatly.  In the end, far more of the provider’s revenue will come from libraries via discovery layers rather than directly.  Payments will also arrive more regularly: discovery layers will pay vendors monthly or quarterly, rather than the annual payments that come from libraries.

One of the most important beneficiaries of this proposed system, however, is the library patron.  If implemented effectively and correctly, a patron doing a search in a discovery layer at a small institution will see nearly identical results to a patron doing the same search in the same discovery layer at a much larger institution.  How the libraries pay for that content will be very different, but that will not matter to the patron.  And most importantly, when the libraries do pay for this content, they will be paying in a much more effective and efficient manner: the money they spend will align far more closely with what their patrons use.

A central tenet of patron-driven acquisition is that librarians simply cannot know with certainty what resources patrons will need.  It is worth it, whenever financially and practically possible, to offer as many of those resources as we can.  This solution makes that possible.

The proposal is not perfect.  First, databases that are not available via discovery layers would not be available for pay-per-use access.  But as discovery moves ever more quickly to Web-scale layers, it will behoove all but the most complacent database vendors to make their content available through multiple discovery layers.  Expensive databases that are rarely used but even more rarely canceled will be in trouble: if they cannot justify the content they offer, they will likely be the big losers in this scenario.

Pricing will be an interesting challenge, and will certainly take some time to work out.  Not only will vendors need to set prices for views and clicks, they’ll also need to set a list price for each library that adds their database a la carte, since the usage cap described above is calculated against that annual price.  It would make the most sense for vendors to set a standard price for views and clicks for each database, and not vary that price by institution.  (PPV and PPC prices will, certainly, vary from database to database.)  Perhaps some sort of percentage discount or surcharge could be applied on an institution-by-institution basis, to address currency exchange inequities or other institutional subscription variations.
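As a purely illustrative example of that last idea, a discovery layer could store a vendor’s standard per-view and per-click prices and apply a single institution-level percentage adjustment; the rates below are invented and reflect no actual vendor pricing.

```python
def adjusted_price(standard_price: float, adjustment_rate: float) -> float:
    """Apply an institutional discount (negative rate) or surcharge (positive rate)
    to a vendor's standard per-view or per-click price."""
    return round(standard_price * (1 + adjustment_rate), 4)

# A hypothetical $1.50 standard per-click price with a 10% institutional discount:
print(adjusted_price(1.50, -0.10))  # 1.35
```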

There’s no doubt that many content providers will see an initial drop in subscription revenue as libraries move from buffet access to a la carte access whenever they feel their bottom line will benefit from the switch.  But at the same time, many more libraries will be willing to try offering access to a narrowly-focused database, since they will have nothing to lose if no one uses it.  Instead of always feeling that they have no dollars with which to add new resources, libraries will be able to easily try out many different databases at once and see which ones their patrons actually use.  Over the course of several years, it will become readily apparent which databases have content that is actually meaningful and useful to library patrons, and which do not.  If a database is useful to only a small number of users, that’s no longer a problem: the library can add access to the database in question and pay only for the limited use that occurs.  A library can now offer content as varied as its population and their interests, rather than only the content that primarily serves the center of its user base.

What is necessary for this to happen?  Discovery layer vendors must build administrative tools that can track PPV and PPC statistics and fees for each database in their collections, record which databases each library offers in which fashion, track discounts offered by content providers to libraries, bill libraries for usage on a monthly or quarterly basis, distribute funds to content providers on a similar schedule, and much more.  Libraries must be willing to try this new approach and be comfortable with much of their content spending reaching content providers by way of their chosen discovery layer; they must also do a fair bit of soul-searching about which databases to switch from buffet access to a la carte access, and which additional databases to offer patrons through this system.  Content providers must be willing to take a risk with the data they offer to their subscribers.  They must be comfortable enough with the quality of their content to accept that, unless theirs is a core database, many libraries will drop direct subscriptions.  On the other hand, they should be able to trust that other libraries will be willing to try their databases on a pay-per-use basis.  I believe that content providers with good content would find many more users, and thus more revenue, through institutions that don’t yet realize their patrons need that content.  By getting content into discovery layers, getting those discovery layers into use among many patrons, and making their content available even to those who wouldn’t otherwise subscribe, vendors with quality content will, I believe, see usage and revenue increase.
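To make the flow of money concrete, here is a minimal sketch of how one billing period might be settled among a library, the discovery layer, and the content providers.  The structure, provider names, and figures are assumptions for illustration, not any vendor’s actual process.

```python
SERVICE_FEE_RATE = 0.04  # the percentage-based service charge suggested earlier

def settle_period(charges_by_provider: dict) -> dict:
    """Close out one monthly or quarterly billing period: the library owes the accrued
    per-use charges plus the service fee, each content provider is paid its charges,
    and the discovery layer keeps the fee."""
    usage_total = sum(charges_by_provider.values())
    return {
        "bill_to_library": round(usage_total * (1 + SERVICE_FEE_RATE), 2),
        "pay_to_providers": charges_by_provider,
        "discovery_layer_fee": round(usage_total * SERVICE_FEE_RATE, 2),
    }

# Example with two hypothetical niche providers and one quarter's accrued charges:
print(settle_period({"Niche History Vendor": 420.00, "Regional Newspaper Archive": 85.50}))
```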

It is difficult for me to know whether this proposal will come to pass.  I am certainly not in a position to make it so, and while I have proposed it to a number of different discovery layer vendors, I have not seen a response that suggests any vendor is actively working on it.  But in the spirit (though not the complexity) of Adam Chandler and Tim Jewell’s DLF-ERMI documents,2 in which they outlined what they felt vendors should offer when creating an ERM module, I aim to define what I believe would benefit all members of the electronic resources supply chain, and I hope it will generate discussion, further improvements, and perhaps eventually a new delivery mechanism that helps all library patrons find the resources they seek.  Especially in maritime history.

Endnotes

1.  Though I co-founded Serials Solutions, I was not directly involved in Summon development, and I have had no connection with its promotion or sale since I left the company in September 2009.

2.  Tim Jewell, et al.  Electronic Resource Management: Report of the DLF ERM Initiative.  Washington, DC: Digital Library Federation, 2004.  http://old.diglib.org/pubs/dlf102/ (accessed 25 July 2011).
