Out and About: Reports of Information Industry Meetings
By Donald T. Hawkins
NFAIS held a workshop on June 21 in Philadelphia that examined the consumption of digital content. As the number of devices owned by consumers has grown, so have the diversity of content and the barriers to ease of use. The workshop featured speakers from academic institutions and content providers discussing various aspects of their experiences.
The Landscape of Digital Content Consumption
Ann Michael, President of Delta Think, a publishing and digital media consulting firm, led off with an overview of the landscape of digital content consumption. She said that in the beginning, we had print, but its first progression to digital was just a simple repurposing of the print, and in many cases functionality was actually lost. When the realization dawned that the print could be enhanced, content started to come out of silos and changes occurred.
Now we have new delivery channels, especially mobile, and users’ expectations have changed and expanded. The voice of the customer keeps getting stronger and louder; their opinions come to us whether we are prepared for them or not. And the rate of change continues to increase. Historically, innovations in publishing focused on more efficient composition and print processes; now, technology is enabling faster iterations in content, hardware, and software development.
As digital media continue to evolve, challenges increase. For every new technology that appears, changes must be considered by publishers and libraries. Considerations include:
- Should a technology be adopted immediately or only after the hype around it has peaked?
- How likely is it that new things will become standards?
- How can new technologies become part of a library’s ecosystem?
- How will they affect current practices and how will they be managed?
- Will new technologies enhance library users’ experiences?
We must begin thinking about how to deal with content diversity. Do we need to get feedback from users before we integrate new content into our systems and collections? What kind of content must be produced to meet user needs? What is the best device on which to display it? These and other questions will continue to influence publishers, content providers, and libraries. We tend to get stuck on “How will X replace Y?”, but the right question is “Where does X fit into the mix that we already have?” There are many ways that people want to consume content, and how they want to use it drives how they consume it.
Think about a portfolio of content and how it matches the needs of your users. Creating content for diverse devices presents many challenges. Today’s emphasis is on responsive design, which really means asking, “as screen sizes change, what will users likely accomplish on a screen of that size, and which functions should be featured prominently?” We understand how people have used content, but we do not always understand how they want to use mobile devices. Different uses drive different requirements, but often the same or similar content types and formats are used to fulfill diverse needs. “Operationalizing” mobile technology is a challenge, and the uses of mobile are still emerging.
Even though we operate in professional markets, we must remember that our users are consumers and are influenced by the consumer marketplace. They continue to want more and more, and they need more customization and personalization. We are like spoiled children: we want content all the time, on any device, even at 2 AM! The idea of access being restricted to specific times is almost ludicrous to many people. Users also want enhanced discoverability, which means more than simply tagging content; it means that people must know about the content.
Satisfying users’ needs is difficult. Interoperability requires clear standards, and we must be able to move content between devices. Organizations want their content on third-party platforms. This seems hard now, but more is coming because nothing stands still in the current environment. Analytics, workflow, and the user experience are all areas that need our focus.
Trends in E-Book Use and Acceptance
It seems as if every conference on content and information has a session on e-books, and this workshop was no exception. Deborah Lenares, Manager of Acquisitions and Resource Sharing at Wellesley College, reported on a survey of e-book use and acceptance that was recently done there. The survey reached 1,661 students and faculty and had a 57% response rate. 72% of the respondents reported recent use of e-books. Faculty acceptance of e-books was slightly higher than student acceptance. Faculty members are more likely to have purchased e-books; students tend to use those that are free, which is not surprising. Most people read e-books on a computer or laptop; 63% of student respondents and 45% of faculty reported reading at least a chapter of an e-book. Readers not owning a reading device tend to simply skim e-book material. Many readers print out material from e-books, particularly if they need more than 10 pages.
Sometimes changes are so far-reaching that organizations must rethink their strategies and do things they have not done in the past. As Bob Boissy, Manager of Account Development and Strategic Alliances at Springer Science + Business Media, reported, that has been Springer’s experience. They found that they not only had to enter new areas, but at times even had to direct customers to non-Springer products. One symptom of the changes was that e-book usage increased while Google referrals declined. Springer’s response was to emphasize and encourage discovery by creating MARC records for its products, implementing full-text indexing, and creating a dedicated content management and metadata staff. And Boissy revealed Springer’s secret strategy: hire librarians for library-related tasks—an encouraging note (see the graphic below). He also noted that in many libraries, although collection development will continue to evolve, it will be less about collection and selection and more about finding ways to use what they already have.
Springer’s “Secret Strategy”
According to Boissy, Springer does the following things now that it was not previously doing:
- Be in the MARC records business. Offer these records as part of the sale, not separately, and constantly improve them,
- Increasingly provide post-sale technical support and support for proxy servers, OpenURLs, and link resolvers, especially for smaller libraries that do not have technical services librarians,
- Learn about and care about all sorts of discovery services, and
- Keep up with metadata and knowledge base services, and know what to tell the client to select, even if it is one of another company’s services.
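The OpenURL support Boissy mentions is straightforward to illustrate: an OpenURL is a context-sensitive link in key/encoded-value (KEV) form that a library’s link resolver turns into access to an appropriate copy of the item. A minimal sketch in Python (the resolver address and book details are hypothetical, not any particular library’s values):

```python
from urllib.parse import urlencode

def build_openurl(resolver_base, title, isbn):
    """Build an OpenURL 1.0 (KEV) link for a book; resolver_base is hypothetical."""
    params = {
        "ctx_ver": "Z39.88-2004",                    # OpenURL 1.0 context version
        "rft_val_fmt": "info:ofi/fmt:kev:mtx:book",  # the referent is a book
        "rft.btitle": title,                         # book title
        "rft.isbn": isbn,                            # ISBN of the wanted edition
    }
    return resolver_base + "?" + urlencode(params)

url = build_openurl("https://resolver.example.edu/openurl",
                    "Digital Content Strategies", "978-0-00-000000-0")
print(url)
```

The resolver consults the library’s knowledge base to decide where (print, e-book platform, interlibrary loan) to send the reader, which is why publishers must keep that metadata current.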
The e-book session concluded with a presentation by Edward Reiner from the School of Continuing and Professional Studies at New York University. He said that there are three problems in today’s e-book market:
- Many providers have complicated the market with mixed messages about value, utilization, access, and pricing,
- There is a lack of clarity about what librarians are buying and what students want, and
- The buying public is confused about the state of e-books.
Reiner said that while library budgets frequently fund acquisitions of journals in physical sciences, business, and related disciplines, the humanities are often not as well funded. Many colleges are cutting down on their humanities offerings, and many of their libraries are not clear about the directions of the e-book market. According to Reiner, many librarians cannot distinguish e-book platforms and providers, and many colleges do not have an e-book strategy. Librarians are being inundated and confused with marketing and sales efforts that frequently convey different messages.
Reiner conducted a study of academic librarians and students at a wide range of institutions and found a strong preference for buying over licensing. Cost is the strongest influence in this decision. Off-campus access to e-books is expected, and more tools are needed to enhance the e-book experience. Students balk at buying expensive textbooks and hate carrying them to classes. They use their computers or iPads in class and want to read books on them. They frequently expressed an interest in reading printed consumer books “for fun”. (Detailed results of the study are in a white paper at http://www.humanitiesebook.org/help/heb-whitepaper-4.html.)
It is very difficult to convert e-books from one format to another, and finding a steady state for pricing is also difficult. Buying a collection is very different from buying individual titles. Some librarians are permitted only to purchase materials and cannot license them, but some publishers are set up only to license, which causes major problems. Licensing appeals more to small institutions with small budgets.
Fostering Use: The Mobile Environment
The afternoon began with two sessions on fostering use, the first on the mobile environment and the second on content sharing, annotation, and review. Stephen Rhind-Tutt, President of Alexander Street Press (ASP), began by noting that mobile phones now outnumber desktop PCs in China, and information is now being created with mobile phones. In 2009, only 20% of mobile phone users thought phones would be useful as study aids; now, according to Pew Research, 90% say they use phones to read books, do searches, and do research. The next step will be an evolution into native mobile applications based on books.
Rhind-Tutt said that measuring usage on mobile devices is very fluid. What does it mean to measure usage? Today’s metrics have come from books, but performance—how effectively a service performs the task for which it was developed—is a better metric. ASP delivers clips from videos from which users can make playlists. That does not equate to usage as typically measured by libraries. How will mobile play in the standard publishing model, and what is good performance?
ASP has recognized that because they are a small publisher, they cannot compete with the large ones, so they make sure that they have an extremely robust platform that plays well with all of their content and includes APIs that allow people to use their tools. Today, access and usage are not the main part of the game, but tomorrow they will be the whole game! Everyone will have a mobile, and it will be in their pocket!
Founded in 1876, the American Chemical Society (ACS) is a major professional organization with a staff of 2,000 and over 163,000 members. Melissa Blaney, Assistant Director, Platform Advertising and Analytics, described ACS’s Web Editions Platform, which hosts a digital archive of all articles published in ACS journals since 1879 (over 465,000 articles), more than 27,000 chapters from ACS books, and the full text of the archives of over 100,000 news stories and articles from Chemical & Engineering News, ACS’s flagship journal distributed to all its members. The ACS Mobile App features new articles from ACS journals the moment they are published to the web. Initially available on iOS, an Android version was launched in 2011; both versions became free to access in January 2012.
In two comprehensive reader surveys conducted in 2009, respondents reported that they are likely to read web editions of scientific journals at least weekly, if not daily. About 60% of the readers identify articles by browsing online tables of contents, and about half also depend on e-mail alerts. Three key trends driving the design of the ACS Mobile App are growth in abstract usage, peak usage near the web publication date, and the challenges of keeping current with the literature. Scientists’ reading habits are changing: they read more articles every year, but they spend less time on each article. Thus, publishers like ACS must design their mobile apps for “on the go” reading, multi-tasking, and the extended usage times provided by mobile devices. Publisher platforms must provide quick, easy, and accurate searching; links to related content; and personalization options.
Considerations for future developments include mobile authentication providing the ability to access institutional subscriptions from mobile devices (which is difficult in current access models), different navigation options, mobile-friendly formats to accommodate additional platform features (such as elimination of Flash in favor of HTML5), and mobile marketing and advertising.
Ron Burns, Vice President of Global Software Services at EBSCO, concluded the session on mobile usage. He said that most graphs of mobile usage look similar, showing rapid growth in recent years, and he referred to the widely read annual presentation of Internet Trends by Mary Meeker of Kleiner Perkins Caufield Byers (http://allthingsd.com/20130529/mary-meekers-internet-trends-report-is-back-at-d11-slides/), which reported that mobile usage now accounts for over 15% of Internet traffic.
EBSCO’s mobile platform was reinvented last year with three goals:
- Promote usage growth and provide a better user experience,
- Create a single code base to easily keep up with new platform features, and
- Maximize user flexibility for mobile-specific solutions (especially in the medical area).
This required the ability to take advantage of key trends: convergence of apps and mobile websites, open standards and the anti-DRM movement, HTML5, and responsive website design (the capability to adapt the display of content to different user devices and screen sizes). The process requires avoiding hybrid designs and using default designs for the user interface. EBSCO serves several different types of users, and it is important to understand their different backgrounds, skills, and experiences, as well as what they have in common.
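At its core, responsive design is a mapping from viewport size to layout. A minimal sketch of that breakpoint logic (the thresholds and layout names here are illustrative assumptions, not EBSCO’s actual values):

```python
# Illustrative breakpoints; the thresholds are assumptions, not EBSCO's actual values.
BREAKPOINTS = [
    (480, "phone"),    # viewports up to 480 px wide
    (1024, "tablet"),  # viewports up to 1024 px wide
]

def layout_for(width_px: int) -> str:
    """Map a viewport width to a layout class, as a responsive stylesheet would."""
    for max_width, layout in BREAKPOINTS:
        if width_px <= max_width:
            return layout
    return "desktop"  # the widest screens get the full desktop experience

print(layout_for(375), layout_for(768), layout_for(1440))  # phone tablet desktop
```

A production site expresses the same logic declaratively, in CSS media queries, rather than in application code; the single-code-base goal above is what makes that approach attractive.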
Mobile involves movement, so it is unpredictable, and there are large differences between devices. New developments should be tested as soon as possible, even before a prototype exists. Get up close and personal with users, and borrow design ideas from other industries. EBSCO’s new mobile discovery service, launched last year, provides automatic detection of users’ devices. It meshes seamlessly with library websites and allows users to set preferences and save articles in the cloud for later reading. iPads are also given the desktop experience, and APIs are provided so that customers can build their own mobile apps. (See Burns’ slides on the NFAIS website for screenshots.) EBSCO will continue to invest in its mobile website, improving its responsive design and creating new apps only when the functionality requires it.
Fostering Use: Enabling Content Sharing, Annotation, and Review
This session looked at two interesting systems that enable common research tasks to be done more effectively and efficiently. Paolo Ciccarese, architect of the Domeo annotation tool, began by stating that science is big. We are dealing with a huge number of sources, of which journals are only the “tip of the iceberg”. Journals are how scientific research becomes known and how scientists are recognized and promoted, but there are so many of them that researchers could spend all their time reading them. So scientists have organized themselves and are social. They publish their research, participate in conferences, network, and communicate with their colleagues via e-mail, blogs, screencasts, Twitter, and other methods. All scientific research builds on the previous work of others.
Scientific journals and their papers are linked on the web. But how many links are of interest in a paper? How many can we access and read? How do we keep track of all this knowledge? And when we find time to read and find something interesting, what happens next? We commonly use annotation—comment, bookmark, tag web pages, blog, etc. But can we retrieve all those annotations after time passes?
Domeo (http://swan.mindinformatics.org/) is an open source system for producing and sharing annotations, even while reading a journal article. Users can search vocabularies and ontologies, annotate images, or attach notes and semantic tags. Annotations can be public, shared with a designated group of colleagues, or kept private. Domeo can also query external services and use anything that has a unique identifier as a qualifier, thus providing an automatic annotation facility. Images can also be annotated, making them discoverable.
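To make the sharing model concrete, here is a minimal sketch of an annotation record with visibility controls and semantic tags. The field names and example identifiers are illustrative assumptions, not Domeo’s actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class Annotation:
    """Illustrative annotation record; fields are assumptions, not Domeo's schema."""
    target_uri: str               # unique identifier of the annotated article or image
    body: str                     # the free-text note
    semantic_tags: list = field(default_factory=list)  # e.g. ontology term URIs
    visibility: str = "private"   # "private", "group", or "public"

    def visible_to(self, reader, group_members=()):
        """Apply the public / group / private sharing rules."""
        if self.visibility == "public":
            return True
        if self.visibility == "group":
            return reader in group_members
        return False  # private: only the author (not modeled here) can see it

note = Annotation("doi:10.1000/example", "Key result discussed in Fig. 2",
                  semantic_tags=["http://purl.obolibrary.org/obo/GO_0008150"],
                  visibility="group")
print(note.visible_to("alice", group_members=("alice", "bob")))  # True
```

Because the target and the tags are both identifiers rather than copies of content, annotations like these can be aggregated, queried, and retrieved long after they are made, which is exactly the retrieval problem Ciccarese raised.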
Tucker Harding and Jonah Bossewitch from the Center for New Media Teaching and Learning (CNMTL), Columbia University, described Mediathread (http://ccnmtl.columbia.edu/portfolio/custom_software_applications_and_tools/mediathread.html), a system to help faculty use new media to improve their teaching. It has been used in over 200 courses at Columbia since it was launched, and is now being used globally.
Teachers commonly bring videos into the classroom, show them to their students, and then the students write essays about the videos. Mediathread is a tool that connects with multimedia repositories and brings the library close to the students’ study areas. Because it simply links to resources and does not host them, there are no copyright and storage issues. Harding and Bossewitch have found that it is important to work closely with vendors so they understand how to work with the system.
Future directions include partnering, compliance with open standards, more modular features, better access, and interoperability. Performance must be evaluated against usage: how many classes and how many users can be accommodated, and how will student performance be affected? Faculty members are starting to invent new kinds of assignments in the context of their teaching and learning goals. How closely do students come to accomplishing the objectives of the course, and can the results be incorporated into further development of the tool? In the same way that we once quoted text, we can now quote anything and actually show it to strengthen an argument. Students are comfortable manipulating text; can we assume they will be as comfortable using multimedia? What obligation do we have as inventors? The next steps will focus on pedagogical use cases, not usage, through interviews of students and faculty.
Usability and Sustainability from a Business Perspective
Business models are a common concept, but they may not be widely understood. Victor Camlek explained business models as they apply to the STM publishing industry. A business model is a plan explaining how a firm will generate revenue and make a profit from its business. Business models can be simple or very complex. There is no such thing as a standard profit model across industries. Competitive and disruptive changes may threaten the sustainability of prevailing business models, and some business models fluctuate by season (oil prices, for example).
In the STM publishing business, there is a somewhat adversarial relationship between publishers and libraries or content users. Camlek presented a series of scenarios detailing the reasons for the issues. In a nutshell, publishers feel they must be compensated for their efforts, but authors feel that they receive no share of that compensation, arguing that publishers benefit from their research but do not support it financially. Publishers reply that they bear all the costs of the specialized skills required in publishing, such as editorial functions, peer review, and production.
Elsevier is one of researchers’ main targets and is frequently criticized for its business arrangements, including the “Big Deal”, yearly price increases, and control of text and data mining practices. Its 20-F Form (a required filing for non-US corporations, similar to the 10-K for domestic companies) gives insights into its business practices. An analysis of the STM publishing market by BNP Paribas (http://www.bnpparibas.com/en) in 2003 suggested that STM journals are overvalued by the market. Bernstein Research studied the “Big Deal” and concluded that it is becoming unsustainable in the current funding market (see http://220.127.116.11/wp-content/uploads/2011/06/bringingdown.pdf).
Open access (OA) has had a major impact on the STM publishing business. There appears to be a stronger move towards OA in the UK than in the US. Recent government regulations mandating the deposit of taxpayer-supported research results in open repositories are strongly impacting the marketplace. In the UK, government-funded science agencies want authors to pay publishers’ article processing charges (APCs) in advance to make their work free to read immediately, but this may put stresses on research budgets. APCs can be substantial; PLoS founder Michael Eisen has called this move “a massive sellout of public interest to publishers.”
Camlek concluded that we will likely see incremental changes but nothing earth shattering in the near term as OA continues to move forward. He said that STM publishing is in the midst of a transition, and winning models in the future will be those that demonstrate the most usable value to the user community.
Moving Forward: Resolving the Tension Between Content Usability and Business Sustainability
Melissa Levine, Lead Copyright Officer, University of Michigan, said that copyright is based on the idea of a book but has been stretched to cover many other types of content. Her discussion focused on how to make the content ‘usable’ and thus ‘sustainable’, and how copyright fits in this effort.
Creative Commons (CC) is one of the tools that can help make material more useful. It is practical for sharing ideas and for communicating, from a legal standpoint, what you intend. CC is good for libraries, archives, museums, and businesses, supporting them in making their public domain content available. It also helps resolve the inherent tension between open access and protection of intellectual property owners’ rights. Many faculty members make legitimate uses of content but are afraid to be publicly associated with them for fear of being accused of copyright infringement. CC can help resolve these concerns.
Control does not always add value. The difficulties of copyright ownership are stressing the system, particularly as copyright extends to movies, videos, and other non-textual forms of content. Change is a given as use of content is extended and enabled.
Speakers’ slides are available at the workshop website, http://nfais.org/event?eventID=525.
Donald T. Hawkins is an information industry freelance writer based in Pennsylvania. In addition to blogging and writing about conferences for Against the Grain, he blogs the Computers in Libraries and Internet Librarian conferences for Information Today, Inc. (ITI) and maintains the Conference Calendar on the ITI website (http://www.infotoday.com/calendar.asp). He recently contributed a chapter to the book Special Libraries: A Survival Guide (ABC-Clio, 2013) and is the Editor of Personal Archiving, a forthcoming book from ITI. He holds a Ph.D. degree from the University of California, Berkeley, and has worked in the online information industry for over 40 years.