ATG Hot Topic: Project Outcome and Library Assessment

by Courtney McAllister | May 27, 2019

Assessment is not a new aspect of librarianship. Libraries and information centers of all types have been evaluating patron satisfaction, collection usage, circulation statistics, door counts, and other data points for many years. Standards like COUNTER have shaped our assessment practices, and institutional assessment and accreditation requirements have shifted over time.

Even though all libraries, archives, and museums engage in some form of assessment, there is a lot of local variation in what we measure, how we measure it, who is responsible for it, and how we translate qualitative and quantitative data into action. Allocating staff time for assessment, analysis, and implementation is especially variable. Some libraries have designated assessment personnel, or a team to coordinate assessment activities. Others, contending with staff shortages, diffuse assessment across a range of job responsibilities and departmental operations wherever they can. Because of these local variations, inter-institutional comparisons are inherently difficult. Even within a library system or consortium, getting a comprehensive picture of a single data point across all member libraries can be extremely challenging. This fragmentation, and the inconsistency that follows from it, makes it harder to tell compelling stories about the work we do and how it impacts our user communities and colleagues.

During a session at last month’s ACRL conference, I learned about Project Outcome, a free tool recently introduced to help public and academic libraries engage in meaningful, sustainable assessment. The tool can make assessment more manageable, so that institutions of all sizes can measure and evaluate a wide range of library services and facilities, including:

  • Digital and special collections
  • Events/programs
  • Research
  • Teaching support
  • Instruction
  • Space
  • Library technology

The tool makes assessment methodology and analysis more intuitive and accessible. The built-in surveys help libraries capture baselines and measure the impact of change. Reports synthesize outcomes and data, making it easier to communicate assessment findings to a wide range of stakeholders. I feel it’s important to emphasize that Project Outcome consolidates data collection and analysis that is often diffused across many different programs or systems in a library environment. Without a comprehensive tool or platform, manual tallies and homegrown spreadsheets might be stored on a shared drive alongside Google Analytics data, COUNTER reports, and assorted student or faculty survey results. Keeping track of all of these information sources can be exhausting in its own right, leaving even less time and energy to interpret and act upon assessment findings. Having a “one stop shop” for all the information we collect about our variegated operations is a tremendous asset, and it frees up valuable time to actually engage with the data and work towards meaningful decision-making.
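
Project Outcome handles this consolidation inside its own portal, but a minimal sketch can make the underlying problem concrete. The snippet below, written in Python with the pandas library, merges three imaginary exports (a COUNTER report, a door-count spreadsheet, and an instruction survey) into one summary table; every source name, column, and figure is a hypothetical placeholder, not anything drawn from Project Outcome itself.

```python
# A sketch of the "one stop shop" idea: pulling scattered assessment
# exports into a single summary table. All sources, columns, and
# numbers below are hypothetical placeholders.
import pandas as pd

# Imagine each frame arriving from a different silo: a COUNTER report,
# a door-counter spreadsheet, and an instruction-session survey.
counter = pd.DataFrame({
    "month": ["2019-01", "2019-02"],
    "ejournal_requests": [1420, 1315],
})
gate = pd.DataFrame({
    "month": ["2019-01", "2019-02"],
    "door_count": [8210, 7944],
})
survey = pd.DataFrame({
    "month": ["2019-01", "2019-02"],
    "instruction_sessions": [12, 9],
    "mean_confidence_gain": [0.8, 1.1],  # change on a 5-point scale
})

# One merged view instead of three files on a shared drive.
summary = counter.merge(gate, on="month").merge(survey, on="month")
print(summary)
```

The payoff is one merged view instead of three files scattered across a shared drive, which is roughly the convenience the Project Outcome dashboard provides without requiring any scripting at all.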

The data dashboard in Project Outcome also interests me because it gives structure to assessment undertakings that can otherwise feel overwhelming. As information professionals, we often emphasize the importance of lowering access barriers for our users and making processes as streamlined as possible, but I do not think we consistently apply that concern to our own tasks and responsibilities. The Project Outcome portal is a great example of internal usability: a user-friendly tool that makes a challenging, complex task more accessible to all practitioners.

But It Raises Some Questions

It’s probably fairly obvious at this point that I am a full-blown assessment nerd. While I consider Project Outcome to be a very exciting development, it also makes me wonder what we do in libraries that complicates or somehow resists assessment. What are the most difficult aspects of our services, collections, and/or spaces to evaluate? What kinds of value are elusive from a data or narrative perspective?

Perhaps relationship-based services are inherently difficult to measure or quantify. Liaison activities, for example, can be challenging to assess. Liaisons can keep logs of their activities – emails sent, department meetings attended, consultations or workshops led – but can those really capture the impact on a faculty member or department? The feeling that there is a point person in the library ready to help or support you can’t really be quantified. The rapport that develops between a librarian and their liaison area can lead to more responsive collection development, more robust integration of information literacy into the curriculum, better attendance at library events and lectures, and other opportunities for engagement and collaboration. A superb liaison program can even help recruit and retain faculty. Perhaps the full consequences of this kind of program can only be seen in its absence.

Resource sharing and interlibrary loan (ILL) strike me as another example of a “hidden utility” service. From a financial perspective, ILL can be very costly once one tallies up the labor, system costs, postage, copyright fees, and other expenses, while the value it adds can be tricky to pinpoint. A graduate student’s thesis might rely on dozens of niche or rare works beyond the scope of the local collection. Users might turn to ILL for materials they could not obtain through commercial channels like Amazon or Google Books. The benefits of having access to out-of-print, rare, and primary source materials cannot really be captured in digestible datasets. Libraries that share their resources through ILL lending can also find it difficult to justify the expense and time when the beneficiaries are other libraries’ users. Because of the relatively high financial costs and diaphanous benefits, ILL can look like an expendable offering for the chopping block when budgets are strained.
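
To make the cost side of that tally concrete, here is a back-of-the-envelope sketch in Python. Every figure is an invented placeholder (a real calculation would substitute local salary, platform, postage, and copyright numbers), but the structure of the arithmetic is the point.

```python
# A back-of-the-envelope tally of ILL borrowing costs per filled
# request. Every number here is a hypothetical placeholder; swap in
# local figures for a meaningful result.
filled_requests = 1200                 # borrowing requests filled per year
labor_hours = 0.5 * filled_requests    # assume ~30 minutes of staff time per fill
hourly_rate = 25.00                    # loaded staff cost per hour
system_cost = 6000.00                  # annual ILL platform fees
postage = 3.50 * filled_requests       # shipping, treating every fill as physical
copyright_fees = 2000.00               # annual royalty/permission payments

total = labor_hours * hourly_rate + system_cost + postage + copyright_fees
print(f"Cost per filled request: ${total / filled_requests:.2f}")
```

With these invented inputs the script reports roughly $23 per fill, exactly the kind of line item that looks expendable on a budget spreadsheet while remaining invisible in its value to the one researcher who needed the loan.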

Libraries in Ontario discontinued their ILL services due to budget cuts, and the public was not shy about expressing their dismay. In fact, the response was intense enough to prompt a reconsideration. But there are plenty of cases where ILL is cut with no hope of being resuscitated. Without the built-in lifeline that gives library users access to resources far beyond the scope of their local collections, one wonders what needs go unmet. Which students are trudging through their papers uninspired because they cannot access the one article that would send their argument in a new and interesting direction? Which grant proposals are rejected because competing proposals had better-developed research platforms and bibliographies? Who feels unable to complete a literature review because they lack access to the expansive range of resources required (and cannot afford to pay for access out of their own pockets)?

Sometimes it seems we only see the true value of these services once they are discontinued and a sense of loss or disappointment takes root. But how do services with less concrete or demonstrable value fit into an assessment strategy? In my opinion, streamlining the relatively straightforward assessment endeavors gives us more time and energy to fill in the gaps and represent the less obvious reservoirs of library value. Framing assessment in the context of one’s local mission, vision, or strategic plan can also support a more comprehensive measurement of impact. ILL might seem like a high-cost/low-reward service on first inspection, but its philosophical underpinnings might be integral to a library’s mission and vision. Making connections between what we do, how well we’re doing it, and how it relates to our sense of purpose and aspirations can help contextualize the value we track in assessment data. Perhaps that framing can also be leveraged to fold less assessable services and activities into broader assessment initiatives.

Discussion questions:

  1. Which library services or operations do you think are the most difficult to assess?
  2. Who do you think benefits from library assessment? End users? Library/campus administrators? Others?
  3. Do your assessment activities have a direct relationship to your organization’s mission/vision or strategic plan? If so, what does that relationship look like?
