v31#4 Outreach Assessment: A Two-Pronged Approach

Oct 4, 2019

by John Jackson  (Head of Outreach and Communications, William H. Hannon Library, Loyola Marymount University, Los Angeles) 

“Outreach Librarian” is one of those nebulous terms that seems to encompass a universe of possibilities as it stretches the phrase “other duties as assigned” to its breaking point.  One can easily find outreach librarians housed within almost every department known to libraries: in reference departments as instruction designers, in communications departments as marketing directors, in administration units as program coordinators, and in collections departments as faculty liaisons.  Determining how to assess the activities of an outreach librarian is no less complex than the job itself.  It is worth noting from the start, then, that how outreach is assessed varies from library to library, depending on expectations for the role and where the position is housed. 

At the William H. Hannon Library at Loyola Marymount University (LMU), outreach is a stand-alone department that oversees programming and events, marketing and promotion, and orientations and tours.  It houses an outreach librarian, a student engagement librarian, and an events manager.  Much of what follows is based on the needs and nature of that collaborative work environment, but it should still apply to other variations of outreach work, including outreach as a vehicle for library instruction or faculty relations. 

We delineate outreach at LMU primarily as a combination of programming and communication activities:  orientations, tours, speaker series, specialized workshops, events, social media, print and digital media, intercampus and external partnerships, donor relations, signage, internal communications, photography and video production, and more.  No single assessment method can encompass, much less measure, the success of all these activities, and yet they are all interconnected and contribute to the overall success of our outreach program.  For the sake of simplicity, we put each of these activities into one of two buckets, programming or communications, and attempt to assess them accordingly. 

Assessing Programming Outreach 

Let me begin with programming.  Our Faculty Pub Night series is one of the most successful programs we host at LMU.  Each year, we invite faculty to give a talk about their recent “pub”-lication in an informal, pub-like atmosphere (we set up a bar, hire a bartender, serve light snacks, and encourage attendees to get up and treat themselves to the fare as they wish during the event).  Over the past ten years, this program has brought thousands of students and faculty into the library to celebrate and recognize faculty research and creative works.  We use many traditional metrics, like attendance and audience composition, to measure the success of this program.  We also use a standard set of feedback questions which we have found to be reliable indicators of a program’s relative success: (1) Why did you decide to attend today’s event?  (2) What did you learn from attending today’s event?  (3) Was there anything that surprised you, and if so, what? 

The first of these questions gets at whether the nature of the program and our promotional efforts generated an appropriate level of curiosity.  It also tends to let us know if students are attending for extra credit, which is useful information to have.  Responses to the second question can be mapped to expected learning outcomes if those have been developed in advance.  The final question is the wildcard and tends to produce the most interesting responses from attendees.  These responses range from “I didn’t realize how complex this issue could be” to “I didn’t know the library hosted so many events!”  When viewed collectively, the responses to these questions help us identify the two or three most salient aspects of the library program, which can then be used for a variety of purposes, from how we promote the video recording of the event on social media to how we connect the success of the event to our strategic plan.

The beauty of our standard feedback form is that we can use it for every Faculty Pub Night event, regardless of the speaker’s topic, which can range from international trade law to 19th-century murder mysteries.  What every Pub Night has in common is our goal to introduce our audience to new ideas and perspectives. In short, we want our attendees to be transformed.

Two years ago, we developed a rubric for assessing the “transformative” nature of an event based on attendees’ responses to our standard feedback forms.  This rubric ranks responses as they relate to our learning outcomes on a scale from 1, No Evidence (“[Attendee’s response] shows no evidence of understanding/comprehension.  Denies any change in perception/outlook”), to 4, Significant Evidence (“Acknowledges aspects of expected learning outcome and applies knowledge to other experiences, structures, subjects.  Critically analyzes and applies experience.  Articulates values of knowledge or change in perceptions.”).  This rubric is a helpful tool for quickly identifying programming that has an impact.  Of course, one could extract the same information by reading the feedback forms and relying on a “gut feeling,” but the rubric lends some stability to the assessment and can be applied consistently by other librarians. 
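To illustrate how scores from such a rubric might be tallied so that events can be compared at a glance, here is a minimal sketch in Python.  The scores are invented, and the labels for the two middle rubric levels are assumptions for illustration; only levels 1 and 4 are quoted above.

```python
# Minimal sketch: summarizing rubric scores for one event.
# Assumes each feedback response has already been read and scored 1-4
# against the rubric. Scores below are illustrative, not real data.
from collections import Counter
from statistics import mean

RUBRIC_LABELS = {
    1: "No Evidence",
    2: "Minimal Evidence",   # assumed label; not quoted in the article
    3: "Moderate Evidence",  # assumed label; not quoted in the article
    4: "Significant Evidence",
}

def summarize_event(scores: list[int]) -> None:
    """Print the distribution and average so events can be compared quickly."""
    counts = Counter(scores)
    print(f"Responses: {len(scores)}, average score: {mean(scores):.2f}")
    for level in sorted(RUBRIC_LABELS):
        print(f"  {level} ({RUBRIC_LABELS[level]}): {counts.get(level, 0)}")

# Example: hypothetical scores from one Faculty Pub Night feedback batch.
summarize_event([4, 3, 3, 2, 4, 4, 1, 3])
```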

Ideally, we would create learning outcomes for each Pub Night, but we do not have the bandwidth to create customized assessment tools for every event, much less for the 40+ programs we offer each year, so these standard feedback forms (which can be quickly printed and distributed) have sufficed for many of our other programs as well.  For some events, however, we do create customized assessment tools, most notably our Long Night Against Procrastination and our Library Open House. 

The Long Night Against Procrastination at LMU happens every spring during finals week.  The event is a structured study session for a limited number of students.  During the course of the event (which lasts 4-5 hours), we provide students with food, coffee, swag, and one-on-one support from research librarians and writing tutors.  Our primary goal for this event is for every student to feel that they have made significant progress on their final projects and/or exam prep in a stress-free, fun environment, and we have developed an assessment strategy accordingly.  At the start of the event, we ask every student to write down what they plan to accomplish.  These projects usually include writing a final paper, creating a study guide, or developing a presentation.  At the end of the event, we email each student a Qualtrics survey which asks, among other things, “What did you accomplish this evening?”  By comparing these answers with the initial responses, we are able to gauge whether the event was high or low impact for the students who attended.  The survey also asks students to rate their satisfaction with different aspects of the event, such as food, space, and research support, in order to determine whether the event did indeed provide a “stress-free, fun environment.”

Our 2018 Library Open House had more specific goals.  This food and swag extravaganza targeted first-year students and had two expected outcomes:  (1) reduce library anxiety and (2) provide students with detailed information about library services, spaces, and collections.  Every attendee was asked to provide an email address in order to enter the open house space.  We then emailed those attendees and asked them to rate, “To what extent do you feel comfortable asking for help at the library?”  We also asked, “What was the most helpful thing you learned?”  The results of those two questions indicated whether we met our expected outcomes (79% said they felt “very comfortable” asking for help!) and can be compared from year to year to measure the relative success of each subsequent open house. 

Other custom assessment measures that we’ve developed for library programs include: creating an online dashboard to track edits and citations for Wikipedia editing events; interrogating changes in attitudes and perceptions about cultural stigmas among attendees at our annual Human Library; using juries and peer review to qualitatively assess student artwork connected to our Common Book program; conducting content analysis of student write-ups of events as a qualitative measure of whether an event met its intended purpose; and surveying library partners and guest speakers about their experience working with the library programming team.  The unifying factor in all of these custom measures is that each is designed to assess specific expected learning outcomes set in advance of the program.

Assessing Communications Outreach

Compared to programming, I find assessing communications outreach to be much easier: that is, the techniques and workflows are simpler.  Part of this is due to how I define success in my communications strategies: not by use of services or by attendance at events, but by eyeballs alone (i.e., how many people saw our messaging) and whether that number is growing steadily over time.  In this sense, I take a decidedly limited approach to how I assess our communications efforts. 

To complicate matters, some of the outreach we do manifests only in our communications channels (e.g., social media) and has no programming, service, or collections-based correlate.  For example, one of our most successful Twitter projects was encouraging other units on campus to post about the ways their student workers enabled them to meet their institutional goals.  This short-lived pile-on thread did not generate additional followers or drive people to our website, but it did earn a record-breaking (for us) number of impressions: more than six times our average organic impressions at the time.  A lot of people on campus saw that post.  What they did with it, or how it changed their perception or use of the library, we will probably never know, short of conducting longitudinal studies of library perceptions.  Like many of our social media projects, it came about suddenly, organically, and unexpectedly: something that is difficult to replicate in a formal study. 

There are, however, some things we can know.  For example, we use Hootsuite to track our success on Twitter, Facebook, and Instagram.  Like many social media management platforms, Hootsuite allows us to create short URLs (ow.ly) whose usage can be tracked over time.  Examining our social media content month to month, we can create an indicator of “URL engagement” by tracking the number of URLs posted vs. the number of click-throughs vs. the number of impressions.  The same method can be applied to the customized URLs that we post to the digital screens in our lobby to highlight electronic resources (e.g., bit.ly/name_of_resources) and to the links in our e-newsletters.  If there is a URL for it, we can track it.  It is worth noting, though, that we track only URL hits and not personally identifiable information or other types of personalized metadata.
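As a rough illustration of that “URL engagement” indicator, the short sketch below compares URLs posted, click-throughs, and impressions month to month.  The counts are placeholders, not actual Hootsuite exports.

```python
# Minimal sketch of a monthly "URL engagement" indicator:
# clicks per URL posted and click-through rate per impression.
# All figures are illustrative placeholders.

monthly_counts = {
    # month: (urls_posted, click_throughs, impressions)
    "2019-08": (22, 310, 14_500),
    "2019-09": (27, 405, 18_900),
}

for month, (urls, clicks, impressions) in monthly_counts.items():
    clicks_per_url = clicks / urls
    click_through_rate = clicks / impressions
    print(f"{month}: {clicks_per_url:.1f} clicks per URL, "
          f"{click_through_rate:.1%} of impressions clicked")
```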

For URLs that point directly to our library website domain from social media, digital displays, or newsletters, we use SiteImprove.  Among other useful tools, SiteImprove allows us to see where traffic to our website originated and what visitors do once they arrive (stay on the site, bounce off, etc.).  Traffic from social media or other sources can be compared to overall site traffic to create yet another indicator (social media traffic vs. overall traffic) for measuring social media engagement month to month.  Showing how much of our website traffic these channels drive allows me to make a case for continued investment in social media resources.
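A similarly minimal sketch of the traffic-share indicator, with placeholder figures standing in for SiteImprove referral reports:

```python
# Share of monthly website sessions referred from social media,
# compared against overall site traffic. Figures are hypothetical.

monthly_sessions = {
    # month: (sessions_from_social, total_sessions)
    "2019-08": (1_150, 42_300),
    "2019-09": (1_480, 47_800),
}

for month, (social, total) in monthly_sessions.items():
    print(f"{month}: social referrals = {social / total:.1%} of all sessions")
```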

Interestingly, we also use RSVPs to track the success of our communication and outreach efforts.  Yes, the number of RSVPs is probably a more accurate indicator of general interest in a program, but we have had extremely popular events with a small number of RSVPs.  And since many of our events tend to be similar in nature (e.g., a lecture by a historian; a Wikipedia editing workshop), we can also use RSVPs as an indicator of how well we are “getting the word out there.”  A low RSVP count for an event that usually brings in a packed audience is a quick-and-dirty measure of the relative success of our communication strategy.  Looking at the past two years of RSVP data, we can reasonably expect the number of actual attendees to fall between 30% below and 20% above the number of RSVPs.  Anything outside that range can usually be ascribed to a communications anomaly (e.g., we forgot to post the event to the university calendar, or it got picked up by the local press).
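That rule of thumb is easy to operationalize.  The sketch below flags events whose attendance falls outside the expected band of 30% below to 20% above the RSVP count; the event names and numbers are hypothetical.

```python
# Minimal sketch of the RSVP sanity check: attendance is expected to land
# between 30% below and 20% above the RSVP count; anything outside that
# band is flagged as a possible communications anomaly.

EXPECTED_RANGE = (0.70, 1.20)  # attendance as a fraction of RSVPs

events = [
    # (name, rsvps, attendees) -- hypothetical examples
    ("Faculty Pub Night (Sept)", 85, 78),
    ("Wikipedia Edit-a-thon", 40, 19),
]

for name, rsvps, attendees in events:
    ratio = attendees / rsvps
    low, high = EXPECTED_RANGE
    status = "as expected" if low <= ratio <= high else "check communications"
    print(f"{name}: {ratio:.0%} of RSVPs attended ({status})")
```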

Final Thoughts

The mother lode of outreach assessment will be found when someone develops a way to combine multiple data points into a single indicator of success, similar to the Happiness Index or a Klout score.  Perhaps the culture of learning analytics that seems to be growing on college campuses will provide solutions, though as many have noted, this raises certain ethical quandaries for librarians. Until then, we are left to assess each outreach project according to its own merits, nature, and expected outcomes.  Onward and upward!  
