Against the Grain — Back Talk
How much blood is there on your digital edge? A Digital Library Measurement Tool
by Tony Ferguson (Columbia University) <ferguson@columbia.edu>
Everyone seems to be concerned with determining how well
they are doing in the race toward digital relevance. I am. So when I recently
reviewed reports for ten members of the Digital Library Federation
(http://www.clir.org/diglib/pubs/news01/reports.htm), it occurred to me that if
I developed a list of the categories of activities in which these leading
institutions were involved, I could create a test or measure of how well an
institution might be doing.
Before I reveal my test (located in this issue on page 94),
let me confess it is anything but scientific.
Nor is it inclusive of every kind of digital/electronic activity
possible. These are the activities that
were mentioned in the Federation’s Website reports, or activities which
occurred to me upon reading these reports.
If I left out any important categories, just integrate them into my
scheme and compute your own score.
Since not all digital activities are equally important or equally easy
to accomplish, you'll note that I decided to introduce a weighting factor. These factors have not been agreed upon by a
committee of experts, just a couple of friends and me, and we couldn't agree
completely with each other, so I don't imagine you will agree with me either.
In general, if I thought a library could buy a service and
implement it without creating a completely new position, I gave it a one (1)
difficulty rating. If I figured the
technology level was more difficult but still no new position was required, it
got a two (2). If a new position was
needed, the technology was even more complex, and/or multiple parts of an
organization (or multiple organizations) had to cooperate in very new ways, it
got a three (3). A four (4) difficulty rating was given to services/projects that
would require several new positions, very significant new expenses, and/or
technology that was extremely complex or had to be developed locally.
So, here is how the test works.
Step One. Go through
the list of digital activities and give your library one to three points for each:
One (1) point if you have thought about (prior to taking this survey) doing
this but have not even started doing anything.
Two (2) points if you have planned your work and are in the process of
implementing your plan. Three (3)
points if you have accomplished the activity or have it in production. Write each activity’s points in the space
provided in the “Your Library’s Score” column.
Step Two. Multiply
each of your point scores by the weighting factor and write that in the space
provided in the “Your Library’s Weighted Score” column.
Step Three. Add up
all of your library's weighted scores and divide by 435, the total points
possible. (If you add or subtract
activities, be sure to adjust this total.)
Now that you have your score, what should you do with
it? First of all, you can compare it
with the following unscientific scale and feel good or bad
about your library:
— 80 to 100 points means your library is a super digital
library
— 60 to 79 points means it is well on its way to becoming a
super digital library
— 59 points and below means your library needs a lot more
money to get a higher score (or you can declare it dumb to want more points).
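For readers who want to automate the arithmetic in the three steps above, here is a minimal sketch in Python. The activity list, point values, and weights are illustrative placeholders, not entries from the actual test; it also assumes the final score is the weighted total expressed as a percentage of the 435 points possible, so that it lands on the 0-to-100 scale.

```python
# Sketch of the scoring procedure: points (1-3) times weighting
# factor (1-4) per activity, summed, then scaled against the total
# points possible. All activity values below are made-up examples.

TOTAL_POSSIBLE = 435  # adjust this if you add or subtract activities

# (points earned, weighting factor) for each activity on the list
activities = [
    (3, 1),  # e.g., a purchasable service, already in production
    (2, 3),  # e.g., a cross-departmental project being implemented
    (1, 4),  # e.g., a locally developed system only thought about
]

# Steps One and Two: weight each activity's points and add them up
weighted_total = sum(points * weight for points, weight in activities)

# Step Three: divide by the total possible (shown here as a percentage,
# an assumption made so the result fits the 0-100 scale)
score = weighted_total / TOTAL_POSSIBLE * 100

if score >= 80:
    verdict = "super digital library"
elif score >= 60:
    verdict = "well on its way to becoming a super digital library"
else:
    verdict = "needs a lot more money"

print(f"score: {score:.1f} ({verdict})")
```

With a full list of activities filled in, re-running the script after each planning cycle gives a quick before-and-after comparison.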
Unfortunately, the notion that the digital future would be cheap
was always a dream.
Second, you could go through the list and line out all the
things that don't match your library's needs and/or goals and then re-compute how
well you are doing. Be sure to adjust
the total points possible in step three above.
In either case, I have found it useful to think about all the things
that could be done and to do a self-inventory for my library.
I would be happy to hear how well your library did on this
digital library measurement scale, or what you thought of it. Drop me an e-mail: <ferguson@columbia.edu>.
digital library measurement test on page 93