In case you missed it, linux.com is running an article by Michael Stutz on Evergreen, an open source integrated library system developed by the state of Georgia to support a consortium of 44 libraries. (Thanks for the link, Adam)
Hanging out with miker_ and bradl on IRC and having open-ils in my feed reader sometimes makes me take this sort of work for granted…Michael’s article made me wake up and marvel at just how remarkable the work they’ve done is.
The Evergreen folks are hosting this year’s code4libcon, where I’m supposed to be doing a presentation on the Atom Publishing Protocol. It’s a low-cost, pragmatic alternative to the usual library technology conference options, and will be a good opportunity to buy these Evergreeners a beer. I hope to see you there.
Recognizing and leveraging the benefits of protocols like BitTorrent in “legitimate” media distribution seems like a huge step forward. I have to admit I’m also pretty excited about the prospect of .torrent files for Red Dwarf and Doctor Who episodes. Still, there will be some kind of digital rights management built in, so it won’t be totally open.
Tim Reiterman has a good article about imperiled federal libraries and their collections…some of which are already ending up in dumpsters.
I think we are living in a world of digitized information…In the end there will be better access.
(Linda Travers of the EPA)
Which makes me wonder what “end” she is talking about. I think there is a real danger as more and more information goes online that people simply assume that paper collections are no longer necessary.
Perhaps it’s no coincidence that the libraries currently most in danger belong to the Environmental Protection Agency, whose library budget is being slashed by 80 percent. These collections, along with others in danger (like NASA’s Goddard Space Flight Center library), support research into global warming.
If you are interested in learning more about what you can do, ALA has a useful resource page that lets you contact your representative using a service similar to EFF’s action center.
If the world’s population were reduced to 100, it would look something like this.
In case you missed it in your overstuffed RSS reader, Jon Udell recently interviewed John Price-Wilkin, who is coordinating the University of Michigan’s joint digitization project with Google.
The interview covers interesting bits of history about the University of Michigan Digital Library,
Making of America, JSTOR (didn’t realize there was a book), and of course the project with Google.
The shocker for me was that while the UMDL has been able to digitize 3000 books per year, Google is doing approximately that number a day. Wilkin wasn’t able to go into detail about just how Google is doing this, but he does talk about things such as the resolutions used, destructive vs. non-destructive digitization, and how federations of libraries could work with this data.
Wilkin has been at the center of digital library efforts for as long as I’ve been working with libraries and technology, so it was really fun to hear this interview.
Just saw this float by on simile-general
… thanks to Ben, we now have permission to publish the barton RDF dump (consisting of 50 million juicy RDF statements from the MIT library catalogue). They are now available at
Juicy indeed…it would be nice to see more libraries do this sort of thing.
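It would be even nicer if a dump like this were easy to poke at from a script. Assuming it’s distributed as N-Triples (one “subject predicate object .” statement per line; that’s an assumption, since the post doesn’t say), a few lines of Ruby can stream through it:

```ruby
# Minimal N-Triples reader -- a sketch that assumes the dump is
# line-oriented N-Triples, which may not actually be the case.
NT_LINE = /\A(<[^>]*>|_:\S+)\s+(<[^>]*>)\s+(.+?)\s*\.\s*\z/

# Return [subject, predicate, object] for a statement line, or nil
# for comments and blank lines that don't match.
def parse_triple(line)
  m = NT_LINE.match(line)
  m && [m[1], m[2], m[3]]
end

# Count statements without loading the whole dump into memory.
def count_statements(io)
  io.each_line.count { |line| parse_triple(line) }
end
```

Streaming line by line matters at this scale; 50 million statements won’t fit comfortably in memory.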
Pretty sweet :-)
Can you imagine (back in the day) going to a page like the one at O’Reilly’s Safari, doing a view-source in Mosaic, and trying to learn HTML and how the web works?
So Ross beat out 11 other projects to win the OCLC Research Software Contest for his next generation OpenURL resolver umlaut. Second place went to Jesse Andrews’ BookBurro, so the competition was fierce this year, much more so than last year, when there were 4 contestants.
Those of us who hang out in #code4lib got to hear about this project when it was just a glimmer in his eye…and had front row seats as the development progressed. Essentially umlaut is an OpenURL router that’s able to consult online catalogs (via SRU), other OpenURL resolvers (like SFX), Amazon, Google, Yahoo, Connotea, CiteULike and OAI-PMH providers. It’s all written in Ruby using Ruby on Rails.
I feel particularly proud because Ross is enough of a mad genius to have found a use for some Ruby gems I wrote for doing SRU, OAI-PMH and querying OCLC’s xISBN service.
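For the curious, here’s roughly what talking to xISBN looks like without any gem at all. This is a hedged sketch, not the gem’s actual API: the endpoint path and the response shape in the comments are assumptions.

```ruby
require 'net/http'
require 'rexml/document'

# Sketch of an xISBN lookup. The path and response format below are
# assumptions; the response is taken to look something like:
#   <idlist><isbn>0596002815</isbn><isbn>...</isbn></idlist>
XISBN_HOST = 'xisbn.worldcat.org'

# Build the request path for a given ISBN.
def xisbn_path(isbn)
  "/xid/isbn/#{isbn}"
end

# Pull the related ISBNs out of an xISBN XML response.
def parse_xisbn(xml)
  doc = REXML::Document.new(xml)
  doc.elements.to_a('//isbn').map { |e| e.text }
end

# Fetch all editions related to an ISBN (does a network call).
def related_isbns(isbn)
  parse_xisbn(Net::HTTP.get(XISBN_HOST, xisbn_path(isbn)))
end
```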
Speaking of which, we’ve been collaborating recently on a little Ruby gem for querying OCLC’s OpenURL Resolver Registry. The registry makes it easy to determine the appropriate OpenURL resolver for a given IP address, so you could theoretically rewrite your fulltext URLs to be geospatially aware. For example:
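One hedged sketch of the idea: ask the registry which resolver serves the reader’s IP, then point fulltext links at whatever comes back. The registry endpoint, the <baseURL> response element, and the fallback URL here are all assumptions for illustration, not the documented API.

```ruby
require 'net/http'
require 'rexml/document'

# Hypothetical registry host -- an assumption for illustration.
REGISTRY_HOST = 'www.worldcat.org'

# Ask the registry which OpenURL resolver serves a given IP address.
# Assumes the XML response carries the resolver in a <baseURL> element;
# returns nil when no resolver is registered for that IP.
def resolver_for(ip)
  xml = Net::HTTP.get(REGISTRY_HOST, "/registry/lookup?IP=#{ip}")
  el = REXML::Document.new(xml).elements['//baseURL']
  el && el.text
end

# Rewrite a fulltext link against the reader's institutional resolver,
# falling back to a (made-up) default when none is found.
def localized_openurl(base_url, query,
                      default = 'http://resolver.example.org/openurl')
  "#{base_url || default}?#{query}"
end
```

So a request coming from a campus IP range could get links pointing at that campus’s SFX instance, while everyone else falls through to a default.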
gabe discovered that the code4lib.org Drupal instance was littered with comment spam. Someone had actually registered an account and proceeded to add comments to virtually every story.
Since there was an email address associated with the account I figured I’d send an email letting them know their account was going to be zapped.
From: edsu To: email@example.com Subject: code4lib.org spam