evergreen


In case you missed it, linux.com is running an article by Michael Stutz on Evergreen, an open source integrated library system developed by the state of Georgia to support a consortium of 44 libraries. (Thanks for the link, Adam.)

Hanging out with miker_ and bradl in irc and having open-ils in my feed reader makes me take this sort of work for granted sometimes…and Michael’s article made me wake up and marvel at how truly remarkable the work they’ve done is.

The Evergreen folks are hosting this year's code4libcon, where I'm supposed to be doing a presentation on the Atom Publishing Protocol. It's a low-cost, pragmatic alternative to the usual library technology conference options--and will be a good opportunity to buy these Evergreeners a beer. I hope to see you there.


the beeb and file sharing

Recognizing and leveraging the benefits of protocols like BitTorrent in "legitimate" media distribution seems like a huge step forward. I have to admit I'm also pretty excited about the prospect of .torrent files for Red Dwarf and Doctor Who episodes. But still, there will be some kind of digital rights management built in, so it won't be totally open.


Imperiled Federal Libraries

Tim Reiterman has a good article about imperiled federal libraries, and their collections…some of which are already ending up in dumpsters.

I think we are living in a world of digitized information…In the end there will be better access.

(Linda Travers of the EPA)

Which makes me wonder what “end” she is talking about. I think there is a real danger as more and more information goes online that people simply assume that paper collections are no longer necessary.

Perhaps it's no coincidence that the libraries currently in the most danger belong to the Environmental Protection Agency, whose library budget is being slashed by 80 percent. These collections, and others at risk (like NASA's Goddard Space Flight Center's), support research into global warming.

If you are interested in learning more, and in what you can do about it, the ALA has a useful resource page that lets you contact your representative using a service similar to EFF's action center.


miniature earth

If the world's population were reduced to 100 people, it would look something like this.

(thanks Jeroen)


John Price-Wilkin Interview

In case you missed it in your overstuffed RSS reader, Jon Udell recently interviewed John Price-Wilkin, who is coordinating the University of Michigan's joint digitization project with Google.

The interview covers interesting bits of history about the University of Michigan Digital Library,
Making of America, JSTOR (didn’t realize there was a book), and of course the project with Google.

The shocker for me was that while the UMDL has been able to digitize 3000 books per year, Google is doing approximately that number a day. Wilkin wasn't able to go into just how Google manages this, but he does discuss details such as the resolutions used, destructive vs. non-destructive digitization, and how federations of libraries could work with this data.

Wilkin has been at the center of digital library efforts for as long as I’ve been working with libraries and technology, so it was really fun to hear this interview.


got data?

Just saw this float by on simile-general

… thanks to Ben, we now have permission to publish the barton RDF dump (consisting of 50 million juicy RDF statements from the MIT library catalogue). They are now available at

http://simile.mit.edu/rdf-test-data/

Juicy indeed…it would be nice to see more libraries do this sort of thing.


>js

So I've been dabbling with that four letter word at $work to create a hierarchical journal/volume/issue/article browser. Le rails and scriptaculous make it pretty easy indeed.

I figured I'd be a good developer and try to understand what's actually going on behind the scenes, so I picked up a copy of Ajax in Action and am working through it.

There is so much hype surrounding Ajax that I had pretty low expectations--but the book is actually very well written and a joy to read. I noticed before diving in that there was an appendix on object-oriented JavaScript. I've been around the block enough times to know that JavaScript is actually quite a nice functional language; but apart from DHTML I haven't really had the opportunity to dabble in it much. This appendix made it clear how elegant JavaScript really is, and for someone who has done object-oriented programming in Perl the idioms for doing OOP in JavaScript didn't seem that bad.

Anyhow, I quickly wanted to start fiddling around with the language in a JavaScript interpreter, so I downloaded Rhino and discovered that you can:

    frizz:~/Projects/rhino1_6R4 edsu$ java -jar js.jar
    Rhino 1.6 release 4 2006 09 09
    js> print("hello world");
    hello world
    js>

Pretty sweet :-)



rsinger++

So Ross beat out 11 other projects to win the OCLC Research Software Contest for his next generation OpenURL resolver, umlaut. Second place went to Jesse Andrews' BookBurro--so the competition was fierce this year, much more so than last year, when there were 4 contestants.

Those of us who hang out in #code4lib got to hear about this project when it was just a glimmer in his eye…and had front row seats as the development progressed. Essentially, umlaut is an OpenURL router that's able to consult online catalogs (via SRU), other OpenURL resolvers (like SFX), Amazon, Google, Yahoo, Connotea, CiteULike, and OAI-PMH providers. It's all written in Ruby, using Ruby on Rails.
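For anyone who hasn't bumped into OpenURL before: a resolver like umlaut is consulted with a plain query string of citation metadata in key/encoded-value form. Here's a minimal Ruby sketch of building one--the resolver base URL and the citation values are made up for illustration, but url_ver=Z39.88-2004 and the rft.* keys come from the OpenURL standard:

```ruby
require 'uri'

# Build a simple OpenURL 1.0 (KEV format) link for a journal article.
# The base URL below is a made-up example; real resolvers live at
# institution-specific addresses.
def openurl_for(base, metadata)
  kev = metadata.map { |k, v| "#{k}=#{URI.encode_www_form_component(v)}" }
  "#{base}?url_ver=Z39.88-2004&#{kev.join('&')}"
end

url = openurl_for('http://resolver.example.edu/openurl',
                  'rft.jtitle' => 'Ariadne',
                  'rft.volume' => '49',
                  'rft.issn'   => '1361-3200')
puts url
```

A router's job is then to unpack those keys and fan the citation out to catalogs, resolvers, and web services looking for copies.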

I feel particularly proud because Ross is enough of a mad genius to have found a use for some ruby gems I wrote for doing SRU, OAI-PMH, and querying OCLC's xISBN service.

Speaking of which, we've been collaborating recently on a little ruby gem for querying OCLC's OpenURL Resolver Registry. The registry makes it easy to determine the appropriate OpenURL resolver for a given IP address--so you could theoretically rewrite your fulltext URLs to be geospatially aware. For example:

require 'resolver_registry'
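To illustrate the idea (not the gem's actual API), here's a self-contained Ruby sketch with a stubbed registry table--the networks, resolver URLs, and the resolver_for helper are all hypothetical, standing in for what the real gem looks up from OCLC's web service:

```ruby
require 'ipaddr'

# Stub of what a resolver registry lookup does: map a client IP
# address to its institution's OpenURL resolver base URL.
# These networks and URLs are invented for illustration; the real
# resolver_registry gem queries OCLC's registry service instead.
REGISTRY = {
  IPAddr.new('10.1.0.0/16')    => 'http://sfx.example-u.edu/sfx_local',
  IPAddr.new('192.168.5.0/24') => 'http://resolver.other-u.edu/openurl'
}

def resolver_for(ip)
  addr = IPAddr.new(ip)
  REGISTRY.each { |network, url| return url if network.include?(addr) }
  nil  # no resolver known for this address
end

puts resolver_for('10.1.42.7')
```

With something like this in a Rails app you could rewrite each fulltext link against the resolver appropriate to the visitor's network.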


a funny way to make a living

gabe discovered that the code4lib.org drupal instance was littered with comment spam. Someone had actually registered for an account and proceeded to add comments to virtually every story.

Since there was an email address associated with the account I figured I’d send an email letting them know their account was going to be zapped.

From: edsu
To: evgeniy1985@breezein.net
Subject: code4lib.org spam