So RepoCamp was a lot of fun. The goal was to discuss repository interoperability, and at the very least repository practitioners got to interoperate, and have a few beers afterwards. Hats off to David Flanders, who clearly has running these events down to a fine art.
I finally got to meet Ben O’Steen after bantering with him on #code4lib and #talis … and also got to chat with Jim Downing (Cambridge Univ) about SWORD stuff, and Stephan Drescher (Los Alamos National Lab) about validating OAI-ORE.
Stephan and I had a wide-ranging discussion about the web in general, which was a lot of fun. I really dug his metaphor of the web as an aquatic ecosystem, with interdependent organisms and shared environments. It reminded me a bit of how shocked I was to discover how rich and varied the ecosystem is around a “simple” service like twitter. If I ever return to school it will be to study something along the lines of web science.
It was also interesting to hear that other people saw a parallel between OAI-ORE Resource Maps and BagIt’s fetch.txt: both resource maps and bags are aggregations of web resources. Of course bags can also just be files on disk; it’s when a fetch.txt is present in the bag that the package is made up of web resources. It would be interesting to see what vocabularies are available for expressing fixity information (md5 checksums and the like), and whether they could be layered into the resource map Atom serialization. Perhaps PREMIS v2.0? It might be fun to code up what a simple OAI-ORE resource map harvester that checked fixity values would look like, using LC’s existing BagIt parallelretriever.py as a starting point. God I wish I could just hyperlink to that.
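Just to sketch the idea (none of this is a real harvester — the URL and checksum below are made up for illustration), the fixity-checking core of such a thing might be as small as:

```python
import hashlib

def check_fixity(content, expected_md5):
    """Compare the MD5 of retrieved content against the value recorded
    alongside the aggregated resource."""
    return hashlib.md5(content).hexdigest() == expected_md5

# Hypothetical result of parsing a resource map: (url, expected_md5) pairs.
# A real harvester would fetch each URL (e.g. with urllib) and pass the
# response body to check_fixity().
resources = [
    ("http://example.org/repo/page_1.jp2", "d41d8cd98f00b204e9800998ecf8427e"),
]
```

The interesting work is all in the resource map parsing; the fixity check itself is nearly free.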
At any rate, I now need to investigate OAuth because Jim thinks it fits really nicely with AtomPub, and SWORD in particular. And if it’s good enough for Google it’s probably worth checking out. Jim also said that there is a possibility that SWORD 2.0 might take shape as an IETF RFC, which would be good to see.
Thanks to all who made it happen, and to all of you who traveled long distances to join us at the Library of Congress.
One little bit of goodness that has percolated out from my group at $work in collaboration with the California Digital Library is the BagIt spec (more readable version). BagIt is an IETF RFC for bundling up files for transfer over the network, or for shipping on physical media. Just yesterday a little article about BagIt surfaced on the LC digital preservation website, so I figure now is a good time to mention it.
The goodness of BagIt is in its simplicity and utility. A bag is essentially: a set of files in a directory named data, a manifest file which states what files ought to be in the data directory, and a bagit.txt file that states the version of BagIt. For example, here’s a sample (abbreviated) directory structure for a bag of digitized newspapers from the National Digital Newspaper Program:
| `-- batch_lc_20070821_jamaica
| |-- batch.xml
| |-- batch_1.xml
| `-- sn83030214
| |-- 00175041217
| | |-- 00175041217.xml
| | |-- 1905010401
| | | |-- 1905010401.xml
| | | `-- 1905010401_1.xml
| | |-- 1905010601
| | | |-- 1905010601.xml
| | | `-- 1905010601_1.xml
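Filled out with its top-level tag files, a complete bag wrapping that batch would look something like this (names and version are illustrative, not from the spec):

```
mybag/
|-- bagit.txt            # declares the BagIt version
|-- manifest-md5.txt     # checksums for everything under data/
`-- data/
    `-- batch_lc_20070821_jamaica/
        |-- batch.xml
        `-- ...
```

The point is that the payload lives untouched under data, with the tag files sitting beside it.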
Each line of the manifest is just a fixity value (an MD5 checksum) and a relative file path:
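For instance, a few manifest-md5.txt lines for the bag above might look like this (the checksums here are fabricated for illustration):

```
49afbd86a1ca9f34b677a3f09655eae9  data/batch_lc_20070821_jamaica/batch.xml
6d3f6a2c8f1e4b9a0c7d2e5f8a1b4c7d  data/batch_lc_20070821_jamaica/batch_1.xml
```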
The manifest format happens to be the same format understood and generated by the common Unix (and Windows) utility md5deep, so it’s pretty easy to generate and validate the manifests.
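If you don’t have md5deep handy, a minimal sketch of generating and checking such a manifest in Python (this is my own toy code, not part of the spec or any LC tooling) might be:

```python
import hashlib
import os

def md5sum(path):
    """MD5 a file in chunks, the way md5deep would."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def make_manifest(bag_dir):
    """Write manifest-md5.txt covering everything under data/."""
    lines = []
    data_dir = os.path.join(bag_dir, "data")
    for dirpath, _, filenames in os.walk(data_dir):
        for name in sorted(filenames):
            path = os.path.join(dirpath, name)
            rel = os.path.relpath(path, bag_dir).replace(os.sep, "/")
            lines.append("%s  %s\n" % (md5sum(path), rel))
    with open(os.path.join(bag_dir, "manifest-md5.txt"), "w") as f:
        f.writelines(lines)

def validate(bag_dir):
    """Re-hash each manifest entry and compare against the recorded value."""
    with open(os.path.join(bag_dir, "manifest-md5.txt")) as f:
        for line in f:
            checksum, rel = line.strip().split("  ", 1)
            if md5sum(os.path.join(bag_dir, rel)) != checksum:
                return False
    return True
```

Two functions and a loop — which is rather the point of the spec.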
The context for this work has largely been NDIIPP partners (like CDL) transferring data generated by funded projects back to LC, although it’s likely to get used in some other places internally as well. It’s funny to see the spec in its current state, after Justin Littman rattled off the LC Manifest wiki page in a few minutes following a meeting where Andy Boyko initially brought up the issue. Andy has just left LC to work for a record company in Cupertino. I don’t think I fully understood simplicity in software development until I worked with Andy. He has a real talent for boiling solutions down to their simplest expression, often leveraging existing tools to the point where very little software actually needs to be written. I think Andy and John found a natural affinity in striving for simplicity, and it shows in BagIt. Andy will be sorely missed, but that record store is lucky to get him on their team.
There are some additional cool features to BagIt, including the ability to include a fetch.txt file which contains http and/or rsync URIs to fill in parts of the bag from the network. We’ve come to refer to bags with a fetch.txt as “holey bags” because they have holes in them that need to be filled in. This allows very large bags to be assembled quickly in parallel (using a 100 line python script Andy Boyko wrote, or whatever variant of wget, curl, rsync makes you happy). Also you can include a package-info.txt containing some basic metadata as key/value pairs … designed primarily for humans.
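Each fetch.txt line is just a URI, an expected length, and the bag-relative path the retrieved file should land at — something like the following (the host and length here are invented for illustration):

```
http://example.org/ndnp/batch.xml  -  data/batch_lc_20070821_jamaica/batch.xml
rsync://example.org/ndnp/batch_1.xml  1024  data/batch_lc_20070821_jamaica/batch_1.xml
```

A `-` in the length column simply means the size isn’t known ahead of time. Once the files are fetched into place the bag validates like any other.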
Dan Krech and I are in the process of creating a prototype deposit web application that will essentially allow bags to be submitted via a SWORD (profile of AtomPub for Repositories) service. The SWORD part should be pretty easy, but getting the retrieval of “holey bags” kicked off and monitored properly will be the more challenging part. Hopefully I’ll be able to report more here as things develop.
Feedback on the BagIt RFC is most welcome.