q & a

Q: What do 100-year-old knitting patterns and a lost Robert Louis Stevenson story have in common?

A: A digitally preserved newspaper page.

Q: What about if you add:

A: Just a typical lunchtime conversation at Pete’s with a couple of people I work with. The cool thing (for me) is that this is normal, involves a host of smart/interesting characters, and is routinely encouraged. I love my job.


Some folks at LC and CDL are trying to kick-start a new public discussion list for talking about digital curation in its many guises: repositories, tools, standards, techniques, practices, etc. The intuition is that there is a social component to the problems of digital preservation and repository interoperability.

Of course NDIIPP (the arena for the CDL/LC collaboration) has always been about building and strengthening a network of partners. But as Priscilla Caplan points out in her survey of the digital preservation landscape, Ten Years After, European organizations like JISC and NESTOR seem to have understood that there is an educational component to digital preservation as well. Yet even JISC and NESTOR have tended to focus on the preservation of scholarly output, whereas digital preservation really extends beyond that realm of materials.

Sharing good ideas and hard-won knowledge about digital curation, and building a network of colleagues and experts that extends beyond the usual project- and institution-specific boundaries, is just as important as building the collections and the technologies themselves.

So I guess this is a rather highfalutin goal … here’s some text stolen from the digital-curation home page to give you more of a flavor:

The digital preservation and repositories domain is fortunate to have a diverse set of institutional and consortial efforts, software projects, and standardization initiatives. Many discussion lists have been created for these individual efforts. The digital-curation discussion list is intended to be a public forum that encourages cross-pollination across these project and institutional boundaries in order to foster wider awareness of project- and institution-specific work and encourage further collaboration.

Topics of conversation can include (but are not limited to):

  • digital repository software (Fedora, DSpace, EPrints, etc.)
  • management of digital formats (JHOVE, djatoka, etc.)
  • use and development of standards (OAIS, OAI-PMH/ORE, MPEG21, METS, BagIt, etc.)
  • issues related to identifiers, packaging, and data transfer
  • best practices and case studies around curation and preservation of digital content
  • repository interoperability
  • conference, workshop, tutorial announcements
  • recent papers
  • job announcements
  • general chit chat about problems, solutions, itches to be scratched
  • humor and fun

We’ll see how it goes. If you are at all interested please sign up.


One little bit of goodness that has percolated out from my group at $work in collaboration with the California Digital Library is the BagIt spec (more readable version). BagIt is an IETF RFC for bundling up files for transfer over the network, or for shipping on physical media. Just yesterday a little article about BagIt surfaced on the LC digital preservation website, so I figure now is a good time to mention it.

The goodness of BagIt is in its simplicity and utility. A Bag is essentially: a set of files in a directory named data, a manifest file that lists the files that ought to be in the data directory (along with their checksums), and a bagit.txt file that states the version of BagIt. For example, here’s a sample (abbreviated) directory structure for a bag of digitized newspapers from the National Digital Newspaper Program:

|-- bagit.txt
|-- data
|   `-- batch_lc_20070821_jamaica
|       |-- batch.xml
|       |-- batch_1.xml
|       `-- sn83030214
|           |-- 00175041217
|           |   |-- 00175041217.xml
|           |   |-- 1905010401
|           |   |   |-- 1905010401.xml
|           |   |   `-- 1905010401_1.xml
|           |   |-- 1905010601
|           |   |   |-- 1905010601.xml
|           |   |   `-- 1905010601_1.xml

Each line of the manifest is just a fixity value followed by a relative file path:

ea9dee53c2c2dd4027984a2b59f58d1f  data/batch_lc_20070821_jamaica/batch.xml
72134329a82f32dd44d59b509928b6cd  data/batch_lc_20070821_jamaica/batch_1.xml
dc5740d295521fcc692bb58603ce8d1a  data/batch_lc_20070821_jamaica/sn83030214/00175041217/1905010601/1905010601_1.xml
e16e74988ca927afc10ee2544728bd14  data/batch_lc_20070821_jamaica/sn83030214/00175041217/1905010601/1905010601.xml
fd480b2c4bcb6537c3bc4c9e7c8d7c21  data/batch_lc_20070821_jamaica/sn83030214/00175041217/1905010401/1905010401.xml
e0e4a981ddefb574fa1df98a8a55b7a4  data/batch_lc_20070821_jamaica/sn83030214/00175041217/1905010401/1905010401_1.xml
c8dffa3cdb7c13383151e0cd8263d082  data/batch_lc_20070821_jamaica/sn83030214/00175041217/00175041217.xml

The manifest format happens to be the same format understood and generated by md5deep, a common Unix (and Windows) utility, so it’s pretty easy to generate and validate the manifests.
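To make that concrete, here’s a minimal sketch in Python of generating and validating a manifest in the same checksum-then-path format. It assumes the manifest file is named manifest-md5.txt (as in the spec); the function names and details are just illustrative, not part of BagIt itself:

```python
import hashlib
import os

def make_manifest(bag_dir):
    """Walk the bag's data/ directory and write manifest-md5.txt,
    one '<md5>  <relative path>' line per file (md5deep style)."""
    lines = []
    data_dir = os.path.join(bag_dir, "data")
    for root, _, files in os.walk(data_dir):
        for name in sorted(files):
            path = os.path.join(root, name)
            md5 = hashlib.md5()
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(8192), b""):
                    md5.update(chunk)
            rel = os.path.relpath(path, bag_dir).replace(os.sep, "/")
            lines.append("%s  %s" % (md5.hexdigest(), rel))
    with open(os.path.join(bag_dir, "manifest-md5.txt"), "w") as f:
        f.write("\n".join(lines) + "\n")

def validate(bag_dir):
    """Re-hash every file named in the manifest; return paths
    whose current checksum does not match the recorded one."""
    errors = []
    with open(os.path.join(bag_dir, "manifest-md5.txt")) as f:
        for line in f:
            digest, rel = line.strip().split("  ", 1)
            path = os.path.join(bag_dir, rel)
            with open(path, "rb") as fh:
                actual = hashlib.md5(fh.read()).hexdigest()
            if actual != digest:
                errors.append(rel)
    return errors
```

In practice you’d just run md5deep itself, but the point is how little machinery the format demands.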

The context for this work has largely been NDIIPP partners (like CDL) transferring data generated by funded projects back to LC, though it’s likely to get used in some other places internally as well. It’s funny to see the spec in its current state, after Justin Littman rattled off the LC Manifest wiki page in a few minutes following a meeting where Andy Boyko initially brought up the issue. Andy has just left LC to work for a record company in Cupertino. I don’t think I fully understood simplicity in software development until I worked with Andy. He has a real talent for boiling solutions down to their simplest expression, often leveraging existing tools to the point where very little software actually needs to be written. I think Andy and John found a natural affinity in striving for simplicity, and it shows in BagIt. Andy will be sorely missed, but that record store is lucky to get him on their team.

There are some additional cool features in BagIt, including the ability to include a fetch.txt file that contains http and/or rsync URIs for filling in parts of the bag from the network. We’ve come to refer to bags with a fetch.txt as “holey bags”, because they have holes in them that need to be filled in. This allows very large bags to be assembled quickly in parallel (using a 100-line Python script Andy Boyko wrote, or whatever variant of wget, curl, or rsync makes you happy). You can also include a package-info.txt file with some basic metadata as key/value pairs … designed primarily for humans.
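A tiny sketch of what filling in a holey bag might look like, assuming the line layout in the spec (each fetch.txt line is a URL, a length, and a path, with `-` when the length is unknown). This is not Andy’s script; the helper names are made up for illustration:

```python
import os
from urllib.request import urlretrieve

def parse_fetch(fetch_path):
    """Parse fetch.txt lines of the form '<url> <length> <path>',
    where length may be '-' if unknown."""
    entries = []
    with open(fetch_path) as f:
        for line in f:
            if not line.strip():
                continue
            url, length, path = line.split(None, 2)
            size = None if length == "-" else int(length)
            entries.append((url, size, path.strip()))
    return entries

def fill_holes(bag_dir):
    """Download every file listed in fetch.txt into its place
    in the bag (serially here; a real tool would parallelize)."""
    for url, _, rel in parse_fetch(os.path.join(bag_dir, "fetch.txt")):
        dest = os.path.join(bag_dir, rel)
        os.makedirs(os.path.dirname(dest), exist_ok=True)
        urlretrieve(url, dest)
```

Once the holes are filled, the ordinary manifest validation confirms that the assembled bag is complete and intact.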

Dan Krech and I are in the process of creating a prototype deposit web application that will essentially allow bags to be submitted via a SWORD (a profile of AtomPub for repositories) service. The SWORD part should be pretty easy, but kicking off and monitoring the retrieval of “holey bags” properly will be the more challenging part. Hopefully I’ll be able to report more here as things develop.
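For flavor, here’s roughly what the deposit side of such a service might look like from a client’s point of view: an AtomPub-style POST of a zipped bag to a collection URL. This is not our prototype, the endpoint is hypothetical, and SWORD adds its own packaging headers that are omitted here:

```python
from urllib.request import Request

def make_deposit_request(collection_url, bag_zip_bytes, filename):
    """Build an AtomPub-style POST depositing a zipped bag.
    The URL and header set are illustrative only."""
    return Request(
        collection_url,
        data=bag_zip_bytes,
        headers={
            "Content-Type": "application/zip",
            "Content-Disposition": "attachment; filename=%s" % filename,
        },
        method="POST",
    )
```

The server side would then unpack the bag, resolve any fetch.txt holes, and validate the manifest before accepting the deposit.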

Feedback on the BagIt RFC is most welcome.