The news about 100,000 books on Freebase got me poking around with curl. I was pleased to see that Freebase actually distinguishes between a book as a work and a particular edition of that book. To FRBR aficionados this will be familiar as the difference between a Work and a Manifestation:

For example here is a URI for James Joyce’s Dubliners as a work:

and here is a URI for a 1991 edition of Dubliners:
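The Work versus Manifestation split is easy to picture as two linked record types. Here's a toy Python sketch, with field names that are my own illustration rather than Freebase's actual schema:

```python
from dataclasses import dataclass

# Toy model of the FRBR split: a Work is the abstract text, a
# Manifestation is one concrete published edition of it.
# (Field names are illustrative, not Freebase's schema.)

@dataclass
class Work:
    title: str
    author: str

@dataclass
class Manifestation:
    work: Work   # link from the edition back to the abstract work
    year: str
    isbn: str

dubliners = Work(title="Dubliners", author="James Joyce")
dover_1991 = Manifestation(work=dubliners, year="1991", isbn="0486268705")
print(dover_1991.work.title)  # → Dubliners
```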

If you follow those links in your browser you’ll most likely be redirected to the human-readable HTML view. But machine agents can use the same URL to discover, say, an RDF representation of this edition of Dubliners, for example with curl:

curl --location --header "Accept: application/turtle"

@prefix fb: <http://rdf.freebase.com/ns/>.
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>.
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#>.
@prefix xml: <http://www.w3.org/XML/1998/namespace>.

 <> a 
     <> "0486268705";
     <> "91008517";
     <> <>;
     <> <>;
     <> "823";
     <> <>;
     <> "1991";
     <> "Dubliners";
     <> <>. 

 <> a <>;
     <> "152"^^<>;
     <> <>. 

There are a few assertions that struck me as interesting:

  • the statement that says the resource is in fact an edition
  • the statement that links the edition with the work
  • and the assertion that states the Library of Congress Control Number (LCCN) for the book

I was mostly surprised to see library-centric metadata being collected, such as the LCCN, OCLC number, Dewey Decimal Classification, and LC Classification. There are even human-readable instructions for how to enter the data (take that AACR2!).

Anyhow it got me wondering what it would be like to stuff all the Freebase book data into a triple store, assert:

<> owl:sameAs <> .
<> owl:sameAs <> .

and then run some basic inferencing and get some FRBR data. I know, crazy-talk … but it’s interesting in theory (to me at least).
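For what that inferencing might look like, here is a toy owl:sameAs "smushing" pass in Python: it merges every sameAs cluster down to one canonical identifier, so statements made against a Freebase URI and a FRBR-ish URI end up on the same resource. All identifiers below are made up for illustration:

```python
OWL_SAMEAS = "owl:sameAs"

def smush(triples):
    """Rewrite subjects and objects so each owl:sameAs cluster of
    resources is represented by a single canonical identifier."""
    # Union-find over the sameAs pairs (sameAs is symmetric and
    # transitive, which is exactly what union-find captures).
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    for s, p, o in triples:
        if p == OWL_SAMEAS:
            union(s, o)

    # Re-emit every non-sameAs triple with canonical identifiers.
    # (Literal objects pass through find() untouched: they only
    # ever map to themselves.)
    return {(find(s), p, find(o))
            for s, p, o in triples if p != OWL_SAMEAS}

triples = [
    ("fb:en.dubliners", OWL_SAMEAS, "frbr:work/dubliners"),
    ("fb:en.dubliners", "fb:type.object.name", "Dubliners"),
]
print(smush(triples))
```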