SPARQL endpoint

disclaimer: this was a prototype and is no longer available; a similar service is now offered by the Library of Congress

I’ve set up a SPARQL endpoint for LCSH. If you are new to SPARQL endpoints, they are essentially REST web services that let you query a pool of RDF data using a query language that combines features of pattern matching, set logic and the web, and then get back results in a variety of formats. If you are a regular expression and/or SQL junkie, and like data, then SPARQL is definitely worth taking a look at.
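
To make the "REST web service" part concrete, here is a sketch of how a client builds a SPARQL protocol request: the query text simply goes in a `query` parameter, and content negotiation picks the result format. The endpoint URL below is hypothetical (the original lcsh.info endpoint is gone).

```python
# Sketch: talking to a SPARQL endpoint over plain HTTP.
# The endpoint URL is hypothetical, standing in for the real service.
from urllib.parse import urlencode

endpoint = "http://example.org/sparql"   # hypothetical endpoint
query = "SELECT ?s ?p ?o WHERE { ?s ?p ?o } LIMIT 10"

# A SPARQL protocol GET request is just the query, URL-encoded,
# in the 'query' parameter.
url = endpoint + "?" + urlencode({"query": query})
# Fetching this URL would return bindings as XML, JSON, etc.,
# depending on the Accept header you send.
```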

If you are new to SPARQL and/or LCSH as SKOS, you can try the default query, which returns the first 10 triples in the triple store:

SELECT ?s ?p ?o
WHERE {?s ?p ?o}
LIMIT 10

As a first tweak, try increasing the limit to 100. If you are feeling more adventurous, perhaps you’d like to look up all the triples for a concept like Buddhism:

PREFIX skos: <http://www.w3.org/2004/02/skos/core#>

SELECT ?s ?p ?o
WHERE {
  ?s ?p ?o .
  ?s skos:prefLabel "Buddhism"@en .
}
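
If it helps to see what the query engine is doing with those two patterns, the basic graph pattern matching can be sketched in plain Python over a toy triple list (the `lcsh:` identifiers below are made up for illustration):

```python
# SPARQL basic graph pattern matching, sketched in plain Python.
# A triple store is conceptually a set of (subject, predicate, object)
# triples; the lcsh: identifiers here are made up.
SKOS = "http://www.w3.org/2004/02/skos/core#"

triples = [
    ("lcsh:buddhism", SKOS + "prefLabel", "Buddhism@en"),
    ("lcsh:buddhism", SKOS + "narrower", "lcsh:zen"),
    ("lcsh:zen",      SKOS + "prefLabel", "Zen Buddhism@en"),
]

# First pattern: bind ?s to subjects whose prefLabel is "Buddhism"@en
subjects = {s for (s, p, o) in triples
            if p == SKOS + "prefLabel" and o == "Buddhism@en"}

# Second pattern: return every triple whose subject is bound to ?s
results = [(s, p, o) for (s, p, o) in triples if s in subjects]
```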

Or, perhaps you are interested in seeing what narrower terms there are for Buddhism:

PREFIX skos: <http://www.w3.org/2004/02/skos/core#>

SELECT ?uri ?label
WHERE {
  <> skos:narrower ?uri .
  ?uri skos:prefLabel ?label .
}

Or maybe you don’t know the skos:prefLabel (aka the authorized heading), so you can look for all the LCSH headings that start with “Independence”:

PREFIX skos: <http://www.w3.org/2004/02/skos/core#>

SELECT ?s ?label
WHERE {
  ?s skos:prefLabel ?label .
  FILTER regex(?label, '^independence', 'i')
}
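
The FILTER semantics are the same as regular-expression matching in any language: `^independence` anchors at the start of the label, and the `'i'` flag makes the match case-insensitive. Restated with Python’s `re` module (the labels below are illustrative, not necessarily real LCSH headings):

```python
# FILTER regex(?label, '^independence', 'i'), restated with Python's re.
import re

labels = [
    "Independence",
    "Independence (Psychology)",          # illustrative label
    "Declarations of independence",       # no match: not at the start
    "independence movements",             # illustrative label
]

pattern = re.compile(r"^independence", re.IGNORECASE)
matches = [label for label in labels if pattern.search(label)]
```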

Feel free to use the service however you want. I’m interested in seeing what its limitations are.

Benjamin Nowack’s ARC made it extremely easy to load up the 2,441,494 LCSH triples in a few hours with a script like:

<?php

/* ARC2 static class inclusion; adjust the path to your install */
include_once('arc/ARC2.php');

$config = array(
    'db_name'           => 'arc',
    'db_user'           => 'arc',
    'db_pwd'            => 'notapassword',
    'store_name'        => 'lcsh',
    'store_log_inserts' => 1,
);

$store = ARC2::getStore($config);
if (!$store->isSetup()) {
    $store->setUp();
}
$rs = $store->query('LOAD <>');  /* URL of the LCSH dump omitted */

Then it’s just a simple matter of putting up a php script like:

<?php

/* ARC2 static class inclusion; adjust the path to your install */
include_once('arc/ARC2.php');

/* MySQL and endpoint configuration */
$config = array(
  /* db */
  'db_host' => 'localhost', /* optional, default is localhost */
  'db_name' => 'arc',
  'db_user' => 'arc',
  'db_pwd' => 'fakepassword',
  /* store name */
  'store_name' => 'lcsh',
  /* endpoint */
  'endpoint_features' => array(
    'select', 'construct', 'ask', 'describe'
  ),
  'endpoint_timeout' => 60, /* not implemented in ARC2 preview */
  'endpoint_read_key' => '', /* optional */
  'endpoint_write_key' => 'fakekey', /* optional */
  'endpoint_max_limit' => 1000, /* optional */
);

/* instantiation */
$ep = ARC2::getStoreEndpoint($config);

/* request handling */
$ep->go();

Ideally I would’ve been able to quickly bring up a SPARQL endpoint on top of the rdflib Sleepycat triple store that is being used to serve up the linked data. But rather than pursuing elegance (this is kinda side work after all), I wanted to quickly get the SPARQL service out there for experimentation, and this was the quickest way for me to do that. If the service proves useful I’ll look more at what it takes to create an rdflib SPARQL service, or at porting over the little Python code I have to PHP (gasp).

SKOS displays w/ SPARQL

I’m just in the process of getting my head around SPARQL a bit more. At $work, Clay and I ran up against a situation where we wanted a query that would return, from an entire SKOS concept scheme, the subgraph of all assertions involving a particular concept URI as the subject. Easy enough, right?

CONSTRUCT {<> ?p ?o}
WHERE {<> ?p ?o}

The thing is, for human readable displays we don’t want to display the URIs for related concepts (skos:broader, skos:narrower or skos:related) … we want to display the nice skos:prefLabel for them.

So how can we get a subgraph for a concept as well as any concept that might be directly related to it, in a single query? We came up with the following but I’d be interested in more elegant solutions:

PREFIX skos: <http://www.w3.org/2004/02/skos/core#>

CONSTRUCT {<> ?p1 ?o1 . ?s2 ?p2 ?o2}
WHERE {
    {<> ?p1 ?o1 .}
    UNION
    {
        <> skos:narrower ?s2 .
        ?s2 ?p2 ?o2 .
    }
    UNION
    {
        <> skos:broader ?s2 .
        ?s2 ?p2 ?o2 .
    }
    UNION
    {
        <> skos:related ?s2 .
        ?s2 ?p2 ?o2 .
    }
}

The above ran quite nicely in my Arc playground. Any suggestions or ideas on how to boil this down would be appreciated. I also wanted to jot down this query in the likely event that I forget how I did it.
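
The logic of that CONSTRUCT, independent of any query engine, is: keep every triple whose subject is the concept itself, plus every triple about any concept one hop away via skos:broader/narrower/related. A plain-Python sketch of that logic (the `c:` URIs below are made up):

```python
# The subgraph logic of the CONSTRUCT query, sketched in plain Python.
# Toy URIs (c:...) are made up for illustration.
SKOS = "http://www.w3.org/2004/02/skos/core#"
REL = {SKOS + "broader", SKOS + "narrower", SKOS + "related"}

def concept_subgraph(concept, triples):
    # concepts directly related to the one we care about
    neighbors = {o for (s, p, o) in triples if s == concept and p in REL}
    # the concept's own triples plus its neighbors' triples
    return [(s, p, o) for (s, p, o) in triples
            if s == concept or s in neighbors]

triples = [
    ("c:buddhism", SKOS + "prefLabel", "Buddhism"),
    ("c:buddhism", SKOS + "narrower", "c:zen"),
    ("c:zen",      SKOS + "prefLabel", "Zen Buddhism"),
    ("c:quilting", SKOS + "prefLabel", "Quilting"),  # unrelated concept
]
sub = concept_subgraph("c:buddhism", triples)
```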

calais and ocr newspaper data

Like you I’ve been reading about the new Reuters Calais Web Service. The basic gist is you can send the service text and get back machine readable data about recognized entities (personal names, state/province names, city names, etc). The response format is kind of interesting because it’s RDF that uses a bunch of homespun vocabularies.

At work Dan, Brian and I have been working on ways to map document centric XML formats to intellectual models represented as OWL. At our last meeting one of our colleagues passed out the Calais documentation, and suggested we might want to take a look at it in the context of this work. It’s a very different approach in that Calais is doing natural language processing and we instead are looking for patterns in the structure of XML. But the end result is the same–an RDF graph. We essentially have large amounts of XML metadata for newspapers, but we also have large amounts of OCR for the newspaper pages themselves. Perfect fodder for nlp and calais…

To aid in the process I wrote a helper utility that bundles up the Calais web service into a function call that returns an RDF graph, courtesy of Dan’s rdflib:

  from calais import calais_graph
  graph = calais_graph(content)

This is dependent on you getting a Calais license key and stashing it away in ~/.calais. I wrote a couple of sample scripts that use it to do things like output all the personal names found in the text. For example, here’s the people script. (Note: the angle brackets are missing from the SPARQL prefixes intentionally, since they don’t render properly (yet) in WordPress.)

  from calais import calais_graph
  from sys import argv
  filename = argv[1]
  content = file(filename).read()
  g = calais_graph(content)
  sparql = """
          PREFIX rdf:
          PREFIX ct:
          PREFIX cp:
          SELECT ?name
          WHERE {
            ?subject rdf:type ct:People .
            ?subject cp:name ?name .
          }
          """
  for row in g.query(sparql):
      print row[0]

Notice the content is sent to Calais, the graph comes back, and then a SPARQL query is executed on it? Here’s what we get when I run this OCR data through it (take a look at the linked OCR to see just how irregular this data is).

  ed@curry:~/bzr/calais$ ./people data/ndnp\:774348 
  Edwin W. Joy
  A. Musto
  George Dlxoh
  Le Roy
  Charles P. Braslan
  Siegerfs Angostura Bitters
  James Stafford
  Herbert Putnam
  H. G. Pond
  Charles F. Joy
  Santa Rosa
  Allen S. Qlmsted
  Pptter Palmer
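
Counting how often each name occurs across many pages would separate the real names from one-off misrecognitions; a sketch with `collections.Counter`, using made-up counts:

```python
# Frequency ranking pushes one-off OCR garbles onto the long tail.
# The names and counts here are made up for illustration.
from collections import Counter

names = (["Herbert Putnam"] * 5
         + ["Charles F. Joy"] * 3
         + ["George Dlxoh"])        # a one-off OCR misrecognition

ranked = Counter(names).most_common()
```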

Clearly there are some errors, but you could imagine a ranked list of these as they occur across a million pages, where the anomalies would fall off somewhere on the long tail. It could be really useful in faceted browse applications. And here’s the output of cities.

  ed@curry:~/bzr/calais$ ./cities data/ndnp:774348 
  San Jose
  Santa Clara
  St. Louis
  New York
  San Francisco
  San Francisco
  Los Angeles

Not too shabby. If you want to try this out, install rdflib, and then grab the code, sample scripts and OCR samples from my bzr repo:

  bzr branch

If you do dive into the code you’ll notice that currently the REST interface returns the RDF escaped inside an XML envelope of some kind. I think this is a bug, but the helper extracts and unescapes the RDF.
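
The unwrapping step is simple enough to sketch. The envelope element names below are assumptions, not the documented Calais format; the point is only that an XML parser decodes the `&lt;`/`&gt;` entities in the text node, leaving the raw RDF/XML string:

```python
# Hedged sketch of pulling escaped RDF out of an XML envelope.
# The <response><result> element names are assumptions for illustration.
import xml.etree.ElementTree as ET

def extract_rdf(envelope):
    root = ET.fromstring(envelope)
    # the parser decodes the entity-escaped markup in the text node
    return root[0].text

sample = "<response><result>&lt;rdf:RDF&gt;...&lt;/rdf:RDF&gt;</result></response>"
```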