Serves 23,376 SKOS Concepts



  1. Open a new file using your favorite text editor.
  2. Instantiate an RDF graph with a dash of rdflib.
  3. Use Python's urllib to extract the HTML for each of the Times Topics Index Pages, e.g. for A.
  4. Parse HTML into a fine, queryable data structure using BeautifulSoup.
  5. Locate topic names and their associated URLs, and gently add them to the graph with a pinch of SKOS.
  6. Go back to step 3 to fetch the next batch of topics, until you've finished Z.
  7. Bake the RDF graph as an RDF/XML file.


If you don't feel like cooking up the RDF/XML yourself, you can download it from here (you may want to right-click to download, since some browsers have trouble rendering the XML), or download the 68-line implementation and run it yourself.

The point of this exercise was mainly to show how it could be useful to think of the New York Times Topics as a controlled vocabulary that can be serialized as a file while still being present on the Web. Perhaps to someone writing an application that needs to integrate with the New York Times and who wants to tag content using the same controlled vocabulary. Or perhaps to someone who wants to link their own content with similar content at the New York Times. These are all use cases for expressing the Topics as SKOS and being able to ship the vocabulary around with resolvable identifiers for the concepts.

Of course there is one slight wrinkle. Take a look at this Turtle snippet for the concept of Ray Bradbury:

@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix skos: <http://www.w3.org/2004/02/skos/core#> .

<http://topics.nytimes.com/top/reference/timestopics/people/b/ray_bradbury/index.html> a skos:Concept;
    skos:prefLabel "Bradbury, Ray";
    skos:broader <http://topics.nytimes.com/top/reference/timestopics/people/index.html>;
    skos:inScheme <http://topics.nytimes.com/top/reference/timestopics> .

Notice the URI being used for the concept?

The wrinkle is that there's no way to get RDF back from this URI currently. But since NYT is already using XHTML, it wouldn't be hard to sprinkle in some RDFa such that:


<h1 xmlns:skos="http://www.w3.org/2004/02/skos/core#"
    about="" typeof="skos:Concept"
    property="skos:prefLabel" content="Bradbury, Ray">Ray Bradbury</h1>

And voilà, you've got Linked Data. I took the 5 minutes to mark up the HTML myself and put it here, which you can run through the RDFa Distiller to get some Turtle. Of course, if the NYT ever decided to alter their HTML to provide this markup, this recipe would be simplified greatly: no more error-prone scraping, since the assertions could be pulled directly out of the HTML.