After my last post about collecting 13 million Ferguson tweets, Laura Wrubel from George Washington University’s Social Feed Manager project recommended looking at how Mark Phillips made his Yes All Women collection of tweets available in the University of North Texas Digital Library. By the way, both are awesome projects to check out if you are interested in how access informs digital preservation.

If you take a look you’ll see that only the tweet ids are listed in the data you can download; the full metadata that Mark collected (with twarc, incidentally) doesn’t appear to be there. Laura knows from her work on the Social Feed Manager that it is fairly common practice in the research community to openly distribute only lists of tweet ids instead of the raw data. I believe this is done out of concern for Twitter’s Terms of Service (1.4.A):

If you provide downloadable datasets of Twitter Content or an API that returns Twitter Content, you may only return IDs (including tweet IDs and user IDs).

You may provide spreadsheet or PDF files or other export functionality via non-programmatic means, such as using a “save as” button, for up to 100,000 public Tweets and/or User Objects per user per day. Exporting Twitter Content to a datastore as a service or other cloud based service, however, is not permitted.

There are privacy concerns here (redistributing data that users have chosen to remove). But I suspect Twitter has business reasons to discourage widespread redistribution of bulk Twitter data, especially now that they have bought the social media data provider Gnip.

I haven’t really seen much discussion of this practice of distributing tweet ids and its implications for research and digital preservation. I see that the International Conference on Weblogs and Social Media now has a dataset service where you need to agree to its “Sharing Agreement”, which basically prevents re-sharing of the data.

Please note that this agreement gives you access to all ICWSM-published datasets. In it, you agree not to redistribute the datasets. Furthermore, ensure that, when using a dataset in your own work, you abide by the citation requests of the authors of the dataset used.

I can certainly understand wanting to control how some of this data is made available, especially after the debate that erupted when Facebook’s Emotional Contagion Study went public. But this does not bode well for digital preservation, where lots of copies keeps stuff safe. What if there were a standard license that we could use to encourage data sharing among research data repositories? A viral license like the GPL that allowed data to be shared and reshared within particular contexts? Maybe CC-BY-NC, or is that too weak? If each tweet is copyrighted by the person who sent it, can we even license them in bulk? What if Twitter’s Terms of Service included a research clause that applied not just to Twitter employees but to downstream archives as well?

Back of the Envelope

So if I were to make the Ferguson tweet ids available, you would need to refetch the data using the Twitter API, one tweet at a time, in order to work with the dataset. I did a little bit of reading and poking at the Twitter API, and it appears an access token is limited to 180 requests every 15 minutes. So how long would it take to reconstitute 13 million tweets from their ids?

13,000,000 tweets / 180 tweets per interval = 72,222 intervals
72,222 intervals * 15 minutes per interval =  1,083,330 minutes

1,083,330 minutes is roughly two years of constant access to the Twitter API. Please let me know if I’ve done something conceptually or mathematically wrong.
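In case anyone wants to check my arithmetic, here is the same back-of-the-envelope calculation as a few lines of Python. The only inputs are the figures above: 13 million tweets, one tweet per request, and 180 requests per 15 minute window for a single access token.

```python
# Back-of-the-envelope estimate: rehydrating 13 million tweets one id per
# request, with one access token limited to 180 requests per 15 minute window.

tweets = 13_000_000
requests_per_window = 180
window_minutes = 15

windows = tweets / requests_per_window      # ~72,222 rate limit windows
minutes = windows * window_minutes          # ~1,083,333 minutes
print(minutes / 60 / 24)                    # ~752 days, or roughly two years
```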

Update: it turns out the statuses/lookup API call can return full tweet data for up to 100 tweets per request. So a single access token could fetch about 72,000 tweets per hour (100 per request, 180 requests per 15 minutes), which means the 13 million tweets would only take about 180 hours, or just over a week. James Jacobs rightly points out that a single application could use multiple access tokens, assuming users allowed the application to use them. So if 7 Twitter users donated their API quota, the 13 million tweets could be reconstituted from their ids in roughly a day. The situation is definitely not as bad as I initially thought. Perhaps there needs to be an app that allows people to donate some of their API quota for this sort of task? I wonder if that’s allowed by Twitter’s ToS.
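For the curious, here is a rough sketch of what rehydrating a file of tweet ids with statuses/lookup might look like in Python. It assumes the requests-oauthlib library and placeholder credentials, and the hydrate/lookup helpers are just names I made up for illustration; the important parts are the batching (100 ids per request) and the sleep (a crude way to stay under 180 requests per 15 minute window for one token).

```python
# A sketch of rehydrating tweet ids with the v1.1 statuses/lookup endpoint:
# 100 ids per request, one request every 5 seconds to respect the documented
# 180 requests / 15 minute limit for a single access token.
# The four credential strings below are placeholders.

import json
import time
from requests_oauthlib import OAuth1Session

LOOKUP_URL = "https://api.twitter.com/1.1/statuses/lookup.json"

twitter = OAuth1Session(
    "CONSUMER_KEY", "CONSUMER_SECRET",
    "ACCESS_TOKEN", "ACCESS_TOKEN_SECRET",
)

def lookup(batch, out):
    # ask for up to 100 tweets at once; deleted/protected tweets are omitted
    resp = twitter.post(LOOKUP_URL, data={"id": ",".join(batch)})
    for tweet in resp.json():
        out.write(json.dumps(tweet) + "\n")
    time.sleep(5)  # 180 requests per 15 minutes = one request every 5 seconds

def hydrate(id_file, out_file):
    with open(id_file) as ids, open(out_file, "w") as out:
        batch = []
        for line in ids:
            batch.append(line.strip())
            if len(batch) == 100:
                lookup(batch, out)
                batch = []
        if batch:
            lookup(batch, out)
```

At 100 tweets every 5 seconds that works out to the 72,000 tweets per hour figure above. Note that tweets which have since been deleted or protected simply won’t come back, which ties into the privacy concern mentioned earlier.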

The big assumption here is that the Twitter API continues to operate as it currently does. If Twitter changes its API, or ceases to exist as a company, there would be no way to reconstitute the data. But what if there were a functioning Twitter archive that could reconstitute the original data using the list of Twitter ids…

Digital Preservation as a Service

I hesitated to write about LC’s Twitter archive while I was an employee. But now that I’m no longer working there I’ll just say I think this would be a perfect experimental service for them to consider providing. If a researcher could upload a list of tweet ids to a service at the Library of Congress and get the corresponding tweets back a few hours, days or even weeks later, that would be much preferable to managing a two-year crawl of Twitter’s API. It would also allow an ecosystem of tweet id sharing to evolve.

The downside here is that all the tweets are in one basket, as it were. What if LC’s Twitter archiving program is discontinued? Does anyone else have a copy? I wonder if Mark kept the original tweet data that he collected, held privately and available only inside the UNT archive. If someone could demonstrate to UNT that they have a research need to see the data, perhaps they could sign some sort of agreement and get access to the original data?

I have to be honest: I kind of loathe the idea of libraries and archives being gatekeepers to this data. Having to decide what is valid research and what is not seems fraught with peril. But on the flip side, Maciej has a point:

These big collections of personal data are like radioactive waste. It’s easy to generate, easy to store in the short term, incredibly toxic, and almost impossible to dispose of. Just when you think you’ve buried it forever, it comes leaching out somewhere unexpected.

Managing this waste requires planning on timescales much longer than we’re typically used to. A typical Internet company goes belly-up after a couple of years. The personal data it has collected will remain sensitive for decades.

It feels like we (the research community) need to manage access to this data so that it’s not just out there for anyone to use. Maciej’s essential point is that businesses (and downstream archives) shouldn’t be collecting this behavioral data in the first place. But what about a tweet (its metadata) is behavioral? Could we strip it out? If I squint right, or put on my NSA-colored glasses, even the simplest metadata, such as who is tweeting to whom, seems behavioral.
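To make that concrete, here is a toy sketch of what “stripping out” the behavioral parts of a tweet might mean in practice: keep just the text, timestamp and language, and drop the user, reply, retweet and mention metadata. The strip_tweet function and the list of fields to keep are purely mine for illustration, and which fields actually count as behavioral is, of course, the whole question.

```python
# A toy sketch of stripping a tweet (as returned by the v1.1 API) down to a
# few fields that aren't obviously behavioral. As argued above, even this
# remainder is arguably behavioral.

KEEP = ["id_str", "created_at", "text", "lang"]

def strip_tweet(tweet):
    """Return a copy of a tweet dict with user and interaction metadata dropped."""
    return {field: tweet[field] for field in KEEP if field in tweet}
```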

It’s a bit of a platitude to say that social media is new enough that we are still figuring out how to use it. Does a legitimate right to be forgotten mean that we forget everything? Can businesses blink out of existence, leaving giant steaming pools of informational toxic waste, while research institutions aren’t able to collect and preserve small portions as datasets? I hope not.

To bring things back down to earth, how should I make this Ferguson Twitter data available? Is a list of tweet ids the best the archiving community can do, given the constraints of Twitter’s Terms of Service? Is there another way forward that addresses the very real preservation and privacy concerns around the data? Some archivists may cringe at the cavalier use of the word “archiving” in the title of this post. However, I think the issues of access and preservation bound up in this simple use case warrant the attention of the archival community. What archival practices can we draw on and adapt to help us do this work?