Following on from my reading of Russell’s Open Standards I decided to read Martha Lampland and Susan Leigh Star’s Standards and their Stories. Star had a huge influence on the establishment of infrastructure studies, and on the application of ethnographic methods to sociotechnical research generally, so this book has been on my bucket list for a while. The book is a collection of essays by Lampland, Star and others, interspersed with news materials that provide useful examples of how standards operate. I found the alternation between the more theoretical and the more mundane really useful, since part of the message of the book is methodological: how to go about studying standards and their impacts. Discourse analysis and content analysis are useful techniques for examining how standards are deployed, and being able to analyze the news critically and imaginatively is something that Lampland and Star really got me thinking about.
In my own research I’m generally interested in how standards operate as part of a community of practice, but also specifically in how software, especially widely deployed software (like Archive-It), can function as a de facto standard. The behaviors embedded in the code of the software become the limits or boundaries of what is thinkable or possible. The interfaces that are tangled up with, and expressive of, that code govern what can and cannot be done. Sometimes standards are accepted as reality by some groups, while they are subverted and worked around by others who seek alternate futures. For example, in some ways it’s possible to look at web archiving practices and standards as resisting the much larger body of web standards that define the architecture of the web.
So I came to Standards and their Stories hoping to see if it could provide any insight into the work of software studies. Of course I found more things to read, such as Abbate (2000) (which sounds like a classic I should already be familiar with), and Slaton & Abbate (2001). It also got me thinking that I need to spend some time with Bettivia (2016) to see how she structures her dissertation to analyze the development of OAIS.
In their introduction to the collection Lampland and Star present a set of questions that are useful in studying standards:
Hence we can say that slippage … between a standard and its realization in action becomes a crucial unit of analysis for the study of standardization and quantification. Using historical analysis, this may mean analyzing irreversibilities and processes of generative entrenchment. What is being standardized, for what purpose, and with what result? When did it begin? What were its first entrenchments? What can and should be changed? Who are the actors engaged in the process of standardization, and do they change at different moments of a standard’s genesis and maturation? What small decisions have ramified through the life and spread of the standards? When does a standard become sufficiently stabilized to be seen as an object or quality influencing social behavior? How do we address the objectlike quality of standards while keeping a keen eye on the necessarily historical and processual quality of its emergence, transformation, and (variably) long life? How do standards developed in one context acquire a modular character, enabling them to be moved around or to serve as templates for the development of other standards? (p. 15)
These are useful questions for coming at standards from different angles. The focus here and elsewhere in the book on historical methods has much in common with Russell’s work. It makes perfect sense that access to records and documents of standardization processes would be important for analyzing the development and impact of standards. But nonetheless I was surprised by it, for some reason I couldn’t quite put my finger on. Perhaps I was just pleased to see the application of humanistic methods in the social sciences, since my own project sits at the crossroads or hinterlands (as Law (2004) calls them) of the humanities, social sciences and information technology. Thinking of Law, these questions also remind me how much Star has in common with Bruno Latour, with whom she worked in the 1980s, and specifically with the method/philosophy of Actor Network Theory (ANT). The questions are posed in the same spirit as Latour’s call to “follow the actors”, where the actors aren’t only people:
Using a slogan from ANT, you have ‘to follow the actors themselves’, that is try to catch up with their often wild innovations in order to learn from them what the collective existence has become in their hands, which methods they have elaborated to make it fit together, which accounts could best define the new associations that they have been forced to establish. (Latour, 2005)
Another pleasing connection was how work on standards intersects with that of James Scott. I spent a summer reading his Seeing Like a State, which was very formative for me early in the PhD program. I’m thinking in particular of the idea of legibility: how things are made legible, for whom, and what legibility entails for the communities and environments being rendered legible. Elizabeth Dunn writes:
What are standards, anyway? Artifacts? Practices? A mode of governance? In Seeing Like a State, Scott (1998) argues that utopian “high modernism” techniques to improve the human condition are a form of episteme, or abstract knowledge. Certainly, bodies of standards fall in that category, too; as a set of codified rules, they exist in a placeless place where the complexities and differences of real landscapes and actual people are glossed over. When standards are used to dictate practice or to grade products, they often replace metis–the unwritten practical know-how that local producers gain over the years as they work to adapt to the ever changing conditions of their lands, their markets, and their communities. (p. 118)
Reading this made me wonder how standards can sometimes operate, unintentionally (or intentionally), to push groups out of markets, as both producers and consumers. For my own research it suggests asking how the standardization of WARC could have operated to consolidate particular ideas and architectures of web archiving while invalidating others. I’m thinking here of Michigan’s early web archiving using HTTrack, and the California Digital Library’s own homegrown web archiving infrastructure, both of which were retired in favor of Internet Archive’s Archive-It service. I don’t think this was an intentional market consolidation, but perhaps an end result of how we conceptualized and standardized what constitutes a web archive. What knowledges and practices were lost when this happened?
Dunn reminded me of a significant (and useful) term from Scott, oikodomi:
Although standards present themselves as episteme, as pure idea that exists outside of particular places, standards need an oikodomi, a material context in which they are transformed into action and effect. Fully analyzing standards requires seeing them within specific infrastructures and observing how they modify and are modified by particular physical environments. (p. 120)
As I’m researching infrastructures of web archiving I expect that case studies, multi-sited ethnography and documentary analysis will be extremely important methods for seeing where standards meet and merge with practice. It will also be good to reacquaint myself with how Scott theorizes and mobilizes his idea of oikodomi. Scott isn’t usually thought of as an STS scholar, at least he’s not in this list. Perhaps that’s because of his interest in politics? But I’d argue that he has a great deal to offer the STS project, particularly because of his interest in the goals and effects of legibility.
One chapter that stood out for me was Millerand and Bowker’s Metadata Standards. Of course, Bowker was a frequent collaborator with Star, and as they were partners they had a deeply shared vision of the STS project. How data relate to standards is a super relevant topic for my own work with web archives. The chapter covers how metadata standards were deployed for the preservation and access of scientific data in the Long Term Ecological Research Network. The authors focus on the deployment of the Ecological Metadata Language (EML) as a way in to looking at how data sharing and use function. In many ways this discussion really prefigures some of today’s discussions around data science:
A central problem here is that the storage of, access to, and evaluation of the validity of data are extremely dependent on the ways in which the data have been collected, labeled, and stored. (p. 150)
One insight that I thought was particularly significant is that infrastructures don’t obey single timelines or life cycles. There are different intersecting time scales that depend on actors’ purposes.
There is no linear narrative to be told: “The time of innovations depends on the geometry of the actors, not on the calendar” (Latour & Porter, 1996, p. 80). In other words, we cannot track a single life cycle (development, deployment, and death) but must pay attention to the diverse temporalities of the actors. This perspective allows us to better grasp how the existence and even the reality of projects vary over time, in line with the engagement or disengagement of actors in the development of these projects or objects. (p. 151)
Their study relied on interviews and participant observation with different people involved in the standard, as well as document analysis of standards and other documentation. Interestingly, they focused on two different groups of people: the creators of the EML standard and the enactors of the standard (the people who implemented it). I kind of wish more was said about how they arrived at this grouping, and about the methodology in general. One of the reasons I enjoy reading Law so much is how long he dwells (perhaps even to a fault) on methods (Law, 2004). For example, were there people who participated in both groups? Since Latour is being referenced I was also curious whether they consider non-human actors: the environments being studied, the instruments, etc. I think this does come out later as they discuss the significance of different measures, and how many of the scales of measurement were locally adapted for particular types of data collection, which made them more difficult to aggregate and share consistently. This idea again recalls Scott’s oikodomi, discussed above.
This question of multiple timelines, and the resistance to a linear narrative, also reminded me of the Records Continuum model (McKemmish, Upward, & Reed, 2010), which looks at archives in a pluralistic way that embraces the many forms archival records can take, and the times at which archival processes engage with them:
The life-cycle concept has been useful in promoting a sense of order, a systematic approach, to the overall management of recorded information. However, strict adherence to its principles undermines any trend toward greater cooperation and coordination of archivists and records managers. It ignores the many ways in which the records management and archives operations are interrelated, even intertwined. It may be convenient in a large bureaucracy to attempt to clarify roles and responsibilities by delineating carefully the records management and archival functions. It may also be counterproductive. (Atherton, 1985, p. 47)
In both cases attention is being paid to the actual movement of actors (be they people, data or records) instead of projecting a particular model (e.g. a timeline or life cycle) onto their movements. This kind of empirical work means attending to the details and differences as much as the patterns and continuities. Getting back to Millerand and Bowker, the distinction they draw between the deployment and enactment of standards shows how our stories of standards often privilege the invention of a standard over its enactment (Baker & Millerand, 2007).
One document that Millerand and Bowker study is the project’s data dictionary. They were able to look at its revision over time as a way of reconstructing the various collaboration activities. I’m hoping that version control histories in code repositories such as GitHub can be used in a similar way, as a record of the enactment of software. This technique has something in common, I think, with Trace Ethnography, which was developed as a method for studying data ecosystems and infrastructures (Geiger & Ribes, 2011).
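As a rough illustration of that idea (mine, not from the book), here is a minimal sketch of pulling a single file’s revision history out of a git repository, so that each commit touching the file can be read as one event in the document’s life. It assumes git is installed locally; the repository path and file name passed in are placeholders.

```python
# Sketch: treat a file's git history as a trace of enactment, in the
# spirit of trace ethnography. Lists every commit that touched a
# document, oldest first. Paths and file names are hypothetical.
import subprocess

def revision_trace(repo_path, file_path):
    """Return (hash, author, date, subject) tuples for each commit
    touching file_path, oldest first."""
    out = subprocess.run(
        ["git", "-C", repo_path, "log", "--follow",
         "--format=%h|%an|%ad|%s", "--date=short", "--", file_path],
        capture_output=True, text=True, check=True,
    ).stdout
    commits = [tuple(line.split("|", 3)) for line in out.splitlines() if line]
    return list(reversed(commits))  # git log emits newest-first by default
```

Run against something like a specification’s repository, each tuple becomes one revision event that could then be read alongside mailing list threads and issue discussions, much as Millerand and Bowker read successive versions of the data dictionary.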
The use of enactment, and the idea of the “staging” of a standard, also seem like useful methods they develop in this chapter. Is it possible to see how standards and tools operate as a play script in need of interpretation? Connecting standards and data practices led the authors to observe that ultimately ontology matters:
We slice the ontological pie the wrong way if we see software over here and organizational arrangements over there. Each standard in practice is made up of sets of technical specifications and organizational arrangements. As Latour (2005) reminds us in another context, the question is how to distribute qualities between the two–what needs to be specified technically and what can be solved organizationally are open questions to which there is no one right answer. (p. 165)
The book ends with an appendix containing a course plan for an Infrastructure Studies class. It struck me how relevant that would be for the UMD iSchool, because of our proximity to Washington DC, where so much thinking about infrastructures and governance is centered. It would be useful for our students to get exposure to these ideas around standards and infrastructure, to show that the mundane details of how they operate are actually quite interesting, and super relevant for our work.
Abbate, J. (2000). Inventing the Internet. MIT Press.
Atherton, J. (1985). From life cycle to continuum: Some thoughts on the records management-archives relationship. Archivaria, 21(Winter), 43–51.
Baker, K. S., & Millerand, F. (2007). Articulation work supporting information infrastructure design: Coordination, categorization, and assessment in practice. In 40th Annual Hawaii International Conference on System Sciences, 2007. HICSS 2007.
Geiger, R. S., & Ribes, D. (2011). Trace ethnography: Following coordination through documentary practices. In 44th Hawaii International Conference on System Sciences (pp. 1–10). IEEE. Retrieved from http://www.stuartgeiger.com/trace-ethnography-hicss-geiger-ribes.pdf
Latour, B. (2005). Reassembling the social: An introduction to actor-network-theory. Oxford University Press.
Latour, B., & Porter, C. (1996). Aramis, or, the love of technology. Cambridge, MA: Harvard University Press.
Law, J. (2004). After method: Mess in social science research. Routledge.
McKemmish, S., Upward, F., & Reed, B. (2010). Records continuum model. In M. Bates & M. N. Maack (Eds.), Encyclopedia of library and information sciences. Taylor & Francis.
Scott, J. C. (1998). Seeing like a state: How certain schemes to improve the human condition have failed. Yale University Press.
Slaton, A., & Abbate, J. (2001). The hidden lives of standards. In M. T. Allen & G. Hecht (Eds.), Technologies of power (pp. 95–144). Cambridge, MA: MIT Press.