This is just a mental bookmark for the metaphor of "folding" for studying algorithmic processes, which I encountered in Lee et al. (2019). The motivation for studying folding is to refocus attention on how algorithms are relationally situated, rather than simply examining them as black boxes that need to be opened, made transparent, and (hopefully) made accountable and just. The paper's main argument centers on dispelling the idea that algorithms can be made "fair" in their instrumentation without looking at how they are related to non-algorithmic systems.

Speaking about the extensive attention that has been paid to research into algorithmic bias (Noble, 2018; Eubanks, 2018; Pasquale, 2015; Burrell, 2016; Diakopoulos, 2016), Lee et al. point out:

One line of reasoning in this critical research implies that if only algorithms were designed in the optimal and correct way, they would generate results that were objective and fair. It is precisely this rule-bound and routinised nature of algorithms that seems to promise unbiased and fair sentencing. We find this reasoning misleading as it hides the multitude of relations algorithms are part of and produce. In a sense, the very notion of biased algorithms is linked to an objectivist understanding of how knowledge is produced, and worryingly sidesteps decades of research on the practices of knowledge production. In this article, we instead want to stress that algorithms cannot offer uniquely objective, fair and logical alternatives to the social structures of our worlds.

This is a pretty strong point that they are making in the last sentence. One reason why this paper's orientation appeals to me is that it draws the concept of "folding" out of the work of STS scholars such as Law, Mol, Serres, and Latour to act as a method for studying algorithmic systems as agents that participate in a larger network of shifting relations:

Rather than thinking about objects, relations and concepts as stable entities with fixed distances and properties, we might attend to how different topologies produce different nearness and rifts. In this way, technologies, such as algorithms, can be understood as folding time and space as much as social, political and economic relations. By analysing algorithms in this manner, we argue that we can gain a better understanding of how they become part of ordering the world: sometimes superimposing things that might seem distant and sometimes tearing apart things that might seem close. To be more concrete, using operations of folding to understand algorithms allows us to pay attention to how diverse things such as values, computations, datasets or analytical methodologies are algorithmically brought together to produce different versions of the social and natural orders.

The paper uses four case studies involving AIDS, the Zika virus, and financial metrics to highlight three types of questions that help tease out the various kinds of folding that algorithms participate in:

  • What people, objects or relations are produced as proximate or far away by algorithms?
  • What is made part of the universal and what becomes invisible?
  • How do assumptions about the normal become folded into algorithms?

The topological idea of proximity is particularly salient for me because it gives a way to talk about how algorithms can pull disconnected things into relation and thus draw them closer together. Processes that are separate in physical space may be closely aligned in an algorithmic space. Also useful is the idea of "the normal" and how algorithms often have hidden away within them an argument about what is normal. He isn't cited, but I'm reminded of Foucault's work on governmentality here, not just for his critique of power, but also for his methods for examining how knowledge and power go hand in hand to produce practices and norms.
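To make that last question a bit more concrete, here is a toy sketch (my own, not from the paper) of how a definition of "the normal" gets folded into even a trivial algorithm. The reference samples and the threshold `k` below are illustrative assumptions; the point is that whoever chooses them decides in advance what will count as ordinary.

```python
import statistics

def is_anomalous(value, reference_sample, k=2.0):
    """Flag a value that falls more than k standard deviations
    from the mean of a reference sample.

    The "normal" here is not discovered; it is folded in through
    two choices made elsewhere: whose data forms the reference
    sample, and how wide a band (k) counts as ordinary.
    """
    mean = statistics.mean(reference_sample)
    stdev = statistics.stdev(reference_sample)
    return abs(value - mean) > k * stdev

# The same value is judged differently depending on whose
# behavior defined the baseline.
baseline_a = [10, 11, 9, 10, 12, 11]   # one population's history
baseline_b = [2, 30, 5, 25, 8, 28]     # another's
print(is_anomalous(16, baseline_a))    # True
print(is_anomalous(16, baseline_b))    # False
```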

This way of thinking about algorithms appeals to me as I'm analyzing the results of my field study, where I examined how a federal agency's choices about what to collect from the web (appraisal) were expressed as part of a sociotechnical system. This system has a particular set of algorithms at its core: in this case, fixity algorithms. Fixity algorithms are in themselves as close as we can imagine to an objective measure, since they are mathematical procedures for summarizing the contents of distinct bytestreams. This apparent neutrality is the means by which the agency's archive is assembled and deployed. But all sorts of phenomena factor into which collected data files are "fixed", and it is in how fixity algorithms are folded into other processes (forensics, preservation, data packaging, law enforcement, surveillance) that the story gets interesting.
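As a concrete anchor, here is roughly what I mean by a fixity algorithm: a minimal sketch (the function names are mine, not the agency's tooling) that computes a SHA-256 digest over a file's bytestream and compares it against a previously recorded value.

```python
import hashlib

def fixity_value(path, algorithm="sha256", chunk_size=65536):
    """Compute a fixity value (a cryptographic digest) for a file.

    The digest summarizes the bytestream and nothing else: two
    byte-identical files always agree, and a single flipped bit
    yields a completely different value.
    """
    digest = hashlib.new(algorithm)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_fixity(path, recorded_digest):
    """Check a file's current digest against a recorded one.

    The comparison is mechanical; which files get recorded, when
    checks run, and what a mismatch sets in motion are all decided
    by the surrounding processes the algorithm is folded into.
    """
    return fixity_value(path) == recorded_digest
```

The computation itself could hardly be more neutral, which is exactly why it travels so easily into forensic, preservation, and surveillance workflows that are anything but.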

The process of folding seems like a useful way to talk about how human and non-human actors are brought into collaboration in algorithmic systems. It accommodates some measure of intentional design alongside social and political contingency. In my own work it is helpful for decentering my own tendency to zoom in on the technical details of how an algorithm is implemented in order to unpack its design decisions, and instead to look laterally at how the algorithm enables and disables participation in other social, political, and technical activities.

References

Burrell, J. (2016). How the machine 'thinks': Understanding opacity in machine learning algorithms. Big Data & Society, 3(1).
Diakopoulos, N. (2016). Accountability in algorithmic decision making. Communications of the ACM, 59(2), 58ā€“62.
Eubanks, V. (2018). Automating inequality: How high-tech tools profile, police, and punish the poor. St. Martin's Press.
Lee, F., Bier, J., Christensen, J., Engelmann, L., Helgesson, C.-F., & Williams, R. (2019). Algorithms as folding: Reframing the analytical focus. Big Data & Society, 6(2).
Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. New York University Press.
Pasquale, F. (2015). The black box society: The secret algorithms that control money and information. Harvard University Press.