Visualizations do not show us things that are evident—visualizations make things evident. Visualizations, in other words, reveal something about the world that would not have been obvious without the work they do.
Instead of trotting out some of the tools we’ve developed on Documenting the Now, I thought it might be more appropriate to talk about a specific case study that I was involved with, partly as a result of my work on Documenting the Now.
A few years ago I got to work with Damien Pfister and some others in the Communications Department at UMD on a project to analyze the rhetoric of computational propaganda that occurred on Facebook during the 2016 election. These were Facebook posts that were released to Congress as part of the Mueller investigation into the operations of the Internet Research Agency.
The plan is to just share some background and the impetus for the work, how the released PDFs were processed as data, and how they were then used as a corpus for annotation with a set of “codes” for qualitative analysis.
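As a rough illustration of what “processed as data” can look like, here is a minimal sketch of turning the text extracted from one of the released ad PDFs into a structured record. The field labels (`Ad ID`, `Ad Text`, etc.) are assumptions for illustration, not necessarily the exact labels in the release, and the text-extraction step itself (e.g. with pdfminer) is elided:

```python
import re

# Hypothetical field labels for one ad record; the actual labels in the
# released PDFs may differ -- these are assumptions for illustration.
FIELDS = ["Ad ID", "Ad Text", "Ad Targeting", "Ad Impressions", "Ad Clicks"]

def parse_ad(raw: str) -> dict:
    """Split one ad's extracted text into a field -> value dict."""
    labels = "|".join(re.escape(f) for f in FIELDS)
    record = {}
    # Capture each labeled field's value up to the next label (or the end).
    pattern = rf"({labels}):?\s*(.*?)(?=(?:{labels}):|\Z)"
    for m in re.finditer(pattern, raw, re.S):
        record[m.group(1)] = m.group(2).strip()
    return record

sample = "Ad ID: 1234\nAd Text: Stop the invasion!\nAd Impressions: 5000"
print(parse_ad(sample))
# → {'Ad ID': '1234', 'Ad Text': 'Stop the invasion!', 'Ad Impressions': '5000'}
```

Once the ads are in this shape, each record can be loaded into a qualitative analysis tool and annotated with the coding scheme.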
In the process of putting together the slides, and thinking about the project from a media studies angle, it occurred to me that this project connected somewhat with my interest in the work of Trevor Paglen and with Harun Farocki’s idea of Operational Images. For Farocki, operational images are images that are made by machines for machines, or as Paglen puts it:
Harun Farocki was one of the first to notice that image-making machines and algorithms were poised to inaugurate a new visual regime. Instead of simply representing things in the world, the machines and their images were starting to “do” things in the world. In fields from marketing to warfare, human eyes were becoming anachronistic. It was, as Farocki would famously call it, the advent of “operational images.” (Paglen, 2014)
Of course, machines don’t independently make images for other machines; they do so because people tell them to. Operational images are often part of a control apparatus, designed by people for people (Deleuze, 1992). While these IRA ads do not appear to have been made autonomously by machines, they were composed, targeted, and delivered with machines, with very specific aims in mind.
Here I’m also reminded of Amelia Acker’s work on data craft, where “manipulators … create disinformation with falsified metadata, specifically platform activity signals” (Acker, 2018). The use of metadata is clearly apparent in the way these ads were targeted, and their content was honed to foment division and polarization.
I guess this should have been an obvious connection from the start when I was working on the IRAds project…but having to get up and talk about your work is always clarifying in surprising ways. It seems like this idea of operational images might need some updating or clarification in light of data craft?