This week’s seminar focused on citizen science. We had three readings: Wiggins & Crowston (2011), Quinn & Bederson (2011), and Eveleigh, Jennett, Blandford, Brohan, & Cox (2014), and were visited by Andrea Wiggins, the author of the first paper. This class was a lot of fun because, prior to talking about the readings, we spent an hour walking around the UMD campus looking for birds and collecting observations with the eBird mobile app.

Along the way we chatted about how her dissertation research used an in-depth case study of eBird (a project from the Cornell Lab of Ornithology), in which she did a great deal of participant observation. I was particularly struck by how important the knowledge she gained about birding, and the relationships she developed as part of this work, were to her dissertation as well as her academic career. Although she works in the field of information science, some of her best-known work has been with ecologists she was put in touch with through this birding observation. Andrea stressed how important it is for her research to be put to use in the world, whether that means creating applications like eBird or influencing policy. This seems like a deep lesson for discovering and building a meaningful research topic. Another thing that occurred to me as I was writing up these notes was how meta this part of her research was: observing the humans who were observing the birds.

We did spot a few birds on our walk, which you can see in Andrea’s eBird checklist. When we came back to the classroom we took a brief tour through the eBird website and looked at how the data is collected and made available. Andrea said that they had some initial difficulty drawing people to eBird, but this changed when they brought in some active birders to help design the application, who then helped spread the word about it. Perhaps there is a participatory design story that could be told, or that has been told. Now they are swimming in data, which they make available to the public as quarterly and annual snapshots. My only quibble with the datasets is that they carry their own peculiar license instead of a Creative Commons license, like CC-BY-NC-SA. Participation in the project is truly impressive; take a look in your area to see what birds have been observed. I found that a handful of people in my neighborhood had documented 104 species of birds, mostly in 2014 and 2015.

One additional topic that came up was the ethical considerations involved in making the data available on the Web. A lot of birders use their actual names, so in sharing observation data they are also providing information about their location at particular times. There are obvious privacy implications here that must be balanced against the birders’ desire to participate in a community of other active birders. Another consideration is rare bird sightings, which can draw increased numbers of people hoping to see the bird, potentially impacting its environment. eBird itself provides some guidance on these concerns. I suspect some of these issues will come up again in a few weeks when Katie Shilton visits our class to talk about values in design.

The papers provided a nice variety of views into the domain. Quinn & Bederson (2011) surveyed the landscape of human computation, which seems to have its genesis in the pioneering work of Luis von Ahn at Carnegie Mellon (who invented reCAPTCHA, which he later sold to Google). The paper is quite structured in its approach to what is in and out of scope for human computation, and provides a taxonomy, or rubric, for the field. It’s a nice article for situating ideas in the field of human computation. Wiggins & Crowston (2011) similarly provided a useful look at the relatively new field of citizen science, with particular attention to how the degree of virtuality and goal orientation can distinguish additional participatory types. It also seems to be one of the first papers to deliberately include purely virtual citizen science projects like GalaxyZoo.

The last paper, Eveleigh et al. (2014), was suggested by Jonathan, who led the discussion and is also working with Andrea on citizen science projects. I really enjoyed this paper because it took a deep dive into a user study of OldWeather. There is already a significant body of research on how crowdsourcing projects like Wikipedia tend to receive a large number of contributions from a small number of people. The general approach is that the more we understand about how these super-users behave, the better these systems can be built and sustained. There is a certain logic to that approach, but what hasn’t been explored as much is how the less active users behave, and how important they are to the health of the overall system. The long tail of small contributions is actually extremely important, and designing systems that allow for this level of engagement is under-developed.

The paper actually felt almost like two papers to me, since it was a mixed-methods study that first surveyed OldWeather users about their motivations (extrinsic and intrinsic) for participation, and then did a series of in-depth follow-on interviews to help identify the barriers to and constraints on participation for individuals in the long tail. In the classroom discussion I said that it felt like two papers to me, but on rereading pieces of it now I see that the two parts of the study were more connected than I initially recognized. The results of the survey were used to sample OldWeather users who had different motivations and participation patterns.

The findings were interesting, especially regarding the identified design patterns in OldWeather that helped encourage lower volume contributions:

  • Facilitate independent working and participant choice.
  • Optimize tasks to fit within busy lives.
  • Publicize scientific outcomes.
  • Sell citizen science snacks, not gourmet meals!
  • Enable personalized feedback to affirm quality.

There seems to be a lot of useful information here for building and testing new citizen-science and crowdsourcing projects. I know I have habitually thought of the expert user when designing user interfaces and applications. Focusing on the dabbler seems like an extremely valuable lesson. Even the dropout who no longer contributes, but enjoys getting project update emails and spreads the word about the project to friends and colleagues, is important. Now that I think about it, this was one of the underlying themes in Mauricio Giraldo’s talk about NYPL’s Building Inspector earlier this year at MITH.

I think his talk is largely about what it means to design for dabbling, and how important this activity is for building substantial engagement.

PS: the more I think about it, the more I like the model Andrea presented for using participant observation as a core part of the work I do in studying appraisal in Web archives. Finding the balance between observation, participation, and collaboration will be difficult, because I don’t want to maintain too much critical distance from the work.


Eveleigh, A., Jennett, C., Blandford, A., Brohan, P., & Cox, A. L. (2014). Designing for dabblers and deterring drop-outs in citizen science. In Proceedings of the 32nd annual ACM conference on human factors in computing systems (pp. 2985–2994). Association for Computing Machinery.

Quinn, A. J., & Bederson, B. B. (2011). Human computation: A survey and taxonomy of a growing field. In Proceedings of the SIGCHI conference on human factors in computing systems (pp. 1403–1412). Association for Computing Machinery.

Wiggins, A., & Crowston, K. (2011). From conservation to crowdsourcing: A typology of citizen science. In Proceedings of the 44th Hawaii International Conference on System Sciences (HICSS) (pp. 1–10). IEEE.