Human Nature and Conduct

Human Nature and Conduct by John Dewey
My rating: 5 of 5 stars

This book came recommended by Steven Jackson when he visited UMD last year. I’m a fan of Jackson’s work on repair, and was curious about how his ideas connected back to Dewey’s Human Nature and Conduct.

I’ve been slowly reading it, savoring each chapter on my bus rides to work since then. It’s a lovely & wise book. Some of the language puts you back into the 1920s, but the ideas are fresh and still so relevant. I’m not going to try to summarize it here; you may have noticed I’ve posted some quotes from it. Let’s just say it is a very hopeful book, and provides a very clear and yet generous view of the human enterprise.

I don’t know if I was imagining it, but I seemed to see a lot of parallels between it and some reading I’m doing about Buddhism. I noticed over at Wikipedia that Dewey spent some time in China and Japan just prior to delivering these lectures. So maybe it’s not so far-fetched a connection.

I checked it out of the library, but I need to buy a copy of my own so I can re-read it. You can find a copy at Internet Archive for your ebook reader too.


Method and Materials

Now it is a wholesome thing for any one to be made aware that thoughtless, self-centered action on his part exposes him to the indignation and dislike of others. There is no one who can be safely trusted to be exempt from immediate reactions of criticism, and there are few who do not need to be braced by occasional expressions of approval. But these influences are immensely overdone in comparison with the assistance that might be given by the influence of social judgments which operate without accompaniments of praise and blame; which enable an individual to see for himself what he is doing, and which put him in command of a method of analyzing the obscure and usually unavowed forces which move him to act. We need a permeation of judgments on conduct by the method and materials of a science of human nature. Without such enlightenment even the best-intentioned attempts at the moral guidance and improvement of others often eventuate in tragedies of misunderstanding and division, as is so often seen in the relations of parents and children.

Dewey (1957), p. 321.

Dewey, J. (1957). Human nature and conduct. New York: Modern Library. Retrieved from https://archive.org/details/humannatureandco011182mbp


Something Horrible

There is something horrible, something that makes one fear for civilization, in denunciations of class-differences and class struggles which proceed from a class in power, one that is seizing every means, even to a monopoly of moral ideals, to carry on its struggle for class-power.

Dewey (1957), p. 301.

Dewey, J. (1957). Human nature and conduct. New York: Modern Library. Retrieved from https://archive.org/details/humannatureandco011182mbp


Energies

Human nature exists and operates in an environment. And it is not “in” that environment as coins are in a box, but as a plant is in the sunlight and soil. It is of them, continuous with their energies, dependent upon their support, capable of increase only as it utilizes them, and as it gradually rebuilds from their crude indifference an environment genially civilized.

Dewey (1957), p. 296.

Dewey, J. (1957). Human nature and conduct. New York: Modern Library. Retrieved from https://archive.org/details/humannatureandco011182mbp


Tweets and Deletes

Archives are full of silences. Archivists try to surface these silences by making appraisal decisions about what to collect and what not to collect. Even after they are accessioned, records can be silenced by culling, weeding and purging. We do our best to document these activities, to leave a trail of these decisions, but they are inevitably deeply contingent. The context for the records and our decisions about them unravels endlessly.

At some point we must accept that the archival record is not perfect, and that it’s a bit of a miracle that it exists at all. But in all these cases it is the archivist who has agency: the deliberate or subliminal decisions that determine what comprises the archival record are enacted by an archivist. In addition, the record creator has agency in their decision to give their records to an archive.

Perhaps I’m over-simplifying a bit, but I think there is a curious new dynamic at play in social media archives, specifically archives of Twitter data. I wrote in a previous post about how Twitter’s Terms of Service prevent distribution of Twitter data retrieved from their API, but do allow for the distribution of Tweet IDs and relatively small amounts of derivative data (spreadsheets, etc.).

Tweet IDs can then be hydrated, or turned back into the raw original data, by going back to the Twitter API. If a tweet has been deleted you cannot get it back from the API. The net effect is a cleaning, or purging, of the archival record as it is made available on the Web. But the decision about what to purge is made by the record creator (the author of the tweet), or by Twitter itself in cases where tweets or users are deleted.
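Since twarc is the tool I use for this, here is a minimal sketch of the delete-detection step. The actual twarc calls are shown as comments because they require API credentials; the `id_str` field and the `hydrate` method are as in twarc, but the file name and credential variables are placeholders.

```python
# Given the IDs we asked for and the tweets the API actually returned,
# the deleted (or protected/suspended) tweets are just the difference.

def find_deleted(requested_ids, hydrated_tweets):
    """Return the requested IDs that did not come back, in order."""
    returned = {t["id_str"] for t in hydrated_tweets}
    return [i for i in requested_ids if i not in returned]

# With credentials in hand, hydration itself looks roughly like this:
#
#   from twarc import Twarc
#   t = Twarc(consumer_key, consumer_secret, access_token, access_token_secret)
#   ids = [line.strip() for line in open("jesuischarlie-ids.txt")]
#   hydrated = list(t.hydrate(ids))
#   deleted = find_deleted(ids, hydrated)

# A tiny illustration with fake data: tweet "2" has been deleted.
ids = ["1", "2", "3"]
tweets = [{"id_str": "1"}, {"id_str": "3"}]
assert find_deleted(ids, tweets) == ["2"]
```

Note that this tells you which tweets are gone, but not why: a tweet that fails to hydrate may have been deleted, or its account may have been protected or suspended.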

For example, let’s look at the collection of Twitter data that Nick Ruest has assembled in the wake of the attack on the offices of Charlie Hebdo earlier this year. Nick collected 13 million tweets mentioning four hashtags related to the attacks, for the period of January 9th to January 28th, 2015. He has made the tweet IDs available as a dataset for researchers to use (a separate file for each hashtag). I was interested in replicating the dataset for potential researchers at the University of Maryland, but also in seeing how many of the tweets had been deleted.

So on February 20th (42 days after Nick started his collecting) I began hydrating the IDs. It took 4 days for twarc to finish. When it did I counted up the number of tweets that I was able to retrieve. The results are somewhat interesting:

hashtag          archived tweets     hydrated     deletes   percent deleted
#JeSuisJuif               96,518       89,584       6,934             7.18%
#JeSuisAhmed             264,097      237,674      26,423            10.01%
#JeSuisCharlie         6,503,425    5,955,278     548,147             8.43%
#CharlieHebdo          7,104,253    6,554,231     550,022             7.74%
Total                 13,968,293   12,836,767   1,131,526             8.10%

It looks like 1.1 million tweets out of the 13.9 million tweet dataset have been deleted. That’s about 8.1%, and I suspect even more have been deleted by now. While the datasets themselves are significantly smaller, the percentages of deletes for #JeSuisAhmed and #JeSuisJuif seem quite a bit higher than for #JeSuisCharlie and #CharlieHebdo. Could it be that users were concerned about how their tweets would be interpreted by parties analyzing the data?
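The arithmetic behind these figures is simple enough to check mechanically: deletes are archived minus hydrated, and the percentage is deletes over archived. A couple of lines of Python reproduce the totals row:

```python
# Recompute a row of the table: deletes and percent deleted.

def delete_rate(archived, hydrated):
    deleted = archived - hydrated
    return deleted, 100.0 * deleted / archived

# The totals row: 13,968,293 archived, 12,836,767 hydrated.
deleted, pct = delete_rate(13_968_293, 12_836_767)
assert deleted == 1_131_526
assert round(pct, 2) == 8.10
```

The same function reproduces the per-hashtag rows, e.g. #JeSuisAhmed comes out at 10.01%.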

Of course, it’s very hard for me to say since I don’t have the deleted tweets. I don’t even know who sent them. A researcher interested in these questions would presumably need to travel to York University to work with the dataset. In a way this seems to be how archives usually work. But if you add the Web as a global, public access layer into the mix it complicates things a bit.


The Adventure of Experiment

Love of certainty is a demand for guarantees in advance of action. Ignoring the fact that truth can be bought only by the adventure of experiment, dogmatism turns truth into an insurance company. Fixed ends upon one side and fixed “principles” – that is authoritative rules – on the other, are props for a feeling of safety, the refuge of the timid, and the means by which the bold prey upon the timid.

Dewey (1957), p. 237.

Dewey, J. (1957). Human nature and conduct. New York: Modern Library. Retrieved from https://archive.org/details/humannatureandco011182mbp


twarc & Ferguson demo

Here’s a brief demo of what it looks like to use twarc on the command line to archive tweets mentioning Ferguson. I’ve been doing archiving around this topic off and on since August of last year, and happened to start it up again recently to collect the response to the Justice Department report.

I kind of glossed over getting your Twitter keys set up, which is a bit tedious. I have them set in environment variables for that demo, but you can pass them in on the command line now. I guess that could be another demo sometime. If you are interested, send me a tweet.


JavaScript and Archives

Tantek Çelik has some strong words about the use of JavaScript in Web publishing, specifically regarding its accessibility and longevity:

… in 10 years nothing you built today that depends on JS for the content will be available, visible, or archived anywhere on the web

It is a dire warning. It sounds and feels true. I am in the middle of writing a webapp that happens to use React, so Tantek’s words are particularly sobering.

And yet, consider for a moment how Twitter makes personal downloadable archives available. When you request your archive you eventually get a zip file. When you unzip it, you open an index.html file in your browser and are provided with a view of all the tweets you’ve ever sent.

If you take a look under the covers you’ll see it is actually a JavaScript application called Grailbird. If you have JavaScript turned on it looks something like this:

JavaScript On

If you have JavaScript turned off it looks something like this:

JavaScript Off

But remember, this is a static site. There is no server side piece. Everything is happening in your browser. You can disconnect from the Internet and, as long as your browser has JavaScript turned on, it is fully functional. (Well, the avatar URLs break, but that could be fixed.) You can search across your tweets. You can drill into particular time periods. You can view your account summary. It feels pretty durable. I could stash it away on a hard drive somewhere, and come back in 10 years and (assuming there are still web browsers with a working JavaScript runtime) I could still look at it, right?

So is Tantek right about JavaScript being at odds with preservation of Web content? I think he is, but I also think JavaScript can be used in the service of archiving, and that there are starting to be some options out there that make archiving JavaScript heavy websites possible.

The real problem that Tantek is talking about is when human readable content isn’t available in the HTML and is getting loaded dynamically from Web APIs using JavaScript. This started to get popular back in 2005 when Jesse James Garrett coined the term AJAX for building app-like web pages using asynchronous requests for XML, which is now mostly JSON. The scene has since exploded with all sorts of client side JavaScript frameworks for building web applications as opposed to web pages.

So if someone (e.g. the Internet Archive) comes along and tries to archive a URL, it will get the HTML and the associated images, stylesheets and JavaScript files that are referenced in that HTML. These will get saved just fine. But when the content is played back later (e.g. in the Wayback Machine) the JavaScript will run and try to talk to those external Web APIs to load content. If those APIs no longer exist, the content won’t load.
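One way to see the problem concretely is to check whether the text you care about is present in the raw HTML at all; if it isn’t, a crawl that saves only the HTML and its static assets cannot preserve it. Here is a rough sketch using only Python’s standard library; the two HTML strings are invented for illustration.

```python
from html.parser import HTMLParser

# Collect the human readable text that is present in the HTML itself,
# before any JavaScript has had a chance to run.
class TextExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.chunks = []

    def handle_data(self, data):
        self.chunks.append(data)

def static_text(html):
    parser = TextExtractor()
    parser.feed(html)
    return " ".join(parser.chunks)

# A server rendered page carries its content in the HTML ...
server_rendered = "<html><body><p>Je suis Charlie</p></body></html>"
assert "Je suis Charlie" in static_text(server_rendered)

# ... while an app shell ships an empty div and fetches content with JS,
# so a plain crawl of the HTML captures none of it.
js_shell = '<html><body><div id="app"></div><script src="app.js"></script></body></html>'
assert "Je suis Charlie" not in static_text(js_shell)
```

A check like this is roughly what a crawler sees: both pages archive “successfully,” but only the first one still means anything when played back without its APIs.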

One solution to this problem is for the web archiving process to execute the JavaScript and archive any of the dynamic content that was retrieved. This can be done using headless browsers like PhantomJS, and supposedly Google has started executing JavaScript. Like Tantek I’m dubious about how widely they execute it; I’ve had trouble getting Google to index a JavaScript heavy site that I’ve inherited at work. But even if the crawler does execute the JavaScript, user interactions can cause different content to load. So does the bot start clicking around in the application to get content to load? This is yet more work for an archiving bot to do, and could potentially result in write operations, which might not be great.

Another option is to change, or at least augment, the current web archiving paradigm by adding curator driven web archiving to the mix. The best examples I’ve seen of this are Ilya Kreymer’s work on pywb and pywb-recorder. Ilya is a former Internet Archive engineer, and is well aware of the limitations in the most common forms of web archiving today. pywb is a new player for web archives and pywb-recorder is a new recording environment. Both work in concert to let archivists interactively select web content that needs to be archived, and then play that content back. The best example of this is his demo service webrecorder.io, which composes pywb and pywb-recorder so that anyone can create a web archive of a highly dynamic website, download the WARC archive file, and then reupload it for playback.

The nice thing about Ilya’s work is that it is geared at archiving this JavaScript heavy content. Rhizome and the New Museum in New York City have started working with Ilya to use pywb to archive highly dynamic Web content. I think this represents a possible bright future for archives, where curators or archivists are more part of the equation, and where Web archives are more distributed, not just at the Internet Archive and some major national libraries. I think the work Genius is doing to annotate the Web, including archived versions of the Web, is in a similar space. It’s exciting times for Web archiving. You know, exciting if you happen to be an archivist and/or archiving things.

At any rate, getting back to Tantek’s point about JavaScript. If you are in the business of building archives on the Web, definitely think twice about using client side JavaScript frameworks. If you do, make sure your site degrades so that the majority of the content is still available. You want to make it easy for the Internet Archive to archive your content (lots of copies keeps stuff safe) and you want to make it easy for Google et al to index it, so people looking for your content can actually find it. Stanford University’s Web Archiving team has a super set of pages describing the archivability of websites. We can’t control how other people publish on the Web, but I think as archivists we have a responsibility to think about these issues as we create archives on the Web.


Facts are Mobile

To classify is, indeed, as useful as it is natural. The indefinite multitude of particular and changing events is met by the mind with acts of defining, inventorying and listing, reducing to common heads and tying up in bunches. But these acts like other intelligent acts are performed for a purpose, and the accomplishment of purpose is their only justification. Speaking generally, the purpose is to facilitate our dealing with unique individuals and changing events. When we assume that our clefts and bunches represent fixed separations and collections in rerum natura, we obstruct rather than aid our transactions with things. We are guilty of a presumption which nature promptly punishes. We are rendered incompetent to deal effectively with the delicacies and novelties of nature and life. Our thought is hard where facts are mobile; bunched and chunky where events are fluid, dissolving.

Dewey (1957), p. 131.

Dewey, J. (1957). Human nature and conduct. New York: Modern Library. Retrieved from https://archive.org/details/humannatureandco011182mbp