Why we love algorithms

I was lucky enough to be invited by my colleagues at Linköping University – Francis Lee and Lotta Björklund Larsen – to discuss algorithms and the algorithmic at Smådalarö Gård in the Swedish archipelago. Yes, I know, a tough job, especially the evenings, with sauna, beer, and the icy Baltic.

I’ve done some work on algorithms in recent years, notably the study of transplant allocation published in the new book Value Practices in the Life Sciences (Oxford University Press, 2015) and a paper on online dating with my colleague Shona Chillas. But in this workshop we were asked to bring some issues that we consider important, in order to open up further discussions about researching algorithms. I took the opportunity to share some ideas that I have been thinking about for a while, even if they’re still in a relatively incomplete form.

Here’s a filled-out version of my notes for a talk on ‘Why we love algorithms’ (including the examples which I left out for reasons of time – and still ran over the allotted 10 minutes!). My thanks to everyone involved in the workshop for comments and feedback; I’ve incorporated some of that below. Here we go:

‘I would like to start with some provocations – the things I find interesting or challenging about algorithms – then flesh them out with some examples, and finally talk through some possibilities for thinking about these problems.

Here are some of the things that trouble me about algorithms:

  1. First of all, that people get incredibly excited about algorithms. We hear all kinds of wild claims made about what algorithms and the algorithmic offer, some of which are convincing, and some of which are downright terrifying. Journalists and managers seem particularly susceptible. But,
  2. most of us don’t actually understand how algorithms work. In fact, it turns out that the programmers themselves often don’t know either: the point of machine learning is to let the computer teach itself, a much quicker process than coding rules by hand. Nonetheless,
  3. the expectations of knowledge placed upon algorithms are remarkable. For example, when comparing the market for my flight here I had a genuine expectation that every single permutation and possibility had been considered. I wasn’t expecting some satisficing ‘good enough’ result, but the whole deal. In addition to this omniscience, we place great reliance on
  4. the additional social and organizational expectations that are repeatedly woven into algorithmic processes.
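The gap in point 3 – between the omniscient, exhaustive comparison we imagine and the ‘good enough’ shortcut a pragmatic engine might actually take – can be sketched in a few lines of code. Everything here is invented for illustration: the airport codes, the leg prices, and both functions are hypothetical, not how any real fare engine works.

```python
# Toy prices for individual flight legs (airports and fares are made up).
LEGS = {
    ("EDI", "ARN"): 180,  # direct, but pricey
    ("EDI", "CPH"): 90,
    ("CPH", "ARN"): 60,
    ("EDI", "AMS"): 70,
    ("AMS", "ARN"): 95,
}

def omniscient_cheapest(origin, dest, stops):
    """What we imagine the engine does: weigh every permutation."""
    candidates = []
    if (origin, dest) in LEGS:
        candidates.append(LEGS[(origin, dest)])
    for stop in stops:
        if (origin, stop) in LEGS and (stop, dest) in LEGS:
            candidates.append(LEGS[(origin, stop)] + LEGS[(stop, dest)])
    return min(candidates)

def satisficing_fare(origin, dest, budget):
    """What a satisficer does: take the first fare that is 'good enough'."""
    direct = LEGS.get((origin, dest))
    if direct is not None and direct <= budget:
        return direct
    return None
```

On this toy data the satisficer happily returns the 180 direct fare, while exhaustive comparison finds the 150 routing via Copenhagen – the difference between ‘good enough’ and the whole deal.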

In other words, we expect rather a lot of algos. Let me offer some examples, to make these points a bit clearer.

Cyber currencies like bitcoin promise a techno-libertarian utopia: freedom, anonymity, sound money, the expansion of financial services to the global unbanked, and an algorithmic, transparent public ledger with the potential to supersede all other kinds of public record and put lawyers and all sorts of other middlemen out of a job.

The matching algorithms for transplant organs that I describe in Value Practices in the Life Sciences make numerous promises: we trust them to offer the best match – in terms of scientific outcomes – of all patients in the database, and to promise fairness and equality of access for all patients. We expect them to be transparent while retaining the ability to pursue certain political demands in terms of managing resources and coping with waiting lists. Note, by the way, that in doing so they simultaneously make different, conflicting promises to different publics.

Online dating algorithms promise at the very least an optimising match of habits and preferences, and at most a match of characteristics so perfect that you can spend the rest of your days with this individual. More pithily, they promise love. It’s true, when I have chatted to users, that they react with a certain scepticism, dismissing such grand claims and placing emphasis on fun and meeting new people. But equally, the willingness of users to go on dates in the first place, and their repeated disappointment when yet another potential partner turns out to be a scoundrel and a hound, show that they can’t help trusting the algorithm.

You’ll notice my repeated use of the word promise, as well as themes such as optimism and expectation. I even mentioned trust. Is it possible to trust an algorithm?

Katherine Hawley is a philosopher who worries about such things for a living. She has developed a theory of trust based on the ability to offer commitments. This, she argues, is what distinguishes trust from reliance: we rely upon our car to start and we may be irritated and inconvenienced when it doesn’t, but we are not let down, or betrayed, in the way we would be by a breach of trust. Unlike our friend who promised to be there on time, the car makes no commitment to us.

This sense of reliance may be built up by repeated practice. My car always starts, and so I can rely on it to do so, even on frosty mornings. Any student of economic exchange knows that trust can be worked into economic transactions by the offering of commitments on the part of, for example, a manufacturer: my new Mercedes (if only!) can be trusted to start, because it is a Mercedes.

It seems to me that our expectations of algorithms go way beyond reliance: we trust online algorithms on sight. We trust algorithmic devices as part of complex sociotechnical arrangements which also incorporate branding, reputation, intellectual capital and so forth. We trust the organ allocation algorithm because of the institution of the NHS and because we can see, should we choose to, that it has been built by qualified medical experts. We trust the dating website in the same way that we trust many other manufacturers and vendors of services: because they have spent large sums of money on adverts, warranting the credibility of the algorithm. So perhaps we can explore the way that warranties are worked into algorithms by individuals, firms and organizations.

More interesting still is the possibility that in an age of machine learning and artificial intelligence the algorithm is able to make commitments on its own account. If the programmer can’t understand what’s in the box because the machine has taught itself, perhaps the machine itself should be responsible for its outputs. Who is accountable for the algorithm’s actions, and by what standards should it be judged? Donald MacKenzie pointed out that many of the disputes over fairness in high frequency trading stem from the transposition of a human moral order – the queue – into the algorithmic world. Should algos play by our rules? We didn’t decide.

That’s a big idea to think about, I know, but not much stranger in the end than some of the examples presented at the workshop. Nick Seaver spoke about his doctoral fieldwork: he’s been watching a programmer build a machine able to teach itself the difference between obscure sub-genres of heavy metal. (Nick’s quick quiz: here’s Swedish djent legend Meshuggah and here’s metalcore trailblazer Earth Crisis. Can you tell the difference? Me neither, but the machine can. Imagine what Pierre Bourdieu would have said to that!)
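For flavour, here is a cartoon of what such a genre-learning machine might do, stripped to the bone. The features (tempo, plus a made-up ‘rhythmic complexity’ score) and all the training values are invented; a real system would learn from audio signals, not two numbers. This is only a nearest-centroid sketch, not Nick’s informant’s actual method.

```python
# Invented training data: each track reduced to (tempo, rhythmic_complexity).
TRAIN = {
    "djent":     [(120, 0.90), (115, 0.85), (125, 0.95)],
    "metalcore": [(180, 0.40), (170, 0.35), (175, 0.45)],
}

def centroid(points):
    """Average feature profile of a labelled genre."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(2))

def classify(track):
    """Assign a new track to the genre with the nearest average profile."""
    centroids = {genre: centroid(pts) for genre, pts in TRAIN.items()}

    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    return min(centroids, key=lambda g: sq_dist(track, centroids[g]))
```

The point of the caricature: nobody hand-coded a rule saying ‘djent is slower and knottier’; the boundary simply falls out of the labelled examples, which is why even the programmer can struggle to say what the machine has learned.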

However, there is one snag in this argument. Reliability and trust are predicated on performance. As Hawley makes clear, repeated breaches will cause us to stop trusting. If, on the algorithm’s advice, I arrive at the mathcore gig in a deathgrind[i] T-shirt, well, how can I ever trust it again? But we still seem to trust the machine despite our everyday experience of algorithms not doing what they promise: matching us with scoundrels, over and over again.

I wonder if we are mistaken in thinking about algorithms in terms of the formal-rational, bureaucratic tradition. Perhaps we should be thinking in a different direction. I suggest that we are enchanted and delighted by algorithms. We are in love with them. Bitcoin seems to me the exemplar: politics, mysticism, utopian visions, all sorts of things, are woven into the blockchain. Caricaturing Weber, bureaucracy is boring but it works. Algorithms are not boring, but they don’t necessarily work either; they are temperamental, high maintenance, spiteful and problematic. They have personalities. They require tending, nurturing and attention.

There is another intellectual tradition that might make better sense of the algorithm. It runs something like this: Smith and the Scottish Enlightenment – a secular appropriation of providential conceptions of natural order, where the market is the mirror of nature and therefore of the divine. Townsend, Malthus, Darwin – life on Earth as driven by some form of search and selection mechanism, optimising under constraints of scarcity. The process was cast as a specifically individual, economic problem by Herbert Spencer, who coined the phrase ‘survival of the fittest’. From Spencer we can move to Hayek, with his catallaxy, a spontaneous order driven by what begins to look like an algorithmic process. Then the contemporary philosopher Daniel Dennett takes the final leap, recasting evolution as an algorithmic process and life on Earth as the ultimate algorithmic computer.
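The ‘search and selection under scarcity’ picture that runs from Malthus through to Dennett can itself be caricatured in a few lines of code. Everything here is illustrative: an arbitrary numeric ‘niche’ stands in for fitness, and no claim is made to model real evolution – the point is only that variation plus selection under a fixed carrying capacity is, mechanically, an optimising search.

```python
import random

random.seed(0)  # deterministic, purely for the sake of the illustration

TARGET = 42  # an arbitrary 'niche' the population is selected towards

def fitness(x):
    """Closer to the niche is fitter."""
    return -abs(x - TARGET)

def evolve(generations=300, pop_size=10):
    """Variation plus selection under scarcity: a caricature of the tradition."""
    population = [random.randint(0, 100) for _ in range(pop_size)]
    for _ in range(generations):
        # Selection under scarcity: only the fitter half survives.
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        # Variation: each survivor leaves a slightly mutated descendant.
        children = [s + random.choice([-1, 1]) for s in survivors]
        population = survivors + children
    return max(population, key=fitness)
```

Run it and the population creeps towards the niche without anyone having told it where to go – the ‘spontaneous order’ that Hayek saw in the catallaxy and Dennett generalised to life itself.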

This tradition is all about knowledge, and in particular the omniscience of algorithmic processes (forgiving an anachronistic use of the term) compared with the limited knowledge available to individuals. Hayek and the Austrian economic tradition stressed the calculative limits of central planners compared to those of the market. But now algorithmic economics seems much more confident in its calculative abilities – extending even to the possibility of calculating all knowledge, Google’s raison d’être.

I suggest that there’s a slippage of scale here. On the one hand, there’s the actual computational process, small and straightforward; on the other, there’s the algorithm as a sublime object: rootless, inchoate, but importantly omniscient; a placeholder in which all kinds of expectations, problems and politics can be placed, and around which communities of practice and discourse can gather. Of course, the two are knotted together in a recursive, performative loop, but there’s space in the dialectic for all manner of hopes and dreams.

Let me push even further. The philosopher Simon May has suggested that love stems from the sense of ‘ontological rootedness’ – that we love those people or things which give us a feeling of place in the world.[ii] Love of the divine is his exemplar: we must strive to love, but in the impossibility of such love being requited, we must be prepared for injustice, cruelty and abandonment.

One other such sublime object in contemporary political thought is the ‘market’: those who love the market – really – are prepared to suffer by the market. I propose that we can understand the algorithm as the heir to the market in contemporary economic thinking, and as the beneficiary of a quasi-theological tradition focusing on omniscience and ontological apartness. And I suggest that this positioning offers a useful means of thinking around the appeal, attraction, even fanaticism, surrounding algorithms and algorithmic arrangements.

[i] I was going to put a link up here, but the first video YouTube offered involved lots of footage of a decomposing corpse in a bath. Not what you or I need on a Saturday morning.

[ii] May, Simon. 2011. Love: A History. New Haven and London: Yale University Press.
