Monthly Archives: May 2015

Why we love algorithms

I was lucky enough to be invited by my colleagues at Linköping University – Francis Lee and Lotta Björklund Larsen – to discuss algorithms and the algorithmic, at Smådalarö Gård in the Swedish archipelago. Yes, I know, a tough job, especially the evenings, with sauna, beer, and the icy Baltic.

I’ve done some work on algorithms in recent years, notably the study of transplant allocation published in the new book Value Practices in the Life Sciences (Oxford University Press, 2015) and a paper with my colleague Shona Chillas on online dating. But in this workshop we were asked to bring some issues that we consider important, in order to open up further discussions about researching algorithms. I took the opportunity to share some ideas that I have been thinking about for a while, even if they’re still in a relatively incomplete form.

Here’s a filled-out version of my notes for a talk on ʻWhy we love algorithms’ (including the examples which I left out for reasons of time – and still ran over the allotted 10 minutes!). My thanks to everyone involved in the workshop for comments and feedback; I’ve incorporated some of that below. Here we go:

ʻI would like to start with some provocations – the things I find interesting or challenging about algorithms – then flesh out with some examples, and finally talk through some possibilities for thinking about these problems.

Here are some of the things that trouble me about algorithms:

  1. First of all, that people get incredibly excited about algorithms. We hear all kinds of wild claims made about what algorithms and the algorithmic offer, some of which are convincing, and some of which are downright terrifying. Journalists and managers seem particularly susceptible. But,
  2. most of us don’t actually understand how algorithms work. In fact, it turns out that the programmers themselves often don’t know how algorithms work: the purpose of machine learning is to let the computer teach itself, a much quicker process than coding by hand. Nonetheless,
  3. the expectations of knowledge placed upon algorithms are remarkable. For example, when searching the market for my flight here I had a genuine expectation that every single permutation and possibility had been considered. I wasn’t expecting some satisficing ʻgood enough’, but the whole deal. In addition to this omniscience, we place great reliance on
  4. the additional social and organizational expectations that are repeatedly woven into algorithmic processes.

In other words, we expect rather a lot of algos. Let me offer some examples, to make these points a bit clearer.

Cyber currencies like bitcoin promise a techno-libertarian utopia, freedom, anonymity, sound money, expansion of financial services to the global unbanked and an algorithmic, transparent public ledger with the potential to end all other kinds of public record, and put lawyers and all sorts of other middlemen out of a job.

The matching algorithms for transplant organs that I describe in Value Practices in the Life Sciences make numerous promises: we trust them to offer the best match – in terms of scientific outcomes – of all patients in the database, and to promise fairness and equality of access for all patients. We expect them to be transparent while retaining the ability to pursue certain political demands in terms of managing resources and coping with waiting lists. Note, by the way, that in doing so they simultaneously make different and conflicting promises to different publics.

Online dating algorithms promise at the very least an optimising match of habits and preferences, and at most a match of characteristics so perfect that you can spend the rest of your days with this individual. More pithily, they promise love. It’s true, when I have chatted to users, that they react with a certain scepticism, dismissing such grand claims and placing emphasis on fun and meeting new people. But equally, the willingness of users to go on dates in the first place, and their repeated disappointment when yet another potential partner turns out to be a scoundrel and a hound, show that they can’t help trusting the algorithm.

You’ll notice my repeated use of the word promise, as well as themes such as optimism and expectation. I even mentioned trust. Is it possible to trust an algorithm?

Katherine Hawley is a philosopher who worries about such things for a living. She has developed a theory of trust based on the ability to offer commitments. This, she argues, is what distinguishes trust from reliance: we rely upon our car to start and we may be irritated and inconvenienced when it doesn’t, but we are not let down, or betrayed, in the way we would be by a breach of trust. Unlike our friend who promised to be there on time, the car makes no commitment to us.

This sense of reliance may be built up by repeated practice. My car always starts, and so I can rely on it to do so, even on frosty mornings. Any student of economic exchange knows that trust can be worked into economic transactions by the offering of commitments on the part of, for example, a manufacturer: my new Mercedes (if only!) can be trusted to start, because it is a Mercedes.

It seems to me that our expectations of algorithms go way beyond reliance: we trust online algorithms on sight. We trust algorithmic devices as part of complex sociotechnical arrangements which also incorporate branding, reputation, intellectual capital and so forth. We trust the organ allocation algorithm because of the institution of the NHS and because we can see, should we choose to, that it has been built by qualified medical experts. We trust the dating website in the same way that we trust many other manufacturers and vendors of services: because they have spent large sums of money on adverts, warranting the credibility of the algorithm. So perhaps we can explore the way that warranties are worked into algorithms by individuals, firms and organizations.

More interesting still is the possibility that in an age of machine learning and artificial intelligence the algorithm is able to make commitments on its own account. If the programmer can’t understand what’s in the box because the machine has taught itself, perhaps the machine itself should be responsible for its outputs. Who is accountable for the algorithm’s actions, and by what standards should it be judged? Donald MacKenzie pointed out that many of the disputes over fairness in high frequency trading stem from the transposition of a human moral order – the queue – into the algorithmic world. Should algos play by our rules? We didn’t decide.

That’s a big idea to think about, I know, but not much stranger in the end than some of the examples presented at the workshop: Nick Seaver spoke about his doctoral fieldwork. He’s been watching a programmer build a machine able to teach itself the difference between obscure sub-genres of heavy metal. (Nick’s quick quiz: Here’s Swedish djent legend Meshuggah and here’s metalcore trailblazer Earth Crisis. Can you tell the difference? Me neither, but the machine can. Imagine what Pierre Bourdieu would have said to that!)
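For readers wondering what ʻteaching itself’ amounts to in practice, here’s a deliberately toy sketch – nothing to do with the actual system Nick studied, whose workings I don’t know, and with made-up word-count ʻfeatures’ rather than real audio data. The point is simply that the program is given labelled examples and derives its own genre profiles, rather than being handed rules:

```python
# Toy illustration of "the machine teaching itself": no genre rules are
# hand-coded; the program builds a word profile per genre from labelled
# examples. The descriptions below are invented, not real audio data.
from collections import Counter

training = [
    ("djent", "polyrhythm eight-string palm-mute syncopation"),
    ("djent", "polyrhythm low-tuned palm-mute groove"),
    ("metalcore", "breakdown hardcore shout melodic-chorus"),
    ("metalcore", "breakdown shout two-step hardcore"),
]

# "Learning": accumulate a word-count profile (a crude centroid) per genre.
profiles = {}
for genre, text in training:
    profiles.setdefault(genre, Counter()).update(text.split())

def classify(text):
    """Pick the genre whose learned profile overlaps most with the input."""
    words = text.split()
    return max(profiles, key=lambda g: sum(profiles[g][w] for w in words))

print(classify("palm-mute polyrhythm riff"))   # → djent
print(classify("hardcore breakdown shout"))    # → metalcore
```

Real systems use far richer features and far more data, but the shape is the same: the categories live in the learned profiles, not in any rule a programmer wrote down – which is exactly why the programmer can struggle to say how the thing works.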

However, there is one snag in this argument. Reliability and trust are predicated on performance. As Hawley makes clear, repeated breaches will cause us to stop trusting. If, on the algorithm’s advice, I arrive at the mathcore gig in a deathgrind[i] T, well, how can I ever trust it again? But we still seem to trust the machine despite our everyday experience of algorithms not doing what they promise: matching us with scoundrels, over and over again.

I wonder if we are mistaken in thinking about algorithms in terms of the formal rational, bureaucratic tradition. Perhaps we should be thinking in a different direction. I suggest that we are enchanted and delighted by algorithms. We are in love with them. Bitcoin seems to me the exemplar: politics, mysticism, utopian visions, all sorts of things, are woven into the blockchain. Caricaturing Weber, bureaucracy is boring but it works. Algorithms are not boring, but they don’t necessarily work either; they are temperamental, high maintenance, spiteful and problematic. They have personalities. They require tending, nurturing and attention.

There is another intellectual tradition that might make better sense of the algorithm. It runs something like this: Smith and the Scottish Enlightenment – a secular appropriation of providential conceptions of natural order, where the market is the mirror of nature and therefore the divine. Townsend, Malthus, Darwin – life on Earth as driven by some form of search and selection mechanism, optimising under constraints of scarcity. The process was cast as a specifically individual, economic problem by Herbert Spencer, who coined the phrase ‘survival of the fittest’. From Spencer we can move to Hayek, with his catallaxy, a spontaneous order driven by what are starting to look like algorithmic processes. Then contemporary philosopher Daniel Dennett takes the final leap, recasting evolution as algorithmic process and life on Earth as the ultimate algorithmic computer.

This tradition is all about knowledge, and in particular the omniscience of algorithmic processes (forgiving an anachronistic use of the term) compared with the limited knowledge available to individuals. The Hayekian and Austrian economic tradition stressed the calculative limits of central planners, compared to those of the market. But now algorithmic economics seems much more confident in its calculative abilities – even to the possibility of calculating all knowledge, Google’s raison d’être.

I suggest that there’s a slippage of scale here. On the one hand, there’s the actual computational process, small, straightforward; on the other hand, there’s the algorithm as a sublime object, rootless, inchoate, but importantly, omniscient; a placeholder in which all kinds of expectations, problems and politics can be placed, around which communities of practice and discourse can get together. Of course, the two are knotted together in a recursive, performative loop, but there’s space in the dialectic for all manner of hopes and dreams.

Let me push even further. The philosopher Simon May has suggested that love stems from the sense of ‘ontological rootedness’ – that we love those people or things which give us a feeling of place in the world.[ii] Love of the divine is his exemplar: we must strive to love, but in the impossibility of such love being requited, we must be prepared for injustice, cruelty and abandonment.

One other such sublime object in contemporary political thought is the ‘market’: those who love the market – really – are prepared to suffer by the market. I propose that we can understand the algorithm as the heir to the market in contemporary economic thinking, and as the beneficiary of a quasi-theological tradition focusing on omniscience and ontological apartness. And I suggest that this positioning offers a useful means of thinking around the appeal, attraction, even fanaticism, surrounding algorithms and algorithmic arrangements.

[i] I was going to put a link up here, but the first video YouTube offered involved lots of footage of a decomposing corpse in a bath. Not what you or I need on a Saturday morning.

[ii] May, Simon. 2011. Love: A History. New Haven and London: Yale University Press.

Filed under Writing

Mr Miliband not ‘leadership material’? That suits me fine

I’m not well qualified to write about politics, but I do have a professional interest in chief executives, and one thing is clear to me: you wouldn’t want a CEO running your country. Not a heroic corporate leader of the sort that business schools have been churning out for the last two decades. Nor, I think, a chief executive of the careful, bean-counting variety, very good at balancing the books; someone who talks shareholder value and tough choices as factories close and food banks fill up.

When someone says Ed Miliband isn’t leadership material, they really mean he isn’t CEO material. It’s true, he isn’t. He lost his footing as he walked off the stage at BBC Question Time. He eats a bacon sandwich in a funny way, and his face moves around when he’s thinking. In short, he seems a pretty ordinary chap.

You can’t imagine Richard Fuld, formerly of Lehman Brothers, tripping on some stairs. But then, Fuld went out of his way never to be seen on stairs at all. He lived a life of chauffeured limousines, personal jets and private elevators; you’d expect Fuld to catch a magic carpet rather than use his own legs.

When most people make mistakes, they slip, stumble, or struggle with a sandwich; at worst, a missed deadline or a bureaucratic error, something that can be put right easily enough. Chief executives, however, make mistakes of Olympian proportions. The chief executive slips up, and everyone pays, sometimes for decades. When Fuld’s testosterone-charged, corrupt Lehman Brothers imploded it very nearly took the global financial system with it.

Why should this be? How can a chief executive’s capacity for disaster be so exponentially greater than our own? Why doesn’t someone, somewhere in these globe-circling corporations waggle a finger and say, look, Sir Fred, if you do this much more you’ll be the most hated man in Britain for a decade?

Part of the problem, I think, lies in the ‘charismatic’ management style often seen in big corporations. By this account, chief executives really are something special. They’re godlike individuals, possessed of unique moral and intellectual skills, and personally responsible for every success of their corporation (though, remarkably, almost none of its failures). Management gurus peddle this kind of nonsense all the time; I’m not joking about the God stuff either – if you’re interested and have an hour and some to spare, here’s guru Ken Blanchard explaining how to ‘lead like Jesus’.

Or take the late Steve Jobs. No need to say more.

No one is allowed to question, or to puncture the chief executive’s bubble. The smallest misdemeanour can lead to tantrums and dismissals: remember no-longer-Sir Fred Goodwin and his pink wafer biscuit apoplexy? Jobs, by all accounts, couldn’t get in a lift without firing someone, and used to lob prototype iPods into his fish tank to show his cowering engineers they could be made still smaller (bubbles = space = room for improvement).

The very worst chief executives remind me of Roman politicians, who squirmed their way up the greasy pole of provincial administration until they reached the top and could squeeze whole countries for all they were worth. Here’s Jimmy Cayne, who worked his way up from traveling salesman to chief executive of giant investment bank Bear Stearns, and who was playing bridge in another state when his bank went under. There’s Verres, a corrupt governor so uninterested in the world outside his own pleasure palace that he knew it was spring only when the first bouquets of flowers arrived. Whether reported by the Wall Street Journal or Cicero, there’s not much difference.

But here’s the rub. We hold our national leaders to account on the very terms we should level at chief executives. While chief executives stuff themselves with gold until they burst, we harangue would-be prime ministers about balancing the books, or what they can offer GB plc. As the sociologist William Davies has pointed out, we have achieved an ‘ontological parity’ between chief executive and PM, puffing up business leaders as charismatic visionaries, and grinding down politicians through relentless business-style audit.

Pity Natalie Bennett, not knowing how she would fund the building of her half million starter homes. Who cares, she should have said. When the banks were in trouble, the Chancellor found a trillion pounds down the back of the sofa – I’m sure a few little houses won’t even be noticed. Doesn’t the fact that five hundred thousand families suddenly have somewhere to live, that they are no longer socially and economically excluded, seem in any way important to you, Mr Ferrari?

Chief executives, ever ready to weigh in on political debate, might be loath to admit it, but states and corporations are not the same thing. Corporations, in the end, exist to make and sell us things: lawnmowers, burgers, mobile phones and face cream. They pay taxes in Luxembourg or the Bahamas and hire the cheapest labour they can find on the worst possible contracts. And, as Lehman Brothers showed us, their capacity for inflicting misery is immense.

Nations, on the other hand, should serve and support their people. Their job is to guarantee law, arbitrate fairness, and permit free and flourishing lives for their citizens. Their leaders needn’t be charismatic, but should be decent, upstanding, and empathetic. A bit of humanity, vulnerability – even the occasional trip – goes a long way in showing they have the capacity to know what matters. Sure, Miliband isn’t CEO material. That suits me just fine.

Filed under Uncategorized