
SEASON 1 - EPISODE 1

All About Algorithms

Welcome to Algorithms 101. In this episode, Dr Hannah Fry explains how algorithms are powering our world, making decisions in justice, health, transportation and crime. We delve into what happens when they go wrong and why there will always be a need for common sense.


TRANSCRIPT

Eleanor: Technology: it's everywhere, and it's the future. But if there were an inbox and an outbox of life, engaging with tech, at least for me, would be in the box that reads 'too hard'.

But if engaging with tech is hard, ignoring it is impossible, especially if you're concerned about the world in which we live. So Technocurious is going to examine the biggest tech topics of the day, and we'll be speaking to some of the brightest minds working and thinking about technology.

This first episode was recorded live at the Office Group in King's Cross, London. In it, I spoke with Dr Hannah Fry about how algorithms, or if you prefer artificial intelligence, are being used to make some important, even life-changing decisions in criminal justice, health and sentencing.

What follows is an introduction to algorithms. And without giving anything away, I'll just tell you that I mistakenly awarded Hannah a promotion by calling her professor before we started recording...

 

Eleanor: Hannah, you're an assistant professor…

 

Hannah: ...associate professor. You just gave me a demotion.

Eleanor: It's just a sliding scale, isn't it? So, associate professor at University College London in the Mathematics of Cities, which sounds way cooler than just normal mathematics, doesn't it? You also host a very popular podcast called "The Curious Cases of Rutherford & Fry" with the wonderful Adam Rutherford, and you've fronted a number of documentaries for the BBC.

We're here because we're going to talk about your book Hello World, or as the subtitle goes: How to Be Human in the Age of the Machine. This book is all about algorithms...

Hannah: We are now living in a world which is increasingly dominated by algorithms. I've worked with data and algorithms for about the last decade in my day job as an associate professor. And I think in the last three or four years we've really seen this new trend of machines being able to compete with us, and often outperform us, in things that we thought were uniquely human abilities, things that only we were able to do. I just wanted to kick off with something that really demonstrates that fact.

I've got an example for you from the world of music, just to see if you can tell the difference between what a computer is capable of and what a human is capable of. I'm going to play you two pieces of music: both of them are chorales, and both of them were performed by a live orchestra. One was composed by the great baroque master Bach, and the other was composed by a computer in the style of Bach. I want to see if you can tell the difference.

[Audio: Option 1, followed by Option 2.]

[The audience voted between the two pieces.]

Hannah: Option 2 was written by a computer, namely Experiments in Musical Intelligence, by the composer David Cope. It's a very famous experiment that really demonstrates how capable machines are of replicating human levels of ability. The way that that computer mimicked Bach was by using an algorithm.

 

I am very aware that when people hear the word algorithm, it makes about 85% of them want to gouge out their own eyes. I mentioned that to someone at a tech conference, and they agreed with me, but added that it makes the other 15% of us mildly aroused, so I'll let you decide which of those camps you're in.

 

Just to be sure that we are all completely on the same page, I want to define properly what an algorithm is. It's just a series of logical steps that takes you from some input through to some output. A cake recipe, for instance, is an example of an algorithm: the input would be the ingredients, the logical steps are the recipe itself, and the output is the cake that you get right at the end.
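To put that same idea in code: here is a minimal, obviously toy sketch (not from the episode) in which the function's argument is the input, its body is the logical steps, and its return value is the output.

def bake_cake(ingredients):
    """Input: the ingredients. Logical steps: the recipe. Output: the cake."""
    batter = "batter of " + ", ".join(sorted(ingredients))      # step 1: mix everything together
    baked = batter + ", baked at 200C for 25 minutes"           # step 2: bake
    return "cake: " + baked                                     # step 3: the output

print(bake_cake({"flour": 200, "sugar": 150, "eggs": 2, "butter": 100}))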

Now, the way that David Cope's algorithm worked, the way that it composed that Bach music, is much like the kind of predictive text algorithm that you have on your phone. Rather than the text that you've typed into your phone over a number of years, its input is instead all the chorales that Bach ever composed, and the logical steps are very simple: you seed the algorithm by giving it a chord, and it tells you which chord is most likely to come next in Bach's original work. You repeat that process until eventually you have strung chords together into an original piece of music.
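As a loose sketch of that kind of predictive chord model (the toy corpus and chord names below are invented, and David Cope's actual system is far more elaborate), you can count which chord tends to follow which in the training data, then repeatedly pick a likely next chord:

import random
from collections import Counter, defaultdict

# Invented stand-in for the chorale corpus.
chorales = [
    ["C", "F", "G", "C"],
    ["C", "G", "Am", "F", "C"],
    ["Am", "F", "G", "C"],
]

# For each chord, count which chord follows it in the corpus.
transitions = defaultdict(Counter)
for chorale in chorales:
    for current, nxt in zip(chorale, chorale[1:]):
        transitions[current][nxt] += 1

def compose(seed, length=8):
    """Seed with a chord, then repeatedly pick a likely next chord."""
    piece = [seed]
    for _ in range(length - 1):
        options = transitions.get(piece[-1])
        if not options:
            break
        chords, counts = zip(*options.items())
        # choose in proportion to how often each chord followed in the corpus
        piece.append(random.choices(chords, weights=counts)[0])
    return piece

print(compose("C"))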

Now, it is very like the game where you seed your phone with some words, for example "I was born", and then keep pressing the middle button on the predictive text and allow it to complete your own autobiography. I've done my own here for you, and I'm going to show you just because it's amusing; you can get a sense of the type of thing that I write on my phone. It starts off fine, then gets weird towards the end... "I was born to be a good person and I would be happy to be with you a lot of people I know that you are not getting my emails and I don't have any time for that…" Just a little insight into my character...

If I play you back a little snippet of that Bach music, you can just about hear those very simple chord transitions going on in the background, and that's essentially what the algorithm is doing. The results, especially with that David Cope example, are very impressive: half of you were persuaded that it was the real master himself. But I do think that algorithms are still a very long way from being as good as humans at truly composing beautiful music.

 

But I think there are some areas in real everyday life where they probably are better than us already, and where we start to rely on them and defer to them more than our own judgement. A very good example of that is navigation: I think that this raises a very interesting point.

Let me tell you the story of some Japanese tourists who went on holiday to Brisbane, on the east coast of Australia. One day they decided to hire a car and go on a little road trip, from the place where they were staying to a very popular tourist destination. They popped it into their sat nav, and the sat nav said it was essentially just a straight line between the two, which is great. Small problem... there was actually a whopping great body of water between their origin and destination that they hadn't spotted. In fairness, these Japanese tourists weren't locals: perhaps they didn't have access to a map, perhaps they didn't speak English very well, so they didn't notice that the place they wanted to drive to was called North Stradbroke Island. But you would think that when it came to actually driving on water, they would know to overrule the sat nav. Apparently they didn't. I know, if you are listening to the podcast, that you can't see the photograph here, but it is worth googling, because eventually they quite literally drove into the sea and had to abandon their car. My favourite thing of all happened about half an hour later: a real ferry sailed past their abandoned hire car.

[Photograph: the abandoned hire car.]

I think we can all chuckle at the silliness of this story, but within it there is a moral about how much we are willing to put our trust in technology. And actually, I've come to believe that these Japanese tourists aren't alone: when it comes to placing blind faith in a machine, it's a mistake that all of us are capable of making.

So, one last story to round off my time and to illustrate the kind of ideas that I'm playing with in the book: take what happened in Idaho. Back in 2014, a group of 16 disabled residents of Idaho got some unexpectedly bad news. The Department of Health and Welfare in Idaho had just introduced a new algorithm, a new budget tool, that was going to calculate how much state support each of the residents was entitled to. These were people with severe disabilities who qualified for residential care but who had chosen to be cared for in their own homes, cared for within the community, and the money they received was really instrumental to them keeping their independence.

Every resident went into the department, sat down, and the budget tool calculated how much they were entitled to. Some of the residents, it turned out, were given more money than in previous years, thousands of dollars more, whereas others ended up with a deficit of tens of thousands of dollars, putting them at serious risk of being institutionalised. From the outside, no one could really make sense of what was going on: it looked like the machine was making these choices completely at random. But the problem was that it was kind of impossible to argue with the computer. So many people within the government trusted it that you couldn't really question its authority.

In the end, the residents had to bring a class action lawsuit against the department to insist that the budget tool was handed over for scrutiny. And when it was finally opened up, it was revealed that this swanky algorithm that held so much power over the residents wasn't some kind of sophisticated AI, or some beautifully crafted mathematical model. It was actually just an Excel spreadsheet, and quite a crappy Excel spreadsheet at that: it was covered in errors, with flaws all over the formulas. It had so many statistical flaws that the court would eventually rule it unconstitutional.

The point I really want to make here is this: thankfully, an awful lot of the algorithms that we have put in positions of authority aren't quite as flawed as this one. But we have these machines that are capable of doing the most remarkable things, and they are also going to be capable of making mistakes. If we put flawed machines in positions of power, we have to think very, very carefully about what happens when things go wrong, and we have to acknowledge that we can't always trust ourselves to know where the line is.

 

Eleanor: Thank you very much! I should say that an important element of Technocurious is that we challenge ourselves to have a go with technology ourselves. So this afternoon at home, I got a bit hands-on with an algorithm, and I've actually brought the outcome. It was a rule-based algorithm…

 

Hannah: A plate of biscuits!

 

Eleanor: A plate of techno-chip cookies (™). The algorithm behind these cookies came from the BBC Good Food website, and everyone can have a little taste of them later.

Actually, a rule-based algorithm is something I can understand, as you can see… It's the other algorithms that get a bit tricksy, and what I want to know is: is AI essentially just jumped-up statistics…?

 

Hannah: Haha, yes! To be honest, as it stands at the moment, I think we have had more of a revolution in statistics than we have had in intelligence, but that's not to say it will always be the case. I mean, there are some people attached to this building, and, given our position next to where Google and DeepMind are, people nearby, who are doing some really remarkable things to try to push that boundary forward. But I think that all the progress we've seen to date is really mostly about statistics.

I should say something about the difference between rule-based and non-rule-based, actually. The cookie example is the traditional type of algorithm, where you write a very clear list of instructions: "do this, do this, turn your oven on to 200 degrees, beat the butter", and so on. AI, or machine learning, takes a slightly different approach: you could think of it in terms of training a dog how to sit. When you're training a dog how to sit, you don't write out a list of instructions for that dog. You don't say "move this muscle" and then "move that muscle" and so on. And you don't make the dog watch hours of YouTube videos of other dogs sitting. What you do is clearly communicate to the dog what your objective is, what you want it to do, and then reward it whenever it does that behaviour. Everything in the middle, everything between you communicating the task and rewarding it when it gets it right, you let the dog work out for itself. That is much more the approach of machine learning. But up till now, the things these systems have been doing have been quite statistical.
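To make that contrast concrete, here is a small illustrative sketch in Python: a rule-based version that spells out every step, next to a reward-based loop that is only told what success looks like and learns the rest by trial and error. The actions, rewards and learning rate are all invented for illustration.

import random

# 1) Rule-based, like the cookie recipe: every step is written out explicitly.
def rule_based_sit():
    return ["bend hind legs", "lower rear", "hold position"]

# 2) Reward-based: we only define what success looks like, then let trial and
#    error do the rest.
ACTIONS = ["bark", "roll_over", "run_off", "sit"]
value = {a: 0.0 for a in ACTIONS}              # how promising each action looks so far

def reward(action):
    return 1.0 if action == "sit" else 0.0      # the objective we communicate

for _ in range(200):
    # mostly repeat what currently looks best, occasionally explore something new
    if random.random() < 0.1:
        action = random.choice(ACTIONS)
    else:
        action = max(value, key=value.get)
    value[action] += 0.1 * (reward(action) - value[action])   # learn from the reward

print(rule_based_sit())
print("learned action:", max(value, key=value.get))           # almost always "sit"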

Eleanor: And tell me, when a machine learns, it can take unexpected turns right? Ones that we don't understand... Is that something we should worry about?

 

Hannah: Well, to give you an example: there was one case, I think it was a physics simulator, where you had a sort of simulated spider. Imagine you're in a quite crappy computer game. They had this spider, and they wanted it to move from one side of the screen to the other, all within a completely simulated environment, and the rules were that it was only allowed to use two legs at a time, or something like that. I think they were trying to encourage it to stand up and walk. But quite often, when you're not explicit about the rules of these things, your machine learning algorithm, your AI, can end up being a bit naughty; it can end up cheating and surprising you. This one in particular flipped over onto its back and then skidded along the floor, using two legs at any one time to push itself, rather than doing what the designers had expected. You do see this quite often: things end up finding some strange route that you weren't intending.

Should we be worried about that? I mean, if you had creatures with human-level intelligence that had power enough to control all kinds of different things, then I think there's an argument that you would be worried about that, but I think we're quite a long way away from that right now.

 

Eleanor:  There's no point worrying about evil AI…

 

Hannah: For me personally, I am not worried about evil AI. There's a computer scientist called Andrew Ng who has this great phrase which I really like: "Worrying about evil AI is a bit like worrying about overcrowding on Mars." It's so far into the future, and there are so many steps to get there first; it's like thinking a century ahead. Not everyone agrees with him, admittedly, but I think I'm on his side. There are much more pressing concerns about, you know, our relationship with machines, the bias that they have, and the tangled web of complications that comes in when you try to shift traditional human structures into taking these machines into account. That, for me, is a far more pressing concern.

 

Eleanor: I think there's an awful lot of misapprehension about AI and algorithms and tech, so let's move away from that, and let's talk about the best ones: I mean what are algorithms really best at doing?

Hannah: Best at doing…? Ok, they are good at a lot of stuff. They're very good at a lot of stuff. One of the things that algorithms are incredibly good at is picking up on very, very, very tiny clues, and looking way into the future as to what those clues might mean.

So, an example of the kind of thing I'm talking about is something called the Nun Study. It's called the nun study because it involved 678 nuns, and it was work done by the epidemiologist David Snowdon. These nuns were aged between about 75 and, I think, 101 years old at the start of the study, and David Snowdon persuaded all of them to let him test their cognitive ability every year for the rest of their lives. So you have this beautiful data set which shows the kind of decline that people have as they get older: how some people get dementia very early on, how others manage to maintain cognitive ability into old age. But David Snowdon also persuaded these nuns to donate their brains to the project after their deaths. That meant you could compare the symptoms they had in life with the physical signs of dementia in their brains after death. The unusual thing they spotted is that there is not a direct correspondence between the two: it's not true that the people who suffered from dementia in life are the ones with the most damaged brains in death. There was one woman in particular, Sister Mary, who maintained excellent cognitive ability until she died aged 101. But when they opened up her brain, the lesions, all the marks of dementia, looked no different from those of someone who had lost cognitive function.

So the question is: what is it about some people that lets them keep their cognitive strength into later age? And the clue, it turns out, may actually lie in a different data set altogether, from decades and decades before any of these women ever showed any sign of the disease. The project also happened to have the essays these women wrote when they entered the sisterhood, when they were 19 and 20 years old. What you can do is write a very, very simple algorithm that analyses the complexity of the language in those essays, and you can then find statistical connections between that and the chances of maintaining good cognitive strength into older age.
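As a rough illustration of what such a language-complexity algorithm might look like (the real study used measures such as idea density; the crude proxy and the example snippets below are invented):

import re

def complexity_score(essay):
    """A crude proxy for linguistic complexity: average sentence length
    weighted by vocabulary richness (distinct words / total words)."""
    sentences = [s for s in re.split(r"[.!?]+", essay) if s.strip()]
    words = re.findall(r"[a-zA-Z']+", essay.lower())
    if not sentences or not words:
        return 0.0
    avg_sentence_length = len(words) / len(sentences)
    vocabulary_richness = len(set(words)) / len(words)
    return avg_sentence_length * vocabulary_richness

plain = "I was born in 1913. I have two brothers. I like to teach."
dense = ("After I was born in 1913, the eldest of eight children, my family's "
         "fortunes shifted, and I resolved, despite every obstacle, to teach.")
print(complexity_score(plain), complexity_score(dense))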

Across medicine we are seeing this pattern repeated: algorithms are being trained not just to find tumours, but to predict the survival rates of, say, women with breast cancer. They are not only good at finding the tumours; they can also pick up on tiny, tiny clues in the surrounding tissue that give you a really good indication of how serious that cancer will go on to be. And that is the stuff that's really important.

 

Eleanor: You also devote a whole chapter of the book to how algorithms are used in justice. They are particularly used for calculating the risk of reoffending, which can then influence judges' decisions about whether or not to grant bail, prison terms, and so on. If you were in the dock, who would you prefer to be judging you, a human or a machine?

 

Hannah: I think the answer to this question changes depending on whether you're the person in the dock or whether you're designing the criminal justice system for the whole country.

Because the fact is that there is much greater variation in what a human judge will do. If you just think of something very simple like sentencing: the variation you can get from different human judges is massive, whereas an algorithm will give you the same answer every time with the same data.

Now, a lot of people I speak to say that they would prefer a human judge because, you know, humans have empathy, they have emotion, but also because they seem to think the error will always work in their favour. Everyone believes that they would be the one to persuade the judge to be lenient. And I do understand that position, I would probably feel the same, but then I'm fortunate enough to be a member of a group that's not traditionally discriminated against. If you are designing a criminal justice system for the country as a whole, your top priorities should be that your judgements are as logical, as consistent and as unbiased as possible. And in that respect, I actually think that the algorithms, though they are definitely not perfect, offer a step in the right direction.

 

Eleanor: I can understand that on the whole, a machine will be much more consistent than the irrational, imperfect human. But what happens when it goes wrong?

Hannah: Bad stuff... The fact is, if you're building an algorithm to decide who poses a high risk to society, the only way you can base that decision within the algorithm is by looking historically at all the people who've already passed through the criminal justice system. Within all of that data, there is a tangled web of, you know, centuries of bias and discrimination that is very, very difficult to disentangle yourself from. There have been some stories in the press about racial bias, and the extent of that racial bias, within these algorithms.

It's a little bit like the analogy I was trying to give: if you do a Google image search for 'mathematician' or 'maths professor', I guess you can all imagine what you'll see... It's almost exclusively white men; I think there are two women in the top 20, and I think one non-white face. And the thing is, that image is actually depressingly accurate, you know... 94% of maths professors in the UK are indeed male.

Sometimes we don't actually want technology to be a mirror; we don't want it to reflect the society we live in back at us. Sometimes I think there's an argument that you want technology to be there to nudge you in the direction that you want your society to move in. There is a choice here. If they wanted to, and I'm not suggesting that they should, Google could decide they weren't happy with this image search being a mirror, and could prioritise images of female professors or non-white professors over those of white males. And it's the same thing in the criminal justice system: you can decide that simply repeating the injustices of the past, all the statistics that you've seen in the past, and projecting those forward into the future isn't what you want to do, and instead nudge it in the direction that you want it to go in…

 

Eleanor: Is that not what the CEO of Twitter, Jack Dorsey, was trying to do with his algorithm…? I'm not sure if you're aware, but Trump sent some of his wonderful tweets in August accusing Twitter's algorithms of having a left-wing bias…

 

Hannah: To be honest, I tend to try to stay away from Trump on Twitter for the sake of my mental health…!

Eleanor: Fair enough, but just to make the point: who, then, is the arbiter of algorithms?

 

Hannah: Well, that is an enormous question! Because at the moment, up until now, there has been no arbiter. You have private companies making massive decisions that impact everything from our justice systems to our healthcare systems, and they're making them in closed boardrooms. I think these are the kind of things that should be a matter of public debate, the kind of things that should be debated in Westminster and in newspapers, not behind the closed doors of private companies.

 

Eleanor: Tell me something else. I want to talk about data and about nudging, and behavioral economics. Tell me how algorithms are being used in advertising…

Hannah: Oh gosh... OK, so the first people to recognise the value in our data were actually the supermarkets, and Tesco in particular were leaders in this field. Back in 1993 they launched their Clubcard, which a lot of people credit as the reason they overtook Sainsbury's as the most popular supermarket in the United Kingdom.

You know, I think we're all quite comfortable with, or at least aware of, the fact that when you use something like a loyalty card, the supermarket will know what you buy and will send you coupons that try to get you to buy more of the stuff you already buy. I think what we're not totally aware of is how much can be inferred just from what's in our shopping baskets.

There was a very famous story from America about a shop called Target, kind of like Woolworths I guess; you can buy everything in there. They don't have a loyalty card, but they can work out who you are based on your credit card, and on when you fill in forms and that kind of thing. So they have a database of their customers, and they know when those customers spend money in their shops. They brought in a new statistician who started doing some analysis, and he worked out something quite clever: if you have a female customer who suddenly starts buying loads of unscented body lotion (which, by the way, is for stretch marks), the chances are that if you scroll back in time by about three months, she will have started buying vitamins too. Scroll back a little further, and she will have stopped buying alcohol. And if you roll forwards in time (this woman is, essentially, pregnant), you can even predict when she will give birth, based on when she starts buying cotton wool.

So what they started doing, for all of their female customers, was running something called the pregnancy predictor: a statistical model that ran in the background, and when you bought enough of certain products, or your pattern of behaviour crossed a certain threshold, it would flag you as pregnant, or likely to be pregnant, and send you some coupons in the post. Now, up until this point it's a little bit creepy, but it's not terrible. Except that in Minneapolis one day, the father of a teenage girl walked into a Target store, absolutely furious that his teenage daughter had received these pregnancy coupons in the post. He was saying: "You're normalising teenage pregnancy, this is abhorrent, why are you treating her this way? It's just really awful." They apologised, and he went home. The next day the area manager called his home to apologise again, by which point the father basically said: "Actually, I think I owe you an apology. There were some things happening in my house that I wasn't aware of…"
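The mechanism being described is essentially a score over recent purchases that trips a flag once it crosses a threshold. A toy sketch of that idea, with entirely invented products, weights and threshold rather than anything Target actually used:

# Invented weights, products and threshold; only the shape of the idea is real.
PREGNANCY_WEIGHTS = {
    "unscented_body_lotion": 0.4,
    "prenatal_vitamins": 0.5,
    "cotton_wool": 0.2,
    "alcohol": -0.6,             # buying alcohol pushes the score down
}

def pregnancy_score(recent_purchases):
    return sum(PREGNANCY_WEIGHTS.get(item, 0.0) for item in recent_purchases)

basket = ["unscented_body_lotion", "prenatal_vitamins", "bread", "cotton_wool"]
if pregnancy_score(basket) > 0.8:    # arbitrary threshold
    print("flag customer: send baby-related coupons")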

I think we've essentially got to the point where a supermarket algorithm, trying to advertise to you, tells your dad about your pregnancy before you've even had a chance to. I think that's really very far over the creepy line.

And that was a decade ago; things have moved on since then in really, really creepy ways. There are these data brokers who have files on essentially every single one of us, and the things they infer are eye-watering... Standard things like our age, our name, our gender, our net worth, but also really personal stuff like our sexuality (not just our declared sexuality, but our true sexuality as well), whether your parents were divorced when you were young, whether you've had an abortion, whether you've had a miscarriage, whether you've used drugs... All of these things! There are files on essentially all of us, on a server somewhere, where people have calculated all of this stuff.

 

Hannah: I think that a lot of the bias we are seeing at the moment doesn't necessarily imply that people are being, you know, racist, or deliberately trying to discriminate against a socio-economic group, or any of those things. I think sometimes it's just genuinely silly omissions.

There was one example of a Taiwanese-American woman who had a Nikon camera. She would take photos of herself and her family, and the camera would keep flashing up a message saying: "Did someone blink?". She responded to this by writing a blog post where she said: "No, I didn't blink, I'm just Asian". It's a real indication of the lack of diversity in the design process that you can have such a glaring omission. And, you know, with these kinds of algorithms, particularly facial recognition algorithms, there is nothing about them that makes them inherently work better on Caucasian faces than any others; in fact, the people leading the world on facial recognition are the Chinese. And yet facial recognition algorithms (things like the funny face filters on Facebook or Snapchat) often don't work on faces with darker skin, and it's just because the people designing them don't have that diversity in their teams and are making these omissions.

As to how you counter it, I think it has to be, absolutely has to be, an independent process. It cannot be the same people who are designing these systems. I like to think that the companies we are describing here are pushing for diversity as much as possible in the process; I think it is on the agenda, and I think they are trying to do that. But this has to be something that happens independently.

What I would really like to see is an FDA-type body, but for algorithms, where a piece of software is tested for biases and those biases are then made very clear. Like when you get a prescription and it says "side effects may include blah blah blah". I'd like to see algorithms come with those sorts of warnings.

 

IBM's Watson, when it beat "Jeopardy!", you know about this? It played this American game show called "Jeopardy!" and beat the reigning champion, and it was all very clever. But the really clever thing it did during that process was that, rather than just picking one answer and shouting it out, it actually gave three answers at the same time, with a confidence in each: a probability of how likely it thought each one was to be the answer. So when it was uncertain, you could see it. It wore its uncertainty very proudly, front and centre.

I think that is actually something we are moving away from in the mainstream. With voice assistants, we say: "Where is the nearest pub?", and it gives us one single answer. Maybe in that case it's not particularly important, but for things like facial recognition being used by police forces, or algorithms that predict whether someone will go on to commit a crime in the future, or cancer diagnosis algorithms, that's where it's really, really important that the algorithm is very honest about where the error might lie, very honest about the fact that it can never be perfect.
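A small sketch of that idea of wearing uncertainty up front: instead of returning one answer, a system can return its top candidates with confidences and decline to answer when its best guess is weak. The scores, threshold and answers below are invented for illustration, not IBM's actual system.

def answer_with_confidence(scores, top_k=3, threshold=0.5):
    """Return the best answer only if its confidence clears the threshold,
    plus the top candidates so the uncertainty stays visible."""
    total = sum(scores.values()) or 1.0
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    candidates = [(answer, score / total) for answer, score in ranked[:top_k]]
    best_answer, best_confidence = candidates[0]
    decision = best_answer if best_confidence >= threshold else "not confident enough to answer"
    return decision, candidates

decision, candidates = answer_with_confidence(
    {"Johannes Brahms": 2.1, "J. S. Bach": 6.3, "A computer": 1.2})
print(decision)       # the single answer a typical assistant would give
print(candidates)     # every candidate with its confidence, so the error is visible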

 

Eleanor: So that's Algorithms 101. Algorithms are transforming our world in countless positive ways, but there's a clear need for greater transparency and accountability when it comes to them. Maybe what we need is a sort of algorithm authority that can arbitrate internationally when algorithms go wrong.

 

[credits]

 

Thank you to Hannah Fry, the Office Group, Miller’s Gin and Printka.

Host: Eleanor O’Keeffe

Produced by: Pauline de Gourcuff & Daisy Leitch

Music: Matshidiso Mohajane

Design: Sam

Editorial Support: Dana Harman & Naomi Sheldon

Technocurious is supported by 5x15 Ltd and How to: Academy
