https://youtubetranscript.com/?v=jti77KQKYuc

So hello everybody, I would like to introduce you to Dr. Paula Boddington. Dr. Boddington is a moral philosopher who has taught at several schools, and she’s also a published author on questions of morality and several other issues; she dabbles in all kinds of subjects. I’ve seen her on television in the UK discussing the problems of AI, which is why I have her on our show today to talk about the narratives around AI and the ethics of AI, because I know it’s something that interests you, the people who watch this channel, quite a bit. Hi, I’m Jonathan Pageau. Welcome to The Symbolic World. So Dr. Paula, maybe you can start by introducing yourself, because I did what I could to introduce you, and tell us a little bit about how you moved into the field of AI and morality.

Hi, thanks very much. Well, I started working on issues about AI and ethics about five or six years ago, just by one of those chances, because I was working at one of the colleges in Oxford, and there were a couple of people there who were really interested in this. Around that time, in about 2014 or 2015, there was a lot of attention to the possibility of forthcoming disasters in relation to AI. People like Stephen Hawking and Elon Musk were speaking out, saying they were really worried about AI in the future. And then there’s an organization called the Future of Life Institute, which is based in Boston, and Elon Musk donated a whole lot of money for research projects looking at beneficial AI. We put in a proposal, and at the end of the year we were lucky enough to get one of their grants. So I was working on that for a couple of years. The topic we were looking at was developing codes of ethics for AI. It’s really interesting; there are now so many different codes of ethics for AI all over the world. But what I was really interested in was not so much developing a code of ethics, because what I’m really interested in is looking at the issues behind that: how you might even start to think about what the ethical issues are, and only then talking about codes of ethics. There are two layers to it, because you don’t just have to think about what the ethical issues are; you have to think about how you’re actually going to bring about good practice, because there are so many examples of organizations that have fantastic codes of ethics and it makes no difference whatsoever. One of my favorite examples is Enron, which, younger people might not remember, collapsed disastrously, taking the livelihoods of hundreds of thousands of people, and it had a fantastic code of ethics. There was also a really fantastic code of medical ethics, about giving dignity to the patient and informed consent, that was around in Germany before the Second World War, so before the Nazis. That’s always really, really important to remember. So that’s how I got into working on the ethics of AI, but prior to that I’d done quite a lot of other work in philosophy.
I’d previously worked with teams of genomic scientists looking at ethical issues in genomics research, and before that in clinical genetics. And before that I’d done a lot of work on how we think about what a person is, including some work with a psychologist on how people with learning disabilities are regarded. This is a really long introduction, but one of the things I’ve been doing for the last few years, at the same time as working on the ethics of AI, is working with a team of sociologists who do ethnographic work looking at the care of people with dementia in hospital wards. It’s such good luck to be working on those two different things at the same time, because they seem like they’re worlds apart, but it’s really valuable because of the contrast: people with dementia are so marginalized, and that’s related to ideas of a loss of cognitive capacity. On the other hand, you’ve got people working in AI who feel that intelligence and cognitive capacity are the pinnacle of human endeavor, something we should try to surpass. So, as I said, it’s a really long introduction.

No, but I think that right away in your introduction I can see the issue, or the question, because one of the realities of AI is that there’s a personification of AI. It just happens naturally, in terms of how we treat it, how we approach it. And it’s a long story, with robots and Isaac Asimov; it’s a very old story, not something that just happens right away. But we have this tendency to personify AI, and, like you said, there’s a tendency to depersonify people who have cognitive breakdown, because we have associated personhood with cognitive ability, with mental ability.

Yeah, precisely, and those are the issues I’m really interested in. When we think about AI, there are different kinds of narratives that we are told, not always consistent with each other, different narratives in different areas of AI. But they always go along with an explicit, or often implicit, narrative about humans, because humans and AI are seen in relation to each other. AI is sometimes being developed in order to replace humans, so you have to ask the question of why you would want to replace a human, and sometimes it’s for good reasons, actually. For example, one really good reason would be if you wanted to clear a field of landmines; so much better to use a robot. So you either want to replace humans, or you want to try to enhance humans. But if you want to enhance something, you have to have an idea about what would make it better. Not all changes are enhancements, are they? So you have to have some idea about what it is that humans are lacking. And so it always goes along with some idea about how humans stand in relation to the rest of the world, and some idea about what faults or imperfections humans might have that we can surpass.
It really does tend to reinforce that. There has been work in moral philosophy for quite a long time about notions of personhood, which in my personal view are too biased towards the cognitive, and the work in AI only reinforces that.

And there’s something in what you’re saying about the image of the Golem, this image of the artificial being, which popped up very strongly in the time of the Romantics, with Frankenstein and all of that kind of narrative. It’s very important to understand this mirror, because AI really does end up acting like a mirror. We end up projecting our politics, our morality, and, like you said, the most dangerous part of it is that it’s often implicit; we don’t actually know we’re doing it. So when Elon Musk says he wants to make human beings better by implanting a chip in their brain, no one ever asks what he’s talking about. What is this “better”? It usually just means more powerful, adding capacity; that seems to be what we’re talking about. And you know the famous Spider-Man quote: with great power comes great responsibility. Adding capacity is not always something you want to do.

Yes, it’s interesting. One of the things I was looking up this week, thinking about talking to you: if you Google it, you’ll find lots of people talking about AI dethroning humans, and it’s really quite interesting, the contexts in which that’s used, the idea that we’ve been knocked off our throne. It’s also really interesting because a lot of people are kind of welcoming that. A lot of people have a really simplistic notion of the religious view of humanity, where we’re the crown of creation, and say that Darwin has come along and knocked us off our pedestal, and that’s a really good thing because we shouldn’t lord it over the rest of creation. But if you look at the contexts in which people talk about AI dethroning humans, it’s often something like when AlphaGo beat Lee Sedol, the world Go champion, or something in chess, where a machine is beating humans at some intellectual task, and often where it’s really easy to work out who’s better, because there are criteria for whether you’ve won or not. It also made me think about the sorts of things we’re worried about being dethroned over, because, as far as I know, Arnold Schwarzenegger is not crying his eyes out because a forklift truck can lift heavier things than he can. And Usain Bolt isn’t worried that cheetahs can run faster than him. So it really does tell you a lot that we think being clever at certain things is dethroning us, while we’re not at all worried that a chimpanzee could rip you limb from limb because they’re so much stronger than us. We don’t care.

Some people seem to care about that. Joe Rogan seems to care about that a lot, all the time. But I think the idea you’re touching on comes from the fact that the West has really taken up the underdog story as being the story.
Right, we’ve taken up the revolutionary story as being the story itself, and so all our movies, all our stories since the Enlightenment have been revolutionary. Yes. And because of that, there’s this surprising thing, like you said, which appears in stories: the desire to have AI dethrone us, the desire for something to come and show us up, or even the cuckolding narrative, which has become strangely popular. All of this is a strange manifestation of this revolutionary tendency which turns against you. It’s like you’re revolutionary, but then you almost implicitly expect that something from underneath is going to come and take your place. And you see that in an old story, the story of the Greek gods, how they castrated their father.

So, one of the things that I’ve been saying on my channel here and there, but want to say clearly here, because I have some friends that are in computer science, very smart people, you know who I’m talking about: I have a friend who’s a computer scientist who thinks it actually happens, that AI becomes conscious. But you can’t know something’s conscious anyways. How can you know that? I can’t know that of you. I can’t completely know it; I can’t know it internally the way I know my own consciousness. I don’t know how you would even evaluate that.

No, I always wonder about people talking about AI becoming conscious, because you don’t know if another person is conscious. I remember when I was a philosophy undergraduate, I had to write an essay about the problem of other minds, and I spent a whole week really, really worried, because how do you know whether anybody else is conscious? That’s one of the things that’s really interesting about how we project these things onto AI, because a lot of people are saying that if AI or robots become conscious, then they’ve got rights and we should treat them in the same ways we treat humans. But even if they were conscious, you wouldn’t know; how would you tell? And how would you know what their consciousness was like? Because loads and loads of animals we assume are conscious, but we don’t think they have the same rights as human beings.
We don’t think that a dog’s got a right to vote because he’s conscious. So it’s really interesting how quickly so many people assume these things. It’s a really human tendency to extrapolate humanity onto things; you see faces in things all the time. And there’s a tendency to extrapolate humanity onto robots really quickly. Joanna Bryson, who works in robotics and now also works a lot in ethics, said she noticed that when she was working with robots, people would really worry about whether it was cruel to unplug the robot. So it’s really interesting what we project onto it.

And it’s not just a little interesting, it’s very interesting, because what’s mostly interesting about that to me is the notion of the making of a body. The idea that you can make a body to host intelligence is something which has existed forever. That’s what idols are. Statues of gods have always been the creation of bodies that would host intelligences, and the intelligence would manifest itself through a proxy: through the believers, the people sacrificing to that god through this image. And it seems to me we’re seeing something similar happen now: the intelligence of artificial intelligence is through us. We are the proxies; the artificial intelligence needs us in order to be intelligent. But that doesn’t mean it doesn’t act the way an ancient god would act, because we trust it the way the ancients trusted the gods. We trust Google, we trust Facebook, we trust these algorithms; we give them our trust and we all look to them. So it really does end up acting like an ancient god.

Yes, but it’s worse than that, because we’ve painted ourselves into a corner. We have to trust it, because everything is set up so that you have to use it. Even if you’re just walking down the street, it’s being used. So we’ve painted ourselves into a corner there. There’s also a lot of distrust as well. But it gets back to the question of whether your computer can have artificial general intelligence, which, who knows; others argue that it can’t. You always have to oversimplify to explain points, but you can divide concerns in the ethics of AI broadly into people who are looking at the future, at the possibility of superintelligence, something which might be maybe 25 years in the future or whatever; there are lots of debates about how long it would take, if it happens at all. Those worried about superintelligence come in two branches: some people really look forward to it and think this is really great, that our descendants will actually be digital, or that humans will become digital. One lot actually think that, and another lot are really worried about it and think we need to prepare.
So one argument is that maybe it won’t happen, but we need to be ready just in case it does, trying to make AI beneficial for humans, really worried that AI is going to take control. So there’s a worry that in the future it’ll take control. And then there are people who are concerned about what’s happening now, with the AI we have now and all the other computing which is maybe not quite AI.

But I think that’s a false dichotomy, because it’s already taken control. Yeah, it’s already taken control. And I think the best way to see it is still in the analogy of the ancient god or the ancient temple: it has taken control through a priestly caste, which is what would have happened in all the temples before. The will of the god manifests itself through the priestly caste, which informs the people and tells them to look to the god. The caste says, look to the statue, and then they explain its will. And I’m not even saying that’s bad; it’s actually how reality works. But you need to know who your priestly caste is. And I think that’s one of the things that, at least in certain spheres, is worrying people the most: these tech elites are definitely not a superior caste. They’re definitely not the people you want to entrust your civilization to.

No, no, no. I think what’s really intriguing me is the idea that the problem of the ethics of AI isn’t just about deciding what the code of AI is. It’s that there are certain patterns which are there, which will lay themselves out, and so you can say whatever you want. It’s almost as if you said, I’m going to create an ethics of weapon making, and my ethics of weapon making is that you can’t use these weapons to kill people. And I’m like, that’s really nice, but you’re making weapons, right? And it’s the same with AI. There’s an internal mechanism to what AI is doing that means you can’t just impose an ethical system on it. You can recognize the ethical possibilities of it, and you can try to implement them as best you can. But certain things, for example the near-infinite amount of power it can give certain aspects of reality, don’t care what ethical system you apply. Whatever ethical system you try to apply to something which has massive power is not going to hold.

Yeah, well, yes. There are so many different things you could say about that. Sorry, I’m struggling because there are so many different things to think about, but there are ways in which people are trying to make certain that the AI systems are as ethical as possible.
So there are loads and loads of people working on technical aspects of how we ensure privacy. There are big issues with privacy because of all the data that’s being collected all the time, and not just the data that’s collected, but the ways in which we can analyze it. You can extract stuff out of the data that you had no idea you could extract. For example, they can work out from your shopping habits whether or not you might be developing dementia; they can work out your mood from how you’re typing; they can guess all sorts of things about you by analyzing the data. But the way in which we think about privacy is itself being shaped by how we use the tech, at the same time as we’re trying to make certain that the tech guards our privacy. Is that the kind of thing you’re thinking of? The way in which the technology lures us into posting stuff on social media, the way in which it manipulates our emotions. Twitter, for example, tends to incite the worst emotions; it tends to get at your angry feelings, and you’ll then reveal things about yourself that you wouldn’t ordinarily reveal. So the way we think about the notion of the self, how we relate to other people, how we think about all those things, is changing at the very same time as we’re trying to control it.

I think that’s exactly the right way to say it: it’s actually modifying the manner in which we engage with each other. Yes, in its image, in the image of the tech. So it’s having a parasitic effect, and that’s obviously the danger of any kind of supplement, right? It has a parasitic possibility, which is that it starts to shape reality to its own image. I did this interview with Christian about Marshall McLuhan, and he talks about how McLuhan said that humans are basically already the sex organs of the machine. You hear that and you think, okay, that’s really pithy, but I’ve been seeing more and more how true it is with COVID, because all of the things that we did for COVID would not have been possible without this. Not even possible without Zoom, without social media, without all the technical tools that we have. And it actually modified the way that we perceived COVID, because, and I’m sorry to say this for people who have suffered from it, if something the size of COVID had happened 100 years ago, it would have gone pretty much unnoticed. It would have been noticed as a disease, but it would not have changed the very structure of reality; it would not have changed everything.

So I think there’s a strong argument to be made for saying we couldn’t possibly have had a lockdown, we couldn’t possibly have done a lockdown, if we didn’t have the internet. Even ten years ago, or five years ago, we couldn’t possibly have had this kind of lockdown. No, COVID is the image of now.
And it looks like the internet, like this technology. Yes. It lives through attention, and so it’s actually increasing the amount of attention we give it through its very own mechanisms, through how it’s able to adapt to reality. It’s almost like a weird Darwinian thing, where it’s adapting to reality and taking opportunities to increase its effect and power.

Yes. The idea that we’re the sex organs of the technology is really intriguing, but I also kind of want to push back on it a bit, because if we take it too literally, it lets the tech guys off the hook, because they are acting deliberately. And we don’t want to let them off the hook, we really don’t, because otherwise Mark Zuckerberg is just a helpless pawn who could do nothing. So there’s a combination: it’s like they’re Dr. Evil’s henchmen or something; they’re actually working in conjunction with it. But I think it’s really interesting how the way we relate to each other through all these Zoom meetings is changing so much. Of course, one thing to remember is that lockdown hasn’t changed some people’s lives very much at all. The people who are delivering are still going out delivering. And that’s interesting as well, the way in which it helps to divide people. There was a tweet the other day that I really laughed at, though you know how tweets are always taken too literally these days. Somebody said: in reality there has been no lockdown; middle-class people have been hiding while working-class people bring them stuff. Which got a whole set of people saying, oh no, no, I’m middle class, I’m a doctor, I go out. But the point is that it does actually divide people a lot. The technology divides; we have different experiences of it.

And the way we relate to each other, say over Zoom, is really interesting, because it changes how we think of ourselves. It makes you much more self-conscious, because you can see yourself, thinking, do I look stupid? What’s in the background? I went to give a class a couple of months ago and realized that I had my washing in the background, and I didn’t want the students to see my washing. All that sort of thing. But also how we relate to the other person: even the visual information you get from a camera, even a really good camera, is a tiny fraction of what you’d get if you were in the room with a person.

Yeah, exactly, because you don’t realize that when you’re with someone, you notice very small things, almost implicitly: the way they’re sitting, the way their hand is twitching, the smell of the room.
There’s so much happening. You notice how dirty someone’s hair is, these little things that you sense when you’re with someone, and they give you an impression of how that person’s doing or what’s going on around them. Whereas Zoom is just another version of something like social media. I’ve argued for a while that social media is basically a form of showing yourself and then watching other people; it’s entertainment brought down to the individual level, and it’s not true interaction. Even this: I always say that we’re having a public discussion here, and it’s not the same discussion we would have if we were sitting together in a coffee shop.

No, we’d be having a cup of tea for a start. But there’s something I’ve been thinking about recently where I’d really be interested to know what the effect is. In lockdown, the experience of the millions of people who’ve been completely on their own for month after month after month is going to be really quite different. And one of the things I was thinking about, in terms of how AI runs on electricity and the importance of electricity: it occurred to me that of course we run on electricity as well, as biological creatures, and the electric fields from our bodies extend beyond our bodies. I actually have no idea what impact that has, and whether you can pick it up from other people when you’re in the same room with them. I’m just thinking about the physical distance, because we can communicate as much as we want verbally.

Yeah, no, but I think you’re absolutely right. The idea that you know that someone’s looking at you from behind is real; that’s not fake. All of a sudden you know someone’s looking at you. Animals have all kinds of ways of perceiving that are not just the five senses or whatever. So I think you’re right that this technology is extremely limiting in terms of our human experience. And with my kids I see it, it’s nuts. We had very limited tech in our house, very, very limited, like an hour of watching something a week, pretty much. And then COVID hit, and we just lost that absolutely. There was no way around it, because the only way they could have contact with their friends was through digital means. And then you realize that some of your kids actually prefer it. It’s safe. Yes. Right. It’s safe because you have total control over what you’re projecting, and you don’t have all the side effects of a conversation, all the possibilities of misspeaking, of making a mistake, of having snot come out your nose, whatever it is, those little things that can happen that make you embarrassed in a real conversation. It’s nice and clean.
Yes, but it does focus us towards certain parts of experience, doesn’t it? Because I think all of this tech focuses very much on the cognitive and linguistic, and away from embodiment. I was thinking about being so distant from people, and it occurred to me: the family I’m from, my mum and dad, especially my dad really, we’re not the sort of family who actually spoke very much. We weren’t terribly chatty, you know what I mean? But you’d still want to go and see somebody, and I’d often spend hours with my dad just sitting together in silence. It’s that kind of thing. Thinking about that makes you realize how this is all channeling us towards certain aspects of human experience only. So would that be an example of how the technology itself is twisting things, without even having any content, just through the way we’re using it?

Yeah, definitely. And it’s all based on attention, which is really important. I talk a lot about how attention is the basis of reality, and the tech companies and these platforms seem to have understood that. And I think they’re using it in a crazy manner, because the “like” thing is so smart. Whoever came up with the like button is an absolute demonic genius. It’s amazing, because you can get pure attention. It’s not messy attention. It’s not like the kind of attention where someone is looking at you and all of a sudden you see something in their eyes and you don’t understand what it is, whether they’re angry with you; it’s none of that. It’s just pure attention. Yes, like or dislike. Yeah, exactly, and it doesn’t matter which, really; it’s all about quantity of attention.

Yes, but it’s not the same attention as the sort you’d give if you really wanted to attend to a person. Attending to a person is a really complex thing to do, isn’t it? This is a tiny sliver of attention, a currency of attention.

Attention transformed into currency, because that’s what it is. And it’s hilarious, because I fall into that all the time. People ask you, how many people are following you on YouTube, how many people are following you on Twitter, and you tell them, and then you see in their eyes that it gives you power. It’s like: I have so much attention; this is the currency of attention that I hold. And if I want to be invited on a podcast, or if I want to have someone come on my show, my currency of attention is going to decide whether or not they’ll accept. I mean, it was always kind of like that; prestige has always been a part of how things work. But now it’s been so quantified that it really is a type of currency.

Yeah, it’s really interesting how it’s quantified, so you can just count the likes. There’s a number, and you can see how many people have said they like you.
Yes, and it’s not about quality, it’s just about quantity. One person says they like something because they’re scrolling through and find it mildly interesting, and someone else likes it because it changed their life, and all they can do is still just click like, or put a heart instead of a thumbs-up. It’s just a little bit more, but still, it shows you how much it has all been transformed into a quantifiable currency.

Yeah, yes. So take that system: if you try to apply an ethical layer on top of it, you’re limited by the very system itself. The very system of turning attention into currency will reduce the ethical possibilities of that system. I don’t know if that makes sense.

I don’t know, but I was just going to point out something really paradoxical about how this relates to AI and computing. The value in AI, the currency of it, is intelligence, trying to increase capacity, and intelligence is treated as the most important part of human beings. But how it treats us has nothing to do with intelligence. It’s hitting the basest part of ourselves. It’s hitting raw emotion, and not even emotion in general, but a really attenuated part of it. So this is an aspect of control that’s really disturbing, I think, and really worrying. There’s also a big split in what values there are in the system. I see this in a lot of futuristic talk about AI. There’s a lot of talk as if humans are okay, we’re pretty intelligent, we’ve come as far as we can get, and the next stage is going to produce even more intelligence: either a transhumanist extension of our intelligence, or the view that biology has its limits and we need to go into a digital form and get more and more intelligence. But then how we’re actually being treated is basically like farm animals. We’re just being treated like objects.

Yeah. When you’ve reduced intelligence to quantity, when you mechanize it and you have this idea that mechanized reality is somehow higher, then you get treated like a machine. That’s just how it goes; you can’t stop that.

Yes, yes. Our emotions are just manipulated, and that just reduces us. Yeah. And this is also like the idea of submitting to a god: once you do that, you can’t change the direction. So I have this contention, and maybe I’m wrong, maybe I give them too much credit for good intentions, but I think that when Facebook and Twitter started, they understood one insight, which is attention: we just need to keep people’s attention. And they didn’t at all realize that the easiest way to get someone’s attention
is through appealing to the lizard brain, appealing to the rawest emotion you can reach; then you’ll have their attention. So now it’s like they set up this false god, and they’re paying the price for it, and they don’t know what to do with the results. So what do you do now that you’ve created this?

Well, I suppose there’s quite a lot you could do. There’s a sense in which they’ve created something they don’t know what to do with. But there’s another sense in which, if you’re a billionaire, you’ve got quite a lot of choice. And there’s a lot going on which is really shocking and awful. Just as an example of the way the projecting works: one of the narratives is that AI is going to take over and do a lot of the work, make life better for humanity in general; the beneficial AI movement is all about that. But if you look at a lot of what’s happening with things like social media and Facebook, there’s all sorts of hidden work that’s going out to human beings who are actually doing it. Appalling jobs doing content moderation, which are generally really, really poorly paid, a job that no human being should have to do, having to look at the worst material. It’s just horrendous. And a lot of the stuff in machine learning is farmed out to people who are labeling images; it’s actually people doing the labor. In a sense, it’s a bit of a con to say the machines are doing it, because there are armies and armies of people doing it, really low-grade, really poorly paid people.

That’s a really interesting point, because, like you said, intelligence actually comes from humans, real intelligence, and what they’re doing is farming intelligence from people. Yeah, they’re feeding it to the machine. Yes. That’s fascinating. And it’s interesting that it inevitably happens that the people doing this, the content moderation and the image labeling, are like semi-slaves. They’re at the bottom of the social sphere; they get paid minimum wage or whatever, they work at night and keep all kinds of crazy hours; they end up looking like the dregs of society. And so it makes even more sense to imagine it like the Matrix, really like the Matrix, except that instead of farming bodies, which is what the Matrix suggests, getting energy from the humans, it’s the opposite: they’re farming intelligence, farming the capacity to identify quality, because that’s what the machine doesn’t have. So, yes, they get that from the humans.

Yes, really interesting. And another dirty secret is that it uses up masses of energy; it’s really, really energy-hungry. So a lot of this stuff is hidden, and it’s not at all what you expect.
That’s a real divergence of the narratives. There’s a narrative that in the future we’ll have lives of endless leisure and be able to do whatever we want and be really creative and so on, because the AI is doing all this stuff. As if. But yeah, that’s one of the things that’s going on.

Yeah, it definitely does show you. I think the content moderation is one of the most fascinating cases, in the sense that the people who do content moderation have to be lured in, like you said, from the very low rungs of society, because who would do that? Everybody knows that doing it would destroy you. It’s like those ads you hear on the radio about going to test pharmaceutical products; the people who do that are the same people who will be doing content moderation.

Yes, really terrible. But as you say, it’s the humans who have to make the judgment. Given that you’re going to have people doing content moderation, that would be one thing you really would want to be able to train an AI to do, because you really do not want any human being to ever have to do it. And some of the material would have to be handed over to the police, wouldn’t it? Some of it is illegal and needs to be given to the police. Yeah, I’m sure that happens all the time.

Yeah, but then the thing is that you don’t want the AI to be in charge of that, because the problem is always the side effect of power. You know what it is? It’s like the genie and the wish. That’s what we’re getting. The genie comes and says, you’ve got a wish, and the person wishes, but the genie grants it with absolute power, and the person never thought of the side effects of what they wished for. And then the side effect comes crashing in.

Yes, yes. So we keep coming back to the fact that you really do need humans in all of this. One of the side effects of what’s going on in social media is the proliferation of so-called hate speech. That’s a really interesting thing to think about, because there are loads and loads of people working on developing algorithms to detect hate speech, so that you can automatically take it down. But you can work out a lot of the problems with that straight away, because detecting irony is a bit of a problem for a machine. Yeah, detecting irony is a problem for a lot of people, actually. Then there are also people working on bias in all the data, because they want to eliminate bias, and that’s a whole other story as well. So there’s bias in hate speech detection too, because certain language groups or subcultures will maybe say something which looks like hate speech. And there’s also the problem of how the whole thing creates itself, because there are so many issues, including cultural issues. So let’s take the idea of hate speech.
One of the reasons why there’s so much stuff which counts as hate speech is because of how social media works: it encourages it. So it has created the problem, and then it’s trying to solve the problem. But it all spills over into society, because as well as this happening, we have things going on in the culture, where people are now falling out, having rows, getting thrown off social media, all those kinds of things, and then the idea of hate speech gets built into various kinds of laws. This is one of the reasons why I think we need to modify the idea that we’re the sex organs of the machine, because there’s far more complex stuff going on as well. We can’t just look at the technology; we have to look at what’s simultaneously happening in the culture, and how that feeds back into it.

Yeah. What you said about hate speech is actually brilliant, because, like you said, the anonymization, and the fact that we are physically distanced from someone and can say something offensive without getting punched in the face, means that people feel they have every opportunity to let loose all these darker aspects of themselves. And then the same system which brought it about is now trying to clamp down, and it ends up, like you said, spilling out into normal society and becoming the standard by which we make rules in reality. Because when I meet someone in real life, the type of insult that you’d be capable of posting online, you’d be really reticent to say in person. Some people will, but it’s very rare, and that person will be ostracized rather quickly. Yeah, and then things can happen like pile-ons on Twitter. You’re not going to get that in real life, where thousands of people join in and start giving you death threats. That doesn’t happen when you’re walking down the street and you flip someone off or say something stupid.

Yeah, but then of course the solution can be a problem as well, because people get ousted unfairly. So there are no clean answers: people can get thrown off social media, and you could say, oh, you don’t need social media, but the problem is that we’ve become so used to it. And the appeals process, actually, is really interesting. Last time, I was in the middle of writing a lecture on censorship online and thought I’d just check Twitter, only to discover that my Twitter account had been suspended. I had no idea why. There’s this really weird appeals process: I had to put in an appeal and say why I thought I shouldn’t have been suspended, but I didn’t know why I’d been suspended, so I couldn’t reply.

And this is something that I’ve seen; it’s really insane. It’s a strange thing: how is it that tech companies are bringing about the Kafka state, the insane bureaucratic communist-type state where you don’t even know who to talk to?
There’s no human on the other end of the line; you have no idea how to appeal whatever it is they’re doing to you. And, like you said, because they’re afraid of you gaming their system, they make all their rules completely opaque.

Yes, it’s really interesting that you have no idea what to do about it. But it’s also interesting that a lot of the people working on it, I think, haven’t necessarily thought about the problem of being ousted because of hate speech, because of what they assume. There are lots and lots of people working on this problem technically, looking at hate speech and how the algorithms work and so on. But a lot of them have certain assumptions, I’ve noticed, where it hasn’t occurred to them that the hate speech detection is going to be biased in certain directions. It was interesting: a couple of months ago I was reading a paper where some researchers had looked at an algorithm that Facebook uses, and how it tends to be biased towards what they called hyperactive users, certain users who are online a lot and comment a lot, so the algorithm boosts them and they get more visibility. You might think, well, good for them, they’re putting in the effort. When I first read it, I thought they were concerned about this because these groups tend to be far right. But when I read it again, I realized I’d misread it: they were concerned simply that the groups were right wing. So it wasn’t that they were concerned the system was biased politically; they were concerned that it favored the right wing. And I thought, hang on a second, you are allowed to be right wing, aren’t you? Last time I looked. It’s because the people writing it have got their own particular bias.

Yeah, and they’re blind to it, they don’t even see it. You hear it all the time: oh, but he’s right wing. And I’m like, what? Did you just say that as if he’s not human, as if he shouldn’t exist, because he’s right wing? And people will say it just like that, without even thinking.

I know. Yeah, it’s pretty weird, and I think it’s quite new. I think that’s an example of how the tech and the culture are working hand in hand. Another example: a lot of what’s going on is really contradictory. On one hand, a lot of the AI is tending towards uniformity: uniformity of language, the dominance of English, the dominance of particular sorts of English, actually of American English. Things like, you know, when we spoke before, we were talking about how annoying I find it when you’re writing an email and it tries to write it for you; Gmail does that. So that’s going to unify things. At the same time, it’s fragmenting us into different identity groups. You might think those are in tension, but social media is actually encouraging and enabling the identity groups to happen, and to happen so quickly.
Then there are the different gender categorizations. That has happened so fast, the proliferation of gender categories; it could never have happened as fast as that before the internet, because we could never have communicated so quickly.

Yeah, and also because people with alternative gender identifications are so rare. Because they appear as a marginalized group on the social media platforms, the platforms want to protect them because they’re marginalized, and therefore bolster their voice, and therefore make them appear as a substantive category of society. When in reality, how many actual transgender people are there? Unless you’re in that scene, you’ll maybe meet one or two people in your whole world; you rarely meet people like that unless you’re really in a certain scene. And I mean, that’s fine, that’s reality, but like you said, the very system itself creates this weird thing. Because it bolsters the group, it also ends up attracting attention to it, and then attracting people who want to bash it, because that’s just how it works.

Yeah, yes. It’s also actually linked to a consequence of trying to do the right thing, ethically. People are quite rightly concerned about being biased, because who wants to be biased, who wants to exclude people? And I think because there’s such a concern about bias in the data and bias online, that feeds into the idea that you need to protect these minorities, with the focus on particular minorities. But I also discovered something really interesting. You know Facebook’s famous 72 different gender identities that you can have? I was looking it up, and when you put in your gender you start typing it and there are drop-down menus with the different sorts. But underneath there’s one you can click for who you’re interested in. And when you click on that, the options are not 72 options. The options are men or women.

Yeah, they dropped at least 70 there. You have this exponential possibility, and then it’s like they said, wait a minute, we just want attention, so we don’t care what you say you are; we just want to know what you’re attending to. If we put 70 down there, it’ll be a lot harder to track, harder for our algorithms, harder to sell that data.

Yes, yes. Oh, mercy, that is some frightening stuff. But what ends up happening, and this is the fascinating thing, is that because there’s so much power, in order to counterbalance the hate, they create this weird upside-down world. Try it: if you type “straight couple” into Google, you’ll get straight couples talking about homosexuality. I think one of the top results on Google for “straight couple” was something like a straight couple saying we won’t get married until LGBT rights are accepted.
So you get this really strange world where they end up having to promote the margin, to a certain extent, more than normal. And it creates, as I’ve talked about before, this weird upside-down hierarchy which is manifesting itself.

Yeah. And of course it’s the margin of the margin, because it’s only going to be the really vocal people, only the activists, who are out there; there must be lots of transgender people who just want to carry on and live their lives. Of course, I would say probably most.

Yeah. But because it’s an attention economy, it’s even more so. It always used to be that the squeaky wheel gets the grease, but now that’s amplified by the very system. The fact that people can congregate and become mobs, these attack mobs, and make all these gestures online. But it’s fascinating. One of the most fascinating things is that what AI is doing is basically giving more power, huge amounts of power, to whatever it is that’s leading it, the way the priesthood has power. And so there’s the cultural fight, and that’s something people haven’t quite seen: the fight for what comes up on the first page of Google should probably be the ultimate culture war, because what appears on the first page of Google has really become the frame of reality. And there are people behind that. You can see it, like when you start typing and the autocomplete won’t complete certain things; there are all these things, and you know that there are people pulling the strings.

Yes, yes. But it’s like the Wizard of Oz. It’s wonderful, because they can just say it’s the algorithm. They can even point to AI and say, oh, it’s these algorithms; it’s not our bias; it’s just AI doing it. But yes, that might also be one of the more frightening aspects of AI.

Yes, and the whole notion of bias is really interesting as well, actually, because it tends to rest upon the idea that we can get an unbiased view of the world. My view of the world is unbiased, what are you talking about? But how would you get an unbiased view of the world? Collect all the data? You can’t; the world is not just collecting data, you have to analyze it, so how could you get a completely unbiased way of analyzing the data? And if you say anything critical, it sounds like you’re against trying to get rid of bias, but what I’m interested in is how people speak about it and fail to see that underneath it you’ve got to have some view of how the world is divided up.

So the problem isn’t saying you want to get rid of bias; it’s saying you want to get rid of bias while secretly wanting to impose your own bias on everybody else. That’s really the problem.
It wouldn’t be like this if we had a commonly recognized hierarchy of being, one we acknowledge, whose problems we can see, and about which we just say: this is what it is. In the ancient world, say, you have the king, you have the bishop or the pope, and then the world lays itself out; you can hate the king, but he’s still above you, and you recognize that hierarchy. Everybody knows that’s how it works. But now we say: no, there is no hierarchy, it’s all equal and objective. And yet secretly there is a hierarchy, a secret hierarchy and a secret bias that no one wants to acknowledge. That’s what has been frustrating so many people about the media, and not just the media but the social media companies too: they keep saying they’re being objective when obviously they’re not. You just wish they would say so, and then at least we could start from there. Yes, yes. But one of the really interesting things about the emphasis on bias is that I think it also speaks to an idea about what’s wrong with humans, and the way in which AI might seem to be better than humans. Talking in really crude, simplistic terms: if you think AI is better than us because of its intelligence and its data-processing capacity, then it’s as if the problems with human decisions come from our biases, and those come from our emotions, the emotions getting in the way of reaching the right answer and being completely objective. And that’s really interesting, because on the one hand having emotions and being biased by them is seen as a problem, while at the same time, as we’ve seen, the whole system works by manipulating our emotions. So it’s slapping us around the face and accusing us of something while encouraging us to keep doing it, if you know what I mean. Yeah. And that seems to be what Tim Cook was trying to get at in his speech at the ADL, where he talks about the god out of the machine. He actually doesn’t like the idea; he says, I don’t think AI can do it on its own, it’s the people behind the AI, and that’s the god out of the machine, which to me is even more frightening. It’s even scarier, because it’s like: Mr. Cook, I really don’t want you as the god riding on the machine. Not you, seriously, my friend. Yes, that frightens me tremendously. Yeah, because it so quickly slides toward a sort of totalitarianism: these people are not elected, you can’t get rid of them. It’s much more frightening, isn’t it. The power of the tech companies in relation to politicians and to politics is a really frightening thing, actually. Oh, it is. I would say this American election was basically run by social media, maybe the last one too, this one for sure. They just decided there were some things you’re not allowed to talk about.
Yeah, you’re not allowed to talk about certain things, and certain things are put up at the top of trending. And because we’re all in lockdown anyway, looking at our screens all day, the ultimate totalitarian possibility is right there in front of us, right there in the situation we’re in. Yes, quite right. I think that’s one of the reasons why worrying about superintelligent AI taking control of us in the future is misplaced: it’s already happened. But then, on the other hand, I really do think we have to try not to counsel despair, because we have to try and do something about it. What do you think we can do about it? Well, I suppose there’s stuff happening already, isn’t there, like people trying to set up alternative social media. Yeah, like Parler, right? Yes. Which was then deleted. Right. So right now it feels like the control is extremely strong, and it seems it’s going to be very difficult to create alternative narratives. Also because, I don’t know if you saw the article in the New York Times today about, what’s the name of this new app, I can’t remember, I forget what it is. Anyway, I’m not on it, but it’s this app where you just pop into conversations: you have these conversations, and you can just pop into one. And the article was saying that unfettered discussions are being had on this app, and it saw that as a bad thing. Yes, they were saying unfettered discussions are being had and that it raises the problem of intimidation and bullying and all that. It’s insane. And you see it too in articles going after the encrypted messenger applications, calling them shady: why would you want to hide your conversations? It’s become taken for granted that private conversation is in itself shady. Yes, yes. And that’s something you really have to watch out for, because that thought can then translate into the real world. In Scotland at the moment, for example, there’s a new hate crime bill up for discussion, and one aspect of it was to repeal the private dwelling exception for hate speech, which covers things you say inside your own house. There are discussions about whether to get rid of that. And I rather think that what’s happening on social media is warming the brain up for thinking that might be a good idea, though luckily there’s been a huge backlash against it. So one of the things happening in lockdown is that we’re forced into all this social media, but I do wonder what might happen if we ever get out of it. I think we might be like cows let out at the end of the winter. Have you ever seen that? When they’re let out into the fields, they jump.
I think we might be like that. We might just realize what we’ve been missing, and maybe that’s a good thing. It’s hard to know whether these polls are real, but someone posted one in the US saying 75% of people say they’ll still wear masks after Covid. Yeah, well, you take your mask off during a Zoom call, so that’s all good. Yeah, it’s difficult. See, one of the things I find really interesting about working in the ethics of AI, and I think it’s the most interesting area I’ve ever worked in, is that partly it’s the most scary, but also, when you look really closely at how the technology might clash with human values, it has the potential to be really fruitful and useful, because it helps remind us of what we’re missing, of what our human values are. So yes, if you can use the things you’re saying to hold a mirror up to people, so that we can see ourselves in the way we talk about AI and in the narratives surrounding it, that’s probably the first step in helping us avoid the major pitfalls. Yeah. Maybe there’ll be little groups, you know, like in, is it, I can’t remember, is it Brave New World where they’ve got the savages? Yeah, Brave New World, they have this little tribe of savages. We can be a few groups of savages who actually see each other face to face. Yes, yes. But I also think one of the big background problems, when people try to think about the ethical issues here, is the paucity of how they’re thinking about it. A lot of the work I see around AI ethics is built on what you might broadly call utilitarian or consequentialist ways of looking at things, which also fits the way intelligence is understood, and those are really interesting narratives running behind AI as well. There are lots of disputed definitions of intelligence and lots of different forms of it, but the form that’s often used, which fits really well with computing, is something along the lines of: intelligence is the capacity to reach your goals, to be able to take steps toward reaching your goals. So intelligence is just instrumental, an instrumental account of means-end rationality, where you have a goal you’re aiming toward. And that fits really neatly, really naturally, into a utilitarian way of looking at things, where you’re trying to produce as much benefit as possible given certain goals and preferences. So it fits into notions of maximizing happiness or minimizing unhappiness, or of satisfying our desires and preferences. And that’s where the trouble starts, because one of the problems with it is that it fits an idea of reason that sits really well with an Enlightenment ideal: that we’re simply making progress because we’re getting more and more rationality, achieving more and more of our goals. Like a Steven Pinker type of view. Yeah, yeah.
The thing is, I don’t actually mind that way of seeing intelligence, as the capacity to reach your goals; it’s just that there are higher goals and there are lower goals. The materialist world, the Steven Pinker world, is a lower world, a world of whims and desires and pleasures and pains, whereas virtue and the good, in a Christian or a Platonic sense, that’s the goal, the actual goal of reality. And not only is it the goal of reality, it’s also the Achilles heel of artificial intelligence, which is why it has to farm, because what it’s farming from people is not just information. It’s the good. It’s farming quality: the capacity to recognize quality and to engage in it. That’s the purpose. So it’s flipping everything upside down, instrumentalizing the capacity to perceive quality in order to attain practical, money-making, desire-filling goals. Yeah, I agree, but I think that’s one of the points where it might flip and turn on its head, because people will realize the futility of it. You can see those kinds of things in the discussions around what we might do if superintelligent AI develops, how we might be protected. One of the worries is the wish-fulfillment worry: if you program a superintelligent AI to, say, make us happy, then what it might do is plug us into electrodes so that our pleasure centers are just constantly stimulated. Yeah, we’re just getting endorphins injected into our brains. Yeah. And so people work out that we don’t want that, we can’t have that. Sam Harris talks about that positively all the time. My goodness. Yes, I know. Actually, it’s interesting, because these debates have been going on in philosophy for a long time; they come out of the discussions about utilitarianism. People have been talking about this for ages, because they discovered decades ago that this is what rats will do: wire up their brains like that and they’ll keep stimulating themselves until they collapse with exhaustion. But of course you would have to be out of your tiny mind to think that was a good way of living a life, wouldn’t you. So then there’s a branch that looks at trying to fulfill human desires instead, but you very quickly realize: fulfilling desires, yes, but which desires? There’s a problem about which desires you choose, and what AI is already doing to us makes it worse, because it makes absolutely apparent how easily our desires can be manipulated. Yeah. But see, the worst part, the worst step, is when the people behind AI understand the nature of desire, and actually understand that human beings are religious in nature, and then try to implement that.
So in a way it’s better to have, let’s say, moronic Zuckerbergs for now than someone who actually understands the qualitative aspect of the human person, how it looks for real communion and reaches toward a transcendent ideal, because you don’t want them getting hold of that and trying to plug us into it. Well, on that note, we should probably finish our conversation. Yeah. I feel like I’ve been the doomsayer here, constantly pulling the rug from under your feet. Oh well, you might be right. It’s partly that I don’t want the tech guys to get off the hook, but even if you’re right, there’s a sense in which we have to try not to be too doom-laden, especially with a pandemic on. Yeah. So I’ll let you have the final word on the possibilities for the future, and for how we as people can deal with this. Well, I think for people to deal with this, we need to realize that we still have free will, that we still have some control over it. We can also use it for good, because it’s opening up all sorts of transmission of knowledge and understanding and communication, so use it for that. And also really, really think deeply about what kind of life you actually want to live. Well, thank you so much for your time. I’m really looking forward to the conversation that’s going to be had in the comment section, and I’m sure we’ll have another discussion at some point. Yeah, that would be great, because I have loads of questions I wanted to ask you and I haven’t asked any of them. All right, then we’ll definitely have another conversation. Okay, great. It was good to talk to you. You too. Okay, bye bye. Bye bye. So I hope you enjoyed my discussion with Paula Bottington about artificial intelligence and the stories and ethics that surround it. By now the Symbolic World is a whole network of things. There are of course these videos, and a podcast version of them that goes out as well. There’s also a clips channel, run by several people who are also moderators on the Facebook group. And we have a blog where a lot of symbolic thinkers are working out their ideas on subjects ranging from religious symbolism to movies and video games. So make sure you check all of that out at thesymbolicworld.com. And if you appreciate what I’m doing, please consider supporting it: everyone who supports my work gets a Patreon-only video once a month, where I deal with trickier or more prickly subjects that are harder to address in these public videos. So check that out, and thank you for your attention.