https://youtubetranscript.com/?v=jj69RnoE9qI

I think ChatGPT stops collecting data at 2021, I think, but if it carries on updating the data, it's going to be collecting its own data. It's going to be a feedback loop. That's really the biggest thing that I'm seeing: if you project AI very long-term, there'll be diminishing returns. When people talk about the singularity, about AI exploding, I don't see how that's possible, because it has no relevance, it has no desires. So all it can ultimately be is a kind of spinning back onto itself, maybe with a long arc, but on that long arc you're looking at a kind of leveling, a kind of intelligence leveling. I can't see otherwise.

This is Jonathan Pageau. Welcome to The Symbolic World. So hello, everyone. I'm here with Paula Boddington. Paula is an ethicist. She also teaches in different institutions, and she's been thinking a lot about the relationship between biology and ethics, but also technology and ethics. So it seems like this is the perfect time to talk about it, with the rise of AI.

The main problem is trying to focus on something where it doesn't just go all over the place, because everything is so interconnected.
Yeah, well, I'd like to know a little bit, because AI has now really exploded, and not only has it exploded, but the religious aspect of AI seems to be coming more and more to the fore, you know, with Elon Musk going on the record talking about the heads of Google literally saying they're building a god, and then many of the transhumanists have pronounced themselves in publications. Now it's becoming more and more obvious that there is a religious aspect to AI, even in the people that are making it. So I don't know if you had thoughts about where we are now, with ChatGPT taking over really within just a few months; all of a sudden it's everywhere.

Okay, yeah, well, there are lots of things that could be said about this. The religious aspect of AI has been there really for a long time, and maybe it might be helpful to start off by clarifying some things, because there are lots of people who are basically working in that field who will no longer use the term AI because it's so confusing. So there are people who are trying to, say, build artificial general intelligence, or trying to build AI with very, very broad capacities, and other people who are working on really discrete issues, like for example using AI to improve the reading of medical images, the accuracy of how you diagnose cancers, and so on. I know a lot of those people. There are big divisions ideologically within people working on this, and a lot of those people will now just prefer to talk about machine learning rather than AI. I think it's important to break up different ways of looking at it: we can think about AI in terms of the ideology of what we're actually calling artificial intelligence, we can think about the actual mechanisms of the software involved, and we can think about the hardware, how it's infiltrating our lives, and the infrastructure, so they can
all operate on different kinds of levels. So one question, even, is how it got called artificial intelligence. Historically, it dates from a summer school in 1956 at Dartmouth, where John McCarthy and a score of people got together because they thought they would be able to build an intelligent machine over a summer, which is hilarious really, and that's how the name stuck. But a lot of people don't really like the name, because intelligence has got such broad connotations and a lot of the work is really, really discrete. But ChatGPT is really interesting, so thinking about what avenue to go down: one of the reasons ChatGPT is so interesting is because, unlike for example using AI to diagnose cancer, ChatGPT is a chatbot. It's interacting with humans, and it's interacting with the public, which means it's not an elite question of asking ourselves, what is this doing, where are we pointing this. No, it's basically loosed onto the public, and it's acting in all kinds of ways that are unpredictable, maybe they are predictable, but I think it's becoming something like a form of divination for people, and people are treating it that way.
Yeah, it's kind of becoming a sort of better way of reading your horoscope, isn't it, and trying to find answers to things. But I think it's really interesting to think about the fact that ChatGPT is about language, because not all AI is about language, but I think it's really easy to fall into thinking maybe this is conscious, maybe it's really an agent, simply because it's focused on linguistic intelligence, which is only one aspect of intelligence. So I think some of the things going on sort of emanate from AI, and some are about general things in the culture, in the air, which feed around in a circle and then help to shape how we're thinking of AI. Focusing on intelligence is really closely connected to sort of Cartesian notions of the mind and the body: thinking that what we are essentially is just some mental substance, and focusing very much on cognition as a hallmark of humans, and in particular, even though all cognition is not language, thinking of linguistic skills. Following with that, because I have so many different things to talk about, I actually wrote down a sketch of the feedback loops. I was really interested in the video you did about a month ago about AI and Moloch, and how it's a form of agency, which I think you're right about, but I think it could be really interesting to break it down and see how it's operating, because in some ways it acts as a kind of ever-escalating feedback loop, but there are also ways in which it's operating on us as agents, and it's getting us to carry along with it. Like, for example, you can get funguses that infect ants and make the ants behave weirdly, so they go up and die and spread the fungus, that kind of thing.

But also, one of the things that's happening, I think, is that it's breaking things down in a way that, if we're thinking really carefully, gives us the capacity to go back and say, hang on, this is wrong, this is not working; we can go back and see what the problem is. Let me just illustrate. It can operate a bit like, I was using a metaphor of how bushfires spread. In Australia, where I used to live, they can spread really, really rapidly; one reason is because you get the eucalyptus gases forming balls of fire that leap ahead. Then after the fire there's a kind of devastation, but the very eucalyptus trees have seeds that will only sprout if they've been through a fire, so that new things can come up. So there's a kind of cycle of things happening. If we go back to the notion that some people are trying to build AI to be like a god, and there's theology in AI, one of the things happening is that there's a response that we need to try to combat it, so we need to think about the ethics of AI, we need to have ways of trying to think about that. And one of the problems with that is that the standard ways we've been thinking about ethics kind of break down. You can think about ethics broadly in terms of harms and benefits across the population, but we can't just carry on with old notions of harms and benefits, because what we think of as a harm and what we think of as a benefit is changing because of technology. A really popular, commonly used way of looking at ethics, a utilitarian model, would be based upon trying to fulfill human desires, trying to work out what our desires really are, you know, not our kind of short-term desires but our higher-end desires. But what AI is doing is hacking our desires; it's working on us as an object and hacking them, so
what that then means is that this ethical basis of simply looking at fulfilling our desires, or what makes us happy, cannot be used, we should understand it cannot be used, which means we then have to try to think really clearly about what the basis of our values is, which can end up going back to thinking we need to think about God, or certainly some higher value. Do you see what I mean, have I explained it okay? You can kind of get a loop where we end up having to think about not just some superficial notion of ethics but about human nature, where we are in the universe, because of the fact it's being taken apart by AI. So I'm a bit pessimistic and a bit optimistic, but I think we need to think really carefully about it. Have I explained that okay?

No, I think you've explained it well. Let me think about what you said. One of the issues with AI, and it's something technology does in general, is that it increases power and it externalizes the means, right? You have these means of action and those means get externalized, but the intention usually stays in the person. So a car makes you more powerful, makes you go faster, but you need to add things to yourself. What AI seems to be doing, and it's not just AI, information technology in general seems to have been doing this for a while, is externalizing thinking, externalizing the very process of thinking. So you can remain with a desire, let's say, or a question, but the means by which you get to the answer can now be completely externalized. So at the very least, what AI seems to be able to do is increase our power in a way that will also atrophy the muscle of intelligence that humans have. And then that bigger question, like you asked: what does it mean to be human, and also what does it mean to be happy? Are we just people who want the kick, who just want it to happen, or is the very process of being involved in something part of what gives us meaning and purpose?

Yeah, precisely. So, the people who are producing this, I think this is when people talk about technological unemployment, that you won't need to do any work anymore: they're going to be working, they're going to be building all this, they work because they enjoy it, and they think the rest of us are just going to sit around doing nothing. But it so depends on what it is that you want to do. There are some things where, if you need to organize lots of numbers, it's really handy to be able to do it fast, to outsource that, but other things we want to do ourselves. I mean, what's one of the first things kids start saying? I want to tie my own shoelaces. They want to be able to do stuff themselves. So it's taking us away from that, but it's also actually revising us. That's what ChatGPT is doing, and it doesn't do it on its own. So, for example, people in education were immediately really worried, because people started cheating on their essays right away. I mean, my kids found out about it in five minutes and knew exactly how to use it, and all their classmates know how to use it. But if you think about the incentives for doing that, why would you be incentivized to cheat on an exam? It's only because of what's happened in education: the idea that education is about getting a formal certificate so you can prove that you can do it to get to the next step, not about actually wanting to learn it for your own sake. I mean, when you learned carving, you didn't think, oh, I wish I could just get it done by a machine, did you?

Yeah, no, I wanted to do it
myself. Yeah, yeah, there's years of value in doing it.

But so, what is it doing to our language? It'll be reducing our capacity to understand language, for sure, but more than that, it's controlling our language, not through ChatGPT alone, but language in general online is being censored, with controls for so-called misinformation, misinformation usually defined based upon a totally bogus model of what science is, because, you know, all scientific knowledge is up for grabs. It's also atrophying our capacity to learn language, you just have to use spell check or something, but it's also imposing a kind of uniformity on us, a uniformity of language. So actually, I should mention, I wrote a book recently. Am I allowed to mention my book?

Sure, of course.

Okay. It came out a couple of months ago, AI Ethics, with Springer. Look, it's really long; I wrote it as a textbook, to try to put lots of ideas together. But the reason I mention it now is because the publishers decided they were going to use an AI editor to edit it. I couldn't believe it. I thought my neighbors were going to call the police, because I was shouting into my computer so much; you just couldn't believe what it did to it. One of the really interesting things is that I could choose between whether I wanted to use British spelling or American spelling. I used British spelling, which meant I used British English, and it really made me realize there are massive differences between American English and British English, and this AI couldn't tell the difference. It just put everything into American English, which, no offense to American English, is really different, and it kept changing the meaning, it kept changing the grammar. Even the present tense is used slightly differently; I didn't realize that. It really mucked it all up. But I asked ChatGPT if it knows the difference between different forms of English, because Indian English, again, is different, and the regional dialects in Britain are really different too, and it had no idea. Maybe it does now, because I asked it. But what that means it's doing is looking at all the stuff in English and just collating it into an amorphous mass of English.

Yeah, so different ways of using English are all being collated into one, and then there's the stuff online. I think ChatGPT has stopped collecting data at 2021, I think, but if it carries on updating the data, it's going to be collecting its own data. It's going to be a feedback loop. That's really the biggest thing that I'm seeing: if you project AI very long-term, there'll be diminishing returns, and, you know, when people talk about the singularity, about AI exploding and becoming something, I don't see how that's possible, because it has no relevance, it has no desires. So all it can ultimately be is a kind of spinning back onto itself, maybe with a long arc, but on that long arc you're looking at a kind of leveling, a kind of intelligence leveling. I can't see otherwise.

Yes, yeah. So it's going to be the same for everybody. Like in music: apparently, from how music is set up online, I don't know much of the technical details, it's set to a particular sort of calibration based on keyboards, so it's a limited way of looking at music, if it all ends up online. It's the same with everything; language is just going to level out. I asked ChatGPT about it, isn't this a problem, and it just replied, it's actually quite
funny. I was asking ChatGPT about AI ethics; it's kind of a way of skimming the surface of how bad things are. It just replied by saying that human language has always evolved. But this is going to stop it evolving, isn't it? And it's always evolved in really different ways.

Yeah, it's different because it's not referring to anything; that's the problem. It has no relevance. And I don't know if you saw John Vervaeke's video on AI, I still need to talk to him about it explicitly, but he said the only solution to AI is embodiment, and not just embodiment but biological embodiment, so that it attains relevance.

Yeah, I completely agree, actually. I've listened to some of this stuff, but I think one of the big problems is how it's making us less and less embodied. That's partly why I said we should think about how we're thinking of what AI is, the ideology of it. It's interesting how we think about intelligence: as soon as we start thinking AI is intelligent, based mostly on its use of language, we then start thinking, if it's intelligent, therefore maybe it's like us, maybe it's an agent. But intelligence is just one aspect of who we are.

Yeah, and manifest in all sorts of embodied ways.

Yeah. But I think, we talked about this a long time ago, about how these systems seem to be farming intelligence from humans and then basically acting as an extension of that farmed intelligence. That seems to be more and more true, because most of the AIs now are what they call hybrid AIs, which means they're trained by humans using different methods. Like Midjourney is training you, because Midjourney will produce several images and then you will choose which one to have Midjourney refine, and as you're doing that, you're constantly telling it what's good and what's bad. So it seems like that notion holds, the notion that AI is basically not intelligent or agentic the way we think about it, but that it's farming it from humans.

Yes, yeah. It's also doing it in terms of the hardware, because in order to do that it eats hardware, and the hardware is mostly made by people working in really awful conditions, sometimes under conditions of slavery, or what's effectively slavery: kids in the Congo digging up minerals with their bare hands, and people in China working in really appalling conditions. So that's another way in which it's hacking humanity. Interestingly, one of the ways in which it's also hacking our agency is in the responses to it, in the attempts to control it, sometimes through controlling what language people can say online and so on, but also through legislation. So the European Union has got an AI Act, which is just about to go through, that they've been working on for two or three years to try to reduce the harms of AI. Of course, AI can cover lots of different things, so it starts off with trying to define AI, and then it focuses on trying to legislate about AI if it's going to cause serious or major harms. But the preamble to the Act, written by, I'm going to sound a bit Brexit now, the unelected Ursula von der Leyen in the European Union, says that it's important to have legislation to control AI to build public trust in AI, so that we can advance innovation, so that we can increase the uptake of innovation. So it's focused on human rights, but it's focused on the human rights of people within the European Union, because if you're going to
increase innovation, you're going to increase the hardware, which is mostly outsourced to people in the rest of the world who are working in really terrible conditions.

Yeah, but also, the idea that the reason we're doing it is that we want to build public trust in AI: it's like, AI is a given, it's going to take over, and so now we want to build public trust in AI. You know, Paula, my daughter, who's 15, received an email from the government, from the Ministry of Education, asking her to fill out a questionnaire online, and it said she could win 50 dollars or whatever. The questionnaire was from the government asking her about AI, and what they were doing was asking what she thinks about AI school counselors for psychological issues.

What?

Yeah, that's where we are, think about it. And in the questionnaire, it was basically suggesting that this AI counselor would not have any prejudices, wouldn't be racist, wouldn't be sexist, right? It would be able to avoid all the prejudices of a human counselor.

Well, for one thing, that's a complete lie, because it's going to have prejudices built in by whoever built it.

Yeah, but they'll be the right people that put those in; they'll be the non-hate prejudices, right?

Wow. But also, one of the things that will be happening is that it'll be collecting data. It collects data on people all the time, constantly, constantly. So that's another way in which it spreads, another way in which the ideology spreads, because of the metrics, of all the data that's being collected. All the time we're having data collected about us, and all the time data that you thought was just ambient, sort of junk data, turns out to be data that can tell you something about yourself, or could potentially do that. So that's another way in which we're being sort of trained or seduced into a mentalistic conception of a human being, because you're led to think that we're basically just made up of information, made up of data.

Yeah, an AI human counselor.

But you might also ask the question, because all these things are nested in other things going on around them. You know, like the issue about cheating on exams with ChatGPT is nested in a wider issue about competitiveness in education and exams. But there's also an easy solution to that. I've got a really novel idea: you could just sit in a room at a desk with a piece of paper and a pencil. You know, people could do that; it's something really novel, I don't think it's ever been done before. A lot of it could really easily be sorted out that kind of way. But, when I was at school there was no such thing as a school counselor, and maybe there should have been, maybe there were some things that were overlooked, it was a bit more brutal in those days. But you might start thinking: why are so many school children, so many teenagers, having such a massive range of mental health problems?

Yeah, all of this is kind of crunching at the same time, this mental health crisis with young people. But it does have to do with AI in the sense, you know, that it is the question of what it is to be human, what it is to be part of a society, what it means to be someone in a community. All these things are exploded right now.

Yes, yeah. I think the pandemic really worsened it as well, actually, because lots of technology was introduced and just never went away. Lots and lots of things have just been introduced, and they all work fine, or are slightly better in some way, so they're just keeping them. But I think that all the online presence, again, is just encouraging us to a
disembodied way of relating to people. I think we've talked about this before, but you can get easy access to a sort of so-called community, and it's just completely not the same as actually spending time with actual people, is it?

But it brings about this idea that in some way we are these Cartesian disembodied things, that there is a rift between your mind, your identity, and your body. This is accelerated so much, because even this conversation, I mean, it's fine as a conversation, but I've noticed now, because I've done both, where I've talked to someone on Zoom and then met them in person, that there are all these unconscious cues about a person that have nothing to do with just what you see and what you hear: body position, muscle tension, there are probably even smells. There are all these things that are part of our human engagement that, because of these screens, we're able to alienate ourselves from, and that seems to be part of what's feeding into the massive alienation between our bodies and our identities and our minds.

Yeah, yeah. There's so much concern now about people, for example, wanting their identity to be validated, or being really upset if you don't completely agree with what their identity is, as if it's really, really fragile. But I think if you weren't online so much, if you were with somebody, you'd get validation and a sense of belonging simply from, you know, sitting and having a cup of tea with somebody, even without speaking. I'm absolutely certain we pick up signals from people; like, we have electrical fields that go a really long way from our bodies, and I'm sure that's one of the ways in which we are interconnected and have a sense of where we are. We're animals, we're human animals, we have bodies. Animals, you know, they smell each other, and it's as if we don't have any of that, as if we're just these eyes and this mind.

Yes, but that's it, that's definitely part of the whole issue of what's going on, and the fact that we're able to conceive of AI as being similar or even superior to us in terms of intelligence and in terms of agency means that we are deeply misunderstanding what it is to be a person.

Yeah, yes, precisely. So there's another aspect of work that I do, where we're looking really closely into this. Did I mention I do some work part-time with a group of sociologists? We look at the care of people with dementia, particularly in hospitals. One of the things we're really, really concerned about, and it's not to deny that dementia is a really difficult condition that can cause lots of cognitive issues, is to try to fight back against the idea that we're simply cognitive creatures. There's lots and lots of work indicating that people can, even after they've lost almost all language, retain a really strong sense of where they are in the community, of how to behave, and a sense of belonging and understanding with other people, often mediated through things like, well, a lot of the work my colleagues have done in hospitals looks at things like even the clothing that people wear. Especially if you have dementia, being dressed in your own familiar clothing helps to give you a sense of where you are and who you are, but people with dementia are much less likely to have their own clothing on; we found them much more likely to just be in a hospital gown, and for men not to be shaved, to lose their glasses, and so on. So it's that sense of embodied belonging. Or just being left in beds: a person could be transformed if they're taken out of bed to just sit and
have a cup of tea with somebody. And that's not just people with dementia; that happens for all of us.

Yeah. Well, the image of the old person sitting on their porch in their rocking chair, this is an image that we have here in Canada for sure: the idea that that person is there but is also kind of not totally there. But they're still there, they're still part of life, they still have their little routine and they do their things, and we kind of understand that maybe they're slipping away to some extent, but that doesn't mean, like you said...

Right, exactly.

So now let's put them in a hospital bed, in a hospital gown, with the antiseptic walls, and let's just make it all worse, because now they're not part of anything.

Yes, yeah. So, a little bit on that: an artist joined our team last year, and she's doing some work with painting workshops with people who are living with dementia, as a way of helping people to express ideas. I've been going along to some of the workshops with her. It's looking at the idea of the materiality of the paint as actually a form of thinking, expressed through what you can understand and communicate through the actual physical materials you're using. So that's a really embodied way of being a human and acting something out.

Yeah, and you can see it here in Canada especially; all of these things are definitely related, because the scandal around medical assistance in dying, MAiD, is now blowing up. I think it's just been a few years since it became legal, and I think 30,000 people have gone through MAiD. And you can see that it is this idea, this very idea that you're this thing, and then at some point you decide, all right, now's my time to die, because it's all cognitive, it's all mental, so you just decide, and then you die. It really is this reduction of the human person to this abstract conscious being.

Yeah, yeah. It's also related to the use of technology as well. That's one of the things that's happening: you think that every problem must have a technological solution, or if you have a bit of technology, you must use it, so that every desire must be fulfilled, so that every time you have this much pain, it has to be taken away.

Yeah, and you can imagine that AI is going to be perfect for this, because you can take away the guilt, or the ethical problem of the doctors having to make certain decisions; the AI is going to decide who gets to live and who gets to die, and it'll be an objective decision, you know, it's all good. It's just like the AI counselor: no prejudice, none of the human foibles, no guilt.

Yes, yes. That probably will happen, but that is also a way of outsourcing responsibility, because that would only happen within the context of a medical profession where, you know, people still, for some unknown reason, tend to trust doctors. There's still an aura around health care, around how it gets labeled as health care. If they did it in a butcher shop, you'd have a better idea of what was actually happening.

Yeah. Well, the doctors, at least here, are almost already reduced to machines. If you go to the doctor now, all they do is sit in front of a computer and fill out a report as they're talking to you, right? They're sitting at their computer filling out the report, so they're already just basically
data gathering. That's all they're doing, and their decision is more or less handed down by the system. So it's really just a question of time before this becomes more and more automated. Yeah, I'm definitely not a short-term optimist on any of these fronts. Well, yes, there are so many things going on there: the use of metrics, and also the idea that these IT systems are actually going to save time. We've been told that over and over and over again. Whereas I can remember, when I first started as a lecturer, you would just write your reading list out on a piece of paper and give it to somebody to type, and now we have so much more work to do. And one of the things the metrics do is capture everything, and then the information belongs to the institution, so that's a way of controlling things. I've noticed that people are explicitly told that management can read your emails, and so you notice that everyone's being really polite: "oh, this change at the university is going to be really good," because they know their emails can be read. Whereas when you wrote something on a piece of paper and gave it to someone, obviously nobody else could read it. So that's how it's controlled, but it's always sold as "this is going to save you time." They'll say they can introduce AI into medicine because then the doctors will have more time for communicating with the patients. Of course, obviously that's what's going to happen; we all know it's going to lead to doctors spending more time with patients. My goodness, it's not going to happen. There's no way. But everything is also just reduced to
metrics. That's one of the ways in which AI is also controlling us: if things are reduced to metrics, then what gets measured is what we look at, and if you reduce things to harms and benefits, it all depends on how you measure them. So what do you think about this: in the Moloch video, what I was trying to present, and it's not me who came up with this, obviously, it's in the Moloch problem itself, is that we're so focused on the AI's agency, on whether or not AI becomes a consciousness, that we stop noticing the way it's actually acting on us, which is sometimes outside the software and outside the hardware, in the very competition to implement it. We as agents are in a competition to implement AI, because we know that whoever implements it will have an advantage, so it's almost an evolutionary race towards the implementation of AI. That agency seems much stronger than the question of whether the software and the hardware become conscious. Yeah, well, yes, there is a kind of arms race; there are lots of different arms races going on. One of the problems ordinary people have is working out what they can do about it. On the one hand you could say there's not much we can do, but I think there's a danger in thinking this is going to be some big crisis, so let's just wait for the crisis and then something will happen. I think there are still ways in which people can be aware of what's going on and try to get a bit more control in their lives. Because one of the things AI is going to do, and this is something I'm noticing already, you know, we're doing this Snow White project,
and what we're going to see is a deliberate focus on human intentionality. It's not just that people think AI is going to take over art; I don't think so. I think there will be even more of a fetishization of the art object, because with AI the fact of someone actually making something will become very precious. It will have a kind of aura that, let's say, the drowning wave of Midjourney images and AI-generated images won't have, so people will coagulate, will rally around intention more. That's a hopeful aspect of what AI seems to be bringing. Yes, I think it will lead to people actually wanting more human connection. I'm sure that will happen. I mean, why do people actually go to galleries to try to see the Mona Lisa, when you can't get anywhere near it? Because people want to see the actual thing, don't they? The galleries are absolutely crowded. And so many people are really keen to own a piece of art that somebody painted, even if it's not terribly good; they just love it because somebody has actually made it. And I think there's going to be a move, there already is a move, towards people performing live music, because tinned music is just nothing like live performance. But also, even in terms of online censorship and people being permanently cancelled from things, people are meeting up. I know in London there are people who've been cancelled from so many different
things for their views on various subjects, and they go and stand at Speakers' Corner, where you've got the police protecting you and you can say whatever you want. Yeah, and when you talk about live music, I think that's a good example to help people understand the difference: when you go to an event where there are people present, there is a kind of electricity, I don't know what it is, a feeling of participating in a group, in a movement, in a wave that's going through the crowd, that you just cannot have when you're sitting at a screen. That will not go away. The goggles, if you get the VR goggles, are not going to provide that kind of electricity that you get from going to a sports game. They just can't. No. Although I think I've heard you express a bit of skepticism before about concerts. Well, for sure, I think concerts are downstream from real participative actions, whether it's folk dancing or religious participation, they're downstream from that, but they still have a level of reality that is undeniable. Yes, precisely. I mean, I've been to some fantastic operas recently, and you get back and you can't sleep for hours. And there are also levels of participation in those sorts of things, because the people who go will know a lot about the performers, they'll join some society where they can go backstage; there are all sorts of levels of participation. But yeah, I think that's the way to go: actually meeting up with people. Do you know, have I mentioned to you before, there's a really
interesting person who works in VR called, I hope I pronounce his name correctly, Jaron Lanier. He's from California, L-A-N-I-E-R. He's one of the people who actually developed VR, but he's got some really interesting things to say; he has some videos online and a number of books. One of the things he says about VR is that the best thing about it is taking the headset off, because it makes you realize how absolutely fantastic reality is. Yeah, the amount of color, the amount of depth. I was walking this morning, and when you look at the sun coming through the leaves in a wooded area, you can have as much resolution as you like on your 4K whatever, you just can't compare it. You also don't get the smell, you don't get the feeling of the heat coming from the sky; you don't have all these sensations that are so rich. Yeah, and something else I've noticed. In order to counteract so much awful stuff online, I joined lots of Facebook groups, like Welsh landscape photography, like the Corrugated Iron Appreciation Society, things where people just post photos of stuff, and then people started posting AI photos. And people can tell really quickly; you can tell right away. They keep telling us that at some point we won't be able to tell the difference between AI and real photos, but I'm not sure. Although, when they posted those pictures of the Pope, everybody got tricked by them. But usually you can just tell that it's AI; there's a subtle difference, something about it. Maybe at some point we won't be able to tell, but at least for now it's still quite easy. It's the same with
ChatGPT. A lot of people said, oh, this is really fantastic, these answers are really great, but if you just keep questioning it, you can tell that it's bullshit. Well, it's low-level; it can synthesize some ideas, but it's very superficial. Yes, and the other thing is that even if the end result is the same, or already close enough to be good enough for what you want, it hasn't got to the end result in anything like the same way. In constructing an image, doing a really good AI painting or something, it hasn't spent years in life drawing class. It doesn't look at the figure; if you go to a drawing class, there will be different techniques for thinking about what it is that you're seeing and how you might see it, and it doesn't do any of that at all. So in terms of saying it's intelligent: it can do things and achieve a result, but whether it's intelligence in the same way as ours is a really difficult question. ChatGPT just predicts what the next word is likely to be; it doesn't do any of that. No, but what's interesting about that is that the people who are programming AI, who are putting it together and amassing this insane amount of language data and pumping it into these algorithms that predict the next word, don't totally know what's encoded in human language. It is true that some of the very deep aspects of human consciousness will necessarily be encoded in the structures of languages, but the fact that that's true doesn't mean that those who are playing with this know
what they're playing with. They're playing with very dangerous tools, and some things are going to come out of AI that nobody will be able to predict, because they're not aware of the very strange and deep drives that exist within human consciousness and are ultimately hidden in human language. That's why, when I talk about the idea that AI is like the body of a fallen angel, I know some people struggle to understand what I'm saying; they think I'm just talking about mythological images. And of course the mythological image captures it very well, but in technical terms you can understand it this way: you don't know what's in human language, you don't know what all the motivators are, you don't know all the hidden structures that underlie meaning. We're just playing with this, tossing the dice, but there are agencies in there that are very dark: the type of agencies that drove humans to sacrifice other humans, the type of agencies that drove people to commit genocide. All these agencies are there in the very structure of human language, and people don't seem to totally understand that. The raw version of ChatGPT probably has it all, and now they're just trying to patch over it to stop that stuff from leaking out, but it's probably there in the raw version, all that stuff bubbling up all the time. Yeah, well, there are so many attempts to work out general ways of trying to control AI. I think I might have mentioned it to you before: Stuart Russell, for example, has written a book called Human Compatible, which is supposed to be a response to Nick Bostrom's book on superintelligence. Bostrom wrote that book a while ago now, actually, I
think 2015 or '16, about how you might try to control a superintelligence. He's the paperclip guy, yeah. Stuart Russell is a real AI expert; he knows what he's talking about, he co-wrote the major textbook on AI. His idea is that we should try to make certain that the AI we produce is aligned with what humans want. But then the problem is: what do humans want? He recognizes that problem. So here's his solution, which is absolutely terrible; honestly, he should stick with AI and stop doing this. He thinks we should train AI to observe human behavior and extrapolate from our behavior what it is that we really want. Honestly, he has a book on this, and he did a set of lectures on the BBC a year or two ago about it. So what's that going to be? You could probably interpret human behavior in loads and loads of different ways. What do you really want? Maybe our darkest desires are really our strongest ones, right? Yeah, exactly. Maybe the drive towards murder and war and rape, all these things, they're there. I don't know what to tell you; the idea that you would just extrapolate from human language and human behavior is just not going to fly. Yeah, and it's like, PS, please exclude all Hannibal Lecter types. But there's a narrative in human society, encapsulated in the murder of Socrates or the murder of Christ, which is that the human quality is found in exactly that: not in the quantity, but in these exceptional shining
bright lights that we look to and align ourselves on, while most of the rest is all this really chaotic dark material. So the idea that you would extrapolate mathematically or statistically from human behavior is insane. It's a crazy thing. Yeah, because a really strong motivator for human beings is envy, isn't it? If you want to be picked for something, there are two ways of doing it: one is to make yourself better than the others, and the other is just to knock the others down, because then you're the one who's picked. Envy is such an unbelievably strong motivator that if the AI picked up on that, where would we be? We'd just be knocked down all the time. Yeah, and what's interesting about the ChatGPTs and the AIs, the way they're being trained now, is that they're trained on attention-grabbing mechanisms. They're trained on the internet, on the social media platforms, and these are really the worst places, because they exist only to grab your immediate attention, and you tend to default to the most immediate pleasures you can find: envy, rage, anger, lust. All these passions are the ones driving these platforms. Yes. So one thing you could say about AI acting as a kind of super-agency is that you can turn it on its head and ask: what are the broad common characteristics of agents? And one common characteristic of human agents is to manipulate other people by dehumanizing them. Whereas ideally, as Immanuel Kant and others have said, we should always
strive to treat other people as an end in themselves and never merely as a means. But something like the Stuart Russell setup has the opposite built in: the AI is being asked to treat the human as a means, a means to finding out what humans want. It's looking at us from a third-person perspective, like a behaviorist would, studying our behavior from the outside. So that's one of the things happening with the agency: we see it as an agent partly because some people think it must be an independent agent in its own right, but also because we're allowing it to dehumanize us by focusing our attention on some things and away from others. Yeah, and there's also the wider fact of the metrics: measuring some things means there are other things it's not measuring, so those get left out, and we focus only on what's being measured. The thing that really upsets me, the thing I'm afraid I'll one day be arrested for shouting at somebody in the street about, is parents of small children who are just looking at their phones and not talking to their children. It's actually not funny, because human babies and little kids are designed to grab our attention. They need it. They need it to survive, because they don't know how to keep themselves safe, but they also need it for language development, for cognitive development. They need constant interaction, well, not constant, but you know. And you see so many parents just looking at their phones and not at their kids. Yeah, you see that in parks. I got a phone really late, and I remember before I had one I would go play with my kids in the park and watch parents sitting on the bench with their phones. Yeah, but another thing it does is it isolates us
from each other in ridiculous ways. Here's one really small example. If you look at these tiny little examples, you can think, oh yeah, that's happening in my life, and just not do it. I was in a pub with some friends last week, and when we went to pay, the bloke proudly announced that they don't even just take cards only: you pay by QR code. So for one thing, if you hadn't got a smartphone, you couldn't pay. And he was proudly saying, oh, it's really good for the environment, because we don't have a paper bill. But instead of a scrappy little bit of paper, they came out with a nice posh card with the pub's name printed on both sides in color. So he's basically lying to your face, like a conjuring trick, telling you they're saving paper while carrying out a piece of paper. Then the one person who was paying had to scan it onto his phone, so nobody else could see the bill, and he was having trouble doing it. So the other lie, "this is quicker," also failed: it took longer, because he had trouble doing it. And then after we paid, we remembered that one of the things we'd ordered hadn't been available, so we wanted to check: did we get charged for the cauliflower cheese we didn't have? We couldn't check, because the bill had vanished. And the waiter was just boasting and boasting about it. Do I say, listen, mate, read my book? Honestly. Yes, and that's when you wonder what on earth is driving this, because it's the same even with menus. You go into a restaurant, and instead of just giving you a menu they give you a QR code. You have to scan it, and then you're on your phone trying to figure out the menu. It's so ridiculous, so much more complicated. I don't understand. Well, I do
understand, but it's interesting to notice that it doesn't necessarily make things easier; it's just some fetishization of the process. Yeah, it must be progress, because you're using technology. It's also excluding people. I mean, this is my phone, proudly: I can't do any of that with it, I couldn't have paid. My friends always ridicule me for that phone, so people get picked on and ridiculed, but it's excluding in another way too: usually they bring menus for more than one person, and if we'd had a paper bill, we could all have looked at it and said, oh look, they've charged us for the cauliflower cheese. But we can't; it's just one person looking at it on his phone. It puts you into that little bubble. And all of this is sold to us as saving time. Now yes, sometimes saving a tiny bit of time is exactly what you want: a pit stop in a Grand Prix, or trying to save somebody's life in the ICU, where an incremental difference, accumulated gradually, can save more lives. But when did anybody ever go to a restaurant and think, the real problem here is that it takes thirty seconds to pay the bill, and I want to pay it in twenty-five? Yeah. So what do you think are some of the ways, because AI is taking over, there's no doubt about it, there's no way around it really, what are the ways we can engage with it, or not engage with it, to stay as human as possible in this context? Well, I'd probably say similar things to you, really. We need to do as much as possible in the real world, but also take the time and the chance to sort of
think really seriously about what it is to be a human being. Join in community, join in rituals, see people as much as possible, spend as much time as you can in nature, because wherever you are you can find a bit of nature. But also just really think. One of the things you can do is stop ribbing your friends for not joining in; that's one of the things that happens, people get pressured into joining in. You can also, for example, write to institutions and point things out. I've noticed the QR-code menace at the National Gallery in London. At most other places you can get a paper map, or you might have to pay a pound for it, and then with your friends you can say, which gallery do you want to go to, have a little look, and carry it around with you. At the National Gallery you now have to use a QR code on your smartphone. So maybe I should do that at the end of this: write to the National Gallery and say, look, you shouldn't do this. For one thing it has disability implications, because not everybody can read things on a smartphone; the text is too small. It's so impractical to look at a map on your phone. So I would just suggest: look at how this has become integrated into the infrastructure of everything you do, and ask yourself whether that's what you want. Take maps, for example: they're really undermining people's capacity to navigate. I was walking somewhere the other day in London, a slightly confusing area because the streets go off at different angles, and we had to get to St Pancras. As long as we kept heading north, it was fine. So how do you know you're heading north? Well, because there were shadows on the buildings, so west is
over there. It's that kind of thing. I just think it's important to look at what the agency is doing in really minute little details, because that's the only place we have to try to think about it, unless you happen to have some particular position. All right, let's take that as a lesson for today. So, Paula, where can people get your book if they're interested in your textbook? Is it possible to get it somewhere online, or to order it? Oh, you can get it from the Springer website, you can get it on Amazon, all the usual places. It's quite expensive because it's a textbook, or get your school or university library to buy it. It's called AI Ethics, and one of the things I try to do in it is explain how the standard ways of looking at ethics just don't work, how they break down when you look at AI, and how you need to look at notions of human nature. There's also material in there about religion, because I talk about how many of the visions of AI are actually fundamentally religious, and we need to understand the roots of that in order to understand what visions of the future are being presented to us, and what position they assign to humans. Some of the stuff bears an uncanny resemblance to lots of religious ideas, like the general idea that humans are really special because we're the ones in a position to create this fantastic being who's going to be the fulfillment of the universe. Well, one of the things it seems we could use right now is an understanding of
non-human agency, and of what those non-human agencies were understood to be like in the ancient world. This is going to sound silly to people, but fairies and all these little types of agencies: how did ancient people understand them? The demons, the angels, how did they see them acting on people, how did they see their agency manifesting itself? I think we've evacuated all these ideas of non-human agency, and now we're basically dealing with the problem of non-human agency without any of the tools to deal with it, because we don't understand it. Think, for example, of a fairy tale like The Shoemaker and the Elves, and the notion of action that is beyond the will of the person. These tropes in fairy tales, the pucks and all these household agencies, are important for understanding the dangers of non-human agency in our lives. Yeah, and that's one of the dangers of the way we've been encouraged to think about the mind in a purely Cartesian, mental way: we have the illusion that we are totally in control of our minds, when really we're undermined by the constant need for affirmation from other people, even while being isolated from them. Are you familiar with the Canadian philosopher, yes, his distinction between the buffered self and the porous self? Yeah. Those kinds of ideas are well worth thinking about: we've got this idea that we're completely buffered, when actually we're more porous than ever. Exactly. And as people join with their AI familiar, this little AI familiar that will be working for them, you should
start to think about these old stories again, because the buffered self is over. We are moving into a very ambiguous state of relationships between identities and agencies. Yeah. Are you going to do The Elves and the Shoemaker in one of your stories? We'll see; it's not part of the plan for now, but it could be an interesting idea. I'm trying to think about how, because obviously I'm just telling the story, it really is just a fairy tale, but I do want there to be a certain angle alluding to some of the things going on now. In Snow White there's a whole aspect which is the obsession the queen has with the mirror, the mirror revealing who is most beautiful, and so this whole idea of vanity, of the cell phone, of this affirming machine that's affirming or denying your value, is so relevant today that there's more emphasis on it in the way I tell the story. I think it would be the same with The Shoemaker and the Elves: I would have to find a way to present these kinds of agencies that act outside of and alongside our will, how they're there and how to deal with them, in a way that would be helpful for people. So yeah, it's a good idea; I'll think about it. Great, it's my favorite one, actually. Yeah, I love that story. Definitely. All right, everyone, so you can check out Paula's book, and thanks again, Paula, for your time and your thoughts about all these crazy times. I really appreciate it. Thanks, it was great to talk to you. All right.