https://youtubetranscript.com/?v=ssGKniyIUGk
And it’s really like a horror movie scenario, right? It’s like Lord of the Rings, really. Sauron is trying to build a body for himself. He’s not completely incarnate, but he’s using all these people’s desire for power in order to create a body for himself. That’s pretty much what it looks like is happening with this AI: we’re all running towards a cliff, and we all know we’re running towards a cliff.

This is Jonathan Pageau. Welcome to the Symbolic World. It’s interesting, because often I talk to you guys about certain subjects, but then when they come up in the news, it’s as if I don’t really feel like talking about them anymore. I don’t know why I’m like that, but that’s kind of how I am. Everybody’s telling me, you need to talk about AI, you need to talk about AI. And my reaction in some ways is that I’ve been talking to you for years about AI. I’ve been saying things like, if you want to understand what a fallen angel is, pay attention to AI, for years, literally. So now that the entire thing is exploding, I don’t know why I hesitate to talk about it, but I will. And I’ll talk about it through one interesting tack: someone called my attention to what they call the Moloch alignment problem. To be honest, I had never heard of this issue. It’s a kind of scenario in game theory. And it’s interesting, first of all, that it’s called Moloch; the name comes from a Ginsberg poem. Basically it’s the idea that technology has a will of its own, you could say, a kind of agency that no one can stop, that no one can act against. Now, the people behind this notion of agency, I don’t know if they see it as a full agency or just a kind of illusion of agency, a mock agency, but in the end it doesn’t really matter, does it?
Because it’s definitely experienced as a kind of agency, in the same way that you would individually experience something in your own behavior that you can’t totally control, a force or a process like an addiction. You can tell the person, why don’t you just stop? Why don’t you just stop smoking, or stop doing whatever it is you’re addicted to? But for some reason they get caught in this agency. And it’s interesting because of the way the Moloch problem lays itself out. The author of the article that talked about it, I forget his name, I’ll link to it in the description, uses the arms race as an example, which I’ve used too. In my video on the Book of Enoch, I talk about the idea that the arms race is a good example of noticing agency in the world, transpersonal agency that you can’t stop. It’s like saying, why don’t we all just stop this? Okay, why don’t you do that? It just doesn’t work that way. And this is of course what’s happening with AI. People are in a kind of arms race where they can’t stop the process: everybody knows that by making it happen, they’re making the situation worse for humanity, let’s say, but they also know that whoever doesn’t do it will be in the worst position of all. Now that AI is out of the box and unleashed, whoever isn’t on board with AI will fall to the end of the line, and whoever is on board, even though they’re pretty sure they’re making the world a worse place, a more chaotic place and also a place where there’ll be more control, those two things at the same time,
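The arms-race dynamic described above is a standard coordination failure in game theory. A minimal sketch, with all players and numbers invented purely for illustration: each actor gains privately by racing ahead but imposes a shared cost on everyone, so racing is individually dominant even though universal racing leaves everyone worse off than universal restraint.

```python
# Toy model of the "Moloch" race-to-the-bottom described above.
# N, GAIN, and HARM are illustrative assumptions, not from the source.

N = 5      # number of competing labs/countries (assumed)
GAIN = 3   # private benefit of racing ahead (assumed)
HARM = 1   # harm each racer imposes on every player (assumed)

def payoff(i_races: bool, total_racers: int) -> int:
    """Payoff for one player: private gain if racing, minus shared harm."""
    return (GAIN if i_races else 0) - HARM * total_racers

# If everyone abstains, everyone gets 0; if everyone races,
# everyone ends up worse off than under mutual restraint.
all_abstain = payoff(False, 0)   # 0
all_race = payoff(True, N)       # GAIN - HARM * N = -2

def racing_is_dominant() -> bool:
    """Given any number k of other racers, switching from abstaining
    to racing strictly raises your own payoff."""
    return all(payoff(True, k + 1) > payoff(False, k) for k in range(N))

print(all_abstain, all_race, racing_is_dominant())  # 0 -2 True
```

Each player's reasoning ("whoever doesn't do it will be in the worst position") is the dominance check; the collective outcome ("making the world a worse place") is `all_race < all_abstain`. Both hold at once, which is the whole trap.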
They also know that they have to do it, or else their competitors will, whether that’s another company or a country like China. They have no choice, because, do we want China to be in control of AI? No, we want to be in control of AI, because we don’t trust the Chinese. But then, do we trust Microsoft to be in control of AI? And so it’s been very fascinating to see the recent developments. Most of you will probably have seen the interview that Elon Musk did with Tucker Carlson, where he straight up said that the head of Google wanted to create a god, and that when Elon Musk objected too much, the head of Google called him a speciesist. It’s so fascinating to notice that; now we’re seeing almost clearly where the desire to grab power leads. If you try to grab power for yourself in a revolutionary manner, the Promethean idea that you just grab power for yourself, Adam and Eve taking the apple for themselves, it seems like what it ultimately leads to is a desire to be overturned, a desire to die, a desire to be overrun by something that will do the same to you. Now, in some of the myths it’s manifested as trying to stop it. Kronos overthrows his father, and then eats his own children to stop them from rising up against him. And in some ways Zeus does the same with Prometheus: Prometheus tries to take the fire, so Zeus ties him to a rock and has an eagle eat his liver every day. But it seems like there’s an even more perverse part, which is almost a desire to be upended, a desire to die. It’s like a death desire; I don’t know how else to describe it.
And so that’s very fascinating to watch with the question of AI. The agency that people are noticing is not just the agency of the AI itself; it’s the way AI is acting on us at a larger scale, the fact that it is in some ways working towards its own birth. And it’s really like a horror movie scenario, right? It’s like Lord of the Rings, really. Sauron is trying to build a body for himself. He’s not completely incarnate, but he’s using all these people’s desire for power in order to create a body for himself. That’s pretty much what it looks like is happening with this AI: we’re all running towards a cliff, and we all know we’re running towards a cliff. But in some ways, it’s almost as if we’re pushing others towards the cliff. We know that unless we push, someone else will push us over, so we figure we might as well push everybody else over so we’re the last ones to fall. That seems like what’s happening. And it’s interesting to notice that the image of a demon, the image of Moloch, to whom children were sacrificed, is used to image this. I think that’s not a coincidence, because ultimately what we’re seeing is agency. And so that’s important to understand: we try to limit the agency of AI to the AI itself and ask ourselves, does it have consciousness? Does it have agency? Et cetera. But there’s a way in which AI has agency over us that is outside the software, outside the program. It’s the very way in which we’re developing AI that shows it’s manifesting a principality, because its power is not limited by the software. It’s not limited by OpenAI or ChatGPT or any of the other competitors. It is literally acting on us from beyond that.
And the very way in which we’re building its body is part of its agency over us. So I think that at that point, it’s very difficult not to see different types of agency at work. And then of course, within the AI itself, this movement towards it becoming a type of agency over us also seems inevitable. But I really do believe there’s a way in which it’s not the AI itself that has the agency; it’s something beyond it, something that is manipulating it, and I think that’s the most important part. Most of the types of AI that are succeeding right now are not strict AIs. They’re something we could call hybrid AIs, where, as we talked about with Dr. Paula Boddington, what they’re actually doing is farming intelligence. The AI doesn’t have intelligence; the AI is power. That’s the best way to understand it: it’s power, and the agency is us. And the best image to understand that is the story of the genie’s lamp. Now, I’ve alluded to this story several times, but the degree to which it corresponds to what’s happening right now is very important to meditate upon. Take the elements of the genie’s lamp. The lamp is technology. That’s what it is. A lamp is a way to have light at night, a way for you to continue to have light when there shouldn’t be light. It’s an increase in your power to have light. Now, the vessel doesn’t have to be a lamp; sometimes it’s a bottle, sometimes a ring, sometimes all kinds of technical objects that increase your power. But the lamp is particular because it’s light, and so in some ways it is closer to what AI is doing. That is, it is an artificial way to perceive the world. It is an artificial light, a frame of vision, something which is projecting artificial light on surfaces.
And so we’re starting to see the world framed through technology, and that’s much closer to what AI is. Now, what the genie offers is simply power, simply an increase in power. And what it needs is a reason. It needs a purpose. It needs a wish. It needs a direction. We are the ones who provide the direction. The genie appears as this infinite power, couched in technology, just waiting for you to tell it where to point. And once you do, you get what you asked for, with all the power of the genie. So it’s not just a simple morality tale. It’s about what happens when all that power is given a direction, and the direction is not the right one, not God, let’s say. Imagine the genie says, tell me what you want, I can grant you three wishes. And imagine someone said something like what Solomon says: I want wisdom. I want to love others. I want true knowledge, not in the artificial sense, but knowledge of the good, let’s say. That would be very different. All of a sudden, there would be fewer ways the genie’s answer could go wrong. The problem is that we ask for secondary goods, and all those secondary goods have side effects. If we become hyper-efficient and increase the power to do anything by a million, then all the side effects of that will inevitably appear, because you’re putting all that power towards something which is secondary. And so it’s an image of sin itself, of pride itself, an image of all these things. And that is of course the difficulty with AI: it is now going to offer almost infinite power directed in any direction we wish.
And so what that will bring is of course a kind of chaos, which will lead to an authoritarian answer, right? That seems to be how this is going to end. And if you follow Mathieu on Twitter (if you don’t, you should), he’s basically going scorched earth on us. Be attentive, because he’s going to say things that are more and more mysterious but more and more important to understand. He retweeted a video of someone using a video game to make a point about AI: that what AI will do is lead to centralized control, and that this is almost inevitable because of the infinite power and chaos it’s unleashing. I started following Sam Altman, the head of OpenAI, on Twitter, and I noticed on his Twitter page that he is advocating for centralized digital ID. At first I thought, why is he pushing for digital ID? What does that have to do with OpenAI? And then I realized very quickly that it has very much to do with AI, because the situation we’re in is that within a few months, you’ll get a phone call and have no way to know if it’s the right person calling you. You’ll see a picture and won’t know if it’s real. You’ll have no way to discern what is truth and what is fiction. And so the only way to do that will be to give absolute power to a centralized authority to manage identity, so that we at least know what is real and what isn’t. All this push towards censorship, towards fact-checking, all this stuff that’s been buzzing around the internet, was just a prelude for what’s coming. Because what’s coming is exactly the problem of not even knowing whether the picture you’re seeing is something that happened. And that’s coming super fast. Now, what are we going to do about that? We’re going to plead for our governments to please come in and prevent this chaos.
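The mechanism gestured at above, a central authority vouching for who sent a message because the content itself can be forged, can be sketched as a toy attestation scheme. Everything here is invented for illustration: the names, the key, and the use of a shared-secret HMAC (real identity systems use public-key signatures, e.g. Ed25519, so that verifiers don't hold the signing secret).

```python
# Toy sketch of centralized identity attestation: a single authority
# binds an identity to a message; anyone trusting the authority can
# check the binding. Illustrative only, not a real protocol.
import hashlib
import hmac

AUTHORITY_SECRET = b"central-authority-key"  # held by the authority (assumed)

def issue_attestation(identity: str, message: str) -> str:
    """The central authority vouches that `identity` sent `message`."""
    payload = f"{identity}:{message}".encode()
    return hmac.new(AUTHORITY_SECRET, payload, hashlib.sha256).hexdigest()

def verify_attestation(identity: str, message: str, tag: str) -> bool:
    """Check the authority's attestation (here the verifier recomputes it,
    which in this toy version also requires the secret)."""
    expected = issue_attestation(identity, message)
    return hmac.compare_digest(expected, tag)

tag = issue_attestation("alice", "this call is really from me")
print(verify_attestation("alice", "this call is really from me", tag))    # True
print(verify_attestation("mallory", "this call is really from me", tag))  # False
```

The structural point is the one made in the passage: once content can't authenticate itself, every verification path runs through whoever holds the keys, which is exactly what makes the authority central.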
And in doing that, we will be inviting all the other censorship as well, because all of this is going in the same direction. And so there’ll be a necessary ideological aspect to the movement towards digital ID, because those who will be in charge of it are already like that. I just saw an ad today from the Quebec government, where there’s a police officer dressed in pink, which already tells you a lot about what’s going on. A guy comes up to her with something covering his face, a profile, and it’s a snake or something. And she says, oh, you’re dressing up as a snake to be intimidating, are you? You’re trying to intimidate everybody, dressed up as a snake. And the guy says, yeah, what’s that to you? So the pink police officer removes the profile to show his real identity and says, oh, you’re not so tough anymore, are you, now that we’ve revealed your real identity? And it’s like, yeah, that’s where we’re going. That’s inevitable. And I don’t see a way out of it, because people will take advantage of AI to create way more chaos. So the question is almost becoming, when do we run into the woods? When do we completely disconnect from this? I don’t think it’s yet, but there might come a time when there’ll be no choice, because all the alternatives will be moving towards a kind of madness of control, or a chaos. So, sorry, I don’t want to end on a very dire note, but I just want to say that if you watch my videos, the ones on the Book of Enoch, the ones on the Mark of Cain, all these videos about technology and about Cain, you’ll find clues in there to help you understand what is happening now and what’s coming in the future. So thanks everyone for your support. I hope you enjoy the new website.
I’ve been really excited to see it flourish and come together. I go on the forum, I read your posts, I even comment on some of them and like some of them. It’s really become the place where I try to engage with the symbolic world, we could say. So everybody, thanks for your support, thanks for everything. And yeah, talk to you very soon. Bye-bye.