https://youtubetranscript.com/?v=0xfSPexbUOQ
All right. Here we are, here with Manuel Post, to talk about the craziness around AI. So we’re exploring why crazy AI. And Manuel, you want to sort of give your summary of why this is our topic? Well, yeah, so the thing in this little corner that happened is Vervaeke had this talk about AI. At least from the skim that I did through the comments, it was really well received. I got some other people from this little corner that kind of got blown away. So I was intrigued by what happened. And I was like, okay, I’ll just watch it. And it’s not only Vervaeke; AI is making some sort of a comeback. We’ve had different phases of AI. At least the one that I landed on was like three, four years ago, whatever, when Sam Harris was really doomsaying about AI. And then there was the recent hype with all the successes, right? Like now we get the blessing side, like AI is going to come and save us. And there’s a whole element of, oh, now all the things that are wrong are going to be fixed. And Vervaeke kind of wanted to hold the middle ground a little bit, right? It’s like, let’s hold back. Like, we’re not there yet. But also maybe. And yeah, obviously I didn’t like what I heard. And I think what’s really interesting is, well, what’s actually happening? Like, why are people grasping for AI? What is the way that they relate to it? What are they trying to get out of it? So I kind of wanted to explore that. Yeah, have a discussion around what’s actually happening, because I feel that people just get dragged along in the hype, and I don’t think it’s justified. Yeah, that’s a good point. Yeah, so, you know, obviously I’ve had you on before, and we talked quite a bit. And AI does seem like a lot of hype. And look, I’ll just lay it on you straight. I’m doing AI projects still. I was doing AI and machine learning before deep learning became a big deal. And, you know, there are so many things wrong with the John Vervaeke video from a technical perspective that even an hour and a half, which is all we’re going to have for this stream, is never going to cover half of it. Like, he just made statements that are bonkers and just technically vastly incorrect. There’s a lot of stuff that quote AI is doing that is just machine learning, which is not AI. It’s just math. There’s a lot of stuff that the math does that you don’t need the AI for at all. You may have noticed spell check over the years has gotten worse, not better. The original spell check is just mathematical. And the mathematical spell check is technically better than the AI spell check. AI is not all that good, and it’s not all that good at a lot of things. There’s a lot of things AI is terrible at. And you might ask yourself, why did John Vervaeke sort of take the extreme sides of this? Like, either it’s going to destroy us or we’re going to get enlightened. It’s a very religious frame, which is kind of unusual from John, to be honest. And a lot of people are taking that attitude, like AI is going to destroy us or it’s going to send us to the promised land. Actually, he warned in the video about that attitude developing, right? Like the doomsaying. Yeah, he warned against a lot of things like infinite regress. And then in his own argument, he created an infinite regress, I think three times. That video is so riddled with errors, non-technical errors too, not just errors about AI where any AI technician would go, nah; there’s a bunch of things that he laid out just incorrectly.
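To make the “spell check is just math” point above concrete, here is a minimal sketch of the kind of purely mathematical spell checker being described: rank dictionary words by edit distance to what was typed and break ties by word frequency. The tiny word list and frequency counts are made-up illustrations, not any real spell checker’s data.

```python
# Minimal sketch of a purely mathematical spell checker: no AI, just
# Levenshtein distance plus word frequency. The word list and counts
# below are illustrative assumptions only.

def edit_distance(a: str, b: str) -> int:
    """Classic dynamic-programming Levenshtein distance between two words."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(
                prev[j] + 1,               # delete a character from `a`
                curr[j - 1] + 1,           # insert a character into `a`
                prev[j - 1] + (ca != cb),  # substitute (free if characters match)
            ))
        prev = curr
    return prev[-1]

def suggest(word: str, dictionary: dict, k: int = 3) -> list:
    """Return the k closest dictionary words, breaking ties by frequency."""
    return sorted(dictionary, key=lambda w: (edit_distance(word, w), -dictionary[w]))[:k]

if __name__ == "__main__":
    # word -> rough frequency count (made-up numbers)
    dictionary = {"their": 900, "there": 950, "the": 5000, "three": 400, "tier": 120}
    print(suggest("thier", dictionary))  # closest candidates by distance, then frequency
```

A dynamic-programming distance plus a frequency table already gives usable suggestions, with no learned model anywhere in the loop.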
And so I think it’s worth, to your point, Manuel, talking about what is all this hype about AI and what is this enchantment? Because a lot of people I talked to were like, oh, but John said... I didn’t understand most of what he said. Fair enough. He was talking pretty high level, and I didn’t understand some of the stuff he was at, so I only got it at a pretty high level. But then he said this gem. And it’s like, fair enough. That one line was a gem, but it’s not in a context. And so one sentence not in a context, how useful is that? You know, one sentence. And I mean, I have examples. So if I just say to you one of the sentences that he said, like, religion gives us a way to contextualize in a world where we are not at the top of a hierarchy. That sounds pretty brilliant, actually, right? Because it’s very simple and it’s very profound. Right. But when you put that in the context of AI, which is not religious, what are you doing? Why are you talking about religion with respect to technology? Maybe the religion informs the technology. The technology doesn’t inform the religion; that doesn’t happen. So even if you take that at face value and you say, oh, you know, religion gives us a way to contextualize it. Fair enough. Do you have one? Because what that means is that if you’re not religious, maybe you shouldn’t even touch AI at all. In any capacity. That’s what that means. That doesn’t sound hopeful and profound at that point, but that’s the implication. And you didn’t realize it until I said it, most likely. And it’s like, whoa, that’s enchantment. And that’s a dangerous form of enchantment. And look, I don’t want to accuse people of doing things. People do things without realizing it all the time. And John is no stranger to that. And a lot of people are doing that. A lot of people are making statements about AI that are fundamentally religious statements. And AI is not a religious thing. It’s not going to transcend religion. It’s not going to become spiritual, as was implied in that video. Like these are completely off the table things. You know, you can’t claim on the one hand that it’s fundamental and we don’t understand it or control it, and on the other hand that it’s not a miracle. If it’s not a miracle, then you probably do understand it and control it. In fact, I would argue that’s the definition of a miracle: anything we don’t understand or control. Right. Like this isn’t that hard. And John made that contradiction in that video, too. He’s not the only one. Again, this is just one fine example. It’s sort of like he rolled up all the bad examples of people talking about AI and put it in one video with the people who are talking about AI. And you can see, if you watch that video carefully and other videos, too, you can see the enchantment of the other two speakers. You can look at their eyes and listen to their voices, how they’re talking. They’re not talking to people anymore. They’re talking to something, you know, up and outside of themselves, something they feel part of, which, you know, is not something that’s going to be a part of the conversation, and that we build, which is even worse. Right. So now we’re making the thing that we made bigger than ourselves. Like that sounds really scary. And suspicious. So I want to go back to Sam Harris for a second.
Right. So Sam Harris, like, what did he do with AI? Well, he started doomsaying, right. Like, oh, these are the 15 ways that AI can destroy us. Right. And that’s what he does. Right. Like he has this take where he’s like, oh, Islam will destroy us. Whatever. Right. Like, he’s just pointing at these things and then he’s catastrophizing them. And then he’s warning people. I like that. That’s all probably genuine. Like, I think he truly believes all this stuff. But that’s the way that he does things. Right. And then what happens is he starts organizing the information in a framework, like, presents his narrative. So the consequence is that that is the way that AI is understood. Right. And then when we get to Vervaeke. Vervaeke had this awesome thing, because Vervaeke has a whole lot of different specializations that he’s combining, or that he’s trying to combine, at least. So he’s got awareness of a whole bunch of perspectives. Right. He’s got the psychological perspective, he’s got the philosophical perspective, he’s got some theological perspectives that are floating around. And what he’s doing is he’s using all of these perspectives to give you a more wholesome picture of AI. Right. Which is pretty great. Right. He’s allowing you to see more sides of the problem. And I think that’s the thing that drew a lot of people in. But what he’s also doing is he’s pointing it towards the thing that he wants. Right. And at the end, he gets pretty explicit. Right. Like he wants enlightenment from the AI. He wants AI to be the thing that brings us insight into consciousness or something, so that we can know ourselves, so that we don’t have to do the hard work ourselves, because we can just write the AI. That’s effectively what it comes down to. So he’s building the whole thing towards that. Right. So all the questions that he’s answering are in relation to, well, can it do that, in relation to his conclusion. Specifically in relation to a conclusion he’s already drawn. He doesn’t make the case for the conclusion he’s drawn. He’s stating axiomatically things like AI is spiritual, is going to be spiritual. It’s like, that’s a leap. That’s a big leap. And that’s part of the problem, is that when you’re presuming an outcome, you make all kinds of crazy predictions based on that presumed outcome. But you’re presuming an outcome, and that’s unscientific fundamentally. And it’s dangerous. And I don’t think it’s fair, because it leads to this sort of conspiratorial thinking. And a lot of this stuff, look, if what John is saying is correct, and others, because he’s not the only one doing it, then the answer is we need to stop all AI development immediately. Immediately we need to outlaw them because it’s too risky. If they can really transcend us, if they can really transcend us and become enlightened, why would they leave us on the planet? Like there’s no reason to. And you know, it’s unbelievable the extent to which he’s either ignoring or has not read or engaged with any science fiction. The issues he talks about in that video, and look, it’s not just the movies Her and Ex Machina, both of which are excellent movies; Ex Machina in particular is one of my favorite movies, it is fantastic, the cinematography is just great. These issues are addressed. They are understood. We don’t like the answers, right?
But again, if you don’t like the answers to all the probabilities that people have come up with in their imaginations about what could and should and will happen, then maybe you should not engage in the activity, because that is still an option. Everybody pretends like we have to do it, like there’s no other choice. That is not true. People have told me for years that I have to do this and that, and I’ve done none of it, and everything’s still fine. In fact, I would argue everything’s better, so don’t waste my time doing things that didn’t need to be done despite the urgency that other people applied to them. I don’t think that’s the case with AI. I just think this idea that AGI is coming anytime soon is absurd. And if you do the math or you know anything about AI, there are zero AI experts saying AGI is around the corner. There are a few people who make prognostications who aren’t actually doing work in AI. They may be like, I’m the lead evangelist of some AI company, but even that guy, I forget his name, who 10 years ago was saying we’re less than five years away from self-driving cars, has admitted, oh, the self-driving cars can’t even take left-hand turns after 10 years of work. So getting a car to take a right-hand turn takes, you know, like a few months, and getting it to take a left-hand turn takes at least 10 years. That’s weird. Actually, I know why, and it’s obvious, but we won’t get into that. So this is part of the problem: you’re not even engaging with the philosophical and some of the psychological aspects that have already been explored around AI, and just pretending like no one’s thought about it. No one’s thought about this or something. It’s like, no, this has been thought about and talked about and everything else. Like, it’s not that hard. You know, nobody likes the answers, but there’s nothing particularly difficult about it. And the ultimate problem, which is absolutely correct, right? The ultimate problem is they can be used for good or for ill. It depends on the people using them. Yeah, we are, as Manuel indicated, trying to get around the responsibility for the things we do. Right. We’re saying, well, if AI goes wrong, it’ll be the porn industry or the military’s fault. Like, what? You do understand the porn industry and the military are us, right? It’s still our fault. They’re still elements, components of society and culture, and they still involve people. So, you know, it’s weird to say the AIs are going to expand something like the meaning crisis, because we create the AIs. So if AIs are expanding the meaning crisis, that means we’re putting the meaning crisis into them. So maybe focus on fixing the meaning crisis and not the quote AI crisis. And it’s suspicious. I know Sally Jo has been talking about this quite a bit. We went from the fake news virus scam, this alleged pandemic that can’t possibly be a pandemic by any definition, into a new panic, a new like, oh, my goodness, we have to act immediately. And you can always argue, this has happened before and this is just like that. It’s not, actually. I’m not saying it’s unique in history, but it hasn’t happened in a couple of generations that people were panicked over something that we control, that is not external at all, to this degree, and that the next thing keeps us more panicked. And the cycle of panic and increasing panic is an addiction to anxiety, roughly speaking. It’s an oversimplification, but isn’t that what hype is, right?
It’s just like, oh, this is the thing I can participate in, and the panic is the justification, right? Because it could also be a good thing, but it’s really hard to find a good thing that we can get consensus around nowadays. Right. And I like this idea that John was giving a sermon. And also, at a certain point, I was just listening and he said, well, we need to develop all of these capacities of AI. And I’m like, we’re not going to develop capacities for AI. If we’re going to develop AI, we’re going to do it for a purpose. Right. Like we’re going to try and make AI do something, and then suddenly a capacity will arise. It’s not like we’re going to try and invent a capacity. And the whole purpose-driven aspect is totally gone in this framework. And it’s like, no, we’re only trying to develop the AI to capture this purity of cognition, because if we don’t do it perfectly, then all the intricate connective parts, they’re not going to relate to each other. And I’m like, well, if it’s going to work, it’s going to work by accident, probably, because we’re going to make it try to do stuff and then it will do something else. And we’re like, wow. Okay. Where did that come from? That’s the way that I see that happening. Well, and that’s part of, you know, why would he say that? Why would anybody ever say anything so crazy? Because it is crazy. And the answer is because they believe in objective material reality. And of course, I have a video that covers some of this anyway, this objective material reality worldview, which is garbage. And it’s like, we’re creating capacities for the thing, pure from nature. Doesn’t that sound like Rousseau? Because I’m just ripping off Rousseau at this point. The thing, pure, just pure from nature, and we’re just giving it the capacity to grow. And because it emerges within the container, right, this capacity is a container, it will be good, because emergence is good. This is the emergence-is-good attitude that all these AI people have. And they talk about it like, well, we need to treat this like children. It’s like, why, instead of treating something that definitely is not a child like a child, why don’t we just have children? Doesn’t that seem easier? Like we sort of had some experience with this. We kind of know what to do. Everyone’s terrible at it and everyone does a terrible job. But if we do a terrible job with the real thing, then the digital thing that you can’t put your hands on or really do anything about in any material way, we should have more problems with, not fewer. And I understand the confusion around it. Look, everything in my head, in my imagination, goes the way I want and it’s all perfect. Yeah, of course. Thanks for the update, Captain Obvious. Really appreciate that one. But also, you can’t take that outside of your head and do what was done in your head with it. You might be able to take it outside of your head and do something with it, for sure. Absolutely. But it’s not going to go the way it goes in your head. That’s the difference between implementation and the ideal. And your head creates ideals. That’s what it does. Your thoughts are in the realm of platonic forms to some extent. It doesn’t only create ideals, right? Like it also creates this idea that, OK, if I want to do A and B, then I need to get an answer. It also creates this idea that there is an answer if you add A and B. It might not even be possible to do that. And then you can give it a name.
And then you can just shout the name a whole bunch of times, and then everybody’s like, wow, yeah, that is the solution to that problem. And nobody even defines what the answer is. And everybody’s presuming that’s the answer. And then they’re going to build upon the answer, right? And now we’re all gathering around it. And I use the word we, right? Because that’s the thing that’s also happening, right? Like Sam Harris is guilty of this. All these people are guilty of this. We need to do this. Like, who’s the we? And who’s going to direct the we? Who’s the leadership of the we? You mean the politics is going to do something? Even if they agree, how are they going to enforce it? Do they know what people are doing? How do they know? If someone comes into my company, and I’ve spent like five years working on AI with a hundred people, how are they going to understand what I’m doing? They don’t. Like Vervaeke said it, it’s a black box. So you feed things into a black box; how are you going to ever understand what happened in that company? Right. Yeah. And that was another one of the contradictions. He said it’s a black box, and then he said, this is not a miracle. And I’m like, a black box is actually definitely a miracle. But what do I know, using language correctly and everything, you know, and actually being an AI engineer who’s written AIs from scratch. You know, I don’t know. I’m pretty well read on it too. I could grab the books, because I have a bunch, but there’s a bunch more in the shed. You know, but yeah, just completely wrong. And I think, you know, I want to address this. Mills, you hinted that Vervaeke frequently does things without realizing it. Well, we all do. Right. Similarly, you said I was invoking Hegel without realizing it. Yeah. Well, I will expand on that at some point. Look, I think again, when you say AI is a black box, and then near the end of your talk you start saying it’s not a miracle, you’ve created a contradiction in your talk. And there’s a ton of such things. And I think that’s really the issue. You know, to the Rock’s point here, is it really possible to experience anything objectively? No, we don’t live in an objective world. There are objects in our world; that doesn’t make it objective, because the way they’re using objective is some neutral space outside. Maybe it’s the natural space in their head that Uso was talking about. I don’t know what’s going on in their heads, you know, where they can judge from. And that’s part of it. And yeah, Phlebas, to get back to the emergence: I don’t think your AI can emerge. It doesn’t. So I was having this conversation — hold on. Like, I think AI can emerge, because that’s what they’re going to solve with the embodiment aspect. Right. If you have an embodiment aspect, it’s in relationship to the environment. I do think there can be finding of new things. Well, finding of new things, yes. But is it really emergent? And this is the problem. Emergence isn’t good in itself. Things emerge all the time, actually, literally. And by all the time, I mean every second of every day of your life stuff is emerging, whether or not you’re involved in it. Now, you can help things emerge. For example, you could plant a garden. Actually, I don’t even need to plant a garden. You know what I can do, because I live in South Carolina? I can just put dirt anywhere in a box and wait. Things will emerge. I don’t need to do any more work.
Now, the thing that you do with AI, and this goes to what I was about to say: I was talking to a couple of people on Discord about this a few nights ago, the other night. And what happened was they were saying, oh, ChatGPT is unsupervised. And I’m like, no, it’s not. ChatGPT uses some unsupervised learning algorithms, for sure. But even if you look at Wikipedia, it’s not unsupervised learning at all. They’ve invented a new term for “we train this using humans.” It’s called like human interface learning or some nonsense. It’s nonsense. Look, it’s really simple. Either you give the AI a video game or something to play with until it wins, like AlphaGo, which was one of John’s examples, or you give it a goal. And if you give it a goal, you’re training it. It’s supervised learning at that point. Now, the fact that you don’t have to sit there and supervise it — this is just a misunderstanding of what the tech is. Even if it’s rewards based, and you need rewards because you need some measure, how you write the rewards algorithm determines everything about the AI, by the way. Big hint, right. Goal oriented is different. You still need rewards, right? You can collapse rewards into the goal in training, that can be done, but both still exist. They’re just one number at that point. That’s literally what’s happening. So when you say unsupervised learning and supervised learning — technically, supervised or unsupervised does not rule out humans helping in AI training. It just doesn’t. I’m sorry. That’s not how the industry works. It’s not how computers work. It’s just a misunderstanding of the language. So even the Wikipedia page admits that GPT is being trained by humans left, right and center. This prompt engineering is further refinement to get around the silo problem, which John said it doesn’t seem to exhibit. Absolutely it’s a silo problem. He said it doesn’t seem to have a silo problem, and then he gave examples of the silo problem. Everyone else is doing this, too. I don’t want to pick on John too much. But that video was a train wreck of a train wreck of a train wreck that fell off a mountain while flying on a jet. I mean, almost every level of that was a disaster. So the problem is, when you think and talk about AI, there’s all this technical language like supervised and unsupervised. It means something very specific to an AI engineer that nobody understands outside of AI engineering. And I get the confusion, but it’s a confusion, and people should know better. And I understand why people want to tie everything into this and make it about something religious, because there’s a gap in our society around the highest. And when you try to make the highest the truth or knowledge or information or whatever, everything collapses into chaos, because you’re trying to build a tower of Babel. That’s roughly what’s going on here. Yeah. I want to go back to the Mills comment, right? Like the doing things without realizing it. This is what the stream is kind of about. Right. Like, OK, why are we talking about AI? Well, we’re being captured by something. And we pointed out the hype around it. Right. And now there’s something new that happened. Right. And now people started thinking about the new thing. They start trying to integrate all of this new information.
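An aside to make the reward point above concrete: a minimal sketch of how the hand-written reward function, not the learning loop, ends up determining what the system does. The same trivial trial-and-error learner, given two different reward functions, settles on two different behaviours. The actions, numbers, and names here are illustrative assumptions, not how any real chatbot is actually trained.

```python
# Minimal sketch: the reward function, not the learning loop, determines
# what the system ends up doing. Same trivial learner, two hand-written
# reward functions, two different learned behaviours. All values here are
# illustrative assumptions.

import random

ACTIONS = ["answer briefly", "answer at length", "refuse"]

def reward_terse(action: str) -> float:
    return {"answer briefly": 1.0, "answer at length": 0.2, "refuse": 0.0}[action]

def reward_verbose(action: str) -> float:
    return {"answer briefly": 0.2, "answer at length": 1.0, "refuse": 0.0}[action]

def train(reward_fn, steps: int = 2000, epsilon: float = 0.1) -> str:
    """Epsilon-greedy bandit: estimate the average reward of each action."""
    value = {a: 0.0 for a in ACTIONS}
    count = {a: 0 for a in ACTIONS}
    for _ in range(steps):
        if random.random() < epsilon:
            a = random.choice(ACTIONS)            # explore
        else:
            a = max(ACTIONS, key=value.get)       # exploit current estimate
        r = reward_fn(a)                          # the "one number" that steers everything
        count[a] += 1
        value[a] += (r - value[a]) / count[a]     # incremental mean
    return max(ACTIONS, key=value.get)

if __name__ == "__main__":
    print("terse reward   ->", train(reward_terse))    # learns "answer briefly"
    print("verbose reward ->", train(reward_verbose))  # learns "answer at length"
```

Note that the reward really is just one number per step, which is the sense in which a goal and a reward collapse into the same thing during training.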
And then, right, they come out with a video, for example, and they have a goal in mind. Right. So just like the AI, they’re training themselves in relation to the goal. And what’s going to happen? Well, you’re going to be caught in a spiral. You’re going to be caught in a way of thinking that will make you say things in a certain way to allow the goal to exist. Because if you’re going to say things that are contradicting your goal, then you’re doing something futile. Right. Like you’re being ridiculous, literally, because you’re just saying that the thing that you’re trying to achieve can’t happen. And why are you making the video? So if you want to make a video, you’re going to have to say certain things in order to be in some sense consistent with yourself. And you’re not consistent with reality. You’re consistent with yourself. And that’s the way that we do things without realizing it, because we’re trying to follow a consistency that is not in accordance with reality. And then we end up doing a bunch of things. And it can tell you a lot of things. Right. So Vervaeke obviously has a specific interpretation, and an angle within that interpretation. Right. So he says, like, AGI, right — well, AGI has to have consciousness. Like, I’m not sure if it has to. I don’t think you can even prove that that is necessary. Right. But to him, it’s important for it to have consciousness. Right. So now he’s going to have to find a way to sneak all the consciousness requirements into the AGI. Right. And when I talk about AGI or whatever, like, just AI, I don’t care. I just want it to do the job that it’s made for. Right. And I want the job that it’s made for to be good. And I want it not to do something else. That’s what I want. Right. All the other people, they’re like, no, no, I want the thing to solve my problem. Right. Like, oh, our political issues, our legal issues, all of them are a mess, and we need the solution. Well, what if the problem was the humans that are muppets and they’re trying to execute a thing that they can’t execute? Let’s just give it over to the thing that can do it. But then we get all this: oh, the AI has 15 biases. It’s racist. We have racist AI. So now the AI is — but Manuel, let me slow you down. So, yeah, a couple of things. First, I want to address Phlebas. If you want to understand how you end up sort of doing things and thinking things without realizing it, Phlebas’ comment is perfect. They literally release versions of ChatGPT because it doesn’t train itself. That is correct. It does some limited refinement of its training after it’s online, but it’s not what would fully be called an online algorithm. It’s not a fully online, all-the-time algorithm that just keeps getting better and better. And it’s weird, because John Vervaeke in particular has pointed out that if you point two chat AI bots at each other, whether they’re the same or different, they spin out in 10 minutes. That’s always been true. It’s still true. You can have ChatGPT talk to ChatGPT and they will spin out and say horrible things in like 10 minutes. That problem hasn’t gone away. It’s a good hint that something’s wrong. But also, let’s just go into fantasy land. OK, I’m being a little sarcastic here. Let’s suppose that the political situation that we’re in is our fault.
Let’s just assume somehow that that might possibly be true. And then let’s suppose we create an emergent AI with consciousness, even though we can’t define consciousness. Let’s just assume all that magic. That’s all magic. It’s all a miracle. All of it. Everything I just talked about. OK, now what is the correct solution to the problem? To kill humans. That is the correct solution. I’m sorry. You didn’t realize that, though, did you? Now you know why people are acting out as though the robots are going to kill us. Because if they implement the plan that they want, unconsciously or subconsciously — we know we built the world we’re complaining about. We know this unconsciously. And yet we keep going, what we really need is something outside of ourselves to solve this problem for us. It’s going to kill us. That would be the correct solution. Now, I’m not a fan of this solution. I think it’s dumb. Just so we’re clear, in case we were fuzzy on this, I think instead we should do the hard work to fix the problem that we made ourselves, instead of relying on something that we make that’s going to be bigger and better than us to fix the problem that we made ourselves. Because actually, if you make one set of problems and you don’t address them, and then you say, you know what, I’m going to make something else to address that set of problems, the likelihood that you’re going to make a new set of problems and now have two sets of problems to deal with is 100 percent. OK, so now you have two problems. Right. So, no, I’m not a fan of any of this. This is insanity. And that’s how we get stuck in these things: there’s no way around the fact that we unconsciously realize that our political system is a dumpster fire because we’re a dumpster fire. This is meaning crisis stuff as far as I’m concerned. Yeah, we’re in a meaning crisis. That’s why politics is garbage. That’s why we’re having all the problems we’re having: because of the meaning crisis. It’s causing all this. Now, I would say the cause of the meaning crisis is the intimacy crisis. Different videos. See my video with Catherine on Navigating Patterns. Wonderful video. Catherine’s awesome. Maybe I’ll do more collaborations with her in the future if she’ll grant me the grace and I can get off my tail and get that rolling. But that is the issue. Like the whole thing is absurd. I think we need to have forgiveness. We need to get people to recognize the place that they’re at and allow them to reengage in the right way. And that requires, in some sense, a reach out. But it also requires us to have the answer. And I was really intrigued today. I got this idea that I was speaking with authority yesterday in the stream. Like in order to present the answer, you have to speak with authority. And like, what is the problem with the meaning crisis? Well, the meaning crisis is where the problem is. And I think that’s the problem.
Why can I trust that AI more than a human? Because it’s incapable of doing the things that humans do when they train it? Like even if it has more capacity, why would it be any better? And like John Vervaeke goes into this whole rationality thing. So he says the highest virtue, effectively, of a sentient being is rationality. And this is interesting because he connected it to truth. And truth, with his definition — I actually like the definition that he gave of truth — is connectedness to reality, right? It is a quality of how you’re connected to reality. And then rationality is the thing that does something with truth. But that’s not how the world works. Rationality doesn’t work on truth. It works on what you think is true, which is something completely different than truth. And it has no capacity to discern truth from not-truth, because we still have the truth problem. And so, if you have rationality as your highest value, and then it’s like, well, yeah, rationality can crystallize or something, like he said, it can look at itself, and then that’s wisdom or something — well, if rationality can look at itself, but we’re still with this fuzzy layer, right, whether we’re actually talking about reality or not, what use is it? Are you actually getting something that is qualitatively worth anything? Like, are we sure we should make rationality the highest value? Because it sounds to me that you need to make truth the highest value. And then if truth is the highest value, then your rationality is just like an add-on to, like, sometimes steer. Right. Yeah, it’s a big problem. And one of the sneaky things we do to ourselves all the time is we have a starting point, and I talked about this on the livestream with Pastor Paul yesterday, right? We have a starting point and we’re not acknowledging what that starting point is. And then we’re doing this middle-out thinking. We’re in the middle of the story and we’re starting at this point. And then we’re just making a bunch of assumptions or axiomatic statements or whatever, and proceeding from there logically, rationally and reasonably. The problem is, and I said this — it was over two years ago, I did my first video with Pastor Paul, I said this before — you can use logic, reason and rationality to prove anything, actually anything. And, you know, “you can’t do that about the Holocaust.” Yes, you can. Hitler did. And so did other people. Like, it is easy to do that. People justify terrible things using logic, reason and rationality all the time. And the way that we know they’re wrong is not by out-logicking them with reasoning and rationality. The way we know that they’re wrong is with morality. And I will tell you right now, and there is always a possibility I could be wrong, but I’ve been saying this my whole life and ain’t no one come back yet: you can’t use that on morality. There’s nothing logical, reasonable or rational about the Holocaust. There’s nothing logical, reasonable or rational about morality. Right. And so you’re not going to justify not sacrificing children. Right. You’re not going to be able to justify not cutting the arms off of thieves. Right.
These are moral issues. And I don’t have a scientific argument against mixing Ebola and anthrax, like Peterson talks about. I don’t have a logical argument why that shouldn’t be done scientifically. But I do have a moral and ethical argument against it. Right. And the moral and ethical argument is, roughly speaking, and I use this shortcut all the time: no good can come from it. Right. There’s no scenario where goodness can emerge from mixing Ebola and anthrax. It’s not going to. In the same way, no goodness can come from other terrible acts, from slaughtering lots of people. Why? Because being is good. Right. And so anything you do that interrupts being is not good. It’s not that difficult. And, you know, you can interrupt being in lots of ways. You can destroy a being’s ability to procreate. You can destroy a being’s ability to thrive and flourish by starving a being. Right. You can destroy a being by keeping it too confined. You can also destroy a being by dumping it into too open a world for it to operate in. That’s also a way you can destroy a being; constraint is necessary. And so when we don’t understand that these are the moral issues of, we’ll say, continuance of species — because being is good, and even evolution requires that — then we get lost. And one of the ways I think that a lot of people get lost is in flow. And flow is held up as this high-value, holy thing or whatever in the atheist church of crazy people. And the problem is that the flow state is easy to achieve. And it’s a very fun way to do things. And it’s not necessarily good. Lots of people are in the flow state climbing a mountain and fall off the mountain and die. I would call that not good. Maybe your definition of good is different. And that’s fine. Then we’re having a moral argument. But I would say that the flow state can get you into a lot of trouble that you can’t get yourself out of. And I’m not saying get rid of the flow state because it can get you in a lot of trouble. I’m not saying get rid of the flow state because it can do bad things. Like, that’s not what I’m saying at all. But also be aware that flow is not some unmitigated, you know, pure good in the world. It’s not. You know, it is very emergent, but emergence is not goodness. Right. Emergence might be required for goodness, but that doesn’t mean that all emergence is good. And this is a deep logical fallacy that people are stuck in, especially when they get stuck in their heads, especially when talking about something like AI that does something that, from their perspective, is a miracle. Like, we were told that computers would never be able to do certain things, and now they can do those things. But look, I can tell you, I worked for a company years ago, years ago. And basically what they did was they had software that could do basic and advanced computer technician tasks on your machine. The user would have a problem, they’d start up the software, they’d click where they thought the problem was on an image of the computer. It’s actually quite an excellent piece of software — I don’t know if it survived. And from there it would ask the user, hopefully, no more than four questions. And then it would ask the computer a bunch of questions, you know, and check things internally, basically, and diagnose and possibly fix the problem. This software was excellent. Not least because I was on the project and fixed a bunch of things, though indirectly, by saying, oh, we need this feature and that feature, and you need to do this this way. Right.
And the guy I was working under was an AI engineer out of MIT, because I used to live in Boston. And I asked him at one point, oh, what’s the AI? And he said, actually, we’re doing no AI at all. And I was like, what? What do you mean you’re doing no AI at all? It’s all math. The whole thing’s math. And I was like, oh, that’s interesting. So you put the questions in a bucket and you weight them. And that’s it. It creates a tree from the weights, goes through the weight tree and just asks the questions in order. And that order changes based on your interaction with it, because the tree changes based on your answers, or the answers that it’s getting to the questions. It might not be answers from the user; it might be answers from the computer. That’s it. It’s that easy. So you can basically use a basic tree system, a binary tree — there are various types of binary trees, I don’t want to get technical — and as you answer the questions, you can change the tree in real time and get to an answer. And that’s all it did. And then I changed that and made it a billion times better, because I put in dependency ordering. You need dependency chains to solve certain problems, like networks. It turns out that if you want to solve a network problem, for example, on your computer, you need to ask questions in a certain order, because upstream and downstream matter and sequence matters. So once we fixed that, this thing was amazing. It could fix all kinds of problems that you’d call in to tech support for even today, and they couldn’t help you as quickly as this thing could, or as thoroughly. I’m well known for people being able to call me up and say, I’ve got this problem, and I go, check this, check this, check this, in a certain order, and bang, I find the problem in five minutes, and they’ve been working on it with seven technicians over the past week. And I’m like, whatever. Sequence matters. It doesn’t take AI to do a lot of magical-looking things. It really doesn’t. And most people just don’t know that, because they haven’t been working with Bayesian algorithms, with genetic programming, which I did for a while, or with support vector machines, which can do a lot of cool things that AI can’t do, for example, things that look pretty neat, or with just simple binary trees. AlphaGo uses some of that stuff. They use some Monte Carlo methods and some other really basic machine learning stuff that’s not AI at all to get the AI the information it needs to make a better unsupervised learning system. A lot of this stuff is just — you haven’t seen it and it’s new to you, and it gives you the idea of emergence. But actually, the magic’s not even in the AI. Yeah. So I think we want to take a step over to the meaning crisis, because I think that’s the most important part. Well, OK, there’s all of this new stuff. How is this going to affect us? Right. And the first thing I want to say is, when we use a tool, we should learn how to use the tool. And it’s silly for us to use the tool without knowing how to use the tool. Like, this should be obvious, right? But for some reason, we’ve in our society said, like, you go to school and then you learn a bunch of stuff and then you’re an adult and you can go into the world. Things don’t work that way. Right.
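An aside to make the “no AI, just weighted questions and dependency order” description above concrete: a minimal sketch of checks that are re-selected after every answer, highest weight first, but only once their upstream dependencies have passed. The specific checks, weights, and dependencies are made-up illustrations, not the actual product’s rule base.

```python
# Minimal sketch of a "just math" diagnostic flow: weighted checks,
# re-selected after every answer, with dependency ordering so upstream
# checks run before downstream ones. Checks, weights, and dependencies
# below are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class Check:
    name: str
    question: str
    weight: float                          # prior likelihood this is the culprit
    depends_on: list = field(default_factory=list)

CHECKS = [
    Check("cable", "Is the network cable plugged in?",         0.5),
    Check("link",  "Does the NIC show a link light?",          0.4, ["cable"]),
    Check("ip",    "Did the machine get an IP address?",       0.3, ["link"]),
    Check("dns",   "Can you resolve a hostname?",              0.2, ["ip"]),
    Check("proxy", "Is a proxy configured that shouldn't be?", 0.1, ["ip"]),
]

def next_check(checks, passed, answered):
    """Pick the highest-weight unanswered check whose dependencies all passed."""
    ready = [c for c in checks
             if c.name not in answered and all(d in passed for d in c.depends_on)]
    return max(ready, key=lambda c: c.weight, default=None)

def diagnose(answer_fn):
    """Walk the checks in dependency-then-weight order until one fails."""
    passed, answered = set(), set()
    while (check := next_check(CHECKS, passed, answered)) is not None:
        answered.add(check.name)
        if answer_fn(check.question):      # True means this check is OK
            passed.add(check.name)
        else:
            return f"Problem isolated at: {check.name}"
    return "All checks passed; escalate."

if __name__ == "__main__":
    # Simulated user/computer answers: everything OK except DNS.
    answers = {"Is the network cable plugged in?": True,
               "Does the NIC show a link light?": True,
               "Did the machine get an IP address?": True,
               "Can you resolve a hostname?": False,
               "Is a proxy configured that shouldn't be?": True}
    print(diagnose(lambda q: answers[q]))  # -> Problem isolated at: dns
```

The point is the sequencing: because “dns” depends on “ip,” which depends on “link,” which depends on “cable,” the questions come out in a sensible upstream-to-downstream order with no learned model involved.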
Like, you need to have an intuitive relationship with the tool that you’re using, because if you don’t, you’re going to get the problems that everybody’s talking about. Right. But people are talking about it like — let’s take Twitter, to keep it easy. Right. Twitter is doing this to you. Well, Twitter is doing this to you because you let it. Now, it can be because nobody ever told you how to not let it do it to you. But that’s your responsibility. Right. You’re not going to get into a car without taking driver’s lessons. And you need to get a driver’s license. Why would it be different for anything else in the world? Right. Just because it’s easy to get, by pushing a button, or by going into a store and buying a device with buttons on it, that doesn’t make it less dangerous. Right. If it were a gun, for example, you could press the button and you could kill yourself. Right. So all of these things, they come with a certain responsibility that’s with the user. We can say, well, we need to legislate around that, or we need to legislate that there’s at least information so that people can self-educate, and we need to find ways to present that information to people. Fair enough. Right. We can improve plenty of things in that area. But on your personal side, you need to realize these things. And whenever I hear a conversation go on about, well, Twitter is making us do things like this, right, and whenever you take your phone out of your pocket and your attention gets hijacked, someone is selling you something — well, not me. I’m not buying, so I’m not being sold. Like, that’s not happening for me. It might be happening for other people, but they’re willingly participating in what they’re doing. At a certain point, it’s like, well, OK, that’s the way that you pay for the thing that you want. Like, now you sell your soul to the devil. But that’s a deal that you make. And well, we can make another argument, right: children. OK, children shouldn’t be exposed to certain things in certain ways. And these things are actually slowly happening, at least in the EU. There are a lot of barriers now, like businesses can’t provide certain things to children anymore because they’re children. So that stuff is slowly happening. It’s lagging. I don’t know why it’s lagging, because it should be obvious in some sense. And there should be laws. Like, this is the other thing. It’s the same principle, right? When there’s something in the streets that is dragging children in and making them come home without money, like gambling dens, maybe they shouldn’t allow children in. These rules exist and they should just extend to every platform that we create. And I don’t see why this stuff is so hard. Right. Well, and yeah, to get it back to the meaning crisis, you said a few excellent, excellent things there to dig into. One of them is, look, education, you know, formal education outside the home, is not supposed to give you all the knowledge of the world. And we’ve been treating it as though it is the place where you can get all the knowledge of the world. We’ve been treating it as though, yeah, I’m a parent, I don’t worry about that, the school is going to teach them. No, they’re not. School does not teach you how to balance a checkbook. It should. And some schools do. But most schools do not. Right. Most schools don’t teach you how to deal with money. I remember reading the book Rich Dad Poor Dad by Robert Kiyosaki.
Excellent book, by the way. Everybody should read it. It’s short. It’s easy. I’m not recommending any of his other books necessarily, because I haven’t read them all. But that one is excellent. And he talks about the difference between having a financial education and not. And the difference is understanding the difference between wealth and income. And if you don’t understand the difference between wealth and income, you’re going to be at whatever income level you’re at now forever, or go down. But you’re almost certainly not going to go up unless you win the lottery or get lucky some other way. You could get lucky some other way. And that’s the problem. And when we talk about the meaning crisis, we’re already in domination, we’re already in domicide. We already don’t know where we belong or how we fit. And what people are describing with AI is this thing to show us the way to enlightenment, to the solution to our political woes, to how to all get along together in the world in which we live. Right. They are looking at this as a leader. Why are we in a meaning crisis and looking for leaders? That means you don’t feel like you have an adequate leader now. It doesn’t mean anything else. It’s not some ridiculously complicated, well, you know, it’s the Freudian id versus the Freudian ego combined with the Jungian unconscious self. No, the reason why you’re looking for leadership in an AI or whatever, or looking for a solution from these things, is because you’re not seeing it in the people around you. And that’s a failure of leadership. Also, you can’t find it in yourself. Right. So a lot of things — like, for example, Sam Harris is going into all of these doomsday things, right, because he can’t find the answer to all of these traumatic things in himself. Right. And so when you can’t find the thing in yourself, you’re going to blow it up, because now it’s an existential issue. So when we’re talking about domicide, you’re going to make that a big thing for yourself because you don’t have it in you. And you get this archetype of the wounded healer. Right. So there’s this person who’s been looking for this all their life. And they’re like, oh, people, you know, I’ve been looking for this thing all my life and look at what I found. And you better hope that the person actually found a solution, instead of them trying to find a solution in providing you the solution, because that’s where the toxic stuff really, really happens. Well, yeah. Well, look, I mean, when you’re trying to find a solution and you’re just in pure problem-solving mode, you go into flow state. People see you go into flow state. They think flow state is wonderful. That person must be doing something good. He’s in a flow state. It’s like, well, I don’t know about that. You know, I mean, to the extent that I have critiques of Pastor Paul VanderKlay’s great work, some of his videos are him exploring ideas and using the audience as his way of thinking. Right. That idea that he’s engaged in the third lobe of cognition and distributed cognition with the people who are his audience, some of them real and some of them imagined, I’m sure. And there’s nothing wrong with that. Like, that’s great. Some of his other videos, though, he’s actually making a point. And it’s still flow in some cases. Right. But exploration versus giving you an outline or a framework or an answer to something is a very big difference.
And that’s part of the problem: people don’t realize that you can’t look at what state someone’s in and infer whether or not they have the answers. And we do that because we flatten the world, brush things down, over-reduce, simplify things. You know, yeah, we don’t trust leaders. We don’t understand leadership. We think you can train anybody to be a leader. You can’t. This is part of the meaning crisis. Leaders are rare. There’s a certain set of skills you have to be born with. And some people can be trained to be better at them. But you can’t train anybody to be anything. And we’ve told people you can be whatever you want. Yeah, but you can’t be good at whatever you want. You can try whatever you want, but you can’t be whatever you want. Right. You have the opportunity, but not the obligation or the ability. Yeah, a good image is: you see this beautiful animal, and it’s like it’s been running all its life. Right. And then it’s running, and you see it go real fast, and you see the muscles move in conjunction. Right. And then you zoom out slowly, and then you see that it’s chasing its own tail. Right. And this beautiful perfection from one perspective becomes really sad from the other perspective. And if you’re not able to distinguish which one of those two pictures you’re looking at, you’re going to just see this beauty. And that’s the enchantment. And it’s like, well, yeah, there is beauty in that. Right. I mean, it’s still the same body. Right. It’s still trained. But what is the beauty in that? What is it trained to do? Like, it’s not generative. Right. It’s not alive. It’s actually stuck within a typically narrow space. And this is the sad part for me, at least. Right. I see all these people being stuck, and everybody else is cheering them on. Like, wow, you’re saying all this awesome stuff. And it’s like, in some sense, it’s good to see the awesome stuff. Right. Because that’s where the glory in life is. Right. That’s in some sense why you’re alive, that you can be part of that beauty. But on the other hand, right, if that beauty is going to make you run in circles, chasing your own tail, you shouldn’t abide in that beauty, because that’s the wrong beauty. Right. Well, and that’s part of attention. And the meaning crisis is very much not knowing what’s meaningful and what to pay attention to. And, you know, I mean, I would say — not to sound self-promoting, but it’s my livestream anyway — on Friday at 7 p.m. Eastern, the topic is attention and signals, which I’m going to go into in depth. You know, this idea of what you pay attention to matters, because, yeah, on one level, it’s beautiful; on the other level, it’s chasing its tail. Like, okay, I mean, there’s beauty still there, but are we also making progress here? And maybe we can make it better. Right. And that’s very much a meaning crisis problem, because when you don’t have meaning to orient towards or away from, then you have a problem. If you’re using mere direction — oh, I’ll just move away from that bad thing — that’s not sufficient. Orientation requires at least three points of reference. And you can be one of them, but you still need two others outside of yourself. Otherwise, you can’t orient. And if you’re not orienting, it’s a complex world.
If you’re not orienting, you’re going to fall into bad things. You’re going to fall into evil. You’re going to make huge unnecessary errors left, right, and center. That’s going to happen. And that’s part of the intimacy crisis and the meaning crisis: you can’t get around that problem easily. So we’ve got about half an hour left, Manuel. Do you want to open it up, or do you want to — Yeah, we can open it up. I have tons of notes left. Well, I don’t want to get too far. And let’s invite people in and we’ll post the StreamYard link. And if anybody wants to jump in and ask questions or comments or whatever, of course, you can ask questions in the comments section. And of course, I can only pin this message on Navigating Patterns, which is where you should be coming in from anyway. It’s the greatest channel YouTube has ever spawned, for sure. At least according to my imagination. And everyone else is using theirs, so I figure I might as well use mine. So, okay. So I think the other thing that stood out in the Vervaeke talk was: we need to start caring for AI with agapic love. And there were even steps around that, where there was a moral obligation to perfect ourselves so that we could be perfect towards the AI, because otherwise we’d corrupt the AI. And I’m like, well, first of all, do that towards your child. And second of all, what are you doing? That’s literally — you’re starting a religion and you’re worshipping this AI. If we’re looking at worship as time, energy and attention, we’re just giving ourselves, we’re offering ourselves. That’s literally what’s happening. And it’s so bizarre to see these people say these things with straight faces. They mean it. Well, look, would we even need AI if we did what John proposed in order to build AI? Because I don’t think we do. Right. Then all the problems you want to fix with AI were fixed by the fix for building AI. Again, it doesn’t make any sense. And this is how people get enchanted when they hear that. And it’s not wrong, but it is dumb. And then they miss the dumb part because it sounds right. Like, hierarchy avoids infinite regression. Yeah, absolutely. Does that mean that what we need to do is slaughter our enemies? Because I could put those in the same talk. And this is very much the trick that’s in that talk: there are a few gems, but they’re very decontextualized. They’re not wrong, but they’re extremely decontextualized. And they make the rest of the crazy talk sound profound, because, well, there’s a profound thing here and a profound thing there, and I really didn’t get the point of these other things. Maybe they’re profound, too. Maybe. But that’s the problem with gobbledygook. It’s that when you’re talking gobbledygook and then you make one reasonable statement, the only signal you have is the reasonable statement. It’s like, oh, maybe the rest of what they said was reasonable and I’m just too dumb to understand it. Maybe. Or maybe what they said didn’t make any sense, and you’re perfectly correct that it didn’t make any sense. And that’s how we get enchanted. And again, look, I don’t think these people are enchanting you because they’re evil sorcerers who want to enchant you. First of all, they don’t know you. But second of all, they sound enchanted themselves with their own ideas. That’s ideation. That’s ideation. Right. Like, ah, OK. So I’ve fallen in love with my own ideas and I’m a solipsist because I live alone. Right. Or whatever.
And so these things have enchanted me, and I want to share the wonderful enchantment, the flow enchantment that I’ve gotten, with the rest of the world. And then talk about AI and, you know, the cargo cults around AI. It’s like, OK, yeah, you can worship AI. You know what else you can worship? Literally anything. People worship their cell phones. People worship TikTok on their cell phones. People worship their children. Right. And then venerate them. People worship wooden elephants. You know, people worship all kinds of crazy things. Other people. People worship other people. Right. Worship is inevitable. And when you’re not paying attention to what you’re worshiping, that’s a problem. Right. Because the power in the world is time, energy and attention. It’s time, energy, attention. And when you are not aware of that, or not paying attention to the fact that that is how the world is laid out, what is it you’re giving your time, energy and attention to? Exactly. And you can look at your fruits, at the results. Right. As discussed in last week’s lecture. Not only what, but also how. How are you giving your time, energy and attention? Right. Because if you’re applying agape towards an AI, whatever that means, right, what are you doing to yourself? What are the implications for you? Have you even considered what that would do to you? Right. No, it’s a big issue. And we don’t. We’re so enchanted with the message and with the one good tidbit that we got. You know, the fact that hierarchy avoids infinite regression and that religion gives us a way to contextualize in a world where we’re not at the top of the hierarchy should tell you everything you need to know about the solution to all of these problems. Right. It’s that we need to be in a religion to deal with the hierarchy that we’re not at the top of. And you can say, no, we’re at the top of the hierarchy. And I’m like, really? Are you at the top of the hierarchy? Singular you? Or is a group of you at the top of the hierarchy? Or is a larger group than that at the top of the hierarchy? I’m not so sure. And that’s already a problem. Because if you’re part of a group that’s at the top of the hierarchy, you’re not at the top of the hierarchy. And this language betrays us, because we don’t have a differentiation in English between plural you and singular you. And so we fall into that trap all the time and enchant ourselves with believing we have more control than we do. And sometimes the you is not you, but somebody else. And maybe that person is rare. And we seem to assume that being part of a group implies sameness. Right. You can see it in all these postmodern talks. But also this idea that we’re scientists or whatever. Right. Like, well, if I’m a scientist and I have the rational explanation for it, then everybody’s going to agree with me. That’s obviously not how it works, because science is specifically people who disagree, who try to better the other, and then one of them comes out on top. And yeah, sometimes it takes a while for that to be recognized. But that is the process by which science is supposed to work. Right. You can be in a group of scientists and you can be in an adversarial relationship. You can be in a group and not all be the same. Right. The fact that you’re in a group just means that you’re sharing one thing in common. Right. And that thing can be important or it can be not important. Right.
It can be something that unifies you as a group, or it cannot. And yeah, you have to be careful. Right. Like when we’re talking about AI and effectively saying, well, it can go 50 million ways, and here are ten ways to understand the 50 million ways, and you’re going to ally yourself with that, you’re going to give your life up to make that happen in the right way. What are you doing? And who are you unifying with? Are you sure that all the other people want those 50 million ways? And are you sure that your work isn’t going to be used for the opposite, in a way you disagree with? All of these questions. Well, yeah, that was the other deep problem in the talk. John mentioned porn and the military taking over AIs many, many times. And the way he talks about it, he made it sound like it was a first-past-the-post issue: if the first people to train an AI train it to be good, all AIs will be good. And it’s like, because, you know, the Internet was driven mostly by porn. OK, no, that’s not what happened. The big driver to get a bunch of people onto the Internet, although they were a certain small segment of the population, might have been that. And there might be, sociologically or psychologically speaking, a big crossover between people who are especially technical and technically brilliant and people with, we’ll say, unconstrained desires at the edge of the norms. There might be an overlap there. And so anything technical might tend towards a use like that. But the idea that we can’t let the military have AI? I’ve got news for you: we already lost that race. Can’t let porn have it? I’ve got news for you: we already lost that race too. But also, it’s not a race, because people come in after. And so if porn and the military are a problem for technology, then the Internet would have fallen prey to this already. Which is not to say that porn isn’t a problem on the Internet, or that the military isn’t a problem on the Internet, although I would argue it’s not the big problem. The big problem the military has with the Internet, at least in the US, is that they keep trying to control the thing that they told the engineers to build so that it couldn’t be controlled, and finding out it can’t be controlled, which really shouldn’t be a shock to these people. But apparently it still remains a shock to everybody. The excellent, brilliant engineers who actually designed ARPANET and DARPANET and the precursors to what we call the Internet, and if you ever look into Internet history, I suggest you do so, built exactly what they were told to build: a network that cannot be brought down by any means or disrupted by any group. Well, that includes the government. And now the government’s pissed because they can’t control it. But that’s what they asked for. And this is the way in which they enchant themselves. And so, porn was a driver for the Internet, but the Internet grew bigger than the porn that was on it. And this cycle has happened over and over again. If you look at the history of the Internet, well, you’d have to actually do some research and look at the history of the Internet. I’m not even suggesting that you do so. I’m saying you can, and that’s what you will find. And if you don’t, you don’t have to take my word for it. But you’re wrong, because the data is there and you can go look. But you’re not gonna. And that’s OK.
You can just take my word for it and say, oh, this is a person who’s done his research and who’s been doing this forever. Actually, you know, I’ve had several computers since I was like seven years old. So I was on the Internet before there was an Internet to be on. I remember the DARPA and ARPA days for sure. And yeah, it was quite an interesting place. And it has gotten better in some ways and, of course, worse in others. But most of those ways are around the thing Manuel mentioned earlier, which is no controls for children. And we’ve let children onto something that children were never technically able to use before. And that creates all kinds of problems. We’re not drawing clear boundaries and lines when we say, oh, we can’t constrain people, because if we constrain people, then we might have to constrain other people, and I don’t want to be constrained, so I’m not going to constrain anybody else. Yeah, with children too. And this is kind of the point of Plato’s Republic. By the way, the Texas Wisdom Community YouTube channel has our book club on the Republic, where Manuel runs the thing. Basically, Danny set it up, awesome that Danny did that, and I participate because they let me for some reason. I don’t know why, but I’m grateful. And the Republic, starting in book two, really the entire Republic, is based on the question of what you do with children in a city, how you resolve that. Right. And these are issues that are already resolved in the Republic. How old is that book? I don’t know. It’s really old, though. Right. Like, OK, so this has already been thought about. And to the extent it can be addressed, it has been addressed. Or at least there are proposals out there that are worth thinking about. And we’ve ignored all that. And now we’re pretending as though we need to constrain and treat AIs like children so that they grow up to be good children, even though we know we’re bad parents. And, oh, by the way, really what we need to do is be better parents and then train the AI. None of this makes any sense. This is doublespeak, and it’s self-referential. And this goes back to the whole idea that hierarchy avoids infinite regression. Well, where’s the hierarchy then? If you really believe that, where’s the hierarchy? Who or what should be directing AI? Is it a body? Is it a government? Is it not those things because government bad, military bad, porn bad? Where is this mystical thing that’s going to be the “we” that does the thing to the AI so that it manifests as a perfect Rousseauian paradise? It’s not mentioned, right? Nobody mentions it, whether it’s John’s talk or anybody else’s talk. No one’s ever going to tell you that. And the question would be, well, why aren’t you going to tell me that? You’re telling me we can’t use any of the systems we have in place, the institutions we have in place, the structures we have in place, to do proper AI. And yet you’re not willing to say that then we need to shut it down, which is a perfectly viable option that I’m actually not opposed to. Although let me get my software written first, because it’s fine. Or we need to admit that we just need to be better as individuals within the groups that are managing AI currently, and stop pretending as though there’s some magical monster named military or porn or politics that’s running around as an agent in the world and corrupting our otherwise emergently perfect AI. That’s an absurd way to think about the world.
So I think there’s also this hierarchy in relation to using things like AI or computer devices or whatever. Right. Because if we put the child in front of the TV or whatever in order to get a gain in the here and now, and we don’t realize what we’re aiming for in the future, and how to participate in the thing that we’re aiming for in the future, we can’t have a hierarchy. Right. We cannot make decisions, and we’re just going to do the thing that is easy now. I’ll watch the porn, or I’ll play the computer game, or I’ll spend the time on the media, because we can’t put it into context with the rest of our lives, where our lives are leading, and how it’s affecting us on our trajectory. Right. All of these things are fuzzy, because you don’t know the way that you’re affected. You only know that you’re affected. And you also know that the more you do it, the more affected you are. And at a certain point, you’re dysfunctional. Like, literally. And so what does that require of you? Right. And we don’t have these understandings anymore, because we don’t have a vision of what should be, and we don’t have an idea of how to approach that vision. And these are the things that we are going to need for ourselves if we’re going to manage things like technology. Right. We need them anyway. We’re just realizing it through contrast. You know, AI is making it clear we’re in the middle of a meaning crisis, and the meaning crisis is caused by what you described, which is a lack of telos, of final cause. We have not accounted for final cause. Make final cause great again. Like, what is the point? And we did go over this in my livestream from last Friday, the one on action, and Father Eric had a much better formulation than I do. I should probably memorize it, but I’ve been busy and lazy. So busy plus lazy. Final cause goes first, but we get there last. That’s it. Yes. We think about it first, but we implement it last. He had a really good sentence around it, much better than that. People were like, can you say that again? I think they asked him three or four times, like, no, no, I need time to write this down. But that’s what you’re pointing out. Right. The solution of “let’s make us better so we can make AI better” means we don’t need the AI. And if that were implementable, we should just do that and skip the AI part, because it’s just extra work. Why would you do that? I’m a pragmatist; pragmatism rules. It can solve these things for you if you can embody it. And that is the problem. So it sounds like a solution when you hear John or these other people talk about it, but it’s not. They’re just talking gibberish at some point. They’re making references to things they haven’t defined. They’re defining things, making statements, and then contradicting them later on in their own talk. It goes on and on, over and over again. And it’s a big, big problem. And we’re not recognizing it. For me, he also said that the technological success has effectively silenced the traditions that have the tools to deal with these problems. And now we’re going to engage with the problem. So it’s like, well, if we killed off the tools that allow us to do the thing that we want to do, there’s a paradox there.
It’s like, well, maybe the people with the tools didn’t do the things because the tools told them to do the things. Or, or, or. Yeah. The solution is still obvious, which is: stop the technology. I don’t agree with that solution. But look, that’s the formula you gave. I didn’t give that one. Oh, here we go: how do you hold the vision loosely enough to keep from losing touch with ever-changing reality? Look, reality changes all the time. You don’t lose touch with that unless you lose touch with yourself to some extent. But you don’t get it by just getting in touch with yourself. The contrast of being in touch with the outside world gives you the ability to be in touch with yourself. Being in touch with yourself doesn’t give you the ability to see the outside world. And that’s really part of what we’re dealing with: the world is deeply asymmetrical. If you don’t believe me, read Nassim Taleb’s books, all of them, you know, Fooled by Randomness, The Black Swan, Antifragile, Skin in the Game. Read all his books. He talks about the asymmetry. There’s a deep asymmetry to the world. This is why we exist, and not just we: all the material things that we think of as existing have to exist because there is an imbalance between the amount of matter and antimatter in the universe. It has to be that way. Okay. So first of all, what is a vision? Right. A vision is not something that’s here in your head. You look at it. It’s away from you. Right. And then what? So it’s not something that’s of you. It’s something that you go into. And then how do you go into it? Well, you envision how to participate in the vision. And if you envision the way to participate, that means you get discernment about yourself and what you need to change. Right. Because you’re insufficient for the vision, because otherwise it would already be. Well, and the vision isn’t outside the now. The vision is something you have now. You have the vision concurrent with now. The vision isn’t certain. It doesn’t just manifest. It isn’t what you’re going to get, in most cases, because a vision is an ideal. And when we implement, we don’t get the ideal, because we live in this messy world with all this randomness, and things are emerging all the time. And some of the emergence we can control, because I can cultivate the garden and plant the seeds so that when I put the box with dirt in it outside, it doesn’t just grow random stuff, some of which will be weeds, some of which will be flowers, because I have flowers, and these flowers spread all over my yard automatically without me ever doing anything. It’s emergence. It’s wonderful. Well, it’s not all wonderful: they get choked out by the stupid vines that I have to keep pulling. Right. So I can cultivate that. Sure. Absolutely. But I still can’t control it as much as I’d like. There’s some control over it. Bugs can come in and cause a blight. I’ve got deer; they’ll probably eat anything I plant that’s edible, unless I put up a big fence. Right. There are all kinds of factors: the amount of sunlight, the amount of rain, the amount of watering I have to do. Maybe I overwater it because I’m an idiot, because sometimes I’m really an idiot, especially when it comes to gardening. And, you know, there’s a lot of stuff outside of our control.
And when we start talking about AI, and what control it has over us and what control we have over it, and whether or not the vision is something contained within us, you know, at a certain point you’ve just got to trust that you have a vision now. You try to implement that vision. The vision that you have is the telos; the vision that you have is the final cause. It is the thing you are trying to implement in the future. You have it now as a vision. If you want it as more than a vision, that part is in the future, after a bunch of work. And we keep collapsing time down to now. And we don’t live that way. We live with the past, our present, which is the now, and the future. You can’t collapse all of that into the now. Also, right, the way that we have visions now is we watch this movie star, in the movie or outside of the movie, and then we want to be that person. So, first of all, we’re so disconnected from that. And also, we’re not a movie star. Right. So it’s impossible to live in that vision. Because you need to become the person that is capable of doing your vision. And you have to be realistic; you need the grounding from where you can stand, where you can look into the future potential, and you have to connect the potential to the things that you have access to. Because “I want to be an astronaut” is also just not going to work. Maybe if you get lucky, but you can’t know at 15 whether you’re going to be an astronaut or not. Right. And so it’s important to realize that there’s a purpose to the vision. Right. The vision draws you forward, and it informs you. It calls you, and it informs you in who you need to be. And in some sense, you’re in an interaction with the vision so that you can slowly get the transformation. And you can decide the steps. Like, okay, it’s far away, but in order to get to the far away, if I want to be an astronaut, maybe I should lose some weight and build my stamina. Well, look, implement what already is. You have to revivify everything you have to stand on in order to implement things that are new. You have no choice. I like what Mills says here. We should wrap it up soon; I’ve got to do something. Mills asks: who picks the telos of the AI? Well, that’s exactly the thing. It’s funny to me that John in his talk, and other people, talk as though the silo problem has been solved. I can tell you as an AI engineer: no, it has not. If you think it has, you are not dealing with AIs and training them. I am. It hasn’t been solved at all. You can go into ChatGPT or Midjourney or whatever, and you can do experiments to prove the silo problem is still there. I would argue that John actually described a silo problem, which ironically is not even a silo problem, but it is and it isn’t at the same time. You know, when he said a lawyer would summarize his work better, I doubt that. I think the lawyer would summarize it exactly the way ChatGPT did. Is the lawyer an expert in psychology, history, what is it, cogsci, whatever else John’s an expert in? Right. So he would summarize it exactly the way ChatGPT did, which is also not an expert in any of those things. And so the telos of the AI is picked by the people training it. Period. End of statement. Full stop. That’s just the way it’s always worked and always will work. It is smaller than us. We train it. Assuming that we can, right?
Because we’re writing something, and we’re assuming that the black box is going to interpret what we write and do what we want it to do. But I doubt that we can do that, especially if we reach AGI and it doesn’t do what we want. Well, we’re never going to reach AGI. The whole thing is foolish. We don’t know what general intelligence is. We can’t define intelligence at all, by the way. No one’s ever done that. And we can’t define general intelligence. That doesn’t even make any sense. So the idea that we could define artificial general intelligence is absurd. We can’t define intelligence in people, and we don’t know what “general” means in that context. So no fear there. Don’t worry. This is like the consciousness problem. No one has a definition of consciousness, and you can’t measure it. And measurement efforts have been tried. The Consciousness and Conscience Conference in Thunder Bay actually demonstrated this. The talk by Dr. Mandral, I hope I’m saying his name right, was lovely. And he sort of demonstrated that, yeah, they’ve been trying to test for consciousness for years, and they thought they had a test, and it’s totally wrong. It doesn’t work. Oh, that’s not good. So I’m not all that worried about that stuff; it doesn’t matter to some extent. If I train anything... look, I used to do corporate training. If you want corporate training, trade a fortune and you can get some. When you train those people, you have a telos. They get trained within your telos, whether they like it or not, or you like it or not. That happens. That has to happen. Not to say they can’t go outside and learn more on their own, or break free of the telos, whatever. They can. I’m not saying most of them can’t. And most of the ones that can won’t, because it’s a lot of work, and maybe you shouldn’t. And that’s part of the problem: we’re already stuck in a situation where telos is important, where final cause matters. And we’re not going to escape that. And so, Manuel, in the interest of time, because I do have a four o’clock and it’s almost four o’clock my time, why don’t you do a wrap-up statement? I’ll do a wrap-up statement and we’ll go from there. Well, so I think when we’re looking at these hype moments, the first question when you’re watching a video like that is: why am I in the hype? What am I doing here? Is this hype something for me? And the second part is: why is this other person in the hype? They’re there for a reason. They spend time, energy, and attention in a certain way. And you might think, oh, yeah, they get money for it, or they get views for it. No, no, no. That’s usually not the reason people do things; they have emotional investments in it. Right. You can see that they’re passionate. So figure out what they’re passionate about and how that’s influencing them. And then, yeah, connect. When you decide to engage, you should have a purpose in your engagement. You should have a defined telos. And then, when you have your defined telos, you can also understand the other person in relationship to that telos. So you can see how you should attend to what is presented to you, what information you can discard, and what you cannot discard. And also, this is really important when we’re talking about these highly technical subjects: you’re probably going to miss a bunch of background information.
And they’re going to sneak stuff in, but they’re also going to say that things are irrelevant. Right. Like, oh, we need to solve this problem. But do you know whether they actually need to solve that problem, and why they need to solve it, and what the problem even is? Because at that point, in like 90% of cases, I’m like, this is useless. I’m just throwing away the information, because there are so many things going wrong already that I don’t see how it’s relevant for me as a person in the world. Right. Well, and that’s the issue. So just to close it out, because unfortunately I do have to go. Look, the bottom line is: you see somebody in flow, and they’re very smart about a bunch of things. They probably don’t know everything. Right. And they probably don’t have a good handle on what they’re talking about. You see them in flow and you go, aha. And then you pick up some nuggets and you become enchanted, and you think, aha, this is all about AI, and therefore these things must be true. That’s just enchantment. And I’m not saying enchantment bad; enchantment is sometimes good. It’s important for you to differentiate and to protect yourself against these things, because it’s easy to look at Eric Weinstein or Jordan Peterson or John Vervaeke or Paul Vander Klay and see them in flow and think, this is so brilliant. Can you do something with it? Are they right, or are they exploring? Are they just poking around and philosophizing? There’s nothing wrong with that, but you need to understand that. And then: is the person you’re deferring your distributed cognition, your distributed intelligence, to taking responsibility? Because if they’re not taking responsibility, don’t follow them. Period. It’s not that hard. Just don’t follow them. Follow someone else or something else. That’s a lot safer, because then you have some hope that if they do something wrong or steer you wrong, they will make it up to you, or ask for forgiveness and go after some form of redemption. And if you don’t do that, you have a problem. So everybody be careful. Don’t believe the hype around AI. It’s all crazy talk. AI is a tool. We have lots of tools. Let’s use all of our tools wisely: AI, the Internet, whatever. Thank you very much for watching. I hope you’ll click on all my videos on Navigating Patterns. And Manuel, we’ll have to do this again soon. See you, everybody. Have a lovely day. See you, everybody.