https://youtubetranscript.com/?v=CH6N0-CRDRY

No, but that’s an interesting thing. If you think of intelligence in this more organic way and then bring in the cultural element, it raises something that occurred to me in this context, and it would be fun to hear your thoughts. And John, you’d certainly have something to say about this too. Would it be possible to envision a kind of artificial intelligence that can read symbols, that can actually recognize them? Because there’s no human culture without the symbolic; the symbolic is pervasive in human culture. What kind of intelligence is required to understand, react to, and engage with that, and is that something a machine, however complex, could conceivably do?

Well, I’ve been playing with ChatGPT, and Jordan Peterson has been playing with ChatGPT in this regard. The issue is that encoded in the large language model are the analogies that basically support symbolism. If you’re able to ask the question properly, ChatGPT is actually quite good at seeing analogies that would be part of symbolic understanding. The difficulty, just like with anything, is that the model can help you, if you already have natural insight, see things you hadn’t seen before, but it would just be gibberish to someone who doesn’t have that insight. So I don’t think the insight is there in the model; what it has is a probabilistic capacity to predict relationships, including analogical relationships. So it can actually be an interesting tool for symbolism, because sometimes you can prompt it: do you see a connection between these two images? It’ll give you some examples, and there’s this surprise where you can actually find a relationship you hadn’t thought about.

This, by the way, is going to weird people out, but it’s something that I think has existed for a very long time in what we call gematria and the rabbinical reading of scripture: they use mathematical models to find structures in language that aren’t contained at the surface level of the usual analogies. They run queries through mathematical calculations to find surprising connections, which then prompt their intuition to find connections they hadn’t thought of before. Then you have to make sense of those intuitions; obviously, if they’re random, they’ll just fall away. But this brings me to the point that I wanted to make, which is the relationship between, at least, the large language model, because that’s what we know best, and divination.

Divination?

Yeah, divination. So we talked about the idea that intelligences have to be alive, but I think most traditional cultures understood that there are types of intelligence that are not alive, at least not alive in the way we understand it for biological beings that are born and die. They had a sense that there are agencies and intelligences that are transpersonal and that in some way run through human behaviour and run through humanity. And those intelligences would be contained in our language.
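To make the gematria comparison concrete: in the standard practice each Hebrew letter carries a fixed numeric value, and two words become candidates for connection when their totals coincide. Here is a minimal sketch of that letter-arithmetic, assuming the standard value assignment (aleph = 1 up to tav = 400) and counting final letter forms the same as their regular forms; the word list is just an illustration.

```python
# Minimal gematria sketch: standard Hebrew letter values,
# with final forms counted the same as their regular forms.
GEMATRIA = {
    "א": 1, "ב": 2, "ג": 3, "ד": 4, "ה": 5, "ו": 6, "ז": 7, "ח": 8, "ט": 9,
    "י": 10, "כ": 20, "ך": 20, "ל": 30, "מ": 40, "ם": 40, "נ": 50, "ן": 50,
    "ס": 60, "ע": 70, "פ": 80, "ף": 80, "צ": 90, "ץ": 90,
    "ק": 100, "ר": 200, "ש": 300, "ת": 400,
}

def gematria(word: str) -> int:
    """Sum the letter values of a word, ignoring any non-letter marks."""
    return sum(GEMATRIA.get(ch, 0) for ch in word)

def equal_value_pairs(words: list[str]) -> list[tuple[str, str, int]]:
    """Return word pairs whose totals coincide (the 'surprising connections')."""
    pairs = []
    for i, a in enumerate(words):
        for b in words[i + 1:]:
            if gematria(a) == gematria(b):
                pairs.append((a, b, gematria(a)))
    return pairs

if __name__ == "__main__":
    # Classic example: ahavah ("love") and echad ("one") both total 13.
    print(gematria("אהבה"), gematria("אחד"))        # 13 13
    print(equal_value_pairs(["אהבה", "אחד", "תורה"]))
```

The traditional practice is of course far richer than summing letters; the point of the sketch is only that the queries are mechanical while making sense of a coincidence is left to the reader's intuition, which is the dynamic being compared to prompting a language model.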
These intelligences would necessarily be contained in the relationships between words and systems of words, in all the syntax and the grammar and all of that. What I see, and I want to be careful because I don’t understand it, is that ancient people had mechanistic ways of tapping into those types of intelligences. Whether it was tossing or throwing things, looking at almost random relationships and then qualifying those random relationships, it was a way to tap into types of intelligence that ran through them. And what I see is that the way the large language models were trained seems to be something like that: the models generated random information, and then you had humans qualifying those random connections, qualifying them through iterations. So at some point they become a kind of technical way to access intelligent patterns that are coming down into the model. That’s the connection I see between those two.

And what that means is that, just like with divination, the thing I worry about the most is, again, the sorcerer’s apprentice problem: those intelligences that are contained in our language, we don’t understand. People don’t know all the motivations that are driving us; they don’t totally understand them. They also don’t understand the transpersonal types of motivations that can drive us or run through our societies. Sometimes you can see societies become possessed by certain things; I think that’s happening now with certain ideologies. So my point is that, on the one hand, we don’t understand these types of intelligences, and on the other, the way the models are trained and the way they function seem analogous to the ancient divination practices, like a hyper version of them. How can I say this? There is a great chance that we’ll catch something without knowing what we’re catching. We will basically manifest things that we have no idea what they are, and we don’t understand the consequences, because we are playing in a field of intelligent patterns and all this chaos without even knowing what it is we’re doing.

And I think we saw that with the Bing AI, that little moment when it was kind of unleashed on us, and all of a sudden the AI was acting like your psychotic ex, or becoming paranoid, or doing all these things. You could see that what was going on was basically these patterns running through, and they hadn’t put the right constraints around them to prevent those types of patterns from running through. And those were easy, because you recognize your psychotic ex very easily. But there are patterns like that which I don’t think we have the wisdom to recognize as they’re manifesting themselves. And as these things get more and more powerful, they will run through our society, and we won’t even know it’s happening until it’s too late. So that’s my biggest warning on AI, to sound really scary: I think we’re trying to manifest God without knowing what we’re doing.
And that will sound freaky to secular people, but if you don’t like the word gods, think of it as motivations and patterns of intelligence that have been around for 100,000 years, that have been running through human societies, and that are contained in our language structures. And if we just play around with that with massive amounts of power, then we might have them run through us without even knowing what’s going on.
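The earlier description of training, random generation followed by human qualification repeated over iterations, can be caricatured in a few lines. This is only a toy analogy of a feedback loop, not how large language models are actually trained, and everything in it (the symbol list, the scripted judge, the reweighting rule) is an illustrative assumption.

```python
import random

# Toy "generate and qualify" loop: a sampler proposes random pairings,
# a judge scores them, and the sampler is nudged toward what the judge
# favoured. A caricature of feedback-driven tuning, not real LLM training.

SYMBOLS = ["water", "serpent", "mountain", "fire", "garden", "exile"]

# One weight per ordered pair; start uniform ("random information").
weights = {(a, b): 1.0 for a in SYMBOLS for b in SYMBOLS if a != b}

def judge(pair: tuple[str, str]) -> float:
    """Stand-in for the human qualifier: an arbitrary scripted taste."""
    favoured = {("water", "serpent"), ("mountain", "garden")}
    return 1.0 if pair in favoured else 0.0

def sample_pair() -> tuple[str, str]:
    pairs = list(weights)
    return random.choices(pairs, weights=[weights[p] for p in pairs], k=1)[0]

for step in range(2000):                 # "qualifying it through iterations"
    pair = sample_pair()                 # the model proposes a connection
    weights[pair] += judge(pair)         # the feedback reweights it

top = sorted(weights.items(), key=lambda kv: -kv[1])[:3]
print(top)  # the favoured pairings end up dominating the sampler
```

The only point of the caricature is the shape of the loop: whatever the qualifiers happen to reward is what ends up amplified, which is the "catching something without knowing what we're catching" concern.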