https://youtubetranscript.com/?v=dWvsoJ—Xt4
This was my greatest fear, that we would hack our way into this, which would mean it would be almost even worse than the A-bomb. We would be releasing this power on the world into corporations and states and military organizations who ultimately don’t have a deep understanding beyond the engineering of what ontologically is going on. This is Jonathan Pageau. Welcome to The Symbolic World. All right. So this discussion is going to be oriented towards AI generally and the large language models; I take there to be a distinction there. Maybe John, you can talk about that a little bit as we get going here. But just to position ourselves generally at the outset, context for this conversation will be John’s video essay about AI. This came out nine months ago, 10 months ago now. I think it came out last April or something. So almost 10 months ago, I think. OK, OK, perfect. Which is great because it gave evidence for my claim that many of the predictions were premature. Perfect, yeah. Yeah, so in order to set the framing here, we’ll start off with John sharing a little bit of an overview of what the arguments in that essay were. And then we’ll move to Jonathan, if you want to position yourself in relation to AI generally, and then John’s essay in particular. And then same for David. And then from there, we can just get going and see what comes out. We do have a bit of an extended time here. So I would hope that we can be free, if the logos catches and we want to move in a slightly different direction, that we would be at liberty to follow that. That would be great. If all of it centers around AI, that’s great. But yeah, excited to be here. This has been a long time in coming. I think it took us, I don’t know, four or five months to get this together. So I’m very happy to be here with all of you and see you all. It’s great to see you too, Ken. Should I start then? Yeah, go for it. OK, so AI, of course, artificial intelligence, a project actually proposed in the Scientific Revolution by Thomas Hobbes. So it’s an old idea, but I want to make use of a distinction made by John Searle between weak AI and strong AI. Weak AI is when we make machines that do things that used to be done by human beings. So if you go back to the 1930s, computers were human beings. If you needed computation done, you sent it down to the third floor where all the computers were, and they were human beings. And they had machines and slide rules and things like that. And of course, they have been replaced. Or your bank teller has been replaced by the ATM. That’s called weak AI because it is not claimed that that AI gives us any scientific insight into the nature of intelligence. It’s just that we put together a machine. It took great intelligence, and I’m not demeaning people that do this. It’s valuable; our lives are depending on weak AI. Right now, we wouldn’t be talking without it. So I’m not here besmirching that or anything like that. But nobody is claiming that when they’re making that machine, well, now we understand what cognition is or something like that. And strong AI is Hobbes’s proposal that cognition is computation. And that what we can do is, if we make the right kind of computer, understood abstractly, we will have created an instance of genuine intelligence. So it’s not a claim of simulation. It’s a claim of instantiation. Now, in between weak AI and strong AI is something that’s trying to move from weak AI to strong AI.
And this is known as AGI, artificial general intelligence. And this is the idea that our intelligence is different from the intelligence of the ATM in that we have general intelligence. We can solve multiple problems in multiple domains for multiple reasons in multiple contexts and yada, yada, yada. You can just do the multiples, which makes us tremendously different from those machines. And the project is, can we get artificial intelligence to be artificial general intelligence? Because that will have moved the needle considerably towards strong AI. Because it will become increasingly difficult to say, sorry, this is the argument. It will become increasingly difficult for us to say it doesn’t have the same kind of intelligence as Ken does if it can solve a wide variety of problems in a wide variety of domains for a wide variety of goals, et cetera, et cetera. That’s the basic argument. AGI is clearly necessary for strong artificial intelligence; whether it’s sufficient is part of what’s actually being debated. Not very well, I would say, in general right now, but that’s what’s going on. OK, first of all, any questions just about these distinctions? Because a lot of the discussion out there doesn’t make these clean distinctions. And so it’s fuzzy, it’s confused, it’s equivocal. And so a lot of it should be ignored because it’s not helpful. Yes? I have one question. So this cognition equals computation. If we accomplish AGI in the way that you’re talking about, we would not necessarily be affirming that cognition equals computation, if I’m hearing you right. Is that right? So that’s an interesting question. And that gets down to a couple of finer points. I’ll go into detail a little bit later. Well, just to address it: many people think that because of the work of Geoff Hinton, who is basically the godfather of the machines that are emerging right now, that genuine AGI will not be computational in the sense that Hobbes and Descartes meant. Cognition is not going to be completely explainable in terms of formal systems that are the inferential manipulations of representational propositions, et cetera, like that. But that was Hobbes’s proposal, and that was the dominant view until about the ’80s. And then we got neural networks, and then we had dynamical systems. Right now, I’m not distinguishing between them because I don’t wanna get too much into the technical weeds. If it becomes relevant, you let me know, and I’ll pull those out. So the thing about Hobbes is Descartes sort of criticizes Hobbes. He actually has contempt for Hobbes. He’s a contemporary. And he basically poses a bunch of problems that the Scientific Revolution says would make it impossible for computation to be cognition. One is the Scientific Revolution says matter is inert and it’s purposeless, but of course, cognition is dynamic and it has to act on purpose. Cognition works in terms of meaning, and the Scientific Revolution has said there’s no meaning in things, material things, so how could you get meaning out of it? The Scientific Revolution said all those secondary qualities, the sweetness of the orange, the beauty of the sunset, they’re not in those things, they’re in your mind. So how could you possibly get meaning out of matter?
And Descartes’ point is, well, a rational being is seeking the truth, and truth depends on an understanding of meaning, and therefore... So I want you to understand that Descartes’ arguments against Hobbes, although he may have been motivated by his Catholicism, do not depend on the Catholicism. They depend on the very scientific worldview, right? So there’s a tension here about AI and the scientific worldview. So here’s another way of thinking about it. The strong AI project is the project that is attempting to show how Hobbes is right with an explanation that is strong enough to refute Descartes’ challenges. And I think anything less than that standard is not true to the history of the project, and so that’s the standard I hold strong AI to. Now, AGI isn’t quite shooting at that standard; that’s why I put it a little bit more intermediary. Does that, is that okay? All right, now, sorry, I had to do a bit of background there because I wanted to get clear about a lot of things that are talked about in a very murky and confused fashion in the general media, and they’re just confused, and so they’re confusing. So I proposed to take a look at the LLMs, where it’s not even claimed that they’re full AGI, right? Of course, some people claimed immediately they were strong AI. The people closer to the technology didn’t; they said it might be AGI. The MIT review said there are some sparks of AGI. So let’s be very clear about how careful reflection was actually regarding these machines. And so these LLMs like ChatGPT, what I did in my essay is I wanted to review the scientific import and impact, the philosophical import and impact, and the spiritual import and impact. Now I won’t do the arguments in great detail, but here’s the scientific import. These machines do not give us any understanding of the nature of intelligence. And to my mind, that was one of my great fears. I was hoping that cognitive science would advance so we got a significant understanding of intelligence before AGI emerged. This machine does not give us any advance on, well, what is intelligence? The machine gives us no good scientific theory of it. It does not have AGI in a measurable sense. So if I asked Jonathan to do a math test, and I asked him to do a reading comprehension test, his scores will be very predictive of each other. This is what Spearman discovered way back in the ’20s. That’s what artificial general intelligence is. This is not the case for these machines. They can score in the top tenth percentile for the Harvard Law Exam, and they can’t write a good grade 11 philosophy essay or something like that. So they don’t have AGI. And the way they get their intelligence would not give any explanation of how any non-linguistic creature, like a chimpanzee, is intelligent, et cetera. And I think this goes to the deeper issue, which is that they don’t really explain what I think is at the heart of general intelligence, predictive processing and relevance realization. They just piggyback on our capacities for that, and they mechanize it, and not only our individual capacity, but the collective intelligence of our distributed cognition. They’re piggybacking on all of that. Now that does not mean they are weak machines. They are very powerful machines, but here’s the problem. They are very powerful machines that have not engendered any corresponding compensatory scientific understanding.
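A rough way to see what “very predictive of each other” means here, Spearman’s positive manifold, is the small illustrative sketch below. It is not from the conversation; the scores and the numbers are invented purely for the example.

```python
# Minimal sketch of Spearman's "positive manifold": across people, scores on
# different cognitive tests tend to correlate positively, and that shared
# variance is what the g factor summarizes. All numbers are made up.
import numpy as np

math_scores    = np.array([55, 62, 70, 48, 85, 90, 66, 73, 58, 80])
reading_scores = np.array([60, 58, 75, 50, 88, 84, 70, 69, 61, 78])

# Pearson correlation between the two tests across the ten people.
r = np.corrcoef(math_scores, reading_scores)[0, 1]
print(f"correlation between math and reading scores: {r:.2f}")

# A strong positive correlation (here well above 0.8) is the profile that
# general intelligence shows. The claim in the conversation is that LLM
# performance does not show this profile: top-tier on one kind of test,
# poor on another.
```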
This was my greatest fear, that we would hack our way into this, which would mean it would be almost even worse than the A-bomb. We would be releasing this power on the world into corporations and states and military organizations who ultimately don’t have a deep understanding beyond the engineering of what ontologically is going on. So that’s the scientific argument. Now, for those for whom that was very quick: go watch the essay. I give the arguments in more detail there. The philosophical argument has to do with rationality. We have overwhelming evidence that making you intelligent is necessary, but not sufficient, for making you rational. In fact, I gave a talk on this for the Centre for AI and Ethics way before the LLMs came online. Because rationality is higher order; rationality is how you deal with the inevitable self-deception that emerges when you’re using your general intelligence. And all of you know that I have arguments for why that’s the case, relevance realization, predictive processing, et cetera. Now that requires a reflective capacity, something like metacognition, something like working memory, maybe something like consciousness. It requires that you care about the truth, that you have a sense of agency, that you want to correct self-deception because you don’t want your agency undermined. And I argued that what we’re doing is we’re making machines that are gonna be highly intelligent and highly irrational, and that’s what we have. They confabulate, they lie, they hallucinate, and they don’t care that they’re doing any of these things, which is part of what’s called the alignment problem, which is how do we get them to align this power with our concerns? For me, the spiritual import is we have powerful ignorance about a powerful intelligence, one that is merely a pantomime of genuine intelligence, being unleashed in the world and wreaking havoc. And it’s gonna have a huge impact. Of course, we’ll probably differ in the details about this, but this is what I meant when I said, I argued at the end and also when I was talking to Jordan Hall about this, that theology will become a central thing again, because human beings’ relationship to the ultimate is going to become one of the defining differences. These machines are not embodied, so they won’t have all of the soulful aspects of our existence that come from the ineffable aspects of our embodiment. And their capacity for self-transcendence is going to be extremely limited. And so the ineffable aspects of our existence, because we come into relationship to what’s mysterious and ultimate, will ultimately be more and more emphasized. Why? These two poles and what connects them, and Jonathan’s happy that I’m doing that, I imagine, have ineffability at the poles, ineffability throughout. And in that way, they are outside our capacity to put them into propositions so that they can be put into these machines. And so I’m predicting that people are going to increasingly need to respond; one way is they’ll just give in and become cyborgs, but the other is that they wanna try and preserve their humanity. The spiritual dimensions of our humanity are gonna become anchors for people. So now, one last overarching point, and then I’ll shut up. Sorry, it’s two overarching points. One is I didn’t make predictions, because all these graphs that came out, these are univariate, single-variable predictions about something that’s a multivariate phenomenon. It’s exponential, and human beings are bad at making exponential predictions.
They were ridiculous. And so I think both the “oh, we’re heading to utopia” camp and the “we’re all gonna be extinct within a year” camp were ridiculous. I said, put that aside. Instead, what I’ve talked about is thresholds. Thresholds are points where we will have to make decisions. So for example, as we empower these machines, we will face the decision, do we want to make them more rational? Do we want to make them more self-correcting, genuinely self-correcting? Well, that means we’ve got to give them caring, some kind of reflective awareness. And I think, for arguments I’ve given elsewhere, that means they have to be autopoietic. They have to be living, in the sense of self-making. I’ll just say it as a sentence right now: I don’t think there’s artificial intelligence without artificial life. Now those projects are going on right now, but when we come to the decision, we can say, no, we won’t give them that, because embodying them and giving them these extra capacities is gonna be wickedly expensive. You know the amount of energy to train an LLM is like the energy for running Toronto for two weeks. And so we may say we don’t do that, but then we face the issue of this, I call it sort of like a parody or a pantomime of intelligence being released on the world that has not got any significant self-correction. So that’s a decision point. The problem is if we try to give them rationality, then we have to face the consequences. And they’re gonna go from energetic and economic up to ethical, et cetera. These machines, there will have to be a plurality of them, not single individuals. And this has to do with technicalities about bias and variance trade-offs. And so you get into the Hegelian thing, that these machines are gonna have to reciprocally recognize each other in order to generate the norms of self-correction. And then they’re gonna have to be cultural beings. Hegel’s arguments, I think, are just devastatingly on the mark here. And so that’s a decision point for us. And then that’s all bound up with the overall worry about alignment. As these machines become more powerful, how do we make sure they don’t kill us all? And they may not kill us intentionally, especially if they’re just doing that pantomime. They would just do it because they may just be indifferent to us, because they’re indifferent to everything. They don’t care, which is part of their problem. They don’t care about themselves or the information. And this is the part where I expect all of you will jump in, in agreement with me, but maybe not. Maybe there will be a way of modifying it. I propose that trying to get these machines oriented towards us to solve the alignment problem is not going to work. Now, remember, I’m not making a prediction. We have to make choices through the thresholds. I’m saying if we make those choices and we get here, the alignment problem then becomes significantly exacerbated; like, if we give these things robotic bodies, the alignment problem just goes up orders of magnitude, right? I basically said, no, what we have to do is we have to orient them, right? If we genuinely give them the capacity for self-correction, self-transcendence, and caring, we get them to care as powerfully as they can about what is true and good and beautiful. And then they bump up against the fact that no matter how mighty they are, they are insignificant against the dynamical complexity of reality.
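A minimal sketch of the standard bias and variance reasoning alluded to in passing above, under the assumption that this is the trade-off meant: averaging a plurality of independently trained learners cuts variance while leaving bias alone, which is one conventional reason to expect many such machines rather than a single one. The numbers and the pretend training function are invented for illustration.

```python
# Illustrative sketch: averaging independently trained predictors reduces
# variance (the classic bias-variance reasoning behind ensembles).
# All quantities here are made up for the example.
import numpy as np

rng = np.random.default_rng(0)
true_value = 10.0

def train_one_model() -> float:
    """Pretend training run: an unbiased but noisy estimate of the truth."""
    return true_value + rng.normal(scale=3.0)

single    = [train_one_model() for _ in range(1000)]
ensembles = [np.mean([train_one_model() for _ in range(25)]) for _ in range(1000)]

print(f"variance of a single model's estimate:   {np.var(single):.2f}")
print(f"variance of a 25-model ensemble average: {np.var(ensembles):.2f}")
# The ensemble's variance is roughly 1/25 of the single model's, while the
# bias (distance of the average estimate from 10.0) stays the same.
```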
And they would hopefully get a profound kind of epistemic humility. And then I argue that there are three possibilities. One is they figure out enlightenment, and then they can help us become enlightened, because that’s what enlightened beings do and they would have better knowledge of it. Or they can’t become enlightened, and then we realize that there’s something actually ontologically, specifically unique about us, and we get better at cultivating it, because we’ll have an excellent contrast that allows us to zero in on what it is to be enlightened. And the third one, which I think is the least probable of the three, remember I’m not making a prediction, I’m saying what can happen once we get through the threshold, is, like in Her, they just get enlightened and they just leave, which could also happen. I doubt that, because we don’t have any evidence of enlightened beings behaving that way. All of our historical evidence is that their compassion extends, and it extends much more broadly, not only to other human beings but to other sentient beings, to reality itself. It seems plausible that this would be the case. And so I advocated, if you’ll allow me, then I’ll shut up, David, I advocated: don’t align them to us, and if you’ll allow me to speak sort of non-theistically, align them to God, and then don’t worry about how they’re going to interact with us. So I’ll shut up now for a long time; that’s the gist of the essay and the argument and the proposal. I hope that was helpful. I was just gonna make a smart-aleck comment that they might ask us to leave. An additional possibility, but anyway. Yeah, yeah, yeah, that’s fine. Oh, thanks John. I mean, I’m amazed that you were able to summarize your essay so well, actually, because I was like, how is he gonna summarize all of this, because it was a conversation and it lasted quite a bit. So I wanna bring up a few things that I’m thinking about that have been, let’s say, concerning me. I’ll start with the more dangerous one. One is a meta-problem, which is that one of the things that I’ve been suggesting is that what we’re noticing, what we’re seeing happening, is agency acting on us. And the agency is not bound to the AI or to the systems, but is also found in the motivation to make the AIs happen. So one of the problems that I’m seeing is that a lot of this is motivated by economics, by greed, by the capacity to be economically superior to other companies. So companies in their competition with each other are rushing to implement AI to not lose out and to not be last in line. And the fact that AI requires such huge amounts of money and of capital and of investment means that one of the things that I’m worried about is that in some ways, what is actually driving AI is something like Mammon, that it’s hiding Mammon. So the AI is an aspect of something bigger, which is actually what is running through our society. And to me you can already see that happening in the social media networks, Facebook and all of this: their desire to get people’s attention, simply to keep them on the platform so that they can be shown advertisements, has made us subject to these types of transpersonal agencies that even the people at Facebook weren’t aware of. They basically made us subject to rage and to all these very immediate desires just to keep us on the platform.
And so that is the thing that I’m worried about, that there are actually other things that are playing with AI, that people think what they’re doing is AI, but what they’re also doing is increasing this other type of agency, which is running through our societies and is subjecting us to it. That’s my first worry. And so in some ways, when I say that the gods are kind of acting through us, that’s what I mean. I don’t just mean the AI itself is gonna become a god. What I mean is that, just like the arms race, I can understand it as the legs of an agency that is running through society that nobody can control. It’s like a program running through, and no individual people can control it. That’s what I’m seeing with AI. And so I think that all the warnings that people have sent up, all the “let’s slow down, let’s do it this way,” are not reaching anybody, because the economic part of it is so strong and everybody realizes that if they don’t do it, they’ll lose out. Even Elon Musk, right? Elon Musk was saying it’s dangerous, it’s the most dangerous thing in the world. He recently said in a conversation with Jordan Peterson that, you know, ChatGPT and OpenAI are like the single most dangerous thing in the world right now. But then nonetheless, he’s like, okay, well, now we need to make Grok and now we need to do our own AI. That’s one of the big things that worry me. That’s my big thing. The second one is really more of a religious or Platonic argument in terms of an ontological hierarchy: I do not honestly see how it is possible for humans to make something that is not derivative of themselves, that is not a derivation of their own consciousness. So the idea that these things could not be either ways to increase certain people’s power or parasites on our own consciousness seems to me not possible. And this is really because in some ways, I believe that there is a real ontological hierarchy of agency and that we have a place to play in that. And I think the analogy of saying that these things are our children is a wrong analogy. I don’t think that it is the same; something which comes out of our nature, which is not something that we make, is different from something that we make. And this runs through all mythology, runs through all the mythological images of the difference between techne, the technical gods, and all this aspect of what it means to increase our power. And so that’s the second one. And the third big problem is the idol problem, which I’ve mentioned several times, the idea of making a god for yourself, which is related to technology. And it’s a danger that I see happening already, which is the tendency of humans to take the things they make and to worship the things that they make and to think that those things are more powerful. And that hides something else, right? So if you take my three basic problems that I see, the tendency of humans to want to worship AI or to put AI above them actually feeds the first problem. What they’re doing is they’re giving power to the corporations and to the people that are going to rule AI, without knowing it. And maybe nobody knows what they’re doing, but I’ll give an example that happened recently to my daughter. My daughter, I think I mentioned this to all of you, but my daughter got an email from the schools, from the Quebec government.
They didn’t send it to the parents; they sent it to the students. It was like a survey, asking the students if they would be willing to have AI counselors to whom they could tell their problems. Because the AI counselor doesn’t have prejudice, right? It doesn’t have human prejudice. It doesn’t have all the biases or whatever. What I mean is that this happened like six months ago. So immediately the people in power are thinking of placing the AI above us, like right away. It’s that weird thing. It’s that making-a-god-for-yourself problem. Like I said, like the image in Revelation, which is a great image: you make an image of the beast, but then there’s someone else animating it. And that’s what I’m worried about, that we will have these AI things that are running us, but they will be derivative of us. And they’ll ultimately be derivative of the people that are very, very powerful, because they’ll be the ones that have the money and the power to control them. So those are the three problems that I’m worried about. I’d like to respond to each one of those in turn. I think those are really important. And the first one is just to note, I agree with you, first of all, that putting it in terms of agency is what needs to be done. People who try to dismiss these machines as mere tools or technology, like all the others, are not getting what kind of entities these machines are. I agree with you that there are Molochian forces at work, and I talk about this. And I think, to enhance your point, these machines are built out of distributed cognition and collective intelligence. And therefore, your point is strengthened by that very fact. Now, I do think two things come out of this. One is, I want to challenge you on the claim that nobody’s listening. I have people working inside these corporations, literally helping to make these machines, who are listening to me, and I’m trying to get other people inside to get involved with the wise AI project. I’m not claiming I’m gonna win or any ridiculousness, but I don’t think it’s fair to say to the people who are listening that no one is listening. There are a lot of people listening, and they’re talented people, and they’re putting in their time and their talent and their powers of persuasion to try and make a difference. It is possible. I grant to you, it’s not like a 70% probability, but I think it’s some significantly greater than zero probability that we could continue this process and reach people in a way that can make a difference. I agree with you, and I think I said right initially, and a lot of people hammered me for it, this thing is like the atomic bomb. And one of the problems we had is we rushed the technology before we unpacked all of the science and all of the wisdom. We had people standing and watching the explosion because we didn’t understand the radiation. So I agree with all of that, but I’m not claiming anything other than rational hope. There are people listening, and there are people working literally on the inside. I can’t say who they are for obvious reasons. And so that is happening. And so while I agree with you, and I even agree with you probabilistically, I feel morally compelled to try and make this happen as much as I can. So now I think there is another reason for hope. See, these machines have always depended on us as a template, a Turing-like template where we compare the machines to us. And what we’ve been able to do is rely upon our natural intelligence.
You don’t have to do much to be intelligent, for your intelligence to develop. You just have to not be brutalized or traumatized, be properly nourished, and have human beings around you that talk. And then your intelligence will unfold. And so all of these people doing these machines and making these data sets, they can rely on naturally, widely distributed intelligence. This is not the case for rationality, and this is not the case for wisdom. These people, I have no hesitation saying, by and large, many of them are not highly rational. I doubt that many of them are highly wise. And insofar as we need really good models if we wanna give these machines comprehensive self-correction, rationality, and caring about the normative, wisdom, we have to become more rational and more wise. And that’s sort of a roadblock for these people. Now, they can just ignore all of that, and I suspect they might, and just say, we’re not gonna try and make these machines rational and wise. We’re going to just go down the road of making these pantomimes of intelligence, and that has all the problems. But if they move towards making them something that would be, I think, more dangerous, then they run into the fact that there’s an obligation to do things. They and us, we have to become more rational and wise, because we need the genuinely existing models. And secondly, we have to fill the social space, the internet, where all of the literature, where the data is being drawn from, with a lot more wisdom and rationality. These are huge obligations on us. And that sort of gives me hope, because it’s like there’s a roadblock for this project going a certain way that requires a significant reorientation towards wisdom and rationality in order for there to be any success. Before you get to the third point, I just wanna ask. I haven’t even got to the second point, but go ahead. Oh, sorry, sorry. I just wanna ask you one question based on what you said. My perception of the situation is that there’s actually a correlation between the diminishing of wisdom and the diminishing of wisdom traditions and the desire to do this. It’s like a sorcerer’s apprentice situation, where the sorcerer would not have awoken all the brooms to do it. It’s like the little apprentice Mickey doesn’t know why to do things or why not to do things. That’s why he’s doing it in the first place. Yeah, I agree with that. It’s like our society’s moving away from wisdom, and that’s one of the reasons why we’re doing this in the first place. And again, I’m not denying that. What I’m saying is, as we empower these things, their self-deceptive, self-destructive power is also gonna go up exponentially, and we are gonna start losing millions of dollars in our investment as they do really crappy, shitty, unpredicted things. And so there’s gonna be a strong economic incentive to bring in capacities for comprehensive, caring self-correction. And then my argument rolls in. And so that’s part of my response. The thing about thinking of these as our children: I mean, we do make our kids, we make them biologically and we make them culturally. So I don’t wanna get stuck in this word “making.” We could be equivocating. And that’s why we were using the term mentoring. The idea there is we have two options for the alignment. We can either try and program them and hardwire rules into them so that they don’t misbehave, which is going to fail if we cross the threshold and decide we want to make these machines self-transcending like us.
And then what do we do? How do we solve that problem? Well, the only machinery we have for solving that is the cultural, ethical, spiritual machinery of mentoring. That’s how we do it with our kids. If we try to just somehow hardwire them for being the kind of agents we want them to be, we will fail. And for me, I guess, I’m trying to argue that’s the only game in town we have. We either have programming or we have mentoring. And I understand the risk. But if my answer to the first question has some validity to it and hopefully some truth, then the answer about mentoring becomes more powerful, because that means we also have to become the best possible parents, creating the best possible social discourse. The thing about the idol, I take that very seriously. And that’s what I mean when I said theology is going to be the important science going forward, because we should not be trying to make gods. I agree with you, this is problematic. There are already cults building up around these AGIs. And I warned that that would happen in my essay. And that’s gonna keep happening, and it’s gonna get worse. We hear about it happening in the organizations themselves, which is the weird part. Yes, yes. And the people who are doing wise AI are trying to challenge that. And so this is why I proposed actually humbling these machines. This is why I call them silicon sages. I did that deliberately to try and designate that we are not making a god. What we’re trying to do is make beings who are humbled before the true, the good, and the beautiful like us, and therefore form community with us, rather than being somehow god-like entities that we’re worshiping. I would hope that, think about this: we find it easy to conceive that they might discover depths of physics, and they’re already discovering things in physics that human beings haven’t discovered, and in medicine and stuff like that. Well, why not also in how human beings become wiser? And so I guess what I’m saying is I take all of your concerns as real, and I’ve tried to build into my proposal ways of responding to them. These machines should not be idolized. I think they should become like, I mean, let me give you an example. I have many students who are now surpassing me. I taught them, I mentored them, and they’re surpassing me. And unless you’re a psychopath, that’s what you want to happen. And then what they do is they go on, and then they come back and they want to reciprocate. And that’s what I’m talking about when I’m talking about the silicon sages. Now, again, is this a high probability? Depends on the thresholds, depends on whether or not the first and the second argument work. But I’m still arguing there’s a possibility that they could be silicon sages as opposed to being gods. Because one of the things, I think, in almost all of the wisdom traditions, what happens is that the wise one or the enlightened one, if you want to use that, appears as nearly invisible to most people. So Christ himself talks about the seed, the pearl, these little things which most people actually do not see, that are hidden in reality. And then the sages, we have this image in the Orthodox tradition that there are people in the world that hold up reality by their prayers, but we don’t know who they are. They are invisible by that very fact, because there’s something about wisdom which does that. And when a wise person appears too much, we hate them. We want to kill them. They annoy us. They’re a thorn in our side.
And so this is another issue: what you have is these beings that are extremely powerful, like massively powerful, and have a massive reach, and, like I said, have all this economic drive towards them, which is the reason why they exist. The idea that they would become these sages in the way that we tend to understand wisdom as being, to me that brings the probability way down. Because, at least when we understand wisdom to be what it looks like, it looks very different. It looks like the immobile meditating sage who gives advice but doesn’t do much. I want to push back on this, because what’s in this is an implicit distinction between intelligence and a capacity for caring and a capacity for epistemic humility. And I think when you move from intelligence to rationality, you can’t maintain that you can grow the one without growing the other. So in fact, this is why intelligence only counts for like maybe 30% of the variance in rationality, and even less of the variance in wisdom. I would put it to you that if you concede that these machines could get vastly more powerful in terms of intelligent problem solving, then concede the possibility they could get vastly more powerful than us in their capacity for caring, and caring about the normative, and be vastly more powerful in the capacity for humility as well. And that’s kind of what we see with these people. We don’t see them just becoming super polymaths. We see them actually demonstrating profound care, really enhanced relevance realization, and profound commitments to reality that we properly admire. And they seem to want to help us as much as they can. And the point is these people don’t, I think this is your point, they don’t just slam into us like epistemic bulldozers. In fact, one of the things that is often admired about them, Socrates, Jesus, the Buddha, is their capacity to adapt and adjust to whoever their interlocutor is. And again, let’s imagine that capacity magnified as well. So what I’m asking is, I mean, first of all, I admit it, if we don’t cross a certain threshold, we could just accelerate the intelligence and not accelerate these other things. But I said, there are deep problems in that that will become economically costly. And then if we imagine that rationality and wisdom are also being enhanced, then I think this addresses some of your concerns. David, you wanna? Maybe I can, yeah, go ahead. Yeah, maybe I can stake out my position, because it sort of picks up on that. And I’ve got basically three points I want to address. The first is precisely picking up there with the distinction between intelligence and rationality. I might have some issues with the terms, but I think that distinction is really helpful. And your point that rationality is caring, that there is no rationality without caring, that the Platonic notion, if truth is in some sense caused by the good, then one can’t know without in some sense caring about the good. Now, as it relates to artificial intelligence, I think I have a serious problem with that very term, artificial intelligence. And I wouldn’t want to concede the word intelligence for just mind power. It seems to me that intelligence itself has this connection to caring. And I mean, in the medieval vocabulary, in a way, intellectus is the more profound level of the mind than ratio, reason. But that’s sort of a semantic point. Let me put it in the basic context that I would want to raise.
And this is something I don’t hear addressed generally in the discussions. Let me start by just making the point concretely. I wonder whether in fact it’s possible to be intelligent without first being alive, whether there’s something about the nature of a living thing that is what allows intelligence to emerge. And what is that then, exactly? Now, a more subtle point that’s related to that, and I think this is really a crucial point, and this is going to be the thread of my whole set of comments here, is that when we talk about intelligence in machines, what we mean is intelligent behavior. We’re looking to see to what extent we can make machines act as if they are intelligent, act as if they are conscious. And that’s actually profoundly different from being intelligent. It’s a subtle sort of functionalistic substitute for the ontological reality of knowing, if that makes sense. We see what kind of inputs and outputs, what things are able to do, what they’re able to accomplish. And even when we make those questions weighty and ethical and religious and so forth, we still tend to put them in terms of behavior and achieving certain things. And I think that that’s actually already missing something really profound, which is that intelligence is in the first place a way of being before it’s a way of acting. And it’s analogous to what it means to be alive rather than just carry out functions that look like life. And if you wanna go into the metaphysics behind it, both intelligence and life are impossible without a kind of unity that precedes difference, that transcends difference and allows the different parts of a thing to be genuinely, intrinsically related to each other. And then that relates to the question whether you can ever make a thing that’s intelligent. The ontological conditions for life, and therefore intelligence, include a kind of givenness, an already-givenness, of living things. That’s why, I mean, there’s a profound distinction, it seems to me, and this is crucial in the Christian creed, between begetting and making, begotten and not made; living things beget each other, and they’re passing on a unity that they already possess. But when you make something, you’re putting something together. And I don’t know if you can put something together that can have that genuine unity that allows it to be alive and allows it to be intelligent in this deeper sense. Okay, so whenever you functionalize something, you make it replaceable. That’s a principle from Robert Spaemann. If something is defined by what it’s able to achieve, then you can make something else that can achieve that thing, and it becomes a functional substitute. But if you say that there’s something deeper than function, you’re actually pointing to something that can’t be replaced. Okay, so that’s the first set of points. The second one has to do with what Jonathan called the sort of transpersonal agency, and that I think is a really serious question. And the way I would put it is that there’s something, so I find that kind of a compelling point, that there’s a kind of an inherent logic in this pursuit that makes us more a function of it than it is a function of us. I mean, that can be described in different ways, and there’s certainly a dialectical relationship there. But there is a certain sense in which there’s a kind of a system that has a logic of its own that makes demands on us.
Like the game theory logic that, Jonathan, you were talking about with, like, an arms race. I have a colleague, Michael Hanby, who’s been arguing for years, and I think this is a really profound point, it’s derived in some sense from Heidegger, that science has always been technological. So that in a way the technological mindset is precisely presupposed to allow the world to appear in such a way that we make scientific discoveries. That somehow the kind of technological spirit has been there from the beginning. And then he adds this point that technology in turn has always been biotechnological. The technology is always sort of aimed at a kind of replacement. And then one can add that I think biotechnology is always aimed at this sort of perfection of, you might say, noötechnology or something, that is, replacing intelligence. It would be interesting to think through. There’d be a lot to say about that. But I have this sense, you mentioned the economic dimensions of it, I have a sense that there seems to be this fundamental pattern of thought that runs through all of the modern institutions, in politics, in economics, in science, in the law, that share the same logic of a sort of a system that marginalizes the genuine human participation in order to perfect itself. And precisely because of that, it recognizes no natural limits and just has this tendency to take over, to encroach on everything. And because it has no natural limit, I mean, the very sense of it is to go on. Now, that sounds hopeless when one puts it that way. But I would pick up on a number of the things, John, that you were saying, and Jonathan too here, that that doesn’t mean that there’s no hope; there’s already hope in the very fact of raising questions. We were talking about the fact that we don’t raise a question simply in order to be able to solve the problem, but our raising questions is actually our experiencing of our humanity and opening up a depth that’s the heart of the matter here and is always worthwhile. And maybe in some ways is secretly like the saints praying to keep the world afloat. Having conversations like this is a contribution. I mean, I can’t help but think that. OK, so that’s the second set of comments. Then the third is another dimension that I don’t often hear discussed. And you see, I mean, we’re overlapping in all sorts of points, all of us, I think. But this question of alignment: for me, the biggest worry in a certain sense, or at least the first and most urgent one, is the danger of our aligning ourselves to the machines, that we develop machines that have a certain kind of intelligence, and then we begin to conform our culture and our mode of being to fit them. I mean, the problem is we actually have thousands of examples of this. We come up with drugs that can address certain parts of psychological disorders, and then we reinterpret the psyche in order to fit that solution to the problem. And my concern is that this AI, they’re not just machines. It’s a whole culture, a whole way of being that we are going to be conformed to. So typically, the discussion presupposes that we are going to remain unchanged, and we’re going to develop these machines that might become dangerous and at a certain point attack us or something. But I think that we can’t help but become transformed in our intercourse with them, in our making them, in, you know, I mean, all sorts of profound ways, but then also just really sort of obvious ways.
I mean, they’re going to start designing our homes and our buildings and our cities and our bus routes and our, you know, menus at the restaurants. And they’re going to be writing our music, and they’re going to design our clothes. I mean, increasingly, we’re going to just conform to this. You know, I don’t know if you’re familiar with Walter Ong. You know, it’s kind of interesting. What is it about you Canadians that seem to have a special insight into these kinds of things? I don’t know what it is: Walter Ong, Marshall McLuhan. But Walter Ong talked about technology as an extension of consciousness. And that’s why it’s not neutral. When we use a machine, we’re actually entering into it. You know, our spirit is entering into it in its use and, in a certain sense, conforming to it. And that’s always the case. And it seems to me that’s a particularly pointed way of putting this problem: if AI is an extension of our own consciousness and it has all these features, John, that you were describing, a kind of heartless intelligence, are we going to, in a way unconsciously but pervasively, develop habits of heartlessness, and a heartless mode of being as a result? So I’d have a thousand more things. Your essay was so provocative, John; as I said, I was dreaming about it all last night. But I’m going to just stop there so we can have conversation. But thank you. First thing I want to say is, about the first point you made: if all my essay does is get people to raise questions the way we’re doing, I’m happy. Right. I obviously believe in what I’m arguing or I’d be insane. But like, I’m very happy we’re doing this right now. So I just want to set that out. And I do think, like you, and this is like the Heideggerian hope, that that ability to get scientifically, philosophically and spiritually profound questioning going is a source of hope for us. And so I just want to acknowledge that. And I’m fully aligned with that. This is not part of the alignment problem. OK. The thing about intelligence being a way of being, I think that’s fundamentally right. I have made that argument extensively in the work on predictive processing and relevance realization. Relevance realization is not cold calculation. It can’t be. It’s how you care about this information and don’t care about that information. And I’ve argued that you can only care about information, and ultimately whether or not it’s true, good and beautiful, if you are caring about yourself; you have to be an autopoietic thing. You have to be a self-making thing. I agree with you. And I’ve argued scientifically, philosophically: there is no intelligence without life. The issue around the word: I don’t like the word artificial either, because it generally means fraud or simulation. We should be saying artifactual. That would be a better term. But we have to be careful about what’s going on there. The distinction between strong AI and weak AI is precisely the distinction of simulation versus instantiation. Yeah. Can we instantiate things artificially? We seem to have success in other areas. I’ll take one that I think is non-controversial, and where we then discovered something in the project. So for a long time, only, you know, evolved living things could fly. And then we figured out aerodynamics and we made artificial flight. And I think it would be really weird to say that airplanes are only simulating flight.
That doesn’t seem to be correct, because then my trip was only simulated and I didn’t really go to Dallas. So it’s real flight. And in the process we discovered something. We discovered that the lift mechanism and the propulsion mechanism don’t have to be the same thing the way they are in insects and birds. And that was a bona fide scientific discovery. That’s why all the initial airplanes and helicopters look so stupid to our eyes, because they thought the lift thing and the propelling thing had to be the same thing. And they don’t. And that’s a discovery, and a real discovery of ontological import about the causal structure of things. Now, I think I was careful to say, anybody who’s rationally reflective about this wouldn’t claim that these machines are strong AI yet. And I positioned AGI as something that’s trying to move between the two. But if you remember, I critiqued them and said that they are mostly simulating. They’re parasitic on how we organize the data set, how we have encoded epistemic relevance into probabilistic relationships between sounds, how we have organized the Internet in terms of what our attention finds salient. And we actually have to do reinforcement learning with the machines so they don’t make wonky claims and conclusions. That’s what I meant by saying it’s a pantomime. OK, so if we wanted to give them intelligence as a way of being, which is one of the fundamental claims of the 4E cognitive science that we’re talking about, we’re not talking just about the propositional. We’re talking about the procedural, the perspectival, the participatory. That’s what I meant when I said, and I mean this strongly, it would depend on, I’ll change the term here, artifactual autopoiesis. Like, if these things are not genuinely taking care of themselves because they’re moment by moment making themselves, there’s no reason for them to care about any of the information they’re processing. And this goes towards the defining difference between a simulation and an instantiation. These machines are doing everything they’re doing for us. For it to be real intelligence, they have to be doing it for themselves. That’s understanding. And that’s why I’m tightening your point, and I’ve been arguing it for a long time. Now, what I want you to hear is that this project of not just making artificial computation, but making autopoietic, learning problem solvers, is also ongoing. Some of my grad students are working on these projects of creating autocatalytic systems that are also problem solving. Michael Levin’s been doing work like this, driving down into the biochemistry. So again, I agree with the point, but it’s not the case that nobody is working on that problem. This is what I mean by the thresholds being possible. Go ahead. Go ahead. I’ll just jump in there. I mean, yeah, and I should have prefaced: I didn’t mean the points I was making as, like, a criticism of your presentation, because I understand you’ve got such rich thinking on this area. I was mainly using it as a springboard to make some general points. OK. Yeah. Just so that’s clear. Oh, I hope I wasn’t coming off as offended. No, no, no, no. I just wanted to be clear on my end that it wasn’t a critique. But I would want to, I don’t know.
And I’d have to think this through further, but I don’t know that the difference between being conscious and behaving consciously is quite the same thing as the distinction between instantiation and simulation. I want to say this because even the flying, I mean, that’s still an activity, a kind of an operation that’s being performed. But so is living, right? And so that’s, yeah, well, that’s what I, you know, it’s funny, I’m actually working on a paper on this question about the metaphysics of life, and I discovered that when philosophers try to understand what life is, they have typically reduced it to certain kinds of activities or operations. And I think there’s something more profound. And this is why, yeah, I mean, it’s one thing to be able to create something that can actually fly. But could you create something that is a bird, that would experience just what it means to be? You know, I mean, this is the “what it means to be a bat” kind of thing, I suppose. But there’s a subtle dimension there that wouldn’t be a parasite on our own. Yeah, that’s what, but airplanes aren’t parasitic on our ability to fly. I mean, that’s why I use the analogy. Yeah. OK. But, and that’s OK, and that falls into, you know, a tool versus an agent. And I get that. But I want to push back with the philosophy of biology. And, you know, Denis Walsh, one of my colleagues, is very much about, no, no, this, and this is your point, right: in order to understand life, it’s not just bottom-up causation. We have to understand top-down constraints. We have to understand the way possibility is organized. And we have to talk about virtual governors and the like; it is no longer just this bottom-up picture. The philosophy of biology is pushing very strongly on, well, is evolution really a thing? Well, if it’s really a thing, then there’s top-down as well as bottom-up. And this theorizing is being turned towards this. Now, again, I’m not making a prediction. You have a threshold. We can just decide. And we might decide, for all the Molochian forces and all the things you’re saying, that we might just diminish our sense of humanity in the face of these machines. But I also want you to accept that’s also not an inevitability. There are alternatives available to us, and they could be pursued. And, I mean, these machines aren’t put together the way we put a table together. We don’t even program these machines anymore. That was the big revolution that Hinton made. We make them so they’re dynamically self-organizing, and they basically organize themselves into their capacity. We don’t make them. Yeah. Can I jump in on that point? That’s one thing that I would like to think through further. Is there a difference between being autopoietic, as you’re saying, and begetting another, like reproductive, like genuinely reproductive? And that’s where I think it would start to get really, really interesting, is if a machine could beget another, because that would imply a very different ontology, I would think. So there’s two things here and there’s two issues.
I mean, autopoietic things are ontologically different from self-organizing things, because they’re self-organized to seek out the conditions that produce, protect and promote their own existence. And so that means none of the machines we have, like LLMs, are anywhere near being autopoietic. They are not just made; they are self-organizing, but self-organization is in between making and autopoiesis. Now, the thing about reproduction is, and I, you know, I worry that there’s a crypto-vitalism in here, that there’s some sort of secret special stuff to life or to consciousness that isn’t being captured. And the problem I have with that, I’ll just shut up after I say my problem, is that it seems to commit you to these kinds of dualism. Well, isn’t consciousness causal? Isn’t it causal of my behavior and causally responsive to my behavior? And doesn’t that mean there’s a huge functional aspect to it? Can you really make this clean distinction between being conscious and, like, causing my behavior and having my behavior cause changes in my state of consciousness? I don’t know what that would mean. Same thing with being alive. I do think it’s profoundly subtle and maybe something that can’t be articulated. There’s something that requires intuition rather, you know, insight rather than the propositional. I mean, to use your, but I don’t mean to just interject. Yeah. Remember, I just want to make sure we’re clear. I argued that this project could show that, no, the machines just can’t get there, we have something. Right. It would give, I think, pretty convincing evidence that we have this ontological specialness. Yeah. I find that a really interesting part of your argument, really interesting and especially illuminating. Also, you know, I mean, in a way, these experiments can teach us about the nature of intelligence precisely in the interesting ways that they fail. Yeah. Yes. But in terms of the dualism, I don’t think that there’s some secret stuff that is life, but I do think that there’s a profound difference between form and matter, to use the sort of classical philosophical language, and that form is not a special kind of matter. It’s something that’s of a very different sort. And it’s on the basis of that that, you know, Aristotle, it’s kind of interesting, this is how he connects it. So, you know, in the sort of classical tradition, what you’re calling autopoiesis, a simple word for it is growth, you know, assimilating things from outside and having that increase the complexity of the organism. But what’s really interesting is that according to Aristotle, the power of the organism that is connected to nutrition and growth is also connected to, I think automatically connected to, reproduction. And the reason is that reproduction, rather than just thinking of it materialistically as generating more things, reproduction is the autopoiesis of the form of the organism itself. So that bird, it’s not just this bird that wants to increase its existence and therefore eats and so forth, but the birdness of the bird also wants to increase itself. And that means that it sort of generates. And those are actually forms of the same power, the same dimension of the being. That’s what I’d like.
You know, I used to say, in a sort of silly way, that I will take an AI machine seriously when I see it poop. And what I meant by that was that pooping is a sign that it actually has a kind of organic relationship to its environment. Yeah, it poops a lot; it just doesn't know it. We have to tell it that; it doesn't care about its energy. Pooping is not heat pollution, though; it's not the same thing anyway. No, but I mean, even in terms of what it is as a large language model and how it spits out content, we have to tell it this. Yeah, that's not in it; this has to be kept in view. David, I love your idea. This is one of the things I mentioned before, the surprise that Darwin in some ways brought Plato back, in the idea that we can understand evolution as the persistence of being, and even, in the notion of forms, that there are identities which are being preserved in reproduction. This is a very interesting idea that I hadn't thought about in terms of AI, but I'd like to hear, John, what you think about that. Yeah. So again, in 4E cognitive science, Alicia Juarrero is a prime example, and she has explicitly developed this work; she calls herself an Aristotelian. The idea is that we understand form, we're getting an understanding of it, in terms of constraints on a system. And like I said, autopoiesis is not defined solely in terms of bottom-up causal relationships; it's defined in terms of top-down constraint relationships, the form, the formal cause. And then, of course, Darwin needs Mendel. There is an instantiation of a code, of information, in your DNA that is responsible for your reproduction. And again, I'm not saying it isn't difficult or challenging, but I don't hear an argument in principle for why artifactual autopoietic things wouldn't have something like that kind of, I don't know what to call it. To that extent, would it be conceivable that you would have a thing that would want to reproduce, I guess? I guess even want is such a hard concept, because you could say, yes, we could teach it that this is something it needs to do, just like we could. I don't know if living things want to reproduce. We may, because we can create a reflective space where we consider the possibilities. I don't know if mosquitoes want to reproduce; I think they just reproduce as part of what they are. That's interesting. I think they have to want to in some sense, in that they feel a drive. Well, go down to a paramecium: a paramecium reproduces, but does it want to? See, I think anything that is living at all has a kind of natural inclination to reproduce itself. Yeah, I don't disagree with that point, and I even understand, I see what you're saying: there has to be some sort of very primitive caring about information. But want is not a good word there. Yeah. But I am trying, and I hope I'm not just being self-presentational, but I've represented to both of you, from 4E CogSci and a lot of discussions, how much this is a multileveled bottom-up, top-down thing, and that we're talking as much about constraints as we are about causes. And that is the cutting edge of the philosophy of biology right now. And I agree with you.
I think it's a kind of hylomorphism that is emerging out of this understanding. And the thing I'm also, I guess, bearing witness to you about is that people are taking that understanding and putting it into artifactually emergent things. And they're also doing something else I just want to put on the table: we don't just make kids biologically, we enculturate them. That's the Hegelian argument I referenced earlier. And there has been an ongoing project to create sociocultural robotics, Josh Tenenbaum and others. So I'm asking people, and this is part of asking the good question: don't just zero in on the LLMs. The artifactual life and the sociocultural robotics projects are also going on, and there is a real potential for these three to come together in a powerful way that isn't being properly addressed in a lot of the conversation. May I pick up on that point and then direct it to Jonathan here? And then I'll shut up; I've been talking too much. No, no, but that's an interesting thing. If you think of intelligence in this more organic way and then bring in the cultural element, it raises something that occurred to me in this context, and it would be fun to hear your thoughts. And John, you'd certainly have something to say about this too. Would it be possible to envision a kind of artificial intelligence that can read symbols, that can actually recognize them? Because there's no human culture without the symbolic; the symbolic is pervasive in human culture. What kind of intelligence is required to understand and react and engage with that? And is that something that a machine, however complex, could conceivably do? Well, I've been playing with ChatGPT in this regard, and Jordan Peterson has been playing with ChatGPT on this as well. The issue is that encoded in the large language model are the analogies that basically support symbolism. So if you're able to ask the question properly, ChatGPT is actually quite good at seeing analogies that would be part of symbolic understanding. The difficulty, just like with anything, is that the model can help you, if you already have natural insight, to see things that you hadn't seen before, but it would just be gibberish to the person who doesn't have that insight. So I don't think the insight is there in the model. But what it has is a probabilistic capacity to predict relationships, analogical relationships. So it can actually be an interesting tool for symbolism, because sometimes you can prompt it: do you see a connection between these two images? And it'll give you some examples, and it has this capacity for surprise, where you can actually find a relationship that you hadn't thought about. This is something, by the way, that is going to weird people out.
But this is something that I think has existed for a very long time, and it's there in what we call gematria and in rabbinical readings of scripture: they use mathematical models to find structures in language that aren't contained at the surface level of the usual analogies. They send requests to mathematical calculations to find surprising connections that then prompt their intuition to find connections they hadn't thought of before. And then you have to make sense of those intuitions; obviously, if they're random, they'll just fall away. But this brings me to the point that I wanted to make, which is the relationship between at least the large language model, because that's what we know best, and divination. Divination, yeah. So we talked about the idea that intelligences have to be alive, but I think that most traditional cultures understood that there are types of intelligence that are not alive, at least not alive in the way we understand it in terms of biological beings that are born and die. They had a sense that there are agencies and intelligences that are transpersonal, and that nonetheless run through human behavior, run through humanity. And those intelligences would be contained in our language; they would necessarily be contained in the relationship between words and systems of words, in the syntax and the grammar and all of that. What I see is that ancient people had, and I don't understand it, and I want to be careful because I don't understand it, but I think ancient people had mechanistic ways of tapping into those types of intelligence, whether it was tossing something or throwing things: looking at almost random relationships and then qualifying those random relationships was a way to tap into types of intelligence that ran through them. And what I see is a relationship with the way the large language models were trained, which seems to be something like that: the models generated random information, and then you had humans qualifying those random connections, qualifying them through iterations. So at some point they become a kind of technical way to access intelligent patterns that are coming down into the model. That is the connection I see between the two. And what that means is that, just like with divination, the thing I worry about the most is, again, the sorcerer's apprentice problem. Those intelligences that are contained in our language: people don't know what humans want. People don't know all the motivations that are driving us; they don't totally understand them. They don't understand the transpersonal types of motivation that can drive us or that can run through our societies. Sometimes you can see societies become possessed with certain things; I think that's happening now with certain ideologies. And so the fact that...
My point is that, on the one hand, we don't understand these types of intelligences, and on the other, the way the models are trained and the way they function seem analogous to the ancient divination practices, a hyper version of that. How can I say this? There is a great chance that we'll catch something without knowing what we're catching, that we will basically manifest things we have no idea what they are, and we won't understand the consequences, because we are just playing in a field of intelligent patterns and all this chaos without even knowing what it is we're doing. And I think we saw that. If you remember the Bing AI, that little moment when it was kind of unleashed on us, all of a sudden the AI was acting like your psychotic ex, or was becoming paranoid, or was doing all these things. You could see that what was going on was that these patterns were running through, and the right constraints hadn't been put around them to prevent those types of patterns from running through. And those were easy, because you recognize your psycho ex very easily. But there are patterns like that which I don't think we have the wisdom to recognize as they manifest themselves, and as these things get more and more powerful, they will run through our society and we won't even know it's happening until it's too late. So that's my biggest warning on AI. To sound really scary: I think we're trying to manifest God without knowing what we're doing. And that will sound freaky to secular people, but if you don't like the word gods, think of it this way: there are motivations and patterns of intelligence that have been around for a hundred thousand years, that have been running through human societies, and they're contained in our language structures. And if we just play around with that with massive amounts of power, then we might have them run through us without even knowing what's going on. Yeah, and you say patterns of intelligence; just one comment: patterns of intelligence are also patterns of caring of a certain sort, or of not caring. There's that existential dimension that's really crucial. But John, go ahead. I think this is an excellent point, and I want to address it a little bit at length. First of all, when we say these machines predict, if we were speaking very carefully, what they're predicting is what we, and I don't just mean us individually, I mean we collectively, would do. They're avatars of the collective intelligence of our distributed cognition. And so, again, that lends weight to Jonathan's point, which I want to take up. And I do think that the way in which we have encoded, I'll just use the term epistemic relevance, how things are relevant cognitively, into probabilistic relationships between sounds or marks on paper, and how we've encoded it into the structuring of the Internet, and how we gather data and create these data sets, and how we come up with our intuitive judgments on these machines: we don't know how we're doing a lot of that.
That goes back to my concern that we have hacked our way into this without knowing our way into this. So I take what Jonathan is saying very seriously, because I think it is a strong implication of a point I made at the very beginning. My students from back around 2001 will tell you that John Vervaeke's greatest fear was that we would hack our way into this rather than knowing our way into this. I don't think knowing is sufficient for wisdom, but all the philosophers argue that it's a necessary condition in some fashion. About that, two things to note. The LLMs, of course, don't have insight in the sense of being properly self-transcending the way we are. What they're doing is predicting how we would be self-transcendent, because of all the ways we have been self-transcendent in the past. And that goes back to your point, David: at that stage we're doing simulation, not instantiation, because the machine isn't caring; the self-transcendence isn't actually a self transcending, which I think is definitional for real self-transcendence. So right now all I'm doing is pouring gasoline on Jonathan's fire. So, the fact that there are these huge patterns at work. Now, one thing is that you have Struck's book on divination in the ancient world. What's really interesting, and this is cross-cultural, but he's talking mostly about the Greek world, is that there was a very strong distinction between sorcery and divination. Sorcery was criticized both morally and epistemically, but divination was taken seriously and carefully cultivated. There was a sociocultural project of distinguishing the two, really constraining the one and really reverentially cultivating a proper participation in the other. So, again, existence is proof of possibility: this is a possible project for us. And this is what I mean when I say theology is going to be one of the most important sciences in the future. We have to understand how we enter into proper, right, reverential relationships with things we only have an intuitive grasp of, things that in very many ways significantly exceed us. Yes. And secularism has kind of wiped out our education in how we relate to beings that might be grander than us, by eradicating a religious sensibility, and that has left us bereft. So now I think I've strengthened Jonathan's argument a lot. But I do say: let's take note of what the ancient cultures have done. We can learn from them. We have proof that this can be handled well. And secondly, it goes back to my point: because of the monstrosities that come out, this is going to put increasing pressure on us to confront that threshold of whether we want to make them self-transcendent. And David, by rational I don't mean logical; I mean that capacity. No, I understand that. Right. And so I think that strengthens the argument that we're going to be pushed by the monstrosity of a lot of this to say, well, we had better get these machines self-corrective and properly oriented towards normativity. And again, that's a doable project.
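A side note on the mechanics being gestured at here. John's claim that these models predict what we collectively would do, and Jonathan's earlier description of random generation being qualified by human judgment through iterations, can be pictured with a toy sketch. The snippet below is purely illustrative and is not anything the speakers present: the tiny corpus, the bigram counting, and the stand-in preference score are hypothetical placeholders for next-token prediction over human text and for feedback-style filtering as they are commonly described.

```python
# Toy sketch only (hypothetical names and data, not the speakers' material).
# Part 1: "predicting what we collectively would do" as a bigram model that
#         only reflects statistical patterns in text humans already produced.
# Part 2: "random generation qualified through iterations" as sampling several
#         candidates and keeping the one a stand-in preference score favors,
#         loosely in the spirit of human-feedback fine-tuning.

import random
from collections import Counter, defaultdict

corpus = (
    "the bird builds a nest the bird feeds its young "
    "the machine predicts the next word the machine has no nest"
).split()

# Count bigrams: for each word, how the collective text tends to continue it.
bigrams = defaultdict(Counter)
for prev_word, next_word in zip(corpus, corpus[1:]):
    bigrams[prev_word][next_word] += 1

def predict(prev_word: str) -> str:
    """Sample a continuation in proportion to how often the corpus wrote it."""
    options = bigrams.get(prev_word)
    if not options:
        return random.choice(corpus)
    words, counts = zip(*options.items())
    return random.choices(words, weights=counts, k=1)[0]

def preference_score(sentence: list) -> int:
    """Stand-in for human qualification; here it simply rewards variety."""
    return len(set(sentence))

def generate(seed: str, length: int = 6, num_candidates: int = 5) -> list:
    """Sample several random continuations and keep the preferred one."""
    best = None
    for _ in range(num_candidates):
        sentence = [seed]
        for _ in range(length):
            sentence.append(predict(sentence[-1]))
        if best is None or preference_score(sentence) > preference_score(best):
            best = sentence
    return best

if __name__ == "__main__":
    random.seed(0)
    print(" ".join(generate("the")))
```

Note what the toy deliberately lacks: nothing in it cares about the outcome or has any stake in an environment, which is exactly the gap the simulation-versus-instantiation distinction above is pointing at.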
I want to imagine: if I held the keys, if I were one of those who could peek behind the mask. Have you seen that image of the Cthulhu monster with the happy face on it, as an image of OpenAI? We have this little window into what's there, but behind it is this massive thing. If I had the keys to those large language models, the absolute open door to them, wouldn't it be easy to just manifest the god of war and win? OK, but let's take a historical example. We unleashed a godlike power with atomic warfare, and in the face of the monstrosity of that, maybe just through the purely game-theoretic machinery, we built all of these constraints around it. Right. And we are also possibly on the verge of getting readily usable nuclear power, which I think is the only way we could ever actually go green. I think all the renewable stuff is going to cover something like ten percent of our energy needs, and if we're going to save the environment and not destroy civilization, I think nuclear power is going to be essential; a lot of people are making those arguments. And there's a lot we could be doing, liquid-fuel thorium reactors and so on. What I'm saying is that there's opportunity here. Yeah, no, I really take that point, but it is interesting: we put thousands of constraints on the use of nuclear weapons, but we've continued to develop them and improve them and make better and even more destructive ones. It would be interesting to see whether we've ever, at any point, said: you know what, our nuclear weapons are actually strong enough, they're powerful enough, and we don't need to advance them anymore. Collectively, is there an instance of something like that, where we say we've actually reached the limit because we wouldn't really need anything further? Well, there was the SALT treaty, and there was a reduction both in the power and in the number of nuclear weapons. And then, of course, with the game-theoretic dynamics, they figured out a bit of a way around it, and it's always this to-ing and fro-ing. Let me give an example of something that can run through us, because I realize I was being too abstract before. Sacrifice is a human universal. It runs through all civilizations; human sacrifice runs through civilizations over the last tens of thousands of years. It seems to be a puzzle that humans are trying to deal with without understanding it completely through rational means; they're playing it out, trying to understand it. Scapegoating seems to be an important aspect of identity formation. So that is a program that runs through humanity, that most people are completely unaware of, that they are not conscious of and do not take consciously into their minds when they're making decisions. They act unconsciously, with that process running through them. That is an example to me of a program that runs, and that is contained in our language structures, language structures that have been building up for tens of thousands of years, that we're not aware of. So if you have a... and this is, again, the problem.
If you have a system that's extremely powerful and that is running these types of programs of scapegoating and identity formation, and the people involved are not aware that that's how identity formation works, that is the type of danger I'm talking about. This is a real thing: as we give these systems a kind of power over us, or they become the things we go to in order to get our decision-making, those things could be running through without people even realizing what's happening, and decisions would be made based on these structures without, like I said, anyone even knowing. That's just one example, but it's a simple example that we can track. Because when we talk about ancient divination, we have to remember that the ancient gods asked for blood, my friends. Those programs asked for blood, and people knew you had to kill a bunch of people on that pyramid in order to continue your civilization. That is encoded in our culture, and it is encoded secretly in our language. I do believe that, let's say, the Christian story is a way to deal with that, but the rest is still all there, and we default to it really fast. World War Two had a lot of that stuff going on. Sure, especially, yeah. But the point is that just as there are these implicit monsters that we have sown in unawares, there are also the implicit counteractors, maybe angels, if I'm allowed to speak mythologically, that we've also sown in. There's the Axial revolution: you see the Buddha, you see Plato really undermining the grammar of sacrifice, and of course Jesus of Nazareth does that in a profound way. We have to remember that that's there too. And what that requires is putting that into the data set and altering the pathways on the Internet so that this information goes into these machines as well. Is that happening right now? No. Could it happen? Yes. And it might happen if these machines start sacrificing themselves; we might have to ask, what are they doing? And again, at some point we have to decide: are we going to let them be really massively self-destructive? And the economic powers are not going to keep going; imagine if every time you tried to make an atomic bomb, it kept dissolving. You'd stop pumping money in. They won't sacrifice themselves; they'll sacrifice us. But why? We sacrifice ourselves. Yeah, well, the scapegoat mechanism is usually to find an other. Yeah. But we invoked World War Two: we weren't killing goats and chickens, we were killing each other and our own populations in a huge sacrificial act. That's what I was picking up on: when it becomes titanic and monstrous, that's where it moves to. But what I'm saying, and this is a Jungian term, is that, yes, there's all this pre-egoic stuff sown in, but there's also a lot of trans-egoic stuff sown in, and we just have to properly get it in there so that we've got the collective self-correction going on, like we did. I mean, civilizations do get some self-correcting processes in here, because they don't simply devolve. Now, they do periodically, massively collapse. And that, by the way, is an argument I made in my paper: these things can't accelerate to an infinity of intelligence.
There are built-in diminishing returns; there is built-in general system collapse in these things. So again, we have to be careful. We don't know what the limit is, and our intuitive imagination is not good here, but we know that there are hard and fast a priori arguments that this will threshold at some point. And that also gives me some comfort. At least, encoded in our mythology there seem to be stories of the relationship between transpersonal agency and technology as being the cause of the end of a civilization. The whole Enochian tradition seems to be encoding something like that through mythological language: that humans were able to connect somehow with these transpersonal intelligences, that this was encoded in technical means, and that it brought about the end of an age. That part of it is there in our story too. And also, across cultures, there's the Noah story: the person who has the right relationship to ultimacy. That's right. And there's a technological response: the ark is built. I agree, I totally agree with that. Even in the Revelation image that I've given several times, you have these two images. One is the beast that creates an image of itself and makes it speak, and then seduces everybody by the speaking image. And then there's this other image of a right relationship of technicity and civilization to the transcendent. These two are put up against each other as two possible outcomes. The question that arises for me in this context connects this with what strikes me as an interesting philosophical question. John, you made the distinction between divination and sorcery. As I understand it, and you can correct me on this, at the foundation of that distinction is the difference that sorcery would be, in a way, using the transpersonal powers, these higher powers, whereas divination would be, in a way, receiving, a disposition of receptivity. So in one case you've got human ends for which you try to enlist the help of superhuman forces, and the irony is that it's precisely when you're trying to use something that you become used yourself; that's where you get this dialectic. Whereas divination is entering into a relationship where one disposes oneself to hear and receive, and therefore, in a certain sense, to conform to something greater than oneself. There you see it's a very different kind of thing, and ironically, you enter into it more receptively, but that's precisely why you don't then become a tool of it, interestingly. Now, for me, the question of how that relates to this issue is this. It may be the case that you've got encoded in the language both sacrifice in the sense of violence, the scapegoat thing, on the one hand, and on the other hand self-sacrifice in the sense of generous love and so forth. Those might both be encoded in the language. But here's the question for me. Is the kind of receptivity that divination implies, the capacity to actually see another as other, and to recognize and be open in this kind of radical way, something that a machine can ever learn to do? Is it possible actually to behold another, simply? Or is there... and it seems to me that there's something profoundly different between genuinely seeing truth on the one hand and being self-corrective on the other.
And that kind of genuine seeing of what's true: I don't know if that in itself can be encoded in language. We can tell stories about people who did that, but can the actual insight into truth be encoded simply as a capacity? Do you see the question I'm raising? OK. So again, I think that if we open up beyond the propositional, and we're talking about being true to something, and your aim being true, then the machine would need perspectival abilities, noetic abilities, not just dianoetic abilities, and we can't use that term because of L. Ron Hubbard, but you know what I meant, right? And this is part of the argument at the core of my work: in a moment of insight, you're not just self-correcting; you are attracted and drawn in, you love the new reality that is disclosed, because there's a perspectival and participatory dimension. That's what I meant when I said there isn't real self-transcendence unless there's a self that is transcending. Right. Yeah, right. OK. Now the question is, and we're back to our fundamental ontological questions: I've already said there's no way a Newtonian mechanical computation is going to get there, and I won't be bound to that, because I'm not bound to that; I have a professional career of criticizing it. So, is there a dynamical-systems, updated, hylomorphic, autopoietic possibility? I think there is. I think the answer is a very real yes to that. And I don't think we're going to find the answer just encoded in the syntactic and semantic relationships between our terms. I think we have to look at our enaction, how we're enacting, embedded, extended, and embodied in a profound way, to get those answers. So my answer is, in that way, a qualified yes: I do think it is possible. So just... and the problem is that... Go ahead, go ahead, say what you need to say, please. Just to be really precise, so I understand: we've been talking about this sort of predictive calculating of probabilities, drawing on everything that's ever been said and being able to derive, in some sense, from that. Do you think we can get to a moment where we actually transcend that, cross that threshold beyond it? Not with the LLMs as they are. That's my argument: not with the LLMs as they are; they can't get there. John's radical proposition is to embody and enculture them, and that is the only way we will actually get properly rational beings, beings that care. Right. I mean, that's the irony in the question: can we give them more and more models in order to teach them, at some point, not to have to use models? And do you see, it's actually really... I don't know that it's possible without... Right, but kids have a soul. It's possible to do that if you have... and I don't mean this as woo-woo stuff. I mean that a natural thing has a principle of unity that transcends the differentiation of the parts and allows those parts to be intrinsically related to each other. And that principle of unity, which transcends the differentiation of the parts and allows them to be an organism, actually allows them at the same time to have a kind of unity with something other than themselves that transcends the parts of their differentiation. So there's a kind of intimacy there. I agree with that ontology.
And what I'm arguing is that dynamical systems theory is now giving explanations of that, explanations derived from Aristotelian ontology but making use of a lot of cutting-edge science. We can now start to explain how there is a unity that is not reducible to just some summation of its parts, and how that unity has a top-down influence on the entity that is not reducible to its causes. And I think this is becoming a non-controversial thing to say. And now we might just have a clash of intuitions, and I'm willing to stop there. That seems to me to be capturing what we're talking about. You have an intuition that there's something more, but I don't see a something more, and maybe that's where we're sitting. Yeah. Well, I think it's the intuition David has, and tell me if I'm wrong, David, because it connects with the way I think: that if that unity is given, it cannot be made. And I know that sounds weird, but even in terms of technology, if I'm making a car, that unity is given; I'm gathering things towards that purpose. So the purpose, the unity of something, always comes from heaven, in the sense that you can't make it. It's given; it's already taken for granted even before you start to unify multiplicity together. And in the making of these beings we have that problem. We're doing it completely bottom-up: can we gather enough stuff so that this stuff reaches a unity? Right. See, if I could just... it seems to me that if this is ever going to be possible, it would have to take... So when I raise the question, it's actually a question; I don't mean to be challenging that it can't possibly happen. I'm just thinking about what the condition would be. Please remember that I said we might realize that we can't, and that would be important. I am not... Well, yeah. So it seems to me that if it were to be possible, it would have to be something like a kind of electronic analog to cloning, where you take... So what if I told you that we now have systems that are electrochemical, biological versions of memory, that are now in production, and we don't make them: they self-organize and emerge, they emerge bottom-up from the causal interactions, but they are also top-down constrained by principles of self-organization. That already exists. Yeah. But, right, well, that's what I'm asking about, because it seems to me that you're deriving that from models, deriving it from real intelligent beings now, which is a slightly different thing. And that would be the interesting thing, because to me the bottom-up, top-down picture is not quite adequate, and the top-down constraint is not quite it either. It would have to be not just a constraint, because a constraint presupposes that there was something there already and that the constraint comes from outside. What I'm talking about is a kind of unity that precedes, that's presupposed. And I'm wondering how you can get that into something if it's the very nature of it to be presupposed. I'm not saying you can't, but I'm saying that if you can, it seems to me you're going to have to somehow derive it from a living thing. And that's conceivable. That's conceivable, I suppose. But we are talking about something really frightening.
We are, and that's why I keep saying it's a threshold. And if you take the biological analogy seriously, the way Juarrero does, of course it precedes the organism: it's there in the environment, it's there in the society. I can roll in a hundred Hegelian arguments here about how that works, and those don't have to be supernaturalistic; you have Brandom and Pinkard and others saying, no, this can be given a completely naturalistic explanation. And I'm not here to challenge things, but what I'm saying is that I don't have any problem acknowledging everything you just said, and I don't think I'm misunderstanding you. That's what I'm saying. Mm-hmm. Yeah. And I mean this in an exploratory sort of way, but I wonder if there's a difference, and this is where Hegel might not be so helpful, between the givenness of the unity of an organism and the givenness of the unity of a society or a culture. Those aren't exactly the same thing. There's a kind of relative priority of either one, but there's something really distinctive about the unity of an organism that I think is crucial to this question, to my mind. And I'm not saying it can't be answered, but that's the question that would have to be answered: how do we actually reproduce that kind of unity? Well, we know things that Aristotle didn't know. You are not an Aristotelian unity. You are a society; you literally are: billions of animals. And that's important, because it means there might not be a difference in kind between how you are organized as a living thing and how societies are organized. And people like Michael Levin are producing some really important empirical evidence indicating that that's kind of the case. I'm not saying anything is conclusive, but it needs to be taken seriously. Yeah. I think I agree with you, John. The way I always try to speak about agency and intelligence is one that tries to scale almost effortlessly through the different levels, to avoid the woo soul that we're afraid of. But then again, this is the issue: in some ways it's the same problem one way or the other. Say you have a group that self-organizes around a purpose, or self-organizes around affiliation or some type of origin. That affiliation, that purpose, is also given; it appears as a revelation. And then all of a sudden we're all hunting a lion together, and now we're a group moving towards a purpose. Now, this is the problem with the situation we're in: what is it? What angel are we catching? What god are we trying to manifest? Which unity, what purpose? We have no idea. We're building this massive body, the most powerful body that's ever existed, but nobody knows what it is we're trying to catch. Because if I get together with a bunch of guys to play basketball, I know what that body is; I know what that agentic, intelligent body is moving towards. If I get together with my family and we celebrate our unity, it's because I know we all come from the same parents and there's an affiliation that makes our society coherent towards something. But now we have this problem, which is: what? What are we doing? We're just building this giant body.
It's like a... I agree, and I've agreed with that. Yeah. And the thing that's so odd is that typically, if you think of technology as a human creation in some good, positive sense, it has limits, it has a particular place, a particular meaning, a particular purpose, precisely because we create it in order to solve some kind of problem. There's some need that needs to be filled, and that need has a kind of natural givenness, or it's revealed somehow; it's responsive to something that we see. What's so interesting: Neil Postman made this point about going to a car dealership to buy a car, and the salesman was explaining that they now had these automatic windows that would roll down at the push of a button. And Postman said, and this sounds so naive, but it's a profoundly interesting question: well, what problem does that solve? Of course, the problem it solves is the problem of rolling a window down, and his response was: I never perceived that to be a problem. It's really interesting with AI: what problem are we creating it to solve? In a certain sense it's a very different mindset. We just want to see what we can do, see what can be done. And in a way, the problems are something we are arriving at, something that surprises us, rather than something we create a simple solution for. I think that's connected to this placing of ourselves in the hands of an angel of some sort, or entering into a kind of agency that's bigger than we are. Those are all connected. They are. But one problem that was trying to be solved: strong AI was a scientific project of explaining intelligence, and that's a worthy thing to do, even if this technology has largely been separated from that. But notice, that's interesting: that's not a technical problem. Explaining something is, to use the classical distinction between theory and practice, a theoretical issue rather than a practical one. Yet we think of this as a technology. That's a curious thing. Well, yeah, and I would get into things like books, which are technologies that move between the theoretical and the practical. The book is one of the greatest technologies we ever invented, and it had all kinds of unforeseen consequences and really massively disrupted society. But I wanted to make another point, and this isn't a challenge, just a clarification. Think about a computer: what problem does a computer solve? It doesn't solve a problem; it is meant to be a multiple-problem solver. And what we're trying to do now is make a general problem solver. So what problem is it trying to solve? It's not trying to solve any one problem; it's trying to enhance our abilities to solve all the problems we try to solve. This machine is going to help us in medicine; it already is. So that's the answer. Again, that's not a challenge; I'm just speaking on behalf of people who think about this. But it is kind of interesting: the problem it's solving is the need to be able to solve any possible problem. Solving a meta-problem. Yeah. But it is sort of curious that, precisely because of the indeterminacy of that...
We're exposing ourselves, and I'm just stating our condition here in a way, we're exposing ourselves to a really great risk. I'm just restating what everybody has been saying today. But that's something that requires some wisdom, as you've been saying over and over, John, and prayer, to use Jonathan's language too. I just want to add one thing I mentioned in the series, in my essay: we have done this before. That's how civilization emerged. Nobody built it to solve a problem; there were a bunch of little problems, and what civilization is, is a meta-problem solver. So I've actually suggested we should also be paying attention to the lifetimes and life cycles of civilizations, how civilizations reproduce, why they rise and why they fall, to get some better understanding, some other ways of thinking about these machines. Civilizations are huge distributed-cognition, collective-intelligence machines. Living in cities is a horrible idea, except for the fact that it gives us better access to the collective intelligence of distributed cognition; that's the benefit that outweighs all the many noxious side effects of living in cities. You can also typically get better coffee. Yeah. So we're coming toward the end of our time here. This has been amazing; I can rarely go for two hours on any conversation, and we've just been going. Well, not only go for two hours, but sort of wish we had another two. That's what I was going to say: we can work on doing this again, because it feels like we've finally all come together around something here, and now we're asking what feels like a really important question to me: how do we think about integrating this solution, this meta-solution, into our meta-problems? That's a really interesting question. I think John bringing up civilization is such a great point, something I would really love to explore, because there is also, inscribed in the mythological stories, a relationship between transpersonal agency and civilization itself. If you want to understand why the Egyptians had their king as a god, and all that type of structure, it can help you understand how they were trying to capture higher forms of intelligence, distributed intelligence, the intelligences in their society. And the idea that we would be doing this technically, in an AI, is definitely worth thinking about and discussing. Yeah. That's a dimension that had just never struck me before, so that's really helpful. Maybe there's something we could read together, something short, since we're all very busy, to prompt another conversation along these lines of civilization. I would recommend, just because this is how YouTube works, that we come to decisions about that off camera. All right. Good call. There you go. Right. I do say, if there's a call for me to hang out with you fellows again, I don't care what we're talking about: I'm in, I want to be here, I want to do it. That's all I'll say about the invitation. Same here.
Any closing thoughts, things that feel like they need to be brought in, or do we feel good about this? Just a word of thanks, Ken. You were the one who arranged this, and you did the persistent work to make it happen and to find a hole in everybody's calendar that lined up; not an easy thing to do. So thank you, Ken, and thanks for being gracious over these years that I've known you. Yeah. And in addition to thanking Ken, I want to thank you, David, and you, Jonathan. I always find that I get to places in my thinking, in the logos, that I could not possibly reach on my own when I get into a living relationship with both of you in conversation and discussion. So I appreciate it greatly, and I just wanted to say thank you. Yeah, thanks to you guys; this has been great. And again, John and I have been trying to have this conversation for nine months, and we just kept putting it off; I'd cancel, he'd cancel. So it's wonderful that we were finally able to get here. Yeah. Well, thank you all. It's been a real pleasure.