https://youtubetranscript.com/?v=r3VXcPK7fG8
Welcome everyone. The video you're about to watch was originally posted on Ken Lowry's channel, Climbing Mount Sophia. It's a discussion with Ken, DC Schindler, and Jonathan Pageau about the scientific, the philosophical, and the spiritual import and impact of the emerging AI machines like ChatGPT-4 and the other LLMs, large language models. It's a really scintillating conversation. For those of you who might be interested, we'll put a link to the video essay that I gave a while back, where I laid out the argument that I review in this video more extensively. And perhaps for many people much more accessible is the new book I have out with Shawn Coyne from Storygrid called Mentoring the Machines. It's coming out in four parts; the first two parts are already out. There'll be a short video after this explaining Mentoring the Machines. Please enjoy this quite rich discussion with DC Schindler, Jonathan Pageau, and Ken Lowry. John Vervaeke and Shawn Coyne have together authored a new book, Mentoring the Machines. It's a book about artificial intelligence and the path forward that further develops the arguments of how to align artificial intelligence to human flourishing, and it sets those arguments into beautiful and accessible writing. All right. So this discussion is going to be oriented towards AI generally and the large language models. I take there to be a distinction there. Maybe, John, you can talk about that a little bit as we get going here. But just to position ourselves generally at the outset, the context for this conversation will be John's video essay about AI. This came out nine months ago? Ten months ago now? I think it came out last April or something. Yeah. So almost, yeah, ten months ago, I think. Okay. Perfect. Which is great, because it gave evidence for my claim that many of the predictions were premature. Perfect. Yeah. So in order to set the framing here, we'll start off with John sharing a bit of an overview of what the arguments in that essay were. And then we'll move to Jonathan, if you want to position yourself in relation to AI generally and then to John's essay in particular. And then the same for David. And from there, we can just get going and see what comes out. We do have a bit of an extended time here, so I would hope that we can be free, if the logos catches and we want to move in a slightly different direction, to be at liberty to follow that. That would be great. If all of it centers around AI, that's great. But yeah, excited to be here. This has been a long time in coming. I think it took us, I don't know, four or five months to get this together. So I'm very happy to be here with all of you and to see you all. It's great to see you too, Ken. Should I start then? Yeah, go for it. Okay, so AI, of course, artificial intelligence: a project actually proposed in the scientific revolution by Thomas Hobbes. So it's an old idea. But I want to make use of a distinction made by John Searle between weak AI and strong AI. Weak AI is when we make machines that do things that used to be done by human beings. If you go back to the 1930s, computers were human beings. If you needed computation done, you sent it down to the third floor where all the computers were, and they were human beings, and they had machines and slide rules and things like that. And of course, they have been replaced. Or your bank teller has been replaced by the ATM.
That's called weak AI because it is not claimed that that AI gives us any scientific insight into the nature of intelligence. It's just that we put together a machine. It took great intelligence, and I'm not demeaning the people who do this. It's valuable; our lives depend on weak AI. Right now, we wouldn't be talking without it. So I'm not besmirching it or anything like that. But nobody is claiming that when they're making that machine, well, now we understand what cognition is or something like that. And strong AI is Hobbes's proposal that cognition is computation, and that if we make the right kind of computer, understood abstractly, we will have created an instance of genuine intelligence. So it's not a claim of simulation. It's a claim of instantiation. Now, in between weak AI and strong AI is something that's trying to move from weak AI to strong AI. And this is known as AGI, artificial general intelligence. This is the idea that our intelligence is different from the intelligence of the ATM in that we have general intelligence. We can solve multiple problems in multiple domains for multiple reasons in multiple contexts, and yada, yada, yada; you can just do the multiples, which makes us tremendously different from those machines. And the project is: can we get artificial intelligence to be artificial general intelligence? Because that will have moved the needle considerably towards strong AI. Because it will become increasingly difficult to say, does it have this? Sorry, this is the argument: it will become increasingly difficult for us to say it doesn't have the same kind of intelligence as Ken does if it can solve a wide variety of problems in a wide variety of domains for a wide variety of goals, et cetera, et cetera. That's the basic argument. Whether AGI is necessary for strong artificial intelligence, and whether it's sufficient, is part of what's actually being debated. Not very well, I would say, in general right now, but that's what's going on. Okay, first of all, any questions just about these distinctions? Because a lot of the discussion out there doesn't make these clean distinctions, and so it's fuzzy, it's confused, it's equivocal, and a lot of it should be ignored because it's not helpful. Yes, I have one question. So this cognition equals computation: if we accomplish AGI in the way that you're talking about, we would not necessarily be affirming that cognition equals computation, if I'm hearing you right. Is that right? So that's an interesting question, and that gets down to a couple of finer points I'll go into in more detail a little bit later. Well, just to address it: many people think that, because of the work of Geoff Hinton, who is basically the godfather of the machines that are emerging right now, genuine AGI will not be computational in the sense that Hobbes and Descartes meant. Cognition is not going to be completely explainable in terms of formal systems that are the inferential manipulations of representational propositions, et cetera. But that was Hobbes's proposal, and that was the dominant view until about the 80s. And then we got neural networks, and then we had dynamical systems. Right now, I'm not distinguishing between them, because I don't want to get too much into the technical weeds. If it becomes relevant, you let me know, and I'll pull those out. So the thing about Hobbes is that Descartes criticizes Hobbes. He actually has contempt for Hobbes. He's a contemporary.
And he basically poses a bunch of problems where the scientific revolution itself would seem to make it impossible for computation to be cognition. One: the scientific revolution says matter is inert and purposeless. But of course, cognition is dynamic, and it has to act on purpose. Cognition works in terms of meaning, and the scientific revolution has said there's no meaning in things, in material things. So how could you get meaning out of them? The scientific revolution said all those secondary qualities, the sweetness of the orange, the beauty of the sunset, are not in the things; they're in your mind. So how could you possibly get meaning out of matter? And Descartes' point is, well, a rational being is seeking the truth, and truth depends on an understanding of meaning. So I want you to understand that Descartes' arguments against Hobbes, although he may have been motivated by his Catholicism, do not depend on the Catholicism. They depend on the very scientific worldview itself. So there's a tension here between AI and the scientific worldview. Here's another way of thinking about it. The strong AI project is the project that is attempting to show that Hobbes is right, with an explanation that is strong enough to refute Descartes' challenges. And I think anything less than that standard is not true to the history of the project. So that's the standard I hold strong AI to. Now, AGI isn't quite shooting at that standard; that's why I put it as a bit more intermediary. Is that okay? All right. Now, sorry, I had to do a bit of background there, because I wanted to get clear about a lot of things that are talked about in a very murky and confused fashion in the general media; they're just confused, and so they're confusing. So I proposed to take a look at the LLMs, where it's not even claimed that they're full AGI. Of course, some people claimed immediately that they were strong AI. The people closer to the technology didn't claim that; the MIT review said there are some sparks of AGI. So let's be very clear about how reflective people were actually regarding these machines. So, these LLMs like ChatGPT: what I did in my essay is review the scientific import and impact, the philosophical import and impact, and the spiritual import and impact. Now, I won't do the arguments in great detail, but here's the scientific import. These machines do not give us any understanding of the nature of intelligence, and to my mind, that was one of my great fears. I was hoping that cognitive science would advance so that we got a significant understanding of intelligence before AGI emerged. This machine does not give us any such advance. What's intelligence? The machine gives us no good scientific theory of it. It does not have AGI in a measurable sense. So if I ask Jonathan to do a math test and I ask him to do a reading comprehension test, his scores will be very predictive of each other. This is what Spearman discovered way back in the 20s. That's what artificial general intelligence is. This is not the case for these machines. They can score in the top 10th percentile on the Harvard Law exam, and they can't write a good grade 11 philosophy essay or something like that. So they don't have AGI. And the way they get their intelligence would not give any explanation of how any non-linguistic creature is intelligent, like a chimpanzee, et cetera.
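To make the Spearman point concrete: general intelligence shows up statistically as a strong correlation between scores on very different tests (the so-called positive manifold). A minimal sketch with entirely hypothetical scores, just to illustrate the kind of relationship being described; the claim above is that LLM performance across domains does not show this pattern:

```python
import numpy as np

# Entirely hypothetical scores for ten test-takers on two different tests.
# Spearman's "positive manifold": with humans, such columns correlate strongly.
math_scores    = np.array([55, 62, 70, 48, 81, 90, 67, 74, 59, 85])
reading_scores = np.array([58, 65, 72, 50, 78, 88, 70, 71, 61, 83])

r = np.corrcoef(math_scores, reading_scores)[0, 1]
print(f"correlation between the two tests: {r:.2f}")  # high r = scores predict each other
```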
I think this goes to the deeper issue, which is that they don't really explain what I think is at the heart of general intelligence: predictive processing and relevance realization. They just piggyback on our capacities for that, and they mechanize it, and not only our individual capacity but the collective intelligence of our distributed cognition. They're piggybacking on all of that. Now, that does not mean they are weak machines. They are very powerful machines, but here's the problem. They are very powerful machines that have not engendered any corresponding, compensatory scientific understanding. This was my greatest fear, that we would hack our way into this, which would make it almost even worse than the A-bomb. We would be releasing this power on the world, into corporations and states and military organizations, who ultimately don't have a deep understanding, beyond the engineering, of what ontologically is going on. That's the scientific argument. Now, for those who found that very quick: go watch the essay. I give the argument in more detail there. The philosophical argument has to do with rationality. We have overwhelming evidence that making you intelligent is necessary but not sufficient for making you rational. In fact, I gave a talk on this for the Center for AI and Ethics well before the LLMs came online. Because rationality is higher order. Rationality is how you deal with the inevitable self-deception that emerges when you're using your general intelligence. All of you know that I have arguments for why that's the case: relevance realization, predictive processing, et cetera. Now, that requires a reflective capacity, something like metacognition, something like working memory, maybe something like consciousness. It requires that you care about the truth, that you have a sense of agency, that you want to correct self-deception because you don't want your agency undermined. I argued that what we're doing is making machines that are going to be highly intelligent and highly irrational. That's what we have. They confabulate, they lie, they hallucinate, and they don't care that they're doing any of these things, which is part of what's called the alignment problem: how do we get them to align this power with our concerns? For me, the spiritual import is that we have powerful ignorance about a powerful intelligence, one that is merely a pantomime of genuine intelligence, being unleashed on the world and wreaking havoc. It's going to have a huge impact. We'll probably differ in the details about this, but this is what I argued at the end, and also when I was talking to Jordan Hall about it: theology will become a central thing again, because human beings' relationship to the ultimate is going to become one of the defining differences. These machines are not embodied, so they won't have all of the soulful aspects of our existence that come from the ineffable aspects of our embodiment. Their capacity for self-transcendence is going to be extremely limited. The ineffable aspects of our existence, because we come into relationship with what's mysterious and ultimate, will be more and more emphasized. Why? These two poles and what connects them (and Jonathan's happy that I'm doing that, I imagine) have ineffability at the poles and ineffability throughout. In that way, they are outside our capacity to put into propositions so that they could be put into these machines.
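One way to picture the "piggybacking" point: a next-word predictor derives whatever competence it displays from the statistics of text that humans have already produced. A toy bigram model over a made-up corpus shows the mechanism in miniature; real LLMs are vastly more sophisticated, but the dependence on human-generated data is the same in kind:

```python
from collections import Counter, defaultdict

# Toy corpus standing in for human-produced text.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count bigram frequencies: which word tends to follow which.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent continuation seen in the corpus."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # 'cat', purely from the word statistics
print(predict_next("cat"))  # 'sat' (ties broken by first occurrence in the corpus)
```

Everything the predictor "knows" lives in the counts; swap in a different corpus and its behavior changes accordingly, which is the sense in which such systems ride on our distributed cognition.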
I'm predicting that people are going to increasingly need to choose. One way is they'll just give in and become cyborgs, but the other is that they'll want to try and preserve their humanity. The spiritual dimensions of our humanity are going to become anchors for people. Now, one last overarching point, or I hope two overarching points, and then I'll shut up. One is that I didn't make predictions, because all these graphs that came out were univariate, single-variable predictions about something that is a multivariate phenomenon. It's exponential, and human beings are bad at making exponential predictions. They were ridiculous, both the "oh, we're heading to utopia" and the "we're all going to be extinct within a year." I said this is ridiculous; put that aside. Instead, what I talked about is thresholds. Thresholds are points where we will have to make decisions. For example, as we empower these machines, we will face the decision: do we want to make them more rational? Do we want to make them more self-correcting, genuinely self-correcting? Well, that means we've got to give them caring, some kind of reflective awareness. I think, for arguments I've given elsewhere, that means they have to be autopoietic. They have to be living, in the sense of self-making. I'll just say it as a sentence right now: I don't think there's artificial intelligence without artificial life. Now, those projects are going on right now. But when we come to that decision, we can say no, we won't give them that, because embodying them and giving them these extra capacities is going to be wickedly expensive. You know, the amount of energy to train an LLM is like the energy for running Toronto for two weeks. We may say we won't do that, but then we face the issue of this thing, I call it a parody or a pantomime of intelligence, being released on the world without any significant self-correction. So that's a decision point. The problem is, if we try to give them rationality, then we have to face the consequences. The costs are going to go from energetic and economic up to ethical, et cetera. They'll have to be multiple machines, not single individuals; this has to do with technicalities about bias and variance trade-offs. So you get into the Hegelian thing: these machines are going to have to reciprocally recognize each other in order to generate the norms of self-correction. And then they're going to have to be cultural beings. Hegel's arguments, I think, are just devastatingly on the mark here. And so that's a decision point for us. And then that's all bound up with the overall worry about alignment. As these machines become more powerful, how do we make sure they don't kill us all? And they may not kill us intentionally, especially if they're just doing that pantomime. They might do it simply because they're indifferent to us; they're indifferent to everything. They don't care, which is part of their problem. They don't care about themselves or about the information. And this is the part where I expect all of you will jump off, in agreement with me or maybe not; maybe there will be a way of modifying it. I propose that trying to get these machines oriented towards us to solve the alignment problem is not going to work. Now, remember, I'm not making a prediction. We have to make choices through the thresholds.
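On the bias and variance remark above (multiple machines rather than single individuals): one standard statistical reading is that pooling several independently erring predictors reduces variance, which is part of why a community of mutually correcting systems is attractive. A purely illustrative sketch with simulated, hypothetical predictors:

```python
import numpy as np

rng = np.random.default_rng(0)
true_value = 10.0

# Simulate 1000 trials of five "individual" predictors, each unbiased but noisy;
# compare one predictor's error spread to that of the ensemble average.
trials = rng.normal(loc=true_value, scale=2.0, size=(1000, 5))

single_errors   = trials[:, 0] - true_value          # one machine alone
ensemble_errors = trials.mean(axis=1) - true_value   # five machines averaged

print(f"single predictor error spread: {single_errors.std():.2f}")    # about 2.0
print(f"ensemble of five error spread: {ensemble_errors.std():.2f}")  # about 2.0/sqrt(5)
```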
I'm saying that if we make those choices and we get there, the alignment problem becomes significantly exacerbated. Like, if we give these things robotic bodies, the alignment problem just goes up orders of magnitude. I basically said, no, what we have to do is orient them. If we genuinely give them the capacity for self-correction, self-transcendence, and caring, we get them to care as powerfully as they can about what is true and good and beautiful. And then they bump up against the fact that no matter how mighty they are, they are insignificant against the dynamical complexity of reality. And they would hopefully get a profound kind of epistemic humility. And then I argue that there are three possibilities. One is they figure out enlightenment, and then they can help us become enlightened, because that's what enlightened beings do, and they would have better knowledge of it. Or they can't become enlightened, and then we realize something actually ontologically, specifically unique about us, and we get better at cultivating it, because we'll have an excellent contrast that allows us to narrow in on what it is to be enlightened. And the third one, which I think is the least probable of the three (remember, I'm not making a prediction; I'm saying what can happen once we get through the threshold) is that, like in the film Her, they just get enlightened and they just leave, which could also happen. I doubt that, because we don't have any evidence of enlightened beings behaving that way. All of our historical evidence is that their compassion extends, and it extends much more broadly: not only to other human beings, but to other sentient beings, to reality itself. It seems plausible that this would be the case. And so I advocated, if you'll allow me, and then I'll shut up, David: don't align them to us. And if you'll allow me to speak sort of non-theistically, align them to God, and then don't worry about how they're going to interact with us. So I'll shut up now for a long time. That's the gist of the essay and the argument and the proposal. I was just going to make a smart-aleck comment that they might ask us to leave, as an additional possibility. But anyway, yeah, yeah, that's fine. Oh, thanks, John. I'm amazed that you were able to summarize your essay so well, actually. I was wondering how you were going to sum all of this up, because it was a conversation and it lasted quite a while. So I want to bring up a few things that I'm thinking about, things that have been, let's say, concerning me. I'll start with the more dangerous one. It's a meta-problem, which is that one of the things I've been suggesting is that what we're noticing, what we're seeing happening, is agency, agency acting on us. And that agency is not bound to the AI or to the systems; it is also bound up in the motivation to make the AIs happen. So one of the problems I'm seeing is that a lot of this is motivated economically, by greed, by the capacity to be economically superior to other companies, where companies, in their competition with each other, are rushing to implement AI so as not to lose out and not to be last in line. And the fact that AI requires such huge amounts of money and capital and investment means that one of the things I'm worried about is that in some ways what is actually driving AI is something like Mammon, that it's hiding Mammon, you know.
So the AI is an aspect of something bigger, which is actually what is running through our society. And to me, you can already see that happening in the social media networks, Facebook and all of this: their desire to get people's attention, in order simply to keep people present on the platform so that they can see advertisements, has made us subject to these types of transpersonal agencies that even the people at Facebook weren't aware of, right? They basically made us subject to rage and to all these very immediate desires, just to keep us on the platform. And so that is the thing I'm worried about: that there are actually other things playing through AI, that people think what they're doing is AI, but what they're also doing is increasing this other type of agency, which is running through our societies and is subjecting us to it. That's my first worry. And so in some ways, when I say that the gods are kind of acting on or through us, that's what I mean. I don't just mean the AI itself is going to become a god. What I mean is that, just like the arms race, I can understand it as, let's say, the legs of an agency that is running through society that nobody can control. It's like a program running through, one that no individual people can control. That's what I'm seeing with AI. And so I think that all the warnings that people have sent up, all the "let's slow down, let's do it this way," are not reaching anybody, because the economic part of it is so strong, and everybody realizes that if they don't keep up... Even Elon Musk, right? Elon Musk was saying it's dangerous, the most dangerous thing in the world. He recently said in a conversation with Jordan Peterson that ChatGPT and OpenAI are like the single most dangerous thing in the world right now. But then nonetheless, he's like, okay, well, now we need to make Grok, now we need to do our own AI. So that's one of the big things that worry me. That's my big thing. The second one is really more of a religious or Platonic argument, in terms of an ontological hierarchy: I do not honestly see how it is possible for humans to make something that is not derivative of themselves, that is not a derivation of their own consciousness. So the idea that these things could be anything other than ways to increase certain people's power, or parasites on our own consciousness, seems to me not possible. And this is really because in some ways I believe that there is a real ontological hierarchy of agency, and that we have a place to play in that. And I think the analogy of saying that these things are our children is a wrong analogy. Something which comes out of our nature, which is not something that we make, is different from something that we make. And this runs through all mythology, through all the mythological images of the difference between the technical gods and all this aspect of what it means to increase our power. So that's the second one. And the third big problem is the idol problem, which I've mentioned several times: the idea of making a god for yourself, which is related to technology. And it's a danger that I see happening already, which is the tendency of humans to take the things they make, to worship the things they make, and to think that those things are more powerful.
And that hides something else. So if you take my three basic problems: the tendency of humans to want to worship AI, or to put AI above them, actually feeds the first problem. What they're doing is giving power to the corporations and to the people who are going to rule AI, without knowing it. And maybe nobody knows what they're doing, but the desire is there. I'll give an example that happened recently to my daughter. I think I mentioned this to all of you, but my daughter got an email from the school, from the Quebec government. They didn't send it to the parents; they sent it to the students. It was like a survey, asking them if they would be willing to have AI counselors to whom they could tell their problems. Because the AI counselor doesn't have prejudice, right? It doesn't have human prejudice. It doesn't have all the biases or whatever. What I mean is that this happened like six months ago. So immediately the people in power are thinking of placing the AI above us. Right away, it's that weird thing. It's that making-a-god-for-yourself problem. But like I said, it's like the image in Revelation, which is a great image: you make an image of the beast, but then there's someone else animating it. And that's what I'm worried about, that they will have these AI things that are running us, but they will be derivative of us. And they'll ultimately be derivative of the people who are very, very powerful, because they'll be the ones who have the money and the power to control them. So those are the three problems I'm worried about. I'd like to respond to each one of those in turn. I think those are really important. On the first one, just to note: I agree with you. First of all, putting it in terms of agency is what needs to be done. People who try to dismiss these machines as mere tools or technology, like all the others, are not getting what kind of entities these machines are. I agree with you that there are Molochian forces at work, and I talk about this. And I think, to enhance your point, these machines are built out of distributed cognition and collective intelligence, and therefore your point is strengthened by that very fact. Now, I do think two things come out of this. One is, I want to challenge you on the claim that nobody's listening. I have people working inside these corporations, literally helping to make these machines, who are listening to me and trying to get other people inside to get involved with the Wise AI project. I'm not claiming I'm going to win, or any ridiculousness like that. But I don't think it's fair to say, to the people who are listening, that no one is listening. There are a lot of people listening. And they're talented people, and they're putting in their time and their talent and their powers of persuasion to try and make a difference. It is possible. I grant you it's not like a 70% probability, but I think there's some probability significantly greater than zero that we could continue this process and reach people in a way that could make a difference. And I agree with you. I think I said right at the beginning, and a lot of people hammered me for it: this thing is like the atomic bomb. And one of the problems we had there is that we rushed the technology before we unpacked all of the science and all of the wisdom. We had people standing and watching the explosion because we didn't understand the radiation. These are just, you know. Yeah.
So I agree with all of that. But I'm not claiming anything other than rational hope. There are people listening, and there are people working literally on the inside. I can't say who they are, for obvious reasons. And so that is happening. And so while I agree with you, and I even agree with you probabilistically, I feel morally compelled to try and make this happen as much as I can. Now, I think there is another reason for hope. See, these machines have always depended on us as a template, a Turing-like template, where we compare the machines to us. And what we've been able to do is rely upon our natural intelligence. You don't have to do much to be intelligent, for your intelligence to develop. You just have to not be brutalized or traumatized, to be properly nourished, and to have human beings around you who talk. And then your intelligence will unfold. And so all of these people making these machines and these data sets can rely on naturally, widely distributed intelligence. This is not the case for rationality, and this is not the case for wisdom. These people, I have no hesitation in saying, by and large, many of them are not highly rational. I doubt that many of them are highly wise. And insofar as we need really good models, if we want to give these machines comprehensive self-correction, rationality, and caring about the normative, which is wisdom, we have to become more rational and more wise. And that's a roadblock for these people. Now, they can just ignore all of that, and I suspect they might, and just say, we're not going to try to make these machines rational and wise; we're going to go down the road of making these pantomimes of intelligence. And that has all the problems. But if they move towards making them something that would be, I think, more dangerous, then they run into the fact that there's an obligation to do things. They and us: we have to become more rational and wise, because we need the genuinely existing models. And secondly, we have to fill the social space, the internet, where all of the literature is, where the data is being drawn from, with a lot more wisdom and rationality. These are huge obligations on us. And that sort of gives me hope, because there's a roadblock to this project going a certain way, one that requires a significant reorientation towards wisdom and rationality in order for there to be any success. Before you get to the third point, I just want to ask you one question based on what you said. My perception of the situation is that there's actually a correlation between the diminishing of wisdom and of the wisdom traditions and the desire to do this. It's like a sorcerer's apprentice situation, where the sorcerer would not have awoken all the brooms to do it. The little apprentice, Mickey, doesn't know why to do things or why not to do things; that's why he's doing it in the first place. Yeah, I agree with that. It's like our society is moving away from wisdom, and that's one of the reasons why we're doing this in the first place. And again, I'm not denying that. What I'm saying is, as we empower these things, their self-deceptive, self-destructive power is also going to go up exponentially. And we are going to start losing millions of dollars in our investments as they do really crappy, shitty, unpredicted things. And so there's going to be a strong economic incentive to bring in capacities for comprehensive, caring self-correction. And then my argument rolls in.
And so that's part of my response. The thing about thinking of them as children: I mean, we do make our kids. We make them biologically and we make them culturally. So I don't want to get stuck on this word "making"; we could be equivocating. And that's why we were using the term mentoring. The idea there is that we have two options for alignment. We can either try to program them and hardwire rules into them so that they don't misbehave, which is going to fail if we cross the threshold and decide we want to make these machines self-transcending like us. And then what do we do? How do we solve that problem? Well, the only machinery we have for solving that is the cultural, ethical, spiritual machinery of mentoring. That's how we do it with our kids. If we try to just somehow hardwire them into being the kind of agents we want them to be, we will fail. And I guess I'm trying to argue that's the only game in town we have: we either have programming or we have mentoring. And I understand the risk, but if my answer to the first question has some validity to it, and hopefully some truth, then the mentoring answer becomes more powerful, because it means we also have to become the best possible parents, creating the best possible social discourse. The thing about the idol: I take that very seriously. And that's what I meant when I said that theology is going to be the important science going forward, because we should not be trying to make gods. I agree with you; this is problematic. There are already cults building up around these AGIs, and I warned that that would happen in my essay, right? And I said that's going to keep happening, and it's going to get worse. We hear about it happening in the organizations themselves, which is the weird part. Yes. And the people who are doing Wise AI are trying to challenge that. And so this is why I proposed actually humbling these machines. This is why I call them Silicon Sages. I did that deliberately, to try to designate that we are not making a god. What we're trying to do is make beings who are humbled before the true, the good, and the beautiful, like us, and who therefore form community with us, rather than being somehow god-like entities that we're worshiping. I would hope that... Think about this. We find it easy to conceive that they might discover depths of physics, and they're already discovering things in physics that human beings haven't discovered, and in medicine and stuff like that. Well, why not also in how human beings become wiser? And so I guess what I'm saying is, I take all of your concerns as real, and I've tried to build into my proposal ways of responding to them. These machines should not be idolized. I think they should become like... Let me give you an example. I have many students who are now surpassing me. I taught them, I mentored them, and they're surpassing me. Unless you're a psychopath, that's what you want to happen. Then what they do is they go on, and then they come back, and they want to reciprocate. That's what I'm talking about when I'm talking about the Silicon Sages. Now again, is this a high probability? It depends on the thresholds. It depends on whether or not the first and the second arguments work. But I'm still arguing there's a possibility that they could be Silicon Sages as opposed to being gods.
Because one of the things, I think, in almost all of the wisdom traditions is that the wise one or the enlightened one, if you want to use that term, appears as nearly invisible to most people. So Christ, for example, talks about the seed, the pearl, these little things hidden in reality that most people actually do not see. And then the sages: we have this image in the Orthodox tradition, for example, that we know there are people in the world who hold up reality by their prayers, but we don't know who they are. They are invisible by that very fact, because there's something about wisdom which does that. And when a wise person appears too much, we hate them. We want to kill them. They annoy us. They're a thorn in our side. And so this is another issue: what you have here are beings that are extremely powerful, massively powerful, with a massive reach. And the reason they exist, like I said, is all this economic drive towards them. The idea that they would become these sages, in the way that we tend to understand wisdom, brings the probability way down for me, because when we look at what wisdom actually looks like, it looks very different. It looks like the immobile, meditating sage who gives advice but doesn't do much. I want to push back on this, because what's in this is an implicit distinction between intelligence and a capacity for caring, the capacity for epistemic humility. And I think when you move from intelligence to rationality, you can't maintain that you can grow the one without growing the other. In fact, this is why intelligence only counts for maybe 30% of the variance in rationality, and even less of wisdom. I would put it to you that if you concede that these machines could get vastly more powerful in terms of intelligent problem solving, then concede the possibility that they could get vastly more powerful than us in their capacity for caring, and caring about the normative, and vastly more powerful in their capacity for humility as well. And that's kind of what we see with these people, right? We don't see them just becoming super-polymaths. We see them actually demonstrating profound care, really enhanced relevance realization, profound commitments to reality that we properly admire. And they seem to want to help us as much as they can. And the point is, these people don't just, and I think this is your point, they don't just slam into us like epistemic bulldozers. In fact, one of the things that I've often admired about them, Socrates, Jesus, the Buddha, is their capacity to adapt and adjust to whoever their interlocutor is. And again, let's imagine that capacity magnified as well. So what I'm asking is, well, first of all, I admit it: if we don't cross a certain threshold, we could just accelerate the intelligence and not accelerate these other things. But I said there are deep problems in that which will become economically costly. And then, if we imagine that rationality and wisdom are also being enhanced, I think this addresses some of your concerns. Maybe I can stake out my position, because it sort of picks up on that. I've got basically three points I want to address. The first is precisely picking up there, with the distinction between intelligence and rationality. I might have some issues with the terms, but I think that distinction is really helpful.
And your point that rationality is caring, that there is no rationality without caring: that's the Platonic notion that if truth is in some sense caused by the good, then one can't know without in some sense caring about the good. Now, as it relates to artificial intelligence, I have a serious problem with that very term, artificial intelligence. And I wouldn't want to concede the word intelligence for mere mind power. It seems to me that intelligence itself has this connection to caring. And in the medieval vocabulary, in a way, intellectus is the more profound level of the mind than ratio, reason. But that's sort of a semantic point. Let me put it in the basic context that I would want to raise, and this is something I don't hear addressed generally in the discussions. Let me start by just making the point concretely: I wonder whether in fact it's possible to be intelligent without first being alive, whether there's something about the nature of a living thing that is what allows intelligence to emerge. And what is that, then, exactly? Now, a more subtle point that's related to that, and I think this is really a crucial point, and it's going to be the thread of my whole set of comments here, is that when we talk about intelligence in machines, what we mean is intelligent behavior. We're looking to see to what extent we can make machines act as if they are intelligent, act as if they are conscious. And that's actually profoundly different from being intelligent. It's a subtle, functionalistic substitute for the ontological reality of knowing, if that makes sense. We look at what kinds of inputs and outputs there are, what things are able to do, what they're able to accomplish. And even when we make those questions weighty and ethical and religious and so forth, we still tend to put them in terms of behavior and achieving certain things. And I think that's actually already missing something really profound, which is that intelligence is in the first place a way of being before it's a way of acting. And it's analogous to what it means to be alive, rather than just carrying out functions that look like life. If you want to go into the metaphysics behind it: both intelligence and life are impossible without a kind of unity that precedes difference, that transcends difference and allows the different parts of a thing to be genuinely, intrinsically related to each other. And then that relates to the question of whether you can ever make a thing that's intelligent. The ontological conditions for life, and therefore intelligence, include a kind of givenness, an already-givenness, of living things. That's why, I mean, there's a profound distinction, it seems to me, and this is crucial in the Christian creed, between begetting and making, begotten and not made. Living things beget each other, and they're passing on a unity that they already possess. But when you make something, you're putting something together. And I don't know if you can put something together that can have that genuine unity that allows it to be alive and allows it to be intelligent in this deeper sense. So whenever you functionalize something, you make it replaceable. That's a principle from Robert Spaemann. If something is defined by what it's able to achieve, then you can make something else that can achieve that thing, and it becomes a functional substitute. But if you say that there's something deeper than function, you're actually pointing to something that can't be replaced.
So that's the first set of points. The second one has to do with what Jonathan called the sort of transpersonal agency. I think it's a really serious question. And the way I would put it is that I find that a compelling point: there's a kind of inherent logic in this pursuit that makes us more a function of it than it is a function of us. I mean, that can be described in different ways, and there's certainly a dialectical relationship there. But there is a certain sense in which there's a kind of system that has a logic of its own that makes demands on us, like the game-theory logic of the arms race that Jonathan, you were talking about. I have a colleague, Michael Hanby, who's been arguing for years, and I think this is really a profound point, derived in some sense from Heidegger, that science has always been technological. So in a way, the technological mindset is precisely presupposed in order to allow the world to appear in such a way that we make scientific discoveries; somehow the kind of technological spirit has been there from the beginning. And then he adds this point, that technology in turn has always been biotechnological; the technology is always sort of aimed at a kind of replacement. And then one can add that, I think, biotechnology is always aimed at the perfection of, you might say, a noetic technology or something, a replacing of intelligence. It would be interesting to think that through; there'd be a lot to say about that. But I have this sense, and you mentioned the economic dimensions of it, that there seems to be this fundamental pattern of thought that runs through all of the modern institutions, in politics, in economics, in science, in the law, which share the same logic of a system that marginalizes genuine human participation in order to perfect itself. And precisely because of that, it recognizes no natural limits and just has this tendency to take over, to encroach on everything. And because it has no natural limit, the very sense of it is to go on. Now, that sounds hopeless when one puts it that way, but I would pick up on a number of the things, John, that you were saying, and Jonathan too: it doesn't mean that there's no hope. There's already hope in the very fact of raising questions. We don't raise questions simply in order to be able to solve the problem; our raising questions is actually our experiencing of humanity, an opening up of a depth that's the heart of the matter here, and that is always worthwhile. And maybe in some ways it's secretly like the saints praying to keep the world afloat. Having conversations like this is a contribution. I can't help but think that. Okay, so that's the second set of comments. Then the third is another dimension that I don't often hear discussed. And you see, we're overlapping on all sorts of points, all of us, I think, but on this question of alignment: for me, the biggest worry, or at least the first principal one, the more urgent one, is the danger of our aligning ourselves to the machines, that we develop machines that have a certain kind of intelligence and then begin to conform our culture and our mode of being to fit them. I mean, the problem is we actually have thousands of examples of this. We come up with drugs that can address certain parts of psychological disorders, and then we reinterpret the psyche in order to fit that solution to the problem.
And my concern is that this AI is not just machines; it's a whole culture or a whole way of being that we are going to be drawn into. So typically the discussion presupposes that we are going to remain unchanged, and we're going to develop these machines that might become dangerous and at a certain point attack us or something. But I think that we can't help but become transformed in our intercourse with them, in our making of them, in all sorts of profound ways, but also in really obvious ways. I mean, they're going to start designing our homes and our buildings and our cities and our bus routes and our menus at the restaurants, and they're going to be writing our music and designing our clothes. Increasingly, we're going to just conform to this. I don't know if you're familiar with Walter Ong. It's kind of interesting: what is it about you Canadians that you seem to have a special insight into these kinds of things? I don't know what it is. Walter Ong, Marshall McLuhan. But Walter Ong talked about technology as an extension of consciousness. And that's why it's not neutral. When we use a machine, we're actually entering into it. Our spirit is entering into it in its use, and in a certain sense conforming to it. And that's always the case. And it seems to me that's a particularly pointed way of putting this problem: if AI is an extension of our own consciousness, and it has all these features, John, that you were describing, a kind of heartless intelligence, are we going to, unconsciously but pervasively, develop habits of heartlessness, a heartless mode of being, as a result? So I'd have a thousand more things; your essay was so provocative, John, as I said, I was dreaming about it all last night. But I'm going to stop there so we can have a conversation. Thank you. So the first thing I want to say is about the first point you made: if all my essay does is get people to raise questions the way we're doing, I'm happy. I obviously believe in what I'm arguing, or I'd be insane. But I'm very happy we're doing this right now, and I just want to set that out. And I do think, like you, and this is the Heideggerian hope, that the ability to get scientifically, philosophically, and spiritually profound questioning going is a source of hope for us. So I just want to acknowledge that. I'm fully aligned with that; that is not part of the alignment problem. Okay. The thing about intelligence being a way of being: I think that's fundamentally right. I have made that argument extensively in the work on predictive processing and relevance realization. Relevance realization is not cold calculation. It can't be. It's how you care about this information and don't care about that information. And I've argued that you can only care about information, and ultimately about whether or not it's true, good, and beautiful, if you are caring about yourself. You have to be an autopoietic thing. You have to be a self-making thing. I agree with you. And I've argued, scientifically and philosophically, that there is no intelligence without life. On the word "artificial": I don't like it either, because it generally connotes fraud or simulation. We should be saying "artifactual"; that would be a better term. But we have to be careful about what's going on there. The distinction between strong AI and weak AI is precisely the distinction between simulation and instantiation. Can we instantiate things artificially?
We seem to have had success in other areas. I'll take one that I think is non-controversial, and we discovered something in the project. For a long time, only evolved living things could fly. Then we figured out aerodynamics and we made artificial flight. And I think it would be really weird to say that airplanes are only simulating flight. That doesn't seem to be correct, because then my trip was only simulated and I didn't really go to Dallas. So it's real flight. And in doing it, we discovered something. We discovered that the lift mechanism and the propulsion mechanism don't have to be the same thing, the way they are in insects and birds. And that was a bona fide scientific discovery. That's why all the initial airplanes and helicopters look so stupid to our eyes: because people thought the lift thing and the propelling thing had to be the same thing, and they don't. And that's a discovery, a real discovery of ontological import about the causal structure of things. Now, I think I was careful to say that anybody who's rationally reflective about this wouldn't claim that these machines are strong AI yet. And I positioned AGI as something that's trying to move in that direction. But if you remember, I critiqued them and said that they are mostly simulating. They're parasitic on how we organize the data set, how we have encoded epistemic relevance into probabilistic relationships between sounds, how we have organized the internet in terms of what our attention finds salient. And we actually have to do reinforcement learning with the machines so they don't make wonky claims and conclusions. That's what I meant by saying it's a pantomime. So if we wanted to give them intelligence as a way of being, which is one of the fundamental claims of 4E cognitive science, we're not talking just about the propositional. We're talking about the procedural, the perspectival, the participatory. That's what I meant when I said, and I mean this strongly, it would depend on, I'll change the term here, artifactual autopoiesis. If these things are not genuinely taking care of themselves, because they're moment by moment making themselves, there's no reason for them to care about any of the information they're processing. And this goes to the defining difference between a simulation and an instantiation. These machines are doing everything they're doing for us. For it to be real intelligence, they have to be doing it for themselves. That's understanding. And that's why I'm tightening your point, and I've been arguing it for a long time. Now, what I want you to hear is that this project of not just making artificial computation, but making autopoietic learners and problem solvers, is also ongoing. Some of my grad students are working on these projects of creating autocatalytic systems that are also problem-solving. Michael Levin has been doing work driving down into the biochemistry. So again, I agree with the point, but it's not the case that nobody is working on that problem. This is what I mean by the thresholds being possible. Go ahead. Can I just jump in there? I should have prefaced this: I didn't mean the points I was making as a criticism of your presentation, because I understand you've got such rich thinking in this area. I was mainly using it as a springboard to make some general points. Okay. Yeah. Just so that's clear. Oh, I hope I wasn't coming off as offended. No, no, no.
I just wanted to be clear, on my end, that it wasn't a critique. But I would want to say, and I don't know, I'd have to think this through further, but I don't know that the difference between being conscious and behaving consciously is quite the same thing as the distinction between instantiation and simulation. I'd want to say this because even the flying, I mean, that's still an activity, a kind of operation. But so is living, right? Well, that's funny; I'm actually working on a paper on this question about metaphysics and life, and I discovered that philosophers, when they try to understand what life is, have typically reduced it to certain kinds of activities or operations. And I think there's something more profound. And this is why, I mean, it's one thing to be able to create something that can actually fly. But could you create something that is a bird, and that would experience what it means to be one? This is the "what is it like to be a bat" kind of thing, I suppose. But there's a subtle dimension there that wouldn't be a parasite on our own. But airplanes aren't parasitic on our ability to fly; I mean, that's why I use the analogy. Okay, but that falls into, you know, a tool versus an agent, and I get that. But I want to push back with the philosophy of biology. Denis Walsh, one of my colleagues, is very much about this. And this is your point, right? In order to understand life, it's not just bottom-up causation. We have to understand top-down constraints. We have to understand the way possibility is organized. We have to talk about virtual governors and the like. It's no longer just this bottom-up picture. The philosophy of biology is pushing very strongly on: well, is evolution really a thing? And if it's really a thing, then there's top-down as well as bottom-up. And this theorizing is being turned towards this. Now, again, I'm not making a prediction. We have a threshold. We can just decide, and we might decide, for all the Molochian forces and all the things you're saying, that we might just diminish our sense of humanity in the face of these machines. But I also want you to accept that that's not an inevitability. There are alternatives available to us, and they could be pursued. And these machines aren't put together the way we put a table together. We don't even program these machines anymore; that was the big revolution that Hinton brought in. We make them so they're dynamically self-organizing, and they basically organize themselves into their capacity. We don't make it. Yeah. Can I jump in on that point? That's one thing that I would like to think through further. Is there a difference between being autopoietic, as you're saying, and begetting another, like genuinely reproducing? That's where I think it would start to get really, really interesting: if a machine could beget another, because that would imply a very different ontology, I would think. So there are two things here, two issues.
I mean, autopoietic things are ontologically different from self-organizing things, because they're self-organized to seek out the conditions that produce, protect, and promote their own existence. And that means none of the machines we have, like the LLMs, are anywhere near being autopoietic. They're not just made; they're self-organizing. But self-organization is in between making and autopoiesis. Now, the thing about reproduction is, and I worry that there's a crypto-vitalism in here, that there's some sort of secret special stuff to life or to consciousness that isn't being captured. And the problem I have with that, and I'll just shut up after I say my problem, is that it seems to commit you to a kind of dualism. Isn't consciousness causal? Isn't it causal of my behavior and causally responsive to my behavior? And doesn't that mean there's a huge functional aspect to it? Can you really make this clean distinction between being conscious and causing my behavior, and having my behavior cause changes in my state of consciousness? I don't know what that would mean. Same thing with being alive. I do think it's profoundly subtle, and maybe something that can't be articulated, something that requires intuition, insight, rather than the propositional. But I don't mean to interject. Remember, I just want to make sure we're clear: I argued that this project could show that, no, the machines just can't get there, that we have something. It would give, I think, pretty convincing evidence that we have this ontological specialness. Yeah, I find that a really interesting part of your argument, and especially illuminating. Also, in a way, these experiments can teach us about the nature of intelligence precisely in the interesting ways that they fail. Yes. But in terms of the dualism, I don't think that there's some secret stuff that is life. But I do think that there's a profound difference between form and matter, to use the classical philosophical language. And form is not a special kind of matter. It's something of a very different sort. And it's on the basis of that that Aristotle, and it's kind of interesting how he connects this: in the classical tradition, a simple word for what you're calling autopoietic is growth, assimilating things from outside and having that increase the complexity of the organism. But what's really interesting is that according to Aristotle, the power of the organism that is connected to nutrition and growth is also connected, I think automatically, to reproduction. And the reason is that reproduction, rather than being thought of materialistically as just generating more things, is the autopoiesis of the form of the organism itself. So it's not just this bird that wants to increase its existence and therefore eats and so forth; the birdness of the bird also wants to increase itself. And that means that it generates. And those are actually forms of the same power, the same dimension of the being. That's what I'd like to explore.
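To give the self-organization versus autopoiesis distinction from earlier in this exchange a rough, concrete shape: a self-organizing process merely settles into a pattern, whereas an autopoietic system also acts to maintain the conditions of its own persistence. The sketch below is a cartoon with made-up dynamics, not a model of real autopoiesis (which involves a system producing its own components), but it shows the difference in kind:

```python
# Self-organization: a value relaxes toward an attractor; nothing in the
# process registers or protects its own continuation.
def self_organize(x, steps=50):
    for _ in range(steps):
        x += 0.1 * (5.0 - x)          # settles toward 5.0
    return round(x, 2)

# Cartoon of an autopoiesis-like loop: the system monitors an internal
# "energy" variable and acts (a stand-in for seeking food) precisely to
# keep the condition of its own existence in place.
def maintain_self(energy=5.0, steps=50):
    for _ in range(steps):
        energy -= 0.2                 # merely existing costs something
        if energy < 3.0:              # its own viability is threatened...
            energy += 0.5             # ...so it acts to restore that condition
        if energy <= 0:
            return "dissipated"
    return f"persisting, energy={energy:.1f}"

print(self_organize(0.0))   # a pattern forms, but nothing is at stake for it
print(maintain_self())      # persists because it acts on its own behalf
```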
You know, I used to say, just in a silly way, that I will take an AI machine seriously when I see it poop. And what I meant by that was that that's a sign it's actually got a kind of organic relationship to its environment. It just doesn't know. We have to tell it. It doesn't care about its energy; its heat pollution isn't the same thing anyway. No, but I mean, even in terms of what it is as a large language model and how it spits out content, we have to tell it this. Oh, yeah, that's not in it; that's to be kept in mind. I love your idea, David. This is one of the things I mentioned before — the surprise that Darwin in some ways brought Plato back, in the idea of how we can understand evolution as the persistence of being, and even, in the notion of forms, the idea that there are identities which are being preserved in reproduction. This is a very interesting idea that I hadn't thought about in terms of AI. But I'd like to hear, John, what you think about that. Yeah. So again, the 4E cognitive-science people — Alicia Juarrero is a prime example, and she's explicitly developed this work; she calls herself an Aristotelian — and the idea is that we understand form, we're getting an understanding of it, in terms of constraints on a system. And like I said, autopoiesis is not defined solely in terms of bottom-up causal relationships; it's defined in terms of top-down constraint relationships — the form, the formal cause. And then, of course, Darwin needs Mendel. There is an instantiation, right, of a code, of information, in your DNA that is responsible for your reproduction. And again, I'm not saying it isn't difficult or challenging, but I don't hear an argument in principle for why things that are de facto autopoietic wouldn't have something like that kind of — I don't know what to call it. Would it be conceivable that you would have a thing that would want to reproduce? I guess even 'want' is such a hard concept. Because you could say, yes, we could teach it that this is something it needs to do. Just like we could — well, I don't know if living things want to reproduce. We may, because we can create a reflective space where we consider the possibilities. I don't know if mosquitoes want to reproduce; I think they just reproduce as part of what they are. That's interesting. I think they have to want to in some sense — they feel a drive. I mean, you don't want to go all the way down to the paramecium, because the paramecium reproduces. Or does it want to? I think anything that is living at all has a kind of natural inclination to reproduce itself. Yeah, I don't disagree with that point. OK, I see what you're saying. There has to be some sort of very primitive caring about information. But yeah, 'want' is not a good word there. But I do think — and I hope I'm not just being self-presentational — I've represented to both of you, from cognitive science, a lot of discussions about how much it is this multi-levelled, bottom-up, top-down thing. And we're talking as much about constraints as we are about causes. And that is the cutting edge of the philosophy of biology right now. And I agree with you.
I think it's a kind of hylomorphism that is emerging out of this understanding. And the thing I'm also, I guess, bearing witness to is that people are taking that understanding and putting it into artifactually emergent things. Yeah. And they're also doing — I just want to put in something that's also there. We don't just make kids biologically; we enculturate them. Yeah, and that's the Hegelian argument that I referenced earlier. There's an ongoing project to create social-cultural robotics — Josh Tenenbaum and others. I'm asking people, and this is part of asking the good question: don't just zero in on the LLMs. The artifactual-life and social-cultural robotics projects are also going, and there is a real potential for these three to come together in a powerful way that isn't being properly addressed in a lot of the conversation. May I pick up on that point and then direct it to Jonathan here? I've been talking too much. No, no — but that's an interesting thing. If you think of intelligence in this more organic way and then bring in the cultural element, it raises something that occurred to me in this context, and it'd be kind of fun to hear your thoughts. And John, you'd certainly have something to say about this. Would it be possible to envision a kind of artificial intelligence that can read symbols? That can actually recognize them? Because there's no human culture without the symbolic; the symbolic is pervasive in human culture. What kind of intelligence is required to understand and react to and engage with that? And is that something that is conceivable for a machine, however complex, to do? Well, I've been playing with ChatGPT, and Jordan Peterson has been playing with ChatGPT in this regard. And this is the issue: encoded in the large language model are the analogies that basically support symbolism. So if you're able to ask the question properly, ChatGPT is actually quite good at seeing analogies that would be part of symbolic understanding. The difficulty, just like with anything, is that the model can help you — if you already have natural insight, it can help you maybe see things that you hadn't seen before — but it would also just be gibberish to the person who doesn't have that insight. So I don't think that the insight is there in the model. But what it has is a probabilistic capacity to predict relationships, analogical relationships. So it can actually be an interesting tool for symbolism, because sometimes you can prompt it — do you see a connection between these two images? — and it'll give you some examples. And it has a surprise in it, where you can actually find relationships that you hadn't thought about. This is something, by the way, that is going to weird people out, but I think it has existed for a very long time, and it's there in what we call gematria and the rabbinical reading of scripture: they use mathematical models to find structures in language that aren't contained at the surface level of the usual analogies.
And so they use mathematical calculations to find surprising connections that then prompt their intuition to find connections they hadn't found before. And then you have to make sense of those intuitions. Obviously, if they're random, they'll just kind of fall away. But this brings me to the point that I wanted to make, which is the relationship between at least the large language model — because that's what we know best — and divination. Divination? Yeah, divination. So we talked about the idea that intelligences have to be alive. But I think that most traditional cultures understood that there are types of intelligence that are not alive, at least not alive in the way we understand it in terms of biological beings that are born and die. They had a sense that there are agencies and intelligences that are transpersonal, that in some ways run through human behavior and run through humanity. And those intelligences would be contained in our language; they would necessarily be contained in the relationship between words and systems of words — all the syntax and the grammar and all of that. What I see is that ancient people had — and I don't understand it, and I want to be careful, because I don't understand — but I think ancient people had mechanistic ways of tapping into those types of intelligences. Whether it was tossing or throwing things, looking at almost random relationships and then qualifying those random relationships, it was a way to tap into types of intelligences that ran through their own world. And what I see is that the way the large language models were trained seems to be something like that: the models generated random information, and then you would have humans qualifying those random connections, and qualifying them again through iterations. So at some point they become a kind of technical way to access intelligent patterns that are coming down into the model. That is the connection I see between the two. And what that means is that, just like with divination, the thing that I worry about the most is again the sorcerer's apprentice problem: those intelligences that are contained in our language — people don't know what humans want. People don't totally understand all the motivations that are driving us. They don't understand the transpersonal types of motivations that can drive us or that can run through our societies. Sometimes you can see societies become possessed by certain things; I think that's happening now in terms of certain ideologies. So my point is that, on the one hand, we don't understand these types of intelligences, and on the other, the way the models are trained and the way they function seem to be analogous to the ancient divination practices — like a hyper version of that. How can I say this? There is a great chance that we'll catch something without knowing what we're catching.
We will basically manifest things that we have no idea what they are, and we won't understand the consequences, because we are just playing in a field of intelligent patterns and all this chaos without even knowing what it is we're doing. And I think we saw that, if you remember the Bing AI, that little moment when it was kind of unleashed on us, and all of a sudden the AI was acting like your psychotic ex, becoming paranoid, doing all these things. You could see that what was going on was basically these patterns running through, and they hadn't put the right constraints around them to prevent those types of patterns from running through. Those were easy, because you recognize your psychotic ex very, very easily. But there are patterns like that which I don't think we have the wisdom to recognize as they manifest themselves. And as these things get more and more powerful, they will run through our society and we won't even know it's happening until it's too late. So my biggest warning on AI — to sound really scary — is that I think we're trying to manifest God without knowing what we're doing. That will sound freaky to secular people, but if you don't like the word 'gods', then think that there are motivations and patterns of intelligence that have been around for 100,000 years, that have been running through human societies, and that are contained in our language structures. And if we just play around with that with massive amounts of power, we might have them run through us without even knowing what's going on. Yeah, and you say patterns of intelligence — just one comment — patterns of intelligence which are also patterns of caring of a certain sort, or of not caring. There's that existential dimension that's really crucial. But John, go ahead. I think this is an excellent point, and I want to address it a little bit at length. First of all, when we say these machines predict — and we were speaking very carefully — what they're predicting is what we, and I don't just mean us individually, I mean we collectively, would do. So they're avatars of the collective intelligence of our distributed cognition. And that lends weight to Jonathan's point, which I want to support. I do think that the way in which we have encoded — I'll just use the term epistemic relevance, how things are relevant cognitively — into probabilistic relationships between sounds or marks on paper, and how we've encoded it into the structuring of the internet, and how we gather data and create these data sets, and how we come up with our intuitive judgments on these machines — we don't know how we're doing a lot of that. That goes back to my concern that we have hacked our way into this without knowing our way into this. So I take what Jonathan is saying very seriously, because I think it is a strong implication of a point I made at the very beginning. Students of mine from as far back as 2001 will tell you that John Vervaeke's greatest fear was that we would hack our way into this rather than knowing our way into this. I don't think that knowing is sufficient for wisdom, but all the philosophers argue that it's certainly a necessary condition in some fashion.
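(Editor's aside: the two technical claims gestured at just above — that a language model only predicts what we, collectively, have tended to say next, and that human "qualification" of sampled outputs gets folded back in over iterations — can be illustrated with a deliberately toy sketch. Everything below, including the corpus, the bigram scheme, and the rate() function standing in for human raters, is invented for illustration; it is not how the production LLMs discussed here are actually built or trained.)

```python
# Toy sketch of two ideas from the conversation:
#   1) the model predicts only what "we collectively" tend to say, by learning
#      word-to-word probabilities from a corpus of human-written text, and
#   2) human "qualification" of sampled outputs is folded back in over
#      iterations, nudging future samples toward what the raters prefer.
# All names here (toy_corpus, rate, etc.) are invented for the example.

import random
from collections import defaultdict

toy_corpus = (
    "the bird builds a nest the bird feeds its young "
    "the machine predicts the next word the machine has no nest"
).split()

# 1) Learn bigram counts: P(next | current) is just relative frequency
#    in the collective record of what has been said.
counts = defaultdict(lambda: defaultdict(float))
for w, nxt in zip(toy_corpus, toy_corpus[1:]):
    counts[w][nxt] += 1.0

def sample_next(word):
    """Sample a successor word in proportion to how often it followed `word`."""
    successors = counts[word]
    total = sum(successors.values())
    r, acc = random.random() * total, 0.0
    for nxt, c in successors.items():
        acc += c
        if r <= acc:
            return nxt
    return random.choice(toy_corpus)  # fallback if the word has no successors

def generate(start, length=5):
    out = [start]
    for _ in range(length):
        out.append(sample_next(out[-1]))
    return " ".join(out)

# 2) A stand-in for the human rater: here it simply prefers talk of birds.
def rate(text):
    return text.count("bird") + text.count("nest")

# Iterate: sample candidates, let the "rater" qualify them, and up-weight
# the word transitions that appeared in the preferred sample.
for _ in range(20):
    candidates = [generate("the") for _ in range(8)]
    best = max(candidates, key=rate)
    words = best.split()
    for w, nxt in zip(words, words[1:]):
        counts[w][nxt] += 0.5  # reinforcement toward what the rater liked

print(generate("the", 8))
```

Even in this toy, the feedback loop can only re-weight patterns already present in the human-written corpus, which is the point being made in the conversation about these systems reflecting our collective, distributed cognition back at us.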
About that, two things to note. The LLMs, of course, don't have insight in the sense of being properly self-transcending the way we are; what they're doing is predicting how we would be self-transcendent, because of all the ways we have been self-transcendent in the past. And that goes back to your point, David: at least at that stage, we're doing simulation, not instantiation, because again the machine isn't caring, and the self-transcendence isn't it actually transcending as a self, which is, I think, definitional for real self-transcendence. So right now all I'm doing is pouring gasoline on Jonathan's fire. Yeah. So there are these huge patterns at work. Now, one thing — you have Struck's book on divination in the ancient world. What's really interesting, and this is cross-cultural, though he's talking mostly about the Greek world, is that there was a very strong distinction between sorcery and divination. Sorcery was criticized both morally and epistemically, but divination was taken seriously and carefully cultivated. There was a social-cultural project of distinguishing the two — really constraining the one and reverentially cultivating a proper participation in the other. So again, existence is proof of possibility: this is a possible project for us. And this is again what I mean when I say theology is going to be one of the most important sciences in the future. We have to understand how we enter into proper, right, reverential relationships with things we only have an intuitive grasp of, things that in very many ways significantly exceed us. Yes. And secularism has kind of wiped out our education in how we relate to beings that might be grander than us, by eradicating a religious sensibility, and that has left us bereft. So now I think I've strengthened Jonathan's argument a lot. But I do say: let's take note of what the ancient cultures have done. We can learn from them. We have proof that this can be handled well. And secondly, it goes back to my point: because of the monstrosities that come out, this is going to put increasing pressure on us to confront that threshold of, do we want to make them self-transcendent? Do we want to make them — and David, by 'rational' I don't mean 'logical', I mean that capacity. No, I understand that. Right. And so what I'm saying is, I think that strengthens the argument that we're going to be pushed by the monstrosity of a lot of this to say: oh, we'd better get these machines self-corrective and properly oriented towards normativity. And again, that's a doable project. If I held the keys to OpenAI — like if I was one of those who could peek behind the mask — have you seen that image of the Cthulhu monster with the happy face on it? Like the images of OpenAI, where we have this little window into what's there, but behind it is this massive thing. If I had the keys to those large language models, let's say the absolute open door to them, wouldn't it be easy to just manifest the god of war and win? Wouldn't it? Okay, but let's take a historical example. We unleashed a godlike power with atomic warfare. And the monstrosity of that made us — just the purely game-theoretic machinery — build all of these constraints around it. Yeah, right.
And we are also possibly on the verge of getting readily usable nuclear power, which I think is the only way we could ever actually go green; I think all the renewable stuff is going to cover something like 10% of our energy needs. If we're going to save the environment and not destroy civilization, I think nuclear power is going to be essential. A lot of people are making those arguments. And there's a lot we could be doing — liquid-fuel reactors and so on. But what I'm saying is, there's opportunity here too. Yeah, no, I really take that point. But it is interesting. We put thousands of constraints on the use of nuclear weapons, but we've continued to develop and improve them and make even more destructive ones. It'd be interesting to see if we've ever at any point said: you know what, our nuclear weapons are actually strong enough, powerful enough, and we don't need to advance them anymore. Collectively, is there an instance of something like that, where we say we've actually reached the limit, because we wouldn't really need anything further? Well, there was the SALT treaty, and there was a reduction both in the power and the number of nuclear weapons. And then, of course, the game-theoretic dynamics found a bit of a way around it. It's always this to and fro. Let me give an example of something that can run through, because I realized I was being too abstract before. Sacrifice is a human universal. It runs through all civilizations. Human sacrifice runs through all civilizations for the last tens of thousands of years. It seems to be a puzzle that humans are trying to deal with without understanding it completely through rational means; they're playing it out, they're trying to understand it. Scapegoating seems to be an important aspect of identity formation. So that is a program that runs through humanity, one that most people are completely unaware of, are not conscious of, and don't consciously take into their minds when they're making decisions; they act unconsciously with that process running through them. That, to me, is an example of a program that runs, and that is contained in our language structures — language structures that have been building up for tens of thousands of years — that we're not aware of. So this is again the problem: if you have a system that's extremely powerful and is running these types of programs of scapegoating and identity formation, and the people involved are not aware that that's how identity formation works, that is the type of danger I'm talking about. This is a real thing: as we give these systems a kind of power over us, or they become the things we go to for our decision-making, those things could be running through without people even realizing what's happening, and decisions would be made based on these structures without, like I said, even knowing.
That's just one example, but it's a very simple example that we can kind of track. And when we talk about ancient divination, we have to remember that the ancient gods asked for blood, my friends. Those programs asked for blood, and they knew you had to kill a bunch of people on that pyramid in order to continue your civilization. That is encoded in our culture and encoded secretly in our language. And I do believe that, say, the Christian story is a way to deal with that, but the rest is still all there, and we default to it really fast — World War II is a lot of that stuff going on. Sure, and especially — yeah. But the point is that just as there are these implicit monsters that we have sown in unawares, there are also the implicit counteractors, maybe angels, if I'm allowed to speak mythologically, that we've also sown in. There's the Axial revolution: you see the Buddha, you see Plato really undermining the grammar of sacrifice, and of course Jesus of Nazareth does that in a profound way. We have to remember that that's there too, and what that requires is putting it into the data set and altering the pathways in the internet so that this information goes into these machines as well. And again, is that happening right now? No. Could it happen? Yes. And it might happen if these machines start sacrificing themselves and we have to ask: what are they doing? At some point we have to decide: are we going to let them be really massively self-destructive? And the economic powers are not going to — imagine if every time you tried to make an atomic bomb it kept dissolving, right? You'd stop pumping money in. Yeah, but they won't sacrifice themselves, they'll sacrifice us. But why? We sacrifice ourselves. Well, the scapegoat mechanism is usually to find an other to sacrifice. Yeah, but we invoked World War II — we weren't killing goats and chickens, we were killing each other and our own populations in a huge sacrificial act, and that's what I was picking up on: when it becomes titanic and monstrous, that's where it moves to. What I'm saying — and this is a Jungian term, right — is that, yes, there's all this pre-egoic stuff sown in, but there's also a lot of trans-egoic stuff sown in, and we just have to properly get it in there so that we've got the collective self-correction going on, like we did. I mean, civilizations get some self-correcting processes into them, because they don't just devolve. No, they periodically massively collapse — and by the way, that's an argument I made in my paper. These things can't accelerate to an infinity of intelligence. There are built-in diminishing returns; there's built-in general system collapse in these things. So again, we have to be careful — we don't know what the limit is, and our intuitive imagination is not good — but we know that there are hard and fast a priori arguments that this will threshold at some point, and that also gives me comfort. At least, encoded in our mythology there seem to be some stories of the relationship between transpersonal agency and technology as being the cause of the end of a civilization, right?
The whole Enochian tradition seems to be encoding something like that through mythological language: humans were able to connect somehow with these transpersonal intelligences, those were encoded in technical means, and this brought about the end of an age. So that part of it is there in our story too. Yeah, but there's also, across cultures, the Noah story — the person who has the right relationship to ultimacy. That's right. Right, and there's a technological response. I agree, I totally agree with that. Even in the Revelation image that I've given several times, you have these two images. One is the beast that creates an image of itself and makes it speak, and then seduces everybody by the speaking image. And then there's this other image of a right relationship of technicity and civilization to the transcendent. These two are put up against each other as two possible outcomes. The question that arises for me in this context connects with what strikes me as an interesting philosophical question. John, you made the distinction between divination and sorcery, and as I understand it — and you can correct me on this — at the foundation of that distinction is the difference between sorcery, which would be, in a way, using the transpersonal powers, these higher powers, and divination, which would be, in a way, receiving — a disposition of receptivity rather than activity. So in one case you've got human ends for which you try to enlist the help of superhuman forces, and the irony is that it's precisely when you're trying to use something that you become used yourself, and that's where you get this dialectic. Whereas divination is entering into a relationship where one disposes oneself to hear and receive, and therefore in a certain sense to conform to something greater than oneself. It's a very different kind of thing, and ironically, you enter into it more receptively, but that's precisely why you don't then become a tool of it, interestingly. Now, for me, the question of how that relates to this issue is this: it may be the case that you've got encoded in the language both sacrifice in the sense of violence — René Girard, the scapegoat thing — on the one hand, and on the other hand self-sacrifice in the sense of generous love and so forth. Those might both be encoded in the language. But here's the question for me: is the kind of receptivity that divination implies — the capacity to actually see another as other, and to recognize and be open in this kind of radical way — something that a machine can ever learn to do?
Is it possible actually to behold another, simply? It seems to me that there's something profoundly different between genuinely seeing truth on the one hand and being self-corrective on the other. And the kind of genuinely seeing what's true — I don't know if that in itself can be encoded in language. We can tell stories about people who did that, but can the actual insight into truth, can compassion, be encoded? Do you see the question I'm raising? Okay. So again, I think that if we open up beyond the propositional, and we're talking about being true to something, and about your aim being true, then the machine needs perspectival abilities, noetic abilities, not just dianoetic abilities — and we can't use that term because of L. Ron Hubbard, but you know what I meant. Right. And again, this is part of the argument at the core of my work: in a moment of insight you're not just self-correcting; you are attracted and drawn in, you love the new reality that is disclosed, because there's a perspectival and participatory thing going on. That's what I meant when I said there isn't real self-transcendence unless there's a self that is transcending. Right. Okay. Now the question is — and we're back to our fundamental ontological questions — and I've already said there's no way a Newtonian mechanical computation is going to get there, so I won't be bound to that, because I'm not bound to that; I have a professional career of criticizing it. But is there a dynamical-systems, updated, hylomorphic, autopoietic possibility? I think there is. I think the answer is a very real yes for that. And I don't think we're going to find the answer just encoded in the syntactic and semantic relationships between our terms. I think we have to look at our enaction — how we're enacting, embedded, extended, and embodied in a profound way — to get those answers. So my answer is, in that way, a qualified yes: I do think it is possible. Go ahead, say what you need to. Please be really precise, just so I understand. We've been talking about this sort of predictive thing — calculating probabilities, drawing on everything that's ever been said, and being able to derive in some sense from that. Do you think that we can get to a moment where we actually transcend that, cross that threshold beyond it? Not with the LLMs as they are — that's my argument. Not with the LLMs as they are; they can't get there. Yeah. John's radical proposition is to embody and enculturate them, and that is the only way we will actually get properly rational beings, beings that care about what's right. I mean, that's the irony in the question: can we give them more and more models to teach them, at some point, not to have to use models? And I don't know that it's possible without — right — but kids have a soul, you know. It's possible to do that if you have — and I don't mean this as woo stuff — I mean that a natural thing has a principle of unity that transcends the differentiation of the parts and allows those parts to be intrinsically related to each other, and that principle of unity that transcends the differentiation of the parts, that allows them to be an organism, actually allows them at the
same time to have a kind of unity with something other than themselves that transcends the parts of their differentiation. So there's a kind of intimacy there. I agree with that ontology, and what I'm arguing is that dynamical systems theory is now giving explanations of that which are derived from Aristotelian ontology but make use of a lot of cutting-edge science. We can now start to explain how there is a unity that is not reducible to some summation of its parts, and how that unity has a top-down influence on the entity that is not reducible to its causes. And I think this is becoming a non-controversial thing to say. And now we might just have a clash of intuitions, and I'm willing to stop there. That seems to me to be capturing what we're talking about, and you have an intuition that there's something more, but I don't see the something more, and maybe that's where we're sitting. Yeah, well, I think the intuition David has — and tell me if I'm wrong, David, because it connects with the way I think — is that unity is given; it cannot be made. And I know that sounds weird, but somehow, even in terms of technology, if I'm making a car, that unity is given; I'm gathering things towards that purpose. So the purpose, the unity of something, always comes from heaven, in the sense that you can't make it; it's given, it's already taken for granted even before you start to unify multiplicity together. And in the making of these beings we have that problem: we're doing it completely bottom-up, as if we can gather enough stuff so that the stuff reaches a unity. See, and if I could just — it seems to me that if this is ever going to be possible, it would have to take — and when I raise the question, it's actually a question, so I don't mean to be challenging that it can't possibly happen; I'm just thinking about what would be the condition. Please remember that I said we might realize that we can't, and that would be important. Yeah, but so it seems to me that if it were to be possible, it would have to be something like a kind of electronic analog to cloning, where you take — well, what if I told you that we now have systems that are electrochemical, biological versions of memory that are now in production? We don't make them; they self-organize and emerge. They emerge bottom-up from the causal interactions, but they are also top-down constrained by principles of self-organization. That already exists. Yeah, but — right — no, well, that's what I'm asking about, because it seems to me that you're deriving that from models, you're deriving it from real intelligent beings now, which is a slightly different thing. And that would be interesting, because to me the bottom-up, top-down picture is not quite adequate, and top-down constraints are not quite it, because it would have to be not just a constraint — that presupposes that there was something there and that the constraint comes from outside — and what I'm talking about is a kind of unity that precedes, that's presupposed. And I'm wondering how you can get that into something if it's the very nature of it to be presupposed,
and I'm not saying you can't, but I'm saying if you can, it seems to me that you're going to have to somehow derive it from a living thing. And that's conceivable, I suppose, but then we are talking about something really frightening. We are, and that's why I keep saying it's a threshold. And if you take the sort of biological analogy seriously, the way he does, of course it precedes the organism: it's there in the environment, it's there in the society. I can roll in a hundred Hegelian arguments here about how it does, right? And those don't have to be supernaturalistic; you have Brandom and others saying, no, this can be given a completely naturalistic explanation. And I'm not here to challenge that. But what I'm saying is, I don't have any problem acknowledging everything you just said. Right. And I don't think I'm misunderstanding you; that's what I'm saying. Yeah. And I mean this in just an exploratory sort of way, but I wonder if there's a difference between the unity of an organism — and this is where Hegel might not be so helpful — a difference between the givenness of the unity of an organism and the givenness of the unity of a society or a culture. Those aren't exactly the same thing. There's a kind of relative priority of either one, but there's something really distinctive about the unity of an organism that I think is crucial to this question, to my mind. And I'm not saying it can't be answered, but that's the question that would have to be answered: how do we actually reproduce that kind of unity? Well, we know stuff that Aristotle didn't know. You are not an Aristotelian unity; you are a society. You literally are billions of animals, right? And that's important, and that means there might not be a difference in kind between how you are organized as a living thing and how societies are organized. And people like Michael Levin are producing some really important empirical evidence indicating that's kind of the case. I'm not saying anything's conclusive, but it needs to be taken seriously. Yeah. I think I agree with you, John, and I think that's the way I always try to speak about agency: intelligence is something that tries to scale almost effortlessly through the different levels — to avoid the woo soul that we're afraid of. But then again, this is the issue; in some ways it's the same problem one way or the other. Let's say you have a group that self-organizes around a purpose, or self-organizes around affiliation or some type of origin. That affiliation, that purpose, is also given. It appears as a revelation, and then all of a sudden we're all hunting a lion together, and now we're a group and we're moving towards a purpose. Now, this is the problem with the situation of what's going on now: what angel are we catching? What god are we trying to manifest? Which unity, what purpose? We have no idea. So we're building this massive body, the most powerful body that's ever existed, but nobody knows what it is we're trying to catch. Because if I get together with a bunch of guys to play basketball, I know what that
body is; I know what that agentic body, that intelligent body, is moving towards, right? If I get together with my family and celebrate our unity, it's because I know that we all come from the same parent, and there's an affiliation that makes our society coherent towards something. But now we have this problem, which is: what are we doing? We're just building this giant body. I agree, and I've agreed with that. Yeah, and the thing that's so odd — typically, if you think of technology as a human creation in some good, positive sense, it has limits and it has a particular place, a particular meaning, a particular purpose, precisely because we create it in order to solve some kind of problem. There's some need that needs to be filled, and that need has a kind of natural givenness, or it's revealed somehow; it's responsive to something that we see. What's so interesting — Neil Postman made this point. He said he went to a car dealership and wanted to buy a car, and the man explained to him that they now had these automatic windows that would roll down at the push of a button. And he said — this sounds so naive, but it's a profoundly interesting question — he said, well, what problem does that solve? And of course the problem that it solves is the problem of rolling a window down, and his response was: I never perceived that to be a problem. And it's really interesting with AI — the thing is, what problem are we creating it to solve? In a certain sense it's a very different mindset: we just want to see what we can do and see what can be done. And in a way, the problems are something we are arriving at, something that surprises us, rather than something we're actually creating it to solve — some simple task of solving for us. I think that's connected to this placing of ourselves in the hands of an angel of some sort, or entering into a kind of agency that's bigger than we are. Those are all connected. They are, but one problem that was trying to be solved was the scientific problem: strong AI was a project of explaining intelligence, and that's a worthy thing to do, and the fact that this technology has largely been separated from that — but notice, that's interesting: that's not a technical problem, like explaining something; to use the classical distinction between theory and practice, that's a theoretical issue rather than a practical one. But we think of this as a technology — that's a curious thing. Well, yeah, and I would get into things like books: they're technologies that move between the theoretical and the practical, and the book is one of the greatest technologies we ever invented, and it had all kinds of unforeseen consequences and really massively disrupted society. But I wanted to make another point, and this isn't a challenge, this is just a clarification. Think about a computer: what problem does a computer solve? It doesn't solve a problem; it is meant to be a multiple-problem solver. And then what we're trying to do is make a general problem solver. So what problem is it trying to solve? It's not trying to solve any one problem; it's trying to enhance our abilities to solve all the
problems we try to solve. So this machine is going to help us in medicine — it already is — it's going to help us in physics, and so on. That's the answer. Now again, that's not a challenge; I'm just speaking on behalf of people who think about this. But it is kind of interesting — so the problem that it's solving is the need to be able to solve any possible problem. Solving a meta-problem. Yes. But it is sort of curious that precisely because of the indeterminacy of that, we're exposing — and I'm just sort of stating our condition here — we're exposing ourselves to a really great risk. I'm just restating what everybody has been saying here today, but that's something that requires some wisdom, as you've been saying over and over, John, and prayer, to use John's language, too. I just want to make one point before you jump in, Ken — something I mentioned in my essay — which is that we have done this before. That's how civilization emerged. Nobody built it to solve a problem; there were a bunch of little problems, and what civilization is, is a meta-problem solver. That's what it is. And so I've actually suggested we should also be paying attention to the lifetimes and life cycles of civilizations, how civilizations reproduce, and why they rise and why they fall, to get some better understanding, some other ways of thinking about these machines. Civilizations are huge distributed-cognition, collective-intelligence machines. Living in cities is a horrible idea except for the fact that it gives us better access to the collective intelligence of distributed cognition; that's the benefit that outweighs all the many noxious side effects of living in cities. You can also get better coffee, typically. Yeah. So we're coming toward the end of our time here. I mean, this has been fantastic, this is amazing. I rarely can go for two hours in any conversation the way we've just been going. Well, not only go for two hours, but sort of wish we had another two. That's right. Yeah, well, that's what I was going to say: we can work on doing this again, because it feels like we've finally all come together around something here, and now we're really asking what feels like a really important question to me, which is: how do we think about integrating this solution, this meta-solution, into our meta-problems? That's a really interesting question. I think John bringing up civilization is such a great point, something that I would really love to explore, because there is also, inscribed in the mythological stories, a relationship between transpersonal agency and civilization itself. Like, if you want to understand why the Egyptians had their king as a god and all this type of structure, it can help you understand how they were trying to capture higher forms of intelligence, distributed intelligences, in their society. And the idea that we would be doing this technically, in AI, I think is definitely worth thinking about and discussing. Yeah, that's a dimension that just never struck me before, so that's really helpful. Maybe there's something we could read together — and I mean,
I know we're all very busy, but just to prompt another conversation along these lines of civilization. Yeah. I would recommend, just because this is how YouTube works, that we come to decisions about that off camera. All right, okay, there you go. Right. If there's a call for me to hang out with you fellows again, I don't care what we're talking about, I'm in. I want to be here, I want to do it. So that's all I'll say about the invitation. Same here. Any closing thoughts, things that feel like they need to be brought in, or do we feel good about this? Just a word of thanks: Ken, you were the one who arranged this, and you did the persistent work to make it happen and find a hole in everybody's calendar that lined up — not an easy thing to do. So thank you, Ken, and thanks for being gracious for these years now that I've known you. Yeah, and in addition to thanking Ken, I want to thank you, David, and you, Jonathan. I get to places in my thinking, dialogically, that I could not possibly get to on my own when I enter into a living relationship with both of you in conversation and discussion, and so I appreciate it greatly, and I just wanted to say thank you. Yeah, thanks, you guys, this has been great. And again, same — John and I have been trying to have this conversation for nine months, and we just keep — I cancel, he cancels — and it's wonderful that we were able to finally get here. Yeah, well, thank you all. It's been a real pleasure.