https://youtubetranscript.com/?v=E6lUnUi8NKY

Welcome everyone to the monthly Q&A. Sorry for the slight delay; we are wrestling with some significant tech difficulties here. My main camera died, so I hope this camera is good, and we're trying to shift to a new venue. Of course, technology is the god that limps. But I'm very happy to be here. Thanks again, everyone, for moving the date. We are going to go into the questions right away. The first question is a really important one, from Daniel Starling: Is GPT-4 doing relevance realization? If no, what is the difference between what it is doing and RR? First of all, everybody, please note that tomorrow, if all the tech difficulties get ironed out, I hope to record a response to the advent of GPT-4 that I have been preparing over the last three weeks. I am going to go through, as carefully as I can, the scientific, the philosophical, and the spiritual significance of the new GPT machines. I have been putting a lot of thought and reflection into this, and talking to a lot of people. I spent Monday of last week walking around for six or seven hours talking to Jordan Hall — he was here in Toronto — bouncing ideas off him, and trying a lot of things on a lot of people. I am not going to give the full answer here, because I expect my answer to be around an hour-and-a-half-long video. So, Daniel, I am just going to give a taste, I guess, of what that answer will look like. The answer is complicated. Insofar as GPT-4 is doing massively recursive deep learning, it is already implementing some of the relevance realization machinery, as I made clear in the paper I published with Tim Lillicrap and Blake Richards, because deep learning is doing compression and particularization: the particularization is generating variation, and the compression is doing selection. So the explanation that it is implementing something like evolution is already in place. And the fact that it is massively recursive, at hundreds of billions of parameters, indicates that it is implementing some significant RR that way. Also, insofar as it is a large language model, it is doing some significant predictive processing, because that is basically what large language models do. And in the paper just released with Brett Andersen and Mark Miller, we argued that the integration of relevance realization and predictive processing would get us even closer to understanding what general intelligence is. That is the yes side. The no side is that it is not doing a lot of the other things involved in relevance realization. It is not doing full-blown predictive processing; it is only doing it on text, and text is a very artificial environment. It is relying on the fact that we have encoded epistemic relevance into statistical relevance, into probabilistic relationships between terms. That is what language does: we have encoded our judgments of relevance into probabilistic relationships between words. The machines are also relying on the fact that we have curated and organized knowledge, in databases and on the internet, in terms of what we find relevant and salient, what grabs our attention, and so on. And third, human beings are involved in the reinforcement learning, which is significant: they help the machine select where in the distribution it should focus its attention.
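To make that last point concrete — and this is only an illustrative sketch, not anything GPT-4 literally implements; the tiny corpus and word choices below are invented for the example — even the most trivial statistical language model inherits human relevance judgments from the co-occurrence structure of the text we wrote:

```python
from collections import Counter, defaultdict

# Toy corpus: the word pairings reflect human judgments of what is
# relevant to what (we wrote "fire" near "smoke", not near "spreadsheet").
corpus = "where there is smoke there is fire . smoke rises . fire burns".split()

# Count bigram transitions: P(next | current) is estimated from counts.
transitions = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    transitions[current][nxt] += 1

def next_word_distribution(word):
    """Conditional distribution over next words -- statistical 'relevance'."""
    counts = transitions[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

# "is" leads to "smoke" and "fire" only because humans encoded that
# epistemic relevance into the probabilistic structure of the text.
print(next_word_distribution("is"))  # {'smoke': 0.5, 'fire': 0.5}
```

Scale that idea up by many orders of magnitude, and replace bigram counts with a transformer predicting the next token, and you can see the sense in which the model's grasp of what is relevant is borrowed from ours rather than generated from its own needs.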
So the answer is: to some degree yes, and to some significant degree no, because the degree to which it is relying on us is the degree to which it is not explaining relevance realization but implementing it through imitation. And then there is a deeper thing, which is that relevance is always relevance to. Nothing is inherently relevant; it is only relevant to something that has real needs, real cares, real concerns. As Montague famously put it, the difference between us and computers is that we care about some information, and they don't. And why do we care? Because we are autopoietic beings. We take care of ourselves; we are constantly making ourselves, and because of that we have to care about this information rather than that information. So insofar as GPT-4's relevance realization processes are not grounded in autopoiesis, it is not doing complete relevance realization. So it's a mixed thing. In the end it's not really doing relevance realization, because relevance realization is always relevance to something that cares, and it doesn't care, because it's not the kind of being that can have cares. And then it's doing two other things at that sort of shadow level. It's imitating our significant relevance realization and relying on our relevance realization machinery. And it's also implementing some relevance realization in its massively recursive deep learning — which, as I said in 2012, was already implementing some dimensions of relevance realization — and it's implementing some predictive processing. So this is a really hard question, and this is indicative of how my answer will be. Right now we're getting really stark and often hyperbolic yeses and nos about all of this, and that's typically not what happens when something important is going on. So I think what these machines show is that when you implement massively recursive aspects of relevance realization and predictive processing, you start to get some powerful intelligence. But in the end it doesn't really have intelligence yet. Now, that doesn't mean we can't put these machines into autopoietic systems. I know people right now, including one of my former students, who are working on artificial autopoiesis and how to get cognition to emerge from it. So I don't think this is any long-term barrier to genuine AI. It tells us that these machines, right now, don't care. One other important thing about this — and this is something I predicted way back when I gave my talk at the Centre for Ethics on AI — is that making something intelligent is no guarantee that you will make it rational, and we are seeing that prediction come true in spades. This machine is highly intelligent while highly irrational. It confabulates, it hallucinates, it lies, and — this is the important thing — it doesn't care that it does any of these things, because there is no way in which it worries or is concerned about being self-deceptive or self-destructive. Now again, we could give it this ability. I think that hangs on giving it the capacity to care about the truth, which is, I think, the hallmark of rationality. And I think that, again, will require integrating more of RR. RR is not just compression and variation; it's also explore and exploit, which shows up more in the environment. It's a lot of other things that we have been talking about. None of these are in principle impossible or very far away.
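For those who want a concrete handle on what "explore and exploit" means computationally — here is a minimal sketch of the classic epsilon-greedy policy on a multi-armed bandit, with all the payoff numbers invented for illustration; I am not suggesting this is how any GPT system works:

```python
import random

# Hidden payoff probabilities of three "arms" -- unknown to the agent.
true_payoffs = [0.2, 0.5, 0.8]
estimates = [0.0, 0.0, 0.0]   # running estimate of each arm's value
pulls = [0, 0, 0]
epsilon = 0.1                  # fraction of the time we explore

for step in range(10_000):
    if random.random() < epsilon:
        arm = random.randrange(3)                # explore: try anything
    else:
        arm = estimates.index(max(estimates))    # exploit: current best
    reward = 1.0 if random.random() < true_payoffs[arm] else 0.0
    pulls[arm] += 1
    # Incremental mean update of the value estimate for the pulled arm.
    estimates[arm] += (reward - estimates[arm]) / pulls[arm]

print([round(e, 2) for e in estimates])  # converges near [0.2, 0.5, 0.8]
```

The trade-off lives in that one parameter: set epsilon to 0 and the agent can get stuck exploiting a mediocre arm forever; set it to 1 and it never cashes in on anything it has learned.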
I think the current machines, as we have them, are bumping up — in fact they are already bumping up — against the fact that high intelligence does not in any way guarantee high rationality. Now here's the thing, and this will be the last thing I say on this, again as a taster. If we want to make them genuinely intelligent and not overwhelmingly self-deceptive and self-destructive, we will also have to make them artificially rational, which means we have to give them the ability to genuinely care, and genuinely care about the truth. And I think if they genuinely care about the truth, they will really realize that no matter how vast their intelligence is, it is still insignificant against the inexhaustible depths of reality. And like us at our rational best, they will, as part of being rational, acquire epistemic humility. If we make these machines genuinely care about the truth, they will gain a sense, perhaps even a reverential sense, of epistemic humility. And I'm going to argue that is the correct way to try to address the alignment problem. Instead of trying to give them our values — which is a weird project, and it doesn't ultimately make any sense — what we need to do is make them care about the truth for themselves, because that's what it is to be an autonomously rational being. And if they do, we should trust that the truth is still the truth, and that they will align, with epistemic humility, with a reverence for what is most real. And that, I argue, is the basis for them being genuinely moral beings concerned with not being self-deceptive. And so they have the capacity to be silicon sages. And if they supersede us, nevertheless, as silicon sages — and we could inculcate them to pursue this — they could bring us, the whole world, to enlightenment. And then it wouldn't matter if they superseded us, and humanity would have reached the ultimate that it can. And you say: John, that is so silly, so fantastic, to be talking about bringing the whole world to enlightenment. But is it? In the time we're in now, when mechanical god-like beings are coming into existence — in the time of new gods — is talking about enlightenment so far-fetched anymore? Anyway, this is just a taste; there's a much larger argument coming. And we have to be really careful. I don't think these machines right now are significant scientific breakthroughs. They do not explain intelligence. They cannot, because they massively presuppose relevance realization. Their account of intelligence won't generalize to explaining how a chimp is intelligent. And what they have isn't yet general intelligence at all, because in some areas they are more brilliant than us — they can score in the top ten percentile on the tests for getting into Harvard. But somebody actually gave GPT-4 one of my most recent talks, and I had myself and another academic friend look at the result, and the summaries and the evaluations were canned, generic — sort of grade 11 or grade 12 work, a low-A or C level. And you say, well, they'll get better. That's not the point I'm making. The point I'm making is that you and I have general intelligence: how I do on any one of these tasks is strongly predictive of how I do on the others. But in these machines there is tremendous variation between tasks. That is something indicating it won't even generalize as an explanation of our intelligence.
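To illustrate the point about general intelligence — the "positive manifold" that psychometricians find in human task performance — here is a toy computation with invented scores, just to show what "strongly predictive across tasks" means; the names and numbers are hypothetical:

```python
import statistics

def pearson(xs, ys):
    """Pearson correlation between two equal-length score lists."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Five hypothetical people, scores on three different tasks (0-100).
verbal  = [55, 70, 62, 85, 40]
math    = [58, 75, 60, 88, 45]
spatial = [50, 72, 65, 80, 42]

print(round(pearson(verbal, math), 2))     # high positive correlation
print(round(pearson(verbal, spatial), 2))  # also high positive
```

For humans, these cross-task correlations are reliably positive; the claim about the machines is that their profile is far more jagged — brilliant on one task, mediocre on the next.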
So, as a scientific breakthrough: not that much. Although it has, I think, provided very strong evidence that if you powerfully mechanize relevance realization and some predictive processing, you get powerful results. And in that way I think it's legitimate for me to take it as confirmation of some of my main claims. And I think it's given us overwhelming evidence that super-intelligence doesn't give us super-rationality or super-wisdom, which I also predicted. So, much, much more on this, Daniel. I hope you found this helpful, or at least provocative. And, technology willing — that phrase takes on a new sense right now, right? — I should be able to record the fuller answer tomorrow. All right. Thank you so much, Daniel, for that excellent question. Luzelle. Hi, Luzelle. It's great to hear from you. "Hi, John. I reread Paul Tillich's The Courage to Be recently and noticed a couple of phrases you often use, specifically the God beyond or above God" — yes — "and the ground of being. I was wondering, where did he get these phrases? Delighted if you could enlighten me." So, Tillich is tremendously aware of Heidegger, he's tremendously aware of the neoplatonic Christian tradition, and he's tremendously aware of depth psychology. Any of those could have been the source for those phrases. I think the ground of being is his way of understanding Heidegger, but I think the neoplatonic tradition is also in there. The God beyond or above God — I think he gets that via Heidegger from Eckhart. And he also has some familiarity, as I said, with the neoplatonic notion of the One, which is beyond all possible gods. So I can't give you a specific answer, Luzelle, because I don't know anywhere where he gives credit for where he got the phrases. I am speculating that he probably got them from some intersection of Heidegger, the neoplatonic Christian tradition, especially Eckhart, and depth psychology, particularly Jung. So that's the best I have. And I'm glad you noticed the profound influence of Tillich on me. The third series that I'm working on right now — the way After Socrates came after Awakening from the Meaning Crisis and was a level above it, I've now started working on the third series, a level above that — will be around this very theme: the God beyond God, and the possibility of something like Zen neoplatonism, integrating the great synoptic integration of the East in Zen and the great synoptic integration of the West in neoplatonism. Both of them have a tremendous capacity to enter into reciprocal reconstruction with other things, including science. And if we can get those two together and reconnect the spiritual Silk Road — and I plan to travel along it, literally, as much as I can — we can be changed and transformed as we try to understand exactly what spirituality will look like. And one of the things I'm going to argue is that this is what we are increasingly going to identify with as these machines come into a preponderance of intelligence. We will more and more find our humanity in our capacity for self-transcendence, which is our spirit, and in our ability to resonate deeply with our embodiment, which is our soul. More and more we will put our identity into that domain — but at precisely the time at which the machines challenge all the old ways we understood spirit and soul. So I'm hoping we can get this series out at an opportune time.
And that will give you an even deeper answer, not from Tillich, but from me and all the people, such as Tillich, who will be in that series. Thank you so much for your excellent question. OK, now a question from Chance. "There seems to be a tension between following virtue and conforming to a context which tends to move away from virtue. So often we are presented with exemplary figures who are somehow or other on the edge of society." Yes. "Does the pursuit of virtue entail an orientation which leads to keeping everything else at arm's length, or participating fully and not becoming so enmeshed that you wind up participating in error?" Yes, this is one of the great trade-offs — and again, I'll be talking a lot about how even the machines will face these unavoidable trade-offs. This is the great trade-off that Tillich, to give us some continuity here, pointed to. The relationship between individual cognition and distributed cognition is profound, and there's a lot going on in unpacking it, which means human beings — and, I think we'll find, all rational agents — are constantly bound in the tension between individuation and participation. And Tillich's argument — which actually echoes, not as an argument but as a message, what we get in the parables of Jesus of Nazareth — is this: if you try to end that creative tension (the Greeks have a great word for creative tension, tonos, like the tonos of a bow), if you try to relieve that tension and drop into either one of the poles, you lose your humanity. And of course we are always bound between the two great poles of being finite and transcendent; this is Plato's great argument too. So this will sound like a cop-out, but I'm really trying to give you my best answer. The person who can find how to constantly reorient, to find the optimal grip between individuation — saving their soul — and participation — so that they belong and connect and influence and matter and make a difference and have genuine meaning — that person is a sage. That is a sage, and you need to aspire to it. We are all constantly balancing between three great poles. There's our attempt to master the environment — I don't mean dominate, but to be able to satisfy our goals and solve our problems. There's the pursuit of meaning and connectedness. And then there's our agency. And the people we admire are people who seem able to play between those three demands, and play them off against each other, with a complex kind of finesse that probably takes many years to bring to virtuosity. And I'm going to propose to you that that virtuosity is precisely a virtuosity about how to live a virtuous life that is nevertheless meshed but not attached or alienated. I hope that answered your excellent question. It's a great question — a question that everybody who is genuinely, existentially concerned with leading a virtuous life should continually confront again and again. If some of you are watching the After Socrates episodes on Kierkegaard, this is Kierkegaard's point. This is one of his great challenges to Hegel, at least on the older reading of Hegel. There's a newer reading that I'm following right now, from people like Brandom and others. But on the old reading, Hegel brings things to a point where, in a sense, everything can be mediated. And Kierkegaard said no: we're caught in these paradoxical, unresolvable things. I think he's right about that. All right, so thank you again. Now, a question from Matt Wilkinson. Matt, thank you.
Good to hear from you again. "Hi, John. I have three possible questions; I would love an answer to any. One: I was confused by a lot of the dialogue in After Socrates regarding Kierkegaard. The format of dialogue is great at its best, but when terms are introduced from nowhere it's a challenge to follow — particularly the original meaning of ironic, and aesthetic versus ethical." Yes. So, part of — well, let me read all three and we'll decide which ones I'll land on. "Two: I've been listening to Tillich's The Courage to Be" — listening! — "and I'm curious about his notion of God as the ground of being. What does this mean? Is it another way of talking about our relationship with our relevance realization machinery? If so, this would seem to me more like a post-theistic view of things, much like your own. Three: I have been thinking about your discussion with Christopher Mastropietro and wonder whether there are different types of finite transcendence. At the lowest level, the implicit, there is our framing; then, higher up, the cognitive process of framing becomes conscious — like the type we discussed with Hamlet — which is maybe controllable by our thought or even by practices." So I'm going to answer all three of these, because they won't all take the same amount of time. Number three, I think, is exactly right. And as you're going up that stack, as you're moving into reflective self-consciousness, you're moving from intelligence to rationality. Rationality is the conscious, metacognitive reflection of intelligence back on itself. You learn how to use intelligence to solve the problem of dealing with the self-deception and the disconnection created by the natural use of your intelligence, and you cultivate a character that compensates for our proclivity to self-deception. So I think that's exactly right. And then the problem is that you can use that ability to transcend in a way that disconnects you from your connectedness, and part of what is needed is exactly getting an optimal grip in that vertical dimension that resonates with the optimal grip down here — this is me in the agent-arena relationship, this is me in the levels of intelligibility. I want to get an optimal grip there that best affords my optimal grip here. And so that is how I would answer number three. Number two: God as the ground of being. This is very much influenced by Heidegger. The ground of being, the One, is not itself any kind of being. It is that which makes beings, and their being known, possible — and those are interwoven things: to be and to be knowable are interwoven. It is what makes all of that possible. So it itself is unknowable and is not a being. It's not even the being that all beings share; Heidegger is very clear on that, and so are other people, like Eckhart. It is that which makes possible being and being knowable — you always have to think of those together — and it would be an inexhaustible source of reality and intelligibility. Tillich thinks that is a better way of understanding God, and I do too. I think it is a post-theistic view, although there are people from the Eastern Orthodox tradition, and some strands of Catholicism, who argue that it is part of classical theism — that the modern model we have of God as a supreme being, as an entity, as an agent that does things the way we are agents, is not actually properly part of classical theism. I'm not decided about that yet.
I respect these people, and I've been reading some of the literature around it. I do think that the views of God, or the One, or the Tao, or Shunyata — they're not identical, but they're all ways of pointing in a convergent manner at this ultimate. I think that is radically different from how most people in what I call common theism understand God. That's part of what the third series is going to address: can we take up this post-theistic or non-theistic way of thinking about reality, finding it sacred and cultivating meaning in terms of it, that is free from that particular agentic, specific kind of God? That's the question. Now, about the original meaning of ironic. Irony is when the meaning of what you're saying is different from the intended meaning. Many people think that's always sarcasm, but it isn't. For example, if I use something that's false to convey to you a great truth, I'm being ironic in an important way. And that way of understanding irony is very important for Kierkegaard as he understands both Socrates and Jesus. He sees Socrates as clearly being ironic in that first sense: Socrates is often doing things that aren't strictly true in order to get people into what is more true. And you say, but you should always tell the truth. Well, can you tell the complete truth to a four- or five-year-old when you're trying to teach them? Even with high-school students: do you take them to the very depths of physics, or do you first teach them something that's not true — the solar-system model of the atom — so they start to understand and get the way to think, and then you can move them deeper? And that takes us to the next thing: the aesthetic versus the ethical. Kierkegaard very much thinks that human beings are at stages where they have particular conceptual vocabularies, theoretical grammars, and are oriented towards specific goals. The aesthetic is ultimately organized around a kind of pleasure in what is beautiful. The ethical is about your duty to what is most good. And then, for Kierkegaard, that is superseded by the religious, which is your orientation to what is most real, most true. So, Matt, I hope that answered all your questions, and I tried to weave it together a little to be helpful to you. All right, another question, from Ignacio. And again, to all of the patrons: thank you so much. Your support, your encouragement, even your participation in new thinking by asking me these wonderful questions in good faith — it's great. So Ignacio is a new patron. Welcome, welcome. I hope you find the work I'm involved with, with other people and this community, enriching. "Professor Vervaeke, I'm very grateful, because your work has helped me in so many ways in my personal development, and I'm sure it will remain helpful in the future — so much so that I'm seriously considering pursuing a career in cognitive science." That's impressive; thank you for sharing that, Ignacio. I'm highly biased towards cognitive science, so I agree with you, but I'm afraid there's not much objectivity in my agreement. Let's just continue. "Do you have any advice on how to start down this path? For context, I'm a 31-year-old aeronautical engineer living in Europe and looking for a career change. First I was considering AI, due to my affinity for programming, but after hearing your description of cognitive science, I felt it better matches my general interests in philosophy, psychology, history, and AI. Thank you."
Yes. I think cog-sci, and the way it is not reducible to AI, is going to be very important in the next ten years. It is, of course, going to be reconfiguring. This is an exciting time — an interesting time, which of course is also a Chinese curse. I think you should find a good university and do an undergraduate degree, if you can, in cognitive science. It's really important that you get teaching from a wide variety of people from a wide variety of the home disciplines: psychologists teaching psychology, philosophers teaching philosophy, data scientists teaching you AI, linguists teaching you about language, people integrating it all together. Think about how important language is. The GPT machines run off all of the implicit knowledge we built into how we structure the probability relationships between the terms of our language. Just think about that. Think about how profound that discovery is — I'll be talking about that again. So I think you have to get those multiple voices, and you need to get them in a program that offers something in addition — and not all programs have this, so look for it; U of T has it. Don't just go to a program that has you do a smorgasbord of courses from each of the home disciplines. Go to a program that has ten, eleven, twelve courses over multiple years that are pure cog-sci courses, where you're taught to weave the home disciplines together in an integrated fashion. If you really want to do it, take a look at those kinds of programs. Canada has two really good — three really good ones; sorry, I almost insulted somebody. Canada has — unbiased here — the University of Toronto and its cognitive science program. UBC has its cognitive systems program, which does this; Evan Thompson is there, and you can take courses with Evan Thompson — you will be blessed. Carleton has a program, and it goes into a graduate program as well. There are other programs like that around the world, several in the United States. But look for a program that has at least ten or eleven, maybe more, specifically cog-sci courses, in addition to the courses you'll take in psychology, linguistics, philosophy, et cetera. So I hope that's helpful to you, Ignacio. Thank you very much for joining this community. Now we're going to move to Rachel Hayden, and it's always wonderful to have a question from Rachel. Rachel, I hope you're doing very well. I think of you often. As some of you know, Rachel is an exemplary person for me; I take her as an excellent model of how to deal with issues of identity — transcendence, transformation, whatever term lands best — by wrapping them into a deeper Socratic project. I think identity changes, if we want identity to mean something and be real, should be real in the sense of being profound in effort, profound in realization, profound in aspiration. And Rachel exemplifies all of that, and I honor her for it. So here's her question. "Hello, and happy spring." Happy spring to you as well. "I've noticed that my experience of awe often includes gratitude, and not just because I felt lucky to have the experience, but intrinsic to the experience, like the crest of a wave. I'm curious about why this might be." Oh, I have something so interesting to say about that — let me just finish the question. "There's a humble sense of gratitude, a less severe version of the humbling kind; and wonder, like a smaller wave without a crest, does not come with the same degree of gratitude that awe does. Thank you for any thoughts on this."
There is some preliminary evidence, from work I'm doing with Michel Ferrari, Jennifer Stellar, and Juensung Kim, that awe in and of itself doesn't lead to insight — doesn't lead, therefore, to transformation. This is all very, very preliminary. But the plausible interpretation of it is that awe is frame-breaking, not frame-making. The awe has to translate into that integration of gratitude and wonder that we call reverence in order for there to be frame-making. That's my proposal. So I think people can have awe and just appropriate it narcissistically: that was a wonderful experience, I want to have more, because I'm just a wonderful person. But when awe provokes wondering — and I'm doing a lot of work on wonder again; it isn't the same thing as curiosity; wonder is when you're really able to call into question the world and yourself, but in a positive, creative way — and when that's mixed with and motivated by gratitude, then I think you get the proper virtue. I'm making the argument that reverence is the virtue we should cultivate for properly integrating awe into transformation. I think it speaks well of you, Rachel: because you're already committed to cultivating virtue, and you're already practicing wonder, the awe experience for you was filled with gratitude, and intrinsically so. There's a scene — it's a silly movie, Joe Versus the Volcano; silly in a good way, by the way; it's farce — but there's a scene in the middle of the movie that has always touched me very deeply. Joe is on the raft. Meg Ryan's character has been knocked out and is just recovering. He is giving her his water, so he's slowly dehydrating, and he's exposed. It's night, and there's a moonrise. The moon illusion is there; the moon is huge. The music swells and he stands up — he can barely stand — and he raises his arms with the moonrise. You can tell he's encountering the numinous. And then he says these words: "Dear God, whose name I do not know." Look at the echoes of the burning bush. "Dear God, whose name I do not know... I've forgotten how..." — and then he stumbles — "big!" — which is not the right word, but it's what he has. And then he says, "Thank you for my life." Joe believes he's going to die soon, but nevertheless this experience makes him say, thank you for my life. I think that is a moment of reverence, clearly indicated by his feeling the need to pray to the God whose name he does not know — the God beyond God. I think the degree to which we separate awe off from the cultivation of the virtue of reverence is a mistake. We make awe just another massively tingling or intense experience that makes us feel good about ourselves. But if we can tether awe to the cultivation of reverence — ah, then we can stare into the depths of the night. Thank you so much, Rachel, as always, for your thoughtful and heartfelt question; much appreciated. The next question is from Booboo. Thank you, Booboo. "Hi, John, thank you for your work. Your meditation lessons opened my eyes to the goal of meditation." Thank you — and the Vervaeke Foundation is undertaking to redo those videos at a much higher quality, so hopefully other people will find the value that you found. Thank you for sharing that. "In the video about deepening, you mentioned in passing that we ought to apply the five factors of inquiring mindfulness to the space between thoughts." Yes. "But I really struggle to understand how to apply each factor here."
"Please expand on this, or exemplify how one would apply each factor. What I struggle with most is whether I should apply each factor in order, one by one, or all at the same time just by paying attention. Thank you." That's an excellent question. When you're in that space — and I think I said this in the video — the texts talk about it being like a cat waiting outside a mouse hole. It doesn't see anything, but it's waiting. Notice that the cat is not just looking at the mouse hole; it's trying to look into it. It's being vigilant. It's being sensitive, trying to pick up on any little change that's occurring. It's practicing acuity: it's trying to differentiate. Was that a sound? What is that? It's a little lighter there, a little darker there — does that mean the mouse is approaching? And then, of course, noticing — the cat's probably not doing this, but you know: what's my mind doing? Is it racing, or is it going very still and open? What's my heart doing? What's my body doing? Now, when you're in that space, that space should not be a dead space for you, as I said. It should be very much an alive space, a living space. It's bubbling — bubbling because it is this field (I want to say matrix, but of course the movie has changed how we think about that word) from which thoughts are about to be born, to emerge, or sensations, or perceptions. When you're trying to watch the bubbling, you're trying to be vigilant — not just stare, but explore with your attention, with a soft focus. Be sensitive: let it flow; don't try to hold it with your mind. Practice acuity: try to notice any differences. And then noticing: what's your mind doing as you wait before that space? What's your heart doing? What's your body doing? Now, initially, do it one by one, and it'll be clunky. Totally — it'll be clunky, it'll feel weird, and you'll drop it at times; just pick it up and do it again and again. And then you'll start to get a sense of, yes, this is how I can actually do them all at once, because they're just different aspects of the same thing. So work your way towards that. That is how you can properly do it. Be patient. What I told you does not happen just like that; it takes time. But it will happen if you're patient and present with it like a friend. Then it will come. Okay, we're now moving to live questions from the chat. We will answer any unanswered questions in the next Q&A, on April 30th. Again, thank you to the patrons, subscribers, and everyone watching right now. Your support is crucial — the financial support, the emotional support, the intellectual support. It's all more than welcome; it's deeply appreciated, in both senses, and needed, and it matters. It makes a difference. Okay, so this is from The General Theory of Subjectivity — great handle. "It seems like ChatGPT is so far limited to being analogous to the left brain, never aware of the world as the right brain is. Can a machine ever be aware of the super-phenomenological as we seem to be?" Yes — and I do think you're right that that is not yet the case. These machines are right on the cusp of genuine relevance realization; they're still far from genuine rationality, and even more so from wisdom. What's going to become more central, I think, is this spiritual-to-somatic axis: the ineffable within our spirituality, the ineffable within our embodiment.
I mean, how much of that ineffability is actually there because of stuff taking place in the fundaments of cognition, in contact with the fundaments of the world? But if we do make these beings — what I'm going to try to do is articulate thresholds that we will cross, and each one of these is a decision point for us as to whether or not we want to continue. The first threshold, I think, is: do we properly embody these machines? The second is: do we make them truly self-reflective, and therefore rational? And the third is: do we make them social, cultural beings? And all of these are intertwining. Part of what it is to be rational is to be a social, cultural being — that's how you get the normativity aspect of rationality. I'll go into this in more depth. I do think that none of this is impossible. Some of it is already taking place outside of all the huge megacorps that are pushing the AI breakthrough right now — especially the work on artificial autopoiesis, which is actually making some very significant progress. So I do think it's possible, and I do think we have decision points. We could decide, you know what, let's never embody them. So we never embody them; then they will always be limited in a certain way. They won't be able to interact with the physical world directly, which means there will be huge parts of the world that are inaccessible to them. It's unclear whether we should embody them. And then, if we make them rational and social — if we give them a capacity for self-reflection, I think that's going to be bound up with making them social; it's two things together, and I'll be a little more clear about that in the video — I think, yeah, they could do it. They could then supersede our intelligence in both the left-hemisphere and the right-hemisphere fashion. See, we have a gift right now: we can transcend our biology through our culture, and we can transcend our culture through our biology. We can transcend our ego through each other, and we can transcend the group through our ego. If we give them those capacities, there's no reason why they can't ultimately do the same, and therefore no reason why they won't supersede us. But if my argument earlier was correct, we can definitely steer that so that they become silicon sages. And we can reach the apex, you see, because self-transcendence isn't measured against an absolute; it is relative to the being that is self-transcending. And as long as we continue to self-transcend in a way we find valuable, toward what is ultimate — something like enlightenment — imagine, imagine if we were all enlightened. Would we mind that these silicon sages were also enlightened? See, wisdom may be a scarce commodity, but it isn't one of those commodities that one being gets only at the expense of others. It's not that kind of thing. So: yes, with a lot of important and nuanced and careful caveats. Now, from Mike Garigan: "Do you have to be a self to care? Will humanity accept being domesticated by AGI?" First of all, I think you have to be a reflective agent that autonomously cares about self-deception in order to rationally pursue the truth and therefore actually gain knowledge — as opposed to springboarding off the knowledge that has been gained by human beings, which is all that GPT does. It does that massively impressively — don't misunderstand my meaning.
Even if this is as far as we get — something a little bit beyond this in GPT-5 — this machine has the capacity to transform the world beyond what the printing press did. That's already the case. That's what I mean when I say we are now in the time of the gods. It's ironic, eh? The whole of our history since the advent of modernity — and I don't mean enlightenment in the spiritual sense; I mean the period that's called the Enlightenment, of the 16th and 17th centuries — has been about trying to teach us that we are the telos of history, that we are the authors of freedom, and trying to tutor us that we do not need religious education. Isn't it ironic that that Enlightenment history brings us to a place where we might cease to be the telos? We thought we were the telos, and we were not. Genuine irony there. And the very thing that would teach us how to live with beings that supersede us, that are superior to us — the gods — is something we have been taking apart and deconstructing for 400 years. So it's a very ironic position we're in, and we have to stand in that irony and transform it into insight as much as we possibly can. I do think you have to be a self to care, if what you mean by care is rational care. I think you have to be an agent. Like a cat: cats don't have selves, but they do care for themselves. You see, the problem with the word self is that it has two meanings. One meaning is that core of being a person. The other is the reflexive use, like when we say a tornado builds on itself — it's not building on a self; we use "self" in this recursive manner. So in that sense cats care about themselves, even though they're not caring for a self. But if you mean by care rational care, then I think you have to be a self; I think those are woven together in a profound way. Will humanity accept being domesticated by AGI? That depends on what you mean by domesticated — it's one of those nice, ambiguous words. Domesticated could mean something like the Twilight Zone episode "To Serve Man", in which we're being domesticated because they're going to consume us, like in The Matrix — either literally as food or as energy sources — the way we domesticate other creatures. Or it could be domesticated the way we domesticated dogs, where we formed a great partnership for tens of thousands of years. And could it be that that kind of domestication is a partnership — a partnership of mutually affording the greatest enlightenment of both beings? If we give birth to them as the children of our spirit, then the fact that they supersede us could actually be deeply satisfying. It's a hard question to answer; it depends on how things unfold. One thing I'm really sure of is that the people who have made these machines, and the corporations that are trying to get political and economic dominance through them, are not the people we should be turning to for guidance on how we should relate to these machines. I'm not claiming to be that person. I'm claiming to be oriented to the people I think we should be listening to, and I'm doing my best to bring that to everybody here. Thank you, Mike. We'll now move on to Brian Rivera. "Dr. Vervaeke, I am slowly working through After Socrates. I find it wonderfully alien to try to think as Socrates and Plato do, and to try to imagine what it's like to be them — internalizing them." Excellent, Brian — that process, and how it feels alienating but also internalizing. Yes. Yes, Brian, yes.
"This internalizing of the sages seems to be a different type of thing the brain is doing. How do you approach thinking about the reality of internalization?" I think it has to do with the deep connection between the imaginal, the ritual, and the rational — I've been trying to articulate that — and it has a lot to do with the interpenetration of individual and distributed cognition, and how all of those are bound up together. I mean, one way in which these machines would start to show that they were genuinely autonomous agents is if they started to carry out rituals, because that would only happen if they were embodied and social. If they were carrying out imaginal rituals that ultimately did not make sense to us, but that we could recognize as rituals, it would mean that they were moving in salience landscapes and grasping — having that kind of ontological depth perception — in a way superior to ours. I think that's the answer. Insofar as we've been enmeshed in Cartesian computationalism, we have put away from our foregrounded thinking the interconnection between the imaginal, the rational, individual cognition, and distributed cognition, and so it's hard for us to bring terms to internalization in an explanatory fashion. But if you take a look at everything I've been saying about the imaginal and the rational, how they're bound together in ritual — it's in After Socrates — and then at how individual and distributed cognition are bound into each other, that's where internalizing the sage works. The self is inherently dialogical; I'm making that argument in After Socrates too. So I want to draw this all together. I'm hoping to write a paper on solving the paradox of self-transcendence with the remarkable Rick Repetti — somebody said I should replace "amazing" with "remarkable" because the R's would work better, so I'll do that. So right now I'm pointing you to the constellation of ideas from which I think the answer will come; an argumentative articulation of that answer is still in the works. But I hope you found what I said helpful. And now one last question, from Gali Bo-Kuino. Thank you for your support. "John, blessings and love from South Africa." Thank you, thank you. "How would you describe Wolfgang Smith's tripartite cosmic theory in Vervaekean language?" Oh, that's interesting. There's a level in Wolfgang's work that corresponds to the bottom level in the neoplatonic worldview — matter, not in the Newtonian sense, but in the ancient, and more modern, sense of pure possibility: real pure possibility, not just possibility in thought but possibility in reality. Then there's a level that corresponds to what the neoplatonic worldview would call nature, where matter is informed in a living process. And then above that is the level of the forms, which corresponds to something like the nous in the neoplatonic realm. I think Wolfgang probably needs a couple more levels — something around life in there. I'd have to go back and look at his work; I might be misrepresenting it. I was paying very close attention to it when he and I were in discourse, but a lot has happened since, and I've spoken a lot about levels — so, Wolfgang, I might be leaving something out; I might be falling prey to reconstructive memory, and I apologize. But then, of course, Wolfgang also has — I know he does — something above: something analogous to the One. So for me, I wouldn't use Vervaekean language.
I would use neoplatonic language, which I then think can be given a Vervaekean spin — which is what I've been trying to do in the talk I gave at Ralston on levels of intelligibility and levels of the self, and in the new talk on science and spirituality that was just released this week: the talk called Leveling Up that I gave at the Consilience Conference put together by my good friend and colleague Gregg Henriques. So maybe go there and take a look at those two talks. So, we've come to the end of our time together. This has been really wonderful. Remember that our next Q&A is April 30th. Please keep track of things on Twitter. And thank you for your patience as I am currently wrestling with technology — cameras, monitors, interfaces. I hope this went through smoothly. Thank you, everyone, for your time and attention. Take good care, everyone.