https://youtubetranscript.com/?v=4mUsDd4jbV0
Hello everyone. The video you are about to see was originally broadcast on the IdeaCast channel hosted by Justin McSweeney. In it, I'm joined by my good friend Gregg Henriques and by somebody I've talked with a few times before, Michael Levin, and together we explore the idea of levels of ontology. It's a rich and wonderful discussion. I hope you enjoy it.

Normally I say welcome to my guest when I start an IdeaCast interview, but I have three people here who are of great significance to me, to my learning path, and to my journey towards understanding and relating to wisdom, and I'm very much looking forward to this conversation. I welcome the YouTube audience. John, Gregg, Michael, welcome to the three of you. I'm so glad to have you all here today.

Pleasure. Thanks for having me.

I want to acknowledge that you each have your own YouTube platforms and your own huge social relevance, so to have you here on this show, I'm humbled and grateful to the three of you for coming and allowing me to open up the space for you and be a close attendee of what's about to unfold. And I'll ask your charity here: I'm a layperson, one of those dreaded autodidacts, but I ground a lot of it in the humility of epoché and Pyrrhonian inquiry, so I consider myself a student of the work of the three of you. In thinking about how to start today, I looked at what Michael says about the audacity of the imaginal being grounded in empirical evidence and the data, and the dance between the two, as we open the aperture on what intelligence does and what it might be; at Gregg's layered ontology, with its joint points between different stages of self-organization, where there is so much richness of territory; and of course at John, with his compendium of ideas and epistemological rigor. I see a beautiful convergence here between the three of you. So I will be a good host and step to the side. Gregg, if you would like, go ahead and start the conversation; I know it's going to flow really well.

All right. I appreciate it, Justin. It's a wonderful opportunity to be here and to share ideas with people I admire greatly. So here's what I'd like to throw out there. Michael, you may have seen my Tree of Knowledge diagram: four upside-down cones emerging out of an energy-information singularity source, a one-world naturalism from a big-history view. It's really an expansion of complexification, but something happens at different points that results in a qualitative shift: in particular, a plane of Life emerges, then a plane of Mind (Animal) and a plane of Culture (Person). It took me a long time to figure out what these planes were actually coming off of. Matter as complexification seemed fine, but I wasn't sure exactly what was happening with the Life, Mind, and Culture cones for a couple of years after I developed the diagram. Then it hit me: each is a complex adaptive system networked together through information-processing and communication systems that afford particular potentialities, mediated by certain kinds of systems. To put it simply: gene and cell for life; nervous system and animal for mind; propositional language and collective human intelligence for culture.
I was enormously struck by your cognitive light cone analysis, and one of the things I really wanted to talk with you about is how you conceive of a cognitive light cone, how you conceive of intelligence and its emerging evolution, and whether there's a relationship between that cognitive light cone idea and the cones I'm depicting in terms of emerging complexification through information networks and the exploration of design space. And then what I'd like to do, if that's the case, is connect the edge of that with recursive relevance realization as a dynamic process we can apply to see how these things emerge. So I'll throw that out there, see if you'll riff off of it, then pull John in, and see where this thing might explode after that.

Well, I guess I should start by just describing this cognitive light cone idea. I should preface it by saying that the version that's out now, which I published in 2019 or so, is very much a 1.0 vision. It needs significant improvement; that much is clear, and I'm working on it now. But as it exists, it's the following. I was at a Templeton meeting, a conference of people studying diverse intelligence, and Pranab Das tasked us with the challenge of coming up with a framework in which you can simultaneously think about truly diverse beings. Not just the familiar apes and birds we're used to, and not just, say, an octopus or maybe a whale, but really diverse: colonial organisms, synthetic biology beings that are being and are going to be made, cyborgs, AIs whether software or robotically embodied, hybrots (combinations of living material and engineered material), possible aliens. All of it. I'd been thinking about this for a long time, and I took that as an opportunity to try to formalize some of it. How do we do that? What I thought was really fundamental to any agent, any being we're interested in, is the scale of its goals. Goal-directedness, some degree of it. And I don't like binary categories; I don't believe there is such a thing as goal-directed versus non-goal-directed. I like Norbert Wiener's cybernetic scale that goes from passive matter all the way up to human metacognition and whatever is beyond that. So I thought: for each one of these potential beings, let's map out the largest goal it could possibly pursue. You collapse space and time onto a two-dimensional sheet, and you get something that looks like Minkowski's light cones, where the size of the cone is the size of the goals. And you can start to think of different cases. If I ask you what you care about and you say, "I care about sugar concentration within this ten-micron radius, I have a memory that goes back about twenty minutes, and I have predictive capacity that goes forward about ten minutes," you're probably a bacterium. And if you tell me that you care about things that happen within, I don't know, a hundred-foot or hundred-yard radius or so,
and you've got some memory going back, but you're really never going to care about what happens three months from now in the next town, I'm going to say you might be a dog. If you've got planetary-scale goals about the way the financial markets and world peace are going to look a hundred years after you're gone, you're probably a human. And if you tell me you can genuinely care for thousands or millions of beings, all sentient beings, I'm going to say you're something beyond a standard modern human. I don't know what that is; we can't really do that yet. So that's the idea, these cognitive light cones.

There are two things I'll add. One is that I think these cognitive light cones interpenetrate. Take a human body, for example: there are many, many subsystems that have their own inner perspective and their own goals that they're pursuing in various problem spaces. That doesn't just mean three-dimensional space. Your body is home to all kinds of structures that live and suffer and strive in other spaces: physiological state space, metabolic space, transcriptional or gene-expression space, anatomical space if you're an embryo. We're not very good at recognizing these, and there are many of them cooperating, competing, and so on, all at the same time. That leads to the second point, which I think is pretty critical. We talk about goal-directedness: teleology, of course, being goal-directedness, and then there's this concept of teleonomy, which is defined as apparent goal-directedness. Now, some people use that word to soften the impact of teleology; they say, look, it's not really teleology, it's just apparent teleology. I'm not using it that way. I am full-blown into teleology; I think it is absolutely a necessary concept for proper understanding. What I think is important about teleonomy is this: it is apparent goal-directedness because it reminds you to take the perspective of some observer. There is some observer who has to make hypotheses about what they're looking at. What problem spaces is the agent operating in? What are its goals? What degree of competency does it have to reach those goals when situations change? All of these are hypotheses from the viewpoint of some other being, or of the system itself, because once you're past a certain level of advancement on that spectrum, you tell stories and build internal models about yourself; you have a model of yourself as an agent. So scientists as external observers, parasites, conspecifics, predators, and the system itself all have these perspectives. Keeping in mind that all these things are not objective, universal truths, but some observer trying to make sense of the world as they look at themselves and other things: that, to me, is the idea of these light cones.
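To make the scale-of-goals idea concrete, here is a minimal toy sketch in Python. Everything in it (the class, the field names, the numbers) is invented for illustration; it is not a formalism from Levin's 2019 paper, just one way to render "the largest goal an agent can pursue" as extents in space and time.

```python
from dataclasses import dataclass

@dataclass
class CognitiveLightCone:
    """Toy model: the largest goal an agent can pursue, as an extent
    in space (meters) and in time (seconds into past and future)."""
    spatial_radius_m: float    # how far away its goals can reach
    memory_horizon_s: float    # how far back its states inform goals
    planning_horizon_s: float  # how far forward it can anticipate

    def scale(self) -> float:
        # One crude scalar for "goal size": spatial reach times temporal reach.
        return self.spatial_radius_m * (self.memory_horizon_s + self.planning_horizon_s)

# The examples from the conversation, with made-up numbers:
bacterium = CognitiveLightCone(10e-6, 20 * 60, 10 * 60)        # ~10 microns, minutes
dog       = CognitiveLightCone(100.0, 30 * 24 * 3600, 3600)    # ~100 m; no "next town, three months out"
human     = CognitiveLightCone(1.3e7, 1e11, 3e9)               # planetary space, historical/century time

for name, agent in [("bacterium", bacterium), ("dog", dog), ("human", human)]:
    print(f"{name}: goal scale ~ {agent.scale():.3g}")
```

A single scalar is obviously too crude (goals also live in non-spatial problem spaces, as Levin says above), but it makes the ordering bacterium < dog < human vivid.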
Lovely. And would it be fair to say, then, that when life gets started four billion years ago and then explodes, we would actually see in the universe, at least on planet Earth, essentially an emergence of light-cone-like structures in relation? Would you say that?

Yeah. And I know it's weird for a biologist to say this, but I don't think life is a super interesting or discrete category. What I think is more interesting is cognition, the spectrum of cognition. If you think of a Venn diagram, the cognition circle and the life circle overlap quite a bit, but I don't think they're the same circle, and I think you can have things on the spectrum that people currently would not call alive, which is why I'm less interested in that characterization. If I had to give a definition of life, which I don't usually do, but if I had to, I would say that life is what we call things that are really good at scaling up their cognitive light cones. If you have a collection of pebbles, they are basically only good at energy minimization and things like that. By the way, I don't think that's zero on the cognitive scale; I think it's very low, but I don't think it's zero. But when you have a rock made of those pebbles, you have not scaled the cognitive light cone: the rock has exactly the same capabilities. Once you have life, what you find is that it's arranged in a way where the components have little tiny light cones and the collective has a bigger cognitive light cone, one that actually extends into new spaces. When we see that happening, when we see goal-directed systems being multiplexed so that the size of their goals grows, so that they acquire these grandiose, longer-term, spatio-temporally extended goals, we call that life. I think that's what life is. But things that we would be hard pressed today to recognize as life can have cognitive light cones, and maybe large ones at some point.

That's fascinating. All right, I'll pause and see if John wants to jump in here.

Well, yeah, there's a lot I want to talk about there. I want to build off of Michael's idea of light cones, which I do mention in some of my lectures at the University of Toronto. I want to note that there are at least two parameters within a light cone: as I understand it, there's reach and clarity. And I think that brings in some of the work I'm doing on cognition, in which I talk about the two meta-problems of that activity. If you're going to be a problem solver, there are two problems you're always solving as you try to become a more adaptive problem solver. One is anticipation. I deliberately don't say prediction; I think that's a bit of a misnomer, because if you can predict but can't prepare, that's not very adaptive, and we have experimental evidence that this is the case for at least living creatures. So I use the term anticipation. You want to anticipate as deeply as you can. Typically that enhances the number and kinds of problems you can solve, because the earlier you intervene in a causal pathway for a problem (often, not always, but very often) the easier it is to solve that problem. It's much easier to avoid the tiger than to fight the tiger, is my slogan. And that's this idea of the light cone.
But the problem with that, which is the second meta-problem, is that as you increase the reach, you increase the problem that has been the besetting obsession of my career: the issue of relevance realization. The amount of information you have available, the amount of information you have to store, and all the combinatorially explosive combinations of it go up exponentially. And you can't just arbitrarily choose from that what to pay attention to, and you can't algorithmically search it; you're somewhere between the arbitrary and the algorithmic. That gives you the issue of relevance realization. I have proposed a way in which the two theories connect, because the two problems depend on each other: you can avoid relevance realization, but only by shrinking your cone of anticipation very considerably; and if you want to increase your anticipation, you increase the relevance realization problem. To make a very long, complex argument as brief as possible: predictive processing is always trying to minimize error, and it hits inevitable trade-off relationships between errors. If it tries to reduce bias, it increases variance; if it tries to reduce variance, it increases bias. If it tries to reduce the errors of exploring, it crashes into the errors of exploitation. There are all these inevitable trade-off relationships. And the idea is that predictive processing creates opponent processes that give what's called an optimal grip on the world. That's what I mean by clarity: it's not just that you reach out well, but that you know how to optimally grip what falls within your light cone. That's how I think the two go together.

Now, what comes out of both recursive relevance realization and especially predictive processing is this idea of mutual modeling. In predictive processing you always have to model yourself. I don't mean model yourself as a self, please hear that. But you have to model yourself when you're modeling the environment, because you have to deal with conflation errors: stuff that's happening because it's inside of you gets projected onto the environment. So you're always trying to model the self to some degree in order to discount the errors caused by your own embodiment. That's the great insight predictive processing runs on: don't try to directly predict the world; predict yourself interacting with the world, and that will help solve those problems in an interlocking fashion. So when you're modeling the world, you're always to some degree modeling yourself, and as you're modeling yourself, you're always to some degree modeling the world. The two are interpenetrating. And I think that goes a long way towards the teleonomy Michael was talking about: there is something like self-modeling going on.

Now, for me, and this might be where Mike and I differ, I think that self-modeling and relevance realization depend on a system in some sense taking care of itself. My argument is to the effect that relevance realization is always caring about this information rather than that information. And by care I don't mean the experiential affect; I'm using it in a very broad, almost Heideggerian sense. Caring for yourself is what gives you the capacity to genuinely care about this information or that. This information matters to you; that information doesn't, initially, perhaps because that matter literally matters to you: you have to take it in or you're not going to continue. And so I think relevance realization grounds in autopoiesis.
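The trade-off point above lends itself to a small demonstration. Here is a toy sketch using the most familiar case, an epsilon-greedy multi-armed bandit. This is my illustration, not an example from Vervaeke's work: pure exploitation and pure exploration both underperform a mixed, opponent-balanced policy.

```python
import random

def run_bandit(epsilon: float, arms=(0.3, 0.5, 0.8), steps=5000, seed=0) -> float:
    """k-armed bandit with an epsilon-greedy policy.
    epsilon=0 -> pure exploitation (risks locking onto a bad arm);
    epsilon=1 -> pure exploration (never cashes in on what it learned)."""
    rng = random.Random(seed)
    counts = [0] * len(arms)
    values = [0.0] * len(arms)  # running estimate of each arm's payoff
    total = 0.0
    for _ in range(steps):
        if rng.random() < epsilon:
            a = rng.randrange(len(arms))                        # explore
        else:
            a = max(range(len(arms)), key=lambda i: values[i])  # exploit
        reward = 1.0 if rng.random() < arms[a] else 0.0
        counts[a] += 1
        values[a] += (reward - values[a]) / counts[a]           # incremental mean
        total += reward
    return total / steps

for eps in (0.0, 0.1, 1.0):
    print(f"epsilon={eps}: mean reward {run_bandit(eps):.3f}")
```

Neither extreme wins; a middling epsilon does. That is the flavor of the "opponent processing gives optimal grip" claim in miniature.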
That's something we can talk about. I do think that life represents a significant capacity change, and we can talk about whether there is cognition without caring, or whether you have an analog of caring going all the way down, Mike. I'd like to hear that, because as you know, I'm very interested in this deep continuity. I'd put one thing to you at a slightly more abstract level. Two points, and then I'll stop talking. One: if we are all non-reductionists, then on a continuum without reduction, differences of degree eventually become differences of kind, because with non-reductive continua you have to have properties at upper levels that aren't at lower levels. So I think you get real emergence, and I think that's a difference in kind, and I think that's a way in which your continuum and Gregg's series of cones could plausibly mesh together.

Here's my final point, one I've been doing a lot of work on, and Gregg and I did a lot of it together on our Transcendent Naturalism series. As we start to get this understanding of cognition, we see it as properly transjective: always between the system and the world, always between the organism and the world. And that means these discoveries about minds are also ontological discoveries about the structure of reality itself, and the two have to be understood together. I grant that it's a continuum, but we talk about it in levels, and I accept that distinction: the levels are properly epistemic; the reality is a continuum. But what I mean is that as we find levels in the mind, then unless we're willing to bite the bullet of a profound solipsism and skepticism, we have to say there's something corresponding in levels of intelligibility in the world, and that is an ontological claim. For me, that means we are deeply committed to a different kind of ontology than the flat ontology we have been doing science in for quite some time. I won't belabor this: it's a recovery of an older Neoplatonic ontology, rather than the flat ontology we've been living with. And I think this is important because it can ground a spirituality that is not just about psychological hygiene but about genuine epistemological and ontological realization. So that's what I have to say about that. I hope that made sense; it was really compressed. I'm trying not to hog all the time, but Mike, you always say tremendously provocative things, and I wanted to respond in kind.

Yeah, thanks. That's great. I don't disagree with almost any of that. Some of these Platonic ideas are really starting to come to the fore in our work. I haven't written too much about it yet, but I will as the arguments get better. So I'm on board with pretty much all of that. I'll just say one thing about the continuum business, and then I want to add something to what you said, which is very interesting.
Here's how I think about this question of when differences of degree become differences of kind. This is why I called my framework TAME, the Technological Approach to Mind Everywhere. Not because technology encompasses everything there is; obviously that's not the case. But the technological approach is interesting for the following reason. Imagine the paradox of the heap: you've got a pile of sand and you start taking grains off. Here's what I think all of these claims are, including any cognitive claim, any claim about what systems can and can't do in terms of intelligence: they are interaction-protocol claims. They're engineering claims, in the sense that what you're really saying is, "here is a way I can interact with that system." So let's take the heap first. If you tell me you need to move a pile of sand, I don't really need to know whether it's a heap or not. Here's what I need to know: am I bringing a spoon? A shovel? A bulldozer? Dynamite? What are we bringing? And there will be lots of scenarios in which either one, a big shovel or a small bulldozer, will do. So I think all of these claims are fundamentally about the right way to interact with a system. When you tell me a given system is somewhere on this spectrum, I'm less interested in finding sharp categories and emergent new phase transitions, and much more interested in the question: what tools are you telling me are going to be appropriate? If you tell me something is a simple machine, I understand that rewiring and hardware modification are all I've got. If you tell me it's a cybernetic thing, I think: ah, I've got the tools of resetting set points and the other aspects of cybernetics. If you tell me it's a learning agent, I say: okay, we have training, we have everything behavioral science can do. And if you tell me it's at the level of human discourse or above, I say: ah, that means I have certain other tools, and also, I may be changed by the encounter. Unlike with a simple machine, after we're done exchanging, I'm hopefully also going to benefit from your agency, and we're going to have a different relationship. So to me, none of this is about looking for categories; it's about how we're going to relate to whatever system is in question, in a very specific way. I say "engineering," which is applicable to the whole left side of that spectrum; after that it becomes other things, psychoanalysis maybe, or love and friendship. But that's what I think these things are: interaction frames that we take up and then test.
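One way to render this "interaction protocol" framing is as a lookup from hypothesized frame to appropriate tools. The enum and tool lists below simply paraphrase the examples Levin gives here; they are an illustrative sketch, not anything from the published TAME framework.

```python
from enum import Enum, auto

class Frame(Enum):
    """A cognitive claim recast as a claim about which interaction
    protocol is appropriate, not about a hidden essence."""
    SIMPLE_MACHINE = auto()   # tools: rewiring, hardware modification
    CYBERNETIC     = auto()   # tools: resetting set points
    LEARNING_AGENT = auto()   # tools: training, behavioral science
    PERSON_LEVEL   = auto()   # tools: discourse; you may be changed too

TOOLS = {
    Frame.SIMPLE_MACHINE: ["rewire", "modify hardware"],
    Frame.CYBERNETIC:     ["reset set points"],
    Frame.LEARNING_AGENT: ["train", "condition", "shape rewards"],
    Frame.PERSON_LEVEL:   ["converse", "persuade", "befriend"],
}

def propose_tools(frame: Frame) -> list:
    """A frame is a hypothesis: try these tools, check empirically whether
    you are over- or under-recognizing mind, then revise the frame."""
    return TOOLS[frame]

print(propose_tools(Frame.CYBERNETIC))  # -> ['reset set points']
```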
Which is why this is really critical: a lot of people have philosophical pre-commitments about where things belong. They'll talk about category errors. They'll say, well, it's a category error to say that cells and tissues can think or can have intelligence (and yes, I use the word "think"). Well, in the Middle Ages it was a category error to think that the same forces that moved rocks on Earth moved the celestial objects in the sky. That used to be a category error, except that categories need to evolve with the science. I think these are all empirical questions. I don't think we get to sit back and have feelings about what is and isn't intelligent; I think we have to do experiments. So you pick a frame and you try it. You make a hypothesis: here's the space I think it's working in, here are the goals I think it has, here's the degree of competency I claim it has. Let's do the experiments. We'll intervene in some way, and then we'll know: am I over-recognizing mind, or under-recognizing it? And then we pick again. So in the end, I think it's a scientific problem of optimizing relationships.

I think that's great, and I'm mostly in agreement with that, too. Two things come to mind. I think there is still a proper philosophical job here, in that scientific experiments presuppose things that therefore can't be given by scientific experimentation. That doesn't mean the philosophical level gets to dictate; it means the two discourses have to continually talk to each other, which is, of course, why I'm a cognitive scientist. For example, in the model you propose, which I think is good, there is a fundamental presupposition that relationality is central to any grasp on ontology. And that opens up a question: notice that information and intelligibility are inherently relational things. Maybe we should be prioritizing relationality over the relata in our ontology. That's the kind of philosophical question that emerges by reflecting on what is presupposed in the science. And Mike, I hope you take this not as an insult: I think you're actually doing work that is pushing towards that, saying pay attention to the relationality over the relata, and prioritize it. I think that is a deep and fundamental challenge to our standard ontological grammar, which goes back to Cartesian substance, where we talk about individual things having properties that can exist independently of their relations to other things. There are a lot of people, and I'm one of them, saying we need to challenge that fundamental Aristotelian ontology in order to accommodate into our worldview what the current science is disclosing. What do you think about what I just said?

Yeah, no, I think it's exactly right. If you ask some people what is the central thing that persists through time, they'll say genes, and somebody else will say information. What I think it is, is perspectives. A perspective is a chosen reduction of all the stuff you could take in from some vantage point: you agree to ignore some things, and you emphasize other things. So perspectives are what change, evolve, and interact. I think it's all about interactions and perspectives: observers, perspectives, interactions. I think that's the basis of everything we have to do in science.
And that's very similar to Ladyman's structural realism: what is persistent across all the sciences are these broad, real patterns by which we do, as you said, this compression and selection of information, and what survives is not the particular semantic content we give to it but these structural patterns. I think that's something deep. So you have not only a Neoplatonism up and down; you have a Neoplatonism across time, which I think is really, really interesting. I'm going to stop for a bit, because I feel that you and I are starting to get into a rhythm and I don't want to exclude Gregg at all. And I really appreciate what you're doing; I make reference to your work a lot.

Yeah, thank you.

I think our work is complementary and we mutually strengthen each other's positions in a way that's intellectually respectable and justifiable. But I think the same is true of my work and Gregg's. I want Gregg to talk now.

Well, actually, that's a nice segue, because I do want to check in with you, Michael, about the intense, brilliant theoretical work you're doing. You and I touched on this a little in our private conversation. John and I have talked about the meaning crisis. I'm a clinician; I'm deeply concerned with how we see ourselves as human beings, with what science says about what we are, what we know, and how we think about it, and with how that connects to the wisdom traditions. I see your work as brilliant empirical work that challenges certain old pre-existing notions that at times dominated the paradigmatic natural-science view, or at least opens up a wide variety of different perspectives. As a psychologist who looks at the way people think about themselves in the world, I hope we evolve to new frames. John and I are doing a series called Transcendent Naturalism, basically anchoring us, in a naturalistic way, to the potentialities of transcendence at individual and collective levels: what kinds of worldviews, and what kinds of scientific understandings of the world, afford that? So I'd really like to hear your thoughts. What has been your experience as you open up this realm, as you share this teleonomic perspective and open up our thinking about light cones across a wide variety of domains? What does it say about us in the universe from a scientific perspective, and what does that mean?

Yeah, great topic. I'll say something general first, and then I'll dive into a specific example of what I think this means. Overall, I think the whole crisis-of-meaning issue is incredibly important. The work I try to do, I view very strongly as trying to climb out of it, not as reductively digging the hole deeper. And this is really important because, although I'm not a clinician, I get tons of emails from people who say: okay, I've read your paper; I understand that I'm a collective intelligence of cells, and now I don't know what to do with myself anymore. Literally: what should I do? Or: maybe I've read some Sapolsky and now I don't think I have free will anymore, so I'm really confused; I have no idea what to do.
So I think it's really critical that the stuff we do is seen as what I think it really is: providing a way to climb out of all the things we were told by evolutionary theory, by neuroscience, by physics. "Well, you don't have this, and actually you don't have that, and it's all about competition and survival of the fittest." Okay, there were a lot of bad ideas that needed to go. Great. But now we've got to climb our way up the other side and rebuild, on a better foundation, some of the things that are necessary for us to flourish. I think that's partly what we're doing. And I think a huge part of that is the whole diverse intelligence field and this idea of building tools that go beyond the very narrow monkey-brain affordances we have for recognizing other kinds of minds. I think that's critical. Once we are able to recognize other sentient beings around us, and commit to this notion of enlarging our own cognitive light cone so that we can recognize and have compassion for beings that don't look like us, that don't have the same origin as we do, that are different in every way, things are going to change. I look forward to a future in which the kinds of distinctions we currently make within normal human variation ("these are like us; those are other, not like us") become laughable. When a real freedom of embodiment takes off and you're not stuck with whatever body evolution and genetics happened to land you in, the diversity of bodies and minds that are going to be out there will make all these current distinctions completely laughable. And I think that's good. I think we have to mature; I think we have to drop a lot of old categories that made sense in olden times but don't make sense anymore, because they don't actually capture what's unique about sentient beings worthy of compassion. So that's the general stuff.

Now I want to say one thing about the more specific issue of what we are. This goes back to John's point about the problems that any being faces. There's one more interesting problem, one that goes across scales and across evolution; it's called Bateson's paradox. The idea is that if you're a species, the world is going to change, and you've got two options. If you don't change, if you try to remain the same, you're done for; you're going to disappear. And if you do change, in a certain sense you've also disappeared, because now you're something else. So every agent faces this interesting problem: if you're going to persist, or better yet, learn and improve, whatever your journey is going to be, you are not going to be the same. Committing to a static representation of what you are is doomed. It's doomed at the evolutionary scale, and it's doomed at the personal scale, for the following reason.
And this also goes back to the point John raised about the relevance, or the salience (you said relevance; I'll call it salience), of information. Think for a minute about the caterpillar-butterfly situation. A caterpillar lives in a two-dimensional world and eats leaves, and it's a soft-bodied creature, which demands a very particular kind of controller, because you can't push on anything; there are no hard parts. It has to turn into a butterfly. In order to do that, the brain basically gets dissolved: most of the cells are killed off or their connections are broken, and you build a new kind of brain. Now, one amazing thing that has been found in various systems is that the butterfly or moth actually remembers things you trained the caterpillar on. Memories persist. You might focus on the question: wow, where is the memory? If you refactor the brain, how do you still have it? That's a fantastic question for developmental biology and, actually, for computer science; we don't have any memory media that work that way. But there's a deeper issue here. I should say what it is that they learn: you have a disk of a particular color, say purple, and the caterpillars learn that they get fed on this purple disk. Then the butterfly will go there and try to eat. Here's the interesting thing. Not only do butterflies and caterpillars not eat the same stuff (the caterpillar wants leaves; the butterfly doesn't care about leaves, it wants nectar), but the physical embodiment is completely different. So it's not enough to keep the memory as it is; the memory as it is, is completely useless. You have to transform that memory: keep the salience, dump the details, and remap it onto the new setting. A weirdly grandiose way of putting it: in your new, higher-dimensional life (literally, because the butterfly lives in a 3D world), you will not keep the memories of your past life, but you will keep the deep lessons you learned. You're not going to know that moving certain muscles under a certain stimulus gets you to leaves; you don't care about leaves, and you don't have those muscles anymore. You have something completely different. So being able to remap that information when everything changes is really fundamental.

And when we think about what we are, here's what I'm getting at. You might think: okay, caterpillar-butterfly, that's a really extreme example; we don't do that. Or planaria, which learn, and then you chop off their heads and they regrow a new brain and re-imprint their memories; okay, we don't do that, these are weird cases. No: I think this is all of us. We are absolutely that type of being, not a static structure whose job is to keep itself intact against everything that happens.
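A toy sketch of the remapping pattern Levin describes here: compress a memory down to its salience, dump the embodiment-specific details, re-expand it for a new body. The dictionaries and effector names are invented purely for illustration.

```python
# Caterpillar-to-butterfly remapping, as a toy: keep the lesson,
# drop the old body's details, re-express through new effectors.

caterpillar_memory = {
    "cue": "purple disk",
    "lesson": "food here",
    "effectors": ["crawl with prolegs", "chew leaf"],  # 2D, soft-bodied details
}

def compress(memory: dict) -> dict:
    """Metamorphosis step: strip embodiment-specific detail, keep salience."""
    return {"cue": memory["cue"], "lesson": memory["lesson"]}

def re_expand(engram: dict, new_effectors: list) -> dict:
    """Reinterpret the compressed engram for a new body and new goals."""
    return {**engram, "effectors": new_effectors}

engram = compress(caterpillar_memory)  # what survives pupation
butterfly_memory = re_expand(engram, ["fly to target", "extend proboscis"])
print(butterfly_memory)
# {'cue': 'purple disk', 'lesson': 'food here',
#  'effectors': ['fly to target', 'extend proboscis']}
```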
Fundamentally, I think that at any given moment you don't have access to the past as it was. What you have access to are the engrams, the messages your past self left for you in your mind, in your brain, and in your body, and you have to interpret those. Puberty will alter your brain in various ways; all your priorities will change, your preferences will change. When you're ninety you will still have memories of your childhood, but not because any molecular structure in the brain has stayed the same for that period of time. Everything is bubbling around; molecules come in and out, cells come in and out. What you are constantly doing is reconstructing yourself and your memories to make them applicable in the new scenario. So what does this look like across scales? For the human, it just means that as things in your brain and body go in and out, you are maintaining a coherent self-model of some sort. In evolutionary terms, it means that evolution, long before we had brains or any of that, doubled down on the idea that everything is going to change: the environment is going to change, and your parts are going to change, because you will be mutated. This is why we have these amazing examples: when we make tadpoles with an eye on the tail instead of in the head, they don't need new generations of adaptation; they can see, and they can learn in visual assays, immediately. There are many amazing situations (I write about this stuff a lot) where you can radically change not just the environment but the parts themselves; you can put in weird nanomaterials and so on, and you always get something coherent. I think that's because biology assumes you can't just learn the structure of the past. You have to make problem-solving agents, and the body, and eventually the brain and the mind, are continuously reconstructing, because everything is change. There are other things that could be said about that, but I'll stop in a minute. One of the things we're learning from all of this is that if you want to know what we are, it is less plausible to think of ourselves as some sort of static structure that tries to hold on to the engrams of the past. We are a continuous process of sense-making and reinterpretation. I'm obviously not the first person to say this, but we now see it across scales, from the evolutionary to the molecular to the developmental, from the robustness of the body to the robustness of cognitive systems (confabulation, all of that). The noise and unreliability of the substrate is not a bug; it's a feature. It's the thing that makes us intelligent and robust, because you assume right off the bat that everything is going to change, and that your number-one fundamental capacity is to remap onto new scenarios. And if you think about what happened in computer science and robotics, they went exactly the other way.
We work super hard to make sure the hardware always works correctly, and then we code on top of that, knowing the hardware is reliable, and you end up with a completely different set of systems from what biology does, which is to assume that all the stuff underneath is going to change: it's going to die, it's going to mutate, it's going to be poisoned, and we're still going to remap. So that's one thing I think we're learning about what we are.

That's really fascinating. Have you ever, by any chance, come across relational frame theory? It's a bridge off of Skinnerian theory. Basically, what it says is that the operant is a relational set of patterns rather than a particular thing or stimulus: what you're actually doing is tracking patterns and being pulled into operant patterns through relational frames. Listening to you, I think that's consistent both with the structural realism John mentioned (the idea that what we can really track in the world are patterns) and with this: if we're building our recursive-relevance-realization salience structures in an unbelievably changing world, what are the things we can track? Pattern relations might be the thing that affords our cybernetic goal-tracking.

Yeah. And one last piece to throw in there. Think about what biology does: it takes some kind of complex state, of stimuli and effects and all of that, squeezes it down into a very compressed representation, and then tries to re-expand it. The caterpillar learned all this stuff, it gets squeezed down into some sort of molecular substrate, and then re-expanded, remapped, onto the butterfly. Two quick things about that squeezing. One is that this squeezing-and-expanding is everywhere. Metazoan organisms: you've got your organism, you squeeze it down to an egg, you re-expand. You and I having this conversation: I have some complex brain state, and if I gave you a spreadsheet of all my neuronal activation levels, it would do you no good, because your brain is different. What we do is use language: we squeeze it down to a simple, low-bandwidth message, and you have to re-expand and reinterpret that message. Do I know that you re-expanded it the way I did? No, but we do our best. You can think of science this way, too: writing papers and giving talks is the squeezing-down. So I've been thinking a lot about what features of an architecture would enable this kind of amazing process. And one of the things that struck me was a really cool line of William James's: the thoughts are the thinkers. I just like dissolving boundaries between things. If you dissolve the boundary between data and the cognitive system that operates on that data, then you might say: maybe the data isn't just passive. Maybe the thing you learned isn't just a passive thing that sits there hoping for some other cognitive system to come read it and remap it. Maybe it's got a little bit of activity of its own; I don't know how much. Maybe it's got an agenda. Maybe the agenda it has is to be properly, optimally placed in some cognitive system. Maybe it wants to be understood.
And yeah, we have to use quotes there, because I don't know to what degree, but I actually don't think there's a sharp boundary here. So maybe memories are not passive. I thought of this again because of the relational frame theory you mentioned: maybe these patterns, these frames, maybe even perspectives, have a little bit of agency to them. They help. The reason any of this works is that it's not just "here's a passive molecule; good luck figuring out what this meant to your past self." Maybe these things have a little bit of activity, in terms of working to get themselves remapped. Maybe it's that two-directional thing again. I don't know; that's just some stuff we've been working on lately.

Well, I want to reply to a lot of this; this is really rich. I want to start with that idea of a bidirectional conformity: not only is the mind conforming to the world, the world is conforming to the mind. You might get tired of me doing this, but this is a Neoplatonic claim. This is the central idea behind what I call participatory knowing: it's not just a passive reception, it's a co-shaping, a mutual affordance, a coming together, a logos. I think that is deeply right, and I think it's at the core of what I try to get at when I talk about relevance. Relevance is a cognitive-psychological phenomenon that is exactly that. We aspectualize the world, but relevance isn't just objectively given: we don't just read it off, and we don't just project it onto an empty canvas. The world and us shape and coordinate each other so that we fit together, analogous to how niche construction works: there's activity on both ends, shaping on both ends. So I think that's deeply right. And what you just said about compression: in the 2012 paper we did on relevance realization, we talked about compression and particularization as the engine by which you get the mind to do something structurally the same as what evolution is doing. You get the variation, and then the compression. And this means that noise in the system is inherently valuable, as you indicated a few minutes ago. Machine learning is actually finally figuring this out: at very many stages you have to throw noise into the system to break it up, so it doesn't get locked into local minima and can explore many more environments than the one it was getting locked into. I think that is very important.
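The point about noise and local minima can be shown in a few lines. This sketch is a made-up one-dimensional landscape with a greedy search, my illustration rather than anything from the 2012 paper: without injected noise the search stays stuck in the basin it starts in; with noise it finds the deeper one.

```python
import random

def f(x):
    """A tilted double well: local minimum near x = +1, global near x = -1."""
    return (x * x - 1) ** 2 + 0.3 * x

def search(jump_noise: float, x=1.0, steps=5000, seed=2) -> float:
    """Greedy descent with noisy proposals. With jump_noise = 0 the search
    cannot leave the basin it starts in; with some noise it can jump out."""
    rng = random.Random(seed)
    for _ in range(steps):
        cand = x + rng.gauss(0.0, 0.05 + jump_noise)
        if f(cand) < f(x):  # greedy: only accept improvements
            x = cand
    return x

print(f"no noise:   ends near x = {search(0.0):+.2f}")  # stuck around +1
print(f"with noise: ends near x = {search(1.0):+.2f}")  # reaches basin near -1
```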
And I'm building towards an argument here, because I think that maps onto something human beings do that goes with your butterfly: L. A. Paul's work on transformative experience. Human beings go through these profound changes, and she runs the Gedankenexperiment of someone offering to turn you into a vampire, which is very much like your butterfly example. The problem is that you don't know what it's going to be like: what your perspectives are going to be, who you're going to be, what your preference structure and your traits will be. So you don't know whether you should do it or not, because you're ignorant, deeply ignorant. You can't do standard decision-theoretic inference your way through. And this is very interesting: she says, of course, that you face this when you decide to have a child, or to take up long-term education, or to get into a long-term romantic relationship, and so on. I think this is exactly right, and I think transformative experience is pervasive in our cognition. When you put that together with what we said a few minutes ago about noise, it means our model of rationality has to be fundamentally changed. And this is what Agnes Callard does well: I'm not very rational right now, and I'm aspiring to be more rational; I'm actually aspiring to go through a transformative experience. So this is central to being rational. Being rational is a normative demand that I become more rational than I am, and it's not just a quantitative "more"; it's qualitative, a transformative experience. So somehow these nonlinear, non-inferential processes are central to being a rational agent, because rationality is fundamentally a transformative experience. And this feeds back: that rationality also has to take account of this perspectival and participatory knowing. We're not representing things over there; as you're suggesting, Mike, we're participating, the world and us, participating together in the co-instantiation of important real relations.

And I think, therefore, that Bateson's paradox actually slams into the paradox of self-transcendence: if I become something other than I am, then it's not self-transcendence, because something other has come in; and if I just extend what I am, then it's not transcendence, it's just growth. That paradox is only a paradox if you have a static, single model of the self. But if you have a model of the self that is inherently collective and flowing, the way you're doing it, multiple mutually evolving cells, I think you put those together and you get through it. I don't think we are a self in any kind of monadic sense. This is what a lot of the therapy (all the parts work, the IFS, and a whole bunch of other stuff) points to: we are properly dialogical. We are dialogical within, we are dialogical without, and trying to find the sole thing that is the self is a mistaken category. And this becomes important when you look at the debates. I'm teaching a course on the self right now, and Gregg and I, with Christopher Mastropietro, did a series called The Elusive I. People will say the self isn't real, and here's how the arguments go: they'll admit that all this stuff we're talking about is going on, that it's all there, but that's not a self. Why? Because it doesn't give you something like a soul, a single monadic substance that is the unchanging bearer of properties. And then they say: therefore, it's not real. And I turn around and say: by that standard, nothing is real, because what science is showing us is that nothing is a substance in that sense. So all you're really saying is that the self is as real as everything else, or as unreal as everything else. And saying everything is unreal is a useless thing to say; I don't think that gets you anywhere or advances anything.
So what this self/no-self debate is ultimately pointing to, and I'm trying to show that it's deeply continuous with the biology you've been arguing, is that we're facing a fundamental transformation in what we understand the self to be (dialogical) and in what we understand rationality to be. I think those two things are profoundly important at a cultural level, and if you've agreed with the argument I've made, they ground out in deeper stuff in the biology, let alone the physics. I think that gives them powerful plausibility, because we're proposing a fundamental paradigm shift. Here's the final thing I'll say: I think that mutual transformation of the notions of self and rationality is crucial to getting out of the meaning crisis. As long as we remain in that Cartesian framework, we are locked in. We are locked into nominalism, we are locked into dualism, we are locked into antagonistic processing, and we are locked into many of the central drivers of the meaning crisis. Did you want to respond to that, Mike?

I have things, but please go ahead.

Yeah. Well, one of the things I would be looking for, and this is what John and I do in Transcendent Naturalism, is to consolidate certain kinds of messages that afford people ways of gripping the world that enable them to make sense of their lives and make meaning in their lives as ecological agents in a particular exploration of design space, finding that kind of participatory relation. There is a way to embed oneself in this agent-arena relation that, I believe, many wisdom traditions have identified as fundamentally core to one's sense of being present in the world. To me, one of the things your work is doing, and one of the things that so drew me to John's work, is that, as a way to share with people ways of being in the world, it points in a particular direction: in many ways, at the core, being in the world is a relationship to the world that emerges in this dynamic process. I think that comes out of both of your work, and it is a very, very important transformation for us to communicate to society and to embrace as we go through this. So that's what I keep coming back to: gripping these elements so as to embed our structure, our grammatical structure of relating to nature, to the world, and to the future, in a particular way is deeply important to me. I just wanted to make that point and resonate with it.

Yeah, I love all of that. I think you're absolutely right. And I think it's critical for people to realize that when we reimagine what the self is and take ourselves away from this notion of a monadic substance, it's different from saying, as you put it before, that everything is equally illusory, that there's nothing at that point. That's a deeply destabilizing idea for a lot of people, and I think that's where they think we're going. The example I use to help people think about it is this. It is true that we are patterns more than anything else.
But okay: so you've got a rat, and you train the rat to press a lever and get a reward. Now, if you zoom in to what's going on, you've got some cells that interacted with the lever and some cells that got the sugar of the reward. They're not the same cells; there is no single cell that had both experiences. So who owns the collective experience? It's the collective, which is something that was brought out excellently there. And there was somebody who literally said: I read your thing about this collective intelligence, what do I do now? And all I could say was: whatever amazing thing you were going to do before you read my paper, go do that. You can still do that; you can still do all of it. Because even though you're a set of patterns interacting in a particular way, you can become a better pattern. So commit to a bigger cognitive light cone, to helping others have a better embodiment, whatever it's going to be. It doesn't dissolve all that stuff; it just gives you a new window on it. And after all that is said and done, you've still got the opportunity and the responsibility of moving forward as that, and doing things. So, yeah.

I want to reply to that. I think that's right. I think getting it clear to people that we're not dissolving, we're revealing, or disclosing (disclosing as opposed to dissolving) is important, and that's what I was trying to argue for. And I agree with what you said: tell people to go back, and maybe use this to reinterpret, so they can recover what has been lost because of an inappropriate frame. With the Vervaeke Foundation's help, we set up ecologies of practices.
We have a practice called Dialectic into Dialogos that helps people get into mutually shared flow states of cognitive exploration, and people discover collective intelligence as something that is phenomenologically present and almost agentic in what's happening. They get the we-space that takes on a life of its own and leads people into each other, and everybody beyond each other, into something deeper and more profound. And people will say things like, I discovered a kind of intimacy I didn't know existed and I've always been looking for. And if that doesn't sound Platonic to you, I don't know what does; that's anamnesis through and through, right? And so I agree with what you said, but what I'm also suggesting, Mike, and we have to do this carefully and ethically, virtuously and with virtuosity, is that by paying attention to the wisdom traditions and the science, the cognitive science, the kind of science you're doing, we can reverse engineer practices that help people do the recovery and also the development of the cognitive light cone, a recovery of a lot of what is lost for people in the meaning crisis. There's so much more, but just to pick up on that sense: they say they had always been looking for this kind of intimacy, but they didn't realize that they were. That's a really interesting state. And we get this across the groups that come in. I'm not pretending it's a random sample; it's obviously self-selected, people are coming in because they have some orientation to my work, so I'm not claiming this is a scientific study, but it's not nothing either. The fact that many different groups of people, religious, non-religious, many different backgrounds, different places in the world, come together, and this is a reliable thing that happens, I think is indicating something. So yes, we can tell people: go back and try to recover, do the wonderful things you're trying to do, and don't try to dissect it away because of a Cartesian framework. But on the other hand, here's a bunch of new practices, or at least old practices that have been recovered, or reverse engineered, in which people can deeply recover a lot of the experience and the learning of what we're talking about here. So it goes from being something they may propositionally assert into being something they procedurally and perspectivally and participatorily realize, and I think that's an important thing to say as well.
When people ask, and I'm not saying you have that responsibility, I have chosen to take on that responsibility, and a lot of people with me, I'm not taking sole credit, but I think one of the things to say is: say what you say by all means, but also say, why don't you try ecologies of practices that are based on this, and see the positivity that comes out of being in these practices, see what you realize and recover in them. And I know Greg is doing something very similar. Greg is a powerful theorist, but he's also creating an ecology of practices, and his work and my work and the foundation's work are doing a lot together. And I just wanna acknowledge, I mean, obviously, there's great risk here: there's people turning into gurus, there's weird cult formations, there's exploitation, there's money pumping. You have to try to build in a lot of safeguards against this. But I'm proposing that we could reverse engineer a complex ecology of practices that could properly be understood as spiritual, in that it affords people transformative experiences in which they are recovering this deep connectedness, this intimacy; their learning, reality, and themselves are being deeply disclosed together, within and without and between each other; and you're getting the cultivation of a reorientation towards meaning, virtue, wisdom. I think this is also something we can say to people now. Yeah. Yeah, I think that's where you find the bridge from a lot of the is of the science to the ought of humanism, and a new opening for fusion and connection. One thing I wanted to ask you, Mike, and I certainly haven't tracked all of your stuff, but I know you focus on continuity, and I know the approach that you take, but I'm curious, and I just wanted to make sure I had this opportunity to ask you: when you think about the human condition and the human intellect, what is the thing, or the multiplicity of things, in the human intelligence structure that you identify, if anything, as being at the root of our explosion over the last half million years, dominating the planet, building technologies, giving rise to certain kinds of thought? Where do you see that? Do you think much about that particular kind of question? Have you reflected on that? I'd just love to get your thoughts since I have you here on the line. Yeah, well, let's see. It's a bit of a switch of topic, but I wanted to check in with you on it. Yeah, yeah, so I don't think I have anything brilliant to add over what a lot of smart people have said about the unique capacities of humans and why this is such a successful embodiment and all that. I can say a couple of things. First, someone, and I don't remember who it was, maybe Yuval Harari, I don't know, somebody said that the special thing humans have is that we're storytellers. And I think that's a compelling vision, except that I think all agents are storytellers, fundamentally, from the first bacterium that had to compress a very chaotic, noisy experience into a simple model of what the hell is going on and which of my effectors I can use to improve certain scenarios. You're now a storyteller. You are no longer Laplace's demon trying to track microstates. You have committed to a certain story of what effectors you have and what's going on. I think we're all storytellers. So I don't think it's that.
I think we crank it up to an amazing degree. And I'm sure language is an important part of it, in the sense that a tool that can compress complex brain states into a simple thing that can be passed on to somebody else for decompression is super powerful. And as much as I like to use various tools of cognitive and behavioral science in other places, I've not seen anything that suggests that language exists other than in brains. So I wouldn't claim that, although we don't know. I'm not saying it's impossible; I'm just saying we haven't seen anything like that. So I think language is key. I would say a couple of other things about humans. One weird thing about humans is that we have a cognitive light cone that's longer than our lifespan, which is a bit different. If you're a goldfish, all of your goals are likely achievable, right? You might have a 20-minute horizon of goals and you're probably gonna live 20 minutes, so most likely your goals are all achievable. Humans are unique in that we have many goals that are absolutely not achievable in our lifespan, and we know it. So think about what kind of unusual pressures or capabilities that unlocks, right? Having goals that you can commit to that you know are not achievable within your own lifespan, maybe that's something. And I guess the final thing I'll say, and this becomes very important because, with AI, people are now trying to define proof-of-humanity certificates and these kinds of things, is a couple of things about what a human is and isn't, in my humble opinion. The first thing to realize, and I have a diagram of this, but I'll try to pantomime it, is this: you've got your standard modern human in the middle, with a gentle glow about him, and all the philosophy is about the human. Going back above him is a very smooth gradation of evolutionary stages all the way back to a single-cell microbe. And when you say the human, well, which human? The human of today, the human of 100,000 years ago, the human of 300,000 years ago, right? And they say, well, this and that developed very fast. What's very fast? One generation? No, no, it takes many. Well, then what was going on in between, right? If you think that humans have responsibilities, that they can be good, where exactly does that start? Can you blame one of these hominid ancestors for what they did, or not, right? So you've got this continuous spectrum, okay. Down below, you've got the exact same thing on a developmental timescale. So again, which human? You used to be an unfertilized oocyte. It was a very slow and gradual process of how we got here. So which human are we talking about? And then, to widen this all out horizontally, you can imagine, as we already are doing and will do more with technology, you can step away and say, well, I can be modified biologically. I might get some tentacles, and I might live underwater someday, and I really would like to see in infrared. I mean, what's with these limited retinas, you know? So you can be modified biologically, and also technologically, right? I can have implants and all kinds of things. At some point, maybe 2% of my brain is an implant that's helping me out, but eventually it might be 58% of my brain that's some kind of construct. So you've got all of this.
And then, I mean, obviously science fiction has been on top of this for a hundred years, but a lot of people, especially those who talk about AI, are just now catching on to this idea that human is not a sharp category. And that raises the question: so what do we really mean? I tend to think about it this way: you're going to Mars for the next 30 years, and you get to take something with you. What do you take? What's important that you take? You don't want a Roomba. What are you really looking for, right? So, I want a human companion. What does that mean? Is it the DNA? Do you care about the DNA? I don't; a lot of people are super into the DNA, and if you change your DNA, my God, then you're no longer human. I don't care about DNA. Then people say, well, it's the standard body: once you've put wheels on and gotten tentacles and a propeller, you're no longer human. I don't care if you have all your standard parts that evolution happens to have given you, and you're subject to, you know, lower back pain and astigmatism and all this dumb stuff we ended up evolving. I don't think that's what we mean by human. So what do we mean? I think it's really interesting to think about what's essential about it. And I think what we mean when we say human is a certain impedance match between us with respect to the size of your cognitive light cone of compassion: literally, what is the size of your goals, and what is the radius of compassion that you can muster? And actually, the mismatch can be in either direction. If the cognitive light cone is tiny, we're not gonna have much of a relationship, because you can't care about the same level of thing. But conversely, if you've got a galactic-scale mind, we may not be able to have the normal human interaction either. So I think that's what we're talking about: the size of your goals and the things you can care about, in the compassion sense of the act, not the affect but the practical pursuit of goals. That's what I think. Lovely. I really appreciate that. Yeah, go ahead. I wanna respond to that because I think it's important. I agree with Mike. I think the discussion around human is actually an equivocation: an equivocation between some sort of biological notion, which Mike can just devastate, as he just did, and another notion, which is a moral-legal notion, which is a person. And there we've got enough science fiction that lets us know that you don't have to be human to be a person. And then I think we try to find some anatomical locus of personhood within a biological humanity, and that is just a doomed project from the beginning that will not work. And I think a lot of the tech people and the AI people are bumping into this, and we've said this multiple times, with the old categories and old schemas, and they're saying often equivocal and sloppy things about it. And the move you made, and of course you brought this in, Mike, is that you brought in the notion of compassion. This is ultimately a Kantian, but even more properly a Hegelian move. It's like, well, persons are beings that can recognize each other as having moral responsibility and moral obligations.
And I give that to you as you give that to me. Do unto others as you would have them do unto you, the golden rule; I'm compressing a huge amount of much more sophisticated argument. But this notion of reciprocal recognition of our responsibilities and our authority means I can obligate you: I can say, don't do that, because that's immoral. And I don't have to appeal to your desires. I don't have to appeal to your projects. I can just say, don't do that, that's immoral. And if you're a moral agent, you're at least responsible to that. You don't have to agree with me, but you're responsible to that. And Hegel said this is when we become geist-like. We become spiritual beings when we become capable of this reciprocal recognition of moral authority and moral responsibility, such that we are no longer driven just by our desires; we can be driven by what we are obligated to do. And lest people think I'm just talking about ethics: reason is that kind of obligation thing. You should conclude this because of that. And I can say that to you regardless of your desires. In fact, we criticize people for motivated reasoning, for deviating from what they should conclude because of their desires, et cetera. Sorry for that noise, they're building a battleship next door. It's very annoying. So I think that compassion, if you understand it more broadly as this reciprocal recognition of normative responsibility and normative authority, is what we're talking about when we're talking about personhood. And notice we do that even with human beings. We do this weird thing: we don't hold two-year-olds to our moral obligations. And we say, well, they're persons. Well, they are and they aren't. They're in this nebulous status. They're persons in that we have moral obligations to them, because by undertaking those moral obligations we will actually turn them into persons. But we don't let two-year-olds get married. We don't let them vote. We don't let them bear arms. We don't let them drive cars. We can hold them in a location, effectively kidnapping them; we can force them to go where we want. Many of the standards of personhood, we don't allow them. And so I think what needs to be done is a clean separation of this discussion from human, which can mean some kind of psychosocial, biological entity. And I agree totally with you, Mike: trying to pin that down is a fool's errand. And I think the reason people are trying to pin it down is that they're trying to find a place for personhood. And here, I know you don't like it, but here I think that is a category mistake. Personhood is different from, and not locatable in, a psychosocial biological entity. It's about this capacity for mutual recognition, reciprocal recognition. Yeah, I don't disagree with that. I think that's exactly right. I would just say that it's a matter of degree, right? That's all I'm saying. So, for example, take the legal system: we've arbitrarily decided that 18 means adult, right? I mean, it's total nonsense. Nothing happens on your 18th birthday. However, at least in the US, if you wanna rent a car, you gotta be 25. Why 25? They didn't do what the legal system did, which is just to kind of guess and set it. They have actuarial data, and they realized that 25 is when your brain's mature enough that you can be trusted with a car. That's empirical.
You know, that comes from understanding that it is a continuum and that certain things develop faster than other things. And it's way beyond my pay grade to try to figure out a legal system that will work in a future of hybrids and all this stuff; I don't know. But just as a step, accept that it's not a yes-or-no thing. The Twinkie defense is crazy, but serotonin really does affect how neurons fire. I mean, it's a spectrum, and we need to figure this out. That's, I think, part of it. So I agree with you. I can just think of too many in-between cases, which I think will all show up. Right now you've got people that we say are non-neurotypical; I mean, wait till you see what's coming, right? Somebody's got a third hemisphere grafted on, so now they've got extra IQ points, and you say, well, the rest of us wouldn't have been held responsible, but you really should have known what you were doing, because you've got that third hemisphere. These kinds of things are eventually gonna show up, and we're gonna have to figure it out. And I agree with you. That's why I brought up the example of children. We don't have a definitive point where they become persons. And in fact, we have this weird capacity; we can even do it to some degree with the raising of dogs: if we treat things in the right way as persons, they start to approximate personhood. And of course, people have seriously, and I don't mean just sloppily, reflected on whether psychopaths, people who seem to be amoral, who seem to be blind to moral normativity, are properly persons, precisely because they lack the ability to undertake exactly that reciprocal recognition. So, again, I agree with you. I wasn't proposing a hard dividing line, but I was proposing that there's confusion around personhood and humanity. I think calling somebody a human being should be largely a psychobiological designation. Calling somebody a person brings in a whole bunch of other criteria. Those criteria are probably gonna shift; I don't think they're finally definitive, because I don't think anything is intrinsically or inherently relevant. And that goes back to your butterfly again, right? But I do think there are mistakes happening around this, and that's what I'm trying to point to. Yeah, yeah, absolutely. Gentlemen, we've reached the time that we agreed, prior to hitting record, we would come to a stop. So if you wanna wind it down in the next couple of minutes, maybe some closing thoughts, and then we'll wrap this one. And I'll say right now that if you wanna come back to this show, or, John and Greg, if you wanna take this onto Transcendent Naturalism and continue the conversation, the option is yours. But yeah, go ahead and maybe make some final thoughts. John? Sure, well, I'll offer some. First off, it's been a joy. Your continuum of intelligence, Michael, is a beautiful thing to play with. It's an enlightening thing. And I deeply appreciate both the way you think about it, the way you have researched it, and the way you've articulated it here.
So for me, again, I keep coming back to how UTOK, the Unified Theory of Knowledge, affords us a potentially new grip, both in relationship to the world and in relationship to ourselves. It affords this deep ontological continuity and the potential for enormous change going forward. It embeds our understanding of categories in a structural, relational patterning, close to the process theology of Whitehead, but also, without getting into the incredible weeds, gives a basic optimal gripping of, hey, energy, matter, life, mind, culture: there's a continuity and discontinuity that can frame us and then place us as agents in the arena in a particular way that orients us more towards meaning in life. And of course, that's John's work. So to then get together and jam and riff around that, and have that music come alive here, has been a real pleasure. I deeply enjoyed it. Yeah, likewise. So thank you, Justin, for putting this together. And I think the work that you guys do is super important, and I'm extremely happy that some of the biology and the computer science that we do can be connected with these issues of the personal and interpersonal, these things that are very important for people. So thank you for doing that. I think that's really important. Yes, thank you, Justin, for putting this together. Great pleasure. Always a great pleasure to interact with you, Greg. And Mike, I think this is the third time we've spoken, and I'm continually amazed by the deep convergences between our work. We started in very different places, and in some ways it looks like we're tackling very different problems, but when you push on them, they seem to converge in really important and mutually supporting ways. And I find that very powerfully encouraging about the plausibility of the overall framework. So I'm deeply grateful for your work, and it's always a pleasure. I hope that you and I talk again. We sort of share students here and there, but it would be nice if you and I talked a little more regularly. So I'm just opening the invitation to that. Yeah, anytime, anytime, absolutely. Yeah, thank you all so much. Great fun, thank you. Yeah, absolutely. I'll say goodbye to you gentlemen after we stop recording, but I also wanna acknowledge the YouTube audience. Thank you guys. I was jokingly thinking of this as a trialectic, a trialogos: we had three incredible minds here. So that was a frame I had set up, and speaking of frames, I really appreciate that you guys shed some light on the dimensionality of the very large frame we're working with, and also perhaps some frames getting broken on the smaller scale of beingness, selfness, and the fluidity, the continuum of selfness and beingness, the psychosocial dynamic of all of that. So thank you guys so much. This was everything and more that I was aspiring to when I imagined this get-together. So thank you very much. I appreciate your hard work. And again, I am a loyal student, maybe a little shallow in depth, but I aspire nonetheless. Like I said, I'll say goodbye to you all off camera. Bye-bye, YouTube audience. Thank you. Thank you. It is my conviction that philosophical argument alone is not enough to get people to turn to confronting the meaning crisis and the cultivation of wisdom. We need to be seduced into the love of wisdom through beauty.
That beauty is not necessarily the beauty of making things pleasant or comfortable. It's the beauty that draws us in, intoxicates us, and attracts us to something that we know we need to confront and encounter. This is one way of making sure that our philosophical task does not degenerate into mere philosophical discourse. We need to bridge between the conceptual and the non-conceptual, and the metaphors, the symbols, the themes, the narrative structure, all of these other dynamics of meaning-making will be there to help bridge between the more conceptual aspects of a philosophical reflection on meaning and a properly, as I say, embodied aspiration to becoming more wise. I will bring in pivotal moments that I think in some sense epitomize and portend the book as a whole, the work as a whole, and try to articulate some of the symbolic poiesis that's going on there: the attempt to make sense, and to do that not just conceptually but symbolically, in a profound sense of that; what kind of impact the author is trying to convey, what the author is suffering, what the author is undergoing, and how much that transformation of the author is the message, in addition to the semantic content of the text itself. I look forward to seeing you. We will journey together in the literature of the meaning crisis. First class will be April the 29th at 10 a.m. Eastern time. All the information you need for joining the course will be found in the notes to this video. Thank you.