https://youtubetranscript.com/?v=TxfBR2jkH2E

Welcome back, everyone, to The Elusive I: The Nature and Function of the Self. I'm joined by my good friends and co-dialoguers — I'm going to make that a new noun — Gregg Henriques and Christopher Mastropietro. Welcome, guys. It's great to be here again. Thank you. Amen. Again, for those of you who are joining us, we're trying to do two things here at the same time. We're trying to present an argument in a shared fashion, but we're also trying to advocate for and exemplify a new way of doing theoretical work, namely doing it in a dialogical fashion, which is much more dynamic and self-correcting and open to emergent insight, rather than in a monological fashion, which tends to eschew all of those elements in order to pursue, I would say, a quixotic pursuit of the false idol of monological certainty, which I think we should give up. At some point we'll talk about things around that later, especially, I suppose, when we move towards Gregg's end of things and the justification system. Amen. And when we start talking about the emergence of personhood as an essential feature of selfhood, which I think is a word. Okay, so where did we get to last time? We got to the point where we were problematizing the self. One of the problematizations that Hood brings up, of course, is that we can see the self emerging. We can see that human infants, as far as we can tell, aren't born with selves, that selves emerge and come into existence, and we can actually trace out the developmental process. We talked about Vygotskian processes, et cetera. I won't repeat all of that argument, but what that led to was the idea that the self is, in an important sense, socially constructed. That then brought up, I think very reasonably on Hood's behalf, the worry that if the self is only socially constructed, it might only exist in a socially attributed fashion. Now, that's the distinction made famous by John Searle in his 1992 book The Rediscovery of the Mind, where he makes a distinction between intrinsic properties and attributed properties. Intrinsic properties are properties that would exist in the thing independent of human existence, or the existence of any sentient or sapient being. So we take it that something like the mass of a hunk of gold, its atomic structure, for example, is intrinsic, whereas attributed things only exist because human beings agree together to treat something in a certain way. The classic non-controversial example is money, right? Money doesn't intrinsically exist. You can have some pieces of paper from the American Confederacy, but it's not money, because there's no group of people that will agree to treat it as money. It's only money if we all agree to treat it that way, and if we all stop treating it that way, it ceases to exist as money. So the question comes up: maybe the self only exists because we all act towards and treat each other as if we have selves. There's an idea very similar to this in Dan Dennett's work, the idea that selves are just centers of narrative gravity. We tell all these stories, and then we attribute to each other centers at which these stories are meeting and intersecting, and that's all that a self really is. It's a useful fiction, just like in engineering or physics we have this fiction of the center of gravity. And so that brings up a point made by Hood, which is that when we say the self is an illusion, we're not saying it doesn't exist at all, typically. What we're saying is it doesn't exist in the way we think it exists.
A plausible way of understanding that, and I think pretty much implied by the earlier parts, is that the self only exists in an attributed fashion; it doesn't intrinsically exist. And so that's about where we got to. I think, Chris, you wanted to say a few things at this point? Yeah, I was thinking about this a little bit, and it's interesting, because I think a lot of people, when faced with the prospect of the self being socially constructed, certainly socially attributed, feel very threatened by that idea. And I think there are a lot of very deep reasons why that idea is very threatening, but I think there are a couple maybe worth noting. And I don't know if this amounts to a theoretical challenge necessarily, but it certainly amounts to an existential challenge. Maybe we can parse out exactly where one meets the other. One is that it introduces a problem of arbitrariness. And arbitrariness is a very, very threatening thing to the espousal of oneself, right? So if we think about it: given a different set of social constraints, I could just as easily be another kind of self, or just as easily be no self at all. If there's no kind of immutable ontology by which I necessarily become myself, then the idea of myself becomes incredibly dissolute, right? And that sense of being dissolute, I think, has all kinds of pathological implications for how I then experience myself. It becomes a real problem, right? The other thing is this. You talked about the idea of money, and just as an analogy: when you're embedded, when you're socially embedded, money has the attributed semiotic value of carrying a certain kind of currency. But say you retreat out to the desert. What happens over the course of time is that your money, if you don't have any kind of sustained social contact such that the money remains relevant, depreciates in value. It erodes eventually to nothing, right? It ceases to signify. Its semiotic purchase basically diminishes. But when we think about the self, especially when we think about the espousal of oneself, right? In a kind of affirmative sense. The "I am I" kind of sense. Right, right. So the worry is that if the self is only socially attributed, it can't survive the negation of its socially attributed conditions; it can't define itself negatively against those conditions. And when we think about the sort of intuitive existential experience of being a self, especially as we grow — becoming oneself, if I can put it that way, put scare quotes around that — it seems proverbially to involve defining oneself negatively against social attribution. We go out into the desert, as it were, precisely so that we loosen ourselves from the currency and its particular mode of valuation, and removing ourselves from the conditions of social attribution becomes a very, very important way of confronting ourselves. So this is all to say there's something implicit in the experience of being oneself and confronting oneself, or the idea of oneself across time, that seems to involve the negation of those social attributions. I don't know exactly what theoretical implications that has. I'm sure there are arguments to be made about the fact that the internalization of the construct perhaps persists even in the absence of its founding conditions, perhaps. But there is something very intuitively threatening, and I think those are a couple of reasons why. So I just thought I'd sort of throw that into the middle and see what happens. I think that's important.
I'll try and stay at an existential level, but try and bring out some of the theoretical import. You do have traditions, for example in Buddhism, in which that confrontation with oneself and the challenging of social attribution goes on to challenge self-attribution and self-affirmation, such that people come to, or claim to come to — and this also happens within the Christian neoplatonic tradition — an experience of no-self as being somehow a true or more liberating way of defining themselves. And now I'm using self in the recursive sense, not in the entic sense, so don't try and trip me up on that. Right. And we all have, of course, the experience of this. These are not, you know, these are not wrong; I'm trying to open it up. Right. And of course, we all have, or at least it's universally accessible to us, experiences of the flow state, in which that sense of self-affirmation explicitly drops away, and nevertheless the sense of agency is significantly increased. And so, again, we see that agency is not necessarily a form of self-affirmation. Now, the idea that this is threatening to us because the self is defined in that way — I don't want to deny that. I think that's a very pertinent existential point, and Gregg is nodding; I think it's clearly a clinical and therapeutic fact we need to take into account in any account of the self. So the self is not a simple kind of, you know, mirage that you can just dispel easily. But on the other hand, we have established traditions — and we shouldn't question their phenomenological reports — where that ability to challenge social attribution is taken further and deeper into the psyche, until the egocentric self-affirmation also, it is claimed, disappears. And what is disclosed is a pure kind of agency, a pure kind of freedom, because freedom without agency makes no sense to me. So a pure kind of agency, independent. And then we can ask again: do we have to decide between these? And by the way, this is taken up in an anthology — Evan Thompson is one of the editors — I believe it's called Self, No Self? And this has become a central sort of debate. So I'm not trying to decide this issue right now. I'm trying to deepen both sides and say: I think you made an excellent point, the kind of point that I think is central to why people want to assert the self. And then I want to say, but we can deepen the other side correspondingly, and thereby deepen the debate and the dispute. I think this is really very much at the crux of the Elusive I point in many ways. And I think there really is a threat, a rightful one, that thoughtful people need to embrace, both from the social constructivist side. So what is my name? My name, right — that's not intrinsic. My name would be pretty close to fully socially attributed. Right. Yeah. OK. But, gosh, I feel like, you know, it's like, well, if I'm going to pull that thread, then wait a minute, I'm a professor, and wait a minute, I'm an American. Actually, when you really swallow the socially constructed pill, I mean, I think you have to realize that there's an enormous amount of what we experience as constituted, embedded in this social, person-culture plane of existence in a particular way, that we have to wrestle with.
And at the same time, there also is this thing that's beyond that, or intrinsic, or grounded — the thing that remains when all of that fades away in our transcendent experiences, or in any number of frames; even in the Buddhist sense, there's still some observing system. Right. So I think we're confused about this. I think it's very, very powerful, and I think this is for me very much the crux of this exploration. So I'm really glad you brought that up. Yeah, I think it is. I think that's well said, Gregg, and I think that is exactly why it's entitled The Elusive I. And again, I want to make it clear that I'm not trying to argue for a resolution. I'm just trying to point out how deeply this debate — and, as Gregg has rightfully pointed out, this potential confusion — runs. And that's why I think a minimal conclusion we should come to is that we can't rely upon our common-sense folk psychology and folk phenomenology, because it just does not have the resources to address these issues that need to be addressed. Now, I want to pick up on — sorry, Chris, you wanted to say some more? No, I was just going to say, yeah, I mean, I agree. We can't just have recourse to folk phenomenology and expect it to hold water against these theoretical arguments all the time. But because it mediates so much of the way that these things are signified to us, its relevance can't be overstated either. And I think what's interesting is what you say, John, about the fact that there are many traditions that rely on the negation of social attribution as a way of attaining to a state of no-self. But then there's also, oftentimes in those same traditions, a rite of return. Yes, yes. To re-embed into the socially attributed conditions and somehow — and I mean this phenomenologically — to somehow make them intrinsic. Yeah. Right. Right. Or, you know, if I put a Kierkegaardian spin on it, I might say, sort of, to make a necessity out of the possibility of one's social self, or something like that. So there's a dialectical relationship between those things, too, right — between the dissolution and the resolution — which I hope we pick up as we get into this more. And let me give you some added — I don't know what I want to say, ammunition? — because you're not firing at anyone, but some added planks for your platform or something; I'm trying to choose a much less violent metaphor. Which is: you have the central Buddhist paradox, which I think you're alluding to very well. You return, and you're supposed to have compassion for all beings while realizing that there is no ātman to any being — all things have no self. Well, then what do you have compassion for? But love, as Frankfurt says, is a volitional necessity. Why would you have this relationship? Well, I want to remove suffering — but remove suffering from what? And so I think you're right: there's a dialectical, paradoxical nature. But I want to be fair. I think the paradox insinuates itself from both directions. The substantial self gets insinuated with the paradox, and so does the non-self or the no-self. Which is precisely why I want to try another move — we're going to get to it in a minute. What I want to propose is: let's try and maybe reverse it. Instead of trying to figure out the relationship between self and agency, let's take it that agency, in some sense, has to intrinsically exist.
And I'm going to buttress that with something in a second, and then see what we can build from agency in terms of some of the key dimensions of selfhood. This is a really hopeful move from my vantage point, because what we're talking about is building an ontology from a scientific, naturalistic view that's going to bridge into our folk ontology and allow for a much more restorative connection. Yeah, I think so. But I do think it is going to require — well, let me make the move, and then I'll say what it's going to require. So one move that people have made, and we've already skirted around it, especially by the invocation of Buddhism or some of the, you know, higher states of consciousness within the neoplatonic mystical traditions, etc. — this is a move made by somebody who works within psychology but is very oriented towards mystical traditions, and that's the work of Arthur Deikman. He goes way back to the 60s; he's one of the first people to do, you know, some of the psychology of mystical experiences, meditation, etc. And as I mentioned, he wrote the book The Observing Self, which is a version of something we've already talked about: the no-thing "I" and the thing-like "me" and their relationship together. Now, Deikman basically confronts the intrinsic-versus-attributed issue by relying on something that Searle relies on, which is that consciousness has to intrinsically exist, because consciousness is the entity by which attribution occurs. So what that means is consciousness can't exist by attribution, because attribution depends on consciousness. You would just have an infinite regress if you tried to say that consciousness was socially constructed and only existed in a socially attributed fashion — because what is doing the attribution? Right. And so that is a very powerful move. And it says, well, there might be something in consciousness that intrinsically exists. And it might be, for example, that in Buddhism you're slipping from a sense of intrinsic identity attached to social constructions into a sense of intrinsic identity attached to the intrinsic existence of consciousness. That's kind of the move that Deikman makes. He's trying to say that the I equals consciousness. And there's some value to that. I would say that it's not consciousness alone that does attribution; it's conscious cognitive agency that does attribution. You have to have something that's an agent, in acting. So let's take it that some intersection of conscious agency intrinsically exists. And that's, I think, pretty non-controversial, because it's hard to deny that conscious agency intrinsically exists without getting an infinite regress. Now, why also do I want to include agency? I want to include agency because trying to make this work on consciousness alone seems too weak. Why? Because the self isn't the I, right? The self seems to be the relationship between the I and the me. I want to read a quote, sort of a consensus quote from — where is it? Yeah, this is from Leary and Tangney, in 2016, in their consensus article on the psychology of the self. They said, quote, "At its root, then, we think it is useful to regard the self as the set of psychological mechanisms or processes that allow organisms" — and notice that they are already assuming the intrinsic existence of agents, because that's what organisms are — "that allow organisms to think consciously about themselves."
"The self," they continue, "is a mental capacity that allows an animal to take itself as the object of its own attention and to think consciously about itself." So they are arguing that — and we said this before — within all the conceptual morass, there seems to be this constant convergence on the I-me relationship. And why am I bringing that up here? It seems to me that consciousness per se is inadequate for grounding the self, because the I-me relationship, which is central, requires self-consciousness, which is not the same thing as consciousness. So that's, I think, an important distinction. So let's be very careful here, because there are three things we can confuse. There's consciousness, which, I think, a cat has, for example, where consciousness is somehow a subjective awareness and the ability of that subjective awareness to direct attention on the world. And if you want more on consciousness, take a look at the series that Gregg and I did together, Untangling the World Knot of Consciousness. So I think a cat can have consciousness, but I doubt whether a cat has self-consciousness. I doubt whether a cat steps back and goes, what kind of thing am I? I'm a cat, or I'm the cat, you know, I'm Aslan the cat, or something like that. So that's not happening. In fact, we have good evidence that linguistic human beings, three-year-olds, even up close to four-year-olds, are not yet capable of introspection. They are incapable of metacognitive report. They can't yet step back and do that transparency-to-opacity shift and look at their own cognition. We can come back to whether or not they have selves; that'll be an interesting question we can talk about. And so we have to distinguish, therefore, between consciousness and self-consciousness. So I think trying to identify the self with consciousness per se is inadequate. Now we come to self-consciousness. And given my example, distinguishing between the cat and the adult human being, or the three-year-old and the adult human being, we also have to make a finer-grained distinction, because there's a kind of metacognition that is available in any cognitive agent. Let me be very clear about this. This goes to the work that Proust did, and I believe it's her book, The Philosophy of Metacognition. Any agent has to be able to detect the consequences of its action and correct and direct its behavior in order to achieve its desired goals amongst the consequences of its actions. Because, look, everything behaves. This makes noise when I hit it. If I let it go, it will drop. And Gregg can say a lot about behavior, and we'll let him at some point. But an agent can detect the consequences of its behavior and adjust accordingly. So any agent, even the paramecium, has to be able to detect its own errors. Every agent qua agent, as an agent, has to be capable of self-correction. That means that every agent is in some sense monitoring its own cognition. But this is a purely procedural sense of monitoring. It's not any sense that gives rise to any self-narrative, any self-assertion, any self-espousal, anything like that. This is when the dog realizes it's made a mistake and it backs up. That is not the same kind of self-consciousness, the same kind of metacognition, as the dog going, gee, I wonder if my life's worth living, I wonder what kind of being I am. It's not that kind, right? So again, there are two meanings of self-consciousness.
There's a purely recursive, procedural metacognition that all agents, all organisms possess. And then there's consciousness of a self, which is the ability to take oneself as an entity and an object of awareness. That's the self-as-an-entity version. So I wanted to take time to really pull these apart, because people slide between these all the time and it adds to the confusion, rather than helping to clarify what it is we're talking about. Because we want to be able to distinguish the kind of purely procedural metacognition in the paramecium from what you and I have — both of you, and even the people watching. "I've got more self than a paramecium, John." Yeah, we have something different. So we have to make a distinction between consciousness and self-consciousness, and then a distinction between purely recursive, purely procedural self-consciousness and entic self-consciousness, in which we have an I taking a me as the object which it knows, has knowledge of, and about which it can construct narratives, make espousals, do identification, etc. Is that okay? Yeah, yes, exactly. Second-order thinking. So, yeah. I'll just finish my point, Gregg, and then you can say something. So that's a long way of saying I think the attempt to identify the I with consciousness is a mistake, because it has to be the I in relationship to the me. And that I-in-relationship-to-me is not a purely procedural relation; it is, in some sense, a different kind of entic relation. Yes. One of the questions I have — and maybe we don't need to get to it now, but I'll just throw it out there — is about the levels of self-modeling and differentiated layers of self-modeling, perhaps. So, for example, if I'm thinking about the evolution of the complexity of animals: as an animal gets in relationship to others, before there's any talking, I'm going to say, hey, there begins to be some really good self-modeling. Is that self-consciousness? So I guess the thing is, if a self-modeling entity is there, is that self-conscious? And we can talk about that or sort that out. And the other thing I want to layer on top of that as well: then you get propositional, explicit, self-reflective dynamics, which is then going to be a layer of self. This is a great opportunity. Well, go ahead, Chris. I was just going to say I was thinking along the same lines, Gregg. I was thinking, how does the distinction — that sort of blurred spectrum between metacognition and the second kind of second-order self-consciousness that John's talking about — how does that actually map onto the typology of the different kinds of knowing, from participatory to propositional? It might actually be very helpful to map those together. So anyway, yeah, I think we should, and I'm going to try and do that in terms of a unifying idea of recursive relevance realization. So that's part of the argument I'm going to make. But I want to show why that emerges out of what we're trying to do here. I want to directly answer Gregg's question here, at least partially, because we're going to come back; we're going to go back to modeling and mutual modeling and all of that in connection to the self. But there's current work going on — I believe the name of one of the people, you can search on YouTube, I think there are some videos — by a person by the name of Hod Lipson.
But he's only representative of a general thing that's going on. So it used to be that you were trying to get, like, a robot to learn: you gave it a neural network and you gave it some learning algorithms, and you tried to get it — for example, here's the arm and it's going to pick things up — to learn: pick up this thing with this kind of grip, pick up that thing with that kind of grip, move it around. And that takes a wickedly long time. Now, instead, what he figured out is that you come in and you let the arm just flail. Right. And what it does is, it's not trying to learn how to pick things up. It's actually making a model of itself — not a picture; we've got to be really careful here. What it's doing is learning all these complex predictive contingencies about itself: if this happens, then these things happen. And so what it does is it learns a very complex model of itself. And then you give it the task of learning how to pick things up, and — there's a pun here — it picks up the picking up. It learns that much, much faster. And now he slips, in just the way that we're concerned with here: he calls them self-aware robots. And I'm wondering, is that self-awareness? I don't think that's self-awareness. I think there's an insufficiency of a lot of the properties of awareness, especially where it overlaps with consciousness and attention. But it's directly germane here. We can see the temptation to slide from self-modeling into self-awareness. But we can also see the value, the functional value, of self-modeling, even at this purely procedural level. So if a system is going to become more intelligent — and this is going to be part of the argument we take up later — it needs a modeling of itself. And as Gregg has said, it's going to be increasingly indispensable, and it's going to become increasingly complex, because one of the things the self — sorry, the self-model, forgive me for that mistake — has to do is this. If this goes from being an arm that can just move around and find things to also having a camera that looks at things, the system has to model not only itself; it has to model when its self actually obscures its ability to perceive. It has to learn to discount. And you say, well, where does that come in? Well, here's an obvious example. I'm blinking and I'm saccading all the time. And while I'm blinking and saccading, I'm technically blind, because there's no information traveling along my optic nerve. But the world isn't flashing on and off for me, because my brain is modeling itself to such a degree that it can discount that interference, so it doesn't distort my intelligent awareness of the world. What I'm trying to do is build a case here that self-modeling is going to be intrinsic to sophisticated agency. And so to answer your question, Chris, we can start with a purely procedural kind of self-detection in that metacognition, and then, as the organism pursues goals that are more displaced in time and space — and this is Gregg's point — as we start requiring the capacity to learn, that self-detection and that minimal, procedural self-correction and metacognition are going to start to shade into more complex self-modeling.
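To make the flailing-then-picking-up idea concrete, here is a minimal sketch, in Python, of learning a self-model from random motor babbling and then reusing it for a reaching task. The two-joint arm, the nearest-neighbour predictor, and all of the numbers are illustrative assumptions for this discussion, not Lipson's actual method.

# Minimal sketch: a "self-model" learned from motor babbling (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
L1, L2 = 1.0, 0.8  # link lengths: the robot's actual "body", unknown to the learner

def true_arm(angles):
    """Ground-truth forward kinematics: joint angles -> hand position (x, y)."""
    a1, a2 = angles
    return np.array([L1 * np.cos(a1) + L2 * np.cos(a1 + a2),
                     L1 * np.sin(a1) + L2 * np.sin(a1 + a2)])

# 1. "Flailing": issue random motor commands and record what the body does.
babble_angles = rng.uniform(-np.pi, np.pi, size=(2000, 2))
babble_hands = np.array([true_arm(a) for a in babble_angles])

def self_model_predict(angles):
    """Learned self-model: predict the outcome of a command by recalling
    the outcome of the nearest command tried during babbling."""
    nearest = np.argmin(np.linalg.norm(babble_angles - angles, axis=1))
    return babble_hands[nearest]

# 2. Reuse the self-model for a new task: reach a target by searching candidate
#    commands in the model first, instead of re-learning on the body itself.
def reach(target, n_candidates=500):
    candidates = rng.uniform(-np.pi, np.pi, size=(n_candidates, 2))
    errors = [np.linalg.norm(self_model_predict(c) - target) for c in candidates]
    return candidates[int(np.argmin(errors))]

target = np.array([1.2, 0.6])
command = reach(target)
print("model-chosen command:", command, "actual hand position:", true_arm(command))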
And then at some point that self-modeling is going to have to deal with novel, ill-defined, complex situations, and we're going to start having self-modeling that is deploying consciousness. And so I think that's part of the continuum we're talking about here. That's about as much as I can do right now. No, that's good. That makes sense. And isn't — I'm thinking — isn't part of the procedural self-modeling that you're talking about really the function of play? Or part of the function? Yeah, that's completely apropos. When you watch these robot arms flailing around, you know what they look like? They look like infants. Yeah, they look like infants flailing their legs and their arms and distorting their faces and doing all this stuff to try and create a self-model. Yeah, right. Very much like kids spend a good part of childhood just tumbling over each other. Right. Yeah. It's not: the world's a buzzing mass of confusion and I'm a buzzing mass of confusion. Once I home in on it, then I can factor it out. So that's exactly right. And you can see why eventually that self-detection is going to start to become something like the capacity for the transparency-to-opacity shift. And then that makes possible the I-me relationship — at least that's what I'm going to argue, if that's OK. All right. So I'm not going to repeat arguments that we have in depth elsewhere, but I want to start trying to show the deep interconnections between agency and recursive relevance realization. I used to call it relevance realization, and there's a sense in which you can talk about it at that level. But in the work I've done with Gregg, especially in and on Untangling the World Knot of Consciousness, he's correct: we should call it recursive relevance realization, because the recursivity is an important aspect of its functionality. And so I think that's important. So I'm not going to repeat these arguments in full; I'm going to give a synoptic gist of them and point people to where they can go to look at the arguments in more depth. One of the arguments I'll have to do a little bit more deeply here. But one of the core arguments, published elsewhere — I'll put links in with this video — is that our capacity for general intelligence, our capacity to be a general problem solver, is crucially dependent on our ability to do recursive relevance realization. And what do I mean by that? That at the core of your intelligence is this ability: out of all of the information available to you, which is combinatorially explosive, you can zero in on the relevant information; out of all the potential sequences of actions and options for action available to you, which is also combinatorially explosive, you can zero in on the relevant sequence of actions; out of all the information in your long-term memory, you can zero in on the relevant information and connect it in the relevant way — the way that makes it relevant to what you choose to pay attention to and how you choose to act. And so the idea is that the core of intelligence is relevance realization. And then the basic idea there is that we need to explain relevance realization in a non-homuncular fashion — the idea being that many of the abilities we think we could use to explain how relevance realization occurs, like concepts and categories and judgments of similarity, actually can't explain relevance realization, because they ultimately presuppose it.
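Backing up a step: as a quick illustration of what "combinatorially explosive" means here, a back-of-the-envelope calculation in Python, where the branching factor, the planning horizon, and the size of memory are all arbitrary assumptions.

# Illustrative arithmetic only: even modest numbers make exhaustive search hopeless.
options_per_step = 10              # assumed number of action options at each step
steps = 20                         # assumed planning horizon
print(options_per_step ** steps)   # 10**20 possible action sequences to consider

items_in_memory = 300              # assumed pieces of information in long-term memory
print(2 ** items_in_memory)        # number of possible subsets you could try to connect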
Now, it's always recursive relevance realization — well, at least in part, and we'll talk about this more later, especially when Gregg starts to take us through layers, ontological levels. But part of why it is recursive relevance realization is that self-correction has to be built into the very mechanism and process of relevance realization. And you know what that self-correction feels like. It's the aha moment of insight. It's like, oh, I thought this was relevant, and it turns out it's not — it's irrelevant; this is relevant. Right. And that's what the moment of insight is. And that's, again, at that procedural, self-correcting level. So the basic question is: how do we give a non-homuncular account of recursive relevance realization? The idea is that relevance realization is a dynamical system. Briefly, what's a dynamical system? A dynamical system is any system in which there's at least one feedback cycle — often many, in recursive layers — that's regulated. It's regulated by a set of enabling constraints that open up possibilities for the system and selective constraints that limit the possibilities for the system. So let me give you an example, and it's an example I need in order to talk about relevance realization, which is evolution. So what does evolution do? Well, there's variation within the population, between individuals, that opens up the options for design, and then there's scarcity of resources, which puts on selective pressure — that's why it's called natural selection — and it kills off most of those options, so only some of them survive. And those ones that survive go into a feedback cycle: namely, they reproduce and produce the next generation. So here's the feedback cycle: reproduction; scarcity limiting the options; variation opening the options. So the options are opened by variation and then they're slammed down by selection. And the cycle doesn't just revolve; it's going to be altered. It evolves. It changes. And so organisms change; their morphological design changes. In a similar fashion, you have a sensorimotor loop with the environment. You're acting, which causes sensation, which drives your action, which causes your sensory perception; the sensorimotor loop is ongoing. And the idea is you have sets of constraints — I won't go into the details here, but basically you have constraints that are putting selective pressure on what you're paying attention to and how you're interacting, and sets of constraints that are opening up the options. Let me try and give you one quick, concrete example. What do I pay attention to? Notice that you have two opposing poles in you. You have a part of you that is trying to select what you're going to pay attention to; it's killing off many of the options. But you have this other part of your mind — and you can feel it right now — that wants to be distracted, mind-wander, and consider other things: look at that green tree that seems to be growing out of Gregg's head right now, and I wonder what that weird object is that's kind of spherical, and I wonder if I should be doing my laundry rather than watching this. Oh, no, pay attention. Notice what you're doing. You're constantly cycling between varying and selecting. And what's that doing? It's constantly evolving what you're paying attention to, your sensorimotor loop. OK, so biological evolution is constantly fitting you biologically to the environment.
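A minimal sketch of the feedback cycle just described, assuming an arbitrary one-dimensional "design" and fitness function: variation opens up options (an enabling constraint), selection closes most of them down (a selective constraint), and reproduction feeds the survivors back in, so the population evolves its fit to the environment.

# Minimal sketch of a variation / selection / reproduction feedback cycle (illustrative).
import random

random.seed(0)
TARGET = 0.73  # stand-in for "what the environment rewards"; arbitrary assumption

def fitness(x):
    return -abs(x - TARGET)  # closer to what the environment rewards = better fitted

population = [random.uniform(0, 1) for _ in range(50)]
for generation in range(30):
    # Selective constraint: scarcity kills off most of the options.
    survivors = sorted(population, key=fitness, reverse=True)[:10]
    # Feedback cycle: the survivors reproduce...
    # Enabling constraint: ...with variation, which opens up new options.
    population = [s + random.gauss(0, 0.05) for s in survivors for _ in range(5)]

print(round(sum(population) / len(population), 3))  # the population has evolved toward the target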
The dynamical system of relevance realization, recursive relevance realization, is constantly fitting you cognitively — what you pay attention to, how you formulate your problems — to the environment. That's the basic proposal about what recursive relevance realization is. Now, one more thing. We can talk about the selective constraints as a virtual governor: it limits, the way an actual governor limits the cycles of an engine. Right. And we can talk about a virtual generator that generates more options — this is like the gasoline actually exploding in the pistons. Right. And then we can talk about a systematic relationship between a governor and a generator; that's a virtual engine. So dynamical systems theory is: I find a complex feedback cycle and I specify the virtual engine that is regulating it, so that it can evolve to stay fitted to its environment — either biologically, in biological evolution, or cognitively, in cognitive intelligence. Intelligence is basically the very online, second-by-second evolution of your cognitive fitness to the environment. So that's sort of the gist of it. I hope that was OK. I love that. I mean, you know, I get all excited every time I hear that. Let me add — you know, what I like to do, because that's my psychological background, is always ask: what's behavior and what's mental? Right. Yeah. And so I go back to Skinner, and I wrestled a lot with Skinner, but he was an intelligent guy, even though he denied the mental in particular ways. But what did he say from the outside? He basically said, hey — you talked a lot about relevance and recursive; let's talk about your beautiful word realization, by which you mean two things: I see, and I make happen. So I can flip to the outside, to the behaviors, and you are having consequences — what Skinner would have called a commerce with the environment — and it is that return on investment. I make behavioral investments, and, well, what is the return on investment? That's the bridge to realizing your pathway to your goals. And it is that recursive relevance realization, that dynamic, evolutionary feedback loop of a complex adaptive system, that gets the system to rock and roll. I think that's great. And I love you bringing in the bioeconomic metaphors, because I think they're very important. And notice how they even slip into our folk language, because we talk about paying attention: we're aware that what we're doing is spending a precious resource, and we're trying to get the most cognitive bang for the least metabolic effort. That's basically what relevance realization is constantly evolving, and it's using this opponent processing to do it. Here's another example. You have to constantly adjust your level of arousal — I've got to come up, I've got to come down — and I don't mean sexual arousal, I mean metabolic arousal. Right. So you're constantly adjusting this. How do you do it? Well, you have a self-organizing dynamical system with two opponent processes. You have the sympathetic nervous system, which is always trying to increase your arousal and is biased to see everything as a threat or an opportunity, and it's locked together with your parasympathetic system, which is biased to see everything as not a threat — as a situation in which you can relax and withdraw and heal.
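A toy sketch of the virtual governor/generator idea, using the arousal example the conversation picks back up just below: one opponent process keeps pushing arousal up toward the current demand, the other keeps pulling it back down, and their ongoing interplay tracks a changing environment rather than settling on any fixed level. The gains and the demand values are arbitrary assumptions.

# Toy sketch of opponent processing regulating arousal (illustrative numbers only).
def regulate_arousal(demands, gain_up=0.5, gain_down=0.3):
    """Two opponent processes: one biased toward raising arousal when demand is high
    (sympathetic-like), one biased toward lowering it when demand is low
    (parasympathetic-like). Neither wins; their interplay tracks the environment."""
    arousal = 0.5
    trace = []
    for demand in demands:
        excite = gain_up * max(0.0, demand - arousal)    # push arousal up toward the demand
        relax = gain_down * max(0.0, arousal - demand)   # pull arousal back down toward rest
        arousal += excite - relax
        trace.append(round(arousal, 2))
    return trace

# The environment keeps changing, so there is no single "perfect" level to settle on.
print(regulate_arousal([0.9, 0.9, 0.2, 0.2, 0.7, 0.1]))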
And they're constantly doing this because there is no perfect level of arousal: as the environment changes, as your tasks change, you've got to constantly re-evolve your level of arousal, because that's what you're doing. So I've given you a couple of examples of what I'm talking about here. By the way, just to ring a little bell: you know what that part of your nervous system is called? It's called your autonomic nervous system, right? Because it is a self-governing thing. It's an inherent part of your agency. So that's the core idea. The idea is that that's at the core of intelligence, which is your ability to be a general problem solver, to be some kind of agent. Next — and I'm not going to repeat all of these arguments either, because Gregg and I went through them in great detail — you can take a look at most of the existing neuroscientific and psychological theories of consciousness, and they all converge on what consciousness is doing, what its function is. And the argument is that it's doing recursive relevance realization. The argument is that the point of consciousness is that it allows you to zero in on relevant information. You need that. That explains why you have consciousness for certain kinds of problems and not for others. You have consciousness when you're facing novel situations, ill-defined situations, complex situations, and you don't need consciousness when those situations become familiar, when they become regular, reliable, when they become well defined. So when you're first learning to drive a car, you're very conscious of it, and then — no pun intended — eventually you can drive very automatically, without having to give it much attention or awareness at all. So if you take a look at Baars's global workspace theory, Tononi's integrated information theory, Cleeremans's theory — we go through all of these — the argument is that they all converge on recursive relevance realization. Now, these two things, intelligence and consciousness as recursive relevance realization, are actually linked within a very central construct in cognitive psychology called working memory. Working memory is the kind of memory where you hold things in your mind and you're manipulating information all the time. Now, here's what's important about working memory. In one of the best accounts of working memory right now, Lynn Hasher, my colleague from the University of Toronto, has argued that working memory is a higher-order relevance filter. It's not just a holding place. This helps to explain a phenomenon called chunking, among other things. There's a lot of converging evidence that working memory is a higher-order relevance realizer; it's doing recursive relevance realization. It's taking information that's already been processed by unconscious mechanisms, and what it's doing is seeing whether relevant connections can be made between those pieces; it's doing recursive relevance realization. And so it overlaps very much with one of the dominant theories of consciousness, the global workspace model of Baars. Basically, there's a deep connection — there's even a deep connection between the areas of the brain associated with fluid intelligence and working memory, and between consciousness and working memory. So notice that the ability to do the I-me requires working memory. You have to step back and hold in mind the me and, through working memory, shine the light of attention on the me.
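A minimal sketch of the "higher-order relevance filter" reading of working memory described above — a gloss on the idea attributed to Hasher, not her actual model: a small-capacity buffer that admits items by their relevance and evicts the least relevant, rather than a passive holding place. The capacity, the items, and the relevance scores are illustrative assumptions.

# Minimal sketch of working memory as a capacity-limited relevance filter (illustrative).
import heapq

class WorkingMemory:
    def __init__(self, capacity=4):     # small capacity, in the spirit of classic WM estimates
        self.capacity = capacity
        self.items = []                 # min-heap of (relevance, item)

    def attend(self, item, relevance):
        """Admit an item, then evict whatever is now least relevant if over capacity."""
        heapq.heappush(self.items, (relevance, item))
        if len(self.items) > self.capacity:
            heapq.heappop(self.items)   # the least relevant item drops out of the workspace

    def contents(self):
        return [item for _, item in sorted(self.items, reverse=True)]

wm = WorkingMemory()
for item, relevance in [("branch", 0.9), ("cloud", 0.2), ("hawk!", 0.99),
                        ("leaf", 0.3), ("mate's call", 0.8), ("pebble", 0.1)]:
    wm.attend(item, relevance)
print(wm.contents())   # only the most relevant items remain held in the workspace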
There is a deep connection between selfhood, self-consciousness, and working memory. But as I just indicated a few minutes ago, there are deep connections between working memory and intelligence. In fact, if you take a look at some of the best measures we have of general intelligence and the best measures we have of working memory capacity, they approach parity; we're measuring something very, very similar. So: relevance realization at the core of intelligence, relevance realization at the core of consciousness, relevance realization at the core of the link between intelligence and consciousness — especially self-consciousness, especially self-conscious, intelligent agency. OK, so far so good? Yeah, I mean, you know, I'm sold. I'll say this for me and see if it resonates. I used to talk about sort of the neurocognitive functional organization of the nervous system before I met you. Now I can substitute, for the cognitive, recursive relevance realization. Like, that's the fundamental structure. Yes. Yeah. Then we can say, well, how does that fundamental structure work? If we use a little integrated information theory, it's networking stuff together in a particular kind of way. And our conversation was, well, how is it networking, for me at least? And we landed on this idea of a base of sentience kind of bringing the system in — that's the active and passive pleasure-pain guidance system. And for me, that gives you really just a flash of, say, sensorimotor experience. You know, it would be pretty simple, but it wouldn't have working memory. It doesn't really bring the whole system online until later, if we're talking in terms of the evolutionary structure of the system. And what happens is more and more capacity for focal relevance realization, which really brings that global network — yeah, the global neuronal workspace — online, as adjectival, adverbial consciousness on a working memory field. Exactly. And you can see how working memory and the higher-order recursive relevance realization — and Gregg pointed to this — could really enhance the self-detecting and self-modeling into something much more, starting to move towards self-awareness. Absolutely. In fact, I would argue that what's happening — if I'm tracking some of the evolution stuff — is that as animals move from the water to the land, they need to have a much richer cognitive map, just to do things like find where water is. And then you're doing working memory with deliberation about possible paths of investment, and you can really see the evolution of the cognitive structures exactly along that path. Excellent. Excellent. That's interesting. That would also serve to explain why the arena of memory plays host to the integration of the different kinds of knowing of oneself as it progresses more toward the propositional. Yes. And so the cultivation of memory becomes a way of cultivating that integration and then appreciating it across time. Exactly. And so, to foreshadow: in order to be a recursive relevance realization mechanism, long-term memory is reconstructive in nature, not reproductive. But you can see that reconstructive capacity being taken up by working memory processes, and you start to get the autobiographical.
So, for example, if you take narrative into working memory, you can take the reconstructive nature of memory, and especially episodic memory, and you can start to stitch it together into autobiographical memory, and you can get a temporally extended sense of self. You can actually see kids doing that as they learn narrative. And we're going to come back to the connection between narrative and the self, especially autobiographical self-narrative. If you notice, kids around three to four years of age are really bad at it, really, really bad at it. And we forget that; we forget that it has become automatic to us, but it took a lot of effort and a lot of education to get us to that place. Not to mention the fact that tectonic shifts in identity often occur as a consequence of re-situating the relevance of memories. Yes, yeah. I'm sure we'll get into that later, but no, that's excellent. And that goes even into the heart of — I mean, that's a deep connecting point between existential aspects of the self and clinical, psychotherapeutic aspects. Right. Exactly. Very good point. That's an excellent point. OK, so I want to go on. Notice that we were starting to talk about different temporal scales here. So consciousness is really sort of online — you know, novel, right here, right now — recursive relevance realization. Consciousness and intelligence are bound up together, but intelligence spreads out more through time. You're playing a game of chess; it's not fully held in your consciousness. It's held moment by moment, but the whole game isn't in your consciousness, and yet your intelligence is progressing throughout as you're playing the game. So intelligence is a little bit more spread out, if you'll allow me, in terms of its cognitive grasp of temporal-spatial reality. But then we can move to the even more long-term temporal-spatial scale, when we start to talk about those aspects of the self that we were talking about when we were talking about character and personality and temperament. And there are two things I want to say here about how relevance realization is relevant at that level — pun not totally intended. At the level of temperament, at the level of how, you know, our genetic constitution and our environment give us the set of traits talked about by "Big Five" personality theory — and I'm putting this in quotes because I take seriously Gregg's theoretical critique; I'm just using it because that's the standard theory. Yep. No, it's a good name. Many of you might have heard of this: traits like openness, conscientiousness, extraversion, agreeableness, neuroticism. And this goes to some work that I'm doing with Garri Hovhannisyan, but it's based on the work of a former student and TA of mine, Colin DeYoung, because Colin has basically argued for a cybernetic model of trait theory. He says that all of trait theory basically maps onto — both within individuals and, he doesn't say this as much, but he implies it, also between individuals — what's called the stability-plasticity problem. So the stability-plasticity problem — and this goes towards memory and why memory is reconstructive, by the way — is this: stability is, well, I don't want my cognitive machinery to change very much.
For those of you who are Piagetian, this is an assimilation strategy: I want to keep things very stable; I don't want my memories to change very much, for example. And plasticity is: no, no, I need to introduce novelty, I need to change — and that's more accommodation rather than assimilation. And notice how we, without question, put these together in the notion of the self. The self is somehow stably plastic. It's stable — it's the unchanging self — but look at how much I've grown, which is a plasticity statement. And the thing about this is you can see this go down into the brain's machinery. The bioengineering problem the brain's trying to solve is: how stable should I be, how plastic should I be? And without going into a lot of detail, you can see, again, this is going to be an issue of relevance realization: I'm going to be stable when that's more appropriate to the context, my environment, and I'm going to be plastic when that's more appropriate. And I want this constantly to be shifting around, like my level of arousal, but I want that rate of shift to be much slower, because some issues exist in terms of more longitudinal patterns. And so your traits are therefore more stable, but they are ultimately, to some degree, malleable — I won't get into it, but mystical experience even seems to change the openness that people possess. So at the trait level, what you see is relevance realization machinery. What about at the character level? What I would argue is that character is about sets of virtues — and again, not just moral virtues; I want to take into account what Gregg is talking about: also intellectual virtues, cognitive virtues, social virtues, etc. But I still take it that Aristotle's model of virtue — because Aristotle did talk about intellectual as well as moral virtues, for example — is the basic model. The basic model is that we're trying to find the optimal place between being too deficient and being too excessive. So courage is: well, I'm not a coward, I'm willing to face fear, but I'm not an idiot — I don't just, you know, challenge every fear that I have, because then I'll run into traffic and be hit by a truck. So I'm trying to get between being a coward and being foolhardy. But is that the same for me at all times? No. It's going to be different for me when I'm an adult male than when I'm a child. It's going to be different for me when I'm with friends or when I'm in enemy territory as a soldier, right? So this is going to constantly move around. And notice what I'm doing again. I'm trying to create a set of selective constraints so that I don't do too much fear-facing, and a set of enabling constraints so that I'm willing to do enough fear-facing. I'm creating a virtual engine that's constantly evolving, so I'm constantly fitting, virtuously or with virtuosity, to the context I'm in. And so you see relevance realization also in character, but it's the kind of relevance realization that, again, is going long term. So you can see that all these different scales are — sorry for this — relevant to our sense of self: the level of consciousness, which is online, where we can get the I-me; the level of intelligence, where we get our agency; and the level of our character and our traits, right.
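A toy sketch of the stability-plasticity trade-off described above, treated as the engineering problem it is: an estimator that stays stable (a slow learning rate, assimilation-like) while the world remains predictable, and becomes plastic (a fast learning rate, accommodation-like) when persistent error signals that the world has changed. All thresholds and rates are arbitrary assumptions.

# Toy sketch of the stability/plasticity trade-off (illustrative parameters only).
def track(signal, slow=0.02, fast=0.5):
    estimate = signal[0]
    recent_error = 0.0
    history = []
    for x in signal:
        error = abs(x - estimate)
        recent_error = 0.9 * recent_error + 0.1 * error   # how surprising has the world been lately?
        rate = fast if recent_error > 0.5 else slow       # plastic when surprised, stable otherwise
        estimate += rate * (x - estimate)                 # update the model of the world
        history.append(round(estimate, 2))
    return history

# A stable stretch the system can assimilate, then a shift it has to accommodate.
print(track([1.0] * 10 + [5.0] * 10))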
These are all happening at different temporal scales, but they all shade into and influence each other. They're all doing this very complex recursive relevance realization. Amen. Now, the other way around — which is to say, recursive relevance realization depends on, and I mean ontologically depends on, is grounded in, agency. What do I mean by that? Relevance is not a property of the world. Is this relevant? It depends. It depends on what's happening and who it is happening to. This might be relevant to me. It's not particularly relevant to Gregg. It's not relevant to that fly over there at all. Why? Because, well, I care about it and the fly doesn't. This couch that I'm sitting on can't care about it at all. Relevance depends on a capacity to care about some information, some actions, some shapings of your agency over others. Read Montague, the neuroscientist, makes this point very clearly. He says the distinction between us and computers is that computers don't care about the information they're processing. We care about it. And why do we care about it? We care about the information we're processing — we're doing relevance realization — precisely because we're taking care of ourselves. And why are we taking care of ourselves? Because we are autopoietic, autonomous, adaptive agents. Only a self-making thing has needs. A fire needs wood? No, it doesn't. The fire needs wood only if we care about the fire staying lit. But that's different from a paramecium. A paramecium needs food for itself and it seeks out food. Food is relevant. It treats the chemicals it detects as food. It cares about some things because it is an autopoietic thing that is taking care of itself. Relevance realization and agency co-depend on each other. They co-determine each other. They co-define each other. There is no agency without adaptive recursive relevance realization, and there's no relevance realization in existence unless there are autopoietic, autonomous, adaptive agents. OK, so what does that mean for all of this talk of the self? Where does all this start to come together? It comes together in this central notion, for which there is a lot of very cutting-edge but converging evidence. This is the idea: all relevance is relevance to an autopoietic system, an autopoietic organism. In that sense — and here's where we need our distinction — all relevance is self-relevance: not relevance to a self, but self in the recursive sense, right? So the paramecium is finding things relevant to itself. The bird is finding things relevant to itself. The branch that it can perch on is salient to it; it stands out in its working memory because it's relevant to the bird. So all relevance realization is self-relevance. That's why — notice the synonym we have for relevance: importance. Import, importance — importing it into myself. Food — listen to my words — literally matters to you. You import it and it matters to you. You see, we have all this language precisely because the grounding condition at the core of relevance realization is self-relevance. Now, the work of Sui and Humphreys — and I'll put some notes in with this video — brings together a lot of converging work showing that this process of self-relevance is, and this is their term for it, the glue of cognition and perception.
It is what does the gluing, the fitting together of perception and cognition so that it fits to you — so that if I take pieces of information and make them relevant to you, self-relevant, they will glue together and stick to you much better than if their self-relevance, their relevance to you, is not apparent to you. And this works from very low-level perception to very high cognition. At the core of your cognition and perception is self-relevance. It is the glue of cognition and perception. Now, here's the thing. It glues you together as a cognitive agent. It makes things stick together and stick to you, and different processes within you stick to each other. They emphasize all of these things that I'm saying, and the evidence for this is convergent and mounting. So notice how far we've got. We've already got intelligence, we've got consciousness, the possibility of recursive self-consciousness, we have the possibility of long-term traits and character, all being integrated with recursive relevance realization. And we see that in a central function that has become a key part of the psychology of the self, which is this process of self-relevance. Here's what I want to propose to you as a question that I want to try and undertake going forward, guys. Just as we got all of this machinery to emerge out of basic recursive relevance realization, so that we have self-relevance in existence — self-relevance to an agent — can we explain the emergence of relevance to a self from self-relevance? Can this very complex process of self-relevance, happening in a coordinated fashion at all these different temporal scales, all these different scales of cognitive-perceptual scope and grasp on the world, give us the theoretical machinery to explain how the entic sense of self emerges? I'm going to propose that it does. I'm going to propose that if we really make use of everything that we have here, we can give an integrated account of how relevance to a self emerges from self-relevance. And that will be a key move in explaining how the self emerges in a naturalistic fashion. That's the proposal I'm now going to make. Beautiful. So just as a point of converging evidence — we talked about this a little bit in the World Knot series — my path to sort of the big-picture view is that what's happening with the nervous system at its root is an investment value system. Yeah, yes. So when we get into recursive relevance realization and caring and determining what's good, what's bad, what the pleasure-pain signals are, its own dialectic of approach-avoid, what is valuable — it's very much akin to that layering. Totally. And it's from a totally different, or at least a convergent, independent line up the mountain. Yeah, we didn't know each other until both of us had gotten to that point of convergence between our theories, and we only discovered the convergence between them in discussion. That's exactly right. Which lends plausibility to the proposal. So what I want to do is basically give you guys some time right here to ask any questions you might have before I make one more major theoretical move, and then we'll sort of wrap it for today. Any questions? All good on my end. Yeah, no, I think we're very clear that we've built an architecture of agent-arena recursive relevance realization that's going to set the stage for this loop that you indicated. That feels pretty clean and tight to me. Yeah.
And the argument for all relevance being self-relevance, I think that’s a later development of the architecture of that argument, John, and I think it’s a very good development. I really like that move. I think it sets up well. Thank you. Because what that does is it provides a theoretically justified overlap between recursive relevance realization theory over here, talking about intelligence and agency and consciousness, and the psychology of the self, because most of the current cutting-edge models of the psychology of the self invoke self-relevance as a central mechanism that needs to be accounted for within a theoretical explanation of the nature and function of the self. And so that’s why it’s the linchpin place I’ve come to in the argument, trying to get a convergence so that we can do the bridging we need to do. So I want to now move to one more important construct, and it will recall how we talked about self-modeling earlier, because we want to add self-modeling into this. And then it’s going to start really ramping this up into something that gets closer to the folk model of the self, while also keeping it very problematic throughout. One thing I can say is that all of this architecture I propose to use in order to explain the self is also accepted by many of the people who want to argue that there is no self. That’s why this architecture is so important to get clear on, because both sides presuppose it as what they’re basically wrestling over. And we should always, well, at least I would like to, with the people who are saying that there is no self, the normal claim is that there’s no concrete self in the way folk ontology talks about the self. That’s right. You usually want to make sure that gets stated, or else you get all this academic confusion about what the argument actually is. Yeah, that’s exactly right, Greg, and thank you for reminding us. You can even see that within the camp of people who take the no-self position, because they’ll flip between the no self and the true self, often with a capital S, as what is being realized in Buddhist practice or Neoplatonic practice. And so, again, we have to tread very carefully, because this is so fraught with confusion and the potential for useless debate. So I want to go back to Read Montague, the guy who said we’re different from computers because we care. And I want everybody to remember that, please. Recursive relevance realization is not cold calculation. It involves affect. It involves care. It involves arousal. It involves the directed, risky paying of attention to one thing rather than another. All of this is fraught, and it’s not cold calculation. So when we’re talking about recursive relevance realization, don’t ask, “Well, what about our emotions?” as if they’d been left out. Of course they’re involved. Some of the best accounts of emotions are that our emotions are how we do online recursive relevance realization, in which we’re modeling the world and shaping the world, and modeling ourselves and shaping ourselves, very fast, in a quickly evolving manner, to a particularly dynamic situation. The base of emotion is energized motion toward that which you care about. Exactly, exactly. You hear what he said? There you go. Exactly. OK, so, given all of that.
Let’s go back to the fact that we care, and to Read Montague. We’re trying to take care of ourselves, and we’re trying to do it bioeconomically. Sperber and Wilson talk about this in their account of relevance: we’re trying to get the most cognitive bang for the least cognitive buck. We’re trying to do the least we can and get the most we can out of it, to adapt, to fit the environment. Whatever affords me doing that is extremely relevant to me. Tracking return on investment, John. Totally. My criticism of Sperber and Wilson is that they don’t say enough about this. Fine, relevance is cognitive profit, but how do you measure the costs, and how do you measure the benefits? And how do you do that in a non-circular fashion that doesn’t presuppose the very thing you’re trying to explain, relevance realization? But that’s another point we’ll come back to at another time. Let’s go back to Montague. So I’m trying to take care of myself. I’m trying to ration my expenditures and be as efficient as I can in how I’m using my cognitive machinery, my cognitive resources. Notice all the bioeconomical language Montague is using. When we do that, we face what he calls the efficiency paradox. What’s the efficiency paradox? Well, one of the things I need to do is not act at cross purposes. If my right hand and my left hand are doing opposing things, I can’t get anything done. So I need to, and we have this word for it, coordinate, put them all under a shared normativity. I need to coordinate. What does that mean? They all need to communicate with each other. You only get coordination through communication. Think about trying to get a bunch of people to do something. What do they need to do? They need to talk to each other. They need to coordinate. OK, so one of the ways I increase efficiency is by increasing communication. But what’s another way I increase efficiency? By reducing costs. And you know what’s really costly, in the brain and even between people? Communication. Notice all the time and effort we’re engaging in here just to communicate, and we’re not getting anything else done. It’s very costly in terms of effort, and it’s the same for your brain. Having different parts of the brain talk to each other consumes metabolic resources. So you want the parts of the brain to talk to each other as little as possible. Oh no, we have a contradiction. They need to talk to each other as much as possible, and they need to talk to each other as little as possible. OK, so what’s the solution proposed by Montague? To explain the solution, I’m going to use an analogy that he uses, and I’m sorry, it’s a rather heteronormative kind of analogy. I don’t mean anything by it; I’m not making any pronouncements about people’s orientations, their gender, their sexuality. It’s just easy for me to use it to explain. OK. So you have a husband and a wife. Now, my parents stayed together because they hated each other; they stayed married because being married to somebody is a great way to torture them. So let’s put that kind of couple aside and take a couple that was married a long time: my aunt and my uncle.
They stayed married because they loved each other, and they had a long-lasting marriage. Until he got Alzheimer’s, there would be a sparkle in his eye when he came into the room and saw my aunt, and there would be a reflected sparkle back from her. That’s all we can hope for. Maybe we’ll find that. Who knows? Anyways, one of the things I noticed, and Montague points this out too, is that it’s almost like they had telepathy. They could convey so much with so little. They’d just raise an eyebrow and, oh, I know what that means, and so on. And he said, well, how do they do that? The husband has a model of the wife and the wife has a model of the husband, and they don’t need to talk to each other. They can actually be in different rooms of the house and their behavior, listen to my word, is coordinated, because she’s consulting her model of him and he’s consulting his model of her. So their communication is next to zero, but their coordination is very high. You have this mutual modeling. Mutual modeling is how you get high coordination with low communication. Notice how this already means the brain is modeling itself in order to improve its capacity to model the world, and in modeling the world it needs to model itself. Hence Lipson’s self-modeling robots. OK, so, mutual modeling. Montague leaves it there. I want to propose that he needs to make it a dynamical system. Why? Well, go back to my aunt and my uncle. If they never spoke again, their models would eventually become unglued and distorted and the coordination would start to degenerate. What they need to do is periodically come together and talk, and then periodically go apart and work in the world; they have to cycle between the two. And what do they do when they talk? They correct and update each other’s models of each other. Then they can go out with updated models and model the world better. And as they model the world better, they’ve been changed as modelers, so they have to come back in and remodel and remold each other, and so on, and you cycle back. And I would propose to you that that is plausibly what’s happening when we see people shifting between what’s called task-focused attention and what’s called the default mode network. I have to bring this up because the default mode network is highly associated with the self within neuroscience and self psychology. OK, so what are the two modes? Well, one mode is being task-focused: I’m focused on the task and I’m doing these coordinated things. But in the default mode, suppose I’m bored, for example: I pull back and I start reflecting. I go into working memory. Uh huh. I go into working memory and I start to daydream, and I start to play with models of myself and different actions. And I propose to you that what’s happening in the default mode is that the different parts are checking in with each other and making sure they’re still modeling each other well. And then we cycle between that and task-focused: default, task-focused, default, task-focused. And so this is where the self-modeling is self-corrective self-modeling. Now, again, building out of the very same machinery, you can see why that would also track with your attention being distracted or focused, and so on. All of this coordinates very well with recursive relevance realization.
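As another illustrative aside, here is a minimal toy simulation, my own sketch rather than Montague’s formalism, of the efficiency paradox and its mutual-modeling solution: two processes each keep a model of the other, the models drift while the processes work separately, and periodic, cheap check-ins (the default-mode-like phase) re-synchronize them. The function name, the drift parameter, and all of the numbers are assumptions made up for the illustration.

```python
import random

def run(sync_every, steps=200, drift=0.05):
    """Two processes each keep a model of the other.
    Returns (average model error, number of messages exchanged)."""
    a_state, b_state = 0.0, 0.0            # what each process is actually doing
    a_model_of_b, b_model_of_a = 0.0, 0.0  # each process's model of the other
    total_error, messages = 0.0, 0
    for t in range(1, steps + 1):
        # task-focused phase: each process changes a little without talking
        a_state += random.uniform(-drift, drift)
        b_state += random.uniform(-drift, drift)
        # default-mode-like phase: periodically check in and update the mutual models
        if sync_every and t % sync_every == 0:
            a_model_of_b, b_model_of_a = b_state, a_state
            messages += 2
        total_error += abs(a_model_of_b - b_state) + abs(b_model_of_a - a_state)
    return total_error / steps, messages

random.seed(0)
for sync_every in (None, 50, 10, 1):
    error, messages = run(sync_every)
    print(f"check in every {sync_every}: average model error {error:.2f}, messages sent {messages}")
```

With no check-ins, communication is zero but the models drift apart and coordination degrades; with a check-in at every step, the models stay accurate but every step is spent communicating; periodic re-synchronization sits in between, which is the trade-off the task-focused/default-mode cycling is being proposed to manage.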
And now we’ve got self-modeling: the brain is mutually modeling itself at many levels as it’s modeling the world at many levels, and it is cycling in and out in order to keep the solution to the efficiency paradox in place, getting the optimal relationship between coordination and internal communication. And again, all of this is coming out in terms of the functionality of intelligent agency. Yeah, and that makes a very good point about the difference between the self-model and consciousness itself. The reason is that I like to talk about consciousness as focal recursive relevance realization, meaning, as we’ve talked a lot about, there’s generally one entity that is the focus of our conscious attention, because there’s a matching process. But think about the default mode network, which is modeling the self in parallel across a lot of different potential emphases while you’re trying to stay focused. Then something catches your attention, and what do you do? You jump off to the daydream. You bring that into consciousness, you attend to that, and then you shift back. Notice that consciousness is jumping back and forth while the default network holds that self-model in subconscious parallel. So that’s just a good way of saying, yeah, that self-modeling system is related to, but also different from, the conscious attention system. Yes, very much. And they have this sort of opponent-processing relationship with each other, constantly evolving our capacity to be fit to the environment, because there’s good evidence, for example, that the alternation between the default mode network and the task-focused network affords insight, affords creativity, affords our capacity to restructure how we’re attending to the task, and vice versa: we can internalize the world into our self-model in increasingly evolving ways. I would also imagine that in order for the mutual modeling to be progressively developmental and not just self-sustaining, it would have to continue to vary the mnemonics it uses to model in the first place. Yes. So, going back to your analogy of the husband and the wife: sometimes they can simply look at each other, but they also probably have choice phrases that are mnemonic, that contain within themselves vast pragmatic implications as to what is being conveyed. Those mnemonics also evolve as the relationship evolves and become more and more representationally efficient. So is there something like that also going on in the mutual model? And I would argue, and Greg and I talked about this in Untangling the World Knot, going to the global workspace model of consciousness, or even thinking about working memory, that this is where you can bring in somebody like Mead, Chris, George Herbert Mead, right. Right. The generalized other. One of the things you can do to help the mutual modeling, instead of having my aunt and my uncle mutually model each other, and then this one model that one, and so on, is to form a generalized other, a sort of general model: what do most of the models look like? I can do some sort of very sophisticated compression and get a generalized other that can take from all the individual models and globally broadcast back to them. And you see what we’re starting to get?
We’re starting to get this internalized thing that’s doing higher-order recursive relevance realization, higher-order mutual modeling. It’s starting to get centralized, and it’s starting to be able to take and coordinate all of the lower-level mutual modeling and do exactly the thing you’re talking about: manage, if you’ll allow me, the internal semiotics. Right. So actually, yeah, go ahead. That just reminds me. One of the things I do when I’m teaching my doc students is this: we want to track the relationship we have as therapists, because that’s very informative, but we also don’t want our idiosyncratic reaction, or transference, or whatever, to be over-weighted relative to the event, so that we project and then it’s our stuff that’s involved. What do I tell people to do? And this is interesting, because they begin to do this naturally. Just take a general person. Imagine yourself as the general person, and then compare your idiosyncratic reaction to what the general person would hold. If it matches, you’re in good shape. If you feel like you have a deviant response, then that may be transference. Oh, that’s beautiful, beautiful. So that kind of goes back to, I just want to put these together and then I’ll let you talk, Chris, there’s a sense in which you’ve stitched the two things together: the generalized other is a powerful mnemonic, is basically what you’re saying. Wow. And we carry it, and it’s quite accessible to people, although, I mean, how often do you say, “Well, what’s your generalized-other model?” But in the therapy room I’ve actually used that without really being conscious of it, just naturally, and everyone said, oh yeah, I can get that. So we carry that, and it’s pretty accessible without necessarily having to practice “What’s my generalized other?” It’s implicit in the structure. Right. And just as a way of removing cognitive bias, it reminds me of the earlier theoretical point that one of the functions of self-modeling is to differentiate itself from the world so that it can remove the contaminating influences that become conflated with it. And so it strikes me that what you’re describing, Greg, in the therapeutic context, is a version of that. Absolutely, absolutely. And I’ll add that what happens to folks in neurotic, reciprocally closing cycles is that they become so self-conscious that they over-exaggerate and lose their capacity to judge how they would be seen by others, because there’s a blow-up of self-absorption. They so anticipate that people will be upset over here, and they’re so self-conscious, that they become blind to people actually being upset over there. So one of the issues with neuroticism is this extreme inner spotlight that, ironically, because it’s usually driven by trying to manage social relations, ends up blinding the person to their normal capacity for tracking, because there’s a self-absorption and a reciprocal narrowing of the inner stage happening. Wow, I’m so glad I’m doing this with you two. So glad about this. So that’s as far as I want to argue; I just want to briefly foreshadow.
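(A brief illustrative aside before the foreshadowing continues: the generalized-other move and Greg’s “general person” check can be rendered as a toy computation. This is a sketch of my own, not Mead’s construct, which is far richer; the reaction values and the deviation threshold are made-up assumptions.)

```python
# Hypothetical reaction intensities (0-1) from several internalized models of others.
individual_models = {
    "colleague": 0.30,
    "friend":    0.25,
    "stranger":  0.35,
    "mentor":    0.20,
}

# "Compression": one generalized-other model summarizing all the individual models,
# which can then be broadcast back and consulted instead of every individual model.
generalized_other = sum(individual_models.values()) / len(individual_models)

my_reaction = 0.85   # my own idiosyncratic reaction to the same event
deviation = my_reaction - generalized_other

print(f"generalized other: {generalized_other:.2f}  my reaction: {my_reaction:.2f}  deviation: {deviation:+.2f}")
if abs(deviation) > 0.30:   # arbitrary threshold, for illustration only
    print("Markedly deviant from the generalized other: worth asking whether this is my own stuff, e.g. transference.")
else:
    print("Roughly in line with the generalized other.")
```

The design point is simply that one compressed model can stand in for many pairwise ones, and that comparing oneself against it is a cheap check on idiosyncratic bias.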
I want to think about this dynamical recursive mutual modeling that can go all the way down to the very specific, maybe mutual modeling between the visual and auditory systems, and all the way up to the generalized other. So it’s a complex recursive system. And then what I want to say is that we can think about mutual modeling happening at different temporal and spatial scales. The mutual modeling is happening between the online level of consciousness, the more extended level of intelligence, and then the even more extended level of personhood, personality, and character. And part of what the self is, is the kind of mutual modeling that coordinates all of those so that, overall, we get the ongoing evolution of our agency in the world. Amen. I’ll say one other quick clinical thing, because what happens to a lot of people, what brings them into the clinic room, is that you have one self on one time scale, like an impulsive self that eats, and you have another self on another time scale that says, oh, I want to get in shape. And those two selves don’t always line up, as you well know. That’s the problem. And so getting them to mutually model each other, getting the cycle of the mutual modeling to be adaptive rather than maladaptive so that our overall self-model enhances our agency, is something I think we will keep returning to again and again in this discussion. I’m hoping that will afford us some of the tools to talk about the more existential and spiritual dimensions of the self. What’s the relationship between self and spirit? What’s the relationship between self and soul? And if we’re giving a completely naturalistic account of the self, does that mean we’ve completely destroyed any use of the words spirit and soul? Is there a naturalistic way that is nevertheless non-trivializing to talk about these things? I hope to explore all of that going forward with you guys. So I’m going to shut up now and give you guys any final word you want to bring in before we end the recording for today. Any final things you want to say? Well, since you mentioned that, I just found myself typing to the list today that one way to characterize the whole unified theory is as the critique that a lot of science, and scientism, has really corrupted soul and spirit, and that we need to revitalize an appropriate way to talk about that and build a scientific, humanistic, ontological bridge that’s fulfilling. That’s all part of the meaning crisis piece. Yes. And what I’m proposing, and I think you’re converging on it in your proposal, is that the best place to look for a reformulation of spirit and self, sorry, of spirit and soul, is in this massively self-modeling, recursively relevance-realizing self. That’s the kind of entity where we’re going to find the language and the machinery by which we can talk about spirit and soul and meaning-making in a way that can help address the meaning crisis. Arguably, that has perhaps always been true, but we’re simply adducing new technology and language to it. Totally, I totally agree with that. So in one sense, with your help, I’m trying to extract from these traditions that have, as you rightly say, in a perennial sense always been doing this, but we need to bring it into where we are now.
Yeah, I think this is the critique of modernity that others are making too: modernity oversold its capacity and then acted, in some ways, horrifyingly. Yeah, yeah. To put it mildly, to put it mildly. OK, guys, once again, thank you so much for this. This was a little more monologic than most of the episodes will be, but that’s because we had to get through some very tight argumentation and lay things out. But you guys, as always, expanded and explicated it and drew out and induced emergence that I hadn’t foreseen. And so thank you, as always, for being here. Thank you very, very much, guys. Thanks for your passionate, edifying instruction there, brother. It was great. Likewise. Thank you both.