https://youtubetranscript.com/?v=Ev5LWNXzET4

Welcome, everyone. The video you’re about to watch was originally recorded on the Active Inference Insights channel and podcast. It’s an excellent discussion between myself and Darius Parvizi-Wayne. It builds a lot on the current work I’m doing integrating relevance realization, predictive processing, and 4E CogSci, and does some excellent new work around bringing all of that to bear on the topic of flow. This is great work. I hope to talk to Darius again. So please enjoy the following video. Welcome, everybody, to the third episode of Active Inference Insights, brought to you by the Active Inference Institute. I’m your host, Darius Parvizi-Wayne. And today, I am absolutely thrilled to be able to speak to John Vervaeke. John is an award-winning professor of psychology, cognitive science, and Buddhist philosophy at the University of Toronto. He is also the presenter of the renowned YouTube series Awakening from the Meaning Crisis, as well as the newer After Socrates. His work focuses on 4E cognitive science, which holds that cognition is embodied, embedded, enacted, and extended beyond the brain. In particular, John explores relevance realization, our adaptive ability to zero in on salient information in a world of near-infinite complexity. Last year, he, Mark Miller, and Brett Andersen wrote the paper Predictive Processing and Relevance Realization: Exploring Convergent Solutions to the Frame Problem, which proposes that trade-offs in precision weighting, a key second-order dynamic in predictive processing hierarchies, are at the heart of our ability to be intelligently ignorant. And that alignment of predictive processing and relevance realization is exactly what we’re going to be talking about today. John, welcome to the show. Thank you so much for joining us. It’s an absolute treat. Thank you, Darius. A great pleasure. I’m hoping that Mark does make it on in December. That’ll be great. Yeah, absolutely.
But in his absence, because Mark was meant to be here today but unfortunately he’s not able to make it. Perhaps, since this podcast is acting as a kind of primer and introduction for our audience to the critical themes in cognitive science and psychology and biology, as well as maths and physics, as they are right now, I think it would be a worthwhile place to start if you could just unpick what relevance realization is, and then we’ll come to how it aligns with predictive processing. Sure. So relevance realization sort of inverts the way common sense works. Common sense is: there’s a lot that is obvious to us, and it’s obvious what we should pay attention to. We’re sometimes mistaken, but it’s obvious what we should be remembering and what we should be doing, and that’s all obvious, and we just run from that. Our job as cognitive scientists is to explain how the brain generates that obviousness, that salience landscape that makes the right things stand out as relevant to us so that we can do what is astonishing: solve a wide variety of problems in a wide variety of domains. You’re a general problem solver, which is just astonishing. You can learn Albanian history. You could learn about swimming or rock climbing. You can take up the study of dinosaurs in the Jurassic period. It’s just amazing and astonishing, and there is good empirical evidence you have something like a general intelligence. There seems to be a general capacity. There is individual variation in talent. Some people are better at learning this domain than other domains, but way back from Spearman, we know there’s some sort of general ability. I’m going to propose that the general ability is two interlocking things: anticipation and relevance realization. What I mean by a general ability is that these are kind of meta-problems. These are the two big problems you have to solve when you’re doing any other specific problem solving.
And what makes relevance realization so hard, generating that obviousness so mysterious, actually? That’s what Scott Atran says. He says that what science does is it makes what common sense takes to be obvious, mysterious. Here’s a table in front of me. But physics says, well, what is that really? And then it becomes mysterious: it’s made out of quarks and all this sort of stuff. It’s the same sort of thing here, because when you think about it outside of common sense, you realize that there’s just an overwhelming amount of information that’s constantly, as you said, dynamically changing in the environment. There’s an overwhelming amount of information in your long-term memory that’s constantly changing and being readjusted due to the reconstructive nature of memory. There is an overwhelming number of combinations of actions you can perform, sequences of action. And yet you ignore almost all of that overwhelming information and zero in on the relevant information so that you are oriented rightly in the world, finding things obvious, salient, standing out. And it’s not just a single thing. There’s a salience landscape. Some things stand out more than others. Some things are more foreground, some more background. You have all this happening so that you are capable of solving so many problems in a messy, complex world in which there’s constant novelty because of emergent phenomena. And trying to give this ability to machines has been overwhelmingly difficult, one of the hard, hard problems. We’re actually bumping up right into it now as we’ve finally taken up the AGI project, the project of trying to create artificial general intelligence. Excellent. It may be worth digging a little bit deeper into that exposition, because people might be thinking, well, don’t we just have a module for relevance or salience? Don’t we just, we’re born and we know, oh, I should not fall off cliffs, and so on. But actually, what happened yesterday might be quite irrelevant.
You have a wonderful explanation in your sort of keynote paper explicating relevance realization, where you say that leads to an infinite regress. Perhaps you could explain that argument. So the magic module, not to be confused with predictive processing’s reported problem of the magic modulator. So let’s keep those two distinct from each other. But the magic module is: well, I have a relevance realization thing in my head that just does it. And then I can just say, well, how does it do it? And the mistake people can make is, well, evolution made it. And that’s right. Evolution tells me how it got here. That’s not telling me how it functions. Evolution tells me how my eye got here and how my ear got here, and that’s not telling me how my eye and my ear function. I’m trying to take what’s called the design stance. Tell me what I need to do to give that module the ability to realize relevance. All you’ve done is said it has that ability. But it faces the problem of, OK, now it has to know what to pay attention to, and so on and so forth. Now, what you might say is, well, evolution totally prepared us for it. The problem with that is that doesn’t work. Evolution can only pick up on things that are long-term invariant and that make an ongoing, continuous difference to your reproductive status. And that’s not how relevance works. Relevance is really fast and changing. If I say left big toe, it suddenly becomes relevant to you, and it wasn’t relevant a minute ago. And what’s the Darwinian difference there? And there’s nothing that’s intrinsically relevant. Well, my own life is intrinsically relevant, is it? You’ll sacrifice your life for your kids. Well, my life and my kids, then. Under all circumstances? What if saving your kid means that 10 million people die? And so on. And we get into all these philosophical arguments because we realize, no, no, there isn’t any hard and fast thing that is always relevant. Relevance is not something for which we can generate a scientific theory.
It’s not stable. It’s not intrinsic to the phenomena. And there’s nothing, other than being relevant, that all the events or objects or feelings have in common. What do they all have that makes them relevant? Other than saying, well, you’ll give me synonyms: they’re important to me, or they help me solve my problems. Yes, that’s exactly it. But how? Exactly. Perfect. And it’s so nice to see how the apparently disparate strands of cognitive science, so Gibsonian affordances from 4E cognitive science, predictive processing and active inference, align on this point, which is that everything is dynamical. Everything is about this mutual unfolding. There is no reified, static existence. What’s relevant to you might not be relevant to me. And there are these dynamic relationships which govern that. So that’s a great place to start. I think let’s now go into the solution, if there is such a solution. And let’s start with opponent processing. So this is a term that you use frequently, and to a lay audience it’s not clear exactly what it means. So what is opponent processing, and how does it help shed light on this complex problem? Sure. And maybe along the way I can point out how relevance realization really gives some specific teeth to claims of embodiment from 4E CogSci. Because I like the point you made, and maybe we don’t have to do it right now, but I’d like to come back to it. Relevance realization is kind of a glue. It glues together predictive processing and 4E CogSci in a powerful way, and makes them all, including itself, mutually stronger from that integration. But yeah, the opponent processing. So let’s just note something. At many different levels of analysis in your biology, you will find opponent processing at work. I’ll take one that’s very easy to explain, and people have a ready experience of it. So we are all constantly, mostly unconsciously, although it’s affected by conscious factors, we’re constantly recalibrating and adjusting.
Dare I say it, we’re even evolving our level of metabolic arousal. I don’t mean just sexual arousal. Don’t be Freud here. I mean, how activated are you? How energized are you? And the problem is there isn’t some single homeostatic state you’re looking for with that. Because if there’s a tiger in the room, I need to go to maximal arousal. And if I’m going to sleep, I need to go to minimal arousal. And I can’t just be sort of Canadian and keep sort of average arousal at all times, because then I don’t fall asleep and I get killed by tigers. And so, like we’re talking about, it has to constantly be adjusting. So what has evolved in your biology is your autonomic, meaning self-governing, nervous system. It’s a self-organizing system. Like you said, it’s a dynamical entity. What your autonomic nervous system does is it couples together two subsystems that have opposite biases from each other. Your sympathetic system is biased, and I’ll speak anthropomorphically just because it speeds things up, to seeing as much as it can of the world as threat or opportunity and arousing you as much as it can. And your parasympathetic system is biased the opposite way. It’s biased to seeing as much of the world as secure, safe, a place where you can rest and recover. And these subsystems are not independent from each other. They’re locked together, and they’re continually trying to shut each other off. They’re constantly competing that way, but they’re cooperatively competing. It’s not adversarial. And what happens is that constant trade-off between those processes, in a very dynamic manner, constantly recalibrates your level of arousal to the world. And that’s opponent processing. And you’ve got that, and there’s opponent processing between your focal vision and your peripheral vision.
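The arousal example can be caricatured in a few lines of Python. This is my own toy sketch, not anything from the conversation: the `threat` parameter, update rule, and gain constant are all illustrative assumptions. Two oppositely biased pushes, a sympathetic one toward activation and a parasympathetic one toward rest, trade off against each other, and their equilibrium tracks the context rather than any fixed set point.

```python
# Toy sketch (illustrative only): two oppositely biased subsystems
# whose ongoing trade-off recalibrates a single arousal level to
# the current context, in the spirit of the sympathetic /
# parasympathetic example above.

def recalibrate(arousal: float, threat: float, steps: int = 100) -> float:
    """Iteratively nudge arousal toward whatever the context demands."""
    for _ in range(steps):
        sympathetic = threat * (1.0 - arousal)       # pushes arousal up, scaled by threat
        parasympathetic = (1.0 - threat) * arousal   # pushes arousal back down toward rest
        arousal += 0.1 * (sympathetic - parasympathetic)
    return arousal

# Neither subsystem "wins": the equilibrium is set by the context.
print(round(recalibrate(0.5, threat=0.9), 2))  # → 0.9 (tiger in the room)
print(round(recalibrate(0.5, threat=0.1), 2))  # → 0.1 (falling asleep)
```

The point of the sketch is only that no single homeostatic target appears anywhere in the code; the "right" level of arousal emerges from the competition itself.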
There’s opponent processing plausibly between your left and right hemispheres. The number of these things is huge, and at many different levels of analysis. And so I think opponent processing is a clue to how relevance realization is actually undertaken by your embodied cognition. Wonderful. And immediately, to anyone interested in active inference or its manifestation in continuous state spaces as predictive processing, what will be screaming out is this notion of an attractor set or homeostatic equilibrium, which Karl has spoken about at length, obviously, as have many other researchers. But there’s this idea that we suffer, but we also embody in a kind of positive way, this itinerancy. We’re never stuck in a single mode. We are endowed with the capacity to go beyond our homeostasis and return back to it. But we’re always, in a sense, being drawn without ever residing, without ever having stopped at that homeostatic set point. So let’s now fold active inference into the picture. What does it provide relevance realization, and the formulation of relevance realization, that couldn’t be done purely with 4E cognitive science? So, I mean, I’m gonna talk about it in terms of predictive processing, because that’s the formulation of it that lines up most cleanly with the theoretical integration. As you said, there are clear derivation relations between active inference and predictive processing, and also between them and the Bayesian math. But the Bayesian math, if you were to actually strictly apply it, would be computationally intractable. So we’re doing some approximation function. I’m just noting that so everybody shall know that I’m playing fair with this. I’m not trying to be dodgy here. And so relevance realization is grounded in the idea of problem solving. And problem solving actually assumes a fundamental thing, which again is so obvious to us, which is that the state you’re in and the state you want to be in are not the same state.
That’s the defining feature of a problem. If I’m in a state I wanna be in, I wanna be sitting in this chair and I am, I don’t have a problem. Right? And now what that means is you’re immediately into something very interesting. The organism is trying to actually predict a possible state in the world and prepare itself. So it’s an agent. It doesn’t just behave. It alters its behavior to alter the states in the world. Right? And so it is trying to, I would say, predictively prepare for the world. It’s trying to predict the world, but prepare itself for that world, but also prepare the world so it’s more likely to come out in the prediction that it seeks. Right? And so I put those two together and I talk about anticipation. And so whenever you’re problem solving, you’re anticipating a goal, meaning that prediction and preparation. Now, 4E CogSci doesn’t directly talk about that as clearly as it needs to. It talks about coupling and it talks about affordances. And I think there’s a deep connection between opponent processing, optimal gripping, and affordances. And maybe we can explore that at some point. But typically, one of the things where 4E CogSci has some challenges is in more distal relations to the environment, because it tends to rely primarily on coupling. And this is a longstanding critique that one of my colleagues, Brian Cantwell Smith at the University of Toronto, because that is where all knowledge is flowing from, right, has made. He said, you know, you can be dynamically coupled to things that you’re in direct causal contact with. But how do you do things that are much more distal? And for me, that is key. And here’s why, Darius. I think when we evaluate, even intuitively, so I’m not using that as an authority, I’m just showing how readily this works for us. When we evaluate an organism for its intelligence, we tend to do it in two interlocking ways.
We do it in terms of, I think, how well does it zero in on relevant information? How well does it pay attention to what it needs? We look at an animal and go, wow, that’s really good. Notice how it’s noting subtle differences. But we also evaluate the intelligence of an organism, and Michael Levin talks about this with his cognitive light cone idea, by how deeply into the world it can anticipate. And I don’t mean just spatiotemporally, I also mean modally, in terms of possibility. Like when I talk to my cat and I say, you know, where’s your toy? The cat looks at me and I go, oh, well. But I say to my puppy, where’s your ball? And she goes into the other room, looks for the ball, finds it under the couch, and brings it all the way back to me. And I go, wow, you’re really smart, right? And so we get this because we know what, moment by moment, that adaptive ability is built on: how distally can we pursue goals? We tend to evaluate people: wow, you pursued a long-term goal and you brought it about. So that’s what’s missing. And I think that’s what’s really afforded. Now, I wanna make one more point, and then I’ll shut up so you can ask me another question, which is that these two issues, anticipating more deeply, and remember, I don’t just mean spatiotemporally, I mean modally, and relevance realization, are deeply interconnected. The more you anticipate, the more the problem of relevance realization goes up exponentially. And so these two things, I would argue, because they’re interlocking, have to be solved together. And I think that’s why we use both of them as evaluations of the intelligence of an organism. Okay, I’d like to backtrack to that second argument after what I say here. It strikes me that this is what predictive processing can give, perhaps where 4E cognitive science doesn’t, but I’m happy to be schooled on this.
And I know Varela and autopoiesis in some sense foreshadowed and prefigured what I’m about to say, which is that it kind of gives a ground to our action and to our perception: we have to have a model of the world and a model of ourselves which is not only descriptively viable, but also normatively viable. Yes. Insofar as it’s very important for me to be able to predict the world. But as you said, we aren’t just at the behest of the world’s dynamics; we can change the world according to our preferences. Exactly. And I think what predictive processing gives us, although to be fair, I think Karl might say that this is a bit of an overshoot, is a telos, a fundamental attractor set by which our perception and action is governed. So I think that’s kind of the way that I see the added benefit of this convergence between predictive processing and relevance realization. What I was also gonna say there is that it’s fantastic that you’re mentioning this kind of deep temporal modeling, because from my eyes, that’s really where a lot of the work is being done right now: selfhood, consciousness, even perhaps space itself is downstream of the degree to which we can model the slower dynamics and the faster dynamics in a generative hierarchy. I’m in complete agreement with that. I think that’s an extension of the argument I made. Yeah. And so what I wanted to pick up there is you said, if I heard correctly, that the deeper your temporal model, the more you can model slower fluctuations in the environment, the more critical relevance realization becomes, the higher the stakes in some sense. Yes, very much. I’m interested in unpicking that argument, because from my eyes, there are plenty of things out there that don’t have deep temporal models.
So a virus, for example, is just reacting to very coarse-grained features of the environment, but for it, relevance realization, at least in my eyes, appears as critical as it would for a human. It is, it is. And so I think it’s, how can I put it? I think it’s always the demanding problem. So I’m not denying that even the paramecium has to do salience landscaping. It has to, and I’m gonna use a word very neutrally here, I’m not connoting anything about consciousness, but it has to recognize this molecule as food and that molecule as poison, and swim reliably towards the one and away from the other. So I think the problem is always there, as I was trying to indicate, as soon as you have problems, as soon as your goal states are distinct. And I think the space opens up very quickly, exponentially. I just meant that it exponentially gets worse. Right, right. This is why you see, across species, hyperbolic discounting, temporal discounting, because that’s a huge relevance realization machine. It’s about salience discounting, right? And the point of that is, as you open into the future, the number of possibilities goes up exponentially. And so you need to have this attenuation function so you don’t get overwhelmed by the possibilities as you extend your cognition into the future. That’s why we have hyperbolic discounting, and we pay a huge price for it. It makes us procrastinate. It makes it difficult for us to pursue our long-term goals. So there’s even a trade-off relationship in there, but I won’t get into it; that’s an example. There’s a good reason why we must have something like temporal discounting across species. That’s what Ainslie showed: the relevance realization issue just gets even worse. It starts out bad and gets even worse. Right, wonderful. And yeah, I mean, we’ll come to affordances now, because a lot of your work is centered on the notion of affordances. Yes.
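The hyperbolic discounting just mentioned has a standard one-parameter form, V = A / (1 + kD): the subjective value V of an amount A falls off with delay D. A small sketch (my own illustration; the dollar amounts, delays, and discount rate k = 1 are made up) shows the price we pay for it, the preference reversal behind procrastination: viewed far in advance the larger-later reward dominates, but up close the smaller-sooner one wins.

```python
# Illustrative sketch of hyperbolic discounting, V = A / (1 + k*D).
# Amounts, delays, and k are arbitrary choices for the example.

def hyperbolic_value(amount: float, delay: float, k: float = 1.0) -> float:
    """Subjective value of `amount` received after `delay` time units."""
    return amount / (1.0 + k * delay)

small_soon = (10.0, 1.0)    # a small reward in 1 day
large_late = (30.0, 10.0)   # a larger reward in 10 days

# Far in advance (add 20 days to both delays), the larger-later
# reward is preferred; near the choice point, preference reverses.
far = [hyperbolic_value(a, d + 20) for a, d in (small_soon, large_late)]
near = [hyperbolic_value(a, d) for a, d in (small_soon, large_late)]
print(far[1] > far[0], near[0] > near[1])  # → True True
```

The steep near-term curve is exactly the attenuation function described above: it keeps the exploding space of future possibilities manageable, at the cost of undermining long-term goal pursuit.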
So affordances are the possibilities for action afforded, actually that’s circular, the possibilities for action granted by the environment. I mean, afford was a verb and then Gibson made it a noun. How much has the predictive processing framework supplemented your understanding of affordances, and perhaps the way that what we perceive in the world is not necessarily a feature list of objects, but fundamentally affordances? I see my glass of water as grippable rather than as glass of such-and-such dimensions. And I only ask that because, as I said, as conscious, animate cognizers, we are blessed with the capacity to change the world in accordance with our priors, which makes action really the fundamental currency for autopoiesis, for self-organization. So has predictive processing ramped up the importance that you give to Gibsonian affordances? I don’t know if it ramped it up, because I always thought, I literally learned this stuff from John Kennedy, who was Gibson’s great protégé. So very early on, I was very much impressed by this. And it occurred to me that relevance realization, the realization of relevance, and I mean realization in both senses, actualizing a possibility and becoming aware of it, or at least detecting it in some fashion, is a prototypical kind of affordance. The thing about affordances is they’re not found in the object or in the organism. The grippability of the glass is not in the glass; it can’t be gripped by a paramecium. And it’s not in my hand; my hand can’t grasp Africa or the sun. It’s in a fitted relation between them, and that’s exactly what relevance is. It’s a fitted relationship between. So before I came into a deep dialogue with predictive processing, I already saw a deep relationship between relevance realization and affordances. And I think part of my work has been to really explore the deep ontological significance of that. That you have a category that sort of falls outside. This was Gibson’s intent too.
This is what 4E CogSci was really talking about. It falls outside our belief that we have a complete, exhaustive dichotomy between the subjective and the objective, between the inner world and the outer world. And so there’s this other important category, I call it the transjective, this between-ness, this connectedness that you see in adaptivity, you see in relevance, but you also see it, like you said. Now, what do I think the predictive processing did? If you’ll allow me, I wanna talk back and forth, symmetrical, not just linear. So I think what predictive processing does is emphasize the need to try and explain, rather than just, and I don’t wanna be cruel, so I’m being a little bit oversimplistic, I’m doing that just for speed. Generally it’s like, yeah, but how do the affordances get actualized in the 4E account? There are a lot of affordances all the time. And what I’m interested in is, how do those affordances become salient and obvious to me so they become binding on my sensorimotor behavior? And that was always something of a gap; not that there aren’t Rietveld and others who are doing some work on it, but typically they start to invoke predictive processing to try and explain that mechanism of, okay, there’s a huge affordance network, but, and here we go again, how do I select and actualize the ones that actually go into my salience landscaping? And I think predictive processing, especially with the notion of precision weighting, does a good job of that. And what I think relevance realization helps to do for predictive processing is a bunch of other things. But one thing is, predictive processing basically says, and there’s subtlety to this, and I know there’s lots of math, but just to keep going, right? What attention is, is the precision weighting. It’s this meta-function. And I think that’s a very good argument, but there’s a conceptual analysis that’s been needed, which is: but what is salience?
And one of the arguments I can make is, well, what the precision weighting is doing is relevance realization. And when relevance realization does a higher-order finding-relevant of an affordance that has been generated by lower-order relevance realization, that’s when something becomes salient to us. And so we can actually give a conceptual explanation that goes with the theoretical identity claim in predictive processing: the precision weighting is attention, and what it gives you is salience. So they can unpack each other. Another thing that happens is, relevance realization is deeply connected to 4E CogSci because relevance is grounded, I would argue, in autopoiesis. Relevance is always relevant to an organism, right? And so that means relevance realization is not cold calculation; it’s how the organism cares about some information rather than all the other information, because it is constantly taking care of itself. There are literally, and I mean this physically, things that matter to it, and things that it must import into itself, important things, and some of that’s also information. And so if you can glue 4E CogSci and predictive processing together, I think you can answer some of the questions, maybe even challenges, that some people in 4E CogSci are making of predictive processing, saying, well, it’s not really connected to embodiment. I think, no, no, if you show a deep theoretical integration of predictive processing and relevance realization, and then relevance realization and autopoiesis, then you’ve really strongly glued predictive processing and 4E CogSci together. So that’s some of the important theoretical work that can be done. Yeah, absolutely. I would love to jump into the critiques that certain 4E cognition researchers have leveled at predictive processing, but let’s leave that for a second.
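The precision-weighting-as-attention idea discussed above can be sketched in a few lines. This is my own minimal caricature under simplifying assumptions, not anyone’s actual model: each sensory channel’s prediction error is multiplied by a precision (an inverse-variance weight), so high-precision errors drive belief updates while low-precision errors are effectively, and intelligently, ignored.

```python
# Minimal caricature (illustrative assumptions throughout) of a
# precision-weighted belief update in the predictive-processing
# style: precision acts as a gain on each channel's prediction
# error, gating which surprises get to revise beliefs.

def precision_weighted_update(beliefs, observations, precisions, lr=0.5):
    """One gradient-style update, gated per channel by precision."""
    return [b + lr * p * (o - b)
            for b, o, p in zip(beliefs, observations, precisions)]

beliefs      = [0.0, 0.0]
observations = [1.0, 1.0]   # identical surprises on both channels...
precisions   = [0.9, 0.05]  # ...but only channel 0 is weighted as reliable

updated = precision_weighted_update(beliefs, observations, precisions)
print([round(b, 3) for b in updated])  # → [0.45, 0.025]
```

The same error arrives on both channels, yet only the high-precision one substantially moves the beliefs, which is one concrete reading of precision weighting as the mechanism of salience.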
This is exactly what I was talking about in terms of telos, which is this notion of care. When I spoke to Mahault Albarracin, an active inference researcher, last week, we actually ended up concluding that this is almost a Heideggerian sense of care, that we take a stance on our own existence. Can I just interrupt there? Yeah. Right. A Heideggerian sense of care was exactly what I was invoking, and I was deeply influenced by Dreyfus on the frame problem, who got this from Heidegger. So the connection you just drew is very, yeah, I explicitly argue for that. Excellent. I personally think that the Heideggerian, Dreyfus, Merleau-Ponty lineage needs to be integrated more into active inference, and I’m speaking to Dr. Marilyn Stendero, who has done work on Heidegger and autopoiesis. So that will be really fun in the upcoming weeks. This is exactly what I was talking about in terms of telos, which is that taking a stance on your own existence can be computationally modeled as having these kinds of high-precision priors. Yes. So our homeostatic set point will be different from a snake’s, and that actually gives a really solid explication of why we act differently in certain contexts than snakes. But that said, I have to make clear that Karl himself, although he has spoken about the existential imperative, says that there actually is no imperative. All that you’re really saying is: if a thing like us is to persist over time, this is what it needs to do, and it’s just cast as a minimization of free energy. So I think there’s an interesting question there that maybe you could have a go at, which is: to what degree do you see this as a genuine imperative versus just what things do to self-organize over time? Yeah, and so, I mean, Karl’s no slouch, so he’s welcome.
I sometimes don’t like it when people who are physicists or something start commenting on philosophical and normative questions with an authority they don’t properly possess. But, I mean, Karl’s a great theorist, and I know he’s philosophically educated. I have yet to talk to him, but I hope I do get to do so. We can do it here. Yeah. So, I mean, I’m gonna argue later, if I get a chance, that what relevance realization is doing is strongly analogous, obviously at a very different time scale, to what evolution does: constantly redesigning the adapted fittedness of organisms to their environment, and allowing organisms to fit their environments to them. Because as you keep emphasizing, and I keep agreeing with, it’s not passive, right? The organism is shaping the environment as the environment is shaping the organism. Niche construction and all that sort of thing. And I think that’s all right. And there again, that’s a very clear connection to 4E CogSci. But this deeper question, I do wanna do it. I wanna pause and I wanna slow down, because it is a really important question, because it gets us beyond what we might call scientific explanations of how, and into properly, but not useless, philosophical reflection on why. And why does this matter to me? Because, as you’ve mentioned, I’m deeply concerned with issues of meaning in life. This largely metaphorical, nebulous notion that humans seek meaningful lives, lives that are worth living, even given all of our failures and our faults and our flaws and our foolishness and our frustration, right? And there’s pretty clear empirical evidence that it’s not reducible to just subjective wellbeing or pleasure, and pretty good conceptual philosophical argument, because that’s where it’s relevant, that it’s not reducible to just living a moral existence.
You could live a very moral existence in which you’re experiencing the pleasure of food and other things, and a certain stability in your environment, but you could be very, very lonely. And what that loneliness points to, and I’m just making an intuitive gesture here, that’s not a tight argument, I have tight arguments, but the gesture is: people are seeking a kind of connectedness. And now you see where I’m getting to. Relevance realization and what we’re talking about, this niche construction, the fittedness, the belonging-togetherness, this mutual shaping, I argue that that’s exactly the connectedness human beings are seeking as the meaning in life. Now you can then ask a question, and I’m gonna play with two turns of phrase here. I think we clearly seek meaning in life, and I think the evidence for that is growing, and it converges with the psychological work on meaning in life and on what Kelly-Ann Allen calls belongingness. If you don’t feel that you belong, you’re in trouble, and you’re in trouble across all these measures: cognitive, emotional, social, financial. Like, you’re in trouble. Physiologically you’re in trouble. That’s why solitary confinement is such a punishment, right? So: is there a meaning of life to the meaning in life? The meaning of life would be: is there some sort of cosmic destiny, a cosmic order, which we are ultimately trying to find? And here’s the thing, I don’t think I have enough evidence for that. I think that there’s lots of evidence for meaning in life, and if I’m right that it’s relevance realization, then it’s something very much like evolution, right? Evolution is constantly redesigning adaptivity, but there isn’t a final thing that evolution is aiming towards; there isn’t a final form of life.
There isn’t some endpoint, as if what we’re doing is like the sculptor, constantly refining, and then finally one day we will have the final form of life. I think this is fundamentally flawed, and I think this matters philosophically. I don’t think previous forms of life were in any kind of moral sense superior to us. So I don’t believe in any nostalgia, that some of the previous things that people found important and relevant and sacred were the true ones and we just have to get back there. And I don’t believe in utopia, that we are working towards the final ultimate thing that we’ll all agree for all time is the most relevant, most salient. I don’t believe that’s how this works. So I reject nostalgia, I reject utopias. The idea that there’s a telos can either mean there’s a telos to meaning in life, which is, like Karl says, necessary, constitutively necessary, for living things to have; but does that mean there’s any metaphysical necessity to there being living things? That’s a different question. Now here’s my weak answer to that one. Sorry, it was a long question, but you asked, and this is almost up there with, like, God, right? I note that I’m in a very difficult situation. I can’t see any evidence for it, and I’m not attacking God here; as you know, I’m very respectful and even appreciative of religious frameworks and religious lives. But I can’t see the evidence for that. However, if you were to ask me which is a better universe, one with life in it or one without life in it, even if you couldn’t exist in either one, I would say the one with life in it. There’s some primordial judgment going on there that has some kind of metaphysical import. Now it’s weak, I admitted that ahead of time. And so I’m giving you a very, I’m sorry, very long but wishy-washy answer. I don’t really think I believe in anything like the meaning of life, but I do think there’s a metaphysical import to being connected to reality. 
We find being connected to reality inherently valuable for its own sake. And we find worlds in which things can realize reality to be better worlds than ones that don’t have that in them. That’s my answer. No, it’s a wonderful answer. And it’s really helped me, and hopefully our audience, clarify exactly what I mean by an existential imperative, which is that the free energy principle doesn’t tell you what you’re here for. No. It tells you, if you’re here, what are you doing? Like, I’ve given you a phenotype and given your culture, et cetera, et cetera. So that’s a really useful explanation of exactly what I was going for there. And actually it points to something I said to Karl right at the end of our podcast, which at the time of recording this came out yesterday. So people should definitely check it out, because he’s on top form. Which is that I mentioned Thomas Nagel. So Thomas Nagel has this very romantic philosophical notion that death is bad because life is fundamentally good. And it points to your thought experiment: if we had two worlds, one with life and one without life, I think we would all intuitively say the one with life is better. Whether that has any metaphysical import, as you say, is up for grabs. I’m going to ask, jumping on the back of the initial thing you said about connectedness and belongingness. So if we take this from a theoretical or computational stance in active inference, what this looks like to me is we embody a model of the world, so we can make fundamentally sound predictions about how the world will unfold and how we will act to make that world more compatible with our preferences. Now, when that was formalised initially in the active inference literature, immediately a response to this came in the form of what’s called the dark room problem. So this is all papers in 2011, 2012, and they’ve been rebutted in multiple ways. 
But the basic idea is: why don’t humans just seek out dark rooms where maybe they have access to food and water, but they’re in a state where they don’t go and explore and they’re not curious? Because what you’re seeing there is actually a perfect coupling between your predictions of the world and the way the world is unfolding to your eyes. So I was wondering whether you had pondered the dark room problem, and where you see exploration and epistemic affordances coming into play here. Yeah, I think that the dark room problem... I’m happy with all the other rebuttals. And I don’t know them all, so I may be stepping on somebody else’s toes. So if I am, whoever you are, I apologise. Yeah. I think this is a Lockean individualist model of cognition and how we work. And the presupposition is that cognition is fundamentally individualistic in that way, and I don’t agree with that presupposition. I think we are, you know, socio-cultural mammals. And let me point to one thing. Yes, measures of g are very robustly predictive, but you know what’s also a very powerful way of predicting your behaviour? Your attachment style. This is also very robust. Now, think about what that actually means. And this goes into the heart of religious traditions, like agapic love, right? When you have a child, you have to invert your relevance arrow. It’s not how that being is relevant to you. You invert everything around. This is agapic love, and it’s how it’s different from erotic love or philia love. You invert everything around, right? How am I relevant to this being? How can I be relevant in a way that turns it into a person, turns it into an intelligent, rational, self-reflective relevance realiser, predictive processor, meaning maker, right? And so, well, first of all, where are my attachment relationships in the dark room? 
Well, there’s other people in the dark room. As soon as there’s other people in the dark room, all the problems that you thought you got away from by putting me in a dark room return. Other people have different goals than me. They have different needs. They’re going to move around differently, right? We’re going to have to decide about when and where and how we gain access, blah, blah, blah, blah, blah. All of that immediately unfolds. Secondly, and this is part of 4E cog sci, I think the evidence for extended cognition, distributed cognition, that we evolved to work in groups and the collective intelligence of that is actually our superpower, and other people are making this argument. I think that’s clear. Standard thing: take the Wason selection task and give it to an individual person, highly educated, highly intelligent, second-year psychology students at a top-tier university. From the 1960s on, you put them in the Wason selection task and only 10% get it right. You give that same task to four people who are allowed to talk to each other, and their success rate goes from 10% to 82% reliably. Why? Because we do opponent processing between each other. You have biases different from mine, and if we work in opponent processing, not adversarial, but if I say you’re probably a good source for correcting my bias and I’m probably a good source for correcting yours, we get the best dynamical self-correction possible. And so when you add in that fact, and there’s actually evidence, empirical and formal, for the power of collective intelligence, the reality of attachment, well then the dark room becomes filled with other people who are trying to band together to solve problems that they can’t solve individually. And then to me, everything that the dark room thought it had as a reductio ad absurdum disappears. That would be my response. 
Yeah, I think eventually the dark room just becomes the world once you start adding in the things you need. You know, eventually it becomes a civilization. Yeah, I love all this stuff on distributed cognition. I spent the first six months of this year working at UCL on finding experimental ground for what’s called social baseline theory. Social baseline theory is this idea that to study a human being in a lab by itself is to study a human being at a deficit. A human being’s physiological arousal, to take one factor, is at its baseline when it’s with other people, and it will fluctuate according to things like attachment style or the relationship that one has with that person. Which is really beautiful, but it inverts our kind of, well, maybe not common sense, but our institutionalized notion of what it is to be a person, which is to be alone. Yeah, wonderful. Well, I’m going to take this argument a little bit further if you let me, because this is something that we spoke about a couple of months ago, and I think Mark might have a different opinion, so it’d be cool to sort of unpick the thread. I have this slightly fuzzy, wishy-washy idea. OK, so I always think about a climber climbing a rock face or a climbing wall. And the climbing wall is a kind of beautiful, canonical example of an affordance-based landscape. It’s rich with affordances. It’s got the toe holds, the finger holds. So in many ways, in terms of active inference, what it allows the rock climber to do is to self-evidence, or find evidence for its own model of itself as a successful rock climber. Yeah. Now, that’s relatively well established in the literature. What isn’t established is my, again, slightly wishy-washy idea, which is that the rock climber is offering affordances to the wall. And so this is why, right at the beginning of our conversation, I spoke about mutual coupling. 
So what dynamical systems theory gives us, what autopoiesis gives us, what active inference gives us, is this idea that to exist is not to be this kind of reified self that takes an objective stance on the world. Really, the fundamental unit of analysis should be this dynamic mutual coupling between agent and arena, to use your terminology. Yeah. Do you think it’s an over... I mean, I’ve spoken to Mark about this, and he said it’s something like an overextension of the term affordances to say that the human being is offering affordances to the climbing wall. I know Mark isn’t here to defend himself, so I’m not going to put words in his mouth, and I will definitely raise this with him again. But just for your own ears, how does that argument sound to you? I like the argument, and yeah, I don’t want to speak ill of Mark. I’ll only speak to the propositions that you spoke on his behalf. And I mean, Mark and I work closely together. Mark is one of the best people I know. Mark’s a former student of mine. I’m very proud of Mark. But I actually agree with you. Now, that goes to another connection I make in my work that would probably be a little bit stranger to the ears of many of your listeners. They might say, I get why he wants to connect predictive processing to relevance realization and 4E cog sci. But now what you’re doing, and this is where I do a lot of what I call my deeper ontological work, and this is where the Heideggerian stuff really comes to bear, is all the work I do on Neoplatonism, which may sound like something very arcane. But again, here’s the proposal: once we really profoundly accept affordances, and once we accept that they are reciprocally realizing, I agree with you, the affordance not only discloses things about me, it discloses things about the world. 
And that’s a way in which things... like Heidegger’s notion of truth as aletheia, as an event, as disclosure, rather than a static property of our propositions. I think that... hang on. Now, as Filler argues, and as I’ve been arguing for a while, and John Russon, Filler’s book is called Heidegger, Neoplatonism, and the History of Being: Relation as Ontological Ground, relationality is actually the ultimate nature of reality. Heidegger is actually turning back towards this, because what does that give you? If you have that, and I think this is a fair way to put it, if you have that aletheic notion of affordance, then you are getting back to the Neoplatonic theory of knowing by conformity, knowing by participation. And the deeper argument goes something like this. If the fundamental grammar of your cognition, I don’t mean the content, your content can go wrong and all, but if the fundamental grammar, the bottom-up, top-down, all this stuff we’re talking about, if it’s not picking up on something fundamental about how reality is structured, right, then you face a kind of profound solipsism, a profound kind of skepticism. And I think that ultimately leads you into all kinds of performative contradictions, and I agree with Whitehead that they’re as devastating as propositional contradictions. There’s a longer argument there, and please check out some of my talks on YouTube about this longer argument, like the ones on Pickstock’s Aspects of Truth, where she says all the things we used to dichotomize this have broken down. We had the analytic-synthetic distinction; that has broken down under philosophical criticism. We had the theory-fact distinction; that has broken down. We had the is-ought; that has broken down. Like, think about whether relevance is an is or an ought. Well, right, it’s sort of both, right? 
And so her point is, all the things that were used to cleave the two worlds... and then what we tried to do is make the logical world the thing that stuck the two worlds together, and that collapsed under Gödel and other people. And so we’re back to the idea that we either go into the Lockean cabinet, in which we’re locked inside of our heads, somehow getting postcards sent to us that we think might be from an outside world we think might be out there, and we’re trying to build it from the postcards, and that’s never going to get us there. And so I think if you have an aletheic notion of affordance, you start to make the argument that how reality is realizing, and how we are doing relevance realization, are fundamentally participating in the same principles, in a profound way. And I think, yeah, I have no doubt that Mark might not want to do this. I would agree with Mark that this is not a direct derivation from sort of classic Fristonian presentations of active inference and predictive processing, Bayesian brain, we should settle on a name. But I think if you make the connections, deeper into Gibson, deeper into 4E cog sci, deeper into Heidegger, you get back to this notion of knowing where, when we know something, what we’re doing is our structural-functional organization is identifying with the structural-functional organization of the thing. And we are both participating in that same form, that same principle, that same grammar. And that has all kinds of metaphysical and ontological consequences, even ethical consequences. It means talking about the true, the good, the beautiful becomes something very relevant again. Yeah, yeah, no, this is exactly where I wanted to go, and it exactly aligns with the way I’m thinking about the free energy principle at the moment, which is that I have a skepticism about the internal-external distinction. Yes. 
And it’s something I spoke about with Karl. So a classic problem when one starts reading active inference, as I spoke about with Karl, is whether one would consider the agent, or the internal dynamics, as having a model of the external world or instantiating a model of the external world. Yes, excellent, excellent. And this comes back to cybernetic formulations of what it is to be a good regulator. So Conant and Ashby and all of that stuff from the 50s and 60s. So I have an inherent skepticism, because I kind of take the stance that actually all that the internal dynamics are doing is instantiating a particularized form of the external dynamics. And this is very easy once you take out the picture of a homunculized ego. If you look at it in terms of just the particle physics, we’re just an instantiation of some of the physics that defines everything else. We just happen to have a kind of complexified Markov blanket, or Markov-blanketed form of that. But there’s nothing particularly distinct about that. And we’ll come to consciousness, because maybe there is, and that, I guess, is the big elephant in the room here. But it’s funny that you mention Heidegger, because Heidegger, to my eyes and in my opinion, and then obviously Dreyfus and Merleau-Ponty and the phenomenologists who followed Heidegger, really, he’s the one who strikes at this established distinction between a subject and an object. So the Cartesian distinction. And then this is where his kind of gripes with Sartre came in. Now, my question is this: yes, if you read Heidegger, the ground of Dasein is this kind of interrelationality. Aletheia is the disclosure of truth in an interrelational relationship, so to speak. But there still seems to be some kind of subject, the hammerer who is hammering away at the nail. Now, from the perspective of the hammerer, he’s not some reified ego who is objectifying the nail. 
All that is really being witnessed there, from the level of consciousness, is the disclosure of activity. But still we can give a description of a hammerer and a nail. So to what degree, in your thinking, do you completely eradicate the subject-object distinction? And is there anything that you think predictive processing has to say about that? I think... well, the second question. All right, don’t worry about the second question. That’s a good question; I’m happy to think about it with you. I have put a lot of thought into the first question. And so, I mean, I think Filler is right in his book that our subjective-objective divide is our version of the appearance-reality distinction. And it’s just our version. In the ancient world, they had a different version. So ours is an in-out metaphor: how is the inner and the outer connected? And the ancient world’s is the problem of the one and the many: how is the lower and the upper connected? And once you see that, you realize that there’s a deeper structural problem that isn’t bound to the inner-outer or the emergence and the emanation. It’s the connectivity issue. And what Filler argues, and there’s a lot of other people convergent on this, is if you go within Aristotelian metaphysics, what’s most real are substances in the Aristotelian sense, which are things that can independently exist. Then this becomes a deep problem for you. And that kind of ontology, I would argue, and I can make the argument, drives you towards a nominalistic epistemology. And then you get Ockham’s version of it or Kant’s version of it, where all of the patterns and all the information are just in the mind, which means the mind is radically other, because it is the only place that information and intelligibility actually exist, and the world is profoundly absurd in a deep, deep way. And I think that, well, first of all, it makes science impossible, and it leads you into all kinds of existential and moral dilemmas. 
And if you then take a look at the logic of it, you can’t actually, and this is what Filler does very carefully, you can’t actually get relations out of properties that belong to the relata; the relations have to precede the relata in a very important way. So at a very deep level, I do want to call it into question. I do want to challenge it. First of all, it’s historical, it’s cultural. We pretend as if that is the only way in which human beings have related to the world. Like I said, even in the West, that’s not their problem in the ancient world. Their problem is the problem of the one and the many, not the problem of the in and the out. Now, what does this all ground in? I think this grounds in the common problem of the relationship between appearance and reality. And here’s where I can now say something that aligns with my previous argument about relationality. So, and I’m gonna borrow a term from Ricoeur here, the hermeneutics of suspicion. The hermeneutics of suspicion, and we’ve got it because of Freud and Nietzsche and blah, blah, blah, there’s historical reasons, and those are valuable critiques, by the way. But the hermeneutics of suspicion says appearances are distorting, they’re deceptive, they’re destructive, they’re disruptive. And what we should do is always question whether or not they are leading us into reality. Now, Merleau-Ponty has a great argument against that, which he gets from, well, maybe he doesn’t get it from there, but it’s in Plato, which is: wait, you’re treating real like red, as if you could look at an isolated thing and say that’s real. But real is a comparative term. So, of course, is illusion. To say this is an illusion is to say this is an illusion in comparison, in relation, to something that is more real. And that thing is in relation to something else, and so on and so forth, right? And what that means is, first of all, you have to see that those judgments are inherently relational. 
And secondly, you have to call into question the independence of the hermeneutics of suspicion. The hermeneutics of suspicion is actually parasitic on the hermeneutics of beauty, right? It depends on there being things that we agree on, where we say that’s where appearance discloses reality rather than distorting it, right? And as soon as you do that, that undercuts the deep divide, because you ultimately make these divisions between the one and the many, or the subject and the object, by getting a hermeneutics of suspicion into the appearance-reality distinction. That’s at least the argument I would make. Interesting, yeah. I like the suspicion of binariness inherent in all of this. It speaks to me, and I think it speaks to people who are interested in active inference and the notion of sort of philosophical vagueness as well. At what point something is real is fundamentally a comparative question, in the web of things that could be considered real. Yeah. I think that’s right. Yeah. I’m trying to do this without invoking a binary, but you mentioned earlier that 4E cognitive scientists do have their critiques of active inference and predictive processing. And I will stick with active inference, actually, for now, and I will explain why later. It’s not that interesting. So this is people like Tony Chemero or Ed Baggs, who are radical enactivists. And what they’re critiquing, in a sense, is an internalist picture of active inference. So, again, I’m not putting words into people’s mouths, but if you read Jakob Hohwy’s 2016 paper about self-evidencing, some of the language may to some people imply that there is a homunculus that has a representational picture of the world. And this, to the ears of an enactivist, is deeply worrying. I would love, as a 4E cognitive scientist with obviously a vested interest in predictive processing as well, I’d love to just hear your take on that debate. 
And again, without striking at binaries, whether you think those critiques are legitimate. Well, I mean, it depends what you mean by legitimate. I mean, there’s also the ones that Evan Thompson made, and I hold Evan in very high regard. They’re legitimate in that they are well-made arguments in peer-reviewed journals, and so we have no right to be dismissive of them. Well, if I was to say, let’s take as an axiom of active inference, this isn’t the case, but let’s just say for the sake of argument, that you have some internalist representations of a statistical model, right? So in a sense, you’re not just embodying a model, you actually have a model. So again, it comes back to that distinction we were talking about. Would an enactivist critique of that internalism be justified? So the reason why I’m hesitating is that this is landing in the swamp of what we mean by representation. I mean, does the thermostat represent the temperature in the environment? And everything I’m gonna say is controversial because of the swamp, so I want that understood, please, right? I would say no, because what’s needed is more: there might be some correlation, there might be co-variation, but think about the critiques of the co-variation model of representation, ultimately Locke’s model. So first we had the idea that representations have to be similar to what they represent; that’s how they represent. And then you have all the problems with similarity, and even Aristotle could bring that down. And then Locke replaced it with the co-variation model. To have a representation is: I have something in my head that reliably co-varies with something in the world, and that’s how it represents it. And then the problem with co-variation is it doesn’t give you the specificity, for example, of thought, right? So this is co-varying with a bottle, with a tool, with a man-made object. Which is it? 
Those are not the same things, those are not the same ideas, but this is co-varying causally with all of that. And you have the problem then of aspectualization. And the Lockean answer, of course, is we get aspects by doing representation, but I agree with Searle that that’s the wrong way around, that any representation is inherently aspectual. When I represent this as a bottle, I’m only picking up on some of its properties, insofar as they are relevant to each other and insofar as they are relevant to me. So representation depends on aspectualization, which depends on relevance realization. And this is not what you do with relevance realization: you do not represent all the facts, judge them to be irrelevant, and then zero in. So it’s ultimately non-representational. So what I would say is, I don’t doubt that at some level there’s something like representations, but I agree with many arguments that representations are more than co-variation; there’s this kind of aspectual character, an aspectualization, running through them, and they depend on relevance realization, which grounds in autopoiesis and is deeply intertwined with predictive processing. That would be my response. Splendid. And I guess what intuitively supports a more representationalist picture is consciousness. I think consciousness is going to become a recurring theme in this podcast. Well, it’s the Holy Grail, right? It is the Holy Grail. So I guess my question here is: if one was to adopt a radically enactivist view, there’s nothing in that picture which needs consciousness. So why couldn’t my mutual coupling with the world just happen with the lights off? So this is an argument, I guess, that’s rooted in Chalmers’ 1995 paper, which is: you can give me the function, but you can’t give me the why. Why are the lights on? 
And so I guess, in other words, my question too is: if we have relevance realization preceding perception, or grounding perception, then why do we have perception in the first place? So again, I’m gonna give the gist of an argument that I’ve spoken about at length elsewhere and am trying to get published and have presented at conferences and so forth. I don’t think you can separate the function and the nature questions, which is what Chalmers’ hard problem relies on: yeah, you’ve given me the function, but you’ve told me nothing about the nature. I don’t think function works ontologically like that. I think function has to be plugged into the ontology. And I think as soon as we’re talking about the ontology of anything within a living organism, we’re talking about something functional. So I think the questions have to be answered as interconnected. So let’s go back. Let’s say that you give me that any cognitive agent has to be doing anticipatory relevance realization or it’s not gonna be a general problem solver. And when it’s doing that, it has to be aspectualizing its world, right? It’s not taking in all of this, but this as a bottle, or the molecule as food, right? And then start to pay attention to the continuum of consciousness. Let’s look at the possibility, which I have experienced, that’s not even the right sentence, many people have, and this is Forman’s idea, of something like the pure consciousness event. And the pure consciousness event doesn’t have, well, actually I need a distinction here. I’m gonna claim it doesn’t have one type of qualia, but it has another type of qualia in it. It doesn’t have adjectival qualia. There’s no red, there’s no blue, there’s no cat, there’s no dog. You’re not even conscious of consciousness. You’re just conscious, right? But what it still has is the adverbial qualia. It still has a sense of here-ness, now-ness, and the here-ness is profound presence. 
That’s the language I’d use with you. The now-ness, eternity, the integration, right? The togetherness, everything is one. So all the adverbial qualia are still there, and you still have consciousness. That shows that the adjectival qualia are not necessary to consciousness. Now I have other arguments to show they’re probably not sufficient, because if I give you sort of atomic blips of blueness and greenness, and they’re not bound together, and there’s not a here-ness and a now-ness so you can in any way orient on them, I don’t think you have consciousness either. Right. So I’m not saying that adjectival qualia don’t exist. I’m saying that we’ve held consciousness hostage to them. And what I’m proposing to you is that what consciousness is doing is these adverbial qualia, which are just, right, salience, which is just relevance realization, which is, as far as we can tell, tied to the best evidence we have, it’s all controversial, about the function of consciousness. It’s tied up with working memory and attention, which are both doing higher-order relevance realization. We seem to need consciousness for situations of complexity, novelty, or ill-definedness, ones that are really demanding on relevance realization. So you can make a pretty good case, and there’s a lot of convergence, actually, that the function of consciousness is relevance realization. And I think if you plug in relevance realization, you can at least get all the adverbial qualia, and that, I think, gives you a lot of what consciousness actually is. Cool. Yeah, let’s stay on adverbial qualia, because I’d like to integrate what you’ve just said with a more active inference account of consciousness, of which there are several, and they’re also diverse. Well, I have to ask a question before that. Does Forman say that mineness, the feeling that this experience is mine, is one of his adverbial qualia that persists in the pure consciousness event? 
So that’s another thicket, because this is a raging debate, and Evan Thompson actually has a good anthology on this, on the self-no-self debate. Right. Right. And I think it’s reasonable, and it complies with most of the evidence, which of course is self-report after the fact, which is problematic, blah, blah, blah, blah. I agree with all of that. I’m not dismissing that. But that’s what we basically have to go on right now: that the ego, the narrative sense of mine and me and I, goes away. Whether or not that is a complete loss of the self, as where relevance realization is happening or something like that, I’m not convinced of that second thing. So if you would allow me, and this is a tortured distinction, a distinction between the ego and the self: I think in these experiences, very much, the ego goes away. I’m very suspicious of the claim that the self goes away, because people are readily able to recall these experiences. This is what Forman does report, and I would report to you, and they seamlessly integrate them into their autonoetic, autobiographical memory. There isn’t any weird disjunction, or "where was I during that?" Yeah, yeah, yeah, yeah, yeah. Okay, excellent. The reason why I asked that is because a lot of self-modeling in active inference is based on the work of Thomas Metzinger. So Thomas Metzinger, he’s such a wonderful addition to this conversation. So Thomas Metzinger. I had a wonderful conversation with him not that long ago. Oh really? I love him. But on my show. Amazing. I think he’s such an underrated and important philosopher. Oh, I totally agree, yeah, yeah. So he makes a distinction between phenomenal self-modeling and an ontological self. People intuitively, everyone listening to this podcast, even the way we act on a day-to-day basis, is in light of an ontological self. We can’t help but really think of ourselves as being this soul, this Cartesian soul. 
But anyway, Metzinger invokes this notion of phenomenal self-modeling, how the system appears to itself. And he’s got this minimal phenomenal self, which has been unpicked in other ways, but he normally speaks generally of presentness, mineness, and perspectivalness. Exactly what I’ve been talking about. Yep, exactly. And this gets fleshed out in some of the active inference work; people like Jakob Hohwy, and Karl Friston and Jakub Limanowski, have used Metzinger’s work for several papers. And then on the other end of that, we have what Karl termed an epistemic agent model, which is the system that sees itself as epistemic, in the sense that it can retrieve past memories to inform future decisions, and agentive, in the sense that it can conduct allostatic action, i.e. action to retain some homeostatic equilibrium given what it knows about the future. So, me putting on a jumper before I go outside, because I know it’s gonna be colder outside than it is inside. The reason why I invoke minimal phenomenal selfhood is because, and Karl actually outlines this in the podcast we did together, consciousness may be downstream, in a sense, of, one, a deep temporal model, and, from that, the sense of agency. And he has an argument referring to dimensions, not in terms of sort of millimeter size, but in terms of the degree to which you have embedded Markov blankets: that when we’re distant from our actuators, what we end up doing is we have to have a way of distinguishing what’s my action from what’s your action, or from how the world has acted on us. And in doing that, we come up with a self/other distinction, but also the notion of an agentive self, the idea that I can change the world in accordance with my preferences, as I mentioned beforehand. So to paraphrase Karl, what that seems to be suggesting is that consciousness is actually rooted in selfhood. And I know there’s this whole argument in Metzinger about whether that’s the case. 
That’s why I asked whether mineness is also part of these pure consciousness events, or whether it’s kind of a post hoc inference. Just wondering whether you had any ideas about that notion, and whether we can truly have consciousness without at least a sense of self. Right, okay. So let me try a few things. First of all, I don’t think it’s inevitable that human beings model themselves as souls. I think that’s a Western, post-Cartesian way, and then the soul as a monadic single substance, if I can put it that way. That’s not even the case in the ancient world of the West, certainly not the case in other parts of the world, et cetera. So I hesitate at the claim that that’s part of how the machinery must unfold. And then secondly, I worry about saying that because it isn’t a soul, there isn’t a self. This is a weird notion of a substance ontology being presupposed in an unquestioned manner. I mean, look, we’ve discovered most things aren’t substances. This table, which would be a classic example of an intuitive substance, is not a substance. It’s a dynamical system of atoms and quarks and blah, and we don’t go, oh, well, because tables aren’t substances, they aren’t real. We don’t do that. And so I wanna just note that I’m worried that there’s a substance ontology creeping in here and leading to certain conclusions. Now, that idea, and I can’t remember the names of the people, I apologize for that, but there is a fairly recent theory of consciousness that was trying to respond to all the Libet experiments and things like that, arguing that, well, consciousness is sort of after the fact, but consciousness is actually for the future. So the idea here is consciousness emerges out of the evolution of episodic memory. And so the function of consciousness is to allow us to create an episodic memory of something we have already done. And the point about episodic memory is, as soon as you get into episodic memory, you get into perspectival knowing. 
That’s what an episode is. You have a perspective on a situation: what you found salient and relevant, how your actions in the arena coupled or didn’t couple, the affect, all of that that’s so bound up with consciousness. And the point that they make, and I totally agree with this, is as soon as you agree that there are multiple kinds of knowing, not just propositional and procedural, but also perspectival and participatory, you get the argument that episodic memory affords perspectival knowing, which allows you to solve problems that you can’t solve without perspectival knowing. You can pick up on the world; the world discloses itself in ways that it is not otherwise disclosable to you. And then consciousness emerges as an optimization on the formation of episodic memory, so it is optimally transfer-appropriate for the future. And then you do get a sense of agency, but it’s not a billiard-ball agency. It’s this kind of longitudinal agency. And to tell you the truth, given the Humean critiques, that’s actually the kind of agency the self has. It has this kind of longitudinal agency around episodes. It doesn’t have this “I’m an uncaused cause, an unmoved mover within myself,” which I think is both a ridiculous proposal ontologically and ethically undesirable. Why would I want such a thing? It’s completely arbitrary. I think my life is best lived if I’m trying to change myself so that my thoughts are as determined by what is true, my actions as determined by what is good, and my perception as determined by what is beautiful as they possibly can be. I would like to lose all my freedom in that sense. And so I think if we move to the right level of analysis, and Gallagher makes a convergent argument about this, when we’re talking about agency and selfhood, we’re not talking at that limit scale. We’re not talking about where’s the first movement of the billiard-ball chain. 
We’re talking more about, no, no, how are we building this long-term virtual engine that enhances our predictive agency in the world? Yeah, absolutely. And Gallagher’s work is very convergent with Thomas Metzinger’s, of course. And just for our audience, these arguments and this notion of the narrativized self are well fleshed out in the active inference literature. So I could point you, yeah, I would probably start with the Friston and Limanowski papers. There are two wonderful papers about self-construction under active inference. And then I have a convergent argument coming out of Daniel Hutto, who is also in the 4E tradition, right, the narrative practice hypothesis. And he argues that any mindreading ability, any ability to see into other people’s minds, mapping onto other people’s mental states, attributing beliefs and desires to them, requires, well, for example, if I’m gonna attribute a belief and desire, I need to know something about your character. Are you lying or not? I need to know something about the context, the setting. What’s going on in this situation? Are you tired? Is that a small child, so you’re not really lying when you tell them at Christmas time that there’s a Santa Claus? Right, right. And I need to know what the conflict is, what the problem is. Notice what I’m talking about here. I need to know all the elements of narrative. Daniel Hutto points out we practice narrative incessantly. And unlike many of the other things, including language, which we scaffold for children, we scaffold narrative for our kids. I had to sit through the Teletubbies twice, in which you have to sit through these really impoverished narratives, because we’re scaffolding this up, because narrative ability gives us, right, the set of skills, the sets of states of mind with the perspective-taking therein, and the traits of character that allow us to pick up on other people’s mental states. 
And I think that is also the function of selves: to make us agentically predictive to each other. Right, exactly, exactly. And there’s another convergent argument in active inference, which is that only in the context of other people like you would you ever come to the inference that there is something that it is like to be you. Like, if you were alone. If you were alone. That converges with the whole Vygotskyan approach to the development of metacognition and self-awareness, which I think is right. Right, right. And of course, just pointing people in different directions, your colleague at Toronto, or former colleague, Jordan Peterson, has a whole literature on literature and narrative. And he has a conversation with Karl on his podcast, which is deeply intriguing, about these kinds of alignments of the free energy principle and narrative. John, I wanted to also speak about flow states. You’ve written about flow as the sort of locus of implicit learning. As you know, I’ve just been writing up a paper on flow from an active inference perspective. So I’d love to be able to see maybe where we align, where we don’t align, and try and unpick that. Just as a forewarning, the paper that I’ve written with my wonderful co-authors is not out yet, but it should be coming out soon. I have not yet read your paper because you suggested waiting. Yes, the paper has gone through modifications, as papers are wont to do. It’s in the final stages. So you have access to it. So please feel free to read it. I will read it then. I can give a very brief overview, so as to not disadvantage you. The basic thesis, the basic perspective that we take in the paper, is about self-modeling under flow states. So that’s the main thing that we’re looking at: the attenuation of an epistemic agent model. 
Because what we’re arguing is that certain precision-weighting mechanisms are leading the organism to undertake pragmatic action, seeking out pragmatic affordances, rather than engaging in what Karl or other authors would call epistemic foraging. Now, I think where we might have a point of difference is that I have a section on flow states and learning. And I think this is a really interesting point, because your paper speaks about implicit learning. Whereas, and to make it transparent, I’ve heard different perspectives on this, but my current opinion is that in flow states, what you’re getting is a reinforcement of the skills that you actually picked up through epistemic work. So I have the example of a violinist. The violinist must undergo a certain period of exploration, of epistemic foraging, which comes with this kind of self-talk, this real prominence of myself as a knowing thing. Of foraging, yeah. And then what happens is, over time, through learning, you get high precision over the beliefs about that action in terms of the technical detail. And then that becomes kind of the foundation on which you can then go and do more epistemic work. And that expertise development is step-wise, in a sense; I mean, you can picture it either way. But critically, where we differ is that I’m arguing that actually, in the moment of flow itself, you’re just garnering more evidence for the capacities and the policies that you already have. So I’d like to start there and see what you think of that claim. So, as you point out, Leo Ferraro, Arianne Herrera-Bennett, and I argued something different, which is why you’re bringing it up. We argued that learning is inevitably occurring even in the flow situation. And so there’s improvement. And that’s why, if the environment doesn’t have the capacity to renew challenges on you, you will very quickly fall out of the flow state. So the argument is, well, why don’t you just stay in the flow state? 
Well, the argument is because eventually you get a mastery over the environment, which means your skills start to exceed the demand. And so phenomenologically, when I’m in the flow state, like in sparring or lecturing, I’m finding a tremendous amount of insight and innovation coming out. Now, this is where it might get tricky, because is procedural innovation a restructuring of your information, such that you have new capacities in it, new emergent abilities, or is it just a reinforcement? And this is where we might get into Theseus’ ship, which is gonna be problematic. But one of the defining phenomenological features of the flow state, and I wanna be clear, I’m not pinning you down on this, but I do think it’s relevant evidence, is the ongoing sense of discovery. There’s a sense of discovery there. There’s a sense of coming to know things you did not know before. And of course, when the flow state is in much more comprehensive expertise, not like just playing tennis, or maybe just your optimal gripping on the world, people come to think they have learned something deeply profound about reality. So I tend to think that there’s evidence for transformative learning happening above and beyond just reinforcement learning. Okay, yeah, I like the return to the phenomena. Dreyfus has this phrase, which I will definitely use over and over again in this podcast, which is: when in doubt, return to the phenomena. The way that we have cast it is that there’s this idea in the literature about a hyperprior that the world is changing, so that we have a prior that the world is changing, and that is adaptive for us because we don’t get stuck in the same free energy minima. Now, if that prior is always at play, what ends up happening downstream of that is that my inference about my own abilities deteriorates over time. And we experience that, I guess. 
I mean, like, if I play tennis today, and then I play tennis in a week’s time, I’m probably gonna have less confidence in my own capacities in a week’s time than I do if I play tomorrow, following on from today. And so our argument is that the positive affect that’s part of the flow state is downstream of the surprise that’s generated when you actually violate that hyperprior that the world is changing, because what you’re getting evidence for is that your policies still work in a world that you would have thought didn’t lend itself to your policies working. And there’s this idea in Casper Hesp’s paper with his co-authors in 2021 that affect in active inference is about, basically, your model doing better than you thought it would do. And this actually comes a lot into what Mark talks about in terms of error dynamics. So I’m wondering if the… Yeah, well, let me… No, I know what you’re saying. Well, see, I think that’s right. And the problem of course is you can get an infinite regress. So I’ll say, well, the thing’s here, and you’ll say, well, there needs to be something, a hyperprior behind that, and then we can… that’s what I mean about Theseus’ ship. Yeah, yeah. So again, I would say, well, what are you getting better at? And what you’re getting better at, I would argue, is not… like, expertise is generally built within a specific domain. We can talk about the possibility of sort of meta-expertise if you want. But what I mean by that is once you’ve got expertise, you’ve gotten really good, within a certain domain, at formulating the problems you’re confronting as, for you, well-defined problems. That’s one way in which an expert is reliably different from a novice. A novice goes in: this is an ill-defined problem. The expert goes in: no, it’s this problem, and this is what you need to do, and this, and this, and this. Okay, so we agree with that. 
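The idea just mentioned, that positive affect tracks a model doing better than it expected to, can be sketched in a toy way. To be clear, this is an editor's illustrative reading of the error-dynamics story, not Hesp and colleagues' actual model; the function name and the fixed expected rate are assumptions made purely for the sketch:

```python
# Toy illustration (NOT Hesp et al.'s actual model): "affective charge" is
# positive when prediction error falls faster than the agent expected it to.

def affective_charge(errors, expected_rate):
    """For each consecutive pair of prediction errors, compare the observed
    drop in error against the agent's expected rate of error reduction.
    Positive values: doing better than expected (positive affect);
    negative values: doing worse than expected (negative affect)."""
    charges = []
    for prev, curr in zip(errors, errors[1:]):
        observed_rate = prev - curr                    # how much error actually dropped
        charges.append(observed_rate - expected_rate)  # > 0 means better than expected
    return charges

# Skills working in a world the agent expected to be volatile:
# errors shrink quickly, outpacing the modest expected rate.
charges = affective_charge([1.0, 0.6, 0.3, 0.15], expected_rate=0.1)
assert all(c > 0 for c in charges)  # better than expected at every step
```

The sketch only captures the sign of the comparison; in the actual active inference formulation this quantity modulates the precision assigned to beliefs about policies, rather than being read off directly.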
But I think what’s happening, and maybe this is the gray area between your position and mine, I think what’s happening is we’re discovering new ways in which we can turn ill-defined situations into well-defined problems. Something like: I didn’t realize I could adapt to that situation, but I can. And that sounds similar, but for me, that’s an insight experience. It’s an insight flow, because you’re getting an extension of your cognitive capacity, because you have restructured what you take to be, like, your problem formulation of the world. That’s… Yeah, I mean, to supplement that, I actually don’t disagree on that note, but the way I would formulate that, and this actually isn’t in the paper, or it’s in the paper in some sense, but it’s not a fully fleshed out argument, because it wasn’t hyper-relevant. I’m glad there’s another really quality paper on flow coming out. There’s really nothing. When we published our paper, we were literally the only paper talking about the cognitive processes at work in flow. Yeah, it’s a bizarre one. There’s very little on skillful coping generally within cognitive science, but as I said, I think that’s why I said there should be more phenomenology and cognitive science convergence. What we mention in brief is that flow, if we take the sort of macro perspective on flow, can also be this kind of humming at the boundary between the pragmatic and the epistemic. So maybe when you’re realizing, oh, I can reframe this kind of perplexing problem into actually something manageable, what you’re doing there is maybe just slightly exiting the flow, doing some epistemic foraging, and returning. But there’s something I want to make really clear to everyone listening as well. When we talk about epistemic action and pragmatic action in terms of expected free energy or active inference, it’s not like the agent goes, I’m doing pragmatic action right now, and now I’m doing epistemic action, right? 
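The maths being referenced here is the standard decomposition of expected free energy, in which a policy scores well either by yielding information or by yielding preferred outcomes. In one common notation from the active inference literature, for a policy $\pi$ with predicted outcomes $o$ and states $s$:

```latex
G(\pi) =
\underbrace{-\,\mathbb{E}_{q(o,s\mid\pi)}\!\left[\ln q(s\mid o,\pi) - \ln q(s\mid\pi)\right]}_{\text{epistemic value (expected information gain)}}
\;\underbrace{-\,\mathbb{E}_{q(o\mid\pi)}\!\left[\ln p(o)\right]}_{\text{pragmatic value (expected log preferences)}}
```

Minimizing $G(\pi)$ can favor both terms at once, which is the formal sense in which a single action can be a bit of both, epistemic and pragmatic, rather than one or the other.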
It’s part of the actual maths. If you look at the maths, firstly, you can do a bit of both, right? At the same time; that’s a thing. But it’s also that the action policies we’re talking about here, John and I, these are not protracted seven hours of just pragmatic action; no, these have to be very temporally thin, because the world is volatile, right? The world is constantly changing. And so you can’t just pin all your hopes, or put all your eggs in one basket, in terms of an action policy. And so actually, fundamentally, our divergence might well be a semantic thing, which is that I’m very much about the synchronic nature of being in flow right now. And, for simplicity, I have that boundary between the pragmatic and the epistemic. Whereas if you take a kind of more macro perspective, as that flow state, as the concerto, unfolds, you could be humming at that boundary and doing learning as well. Yeah, I really look forward to your paper, because I think that sounds like actually a powerful way in which the theories could be integrated together. I’m also interested in a question that’s emerging out of this. Well, I’ve already been interested in the question, but it emerges out of what we’re talking about, because I think there might be an additional thing, because you can flow in a situation and it doesn’t transfer to other domains. Video game addiction is the classic model. You can flow in the game, but you can’t flow in the world. So you get depressed in the world, you can flow in the game, and so you wanna stay in the game more and more and more. That’s very different than how I flow in Tai Chi Chuan. In fact, other people pointed this out to me: the flow states that I cultivate in Tai Chi Chuan are cultivated in such a way that they transfer broadly and deeply to many other domains. 
And I’m interested in this sort of ritual framework: what you get is this ritual and this philosophical framework, Taoism, and other practices, an ecology of practices around it, so that it broadens, it transfers in powerful ways. And I’m wondering what the differences are, and also, how does that show up? Is there a phenomenological difference? So now, really, really weakly, anecdotally, in terms of phenomenological practice, I do get a sense of a difference between when I’m just flowing and when I’m flowing in a ritual context. So if I enter into a violin hall, and let’s say I’m an expert in playing the violin, and I see a crowd and I see my violin and I see the whole stage, what I end up getting, so habits are quite well codified within active inference, is this kind of contextual cue that, in layman’s terms, although we’re saying this isn’t happening propositionally, it’s time for me to play the violin. And this can look like precision over beliefs about your own action. So this is, again, not something I’ve thought about necessarily with any depth, because this is just something that you’ve brought up. But I’m wondering whether, with that contextual cue, what’s happening when you’re doing Tai Chi, and then maybe when you’re lecturing, is that your system is seeing similarities in the contexts and doing similar precision weighting in a separate domain. Yeah, I think that’s right. And I think what philosophical frameworks do is they reverse engineer that. They try to find out what might be similar, or maybe even invariant, across many contexts, and then build that back into the specific context in which you are doing the practice, to exactly afford that transfer-appropriate processing. I think that’s right. I think there’s something else also going on with the cueing, and this goes into Apter’s metamotivational theory. Broadly speaking, we frame our arousal in two different ways. 
When we’re in a telic mode, where we’re working towards an external goal and the reward is found in that goal, then increased effort is experienced as frustration, and when it increases too much, I get anxiety. But if I’m doing something where the goal is the behavior itself, like making love or doing poetry, then the increased arousal, not infinitely, obviously, but the increased arousal is framed as positive, as excitement, right? And so, Apter talks about safety framing, right? Let’s call the first, the telic mode, work, and the paratelic mode, play. And I think ritual has to do with serious play, and I’ve got a whole argument about that. But I think also what you’re doing in the context is you’re trying to cue people into the play mode, because the play mode allows them to model themselves, their arousal, as positive rather than as negative. And I think there’s a deep connection between this paratelic framing and Csikszentmihalyi’s, I think accurate, claim that flow is autotelic: you’re doing it for its own sake. Right, yeah, cool. Yeah, I like the notion of effort here. I recognize that one of the fundamental characteristics of flow is perceived effortlessness. Yes, even though you can have sort of very peripheral cues that you might be expending a lot of metabolic energy. Exactly. And, given this is the Active Inference Institute’s podcast, for those interested in the computational framing of that within active inference, Thomas Parr has got a paper out this year on cognitive effort and active inference. So that gives a kind of nice computational modeling picture of that. Which actually leads me to a much broader question, because I know we haven’t got too much more time, which is: I’ve noticed in myself, to be totally candid, that I learned about flow and Csikszentmihalyi through you and through other people, but mainly as a philosophical notion. 
And now I’m viewing it as a computational notion. Yes, yeah. Do you worry about that? Do you worry that if we rely too much on computation, not just computational models, but maths, physics, we in some ways reduce the phenomena in a way that’s detrimental? Not just because it’s not romantic, but that we’re missing something fundamental? Well, it depends. I mean, that’s a really important question. We could do two hours on just that question. But if you think that a leveled ontology is actually the way ontology is, that the level at which we do science is as ontologically real as the quantum level we discover when we’re doing science, and I think you have to come to that conclusion, and I have extended arguments for that elsewhere, then you can make a clean philosophical distinction between explaining away in a reductive fashion and explaining in a way that actually enriches your appreciation for the phenomena. Now, if the computational stuff, let’s say the argument we were just exploring, has merit, then notice what we’re saying. We’re saying, well, what the computational modeling actually does is show why this sometimes very obscure philosophical framework is really important. It’s actually doing some really important work, and you can’t get rid of it. You can’t dispense with it. And that would mean that some of the stuff we’re doing, where we’re trying to commodify flow and take it out of those frameworks and do the thing we do and sell books, right, is actually misrepresenting the phenomenon in a way that we can philosophically and scientifically critique. And say, wait, wait, this phenomenon has the power it has because, as Csikszentmihalyi said, it’s an evolutionary marker for adaptivity, and then, well, what makes it adaptive is how broadly and deeply it transfers out of the situation. And then the philosophical framework really matters to what it is and how it functions. 
And so if you can make a distinction, and you need an ontological distinction to make this distinction, between explaining and explaining away, then I think it’s possible to say, no, no. When I do these things, and to be fair to me, I get a lot of people who are from various religious and spiritual backgrounds, and they come in and they say, thank you so much for your work. I now much better appreciate this experience or that experience, or the flow state, or this mystical state, because you didn’t try and tell me it’s nothing, but you tried to say, this is what it’s doing and this is why you like it and value it so much. But do you think your work, to look at that personal vantage point, do you think you’ve managed to keep… because, without being sort of embarrassingly lauding, people should watch Awakening from the Meaning Crisis, not only for the content, but also just for the way you present it, which is just so magical. Thanks. I mean, it’s really inspired a lot of the way that I think science should be done and philosophy should be done. Anyone who watches it recognizes there’s something kind of special going on there. The ideas are living through you, they’re breathing through you, you’re breathing through them. There’s this really beautiful coupling, again, we’ll come back to that, between you and the ideas. What is it about that kind of synthesis of the philosophy and the sort of more hard cognitive science, do you think, that made that project so successful? I suppose it’s a view of the role of cognitive science, of what cognitive science is doing. I think what cognitive science does is, well, I’ll try and put it sort of narratively. And I do this in the series. 
I think, you know, even the notions of mind and cognition we’ve been invoking are equivocal, because mind means one thing to the neuroscientist who’s studying the brain and looking at neurons and anatomical networks and perhaps functional networks. It means a different thing to the artificial intelligence person who’s building algorithms and heuristics and doing reinforcement learning and blah, blah. It means a different thing to the psychologists who study human behavior with experiments and running stats, and they talk about working memory; they don’t talk so much about neurons, right? And it means a different thing to the linguists, and a different thing to the cultural anthropologists. By the way, I include them because they were the people that have been studying distributed cognition and collective intelligence. They matter. If you think about it on this analogy, it’s like they’re different countries speaking different languages. And here’s the thing: that has great value. Specialization has great value. I’m not dismissing that. I am not dismissing that. But each one of these is talking about a different level. And here’s what I would state as a very plausible claim. I think it’s unlikely that these levels in reality, the brain, information processing, behavioral, linguistic, and sociocultural levels, are independent from each other. I think they cause, influence, and constrain each other. So we are missing something important about the mind by not getting clear about the relationship between these levels; we can be equivocating. And we have a fragmented notion if we don’t capture the relations of constraint and causation between these different levels of cognition and mind. 
And so I think the proper function of cognitive science is to use philosophy’s skills of creating bridging discourses, creating bridging conceptual vocabulary, a theoretical grammar, so that these different disciplines can talk to each other in reciprocally reconstructive ways, as a way of converging on the causal and constraint relationships between the levels, rather than trying to compete or say the bottom level is the only real level. So for me, they all live together. They are doing, and I’m sorry to invoke it again, opponent processing between each other. And that’s what inhabits me when I’m doing cognitive science, that vision. I call it synoptic integration. Yeah, well, it’s wonderful to see. That’s the cognitive scientist that I’m aspiring to be. I think you’ve called it a big picture cognitive scientist as well. Yes, and I felt very lonely for a while. I was doing relevance realization, which is a big picture. And then, to my great delight, another big picture, 4E cog sci, came along, and then, to my even greater happiness, another big picture came down the road, which is the predictive processing framework. And I think they are more convergent than adversarial. And, well, you’ve heard me arguing about how we can integrate them together. Excellent, yeah. I would say, and maybe it’s just bias, active inference, or predictive processing, is big picture cognitive science. From the physics and the maths all the way to, well, what we’re trying to do, the phenomena, to what it is like to be a human being, to have conversations like this. John, I can’t say it was better than I expected, because I knew it was gonna be wonderful, but you’re full of just thrilling insights, and speaking to you, listening to you, is always such a wonderful learning experience. Whether that happens in flow or without flow. 
But I just wanted to thank you so very much, I know you’re extremely busy, for giving us your time. Where can people find you? What have you got coming up? I’m sure people are curious. So the most immediate thing for this conversation is the talk I gave at the Predictive Processing Symposium in Leiden. It’s up on my channel; I think it’s the second most recent video on my channel on YouTube. We can put it in the description. There I try to go into the nuts and bolts of how you could integrate predictive processing and relevance realization theory together. So people might find that useful, and I’ve gotten a lot of good feedback on it for being clear and a good argument. So I would recommend that. For people interested in the broader implications of this, Awakening from the Meaning Crisis, and then After Socrates also. After Socrates is where I take all of this stuff and ask how you turn it into practices in order to overcome self-deception and enhance relevance realization, become more wise and virtuous. And so they can take a look there. On the arguments around neoplatonism, there are several videos; I try to connect neoplatonism to 4E cog sci and relevance realization and predictive processing, as we saw in this podcast. And I’m working on my third big series, Walking the Philosophical Silk Road, which will be on Zen neoplatonism, trying to see if we can bring an integration, an opponent processing, not an adversarial one, between Zen and neoplatonism, to give us a rich philosophical framework by which, like the Silk Road, we could trade ideas and move between worlds without having to descend into tribalism and other such things. Wonderful, wonderful. Well, again, it was absolutely my pleasure. I apologize, I’ve got a slight cold, so if I’ve been a little sniffly or nasal, we can blame it on the London weather. But I’d love to have you back on at some point. 
Your work is truly inspiring. So thank you so much, John, from me and the Institute. Thank you, Darius. I’m happy to come back on. And if it turns out that Mark and I come on together, that would be thrilling as well. Great, we will definitely get that sorted. All right, thank you. Thank you. Thank you for watching. This YouTube and podcast series is by the Vervaeke Foundation, which, in addition to supporting my work, also offers courses, practices, workshops, and other projects dedicated to responding to the meaning crisis. If you would like to support this work, please consider joining our Patreon. You can find the link in the show notes.