https://youtubetranscript.com/?v=51wcY39ouGs
Thank you for watching. This YouTube and podcast series is by the Vervaeke Foundation, which in addition to supporting my work also offers courses, practices, workshops and other projects dedicated to responding to the meaning crisis. If you would like to support this work, please consider joining our Patreon. You can find the link in the show notes.

So this is a special event. It's a symposium as part of the Prison Lab Speaker Series. We have Professor John Vervaeke. Professor Vervaeke is an award-winning professor at the University of Toronto in the Department of Psychology, teaching psychology, cognitive science and Buddhist psychology. He currently teaches courses in the Psychology Department on thinking and reasoning with an emphasis on insight problem solving, cognitive development with a focus on the dynamical nature of development, and higher cognitive processes with an emphasis on intelligence, rationality, mindfulness and the psychology of wisdom. He is the director of the Cognitive Science Program, where he also teaches courses on the introduction to cognitive science and the cognitive science of consciousness, wherein he emphasizes 4E models of cognition and consciousness. And last but not least, he is the creator of the transformative, I would say, YouTube series Awakening from the Meaning Crisis and After Socrates.

All right, so I want to talk about predictive processing and relevance realization, how we could integrate them further, and a tighter fit between function and phenomenology. We've seen some major frameworks being proposed here; predictive processing, of course, is one of them. A lot of talk about relevance, salience, relevance realization, 4E CogSci. And as Julian said in his wonderful talk, some of what I try to do in my work is bring about an integration between these frameworks, to see if we can get them to fit more tightly together, to give us better conceptual tools, a better theoretical grammar by which we can do our empirical investigations and carry out our theoretical debates. As Julian mentioned, I don't have to give all of my talk, as Julian has already given part of it.

So, recently, Andersen, Miller and Vervaeke in 2022 published a paper about integrating the predictive processing and relevance realization frameworks together. And it had to do with the idea that predictive processing and the relevance realization framework were both converging on a solution to a deep problem, which is known as the frame problem. Many of you probably aren't familiar with that, but I'll go into it right now. So if you look at the Stanford Encyclopedia of Philosophy, you'll find an article on the frame problem written by one of the best people on it, Shanahan. And Shanahan said the frame problem was actually two problems. The first problem is how we come up with a computational syntax for representing change that doesn't get all muddled up. And that turned out to be a very hard problem, and Shanahan was part of the solution to it. So that part of the frame problem, which was originally given that name by computer scientists, has been to a large degree resolved. What Shanahan goes on to argue, though, is that a deeper philosophical problem was thereby more clearly disclosed, what he calls the relevance problem. What is the relevance problem? The relevance problem is the fact that there is too much information to consider. There is so much information in the environment you could be paying attention to right now.
And combinatorially, you could look at that spot and then that spot. You have so much information in long-term memory, all the various combinations of it. There are so many different patterns of behavior you can engage in. And yet somehow you zero in on the relevant information, and you're doing it right now. Just to give you a brief idea about this, let's think about something that's very limited: the formal game of chess. Very limited, doesn't last that long, not connected to most of the world. The number of alternative pathways, the number of alternative sequences you could consider playing, outnumbers the number of atomic particles in the universe. So this is what you don't do: you don't check them all to see which ones are good and which ones are bad, because you would never finish. In fact, you won't get anywhere near doing it. Now that's just for something really limited like that. When you put it into a complex world, somehow, and this sounds like a Zen koan, you're ignoring most of the information out there and in there and what could be, and that's what is making you intelligent.

And I have been obsessed with this for my whole career. Now, thankfully, other people pay me to be obsessed. Because this issue is, I would argue, the core issue that's still facing the project of artificial general intelligence: the ability to zero in on relevant information. And no, the LLMs don't have that ability yet. They are actually parasitic on our ability to do it in many ways. I won't go into that; that's another topic. You can read my book if you want.

So, the standard places where we look to try and generate our explanation of cognitive phenomena: representations, rule following, etc. In the paper published with Tim Lillicrap and Blake Richards in 2012, we argued, and you can also take a look at my thesis from 1997, that representation, rule following and inference all presuppose the realization of relevant information, and therefore can't ultimately be used to explain it. Let me just give you one quick example; I can't do all of these because that would be an entire talk in itself. When I choose, and this is based on earlier work by John Searle, when I choose to represent anything, I of course don't get all of its properties, because the number of properties this actually possesses is uncountably large. So what I have to do in order to represent this is, from all of the properties I could pay attention to, zero in on some and bind them together insofar as they are relevant to me. I'll call this aspectualization. And you have to aspectualize whenever you're representing. That means in order to create a representation, I have to first be capable of doing relevance realization, because relevance realization is constitutive, it's generative, of representation. So you can't use the manipulation of representations to explain relevance realization. It's deeper than that. I can make similar arguments for rule following and for inference; read my published work. I'm just going to ask you to take it on faith that those arguments are there.

I'll give you a taste of how primordial relevance realization is. It's a very, very deep problem. And it's plausibly one of two meta problems that any problem solver has to solve. What are those two meta problems? Well, the first of the two things you need to solve in order to solve any problem is the one I've just mentioned: you have to zero in on the relevant information.
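To make the chess point above concrete, here is a minimal back-of-the-envelope sketch in Python. The figures in it are my own assumed round numbers (roughly 30 legal moves per position, roughly 80 half-moves per game, roughly 10^80 atoms in the observable universe), not values stated in the talk.

```python
# Rough, assumed figures only: this is an illustration of the combinatorial
# explosion described above, not an exact count of chess games.
branching_factor = 30      # plausible average number of legal moves per position
plies = 80                 # half-moves in a typical game
move_sequences = branching_factor ** plies
atoms_in_observable_universe = 10 ** 80

digits = len(str(move_sequences))
print(f"distinct move sequences: about 10^{digits - 1}")
print("atoms in the observable universe: about 10^80")
print("sequences outnumber atoms:", move_sequences > atoms_in_observable_universe)
```

Even with these conservative assumptions the sequence count comes out well above 10^100, which is the point of the example: an exhaustive check of the options is not an available strategy, so the relevant ones have to be zeroed in on some other way.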
We've been invoking task and goal relevance, and right, yes, but how would I give a machine the ability to do that? Context isn't an entity in physics, is it? Ceteris paribus, organisms that can zero in on relevant information across a broader variety of contexts will be more adaptive by being more generally intelligent.

What's the other meta problem? The other meta problem, the other thing that makes you adaptive whenever you're trying to solve any problem, is how much you can anticipate the world. Notice how you will often judge how intelligent another species is by these two things. When you notice that the animal has somehow zeroed in on relevant information, you go, oh, that looks pretty smart. The other thing is when you notice the dog is anticipating farther and farther into the future. You see what I'm setting up for these two meta problems: the relevance realization problem is going to be handled by relevance realization theory, and the anticipation problem is handled by predictive processing, but they're integrated together as problems. Why? Because the more I anticipate the future, the more I have longer-term goals, the more adaptive I am that way, the more the space of information I have to consider explodes exponentially, and relevance realization becomes more and more important. The two problems are deeply interwoven.

Now, what did we argue in the 2012 paper? We said relevance realization is occurring at a much lower level than semantics or even computational syntax; we argued it's occurring at a bioeconomic or bio-logistical level. And this is why it meshes deeply with 4E CogSci. It has to do with how the organism is ultimately spending its very precious metabolic and attentional and cognitive resources. These are known as cost functions. Now, what we argued is that as problem solvers, organisms are always trying to trade between two alternatives. One alternative is to become a general-purpose problem solver. Your hands are general-purpose things. You can use them in many different environments to do many different things. And that's great, because that means you can go to many different environments. The problem is, as you get more general purpose, you're pretty pathetic as a special-purpose thing. So if you've ever seen Castaway, Tom Hanks had his hands, but he didn't have a saw or a hammer, and well, he was lost. In very specific contexts, you need very specifically fitted special-purpose machines. But you wouldn't go for this deal: if I could do it, I'm going to saw your hands off, put a hammer on one and pliers on the other. Great. Because you know that sometimes you need general purpose and sometimes you need special purpose. And so what we argued the brain does is it's constantly trading between these two. And that's going to lead into a notion that Julian mentioned, one that I'm going to make a lot out of: trade-offs, trade-off relationships.

So one of the things your brain seems to be doing, and I'm just stating conclusions of other work here, is data compression. You can see this in deep learning machines, Hinton and stuff like that. It takes a bunch of information and it does data compression on it. For those of you who don't know what data compression is, you must have been taught this in high school. You get a scatter plot. Yes? Yes. Say yes. And then what did they tell you to do with that?
What did you have to draw on it? A line of best fit. Now, why do you do that? Very often it doesn't touch any of your dots, but it allows you to generalize. It allows you to extrapolate and interpolate. So that's data compression. What data compression does is it tries to find the invariant patterns in the environment so that you can generalize, so that you can go from one context to another. But that's not always what you need. Sometimes you need to do the opposite. You need to, and we had to coin a word for this in the paper because there wasn't one, particularize. What you have to do is actually try and capture more of those dots. You have to be more specific to your actual data set. And that allows you to be more context sensitive. And notice, when we talk about adaptivity, we often move, without realizing the tension, between: oh, I want to be able to move from context to context, but I want to be very sensitive to context. I want to be sensitive to this context and sensitive to that context. Those two are in a trade-off relationship with each other. The more you create something that generalizes, the less you get specific discrimination. This is, of course, the problem facing any experiment in science. As you generalize, you lose specificity, discrimination. As you go for specificity, your generalization is reduced. So you're getting the idea here, these trade-offs.

So what we then argued was that when the brain is doing this, it's doing something very analogous, strongly analogous, to biological evolution. Because what biological evolution is, is a system in which, first of all, you have a feedback loop, and that's reproduction. Goats make more goats, right? Feedback loop: what comes out goes back into the system, reproduction. And then what you have in Darwin's theory is two sets of opposing constraints. You have constraints that introduce variation into the system. Look around the room; there's variation. That opens up the possibilities for the system. And then you have natural selection, which kills off many of the options. And what you're doing is you're constantly varying and then selecting, varying and selecting, varying and selecting as you go through the loop. The same thing is happening here. The particularization of the data introduces lots of differences into your data representation, and then the compression kills many of them off. And then the particularization introduces more variation, and then you're cycling back and forth all the time. Very analogous, strongly analogous, to biological evolution.

So what is the loop in cognition that corresponds to the loop of reproduction? Well, it's a loop that has been invoked many times here already. It's your sensorimotor loop. The input is sensation, or sensory experience, then I move, and that feeds back into sensorimotor experience, because as I sense I move, and as I move it changes what I'm sensing, and the change in what I sense changes how I'm moving. And I'm getting this loop, and then what's happening is I'm introducing variation and selection on that.
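The line-of-best-fit versus capture-more-dots trade-off described above can be shown in a few lines of code. This is a toy sketch of my own, not from the 2012 paper: a low-degree fit plays the role of compression (generalizes, ignores individual dots) and a high-degree fit plays the role of particularization (hugs the dots in this sample, at the cost of generalization).

```python
import numpy as np

rng = np.random.default_rng(0)
x_train = np.linspace(0, 1, 12)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.3, x_train.size)  # noisy sample
x_test = np.linspace(0, 1, 100)
y_test = np.sin(2 * np.pi * x_test)                                       # the invariant pattern

for degree in (1, 9):                      # 1 = compress, 9 = particularize
    coeffs = np.polyfit(x_train, y_train, degree)
    fit_to_dots = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    generalization = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree}: error on the dots = {fit_to_dots:.3f}, "
          f"error on new data = {generalization:.3f}")
```

Neither setting is right in itself; which way to lean depends on the data and the context, which is exactly why the talk frames this as an ongoing trade-off rather than a problem with a fixed solution.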
That was our proposal for how your brain solves the problem of trying to fit itself as best it can to its environment. So that's very strongly analogous to how evolution fits the morphology of organisms to environments. And of course it's a universal process, but that doesn't mean it produces the same thing; it varies according to the environment. So what I want you to see here is that these trade-off relationships are being responded to by opponent processing. These are two processes that work in opposite directions, generalization and discrimination, compression and particularization, but they're locked together so that they're mutually self-correcting. They're always challenging each other, forcing each other to correct. And what we argued in that paper is that when you put various processes into this opponent processing relationship, you can actually overcome a standard limitation that most heuristics face. Do you know what a heuristic is? Is it okay to just invoke that? So, Wolpert, the no-free-lunch theorem: for every heuristic there's an area where it improves your performance above average, and you pay for that because there's an area where it actually degrades your performance. That's why we always talk about heuristics and biases. But if you put two heuristics that are complementary to each other together, they can actually correct each other and overcome those limitations. So what we thought is, if we could make a case for what Evan Thompson calls deep continuity between how we see life biologically evolving and cognition evolving, we would at least have the beginnings of a solution to this problem of relevance realization. Because it's not happening at a representational level or even a computational level; it's happening at this very ongoing bioeconomic level.

And of course, opponent processing is found at multiple scales in living organisms. And the examples I'm going to use are all relevant to this process of relevance realization. So for example, arousal. Arousal, and I don't mean sexual arousal, I mean metabolic arousal. Arousal is a problem for you, because sometimes you need to go very low and fall asleep, and sometimes you need to be very high because there's a tiger. And you can't be a Canadian and just be middling all the time. That doesn't work. Oh look, a tiger; you're dead. So you need to be able to do dynamic recalibration and restructuring. So what did evolution design? Your autonomic nervous system, which is the sympathetic and the parasympathetic. The sympathetic system is biased toward interpreting as much of the environment as it can as a threat or an opportunity and engaging you to be more active. The parasympathetic is biased the opposite way; it's biased toward trying to see as many situations as possible as safe and secure. And they're not running independently; they're pulling and pushing on each other. So what you do is you set the two biases against each other, remember overcoming the no-free-lunch problem, and they are constantly correcting each other, and you're constantly evolving how aroused you are. So your arousal tends to fit the situation well. Perfectly? No, it can't be perfect, because you'd have to search all of the options, and we've already established you can't do that.

Your attention. There's a lot going on in attention; that could be a whole talk on its own. But you can see opponent processing in attention between task focus and default mode. The task focus network, right, and whether it's one network or multiple networks, there's controversy about that right now.
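Here is a minimal dynamical sketch of the arousal example above. It is my own toy illustration, not a physiological model: one process ("sympathetic") is biased toward pushing arousal up when a threat cue is present, the other ("parasympathetic") is biased toward pulling it back down, and arousal at any moment is the running result of their mutual correction.

```python
def step(arousal, threat_cue, k_symp=0.6, k_para=0.3):
    """One cycle of opponent processing between two opposing biases."""
    sympathetic = k_symp * threat_cue * (1.0 - arousal)   # push up, harder when a cue is present
    parasympathetic = k_para * arousal                     # pull back down toward rest
    return arousal + sympathetic - parasympathetic

arousal = 0.1
environment = [0, 0, 0, 1, 1, 1, 0, 0, 0, 0]               # a tiger shows up, then leaves
for t, cue in enumerate(environment):
    arousal = step(arousal, cue)
    print(f"t={t} cue={cue} arousal={arousal:.2f}")
```

Neither process "wins"; arousal rises when the cue appears and relaxes when it goes away, tracking the situation without any component representing the situation as a whole, which is the point the talk is making about opponent processing.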
That's not germane to my talk, so I can just proceed. As the name says, it's trying to keep you focused on the task. Let's say the task is, you know, listening to this weird Canadian professor give this lecture without slides. Now you have another part, the default mode network, and what it wants to do is make your mind wander. Drift away. Is he going to be done? I'm getting hungry. Is everybody at the University of Toronto like him? Oh, and then you come back. You see what's happening? The default mode network is introducing variation in the things you could pay attention to, and then the task focus network is killing most of them off, but not all of them. And then from that: variation, selection, variation, selection. So your attention is constantly evolving its fittedness to the environment.

Just a couple of weeks ago I recorded a talk with Iain McGilchrist. And I think he's done careful work, and I think it's plausible that you see opponent processing between the left and the right hemispheres. Left is narrow, focused attention; right is wide. Left is well-defined problems; right is ill-defined problems. And you constantly want to play them off against each other: constant opponent processing. And as Julian mentioned, in your action you're constantly trading between exploring and exploiting. So you have, and we proposed some ways you could actually mechanize this, opponent processing. You've got these two systems, one pushing you to explore and one making you consider exploiting. Well, stay here, because I've got some resources; I'll just stay. But as I stay, I get diminishing returns, right? So I'm building up opportunity costs. Oh, there's gold and stuff over there, so I go. But then when I go there, I'm there, and I could be somewhere else. If I just wander, I starve. If I just exploit, I starve. So what do I do? Well, I set them off against each other in opponent processing.

So we argued that that's the basic proposal. Now there's a lot more detail to it, there's some math, but the basic proposal is that relevance realization is this multi-scale use of opponent processing that is basically evolving your arousal, your attention, your problem formulation and your behavior, exploring and exploiting, in this multi-dimensional fashion. So you're getting a very complex grip on your world, on how fitted you are to your world.

Now, in the Andersen, Miller and Vervaeke paper, and I don't have to go over this because it's already been alluded to here in a lot of the previous talks, predictive processing is also converging on the relevance problem, especially with its emphasis on precision weighting. And precision weighting is being offered as a model, in general, of attention, especially selective attention, as well as of focusing in, zeroing in, on the relevant information. Initially they tried just basic syntactic reliability. That's too weak. And then my heart sank when, in 2017, Clark said that what we're talking about with precision weighting is task relevance. Because then it was like, oh, you know what precision weighting presupposes? Relevance realization. And then, oh no. But thankfully, Mark Miller returned into my life, and there was glory: no, no, there's much more going on, there are more sophisticated dynamics.
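The explore/exploit dynamic described above can be sketched as a tiny foraging loop. This is my own toy model, not the mechanization proposed in the paper: a patch yields diminishing returns as it is exploited, and when its marginal return drops below an assumed average rate available elsewhere (the opportunity cost), the explore pressure wins and the forager moves on.

```python
import random

def forage(n_steps=20, decay=0.8, travel_cost=0.5, avg_rate_elsewhere=0.6):
    """Toggle between exploiting a depleting patch and exploring for a new one."""
    random.seed(1)
    total, patch_value = 0.0, 0.0
    for t in range(n_steps):
        if patch_value < avg_rate_elsewhere:        # opportunity cost now dominates
            patch_value = random.uniform(1.0, 3.0)  # explore: travel to a fresh patch
            total -= travel_cost
            action = "explore"
        else:
            total += patch_value                    # exploit: harvest the current patch
            patch_value *= decay                    # diminishing returns
            action = "exploit"
        print(f"t={t:2d} {action:7s} patch={patch_value:.2f} total={total:.2f}")
    return total

forage()
```

Only exploring or only exploiting both do badly here; the yield comes from the ongoing toggling, which is the opponent-processing point being made in the talk.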
What I want to do now is pick up on something I alluded to earlier, how these two things, predictive processing and relevance realization, are interconnected, and get a tighter conceptualization of the inter-theoretic relationships between predictive processing and relevance realization. So what I want to do here is extend the evolutionary analogy that we used in the 2012 paper. Remember, the proposal was that relevance realization is basically the evolution of your sensorimotor loop with the environment. I want to propose extending that. And what I want to do is talk about the grand synthesis in biology. It's now being criticized, with good reason, but it's still a great moment of inter-theoretic integration. So you had Mendelian genetics that gave you the mechanisms, the actual mechanisms of reproduction and metabolic control, but especially reproduction. And the idea in the grand synthesis is that if you take an organism that has Mendelian genetic mechanisms and you put it into an environment, what will happen is that thing will start to evolve. Well, it and its descendants, but you know what I mean: the species will start to evolve. Darwinian evolution emerges out of Mendelian genetics within a particular kind of environment. There's been some speculation that for the first 500,000 years there's no evolution, because in that environment, the prebiotic soup, the first microorganisms don't have to compete for resources. I don't know how you would establish evidence for that, maybe a mathematical model. But anyway, given our kind of environment, actually environments, varying across space and time, you're going to get the emergence of evolution that acts as a top-down constraint on reproduction. Who gets to reproduce? Yes? Yeah, that's the theory. The grand synthesis says the fundamental machinery is the Mendelian genetics, but you get the emergence of evolution, which you then need in order to explain developmental processes.

So in this analogy, I am proposing that predictive processing is the fundamental mechanics. Predictive processing is analogous to the Mendelian genetics. It's the fundamental mechanics. What it's basically doing, and of course I'm going to use an oversimplification here because I just need the gist for the argument I'm making, is reducing surprise within the reproductive cycle of the sensorimotor loop. You're trying to reduce being surprised. It's trying to anticipate the world well. Because it is doing that, it must, and that's the word I just used, it must encounter inevitable trade-off relationships. It's trying to reduce surprise in one place and it starts to get more surprised in another, because these things are in trade-off relationships.

Let me give you, we've talked about a bunch already, let's do another one that's really important, especially for something that's trying to make predictions. Because all predictions face the problem that statistics was invented to deal with, which is: you take a sample and you try to predict the population. And what's the problem? Sampling error. There are patterns in your sample that don't generalize to the population. You can do the other thing too, which they typically didn't talk about enough early on: there are patterns that you're not noticing in your sample that do generalize to the population. These have names in machine learning: bias and variance.
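As a deliberately oversimplified sketch of the "reducing surprise" gist above, here is a minimal prediction-error loop of my own construction (not a full predictive-processing or free-energy model): on each cycle the system nudges its prediction toward what it actually sensed, so surprise shrinks as long as the world stays regular.

```python
import numpy as np

rng = np.random.default_rng(42)
prediction = 0.0
learning_rate = 0.2
hidden_cause = 2.0                                  # the regularity out in the world

for t in range(15):
    sensation = hidden_cause + rng.normal(0, 0.1)   # what actually comes in this cycle
    error = sensation - prediction                  # prediction error: the "surprise" signal
    prediction += learning_rate * error             # update so as to anticipate better next time
    print(f"t={t:2d} sensed={sensation:.2f} predicted={prediction:.2f} error={error:+.2f}")
```

The trade-offs the talk goes on to discuss show up when you ask how strongly and on what patterns such a system should update, which is where bias and variance come in.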
Bias is when you're missing information in your sample that is accurately predictive of the world. Variance is when you are treating patterns in your sample incorrectly as if they apply to the world. When a system is biased, it is underfitted to the data. It's not sensitive enough. It's not picking up on all the patterns it should pick up on. When a system is undergoing variance, it's overfitted to the data. It's picking up on patterns that it shouldn't be using to predict the world at large. Does that make sense so far? Yes?

Now notice there's a trade-off relationship, and machine learning is bedeviled by this, by the way. Because how do I work on bias? I'll make my machine more sensitive, more powerful. I'll give my neural networks a bigger parametric space, capable of picking up more complex patterns. What happens when I do that? I'm increasing and increasing the chances I'll overfit to the data and fall into variance. So what I'll have to do is somehow disrupt that. And what do you actually see in machine learning? You see, at an institutional level, opponent processing going on. What they're doing is building increasingly more powerful machines that are increasingly sensitive at picking up patterns, and then they have to do an opposite, an opponent, process. They use disruptive strategies. They will throw in noise. They'll turn off half the network: dropout, right? Why are they doing this? When they throw in the noise, when they apply the disruptive strategy, it loosens the overfitting on the data. This lines up with a lot of what Julian was talking about. What you basically do is you make the system stupid for a bit. It's less sensitive. It stops picking up on too many patterns, and it gets released to make better predictions about the world. But you don't want to make it too stupid for too long, because then what you have is a stupid machine, and that's not good either. So what do you do? Well, you toggle back and forth. You do opponent processing, back and forth, back and forth. There's good reason to believe your brain is doing that.

And what we find is that in many different parts of machine learning, people are converging on these solutions. Why? Because the bias-variance problem can't be solved a priori. You can't create a rule that you can just put into every machine to universally apply. Why? Because how you play the trade-off depends on the environment you're in. And unless you can get an a priori rule that captures all possible environments, good luck with that by the way, you can't make an a priori rule. This has to be decided in a 4E fashion. It has to be decided in an environmentally bound way.

So what am I proposing? Predictive processing hits these, and there are many of these trade-offs; I was just using bias-variance as one example, okay? It hits these inevitable trade-offs, and what will it need to do? I'm proposing it will lead to the self-organization of opponent processing that has appropriated the trade-off, not just represented it but internalized it into its actual functioning, has appropriated the trade-off as opponent processing. Well, there's bias-variance in the world, okay, try to reduce the one, oh no, as I try, oh; what I'll do is self-organize and actually make opponent processing that is doing that. And this is part of Friston's point: it doesn't have a model, it is a model.
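A minimal numpy sketch of the "disruptive strategy" described above: inverted dropout randomly silences half of a hidden layer during training, deliberately making the network "stupid for a bit" so it cannot overfit to idiosyncratic patterns in the sample. This is an illustration of the general technique, not the implementation any particular system uses.

```python
import numpy as np

rng = np.random.default_rng(0)

def hidden_layer(x, weights, drop_prob=0.5, training=True):
    """One hidden layer with inverted dropout as a disruptive strategy."""
    h = np.maximum(0.0, x @ weights)                 # ReLU activations
    if training:
        mask = rng.random(h.shape) >= drop_prob      # keep each unit with prob 1 - drop_prob
        h = h * mask / (1.0 - drop_prob)             # rescale so expected activity is unchanged
    return h

x = rng.normal(size=(4, 8))          # a small batch of inputs
w = rng.normal(size=(8, 16))         # one hidden layer's weights
print(hidden_layer(x, w, training=True))    # noisy, partially silenced activations
print(hidden_layer(x, w, training=False))   # full, deterministic activations at test time
```

The toggling the talk describes is visible in the two calls: the disruptive mask is applied while learning and switched off when the system needs to perform, which is the back-and-forth between power and disruption.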
The opponent processing isn't a representation of the trade-off; it's an actual structural-functional organization that has appropriated it in order to keep the organism adaptively fitted to its environment. So once the opponent processing has emerged, the way evolution emerges out of Mendelian genetics, right? Once opponent processing has emerged out of the predictive processing, and of course I want to remind you again that this opponent processing is multi-dimensional and nested, I've just used single examples for explanatory purposes, then you get the evolution of the sensorimotor fittedness to the environment. That emerges and acts as a top-down constraint. You now have relevance realization operating. You now have relevance realization.

Now, the evolution of sensorimotor fittedness leads to optimal grip. This has been mentioned in a lot of the talks. It comes from Merleau-Ponty. Merleau-Ponty talked about the fact that when you're trying to perceive something, you're in trade-off relationships. Do I go in close to get detail? Do I go far to get a gestalt? Do I want the normal presentation? Do I need the oblique one? As I gain one, I lose the other. Do you see this? So there are inevitable trade-off relationships, and what you do is opponent processing until you find what is best fitted to the situation you're in. I need to use this as a letter R, so I orient it this way and I look at it this way. I need it as a projectile weapon: totally different. And there is no final optimal grip. There can't be, because there is no a priori way of deciding the final solution to the opponent processing that is capitalizing on the inevitable trade-off relationships. Optimal gripping actually presupposes opponent processing, and it presupposes a dynamical restructuring of opponent processing. It presupposes an evolution within opponent processing.

As I said before, this optimal grip is multi-dimensional, dealing with multiple trade-offs, and it thereby produces a salience landscape. The term originally comes from Ramachandran, but I'm trying to broaden it. You have one now. What is standing out to you? What's foregrounded? What's backgrounded? And it's constantly shifting. You could represent it topographically, with things that are more salient being higher up, like mountains of salience, and valleys of things you don't care about, and it's constantly shifting. If I point at her, she goes up. The back of the room? Until I said that, very low. Your left big toe? Very low. Now higher. And you get this evolving salience landscape.

And notice how it's working. Both the romantics and the empiricists are wrong. Against the romantics: I don't just project it onto the world; the world isn't a blank slate. Against the empiricists: I don't just receive it from the world. I'm sort of projecting and then binding myself into it and letting myself be regulated by it. PP, predictive processing, and RR, relevance realization, are not cold calculation. I've already argued that it's lower than that, more primordial. It involves you caring about this information rather than that information. Caring. Montague said it very well in his book, Your Brain Is (Almost) Perfect: the difference between you and computers is that computers don't care about the information they're processing, including the LLMs. You do. Why? And this is how it connects to 4E. Because you are a being that moment by moment has to take care of itself. You are a self-making being, Varela and Thompson, who were mentioned earlier. You're autopoietic.
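To give the topographic image above a concrete form, here is a toy sketch of my own: a salience landscape as a small grid of values in which "mountains" are what currently stands out, pointing attention somewhere raises a bump there, and older peaks decay, so the landscape keeps shifting as described.

```python
import numpy as np

def bump(grid, center, height=1.0, width=2.0):
    """Raise a Gaussian mountain of salience around the attended location."""
    ys, xs = np.indices(grid.shape)
    return grid + height * np.exp(-((ys - center[0])**2 + (xs - center[1])**2) / (2 * width**2))

salience = np.zeros((9, 9))
for target in [(4, 4), (1, 7), (7, 2)]:       # "if I point at her, she goes up..."
    salience = bump(salience * 0.5, target)    # old peaks decay, a new peak rises
    peak = np.unravel_index(np.argmax(salience), salience.shape)
    print(f"pointed at {target}, most salient spot now {peak}")
```

Nothing in the grid is intrinsically salient; what is foregrounded is whatever the ongoing dynamics have most recently raised, which is the sense in which the landscape is constantly evolving rather than fixed.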
You're self-making. Many things are self-organizing, a fire or a tornado. But you are self-organized to seek out the conditions that protect, produce and promote you. You have real needs. And that's ontologically important, because relevance is always relevant to something that has real needs. The autopoiesis grounds the relevance realization. The predictive processing, the relevance realization, is caring. It's connectedness. It's what your attention is bound to. And it's continually affectively laden, because you're always taking a risk. So this is an act of commitment. There's caring and connectedness and commitment. And this is Mark's work: this is being registered and regulated in a very fine-grained, very global, recursive way.

I have a whole other part of my work where I argue that that caring-connectedness is what people are talking about when they talk about how meaningful an experience is. They're using a metaphor. They're using the way a sentence has meaning. They're saying there's something about my experience: it's hanging together in the right way and it's fitting to me the way a sentence is understood by me, and fitting me to the world the way I can use a sentence to determine if something's true or false about the world. It's a metaphor. And contrary to what you might have seen from the media, purpose is not synonymous with meaning. There are four factors to meaning, and purpose is not the most important. There is coherence: it has to make sense; the world has to make sense to you. There's significance: you need to be connected to something that you deem is real. This is why, if I ask my students, do you want to know if your partner's cheating on you even if that will destroy your relationship, they say, yes, I do. Because they don't want it to be fake. And then the most important is mattering. This is a sense of being connected to something other than yourself that has a value other than your own egocentric concerns. So to determine if you have meaning in life, ask yourself these two questions: What do I want to exist even if I don't? And how much of a difference do I make to it now?

So you've got optimal gripping and salience landscaping, and they're interacting. Let's extend the analogy about optimal gripping. So I'm a martial artist. I do Jeet Kune Do, Fujin, Tai Chi, blah, blah, blah. And we often have what's called a fighting stance. You don't do anything with this. There's nothing you do with the stance itself. Why do you do it? You do it because it's a meta-optimal grip. What it does is give me the best optimal grip on all the other specific optimal gripping I need in the situation. I stand like this because I can get into so many different things from it that I need to be able to get into. And it's well designed to be a meta-optimal grip. It optimally grips my optimal gripping. And I have to use meta a few times because I have a PhD in philosophy.

So think about this. You have a stance. You have meta-optimal gripping and you have meta-salience landscaping. I think those two go to what Stegmaier, in his 2019 book, calls orientation. What is orientation? It's a philosophical analysis. He argues that before you can do anything else, you have to be oriented in the world. Before you can reason, before you can propositionalize, you have to be oriented. Orientation is what is needed for doing any kind of navigation through any environment or any kind of problem space. I think we can understand what that orientation is, because he uses stance language.
I don't think it's a coincidence that he uses stance language to try and talk about it: when I'm oriented, I take this stance towards the world. Yes, you do. You're meta-optimally gripping and meta-saliently landscaping. Orientation gives you two things that we take for primordially granted, but that are not part of the ontology given to us by physics: obviousness and ordinariness. This room is ordinary to you. You don't go, whoa, and it's obvious what you should do. Common sense works in terms of what's obvious and what's ordinary. Science has to explain common sense. My job is to try and explain how a brain generates obviousness and ordinariness in such a way that you would be a general problem solver the way you are. Even as you take that design stance, you realize, my gosh, this orientation is one of the most complex skills, and that was already alluded to multiple times; it's very complex. It's a dynamical system of skills and states.

I think that's what goes into the fluency effect, by the way. There's an effect in psychology: if I make information more fluent to you, you will make all kinds of judgments about it, that the information is good, that it's more likely to be true, et cetera. At first, fluency was thought to be just ease of processing. That's been destroyed. I think what we're getting with fluency effects is that the machinery that orients you has oriented well. It's obvious and it's ordinary. When the world has been made obvious and ordinary to you, you get fluency.

What's interesting, and Topolinski and Reber argued this very well in 2010, is that insight is a fluency spike. You know when you have that aha moment? I thought she was angry, but she's actually afraid. Aha, right? That's a fluency spike. You've got a fundamental reorientation. What you considered relevant and salient has shifted. The facts are all still there in the world, but your grip on them has substantially changed. You're reoriented. Your salience landscaping and your optimal gripping have re-engineered themselves in a profound way. Insight is a little bit beyond; it's a new obviousness. It's like, I didn't know, and now I know; I see into it now, insight. It's a little bit extraordinary. You get a flush of salience when you get insight.

Then Vervaeke, Ferraro and Herrera-Bennett, in 2018, in the Oxford Handbook of Spontaneous Thought, argued that flow is an insight cascade. Flow is that phenomenon talked about by Csikszentmihalyi, when you're in the zone. For example, you're a martial artist and you're just right there. You know you're putting in lots of effort, but it seems like grace, because the block goes up. I find myself blocking and punching. Who did that? It just comes out of me. People regard the flow experience as optimal in two ways. They seek it out more than pleasure. You need to understand this or else you can't understand rock climbing. Rock climbing makes no sense. You climb up that; it will hurt; you will fatigue yourself; and once you get to the top, you come back down. It's like a Greek torture. People rock climb because it gets them into the flow state. You see, because what they're doing is they almost literally, they almost literally hit impasse after impasse, and they literally have to reorient and restructure their salience landscape in order to keep going. It primes them to do one insight after another. The flow state is one of the true universals across genders, socioeconomic statuses, language groups, right? People describe it in detail using the same language.
This is something that's evolutionarily marked. It's powerful. Do you see what I'm proposing? Here's your orientation machinery, fluency; then it spikes with insight; and then you get it even more going, flowing, in the flow experience. Notice how the flow experience sits on the border between everyday experience, which Mark was talking about, and the mystical. Because in the flow experience, people say the sense of the narrative self completely falls away. They feel at one with their environment, a deep, resonant at-onement. And the world is super salient. It's like ongoing discovery. It's like a continuous aha moment. So now we've stepped even farther. The obvious has been taken into this radical kind of deep insight, flowing insight, and the ordinariness has been taken into something more and more extraordinary.

So here I'm going to end. I've got what, five minutes? It's nice when your talk fits. Okay. So flow is usually pinned to whatever expertise you can bring to bear. So I can flow in martial arts because I have developed expertise; I've been doing it for three decades. In fact, I think Taoism is the religion of flow, of flow induction. I do several Taoist practices. But what if there's a kind of very sophisticated expertise that we could bring flow to that wouldn't be domain specific? Let's go back to fluency for a second. Fluency is not domain specific. Fluency doesn't actually care about what domain you're working in, because as long as you get the fluency effect, you get all these results: it's really real, it's true, it's good, it's beautiful, it's optimal. Even more so for insight. In fact, insights can actually bias you and cause you to judge things as more real than they might be. Flow even more so. That's why, when you're flowing in a video game, it can be so addictive, because the video game seems really real in contrast to a dull, flat world.

But what if you didn't just flow as a martial artist, as a rock climber or as a jazz musician? What if you could put disruptive strategies in at that meta level, at the level of orientation, so that you could induce a flow state at the level of orientation? You're flowing at that meta level. Now, you'd have to use significant disruptive strategies, because these are higher up, right, at a meta level. But across the world, you see wisdom traditions using all kinds of disruptive strategies. Remember the machine learners? Machine learners applying disruptive strategies to their neural nets. Oh, and of course psychedelics; we had Julian's delicious talk about that. That's a good talk, right? But you also have sleep deprivation. I'm experiencing it right now; I'm seriously jet lagged. I think this is a good talk, but I don't know if it's true or not because I'm jet lagged. Right? There's sleep deprivation, there's sex deprivation, there's food deprivation, there's water deprivation, there's company deprivation, and we can drum all night, we can sing all night. It's unlimited. We have these disruptive strategies that aren't targeted at any one particular thing. One side note, a small digression, so I'll go over here because I'm digressing: if you are trying to solve an insight problem and you're at an impasse and you can't solve it, and I put a bit of noise on the computer screen, shake the picture around, you'll get an insight, because it's a disruptive strategy. So we have disruptive strategies working at the level of your fundamental stance towards reality, your orientation, and potentially inducing a flow state at that level.
I think that that is the explanation. I'm proposing that that's an explanation for a kind of mystical experience that's just beyond the flow state, which is the resonant at-onement. We've talked a lot, and we should by the way, about these interior kinds of mystical experiences, where I'm going deep inside the guts of the mind. We might give some priority to those because we're academics, which predicts we're probably introverts, but there are mystical experiences that are directed outward: I'm at one with all of reality; I have a sense of how it all hangs together, how it's all true, good and beautiful. You get a sense in these higher states of consciousness of the really real. I call this ontonormativity: onto for being, and normativity for this is the best, the really real.

This is what's weird about these. This is how you normally judge realness: you normally judge realness against ordinary obviousness. You have a weird freaking dream and you go, oh, that's not real, because it doesn't line up with this. You take a psychedelic, and DMT is a weird one because people tend to believe in the space elves, we'll put that aside, you come back and go, oh, that was a trip and everything, but that's not real; this is real. What's interesting about these experiences is that people go into them and come back and say, this isn't real, because that is really real. Julian pointed to some research, and I would also point to the research of Yaden. What do people do? They transform their selves, their lives, their identities, because they want to enhance that orientation, that connectedness, that mattering. They want to conform to that really real, because it makes more and more meaning in life. That meaning is efficacious in making their lives reliably better.

There isn't knowledge to be gained from these experiences; sorry, I mean knowledge of reality. There's knowledge to be gained about the psyche. We've been fixated too long on knowledge, and what I propose to you instead is that people are getting this higher-order stance, this meta-state, meta-trait, these meta-skills, that is enhancing their ability to zero in on relevant information in complex, messy, ill-defined situations. They get better because they can take a kind of situation that disrupts the very fabric of the self, and they can come back and say, give me more. I put it to you that individuals who can zero in on relevance under those circumstances, in ill-defined, messy situations with complex interacting dynamics, those are wise individuals. We should be looking for how these experiences are conducive to wisdom, rather than trying to establish, because we've not been able to establish it, that they give us some sort of arcane knowledge about the ultimate fabric of reality. Thank you very much for your time and attention.