https://youtubetranscript.com/?v=Wex12GhUFqE

Welcome back to Awakening from the Meaning Crisis. This is episode 30. So last time we decided to dig into this central issue of realizing what’s relevant. And we’re following a methodological principle of not using or presupposing relevance, or a capacity to realize relevance, in any purported cognitive process or brain process that we’re going to use to try and explain that ability. I gave you a series of arguments that we can’t use representations to explain relevance, because representations crucially presuppose it. And then we took a look at some very interesting empirical evidence that really comports very well with that: the evidence surrounding FINSTing and your ability to do an act of demonstrative reference, salience tagging, just making the here-ness and the now-ness of something stand out. We then drew a few conclusions about the meaning we’re talking about in meaning in life, that connectedness: that connectedness is ultimately not generated by representations. Again, I’m going to keep saying this: I’m not denying that representations and beliefs at that level can alter or transform what we find relevant. We’re talking about the explanation of the phenomenon, not how it is causally affected by other aspects of cognition.

We then took a look at the syntactic level, the computational level, and saw arguments that neither inference nor rules can be used to explain the generation of relevance, precisely because they also presuppose it. We looked at trying to deal with relevance in terms of some sort of internal module dedicated to it, and that won’t work. It’s homuncular, and relevance realization needs to be scale-invariant, or at least multiscalular: it has to be happening simultaneously in a local and global way. And that, again, points towards something else we noted about any theory: it has to account for the self-organization of relevance that is demonstrated in the phenomenon of insight. We then saw that a theory has to use explanatory ideas that point to processes that are, at least in the originary sense, internal to the relevance-realizing system. I tried to get clear about how not to misunderstand that. What I meant was that the goals that govern relevance realization initially have to be constitutive goals. They cannot be goals built upon representing the environment in a particular way. Instead, they have to be the constitutive goals that are part of an autopoietic system, a system that is self-organized because it has the goal of preserving and protecting and promoting its own self-organization. And that draws deep connections between relevance realization and life, and between relevance realization and being an autopoietic thing. And, of course, as I’ve already mentioned, relevance realization processes have to be multiscalular. They have to be self-organizing. They have to be capable of developmental self-transcendence, self-correction, insight, etc. We noted along the way how this links up with an argument about how the propositional depends on the procedural, which depends on the perspectival, which is then grounded in the participatory.

But we hit a roadblock, and that is where I want to now zero in. I had been treating them as identical, but I’m now going to make a very important theoretical distinction between a theory of relevance and a theory of relevance realization. Because what I want to argue is that there cannot be a theory of relevance, at least a scientific theory of relevance.
And since we’re playing in the arena of science, of scientific explanations, I’m not going to keep repeating that qualification. I’m just going to say there cannot be a theory of relevance, meaning a scientific theory of relevance. Why not? Well, this has to do with an issue that was originally brought up by Chiappe and Kukla in an article, a commentary in Behavioral and Brain Sciences. Dan Chiappe and I have published work together, and we’re collaborating right now on a work on telepresence. I recommend you take a look at the work of Dan Chiappe. But they made a point, and I think this point is very well taken. It’s a point that goes back to J.S. Mill, but you can also see an updated version of it in the work of the important philosopher and philosopher of science, Willard Quine.

So this has to do with how science works. Now, of course, the philosophy of science tackles all kinds of controversial claims about what science is and how science works. But I take it that one thing that is agreed upon is that science works through inductive generalization, or it tries to generate inductive generalizations. What do I mean by that? In science, you study a bunch of things here, and then you make predictions and claims that the same will be the case for all things of that type. So here I study a bunch of, you know, here’s a hunk of gold, here’s a hunk of gold, here’s a hunk of gold. I come up with a set of features or properties. Does that generalize to all the instances of gold? And if it does, then I come up with an inductive generalization. I want to get the broadest possible inductive generalizations that I can, because that’s how science works. It’s trying to give us a powerful way of reliably predicting the world. It’s doing other things too, very importantly; it’s also trying to give us a way of explaining the world. And I’ve tried to make it clear that this is not meant to be an exhaustive account of science. It’s meant to point to a central practice within science, but a constitutive practice nevertheless. If you can’t generate inductive generalizations in your purported endeavor, then you don’t have a science. This is why pseudosciences like astrology fail, precisely because they cannot do inductive generalizations.

Okay. You say, okay, great. So what J.S. Mill pointed out is that that means we need what’s called systematic import. And this is so relevant to what we were talking about last time. In fact, even using the word import is really relevant. What that means is that science has to form categories, because that’s what I’m trying to do, right? I’m gathering a bunch of things and saying they belong together, they’re the same type of thing. They’re all instances of gold; they all belong to the category of gold. Science has to form categories that support powerful, meaning as broad as possible, inductive generalizations. To be able to do that is to have systematic import. Now, what do I need? Think about reverse engineering this. In order to have reliable, and that’s what powerful means, reliable and broad inductive generalizations, what do I need to be the case here? Well, I need there to be important properties for that category. One thing is I need the category members to be importantly homogeneous. There’s a sense in which all the members of the category have to share properties. That’s me indicating they’re all sharing properties, right?
And it’s because they share properties that I can make the inductive generalization that other instances will also have those important properties. That’s exactly what I need, because if the members are heterogeneous, there’s no set of properties I can then extend in the generalization. They have to be homogeneous.

Now, this gets us towards something very important. This gets us towards an idea from Quine, because there’s a lot of discussion about this word, essence, right now in the culture, and I think the discussion is sort of too polarized. And this has to do, again, with a point made by Wittgenstein, because I want to put Wittgenstein and Quine together on this. Wittgenstein is a very important modern philosopher, and this is what some of the critics of essences say. Because, if you remember, according to Aristotle, and we talked about this when we talked about Aristotle, an essence is a set of necessary and sufficient conditions. And what Wittgenstein pointed out, and remember we did this with the example of a game, is that many of our categories don’t have essences. There is no set of necessary and sufficient conditions that will pick out all and only games. There’s no set of necessary and sufficient conditions that will pick out all and only tables. So many of our categories don’t have essences. That was Wittgenstein’s point. Now, I don’t think you could ever pin Wittgenstein to the claim that no categories have essences. And that’s what some people, I think, have concluded: that no categories have essences, that everything is just nominal description. But that’s not right. Because, of course, non-controversially, for example, triangles have essences. That’s why Aristotle thought many things did. If it has three straight sides and three angles and it’s enclosed, it’s a triangle. That’s an essence of a triangle. Now, that’s mathematical.

Here’s what Quine argued, or at least what I take to be an interpretation of Quine that is philosophically defensible. Things like triangle have deductive essences; these are the essences that we can deduce. But what science discovers are inductive generalizations. And if they’re powerful enough, science gives us the essence of something. The essence of gold is the set of properties that will apply to all and only instances of gold. That homogeneous set that can generalize is what an inductive essence is. Now, what that means is that a couple of ways of talking in the media, or in the general culture, should not be so uncritically accepted. Essentialism isn’t bad for things that have essences. Why would it be? Essentialism is the mistake of treating a category as if it has an essence when it doesn’t. It is a mistake for things like games and tables, precisely because they don’t have an essence. It is not a mistake for things like gold, because gold has an essence, inductively, or triangles, because triangles have a deductive essence. It is too simplistic to say everything has an essence or nothing has an essence. Now, it cuts both ways. It cuts both ways. There are many things that don’t have essences. That’s what’s right about the critique of essentialism. But it is wrong to conclude that there are no essences. Wittgenstein’s point is not a deductive argument that concludes that there are no essences; it only points out that many categories don’t have essences. So, that means it is possible to do a science when we do what?
When we categorize things in such a way that we get this, because if we get this, then we have the essential properties of the thing. Now, the reverse is also the case. That’s what I mean by cuts both ways. We can’t have a scientific explanation of everything. We can’t have a scientific explanation of everything. If the category is not homogeneous, if it does not support powerful inductive generalizations, if it does not have an inductive essence, we cannot have a science about it. It doesn’t mean those things don’t exist. It means we cannot scientifically investigate them.

So, for example, I can’t have a science of white things. Now, are there things that are white? Of course there are. This blackboard is white. This pen, at least part of it, is white. This piece of paper is white. To say there are white things in this room is to say something true. Notice that: there are truths that are stateable. But the category I am using, and this is J.S. Mill’s example, white things, does not support any inductive generalizations other than that the thing is white. Now, don’t tell me, well, we can have a theory about light and whiteness. We’re not talking about a theory about light. We’re talking about a theory about white things. Knowing that this is white, what does it tell me? So, I study this white thing. What do I learn about it other than that it’s white? Is there any other important shared property? Well, no. Well, they’re both flat, but this one is vertical and this one is horizontal. You see, it doesn’t generalize. It doesn’t generalize. So, it is correct to say that there are many categories that we form for which we cannot generate a scientific theory or explanation, precisely because those categories are not homogeneous. They don’t have an essence. Now, notice what that doesn’t mean. The fact that I can’t have a scientific theory of it does not mean that white things are made out of ghosts or dead elves or ectoplasmic goo. It licenses no metaphysical weirdness. It just says that the category functions in the sense that I can make true statements about its membership, but it does not function insofar as supporting, through systematic import, powerful inductive generalizations.

What else do I need? Well, let’s compare the white things, as J.S. Mill did, to horses. You see, we depend on the fact that horses seem to have an essence. Now, whether or not they ultimately do at some sort of species level is something really argued about in biology, and I’m not trying to be negligent of that, but I’m also not going to try and resolve it. What did Mill mean by his example of a horse? Well, what he meant is that if I learn a lot about this horse, it will generalize to other horses. It will generalize. So, horses are in really important ways homogeneous. That’s why we can have veterinary medicine and things like that. I can learn about the category in terms of horses that have already been studied, and it will generalize well to horses that have not themselves yet been studied. That’s fine. What else? And, I don’t mean this to be a pun, I need the category membership to be stable. That doesn’t mean horses in stables, right? What’s in the category, the kind of things that are in the category, should be stable; it shouldn’t be constantly shifting or changing. Because, and this was a point made a long time ago by Plato, if what’s in here is constantly shifting, and now I don’t mean the particular members.
I mean that what kind of thing is in here is constantly shifting; then of course I can’t do inductive generalizations, because I will get into equivocation. I will get into equivocation. So, the word gravity originally meant having to do with being drawn down into the grave, as we mentioned; it had to do with a sort of important seriousness. But now we use that term to describe a physical mode of attraction and interaction. And if I don’t notice the change in what goes into my categorization, I’m not making a good inductive generalization; I’m engaged in equivocation. And as I’ve tried to show you, equivocation is a way in which we make invalid, often ridiculous arguments. So it needs to be stable.

We also need the properties of the objects to, in some sense, be intrinsic, or at least internal, inherent. This comes from an argument by John Searle. Many objects have properties that are not intrinsic to the object, but come from the object’s relationship to us, for example; they are attributed properties. So a non-controversial example is something being money. Now, here it is again: is money real? Well, a lot of my life is bent around money, so in that sense it seems to be real. But does anything intrinsically possess the property of being money? If I take out some coin or piece of paper, is it intrinsically money? No, it’s only money because we all attribute it as being money. We all treat it as money, and that’s what makes it money. If we all decide not to treat it as money, it ceases to exist as money. We can’t do that with gold. Notice what I’m saying. We could all decide that gold is no longer valuable, no longer analogous to money, but we can’t all decide that gold no longer possesses its mass or its atomic number. We can’t do that. Now, the thing you have to remember is that many things we think are intrinsic are actually attributed. This being a bottle is attributed, because what it means to call it a bottle is the way it is relating to me and my usage of it. If there had never been human beings, and this popped into existence because of some quantum event near a black hole or something, it isn’t a bottle. It’s an object with a particular mass, a particular structure, but it’s not a bottle, because being a bottle is something that it gets in its relationship to me. Now, again, did I just show you that everything’s an illusion? No. Again, the fact that there are many things that are genuinely relational, genuinely attributed, doesn’t mean that I’ve shown you that everything’s false. I’ve just shown you that you can’t do science unless the members of your category are homogeneous, stable, and intrinsic, or at least inherent, because that’s what you need to have powerful inductive generalizations.

Let’s look at something that fails all of these tests. Things that happen on Tuesday. Events that happen on Tuesday. Tuesday events. Are there events that happen on Tuesday? Yeah, and there are even events that can happen on multiple Tuesdays. We categorize things in terms of the days. We categorize events in terms of the days. Now, are all the events on a Tuesday homogeneous? No. Are all the events on Tuesdays, many different Tuesdays, homogeneous? No, they’re very, very different and widely varying. Is it stable, the kind of thing that happens on Tuesdays? Is it the same every Tuesday? No, that would be Groundhog Day or some kind of horrible Nietzschean hell. And what about Tuesdayness, being Tuesday? Is that inherent?
I mean, is there Tuesdayness in the room when it’s Tuesday? It can’t be, because there was a time when we didn’t even have calendars. But notice how hard it is to realize that. There’s no Tuesdayness. So, you know, can I make true statements? Last Tuesday, I went to a movie. Is it true? Yes. Can I do a science of events that happen on Tuesday? No, I can’t, because it doesn’t satisfy these criteria. Does that mean that Tuesday is made out of ectoplasmic goo? Do Tuesday events actually take place in a different dimension? No. None of that. None of that. Okay. You have to be careful, and this is what we learn from Wittgenstein, you have to be very careful about how the grammar of our thought is regulating our cognition.

Now, what I want to try and show you is that relevance does not have systematic import. Relevance, relevant events, are like Tuesday events. Okay, let me show you. The things that I find relevant, other than my finding them relevant, what do they share in common? I might find this pen relevant. I might find my knee relevant. I might find this air relevant. I might find the fact that it’s a particular day in May relevant. Do you see what I’m showing you? The class of things we find relevant is not homogeneous. Other than our finding them relevant, there is nothing that they share. It’s exactly like the class of white things. What about stable? So when I find something relevant, do I always find it relevant? This is relevant to me now; will it forever be relevant to me? No, things are not stably relevant. Relevant one minute, irrelevant the next. You may say, oh, there are things that are always relevant to you. Always? I don’t know. They’re very hard to find. Maybe oxygen, maybe? But that’s only relevant to me if I want to keep living. Some people commit suicide by suffocating themselves to death, because that was more important to them than oxygen. It’s not stable. And here’s where I think we’ll get into some difficulties, I suppose, with some people. But is relevance internal or intrinsic to the object? If there had never been human beings or sentient beings, could this have relevance? It doesn’t seem that that’s at all a plausible intuition. Relevance always seems to be relevance to someone or something. And that, of course, I think is going to be bound up with the idea that relevance ultimately has to be relevant to an autopoietic thing. Only things that have needs, only things that are self-organized so that they have the constitutive goal of preserving their self-organization; that’s what it is to need. I need food because I am self-organized to preserve my own self-organization, which means I need food. Food literally matters to me. Literally matters to me. It’s hard to see how things could be relevant unless they were in relationship to an autopoietic thing.

Relevance is not something for which we can have a scientific theory. I want you to notice what’s come along the way. Relevance is not intrinsic to something. There can be no essence to relevance; nothing is essentially relevant. That’s the whole point of talking about the problem of essentialism. And relevance is not stable; it’s constantly changing. Okay, so what do we do? Well, first of all, we add to our theory, sorry, to our set of criteria we need for a good theory: our theory of relevance realization can’t be a theory of relevance detection.
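To make the homogeneity test concrete, here is a minimal toy sketch (an editorial illustration, not anything from the lecture; the property lists are invented): it treats category members as sets of properties and checks whether they share a non-trivial core that could support an inductive generalization.

```python
# Toy illustration of "systematic import": a category supports inductive
# generalization only if its members share a non-trivial core of properties.

def shared_core(members):
    """Return the properties common to every member of the category."""
    sets = [set(props) for props in members.values()]
    return set.intersection(*sets) if sets else set()

# Invented property lists, purely for illustration.
gold = {
    "sample_1": {"atomic_number_79", "metallic", "dense", "conductive"},
    "sample_2": {"atomic_number_79", "metallic", "dense", "conductive"},
    "sample_3": {"atomic_number_79", "metallic", "dense", "conductive"},
}
white_things = {
    "paper": {"white", "flat", "flammable"},
    "pen": {"white", "cylindrical", "plastic"},
    "board": {"white", "flat", "rigid"},
}
relevant_to_me_now = {
    "this_pen": {"white", "cylindrical", "plastic"},
    "my_knee": {"organic", "jointed"},
    "a_day_in_may": {"temporal"},
}

for name, category in [("gold", gold), ("white things", white_things),
                       ("things relevant to me", relevant_to_me_now)]:
    core = shared_core(category)
    print(f"{name}: shared core = {core or 'nothing at all'}")
# gold keeps a rich shared core (an inductive essence to generalize from);
# "white things" shares only whiteness; "things relevant to me" shares nothing.
```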
I’ve given you a sustained argument for that. This is not how relevance realization works; it’s not that this has relevance and I detect its relevance. Now you might say, well, maybe relevance realization is just projective. I’m going to reply to that too; I think that’s also inadequate. In order to see how it’s inadequate, and in order to get out of the bind that we seem to be getting into, I want to open up the distinction between a theory of relevance and a theory of relevance realization with an analogy. And it’s going to turn out to be, I hope, a very helpful analogy. And this will also, I think, help us to see why relevance is not something we merely project onto the world. This is why I have a sustained criticism against both the empiricists, we just detect it, and the romantics, we just project it. So let’s get into that. What’s the analogy that will help develop an argument to show why we neither merely detect it nor merely project it, and that will get us out of the bind that we can’t have a theory of relevance?

Okay. Notice something very important, and I think this is one of the central insights of Darwin. We talked about Darwin, and we talked about Aristotle and dynamical systems. So if you need to, please go back and look again at Video 6. I don’t want to repeat all those arguments right now; we built them so that we can use them now. See, before Darwin’s time, the people studying the natural world were often clergymen. Darwin himself was thinking about going into the clergy. And that’s because people thought that if they studied the natural world, they could understand the essence of how things were designed. Because if we could get at the essence of how things were designed, how things were fitted to their environment, then of course that would give us some deep insight into the mind of God. That’s why clergymen were collecting species and doing all this. But I think one of the insights, and it’s not given enough attention in the analysis of the brilliance of Darwin’s theory, is the realization that things don’t have an essential design. There is no essential design.

So consider the notion of evolutionary fitness. Now, there’s a problem. There’s a technical definition of fitness, which means the capacity to survive long enough to be capable of reproduction, which will allow that gene pool or species, and all of these are controversial terms, to propagate and exist. Since we want to keep that technical definition of fitness, I’ll be talking about fittedness. And what I mean by fittedness is: what is it about the organism that makes it fit? What is it about the organism that allows it to survive long enough to reproduce? And what I want to argue is that there’s no essential design to fittedness. Some things are fitted in this sense precisely because they are big, some because they are small, some because they are hard, some because they are soft, some because they are long-lived, some because they are short-lived, some because they proliferate greatly, others because they take care of a few young. Some are fast, some are slow; some are single-celled, some are multicellular. They share, like, nothing. Nothing. And the answer for that, of course, is deep and profound: the environment is so complex and differentiated and dynamically changing that niches, ways in which you can fit into the environment in order to promote your survival, your autopoiesis, are varied and changing. See, this is Darwin’s insight. There is no essence to design. There is no essence to fittedness.
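As a toy illustration of that point (again, an editorial sketch, not Darwin’s or Vervaeke’s formalism; the numbers are invented), here is a minimal variation-and-selection loop in which the environment keeps drifting, so what counts as fit is never settled once and for all but is continually re-realized.

```python
import random

# Toy variation-and-selection loop. The "environment" is a single drifting
# number; an organism's fittedness is just how closely its trait matches it.
# The point: there is no final, essential value of the trait, because what is
# fit keeps being redefined as the environment changes.

def fittedness(trait, environment):
    return -abs(trait - environment)          # closer match = more fit

random.seed(0)
environment = 0.0
population = [random.uniform(-1, 1) for _ in range(50)]

for generation in range(200):
    environment += random.gauss(0, 0.05)      # the world keeps changing
    # selection: keep the better-fitted half of the population
    population.sort(key=lambda t: fittedness(t, environment), reverse=True)
    survivors = population[:25]
    # variation: offspring are imperfect copies of survivors
    population = [t + random.gauss(0, 0.02) for t in survivors for _ in range(2)]
    if generation % 50 == 0:
        best = max(population, key=lambda t: fittedness(t, environment))
        print(f"gen {generation:3d}: environment={environment:+.2f}, best trait={best:+.2f}")
# The "best" trait never converges to one essential value; fittedness is an
# ongoing, self-reorganizing relationship between organism and environment.
```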
If you try to come up with a theory of how organisms have their “design”, and I’m using this in quotation marks, by trying to determine or derive it from the essence of design, you are doomed, because it doesn’t exist. But what Darwin realized is that he didn’t need such a theory. He needed a theory about what’s relevant in this biological sense, a theory about how an organism is fitted, how its design is constantly being redefined by a dynamic process. See, fittedness is always redefining itself, reconstituting itself. It is something that is constantly within a process of self-organization, because there is no essence, no final design, to fittedness. Fittedness has to constantly be redesigning itself in a self-organizing fashion so it can constantly pick up on the way in which the world is constantly varying and dynamically changing. There is no essence to fittedness, but I don’t need a theory of fittedness. All I need is a theory of how fittedness is constantly being realized in a self-organizing fashion. That’s exactly what the theory of evolution is. Do you remember? There is a feedback cycle in reproduction. There is a virtual engine, selection and variation, and that virtual engine constantly shapes and regulates how the reproductive cycle feeds back onto itself. And, of course, this is why some religious people get very angry about this process, but notice this is exactly what we need: there is no intelligent designer to this. This is a process that is completely self-organizing. The fittedness of organisms constantly evolves out of, and is constantly evolving towards, other instances of fittedness. Fittedness has no essence. It is not a stable phenomenon. I should not try to give a definition or a theory of fittedness. What I have is a theory of the evolution of fittedness. And again, even when I say that, you’re tempted to think that what Vervaeke means is that there was no fittedness, and then there was evolution, and it resulted in fittedness. That is not what Vervaeke is saying. Vervaeke is saying fittedness and the evolution of fittedness are the same thing. So, what Darwin proposed, of course, was the first dynamical systems theory of how fittedness evolves, so that fittedness is ongoing. That is the theory of evolution by natural selection.

Now that tells us something that we need. First of all, we need to understand how fittedness evolves. It is a self-organizing process. It is non-homuncular. It can generate intelligence without itself being an intelligent process. It’s doing a lot of what we need. Here’s the analogy I want to propose to you. Let’s make relevance analogous to biological fittedness. In fact, let’s call relevance cognitive-interactional fittedness. And I mean by that both your cognition and how that cognition is expressing itself in problem solving. Cognitive-interactional fittedness. And I don’t need a theory of this. What I need is a theory of how this evolves. Okay? How it evolves. What if, let’s say, we take a look at this: what about my ability to formulate problems, form categories, pick up on conveyance, make inferences, all this stuff? What do I need? I need something that constrains the search space, that constrains how I pay attention. I need systems that are able to do that. I need systematic constraints.
And what are they doing? The systematic constraints have to regulate a feedback loop. What’s the feedback loop? The feedback loop is my sensory-motor loop, right? I’m sensing, but I’m also acting. My acting is integral to my sensing, and my sensing is integral to my moving; they are doing this together, a sensory-motor loop. I interact with the world, and that changes how I sense it, and then I interact again, so there’s a sensory-motor loop. What if there is a virtual engine, broadly construed, that is regulating that sensory-motor loop so that it is constantly evolving its cognitive-interactional fittedness to its environment? It doesn’t have to come to any final, essential way of framing the environment; what it’s constantly doing is evolving its fittedness, its cognitive fittedness, not just its biological fittedness, although I’m going to argue, as many people do, that there’s an important continuity between those two. It’s constantly evolving its cognitive fittedness to the environment. Then what I need is not a theory of relevance; I need a theory of relevance realization, of how relevance is becoming effective, how it is altering and shaping the sensory-motor loop. I need a dynamical system for the self-organizing evolution of cognitive-interactional fittedness. And if I could come up with that, then I would have an account of relevance realization that was non-homuncular, that would be consonant and continuous with how the embodied organ, the embodied brain that is responsible for intelligence, itself evolved. It would plug in very nicely to what we need.

Well, what do we need? We need a set of properties, if you remember, that are sub-semantic, sub-syntactic, that ultimately have to ground out in establishing the agent-arena participation. The processes have to be self-organizing. They have to be multi-scale. They have to originally be grounded in an autopoietic system. Well, what kind of properties are we talking about, then? Well, we’re talking about, and this again is deeply analogous to the Darwinian picture, bio-economic properties. And what do I mean by that? Think again of your biology as economic. This is again part of Darwin’s great insight. Now, don’t be confused here. A lot of times when people hear economic, they hear financial economy. That’s not what an economy is. An economy is a self-organizing system that is constantly dealing with the needs of the living being, with the distribution of goods and services, the allocation and use of resources, often in order to further maintain and develop that economy itself. So your body is a bio-economy. You have valuable resources of time, metabolic energy, processing power. Think about how we say we pay attention, by the way. And what you do as an autopoietic thing is you are organized such that the distribution of those resources serves the constitutive goal. It will serve other goals, of course, but it serves the constitutive goal of preserving the bio-economy itself. And the thing about economies, of course, is that they’re self-organizing. These economic properties come out of your biology. They’re not semantic or syntactic properties. Now, we use semantic and syntactic terms to talk about them, but let’s not keep making that confusion. They are multi-scale. See, economies work locally and globally simultaneously, bottom-up, top-down.
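One way to picture that, purely as an editorial toy sketch with invented names and numbers, is a loop in which attentional weights over features keep being re-tuned by feedback from ongoing interaction: nothing is detected as intrinsically relevant, and nothing is projected once and for all; the weighting just keeps being re-fitted.

```python
# Toy sketch of a sensory-motor loop whose attention weights are constantly
# re-tuned by feedback. Nothing here is intrinsically relevant; "relevance"
# is just the ongoing re-weighting that keeps interaction fitted.

features = ["color", "motion", "sound", "smell"]
salience = {f: 0.25 for f in features}   # current attentional weights
useful = "motion"                         # what the situation currently rewards
alpha = 0.05                              # how quickly attention re-fits

for step in range(200):
    if step == 100:
        useful = "sound"                  # the situation changes under the agent
    for f in features:
        payoff = 1.0 if f == useful else 0.0
        # nudge each weight toward how much attending to it is paying off
        salience[f] = (1 - alpha) * salience[f] + alpha * payoff
    if step in (0, 99, 199):
        top = max(salience, key=salience.get)
        print(f"step {step:3d}: most salient feature -> {top}")
# The "most salient" feature keeps changing with the situation; salience is a
# fitting process, not a property read off the features themselves.
```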
So bio-economic properties are great, and that’s good because it comports well with the analogy, because Darwin’s theory is ultimately a bio-economic theory. So can we think about what kind of norms are at work in a bio-economy? Okay, so here, at the semantic level, we’re dealing ultimately with norms of truth. Here, at the syntactic level, we’re dealing probably with norms of validity, at least formal validity, in some way. When we’re here, we’re not dealing with these kinds of logical, semantic norms. Economies are governed by logistical norms, or at least regulated by logistical norms. I want to try to use the word governing for selective constraints and generating for enabling constraints; I apologize if I sometimes slip. Economies are regulated by logistical norms. Logistics is the study of the proper disposition and use of your resources. So if you’re doing a logistical analysis for the military, you’re trying to figure out: I have my limited resources, food and ammo and personnel and time and space; how can I best use them to achieve the goals I need to achieve? So what are logistical norms? Logistical norms are things like efficiency and resiliency. Now, resiliency we’ll talk about in more detail. A way of thinking about these is that resiliency is basically long-term, broadly applying efficiency. But instead of talking about efficiency and efficiency, which is confusing, we’ll talk about efficiency and resiliency.

So, let’s go step by step. What if relevance realization is this ongoing evolution of our cognitive-interactional fittedness? That there is some virtual engine that is regulating the sensory-motor loop, and it’s regulating it by regulating the bio-economy, and it’s regulating the bio-economy in terms of logistical norms like efficiency and resiliency. Now, all of this, of course, can be described scientifically, mathematically, etc., because, of course, Darwin’s theory is a scientific theory; we can do calculations on these things. One more time: the fact that I use science to talk about it does not mean that it exemplifies propositional properties. The properties of my theory and the properties that my theory is about are not the same thing.

How do we put this notion of self-organization and this notion of the logistical norms governing the bio-economy together? One way of doing this is to think about a multiscalular way in which your bio-economy is organized to function, at many scales of analysis. Let’s take your autonomic nervous system as an example. This is not exhaustive; in fact, my point is that you will find this strategy, this design, at many levels of analysis in your biology. I’m only using it as an example. So, your autonomic nervous system. This is the part of your nervous system that is responsible for your level of arousal. That doesn’t mean sexual arousal. Arousal means, and notice how this is logistical, how much your metabolic resources are being converted into the possibility of action and interaction. So you have a very low level of arousal when you’re falling asleep. You have a very high level of arousal when you’re running away from a tiger. Okay. Now, think about your level of arousal. There is no final, perfect design for your level of arousal. There isn’t a level that you should always shoot for. Should you maximize your level of arousal? If I’m always at maximum arousal, aah, that’s not good.
I’m never going to sleep, I’m never going to heal, right? And if I’m always at the bottom, okay, that’s it, going to sleep, that’s not good either. And the Canadian solution, well, I’ll always have a middling level of arousal? That’s not good either, because then I can’t fall asleep and I can’t run away from the tiger. So what does your autonomic nervous system do? Well, it is divided into two components: your sympathetic and your parasympathetic systems. Your sympathetic system is designed, it’s really biased, towards interpreting the world in a particular way. Notice what I said; remember, the things that make us adaptive also make us susceptible to self-deception. It’s biased, because you can’t look at all of the evidence. It’s biased towards looking for and interpreting evidence, and I mean evidence non-anthropomorphically, that you should raise your level of arousal. Your parasympathetic system is biased the other way. These are both heuristic ways of processing; they work in terms of biasing the processing of data. So the parasympathetic system is constantly trying to find evidence that you should reduce your level of arousal. So they’re opposed in their goals. But here’s the thing: they’re also interdependent in their function. The sympathetic nervous system is always trying to arouse you; this is this hand pulling up. And the parasympathetic system is always trying to pull you down. And as the environment changes, that tug of war shifts your level of arousal around. This is opponent processing: when you have two systems that are opposed but integrated, you have opponent processing. The opponent processing means that your level of arousal is constantly evolving, constantly evolving to fit the environment. Is it perfect? No, nothing can be. Any problem-solving machine, in order to be perfect, would have to explore the complete problem space, and that’s combinatorially explosive; it can’t. But what is this? Well, you’ve seen this before. Opponent processing is a powerful way to get optimization. Remember when we talked about optimization, when we talked about Plato? You’re optimizing between systems that are working at different goals but are integrated in their function. And that way the system constantly self-organizes and thereby evolves its fittedness to the environment.

So the way we can get this, I would argue, is by thinking about how the brain, and I’m going to argue very importantly, the embodied, embedded brain, uses opponent processing in a multiscalar way in order to regulate your bio-economy, your autopoietic bio-economy, so that it is constantly optimizing your cognitive-interactional fittedness to the environment. Let’s think about it this way. Let’s think about whether we can get a virtual engine out of efficiency and resiliency. Because here’s the thing about them: they are in an opponent relationship. They pursue opposed goals. Now, the problem with language, as Nietzsche said, I fear we’re not getting rid of God because we still believe in grammar, is that it makes everything sound like an agent. It makes everything sound like it has intentionality. It makes everything sound like it has intelligence. And of course that’s not the case. So bear with me about this. I have to speak anthropomorphically just because that’s the way language makes me speak. But let’s use a financial analogy to understand the tradeoff relationship between efficiency and resiliency.
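Before that analogy, the opponent processing just described can be made concrete with a minimal toy simulation (an editorial sketch with invented numbers): one process pushes arousal up in proportion to perceived demand, the other pulls it back down, and their integrated tug-of-war keeps arousal tracking a changing situation without any fixed set point.

```python
# Toy opponent processing: a "sympathetic" push upward that scales with
# perceived demand, and a "parasympathetic" pull back toward rest. Neither
# side wins; their integrated opposition keeps arousal tracking the situation.

arousal = 0.2                    # current arousal level (0 = rest, 1 = maximal)

def demand(t):
    """Invented environmental demand: calm, then a 'tiger', then calm again."""
    return 0.9 if 40 <= t < 60 else 0.1

for t in range(100):
    sympathetic = 0.3 * max(0.0, demand(t) - arousal)   # push up toward demand
    parasympathetic = 0.15 * arousal                     # pull down toward rest
    arousal += sympathetic - parasympathetic
    arousal = min(1.0, max(0.0, arousal))
    if t in (0, 39, 50, 59, 99):
        print(f"t={t:3d}  demand={demand(t):.1f}  arousal={arousal:.2f}")
# There is no single "correct" arousal level; the opponent pair keeps
# re-optimizing it as conditions change.
```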
Not all economies are financial, because the resource that’s being disposed of in an economy is not necessarily money; it might be time, etc. I’m using a financial analogy, or at least a commercial analogy, which is perhaps a better way of putting it, in order to try to get some understanding of how these are in a tradeoff relationship. So, you have a business. One of the things you might do is try to make your business more efficient, because, ceteris paribus, if your business is more efficient than that person’s business, you’re going to out-compete them. You’re going to survive and they’re going to die off; obviously the analogy to evolution. So what do I do? I try to maximize the ratio between profit and expenditure, cost. How do I do that? We keep thinking of it as the magical solution, and we’ve been doing it since Ronald Reagan, at least: we do massive downsizing. We fire as many people as we can in our business. And that way, we have the most profit for the least labor cost. That’s surely the answer. So notice what efficiency is doing. Notice how efficiency is a selective constraint.

The problem comes when you are cut to the bone, when you’ve got all the efficiencies, and this is the magic word that people often invoke while forgetting the opponent relationship to resiliency. See, if I cut my business to the bone like that, what happens if one person is sick? Nobody can pick up the slack, because everyone is working to the max. What happens if there’s an unexpected change in the environment, a new threat or a new opportunity? Nobody can take it on, because everybody is worked to the limit. I have no resources by which I can repair, restructure, redesign myself. I don’t have any precursors to new ways of organizing, because there is nothing that isn’t being fully used. Notice also, if there’s no slack in my system, and this is now happening with the way AI is accelerating things, error propagates massively and quickly. If there’s no redundancy, no slack in the system, there’s no wiggle room, and error just floods the system. You see, if I make the system too efficient, I lose resiliency. I lose the capacity to differentiate, restructure, redesign, repair, exapt new functions out of existing functions, slow down how error propagates through the system. Efficiency and resiliency are in a tradeoff relationship. What resiliency is trying to do is enable you to encounter new things, enable you to deal with unexpected situations of damage or threat or opportunity. It’s enabling. These are in a tradeoff relationship: as I gain one, I lose the other.

What if I set up a virtual engine in the brain that makes use of this tradeoff relationship? It sets up a virtual engine between the selective constraints of efficiency and the enabling constraints of resiliency, and that virtual engine bio-economically, logistically, shapes my sensory-motor loop with the environment so that it’s constantly evolving its fittedness. We’ll take a look at that possibility, and some suggestions on how it might be realized in the brain, in the next lecture. Thank you very much for your time and attention.