Welcome back to Awakening from the Meaning Crisis. This is episode 31. Last time we were trying to make progress on at least a plausible suggestion of a scientific theory of how we could explain relevance realization. One of the things we examined was the distinction between a theory of relevance and a theory of relevance realization, and I made the argument that we cannot have a scientific theory of relevance, precisely because of a lack of systematic import, but we can have a theory of relevance realization. Then I gave you the analogy, and I'm building towards something stronger than an analogy, of Darwin's theory of evolution by natural selection: Darwin proposed a virtual engine that regulates the reproductive cycle so that the system constantly evolves the biological fittedness of organisms to a constantly changing environment. The analogy is that there is a virtual engine in the embodied, embedded brain (what 'embodied' and 'embedded' mean will become clear in this lecture) that regulates the sensory motor loop so that my cognitive interactional fittedness is constantly being shaped, constantly evolving to deal with a constantly changing environment. What I in fact need, as I argued, is a system of constraints, because I'm trying to bring selective and enabling constraints to bear in order to limit and zero in on relevant information. Then I argued that the way this operates needs to be grounded in an autopoietic system, and that the self-organization operates in terms of a design you see at many scales (remember, we need a multi-scale theory of your biological and cognitive organization): opponent processing. We took a look at the opponent processing within the autonomic nervous system, which is constantly, by strong analogy, evolving your level of arousal to fit the environment: opposing goals but interrelated function. And I proposed to you the level at which we're going to pitch a theory of relevance realization: the level of bioeconomic properties that operate not according to the normativity of truth or validity, not logical normativity, but logistical normativity. The two most important logistical norms, I would propose to you, are efficiency and resiliency. Then I made an argument that they would be susceptible to opponent processing precisely because they are in a trade-off relationship with each other, and that if we could get a cognitive virtual engine that regulates the sensory motor loop by systematically playing off selective logistical, economic constraints on efficiency against enabling economic constraints on resiliency, then we could give a theory deeply analogous to Darwin's theory of the evolution, across individuals, of biological fitness: an account of the cognitive evolution, within an individual's cognition, of their cognitive interactional fittedness, the way they shape the problem space so as to be adaptively well fitted to achieving their interactional goals with the environment.
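To make the opponent-processing idea concrete before going further, here is a minimal toy sketch in Python (my illustration, not anything from the lecture or the paper; the names, gains, and dynamics are all invented for the example): two opposing drives, loosely analogous to the sympathetic and parasympathetic systems, jointly recalibrate a single arousal variable to a constantly changing demand.

```python
# Toy opponent processing: two opposing subsystems jointly regulate one
# variable (arousal), loosely analogous to sympathetic vs. parasympathetic.
# All names, gains, and dynamics are illustrative assumptions.

def opponent_step(arousal, demand, gain_up=0.5, gain_down=0.4):
    """One recalibration step: the two drives push in opposite directions."""
    sympathetic = gain_up * max(0.0, demand - arousal)        # raises arousal
    parasympathetic = gain_down * max(0.0, arousal - demand)  # lowers arousal
    return arousal + sympathetic - parasympathetic

arousal = 0.5
for demand in [0.9, 0.9, 0.2, 0.2, 0.6]:  # a constantly changing environment
    arousal = opponent_step(arousal, demand)
    print(f"demand={demand:.1f} -> arousal={arousal:.2f}")
```

Neither drive wins once and for all; the system keeps evolving its fittedness to whatever the environment currently demands, which is the pattern the rest of this lecture generalizes.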
Before I move on to try to make that more specific and make some suggestions as to how this might be realized in the neural machinery of brains, I want to point out why I keep emphasizing 'embodied' and 'embedded'. I want to say a little bit more about this because I also want to return to something I promised to return to: why I want to resist both an empiricist notion of relevance detection and a romantic notion of relevance projection. So first, why am I saying embodied? Because what I've been trying to argue is that there is a deep dependency, a deep connection, and the dependency runs from the propositional down through to the participatory, between your cognitive agency as an intelligent general problem solver and the fact that your brain exists within a bioeconomy. The body is not Cartesian clay that we drag around and shape according to the whims or desires of our totally self-enclosed or, for Descartes, immaterial minds. The body is not a useless appendage; it is not just a vehicle. So even here I'm criticizing certain platonic models. The body is an autopoietic bioeconomy that makes your cognition possible. Without an autopoietic bioeconomy you do not have the machinery necessary for the ongoing evolution of relevance realization. The body is constitutive of your cognitive agency in a profound way. Why embedded? And this will also lead us into the rejection of both the empiricist and the romantic interpretation. The biological fittedness of a creature is not a property of the creature per se. It is a real relation between the creature and its environment. Is a great white shark intrinsically adapted? No; it makes no sense to ask that question, because if I take this supposedly apex predator, really adapted, and put it in the Sahara desert, it dies within minutes. Its adaptivity is not a property intrinsic to it per se. Its adaptivity is not something that it detects in the environment. Its adaptivity is a real relation, an affordance, between it and the environment. In a similar way, I would argue that relevance is not a property in the object, nor a property of the subjectivity of my mind. It is not a property of objectivity nor a property of subjectivity. It is precisely a property that is co-created by how the environment and the embodied brain are fitted together in a dynamic, evolving fashion. It is very much like the bottle being graspable: this is not a property of the bottle nor a property of my hand, but a real relation concerning how they can be fitted together, how they can function together. So I would argue we should not see relevance as something that we subjectively project, as the romantic claims. We should not see relevance as something we merely detect from the objectivity of objects, as perhaps we might if we had an empiricist bent. I want to propose a term to you: I want to argue that relevance is, in this sense, transjective. It is a real relationship between the organism and its environment. We should not think of it as being projected. We should not think of it as being detected. This is why I've consistently used the term: we should think of relevance as being realized. The point about the term 'realization' is that it has two aspects to it, and I'm trying to triangulate from those two aspects. What do I mean by that? There is an objective sense of realization, which is to make real, and if that's not an objective thing, I don't know what counts.
Making real, that's objective. But of course there is a subjective sense of realization, which is coming into awareness. When I use this word, I'm using both of these senses of the same word; I'm not equivocating. I'm trying to triangulate to the transjectivity of relevance realization. That is why I'm talking about something that is both embodied, necessarily so, and embedded, necessarily so. Notice how non-Cartesian, or perhaps better anti-Cartesian, this is. The connection between mind (if what we mean by mind is your capacity for consciousness and cognition) and body is one of dependence, of constitutive need. Your mind needs your body. And we're talking not only about it being embodied and embedded; it is inherently a transjective relation of relevance realization. The world and the organism are co-creating, co-determining, co-evolving the fittedness. Alright, let's now return to the proposal. But before we return, notice what this is telling us. This is telling us that a lot of the grammar by which we try to talk about ourselves and our relationship to reality, the subjective and the objective, is reifying: these are inherence claims. They are the idea that relevance is a thing that has an essence that inheres in the subject, or that relevance is a thing that has an essence that inheres in the object. Both of those, that standard grammar and the adversarial, partisan debates we often have around it, need, I'm arguing, to be transcended. And I would then propose that this is going to have a fundamental impact on how we interpret spirituality, if by spirituality we mean a sense, and a functional sense, of connectedness that affords wisdom, self-transcendence, etc. So, back to the idea of efficiency, resiliency, and trade-offs. I would point you to the work of Markus Brede. He has work mathematically showing that when you're creating networks, especially neural networks, you're going to optimize (and we talked about optimization in the previous video) between efficiency and resiliency; that's how you get them to function as best they can. And what I want to do is show you the relationship between the poles of the transjectivity and how that's going to come out, or at least point towards the generative relationship that can be discussed in terms of these poles. So I argued that initially the machinery of relevance realization has to be internal. Now, again, this is why I just did what I did: when I say internal, I don't mean subjective. I don't mean inside the introspective space of the mind. When I say the goals are internal, I mean internal to an embodied, embedded brain-body system, an autopoietic system of adaptivity. In fact, many people in cognitive science are arguing that those two terms are interdependent. Just as I'm arguing that relevance realization is dependent on autopoiesis, being an adaptive system and being an autopoietic system are also interdependent. The system can only be continually self-making if it has some capacity to adapt to changes in its environment. And the system is only adaptive if it is trying to maintain itself, and that only makes sense if it has real needs, if it's an autopoietic thing. So these things are deeply interlocked: relevance realization, autopoiesis, and adaptivity.
So, as Markus Brede and others have argued, and I'm giving you an independent argument for it, you want a way of optimizing between efficiency and resiliency. Remember, as with the autonomic nervous system, this doesn't mean settling on some average or stable mean. It means the system can move, sometimes giving more emphasis to efficiency, sometimes giving more emphasis to resiliency, just as your autonomic nervous system is constantly evolving, constantly recalibrating, your level of arousal. Now what I want to do is pick up on how these logistical norms, understood as constraints, can be realized in particular virtual engines. I want to do this in terms of internal bioeconomic properties and, for lack of a better term for the contrast (and again, this does not map onto subjectivity and objectivity; I don't have to keep saying that, correct?), external interactional properties. By external I mean that these eventually give rise to goals in the world, as opposed to the constitutive goals within the system. And what I want to do is show you how you go back and forth. It will make sense to do this by reverse engineering, because I'm starting from what you understand in yourself and then working backwards; so often I will start at the interactional side and work inwards. So: you want to be adaptive. We said you want to be a general problem solver, and that's important. But notice that this means there are two kinds of machines you can be. People don't like it when I use this word, but I don't have an alternative, so I'm going to use it; by 'machine' I mean a system that is capable of solving problems and pursuing goals in some fashion. If I want to be adaptive, what kind of machine do I want to be? Well, I might want to be a general purpose machine. Now these terms, as I keep showing you, are always relative; they're comparative terms. I don't think anything is absolutely general purpose or absolutely special purpose. But let me give you an example. My hand is a general purpose machine: it can be used in many, many different contexts for many, many different tasks. So it's very general purpose. Now, the problem with being a jack of all trades is that you are master of none. The problem with my hand being general purpose is that for specific tasks it can be outcompeted by a special purpose machine. Although the hand is a good general purpose machine, it is nowhere near as good as a hammer for driving in a nail, nowhere near as good as a screwdriver for removing a screw, etc. So in some contexts, special purpose machines outperform general purpose machines. But you wouldn't want the following. Suppose you're stranded on a desert island, like Tom Hanks in Cast Away. He loses all of his special purpose tools; they sink to the bottom of the ocean, and that causes him a lot of distress. What he literally starts with at first is his hands, the general purpose machines, and you see that they're not doing very well; if only he had a good knife. But here's the problem: you wouldn't want, well, not Tom Hanks, but his character, Chuck, you wouldn't want someone to say: Chuck, I'm going to cut off your hands, and I'm going to attach a knife here
and a hammer here. Now you have a hammer and a knife. It's like: no, no, no, I don't want that either. I don't want just a motley collection of special purpose machines. So sometimes you're adaptive by being a general purpose machine, and sometimes you're adaptive by being a special purpose machine. With a general purpose machine, you use the same thing over and over again. Sometimes we make a joke about somebody using a special purpose machine as a general purpose machine: when all you have is a hammer, everything looks like a nail. It strikes us as a joke because we know that hammers are special purpose things and that not everything is a nail. It's not so much a joke if I say: when all you have is a hand, everything looks graspable. That's not so weird. So what am I trying to get you to see? You want to be able to move between these. The general purpose machine is very efficient. Why? Because I'm using the same thing over and over again, the same function, or at least the same set of tightly bound functions. The thing about special purpose machines is that I don't use each one that often. I use my hammer sometimes and my saw sometimes and my screwdriver sometimes, and I have to carry around the toolbox. The problem with that is it gets very inefficient, because a lot of the time I'm carrying my hammer around and not using it, so I have to bear the cost of carrying it without the benefit of using it. But you know what it makes me? Tremendously resilient. When there are a lot of new, unexpected, specific issues that my general purpose machine can't handle, I'm ready for them. I have resiliency; I have diversity within my toolkit that allows me to deal with these special circumstances. So notice what I want to do: I want to constantly trade between them. Now let me reorganize this, because what I'm arguing is that being general purpose is more efficient, being special purpose makes you more resilient, and you want to trade between them. Okay, so those are interactional properties. And you say: I sort of get the analogy, but what does that have to do with the brain and the bioeconomy? So, how would you try to make information processing more efficient? Well, I want to try to make the processes I'm using, the functions I'm using, as generalizable as possible. That gets me general purpose, because if I can use the same function in many places, then I'm very efficient. How do you do that? Here is where I want to pause and introduce just a tiny bit of narrative. I wrote this paper with Tim Lillicrap and Blake Richards, and this was especially Tim's great insight. If you're interested in cutting edge AI, you really need to pay attention to the work Tim Lillicrap is doing. Tim is a former student of mine; he has, of course, in many ways greatly surpassed my knowledge and expertise, and he is one of the cutting edge people in artificial intelligence. And he had a great insight here. I was proposing this model, this theory, to him, and he said: you know, you should reverse engineer it in a certain way. And I said: what do you mean?
He said: well, you're acting as if you're just proposing this top down, but what you should see is that many of the things you're talking about are already being used within the AI community. So the point of the paper we published, 'Relevance Realization and the Emerging Framework in Cognitive Science', is that a lot of the strategies I'm going to talk about here are strategies that are already being developed. Now, I'm going to have to talk about this at a very abstract level, because we don't yet know which particular architecture or application is going to turn out to be the right one; that's still work in progress. But I think Tim's point is very well taken: we shouldn't be talking about this in a vacuum. We should also see that the people who are trying to make artificial intelligence are already implementing some of the strategies I'm going to point out, and I think the fact that we're getting convergent arguments that way is very telling. Okay, so how do I make an information processing function more generalizable? Well, you know how we do it, because we've talked about it before: you do it in science. Here are two variables, for example (it's not limited to two). I have a scatterplot, and what they taught you to do is draw a line of best fit; this is a standard move in Cartesian graphing. Now, why do you draw a line of best fit? My line of best fit might actually touch none of my data points. Does that mean I'm being ridiculously irresponsible to the data, just engaging in armchair speculation? No. We do this because it allows us to interpolate and extrapolate; it allows us to go beyond the data. Now, we're taking a chance, and of course, and this is the great insight of Popper, all good science takes good chances. But here's the thing: I do this so that I can make predictions about what the value of y will be for a value of x that I've never obtained. I can interpolate and extrapolate, which means I can generalize the function. This is data compression. What I'm trying to do is pick up on what's invariant: the information always contains noise, and I'm trying to pick up on what's invariant and extend that. That's part and parcel of why we do this in science, where we're trying to make inductive generalizations, etc. So the way I make my functionality more general, more general purpose, is by doing a lot of data compression. If the data compression allows me to generalize my function, and that generalization feeds through the sensory motor loop in a way that protects and promotes my autopoietic goals, it's going to be reinforced. But what about the opposite? What was interesting at the time is that we didn't have a term for this (I think some people have since picked up on the term we coined). I remember a whole afternoon in which Tim and I were just trying to come up with a name for the trade-off partner. Compression makes your information processing more efficient, more general purpose; what makes it more special purpose, more resilient? We came up with the term particularization.
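Before turning to Tim's point about where these strategies show up in AI, the scatterplot example can be made concrete. This is a hedged sketch of my own using numpy, not the formalism of the paper: a degree-one fit compresses the noisy data into two parameters and so generalizes (interpolates and extrapolates), while a high-degree fit tracks this particular sample, a preview of particularization.

```python
# Compression as a line of best fit: least squares condenses noisy data
# into two numbers that let us interpolate and extrapolate. A high-degree
# polynomial instead tracks this particular sample (particularization).
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(1, 5, 9)
y = 2.0 * x + 1.0 + rng.normal(0, 0.5, size=x.shape)  # invariant trend + noise

slope, intercept = np.polyfit(x, y, deg=1)            # may touch no data point
print(f"nine points compressed to two numbers: y ~ {slope:.2f}x + {intercept:.2f}")
print("interpolate at x=2.5:", round(slope * 2.5 + intercept, 2))
print("extrapolate at x=10 :", round(slope * 10.0 + intercept, 2))

wiggly = np.polyfit(x, y, deg=8)                      # passes through every point
print("degree-8 fit at x=10:", round(np.polyval(wiggly, 10.0), 2))  # wild value
```

The degree-8 fit has essentially zero error on the points it saw and is typically wildly wrong beyond them; the compressed fit sacrifices fidelity to this sample for generalizability, which is exactly the efficiency side of the trade-off.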
And Tim's point, and I'm not going to go into detail here, is that this is the general strategy at work in things like the wake-sleep algorithm at the heart of the deep learning promoted by Geoffrey Hinton, who was at U of T; Tim was a very significant student of Geoff's. So this is the abstract description of a strategy at work in a lot of the deep learning at the core of much successful AI. What particularization is, is trying to keep more closely in track with the data; I'm trying to create a function that, in some sense, overfits to the data. That gets me more specifically in contact with this particular situation. So compression tends to emphasize what is invariant, while particularization tends to get the system to pick up on more of the variation. Compression makes the system more cross-contextual: it can move across contexts because it can generalize. Particularization tends to make the system more context sensitive. And of course you don't want to maximize either one; you want them dynamically trading. And notice how they are obeying (is that the right word? it sounds so anthropomorphic, but I hope it fits) the logistical normativity, trading between efficiency and resiliency. There are various ways of doing this, lots of interesting ways of engineering it: creating a virtual engine, creating sets of constraints, so that this will oscillate in the right way and optimize that way. And the idea is that when you have this following completely internal bioeconomic logistical norms, it results in the evolution of sensory motor interaction that makes a system, an organism, constantly and adaptively move between being general purpose and being special purpose. It becomes very adaptive. Now, different organisms will be biologically skewed one way or the other; even individuals will be biologically skewed. So there are people now proposing, for example, that we might understand certain psychopathologies in terms of some people being more biased towards overfitting, towards particularizing, and some people being more biased towards compressing and generalizing. People skewed towards compression tend to see many connections where there are no connections, and people skewed towards particularization tend to be very feature-bound. Okay, so this pair, compression and particularization, we called cognitive scope; it concerns applicability, how widely you can apply your function or functions. And the idea is that if you can get scope going the right way, it will get coupled (it's not representing; it gets coupled) to a pattern of interaction that fits you well to the dynamics of change and stability in the environment. Okay, what's another one? Well, a lot of people are talking about this; you'll see people talking about it very significantly in AI as well: exploiting versus exploring. So here's another trade-off. Scope had to do with the range of your information; this one has to do more with timing. Here's the question: should I stay here and try to get as much as I can out of this spot (that's exploiting), or should I move and try to find new things, new potential sources of resource and reward (that's exploring)? They're in a trade-off relationship, because the longer I stay here, the more opportunity cost I accrue, but the more I move around, the less I can actually draw from the environment.
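A standard way to make that exploiting/exploring trade-off concrete in code is a simple bandit learner. To be clear, this is a generic epsilon-greedy sketch with invented numbers, not the mechanism discussed in the paper (which involves temporal difference learning and inhibition of return): with small probability the agent pays the cost of sampling another patch; otherwise it exploits the patch that currently looks best.

```python
# Epsilon-greedy trading between exploiting (stay with the best-looking
# patch) and exploring (pay a cost to sample elsewhere). Payoffs invented.
import random

true_payoffs = {"patch_A": 0.3, "patch_B": 0.7}  # hypothetical environment
estimates = {k: 0.0 for k in true_payoffs}
counts = {k: 0 for k in true_payoffs}
epsilon = 0.1                                    # how often we explore

random.seed(0)
for _ in range(1000):
    if random.random() < epsilon:
        patch = random.choice(list(true_payoffs))  # explore
    else:
        patch = max(estimates, key=estimates.get)  # exploit
    reward = 1.0 if random.random() < true_payoffs[patch] else 0.0
    counts[patch] += 1
    estimates[patch] += (reward - estimates[patch]) / counts[patch]  # running mean

print(estimates, counts)  # patch_B dominates, but patch_A is never fully abandoned
```

Set epsilon to zero and the agent is maximally efficient but brittle: it can lock onto the wrong patch and never notice the environment changing. Set it to one and it is maximally exploratory but draws almost nothing from what it has found.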
So do I want to maximize either? No, I want to trade between them; I'm always trading between exploiting and exploring. There are different strategies that might be at work here. I've seen recent work in which you reward the system for reducing error and also reward it for increasing error, and of course those are in a trade-off relationship: the one makes the system more curious, the other more conscientious, if I may speak anthropomorphically. The way we talked about it in the paper is that you can trade off between what's called temporal difference learning and inhibition of return; I won't go into the dynamics there. What I can say is that different strategies are being considered and implemented. This is cognitive tempering, playing on both 'temper' and 'tempo', the relationship to time; it has to do with the projectability of your processing. Now, first of all, a couple of things. Are we claiming that these are exhaustive? No, they're not exhaustive; they're exemplary. They're exemplary of the ways in which you can trade between efficiency and resiliency and create virtual engines that adapt, by setting up systems of constraints, the sensory motor loop, the interactions with the environment, in an evolving manner. So why is exploitation efficient? Because I don't have to expend very much; I can just stay here. But it depends on things staying the same. Exploration requires me to expend a lot of energy moving around, and it's only rewarding if there is significant difference: if I go to B and it's the same as A, you know what I should have done? Stayed in A. So do you see what's happening? All of these, in different ways (one has to do with applicability and scope, the other with projectability and time), are trading on the fact that sometimes what makes something relevant is how it's the same, how it's invariant, and sometimes what makes something relevant is how it's different, how it changes. You have to constantly shift the balance between those, because that's what reality is doing. What's another one? Well, another type of one; I think there are many of these, and they're not going to act in arbitrary fashion, because they are all regulated by the trade-off normativity, the opponent processing between efficiency and resiliency. Notice that scope and tempering are both what are called cost functions: they deal, again, with the bioeconomics, with how you handle the costs of processing, playing between the costs and benefits of each. But you might also need to play between the cost functions themselves. So it's also possible that we have what we called cognitive prioritization, in which cost functions are played off against each other: cost function one and cost function two playing against each other. And you have to decide (this overlaps with what's called signal detection theory and other things I won't get into) how to gamble, and you have to be very flexible about it: you may try to hedge your bets and activate as many functions as you can, or you may go for the big win and give a lot of priority to just this one function.
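One hedged way to picture that kind of flexible gambling is a softmax allocation over the competing functions, where a temperature parameter sets how focused or diversified the bet is. This is an illustrative stand-in of my own, not signal detection theory or the paper's formalism.

```python
# Toy "cognitive prioritization": a softmax over competing functions'
# expected payoffs. Low temperature ~ bet big on one function; high
# temperature ~ hedge across many. All numbers are invented.
import math

def priorities(payoffs, tau):
    """Softmax weighting; tau (temperature) controls focus vs. diversification."""
    weights = [math.exp(p / tau) for p in payoffs]
    total = sum(weights)
    return [w / total for w in weights]

payoffs = [0.9, 0.6, 0.5]  # hypothetical expected values of three functions
print("focused (tau=0.1):", [round(p, 2) for p in priorities(payoffs, 0.1)])
print("hedged  (tau=5.0):", [round(p, 2) for p in priorities(payoffs, 5.0)])
```

At low temperature nearly all priority lands on the best-looking function; at high temperature priority spreads almost evenly across them.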
Of course, you don't want to maximize either; you want flexible gambling. Sometimes you focus, sometimes you diversify; you create a kind of integrative function. And if you check the paper, all of this can be represented mathematically. Once again, I am not claiming this is exhaustive; I'm claiming it's exemplary. I think these are important: scope and tempering, cost functions and prioritizing between cost functions. I think it's very plausible that they are part and parcel of our cognitive processing. What I want you to think about, and I'm representing this abstractly, is each one of these: here's scope, here's tempering, and then of course there is the prioritization playing between them. Think of this as a space, with these functions as its dimensions, because they are all being regulated in this fashion. Relevance realization is always taking place in this space: at this moment it has this particular value according to tempering and scope and prioritization, and then it moves to this value, and then to this value, and then to this value. It's moving around in a state space. That's what's happening when you're doing relevance realization. But although I've represented how this is dynamic, I haven't shown you how and why it would be developmental. I'm going to do this with just one of these, because I could teach an entire course on relevance realization alone. When you're doing data compression, you're emphasizing how you can integrate information; remember, as with the line of best fit, you're emphasizing integration because you're trying to pick up on what's invariant. And that, of course, is going to be set against differentiation. Now, I think you can make a very clear argument that these map very well onto the two fundamental processes, locked in opponent processing, that Piaget, one of the founding figures of developmental psychology, said drive development. The first is what Piaget called assimilation. In assimilation you have a cognitive schema (and what is a cognitive schema again? a set of constraints), and what that set of constraints does is make you integrate, make you treat the new information as the same as what you already have. You integrate it, you assimilate it: that's compression. What's the opposite for Piaget? Accommodation. That's of course why, when people talk about exploratory emotions like awe, they invoke accommodation as a Piagetian principle: because it opens you up. What does it do? It causes you to change your structure, your schemas. Why do we assimilate? Because it's very efficient. Why do we have to accommodate? Because if we just pursue efficiency, if we just assimilate, our machinery gets brittle and distortive. The system has to go through accommodation; it has to introduce variation; it has to rewire and restructure itself so that it can again respond to a more complex environment. So not only is relevance realization inherently dynamic, it is inherently developmental. When a system is self-organizing, there is no deep distinction between its function and its development: it develops by functioning, and by functioning it develops. When a system is simultaneously integrating and differentiating, it is complexifying; that is complexification. A system is highly complex if it is both highly differentiated and highly integrated. Now why?
Well, if I'm highly differentiated, I can do many different things; but if I do many different things and I'm not highly integrated, I will fly apart as a system. So I need to be both highly differentiated, so I can do many different things, and highly integrated, so I stay together as one system. As systems complexify, they self-transcend: they go through qualitative development. Let me give you an analogy for this, and notice how I keep using biological analogies; that is not a coincidence. You started out life as a zygote, a fertilized egg, a single cell: the egg and the sperm fuse into a zygote. Initially, all that happens is that the cells reproduce. But then something very interesting starts to happen: you get cellular differentiation. Some of the cells start to become lung cells, some start to become eye cells, some start to become other organ cells. But they don't just differentiate; they integrate. They literally self-organize into organs: a heart, an eye. You developed, at least biologically, through a process of biological complexification. What does that give you? Emergent abilities. You transcend yourself as a system. When I was a zygote, I could not vote; I could not give this lecture. I now have those functions. In fact, when I was a zygote, I couldn't even learn what I needed to learn in order to give this lecture; I did not have that qualitative competence. But as a system complexifies, and notice what I'm showing you, as a system goes through relevance realization, it is also complexifying: it is gaining new emergent abilities for how it can interact with the environment, and then it extends relevance realization into that emergent self-transcendence. If you're a relevance realizing thing, you're an inherently dynamical, self-organizing, autopoietic thing, which means you're an inherently developmental thing, which means you are an inherently self-transcending thing. Now I want to respond to a potential objection you might have. It goes: well, I get all of this, but maybe relevance realization is just a bunch of many different functions. First of all, I'm not disagreeing with the idea that a lot of our intelligence is carried by heuristics, some of which are more special purpose and some more general purpose, and that we need to learn how to trade off between them. However, I do want to claim that relevance realization is a unified phenomenon, and I'm going to do this in two parts. The first is to assert, and later substantiate, that when we're talking about general intelligence (and in fact that's what this whole argument has been building towards), we're talking about relevance realization. This goes to work I did with Leo Ferraro, who is a psychometrician, somebody who actually does psychometric testing of people's intelligence. One of the things we've known since Spearman, early in the twentieth century, when he discovered what's called the general factor of intelligence, sometimes called general intelligence (there's a debate about whether we should identify those two, which I'm not going to get into right now): what Spearman found was that how kids were doing in math was actually predictive of how they were doing in English, and even, contrary to what our culture says, of how they were doing in sports. How I do on task A is predictive of how I do on task B, and vice versa, across all these very different tasks.
This is what's called a strong positive manifold: a huge interpredictability between how you do on all these very many different tasks. That is your general intelligence. Many people would argue, and I would agree, that this is the capacity that underwrites your being a general problem solver. So when we're testing for intelligence, we're often testing for general intelligence. I'll build up the panel as we go along. What Leo showed me, and he made a good argument, is that when you're doing something like the Wechsler test or a similar psychometric test, you will test things like the comprehension subtest; you'll concentrate on similarity judgments; you'll also do similarities between pictures. Other people have talked about your ability to adapt to unpredictable environments (this is work by Godfrey-Smith and others) and your ability to deal with complex workplaces, ones that are, as it's called, g-loaded, that demand a lot of you. Now, when you trace these back, what they point to is your capacity for problem formulation. The similarity judgments and what are called eduction abilities, the ability to draw out latent patterns: those are similarity judgment and pattern finding. The complex workplaces and unpredictable environments: those are basically dealing with very ill-defined, dynamic situations and adapting to complex environments. So this is general intelligence, and this is how we test it: we test people across all these different kinds of tasks, and what we find is a strong positive manifold; there is some general ability behind it. But notice these: problem formulation, similarity judgments, pattern finding, dealing with ill-defined dynamic situations, adapting to complex environments. That's exactly the set of places where I've argued we need relevance realization. Relevance realization, I would argue, is actually the ability underlying your general intelligence; that's what we're testing for. These are the things that came out, and you can even see comprehension aspects in here, all kinds of things. So relevance realization, I think, is a very good candidate for your general intelligence, insofar as general intelligence is a unified thing. And look, this is one of the most robust findings in psychology; it just keeps happening. There are always debates about it, and people don't like the psychometric measures of intelligence, but I think that's because they're confusing intelligence with rationality and wisdom; we'll come back to that. The thing is, this is a very powerful, reliable measure. It goes back to the early twentieth century and it keeps getting replicated; it is not going through a replication crisis. And if I had to know one thing about you in order to predict you, the one thing that outperforms anything else is knowing this. It will tell me how you do in school, how you do in your relationships, how well you look after your health, how long you're likely to live, whether or not you're going to get a job. It crushes how well you do in an interview as a predictor of whether you'll get and keep a job. Is this the only thing that's predictive of you? No, and I'm going to argue later that intelligence and rationality are not identical. But is it a real thing, and is it a unified thing?
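Before answering, the positive manifold itself is easy to illustrate numerically. In this small simulation (every task name, loading, and sample size is invented for illustration; it is a toy, not a psychometric dataset), a single latent factor loads on several very different tasks, so all the pairwise correlations come out positive and one dominant factor falls out of the correlation matrix.

```python
# A toy positive manifold: one latent factor g loads on several different
# tasks, so every pairwise correlation is positive and one factor dominates.
# All loadings and task names are invented for illustration.
import numpy as np

rng = np.random.default_rng(2)
n = 2000
g = rng.normal(size=n)                                   # latent general ability
loadings = {"math": 0.8, "verbal": 0.7, "spatial": 0.6, "sports": 0.3}
scores = np.column_stack([
    w * g + rng.normal(scale=np.sqrt(1 - w**2), size=n)  # unit-variance scores
    for w in loadings.values()
])

corr = np.corrcoef(scores, rowvar=False)
print("pairwise correlations (all positive):\n", corr.round(2))

eigvals = np.linalg.eigvalsh(corr)                       # ascending order
print("variance share of the dominant factor:", round(eigvals[-1] / len(loadings), 2))
```

The first factor here behaves like Spearman's g in miniature: knowing someone's standing on it predicts, imperfectly but positively, their standing on every task at once.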
Yes. And can we make sense of it as relevance realization? Yes. Is relevance realization therefore a unified thing? Yes. So relevance realization is your general intelligence; at least, that's what I'm arguing: your general intelligence can be understood as a dynamic, developmental evolution of your sensory motor fittedness, regulated by virtual engines that are ultimately regulated by the logistical normativity of the opponent processing between efficiency and resiliency. So we've already integrated a lot of psychology, the beginnings of biology, some neuroscience, and definitely some of the best insights from artificial intelligence. What I want to do next time, to finish off this argument, is show how this might be realized in dynamical processes within the brain, and how that lines up with some of our cutting edge ideas. I'm spending so much time on this because it is the linchpin argument of the cognitive science side of the whole series. I'm going to show you how everything feeds into relevance realization. If I can give you a good scientific explanation of it in terms of psychology, artificial intelligence, biology, and neuroscientific processing, then you can see that it is legitimate and plausible to say that I have a naturalistic explanation of it. With that in hand, I will have the means to argue how relevance realization, and we've already seen this, is probably embedded in your procedural and participatory knowing; how it's embedded in your transjective dynamical coupling to the environment and the affordances of the agent-arena relationship; the connectivity between mind and body, the connectivity between mind and world. We've seen it as central to your intelligence and central to the functionality of your consciousness. This is going to allow me to explain so much. We've already seen it afford an account of why you are inherently self-transcending. We'll see that we can use this machinery to give an account of the relationship between intelligence, rationality, and wisdom. We will be able to explain so much of what's at the center of human spirituality, and we will have a strong plausibility argument for how we can integrate cognitive science and human spirituality in a way that may help us powerfully address the meaning crisis. Thank you very much for your time and attention. Thank you.