https://youtubetranscript.com/?v=Yp6F80Nx0lc

Welcome back to Awakening from the Meaning Crisis. So we have been looking at the cognitive science of intelligence, and we've been looking at the seminal work of Newell and Simon, and we've seen how they are trying to create a plausible construct of intelligence. They're drawing many different ideas together into this idea of intelligence as the capacity to be a general problem solver, and then they're doing a fantastic job of applying the naturalistic imperative, which helps us to avoid the homunculus fallacy, because we're trying to analyze, formalize, and mechanize our explanation of intelligence, explaining the mind ultimately in a non-circular fashion by explaining it in non-mental terms. This will also hopefully give us a way of re-situating the mind within the scientific world view. We saw that at the core of their construct was the realization, via the formalization and the attempted mechanization, of the combinatorially explosive nature of the problem space, and of how crucial relevance realization is: somehow you zero in on the relevant information. They proposed a solution to this that has far-reaching implications for our understanding of meaning cultivation and of rationality. They proposed the distinction between heuristic and algorithmic processing, and the fact that most of our processing has to be heuristic in nature. It can't pursue certainty. It can't be algorithmic. It can't be Cartesian in that fashion. That also means that our cognition is susceptible to bias. The very processes that make us intelligently adaptive, that help us to ignore the combinatorially explosive amount of options and information available to us, are the ones that also prejudice us and bias us so that we can become self-deceptively misled. This was powerful work. They deserve to be seminal figures, and they exemplify how we should be trying to do cognitive science and the power of the naturalistic imperative.
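The heuristic/algorithmic distinction can be made concrete with a small sketch. This is my toy example, not Newell and Simon's: an exhaustive, algorithmic search that guarantees the best answer but whose cost doubles with every item, versus a greedy heuristic that is fast precisely because it ignores most options, and is therefore systematically biased.

```python
from itertools import combinations

# Toy problem (hypothetical): pick numbers from a list whose sum gets
# as close to a target as possible without exceeding it.

def exhaustive(items, target):
    # Algorithmic: check every subset. Guaranteed optimal, but the number
    # of subsets doubles with each item -- combinatorial explosion.
    best = ()
    for r in range(len(items) + 1):
        for combo in combinations(items, r):
            if sum(combo) <= target and sum(combo) > sum(best):
                best = combo
    return best

def greedy(items, target):
    # Heuristic: grab the largest item that still fits. Fast (no explosion),
    # but biased -- it can permanently miss the optimum.
    chosen = []
    for x in sorted(items, reverse=True):
        if sum(chosen) + x <= target:
            chosen.append(x)
    return chosen

print(sum(exhaustive([6, 5, 5], 10)))  # 10 -- finds 5 + 5
print(sum(greedy([6, 5, 5], 10)))      # 6  -- grabs the 6 first and gets stuck
```

The greedy version is the interesting one: the very shortcut that makes it tractable is what misleads it, which is the lecture's point about adaptivity and bias coming from the same machinery.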
There were serious shortcomings in Newell and Simon's work. They themselves fell prey to a cognitive heuristic that biased them, and this is something we should remember even as scientists: the scientific method is a psychotechnology designed to try and help us deal with our proclivities towards self-deception. They were making use of the essentialist heuristic, which is crucial to our adaptive intelligence. It helps us find those classes that do share an essence and therefore allow us to make powerful predictions and generalizations. Of course, the problem with essentialism is precisely that it is a heuristic; it is adaptive, we are tempted to overuse it, and that will make us fail to see that many categories do not possess an essence, as Wittgenstein famously pointed out with categories like game or chair or table. Newell and Simon thought that all problems were essentially the same, and therefore that how you formulate a problem is a rather trivial matter. Because of that, they were blinded to the fact that all problems are not essentially the same, that there are essential differences between types of problems, and that problem formulation is therefore actually very important. This is the distinction between well-defined problems and ill-defined problems, and I made the point that most real-world problems are ill-defined problems. What is missing in an ill-defined problem is precisely the relevance realization that you get through a good problem formulation. We then went into work in which Simon himself participated, the work of Kaplan and Simon, to show that this selfsame relevance realization through problem formulation is at work in addressing combinatorial explosion. We took a look at the problem of the mutilated chessboard: if you formulate it as a covering strategy, you will get into a combinatorially explosive search.
Whereas if you formulate it as a parity strategy, if you make salient the fact that the two removed squares are the same color, then the solution becomes obvious to you and very simple to state. Problem formulation helps you avoid combinatorial explosion and helps you deal with ill-definedness, and this process by which you move from a poor problem formulation to a good problem formulation is the process of insight. This is why we have seen throughout that insight is so crucial to your being a rational cognitive agent, and that means that in addition to logic being essential for our rationality, those psychotechnologies that enhance our capacity for insight are also crucially important, indispensable. We know that in insight, relevance realization is recursively self-organizing, restructuring itself in a dynamic fashion. So insight is not only important for dealing with ill-definedness; this is what problem formulation, or as I'll later call it, problem framing, is doing. It's also helping me avoid combinatorial explosion, and it's doing something else, something we talked about and saw already with the nine dot problem: it's helping you to overcome the way in which your relevance realization machinery is making the wrong things salient and obvious for you. So insight is also a way in which this process is self-corrective, in which the problem formulation process is self-corrected, because you can mislead yourself, be misdirected. So insight deals with converting ill-defined problems into well-defined problems, and this helps us to do problem framing, problem formulation, or reframing when needed.
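The payoff of reformulation can be seen in a short sketch. This is my code, assuming the standard statement of the puzzle (an 8x8 board with two diagonally opposite corners removed, to be covered by 31 dominoes): the covering formulation demands an exponential search over placements, while the parity formulation just counts colors.

```python
# Mutilated chessboard, parity formulation (a sketch of the standard
# argument, not code from the lecture). Each domino covers exactly one
# light and one dark square, so a tiling can exist only if the remaining
# light and dark squares are equal in number.

def parity_check(removed):
    squares = {(r, c) for r in range(8) for c in range(8)} - set(removed)
    light = sum((r + c) % 2 == 0 for r, c in squares)
    dark = len(squares) - light
    return light == dark  # a necessary condition for a domino covering

# Two diagonally opposite corners are the same color, so removing them
# leaves 32 squares of one color and 30 of the other: no covering exists.
print(parity_check([(0, 0), (7, 7)]))  # False -- obvious once reformulated
print(parity_check([(0, 0), (0, 1)]))  # True -- parity at least permits it
```

Note the check is only a necessary condition; the point is that the right framing makes the impossibility immediate, with no search through the space of placements at all.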
It helps us avoid combinatorial explosion by doing problem formulation or reframing, as with the person who shifted from a covering strategy to a parity strategy in the mutilated chessboard. It also helps us correct how we inappropriately limit our attempts to solve a problem by what we consider salient or relevant, and it allows us to reformulate, reframe, and break out of the way we have boxed our cognition and our consciousness in. So insight is crucial in the cultivation of wisdom. I want to go on now. We're doing this thing where we're understanding intelligence, and we're seeing many things converging on the idea of being a general problem solver; and then, once we get this notion of a general problem solver, we're seeing many different things in turn feeding into this issue of relevance realization as what makes you capable of being a general problem solver. So we have many things feeding into this, and this has tremendous potential. I'm trying to show you a plausibility structure: many converging lines of how we measure and investigate and talk about intelligence lead to understanding intelligence as being a general problem solver, and then we're starting to see that many lines of investigation are converging on the idea that what makes you generally intelligent is your capacity for relevance realization. I want to continue this convergence; I want to make it quite powerful for you. So presumably one of the things that contributes, as I mentioned, to your capacity for being a general problem solver is your capacity for categorizing things. I already alluded to that when I discussed this issue last time. Your ability to categorize things massively increases your ability to deal with the world.
If I couldn't categorize things, if I couldn't see these both as markers, I would have to treat each one as a raw individual that I'm encountering for the first time, kind of like we do when we meet people and treat them with proper nouns. So this would be Tom and this would be Agnes, and meeting Tom doesn't tell me anything about what Agnes is going to be like. We talked about this when we talked about categorical perception. But if I can categorize them together, I can make predictions about how any member of this category will behave. It massively speeds up my ability to make predictions, to abstract important and potentially relevant information. It allows me to communicate: I can communicate all of that categorical information with a common noun, marker. Your ability to categorize is central to your ability to be intelligent. So what is a category? A category isn't just any set of things. A category is a set of things that you sense belong together. Now, we noted last time that your sense of things belonging together isn't necessarily because they share an essence; that is the common mistake. How is it that we categorize things? How does this basic ability, central to our intelligence, operate? I'm not going to try and fully explain that; I don't know anyone who can do that right now. All I need to do is show you again how this issue of relevance realization is at the center. The standard explanation, the one that works from common sense, is the one you see in Sesame Street. You give the child three things that are kind of the same and one that is not like the others, and the child has to pick out the one. These go together, this one doesn't. These are categorized as markers. That's the Sesame Street explanation. And what's the explanation behind it? I notice that these are similar. I notice that this one is different. I mentally group together the things that are similar.
I keep the things that are different mentally apart. That's how I form categories. Isn't that obvious? Again, explaining how it becomes obvious to you, how you make the correct properties salient, is the crucial thing. Why? This is a point made famous by the philosopher Nelson Goodman. What a great name. Goodman pointed out that we're often (I'm going to use our language; I think this is fair) equivocating, when we invoke similarity and how obvious it is, between a psychological sense and a logical sense, and in that way we deceive ourselves that we're offering an explanation. What do I mean? What does similarity mean in a logical sense? Well, remember the Sesame Street example: kind of the same. Similarity is partial identity. And what does partial identity mean? Well, you share properties, you share features; the more features you share, the more identical you are, the more similar you are. That's pretty clear. Well, once you agree with that, Goodman is going to say, now you have a problem, because any two objects are logically overwhelmingly similar. Pick any two objects. Let's say a bison and a lawnmower. All I have to do is list lots of properties that they share in common. Well, they're both found in North America. Neither one was found in North America 300 million years ago. Both contain carbon. Both can kill you if not properly treated. Both have an odor. Both weigh less than a ton. Neither one makes a particularly good weapon. In fact, the number of things that I can say truly that are shared by this and this is indefinitely large. It's combinatorially explosive. It goes on and on and on. And this is Goodman's point: they share an indefinitely large number of properties. Now I imagine you're saying, yeah, that's all true.
I didn't say anything false. Notice how truth and relevance aren't the same thing. I didn't say anything false. But what you're saying to me is, yeah, but those aren't the important properties; you're picking trivial ones. And notice what you're doing: you're telling me that I haven't zeroed in on the relevant properties, the ones that are obvious to you, the ones that stand out to you as salient. So what you're now doing is moving from a logical to a psychological account of similarity. And what matters for psychological similarity is not any true comparison, but finding the relevant comparisons. And the thing about that is that it doesn't seem to be stable. So I'm going to give you a set of things. Is it a category? Your spouse, your children, your pets, works of art, gasoline, explosive material. Is that a category? You go, no, they don't share enough in common. Now here's what I say to you: there's a fire. Oh, right. All those things belong together now, because I care about my wife, I care about my kids, I care about my pets, and fire can kill them; and explosive stuff and flammable stuff is dangerous. Now it forms a category. In one context, not a category; in another context, a very tight and important category. Now the logical sharing has not changed. What has changed is which properties or features you consider relevant for making the comparison. Out of all of what is logically shared, you zero in, you see, on the relevant features for comparison. You do the same thing when you're deciding that two things are different. Because any two objects, any objects you think are really the same, also have an indefinitely large number of differences. And when you are holding things as different, it's because you've zeroed in; here, you know, shape and use are the relevant differences. So at the core of your ability to form and use categories is your capacity, again, for zeroing in on relevant information.
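The fire example can be caricatured in code. This is my toy model, and every feature tag in it is invented for illustration: the point is only that a "category" appears or dissolves depending on which features the context makes relevant, while the logically shared properties never change.

```python
# Toy model of context-driven categorization (all feature tags hypothetical).
features = {
    "spouse":   {"loved"},
    "children": {"loved"},
    "pets":     {"loved"},
    "artworks": {"loved", "flammable"},
    "gasoline": {"flammable", "dangerous"},
}

def category(relevant):
    # A "category" here is whatever shares a contextually relevant feature;
    # change what counts as relevant and the grouping appears or dissolves.
    return {item for item, tags in features.items() if tags & relevant}

print(category({"edible"}))              # set() -- no category in this context
print(category({"loved", "flammable"}))  # the whole "there's a fire!" category
```

Nothing in the items changed between the two calls; only the relevance weighting did, which is the instability the lecture is pointing at.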
Now one thing that people sometimes say to me when I start talking this way is, oh, you know, Darwin, Darwin, Darwin. And we'll talk about Darwin again; Darwin's very important and we'll talk about his work. But what they mean is: you're doing all this abstract stuff, but what about concrete survival situations? I just have to make a machine that can survive. That's just obvious, right? It avoids this and it finds that. Well, first of all, is it obvious? One of the things such a machine has to do, for example, is avoid danger. Danger, Will Robinson, right? Danger. What set of features do all dangerous things share? Don't tell me synonyms for danger. I mean, holes are dangerous. Bees are dangerous. Poison is dangerous. Knives are dangerous. Lack of food is dangerous. What do all of those share? And don't say, well, they lead to the damaging of your tissue; those are synonyms for danger. What I'm talking about are causes of danger. What do they share? How do you zero in on them? You still say, well, I sort of get that, but still, you know, just moving around the world, finding your food. Okay, well, let's do that. Let's try and make a machine that's going to deal with that very basic problem. It's going to be a cognitive agent looking for its food. Now, because it's an electronic machine, a robot, we're going to have it look for batteries. This is an example from Daniel Dennett. So here's my robot. It's mobile. It's got wheels. It's got this appendage for grabbing stuff. It's got all these wonderful sensors. It's got lots of computational power. Okay. So we know what we need to do. In order to make it an agent, a cognitive agent (that's what we've been talking about from the very beginning), it has to be different from merely something that generates behavior. Everything generates behavior. This behaves in a certain way; so does this; so does this.
What makes you an agent? I mean, this isn't all that makes you an agent; this is a philosophically complex problem. But the crucial thing is the following: you can determine the consequences of your behavior. I'm using that term very broadly. You can determine the consequences of your behavior and change your behavior accordingly. This ability to determine the consequences, the effects, of your behavior is crucial to being an agent. So we build a machine that can do that. It can determine the consequences of its behavior. Now we give it this very basic problem. So here's a wagon. It has a nice handle, and on it is a nice juicy battery. And the robot will try and do what you and I do. And this is also a Darwinian thing, because for most creatures you have to not only find food, you have to avoid being food, and so you don't just eat your food where you first find it. Even powerful predators like leopards move their food to another location, because it could get stolen, they could get preyed upon, etc. You don't eat your food where you first come across it. Fast food restaurants are somewhat of an anomaly, but when you walk into the supermarket you don't just start eating. You try and take your food to a safer place. You try and share your food with other people, because that's a socially valuable thing to do. That's why even when you're eating something you don't like, you say, ooh, this tastes horrible, taste it; you want to share, right? You want to use food as a way of sharing experience, of bonding together. So, the robot is programmed to take its food, the battery, to a safe place and then consume it. Well, that seems just so simple, right? But we have to make this a problem, because we're talking about being a problem solver. And on this wagon is a lit bomb.
The bomb is lit, which means there's a very high probability the fuse will burn down and the bomb will go off. And we put the robot in this situation. Now, what does the robot do? The robot pulls the handle, because it has determined that a consequence, an effect, of pulling the handle is to bring the battery along. So it pulls the wagon and it brings the battery along, because that's the intended effect, the consequence that it has determined is relevant to its goal. But, of course, the bomb goes off and destroys the robot. And we think, what did we do wrong? There's something missing. And then we realize: ah, we made the robot only look for the intended effects of its behavior. We didn't have the robot check side effects. And that's really important, right? Every year this happens; people fail to check side effects. They go into a situation in which they know flammable gas has diffused into the air, but it's dark, and so they strike a match because they want the intended effect of making light. But it has the unintended effect of creating heat, which sets off the gas, which explodes and harms or kills them. So, ah, we say, we have to have the machine not only check the intended effects; it has to check the side effects of its own behavior. Okay. So what we're going to do is give it more computational power: more sensors, way more sensors, way more computation. And we're also going to put a black box inside it, like they do in an airplane, so that we can see what's going on inside the robot. And then we're going to put it into this situation, because this is a great test situation; once we solve this simple Darwinian problem, we'll have a basically intelligent machine. So we put it in this situation and it comes up to the wagon and then nothing happens. It doesn't do anything. And we go, what? The bomb goes off, right?
Why didn't it just move away from the wagon, or why didn't it try to lift the battery off? Well, we take a look and we find that the robot is doing exactly what we programmed it to do. It's trying to determine all of the possible side effects. So it's determining that if it pulls the handle, that will make a squeaking noise; the front left wheel will go through 30 degrees of arc; the front right wheel will go through 30 degrees of arc; same for the back left and back right wheels; there'll be a slight wobbling and shifting in the wagon; the grass underneath the wheels is going to be indented; the position of the wagon with respect to Mars is being altered. Do you see what the issue is here? The number of side effects is combinatorially explosive. So what do we do? Well, we think we'll give it (and this is something I'm going to argue later we can't do) a recipe, a definition, of relevance. Nobody knows what that is; in fact, I'm going to argue later that it's actually impossible, and that's going to be crucial for understanding our response to the meaning crisis. But let's grant the possibility that we have a definition of relevance, and we'll have the robot determine which side effects are relevant or not. Oh, that's great. So we add that new ability, we give it some extra computational power, we put it in the situation, and it goes up to the wagon and the battery, and the bomb goes off while it just sits there calculating. What's going on? We look inside and it's making two lists: here's the wheel turning, that's irrelevant, and it's judging that that is irrelevant.
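The robot's predicament can be sketched in a few lines. This is my construction, loosely after Dennett's example, and the "relevance test" in it is a deliberately hypothetical stand-in: even granting a perfect definition of relevance, a system that inspects side effects one by one never gets through the list.

```python
from itertools import count, islice

def side_effects(action):
    # An open-ended stream of true but trivial consequences of the action.
    # The real set is indefinitely large; this generator never terminates.
    for n in count(1):
        yield f"{action}: position relative to object #{n} altered"

def is_relevant(effect):
    # Stand-in for a hypothetical definition of relevance.
    return "bomb" in effect

def check_everything(action, budget):
    # Even with a perfect test, the robot spends its whole budget correctly
    # labelling irrelevancies -- and the stream just keeps going.
    inspected = islice(side_effects(action), budget)
    return [e for e in inspected if is_relevant(e)]

print(check_everything("pull handle", 100_000))  # [] after 100,000 checks
```

The failure mode is not in the relevance test but in applying it exhaustively: the fix has to be ignoring the stream without checking it, which is exactly the koan-like point the lecture makes next.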
Here's the change over here: it's irrelevant. And it's making a list, and the list is going, and it's correctly labeling each one of these as irrelevant. But the list keeps going and going and going. See, this is going to sound like a Zen koan: you have to ignore the information, not even check it. Relevance realization isn't the application of a definition. It is somehow intelligently ignoring all the irrelevance and somehow zeroing in, making the relevant stuff salient, standing out, so that the actions you should take are relevant to you, are obvious to you. This is the problem of the proliferation of side effects in behavior, in action. This is called the frame problem. Now, there are different aspects of the frame problem. One was a technical, logical aspect in computational programming, and Shanahan, I think, is correct that he and others have solved that technical problem. But what Shanahan himself argues is that once you solve that technical version of the frame problem, a deeper problem remains. And he calls this deeper version of the frame problem the relevance problem. He happens to think that consciousness might be the way in which we deal with this problem; we'll talk about that later. Many people are converging on the idea that consciousness, and related ideas like working memory, have to do with our ability to zero in on relevant information. But let's keep going. Because what about communication? Isn't that central to being a general problem solver? You bet. Especially if most of my intelligence is my ability to coordinate my behavior with myself and with others, communication is vital. We see this even in creatures that don't have linguistic communication. Social communication makes many species behave in a more sophisticated fashion.
I already mentioned to you that there's a relationship between how intelligent an individual is and how social the species is. It's not a strict law; there seem to be important exceptions, like the octopus. But in general, communication is crucial to being an intelligent cognitive agent. Let's use linguistic communication as our example, because that way we can also bring in the linguistics that's in cognitive science. The point is that when you're using language to communicate, you're involved with a very particular problem. This was made really clear by the work of H.P. Grice. He pointed out that you are always conveying much more than what you're saying. It always has to be that way, and communication depends on you being able to convey much more than you say. Now why is that? Because I have to depend on you to derive the implications (a logical thing) and also what he called implicatures (not a directly logical thing) in order for me to convey above and beyond what I'm saying. So I drive up in my car, put my window down, and I say to a person on the street: excuse me, I'm out of gas. And the person comes over and says, oh, there's a gas station at the corner. And I say thank you and drive away. Okay, so let's go through this carefully. I roll down the window and I just shout out: excuse me. What would I actually need to be saying to capture everything that I'm conveying? I would have to say: I'm shouting this word, excuse me, in the hope that anybody who hears it understands that the "me" refers to the speaker, and that by saying excuse me I'm actually requesting that you give me your attention.
Understanding, of course, that I'm not demanding you give me your attention for an hour or three hours or 17 days, but for some minimal amount of time that's somehow relevant to a problem I'm going to pose, one that's not too onerous. And when I say "I", I mean this person making the noises, who is actually the same as the one referred to by this other word, "me". I'm out of gas: of course, I don't mean me, the speaker; I mean the vehicle I'm actually in. I'm not asking you to make me more flatulent. I'm asking you to help me find gasoline for my car, and I'm actually referring to gasoline by this short term, gas. And by saying this, I know you understand that my car isn't completely out of gasoline; there's enough in it that I can drive some relatively close distance to find a source of gasoline. Then the other person: oh. By uttering this otherwise meaningless term, I'm indicating that I accept the deal we have here: I'm going to give you a bit of my attention, and I understand that it's not going to be too long or too onerous. I can make a statement seemingly out of the blue and you will know how to connect it to what you actually want, which is gasoline for your car, which is not completely out of gas. I will just say the statement: there's a gasoline station at the corner. You will figure out that that means you can drive to it; I'm talking about a nearby corner, somehow relevantly matched to the amount of gasoline you have left, not a corner halfway across the continent. There's a gasoline station there, one that will distribute gasoline for your car. It's not for giving helium to blimps. It's not a little model of a gas station. It's not a gas station that's closed and has not been in business for 10 years. It's a gas station that will accept Canadian currency or credit; it won't demand your firstborn or fruits from your field.
And you know that all of that is going on, because if any of it is violated, you either find it funny or you get angry. If you say, excuse me, I'm out of gas, and the person comes up and blows some helium into your car, you don't go, oh, thank you, that's what I wanted, some gas. It's ridiculous. If you drive to the corner and there's a gas station that's been out of business for 10 years, you go, what? What's wrong with that person? You're always conveying way more than you're saying. Now notice something else. I tried to explicate; I gave you a whole bunch of sentences to try and explicate what I was conveying. But each one of those sentences is also conveying more than it says. And if I were trying to unpack what each of them conveys beyond what it says, I would have to generate yet more sentences, and so on and so on. You see what this explodes into. You can't say everything you want to convey. You rely on people reading between the lines. By the way, that is arguably what this word means: at least one of the proposed etymologies of intelligence is inter legere, which means to read between the lines. So what did Grice say we do? Well, we follow a bunch of maxims. We assume some basic level of cooperation when we're trying to communicate; I don't mean social cooperation, just communicative cooperation. And we assume that people are following some maxims. So you're at a party, and you ask me, well, how many kids do you have? And I say, oh, I have one. I have one kid. And you go, okay. Then later on you overhear somebody ask me the same question, how many kids do you have? And I say, oh, I have two. I have two sons. And you come up to me and say, what's wrong with you? Why did you lie? What? I didn't lie.
If I have two kids, I necessarily have one child; I didn't say anything false in saying I have one kid. And you'd say, what an asshole. Because I didn't provide you with the relevant amount of information. I didn't give you the information you needed in order to pick up on what I was conveying. You spoke the truth, the logical truth (or I did, in this example), but I didn't speak it in such a way (this is again why you can't be perfectly logical) that I aided you in determining what the relevant conveyance is. So, Grice said we follow four maxims. We assume the person is trying to convey the truth; this is often called the maxim of quality. Then there's a maxim of quantity: they're trying to give us the right amount of information. Then there's a maxim of manner, and there's a maxim of relevance. So basically, we assume that people are trying to tell the truth, that they're trying to give us the right amount of information, that they're trying to put it in the kind of format that's most helpful to us in getting what's conveyed beyond what's said, and that they're giving us relevant information. There it is again. Oh look, there's the word: relevance. Then Sperber and Wilson come along with a very important book, which I'll talk about later (I have criticisms of it), entitled Relevance. And what's interesting is that they propose this not just as a linguistic phenomenon but as a more general cognitive phenomenon. They argue that all of these maxims actually reduce to the one maxim: be relevant. Okay, so manner. What is it to present information in a way that's helpful to somebody? Well, you try and make salient and obvious what's relevant. Okay, that's easy. Quantity: give the relevant amount of information. But what about quality? Ah, and you say, ah, John, I got you.
You can't reduce this one, because this is truth, and you have been hitting me over the head since the beginning of this series with the point that truth and relevance are not the same thing. You're right. So what do Sperber and Wilson do about that? They do something really interesting. They say we don't actually demand that people speak the truth, because if we did, we'd be screwed: most of our beliefs are false. What are we actually asking people to do? We're asking people to be honest or sincere. That's not the same thing. You're expected to say what you believe to be true, not what is true. Okay, so, you say, that means the maxim is actually: be sincere. What does sincere mean? Well, convey what's in your mind. Everything that's in your mind? Everything that's going on? So when you ask me, how many kids do you have, I've got all this stuff going on in my mind: this marriage is failing, what am I going to do to take care of these kids, I love this kid but there's all kinds of... do I convey all of that? No. If you've ever been trapped with somebody who's getting drunk and talking like that at a party, it's horrible. You ask one relative, do you have any kids? and you're trapped for three hours. So that's not what we mean. We don't mean: tell me everything that's in your mind right now, convey it all to me, John, give it all to me. What do we mean? We mean: convey what is relevant to the conversation or context. Out of all of the possible implications and implicatures, zero in on those that you think are relevant to me, our conversation, and the context. So that also reduces to relevance. So at the core of your ability to communicate is your ability to realize relevant information. Notice what I'm doing here. I'm making this huge convergence argument again and again and again. What's at the core of your intelligence?
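The one-kid/two-kids example can be put in miniature. This is my toy, and the "informativeness" ordering in it is an assumption I'm imposing for illustration: among answers the speaker believes true, the cooperative one is the most informative, which is the maxim of quantity read as a relevance constraint.

```python
# Toy sketch of the maxim of quantity as relevance (my framing, not
# Grice's or Sperber and Wilson's formalism).

def true_answers(n_kids):
    # All of these are literally true if you have n_kids children:
    # "at least 1", "at least 2", ..., plus the exact count.
    return [f"I have at least {k}" for k in range(1, n_kids + 1)] \
           + [f"I have exactly {n_kids}"]

def cooperative_answer(n_kids):
    # Quantity as relevance: pick the most informative true answer.
    # "I have at least 1" is true but misleading -- a quantity violation.
    return true_answers(n_kids)[-1]

print(cooperative_answer(2))  # I have exactly 2
```

The misleading answer is never false; it fails only by not being the relevant amount of information, which is why truth alone can't do the work.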
Again and again and again: your capacity for relevance realization. It's even more complicated than this. Putting things together that we... there's so much more I could teach you. All of the information available in the environment is overwhelming, combinatorially explosive; you have to selectively attend to some of it. So selective attention is doing relevance realization. And then you have to decide how to hold in working memory what's going to be important to you. Lynn Hasher's excellent work shows that working memory is about trying to screen relevant from irrelevant information. You're using this in your problem solving, right? And here is where you are trying to deal with the combinatorial explosion in the problem space, all that stuff we talked about. That's also interacting with the proliferation of side effects, like we saw with the robot and the battery. You're trying to act. So you're trying to select: what do I hold in mind, right? How do I move through the problem space once I start acting? What side effects do I pay attention to? Which ones do I not pay attention to? And all of that has to do with: out of all the information in my long-term memory, how do I organize it? How do I categorize it? How do I improve my ability to access it? Long-term memory organization and access is dependent on your ability to zero in on relevant information. And this of course feeds back to here, and this feeds back to here, and this feeds back to here, right? These are all interacting. This is the relevance problem: the problem of trying to determine what's relevant. It's the core of what makes you intelligent. Now why does that matter? What I'm trying to show you is how deep and profound this construct is.
This idea of relevance realization is at the core of what it is to be intelligent. And we know that this isn't just cold calculation, right? Your relevance realization machinery has to do with all the stuff we've been talking about: salience, obviousness. It's about what motivates you, what arouses your energy, what attracts your attention. Relevance realization is deeply involving. It's at the guts of your intelligence, your salience landscaping, your problem solving. Okay, so what do I want to do? What I want to do is the following. I want to propose to you, right, that we can continue to do this. I could show you how all of this (and I could do more, right) is converging on relevance realization. Then I want to do two things. I want to try and show how we might be able to analyze, formalize, and mechanize this, in a way that could help to coordinate how our consciousness, our cognition, our attention, our access to our long-term memory, how all of that is working. Then what do I want to do with this? I want to try and show you how we can use relevance realization in a multi-apt fashion to try and get a purchase on these things we have been talking about in the historical analysis. Can we use relevance realization, how it's dynamically self-organizing within this complex? It's self-organizing within each one of these. Remember, I showed you how attention is bottom-up and top-down at the same time. All of these are powerfully self-organizing, and the whole thing is self-organizing. We know it's self-organizing in insight. Can I use relevance realization to explain things that are crucial to wisdom, to self-transcendence, to spirituality, to meaning? That's exactly what I'm going to do. I'm going to use this construct, once I've tried to show you how it could potentially be grounded, building the synoptic integration across the levels, and then do this kind of integration.
Think about why this makes at least initially plausible sense. Relevance realization is crucial to insight, and insight is central to wisdom. Relevance realization, and you're getting a hint of it, seems to be crucial to consciousness and attention, and altering your state of consciousness, as we've already seen, can be crucial to wisdom and meaning-making. And that would make sense. Look, isn't it sort of central that what makes somebody wise is exactly their capacity to zero in on the relevant information in the situation? To take an ill-defined, messy situation and zero in? To pay attention to the relevant side effects, the relevant consequences? To pay attention to the important features? To remember the right similar situations from the past? Yeah, right? Well, you say, okay, I sort of see that. What about the self-transcendence? Well, we already see that this is a self-organizing, self-correcting process. We already know that there's an element of insight. The very machinery that makes you capable of insight is the machinery that helps you overcome the biases, helps you overcome the self-deception, and helps you solve problems that you couldn't solve before. We talked about this with systematic insight. Okay, so consciousness, insight, wisdom. But what about meaning? Come on, like, where's all that? Well, here's the proposal. What we were talking about when we talked about meaning in terms of the three orders, the nomological, the narrative, and the normative, were, yes, connections that afforded wisdom and self-transcendence, very much. But what connections? Well, the connections that were lost in the meaning crisis: the connections between mind and body, the connections between mind and world, the connections between mind and mind, the connection of the mind to itself. These are all the things that are called into question, the fragmentation of the mind itself.
And we saw how, all throughout, this had to do with, again, the relationship between salience and truth: what we find relevant in terms of how it's salient or obvious to us, how that connects up to reality, and how it, remember Plato, connects parts of us together in the right way, the optimal way. What if what we're talking about, when we're using this metaphor of meaning, is how we find things relevant to us, to each other, parts of ourselves relevant to each other, how we're relevant to the world, how the world is relevant to us? All this language of connection is not largely the language of causal connection. It's the language of establishing relations of relevance between things. Perhaps there's a deep reason why manipulating relevance realization affords self-transcendence and wisdom and insight: precisely because relevance realization is the ability to make the connections that are at the core of meaning, those connections that are quintessentially being threatened by the meaning crisis. That would mean that if we get an understanding of the machinery of this, we would have a way of generating new psychotechnologies, and of redesigning, reappropriating, and systematically coordinating older psychotechnologies, in order to regenerate, that's the word I want to use, regenerate these fraying connections, re-legitimate them, and afford the cultivation of wisdom, self-transcendence, and connectedness to ourselves and to each other and to the world. And that's in fact what I want to explore with you and help explain to you in our next session together on Awakening from the Meaning Crisis. Thank you very much for your time and attention. Thank you.