https://youtubetranscript.com/?v=9j5O-tnaFzE
Welcome back to Awakening from the Meaning Crisis. This is episode 27. So last time we took a look at the nature of cognitive science and argued for a synoptic integration that addresses equivocation, fragmentation, and ignorance of the causal relations between the different levels at which we talk about and study the mind. And it does that by trying to create plausible and potentially profound constructs. And then I proposed to you that we start doing cognitive science in two ways: first of all, just trying to look at the cognitive machinery that is responsible for meaning cultivation, but also trying to exemplify that pattern of bringing about synoptic integration through the generation of plausible and potentially profound constructs. So we took a look at the central capacity for being a cognitive agent, and this is your capacity for intelligence. And of course, intelligence is something that neuroscientists are looking for in the brain, and that artificial intelligence and machine learning are trying to create. Psychology, of course, has famously been measuring intelligence since its very inception. The word intelligence literally carries 'reading between the lines' within it, which, we'll see, has a lot to do with your use of language, et cetera. And culture, of course, seems to be deeply connected to the application and development of intelligence. So intelligence is a great place to start. And then I said we need to be very careful. We don't want to equivocate about intelligence. We want to make sure we approach it very carefully, because although it is very important to us, we often use the term in an equivocal and confused manner and are therefore bullshitting ourselves in an important way. And then I proposed to you that we not focus on the results, the products of our intelligence: our knowledge, and what our knowledge does for us, our technologies, for example. We focus instead on the process that allows us to acquire knowledge, because that way intelligence is something we can use to explain how we have acquired the knowledge we have. I then proposed to you that we follow the work that was seminal both in the psychometric measurement of intelligence, that of Binet and Simon, and in the attempt to artificially generate intelligence, the work of Newell and Simon. And this is the idea that intelligence is your capacity to be a general problem solver: to solve a wide variety of problems across a wide variety of domains. And then, in order to get clear about that, we took a look at the work of Newell and Simon, who tried to give us a very careful formal analysis of what is going on in intelligence. And I'm going to come back to those ideas in a minute or two, this idea of analysis, a formal analysis. A problem was analyzed into a representation of an initial state and a goal state, and I have a problem when my initial state and my goal state are significantly different. I can then apply operations, or operators; these are actions that will change one state into another state (remember me moving towards the cup, raising my hand, for example). And I can have a sequence of operations that will take me from my initial state into my goal state, but I have to follow the path constraints. I want to remain a general problem solver; I don't want to solve any one problem to the detriment of my capacity to be a general problem solver, or else my solving this problem will undermine my intelligence in general.
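To make that formal analysis concrete, here is a minimal sketch in Python of Newell and Simon's way of representing a problem: an initial state, a goal test, operators, and path constraints, with a solution being a sequence of operations from the one to the other. The names and the naive depth-first search are my own illustrative choices, not Newell and Simon's notation.

```python
# A minimal sketch of Newell and Simon's problem formalization.
# All names here are illustrative, not their original notation.

from dataclasses import dataclass
from typing import Callable, Iterable

@dataclass
class Problem:
    initial_state: object                              # where you start
    is_goal: Callable[[object], bool]                  # recognizes the goal state
    operators: Callable[[object], Iterable[object]]    # actions: state -> successor states
    obeys_constraints: Callable[[object], bool]        # the path constraints

def solve(problem, state=None, path=()):
    """Find a sequence of operations transforming the initial state into
    the goal state while obeying the path constraints (naive depth-first)."""
    if state is None:
        state = problem.initial_state
    if problem.is_goal(state):
        return list(path) + [state]
    for nxt in problem.operators(state):
        if problem.obeys_constraints(nxt) and nxt not in path:
            found = solve(problem, nxt, path + (state,))
            if found:
                return found
    return None
```

Note that `solve` blindly tries every successor; as the lecture goes on to show, that is exactly what a real intelligence cannot afford to do.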
So to solve a problem is to apply a sequence of operations that will transform the initial state into the goal state while following the path constraints. And then you can analyze that by taking a look at the problem space. And it was this explication, this making explicit, of the problem space that was the radical, and I will in fact argue profound, power in what Newell and Simon were doing; that's what made their work so impactful in so many disciplines. Now there are two things we have to note about this that are potentially misleading. The first is that I didn't draw the whole diagram out, and that wasn't just happenstance. We'll come back to that. The second is that this diagram is misleading precisely because it is created from a God's-eye point of view. If I were to fill the diagram out, you could see all of the pathways at once, and you could see at once which pathway leads from the initial state to the goal state. But of course in life, when you have a problem, you are not out here; having a problem is precisely to be in here. And you do not know which of all these pathways will take your initial state to your goal state while obeying the path constraints. You don't know that. You're ignorant, because remember, we're not confusing intelligence with knowledge. Solving this is how you acquire knowledge. A problem-solving method is any method for finding the sequence of transformations that will take you from the initial state into the goal state while obeying the path constraints. And you say, okay, I get it. The diagram isn't complete, and you're over here, you can't see the whole thing, you don't know which of all the pathways. Yeah, so what? Well, here's so what. When I have analyzed this and formalized it, when I've explicated it in terms of the problem space, it reveals something. I can calculate the number of pathways here by the formula F to the power of D, F^D, where F is the number of operators I can apply at any stage and D is the number of stages I go through. So Keith Holyoak, a psychologist who was instrumental in doing important work on the psychology of problem solving, gave a very famous example of this. Let's do a concrete example, and it's a great example because it bears on machines that we have today. So let's say you're playing a game of chess. On average, the number of operations you can perform on any turn, that's F, is 30. Now don't say to me, well, many of those are stupid. That's not the point. I'm trying to explain your intelligence. That's what I have to explain. It's not what I can assume. Okay, so there are 30 legal moves, and on average there are 60 turns, so the number of pathways in the problem space is 30 to the 60th power. This is known as combinatorial explosion. It sounds like a science fiction weapon, but it's actually a central fact. This is a vast number, something like 4.2 times 10 to the 88th in standard scientific notation. Now I want you to understand how big this is, how astronomically, incomprehensibly large this is. So one thing you might say is, well, that's easy. I have many different neurons and they all work in parallel and they're all checking different alternative pathways, and that's how I do this: parallel distributed processing. There's an important element in that, but that's not going to be enough, because you only have something on the order of 10 to the 11th neurons. Now that's a lot, but it's nowhere near this. You say, ah, but it's not the neurons, it's the number of connections.
That's something like 10 to the 15th, and that's a big number, but you know what it's astronomically far away from? It's astronomically far away from this. So even if each synaptic connection is exploring a pathway, this is still overwhelming your brain. In fact, this is greater than the number of atomic particles that are estimated to exist in the universe. This is, in a literal sense, bigger than the universe. It's big. And what does that mean? That means you cannot do the following. You cannot search the whole space. Very, very many problems are combinatorially explosive, and therefore you cannot search the space. You do not have the time, the machinery, the resources to search the whole space. Now here is what is deeply, deeply interesting about you. This is sort of my professional obsession, you might say. If I could represent it this way: if this is the whole problem space, this is what you do somehow. You can't search the whole space. And I mean you can't look here and then reject, because if you look at this part of the space and reject it, and then look at this part, and so on, you end up searching the whole space. It's not a matter of checking and rejecting, because that is to search the whole space. What you do is somehow this. You somehow zero in on only a very small subsection of that whole space, and you search in there, and you often find a solution. You somehow zero in on the relevant information and you make that information effective in your cognition. You do what I call relevance realization. You realize what's relevant. Now this fascinates me. And that fascination is due to the work of Newell and Simon. Because how do you do that? And you say, well, the computers are really fast. Even the fastest chess-playing computers don't check the whole space. They can't. They're not powerful enough or fast enough. That's not how they play. So this issue of avoiding combinatorial explosion is actually a central way of understanding your intelligence, and you probably hadn't thought of that before: that one of the things that makes you crucially intelligent is your ability to zero in on relevant information. And of course you're experiencing that in two related but different ways. One way, and this is happening so automatically and seamlessly for you, is the generation of obviousness. Like, what's obvious? Well, obviously I should pick up my marker. Obviously I should go to the board. Obviousness is not a property of physics or chemistry or biology. Obviousness explains your behavior only in a common-sense way; it is what I scientifically have to explain. How does your brain make things obvious to you? And that's related to, but not identical to, the issue of how things are salient to you. How they stand out to you. How they grab your attention. And what we already know is that that process isn't static, because sometimes how you zero in on things goes wrong. What was obvious to you, what was salient to you, how you joined the nine dots, was obvious and salient to you, and yet you got it wrong. And part of your ability is to restructure what you find relevant and salient. You can dynamically self-organize what you find relevant and salient. Now Newell and Simon wrestled with this. And there's a sense in which this is the key problem that the project of artificial general intelligence is trying to address right now. In fact, that's what I argued.
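As a quick sanity check on the arithmetic above, here is the chess calculation in Python. The neuron, synapse, and atom counts are the rough, commonly cited order-of-magnitude estimates, used here only for scale, not precise figures from the lecture.

```python
# Combinatorial explosion: F**D pathways, where F is the branching factor
# and D is the depth. For chess: ~30 legal moves over ~60 turns.
F, D = 30, 60
pathways = F ** D
print(f"{pathways:.2e}")      # ~4.24e+88

# Rough, commonly cited scale comparisons (orders of magnitude only):
neurons  = 10 ** 11           # ~ neurons in a human brain
synapses = 10 ** 15           # ~ synaptic connections
atoms    = 10 ** 80           # ~ atomic particles in the observable universe

# Even one pathway per synapse, or per atom, falls absurdly short:
print(pathways // synapses)   # ~4e+73 pathways per synapse
print(pathways // atoms)      # ~4e+8  pathways per atom in the universe
```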
I've argued this in some work I did with Tim Lillicrap and Blake Richards, and in some work I've done with Leo Ferraro; there's related work by other people. But Newell and Simon realized that in some way you have to deal with combinatorial explosion. To make a general problem solver, you have to give the machine, the system, the capacity to avoid combinatorial explosion. We're going to see that this is probably the best way of trying to understand what intelligence is. People like Stanovich argue that what we're measuring when we measure your intelligence with psychometric tests is precisely your ability to deal with computational limitations, to avoid combinatorial explosion. Christopher Cherniak argues something similar. So what did Newell and Simon propose? Well, I want to talk about what they proposed and show why I think it's important, and then criticize them for what they mistook or misunderstood, and therefore why their solution, and I don't think they would have disputed this, was insufficient. So they proposed a distinction that's used a lot, but these terms have slipped. I've watched them slip in the 25 years I've been teaching at U of T. I've seen the terms slip around. But I want to use them the way Newell and Simon used them within the context of problem solving, and this is the distinction between a heuristic and an algorithm. They actually didn't come up with this distinction. It came from an earlier book by Pólya called How to Solve It, which was a book on the psychology of problem solving and a set of practical advice for how to improve it. So remember, we talked about what a problem-solving technique is. A problem-solving technique is a method for finding a problem solution. That's not trivial, because a problem solution has been analyzed in terms of a sequence of operations that takes the initial state into the goal state while obeying the path constraints. So what's an algorithm? An algorithm is a problem-solving technique that is guaranteed to find a solution or to prove, and I'm using that term technically, not merely give evidence, that a solution can't be found. And of course there are algorithmic things you do. You know the algorithm for doing multiplication, for example, you know, 33 times 4, right? There is a way to do that in which you are reliably guaranteed to get an answer. So this is important, and remember, I said I'd come back and explain to you why Descartes' project was doomed to failure, because algorithmic processing is processing that is held to the standard of certainty. You use an algorithm when you are pursuing certainty. Now, what's the problem with using an algorithm as a problem-solving technique? Well, it's guaranteed to find an answer or prove that an answer is not findable. So algorithms work in terms of certainty. Ask yourself: in order to be certain that you've found the answer, or to prove that an answer can't be found, how much of the problem space do you have to search? There are some things you can do to shave the problem space down a little bit, and computer science talks about that, but generally, for all intents and purposes, in order to guarantee certainty I have to search the space, the whole space. And the space is combinatorially explosive. So if I pursue algorithmic certainty, I will not solve any problems. I will have committed cognitive suicide.
If I try to be algorithmically certain in all of my processing, if I'm pursuing certainty as I'm trying to get over to the cup, a combinatorially explosive space opens up and I can't get there, because my lifetime, my resources, my processing power are not sufficient to search the whole space. That's why Descartes was doomed from the beginning. You can't turn yourself into Mr. Spock, you can't turn yourself into Data, you can't turn yourself into an algorithmic machine that is pursuing certainty. That is cognitive suicide. That tells us something right away, by the way. Because logic, deductive logic, is about certainty. It is algorithmic; it works in terms of certainty. An argument is valid if it is impossible for the premises to be true and the conclusion false. Logic works in terms of the normativity of certainty. It operates algorithmically. So does math. You cannot be comprehensively logical. If I try to be Mr. Spock and logic my way through everything I'm trying to do, most of my problems are combinatorially explosive and I won't solve even one of them, because I'd be overwhelmed by a combinatorially explosive search space. This tells you something. This is what I meant earlier when I said trying to equate rationality with being logical is absurd. You can't do that. These terms, as much as Descartes wanted them to be, are not and cannot be synonymous. Now, that doesn't mean that rational means being illogical or willy-nilly. See: ratio, rationing, pay attention to this. Ratio, rationing. Being rational means knowing when, where, how much, and to what degree to be logical. And that's a much more difficult thing to do. I would argue, moreover, that being rational involves not just the psychotechnology of logic but other psychotechnologies as well: knowing where, when, and how to use them in order to overcome self-deception and optimally achieve the goals that we want to achieve. Often when I talk about rationality, people think I'm talking about logic or consistency, and they misunderstand. That is not what I mean, and that's what I meant when I said Descartes was wrong in a deep sense from the beginning. Okay. Newell and Simon realized this. That's precisely why they proposed the distinction. A heuristic is a problem-solving method that is not guaranteed to find a solution. It is only reliable for increasing your chances of achieving your goal. So I've just shown you: you cannot play chess algorithmically. Of course you can, and even the best computer programs do this, play chess heuristically. You can play chess by doing the following things. Here are some heuristics: get your queen out early; control the center board; castle your king. You can do all of these things and nevertheless not achieve your goal, winning the game of chess. And that's precisely because of how heuristics work. What heuristics do is try to pre-specify where you should search for the relevant information. That's what a heuristic does. It limits the space you're searching. Now, what that actually means is that it's getting you to pre-judge what's going to be relevant, and of course that's where we get our word prejudice from. And a heuristic is therefore a bias. It's a source of bias. This is why the two are often paired together in the heuristics-and-biases approach. Look, what my chess heuristics do is bias where I'm paying attention. I focus on the center board. I focus on my queen.
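A hedged sketch of the contrast may help: an algorithmic search exhaustively visits the space (guaranteed, but it faces the full F^D explosion), while a heuristic search prunes the space by pre-judging where relevance lies (tractable, but biased). The `score` function below is a made-up stand-in for a heuristic like "control the center board", not any real chess engine's evaluation.

```python
# Sketch: an algorithm vs. a heuristic as problem-solving methods.

def algorithmic_search(state, successors, is_goal, depth):
    """Exhaustive search: guaranteed to find a solution or to prove
    there is none -- but it must face the whole F**D space."""
    if is_goal(state):
        return [state]
    if depth == 0:
        return None
    for nxt in successors(state):                  # all F branches
        found = algorithmic_search(nxt, successors, is_goal, depth - 1)
        if found:
            return [state] + found
    return None                                    # a genuine proof of failure

def heuristic_search(state, successors, is_goal, depth, score, beam=3):
    """Heuristic search: keep only the few most promising branches.
    Tractable, but the bias in 'score' may prune away the solution."""
    if is_goal(state):
        return [state]
    if depth == 0:
        return None
    promising = sorted(successors(state), key=score, reverse=True)[:beam]
    for nxt in promising:                          # only 'beam' branches
        found = heuristic_search(nxt, successors, is_goal, depth - 1, score, beam)
        if found:
            return [state] + found
    return None                                    # NOT a proof -- the answer may have been pruned
```

The first function explores up to F^D nodes; the second at most beam^D, a drastically smaller space, which is exactly the trade the lecture describes: tractability purchased with bias.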
If I'm playing against somebody who's very good, they'll notice how I'm fixated on the center board and on my queen, and they'll keep me focused there while playing a peripheral game that defeats me. I played a game of chess not that long ago and I was able to use that strategy against someone. This trade-off is called the no free lunch theorem. It is unavoidable. You have to use heuristics because you have to avoid combinatorial explosion. You can't be comprehensively algorithmic, logical, mathematical. The price you pay for avoiding combinatorial explosion is that you fall prey to bias, again and again and again. The very things that make us adaptive are the things that make us prone to self-deception. Now, there are going to be deep reasons later why this is insufficient. If you remember, we talked about these heuristics and biases before: the representativeness heuristic and the availability heuristic that were at work when you take your friend to the airport. Because you can't calculate all the probabilities, that's combinatorially explosive, you use the heuristics. How easily can I remember plane crashes? How prototypical, how representative are they of disasters and tragedies? Because of that, you judge it highly probable that the plane will crash, and then you ignore how deeply dangerous your automobile is. So the very things that make you adaptive make you prone to self-deception. Now, about this account: I have tremendous respect for Newell and Simon. So let me tell you why I have that respect, and then what criticisms I have. So first of all, there's this idea that part of what makes you intelligent is your ability to use heuristics. I think that's a necessary part, and the empirical evidence that we use these heuristics is quite powerful, convincing, and well replicated. This is also an instance of doing really, really powerful work. And this will add one more dimension to what it is to do good cognitive science. Yes, it's about creating plausible constructs that afford synoptic integration, but there is another way in which Newell and Simon exemplified, modeled for us, what it is to do it well or properly. And again, this is going to relate to the meaning crisis. Notice what they've done. Notice how all of the great changes that have made the scientific way of thinking possible are exemplified in what Newell and Simon are doing. Notice that they're analyzing. They're taking a complex phenomenon and trying to analyze it down into its basic components, just like Thales did so long ago when he was trying to get at the underlying substances and forces. They're trying to do that ontological depth perception. And then, like Descartes, they're trying to formalize it. They're trying to give us a graphical, mathematical representation. The problem space is a formalization that allows us to do calculations, equations. That's how I was able to explain combinatorial explosion to you. And then what they were doing is trying to mechanize it. I know that will make some people's hackles rise. But the point of this is: I know I've got this right if I can make a machine that can carry out my formal analysis, because that means I haven't snuck anything in. And that really matters, because it turns out that in trying to explain the mind, we often fall into a particular fallacy. Okay, so how do you see? Well, here's a triangle out here, and the light comes off of it, and it goes into your eye. Right? Sorry, that's a really horrible eye.
It goes into your eye, and then there are nerve impulses, and then I'll equivocate on the notion of 'information' to hide all kinds of theoretical problems. And then it goes into this space inside of my mind, let's call it maybe working memory or something, and it gets projected onto some inner screen, and there it is, it's projected there, and then there's a little man in here. The Latin for little man is homunculus, and the little man, maybe it's your central executive or something, says: triangle. That's how it works, right? Notice what's going on here. It sounds like I'm giving a mechanical explanation, and then I invoke something. Now, what you should ask me right away is the following: ah, yes, but John, how does the little man, the little homunculus, see the inner triangle? Oh, well, inside his head is a smaller space with a smaller projection, and there's a little man in there, and so on and so forth. You see what this gets you? An infinite regress. It doesn't explain anything. Why? Because this is a circular explanation. Remember when we talked about this? This is when I'm using vision to explain vision. And you say, well, yeah, that's stupid. I get why that's stupid. That's non-explanatory. Circular explanations are non-explanatory. But here's what I ultimately have to do, and this is what Newell and Simon are trying to do. They're trying to take a mental term, intelligence, and they're trying to analyze it, formalize it, and mechanize it so they can explain it using non-mental terms. Because if I always use mind to explain mind, I'm actually never explaining the mind. I'm just engaged in a circular explanation. What Newell and Simon are trying to do is analyze, formalize, and mechanize an explanation of the mind. They're not doing this because they're fascists or worshippers of science or because they're enamored with technology. Maybe some of those things are true about them. But independently of that, I can argue, which is what I'm doing, that the reason they're doing this is that it exemplifies the scientific method, because it is precisely designed to avoid circular explanations. And as long as I'm explaining the mental in terms of the mental, I'm not actually explaining it. I call this the naturalistic imperative in cognitive science: try to explain things naturalistically. Again, some of this might be because you have a prejudice in favor of the scientific worldview, and there are all kinds of cultural constraints. Of course, I'm not denying any of that critique. But what I'm saying is that that critique is insufficient, because here's an independent argument: the reason I'm doing this is precisely that I'm trying to avoid circular explanations of intelligence. Why does that matter? Remember, the scientific revolution produced this scientific worldview that seems to be explaining everything except how I generate scientific explanations. My intelligence, my ability to generate science, is not one of the things that is encompassed by the scientific worldview. There's this hole in the naturalistic worldview. That's why many people who are critical of naturalism always zero in on our capacity to make meaning and have consciousness as the thing that's not being explained. They're right to do that. I think they're wrong to conclude that that somehow legitimates other kinds of worldviews. We'll come back to that. Because I think what you need to show is that this project is failing. Because this is an inductive argument.
It's not a deductive one. You have to show that this project is failing, that we're not making progress on it. That's a difficult thing to do. You can't defeat a scientific program by pointing to things it hasn't yet explained, because that will always be the case; you can't just point to problems it faces. What you have to do, and this is something I think Lakatos made very clear, is point to the fact that it's not making any progress in improving our explanations. And it's really questionable, and I mean that term seriously, to claim that we're not making progress in explaining intelligence by trying to analyze, formalize, and mechanize it. It's getting really hard to claim that we're not making any progress. Now, why does this matter? Because if cognitive science can create a synoptic integration by creating plausible constructs, theoretical ways of explanation like what Newell and Simon are doing, that allow us to analyze, formalize, and mechanize, it has the possibility of making us part of the scientific worldview. Not as animals or machines, but by giving a scientific explanation of our capacity to generate scientific explanations. We can fit back into the scientific worldview that science has actually excluded us from, as the generators of science itself. So Newell and Simon are creating this powerful way of analyzing, formalizing, and mechanizing intelligence. There's lots of stuff converging on it: there's stuff from how we measure intelligence and talk about it, and from how we're trying to make intelligent machines. And it holds the promise of revealing things about intelligence that we didn't know before, like the fact that one of the core aspects of intelligence is precisely your ability to avoid combinatorial explosion, to make things salient and obvious, and to do this in a really dynamically self-corrective fashion, like when you have an insight. So I'm done praising Newell and Simon for now, because now I want to criticize them. Because Newell and Simon's notion of heuristic, although a valuable new explanatory way of thinking about our intelligence, while necessary, is insufficient. Because Newell and Simon were failing to pay attention to other ways in which we constrain the problem space and zero in on relevant information, and do that in a dynamically self-organizing fashion. Well, what were they failing to notice? They were failing to notice that they had an assumption in their attempt to come up with a theoretical construct for explaining general problem solving. They assumed that all problems are essentially the same. This is kind of ironic. We have a heuristic here, one that, if you remember, was challenged a long time ago by Ockham: the heuristic of essentialism. This is also a term that has been taken up, and I think often applied loosely, within political controversy and discourse. The idea of essentialism is that when I group a bunch of things together with a term (remember Ockham's idea that we group things together just by the words we use for them; that's nominalism), they must all share some core properties. They must share an essence. Remember, that's the Aristotelian idea of a set of necessary and sufficient conditions for being something. Now, it is of course the case that some things clearly fall into that category. Triangles have an essence.
All triangles, no matter what their shape or size, have three straight lines and three angles, and the angles add up to 180 degrees. If you have each and every one of those properties, you are a triangle. Categories like this are also natural kinds. One of the things that science does, and we'll see why this is important later, is discover those groupings that have an essence. All gold things have a set of properties that are essential to being gold. We'll talk about why that's the case later. But not everything we group together has an essence, and this was famously pointed out by Wittgenstein. Let's use Wittgenstein's example first. If you remember, we call many things games. What set of necessary and sufficient conditions do all and only games possess? Well, they involve competition. Well, there are many things that involve competition that aren't games, like war, and there are games that don't seem to involve competition, like catch. Oh, well, at least they involve other people. Solitaire. Oh, well, they have to involve imagination, they have to involve pretense. Catch? What are you pretending to do? This is Wittgenstein's point. You won't find a definition that includes all and only games. This is the case for many things, like chair, table, et cetera. Remember, this was all part of what Ockham was pointing to, I think. The idea is that we come equipped with a heuristic: we treat any category as if it has an essence, but many categories don't have essences. We're going to come back to that shortly, in a few minutes, when we talk about categorization. Why do we use this heuristic? Because it makes us look for essences. Why do we want to look for essences? Because this allows us to generalize and make very good predictions. Yes, I can overgeneralize, but I can also undergeneralize; that's also a mistake. Okay. We use this heuristic because it's adaptive. It's not algorithmic, because there are many categories that don't have essences. Newell and Simon thought this category, problems, had an essence: that all problems are essentially the same. Therefore, they could come up with one basic strategy. If all problems are essentially the same, then to make a general problem solver I basically need one problem-solving strategy. I may have to make variations on it, but I just have to find the one essential problem-solving strategy. Because of this, how you formulate a problem, how you set it up to try and apply your strategy, how you represent the initial state, the goal state, the operators, the path constraints, all of that is trivial. That's not important, because if all problems are essentially the same, the formulation does no real work; all the work lies in finding the one essential problem-solving strategy. In both of those assumptions, they were in fact being driven by the psychological heuristic of essentialism. Essentialism isn't a bad thing, at least when we're talking about it as a cognitive heuristic. It shouldn't be treated algorithmically, but we shouldn't pretend that we can do without it. If Newell and Simon were right about this, then of course these aren't problematic assumptions, but they're actually wrong about it. Many people have converged on this at different times and using different terms, but there are fundamentally different kinds of problems. There are several ways in which problems differ in kind; I just want to talk about a central one that's really important to your day-to-day life.
This is the distinction between well-defined problems and ill-defined problems. In a well-defined problem, I have a good, meaning an effective, guiding representation of the initial state, the goal state, and the operators, so that I can solve my problem. There is a relationship between something being well-defined and being algorithmic; they are not identical, but there is a relationship. I take it that for many of you, that multiplication problem I gave earlier should be a well-defined problem. You can tell me your initial state: this is a multiplication problem. That gives you useful guiding information; you know a lot of things by knowing your initial state. You know what the goal state should look like: this should be a number when I'm done, and this number should be bigger than the two numbers I started with. The most beautiful picture of all time of a platypus does not count as an answer. As for the operations, you know that singing and dancing are irrelevant here. A lot of your education was getting you to practice making whole sets of problems well-defined. Part of what psychotechnologies like literacy, numeracy, and mathematics do is make problems well-defined for us. Because of that power, and because of their prevalence in our education, we tend to get blinded, and we tend to think that that's what most problems are like. That means we don't pay attention to how we formulate the problem, because the problem is well-formulated for us, precisely because it's a well-defined problem. But most of your problems are ill-defined problems. In most of your problems, you don't know what the relevant information about the initial state is. You don't know what the relevant information about the goal state is. You don't know what the relevant operators are. You don't even know what the relevant path constraints are. So you're sitting in lecture, perhaps at the university, and you've got this problem: take good notes. Okay? What's the initial state? Well, I don't have good notes. And? Oh, well, yeah, okay, okay. So what should I do? And all you'll do is give me synonyms for relevance. Right, I should pay attention to the relevant information, the crucial information, the important information. And how do you do that? Oh, well, you know, it's obvious to me, or it stands out to me. Great. But how? How could a machine do that? What are the operations? Oh, I write stuff down. Do you? Do I just write stuff down? Why? Like, I draw, I make arrows. Do I write everything down? Well, no, I don't write everything down, and I don't just… What are the operations? Does that mean everybody's notes will look the same? No, when I do this in class, everybody's notes look remarkably different. So what are the operations? And what does the goal state look like? Well, good notes. What are the properties of good notes? Well, they're useful. Why are they useful? Well, because… oh, right. Because they contain the relevant information connected in the relevant way that makes sense to me, so I can use it to… yeah, right. I get it. What's actually missing in an ill-defined problem is how to formulate the problem: how to zero in on the relevant information and thereby constrain the problem so you can solve it. So what's missing, and what's needed to deal with your ill-defined problems and turn them into something like well-defined problems for you, is good problem formulation, which involves again this process of zeroing in on relevant information: relevance realization.
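One way to see the contrast is to lay the two kinds of problems side by side in the earlier formalization. The layout below is just my illustration; the empty slots in the ill-defined problem are the ones that problem formulation, relevance realization, still has to fill in.

```python
# Sketch: the same four slots, filled in vs. missing.
# The layout is illustrative, not a formal theory of ill-definedness.

well_defined = {                   # "What is 33 x 4?"
    "initial_state": (33, 4),
    "goal_test": lambda n: n == 33 * 4,   # a number, bigger than both inputs
    "operators": ["multiply digits", "carry", "add partial products"],
    "path_constraints": ["use only the rules of arithmetic"],
}

ill_defined = {                    # "Take good notes."
    "initial_state": None,         # what counts as 'not having good notes'?
    "goal_test": None,             # 'useful notes' -- by what criterion?
    "operators": None,             # write what down? draw which arrows?
    "path_constraints": None,      # keep up with the lecture? stay legible?
}

# Problem formulation is the work of turning the second dict into
# something like the first: zeroing in on the relevant information.
```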
And you see, if they had noted this, if they had noticed that this bias made them trivialize formulation, they would have realized that problems aren't all essentially the same, and they would have realized the important work being done by problem formulation. And that would have mattered, because it would have given them another way of dealing with the issue of combinatorial explosion. Let me show you. So we already see that the relevance realization that's at work in problem formulation is crucial for dealing with real-world problems. Taking good notes: that's an ill-defined problem. Following a conversation: that's an ill-defined problem. Well, I should say things. What things? Well, oh, the relevant… When? Well, oh, when it's appropriate. How often? Well… sorry. Telling a joke. Going on a successful first date. All of these are ill-defined problems. Most of your real-world problems are ill-defined problems. So you need the relevance realization within good problem formulation to help you deal with most real-world problems. Already problem formulation is crucial. But here's something that Newell and Simon could have used. And in fact, Simon comes back and realizes it later in an experiment he does with Kaplan in 1990. And I want to show you this experiment because I want to show you precisely the power of problem formulation with respect to constraining the problem space and avoiding combinatorial explosion. I need to be able to deal with ill-defined problems to be genuinely intelligent. I also, as we've already seen, have to be able to avoid combinatorial explosion. That has something to do with relevance realization, and that has a lot to do, as we've already seen, with problem formulation. Let me give you the problem that they used in the experiment. This is called the mutilated chessboard example. There are eight columns and eight rows, so we know there are 64 squares. Now, this is supposed to be a square; I didn't draw it perfectly square, but pretend it is. If I have a domino, it covers exactly two squares, whether I put it horizontally or vertically. How many dominoes do I need to cover the chessboard? Well, that's easy. Two goes into 64: I need 32. 32 dominoes will cover this without overhang or overlap. Now I'm going to mutilate the chessboard. I'm going to remove this square and this square. Okay, so how many squares are left? Right: 62. There are 62 squares left. So I've now mutilated the chessboard. Here's the problem: can I cover this with 31 dominoes, without overhang or overlap? And you have to be able to prove, deductively demonstrate, that your answer is correct. Now, many people find this a hard problem. They find it a hard problem, perhaps you're doing this now, because they formulate it as a covering problem. They're trying to imagine a chessboard and possible configurations of dominoes on the board. So they adopt a covering formulation of the problem, a covering strategy, and they try to imagine it. That strategy is combinatorially explosive. Famously, one of the participants in one of the experiments, somebody trained in mathematics, was doing this topographical calculation; they worked on it for something like 16 to 18 hours, filled 81 pages of a Hilroy notebook, and didn't come up with a solution. Why? Because if you formulate this as a covering strategy, you hit combinatorial explosion.
The problem space explodes and you can't move through it. And that's what happened to that participant. Not because they lacked logical or mathematical abilities; in fact, it was precisely because of their logical and mathematical abilities that they came to grief. Now, you should know by now that I am not advocating romanticism: oh, just give up logic and rationality. That's ridiculous. You've seen why I'm critical of that as well. But what I'm trying to show you, again, is that you cannot be comprehensively algorithmic. Okay, so if you formulate this as a covering strategy, you can't solve it. Let's reformulate it. You can't quite see this on my diagram, but you'll be able to see it clearly in the panel that comes up: the two removed squares are always the same color on a chessboard. In fact, in the diagram used in the actual experiment, that's clearly visible. These squares are always the same color. You say, so what? Right, that's the point. You can see them, but they're not salient to you in a way that makes a solution obvious to you. They're there, but they're not standing out to you in a way that makes a solution obvious to you. But let's try this. If I put a domino on the board, whether I put it horizontally or vertically, I will always cover a black square and a white square. Always. There is no way of putting it on the board that will not cover a black and a white square. So in order to cover the board with dominoes, I need an equal number of black and white squares. I must have an equal number of black and white squares. That must be the case. But the removed squares are the same color. Is there now an equal number of black and white squares? No. I must have an equal number of black and white squares, and I know for sure, because the removed squares are the same color, that I do not have an equal number of black and white squares. Therefore, I can prove to you that it is impossible to cover the board with the 31 dominoes. I go from formulating this problem with a covering strategy, which is combinatorially explosive, to using a parity strategy, in which the fact that they are the same color is salient to me, such that now a solution is obvious. Now it is obvious that it is impossible. I go from not being able to solve the problem, because it is combinatorially explosive, to a search space that collapses, and I solve the problem. This is why the phenomena we talked about when we discussed flow and different aspects of higher states of consciousness are so relevant. This capacity to come up with good problem formulation, problem formulation that turns ill-defined problems into well-defined problems for you, problem formulation that takes you from a strategy that is self-defeating because of combinatorial explosion to a formulation that allows you to solve your problem: that's insight. That's insight. That's why the title of this experiment's paper is In Search of Insight. That's exactly what insight is. It is the process by which bad problem formulation is converted into good problem formulation. That's why insight, in addition to logic, is central to rationality. In addition to any logical techniques that improve my inference, I have to have other psychotechnologies that improve my capacity for insight.
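Here is the parity argument written out as a short check. The coloring convention, (row + col) % 2, is the standard chessboard one; reading the mutilation as removing two diagonally opposite corners (which are always the same color) is my assumption about the setup described above.

```python
# The parity formulation of the mutilated chessboard, as a short check.

def color_counts(removed):
    """Count the squares of each color left after removing the given squares."""
    counts = {0: 0, 1: 0}                     # 0 and 1: the two chessboard colors
    for row in range(8):
        for col in range(8):
            if (row, col) not in removed:
                counts[(row + col) % 2] += 1
    return counts

# Remove two diagonally opposite corners -- they are the same color.
removed = {(0, 0), (7, 7)}
counts = color_counts(removed)
print(counts)                                 # {0: 30, 1: 32}

# Every domino covers one square of each color, so a full cover needs
# equal counts. They are unequal, so 31 dominoes cannot cover the board:
# the combinatorially explosive covering search collapses into one check.
assert counts[0] != counts[1]
```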
We've already seen that that might have to do with things like mindfulness, because of mindfulness's capacity to give you the ability to restructure your salience landscape. We're starting to see how problem formulation and relevance realization are actually central to what it is for you to be a real-world problem solver, avoiding combinatorial explosion and dealing with ill-definedness. We're going to continue this next time, as we continue to investigate the role of relevance realization in intelligence and in related intelligent behaviors like categorization, action, and communication. Thank you very much for your time and attention.