https://youtubetranscript.com/?v=IwBw1BFljd8
So I’ve been setting up the debate for you between these two positions that pick up on the two primary metaphors of what thinking is: the debate between the search inference framework, where thinking is like moving through space, and the Gestalt framework, where thinking is like seeing, realizing things in attention and in perception. And as we’ve been doing that, we’ve progressively been getting clearer about how we’re going to operationalize thinking in terms of problem solving, and about what’s at the core of problem solving, which is problem formulation, because problem formulation deals with combinatorial explosion and ill-definedness. And then we started to take a look at central issues concerning problem formulation. The question between the two positions is whether or not the problem formulation process is itself a computational process, a heuristic process. The search inference framework people are going to say yes it is, and the Gestalt people are going to say ultimately no, it is not a computational process; that those problem formulation processes are really not computational processing at all. And we took a look at how that might be the case. We took a look at the search inference framework’s main candidate for a heuristic, the means-end heuristic, and I showed you how its functionality was dependent on, first of all, the fact that problem formulation has turned an ill-defined situation into a well-defined situation, and that aspects of the heuristic itself depend on problem formulation in order to prevent them from being homuncular in nature. Once we had set that all up, we went into the debate. I filled you in on the other side, the Gestalt framework.
We looked at the work of Köhler and Duncker and Wertheimer, and we saw a progressive working out of some notions of insight as a restructuring, and that this restructuring has something to do with transfer; and we saw that while there were a lot of important observations and some good experimental results, the Gestalt framework seems to lack any kind of coherent, unifying theoretical picture. Nevertheless, the Gestalt framework did present a challenge to the search inference framework, a challenge that was taken up by Weisberg and Alba in 1981, in an experiment that claimed to demonstrate (and I’m very clear on that: they seem to claim rather than just suggest) that there is no such thing as fixation; and since there’s no such thing as fixation, and since the Gestalt tradition defined insight as the overcoming of fixation, there’s no such thing as insight. We then took a look first at theoretical difficulties with that argument. After that, we took a look at difficulties concerning Weisberg and Alba’s alternative, which was compartmentalization, and arguments that the compartments themselves would require solving a lot of ill-defined problems, requiring a lot of insight, et cetera, and that it is therefore circular to explain insight problem solving in terms of the use of compartments. However, I did point out that the connection between searching through the search space and searching through organized memory that is being emphasized by Weisberg and Alba is an important point we’re going to keep coming back to. We then took a look at Ellen’s point that it’s highly unlikely that the answer to a problem is going to be found in an integrated fashion somewhere in memory. It’s much more plausible that different pieces from different compartments in different areas that are non-contiguous have to be properly brought together, properly restructured to fit together, such that they apply to the problem at hand, et cetera.
So that whole aspect was not properly addressed. We then took a look at the work of Dominowski, who pointed out that Weisberg and Alba seem to be confusing necessity and sufficiency, and that this seems to be driven by an assumption that there’s a single difficulty that has to be overcome in order to solve, for example, the 9-dot problem. Instead, it’s probably the case that there are many different interacting constraints or issues that have to be dealt with simultaneously, or in a dynamically integrated fashion, in order for the problem to be solved. We’ll come back to look at that specifically in later work by Kershaw and Ohlsson. Finally, we took a look at the fact that Weisberg and Alba’s clue, think outside the box, may not be the right type of clue, because it may be propositional in nature, whereas the processing change that is needed is procedural in nature. It may not be inference about propositions; it may in fact be attention that is altering what is salient, kind of like what the Gestaltists have been arguing, and that might be the relevant thing that needs to happen in order for insight to occur. That is why the procedural cue by Castleman and Meier was facilitative for solving the 9-dot problem, in contrast to Weisberg and Alba’s failed proposal. Any questions about that before we move on? We talked along the way about things like graphical suddenness versus phenomenological suddenness, and these issues will come back, but that’s the core argument so far. Anything else that needs to be addressed? All right, so now I want to turn to the experimental responses to Weisberg and Alba, because what I’m trying to again show you, what I’m trying to model, is something I’ve been very clear and explicit about: a particular philosophy of science, a science involving this ongoing integration between theoretical debate and experimental competition. So now we’ve done the theoretical debate side, and we’ll see how it interacts with the experimental competition.
So that competition begins primarily with the work of Janet Metcalfe, who later went on to become, and I don’t know if she’s still working now or has retired, a very pivotal figure in the emerging field of work on metacognition within cognitive psychology. It’s plausible that this is one of the areas where metacognition got its initial big boost. So for Janet Metcalfe, we’re going to take a look at three experiments. Now, the thing you have to remember about the middle experiment is that it’s like the middle book in a trilogy. There are exceptions to this rule. Most people think that The Empire Strikes Back was the best of the three movies. I don’t include the prequels, which were punishments for humanity’s sins that George Lucas inflicted on us. But generally, the middle book of a trilogy is unsatisfying because many things are unresolved. So when I’m teaching you the second experiment, it will feel unresolved to you. And there’s a reason why it feels unresolved to you: because it is unresolved. The experiment comes to certain clues, the experiment has, of course, its results, but the theoretical issue is not fully resolved in the second experiment. It is only when we get to the third experiment that Metcalfe is able to bring it to a more satisfying theoretical place. So please remember, as I’m doing the second experiment, it will be slightly dissatisfying. And then we’ll get into the third experiment, and then we can take it as a summation. Now, of course, along the way, you’re free to ask clarification questions and to make challenge questions, but I just wanted to forewarn you about how the narrative is going to unfold. Let’s take a look at the first experiment, which is 1986. 1986a, because we’re going to talk about two separate experiments at different times within 1986. So Metcalfe, 1986a.
This experiment, like I suggested to you, I think is where her career in metacognition really started to take off, because she was going to make use of a metacognitive probe in order to study insight. So please look at the date. We don’t have all of the sophisticated machinery, fMRI and dense EEG and all that. We didn’t have that in 1986. So probing cognitive processes required quite a bit of experimental cleverness. So what Metcalfe uses is a cognitive probe, ultimately a metacognitive probe. And what a probe is, right, is an independently studied and replicated and confirmed phenomenon that you can be reliably confident is operating in a certain manner. So here’s one that you all know, because it’s one of the things you have on your membership card when you go to the psychology department. I don’t know if the philosophy department even has cards; maybe the cards don’t exist. But you go to the psychology department parties, and one of the things you have to do is you have to talk about p equals 0.05. One of the other things you have to be able to talk about, at the drop of a hat, is the Stroop effect, because it’s the most studied phenomenon in psychology. I kid you not. This has actually been investigated empirically. MacLeod in the 90s did a review of all the Stroop effect studies. In the 90s it was already into, I think, about 500. It’s been ongoing since. My favorite example is a study entitled the following: the effect of lunch on the Stroop effect. Apparently if you have lunch, the Stroop effect isn’t as bad. I don’t know if that tells you much about the Stroop effect. I think that’s just a general property of lunch. But it’s a very well studied phenomenon. Okay, so you know what the Stroop effect is, and because you know how it operates, this is how it’s used: it’s used as a probe.
What you do is get it to interact with other processes, see if other processes affect it, et cetera. And then that will tell you something about those processes. So for example, because the Stroop effect is such a reliable probe of the automaticity of processing, Amir Raz was able to use the Stroop effect as a probe to recently provide some of the first clear evidence for the objective existence of hypnosis. Okay, how did he do that? He’s a Canadian researcher, by the way. What he did was hypnotize people and tell them, when they were looking through the words, to not pay attention to the meaning of the words but only to the color. And you can’t do that. You can’t placebo that, because of the automaticity of the Stroop effect. And what he was able to show reliably, replicated many times, is that if people are properly hypnotized, they can significantly reduce the Stroop effect. There you go. So do you understand how a cognitive probe works? You go, wow, hypnosis must be real because it can really change the Stroop effect, and the Stroop effect is a real phenomenon. So he gets to become famous and eventually he’ll live in France. Okay, so she was doing the same thing: like Amir Raz uses the Stroop effect to study hypnosis, she was using feeling of knowing to study insight. Okay, so feeling of knowing. And even though we have this acronym, it’s always pronounced feeling of knowing, for obvious reasons. You don’t pronounce the acronym. Okay? It’s always F-O-K, or feeling of knowing. Now here’s the basic idea. First of all, what is feeling of knowing? Feeling of knowing is: if I ask you something and you can’t currently answer me, you will often get a feeling that you nevertheless know it. How many of you know what I’m talking about? Okay. So there’s quite a bit of research done on that; there had been quite a bit of research done by the time Metcalfe was doing this experiment. And that’s actually an accurate feeling you have.
Your feeling of knowing is quite a good predictor of the fact that you will later be able to produce the needed information. So it’s not like a lot of the feelings we get that don’t track anything. Many of you know that your confidence in your memory is not a measure of how accurate it is; it’s a measure of how meaningful it is. So you say, I know this, that’s the person who shot Agnes, I swear. Turns out that’s not the person, because of the reconstructive nature of memory, blah, blah, blah. You know all this. But unlike your confidence, which tracks how meaningful the information is to you rather than its accuracy, feeling of knowing is an accurate predictor. It’s an accurate predictor that you will be able to recall, right? And it’s reliable, it’s robust. So that’s one of the feelings you have good reason to trust, by the way. You should not trust your confidence in your memory, because it is not a tracker of accuracy. But you should trust your feeling of knowing, because we’ve got good, reliable, replicated evidence that it is predictive of your ability to actually recall the information. Yes? Could you provide an example of that? I’m still a little unclear on what you’re, or are you not talking about the tip of the tongue? Tip of the tongue is one instance, but not all instances in which you have a feeling of knowing are tip of the tongue phenomena. An example that I made a joke about, when I talked about you preparing for a test, is that you will have studied something, you come in, you look at the material, you go, I know that, I know that I know that, but your brain goes, not right now. Has that happened to you? Yes? That’s an example. It’s not so much tip of the tongue; it’s like, I know this, I know that I know this. But that’s not the same as being able to actually produce it. Is that okay? Yeah. All right. So, here was the idea. Yes?
In terms of confidence in memory tracking meaningfulness, that’s been documented, but what if your feeling of confidence in memory is related to your feeling of knowing? It could be, but it doesn’t seem to be that that’s what it’s generally tracking. So what people’s experience is. I mean, you could train yourself with what you’re saying. No, I’m saying, what if in one instance, not generally, but, I mean, you can say that you’re confident that the lecturer said X, but at the same time it’s because you have a feeling of knowing the semantic information that was provided. Sure. So, generally, I mean, sorry, generally that’s not the case, but I can’t say to you that it’s never the case, which is what you’re asking me. But in the experiments in which people have been tested for confidence, people will have very high confidence ratings where they can’t have a feeling of knowing, because they actually will never be able to recall the item. What you do is, for example, the classic one is you give people a bunch of dot patterns. That’s one and that’s one and that’s one, right? And then what you do is you show them a dot pattern that they have never been presented with, one that represents the mathematical average of all the dot patterns that they have seen. They will be highly confident that they have seen it before, even though they never did. So they have not learned it, but they have a high confidence that they have seen it before, because confidence is tracking how much previous information can be meaningfully integrated to make future predictions. Now, I can’t do the opposite, which is what you want. Can I deny that in some instances the confidence might be deriving from feeling of knowing? No, I can’t deny that. What hangs on that? Is there an important point, or did you just want to make that point? You’re free to make the point. I just want to make sure I’m picking up on any implications.
I just was thinking that sometimes you can be confident that you know something or that you remember something depending on the semantic versus episodic nature of the memory. There could be. I mean, feeling of knowing seems to be not the same thing as your confidence in the accuracy of what you have remembered. Okay. Okay. So, here’s Metcalfe’s argument. She’s got this probe, and the feeling of knowing seems to be predictive of a successful memory search. She’s got this reliable predictor of successful memory search. So, let’s try and track her reasoning. FOK successfully predicts that you’re going to have a successful memory search. Now, Weisberg and Alba claim that that’s all insight is, a successful memory search. So, Weisberg and Alba are basically saying insight is successful memory search. Successful memory search, that’s what that stands for. And successful memory search is predicted by FOK. Are you following the argument so far? So, what should FOK be predictive of? Insight. So, that’s a straight prediction using the probe. If insight is what Weisberg and Alba say it is, just the ability to complete a successful memory search, and feeling of knowing is predictive of that, then feeling of knowing should be predictive of insight. So, what she did, right, is she gave people a standard memory task, feeling of knowing for a standard memory task, and then feeling of knowing for being given an insight problem. So, you put people in a situation where they have to remember something they currently can’t remember, you get them to give their feeling of knowing rating, and then you see how predictive that is of them successfully remembering. And you put them into an insight problem solving situation that they can’t currently solve, you can see the structural parallels here, and then you get them to give a feeling of knowing rating: will they be able to come up with an answer, do they know the answer?
Now, what she found, yes, Gita? What’s an insight problem? An insight problem is a problem that requires that you in some sense change the problem formulation in order to solve the problem. Now, part of what we’re doing, Gita, is we’re trying to work out more and more carefully what that is going to mean, because one of my criticisms, of course, is that the Gestaltists have talked about this in relatively vague terms. Now, we have a couple of things we made progress on last time, necessary but not sufficient, I think, namely that there are two things an insight problem does for you, and they could be together or they could be separate. One of the most important is that it changes a combinatorially explosive problem formulation into a non-combinatorially explosive problem formulation, and it might also, or alternatively, be changing an ill-defined problem into a well-defined problem. Is that okay? All right, so what did she find? Well, first of all, she replicates the cognitive probe, and she finds that there was a high correspondence for memory. So, again, feeling of knowing strongly predicts that people will successfully find in memory the needed information. The result for the insight problems was low; in fact, it was not statistically different from zero for insight problems. Now that’s, right, that’s a pretty powerful disconfirmation of Weisberg and Alba’s specific thesis that insight problem solving is just a successful memory search, which of course is why this experiment was set up. Yes? Can you repeat one sentence that you just said? There’s a high correspondence for memory retrieval, so feeling of knowing is a strong predictor that they will remember something that they currently can’t find in memory, but they will find it in memory. And how does that relate to Weisberg and Alba?
Because Weisberg and Alba, remember, proposed that all insight is, because insight doesn’t actually exist, insight is just a memory search through the compartments. So if feeling of knowing can’t predict success here, then there must be something else going on? Well, that’s the implication of the conclusion. Now, let’s be careful about what Metcalfe isn’t saying. She’s not saying that memory search is not involved in insight problem solving. That’s ridiculous, and that’s not an implication of the experiment. What she’s saying is that there must be something other than memory search involved in insight problem solving. That’s what she’s arguing. There has to be some other process at work in order to explain this very stark difference in the results. Please remember that, because one standard mistake people make, students have made in the past, is that they will take this as the claim that this showed that memory search is not involved in insight. That is not what it is showing. It’s showing that memory search, Weisberg and Alba’s alternative, is insufficient to explain what’s going on in insight. Some other kind of processing does that work. Yes? Does feeling of knowing only predict memory search? Yes. And it got replicated in this experiment, too. But does it predict something other than successful memory search? I don’t know. I don’t know what else it would predict. I mean, you could randomly try seeing if it predicts your political allegiance or something like that. But the only theoretical motivation right now, the way the construct is described, would be towards memory search. So she’s using feeling of knowing to say something about memory search. But what if feeling of knowing isn’t actually reliable? Okay, so what she’s relying on is the standard presentation. That’s already independently asserted and confirmed empirically in the literature, and she replicates it again in the experiment. So it’s legitimate for her to talk that way.
Secondly, it’s legitimate for her to talk that way because that’s exactly the proposal of Weisberg and Alba about what insight is. But your question is good. You see how, if you didn’t know the theoretical debate, you’d be missing a lot of what’s going on in this experiment. You’d still get it in one sense, but a lot of what’s going on in this experiment would pass you by. Okay? All right, so that was really interesting. And when this came out, it made a lot of noise, as it should have, because it seemed to be a stark contrast to the Weisberg and Alba 1981 paper. Okay. So Metcalfe followed this up. So this is now Metcalfe 1986b. I’m not going to write it on the board; just write Metcalfe again and put a b beside it. Okay. So Metcalfe was interested in the fact that feeling of knowing is very much like a snapshot. It’s a one-shot sort of take on what’s going on in cognition. So, if you’ll allow me to use a metaphor, she wanted to try and replace that snapshot with a film. She wanted a more online, moment-to-moment, or at least every-10-seconds measure of what was going on. So she wanted to turn the probe from a one-shot probe into a continuous probe. Is that all right? Because she wanted to determine how this metacognitive awareness is interacting with problem solving. It’s obviously not a simple predictor of simple memory search. Okay. So again, what she did is she went into the search inference framework to try and see if she could come up with, from the search inference framework, a theoretical motivation for a construct that would allow her to do what she wanted to do. And what she was going to do is basically replace feeling of knowing with a feeling of warmth. Okay. So she took a look at the work, and we’ve already done this so I can remind you of it, she took a look at the work of Newell and Simon on GPS, the General Problem Solver.
And Simon, and we talked a bit about this last time, admitted that the means-end heuristic requires something that is telling you if you’re getting closer to or farther away from your goal. And, and this is a direct quote from the work he wrote on that, he said: examination of path produces clues of the warmer, colder variety. And of course what he’s alluding to there is the game warmer, colder, and it’s not at all clear to me how and why that’s a game, but you know what it’s like. You put a bunch of people together, you have to close your eyes, and people move around and you say, warmer, colder, warmer, colder, and then they eventually get to the object because they followed your warmer, colder instructions. Yay. Okay, so anyways, what Metcalfe theorized is that people, as they’re attempting to solve problems, are constantly, not just doing a one-shot feeling of knowing, they’re constantly generating a feeling of warmth: am I getting closer to or farther away from the goal? So, I think this is, again, very brilliant on Metcalfe’s part. She wants to reformulate this, and she’s in critical dialogue with the search inference framework. She doesn’t just draw it out of thin air; she goes to the search inference framework itself in order to generate the new theoretical construct. So she’s playing really rigorously, right? She’s playing by the rules really clearly, really well. Okay. So, what she then decided to do is to use this more ongoing processing probe while people were solving problems. So what she did is she gave the participants, who were all attempting to solve purported insight problems, these are problems that had generally been called insight problems. Now, we’re going to come back to that. Gina has already brought up that this is a problematic notion, and I want to foreshadow: is it really the case that something is intrinsically, on its own, an insight problem? Maybe that’s something we should reflect on.
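The warmer, colder signal Simon is describing can be sketched as a toy hill-climbing loop: the solver never sees the goal directly, it only checks whether a candidate move feels warmer (closer) than where it is now. This is purely an illustrative sketch, not Simon’s or Metcalfe’s actual model; the grid world and the city-block distance function are my own assumptions.

```python
# Toy "warmer/colder" search: the solver only gets feedback about
# whether a candidate step is warmer (closer to the goal) than its
# current position. A hypothetical grid world for illustration.

def warmer_colder_search(start, goal, max_steps=100):
    def distance(p, q):
        # City-block distance stands in for the warmth signal.
        return abs(p[0] - q[0]) + abs(p[1] - q[1])

    pos = start
    for _ in range(max_steps):
        if pos == goal:
            return pos
        # Candidate moves: one step in each of the four directions.
        moves = [(pos[0] + dx, pos[1] + dy)
                 for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1)]]
        # Take the first move that is "warmer" than the current spot.
        pos = next(m for m in moves if distance(m, goal) < distance(pos, goal))
    return pos

print(warmer_colder_search((0, 0), (3, 2)))  # reaches (3, 2)
```

Notice the design assumption: this loop only works when the warmth signal reliably tracks actual progress toward the goal, which is exactly what seems to break down in ill-defined insight problems.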
We’re going to come back to that point. The literature is going to catch up, sorry, we’re speaking from the omniscience of hindsight, the literature is going to catch up with that and realize that it’s an important issue. The person who is actually going to raise that question, because he keeps coming back, is Weisberg. So we’re going to follow that debate out. So right now I’m asking you: that’s a legitimate concern, put it on hold, because we’re working towards it. Because we’re only still in ’86; none of you were even born then. Okay. So what Metcalfe does is give people purported insight problems to solve, and then they have to provide a warmth rating every 10 seconds. Now remember what I said: this is the second experiment, and it won’t be satisfying. Okay. So, there were two findings. And be careful, because they sound like they’re contradictory, and they’re not quite contradictory, but give me a chance to explain why not. First of all, the first finding is that there’s an abrupt increase in warmth ratings right before a solution is found. So, if you’re looking at the ratings, they look like this, and then boom, just before the insight is found. Okay. Second, a strong feeling of warmth predicted failure to solve the problem rather than success. So, although you get this abrupt increase right before a solution, a strong overall feeling of warmth actually predicted failure to solve the problem rather than success. Now, what you have to remember is to distinguish rate from difference. This is sudden, graphically sudden. Here’s where we’re not trying to settle whether it’s graphically or phenomenologically sudden. It’s at least graphically sudden, which might be different from just an overall high feeling of warmth rating, right from the beginning. So, don’t confuse rate with difference. Those two findings are not in contradiction with each other.
In fact, the rate issue seems to be more predictive than the overall amount of warmth. So, Metcalfe said that this was a confusing result. It’s like, what’s going on here? This is confusing. Right. So, what she argued is that in these situations where there’s a very strong feeling of warmth overall, but it doesn’t predict success, she speculated that people are satisficing. Okay. So, this is a good opportunity to teach a concept we’re going to need in the course as well. Okay. Satisficing. This is a notion drawn from, again, Newell and Simon. Okay. What do you do in satisficing? Satisficing is, let’s say you want to reach a goal, right? Getting to that goal is very hard or difficult. What you do in satisficing is you settle for a goal that’s somewhat similar to the actual goal that you wanted, but that is easier to get to. Okay. That’s satisficing. You were going to go for this goal, but that’s really hard. So, you weaken your goal to this one, which is easier to achieve, and then you are satisfied with that lesser goal. And you’ll probably realize that this can explain a lot of your dating behavior. Yes? Could it also be that people who have this strong feeling of warmth all the way through are just formulating the problem wrong from the very beginning and are thinking that they’re doing it right? That’s what she’s saying satisficing is. Oh, okay. That’s exactly what they’re doing. The problem with an insight problem is that there’s a specific answer. So, if you formulate it this way and you move towards that, you think, oh, I’m going to get there, I’m going to get there, and then you don’t actually get there. But it’s not that the people are consciously saying, okay, I don’t know how to solve this problem, I’ll move towards this other solution. Okay. Now, some of them might be, some of them might be doing what you’re saying.
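Satisficing, in Simon’s sense, can be made concrete with a small sketch: instead of searching for the best option, you accept the first one that clears an aspiration threshold. The options, the scoring function, and the threshold here are invented purely for illustration.

```python
# Satisficing (Newell and Simon): accept the first option that is
# "good enough" rather than exhaustively searching for the optimum.

def satisfice(options, score, threshold):
    for option in options:
        if score(option) >= threshold:
            return option  # settle for the first acceptable option
    return None  # nothing cleared the aspiration level

candidates = [3, 7, 12, 25, 9]
best = max(candidates)                                # optimizing picks 25
good_enough = satisfice(candidates, lambda x: x, 10)  # satisficing stops at 12
print(best, good_enough)
```

The point of the contrast: the optimizer has to examine every candidate, while the satisficer stops as soon as its weakened goal is met, which is why a satisficer can feel warm, close to a goal, without being anywhere near the actual solution.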
And the point is, for her argument, it doesn’t really matter, because what she’s worried about is that there’s a confound here. Namely, she has not controlled for how people are approaching these problems. She hasn’t controlled for that variable. Yes? Are there people for whom both actually occur? They would? With, for the same, you mean across different problems? Not for the same problems. So, they would start off really high, and then they would realize that they are on the wrong track. Oh, so, let me make sure I’m drawing this right. So, they’re like this, and then they go down like this, and then it goes like that? Exactly. I don’t know if she reports that specifically or separately. I don’t believe she does. Why did you ask that? That’s intriguing. I just thought that maybe people would come to realize that, despite their feeling of warmth, they aren’t actually progressing in the right way, so they would report that they actually are getting colder just before the insight. That’s good, yeah. I honestly don’t remember. I haven’t looked at the primary data in a long time. So, that might be something to check; I don’t believe it’s in the discussion or any of the write-up of the experiment. Yes? Wouldn’t the graph that you just drew kind of be like the nine-dot problem? Because we talked about that problem in the last lecture. We talked about how it appears easy, so we think we can solve it, but then later on we realize that it’s impossible. Sure. It might be. In fact, there’s sort of an a priori argument there that all the insight problems might have some initial high value. I guess the question is about its duration and its impact. So, that’s a good question. I mean, that’s a good question that should actually, I think, be empirically investigated, even using this sort of design paradigm. Yes?
Wouldn’t the level of, like, warmth just be correlated to whether the person thinks they have an answer? If they think they know the answer, then they’ll get warm. If they don’t, they’ll get cold. That’s right. That’s what it’s measuring. So, I think what he was suggesting, what’s your name? Q. Q? I think what he was suggesting is, like, in a lot of insight problems, people initially think they know the answer, then they realize they don’t. And then if they solve it, they get this, like, idea. Is that fair? Yeah. Is that correct? I think that’s more so saying, not so much that example, but more so, like, it depends more so on the subject. Because whether, like, the subject believes that they’re knowledgeable of that would determine the actual graph itself. Like, if you kind of see my theory. I’m not quite seeing, I mean, because these people, I mean, this is not tracking. Yeah, because that doesn’t actually track that, like, whether the answer is right, right? It’s like, that basically tracks, like, their belief that the answer is right. Right. So, depending on the subject believes that their answer is right, whether it’s right or not, that would show the graph more so than actual, like, accuracy of that. Sure. But the problem is, these don’t, this graph doesn’t predict success, and this does. Okay, so that one actually does predict, like, a lot of high-numbered success. Yeah, yeah. So, if it was just what you were saying, there wouldn’t be a difference in the success of the two, right? Absolutely. But what she wanted to know is why, like, turn it around, not to twist your words or anything, but both of these are sort of getting strong ratings towards the end. Why are these people largely inaccurate and these people are largely accurate? Okay. And so what she’s speculating is that because these people are satisfying, is Beck beckoning? Yeah. That they’re misrepresenting the problem to themselves. They’re misformulating the problem. Yeah, that makes sense. 
Okay, great. Good. See, theoretical debate mixed up with experimental competition. I love this stuff. Okay, so what was needed, what we’ve got to do, is try to control for this, and here’s where the experimental competition and the theoretical debate are going to keep hammering against each other. What we’ve got to do is control for this so that people will treat some problems as insight problems and only as insight problems. Okay? Now again, that goes back to Gina’s question: but what is an insight problem? Does it objectively exist? Now Metcalfe does what people often do, which is legitimate for a while: she did an experimental hack around that theoretical problem. And I think it was cool that she did it, because it needed to be done and it was an idea that keeps getting picked up. What she did was she ran a third experiment. So this is Metcalfe and Wiebe. And I don’t know how that name is pronounced. Wiebe? I’ve never heard it. 1987. Pardon me? I know a lot of people are called weebs. It’s written this way and it’s pronounced weeb? Let’s go with weeb then; we have at least some empirical evidence. Okay. So what she did was she got insight problems and then she got non-insight problems. She had people try to solve both and then give feeling-of-warmth ratings for both. So she’s trying to pull these two apart. How did she pick non-insight problems? What she did was she found well-defined problems that have been analyzed and modeled by programs using the means-end heuristic. She went in and asked: what kind of problems does the means-end heuristic do well at? Again, she’s using the search inference framework itself. These are the kinds of problems that the means-end heuristic does well at; I’m going to use them as my model for non-insight. Notice, you’re nodding. That’s correct. There’s an intuition there that’s good. 
This is not the same thing as giving a theoretical explication, though, right? Okay, so she goes in and she says: basically, this is how I’m going to operationalize non-insight problems. We’re going to find the problems that have traditionally been modeled that way, I’m going to tell people to solve them and give feeling-of-warmth ratings, and then I’m going to compare the two. Okay, so control group and comparison, you know this. This is how you do good science. And like I say, this is a good experimental hack. The intuition is clear. You understand why it makes sense. And it’s fair, because she’s taking the notion from the search inference framework itself, so she’s not just pulling it out of thin air. But ultimately she has not answered the question: yes, but what is an insight problem? Are there problems that are intrinsically insight problems? Okay, I’m not going to keep hammering on that point, because I’ve made it clear that it’s going to be addressed and that it’s important. Is that okay? All right, so what were the predictions? There were two predictions. So let’s just get clear on the names we’re going to talk about. This is an incremental curve: it goes up incrementally. All right. And this is an abrupt curve. Is that all right? Okay, I don’t think anything more is needed for that. So the first prediction was an incremental versus an abrupt warmth pattern for non-insight versus insight problems. The idea is: this is what people would give for non-insight, and this is what people would give for insight. Second prediction: you’d have accurate metacognition for non-insight problems, whereas you would not have that for insight problems. So now she’s going back and trying to gather what was also in the first, 1986, experiment. Okay, so the idea is this is going to be accurate and this inaccurate. This is not the same thing as whether or not people solve the problem, like in the second experiment. 
This is accurate about people’s predictions about how they’re going to do. All right, so take a look at this. Does this make sense? Okay. So basically what she’s saying is: feeling of warmth and feeling of knowing ultimately work for non-insight problems because they are probably mostly standard memory search. They’re not going to work for insight problems, and you’re going to get this sudden abrupt change precisely because something other than standard memory search is going on. And both of those predictions were well confirmed. Both of those predictions were well confirmed. Yes? Have they ever done this research with people who have previously been exposed to the specific insight problems that they’re testing? It depends. Different insight problems have different... I sort of have to be careful here because I don’t want to talk like it’s an intrinsic property of the problem. In general, what happens is there’s a wide variation in the ability to remember the solution to a problem. And there’s a complex answer to that, and it has to do with how insight problems and memory are interacting, because obviously the relationship between insight and memory is not a clear-cut, straightforward relationship. So there’s an answer coming. Is that okay? Yes? Why did you write accurate and inaccurate for... Accurate and inaccurate? Yeah. That is their ability to predict whether they’re going to solve the problem. Okay. All right. So now the first thing you want to happen is replication. And as you know, we’ve talked a lot about this: for a very long time we didn’t do it very much, which is why psychology is going through a replication crisis right now. But this is one of those things for which there was good replication. And it was replication that made a theoretical advance. Yes? I’m kind of confused, because isn’t the one that’s insight accurate for insight problems? 
Yeah, but if you ask people along the way, most of their predictions, they’ll say: I’m not going to be able to solve it, I’m not going to be able to solve it. You ask people here, they say: I’m going to be able to solve it. So you get a lot more accurate predictions here than you do here. Wait, but isn’t that inaccurate, because they’re wrong about being able to solve it? No, no, no. They solve the problem. Oh, they solve the non-insight problem. Yeah, these are non-insight problems that are solved. That’s what I meant. This is not the same graphs as in the second experiment. She’s pulling the two things apart. OK, I understand. Sorry for that. OK, so let’s get back to it. All right, yes? Just to be clear there, these graphs that they’re measuring are the metacognitive prediction of how close they are to the solution, right? Not the binary: will you solve it? But there’s also the metacognitive report: will you solve this problem or not? You’re giving ratings, but you’re also asked: are you going to solve this problem or not? So here’s a plausible thing: when it’s like this, there’s a thing in you that can easily extrapolate that you’re probably going to come to a solution, whereas there isn’t such a thing here. OK? Is that OK for now? Just a clarification about the exact answer. Yeah, yes? So in an insight problem, your feeling of warmth doesn’t work. Not until the very end. But that’s... Because when it suddenly goes up here, people start to say: oh, I am going to solve this, and they turn out to be right. Right. So it’s not like they just don’t have a feeling of warmth at the beginning? It’s very low, very minimal, yeah. So the insight problem doesn’t induce that feeling of: I have this in my brain? That’s right, yeah. OK, until then? Until then. That’s exactly what the graph is showing. OK. Was that just a clarification question? Yes. OK, Anna? 
It’s because solving a non-insight problem is more like solving a math problem, or it could be solving a math problem. You’re going through the proof in your head, and you’re like: OK, I’m three quarters of the way through the steps, and you can see that as you go along. That’s right. Whereas with the insight problem, you don’t know what’s going on, and you’re just sort of searching the wrong thing. That’s right. OK. But what we want to do is get beyond that. It’s fair enough for your question, but we don’t want our theory to just be: what happens is you’re searching in the dark. OK? And notice, again, that’s bound up with Gina’s question about what we actually mean when we call something an insight problem. And again, we’re working towards that, because we’re doing theoretical debate and experimental competition. It’s going to take us, like, seven lectures to get close to what’s going on in insight. And it should be hard, because if the argument so far is right, the process by which the restructuring of a problem formulation happens is the core of your problem-solving ability. Isn’t that kind of like a circular definition for an insight problem? Because you’re basically just saying an insight problem is a problem where you have insight. Well, that’s part of what we were saying when we noted we don’t have an independent way of talking about it. It’s not quite circular, in that what you’re saying is: there is something that people call insight problems that seems to be processed differently than other, regular problems. And now I want to show you the replication, because the replication adds to that sense of difference. OK? So there’s a sense in which, and we’ve talked about this, we already talked about it in the discussion about things like confidence: here comes the universal pejorative term that really isn’t well thought out but gets applied: this is all very subjective. OK. 
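As a side note for the write-up, the two curve shapes being contrasted here can be sketched numerically. This is a toy illustration with invented numbers, not Metcalfe and Wiebe’s data, and the "abruptness" index is an invented illustrative measure, not one from the paper.

```python
# Toy sketch (made-up numbers) of the two predicted warmth patterns:
# ratings collected at regular intervals while solving, on a 0-7 scale.
incremental = [0, 1, 1, 2, 3, 3, 4, 5, 6, 7]   # non-insight: steady climb
abrupt      = [1, 1, 1, 1, 1, 1, 1, 1, 1, 7]   # insight: flat and low, then a jump

def abruptness(curve):
    """Largest single-step rise as a fraction of the total rise.

    Near 1.0 means one step carried nearly all of the change (abrupt);
    near 1/len means the change was spread evenly (incremental)."""
    steps = [b - a for a, b in zip(curve, curve[1:])]
    return max(steps) / (curve[-1] - curve[0])

print(abruptness(abrupt))       # one step of 6 out of a total rise of 6
print(abruptness(incremental))  # largest step is 1 out of a total rise of 7
```

The point of the sketch is only that "incremental versus abrupt" is a measurable property of a rating curve, which is what makes the two predictions testable.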
So the thing is, people are reporting something, but does it actually reflect any kind of difference in processing? Well, again, you didn’t have the machinery back then to look at the brain very carefully. But what happened was a couple of interesting studies. So Jausovec did this in 1989, and versions of this in 1994 and 1995. Jausovec; I don’t know if I’m pronouncing that right, and I don’t know these researchers other than through this work. So basically three studies. And what they did was set up the same experiment, where you have people solving non-insight problems and insight problems, and you have people giving the feeling-of-warmth ratings, but then they were also measuring heart rate. Because heart rate typically reflects metabolic differences in processing, et cetera. It might reflect other things, but you’re going to look for how tightly the heart rate is tracking the feeling of warmth. You know what concordance means, right? If I were to put the two graphs one on top of the other, they would be closely superposed. If I have high concordance, that means there’s something actually physiological going on that corresponds to the feeling of warmth. That’s what they found. They found both things. They found that for the non-insight problems, heart rate went like this, and for the insight problems it went like that. And then they also found that these heart rate graphs had high concordance with the feeling-of-warmth graphs. So first of all, they replicated Metcalfe’s original experiment in three studies, which is good. And then secondly, in three studies, they were able to show that there’s something objective going on with feeling of warmth. 
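A minimal sketch of what "high concordance" means operationally, with entirely made-up numbers: overlay a heart-rate trace on a warmth trace and check how tightly they co-vary. Pearson correlation is used here as a simple stand-in for whatever concordance measure the studies actually used.

```python
import math

# Hypothetical, invented data: an insight-like (abrupt) warmth pattern
# and a heart-rate trace (beats per minute) from the same solving episode.
warmth     = [1, 1, 2, 2, 2, 3, 7, 7]
heart_rate = [62, 61, 63, 63, 64, 65, 78, 79]

def pearson(xs, ys):
    """Pearson correlation: near 1.0 means the two graphs, overlaid,
    rise and fall together (high concordance)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

print(round(pearson(warmth, heart_rate), 2))  # close to 1: the jump in warmth
                                              # coincides with the jump in heart rate
```

If the self-report were tracking nothing physiological, this correlation would hover near zero instead.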
There’s different processing occurring in some sense, because it’s showing up in a highly concordant physiological measure. Okay, so that impressed people. All these sets of things together: something’s going on. The Weisberg and Alba claim that there’s nothing going on in insight is now becoming implausible. No: something’s going on, it’s not standard memory search, it’s even making a physiological difference; this feeling-of-warmth difference is real in some objective sense. So what is the difference? What’s going on? We can’t just say there’s nothing different or special about insight. Okay, we’re going to take a look now at another set of experiments. And once again, as we go through the experiments, I’m also going to point out a looming theoretical flaw in them, just like the issue about the objectivity of an insight problem is a looming issue. By the way, although this hasn’t been brought up, a related issue, which Weisberg is also going to point out, is whether or not the class of things we call insight problems is homogeneous. So we’re going to come back to both of those. The next series of experiments have to do with the relationship between insight and verbal overshadowing. Now, here’s my theoretical forewarning, and we’re going to come back to this. Some of you know this from 250, because I make this argument more extensively in 250. Even though this term shows up in textbooks, I think verbal overshadowing is a horrible theoretical mongrel. And I’m going to come back and, I think, very seriously criticize it as a construct. Nevertheless, I want to go through the experiments because they show important things. Okay? In terms of the theoretical problems: I’m going to come back later and argue that the notion that the overshadowing is done by something verbal is misleading, and I’ll show you along the way why. 
Language itself is not the interference effect. And the notion of overshadowing implies a model of attention that is overly simplistic and does not capture what is needed to understand what’s going on in insight. So I’m going to go through the experiments. All you need to do right now is just put a pin in “verbal overshadowing”; we’re going to come back later and destroy it. The series of experiments I want to talk about have to do with work done by Schooler, who has been an advocate for verbal overshadowing. The first one is from Schooler, Ohlsson, and Brooks, from 1993. The publication dates and the dates when these studies were actually done don’t quite match up, which is a little bit weird. Okay, so the first one is Schooler, Ohlsson, and Brooks, 1993. What they were basically doing is making use of Metcalfe’s difference between people solving insight problems and non-insight problems. And then they’re going to eventually zero in on what they’re going to call an individual differences methodology, which I’ll also teach you about. What was happening in this study is they were comparing the effects of concurrent verbalization on solving insight problems and solving non-insight problems. First of all, what is concurrent verbalization? This is where the notion of verbal overshadowing comes from. Concurrent verbalization is: as you’re trying to solve a problem, you’re speaking aloud. You’re giving, like, a running protocol analysis, basically. What are you doing? What’s going on in your mind as you’re trying to solve the problem? That’s concurrent verbalization. 
Now they did a series of studies, and what they found was that concurrent verbalization markedly impairs insight problem solving. It really impairs it. It’s not just that in insight words fail you; sometimes words cause you to fail. Whereas concurrent verbalization had no effect on non-insight problem solving. So you seem to get a reliable difference in interference between insight problems, where you get a lot, and non-insight problems, where you get very little, from concurrent verbalization. Now two explanations come to mind for what’s going on here. One, which I think is what they were thinking, and it’s plausible; I’m not saying they’re saying something implausible. It’s that insight is largely an ineffable process, and therefore speaking might somehow interfere with insight. Because if I ask you how you had an insight, you’ll go: well, I didn’t have an insight, and then I did. That will be your explanation. We’re going to come back to this leaping in insight. Now, I think that’s sort of the level they were talking at. But there might be something else going on, and this is again to foreshadow what we’re going to talk about later, and it lines up with what we talked about when we looked at the criticisms of Weisberg and Alba. It’s the idea that insight may be much more of a procedural process than a propositional process. It might not be verbalization versus ineffability. It might be that insight is a very procedural process that can be interfered with by propositional processing. In fact, one of the best ways to interrupt people’s procedural abilities is to get them to go into propositional processing. So if you’re sparring with somebody, a good thing to do is to say: wow, that was a really good move, how did you do that? And then they’ll try to answer you and they fall apart. They can’t spar anymore. 
Or if I were to ask you to tell me what you’re doing with your hand while you’re writing your notes, in order to do it, it’ll... Yes? Is this the argument of Schooler, or... No. This is an argument that I’m making. And I’m making this argument to foreshadow, to give you some alternative ways of thinking about their interpretation of the results. They’re presenting this very much as just verbal overshadowing. There might be something else going on there. Another way of putting this to you is: is it language use per se that’s causing the interference, which is what they’re asserting, or is it that certain types of language use trigger certain kinds of processing, and those kinds of processing might be the interfering factor? There’s a difference between those two hypotheses, right? Okay. That’s all I need you to get right now. All right. So there is something going on here. They interpreted it in terms of verbal overshadowing. I’m suggesting there might be something else going on in terms of type of processing. We’ll come back to that later. Okay. So, Schooler and Melcher, 1995. You have all these protocol analyses you can do, right, because you have all of these reports that people have given. So Schooler and Melcher, in ’95, like I said, are presenting this in terms of the logic of the studies. They compared the non-insight and insight protocols. The non-insight protocols contained a significantly greater proportion of what they called arguments. What they mean by that are inferential strategies, like logic and means-end analysis. There is very little use of inferential strategies in the insight protocols. 
So what we’re seeing is lots of inferential strategies here, not very many here. Okay. Yes? What do you mean by inferential strategy? Using logic, using math, using the means-end heuristic, using anything where you’re checking the logical relations between propositions in order to try and advance toward the goal state. So one way you can try to move from an initial state to the goal state is by moving logically between propositional representations of states. But that might not be the only way you can move to your goal state. And of course it’s not primarily the way you move to your goal state when you’re doing something procedural. Is that okay for now? Yes. Great. I forget your name. Angley. Angley. Okay. Now, there was a reverse: something you found in insight protocols that you didn’t find very much in non-insight protocols. A much higher proportion of pauses and metacognitive reports in the insight protocols. So people are much more often pausing, like nothing’s happening, and they’re giving metacognitive reports. They’re reporting on what’s going on in their mind. So in insight there is a shift of attention onto the cognitive medium rather than just onto the problem itself. People are actually paying attention to the cognitive medium. They’re stepping back and looking at it rather than looking through it. So, if you’ll allow me an analogy that I’m going to come back to later when we talk about mindfulness. It is an analogy; it is an aid to understanding. And we’re going to come back to this idea later. Right now, I am looking through my glasses in both senses of the word: beyond them and by means of them. And I am specifically invoking both senses of the word. I’m looking through my glasses, because that’s actually what’s happening: I’m looking beyond them and I’m looking by means of them. Is that okay? Now sometimes... I’ve been wearing glasses since I was one. 
I’ve been wearing glasses a long time. And I can’t wear contacts, and I can’t do the laser surgery, because of specific sort of weird defects in my eyes. So glasses and I are going to be together. But sometimes what I have to do is this. Right now, my glasses are transparent; I’m looking through them. But sometimes I step back and I adjust my attentional focus, which in this case is also perceptual focus, a visual focus. I adjust my focus so I’m looking at my glasses rather than looking through them. Does that make sense? I’m not looking through them. I am stepping back and looking at them. And this allows me to pay attention to what’s actually going on in the medium. Right? And I like to do things like clean my glasses. And then of course when I put them back on, I can see differently through them than before. So what I’m suggesting to you is that’s what’s happening. People are engaging in something like a core process of mindfulness. They’re stepping back and looking at the cognitive processes going on in their mind rather than looking through those processes directly at the problem. Yes? How do they know that this is what’s happening? Pardon me? How do they know that they are doing this? Because they’re giving metacognitive reports. They’re saying what’s going on in their mind rather than talking about properties or features of the problem. Yes? So the experiment involves a prompt, right? And then it’s just a matter of the type of feedback you get? The only prompt is: tell me what you’re doing, from beginning to end. So there’s an initial prompt and then you’re just let run. And then what they’re doing is looking through all of these protocols, everything that people have said, and looking for what was present in one and absent in the other. So that’s the first difference, and we haven’t even determined yet if it’s predictive or not. 
What people are saying they’re doing and what they’re doing are not necessarily the same thing. Okay? Prima facie, what we see is that there’s a difference: lots of inferential processing going on in the non-insight problems, and there seems to be this transparency-to-opacity shift, this kind of metacognitive directing of attention at the cognitive medium rather than through it, in insight problems. A plausible, at least initially plausible, speculation, which we will then have to try and turn into theoretical argument and empirical evidence, is that people are stepping back and trying to look at how they’re formulating the problem rather than looking at the formulated problem. Okay? In insight, people step back and look at the problem formulation, how they’re formulating the problem, rather than at the formulated problem. Okay, so as I mentioned before, what people say they’re doing and what they’re doing are not the same thing. This is why introspectionism died as the main way of trying to do psychology, right before the behaviorists: because introspection is shitty. Sorry, it’s crappy. Introspection does not work well, because what people think they’re doing and what they’re actually doing can be very different. This is, by the way, why you should always question, though not ignore, any evidence that is produced by self-report questionnaire. Because all you’re getting access to is people’s introspectively available information, which can be completely other than what they’re doing or what’s actually generating their performance. Yes? Are those things type specific? What do you mean? Like, are there types of things for which introspection is generally more accurate than others? 
Well, types for which introspection is more accurate than other types, but also, if you ask someone who is missing a limb: what’s it like, how do you walk without a limb, and they introspect and they think and they write it down, I feel like that would be pretty accurate, because they live with that. Versus asking someone: how do you solve an insight problem? With lived experience you can know; it’s a feeling of knowing because you know the experience, versus something you just think. But does it work when your feeling of knowing doesn’t apply in that situation? Yeah, I don’t know if I have enough knowledge to answer your question. My concern is that generally, if things are introspectable, that means they’re being held in working memory. And since a lot of your processing a lot of the time is going on outside of working memory, even when you might be accurate to some degree, it’s still going to be quite misleading. I mean, there’s a sense in which I agree with you, though, because if what I want to do experiments on is people’s awareness of their conscious experience, then of course introspection is in fact the only mechanism I have for studying that. But, for example, asking you to introspect while you’re doing language processing turns out to be largely useless, because most of that processing, all that syntax and all that stuff, is going on largely unconsciously, even though you’re very familiar with language; you do it all day long. So sorry, that’s not a conclusive answer, but it’s an honest one. All right. So let’s move from what people say to finding out if what they say predicts their success. Because if what they’re saying they’re doing is actually tightly correlated with their success, then it is plausible that it is actually what they’re doing in order to produce their success. 
Okay, so what they did was check the degree to which problem elements, arguments (what I’m calling inferential processes), pauses, or metacognitive reports are predictive of success for both types of problems. For non-insight problems, the use of inferential processing was highly correlated with success; highly predictive of it. So when people say they’re solving it inferentially, they’re doing it inferentially, and that’s what’s actually getting them to the goal state. For insight problems, there was no correlation, zero, between the use of inferential processes and success. No correlation. In fact, and this is going to push us to the second experiment, there was very little in the protocol analysis that predicted success in insight problems at all. There’s a weak correlation between pausing and success in insight problems. That might be, like I say, when people are sort of trying to disengage from the problem, step back. Okay, so again, we seem to see good evidence that these computational processes are driving non-insight problem solving, and no good evidence that that’s what’s going on in insight problems. Now, one temptation here; I’m just going to briefly say what it is and then we’ll take a break. The temptation, if you’re in the search inference framework, is to say: aha, but it’s actually computational for both. It’s just that in insight, the computational processes are going by so fast that they escape verbalization and consciousness. This is known as the speeded reasoning view: for both sets of problems you’re using these computational processes, but for insight it’s happening so fast, so rapidly, that it can’t be brought into language and consciousness. Before we come back after the break and talk about the empirical response to that, first of all, think about it in terms of its theoretical plausibility. 
You need an independent theory as to why, when you go into insight problems, you do this speeded-up inferential processing, and why you don’t do it for non-insight problems. Secondly, it has not been the case that as we have produced machines that get faster and faster at unconscious inferential processing, they have become more insightful in their problem-solving abilities. And you are using such machines right now. So it’s an implausible hypothesis to begin with. And then, as we’re going to see, it’s going to receive empirical disconfirmation. It was a within-subject design, I believe, rather than a between-subject design, across the various studies. Because you have a within-subject design, you can check how people are doing on both the insight and the non-insight problems. So if both types of problems use the same processes, if both the non-insight problems and the insight problems are using the same inferential processes, then success on non-insight problems should be predictive of success on insight problems. If you’re using the same machinery and all you’re doing is speeding it up in insight, then there should be a predictive relationship between success with that machinery on non-insight problems and success using that machinery on insight problems. So what they did was look at whether, when people are solving these problems, their success on the non-insight problems correlates with their success on the insight problems. What’s the degree of the correlation between them? And the degree of the correlation between them was none. There’s no correlation. So it’s both an implausible thesis and it doesn’t receive empirical confirmation. So the question that emerges is: if inferential processes are not predictive of insight problem solving, what kinds of processes are? So this goes to work by Schooler, MacLeod, Brooks, and Melcher. 
But this is 1993. They used what they called an individual differences methodology. So let’s talk about a process A and a skill B. I should always forewarn you when I do that; it sounds like a hand grenade going off, which is unpleasant. Okay, so I’m going to examine some process A, and I’m seeing if it’s a component of skill B. The reasoning goes like this: if process A is involved in, is a causal factor in, skill B, then performance on tasks involving A should be predictive of performance on tasks requiring skill B. So you’re trying to figure out what the processes at work in this are, and I say: oh, I think this process is at work in this. If that’s the case, then performance on this should be predictive of how well you do on this. Does everybody understand the reasoning? Okay. So what they did was give a bank of tests, and we’re going to see people start repeatedly using this individual differences methodology. So we’re going to give a bank of tests to see which processes are predictive of insight problem solving, as an attempt to try and get clear about what the component processes within insight problem solving are. Another way of putting it: the individual differences methodology is a method of trying to analyze a process down into more basic processes. So they analyze the skill, and I’m using that word because the proposal is that insight is procedural in nature, into its component skills. Okay, so they consider three potential insight processes: restructuring, field independence, and unconscious search. I’ll tell you how they operationalized each one of these in a minute. 
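The individual-differences reasoning, that performance on an A-task should predict performance on a B-task if process A is a component of skill B, can be sketched with a toy computation. All the scores below are invented for illustration, and Pearson correlation stands in for whatever statistic the studies actually used.

```python
import math

# Invented subject scores: an A-task (e.g. a disembedding test) and a
# B-task (e.g. insight problems solved). Here B is constructed to depend
# on A plus a little noise, so the prediction should succeed.
a_task = [3, 5, 2, 8, 6, 7, 1, 9]
b_task = [7, 9, 4, 17, 12, 13, 3, 18]

def pearson(xs, ys):
    """Pearson correlation across subjects."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson(a_task, b_task)
print(r > 0.9)  # strong prediction: evidence that A is a component of B
```

The same computation run on the speeded-reasoning question, correlating non-insight success with insight success across subjects, is what came out near zero, which is why that view failed its empirical test.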
Okay, so field independence is what you’re exercising when you’re doing a disembedding task. How many of you have played, as a kid, or have younger children in your lives who have to do, Where’s Waldo? How many of you know at least what I’m talking about when I talk about Waldo? Okay, good. Why is it hard to find Waldo? It would not be a fun book if it were wide open tundra: there’s Waldo. Okay, why is it hard to find Waldo? There are lots of other people. There are lots of things around Waldo that seem to have some of Waldo’s features, but not all of Waldo’s features, so you’re potentially distracted; he’s embedded. And what you have to do is something so that Waldo stands out. The last one is unconscious search, and that’s the idea that somehow you’re connecting things that are not directly connected, unconsciously. Alright, so how did they operationalize these? Yes? Could you repeat what we said before field independence on the board? Restructuring, field independence, and unconscious search. You’re welcome. Yes? Sorry, how does field independence relate to Waldo again? Oh. When you’re trying to find Waldo, you’re doing a field independence task. You’re trying to make Waldo independent from his field. You’re trying to say, there’s Waldo, I can see him, and he’s no longer embedded in the field of all the people and distractors. I was just using it because it’s the most common version of a field independence task that we’ve decided to make a cultural thing. It’s one of the things that binds us together as a culture, things like Waldo and Star Wars and stuff like that. What we stand for. Okay, so they tested restructuring. This is really clever. They tested restructuring by giving people out-of-focus pictures.
So I give you an out-of-focus picture, and you do something really cool. People will do this: I look at it and I sort of go, oh, that’s Abraham Lincoln. Now, you can do even better versions of that, with more than just being out of focus. You can do really coarse pixelation of a picture, and then you look at it, and you sort of do this thing in your head, and you go, oh, that’s actually Lincoln. Now there are ones that seem to combine them that are really cool. You look at a picture and it’s just a bunch of colored splotches, purple and red and all kinds of things, and you stare at it for a while and all of a sudden it goes 3D, and it’s a 3D picture of a porpoise jumping out of the ocean. How many of you have done those? How many? Okay. How many of you have done those and you can’t get it to work? Okay, so we’re going to have to take you away. Okay. So it’s interesting when it works and when it doesn’t work, and what that’s predictive of and not predictive of. All right. So how did they do the field disembedding? They gave people variations on Where’s Waldo: an embedded figures test. You have all these lines, and then, oh, can you see the three-dimensional square? Oh, there it is. There. Okay. They did three things for the unconscious search. They did a remote associates task: I give you three words, and you have to find a word that all three of those are associated with. Those are called remote associates tasks, which have the wonderful acronym: a RAT task. There are games out on the market now where you can actually play this. There’s a game called Tri-Bond where you have to do this: they give you three things and you have to say what they all have in common. Okay.
Now, interestingly, how this has gotten taken up into the general literature is, I think, a little bit confusing, but we’ll come back to that. Then there’s category instance generation. So I give you a category: the category of gems. The first one is diamond. Quickly, give me two other category members. Gems. The first one is diamond. Give me two more. Shout some out. Come on. Emeralds, rubies, sapphires. Okay. Those are the premium ones. The ones that come up first are the ones that you should buy for your significant other. If they don’t come up very quickly, don’t buy that one. Somebody was like, topaz. Okay. And then the last one…
All right. So, yes. Sorry. All of those are under unconscious search. Yeah. All right, so let’s keep going. Perceptual restructuring turned out to be the single best predictor of insight performance. The better people are at perceptual restructuring, the better they are at solving insight problems. Now, calling it perceptual restructuring is probably a bit of a mistake, because a lot of problems can be posed to you in an abstract form and you can still solve them. So it might be attentional restructuring that we’re actually talking about. We may be measuring something more basic, like cognitive flexibility. So don’t lean too much on the perceptual part of this. But what’s going on here is that your ability to do this is a very good predictor of how you’ll do on insight problems. Your ability to do that weird attentional thing where you redistribute what’s salient, what’s foreground, what’s background, all that, seems to be predictive (well, you’ve got evidence to suggest it’s predictive) of your insight. Second best is the field independence task. Being able to find Waldo really well is actually predictive of how good you are at solving insight problems. Now, it’s not clear that that’s causal, in the sense that if you do Waldo all day long, you’ll get better at solving insight problems. That’s something we’ll have to ask later. Now, remote associates tasks were predictive of insight, but much less than field independence and restructuring. Here’s why I wanted to stop and flag this for a sec. This does predict insight; clearly, that’s what the evidence shows. But it’s a much weaker predictor than field disembedding or perceptual restructuring. Now, why is Vervaeke going on about this? Because many researchers use this as the test for insight problem solving, which is somewhat problematic.
Why are they using this one when the other two are more predictive of insight problem solving? Yes? Easier? Probably easier. Okay, and that’s a real thing, right? Ease of experimental design is a real thing in science. So I’m not ridiculing it, you understand? What I’m doing is setting a reminder for you, when you read this, that this isn’t the best. It’s not theoretically the best. It may be methodologically easier to use, but that doesn’t mean it is getting clearly at the central functions of insight. Okay? Just remember that. Okay, category generation, like I said, was not predictive of insight success, again seeming to confirm that searching through the compartments is not the primary thing you do in order to solve an insight problem. So yet another refutation, well, sorry, empirical disconfirmation of the Weisberg and Alba theory of how you do insight. Okay, so category generation, searching the category box, is not predictive of insight. Now what’s interesting is it goes the other way. The perceptual restructuring and embedded figures tests are not predictive of non-insight problem solving. Your ability to do perceptual restructuring, your ability to find Waldo, is not predictive of how well you will solve non-insight problems. Category generation, as well as other inferential processing measures such as MAT scores, which do not predict insight, do predict success on non-insight problems. So what you’re basically getting is that the processes that predict success on insight do not predict success on non-insight, and vice versa. You’re getting something close to a double dissociation. Okay, which is pretty good evidence (it’s not conclusive, but it’s pretty good evidence) that you’ve got different processes at work. In fact, Schooler and Melcher come to a conclusion that’s almost as strong as Weisberg and Alba’s.
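The double dissociation pattern can be stated as a simple predicate over four correlations. A toy sketch (my illustration; the correlation values and the 0.3 cutoff are invented, not the study’s numbers):

```python
# Toy sketch of a double dissociation: predictor X tracks outcome 1
# but not outcome 2, while predictor Y shows the reverse pattern.
# The correlation values and the cutoff below are invented.
THRESHOLD = 0.3  # arbitrary cutoff for "predictive" in this sketch

correlations = {
    # (predictor, outcome): r
    ("restructuring", "insight"): 0.45,
    ("restructuring", "non_insight"): 0.05,
    ("category_generation", "insight"): 0.02,
    ("category_generation", "non_insight"): 0.40,
}

def double_dissociation(corrs, x, y, o1, o2, cut=THRESHOLD):
    """True if x predicts o1 but not o2, while y predicts o2 but not o1."""
    return (corrs[(x, o1)] >= cut and corrs[(x, o2)] < cut and
            corrs[(y, o2)] >= cut and corrs[(y, o1)] < cut)

print(double_dissociation(correlations,
                          "restructuring", "category_generation",
                          "insight", "non_insight"))  # True for these toy values
```

The crossover is the point: neither predictor failing alone would show different processes, but the full reversed pattern is hard to explain with one shared mechanism.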
And it’s amazing how these two are direct opposites, but this is what they say: the two types of problems draw on qualitatively different processes. Now, I’m going to argue later that I think theoretically they’ve got it the wrong way around, but nevertheless you understand what they’re saying: the processing that’s going on for insight problems seems to be very different from the processing going on in non-insight problems. What I mean by the other way around is: I think we have different problem solving processes, and when we use one, we call it insight, and when we use the other, we call it non-insight. Okay. Alright. So we’ll come back again to the issue of verbal overshadowing when we take a look at some of Weisberg’s theoretical responses to this empirical work. Because again, science is driven both by empirical, experimental competition and theoretical debate, and I’m trying to show you how they weave back and forth through each other. Part of the reason I’m doing that, of course, is that part of my pedagogical responsibility is not only to teach you the material, but to teach you how to be a good scientist. Because in the end, the second thing is more important to you. Right now, the content stuff is more important to you because it means your mark. Your mark means your degree. Your degree means possibly your job. And your job means whether or not people will eat with you and reproduce with you. Right? I get that, and so that’s why I’m trying to teach you a lot of the relevant content. But down the road, a lot of this content is going to be overturned or found to be obsolescent or mistaken. This is what the history of science shows. But knowing how to do theoretical debate and experimental competition will always be valuable to you. That will always give you access to the pursuit of knowledge.
Okay, so another set of experiments used the individual differences methodology to try and pick up on this thing we talked about briefly before: this sense of leaping. I don’t know, I don’t know, I know! What happened in between? Nothing! So it’s interesting here. I like this leaping metaphor, because it’s trying to metaphorically integrate the search inference framework and the perceptual framework. Because you’re moving through space, but you’ve left the ground. It’s trying to capture, like, realization: you’re in one place and suddenly you’re in the other, without your having moved through the space in between. What they’re trying to pick up on is this gap in processing. Now, interestingly enough, the researchers are going to try and operationalize that, and theorize as to what’s going on in that gap, and propose some interesting beginnings of an alternative to the computational framework for trying to understand what’s going on. It’s a beginning. So, this work was done by… I forgot to bring my computer; I wanted to show you something. I’ll try to remember next time, when I do the review, and I’ll show you. Baker-Sennett, who I now think is back to being Baker, and Ceci, 1996. So they’re going to use an individual differences methodology to study what they call inductive leaping. Now, because I take induction to be an inferential process, I think the word they’re using there is incorrect. They shouldn’t be calling it inductive leaping, because that’s the whole point of what they’re trying to talk about. So I’m going to recommend, to avoid confusion, especially for those of you who have done some work in logic and know what induction and deduction are, et cetera, that we call what they’re talking about cognitive leaping rather than inductive leaping. Okay, so let’s talk about how they operationalize leaping, and then what’s the theory behind it. What’s the theoretical argumentation behind this?
So what they did was give people a series of cues. I forgot to bring them; I wanted to show you the direct visual imagery. But what I can show you initially is just a bunch of dots on the screen. You look at it and you go, I don’t know what that is. And then I fill in more dots. I still don’t know what it is. And then more dots. And then at some point you go, oh, that’s a sofa. Do you understand the task? It’s timed: every 10 seconds or so, more dots are added. And your instruction is the following: try to use as few cues as possible, that’s one way of putting it, or, as soon as you can, tell me what this is going to be a picture of. They also did it with word stems, but the pictures are, I think, less confounded. Does everybody understand the task? Now, a good cognitive leaper is somebody who uses fewer cues but reliably gets the answer. By the way, on your test put cognitive leaper, not leper, if the question comes up. It’s funny the first time, but by the fourth or fifth time, reading about cognitive lepers, it’s not funny, it’s weird and kind of distressing. Okay? All right, so here’s the proposal: people who are better at being cognitive leapers, who can use fewer cues but accurately come to the right answer, will be better at insight problem solving. And before we go on: they received good experimental confirmation for that. The better you are at leaping, the better you are at insight problem solving. Okay, so let’s stop here, and now I’m going beyond what they say in the experiment. I’m going to start with what’s directly implied by what they’re saying, and then I’m going to build beyond it in a way they might not have thought to talk about. I don’t think it’s inconsistent with what they’re talking about. So this is slowly progressing more and more into my voice, out of their voice. Now, what they’re initially talking about, it looks like, is your capacity for pattern completion. All right?
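One way to make the task operational in code: reward correct identifications, with more credit the fewer cues were needed. This scoring rule is my assumption for illustration, not Baker-Sennett and Ceci’s actual metric.

```python
# Sketch of scoring a "cognitive leaper" (assumed scoring rule, not
# the study's): a good leaper answers correctly while needing few cues.
def leaping_score(trials, max_cues=10):
    """trials: list of (cues_used, correct) pairs, one per picture.
    Returns mean efficiency in [0, 1]: credit only for correct answers,
    and more credit the fewer cues were needed."""
    scores = []
    for cues_used, correct in trials:
        efficiency = (max_cues - cues_used + 1) / max_cues if correct else 0.0
        scores.append(efficiency)
    return sum(scores) / len(scores)

good_leaper = [(3, True), (2, True), (4, True)]   # few cues, accurate
cautious    = [(9, True), (8, True), (9, True)]   # accurate, needs many cues
reckless    = [(2, False), (1, False), (3, False)]  # leaps early but wrong

print(leaping_score(good_leaper) > leaping_score(cautious) > leaping_score(reckless))  # True
```

The point of the third case: leaping is only valuable when it is reliable, which is why the operationalization pairs cue count with accuracy.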
And your capacity for pattern completion is a very interesting ability that you have. One thing you should know is, your ability to do this is one of the best tests we have for your general intelligence: Raven’s progressive matrices, right? Yes? Isn’t it the case that across the different types of intelligence tests, results are typically consistent, but not consistent enough to be generalized? Like if someone has ADHD or dyslexia or something, there are really specific problems, different types of problems, that they can’t… But these are comparative standards, right? So… So there are specific areas that are used in some types of problem-solving measures, but not in others. Well, I’m not quite sure I agree with the second thing you’re saying, but before I address that, let me address the first thing. Right, I mean, none of these measures are perfect, but there’s no measure in psychology that’s better than our measures for G in terms of its predictive capacity over your behavior. Nothing comes close to G’s ability to predict your academic success, your life success, your health success. It is our most powerful predictor. So, is it perfect? No, but it’s really strong. It’s really powerful. So it’s getting at something, and I think it should be treated that way. Secondly, the idea that there are different types of intelligences, if that’s what you’re suggesting: that hypothesis has not received good empirical confirmation. So the multiple intelligences ideas, et cetera. So if you’re not saying that, what is it other than that that you’re advocating for, Chloe? What I’m saying is, if you tested someone who had ADHD based on pattern completion, based on working memory, based on other types of problem-solving ability, they could get 99th percentile in one and 38th percentile in the other.
So it’s not an accurate measure even of problem-solving, let alone of different types of life skills. So because you’re not measuring… Obviously, if there are certain types of learning disabilities that can follow similar patterns, then there’s obviously a difference between that type of pattern completion problem-solving and another type of problem-solving that someone can do. There’s a difference between that. But, I mean, first of all, you’re talking about pathological cases, and the pathology might be exactly that: what’s breaking down is the fact that in general there is a strong positive manifold between those tests. How you do on one of these tests is strongly predictive of how you do on all the others, suggesting there’s a general ability underneath them. And once again, that general ability measure is a very powerful predictor across many domains. So the fact that in some individuals it might be breaking down is not evidence that that general ability doesn’t exist. It might be that what’s actually being affected is the generalizability of the ability, right? Kind of, but I’m not… Well, what is it you want to… You’re not saying that your IQ is unaffected by learning disabilities or things like that. That’s not what you’re saying. Well, I mean, that’s a different point that can or can’t be true depending on the types of tests that are used. But that’s not… I was just saying, if there’s a difference, then there’s obviously something that’s there in some problem-solving measures, but not in others. And if there’s a dissociation, so people can do really well in one and badly in the other, or really well in the other and badly in the first, then even within problem-solving there’s something that’s causing the abilities not to be associated. Is that because they’re different? No, that’s not good evidence. It’s good evidence that they’re not identical, but the claim was never that they were identical processes.
The subtests are all somehow tapping into an underlying core general ability. That’s what explains the consistent positive manifold, and the ability, again, of that manifold to predict across many different domains of life. So I’m not quite sure what the thesis is you’re criticizing. The idea that there’s a general ability behind it, the evidence for that is very strong. What you’re saying is, somehow, in certain individuals, it gets skewed so that the general ability goes here, but it doesn’t go there. Yeah, I agree with that. What follows from that? Well, you’re saying, like, would they be different so that you would know… I mean, it seems like you’re saying that they’re not, but I was just wondering, would it be different because you can get different results? Like, why would the general ability… If you’re saying it can skew one way or the other, then obviously there’s something that’s affecting that skew. I was just thinking, if there can be different results that are produced, then it’s not a correlation that always holds. Then there has to be a cause in there in some way, and if you’re saying it’s just because the general ability is skewed, then… Right, so here’s one way. What is also highly correlated with measures of general intelligence are measures of working memory capacity, and things like attention can alter how working memory is being used. So ADHD could skew working memory, which could then skew how the general ability is being implemented. That would be one way, for example. Did I understand your question? And again, there are independent measures. So independent of things like Raven’s progressive matrices, there are independent measures showing that working memory use is also highly predictive of your general intelligence abilities. And we know, of course, that attention alters how you’re using your working memory.
So if you had something that was affecting attention, it could skew working memory, which could then affect your general intelligence being implemented in one way rather than another. Is that okay for an answer? Okay, there was another question. Yes? I wrote in my notes, “initially measured capacity for pattern completion.” I was just wondering what that was referring to. So, before I got into the discussion with Chloe, what I was saying is that what Baker-Sennett and Ceci are measuring in the experiment, I would say, is plausibly pattern completion. And what I was saying is that one of our strongest tests for your intelligence, your ability to solve problems, is a pattern completion test: Raven’s progressive matrices. And then that’s how we got into the discussion about measures of general intelligence. Okay. Now, what’s interesting about pattern completion (and this is meant to be, again, a step towards a theoretical formulation) is that pattern completion abilities have traditionally been very difficult for standard computational machines to carry out. The reason for that is that pattern completion problems are often what are called multiple simultaneous constraint problems. Notice we should be thinking about this, because we already criticized the idea of a single constraint. Okay, so let me give you the classic example, and then let’s talk about how it was solved. So you can do things like this. You can read that, and what does it say? THE CAT. And here you read this figure as an H, and you read it here as an A, even though it’s the same figure. Okay. Now, this was actually a very difficult problem, because if you try to represent this inferentially, you can get into weird kinds of binds.
Because here’s what you’d have to say: in order to read the word, I have to read the individual letters, but in order to disambiguate the letters, I have to read the word, which means reading is impossible. Because until I read the word, the letters are ambiguous, but I can’t read the word until I can read the letters; therefore reading is impossible. What you’re doing right now is an illusion. Okay, now, what was the answer? What is the answer, by the way? I’m interested; how do we do this? What do you think? Yes, Gina? You’re simultaneously doing top-down and bottom-up processing. Right: you’re simultaneously doing top-down and bottom-up processing in a self-organizing fashion. Pattern recognition seems to involve having to deal with multiple simultaneous constraints by doing self-organizing processing that is working simultaneously top-down and bottom-up. I’m using a non-controversial example. Is that okay? Now, the machines that turned out to be very good at doing this, in contrast to standard computational machines, and the people who first solved this problem, are people like David Rumelhart and Geoffrey Hinton, because they were able to get neural networks to solve this problem very readily. Because neural networks work in terms of parallel distributed processing: many types of processing going on simultaneously, and it can be both top-down and bottom-up in a self-organizing fashion. Now, right now this is just suggestive, and that’s all I’m going to do with it right now. But it is suggestive, because as many of you know, neural network theory has consistently presented itself as a theoretical alternative to computational processing. Yes? So you’re… yeah, but that’s because you already have some model of the whole word, which you’re using to fill it in. Where’s that coming from? Some top-down process that tells you what the whole word is, right? So that’s right.
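The top-down plus bottom-up story can be made concrete with a toy relaxation loop, loosely in the spirit of Rumelhart and McClelland’s interactive activation idea (this is my sketch, not their actual model, and the lexicon and rates are invented): word-level knowledge feeds support back down to an ambiguous letter while the letter’s visual evidence feeds up.

```python
# Toy relaxation loop (my sketch, loosely inspired by interactive
# activation; not the actual Rumelhart-McClelland model). The same
# ambiguous glyph reads as H in "T?E" and as A in "C?T", because
# top-down word support and bottom-up feature evidence interact.
LEXICON = ["THE", "CAT", "CAR", "TOE"]  # invented mini-lexicon

def disambiguate(context, bottom_up, steps=20, rate=0.5):
    """context: a word with '?' at the ambiguous position.
    bottom_up: dict letter -> evidence from the visual features.
    Returns the letter with the highest activation after settling."""
    pos = context.index("?")
    act = dict(bottom_up)  # letter activations start at the evidence
    for _ in range(steps):
        for letter in act:
            candidate = context[:pos] + letter + context[pos + 1:]
            # Top-down: real words support the letters they contain.
            word_support = 1.0 if candidate in LEXICON else 0.0
            # Bottom-up evidence and top-down support jointly drive the unit.
            act[letter] += rate * (bottom_up[letter] + word_support)
        total = sum(act.values())
        act = {k: v / total for k, v in act.items()}  # keep activations bounded
    return max(act, key=act.get)

evidence = {"H": 0.5, "A": 0.5}  # the glyph itself is genuinely ambiguous
print(disambiguate("T?E", evidence))  # H, because THE is a word and TAE is not
print(disambiguate("C?T", evidence))  # A, because CAT is a word and CHT is not
```

Note there is no homunculus deciding the letter: the answer falls out of the two streams of constraint settling against each other.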
Okay, so what’s going on is the idea (and again, this is now me doing cognitive science extension beyond Baker-Sennett and Ceci) that maybe what’s going on when you’re cognitively leaping is pattern completion. And pattern completion is using this kind of simultaneously top-down, bottom-up, self-organized process that can be implemented in neural networks. Which is very different from the logical inferential processing, propositional processing, that is central to computers. It’s a very different model than the search inference framework, because it’s not inferentially driven. It’s not driven by inferences operating on propositions. It’s driven by a self-organized dynamic relation between top-down and bottom-up processing. Now, that’s interesting, because that suggests a potential source (I’ll get your question in a sec) for a rigorous theoretical alternative to the search inference computational framework, an alternative that was never provided by the Gestalt people. Yes? Sorry, I just got completely lost. Okay, so if humans have, like, parallel distributed processing, computers are… Oh, and let’s be careful about what I’m not saying. I’m not saying that you don’t do computational processing. I’m not making that claim. What I’m claiming is that a better model for how you do this kind of thing is the kind of thing that’s going on in neural networks, which is a process in which you’ve got a self-organizing relationship between top-down and bottom-up processing, like in here when you’re doing THE CAT. Thank you. And notice that there’s an aspect of your cognition that has this bottom-up, top-down, self-organizing capacity to it: namely, your attention. So one of the contexts in which this kind of model is frequently invoked is attention. Attention is both bottom-up, so that if I do that, it grabs your attention; but I can also drive it top-down: how’s your right big toe doing right now? Okay. And of course the two are always interacting.
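Pattern completion in a neural network can be illustrated with a tiny Hopfield-style memory (my choice of example; the lecture does not name this model, and the “picture” is just an invented vector): store a pattern, then recover it from a degraded cue, the way a few dots can evoke the whole sofa.

```python
# Minimal Hopfield-style sketch of pattern completion (illustrative
# choice of model, not named in the lecture; data is invented).
def train(patterns, n):
    """Hebbian weights: units that fire together wire together."""
    W = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    W[i][j] += p[i] * p[j]
    return W

def recall(W, cue, steps=5):
    """Repeatedly let each unit settle under its weighted input."""
    s = list(cue)
    for _ in range(steps):
        for i in range(len(s)):
            h = sum(W[i][j] * s[j] for j in range(len(s)))
            s[i] = 1 if h >= 0 else -1
    return s

sofa = [1, 1, -1, -1, 1, -1, 1, 1]       # a stored "picture" (+1/-1 pixels)
W = train([sofa], len(sofa))
partial = [1, 1, -1, -1, -1, -1, -1, 1]  # cue with two pixels corrupted
print(recall(W, partial) == sofa)        # True: the net completes the pattern
```

The completion is not an inference over propositions; it is the network relaxing into a stored attractor, which is why this style of model reads as an alternative vocabulary rather than a variant of search.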
Things are trying to grab your attention; you’re trying to direct your attention; but as you direct your attention, that will grab your attention. And people are talking more and more about attention being this kind of simultaneously bottom-up, top-down, self-organizing process. So we might have the potential for an alternative understanding of the core processes that are going on in restructuring. This is a beginning. Yes? So if Baker-Sennett and Ceci’s task is related to, like, you know, pattern recognition, and if that pattern recognition is related to insight problem solving, and if what they’re measuring is kind of indicative of IQ, pattern recognition is indicative of IQ, are insight problem solving and IQ very related? There’s a problem with that, and this is one of Sternberg’s criticisms. Standard IQ measures are not predictive of insight, because guess what most of the problems are in standard IQ measures? They’re all well-defined problems, and very few of them involve this. What Sternberg has done (and I do more of this in 371) is he has taken standard IQ tests and then added to them insight problem solving, and produced a measure that combines the two. And that turns out to be a much better predictor than standard IQ measures, for academic success, blah, blah, blah, stuff like that. So, like, would Raven’s progressive matrices be an example of something that’s very… because it feels like it would be very… It should do both, yeah. I think so too. And this is leaping ahead, so I can’t immediately justify what I’m going to say, but a lot of people are arguing that what you’re measuring, when you’re measuring especially the fluid aspects of G, is cognitive flexibility. And cognitive flexibility turns out to be the single biggest predictor of your capacity for insight problem solving. Now, all of this is encouraging, right? Because it means that we’re on the right track.
If we’re getting at this core ability, and it’s pointing us back to core measures and core theoretical constructs for intelligence, then we’re on the right track. But as I’m also indicating, we’re on the cutting edge. This is where a lot of people are proposing what needs to be done next and what has not yet been done. Which is good for you, right? Because you want to know what your future is, not just your past. Yes, Hannah. So you were saying that this top-down, bottom-up processing is going to be shown in neural network models, not in the search inference framework. But you said that this was something the Gestalt people could have used, if it had been available, to be more scientific? Yes. That’s exactly what I said. I mean, I don’t want to be unfair to them, because it didn’t exist; I can’t say, you guys should have been using neural network theory. But that’s okay. So you’re saying it aligns with the Gestalt ideas, that it gives them a scientific interpretation as opposed to a merely metaphorical one? Yes, exactly. That’s exactly what I’m saying. Okay. I haven’t yet established it, or even given you evidence for believing it. All I’m saying is that this is now at least a rigorous alternative that would give a vocabulary, a conceptual vocabulary, to the Gestaltists, so that we could start to get the debate going even better. Okay. And this is when? 1996. Come on, any of you who did 250 with me: why is this an important date for neural networks? Geoffrey Hinton: the wake-sleep algorithm is invented around the same time, for those of you who know. The best way you do unsupervised, self-organized learning in neural networks. All this stuff is happening at about the same time. Yes? Is it possible that you could have, like, an inferential question that ends up being answered by insight? Yeah. Okay. So there, people can move back and forth between them.
Interestingly enough, there’s been some recent research, and I’ll talk about it later in the course, that says you should give quite a bit of trust to the solutions you reach by insight. They do tend to be better solutions than your incremental ones. We’ll come back to that. Again, within certain constraints. All right. Let’s get back to Baker-Sennett and Ceci. So again, what did they say? They did some brilliant stuff. They operationalized leaping. I think they misnamed it inductive leaping. The stuff with the sofa dots, that’s not induction. That’s maybe abduction, but it’s not induction. You’re not saying: I have this pattern of dots, and they’re all black, so all dot patterns are black. It’s not induction. So let’s put that aside. So you’ve got cognitive leaping, and then it’s plausible, especially given the way they describe it, that what they’re pointing to is a pattern completion ability that’s a strong predictor of insight. And then I pointed out that pattern completion seems to involve dealing with multiple simultaneous constraints, and that we have an alternative theoretical construct for talking about this: the self-organization of top-down, bottom-up processing within neural networks. Final point on that. Or is that okay as a recapitulation of the whole argument? Final point. Another thing that neural networks have been better at than standard computational models is procedural knowledge, skilled abilities. In fact, a lot of the renaissance that’s going on right now in AI (not all of it, but a lot of it), including work by one of my students and fellow authors, Tim Lillicrap, is about coming up with very powerful machines for dealing with procedural abilities. You see, skills also tend to involve this. Knowing how to catch a ball seems to involve, again, this kind of problem.
Multiple simultaneous constraints that are unfolding dynamically in a self-organized fashion. And your cognition has to deal with that in order for you to know how to catch a ball. So maybe this is also why — again, maybe, because right now we’re just trying to come up with a plausible conceptual vocabulary; we have to do this because the Gestalt people didn’t have it — maybe why insight seems so procedural in nature is precisely because it’s exactly this kind of processing that we’re talking about. All right. Now, as you may expect, the search inference framework just didn’t roll over and die. It didn’t go, “You’re right, you’re right, you’re right.” Especially in ’96, because — sorry, this isn’t meant to be self-promotional — people weren’t yet considering that all of this stuff could be integrated together theoretically in terms of dynamical processing within neural networks. That would be anachronistic. People, like Geoffrey Hinton, were just beginning to get those ideas. All right. Now, the search inference framework — around this time, sort of after Metcalfe but before Schooler and Melcher, and before Baker-Sennett and Ceci — has one of its best replies to the work, especially, of Metcalfe. We mentioned this already, but let’s go into it again. This is the work of Kaplan and Simon, from 1990. And this is indeed the Simon of Newell and Simon. And now I hope you understand better the joke in the title of their article. The title of the article is In Search of Insight. It’s an academic joke. The title is brilliant: it tells you what their stance is. They’re now admitting — they’ve given up the Weisberg and Alba line — that there is such a thing as insight. 
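That talk of pattern completion as the self-organized satisfaction of multiple simultaneous constraints can be made concrete with a toy Hopfield-style network. This is purely illustrative — the patterns are invented, and this is not Baker-Sennett and Ceci’s task — just a minimal sketch of the kind of processing being pointed at:

```python
# Toy Hopfield-style network: pattern completion as the simultaneous
# satisfaction of many soft pairwise constraints (the learned weights).
# Patterns are invented for illustration.

patterns = [
    [1, -1, 1, -1, 1, -1, 1, -1],
    [1, 1, 1, 1, -1, -1, -1, -1],
]
n = len(patterns[0])

# Hebbian learning: each weight stores one pairwise constraint.
W = [[0 if i == j else sum(p[i] * p[j] for p in patterns)
      for j in range(n)] for i in range(n)]

def complete(cue, sweeps=5):
    """Settle a corrupted cue into the stored pattern that best
    satisfies all of the constraints at once."""
    s = list(cue)
    for _ in range(sweeps):
        for i in range(n):                 # asynchronous unit updates
            h = sum(W[i][j] * s[j] for j in range(n))
            s[i] = 1 if h >= 0 else -1
    return s

cue = list(patterns[0])
cue[0], cue[1] = -cue[0], -cue[1]          # corrupt two of eight units
print(complete(cue))                       # -> [1, -1, 1, -1, 1, -1, 1, -1]
```

The point is only that completion here is not step-by-step inference: every unit adjusts to all the others at once until the whole pattern settles.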
They’re admitting there is such a thing, but what are they ultimately going to do, if the title is In Search of Insight? They’re going to show that it can ultimately still be handled by the search inference framework. Again, see how you can get into stuff more deeply if you situate experimentation within theoretical debate. Okay. So, they are going to study something I talked about — I foreshadowed this experiment when I mentioned it to you in the first class. They used the mutilated chessboard problem. Remember the mutilated chessboard problem? Can you cover the board with dominoes after removing the two opposite corners? And the insight is to realize that you shouldn’t be able to do that. Now, what’s interesting, first of all, is to look at their theoretical response. I’m not going to go into all the details, but I am going to go into the heart of it. What they’re doing is really cool. I think there’s going to be something sort of wrong about their theoretical move, and I’ll try and give you an argument why. But there’s something really right about it, because it’s prescient. They come up with something abstractly that’s going to turn out to be discovered concretely in what’s going on in the brain. It’s really cool. So, although I’m going to be criticizing the theoretical move, I hope it also comes across that I have a lot of respect for it, because it foreshadows something really important in a very prescient manner. And I think it might — I’m not sure; I haven’t been able to trace the history, although this article is very influential — it might even have influenced Jung-Beeman and Bowden’s work and stuff like that. Okay. All right. Yes? I think my question is very related. It’s just chronological, really. 
So, if this was published in 1990, then how is it a response to the… It’s a response to Metcalfe. Okay, great. So, what they do, remember, is they grant that insight exists, they talk about it as a restructuring of the problem formulation, and then they operationalize problem formulation as a search for a problem formulation. And what they do then is divide insight problem solving into two levels of analysis. There’s the primary level, and then there’s the meta level. The primary level is the level at which you’re searching for a solution to your problem. Now, one thing: this is not Weisberg and Alba’s memory compartmentalization. Even though I’m drawing these boxes, this is a different thing. I have a lot more respect for this. And like I said, I’m going to argue later that it’s prescient, and it shows, again, the value of theoretical debate in order to foreshadow and even at times afford empirical investigation. Okay. So, the idea of Kaplan and Simon is that there’s a primary level where you’re searching for a solution to a problem. You can’t get a solution — perhaps the space is combinatorially explosive, and you’re not finding one. What you then do is shift to a meta level. This is not the level of a formulated problem. This search space is a search space through all possible problem formulations. So, this is the search space of problem formulations, and this is the search space of a formulated problem. The main idea is that this is what insight is. This level doesn’t work, so you switch to this space. You find an alternative problem formulation. You bring it back down here, and then that affords you solving the problem. And that’s what insight is. Insight is this movement to, and then back from, the meta space, which restructures the problem space so that it becomes solvable for you. 
First, again, this kind of shifting is going to turn out to be relevant. Notice how clever this is, because they can plausibly say that the shifting is what’s reflected in the sudden spike: you shift, and when you come back, that’s when you get it. So, it’s very plausible. Like I said, it’s going to turn out — and I’ll just foreshadow — that this seems to map onto hemispheric shifting in processing that seems to be correlated with insight, which also seems to be correlated with the Metcalfe stuff. Yes, Becky? It’s the same type of processing that happens when you’re searching through the problem formulations, right? You’re still just searching. Yes. Okay, so now Becky is putting her finger on the very narrow theoretical place that Kaplan and Simon are trying to stand on. It’s the razor’s edge. Because, in order for this to all be computational, the same kind of processing has to be going on here as here. That’s what they want to argue. But if the same type of processing is going on here and here — and do you think this space is a small space? No, it’s probably what kind of space? Combinatorially explosive. So, if exactly the same thing is going on here and here, is that going to make any difference? Which is your point, I think. So, what they need — this is the hard place they’re getting into — is for the process going on here to be different in kind, to prevent an infinite regress. Because you don’t want to say, “Well, you know what I do here? I go through another space to look for a problem formulation,” and so on. The way you prevent an infinite regress is that there has to be an important difference in kind between one stage and another. So, one side of their mouth — they’re speaking out of two sides of their mouth at the same time, if I can use that metaphor — one side has to say that the processing going on here is different in kind from the processing going on there. 
That’s in order to prevent the infinite regress. But the other side of their mouth has to say, “Ah, but this is all just computational processing,” because that is how they are defending the search inference framework against the Gestalt alternative. Now, let me be really careful here. This is subtle — I get that — and it’s abstract thinking, which is hard; I get that too. I am not accusing them of a contradiction, because I technically have not shown you a contradiction. What I’ve shown you, though, is that this is a very, very narrow theoretical space. They somehow have to say it’s different in kind, so the infinite regress is blocked, but that it’s basically still the same stuff, the same kind of computational processes. Now, that’s not technically a contradiction, but it’s getting really close. Did that pick up on your point? Good. Okay. So, they’re in a very tight space. One more time: although I’m launching into this criticism, nevertheless, this is not Weisberg and Alba; this is not the compartments of memory search. Secondly, there’s something here that is brilliantly impressive. This foresees something that seems to be the case. This shifting idea seems to be a really important one, and may correspond to a shift between computational and non-computational processes, et cetera. Now, did somebody have a question over here? I was just wondering why it isn’t a contradiction. Why it isn’t a contradiction? Because there may be a way out — see, the line between differences of degree and differences of kind is not one that philosophy and logic have been able to actually specify. These are what are called sorites paradoxes. Here’s the technical reason. If I’m not bald and I remove one hair from my head, does that make me bald? No. So, removing one hair from a non-bald head doesn’t make me bald. 
So, I’m non-bald, I remove another hair, I’m still not bald. So, I should be able to remove all of my hair and still not be bald. But I’d be bald, wouldn’t I? That’s a paradox. So, at some point, a difference of degree becomes a difference of kind. But do we know exactly where that is? No. So, they could say, “Well, it’s a really big difference of degree, but not completely a difference of kind, and therefore it’s not a contradiction.” That’s the technical reason, to be fair. Is that okay? See, philosophers actually do some really useful stuff. Okay. Now, they know this problem, so to bring it up is fair. And what do they do? Excellent — they themselves bring it up; they realize that they face a problem. Now, I want to be clear that I’m not misattributing something to them. This is a direct quote, from page 376 of the original publication: “The same processes that are ordinarily used to search within a problem space can be used to search for a problem space” — a problem representation, which is what we call a problem formulation. So, I’m not misattributing. That is the statement that there’s no fundamental difference between the two. However, they get that the meta level is combinatorially explosive. So, they then want to say that the crux in problem solving is selectivity in search. What they want is something that gives them a kind of selectivity here, one that will operate differently than what operates here. And so, they propose a heuristic that operates at the meta level that is not operating at the primary level. This is their answer to the problem. They’re putting all of their theoretical eggs into this one basket. It’s called the notice-invariance heuristic. I’ll talk just a little bit about it and then we’ll wrap up, because a lot of you are starting to get that glazed look — like, “I’m full of too many thoughts.” 
I can’t tell you what any of them individually is. Okay. So, I’ll just briefly talk about it, introduce a point, and then we’ll come back and pick up on it. That will give us momentum going into the next class. The idea is this: you look at your failed problem formulations. Here’s a failed problem formulation. Here’s another. And here’s another. So, what should you do? Try to notice what is invariant across all of these different failed problem formulations — what is not changing. What’s not changing is this base, for example; I’m just using this pictorially. Try to notice what’s not changing across your failed problem formulations. Now, why? Because you’re failing, there is probably a common cause to all of your individual failures. And that common cause plausibly — probably, even, although not certainly — is related to what is invariant in your problem formulations. That is the thing you’re not changing that is preventing you from solving the problem. So, you try to notice what’s invariant, and then that is what you change in your problem formulation, and that is what you bring back down here, to see if it reformulates your problem. Does that make sense? All right. Now, that’s kind of cool, first of all. So, like I said, I’m going to take two more minutes and then I’ll let you go, okay? Notice how gestaltig this heuristic is. Gestaltig — that’s a new adjective, by the way; you can now use it. Adding a suffix onto a German word: gestaltig. Can we use it? Yes, you have explicit permission. You can use the adjective gestaltig. Okay. First of all, it’s notice, which is an attentional, salience term, not an inferential, propositional term. 
And notice how it’s using, for example, a kind of field disembedding that was predictive of insight problem solving, just as people from the Gestalt tradition, and the Schooler and Melcher work, had shown. So, this is a very gestaltig heuristic. It’s not really operating very computationally. Now, in addition to that, there’s a logical problem. How many things remain invariant across something that’s changing? That’s combinatorially explosive. So, again, there has to somehow be a zeroing in on a subset of all the possible invariants. Okay, so what are we getting to? Let me just wrap it up — one more minute and then I’ll let you go. What we’re getting to is this idea that the search inference framework is coming back, and coming back really strong. They’re coming back with a theoretical advance: we’ve got these two levels, we’ve got the shifting relationship, and then we’ve got a different kind of search heuristic operating at the meta level. The notice-invariance heuristic is going to turn out to be a really important idea, by the way. So let’s grant all of that. And this shifting idea, and a different kind of attentional processing — this is all prescient, because one of the ways you can reliably talk about the differences between the left and the right hemisphere is in terms of their attentional processing of information. In fact, that seems to be the best sort of evolutionary account for why you have lateralization between the left and the right hemispheres. Okay, we’ll come back to all of that. So that’s all great. The problem is that it’s really questionable whether or not they’ve actually succeeded in what they wanted to do. Did they find a landing spot in that very narrow theoretical space? Or have they actually come up with a gestalt process that’s really not computational in nature? 
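The notice-invariance heuristic described above can be sketched very simply: represent each failed formulation as the set of assumptions it made, and intersect those sets to find what never varied. The assumption names here are invented for illustration; this is a toy sketch, not Kaplan and Simon’s implementation:

```python
# Toy sketch of the notice-invariance heuristic. Each failed problem
# formulation is represented as the set of assumptions it made; the
# assumption names are invented for illustration.

def notice_invariance(failed_formulations):
    """Return the assumptions shared by every failed formulation --
    the plausible common cause of all the failures."""
    return set.intersection(*map(set, failed_formulations))

failed = [
    {"cover-all-squares", "count-squares", "place-horizontally-first"},
    {"cover-all-squares", "count-squares", "place-vertically-first"},
    {"cover-all-squares", "count-squares", "start-from-center"},
]

stuck_on = notice_invariance(failed)
print(sorted(stuck_on))   # -> ['count-squares', 'cover-all-squares']
# These never varied across the failures, so the heuristic says: these
# are what to change next (e.g. count colors rather than squares).
```

Note that the sketch also makes the logical problem visible: it presupposes that the relevant invariants are already represented as a small set of discrete assumptions, which is exactly the zeroing-in that remains unexplained.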
Okay, that’s it for today. Thank you very much for your attention. We’ll pick it up next time. Really good discussion, really good involvement. I appreciate it. It makes the class enjoyable for me as well. And I think it helps the fellow students.