https://youtubetranscript.com/?v=-ltz3k8S3ZM

Last time we looked at all three of Metcalfe’s experiments. First, the feeling-of-knowing experiment, in which she used the feeling of knowing as a metacognitive probe, and she seemed to provide good experimental evidence that a standard memory search was not going on inside insight problem solving. Then we had the middle experiment, which I won’t go into in great detail in the review, because what mattered was the third experiment, where she did feeling-of-warmth ratings for non-insight problems, as defined by the search inference framework, and for insight problems, and she found that the feeling-of-warmth patterns were different, and the accuracy of predictions for finding the solution was reliably different, for the two forms of problem solving. This was followed up by Jausovec’s work and others showing that there was something physiological going on. The heart rate differences between the two sets of problems showed the same pattern and were highly concordant with the patterns of feeling of warmth for insight and non-insight problems. There seems to be something actually physiological, and therefore highly plausibly something a little different, going on between these two kinds of problem solving. We then took a look at the work of Schooler, Ohlsson, and Brooks, and at the idea of concurrent verbalization. I pointed out that concurrent verbalization involves a theoretical construct of verbal overshadowing, which I’m going to criticize at length later, but nevertheless we paid attention to some important findings in the experiment. Concurrent verbalization interferes with insight problem solving and not with non-insight problem solving. This is additional evidence for something we started to investigate when we looked at the responses to Weisberg and Alba: that insight is much more procedural in nature than propositional slash computational in nature. We looked at the kinds of things that were reported by people and the kinds of things that predicted their success, and what we noticed is that argumentative strategies, inferential strategies, were predictive of success on non-insight problem solving, but not on insight problem solving. Very little predicted success on insight problems — pausing, perhaps, because it seems to involve that metacognitive stepping back and looking at the cognitive medium, looking at the framing of the problem rather than through the framing at the problem. We then took a look at the speed-of-reasoning view: the idea that although insight problem solving and non-insight problem solving look very different in a qualitative sense, perhaps insight is just speeded computation, computation sped up. But when we looked at the within-subject correlations between the inferential strategies for insight and non-insight problems, we got a zero correlation, which was pretty good evidence against the speed-of-reasoning view. And I also pointed out that the speed-of-reasoning view has some initial implausibilities anyway. So we asked: why, for these sets of problems, would we speed up our processing, and why is it that when we make machines that have sped up this processing, they haven’t become more insightful in nature? We then took a look at further work by Schooler et al.
using the individual differences methodology to try and find what the subskills are that are predictive of the skill of insight, given that we’re now seriously considering the idea that insight might be more skill-like in nature. And we took a look at perceptual restructuring, field independence tasks, remote associates tasks, and category generation; they also did anagrams, which I argued was, I think, terribly confounded. And what we found is that perceptual restructuring and field independence are highly predictive of insight problem solving; RAT tasks somewhat, but not as much, which is paradoxical because they’re used a lot to test for insight and creativity. And category generation was not predictive of insight. Category generation and other inferential measures like SAT and MAT scores were predictive of non-insight problem solving but not predictive of insight problem solving. So you have a double dissociation in terms of what’s predictive of each kind of ability, which is pretty significant evidence, quote, “that the two types of problems draw on qualitatively different processes,” which is almost as bold a statement as Weisberg and Alba’s, and in completely the opposite direction. We then took a look at the work of Baker-Sennett and Ceci on what they call inductive leaping. I argued that that is a misuse of the word induction, and that we should replace it with the term cognitive leaping. And what they basically did was test — well, I’m arguing this; they didn’t quite put it this way — pattern completion as predictive of insight problem solving. And that turned out to be the case: the better a leaper you are. And remember, on the test: leaper, please, leaper. It’s very disruptive to read over and over again answers about lepers. Okay. So, right, being a cognitive leaper is in fact predictive of insight problem solving. And then I suggested to you that this not only addresses, or helps give some understanding of, the ineffability — we started talking about what pattern completion is. It’s a multiple simultaneous constraint task. It’s something that has to be solved in a parallel, self-organizing fashion. It’s something that can be done better by neural networks than by standard computational machines. Do you remember all of that? Yes. And that suggested very strongly that this might be going on in insight problem solving, and I pointed out that skills in general are also considered dynamic multiple simultaneous constraint problems, and therefore it would make sense that insight would be more skill-like in nature than propositional in nature. Okay.
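To make the pattern-completion idea concrete, here is a minimal sketch — my own toy construction, not anything from Baker-Sennett and Ceci or the lecture — of completion as multiple simultaneous constraint satisfaction in a small Hopfield-style network: every unit is constrained by every other unit at once, and the network relaxes, in parallel, into the nearest stored pattern.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two arbitrary 25-unit patterns (+1/-1), stored with the Hebbian
# outer-product rule. The weights encode pairwise constraints.
patterns = np.array([
    [1, -1] * 12 + [1],
    [1] * 13 + [-1] * 12,
])
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0.0)  # no unit constrains itself

# Degrade one stored pattern: flip five units to make an incomplete cue.
probe = patterns[0].copy()
probe[rng.choice(25, size=5, replace=False)] *= -1

# Relax: every unit simultaneously updates to satisfy the constraints
# the other units impose on it, until the state stops changing.
state = probe.astype(float)
for _ in range(10):
    new_state = np.sign(W @ state)
    new_state[new_state == 0] = 1.0
    if np.array_equal(new_state, state):
        break
    state = new_state

print("completed the stored pattern:", bool(np.array_equal(state, patterns[0])))
```

The point of the toy is only that nothing here is a step-by-step inference: the solution falls out of all the constraints being satisfied at once, which is why this style of computation suits networks better than standard sequential machines.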
Then, of course, we pursued the idea that the search inference framework was not going to just lie back while all of this was going on, and we took a look at the work of Kaplan and Simon in 1990 — and that is in fact the Simon from Newell and Simon, the founders of the search inference framework. And you now know why they gave the paper such a witty title: “In Search of Insight.” Ha ha ha ha. Okay. And their basic argument is to assert that the search inference framework is still adequate. So they now, contra Weisberg, freely admit that insight really exists, but what they want to argue is that it can be adequately explained by the search inference framework: you do not need any additional theoretical machinery. They propose to solve this with something that I have a lot of respect for, in the sense that it’s very prescient of what we’re going to talk about when we move into the neuroscience. They talked about shifting between two kinds of space: the primary space, which is the problem space within which you’re looking for a solution, and the meta space, which is the space of all possible problem formulations, where you’re looking for an adequate problem formulation. And the idea of insight is that you switch from the first space to the second, from primary to meta, find the effective problem formulation, and then bring it down to reshape the primary search space and solve the problem. And that’s what insight is. And the idea is that it’s just search inference at both levels. The question then became — and they are in a very theoretically tight spot here — that in order to prevent an infinite regress, what’s happening at the meta level has to be in some sense different from what’s happening at the primary level. But in order to maintain their theoretical commitment, they have to say that it’s not importantly different. Whether or not they succeed on that is an interesting question. I’m suggesting that they don’t quite succeed, because the heuristic that they argue is operating at the meta level is the notice invariants heuristic, which is a very — and I invented an adjective for you — a very Gestalt-y heuristic. It operates by the manipulation of attention to manipulate what is found salient in order to reformulate the problem, which sounds much more like what the Gestaltists were saying was going on in things like Necker cube flipping than what Newell and Simon were originally talking about when they were talking about things like the means-ends heuristic. I believe that’s what we got to so far — is that correct? So they took a look at some of the things that were predictive of people’s ability to notice, or to solve, the mutilated chessboard problem, which is the problem in the 1990 article, and they also experimentally manipulated the variable of the salience of the parity cue. The more salient they made the parity cue, the more likely people were to come up with a solution — again, it being a salience issue, not simply a belief issue. More importantly, what it came down to is that they found that flexibility in noticing — this sort of notion of flexibility in noticing — turned out to be really important for understanding who was going to be able to solve the mutilated chessboard. So what’s interesting about that is that this is also prescient, I think in an important way, of the increasing consensus that the central thing behind insight ability is cognitive flexibility, where cognitive flexibility is almost always measured by your ability to use attention to reconfigure what you find salient in the problem. Yes? What did they do to make the parity cue more salient? For one thing, what they did is they replaced the black and white colors with the words bread and butter. And people would notice that the two corner squares both had the word butter on them, and then they would go: oh, those are the same. That helped. That’s important, by the way, because what you can do is create sort of independently measured levels of difficulty for the mutilated chessboard by the degree to which you change the salience of the parity cue.
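To see exactly what the parity cue is, here is a small worked sketch — a toy illustration of the standard parity argument, not Kaplan and Simon’s materials — showing why the mutilated chessboard cannot be covered by dominoes at all.

```python
# Every domino covers one black and one white square, so a complete
# tiling needs equal counts of each colour. Removing two same-coloured
# corners breaks that balance, which is the invariant to be noticed.
def colour(row, col):
    return "white" if (row + col) % 2 == 0 else "black"

# Standard 8x8 board with two opposite (same-coloured) corners removed.
squares = [(r, c) for r in range(8) for c in range(8)]
mutilated = [sq for sq in squares if sq not in [(0, 0), (7, 7)]]

counts = {"white": 0, "black": 0}
for r, c in mutilated:
    counts[colour(r, c)] += 1

print(counts)  # {'white': 30, 'black': 32} -- so 31 dominoes can never cover it
```

Making the corners both say “butter” is just a way of making that colour imbalance impossible to miss.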
Now, obviously, this is a statistical claim. We’ve still left floating in the air the question of what it is for something to be an insight problem. Geeta, you had your hand up earlier. Yeah. In Kaplan and Simon you have two levels — how would the levels communicate? How the levels communicate is something that is not discussed. And that’s part of something I would love — well, let’s go into it right now. The issue is what the processing is between the levels. I mean, the thing to do to really annoy any psychologist when they give you a flow chart is to ask: what’s happening on the arrows? Right? And so the question is what the nature of that shifting is. Because presumably — and we have to be careful here not to misattribute things to them — it sounds like the shifting between the levels is not a computational process. It’s some kind of switching of activity and translation of information from one format to another. So you might be beginning to worry: that sounds a little bit like the very phenomenon we’re supposed to be explaining. Yes. But what I’m arguing is — they’re not the same problem, but I think they’re intimately connected: that problem, and the fact that the notice invariants heuristic tends to be a very Gestalt-y heuristic, and that what they’re relying on is that altering salience somehow, in a non-computational fashion, reshapes the problem space. Is that okay? Yes. I think I missed something. What does the mutilated chessboard have to do with the Necker cube? Oh, what I was saying, sorry, is that the notice invariants heuristic, where basically you’re using attention to reconfigure what you find salient, is the kind of process that the Gestaltists were talking about that’s at work in things like the Necker cube, which is a classic Gestalt example of how things restructure. Okay. That’s what I meant. You mentioned the mutilated chessboard as well in reference to something, but I think I missed something. I don’t remember — did I? What did I say? Like, just now? You said something about bread and butter. Yeah, I got confused with that. Oh — so that was how they manipulated the salience of the parity cue. On the white and black squares of the chessboard, they replaced the colors with the words bread and butter, which helped people notice that the two corner squares were the same — that’s the parity — and that’s what you need to notice in order to solve the mutilated chessboard. Okay, so who did this? Kaplan and Simon. It was one of the experimental manipulations. Okay, so — wait, wait, wait, I just want to make sure I understand. They replaced the squares with? Yeah, you have a chessboard like this, laid out, right, and where this would be white, you’d say butter, and where this would be black, you’d say bread. Oh, okay, now I get it. Okay, cool. Wouldn’t it have been cool if they put things that people expect to be integrated together, like peanut butter and chocolate or something like that? I have the same question. Because you do put butter on bread too, I suppose. Any other questions about that so far? Clarification questions? Okay, so let’s get back to it.
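Since Geeta’s question about the two levels is a good one, here is a deliberately crude toy sketch — my own construction, and nothing like Kaplan and Simon’s actual model — of the two-space idea: search in a primary space under one problem formulation, and, on impasse, step up to a meta space of alternative formulations. The problem, numbers, and formulations are all invented for illustration.

```python
from itertools import combinations

def primary_search(problem, formulation, budget=1000):
    """Search the primary space under one formulation; None signals impasse."""
    for steps, candidate in enumerate(formulation(problem), start=1):
        if steps > budget:
            return None  # impasse: this formulation makes search intractable
        if sum(candidate) == problem["target"]:
            return candidate
    return None

# The meta space: alternative formulations of the same problem. Here a
# "formulation" just fixes which candidates the primary search sees first.
def naive_formulation(problem):
    return combinations(problem["numbers"], 2)

def largest_first_formulation(problem):
    return combinations(sorted(problem["numbers"], reverse=True), 2)

# Meta-level search: on impasse, move to the next problem formulation.
problem = {"numbers": list(range(1, 2000)), "target": 3995}
for formulation in (naive_formulation, largest_first_formulation):
    answer = primary_search(problem, formulation)
    print(formulation.__name__, "->", answer)
    if answer is not None:
        break
```

The first formulation exhausts its budget (an impasse); the reformulated search finds the answer in three steps. Notice, though, what the toy leaves unexplained — exactly Geeta’s point: the hop from one formulation to the next is just a bare loop, with nothing said about what happens “on the arrows.”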
Now, look at what’s coming out of the Kaplan and Simon thing. You could make insight better in people — and presumably this is something we would want to do — by training their attentional skills, by making them increasingly sensitive to how they are distributing salience onto stimuli, and by increasing their awareness of their flexibility to alter what they find salient and relevant by redirecting their attention. And this monitoring should be done in a non-inferential fashion — I hesitate to say non-verbal, for reasons I’m going to come back to, but at least a non-inferential fashion. Now, why am I mentioning that list? Because that is a list of the kinds of things you train in mindfulness practices. That means there’s a prediction — obviously this is a postdiction on my part, because I’m lecturing in 2016, but when I started lecturing on this in ’97 it was a prediction — namely, that we would find that increased training in mindfulness practices enhances people’s insight problem-solving ability. And there is increasing experimental evidence confirming that claim: that mindfulness training enhances insight problem solving. So it’s a postdiction now, but it was a prediction then. So one of the topics we’ll move on to consider is the connection between mindfulness and insight. Something you should know is that I started teaching about this in ’97, drawing not on people from the Buddhist world but on a psychologist who started doing work on mindfulness independently in the 1980s, Ellen Langer. I believe I was the first person to academically teach about mindfulness at U of T, and it was actually because of this kind of prediction coming out of this experiment. So again, this is an experiment I have a lot of respect for, even though I’m disagreeing with their final theoretical thrust, because they’re just doing really good science. It’s really theoretically fine-grained in a way that makes several important predictions. Yes? If insight is linked to some measures of fluid intelligence, could meditation theoretically increase measures of fluid intelligence? So, first of all, there are difficulties about distinguishing between the rational enhancement of your abilities and their natural capacity. But there is indication that extended mindfulness practice does alter attentional abilities, and there seems to be some indication it alters working memory. And those things do both seem to be highly correlated with measures of fluid intelligence. You have to be careful about this stuff, because the language we have around intelligence, and distinguishing it from rationality, needs to be treated a little bit more rigorously and carefully within psychology. Is that okay? Yes. So I would prefer saying this — and those of you who have done 371 know my arguments for it — I would prefer to say that mindfulness makes your attentional processes more rational, rather than saying it makes them more intelligent. Now, if you have a problem with that because you think that rational equals logical, that’s a problem we should discuss, because there are very good reasons for not saying that rational equals logical and vice versa. But that’s in 371. So intelligence would be more like something like hardware? I think the best way to think about intelligence is something that’s directly relevant to this course.
I think Stanovich’s argument is that intelligence is the brain’s capacity for dealing with computational limitations — its basic ability to deal with combinatorial explosion is what you’re measuring in intelligence. That’s not the same thing as measuring the brain’s ability to optimally achieve its goals, which is what I think rationality is. Is that okay? Yes. In terms of this particular context, if you want to condense or distill this: would rationality be cultivating a set of cognitive styles that enable you to make better use of your intelligence capacity? Yeah, I ultimately think that’s what rationality is. I think that rationality is when intelligence uses psychotechnologies to alter how we best use our intelligence. Now we’re getting into 371 a lot, and so we’re going to stop doing that. But it’s tantalizingly out there for you now. We talk a lot about rationality and wisdom in 371; now you can see why those might be relevant to the type of stuff we’re talking about here. The questions about these kinds of issues — intelligence, rationality, etc. — start to arise naturally. Okay, but that’s all. So we’ve got a couple of things to look forward to from Kaplan and Simon. The most direct is this shifting, and also Geeta’s concern about what kind of processing is going on there. And my concern, added to hers, is that that seems to have a connection to the Gestalt-y nature of the notice invariants heuristic. So there’s that shifting issue, and then this other issue about the connections between insight and mindfulness. I’ll say one more thing about that, to prefigure what we’re going to say. I think there’s a lot of really crappy research going on under the rubric of mindfulness research. Some of you might have taken 471 with me last year, where we took a critical examination of the mindfulness construct in psychology. A lot of the experiments are very poorly designed, a lot of the theoretical debate is handled in a very shoddy fashion, and the construct is often muddled and confused. That being said, I think the best research on mindfulness is actually taking place within the department of psychology, in the connections between mindfulness and insight and between mindfulness and self-regulation, rather than within clinical psychology, where it is predominantly being studied. So there you go — just to give you some connections again between this course and what’s going on in mindfulness research. I think that’s going to be more and more the case, by the way, as neuroscience gets more and more involved, as it is, in investigating mindfulness. All right. So one of the issues that was floating around in Kaplan and Simon was this issue of noticing invariants — picking up information from the past. And this has been part of the debate from the beginning. We’ve been talking about it somewhat vaguely: the Gestaltists were responding to Thorndike and others, who were asserting that learning and problem solving were merely extending or repeating the past into the present. And the Gestaltists contrasted that with how sometimes the past is actually what thwarts or blinds you from solving a problem. That’s how we got into the notion of fixation and things like that. And so what started to happen, around the same time — the 80s and the 90s, so we keep moving in and out of this time period — was that people started to investigate more carefully the issue of transfer in problem solving.
And what I want to show you is how that very quickly converged with the work on insight problem solving, and then how that provoked another — I guess it’s fair to say — philosophical reflection and critique by Weisberg. All right. So the best place to start is a classic experiment that went on to provoke a lot of work more directly tailored to insight problem solving. Some of you probably know it. This was the famous experiment by Gick and Holyoak in 1980. This is the same Holyoak who ran the argument about combinatorial explosion in a chess game. Remember that? He also has, as I’ve mentioned before, the most Viking name I’ve ever heard in psychology. So that’s pretty cool. Now, what was impressive about this experiment was how counterintuitive the result was. It’s a really good heuristic in your scientific practice to pay more attention to counterintuitive results rather than simply intuitive results, because counterintuitive results often mean that we have structured the experiment such that it disconfirms some of our theoretical biases, et cetera. And that’s really important. So it’s a heuristic, not an algorithm, but it’s generally a good idea to give more weight to counterintuitive results than to simply confirming results. Now, how is it counterintuitive? Well, it has to do with what seemed like a pretty obvious, and what should be a very powerful, pedagogical type of intervention. So what they did is they used a problem in the second half of the experiment that you’re very familiar with: the Duncker radiation problem. Remember the Duncker radiation problem? Okay. The participants were given a story to read. There are variations on it, but it’s sort of like this: here’s a fortress. And somebody once explained to me that a fortress is more royal than a fort — I’d often wondered what the difference is between a fort and a fortress. So out of blankets and cardboard boxes you can’t make a fortress; you can only make a fort. All right. So here’s the fortress, and there’s a general who wants to attack the fortress. The problem is you can only access it along — I tend to call these causeways — so you can only get to the fortress along the causeways. And the problem is there’s a garrison here, and these guys are like the 300 Spartans: they will just defend and defend and defend. So the problem the general faces is that if you march your army down any one causeway, the garrison will come out and defend that causeway to the death, and hold you off long enough that your army will run out of supply and you will have to retreat. By the way, that’s one of the most important things in military confrontations — trying to make the other guy run out of supply — because morale and fighting effectiveness drop really rapidly when people are hungry. Just in case you’re ever in charge of a large army. Okay. So what does the general do? The general divides his army up into four forces that then converge at the needed point. The idea being that the garrison will come out and block one of the four.
But that means three quarters of his army will get into the fortress uncontested. So the idea here is that the solution to the problem involves breaking up into multiple forces that converge, at the requisite strength, on the point where they’re needed — which, of course, is strongly analogous to what you have to do in the Duncker radiation problem. Okay, so you show people this, you make sure they understand it, then you have an intervening distractor task, and then you give them the Duncker radiation problem. Now, people have been doing the Duncker radiation problem since Duncker, so we have a long history, and we know roughly what the spontaneous solution rate is on average. And what they were expecting was that the spontaneous solution rate for this version of the Duncker radiation problem should be much higher than the historical norm. We should see evidence of significant facilitation from the one to the other. What they found was exactly the opposite. There was very little spontaneous transfer — very little evidence at all that having seen the fortress story was facilitatory for people solving the Duncker radiation problem. And do you see why that’s so counterintuitive? When I present it to you on the board it seems obvious, right? But that’s not what happened. Now, there is some variation, and we’ll talk about that a little bit. For example, if you explicitly tell people to go back and try to use the first problem, that will increase the spontaneous solution rate. But still — what’s going on here? This was so counterintuitive. And it was also very depressing, because this was in 1980, and it was around this time that Kahneman and Tversky and Evans and a whole bunch of people were running all those experiments — and we’ll talk about a bunch of them in the second half of this course, like the Wason Selection Task — all those experiments where people’s reasoning is really shoddy, really crappy, where people seem not to reason well at all. And we’ll talk about that when we do the Wason Selection Task; by the end of the course you will be sick of the Wason Selection Task — that’s the one we’re going to zero in on. And we’ll talk about a lot of them — we talk about more of these in 371 — evidence for belief perseverance, evidence for pseudo-diagnosticity, evidence that people will not look for readily available information. All of this. And then they had this. And it was very depressing. And a lot of people — Stephen Stich and others — were writing papers basically saying: look, we’re getting increasing evidence that human beings are not rational. They’re really not rational; the number of people who are rational is a very limited minority. So we’ll talk about this again when we get into it: the failure rate for the Wason Selection Task is on average 90%. And the people taking the test are you guys — the cream of the crop of our society. Yes? What is the Wason Selection Task? You’re going to have to wait; I’ve got to put some narrative spin on this. We’re going to look at all the responses to the Wason Selection Task. All I’m talking about now is how it was being appraised in the 80s.
This experiment and all of that stuff — the Wason Selection Task and the belief perseverance and the pseudo-diagnosticity and all that — were leading people to the conclusion that the vast majority of even highly intelligent, highly educated people fail to operate consistently in a rational manner. And that was really sad, because they were then arguing that most of our democratic ideals and our moral principles are based on the presumption of rationality. There was serious discussion about whether or not democracy is what we want. Yes, go ahead. These things are mostly tested on undergraduate students. Isn’t there a whole host of confounds, in that people try to give, not necessarily the answer they would actually reason to, but what they presume is the correct answer, based on the school kind of environment? Because if someone tells you a puzzle for fun, you reason very differently about it than you would in a more formalized setting. So what you do is test it in settings where people are given very little information, and test it in settings where people are given very explicit encouragement. For instance, people who have been taught formal logic — that doesn’t seem to improve their ability on these tasks, and things like that. No, I meant that because you’re testing it on undergraduates, and people know that it’s an experiment, isn’t that already biasing people to…? But that’s the problem with all experiments. In all experiments you’re worrying about whether or not people are going to try to please you or displease you, so you do all kinds of things to try and make sure they don’t understand what the actual goal of the experiment is. Which works, generally. But if the goal of the experiment is for people to be thinking creatively, and you’re in a — to think creatively, or to think outside — But you can even reward people. You can even say: be as careful, as logical as you can, and we’ll pay you more if you get it right. And that doesn’t seem to change how they perform. What would be the way to counteract that? Well, what I’m saying is — because I’ve done the opposite. I’ve gone out on the street and done the Wason Selection Task on people, and people on the street do even worse. Okay, because my question was — sometimes, as an undergraduate, you give responses that are not necessarily what you intuitively think is the right answer. So Chris Green and I have done that a couple of times. We’ve set up a booth and gotten people to do the Wason Selection Task, among other things. And they don’t do that? They do worse. The problem is you can’t get that published, because it’s just an uncontrolled experiment. But yeah — I did that when we were in graduate school, and no, people on the street aren’t doing anything better. Now, there’s a deeper point, though — and I don’t want to get into this too much because we’ll talk about it with Gerd Gigerenzer and other people — which is the idea that in natural settings people are performing in a way that actually makes sense, and when you take that into the lab it looks like an error, but it wouldn’t be an error if you put it back in the natural setting. It has something to do — just to foreshadow — with the idea that lab problems are often very well-defined, and real-world problems are very — guess what? — ill-defined, and the strategies that people are using in the real world are probably good for that, but not good in the lab.
We’ll talk about Gigerenzer here and we’ll take a look. So I think, Chloe, that’s the best formulation of your criticism, and we’ll take a look at that — and I think there’s something to it. And for reasons I don’t understand, he really likes me. He sends me all of his publications for free. He just started doing it; I don’t know why. I just get mail from Gerd Gigerenzer: here, these are all the things I’ve published, have free copies. I say thank you. I’ve never met him. It’s really weird. Maybe he has spies: hey, this guy actually lectures on your material — okay, send him everything I’ve done. Maybe he reads your papers. Pardon me? Maybe he reads your papers. Maybe, but I don’t specifically mention his work. Did I? It’s hard to remember these things. Your papers are like your children; you can’t remember everything. Okay, so. All right. So part of the history of this experiment was the whole rationality debate, which we talk a lot about in 371, and we’ll talk a little bit about it when we study inference in this class. But there’s another line of history that came out of this, which I foreshadowed a minute ago: this became a branch specifically connected to insight, which then converged with the other experiments going on in the 80s and the 90s on insight. Okay? So there was some related initial work by Weisberg, DiCamillo, and Phillips in ’78 about the candle box problem and what did and didn’t facilitate solution. I’m not going to go into that; it’s just that there was very similar stuff happening around the same time. Just take it that, because of what had happened just before this experiment, people really found this like: wow, this really needs explanation. Because this was very counterintuitive. Now, Gick herself — this is she, right? I believe, yes — was involved in some of the initial response. One of the most important pieces was the work by Lockhart, Lamon, and Gick in 1988. Keep track of these names. And what they argued is that a theoretical, conceptual change was needed. And that, of course, is the proper response when you get a powerful counterintuitive result: your conceptual, theoretical formulation of your experiment was in some sense wrong, so a conceptual reformulation is needed. So they did that, and they went to a very good source — as you should, whenever you want to go back and rethink your conceptual foundations — they went back to William James. I like William James because, as well as being a psychologist, I’m a cognitive scientist, and I think of William James as one of the first cognitive scientists, because he was both a very important psychologist and a very important philosopher. So he’s a good place to go back to if you want to reflect on your psychological constructs and theorizing, because he has the philosophical training that affords doing that. Now, the problem with doing that, though, is you have to take into account that his language is the language of the 19th century, which is an incredibly flowery language. It’s like, oh my gosh. This became very vivid to me — I don’t know if any of you have seen it.
It’s considered probably one of the best documentaries ever made: Ken Burns’ The Civil War. Have any of you seen it? The thing you don’t realize is that every documentary since then uses the format invented by Ken Burns in The Civil War. So all the documentaries you see largely use his style — it’s that influential. What was impressive to me — and this was one of his innovations — is that he had dedicated voices: they would dedicate an actor to a particular person from the past, reading letters from the soldiers in the American Civil War. I couldn’t believe the caliber of the letters that these privates were writing. The language was complex and sophisticated and extremely flowery. I’m thinking: geez, I couldn’t write that. I have four degrees and I went to university for 17 years, and this guy probably went to a public school in Tennessee, and he’s writing like that. It was very humbling. So you have to take that into account: part of what’s going on there is that the language is very overblown too. Anyways, James made the distinction between sagacity and learning. Okay, now, technically, in terms of the etymology, sagacity is the property that makes one a sage, where being a sage is to be an extremely wise or enlightened individual. So chances are you’ve never met anybody with sagacity, and therefore it would be a fairly useless psychological construct. But what James was talking about is the skills of appropriate conceptualization. He’s referring to what we would now call the cognitive operations people use. And in that sense he’s talking about a procedural thing. This is in contrast to what he calls learning. Learning is the possession of the relevant knowledge — the justified true beliefs — and the clarity of conceptual content. It’s the content. So to use, I think, slightly more accurate terms: he’s talking about cognitive procedures versus cognitive propositions — cognitively held propositions, as opposed to propositions just written on a page or something like that. Okay. Now, why make that distinction? Well, let’s go back to the two things that were used in the Gick and Holyoak experiment. You have the general attacking the fortress and you have the Duncker radiation problem. Now, in terms of their propositional content, they are very similar. There’s a strong analogy between the propositional content of the two problems. But procedurally they’re very different. In the first one, your primary cognitive operation is to comprehend an existing solution. That’s what you have to do: comprehend the story of the existing solution. In the second, what you have to do is solve a problem. So although the two problems are propositionally very similar — if you’ll allow me to use that as a term for a whole set of phenomena we’re talking about in an integrated fashion — they’re procedurally very different. Yes? How are they propositionally similar? Because you can use the same propositions to describe what you have to do: divide up your force into smaller forces that converge at the point where they’re needed. But think about that — we already know that saying to people “think outside the box” doesn’t work. So what Lockhart — you’ve heard of this name, right, Bob Lockhart? Yes or no? Levels of processing theory? Memory?
I got to work with Bob Lockhart for one year when I taught 250, way back in 1993-94. Nice guy. Harry Potter. Pardon me? Harry Potter. What? He’s the professor in Harry Potter, in the second part. Oh, right, yeah — it’s not the same guy. The, you know, one of the useless ones. Yeah, and he’s not a muggle. Okay. So what they reasoned is — well, maybe what’s going on — and think about how this is relevant to what Dominowski said, to what we said about Maier, and to the responses to Weisberg and Alba — is that what’s needed for transfer to occur is much more procedural similarity rather than propositional similarity. Now, this is going to turn out to have been — I don’t think he realized it initially at the time — a particularly honest and brave thing for Bob Lockhart to do, because this is going to line up with transfer-appropriate processing, which was the theory of memory transfer that actually brought down his own theory of levels of processing. So, very honest — I really admire that approach. If you don’t understand what I just said, hang on; it’ll make more sense in a few minutes. Okay? So the idea is that what’s needed here might not be propositional similarity but procedural similarity. Now, if that’s the case, how do we test it? They then link this to the idea that insight problem solving involves much more procedural processing than propositional processing. So you put the two together, and they ran an experiment around a problem they call the multiple marriage problem. The multiple marriage problem is designed to really try to make sure that the issue is not any kind of lack of relevant knowledge. So let’s first take a look at the problem — I’ll just briefly make sure you understand it — then we’ll take a break, and we’ll come back to how it was used in all these experiments. Is that okay? Okay, so this is literally from the experiment: A man who lived in a small town married 20 different women of the same town. All are still living, and he never divorced any of them, yet he broke no law. Can you explain it? So, things you might think of: this is in Utah — that’s irrelevant, by the way; it’s illegal in Utah too — and alien abduction and time travel are not allowed answers. Is it that he’s like a priest or something, who marries people? Yes. Really? Yeah. Oh my gosh. Why is she so happy, and why did she go “oh”? Because it seems to be like — oh, okay. Now, I hope this is not a revelation to any of you. You knew that clergymen can marry people, right? In fact, it’s kind of a prototypical thing we associate nowadays with clergymen, or clergypeople. What was actually going on is that you have to shift between two different meanings of “marry.” You have to shift from the dominant, and therefore initially salient, meaning to a legitimate but subdominant, and therefore not initially salient, meaning. So what you have to do is shift your attention and reconfigure what is salient to you. Those are the cognitive operations that are involved in the multiple marriage problem. Okay. Now, that’s pretty cool, because it’s such a simple problem, and it’s not like the nine dot problem, where there’s a real good chance that people won’t solve it, which is problematic.
Given enough time, almost everybody solves the multiple marriage problem within an experimental context. So these are all good design features. So we’ll come back at 2:45, and we’ll talk about how they used the multiple marriage problem to make use of this distinction — which we’ve now updated out of James’ overblown language — to respond to what happened with the Duncker radiation problem. Okay. So the idea is that the multiple marriage problem places quite a bit of demand on your sagacity, to use James’ term, but very little demand on your knowledge. That means it’s fairly safe to assume that failure to solve such a problem is not due to a lack of knowledge. And so what Lockhart, Lamon, and Gick did was to set up different initial tasks before people were given this as the insight problem. The experimental structure is going to be the same: you give people some task, then a distractor, and then the insight problem — but instead of the Duncker radiation problem, it’s going to be the multiple marriage problem. And there’s going to be a difference this time. So, first stage; distractor; and then here you’re getting the multiple marriage problem. In the first stage, you divide the participants into two groups. One group reads through a bunch of sentences. One of the sentences is the target sentence; all the other sentences are just there to mask what the critical task is. The target sentence is like this: “It made the clergyman happy to marry several people each week.” People would literally read that sentence. They called that the declarative form. And then, in contrast, they had what they called the puzzle form. And this was done on a really basic display — the computers of this time were not powerful machines; you had to feed the squirrel regularly. The puzzle form was: “The man married several people each week because it made him happy.” Then there’s a pause, and a few seconds later the word “clergyman” appears on the screen. “The man married several people each week because it made him happy,” followed a few seconds later by the word “clergyman.” Again, this would be amidst a bunch of other sentences. Then you give both groups the multiple marriage problem unexpectedly, and you look to see whether there’s facilitation in one or the other. Before I go on to the results, I want to point something out, because it’s going to go back to when I talked about verbal overshadowing. Note that both of these conditions are verbal. Why do I say that? Because what happened was this: people who had the declarative form showed no facilitation on the insight problem, while people who had the puzzle form showed significant facilitation. Even though the content is almost exactly the same — and presumably you already possess the content, because you already know that clergymen marry people — the one facilitated, and the other didn’t. Now, there are two important things here. This tends to provide evidence — and we’re going to see how this gets replicated in multiple other experiments, so it’s not just this single experiment — this adds convergent evidence, on top of what we’re already seeing, that insight is perhaps more procedural than propositional in nature. And it also, as I said, means that language per se is not the issue. It is the kind of processing that is being triggered by the language that is relevant.
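Here is a sketch of the two training conditions as described above — the sentences are the ones quoted in the lecture, but the timing value is illustrative, since the actual interval from the 1988 paper isn’t given here.

```python
import time

def present_declarative_form():
    # Whole disambiguated sentence at once: the task is just comprehension.
    print("It made the clergyman happy to marry several people each week.")

def present_puzzle_form():
    # Anomalous sentence first, so the reader commits to the dominant
    # meaning of "marry"...
    print("The man married several people each week because it made him happy.")
    time.sleep(2)  # illustrative delay; the published interval may differ
    # ...then the disambiguating word forces a shift to the subdominant
    # meaning: the same operation the insight problem requires.
    print("clergyman")

present_declarative_form()
present_puzzle_form()
```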
Look — if you think verbal overshadowing is what drives or impairs insight, you’re in a lot of trouble here, because both of these conditions are verbal, and one clearly facilitates insight. Yes? Why does the second one trigger the procedural? Because — what’s the difference between the first one and the second one? In the first one, what’s your task? What’s the procedure, the cognitive operation? You have to comprehend an existing sentence. What do you have to do in the second? You have to actually perform the cognitive operation of shifting from the dominant to the subdominant meaning. So the idea is: both of these are conceptually, or propositionally, similar to the insight problem, but only the puzzle form is procedurally similar — because in the puzzle form you’re doing the same cognitive operation as in the insight problem. Did that answer your question? Yes. Sorry — I apologize; I should have made that point first. It’s a little confusing the way I did it. So that’s the key thing. But as I’m pointing out, this is clear evidence that there’s something wrong with the notion of verbal overshadowing, because it is not the use of language per se that’s the issue. I’m going to add other things to this; I’m just asking you to note it. This is not something that Lockhart, Lamon, and Gick were talking about — how could they? The Schooler experiment is four or five years down the road. I’m going to show you other instances where it is clearly the case that language use per se is not the issue. All right. So that was really interesting. And then Lockhart and Lamon go on to talk about memory indexing and other things like that; we’re going to come back to that when we talk about ideas about incubation. So I’m going to put that aside for now. Instead, what I want to do is follow up on something that is relatively rare in psychology, compared to how much we talk about how important it is: replication. Okay, so: Adams et al., same year, 1988. People were asked to rate the truth of statements presented in problem form or fact form. So they had to read through a bunch of statements and evaluate whether the sentences were true or false. The problem form is: “You can marry several people each week if you are a minister.” The delay is literally just one clause away, but you have to go — what? — oh, right. The fact form is: “A minister marries several people each week.” People read all these sentences, did a distractor task, and then were unexpectedly given the multiple marriage problem. And, as I’ve already given away because I said it’s a replication: what they found was that people who were given the problem form showed facilitation on the multiple marriage problem, and people who were shown the fact form did not. Now, how is this relevant to that issue around Bob Lockhart’s honesty, or at least his integrity as a scientist? Well, now a digression, because we have to talk about transfer within memory. Okay. So Craik and Lockhart proposed — and it’s probably still in your textbooks, probably still taught — how many of you have heard of levels of processing for memory? I’m interested — sorry, I don’t mean to be creepy or anything — but how was it taught to you? Like, this is how memory works? How was levels of processing taught?
A lot of you put up your hands; somebody has to have an answer to this. Yes? It’s generally taught as one of the first models of memory processing that you learn, and then people go: oh, but there are these other models and findings that show that levels of processing isn’t an adequate model. Good. So that’s how it’s done now? That’s good, because that’s taken quite a while to set in. So, levels of processing was the idea that the more deeply — or more semantically, and whether or not those are the same is part of the theoretical issue; we’re going to put that aside — the more deeply you process something, the more likely you are to remember it, independent of your intention to remember it. There was lots of evidence that the depth of processing was predictive of how well you remembered something. Bob Lockhart was one of the creators of that theory, and some of the original experiments for levels of processing were designed by him. Now, as was mentioned — what’s your name again? Armin? Youna? I’m getting it wrong and I don’t want to. Youna? There’s no N? What? Oh, I failed. Part of it is I’m deaf in one ear, so it tends to skew how I hear things. All right. I’m sorry; I apologize. What she indicated is what happened: the theory got overturned, or at least significantly challenged, and what came after it, which has been more long-standing, is the idea of transfer-appropriate processing, as opposed to levels of processing. Levels of processing said that the depth of the processing always predicts better memory. Transfer-appropriate processing is the idea that — maybe put it this way — the more the manner of processing in the learning context is similar to the manner of processing in the testing context, the more likely you are to remember the material. And that has important study tips in it, too. You should study in a way that is — and listen to my language — as procedurally similar as possible to how you’re going to be tested. So, for example, how are you going to be tested on the midterm? You’re going to be asked to write short answers. You know how you should practice the material? Try to generate as many short-answer questions for yourself as you can, and practice writing answers to them. That is the best way to learn the material, because you’re studying in a way that’s very procedurally similar to how you will have to produce your answers. Now — and the research on transfer-appropriate processing is very robust — part of the issue here is the problem facing your brain. This has to do with something some of you know; I talk about it in other courses: memory is not largely about storing information and reproducing it accurately. Memory is about trying to set things up so that you can adaptively anticipate the future. Memory is reconstructive rather than reproductive in nature. Memory is not primarily about accurate reporting on the past; it’s about intelligent prediction of the future. This isn’t a course on memory, but many of you who have done the memory courses probably know this: confident eyewitness testimony is wrong about 49% of the time, things like that, because memory is not built for that.
Now, part of the thing to note is the difference between a lab or academic setting, where you know what the future test is going to be — so you can very sharply tailor the processing here to match what’s going on there — and the much more difficult problem your brain normally faces: how should I process this now so that it will be relevant to a future I’m not quite sure of? Now, how is all of this relevant? Because as this was happening in memory research, this was happening in the insight literature, and people were realizing: oh, what’s going on is something like this — insight problems have to be processed in a certain way, and so the information has to be presented in a transfer-appropriate manner. So the notion of procedural similarity that we’re seeing in insight is convergent with the growing evidence for transfer-appropriate processing within memory in general. So not only is this work — Lockhart et al. and Adams et al., and we’ll come back to Needham and Begg and to Gick and McGarry, more replication — not only is this work on insight convergent with Metcalfe’s work, Schooler and Melcher’s work, and Baker-Sennett and Ceci’s work on the procedural nature of insight; it’s also convergent with independent work on the nature of memory function. Now, I think this is a better way of talking about what Wertheimer was trying to talk about when he was trying to talk about transfer. I don’t think he was quite right — maybe he was talking about the right structural organization and the right fit; he was struggling. I think what he was trying to say is that you want procedural similarity, within transfer-appropriate processing, in order to facilitate insight problem solving. But of course it’s unfair, in one sense, to criticize him about that, because it’s anachronistic: none of this was known at the time, so he didn’t have these independently established theoretical constructs to make use of. We do. Okay, now, more replication: Needham and Begg, from 1991. They did another experiment, except this time they kept the content completely constant. They did not change the content between the two participant groups at all; the content was held constant. What was changed was just the form of processing. And they broadened this beyond the multiple marriage problem, which was also important — they used a bunch of different problems. In it, participants read training stories that were analogous to target problems that they were going to receive. There were two different orientations. One group were memory-oriented processors: they were asked to study the story so they could remember it. And given that participants are almost always students, this is a very ecologically valid — or at least externally valid — thing to do. The other group were problem-oriented processors: they were asked to try to explain why each solution in the story was correct. Both groups were then given correct explanations for the solutions in the story. Okay. So you read this story about people solving the problem.
You either read it to remember it, or you read it to explain how the solution works. Both groups are then given the correct explanation, so it’s the case that both groups know why the solution was correct. Is that okay so far? Do you understand the difference? The only difference between the two groups is how they initially processed the information. One group: read the story and remember it. The other group: read the story and explain to me why it works. Both groups: here’s why it worked. Okay. Now, then you give people the problems to solve, unexpectedly. And, well, what do you predict? Anybody? Yes? People who were in the problem orientation had facilitation. Yeah — much more facilitation than people in the memory orientation. And if you then switch the task and say, tell me more details, more content, from the original story, the memory-oriented people do better. Now, just to make a connection for you: do you see why I test you by getting you to give explanations on things? Rather than getting you to remember stuff presented in slides — and this is why I use the blackboard, because it’s a more puzzle-form presentation. What some of you might want is: no, give me lots of slides with all the answers. Why? That’s detrimental to you becoming a good problem solver. It should be presented this way, with a heavy emphasis on argumentation and explanation, and I should periodically present you with puzzles, questions — why is it going this way? — and with debates, challenges. Right? So you should think about this in terms of how you’re studying, how you’re processing information. What do you want from your education and your studying? Do you want to be a better problem solver in the future, or do you want to remember this material? They’re not completely in opposition, but which one do you want to emphasize? I recommend the first rather than the second, because a lot of this material is going to be obsolete and forgettable in ten years. Okay, so: another growing, convergent argument about what kind of thing is going on in insight problem solving. Then this all came together in another counterintuitive experiment — again, something we should pay more attention to. Is this going okay for you guys — the arguments, step by step, the experiments? This will lead into — because we’re now talking about the connection between memory and insight, even though we’ve seen that a standard memory search is not what’s going on in insight — the issue that’s very prevalent in the media and very popular in culture about using incubation to improve your insight: leave the problem alone and sleep on it, and your genius unconscious will come up with an amazing answer, which will hit you like a shaft of light accompanied by choirs of angels in the morning. Okay, now here’s the startling thing. Most of the evidence shows that incubation, at least the standard romantic version of it, does not exist. Even though it’s very popularly portrayed in the media, it doesn’t exist. Almost all of the claims of the romantics turn out not to be well confirmed. But before we do that, let’s do this counterintuitive thing about memory and problem solving. So: Gick and McGarry — it’s the same Gick again. Her work is consistently good — really, just constantly good work. It should be more famous.
Now this is going to take a little bit of reasoning, and because it's counterintuitive, you're going to experience it as counterintuitive. Okay, so let's just try to do it at the level of intuition and get it clear before we go back to the actual design. Okay, so you're trying to solve a problem that needs insight, which means you're currently not solving it. Is that okay? That's what it is to have a problem like this: oh, how do I do the nine dots? Okay, now think about this carefully. Not in terms of content, but in terms of procedural similarity: what problems in your memory are most procedurally similar to this one? Other problems that you hit an impasse on. So other impasse problems are more procedurally similar. Yes? Does that depend on the type of problem? Pardon me? So for example, the nine dot problem versus the multiple marriage problem. It could be, it could be, and this is the degree to which issues like cognitive flexibility, which I'm talking about more and more, I hope more and more clearly as we go on, have domain-general aspects and domain-specific aspects. By the way, that's a really good topic, and I don't want 60 versions of it as an essay on insight problem solving: the degree to which we can talk about domain-general aspects of insight versus domain-specific ones. Okay? They're clearly talking about it in this experiment in a domain-general way. Whether or not this experiment licenses conclusions about domain generality, that's another issue. Okay. So what Gick and McGarry argue is that what would actually facilitate you solving this problem is to have a history of failures. Which sounds counterintuitive. It's like: but those were failures. Why would I pay attention to my past failures? How could they possibly help me now? What I should do is think of my past successes. We don't like to look back at our past failures. But think about it. If I have all of these failures, what can I apply to them? Yeah? The noticing invariance heuristic. The idea is you can apply the noticing invariance heuristic, and that's what you actually need in order to facilitate solving that problem. So this is very counterintuitive, because what we tend to do is look back at our previous problem solutions. Right? The difficulty with that is that they tend to be similar in the wrong place: they tend to be similar in being solutions, and not similar in giving us a basis for how to solve the currently unsolved problem. So, a very counterintuitive proposal. So how did they do this? How would they induce failures in people? And then how did they test it? Well, what they did was variations on the mutilated chessboard. Now, remember I said that you can vary how difficult the problem is by varying how salient you make the parity cue. So they did analogous problems like that, procedurally analogous. And what they did was give people a bunch of problems that were difficult, that were not solvable. They forced people to fail, well, most people, because some people are not going to fail, but they created a lot of failures, what they called source solution failures. And what they found is: if I give you more source solution failures, you're more likely to solve the mutilated chessboard than if you have fewer source solution failures.
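Here is a toy sketch of what the noticing invariance heuristic could look like computationally, under my own simplifying assumption (not Gick and McGarry's) that failed attempts can be encoded as sets of assumption/value pairs; the nine-dot features below are hypothetical. The idea: find what every failed attempt held constant, and treat that invariant as the constraint to vary.

```python
# Toy illustration: notice the invariant across failed attempts,
# then relax it. Attempt encodings are hypothetical.

def invariants(failed_attempts: list[dict]) -> dict:
    """Return the assumption/value pairs shared by every failed attempt."""
    shared = dict(failed_attempts[0])
    for attempt in failed_attempts[1:]:
        shared = {k: v for k, v in shared.items() if attempt.get(k) == v}
    return shared

# Three source solution failures on nine-dot-like attempts: each varies
# the starting point, but all keep the lines inside the square of dots.
failures = [
    {"stay_inside_square": True, "start_corner": "top_left"},
    {"stay_inside_square": True, "start_corner": "bottom_right"},
    {"stay_inside_square": True, "start_corner": "center"},
]

print(invariants(failures))  # {'stay_inside_square': True} -> relax this
```

One way to read the finding: with more failures, spurious invariants get pruned away, so the heuristic has more variance against which to notice the genuine invariance.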
The point being: if you have more failures, you can apply the noticing invariance heuristic better than if you have fewer. So more source solution failures facilitate the insight, the spontaneous insight, for solving the mutilated chessboard. Now that sounds really counterintuitive, right? It's like: no, it should be my successes that matter. But it has an analogue in developmental psychology. Piaget was the first person to wonder whether or not the errors on intelligence tests were systematic. Some of you are taking 312 with me. And that's the point: the systematicity of the errors points to important constraints on cognition. The same idea here: if there's systematicity in the errors, if you have more source solution errors, then the noticing invariance heuristic can operate, and that facilitates insight. Now this has a kind of weird possible consequence, which is something that has often been very difficult to explain. And so take what I'm saying with a grain of salt, more than a grain of salt, several grains of salt, a lump of salt. But it might be that this is part of a connection that comes up in many traditions cross-culturally, and, like I say, we're starting to really sit on the boundary between insight problem solving and learning. I could go back to Gick and Lockhart, reviewing some of their own work and the work of others in 1995. They talk about how insight relates to automaticity. This is a quote from them. Insight is, quote, "the sense of having escaped the tyranny of automaticity," which is just a beautiful sentence. I would like to have that on my tombstone. John Vervaeke: he escaped the tyranny of automaticity. Sorry, was this Gick and Lockhart? This was Gick and Lockhart in 1995. This isn't, Michelle, this isn't an experiment. This is them writing a review article in 1995 about all the insight stuff, including their own earlier work in the 80s. So they're summarizing a lot of this work, their own work and others', and insight seems to involve having escaped the tyranny of automaticity. I want you to remember this notion of automaticity for two important reasons. First, automaticity is a way of talking procedurally about what a constraint is, as opposed to propositionally, in terms of a rule. We'll come back to that. Just note it down, please, because it will make the connection backward: there's a connection between automaticity and a procedural understanding of what a constraint is, rather than a propositional understanding. Secondly, and we'll come back to this as well, but I try to foreshadow these connections so you can regather them later: a lot of what's going on recently, because of the work of Kang and others in mindfulness, though this goes back to an old proposal by Deikman, is that mindfulness training is basically training in the de-automatization of cognition. The de-automatization of cognition: mindfulness training. Yes? So, automaticity is procedural, but isn't procedural how you get around insight problems? Sure. So automaticity is helpful to insight? No, no, no, no. The fact that insight is procedural and automaticity is, sorry, is also procedural just means that they both belong to the class of things that are procedural in nature. So what Lockhart is saying is that insight is something procedural that helps you to counteract the procedural constraints of automaticity.
We’re going to need this when we talk about both mindfulness as de-automatization and when we talk about what is meant by constraint relaxation. Constraint relaxation. A positive way of understanding what, what I’m trying to foreshadow is a positive way of understanding what constraint relaxation is, is to understand it as the de-automatization of your process. All right. Now, I haven’t justified any of those claims. I’m just foreshadowing this connection here. There’s a connection made between automaticity and insight by Gick and Lockhart that was going to be prescient about what we’re going to talk about later in mindfulness and when we talk about constraint relaxation and insight problem solving, the work of Nodwick and others. Okay. So, this idea now about facilitating transfer is now crucial and now we’re going to take another line of evidence. We’re going to take a look at the independent work done on incubation because that seems to be a memory facilitation of transfer or important insight thing, incubation. As I foreshadow. What I’m trying to do is show you that in, that many different lines of theorizing and experimental evidence are all converging on this idea about what kind of processing insight is because what I’m trying to do is do what I’m doing to make sure that the idea is not just a theory, an experimental evidence are all converging on this idea about what kind of processing insight is. Because what I’m trying to do is do what the gestaltists have not done very well, which is try to build up a base of constructs that are independently established that will allow us to come up with an alternative explanation for insight that I think transcends both the gestaltists and the computational framework. But we’re only about halfway there. Alright. So, incubation. So, just again, in the popular notion, this is like sleep on it or stop thinking about it. And, you know, there’s psychoanalytic mythology around this that your unconscious will work on the problem and solve it and do things like that for you. And there’s stories, to be fair, about people in dreams having important insights to solve problems that they couldn’t solve. One of the topics, I don’t want 60 papers on this that people write on, is the connection between dreaming and insight. Especially REM dreaming, because non-REM dreams tend to be really boring. They’re like you’re eating soup and stuff. Right? What were you doing? I was eating soup. Okay, go back to sleep. REM dreaming. I was writing a book. Okay, so there’s connections between that. There’s also been some recent research on connections between lucid dreaming and insight. Although this is interesting because it does like credence to certain traditional claims about certain kinds of induced altered states of consciousness, as particularly in shamanic traditions, being the kinds of things that it can be. And there’s now research coming to show that that’s plausibly true. At least good evidence for it. Alright. So incubation sits within that whole sort of idea. And it goes back to the idea of a deep sleep, which is that you’re not going to have a dream. You’re going to have to have a dream. of idea. And it goes back to a book from 1926. It’s called, again, remember how as you go back farther in time to the titles, we have four more grandiose. This is the title of the book by Wallace, W-A-L-L-I-S, Wallace. This whole idea was introduced. The title of the book is The Art of Thought. Just thought. Boom. William Mantua’s How to Improve Your Problem Solving. 
And he argued that there were four phases in innovative problem solving and creativity, and we'll talk about the fact that those often overlap with insight problem solving. Here are the four phases. Mental preparation: you're working on a problem and you don't solve it; you keep working on it and working on it, unsuccessfully. Then incubation: putting the problem aside and working on other things. I like the name of the next phase: illumination. A flash of insight unexpectedly occurring during the incubation phase. It sounds religious, right? Illumination. It's like, ah. Oh, right. And then the last phase is verification: working out the details of the solution and/or determining whether the solution actually applies to the problem. So that's the typical picture. This is an old, old idea, and it keeps getting trotted out as the solution to making you better at problem solving. Yes? I'm sorry, what year was that again? 1926. During the roaring twenties. And do you think that's conflated with the idea that if you stop trying to remember something, it will come back to you later? Because that's sort of different, because that seems to be true: you stop thinking about it for a second and eventually your brain retrieves the fact. What you just said was great. We're going to come back to this. So what you're saying is, maybe it's not incubation, or the unconscious doing its magical mystery work; maybe what it is is that you forget certain things, and when you come back, that just means you've reformulated the problem. The selective forgetting hypothesis. We're going to take a look at this. One thing you could do, just to foreshadow, and we're going to do this more rigorously by looking at the experiments in detail, but this is an answer to Emma: if incubation is right, then time away should matter. If what matters is just a restructuring of attention, then very brief interventions that cause you to restructure would be better than lots of time away. It turns out that that's actually the case. So, good idea, we'll come back to it. Now, there are sort of three views. We're going to take a look at the work by Seifert et al., 1995, on the connections between insight and incubation. None of these are going to be too startling to you, but they present three views as opposed to two. And they do that in an interesting, if slightly manipulative, rhetorical way that goes back to Aristotle; most of the interesting, but slightly manipulative, rhetorical strategies were pioneered by Aristotle. So there are three perspectives on insight, and therefore on incubation. They call the first one the business-as-usual perspective, which we've already talked about. The people who typify that are, of course, Weisberg and Alba. And even, although they make changes, Kaplan and Simon are trying to say it's business as usual: there's nothing importantly different going on here. The second perspective, and this is the name they give it, I don't know how they came up with it: they call it the wizard Merlin perspective. The wizard Merlin perspective is: insight occurs; its results are awesomely spectacular; and the results are produced by sort of superhuman and strange mental powers that operate in an unpredictable fashion. This is romanticism: you have god-like powers within you, if only you could tap them. Do you realize you only use 10% of your brain?
If you could just use all of it, you would be a god, telekinetically flying through a telepathic heaven. The fact is, you use most of your brain most of the time for most of your tasks. The one time in which you're using 100% of your brain in a completely integrated fashion is during an epileptic seizure, and it completely breaks your brain down. So Limitless and all that stuff is really kind of crap. In fact, higher IQs are predictive of lower cortical activation, not more cortical activation. But nevertheless, we still like romanticism, for reasons that are obscure to me. OK. So some people talk about this. A person that's often trotted out as an example of this is Richard Feynman. Richard Feynman had a reputation for being able to do this. People would present problems to Feynman, and he had this sort of drama that he would go through. He'd get almost pained, and he would sort of, ugh. And then he'd blurt out something, and people would go, oh my god, that's right. And then they would reformulate their problem. And they would go: but how does he do this? And I don't want to take anything away from Feynman, I'm sure some of it is the case, but after he died, they went through his desk and his notebooks. He was working on these problems a lot. So some of the magic and the mystery was that he was a Madonna of the academic world: he was really good at marketing himself, presenting himself as having a talent that he might not actually have had. Because Madonna had no talent; well, her talent was to present herself in such a way that people would stick around and talk, which is why her material is already disappearing from the common repertoire, really rapidly. But Feynman also shouldn't be totally put aside, because he at least foresaw the kinds of problems that were going to require insight, and he did a lot of work on them in advance so that he would actually get to the insight stage. So when people asked him, he was ready, and he had the insight ready to go. So that's kind of dodgy on his part. But on the other hand, it does mean that something was going on there that was interesting. So the wizard Merlin perspective, the romantic perspective on insight, is the opposite of business as usual. Seifert et al. present their view, which they call the prepared mind perspective, squarely between these two extremes, which leads you to believe: oh, well then, their position is obviously the more reasonable and therefore the most plausible. As I said, this is a strategy you should always pay attention to when people use it rhetorically. It goes back to Aristotle: you present the two extremes, and then you present your position as the golden mean between them. Take what they say with a grain of salt. I don't really know of anybody in cognitive psychology who holds the wizard Merlin perspective. There seems to be a suggestion that perhaps the Gestaltists were saying that. It's not clear that they really had the wizard Merlin perspective, because they thought Sultan the chimp was capable of insight, and Sultan the chimp is certainly not a superhuman. So I'm not sure who, in the history of psychology, actually held the wizard Merlin perspective. It seems to be a bit of a, what's the opposite of a straw man, an iron man or something, that they rejected, that nobody actually took.
Alright, so their prepared mind perspective is going to come out in a particular thesis: the opportunistic assimilation hypothesis. The opportunistic assimilation hypothesis, which sounds like something from an alien abduction movie. However, for all of my criticisms of the rhetoric, there is an important theoretical innovation that does come out in their work. And in order to make that theoretical innovation clear, we have to talk about what the different theories of incubation were, at least in 1995. Let me set out the original work, and then we'll go forward from that. Sorry, could you just say that again? What is the opportunistic...? Assimilation hypothesis. Thank you, Michelle, for asking again. Okay. So what Seifert et al. do is, first of all, they zero in on why transfer often fails. Why do people often fail to exploit relevant past information? Why do they fail to transfer it into the future? And how could incubation potentially help them? Now, interestingly enough, following up on Wertheimer, but getting a little bit more specific, Seifert et al. understand fixation as transfer failure. Fixation as transfer failure. And if we understand the causes of transfer failure, we may therefore better understand fixation. Now, that's going to turn out to be really interesting when we talk about different theories of what's going on when people are at an impasse. Now, what are some of the standard theories, as I mentioned, for incubation? The standard theories that Seifert et al. consider, theories about what's going on in incubation, are these. The first one is called the conscious work hypothesis, which is a polite way of saying people are lying. The conscious work hypothesis says that when people claim that they're not working on the problem, they are in fact working on the problem, which is just a polite way of saying people are lying to you. The second hypothesis is called the fatigue dissipation hypothesis: all that happens in incubation is that people are tired, that's why they can't solve the problem; they get rest, and because they get rest, they come back and they're able to solve the problem. That's all that's going on. Third is the selective forgetting hypothesis: the idea that you forget some of the information in the problem, that causes a reformulation of the problem, and this is what brings about the insight. And fourth, which is the one that is most prevalent in the self-help industry, is the subconscious random recombination idea: what the subconscious does is randomly recombine things, and this is what is actually going on in incubation so as to produce the solution. Yes? It's very Freudian. The idea is that what's going on in your unconscious works according to primary processes, which are largely associational and semi-random in nature, and these recombine and manipulate things, and that's what produces the insight. Random in what sense? Yeah, I don't think they mean random in the way that you want it; because I know you, I don't think they mean metaphysically random. I think they mean that it's not logically or rationally governed. So it would be closer to what we would mean by non-rational or arbitrary recombination, as I think of it, but that's not the language that's used yet. But that's what they mean.
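Since these four hypotheses are easy to blur together, here is a compact summary as a Python mapping; the one-line glosses are my paraphrases of the lecture's descriptions, not Seifert et al.'s wording.

```python
# The four standard incubation hypotheses, each paired with its core claim.
incubation_hypotheses = {
    "conscious_work": "people claim to be away from the problem but are "
                      "covertly still working on it",
    "fatigue_dissipation": "rest alone explains the benefit, so sheer time "
                           "away should be what matters",
    "selective_forgetting": "forgetting parts of the problem passively forces "
                            "a reformulation, which produces the insight",
    "subconscious_recombination": "an active, non-rational recombination "
                                  "process constructs a new formulation",
}

for name, claim in incubation_hypotheses.items():
    print(f"{name}: {claim}")
```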
So, I mean, remember, when Wallas writes his book in '26, Freud is really big in America. Freud is considered the Newton of the mind. We don't think of Freud that way now. Sorry, just a quick question: is the selective forgetting hypothesis just forgetting the false solutions? It's forgetting aspects of the problem, and because that causes you to restructure how you formulate the problem, that restructuring is what facilitates your insight. Okay. Yes. So how is selective forgetting different from recombination, that being the last one? The last one is that the recombination leads to a reformulation of the problem. So the difference between the fourth and the third is that the fourth is an active, constructive process, rather than just a passive thing that happens. Okay. So Seifert et al. point out, and here is somebody who is invoked and is going to come back again when we talk about incubation, Smith, that as of 1995, all of the attempts to find experimental confirmation for incubation came to this. This is what Smith says in 1995: a scarcity of replicable incubation effects. So although this is widely believed, the attempt to find good experimental confirmation for incubation has largely failed. Please remember this name, because we're going to look at a later review by Smith on incubation. It's important you know this, because Smith knows what he's talking about when he's talking about incubation; he did a lot of the review work. Now, what you have is this weird inconsistency. You get some results, but they don't get replicated, and you have this massive popular belief in incubation. And part of you should say: so what? We have massive popular beliefs in a lot of things that turn out not to be true. Right? Because that's why we do science; we don't rely just on massive popular belief. Massive popular belief has gotten us into trouble on a lot of issues. Fair enough, right? But there's something a little bit more. Notice Smith's language. It's important language, and what he's saying is important. He said a scarcity of what kind of results? Replicable. So what you get with incubation is something that looks like confirmation here, and then when people try to replicate it, it's not there. And then people get it there, and they try to replicate it, and it's not there. Now, again, one response to that is: well, this is the same thing as with psychic phenomena. We can never get replication going, and we're generally coming to the conclusion, again against popular belief, that there probably isn't anything like psychic phenomena. Right? But, Seifert et al. say, there's another alternative: when you're getting inconsistent empirical results, it can sometimes be good evidence that your theoretical construct is muddled or confused, or that it's missing an important component. This is the option that Seifert et al. pursue. They argue, and they're going to have experimental results to back up their claim, that incubation is producing very inconsistent results precisely because it has been misunderstood. And again, the misunderstanding is that people have either said it doesn't exist, and that's business as usual, or they think it's wizard Merlin, and I guess the wizard Merlin here is Freud? That's a weird Freudian thing to say, isn't it?
No, it’s a more union thing to say, isn’t it? Why is it old man? Dumbledore. Dumbledore, right, or Gambo. You’re obviously a Harry Potter character. It’s no good. It’s no good? You’re making a movie though. You are? No, she said it would make a good movie. She didn’t say they’re making it, it’s her book. They’re making Fantastic Beasts. But the new book that’s based on a play, but it’s nevertheless packaged as if it’s a book, is not a good book play thing. Is that right? A lot of people say it feels like a fan fiction, which it kind of is, because she carries a nationality. But her name is on the book. She oversaw the process of it. She wrote the story. She wrote the story that the play is based on from which the book was written. But she didn’t write the book. The book is the book. But she didn’t write. It’s a script. Why didn’t they just publish her story that she did write? I think she was talking about the book. Sorry. I think when they’re saying story, it’s a physical short story, she was just writing out her ideas of how it was going to progress, and then they just took it out. Oh, I misunderstood. I see. So there wasn’t actually a draft in existence. She just said things, and then people went, ooh, and wrote it down as a play, and then somebody else went, ooh, we can sell this as a book and put her name on it. Yes. Oh, I see. Okay. That’s almost as dark and twisted as one of her stories. All right. So they say what Seifert and Aller are saying is like what’s going on is either the idea is it doesn’t exist or it’s this completely internal, unconscious process. And so the theoretical innovation that Seifert and Aller are going to bring in, which sounds like sort of da when you say it, is the environment matters. This is not a completely internal process. It’s a relationship between the person and their environment that is actually crucial for incubation. The environment matters. It is not a completely internal wizard Merlin process. But it does exist, and it’s not just business as usual. And this is what’s behind the opportunistic assimilation hypothesis. So the idea here is your processing encounters an impasse which produces a failure index. This is an index that there’s a problem that has been unsolved by you. And this is fair for them to introduce it. People like Lockhart and others I mentioned it to, talk about failure indexing. But you can’t solve a problem in some sense. It is flagged in memory as an unsolved problem, which could also help to give them a dairy idea about using the notice and variance heuristic. So the idea is processing encounters an impasse which produces failure indices in long term memory. Incubation involves a change of context. Incubation involves a change of context. And this change of context produces a fortuitous encounter with relevant objects or events or information. So the change of context produces a fortuitous encounter with the relevant information, the information that’s actually needed to solve the problem. So you’re impassing. Your brain sort of indexes this as, I need to solve this problem, I need to solve this problem, no I failed. It sort of goes zip, zip, zip, zip, zip. It makes that noise. And then you go to another context. And because this has been failure indexed, you’re sensitive to the information that is needed and that has been produced because you’ve changed context. 
So what you do then is you notice the needed information, and that connection, between the information being present in the new environment and you being sensitized to what you need, is what does the work. You notice it and you then assimilate it. This is why it's the opportunistic assimilation hypothesis. Noticing the needed information that has been made available by a change of context is what triggers the spontaneous transfer and the restructuring of the problem that produces the insight. So the idea here is that there is an interaction between the prepared mind, the things that have to happen in the mind, and external context variation. Variation in the context is also necessary for incubation to be effective. You need both the prepared mind and the contextual variation. So the idea is that incubation is a relational process: it involves a relation between changes occurring in the mind and changes occurring in the environment. Yes, yes, that's relevance realization all over the place; we will get there eventually. So what you've got going on here is a very complex interactional model of incubation, rather than these competing internal models. So I think they're quite right to say, yes? I was just wondering: you earlier said that the incubation experiments, Smith said that they all didn't work, or didn't prove it. There was a scarcity of replicable results, yes. And so was this ever replicated? We're going to see what gets replicated here. So what's going on with Seifert et al., but what's your name? Naomi. Naomi. What Naomi is saying is very good, because that was the point I was going to get to, but thank you, so that's great. What Seifert et al. are proposing is that the reason why you're getting inconsistency in the previous research is because the previous experimenters were not controlling for the environment. There was a missing variable: context change. If you put that variable in, then you will get reliable results for incubation. That's what they're predicting. So they ran a series of experiments that demonstrated incubation was effective, but only if relevant problem-solving information was presented during the interval. Sheer time away from the problem was not much of a factor. There probably is a little bit of fatigue dissipation, but not much. So if you take time away from the problem, but you do not move into an environment that contains the relevant information for helping you reformulate the problem, there's no incubation effect. But if you do take time away, and it involves a change of context in which the new context has the relevant information needed, then you do get facilitation of the solution to your problem. They also looked, and you can do this with association tests and things like that, for evidence of spreading activation during incubation, to see if that was what was going on. They couldn't find any good evidence of that sort. So sheer time away, and attempts to find evidence for internal processing like random recombination, didn't seem to be what mattered. What mattered was whether or not the environmental change produced a new context that had the relevant needed information.
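Here is a minimal, deliberately toy Python sketch of the opportunistic assimilation story as just described: a self-generated impasse writes a failure index into long-term memory, a change of context may then supply the needed information, and only the combination of the two yields the effect. The class, probabilities, and context sets are hypothetical illustrations, not Seifert et al.'s model.

```python
# Toy model: failure indexing plus context change, both required.
import random

random.seed(0)  # make the illustrative run deterministic

class Mind:
    def __init__(self):
        self.failure_index: set[str] = set()

    def attempt(self, problem: str) -> bool:
        """Try a problem; a self-generated impasse gets failure-indexed."""
        solved = random.random() < 0.2
        if not solved:
            self.failure_index.add(problem)  # flagged in long-term memory
        return solved

    def incubate(self, problem: str, context: set[str]) -> bool:
        """Insight requires both the failure index (prepared mind) and
        the needed cue being present in the new context (environment)."""
        return problem in self.failure_index and problem in context

mind = Mind()
mind.attempt("mutilated_chessboard")            # hits an impasse on this run
lab = {"word_list", "filler_task"}              # no relevant information
cafe = {"mutilated_chessboard", "newspaper"}    # contains the needed cue

print(mind.incubate("mutilated_chessboard", lab))   # False: time away alone
print(mind.incubate("mutilated_chessboard", cafe))  # True: fortuitous encounter
```

Notice that sheer time away corresponds to the first call: the memory is sensitized, but nothing in the context matches, so nothing happens, which matches the experimental pattern described above.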
Along with this, they had some evidence, which goes back to Gick and McGarry (I'll be with you in a sec, okay), that people have better memory for their failed problems than for their successful problems. But only if the subject, and in this case I mean the person doing the problem, not "subject" as in participant in the experiment, actually generates the impasse. The impasse has to be self-generated: you're trying to solve a problem and you get to an impasse on your own. Then it gets indexed. So they had some supporting evidence that self-generated failures get memory indexed, which is their way of providing evidence for the prepared-mind part of the hypothesis. The mind is being prepared, right? What the brain does is flag, and sensitize you to, problems in which the impasse was self-generated. There was another question first, down there. So it's just random, if you take time away and happen to go into an environment that offers the proper information? That's what they seem to be saying, yeah. Whether or not that's the case is something we're going to come back to. This is 1995; this is not the last word on incubation. But it's kind of the first word, in the sense that this was the first time things were shifting so that we could start to get more reliable results about incubation in experimental situations. Yes? So, you're talking about a self-generated impasse. Would that be sort of like in the Weisberg and Alba study, where sometimes they tell you you've exhausted the possibilities even though you haven't actually exhausted them yourself? Yeah, you want it to be that the person gets to a place where they say, I can't solve this problem, but it has to be because they have tried to solve the problem and thought they could. If I simply ask you what's the meaning of life and human happiness and you say, I don't know, that's not what we're talking about. We're not talking about: I gave you some problems and you say, I can't solve that problem. We're talking about where you have been trying to solve a problem and you hit the impasse. So this would be why you see it in painters as well, where, when they're working on a project, they'll go to an art gallery or walk around, and some of the pieces help them complete it. That would be an example of that kind of thing. This could be, by the way, why dreaming is effective for incubation: because you move into an alternative context that's bizarrely strange enough that it might contain the idea or assumption that you normally wouldn't think of. That's what David said. Yes? Sorry, can you just explain how that relates to the prepared mind again? The prepared mind is the idea that the self-generated impasse causes a failure index. That problem is especially and easily remembered by you, and that sensitizes you to information in the environment that is relevant to that problem. Because if a problem can be easily remembered, it can be easily triggered, and that means it primes your perception and attention in certain ways. Yes? How would this kind of incubation be different from just continuously trying to solve the problem? Because if your mind is prepared and you're consciously moving in a different environment, aren't you just continuing to solve the problem? Well, the idea is that you're not thinking about the problem, you're not trying to solve the problem anymore. You put the problem aside.
The prepared mind doesn’t mean you’re holding the problem in your working memory and trying to solve it. It just means that your long-term memory has been sensitized. So it’s out of your conscious awareness, in a sense. But it’s like a procedure that happens. Like if you were to look at it in a timeline, you try to solve the problem, you step away, and then that’s the incubation period. During the incubation period, your long-term memory becomes sensitized if you moved into a new contact. No, it becomes sensitized regardless. And if it’s just sensitized, that’s necessary but not sufficient for the insight. Your memory has to be sensitized and then the environment has to change in such a way that it’s fortuitous. An opportunity is created. To gain new information? To gain the needed information, yes. To gain the needed new information for solving the problem. Oh, so your mind becomes ready to gain the new information but if you don’t go into the context that has that information, you won’t gain it. Exactly. That’s perfectly clear. And that’s where we will end today. Oh, I’m sorry, you had a question. Yeah, so is it random? Do you need to go into a context? No, because you don’t know what the information is, you can say, I need the information next, I’m going to go into that context that’s rich in it. So, enjoy the Festival of Black Beat called Thanksgiving.