https://youtubetranscript.com/?v=VNCnVv0IsD0

Last time we went through some very difficult stuff, and we took a look at two mechanisms that might be at work within insight. We looked at the work of Stephen and Dixon and how self-organizing criticality could be operative in insight, and we went through the argument as to how we could use some new mathematics in order to get a new way of measuring and theorizing about the data. The evidence seemed to indicate that self-organizing criticality is at least a plausible candidate for being a mechanism at work in insight, and we saw how that connected up to issues like feeling of warmth, and potentially connected up with moderate distraction within incubation, etc.

Then we took a look at issues of wiring and network theory, and we looked at Schilling's theory that insight occurs when a regular network is transformed into a small-world network, which would change the network's functionality in a powerful way. Vervaeke then proposed that the two could be integrated together: that insight is when self-organizing criticality causes the transformation of a regular network into a small-world network, which would produce an increase in efficiency without much loss of resiliency, and that would be experienced as increased fluency. That connection between fluency and insight was already noted in the work of Topolinski and Reber. And throughout, we saw how logistical notions of efficiency and resiliency were playing a dominant role in describing and understanding these kinds of mechanisms.

We then did an extensive discussion of the nature of dynamical systems theory, and we went through the work of Alicia Juarrero. We saw that what's central to a dynamical systems theory is the discovery of a virtual engine that regulates a process that has feedback or cyclical causation. A virtual engine is a systemic relationship between two kinds of constraints, where constraints are different from causes: constraints are conditions that shape the possibilities available to a feedback cycle. There are two types of constraints: selective constraints, which work in terms of efficiency, and enabling constraints, which work in terms of resiliency. What they do, in a constantly shifting manner, is alter how the feedback system is reorganizing itself, reshaping itself. We took a look at that in terms of Darwin's theory of natural selection as an example of a powerful dynamical systems theory within science and biology. And I believe that is where we got to. Is that correct? Because most of you by that time were deeply fatigued.

Now it's going to get harder, and then it gets easier. Because the hardest part of all still needs to be done. So this is a somewhat strange lecture for me to give, because this is where I am going to have to be lecturing on a lot of my own work, which is bizarre and of course can be narcissistic and self-aggrandizing. I'm not claiming to be a saint; I'm trying my best. So I'm just being honest. That's the best I can do, and I'll try to present it as rationally as I can. So what's the issue? The issue is: can we address the central mechanism that's still unaddressed, the one that was at the core of why we started investigating insight, which is relevance realization, zeroing in on relevant information? Moreover, can we do it in such a way that it integrates with the machinery we discussed in the previous lecture?
Can we understand relevance realization in terms of a dynamical systems theory that makes use of ideas like self-organizing criticality and small-world networks? Because if we could do that, then of course we have the potential, not the full actuality, but the potential of a formalizable theory, an integrated theory of what's going on. So what I want to do is try to present this theory to you. It comes from basically three papers. There's also a manuscript, an unpublished manuscript, that has now sort of taken on a life of its own and is circulating through the internet. I'm not totally happy about that. So it's not dated. It's a manuscript on the connections between relevance realization and theories of general intelligence. So it's out there. We wrote it around the same time as this was being written, so this is not a publication date; that's why I'm putting it in brackets. It was written around 2010, 2011. Sometimes things get out of your hands, right?

So what's the idea? We need to come up with an account of how relevance realization works, in a way that could be integrated, theoretically and also empirically, through formal mathematical measurement and prediction, with the machinery we've been talking about that's at the core of insight. So this will be hard for a bit, and then we'll shift to the topic of creativity, which is much more open. The main thing to ask yourself is this: what we want to do, of course, is try to analyze the phenomenon of relevance realization. We want to try to explain it in terms of more basic processes, then hopefully formalize relevance realization in terms of those more basic processes, and then, of course, hopefully mechanize it: show that our conception of it is accurate by putting it into computational and neural network machines and seeing if we're starting to create the phenomenon rather than just explain it and study it. And all of that is being done. For example, this is Tim Lillicrap, who was one of the designers of AlphaGo. Some of you know this is the system that recently beat Go masters, something that until very recently was considered impossible for an artificial intelligence: beating a human being at Go. So what I'm talking about now is ongoing. It's happening right now.

Okay, so one of the things you try to do is get the right level at which you're going to study relevance realization. Now, one of the levels we're very familiar with, especially in psychology and cognitive psychology, is the representational level. This is the level at which we have representations. We have thoughts about things; we make statements about things; we make pictures of things. Right, this is the representational level. And the problem with the representational level is that it's not the correct level at which to try and explain relevance realization. This has to do with an argument that comes from John Searle, who pointed out that representations are inherently aspectual in nature. Interestingly enough, this can be used to make connections between relevance realization and consciousness, and some of you have seen that argument elsewhere, which I'm not going to make right now, because consciousness is terrifying. But sometimes you have to face it. Okay, so what's the idea?
When I form a representation of something, I am not encoding all of its properties, because the number of properties that any object possesses is of course combinatorially explosive. So out of all the possible properties, I select a subset. And they are not just a list; I can't go into great detail, but some of you already know this from independent work on the psychology of concepts. Concepts are not feature lists. They have a structural-functional gestalt that holds them together. So it's not only that pieces of information are picked out as relevant; it's also how they are relevant to each other, and then relevant to you. So I see this as a can. Right? I see this as a can. And there's a sense in which you already know that that's very misleading, because there's no such entity in physics as a can. Given the physics of things, you can't derive that this is a can. So the point of this argument is that you actually need a lot of relevance realization in order to generate your representations, because representations are inherently aspectual. Representations select, out of all the possible properties, a relevant subset; they impose a formulation, a structural-functional organization, on that subset; and that is not done just abstractly. It is done in a way that makes that information relevant to us. And some of you know this argument in a more extended fashion from Psych 312, where we talk about how this is at work even in Pavlovian and operant conditioning.

Okay. Now, what that means is, of course, you ultimately can't explain this in terms of this, because this presupposes this. You always have to analyze in terms of something more basic, right? Relevance realization is more basic than representation, so you can't analyze relevance realization in terms of representation. Now, I'm talking about a theoretical point here. I'm not saying that your use of representations can't alter what you find relevant or anything like that. I'm talking about the order of explanation.

All right. Now, by the way, that tells you something really interesting right away. It tells you that there's a deep sense in which the propositional level is not going to be the level at which you're going to find the machinery of insight, because it's just not the right kind of machinery for explaining relevance realization. And the degree to which relevance realization is a central aspect of insight, and I hope I've made that plausible to you, is the degree to which this level is not going to be adequate for explaining or capturing it. Yes? What is relevance realization? Relevance realization is this ability to zero in on relevant information. We've been talking about it from the beginning of the course: avoiding combinatorial explosion, converting ill-defined problems into well-defined problems.

Okay. Now, in fairness to the search inference framework, you could say, well, they weren't actually talking at the representational level, they were talking at the computational level. So, John, maybe your criticism is in that sense misdirected. And in one sense, you're right. So, the computational level is about information being encoded into abstract symbolic propositions and manipulated inferentially according to logical rules. We've talked about this. The main thing here is propositional structure, not the content, because the content is up here; it's propositional structure, logical syntactic structure, and then the rules operating on it. Okay.
Now, of course, there is a lot of discussion about the metaphysics of computation. Some of you might have taken courses with Brian Cantwell Smith. But this is a pretty standard notion of what the computational level is. Okay. The problem, and this is a problem that was pointed out by Jerry Fodor, who was actually one of the generators and staunch defenders of the computational theory of mind. He actually advocates for the idea that cognition is computation. So him putting forward this criticism is something you have to pay very careful attention to. This is not criticism from a hostile critic. This is criticism from a loving generator and father of this whole theory. Although it is kind of odd to describe Jerry Fodor as loving, because he gets really angry. But of course, one of the ways love expresses itself is in anger, because love isn't an emotion, right? You know that, right? Love isn't an emotion, because it is expressible in almost any emotion you can think of. You can express your love by being angry. You can express your love by being sad. You can express your love by being happy. You can express your love by being annoyed. Almost anything you can do can be an expression of love. Love is not a basic emotion. That is one of the many errors of romanticism. Romanticism said love was an emotion, and that emotions are feelings, and so love is a feeling, and that is just a ridiculous, stupid thing to say. Now, let's go back to the argument.

Okay. Fodor pointed out that this thing we are talking about, the relevance of a proposition, can't be captured in its structure. And in this, he was actually consonant with an independent argument by Cherniak in a book called Minimal Rationality. Now, I know you are going to hate me saying this, but we talk a lot more about both of these things in Psych 371. We talk a lot more about rationality and its relationship to intelligence. I can't do all of that right now. Like I said, I wish I could teach 312, 370, and 371 as one big course. Now, what they both said is that the fundamental reason why you can't capture the relevance of a proposition in its structure is that the relevance of a proposition is basically your cognitive commitment to it. Your cognitive commitment: that means how much time and energy and effort you give to it. What the propositional structure, the logical structure of a proposition, does is indicate all possible implication relations. That's what the logical structure of a proposition does: it indicates, according to the rules of logic, all the implication relations. Now, how many implication relations do you think any proposition actually has, logically possible? How many? Combinatorially explosive. And this was Cherniak's point: you can't check all of the implication relations. Cherniak did a calculation where he said, suppose you had just 138 logically independent propositions, and you could check an implication relation in something like one one-thousandth of a second. You still couldn't check them all in the entire history of the universe. It's that combinatorially explosive. Which means, what do you have to do? Whenever you are thinking of a proposition, you have to decide, out of all of the implications, which ones you're going to draw and which ones you're not going to draw. Which logical connections you're going to make, which ones you're not going to make. Which potential contradictions you're going to worry about, and which ones you're not going to worry about. Does that make sense?
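Cherniak's arithmetic is easy to reproduce. A minimal sketch in Python: the one-check-per-millisecond rate comes from his example as quoted above, while the age-of-the-universe figure is a standard rough value supplied here for the comparison.

```python
# Rough arithmetic behind Cherniak's point: a truth table for 138
# logically independent propositions has 2**138 lines to check.
lines = 2 ** 138
secs_per_check = 1e-3            # one check per millisecond (his example)
age_of_universe_s = 4.35e17      # ~13.8 billion years in seconds (rough)

time_needed = lines * secs_per_check
print(f"lines to check:   {lines:.2e}")
print(f"time needed:      {time_needed:.2e} seconds")
print(f"ages of universe: {time_needed / age_of_universe_s:.2e}")
# ~3.5e38 seconds, roughly 8e20 ages of the universe: exhaustive
# checking is hopeless, so relevance must prune almost everything.
```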
And Cherniak even goes so far as to say that what you do, to be rational, is not to draw all the logically possible implications but to draw only the relevant ones. Which means the relevance of a proposition can't be captured by its propositional structure, because propositional structure defines a combinatorially explosive space, and the whole point of relevance is to avoid combinatorial explosion. This is kind of a weird thing to say, but I'm just giving you a very tight argument: you can't have a logic of relevance. It's not a logical thing.

Now, what about the rules? Maybe at least the rules. Couldn't we have rules for deciding all of this, deciding our cognitive commitment? And we could make that into a computer. The problem with rules, and this is an argument that goes back to Wittgenstein, although a more recent version of it by Brown is a lot better, and to give everybody credit it ultimately goes back to Aristotle, though I suspect he stole it from Plato, because most of what Aristotle wrote we don't have; the twenty years where he was a disciple of Plato, all of that's been lost, which is kind of convenient, right? Okay, what's the argument? The problem with rules is that they cannot specify their own conditions of application. So let's just pick a rule. We'll pick a moral rule, because those are sort of obvious to you. Okay, so I take it that this is a moral rule you know: be kind. Yes? How many of you try to follow this rule? So, I'm worried about those of you who don't. So don't visit me late at night.

Okay, now the problem, and this is an argument again that we talk a lot about in 371, an argument also developed more recently by Schwartz and Sharpe in their work on practical wisdom. The point about "be kind" is that it means one thing when I'm being kind to my younger son, Spencer. My behavior towards him would be certain kinds of things. But being kind doesn't mean acting that same way towards Sarah, my partner, because she would find it inappropriate if I treated her the way my twelve-year-old son needed to be treated when he needed to be shown kindness. She would find that insulting and inappropriate. If I treated Ada, who's a student, like I treat either Sarah or Spencer, that's creepy and inappropriate. And then she'd go, ew, I didn't think John was that gross. Right? And if I treat a stranger with the kindness with which I treat her, somebody I know and have a professional relationship with, that's also presumptuous and inappropriate. Do you see the point? Now, what you might say is, well, what I'll do is I'll make rules for how to apply this rule. And what does that trap you in? Combinatorial explosion. Right, because you get into an infinite regress. So what Brown argues, very convincingly, is that all of this has to terminate in something other than a rule. He calls it the skill of judgment. The skill: notice the language we're talking about here. We have to shift to the procedural. Right? The skill of judgment. And of course the skill of judgment, I can't give you this argument in detail; if you want, you can read my thesis where I do it in more detail. The skill of judgment is the ability to zero in on relevant information, on all the relevant implications, etc. Okay. What does all of that show?
All of that shows that this level presupposes relevance realization in order to exist. Which means you ultimately can't explain relevance realization in computational terms. Now, again, I am not saying that computation can't change or alter what we find relevant, just to keep reminding you of that. But what we're trying to get at is the correct theoretical analysis and explanation of this phenomenon. So philosophers and cognitive scientists call this the semantic level, the level at which we're working in terms of meaning, the content of things. They call this the syntactic level, because this is the level at which we're dealing with the logical structure of how we encode information. And so the point is that relevance realization is lower than both the semantic and the syntactic levels.

A couple of things we also know about relevance realization. We know that it's recursive; it's self-organizing in some sense; it feeds back on itself, because that's what the phenomenon of insight shows us. So we do know that RR is in some sense recursive, in some sense self-organizing, and we already had some independent evidence for that last week. Another thing about relevance realization is that it has to be what's called scale-invariant. It has to be something that is happening both locally and globally in the brain. Because what you can't do is have a central executive, just to use two terms out of the blue: let's say this processing over here has to make decisions about what's relevant, and it has to send that to a central executive over here that makes the decisions about what's relevant. What is that thing actually going to face? Well, how many different layers and possible combinations are there of all the information processing going on in your brain? It's combinatorially explosive. You need something that's going on simultaneously locally and globally, what is now being called scale-invariant. Because whenever information is being processed, relevance issues are coming in. Sometimes it's global concerns, sometimes it's local concerns.

Now, before we go on, there's a major roadblock that we hit. I sort of hit it myself personally when I started doing work on this. I hit it in 1997 when I wrote my thesis, and I thought about giving up my academic life; I didn't know what I was meant to do. Because you can't have a scientific theory of relevance. Now, that doesn't mean relevance doesn't exist. That's another function of this argument: to show you that just because you can't have a scientific theory of relevance doesn't mean it doesn't exist. So we all know that science works by making inductive generalizations. You get a sample of things and then you try to draw general conclusions from your sample. So you make a class of things, a class of events, a class of objects, and then you try to draw inductive generalizations. That's the core of science. If you can't do inductive generalization, you're not doing science. Okay, now in order to do inductive generalization, your class of things has to have certain properties. This goes back to work done a long time ago in the philosophy of science by J.S. Mill and others. So I want you to consider, and this is Mill's example: I can group a bunch of things under this, horses, and I can also group under this, white things. Okay, so first of all, do we agree that horses exist? And do we agree that white things exist?
So white things aren't like witches or the celestial spheres; there really are white things, right? Yes? Okay, now notice: if I study a bunch of horses, that will give me a lot of information about other horses that I'm going to find. I will be able to make inductive generalizations, right? You study a bunch of horses and you realize, oh, this kind of medicine works when they get this kind of sickness. I don't have to study every horse; I can study a few horses, and that allows me to do that. Mill called that systematic import. Horses share a lot of key features, and W.V.O. Quine argued that that was all that was left of the notion of essence: they share a bunch of key features that allow us to make a lot of causal predictions about horses. That's why we can do that. Now, suppose you study five or six or ten or two hundred white things. Other than knowing that the next thing is going to be white and a thing, can you draw any conclusions about the causal properties of the next thing? Yes? You can draw properties based on what you know about things that are white, like how they handle heat. I just said, other than drawing implications about it being white and a thing. If you know that it's white, you also know it reflects light in a certain way. Give me things that aren't ways of defining white. I would say that reflecting light in a certain way isn't necessarily a definition of white. You can also talk about heat absorption. Sure, but that's part of what we mean by white too. Those are all based on it being white. Other than it being white, what other properties do all the white things share? But those are things about being a horse. All the properties you know about a horse are because it's a horse, and we define horses by those properties as well. So, in the same way, what other things can you tell me about this white thing other than that it's white? What other things can you tell me about a horse? But that's the point. All you're doing is telling me some properties of white. But that's the same thing you're doing with the horse. Ah, but is it? What kind of properties are they? You're basically agreeing with me. You're saying that this classification allows me to make a whole bunch of different kinds of inferences, and this one only allows me to give you all the different ways we talk about white. I don't need to examine any of the white things at all to tell you that. But I have to examine the horses to figure out what properties they share. One is definitional; the other requires empirical investigation. Yes? Could you argue that white is like horse? So, like the second part: it's a white thing rather than just white? Well, the problem with white is that it's not in and of itself a thing we study; it's part of how we study light reflectance. Now, I think that light is the kind of thing we can do a science on, and we can. But do we have a science of white things? I mean, that's the easy way to test this: do we? How about a ball, being a ball? Do we have ballology, a science of being a ball? Other than it being round and that you can bounce it, there are no other important properties. It doesn't tell you anything else. You can't discover anything more about the world.
In fact, most, and this is the point, most of our concepts don't pick out things we can do science on. So just because we have a concept doesn't mean we can do a science on it. Yes? I understand what you're saying about white things, because the only thing we know they share in common is being white; they're different from horses. However, with balls, you can also investigate differences in size, which are not specified by being a ball, just like with horses. But one of the things I was trying to say before is that the definition of horse already has things in it, such as DNA. And how did those come about? Because we were able to do a science on horses. But I'm saying that's part of the definition of horses. No, it's not. We had the definition, and then it turned out that we could do the science, which could feed back and add to the definition. You're arguing in a circle. You're saying it has a scientific definition, which is the point. That's the point. But so does white. No, it doesn't. White things don't. You're changing what I'm talking about. Other than telling me that they're white, what can you tell me about them? I don't need to tell you anything else, because you've just been telling me that other than horses being horses... So, other than... Okay, first of all, when we picked things out as horses, we didn't know everything about them as horses, right? Okay. So it turned out that that was a way of cutting up the world such that we could discover what kind of hearts they had, what kind of DNA they had, that they were actually mammals, and we keep discovering more and more things about horses. Right. Okay. So I pick out a bunch of things by labeling them as white. What do I learn about those things? Nothing. I already knew they were white, because that's how I picked them out. Yeah, but then you also already knew that the horses were horses. No, I didn't. We knew about horses before we discovered everything we've discovered about horses. Do you think back in Mesopotamia, when they said there's a horse, they knew everything we've learned about horses? No, but back then when they said this thing is white, they also didn't know the properties of things being white. No, no, that's not the point. I use white just as a way of picking out all those things, and there's nothing I can find that all those things have in common other than that they're white. I'm done. They're white. Oh, well, you can discover more and more about white things. But then I'm not talking about white things, I'm talking about white light. And yes, I can have a science of white light; I've admitted that to you. There's a difference between a science of white light and a science of white things. You keep talking about white light; I want to talk about white things. Do you see the difference? Yeah. Okay. So, the point is that you don't know this in advance. And that's the whole point of science: we don't know which of our groupings is going to turn out to work this way. But what has to happen is that we have some reason to believe that the classification is going to be homogeneous, that its members are going to share a lot of undiscovered properties. Now, what else do you need? You need the properties that you're doing your science on to be intrinsic properties.
Okay, so what's the basic difference here? Okay, so here's an attributed property: we treat this as money. It's only money because we all treat it that way. If we stop treating it as money, it stops being money. Okay? Now, some things are not quite as clear as that. This being a can is only a can because we attribute it that way. If there were no human beings using it as a can, it wouldn't be a can. It would be this object with its mass. And notice, that's why we can do a physics on its mass and its structure, but we can't do a physics on it being a can. That's another reason why, by the way, we can't do a science on everything you can label: because not everything we label is an intrinsic property. Many times what we do is give things attributed properties by labeling them the way we do. Right? So the class has to be homogeneous: the members have to share many, many properties that allow you to discover more things about other members of the class. Those properties have to be intrinsic. And of course, the class has to be stable: the set of properties has to remain stable, or you can't make your inductive inferences across time or across context. If gold were constantly changing across time and across context, it wouldn't be a stable class, and we couldn't make conclusions or generalizations about the nature of gold. We couldn't say gold behaves this way on, I don't know, some planet in some other solar system, and we couldn't say gold behaved this way back in the year 4000 BCE, if gold isn't in some sense fundamentally stable.

Yes, Gina? So, are there intrinsic properties based on the functions of objects? Now, that's actually a really philosophically interesting question, because you can't ultimately put functional properties into physics. Function is defined in terms of purpose, and we've sort of decided that physics doesn't traffic in teleology, in purpose. Now, there's a dispute about this, because maybe biological things act on purpose, but if you say that and push it too strongly, you get into that whole issue we got into with Kant: that maybe you can't ultimately talk about living things in a scientific way. Right. So the question is, is there any way to define something like money in terms of the function of the object? And you might want to say that ultimately anything that's attributed is also not stable, because human beings might disappear. I get that argument.
But it doesn't go the other way around: it doesn't mean that everything that is stable is necessarily intrinsic. So how are the two components working together? You have to have it stable, but the properties can sometimes shift based on the context; how can it be stable and yet change? Oh, I see what you're saying: how is it possible to ever have stability without intrinsicness? I don't think you can have stable without it being intrinsic. But I'm not sure we think that all intrinsic things are necessarily stable across all times and contexts. I think that something like the singularity of the Big Bang probably intrinsically existed, but I don't think we think of it as a stable thing. And that's why, in a technical sense, and that's part of the difficulty, you can't do a complete science on the singularity: we don't have any concepts or classes to apply to it. Which is why it gets kind of crazy when you try to talk about it. Okay. That went longer than I thought it would.

The basic idea is, and Gina was pointing towards this: your class of things has to be homogeneous, intrinsic, and stable. And as she already indicated, that doesn't work for the things you find relevant. Other than your finding them relevant, and we can do a lot with that, and that's going to be part of the point, you don't learn anything; they don't share any important properties. Yes? Insight problems. They're not really homogeneous, are they? That's right. But are they stable? That's exactly right. So what did we give up? We gave up trying to have a theory of insight problems, and what did we shift to? A theory of insight problem solving. Okay. But are they stable, intrinsic, homogeneous? I don't think they're intrinsic; problems don't intrinsically exist. And I don't know if they're stable, because something you found to be an insight problem when you were four, you probably don't find to be an insight problem now. True. And as you already indicated yourself, they're clearly not homogeneous. None of the above. Which is why, and I emphasized this at the time, you can now see why Weisberg's criticism is so penetrating: it's just the wrong way to go. You can't have a theory of that. But that just is reinforcing evidence, and that's the connection I wanted to make with you. That's why you can't have a theory of relevance, for all of the same reasons. What you can do is have a theory of relevance realization. You can have a theory of the processing that goes into how relevance is being realized, even though you can't have a theory of relevance itself. Just like we shifted from trying to come up with a formal theory of insight to trying to come up with a formal theory of insight problem solving, this is what we need to do here too: we need a theory of relevance realization.

Now, that's one strong connection I just made; you can probably think of another one that we made last time, because we know what those kinds of theories look like. So, for example, remember I pointed out that Darwin gave up the project of trying to find a definition of fitness.
Because if you take a class of fit things, some are big, some are small, some are fast, some are slow, some are hard, some are soft; there is no essence to being fit. And remember what he did: instead of trying to find the perfection that the naturalists were looking for, he looked instead for a dynamical systems theory of how design was constantly evolving from previous instances of design. And again, we forget how insightful that was. That was a big, huge move he made. It was brilliant. Like I said, I think the theory of evolution by natural selection is one of the top five theories ever produced in science.

But here's the idea. What if that's how we should be thinking of relevance realization? Let me start it as an analogy, and then turn it into something more like a derivation. What if relevance realization is some kind of virtual engine of cost functions that regulates the informational feedback between your brain and its environment, such that your problem solving abilities, if you'll allow me first the analogy, which I'll then try to specify, are always evolving their fittedness to their environment? Or, to use language from the course: in addition to heuristics, there are cost functions, and virtual engines of cost functions, that constrain the problem solving space so as to regularly avoid combinatorial explosion. What would that look like?

Well, interestingly, and this is part of what we were talking about here, there is a convergent framework emerging. More and more people are talking about these logistical cost functions and logistical norms regulating all of intelligence. For example, some of you might have heard of Friston. Friston has proposed what's called the free energy principle. Basically, he thinks it's a potential unifying theory for cognitive science; that's how powerful he takes it to be. Yes? Sorry, what's it called? The free energy principle. I won't go into it in depth, because he does a lot of work with it. So this is one example, and there are many; you can read these papers to find all of these examples. What he's basically arguing is that ultimately your brain is trying to avoid being surprised. And why is your brain trying to avoid being surprised? If you think below that, it's trying to become as efficient as it can in its processing. So the basic idea here, and this is the point, is that it's a scale-invariant principle. All of your brain's processing, locally and globally, is designed by evolution, if you'll allow me the anthropomorphic language, which is always somewhat inappropriate, to try to be as efficient as possible. Now, that's a cost function. It means the brain is not just logically processing information, it's logistically processing information. It's constantly tracking cost: time, effort, metabolic energy. There's even some evidence the brain does opportunity cost calculation. How costly is this? How much am I getting out of it? It's constantly trying, if you'll allow me to put it this way initially, because we're going to have to qualify it, to maximize its cognitive profit.
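To make "virtual engine of cost functions" a little more concrete, here is a minimal toy sketch. This is an illustrative formulation only, not the formalism of the papers being discussed; every name, weight, and number in it is invented for the example.

```python
# A toy "logistical" cost function: score a candidate line of processing
# by its expected informational gain minus its time, energy, and
# opportunity costs (all quantities here are made up for illustration).
def cognitive_profit(info_gain, time_cost, energy_cost, opportunity_cost,
                     weights=(1.0, 1.0, 1.0)):
    w_t, w_e, w_o = weights  # context-dependent weighting of each cost
    return info_gain - (w_t * time_cost + w_e * energy_cost + w_o * opportunity_cost)

# Commit to whichever option is currently most profitable.
options = {
    "draw this implication":    cognitive_profit(5.0, 1.0, 0.5, 2.0),
    "ignore it":                cognitive_profit(0.0, 0.0, 0.0, 0.0),
    "check that contradiction": cognitive_profit(2.0, 3.0, 1.0, 1.5),
}
print(max(options, key=options.get))  # -> "draw this implication"
```

The point of the shifting weights is that the same option can become profitable or unprofitable as context changes, which is what would make this an engine that constantly reshapes commitment rather than a fixed rule.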
In fact, there's a theory of relevance, which I think is a mistake, because I just argued that you can't have one, but we don't have to argue about that here, by Sperber and Wilson, in their book entitled Relevance, that argues just this: that what you're doing when you're seeking relevant information is trying to find information that allows you to make your processing more efficient. Yes? Then how come we like to play? Well, that's the problem. The problem with this, and with many versions of this, is that you don't want efficiency to be your only logistical norm. What did we talk about last week? What do you also always need? Resiliency. Resiliency. And you need a trade-off relationship between them. So you don't want to maximize efficiency or resiliency; you want to optimize the relationship between them, right? And we argue that one of the ways of optimizing the relationship between them is to do self-organizing opponent processing. Now, and I'm doing this deliberately, this starts to tie more and more tightly to what we talked about last week. Because, of course, the way you try to make your processing as efficient as possible is to increase generalization. Generalization means I can use the same function in more and more places, and that means my functionality has become more and more efficient, the more I can generalize. But what was the opponent thing you needed to be able to do? Do you remember? Discriminate. Differentiate. And you know this in psychology: we're always trading between generalizability and individual differences, generalizability and individual differences. In fact, I guarantee you, at any conference, if somebody stands up and proposes some generalized function, somebody in the audience will put up their hand, thinking they're brilliant, and say, well, what about individual differences? And everybody goes, oh yes, oh yes, because that's just part of the deal, right?

Yes? Isn't generalization like a form of discrimination? Pardon me? Isn't generalization a form of discrimination? No, I'm not using discrimination in the political sense. Oh no, no, but that's kind of what I mean too, like if you're generalizing a class. So if I generalize a function, it means I can interpolate and extrapolate it, right? So what does that mean? How is that discriminatory? Oh, because if you generalize, like, all horses have these characteristics, you're differentiating the horses from, let's say, ponies or zebras. Ah, but then you're not generalizing when you're differentiating horses from zebras. What you're doing then is taking the larger class of equines and differentiating it into two subclasses. Okay, so a generalization has subcategories that are discriminated. The way of thinking about it, and this is only one instance, this is not everything I'm saying, is that as you move up a taxonomic hierarchy, you're generalizing, and as you move down, you're discriminating. Okay, so generalization and discrimination are less like classes and more like processes. That's exactly the point, yes. Okay, now some of the work I did was about how to talk about this. And if you want the specifics and the math, look at the 2012 paper. I'm not going to load that on you right now. Now, the interesting thing here is that the way you get generalization is by doing data compression.
I just did it, right? And when we talked about this: you have the scatter plot, and you do the line of best fit so that you can interpolate and extrapolate. That's what data compression does. So whenever you're trying to generalize, you're running some kind of compression function, in a neural network something like weight decay. There are many different ways of doing it, and it's unclear which one the brain is specifically using, but that's not what I need to talk about right now; that's at the biological level, and I'm talking at the functional level. There wasn't a name for the opposite thing in the literature. We invented one, and people now seem to be using it: particularization. This is when you try to overfit to the data rather than interpolate it. Why would you want to overfit to the data? What does it allow you to do? What kind of function does it allow you to create? It allows you to create a special purpose machine that only works for a very limited data set. And sometimes you want special purpose machines.
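The compression/particularization opponency just described can be seen in a standard curve-fitting contrast; a minimal sketch with arbitrary synthetic data:

```python
# Compression vs. particularization as under- vs. over-fitting
# (a standard illustration; the data here are made up).
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 12)
y = 2.0 * x + rng.normal(0.0, 0.15, size=x.size)  # noisy linear trend

compress = np.polyfit(x, y, deg=1)       # compresses to a trend: generalizes
particularize = np.polyfit(x, y, deg=9)  # chases the noise: special-purpose

x_new = 1.2  # a point beyond the observed data
print(np.polyval(compress, x_new))       # sensible extrapolation, near 2.4
print(np.polyval(particularize, x_new))  # wild value: the overfit machine
                                         # only works on its own data set
```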
So what we have are these bioeconomic, logistical norms that work in terms of cost functions. We can talk about a system of constraints in the brain: a virtual engine. Now, what's interesting is you've seen this before. Remember? Where did you see it before? Switching in the brain from the left hemisphere to the right? Well, that's part of it, and that's at a very high level, but more abstractly? Exactly. Where did we see this last time? This is self-organizing criticality: when the neurons are firing in synchrony, they seem to be doing data compression; then they avalanche, they differentiate, they particularize, and then they recompress, and they're constantly oscillating between these. This is self-organizing criticality. Yes? Is this also bottom-up and top-down? It's scale-invariant, that's right. It's recursive and scale-invariant.

Now, if that's right, and I have to be careful here: when we originally did this, this was a prediction. In fact, both of these were originally predictions that then turned into postdictions: they received empirical confirmation. Okay, so we're arguing that this can implement this, right? Self-organizing criticality does this, and this is the way in which the brain, in a sense, can be constantly optimizing its logistics so that it's constantly evolving its fittedness for problem solving in the world. Now, what this course has argued is that this is the core of problem solving, and problem solving is the core of intelligence, so that to be intelligent is to be a general problem solver. That's how we started the course. Again, I can't give you the full argument here; you can see more of it in the manuscript. Leo is a psychometrician; he does intelligence testing, and so we did work building this argument, but I've already given you a version of it, so there's more there. Basically, what we're measuring when we're measuring intelligence is the system's capacity for relevance realization. A lot of people are converging on this. Stanovich, for example, in his recent book What Intelligence Tests Miss, makes the argument that what we're measuring when we're measuring intelligence is basically the ability to deal with computational limitations, that is, to avoid combinatorial explosion. So again, a lot of convergence on this idea.

So if this is all right, if self-organizing criticality can implement relevance realization, and relevance realization is the core of your general intelligence, what relationship should there be between this and this? Should they be uncorrelated? What should the relationship be between them? They should be highly what? Highly correlated. Now, we have ways of measuring this, and of course we've had a century of developing ways of measuring that. So that's not controversial; you know that measures of general intelligence are among the best-validated constructs in psychology. Okay, and that's been found: Thatcher et al., 2008, 2009. I'm writing this from memory; I can't remember if there's a T in there or not, I'm pretty sure there is. Basically, the more flexible your brain is at self-organizing criticality, the more intelligent you are. There's been quite a bit of work since then; you can look at the review paper I mentioned last week about why that would be the case. And it's sort of zeroing in on this. Now, that's kind of cool, because we're getting an account of how the brain is plausibly doing relevance realization above and beyond heuristics and the other things I've talked about from the beginning of the course; I've never denied those. Above and beyond the heuristics, there are virtual engines of cost functions operating logistically that are really constraining the search space, but in an ongoing, evolving manner.

Now, we also said last class that there was something else that optimized the relationship between efficiency and resiliency. Do you remember what it was? Small-world networks. Small-world networks optimize between efficiency and resiliency. Do you remember that? So small-world networks would also be a way of implementing relevance realization. Yes? Relevance realization is the dynamic optimization between efficiency and resiliency, and small-world networks are doing that as well. So we should be able to make the same prediction: the more your brain is wired like a small-world network, in a scale-invariant manner, the more intelligent you should be. And that's been found: Langer et al., 2012.
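That small-world trade-off between efficiency and resiliency can be checked directly with standard graph tools. A minimal sketch using the Watts-Strogatz model; the choice of networkx and the parameter values are mine, made for the illustration:

```python
# A little random rewiring buys efficiency (short paths) while keeping
# resiliency (high clustering): the small-world sweet spot.
import networkx as nx

def profile(p, label):
    g = nx.connected_watts_strogatz_graph(n=200, k=8, p=p, seed=1)
    print(f"{label}: path length {nx.average_shortest_path_length(g):.2f}, "
          f"clustering {nx.average_clustering(g):.2f}")

profile(0.0,  "regular    ")  # long paths, high clustering
profile(0.05, "small-world")  # short paths, still high clustering
profile(1.0,  "random     ")  # short paths, clustering collapses
```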
Now, what I'm showing you, and we argued this independently of talking about insight, over here we were just trying to solve this core problem, is at least part of the machinery. There's other machinery going on in relevance realization; this is just some of it. We've talked about other things, like the opponent processing between exploring and exploiting, and stuff like that. But here's the machinery we did talk about, and what's interesting about it is that it's getting more and more empirical confirmation; these were just the first two studies. And what's, I think, deeply interesting for this course is that this very same machinery is the machinery that people were independently invoking to explain insight. So we have the possibility of a genuine formal theory, not of insight, there can't be such a thing, but of what's going on in insight problem solving. Insight, plausibly, as we talked about last time, is making use of this machinery, and there is a fundamental relationship between this kind of wiring and this kind of dynamics. But what was missing from the theory of insight was: yes, but how is the relevance realization going on in that machinery? This is how it's going on in that machinery. The same machinery that we use to explain insight is part of a deeper theory explaining how the brain avoids combinatorial explosion, how the brain does relevance realization, how, above and beyond heuristics, it constrains the search space. This was the hardest part of the whole course.

I have a question. So if general intelligence is a stable trait, it can't really change, right? But relevance realization is a process. Can you train relevance realization? Yes, you can. So that would be by training self-organizing criticality? In a sense, yes. So what would you do? How would you do that? Well, one of the things you could do is train your cognition to get better at creating greater and greater integrations of things. So basically you would be... Wait, hang on, sorry. But you also have to train it at getting better and better at breaking those integrations up. Now, where have we seen a kind of practice that does that for you? Pardon me? I didn't hear what you were saying. Mindfulness. Mindfulness.

Now, here's where you can get into theoretical problems, and Stanovich actually wrestles with this. Again, I know you're going to hate me for saying this: we talk about this a lot in 371. Does that mean we're changing our intelligence? The problem is, if we say that, then we're blurring a distinction that he thinks is very important. We should reserve the term intelligence for this stable, sort of biologically given basis of your brain's ability to avoid combinatorial explosion. He says instead that when you use your intelligence to develop improved skills of problem solving, when you acquire skills, we should stop talking about intelligence and start talking about rationality. And then he's got a whole argument about the fact that you can show, very powerfully, that you get just as strong a positive manifold for all of our rationality tests as you do for our intelligence tests. And then what you can do is take those two manifolds and ask how tightly they are correlated with each other. We've been doing rationality tests for about forty or fifty years, and our best psychometric tests of rationality all strongly manifold together. You already know from intelligence testing that the intelligence tests all strongly manifold together. So what's the correlation between the intelligence manifold and the rationality manifold? If they were the same thing, it should be a very high correlation, right? It's at maximum about .3. So your intelligence is necessary, but nowhere near sufficient, for your rationality. So your rationality would be different from your intelligence. Yes. And then, if it's different, could you train it? Could you work on it? Yes, and that's the whole point of the book: what intelligence tests miss is that they don't measure rationality. And what he goes on to argue is that although IQ is a great predictor, you know what's a much better predictor of how well people do in their lives? Rationality.

Now, you have to broaden your notion of rationality. You have to take into account this course. Rationality doesn't just mean improved inference; it doesn't just mean being logical. We've seen that rationality should also mean improved problem solving. It should mean an improvement of your insight abilities. One of the ways of making yourself more rational is, as she said, to practice mindfulness. Is it just mindfulness?
No, no, of course not. There are other things. You have to improve your inferences. You can acquire the skill of active open-mindedness, which teaches you how to look for the ways heuristics bias your inference and how to reduce the impact of those biases, and thereby improve your inference. That also allows you to better use your inferential machinery to solve problems. That's the kind of thing we do in 371, by the way. Now, you can just let it blur if you want. You can say, well, any improvement in problem solving is intelligence. And that's fine; he doesn't have a metaphysical argument against that. He says: but if you do that, then we lose these different kinds of predictions we're capable of making. And that's the only basis we can ever use for justifying the distinctions we make in science: does it afford different kinds of predictions? Yes. So do you understand what I'm saying? It's not like God is above saying, no, this is intelligence and this is rationality. He's saying we keep them apart because that way we avoid theoretical confusion and we get empirical clarity. Is that okay?

Okay. So when people say they have studies showing that over time people's intelligence changed, like in measures of IQ, they're really measuring a change in rationality? So which measures? Do you mean within an individual, or across generations, like the Flynn effect? Within an individual. Because I've seen, like, literally people take it and get different results. I don't know if that's just a margin-of-error thing; I've also read things on it. Yeah, so there's a lot of controversy about how stable it is or not. There seems to be some indication, and take it like this, because you're both agreeing there's controversy around this, there seems to be some indication that people might be capable of altering core working memory capacity, which would make an impact on your fundamental intelligence. We don't know.

Okay. Here's what's interesting. Oh, by the way, another thing that correlates very well with measures of intelligence is measures of working memory. But why might that be the case, given what we're saying here? We talked a bit about this. Remember the two different models of working memory? Okay, one model is the tabletop model, the Miller model. You have a limited space. It's like your counter: working memory is your counter, and your long-term storage is your kitchen cupboard. You bring stuff out, you put it on the counter, you do stuff, and then you put it away. And you have limited space. The problem with the limited counter model, and this is Lynn Hasher's work, and she's done some really recent, cool stuff, is: what doesn't it explain? How can you get more stuff through working memory? Chunking. Chunking. And what is it to make a chunk? You find the pieces of information relevant to each other, and then you subjectively organize them, which is code for saying you make them highly relevant to you. You do lots of relevance realization. Her model is that working memory is a relevance filter, a higher-order relevance filter. That lines up with all of this. Lynn Hasher, she's here at U of T. She's brilliant.
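To see how much compression chunking buys, here is a tiny sketch; the famous-years digit string is a stock illustration of this kind of demonstration, not an example from the lecture:

```python
# Chunking as relevance realization: finding the items relevant to each
# other compresses twelve digits into three working-memory slots.
digits = list("149217761969")
print(len(digits), "slots needed raw")      # 12: over the 7 +/- 2 limit

chunks = ["1492", "1776", "1969"]           # mutually relevant: famous years
print(len(chunks), "slots needed chunked")  # 3: easily held

# The compression only exists for someone to whom the years are relevant;
# the raw data themselves don't dictate the chunking.
```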
And some of the coolest research, and this could go towards supporting what Chloe was just saying, and maybe messing things up even more, but it's exciting and interesting work, is this. We have this sort of standard model that as people get older, they get stupider, because their ability to keep things out of working memory, to avoid distractors, goes down. And a lot of that is misleading, because it's emphasizing only one side of this. What is turning out, and now they've done several studies, there was one that was just in the news last week, it actually made the news, wow, is that a lot of it is converging on the idea that older brains are wiser brains. Younger brains rapidly screen things off: I don't find that relevant, and that lets them work faster. But older brains don't screen off as rapidly. They don't ignore as much as irrelevant. And so what happens is, older brains tend to be wiser. They tend to see connections and have insights that younger brains aren't capable of, especially, get this, in ill-defined, messy social situations. So we shouldn't be chucking our old people away after all. Now, of course, that's going to be affected by how nasty they are, how bitter they are, how horribly their life has gone, and all that stuff. But there have now been several studies about all of that. Okay, we've got to stop soon. Okay? Yes?

I guess one thing that just kind of stuck out to me was that I read this paper that talked about how philosophy majors have one of the highest IQs of any major. And it just occurred to me that philosophy requires a lot of breaking frame, making frame. And it would make sense that a person who's good at philosophy would be better at relevance realization, and therefore have a higher IQ. That's the old argument, and sorry, we do this a lot in 371: the idea that the primary job of philosophy is to train rationality. Where, again, I'm asking you not to hear the word rationality as meaning logic. It means optimizing your inference, optimizing your insight, optimizing your intuition, optimizing your ability to internalize other people's perspectives. These are all things that ancient philosophy talked a lot about. And so the degree to which you're doing those things in philosophy is the degree to which you're training those abilities. Now, just to be a little bit more self-critical, and I have a degree in philosophy, so I'm allowed to do this: there's a lot of criticism within academic philosophy that academic philosophy has not, until very recently, been concerning itself very much with wisdom. It got locked into conceptual analysis and has only recently gone back to the ancient idea that the primary function of philosophy is to train rationality, broadly construed in the way I've just indicated. So I would say yes to you, but with that important qualification. Is that all right? Okay.

So we're now done talking about insight. What else is there to do? I mean, there's more science to do. But what I mean is, what I've tried to show you is that we're sort of beyond both the search inference framework and the Gestalt framework, and we're increasingly using a cognitive science theory as opposed to a purely psychological theory.
There's obviously lots of psychology in what we've been doing, but neuroscience and ideas from artificial intelligence are playing an increasing role in trying to figure out what the core of our problem solving ability is. Now, the question I want to turn to and get going — and then we'll take a break in a minute, right? — is the question that was brought up in one of the previous lectures and then was specifically brought up by Margaret Boden, who comes out of the search inference framework, the computational framework. This was a book originally from 1990, Margaret Boden. And this is the question: whether or not creativity is anything above and beyond insight. So what Boden in fact argues — her theory of creativity is basically, right, she makes a distinction between personal and what she calls historical creativity. Personal creativity is basically when you change your problem space such that you're capable of coming to a conclusion that you couldn't come to before. Historical creativity is when an individual does that for a shared problem space. So there's a problem that a bunch of us can't solve. Somebody comes along and goes, no, but reconfigure the problem space this way. And we go, ah, and then we can solve our problem. Now, I don't think she would disagree with this, what I'm going to say. Basically what she's saying then is creativity is just deep insight. Creativity is: you couldn't think of something, you restructure the problem space, and now you can think of it. That's it, that's creativity. I'm not saying she's right. I'm saying that this question was brought up, and I'm using her as an important thinker in the field who seems to advance — I think you can make a strong interpretive case — the thesis that creativity is just a synonym for insight. We can get a little bit clearer: when we call something creativity, we're trying to emphasize how radical the insight is, how influential or important it is to you as an individual or to a group. So that brings up the question then: is that it? Is all of the cognitive machinery of creativity just what's going on in insight? Now, what I want to do — obviously you already know, and I told you this ahead of time — there's somebody who jumps on that bandwagon and says there's nothing to creativity, it doesn't really exist. Who might that be? Weisberg, because that's what he does. That's his raison d'être. We're going to take the car to the movies, but do cars really exist? But what we're going to do is instead do the opposite. Because although many of you are immediately nodding, saying yes, there's much more to creativity, trying to get that going and establishing it turns out to be really hard. So I want to propose to you the first hypothesis we'll take a look at, and then after I give you the proposal we'll take a break and then take a look at it. Here's the first hypothesis: that when we're talking about creativity we're talking about another process that is used to trigger insight. So nobody denies — let's be clear about this — nobody denies that insight has an important role in creativity. Yes? Sometimes. I don't necessarily think that it's necessary. That it's possible to be creative without it being insightful? Yes, because insight requires, as we were saying, a problem you couldn't solve before and then you solve it. So you can be creative without having that kind of problem. Right, right. And so that's a good point.
I should have been more clear what I meant by everybody thinks that insight is in some sense relevant to creativity. Zoe's right — sorry, Chloe's right — because what some people will argue, and we'll see this later, is that although creativity is using the same machinery as insight, it's using it in a fundamentally different way that we wouldn't call insight. So we'll come back to that. But the first proposal is this idea, right, that what creativity is, is basically the use of analogy in order to provoke insight. And this is a very popular idea. Now, although there's a distinction between these two terms, the distinction doesn't turn out to be highly relevant in this discussion. So when I say analogy, I'm extending it to mean things like metaphor as well, even though they're not the exact same thing — I get that. But here's a theory of creativity: a creative person is somebody who knows how to use analogy in order to provoke insight. Which sounds very attractive, and it does line up with a lot of discourse about creativity. So here's the first idea we're going to look at. This is to say, right, yes, insight is playing a role, but insight is much more the result of creativity. It's not identical to creativity, because creativity is using analogy to provoke insight. Okay, so what this theory depends on, of course, is the independence of analogy and insight, and the capacity to explain how analogy causes insight. Correct? Yes? When you're talking about creativity, are you just talking about creative problem solving? I'm talking about... well, I don't know. I was going to answer you, then I thought, I know how you think. Well, at least I've come to know how you think. What do you mean by that? I mean, as opposed to making art, which isn't necessarily solving a problem. Sure it is. You're solving all kinds of problems when you're trying to make art. But it's not a problem that has a particular end in mind. Right. Okay. So, but why does that have to be the case? So, if it's not about solving problems, does that mean that any random generation of new things by the natural environment is creativity? Sort of. Okay, then. Then it's not a proper part of psychology, and we should stop talking about it in psychology. It's part of physics or chemistry, right? I think it's important to be clear about what we mean when we're talking about creativity. That's what I'm asking you. So, I don't know. The problem is, if you remove the idea that you're pursuing a goal, then you have to give me another way of distinguishing it from all the other non-goal-directed things. And don't just say, a human did it, because that's to smuggle in the very point we're trying to clarify. Okay. But when we're solving an insight problem, there is usually one solution we're going to work on. Right. Okay. Now you're saying something better. Keep going. I mean, to some degree. It has to have some intelligibility. I can't just call something creative, and that's enough. So, are we agreeing on that? So, what you're now suggesting, which is a different idea than what you were saying before, is maybe creativity is the kind of problem solving in which multiple solutions are available to you. Now, that's a good thing to say, but then how do we distinguish that from really easy-to-solve problems? Because one of the defining features of really easy-to-solve problems is that they have many equally good answers — because that means you can get to your goal state through many different pathways in the search space.
And you don't want to say that creativity is the same as solving easy-to-solve problems. I think it might be harder than insight problems in certain contexts. Okay. So now it's tricky. Now you've got to tell me how it can have multiple answers and yet still be harder than an insight problem. Yes. Yes. So, I think one thing — because you're saying if it's just the random generation of things, then it's not really creativity, it's not useful — I think in some ways creativity is the random generation of things, with the caveat that, because we don't all think that the same things are creative, it needs to be useful. So, a modern art painter might be like, oh, this is so creative. And someone else is like, no, that's terrible. Who would pay money for that? Because they don't think it's creative, because they don't think it's useful. Sure. So then it is in a sense one part random generation, one part personal interpretation based on usefulness. Which is already a not-clearly-defined term which relates to relevance, but can also explain how it's not necessarily solving a problem. Because the person is just creating stuff. They're obviously creating it so it's useful to them. So it could be a problem they're solving as they're making, but it's also not solving a problem. Well, okay, you said many different things that I can't respond to all at the same time. But one thing is, I'm still having difficulty seeing how random generation that somebody finds useful would be a good definition, because that's happening all the time. It's happening in this room all over the place. There's lots of random shit happening, and we can find some of it useful if we just put our minds to it. Is that all we want to say creativity is? And maybe we do. I'm fine with that. But then we should stop talking about it in psychology, because it's not part of psychology. It's part of chemistry or physics. But presumably that's not what these people want to talk about. Yes, Gina. I don't think it's the randomness that's our problem. It's that we're finding it useful, and that's what we're studying in psychology: what use we're picking out of the randomness. That's why things can randomly change. So — and sorry, I'm not trying to be Socrates here — are you even okay with that? Because if it's randomness, then does it make sense to call an individual creative? Well, it's because they're finding the usefulness in the randomness. So they're not random then. They're reliably generating useful novelty. Oh, yeah. So if you want creativity to be something in psychology that you can attribute to people, you've got to tighten up what you're talking about. And this isn't ultimately a semantics debate class we're having right here. Do we want to talk about creativity as a psychological property that individuals possess? Then we're limited in how we have to define it. If we don't want to, that's fine — but then it's not a proper part of psychological discourse. I'm very comfortable with the idea that creativity implies that you have an end goal. But it could be that you are solving a problem without realizing it, without bearing the problem in mind. Sure. So what you're saying, which is something other people say, is that what happens in creativity is the goal is backgrounded. It's not foregrounded.
But then what you're talking about — and this is another important factor to bring up — is that now we're not saying the cognitive machinery is different so much as the motivational apprehension of what you're doing is different. And that's one theory of creativity: that basically you're using the same cognitive machinery, but your motivational framing of what you're doing is different than when you're doing goal pursuit. By the way, before that goes on, that's a big thing right now — there's a big distinction between telic and autotelic behavior. Go ahead. But even if the machinery is the same, the process has to be different. Because when you're trying to solve a problem, you have to have a mental representation of that problem. But in creativity, if that's the case, then you don't start with the mental representation. You start by making something new. Well, do you? That's not always true. In a sense — I come from a music background, so I know a lot of musicians who will hear symphonies and then start writing them down. So they absolutely start with the representation. Also, many people would argue — and this is going to be important too, because we're going to talk about it — that one of the most creative forms of music is jazz. But you can't be good at jazz until you've done a lot of formal music training. You can't just randomly screech things out. Okay. So, in order to make this work, we are going to have to take a look at theories of analogy. Most theories of analogy are responses to and variations on probably the dominant theory of analogy in psychology, which is Dedre Gentner's theory of analogy, known as the structure mapping theory. And once I explain to you what it is, it will become readily apparent how analogy might be the engine driving insight, and then this would give us an account of what creativity is. Creative people know how — they have the skills, right — to use analogy. And I don't just mean verbal analogy. I'm talking about pictorial analogy, pictorial metaphor, visual, acoustic, all kinds of things. There are analogies in music, as you know, etc., etc. So when I say this word, don't just think verbal analogy. It's a capacity to use analogy to drive insight. So: Dedre Gentner and the structure mapping theory. Alright. Let me first of all lay the theory out for you intuitively, and then show you how she formalizes it. Okay, so the main idea is that what you're doing in an analogy is you're trying to find the important relations within — okay, let's just use the standard terms — a source and a target. The source is what I know, or what I'm using, in order to try and change how I'm thinking about something else. So, let's make it concrete. You're Rutherford or Bohr, you're trying to figure out the structure of the atom, and what happens is, right, you use the solar system. So the solar system is the source; the atom is the target. It turns out to be wrong, by the way, as you now know — that little solar-system model of the atom is mostly wrong. It's kind of right in certain circumstances, if you do all kinds of hand waving. Okay, so let's first of all do it linguistically and then formally.
So, here's the idea — this is from Gentner's work — you have something like: the sun attracts the planets. I know it goes both ways, but let me just write it this way because it's easier. The planets orbit the sun, right, and there's EQ — that stands for equilibrium — between the attraction and the orbiting, or at least long-term pseudo-equilibrium. Okay, so there's the source; the target is the nucleus of the atom — not the biological nucleus — which attracts the electrons. The electrons orbit the nucleus, and there's equilibrium — sorry, I should do it in the correct order — between attraction and orbiting. Okay, so this is getting mapped over, this is getting mapped over, and then this whole thing is getting mapped over. We're going to formalize this in a second. So the idea is, when I talk about the structure, what's getting mapped is this relation, this relation, and then this relation of relations. Okay, so intuitively — we're going to make a more formal distinction so this doesn't get too equivocal — what's going on is I'm leaving the content of the solar system behind. All I'm picking up on are this relation and this relation of relations; there's a system of relations that is getting mapped over. I'm mapping this over, and of course this talks about orbiting and attraction, so I'm mapping orbiting and attraction over as well. So the idea — the theory, again, first intuitively — is that Bohr knows this, doesn't know that, has all the empirical data, and is trying to figure out how to structure it to make sense of the atom. And then, right, he makes this analogy, maps this structure over, and has the insight: oh, the atom is like the solar system. And that would count as an instance of scientific creativity, which of course it was, because he created an important theory. Okay, so now, how do we make this a little bit more formal? What Gentner does is talk about predicates and arguments — and then I'm going to suggest we change the second term. So this is a predicate, right, and then an argument is this particular object to which that predicate is applied. So that's a way of representing that the ball is round — is that okay? I think calling this an argument is a deeply confusing and stupid thing to do, because it's not an argument, it's an instantiation. I don't know why — maybe psychologists should just do a little bit more logic training, I don't know what to say about that. It's not an argument. At best it's a proposition: the ball is round. But a single proposition cannot possibly be an argument. "The ball is round" — that's not an argument. So it's not a good term. Okay, so we'll talk about predicates and instantiations. Then what you have is that attributes are predicates that can take one instantiation. Here's an example: large(X). I can put many things in there — large ball, large tower, large ego. You knew I'd get a Trump example in, obviously. Or some towers and Trump. Okay, so that's an attribute. A relation is a predicate that takes two or more instantiations. So, collide(X, Y) — there have to be two things that get instantiated for that predicate.
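As a minimal illustration of the two kinds of predicates (my own encoding, not Gentner's notation), here they are as Python tuples:

```python
# Sketch: predicates as tuples of (PREDICATE, instantiation, ...).
# An attribute takes exactly one instantiation; a relation takes two or more.

round_ball  = ("ROUND", "ball")            # attribute: round(ball)
large_tower = ("LARGE", "tower")           # attribute: large(tower)
collide     = ("COLLIDE", "truck", "car")  # relation: collide(truck, car)

def is_relation(pred):
    """A relation is any predicate with two or more instantiations."""
    return len(pred) - 1 >= 2

print(is_relation(round_ball), is_relation(collide))  # False True
```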
You can't just have one of those variables. Okay, so a predicate is when you attribute some property to something — is that okay? Sorry, we have to do a bit of logic now for the rest of the course, because that's what we're getting into. And you can't just have one thing collide; collision requires at least two things. Okay, so: an attribute can take one instantiation, a relation takes two or more. Next distinction: we have to make a distinction between first order and higher order predicates. First order predicates are predicates that take objects as their instances. Higher order predicates take predicates as their instances. They're meta-predicates in that sense. So, here's an example, again from Gentner — this is Gentner's example, not mine; you'll know why I'm emphasizing this when I put it on the board. I'm not going to comment on why she used it, she just did. Maybe she was having a particularly bad day. Collide: truck, car. Strike: woman, man. Okay. So these are first order predicates. Here's a higher order predicate: cause. This is a first order predicate — it's taking objects as its instances, yes. This is a higher order predicate because it's taking two predicates as its instances. Again: higher order predicates take predicates as their instances. So: attributes take one instance, relations two or more. Lower order predicates take objects as their instances. Higher order predicates are also by definition relations, right, and they take predicates as their instances. Yes. Okay. So: the truck collided with the car, which caused the woman to strike the man. That's her example, not mine. Makes perfect sense. Like I said, maybe she had a really bad drive into work the day she wrote this. Okay, so far so good? So, we've got predicates, the distinction between attributes and relations, and the distinction between lower order predicates and higher order predicates. And higher order predicates are always relations because they take at least two. Okay, is that okay? Let's keep going. So now, what you can see is I can turn all of this into a formal representation, and then you'll see what the structure is given that formal representation. So, this star stands for equilibrium, is that okay? So, if I go here, X stands for the sun and Y stands for the planets. But if I go here, X stands for the nucleus and Y stands for the electrons. And — now here's the idea — the same structure exists in both. What I have is a higher order predicate that is a relation between two lower order predicates that are relations. What's common between the source and the target is that structure. And what you do in an analogy is you find the structure in one thing and you map it onto the target. Okay. Is that all right so far? One person is responding. I can't go forward in this argument unless this makes sense to you, because if it's all opaque then nothing will matter. Is that okay? Yes.
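To make the board diagram concrete, here's the whole solar-system/atom structure in the same tuple encoding (a sketch of the idea, not Gentner's formalism). The first-order predicates take objects; the higher-order equilibrium predicate takes the two first-order relations themselves as its instances.

```python
# First-order relations: their instances are objects.
attracts_solar = ("ATTRACTS", "sun", "planets")
orbits_solar   = ("ORBITS", "planets", "sun")
eq_solar       = ("EQUILIBRIUM", attracts_solar, orbits_solar)  # higher-order

# The same structure instantiated for the atom:
attracts_atom = ("ATTRACTS", "nucleus", "electrons")
orbits_atom   = ("ORBITS", "electrons", "nucleus")
eq_atom       = ("EQUILIBRIUM", attracts_atom, orbits_atom)

def skeleton(pred):
    """Strip the objects away, keeping only the relational structure."""
    name, *args = pred
    return (name, tuple(skeleton(a) if isinstance(a, tuple) else "_"
                        for a in args))

# The skeletons are identical: this shared system of relations is exactly
# what the analogy maps from source to target.
print(skeleton(eq_solar) == skeleton(eq_atom))  # True
```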
Like, I get the general argument, but I don't understand the thing that you put on the board — like attract, orbit, EQ. Okay. So: the sun attracts the planets. Yep. The planets orbit the sun. And there is equilibrium between the attraction and the orbit. But I can go over here: the nucleus attracts the electrons, the electrons orbit the nucleus, and there's equilibrium between the attraction and the orbit. Okay, so the structure could be applied to both. That's right, the structure is exactly the same. Well, I understood that part, I just didn't understand the formula. You didn't understand how to interpret the formula. Yeah. Fair enough. I just taught it to you. Yep. Okay, is that alright? So what you're getting is: what's happening in analogy is you're mapping. Okay, yeah. I don't need to keep this up anymore, do I? Yes? Wait, so why are X and Y switched? Because here the nucleus attracts the electrons, and here the electrons orbit the nucleus. Oh, okay. It's just the order. That's how English syntax works. If it was German, we wouldn't care about the order — we'd have inflections at the end. But we don't want to do German. It's just the language. Inflected languages are really hard to — does anybody know an inflected language, like German, as fluently as English in this class? I just want to know what it's like to try and translate an inflected language into symbolic logic. Because inflected languages don't rely on word order to specify logical relations. Yeah. Pardon me? Okay, so — have you learned symbolic logic? Oh. I know German, and I know logic, but never the twain have met. No, but I'm just thinking, even when you're translating English into the formalism — it's hard. Well, it's hard because English is weird. But I'm just saying the order usually doesn't stay the same anyway. Like, sometimes if you want to express a complicated English sentence, you reorder it, and the formalism looks nothing like the order in the English sentence. That's true. I was just wondering if there was a difference in the difficulty of the translation problem given a word-order syntax versus an inflected syntax. I don't see why there would be, but I haven't tried formalizing German. Well, that's why I was asking. You just learned the one that has the word-order structure. That's it. I'm not understanding you at all. You don't understand? You just learned the English order, and that's it — you don't try to directly translate one into the other. Oh, I see. I understand that. But I'm just wondering what it's like to go from an inflected language directly into symbolic logic. Because the translation problem is the hardest part of logic. Because that's the ill-defined part. That's the part they basically test you on. They teach you all this stuff in logic class, and then when they test you, it's ultimately translation. You have to do some derivations and proofs and stuff like that, but it's the translation that's the tricky part. At least that's what I found. I think you could just assign an X or a Y to each of the roles, like object, subject, because those have a known order. Oh, sure. I think that's what I would do. Sure. Nothing important hangs on this. I was just wondering. It just occurred to me. I shouldn't have said it. I will never say it again. Okay. So now here's how Gentner uses this — or maybe I'll be more neutral, claims to use this — to solve a problem that we've seen before: the selection problem.
So one of the things you can ask is: well, why did Bohr choose the solar system for the atom? And you might say: well, because there's a similarity between them. There's a problem with similarity, and some of you know this from other courses. It goes back to Nelson Goodman. Okay, so you know this from Sesame Street: one of these things is not like the others; three of these things are kind of the same. So similarity is being kind of the same, which means partial identity. Things are similar to the degree to which they share some but not all properties. The more properties they share, the more similar they are. That's the standard interpretation of what similarity means. And so the logic of similarity is: the more properties you share, the more similar you are. And then Nelson Goodman argues that similarity is completely vacuous. It explains nothing. Because any two things share an indefinite number of properties. Take a chair and a buffalo: both existed after the time of the dinosaurs, both often have four legs, both are made of organic material, both are found on the earth, both have acquired cultural significance, both figure in metaphors, both have been used as political symbols, neither one is carnivorous, neither one weighs less than a paperclip, etc., etc. The number of true properties that I can attribute to both of them is combinatorially explosive. Which, just so you know, is why one of the things they test you on in psychometric testing is similarity judgments. Because what's actually being looked for is not the logic of similarity, but your ability to zero in on the relevant factors of the comparison. So in addition to the logic, there's a psychological process of selecting which of all the shared properties are the ones you're going to give your attention to. So the point about similarity is that it requires some selection process. If you're going to compare two things, you can't compare all of the true properties they share. You have to make a selection. There has to be some selection mechanism. Gentner proposes a selection mechanism from her structure mapping theory. She proposes that the selection of the relevant factors of comparison — which we would call predicates in Gentner's system — is governed by what she calls systematicity. Systematicity is an ordering of preferences for what predicates you should pay attention to. You're basically giving an ordering of the types of predicates that should get preferential treatment when you're directing your attention. Okay, so what does systematicity say? You should prefer higher order predicates over lower order predicates, and you should prefer relations over attributes. Higher order predicates have systematicity because they are a system, right? They're predicates of predicates. That's where she gets the term systematicity. So the higher order the predicate, the more it should be preferred, because the more systematic it is. And for the same reason, you should prefer relations over attributes. Now, what's the reasoning? The idea is: if I bring a higher order predicate, I bring all this structure with me that organizes — it's very efficient and effective; it organizes a lot of information very readily. So it's a very tasty thing for your mind.
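Here's a toy version of that preference ordering (my sketch of the stated rule, not Gentner's implementation): score each predicate by how high-order it is and by how many predicates it binds together, and prefer the bigger score. It reuses the tuple encoding from the earlier sketches.

```python
attracts = ("ATTRACTS", "sun", "planets")
orbits   = ("ORBITS", "planets", "sun")
eq       = ("EQUILIBRIUM", attracts, orbits)  # higher-order predicate
large    = ("LARGE", "sun")                   # bare attribute

def order(pred):
    """1 for a first-order predicate; 1 + the deepest child otherwise."""
    _, *args = pred
    return 1 + max((order(a) for a in args if isinstance(a, tuple)), default=0)

def size(pred):
    """Total number of predicates bound together in the structure."""
    _, *args = pred
    return 1 + sum(size(a) for a in args if isinstance(a, tuple))

def systematicity(pred):
    return (order(pred), size(pred))  # compared lexicographically

# The equilibrium structure outranks the bare attribute for attention.
print(systematicity(large), systematicity(eq))               # (1, 1) (2, 3)
print(max([large, eq], key=systematicity)[0])                # EQUILIBRIUM
```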
I feel like I'm giving a metaphorical description of a theory of analogy. So the idea is, what's happening is the following. The mind is trying to make comparisons, looking for something that will provoke an insight. It starts searching, and if it finds a comparison that has high systematicity, it will attempt to map it. If it can map it, then what has that done? From the source we have a system of predicates that gets mapped over. It gives us a way of structuring the target. And of course you should say to me: that's probably going to help problem formulation. You bet. It's going to help restructure how you're setting up your problem. It would help Bohr restructure the problem of how to best represent the atom. And that is the way in which analogy would drive insight. So we've got what she calls the structure mapping engine — everybody likes to use the word engine, not just me. The structure mapping engine is a cognitive process trying to make comparisons. It looks for systematicity, grabs what's most systematic, transfers it over, and that causes the restructuring that is central to insight. And the idea would be something like this: creative people are very good at doing that. They are very good at finding systematicity and then doing the structure mapping. I'm going to criticize this a lot, but first of all, do you understand what's going on in it? Yes? Is it sort of saying that your problem has one problem formulation, and then you search for all these possible analogies, and then you adopt the system of the analogy into your problem, thereby changing the formulation? That's exactly right. You translated what I said perfectly. So it's basically saying that you don't first find the analogy that has the same structure, but rather you find an analogy and then fit your problem to it. So what you do is you find one and you go: ooh, here's a shared predicate, and over here this predicate has high systematicity. I should import it here, and then I'll get high systematicity in my problem. Does that make sense? You said systematicity is an ordering of preferences for what predicates you should pay attention to, and that you should prefer higher order over lower order — because they have systematicity. Isn't that circular? Well, it would be if I didn't give you an independent account of why systematicity is useful, which is what I'm trying to do on her behalf. Okay. So systematicity just means that when you find a higher order predicate, you give it preference, and then what will happen is a whole system of predicates will come over. Why do you give it preference? Because it imports a system of relations, which gives you a very powerful way of restructuring your target. How do you realize that? Okay — how do you realize which? Because I think you're putting your finger on a problem we're going to discuss. So: I have some property — I know at least something, I can state at least one predicate of the thing I'm trying to understand. And then I'm searching through my memory bank, and I go: here's another thing that shares that predicate. And over here, that predicate is part of something highly systematic — a predicate of predicates that structures the whole phenomenon. So I then import this over, because of the shared predicate, and the rest of the structure comes with it. Wait, how do you know the one over here? Because you already understand, for example, the solar system.
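Here's a sketch of just the transfer step — an illustration of the idea, not the actual Structure Mapping Engine. Note the `dict(zip(...))` line: the alignment of objects between source and target is simply assumed, which is exactly the relevance realization work the criticism below says is being presupposed.

```python
SOURCES = {
    "solar_system": ("EQUILIBRIUM",
                     ("ATTRACTS", "sun", "planets"),
                     ("ORBITS", "planets", "sun")),
}

def predicate_names(pred):
    name, *args = pred
    yield name
    for a in args:
        if isinstance(a, tuple):
            yield from predicate_names(a)

def reinstantiate(pred, mapping):
    name, *args = pred
    return (name, *(reinstantiate(a, mapping) if isinstance(a, tuple)
                    else mapping.get(a, a) for a in args))

def transfer(known_predicate, target_objects):
    """Find a source sharing the known predicate; import its whole structure."""
    for structure in SOURCES.values():
        if known_predicate in predicate_names(structure):
            mapping = dict(zip(("sun", "planets"), target_objects))  # assumed!
            return reinstantiate(structure, mapping)
    return None

# Bohr knows one predicate of the target -- the nucleus attracts the
# electrons -- and the shared ATTRACTS predicate pulls the whole
# equilibrium structure over from the solar system.
print(transfer("ATTRACTS", ("nucleus", "electrons")))
```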
And you know that it has this systematic relation within it. So it's a system of understanding? Right. But that's a vague way of talking, to be fair to Gentner. She's trying to give you a more formal, clearer meaning of what it is to say that you have a better understanding. So it's still kind of circular. In one sense it is. And so... I'm confused by myself, so I don't know where to go. Well, don't totally give up, because there's a problem I think you're putting your finger on. We'll get to the problem. But first of all — see how this is a theory of creativity. It says creative people are basically good at searching for systematicity, which then allows them to import a powerful structure that lets them structure something that was previously not well structured. They create intelligibility, perhaps where there was none or little before. They create a problem structure, perhaps to replace an ineffective one, etc. Now, many people have pointed out that there are some difficulties with this. Palmer made this criticism in 1989. And then somebody I wrote some papers with early in my career, Dan Chiappe, also made it in 1997, and he went on to do some empirical work with John Kennedy about this. The problem — and this goes back to the issue I was talking about earlier, the translation issue — is that the same property can be represented by either an attribute or a relation. See, this whole formalism depends on the distinction between an attribute and a relation. And I sometimes think that a lot of debates in metaphysics hang on this undiscussed cognitive move. So here's an attribute. And of course, I can equally re-describe it as a relation. You can do that for almost anything: turn relations into attributes, attributes into relations. Now, when you do that, your systematicity is going to change, because everything is going to shift around. The whole notion of systematicity is dependent on the distinction between attributes and relations, and there's nothing in reality that distinguishes those. We can describe any property as an attribute or a relation. Do you understand? That's important, because she's made a logical formalism and I'm making a logical point. That's completely legit. Now, Gentner knows this. It was pointed out to her by Palmer in 1989. I'm going to read you a direct quote — this is Gentner and Clement, 1989. She admits the point but states: "Our interest is not in all the ways a domain could logically be represented, but in how it is psychologically represented at a given time." Now, this is a disaster move. Because what she's saying is: in addition to the logic, there's some psychological process that selects how things are going to be represented. Why is that a disaster? Because this machinery was originally supposed to address the difference between the logical representation of similarity and the psychological selection. What she's said is: you know how you make the psychological selection over and above the logical representation here? Over there, you make another psychological selection over and above that logical representation. Which is a nice infinite regress. Now, what am I not saying? Obviously we do it. So she's saying something true — that's not the point I'm making. It's obviously true that there are psychological processes that determine whether you're going to describe it this way or that way. This is if you're Plato; this is if you're Aristotle, by the way. Really. Here's the form. Nope — here's the hylomorphism.
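The re-description problem in miniature (my illustration, with made-up predicate names): the same fact encoded as an attribute or as a relation. Nothing about the world changes, but the systematicity ordering does.

```python
heavy_attr = ("HEAVY", "boulder")                         # attribute form
heavy_rel  = ("HEAVIER_THAN", "boulder", "typical_rock")  # relational form

def arity(pred):
    return len(pred) - 1  # 1 = attribute, 2+ = relation

# Under the "prefer relations" rule, the second encoding now gets priority
# for attention -- yet which encoding you chose was a psychological act
# that the formalism itself does not explain.
print(arity(heavy_attr), arity(heavy_rel))  # 1 2
```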
And Western civilization has been an interminable argument as to which metaphysics is ultimately correct. I think the answer is neither. This is why. Okay? So, you can't get out of it this way. Your explanation for how you overcome a logical problem with a psychological process is to say: well, I overcome a logical problem with another psychological process that I don't explain to you. That's not an answer. Okay. Now, what I'm starting to point towards — and maybe you guys are already seeing it, and this could perhaps be what you meant by some of the circularity — is that what's lurking inside this theory of analogy is an unexplained capacity for problem formulation: how to best represent the phenomena so they can be transferred and used appropriately to solve your problems. Because that's what she's talking about. She's talking about: well, I psychologically choose the best representation so that I can transfer it and solve a problem that I couldn't solve before. What I'm trying to show you is that at the heart of the explanation of analogy is this phenomenon called insight: transfer for appropriate processing. We should be able to see issues similar to those we saw in insight. We do. Another problem with Gentner's theory, also pointed out by Palmer in 1989, and also by Dion — a different Dion, not Celine Dion — also in 1989, is known as the grain size problem, also known as the resolution problem. See, Gentner's example, in my presentation of it, was sneaky. Because what I did is I picked a level at which to represent the phenomena. I picked a level of abstraction and generality in my predicates that made what happened in the solar system equivalent to, and mappable onto, the atom. See, if instead of saying attraction I do this — gravitational attraction here, electromagnetic attraction there; and I know it's more complicated than that, there's the strong nuclear force and the weak nuclear force, though we now think the weak force is actually unified with electromagnetism, so that's fine; these should all be capitalized, okay, but I'm getting tired — if I do that, there's no shared predicate. But that's true, right? In fact, that's a more accurate description than attraction. The mapping only occurs because I get the right level of abstraction in my predicate representation. But that means I've got to get the right positioning between the gestalt and the featural — all that stuff we've been talking about. The right scale, the right scaling of attention, is being presumed here, not explained. Okay, so these problems were pointed out: that you have all of this machinery of formulating the information for appropriate transfer and appropriate problem solving. And Gentner responded as to how she actually saw the structure mapping engine working. This is a quote: "The structure mapping engine operates by first finding all possible relational identities between the base and the target. It then assigns to each of these match hypotheses an evaluation based on the structural closeness of the match and on a kind of local systematicity, whereby a given pairing of matching predicates is assigned a higher evaluation if the parents also match." So she's basically saying you're doing sort of top-down and bottom-up processing while searching everything. Now, I hope I've taught you well enough to see that that's not how it can happen.
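The grain size problem in miniature (predicate names are illustrative): at the coarse grain the predicates match; one more accurate level down, they don't. The mapping succeeds or fails depending entirely on the level of abstraction you chose before the engine ever runs.

```python
coarse_solar = ("ATTRACTS", "sun", "planets")
coarse_atom  = ("ATTRACTS", "nucleus", "electrons")

fine_solar = ("GRAVITATIONALLY_ATTRACTS", "sun", "planets")
fine_atom  = ("ELECTROMAGNETICALLY_ATTRACTS", "nucleus", "electrons")

def names_match(p, q):
    return p[0] == q[0]  # predicates align only if their names are identical

print(names_match(coarse_solar, coarse_atom))  # True: the mapping goes through
print(names_match(fine_solar, fine_atom))      # False: more accurate, no mapping
```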
So what she's presupposing in all of this is the relevance realization machinery, the attentional machinery, the ability to formulate something well so that it can be transferred in an appropriate manner to solve an unsolved problem. Namely, she is presupposing all of the insight machinery that we've talked about in this course. What I'm trying to show you is that this is not the relation: analogy is not a parent to insight — to use her own language of parents. They are siblings. They interpenetrate each other. There's as much insight going on in analogy as there is analogy going on in insight. So this would devolve into the claim that creative people are good at insight — which would be to lose the very distinction that you were trying to establish. We were trying to establish the distinction between creativity and insight by using analogy: what distinguishes creativity from insight is that in creativity, analogy is driving insight. But what we found is that analogy is not reliably distinct from insight. And if analogy is not reliably distinct from insight, you can't use analogy to distinguish creativity from insight. Now, I spent a lot of time on this because this theory has a lot going for it. I mean, you can show empirical evidence that people do prefer systematic predicates, once you give them a description of the predicates, once you give them a representation at the right level, etc. But that's not my primary reason. My primary reason is that many theories of creativity are ultimately versions of this: we find a similarity, we find an analogous connection, blah blah blah, then we transfer, and then we get the insight, and that's what creativity is. And what I'm trying to suggest to you is that this is the best version of that theory that we have, and it doesn't do the job — just for logical reasons. I didn't even have to present any empirical evidence to you. Just for logical reasons, it cannot do the job of distinguishing creativity from insight. It just can't do it. It's not set up the right way. So what do we need? What else might be at work? I think the area that is probably the second most prominent — and it came up quite rapidly in the discussion with you — is to look for motivational differences between creativity and insight. Many of you were trying to do that: I'm not goal oriented, or I'm goal oriented, or I care about it; this is backgrounded and this is foregrounded. And I'm not saying that there aren't differences, but I'm lumping them together, because that is another main family of theories about creativity: that what distinguishes creativity from insight is some very significant and relevant motivational differences. This also has a long-established research tradition attached to it. Okay. So there was a conference on creativity held — I'm just looking through my notes because I've gotten ahead of myself — in 1962, in the United States.
The original proposal was the idea of detached devotion — which sounds like the beginning of the ending of your relationship. Detached devotion was supposed to be the correct attitudinal and motivational relation that people have when they're creative. They're detached from what they're doing, but they're also devoted to it. So the idea is you have intense passion and commitment that is balanced by a capacity for critical detachment. And that makes you more capable of insightful problem solving, or of generating or finding instances that people find insightful, etc. So Crutchfield took that idea, also in 1962, and made a distinction between ego-involved or extrinsic motivation — he framed it in motivational terms. He's the person that took this vague idea from Henle and the conference and framed it in motivational terms. So: ego-involved or extrinsic motivation — two words for the same thing — versus intrinsic motivation or task-involved motivation. So on one side is ego-involved, extrinsic; on the other side is task-involved, intrinsic. And the main idea was that what happens in creativity is people are able to become intrinsically motivated: they are task oriented rather than goal oriented and ego involved. And some of you articulated that, which is why I was not trying to debate you too much, because this idea is almost as old as me. I'm only one year older than it. So this idea has been around for a long time in psychology. Now, there was quite a bit of work going on about this, and the problem is this became a problematic way of talking, because we are not really clear on what the term ego means. And a good rule of thumb is: don't use a more muddled and controversial thing to try and define your muddled and controversial thing. I'm going to define creativity with this equally muddled and controversial construct, the ego? So what happened is the emphasis fell on extrinsic versus intrinsic motivation, and then that was defined in terms of whether you are goal oriented or task oriented — which is, to be fair, the language many of you were using today. Okay, so this was taken up by Teresa Amabile. You had a lot of discussion through the 60s and the 70s, and then, like I said, it gelled in her work, and she developed what she called the intrinsic motivation hypothesis. She put it into a testable form — a hypothesis. I'll read you the hypothesis, and then we'll define some of the core terms within it. The intrinsically motivated state is conducive to creativity, whereas the extrinsically motivated state is detrimental. Now, it's interesting, because this has kind of eerie parallels with propositional processing versus procedural processing. It's not the same at all, but it's structurally analogous, right? The intrinsically motivated state is conducive to creativity, whereas the extrinsically motivated state is detrimental. So what's intrinsic motivation? Intrinsic motivation is the motivation to engage in an activity primarily for its own sake. So — sorry, I don't mean to be graphic, but the prototypical example that's usually given is sex. Sometimes you engage in sex extrinsically, because you're trying to have a kid, and man, talk about something that'll really kill your sex life. As soon as you're extrinsically focused, it removes a lot of the creativity from the endeavor.
I'm trying to speak as neutrally as I can. So: the motivation to engage in an activity primarily for its own sake, because the individual perceives the activity as interesting, involving, satisfying, or personally challenging; it is marked by focus on the challenge and enjoyment of the work itself. Okay, that's intrinsic. Yes? But don't people also have sex for pleasure? Sure — but pleasure is something that's taken in the task. You take pleasure in the task; you don't take pleasure in the result produced by your activity. Like, when you're done and it's all over, you don't go: now I am satisfied, because the goal state of post-sex has been achieved. The pleasure is in the act. You're interested, involved, getting pleasure from the process itself. Okay, so eating junk food would be intrinsic, in other words. Yeah, right — and eating kale is presumably extrinsic. But you could have things that are both, right? Yes. Give me an example, because I agree with you, and we're going to come back to that point shortly, but I want your example, because maybe it's a better example than mine. Well, if you're working on a project, but you pick a topic you really like — so you obviously have extrinsic motivation to do the project, but you really enjoy the task itself. There are a lot of things like that in life, working towards goals like that. Okay. So, to be fair to Amabile, she's not denying that that exists. Her hypothesis is about how creative you will be. And her hypothesis is: the more intrinsically motivated you are, the more creative you'll be; and the more extrinsically motivated you are, the less creative you'll be. Okay, yeah, just one thing — you could have a balance of the two, and then it's like halfway in between. Kind of — it's not that easy, but yeah. Now, there's another way of framing this that will be a little bit more problematic for Amabile. But the way you're framing it isn't a challenge to her, because she's saying: I don't deny the existence of those tasks, I'm just making a prediction about how creative people will be in those tasks. Which is, I think, fair to her. Michael, did you want to say something? I thought your hand was up. Well, there are like three different levels between extrinsic and intrinsic — like introjected, and stuff like that. Well, we'll get to that. So they have made distinctions like that. Yes. Where does pressure come in on this, for people that work well under pressure and under stress? Well, that's not quite right. You work well under certain levels of arousal that have been properly framed by you. But the cramming mythology likes to turn that into the simplistic construct of pressure. Pressure. Right. Because it's just not the case that cramming is effective for achieving the goals you want. And I know some of you are saying: oh no, you're wrong, it works for me. Sorry. It's just not true. What about studying? What about anything else — if you know that you have to do something for something? So: continuous study is way better than cramming. Every single time we put them into competition: continuous. So, start studying early. I had to do this with Jason. He starts university first year. You know, he's getting 90s in high school. I said: your marks are going to crash, and you have to start working differently. You have to do continuous study.
From day one, start reading your material, making questions, going home, practicing, generating questions and answering them. Do that every day. Continuous study. Don't just study the two or three days before the test. He didn't do it. And I kept coming back to him: Jason. So finally he starts doing it, and his marks start going up. And he comes to me, and I said to him: do you not know what I teach? You took 250 with me. In fact, I taught you this in that course. See, the problem is, our study strategies are not motivated just by the goal of optimizing our academic success. Our study strategies are also motivated by the goal of alleviating our anxiety. And typically you alleviate anxiety by moving to more primitive, familiar functions. Right? And that's why people think cramming works — just like people think that smoking relaxes them. It relaxes them because it initially stresses them. So, do you see what I'm saying? The notion of pressure, I think, is too simplistic. We have to break it up into all these other variables I'm talking about. Am I right? Now, I think you have an intrinsic motivation to reduce anxiety, which is separate from your often extrinsic motivation to get a good mark in a course. And the problem for many of us when we're trying to study is that the intrinsic motivation to reduce anxiety — you just want to do it for its own sake — can overwhelm the extrinsic motivation to get a good mark in the course. So you do avoidance behavior, and then you do cramming as a way of trying to address the anxiety. Sorry — that was just an opportunity for me to again say why you should do continuous study rather than cramming. And the research shows that you are all now ignoring me. So there are aspects of this endeavor that are futile. Okay. So, obviously, you can gather what extrinsic motivation is: the motivation to engage in an activity primarily in order to meet some goal external to the work itself, such as attaining an expected reward, winning a competition, or meeting some requirement. It is marked by a focus on external reward, external recognition, and external direction of one's work. Okay. So, what happened in the 80s is that Amabile was able to generate quite a lot of experimental evidence to support the intrinsic motivation hypothesis. When people are intrinsically motivated, they do seem to be more creative, and when they're extrinsically motivated, their creativity seems to go down. Now, what happened then is there were some challenges to this. Two types of challenges were emerging. One was a question about whether or not this was a clear construct. And the other was a demand for an explanation of why it should work. Why should it be that intrinsic motivation improves your creativity whereas extrinsic motivation is detrimental? In and of itself, the hypothesis isn't very explanatory. People nod and go, yes — but that nod usually indicates familiarity with what we've talked about rather than an ability to provide an explanation of why intrinsic motivation would facilitate creativity and extrinsic would be detrimental. Okay.
So, in answer to the first problem, people started to point out that there were certain things that it wasn't clear whether they were intrinsic or extrinsic motivation. One of them was one of the things Amabile used in her definition of extrinsic, which is achievement. Is achievement intrinsic or extrinsic? Like, when you're achieving something, isn't that something intrinsic to a task? You're achieving what you're doing. But it can also mean something extrinsic, like you've achieved some status. So the notion of achievement was very vague. Now, in order to try and get clearer about that, the second issue was addressed. And the most plausible mechanism proposed by Amabile for why this works wasn't so much about how intrinsic motivation facilitates creativity — it was about why extrinsic motivation is detrimental to it. And this was the idea that extrinsic motivation divides attention. You're spending less time attending to the task, because you're spending more time attending to the external goal state that you want to be in. Okay, now, that's Amabile. Amabile is now done speaking for a moment; Vervaeke is going to speak. I'm not trying to put words into Amabile's mouth, but I'm saying there's stuff we could potentially do to strengthen this. The more you're focused on the external goal, the less chance you will step back and look at the problem formulation of your task. Right? The less chance you'll have for the transparency-to-opacity shift. So — this is not Amabile; this is using stuff we've developed in this course. The less chance you have of looking at the problem formulation of your task, the greater the chance it is going to operate automatically. And automaticity could significantly reduce creativity. It would significantly reduce the capacity for insight and innovation. So, if I increase automaticity and reduce the possibilities for insight, that could plausibly be why creativity is reduced. Again, Amabile didn't say any of this. I'm trying to make her case stronger for her. All she said was: divided attention. Now, in another sense, that's also part of it, because we know that simply dividing attention tends to reduce performance. But that would be a general effect. Why might it more specifically reduce the creative aspect of your performance? Yes? Would it also take away from your cognitive resources, like working memory, while... Well, that's just the general idea. That's what I meant about the general effect of dividing attention. Dividing attention generally reduces cognitive performance — it's a cognitive load phenomenon. And I admit that's part of it. But we would want to make that explanation more specific, so that we could specifically explain why creativity is being reduced. And that's why I was bringing in this more specific machinery of increased automaticity, decreased cognitive flexibility. Is that okay? Yep. Okay. So, we could buttress what she's saying that way. Okay? So, some people picked up on that. Sternberg and Lubart.
Yes, this is the same Sternberg that does all the work on insight and other stuff that I mentioned before. So, rather than calling it intrinsic and extrinsic, they talked about task-focusing versus goal-focusing motivation. And they got this published as a paper, if you can believe it. Really? Isn't that just kind of what Amabile meant by intrinsic and extrinsic — task focused versus goal focused? Anyways. Amabile then also came back, in 1993, and made a distinction between types of extrinsic motivation. So she refined it: she wanted to say that not all extrinsic motivation is detrimental. Okay. So, why? What kind of extrinsic motivation could be beneficial? Well, if I take my eyes off the task and look at the goal, but that gives me critical feedback that I then take back in order to reformulate my problem, that could actually be beneficial to my creativity. So you can use extrinsic motivation in order to reformulate how you're doing the task. You get critical feedback. She called this synergistic extrinsic motivation: it provides information which acts in concert with and supports intrinsic motivation. So, if you'll allow me an analogy, it's kind of like the returning act hypothesis: you leave, and you come back capable of reformulating your task. Now, obviously, what's left over is the non-synergistic motivators. There's non-synergistic extrinsic motivation. This takes your eyes off the task, focuses you on the goal, but does not afford you any kind of critical information for reformulating your task, for de-automatizing it. You just worry that you're not at your goal state. Yes? So, could you repeat what you said about synergistic motivation? So, synergistic extrinsic motivation is actually synergistic — it helps creativity — because although you've taken your attention off the task, by looking at the goal you're able to come up with critical feedback that you then use to reformulate your task. And that improves your involvement with your task, increases your flexibility perhaps, etc. Kind of like incubation? Yes, kind of like incubation. That's what I meant with the analogy. Okay. So, around this time, what happened is that people were starting to look more at — and this is the language that was used; I'm not imposing this language on it — not the framing of problems, but the framing of motivation. Because what we're actually talking about here is where we're directing attention, what is being foregrounded, what is being backgrounded, and how much flexibility we have in that. So what started to happen is that people were trying to come up with accounts of what you might call the cognitive aspects of motivation. One of the lines we're going to talk about in the course — we'll just introduce it today, and then we'll move on next week — is the work of Michael Apter in what's called reversal theory. Now, what's interesting, and the reason why I pick him, is that there are a lot of important connections, but he also starts from the Gestalt heritage, like one camp of the insight problem-solving tradition. He explicitly does so. Now, he has what he calls a metamotivational theory. The basic idea is that motivation is a framing of your level of arousal. And that framing of your level of arousal has an impact on how your attention is directed, which then has an impact on your problem solving. And this would give a more comprehensive motivational theory of creativity.
So, we only have five minutes, and to go any farther than this would be irrelevant. I can only stay until four o'clock, so I'm going to end here, and I'll take any urgent questions. But what we're now moving towards is: what is this theory? And the basic idea is you have different metamotivational modes, and only one of those modes is actually really conducive to creativity. So, what we're going to see is how you frame your level of arousal significantly impacts how you direct attention, which then has a huge impact on your problem solving. And that would then be a motivational theory of creativity. So, creative people can motivate themselves in the right way to get themselves into the right framing of their own arousal, so that they direct attention in a way that is optimal for doing what they need to do. Yes? That means consciousness. Pardon me? That means consciousness. I don't think so. Let's talk about that next time. That's a good idea. I don't think so. Alright, thank you for your attention.