https://youtubetranscript.com/?v=Bn8i5uhOAu0
So what I would like to do today is to spend time looking at some of the evidence for the kind of self-organizing processes that are at work in insight, and then start talking to you about this alternative framework that we've been hinting at and looking at, namely dynamical systems theory, and then also start to connect that to what is still missing from all of this, which is the machinery and mechanisms of selection. If what's going on in insight is not the inferential change of belief in terms of its representational content, but the attentional change of what's salient in terms of its procedural efficacy, then we need a different account of what's going on in insight, and of how that account could line up with the transformation of salience, the self-organized, in that sense spontaneous, alteration of what the system considers relevant. So this would be to connect the machinery of insight ultimately to the machinery by which relevance is being realized by a cognitive system. I'm not quite sure we'll get through all of that today.

Where that's going to lead us is that we'll then take a look at the question of the connections between insight and creativity, and there we will talk about the flow phenomenon, intuition, and metamotivational structuring, not of your cognition but of your affective responses, and we'll look at Apter's reversal theory. Along the way we will also take a look at whether or not we can understand creativity in terms of analogy that provokes insight, which is a very traditional idea: that what makes creativity what it is, is the use of analogy to provoke insight. So those are the topics. And then after we've done that, although we could go on and on, and I mean I have, I've done work on this for about 20 years now, we will stop talking about all of this and shift to the reasoning part of the course. I've given priority to thinking because I think it is the more important phenomenon overall, but we'll start talking about that inferential manipulation of belief and what kinds of things are going on there. Initially that will seem very disconnected from our whole discussion of problem solving and insight, but as we move deeper into the machinery of inference, we will see that that machinery actually starts to overlap more and more with the machinery of problem solving and insight, because of the nature of the kind of reasoning that human beings by and large do. Okay, so that's the rest of the course. I'll unpack it for you. No, that's it, go home. Alright.

So, what's going on is this idea that, like I said, we've got the attentional manipulation of salience in terms of procedural efficacy, rather than the inferential manipulation of belief in terms of its semantic or representational content. And we've made a case for that, trying to get more specific about what that means, to give an alternative to the theoretical vagueness of the Gestaltists. And I've talked about what these attentional processes look like and some of the empirical evidence for them, but what is needed is an overarching theoretical alternative. And part of what I foreshadowed last week is that I don't think the debate is getting resolved so much in favor of one of these two positions over the other, namely search inference versus the Gestaltists.
I think a third position that synthesizes aspects of both is coming into prominence, which is often what happens. I'm not a Hegelian and I'm not claiming that Hegel's right, but this is a pattern that does occur quite often in science: you get a synthesis rather than a simple resolution in favor of one theory over another.

Okay, so what would it be to look for self-organizational processes within insight? Well, a person who was very prescient on this, and some of you know this from 371, we'll talk more about his work there, is Perkins, who in 2002 was very interested in trying to explain a huge aspect of our intelligence that didn't seem to be captured well by the predominant computational models within cognitive psychology. He was particularly interested in our very fast and fluent coping with the environment. So, you know, if I'm moving around the room and I have to navigate and shift and do things, I'm making all these adjustments. And part of what a lot of people were realizing, and this is part of what's known as third generation cognitive science, is that you have to pay a lot more attention to the aspects of cognition that are embodied and embedded, because they turn out to be very, very fundamental. So getting something that can move around the room while undertaking tasks, and then adjust as the context shifts and adopt new goals, all of this turns out to be very hard. Now, we're making good progress on it; there's nothing deeply, darkly mysterious about it, but nevertheless it turns out to be a huge part of cognition that had tended to be ignored by models that were classically computational in nature.

What Perkins was really interested in was a phenomenon he called emergent activity switching. This is your ability, in a very bottom-up, contextually sensitive fashion, to switch the activity that you're engaging in, change it, alter it, restructure it, to adapt to the changes in the environment or the context within which you find yourself. Following on other people, and I give him a lot of credit for this, he pointed to a mathematical model that was emerging, one that was trying to explain these kinds of switches in activity in natural systems. Now, let's note right away how, even at this abstract and therefore somewhat vague theoretical level, this would be pertinent to insight. He's talking about something that seems to be emergent, in which there's a sudden shift or switch in your activity, which of course happens in insight: you restructure or reformulate the problem. It seems to happen in a very bottom-up rather than directed fashion. And it seems to be much more procedural and skill-like in nature than theory-like in nature, which is exactly what he was talking about. Now, he wasn't talking about insight. He was talking about this more general phenomenon of emergent activity switching, how a lot of our behavior is not so much directed as participatory. What we seem to do is become involved in, and this language is initially vague, I get that, because I'm speaking phenomenologically and we'll try to make it more precise as we move to a functional level of discussion, but I seem to get involved in processes that are somehow self-organizing, that I participate in. And what I'm suggesting to you is that a prototypical example of that is something like insight. You can't just say, oh, it's 1:22, time for an insight. It doesn't work that way.
And of course, this would be even more prototypically the case for the, at least within the context of this course, controversial idea of creativity. Creativity has always been portrayed, at least in the romantic notion of creativity, which is again something we'll talk about, as something that we participate in rather than something we direct. Okay, so all of that at least is an initially good sign.

And then what he talked about was what's known as self-organizing criticality. Now, we're going to come back and talk a lot more about this when we talk about dynamical systems theory. The trouble is I can't do all of these terms simultaneously with you, so that means I have to move through one and then move through the other, and so some aspect of it will initially be, well, what exactly? And part of that is an important point: this is a very vague notion. Okay? I think in order to make it less vague, we have to re-situate it into its historical origin and context and work forward from that. We've got to get a better idea. And one way of doing that is to try to be more specific about the component elements of what we mean by self-organizing. So one of these is this notion of criticality, and criticality is also going to be related to the idea of opponent processing, where there are processes that are working in different directions from each other. Okay, so let's go very carefully and keep moving. Before we get all the way back to self-organizing criticality, note that it has a useful acronym, SOC.

Okay, so let's take an example of self-organization that uses opponent processing and that is non-controversial within psychology. It will be important because it will be an instance of emergent activity switching and of this ongoing, contextually sensitive coping with the environment. And this has to do with your level of arousal. I also need this because we're going to be talking about metamotivational structures, which are ways in which your cognitive system interprets your level of arousal, because depending on how you frame a situation, you will interpret the same level of arousal in different ways. So we need both of those. We've got to talk about arousal. Sorry, that's an odd word to use a lot. And please, try not to hear arousal with prurient ears. I'm not here doing soft porn with you or anything like that. I'm talking about metabolic arousal: how much of your potential energy is being converted into actual energy, and how you're disposing and comporting yourself with respect to that. I get it. Sometimes it's sexual, but most of the time it's not. Okay? So I can say, and you'll probably still laugh, you're all aroused, but I'm hoping I'm not saying anything that is in any way sexually arousing for you. Okay? See, some of you still laughed, as predicted.

Okay, so. Now, you know this. You know what deals with this, and although obviously your directed cognition can influence it, you're largely participating in your arousal. Your arousal is driven by, or at least regulated by, which is perhaps a better way of putting it, your autonomic nervous system. Now notice the term autonomic, which means self-governing. It's an inherently self-organizing process. It's very neat, your autonomic nervous system. Okay, so you guys, you've all done psychology. You've done Psych 100. This is in the textbook.
This system is divided up into two subsystems. What are they? Sympathetic and parasympathetic. Okay? And what does the sympathetic system try to do to your level of arousal? It tries to raise it. And if by raising your level of arousal you get more reward, which means you get more information that your goal is being achieved, your sympathetic system continues to arouse you. Yes? Say yes, because this is uncontroversial. Nobody goes, oh no, this is... And what does your parasympathetic system try to do? Shut you down. And if your level of arousal starts to drop and your pertinent goals are still being achieved, if you're getting information of goal completion by doing this, your parasympathetic system gets more and more active. Yes? So you need your parasympathetic system so you can fall asleep. Which I couldn't do on the night of the election. I kept going, go to sleep, John. She's lost. It's over. And I was like, but it can't be. It's amazing. Anyways, let's not talk about that. Let's pretend, let's pretend the world is still unfolding in an intelligible order of thought, and that we're safe within it.

Okay, so. Now, you also know something else if you were taught this: these are not independent systems. How are they related to each other? Right, they do opponent processing. Because look, they're organized in opposite ways, but they are not functionally independent. One of the things the sympathetic nervous system tries to do is shut off the parasympathetic system, and the parasympathetic system tries to shut off the sympathetic system. Okay? So this is an example of opponent processing. And what it means is you've got this ongoing, moment-by-moment, contextually sensitive, self-organized calibration of your level of arousal. The two systems are constantly pushing and pulling against each other in this self-organizing fashion in order to constantly do emergent activity switching on your level of arousal. Your level of arousal is constantly being shifted, constantly being subtly changed in this emergent fashion. It's constantly being switched. Ah! See, so it changed there for a second. Right, your level of arousal went up a bit. Okay. And now it's calming back down. And you had a bit of parasympathetic rebound, by the way. Your sympathetic system went, wah! And you went shh, and then you looked around and there wasn't any actual threat, and then your parasympathetic system pulled it back down, and so it rebounded a little bit more, and then you did this weird thing that we don't understand, which seems to be driven by that parasympathetic release. You do this ho ho ho ho thing with your body called laughing, and like, what is that, right? It's a very bizarre thing. It's the closest thing to magic we have. We speak words at people or do something and they lose cognitive control over the body. We're doing it now.

Now, there's also a sad version of parasympathetic rebound. You find out something very horrible. A loved one has died. You go into high sympathetic activity. And then there comes a realization. So this is very high; this is increasingly highly counteractive. And then suddenly, you get a realization that there's nothing you can do. The person's dead. There's nothing you can do. This drops down. And you get what's called parasympathetic rebound. Your parasympathetic system overcompensates. And among the things it controls are your tearing and your posture.
So your posture collapses. And you start to leak out of your eyes. And you know this as what? Crying. You're crying. Another bizarre thing you do. Notice how I can explain all this weird emergent activity using this. Does that make sense? Yes or no? So what I'm trying to show you is that already within psychology, you have this notion. It's at work. Yes. So what we've talked about now is self-organization through opponent processing, and that's going to get us emergent activity.

And now we want to add to this, refine it, by bringing in this notion of criticality. Criticality is when a structure is starting to lose its identity. A system has a structure if its components are highly predictive of each other. The more ordered a system is, the more information about one part of the system gives me information about other parts of the system. The more disordered a system is, the less information about one part of the system gives me information about other parts of the system. Does that make sense? Criticality, in this sense, is when order is going down, in the way that I've defined it. So try to use order in the technical sense I've defined, not in the aesthetic sense or the political sense. I'm talking about mutual predictability between parts. So order goes down during criticality. Or another way of talking about that, and think about this because we're going to need it when we talk about Stephen and Dixon, is that entropy goes up for the system.

Now, before I go on, I have to deal with a common misconception about how you should use the notion of entropy. The claim that entropy must always increase only applies to closed systems. The level of entropy for an open system can go down. And part of you might be screaming right now, because this happens to me whenever I start talking about this: but the second law of thermodynamics, entropy must always increase! That's right for the universe as a whole. You are not a closed system. As long as energy is being put into a system, it can offload its entropy onto the environment. So this is what you do. You take in food and produce heat. And that food provides energy that keeps you highly ordered. It doesn't generate energy; what it does is channel a transformation of energy through your body. But you know what I'm talking about. And the Earth is not a closed system either, because the Earth is constantly having tons of energy poured into it from this thing called the sun. Sorry, I didn't mean to be harsh about that. But sometimes I bump into this when I try to talk to people about it: "You can't increase the order of a system. Entropy has to always increase." What? Have you seen somebody build a building? People do that. That happens. You're not applying the concept correctly. Are we okay about that? All right.

All right, now back to criticality. Criticality is when, within a system, the entropy is going up. But what happens in self-organizing criticality is that initially the entropy goes up, and then the entropy goes radically back down again, which is why I just gave that whole speech about how entropy can go down for an open system. Yes? Okay. So this notion of self-organizing criticality was introduced in physics by Per Bak. Now, normally in physics, to get major awards, and he got most of them, you have to have, you know, particle accelerators or lasers. He used piles of sand. That's what he did all of his work on. So here's what he had, and this is still a standard way of talking about this.
Imagine you have a column of falling grains of sand, like in an hourglass. Is that okay? If you come to my office, I have an hourglass that does the self-organizing criticality. And then, just to make it even better, one of my students gave it to me, there's a magnet underneath it, and what's falling are not grains of sand but iron filings, which really messes up the self-organizing criticality in ways your brain finds fascinating. So if you come to my office, you can see that. But let's just have normal grains of sand.

So what happens as they're falling? Initially, within, obviously, a certain zone, you can't predict where a grain will land unless you have a huge amount of information. For reasons of mathematical chaos, you can't predict where any one grain is going to end up. So here's a grain, and it bounces and lands here. Now that I know that one's there, can you tell me where the next one's going to fall? No. Somewhere in here, but that's about it. But over time, that changes. Over time, what happens? What forms as the grains of sand fall? A mound of sand, like this. Now, we'll talk more about this when we go back to dynamical systems theory. Notice that this is an emergent thing. Nobody's arranging the grains of sand. There's no intelligent sand-pile maker. In fact, what a lot of this theory is doing is increasingly removing the idea of an intelligent designer for processes. Now, let's stop and foreshadow. That's going to be valuable for us, because ultimately we want to get an intelligent designer out of our explanation of problem solving, because that's the only way to make it non-circular.

Okay, so the sand keeps falling. And this just goes on smoothly forever, right? The mound just goes up, and up, and up. What actually happens? This isn't hard. You know what happens. You're terrified to answer, because, what? What happens? As I add more grains of sand, does it just keep doing that, or what happens? It stops. It collapses. It goes up, it goes wider, but does it just go wider smoothly, and naturally, and normally? No, it collapses. It collapses. I need you to say that, because that's the critical period. The structure breaks down. It goes critical. While it's like this, where one grain of sand is gives me a lot of information about where the other ones are. But when it avalanches, that breaks down. Order breaks down. Entropy increases. You get an avalanche. It goes critical. Is that okay? That's the criticality. Other language, and this language is important, is that right now this is a highly integrated state, and then it goes through a process of disintegration. Is that okay? All of these are different ways of saying the same thing, different ways of explaining what criticality is. You okay so far? Yes or no?

So you get the avalanche. Right? Shhh. Shhh. Now if the avalanche becomes self-perpetuating to too great a degree, the structure will just degrade and fall apart. But what can happen is that the avalanche actually changes the base. It changes the conditions of possibility. It changes the base so that now a new structure begins to emerge. A different shape takes shape, because no one's doing the shaping, a different structure takes shape, and now, in a stable and ordered fashion, the sand pile can grow to a greater height. It is, in that sense, more complex than the previous structure, because there are more grains of sand distributed across a greater dimensionality, yet remaining integrated together. Is that okay? This is supposed to be easy to understand. These are grains of sand. How is that?
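To make the sandpile story concrete, here is a minimal computational sketch of the Bak–Tang–Wiesenfeld sandpile model that Per Bak's work introduced. This is my illustration, not anything from the lecture itself; the grid size and toppling threshold are arbitrary choices. The point is just that grains accumulate stably for long stretches and then suddenly reorganize in avalanches of all sizes, with nobody doing the shaping:

```python
# A minimal sketch of Per Bak's sandpile (Bak-Tang-Wiesenfeld) model,
# assuming a small square grid and a toppling threshold of 4.
import random

SIZE = 10       # grid is SIZE x SIZE
THRESHOLD = 4   # a site topples when it holds this many grains

grid = [[0] * SIZE for _ in range(SIZE)]

def topple(grid):
    """Relax the pile: avalanche until every site is below threshold.
    Returns the avalanche size (the number of topplings)."""
    avalanche = 0
    unstable = True
    while unstable:
        unstable = False
        for r in range(SIZE):
            for c in range(SIZE):
                if grid[r][c] >= THRESHOLD:
                    unstable = True
                    avalanche += 1
                    grid[r][c] -= 4
                    # each of the four neighbours receives one grain;
                    # grains falling off the edge leave the (open) system
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        nr, nc = r + dr, c + dc
                        if 0 <= nr < SIZE and 0 <= nc < SIZE:
                            grid[nr][nc] += 1
    return avalanche

# Drop grains one at a time and record avalanche sizes: long stable
# stretches of growth punctuated by sudden reorganizations.
sizes = []
for _ in range(5000):
    r, c = random.randrange(SIZE), random.randrange(SIZE)
    grid[r][c] += 1
    sizes.append(topple(grid))

print("largest avalanche:", max(sizes), "| drops causing any avalanche:", sum(s > 0 for s in sizes))
```

If you track the avalanche sizes, you see the signature of SOC: mostly small relaxations, occasionally a large reorganization of the whole pile.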
Now of course, what Per Bak was trying to do after studying this was what you do in physics. Galileo rolled balls down inclined planes, and Newton, apocryphally (he probably never really did this), had apples fall on his head. Right? And so Per Bak was doing stuff with piles of sand because he was trying to find principles that will generalize widely throughout nature, which is of course what people are discovering about self-organizing criticality.

Now intuitively, and then of course we'll have to see if we can make it both experimentally confirmable and theoretically fecund, but initially, intuitively, you can see why Perkins would propose that something like self-organizing criticality is at work here. Obviously your brain isn't made out of sand, but something like this is going on in the brain. The brain is somehow going through this oscillation between integrative structuring and disintegrative, critical periods, and that affords new structuring, and so on. It's oscillating constantly between integration and disintegration, and that affords the brain constantly generating different kinds of structures. Again, I'm talking vaguely, and you'll have to give me this whole lecture, maybe a bit more, to get this ultimately concrete and down into how neurons are firing and wiring in the brain. But I promise you, we're going in that direction.

Now this tightens the analogy, or perhaps the specification. I suggested to you that insight is a specific example of this. And if you take a look at it, that tightens the intuitive connection, because what you have is something like a structure being made, being broken, and that affording the making of a new structure. It's like breaking frame and making frame. And we can understand that in terms of self-organizing processes that contain opponent processing within them. And we've already seen how attention is organized by opponent processing in a very complex fashion. Okay? What I'm suggesting to you is that you can see this as the physics behind breaking and making frame. And that's initially plausible because we've come to understand, or at least I've given you argument and reason for believing, that attention is driven by opponent processes: between gestalt and feature, transparency and opacity. Yes? And so it is plausible that we've got self-organizing criticality going on in the attentional processes, giving us emergent activity switching, the restructuring that's going on in insight. Yes?

But the sand pile requires you putting in more and more stuff, whereas in insight you're using the same information that you've got. Well, in one sense it is the same. But I don't have to take it quite that literally, because you're constantly taking in information from the problem. In one sense, you might say, all the information is constant. But you are constantly getting a stream of information into the cognitive system from the problem. But I guess that's more like you intentionally disturb the sand pile, as opposed to just allowing it to fall, because you're getting different information from the stimulus. Somewhat, but your attention is not, I mean, some of it's top-down, but a lot of it's also bottom-up, right? What's causing you? So think about when we talked about Thomas and Lleras's work. They get people to move their eyes around the Duncker radiation problem.
And when they move their eyes around it in the right way, it somehow triggers the insight. In one sense, the problem isn't changing, but they're constantly getting new information because of how their attention is moving around the Duncker radiation problem. And that seems to be triggering an insight. Would this be related to the work showing that the hint has to exist in the environment that they've changed into? Pardon me? So remember how there was an experiment where people had to change environments and then? Seifert et al., the opportunistic assimilation hypothesis. So would that relate to the additional sand? It might be. It might be the additional sand. Or I'm going to suggest something else. We saw that the relationship to the environment required this sort of moderate level of distraction, and it was more the level of distraction than the specific content. Now, just as a foreshadowing: maybe what the distraction does is introduce enough entropy into the system that it goes critical, without introducing so much that it breaks down. That's why you want a moderate level of distraction.

Okay, now that's the first part. So what I'm trying to do is show you what this basic notion is and how it makes use of ideas that already exist in psychology. Just remember the autonomic nervous system. You put this all together, you come up with this notion of self-organizing criticality, and it's at least initially plausible that we can see insight working according to it. Is that all right so far?

Now let's talk about people who have specifically tested it. How would they go about testing it? Because this seems so vague and mathematical. It's like, what? How do you test this? Now, this is part of an ongoing trend around what is called the criticality hypothesis: that self-organizing criticality is an important new theoretical structure for trying to understand the brain and cognition. As you can imagine, all such provocative ideas are controversial, but there's increasing evidence for it. There's a very good review by Hesse and Gross in 2014, making a very good case, taking a look at all the controversy and evidence for the criticality hypothesis. I can't go through that right now because this isn't a course on self-organizing criticality, although I would love to teach one, of course. But they will only let me teach so much. They tell me to go home. Go home, John. Stop teaching and go home. There was one year where I taught 11 courses, including one at York. All right.

So, like I said, take a look at that review; it's very careful, very well argued, comprehensive. What I'm going to do is try to make the case specifically in connection with insight by looking at the work of Stephen and Dixon. But I want you to remember that the Stephen and Dixon work is situated within this more comprehensive theoretical change that's going on in cognitive science and neuroscience. So: Stephen and Dixon, 2009. Three papers on this. So how do you go about studying self-organizing criticality within cognition, especially when psychology relies on behavioral measures? You measure behavior as a way of trying to measure and discover things about cognition. And we know that we very, very rarely directly measure cognition. Even in neuroscience, we often don't directly measure cognition; we look at brain activity in conjunction with behavioral measures. For example, that's what's going on in an fMRI.
You're having somebody do some functional task while you're doing an MRI, and you're trying to correlate the behavioral measure with brain activity. We have to remember that, because, as you know, the media treats fMRIs as if they're snapshots of cognition: look, this is what it looks like when you're thinking about a cat. And as I've mentioned, that has become problematic even within academia. There have now been several instances of this: you take a paper that has been rejected for publication, you just put in fMRI photos, you change nothing else about the paper, you resubmit it, and it gets accepted. So that's happened several times now. So we have to pay attention to this evidence, but we have to remember what kind of evidence it is. We are still looking largely, although I want to suggest something later, at behavioral measures.

So how do you get at self-organizing criticality, this abstract theoretical notion, within cognition, which is something we don't have direct access to? That's why the behaviorists for a long time didn't want us to talk about it: because we don't have direct access to cognition. How do we do this? It turns out the way you do it sounds like magic. It's very interesting. I will try to keep this conceptual and semantic, and not lean heavily on the math. But I need to describe a few things.

How many of you have heard of a state space diagram or graph? Okay, so I'd better teach it. It's an extension of a Cartesian graph. You're dealing with a system that is changing through time. So let's say I have three variables. I may be measuring the velocity of some water, its viscosity, and its temperature. And I'll have three axes: viscosity, temperature, and velocity. So at different times, I measure all three variables. And let's say I get a point here, and then at another time a point here, and another point. And what I can do, throughout time, is trace this: for each intersection of all the variables, I put a marker, a dot, in my state space diagram. And what will often happen is that you don't just get a random scattering of dots through your state space. Instead, what you get are what are called attractors. As you measure across time, you get repetition within the state space. Does that make sense to you? How many of you have heard of the notion of an attractor? This is what it's talking about: there seem to be self-replicating patterns within the state space as you measure the interaction of variables. Is that all right? Okay.

Now, the problem for us, especially with the way we have to design our experiments, is that we usually can't measure multiple variables at the same time. Typically, we do univariate measuring, univariate meaning coming from one variable. We measure one thing, like how long it's taking somebody to do task X. We measure one variable. But, and this is really weird, it's the kind of thing that would make the Greeks freak out, Takens did a lot of really important mathematical work on this. Okay. So I'm going to go slowly on this, about these state space diagrams, these state space graphs. Look, I know that this is complex. I'm sorry. You have to sort of get this, because this is where the field is going. I mean, look at all the work that Kelso is doing now on the metastable brain.
He's got a new article, for those of you who are doing development with me, in Trends in Cognitive Sciences, 2016, on the self-organizing origins of agency, using a lot of these ideas that I'm talking to you about right now. So this is cutting-edge stuff. Okay. Sorry, we're getting a long way away from the flow charts. All right.

And what Takens showed is this. Let's say I take some univariate measure. I'm measuring something; it doesn't matter what. Let's say I'm measuring the temperature of the water in a river. We were talking about the temperature, the speed, and the viscosity, right? And at different times I get values: it's 15, 17, 12, 13, 18, or something like that. All right? Now, typically, that would be the temperature, and then I'd have a measure of the viscosity and a measure of the speed, and I'd have all those measures going through, and that would give me my three dimensions for my state space. Is that okay? So let's say I did that, and I have my three dimensions, and I'm getting some figure like this in my state space. Now, what Takens showed you can do is this. Let's say all I have is the temperature series, and I didn't actually go out and measure the other two. What I can do is take this series and just lag it. So it goes 17, 12, 13, 18, 15; I can just shift it over like that. Does that make sense? See, all I did is shift this this way. Do you see that? And I can lag it the other way and create another line up here. Now, that's completely artificial, right? I didn't go out and measure anything. I just took this one set of data and lagged it in two different directions and filled in these two.

And you're saying, why do that? Because I've been taught about empiricism; you have to measure the world, right? Well, here's the weird thing. This is what Takens actually proved mathematically. I now have my two artificial strings. I take these three and I put them into my state space. I graph it. Do you understand what I've done? I've got one real line of data, and I've generated these two artificial ones by doing this lagging business. Does that make sense? And then I graph it as if I had measured these two things. And what he showed mathematically is that this state space diagram will share important central similarities with the real thing. If I went out and measured, in addition to the temperature, the velocity and the viscosity, and then did the state space diagram, it would be relevantly similar to this mostly artificially generated one. Do you understand? He proved that.

It's weird, right? But it's only weird because we've been operating under a fiction for a very long time. We've been operating under the fiction that when we measure the world, we're measuring a single variable. You're actually never just measuring temperature. You're always measuring temperature as constrained by velocity and viscosity, and vice versa. We have been pretending for hundreds of years that we measure single variables, but we're actually always measuring one variable that has been constrained by other variables. Do you understand? So as soon as you realize that the illusion comes in not in the lagging, but in how we've been thinking about our measurement, then the Takens stuff does ultimately make sense. Okay.
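Here is a tiny sketch of that lagging trick, Takens-style delay embedding, in code. This is my illustration: a toy signal stands in for the temperature series, and the lag and embedding dimension are arbitrary choices of mine:

```python
# A minimal sketch of Takens-style delay embedding: one measured series
# is turned into points in a higher-dimensional state space by pairing
# each value with lagged copies of itself.
import math

# toy stand-in for a real univariate measurement (e.g., temperature)
series = [math.sin(0.3 * t) + 0.5 * math.sin(0.71 * t) for t in range(200)]

def delay_embed(x, dim=3, lag=5):
    """Embed a univariate series in dim dimensions using lagged copies."""
    n = len(x) - (dim - 1) * lag
    return [tuple(x[i + j * lag] for j in range(dim)) for i in range(n)]

points = delay_embed(series)
print(points[0])  # e.g. (x[0], x[5], x[10]): one point in the state space
```

Takens' theorem is what licenses this: under fairly general conditions, the reconstructed trajectory shares the important dynamical properties of the real multivariate one.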
Now, why are we doing all this mathematical metaphysics about measurement? I mean, I went into psychology so I didn't have to do all this. Okay, here's the thing. Because this is what you can do. You can take a measure, a univariate behavioral measure in psychology, and then, in the Takens fashion, blow it up into a multidimensional state space. And because of Takens' work, I know that that multidimensional state space will share important properties with the actual complex set of variables that are going on in the brain. I've got to say that again, because you've got to get this. This is the linchpin of the argument. Okay? I can take, and then we'll talk about what Stephen and Dixon actually measure, I can take the measurements for a single behavioral variable. I can then blow them up into three dimensions this way. Yes? Do you understand what that means? And then I graph them in the state space. That state space will be importantly similar to what I would get if I were actually measuring multiple behavioral variables in a complex dynamic fashion. Yes?

So, is this for things that change over time, or could you, like, one person falls at one particular point on a scale, and then you can predict where they're going to fall on the other scales as well? I'm not quite understanding your question. So, what you were talking about was things that change over time. Yes. But you could also look at things that change across people, right? Oh, synchronic as opposed to diachronic. I don't know if it would work that way. Okay.

Why do I care about this? Because what state space diagrams do is give you a rigorous way of measuring, in a dynamic fashion, a situation of constant change, right, of thought, changes in the entropy of the system. How do you do that? Okay, so pretend this is a three-dimensional state space with a trajectory drawn in it. Now keep collecting more data, right? There'll be a degree to which the trajectory retraces itself. But there could be this one, and then there's this one: this one wobbles a lot more than that one. You understand? This one really, really retraces itself very closely; this one retraces itself much less closely. Does that make sense? Now, because this is all graphed, I can mathematically calculate that degree of wobble very precisely. I could take the difference between a point on the first pass and the corresponding point on the second pass, and I can do that for all the points. Well, not all the points, because there are an infinite number, but you know what I mean. And what I can do is get a very precise calculation of the entropy in the system. Does that make sense? And I can tell you where and when it's changing and shifting, et cetera. Okay? This is a recurrence analysis that you do. Okay? But you can just think of it as mathematically measuring the entropy of the system, because you mathematically measure how much it wobbles in your state space.

Okay. So what Stephen and Dixon were doing was giving people the interlocking gear problem. So you know what gears are, right? Wheels that have little teeth on them, interlocking gears.
The thing is, you'll have this gear rolling into this one and this one and this one, and you have to ask people: if this one's rotating this way, how does that one rotate? Okay? Now again, this is like the multiple marriage problem, because everybody eventually gets the insight. Initially, this is what everybody does: everybody does force tracing. They trace; they go, oh, it's going this way, it's going this way, and they go all the way through. Right? What's the insight? You don't actually need to do all that tracing. Yes, what? Every other one turns the same way. Right, you just have to know if it's an even or odd number of gears. And what happens is everybody gets this. They go, oh, right! And they have sort of a little eureka moment. Okay?

Now, what's interesting, and this is why they use this problem, is that you get a behavioral measure. You can measure where people are touching, where their finger actually is. That's the behavioral measure. Or, which I like even better, you measure their gaze. So I get all these vectors of where they're looking, what their attention is actually doing. So that's what I'm actually getting a measure of: either where they're pointing, in some coordinate system, or where they're looking. That's the behavioral measure, and both of these are basically measuring the same thing: you're behaviorally measuring where people are paying attention. And you're getting numbers out of that, because you put it onto a Cartesian space. Is that okay? You're generating this sequence from how people are paying attention.

Yes, Thomas? I was just wondering whether there are any experiments in this case using a measure of attention that goes beyond the spotlight, beyond the eye-tracking version of it. Like a physiological measure? Yeah, that'd be great if we had one that wasn't spotlighting, one that was taking in the whole field of attention, not just the focal point, yeah. Not that I know of. Does anybody know of one? I don't think there is. I think all eye tracking is still spotlighting. I'm thinking especially about the experiments where attention is covert. Yeah, right. But one of the things you see is that even in the spotlight, the way the spotlight is moving around shows that this other model of attention is actually at work. But, yeah, I don't think so. Invent it. You'll be famous. Where do you want to live now? Maybe Canada's the best, right? Let's just stay here. We've got sort of a happy government with this young guy. It doesn't really seem like he's ready to govern, but it's okay, because he's happy, and we're happy, and we're all happy. We're not angry at anybody. We like each other. Yeah, maybe Canada. Good. The election is overshadowing everything. Ha ha, pun. Get the pun? Overshadowing the election. Okay.

So, they had a univariate measure of attention, and then they did the Takens thing on it. They blow it up into the state space, and then they measure the entropy by using the recurrence analysis. Is that okay? Let me say that again. They get quantitative measurements of people's attention at different times, creating a sequence. They do the Takens thing: they blow it up into the state space, and then they do the recurrence analysis on it to measure the dynamics of entropic change within the behavior. Is that okay?
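As a rough sketch of the recurrence idea (my simplification: real recurrence quantification analysis derives several measures, including an entropy measure, from the full recurrence structure), you can count how often the embedded trajectory comes back within some small distance of itself. A tightly retracing trajectory recurs a lot; a wobbly, disordered one recurs much less:

```python
# A rough sketch of recurrence analysis on a delay-embedded trajectory.
# High recurrence rate = the trajectory retraces itself closely (low
# disorder); low recurrence rate = lots of wobble. The eps threshold
# is an arbitrary choice of mine.
import math

# toy univariate series and its delay embedding (lag 5, dimension 3)
series = [math.sin(0.3 * t) for t in range(120)]
points = [(series[i], series[i + 5], series[i + 10])
          for i in range(len(series) - 10)]

def recurrence_rate(points, eps=0.2):
    """Fraction of point pairs that lie within eps of each other."""
    n = len(points)
    recurrent = 0
    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(points[i], points[j]) < eps:
                recurrent += 1
    return 2 * recurrent / (n * (n - 1))

print(recurrence_rate(points))
```

Tracking how a measure like this changes moment to moment is what lets you plot the entropy of the behavior over time.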
Now, first of all, I think this is wickedly cool, because, again, we're getting ideas and math out of physics being taken back into psychology and cognitive science and neuroscience, and giving us new ways of getting at complex cognitive processes. Yes? So the diagram is always indicative of the entropy of the system? That's one of the things you can measure; there are other measures of the self-organization of the system as well, but that's what they're particularly measuring, yes. Okay. And the univariate measure has to be representative of the system? Yes. So we use attention because attention represents the cognition that's going on. Now, if this were 1970, that would have been a bit of a controversial move. But we now have lots of evidence that attention is playing a fundamental role in insight. So that's a very plausible thing for them to be doing. Okay. Is that okay? All right.

So what did they find? You take all this, and basically what you do is you create a graph of the entropy in the system. So this axis is time, and this is entropy. And you're going to get a particular graph for self-organizing criticality. What's it going to look like? Your entropy is going to be low, and then when the system goes critical, what's going to happen to your entropy? It's going to go up. And then what's it going to do? It's going to drop lower than before, right? Because you've now got a more structured system, yes? This is what SOC looks like in a system: self-organizing criticality. Does that make sense? I'm sorry, I keep throwing these abstract graphs at you. It's like, can't you show me rats in a cage and food pellets, please? I get that. Sorry. We're a long way from Skinner now.

Okay. Now, this is exactly what they found. And where do you think people are having the aha moment? The top? Yeah, right when it goes like this, right when the crest comes down. And this would be breaking frame and making frame. And notice how that lines up with all the other stuff we've been seeing. It even lines up with the Metcalfe stuff, the sudden spike in the graph. But this is a much more accurate measure than the feeling of warmth used to give us.

Now, here's the interesting thing. Because, as I've said to you before, science doesn't just explain; it affords increased intervention. If you're on to something, you can also causally make it happen, or at least increase its chances of occurring. Oh, do you want to ask a question first? Just a clarification: the making frame is the emergence of the new baseline? Yeah, this. That. And the breaking frame is when it starts to come down? Right. And that looks like the S-shaped learning curve, right? Exactly.

This is so counterintuitive. It's so counterintuitive, and you should pay attention when, as I've said before, you get counterintuitive things happening. Because what they reasoned is that they could take people who have not yet been able to solve the problem, who haven't had the insight in that sense, and introduce entropy into the system, and that would actually drive an insight, cause an insight to occur. So you wait until people are sort of at an impasse, and then you introduce entropy into the system. How do they do that? One way they did it is, people were watching the gears on a computer screen, and they'd have the image wobble.
Right? You're actually introducing disorder into the stimulus. Or they would introduce static, or make the picture more grainy. And guess what it did? It provoked insight. So if I introduce entropy, I can actually drive this. Obviously not too much, because, you know, if the screen just goes black and clowns appear or something, you're not going to go, oh, right! Right? And this is what I mean. And right now, I get this, although all of this is quantitative, the connection I'm making is qualitative. But there's no reason in principle why it has to stay that way. I'm suggesting to you that this introduction of the right amount of entropy to trigger an insight is probably exactly what's at work in the moderate amount of distraction that's at work in incubation. That's what's going on in incubation: you're getting the moderate introduction of entropy into the system, which drives self-organizing criticality, which drives an insight. Like I said, that connection is only qualitative right now; it's theoretical. But I don't see any reason in principle why it couldn't be made quantitative, why we couldn't bring this mathematical analysis to bear where we need it.

Yes? Just a thought, but could this explain why, when children are learning, they often fidget? I'm not sure. Because they might be introducing disorder in a sort of self-organizing way. Could be, but it could also have to do with the autonomic nervous system, because it might be an expression of the arousal level miscalibrating, probably because they don't fully understand, or they don't assign the correct amount of salience to, the rewards we're giving them. I'm not saying I know that's the case either. It could be that we think getting the right answer should be the highly rewarding thing, but maybe for them it's not, and so there's an imbalance between the sympathetic and the parasympathetic systems. I don't know. I'm just saying that's an alternative explanation. I tend to think it might not be so disconnected, because there's increasing evidence that your embodied state has a significant effect on your cognitive processing. So there's increasing evidence that if you let people move around and gesture, they're much better at problem solving, even insight problem solving, than if you keep them rigidly still. So it's not a blank maybe; it's more like, I don't know, but here are several reasons and a bit of evidence. For children specifically, because they have less developed executive function, it's probably more likely to be impulse control issues or boredom rather than that. Could be. That's what I suggested: it might have to do with the arousal. One of the reasons why you fidget, right, is you're trying to get the cerebellum to help regulate the frontal cortex. And as she said, the frontal cortex is not as well developed in children as in adults. And so one of the ways you bootstrap your frontal lobes, like, see, I'm doing it now, is you do this other stuff so that you can get the cerebellum engaged. And again, look, there are three or four consensus papers on the cognitive role of the cerebellum out there right now. We're constantly trying to get the cerebellum engaged because the cerebellum really helps fine-tune and bootstrap the frontal cortex.
It's why people do bizarre things like stick their tongue out when they're working. They do all kinds of stuff like that. And you can take a look at the work of Susan Goldin-Meadow. I'm sorry, that is her name. Her name was Susan Goldin and then she married a guy named Meadow, so her name is Susan Goldin-Meadow, which sounds like something from My Little Pony or something. But she's done a lot of work on the cognitive functionality of gesture. I can't go into that right now; I talk about it in other courses.

Okay, so, so far we're getting pretty good evidence, right, for self-organizing criticality being at work in insight. Now, there's another mathematical model, from another branch of theory, that has important connections to self-organizing criticality. It's sometimes called graph theory, which is misleading. It's sometimes called network theory, which is much more accurate. The problem with calling it network theory is that it regularly gets confused with neural network theory, and it's not that. Network theory is the study of how things are connected. How things are connected. Is this okay? Should I stop? Should I just not talk for like two minutes, just so you can catch your cognitive breath? Thank you, Pete. Okay. I'll just wait. I'll just rub the board off in this incredibly self-indulgent, leisurely fashion. Can you wobble the chalkboard? Let's get it. And then I'll take a drink of ginger ale.

Yes? Can neuroticism maybe predict insight? Neuroticism doesn't seem to be very predictive of insight. The personality factor that seems most related to it, but not as strongly as you might think, is openness. Openness seems to be the one that's most correlated with easy insight. Although the thing is, the personality variable that messes up all the other personality variables is conscientiousness, because the more conscientious somebody is, the harder they'll work at something, and they'll just use everything. That's why conscientiousness, independent of IQ, predicts your academic success. Plausibly, I think, because conscientiousness is ultimately a measure of your capacity for self-regulation. And we have good evidence that measures of self-regulation are predictive of your cognitive success independently of IQ, almost as good as IQ. And IQ is the best construct we have in psychology. It is the single best predictor of all of your behavior.

Okay, was that long enough? Alright, network theory. Network theory is the study of how things are connected. Now, ultimately we're going to try to use this, we'll come back to it, because of course one of the things we're interested in is how neurons are connected in the brain. But that's not what network theory is primarily about. Network theory is about how anything is connected: how computers are connected in the internet, how airlines are connected, how social systems are connected. In fact, network theory actually initially emerged in social psychology. It emerged from the work of Milgram. Milgram, by the way, is the guy who did the shock experiments. He also did this work, and this is where the idea of six degrees of separation comes from.
The idea of six degrees of separation is that, through social connections, you're only about six steps away, on average, from anybody else in the world. So when I used to teach, I was an academic chair for a private high school for a while, and I would teach the APS course, the anthropology, psychology, sociology course, one of the tasks I would give them was a competition: who can get to the most famous person in the least number of social connections? And they really liked this task. And it's amazing: one person was able to get three connections to the Pope, and stuff like that. But see, I'm close friends with Evan Thompson, and Evan Thompson is close friends with the Dalai Lama, so that's two for me. So, anyways. Although the Dalai Lama is not the Pope of Buddhism; that's a common misconception.

All right, so network theory is the study of how things are connected, and there's a lot going on there about clustering coefficients and everything, but I'm going to try to boil it down to what I think is the gist of the idea that we need here. Okay, so there are three types of networks. I'm going to draw them on the board for you. The important thing to note in these diagrams is that the number of nodes and the number of connections stays the same. So give me a chance to draw this. Okay. So: number of nodes and number of connections are constant, okay? All right. This first one is known as a regular network. Notice all the connections are local; all the connections are only one-step connections. This is a regular network. The second is known as a random or chaotic network, because the connections can be local, or long-distance, or very short. It's just random, chaotic. And the third is known as a small world network. I kid you not, the name comes from the Disney song, it's a small world after all. That's to acknowledge the fact that all of this first emerged in the study of social connections. When we say it's a small world after all, we don't mean that the spatial size of the planet Earth keeps shrinking, like, oh, we keep discovering that it's actually very tiny. What we mean is that people are much more connected than we tend to think.

The thing is, because of certain ideas about what the scientific picture implied, we thought that most things in the world, both natural things and artificial, man-made things, are connected in the first two ways. Sorry, human-made doesn't sound right, but man-made is sexist, so I don't know what term to use. Human-made sounds like in comparison to aliens or something: well, this was made by the humans. Yes? Synthetic maybe? Synthetically, like it's created? Because you can't say it's natural, like it's something that humans make. Yeah, maybe artificially made rather than synthetic. Okay, well, anyways. It turns out, and this was one of the surprising things that people have found, there's a good popular book on this called Linked that talks about some of it, that a lot of systems are actually organized as small world networks, in both the natural and the artificial worlds. So of course, because that turned out to be counterintuitive, people started to do the math: what's the mathematical reasoning behind this? If you're interested in how all of this eventually connects back to neural networks, the work of Mark Burkis is really good on this, and I'll talk a little bit about his work.
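Here is a minimal sketch of those three network types, assuming the networkx library is available. The Watts–Strogatz construction keeps the number of nodes and connections fixed and varies only the rewiring probability, and the mean path distance idea developed just below can be computed directly:

```python
# A minimal sketch of the three network types via the Watts-Strogatz
# construction (networkx assumed installed). Nodes and edge counts stay
# fixed; only the rewiring probability p changes the wiring pattern.
import networkx as nx

n, k = 30, 4  # 30 nodes, each starting with 4 local connections

regular = nx.connected_watts_strogatz_graph(n, k, p=0.0)      # all local links
small_world = nx.connected_watts_strogatz_graph(n, k, p=0.1)  # a few long-range links
random_net = nx.connected_watts_strogatz_graph(n, k, p=1.0)   # fully rewired

for name, g in [("regular", regular), ("small world", small_world), ("random", random_net)]:
    # mean path distance: average shortest path over all node pairs;
    # clustering: how locally redundant the wiring is
    print(name,
          "mean path distance:", round(nx.average_shortest_path_length(g), 2),
          "clustering:", round(nx.average_clustering(g), 2))
```

Run it and you see the pattern the lecture describes: the regular network has long paths but dense local structure, the random one has short paths but little local structure, and the small world network gets short paths while keeping most of the local structure.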
Okay, now, you calculate the efficiency of a network by doing what's called mean path distance. I take any two points in the network and I calculate the pathways between them, and then I average that over all the different origins and destinations. I calculate the average path distance for the network. Does that make sense? That's a mathematical thing you can do: the mean path distance. So the more efficient a network is, the lower its mean path distance. Okay, now which one of these three do you think has the lowest mean path distance? One, two or three? Who thinks one? Put up your hand. Who thinks two? Who thinks three? You guys are right: this one, the random network, has the lowest mean path distance. It's actually the most efficient network. Why? Your brain is going, but no, it's so messy, and I've been taught since I was a kid that when things are messy they're inefficient. Okay, so think about this. It has so many long-distance connections that its mean path distance is actually going to turn out to be very, very low. Does that make sense? The regular network is the worst, which some of you concluded, because all the connections are local, so its mean path distance is going to be very, very high. Okay.

But you don't want to look at just that one logistical norm. Logistics: when you're talking about logistics, these are the principles governing your cost functions. Logistics is not logic; logistics is how you are using your resources. Like, there's a branch in the army called logistics, where you have to organize how to best use your resources to do what you need to do. Is that okay? That's what logistics means. So efficiency is a logistical notion rather than a logical notion. Okay? Now, of course, when you talk about these networks, you don't talk about them just in terms of this one logistical notion, because it's in a trade-off relationship with another notion, which is resiliency. It's also logistical. Resiliency is how much the system can deal with damage: the system's capacity for redesigning itself. Resiliency is really long-term, cross-contextual efficiency, but saying that, using the word efficiency twice, is really confusing, so we talk about efficiency and resiliency. Okay.

Before we go back to the networks, let's use an analogy so you can get a better intuitive grasp of the trade-off relationship here, because we're talking about a trade-off relationship again, which is a kind of opponent processing. All right. So, when America decided to take the great turn toward supporting the stupidity that has resulted in Donald Trump, the president that made that cultural change, and most people would agree with me, this isn't controversial, was Ronald Reagan. Ronald Reagan represents this turn. I lived through Ronald Reagan. This is really weird. When Ronald Reagan was elected, I initially thought, I can't believe they've elected Ronald Reagan. But now that seems clean and nothing compared to how I felt Wednesday morning. Okay. So part of what happened was Reaganomics, right? And a lot of that is trickle-down theories: you give tax breaks to the uber-rich and they will then spend money on the poor, which turned out to be ridiculously false. But we're going to try it again. What initially seemed to be working in Reaganomics, though, was a mania for downsizing. Okay.
So what's the idea behind downsizing? We had this when Rob Ford was our mayor; he would use the word as a noun in a strange way. He would talk about efficiencies. Like, oh, I'd like to buy some apples, some oranges, and some efficiencies, please. Okay. Downsizing is: you've got a lot of fat in your organization, so you fire as many people as you can. You make your system as efficient as you can. Efficient means you're using the fewest resources to get the maximum amount of output. So your profits go up, because your costs go down dramatically. That's how downsizing works. Does that make sense? And of course part of what the Star Wars movies were picking up on with the Empire was automation, because automation also allows you to powerfully downsize. That's part of why we were afraid of the Empire, and the humans had to fight it with the Force, which involved feelings and human connection, against this horrible automated machine. Great analogues on the big screen. I wonder what great movies are going to come out under Trump; that would be kind of interesting. Okay. So initially what you get is a spike in profit. But it turns out that these corporations become very brittle: they depend on the environment remaining very stable. Everybody in a downsized corporation is working as much as they possibly can to keep their job. So what do you do if Susan is sick? Because human beings get sick. Can anybody fill in for Susan or pick up some of the slack for her? No, because everybody's already working as hard as they can. What if there's a sudden change in the market and an unexpected new thing to deal with? Is there anybody you can put on the job? No. Now what you end up doing is, remember all those people you fired? You have to hire them back as consultants, and they're pissed off at you, so they charge you way more than you paid them when they were your employees. Or you don't get your consultants, and your business crashes. So this is what happened: these businesses got too brittle. They couldn't deal with a changing environment, unexpected threats, unexpected opportunities, damage to the system. Is that okay so far? Now, why did I give you this whole analogy? Because you don't want to maximize efficiency. What you're trying to do is optimize it. You also don't want to maximize resiliency, but you do want resiliency in your system, so that it can adapt to unexpected opportunities, unexpected threats, and damage. Now, the regular network is the most resilient network: you could damage it a lot and it will keep functioning. And that's because these norms are in a trade-off relationship: precisely because it's low in efficiency, it's high in resiliency. The random one, and you should be able to tell me this now, is so high in efficiency; what do you think its resiliency is? Very, very poor. Look, there's no redundant local wiring there to fall back on. So one network maximizes efficiency but loses too much resiliency; the other maximizes resiliency but loses too much efficiency. What you want to do is optimize: get the best possible relationship between the two, quite good efficiency and quite good resiliency. Does that make sense? And guess which network does that?
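As a crude way to put numbers on that question, here is my own sketch, with my own choice of measure rather than the lecture's: mean path distance for efficiency, and the clustering coefficient as a rough proxy for resiliency, since redundant local wiring is what lets neighbours cover for a damaged node, the way coworkers cover for Susan.

```python
# Efficiency vs resiliency for the three network types.
# Efficiency: low mean path distance. Resiliency proxy: clustering
# coefficient, i.e. redundant local wiring that can absorb damage.
import networkx as nx

n, k = 100, 4
nets = {
    "regular":     nx.connected_watts_strogatz_graph(n, k, p=0.0),
    "random":      nx.connected_watts_strogatz_graph(n, k, p=1.0, seed=1),
    "small world": nx.connected_watts_strogatz_graph(n, k, p=0.1, seed=1),
}
for name, g in nets.items():
    eff = nx.average_shortest_path_length(g)  # lower = more efficient
    red = nx.average_clustering(g)            # higher = more redundancy
    print(f"{name:12s} mean path: {eff:5.2f}   clustering: {red:.2f}")

# Typically the small world network gets close to the random network's
# efficiency while keeping most of the regular network's redundancy:
# it optimizes both norms rather than maximizing either.
```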
The small world network optimizes the relationship between efficiency and resiliency. Does that make sense? One of my students, Philip, is actually working with Sporns, who is one of the people doing a lot of this research on the brain now. Really cool stuff, because this is turning out to be correlated with all kinds of interesting things. For example, if I give you a general anesthetic and use dense EEG to measure how your brain is connected up, then as you pass into unconsciousness, your brain fragments into a lot of local networks, like the regular one. As you come back into consciousness, it reorganizes into a small world network. That's kind of cool. So, all right, how does all of this relate to insight? Do you want me to give you a break or tell you first? Tell us. Okay. So, Schilling has a mathematical model of insight, and it makes all kinds of very clean predictions. The idea is that an insight is when you have a regular network and a long-distance connection is formed, so that it goes from being a regular network to being a small world network. What would that mean? It would mean a sudden increase in efficiency without much loss of resiliency. The system would suddenly become more powerful, which of course explains quite a few of the cognitive effects we see. But it also helps to explain a lot of the phenomenological effects. It might even be why there's that flash of salience, because, as I said, small world network formation is associated with consciousness. And I'm not going to talk about consciousness any more than that; if you want to talk about consciousness, take the 402 course with me. What I can talk about is the work of Topolinski and Reber, from 2010. They're part of a growing group of researchers working on cognitive fluency. How many of you have heard about cognitive fluency? So cognitive fluency, in the popular-science presentation, is inadequately described as the idea that ease of processing influences your judgment about whatever you're processing: if something is easier for you to process, you tend to have more confidence in it, and you tend to judge it as more important, more relevant, more salient, all that kind of stuff. It turns out that ease is an inadequate measure. I'm doing a lot of work on this with Talia Grandidus, and we're running a bunch of experiments, because ease isn't quite right: if I just say the same thing, same thing, same thing, same thing, that's very easy for you to process, but your confidence in it goes down rapidly, and its interest and salience for you go down very rapidly. So ease is an inadequate understanding of what fluency is. A better understanding of fluency is probably some optimization in your processing. Topolinski and Reber suggest something that aligns with that, because they say you can explain most of the phenomenology of the insight experience, the aha, as a sudden spike in fluency. If you look at everything we already know about fluency and what it tracks, you can very readily explain insight as a spike in fluency: fluency suddenly going up dramatically.
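To see Schilling's core move in miniature, here is a hedged sketch; the numbers are illustrative and this is not her actual model. Start with a regular ring lattice and form a single long-distance connection: the mean path distance drops sharply while the local structure is barely disturbed.

```python
# An "insight" in Schilling's sense: one long-distance connection
# moves a regular network toward a small world network, producing a
# sudden gain in efficiency with almost no loss of local redundancy.
import networkx as nx

g = nx.watts_strogatz_graph(30, 4, p=0.0)  # regular ring lattice
before = nx.average_shortest_path_length(g)

g.add_edge(0, 15)  # a single shortcut across the ring
after = nx.average_shortest_path_length(g)

print(f"mean path distance before the shortcut: {before:.2f}")
print(f"mean path distance after the shortcut:  {after:.2f}")
```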
That’s why people have all of the super salience, they get a sense of great confidence in discovery, et cetera, et cetera. Tobolinsky and Reaver, sort of, Tobolinsky and Reaver are showing things together with what I said. If we think of fluency more as optimization, right, and I might say you’re having like a whole bunch of insight experience. I’m saying that your brain is really looking for quick and fluid transformations of ill-defined problems into well-defined problems. And that insight is a particularly prominent version of that whole process. Perhaps insight is a fluency spike, and a fluency spike measures an increase in optimization. That would make sense, and all of that, right? So all the evidence would then be supportive of Schilling’s theory, because what would happen if she’s right, if I go from this network to this kind of network, is I would have a sudden spike in optimization. I would get a sudden burst in fluency, because I’d get a sudden increase in efficiency without that much loss in resiliency. And it makes sense that the brain would reward that and say, do more of that, because optimization is what your brain is constantly seeking to do. Does that make sense? Okay, one last thing, and then I’ll let you break, and then we’ll come back and do harder stuff. Yes. Because all of this still has a huge mechanism missing from it. But what Vervecki and Ferraro have argued, and a couple of presentations to learn in societies we haven’t got it published yet, although it’s part of other things we’re getting published on flow, perhaps we can actually integrate these two models together, the Stefan and Dixon and the Schilling and Tobolinsky and Reaver. What would that mean? Well, SOC is basically how the processing is going. It ultimately is how the neurons are firing. The thing about this is that these two things tend to, mutually reinforce each other. The more a system is going through self-organizing criticality, the more it will connect up this way. The more a system is connected this way, the more it will go through self-organizing criticality. They mutually support each other. Although they’re different ideas, they’re not causally independent from each other. Do you need me to do that again? Okay, the more a system is going through self-organizing criticality, the more it will connect up this way. The more it is connected up this way, the more it will go through self-organizing criticality. Is that all right? All right, so what if insight is a speed up of that? What do I mean by that? What if self-organizing criticality in the brain transforms a regular network into a small world network? What does that mean? That what’s happening is if I pass self-organizing criticality through the system, it breaks the structure up enough that it can be restructured. I can go from a regular network to a small world network. That way I would get a consistent, that would be consistent between the self-organizing criticality measures and the small world network measures. Insight is what self-organizing criticality causes, right? A small world network to be formed out of a regular network. Okay, let’s take a break. Come back at 5.2, 2.55. And then we’ll do harder stuff. There’s two things that still have to be addressed in order to try and fill this up. One is I need to give you a better understanding of dynamical systems theory so you can understand how we, and think about this stuff. And I need to do that first because I’m going to need it in order to do the second thing. 
The second thing is what’s missing from all of this is still, one of the things that we needed from all of this was, yes, but what about the zeroing in on relevant information? What about the avoiding common atomic explosion? How will all of this connect up to the discussion about relevance? Now here’s what I’m going to try and argue. Independently of this, there is work using these kinds of notions, right, in order to try and explain how relevance is realized in a complex system. How it zeros in. How it constrains the search space, the problem space, to avoid common atomic explosion. Now, most of this theoretical work has been published in the last two years and it’s done by this crazy guy, John Brevecky. So, you have to take that. I mean, I’m going to try as much as I can to, like, but I’m obviously a biased source. What I can tell you is it’s published, I presented at conferences, people like it, I get excited, like, that’s worth that, okay? But I’m just forewarning you, okay? But, I mean, in all fairness, I think it’s the best way, I think, but I’m biased, to try and complete the argument. So, dynamical systems theory first, and then relevance realization theory. And the point is to show you how we can, first of all, abstractly understand relevance realization theory, and then we can understand it directly in terms of self-organizing criticality and small world network formation in the brain, which would help to bring into insight the whole relevance realization aspect that has been central throughout. The constraining the search space. But think about it already, how we’re getting different ideas here, right? Because in addition to heuristics and the logical sort of processing of information, we’re getting cost functions and the logistical processing of information. So, we already now have another theoretical entity that might be constraining the search space, which are cost functions. Which, like how the, there are logistical constraints on your processing in terms of efficiency and resiliency. And it’s turning out, by the way, and I can’t do that in this course, that if you take sort of standard AI, like neural networks, and you put cost functions into them, you suddenly increase or improve their performance generally. Okay, so it looks like we are starting to come up with, like, processes and entities that are other than heuristic operate within self-organizing processes that nevertheless could conceivably constrain the search space. Now, remember what I said from the very beginning. I have never argued that, and I never will be arguing, that heuristics are not playing an important role. That’s not what I’ve been doing, and I’ve been saying that from the beginning. I’m trying to say, is there something above and beyond them that is helping to constrain the search space and helping to explain things like problem formulation and insight? And that’s what I’m building the rest of the argument for. But you’ve already started to see what these kinds of things would look like. Self-organizing processes, transformation of network connectivity, all of this is done in terms of cost functions that are measured by logistical norms like efficiency and resiliency. So increasingly, people are talking, in addition to heuristics, they’re talking about cost functions. But before we do that, I was explaining a lot to you, but what is this dynamical systems theory, and what kind of theoretical entities does it give us in order to try and explain self-organization and cognition? 
And again, this is fairly recent work. This discussion about the nature of dynamical systems theory and self-organizing processes owes a lot to a very important book by Alicia Juarrero. Is that the right spelling, Thomas? The right number of R's? Okay. It's a very important book integrating dynamical systems theory, information theory, and central ideas of cognitive psychology, called Dynamics in Action. She's written a précis of it that came out later, and there's been more work since. Terrence Deacon wrote another book called Incomplete Nature, which is also quite famous, though perhaps partly famous because it looked like he plagiarized from Juarrero. Anyways. The way to get you thinking the way you need to think in order to talk about dynamical systems, and I think I've made a case that it's at least an important hypothesis to consider when we're talking about insight and problem solving, is to go back and situate where this notion of self-organization emerged historically. Juarrero traces it to the work of the titanic German philosopher Immanuel Kant, who was probably second only to Plato in his influence on intellectual discourse. Have any of you read any Kant? Good, your lives are better. Okay. One of the things Kant was trying to do was give a philosophical explanation for the success of Newtonian science. Now, it turns out that Kant overestimated Newtonian science. Newtonian science, for example, is committed to Euclidean space and to absolute space, and these turn out to be fundamentally wrong. Kant thought he had to explain why those were necessary to science, and it turns out they're not. Nevertheless, there was something going on here, and there's a huge explanation; I only want to concentrate on one aspect of it, one reason why Kant thought Newtonian science worked. Newtonian science had a particular picture of causation and explanation. And part of what I'm going to argue in the rest of the lecture is that this is still the prevalent model of causation and explanation in science, not in science broadly, but outside physics, and especially in psychology. We still tend to think of causation in Newtonian terms, and we usually slough that off by saying you don't have to worry about Einsteinian stuff except under extreme conditions. That's not the point I'm making here; I'm going to be talking about something else. Okay, so what's the Newtonian model? The Newtonian model, and Kant here was also influenced by Hume, is the billiard ball model of causation. There's an event, and then it causes an event after it, and that causes another. A cause is an event that precedes, and is reliably followed by, its effect. So: one, two, three, four, five. To explain five is to show how it was caused by four, and four by three, and three by two, and two by one. And then you get back to the beginning and, oh no, what caused that? Don't even worry about it. And then Kant basically said: you can't do that.
You can’t talk about what caused the causal sequence to begin, because it’s oxymoronic. Right, so we can’t, we can’t, we can’t do God anymore. At least not in silence. And so everybody sort of stopped talking about that part of the problem. Right? So, I mean, there is a possibility that this universe was caused by a rip in the multiverse. That’s one possibility, and I don’t deny it. But it could also just be that there was nothing before the Big Bang. Nothing. What caused it? Nothing. But, but, no, that’s, causes happen within the universe. But, so, no, no. You don’t like it? No. Okay, so, let’s just leave that part silent. Okay. Why is this so good? Why is this so good? This is so good because it avoids circular explanations. Circular explanation is that when I use the very phenomena I’m trying to explain as a phenomena to explain the thing I’m trying to explain. This is particularly dangerous within psychology. A persistent problem, a version of this is the hamunkid or fallacy. The little man right here. Right, so, here’s the Disney explanation of vision. There’s a triangle out there in a world that extends and there’s impulses onto the eye and then that goes in and then it goes inside like working memory and inside there’s a triangle and then there’s the central executive that labels it. Triangle. Now why is that problematic? Because what should you ask me? How does the little man see this triangle? Well, inside his head there’s an even littler working memory and then an even littler and an even littler memory. Triangle. And how does he see the little man? Triangle. Triangle. Triangle. That, of course, is an infinite regress and not explanatory. Okay? Because I’m using vision as a core process in order to explain vision. By the way, I’m not being completely ironic in my use of things like the workspace and the central executive because they very easily and often become hamunkid or explanations within psychology. Well, how do I categorize things? Ah! Your, your, your, you know, your, your central executive sorts things. How does it do that? What? How does the central executive do that? Well, it’s the central executive. I know. How does it sort things? Well, it’s really smart. Ah! So I want to understand, people are smart because they can categorize things. And how do they categorize things? They have a central executive and the central executive can categorize things because it’s really smart. So you know why you’re really smart? Because inside you is a little thing that’s really smart. And that’s not an explanation, right? So you have to be careful about it. Now, what I am not saying is that I’m not, I am not, I am not saying that all indications of, you know, working memory or the central executive are hamunkid. I’m warning you that it’s a real danger and it does occur. Okay, so you get the overall point. Circular explanations are vacuous. They’re empty because they just are an infinite regress. It’s like being given a check that whenever you try and cash it, they just give you another check. Now, what this does, because the arrow has to go this way for cause and this way for explanation, and all of these things are distinct from each other, this precludes circular explanation. This precludes circular explanation. So this is wonderful. Yay, Newtonian science. Now, what’s problematic for you is you think isn’t this just the way it is? You should know that before Descartes and Galileo and Newton, this is not the model of causation or explanation that’s pervasive in the Western world. 
This is an innovation. It's become so endemic in our culture that we just say, oh, but that's just the way it is. Well, no, not necessarily. Now, Kant had a problem with this. He went out and he looked at a tree, literally. Newton had his apple, perhaps apocryphally, but Kant really had his tree. I don't know if there were any apples on the tree. Here's what Kant said was the problem, and this is where he introduced the term. A tree seems to be self-organizing. The tree makes the leaves, and then the leaves provide the energy for making the tree, and then the tree makes the leaves. And what I seem to be saying is: you know what makes a tree? A tree. That sounds like what? A circle. So Kant introduced the idea of self-organization, and in self-organization there is circular causation: the feedback process organizes itself. Now, Kant said, that's problematic, and here's how he argued. Living things, and of course if it's the case for living things it's also the case for psychological things, and that was probably Kant's deeper point, living things seem to have an inner teleology. They seem to be acting on purpose; they seem to be making themselves. Living things are at least self-organizing. And here's how the argument goes. When I try to trace a self-organizing process, my explanation turns into a circle, and circular explanations are vacuous and empty. So wherever there's self-organization, wherever there are feedback mechanisms and circular causation, then as I attempt to trace out the circular causation I'm going to fall into circular explanation, and circular explanations are forbidden in science. So this is what Kant concluded: there can never be a science of living things. There can never be a biology. And even more strongly, there can never be a psychology. So if you think there is such a thing as psychology, Kant's argument says you're wrong; and if you think, even more controversially, that there's a science of biology, Kant says you're wrong. Now, there's a response to this, of course, because we do have a biology. Yes? Doesn't that just depend on how you're thinking of the purpose of the tree? Because you're saying the purpose of the leaves is to feed the tree so that it exists as a tree, as opposed to the purpose of the leaves being to feed the tree long enough to make seeds, perhaps. Well, it doesn't look like trees are just that way. It looks like they are self-organizing in that they are making and trying to maintain their own structure. It may be that the ultimate teleology of that is reproduction, but there is still the intermediate teleology of making and maintaining itself. But then isn't that still not necessarily circular? Because what makes the tree is not the previous tree. No: what's making the tree be what it is right now is that all the chemical processes within the tree are self-organized to keep making it continue to be a tree. That's why trees can die: the self-organization can break down. That's how living things can die. So what's going on? What Chloe is trying to do is exactly the right thing, which is that we've got to get out of this stranglehold.
And what I’m suggesting you have to do is you either have to… Con’s argument, I mean, I didn’t lay it out completely, premise by premise, it’s technically valid, it’s a logically valid argument. There’s no logical mistake in it. Now, that means you can only challenge its soundness, which means you have to either accept the conclusion or disagree with one of the premises. Now, what Urara says, what that forces us, and what she’s trying to get you to do is to understand the importance of self-organization. Because what it does is mean that we have to reject this. Not… No, the rejection doesn’t mean, oh, this is false. The rejection is, this must be inadequate in some way. This must be inadequate. This must not be capturing everything that we need in order to explain things. So, what we’re trying to do is we’re trying to get out of this triangle hole. And we’re trying to get out of this triangle hole and explain these things. So, the idea is that the Newtonian model of causation is somehow radically incomplete. Now, we know abstractly or more comprehensively that that, of course, is the case, but let’s get into it specifically. Okay? So, what I should get you to notice is what this… And think about this in terms of problem formulation and the idea of, like, you know, fixation. This gets you to focus on events. It’s exclusively talking about the relationship between events. Okay, so we’re going to do Newtonian causation right here, right now. Right here, here it is. There’s going to be an event and there’s going to be another event. Okay? So, why did the chalk move? Because I pushed it. Yay. One event, another event. Woo-hoo. He’s proud, he’s happy. Why else did the chalk move? Yes? What? So, it rolled because of gravity and… Sure. Yes? And there was nothing stopping it? Right, there’s nothing stopping it. There’s empty space or empty enough space to be more specifically accurate, empty enough space for it to move into. It also rolled because it has the shape it has. It also rolled because of the shape of this. See, in addition to events, there are conditions. And we always knew that because whenever you have to apply Newtonian formula about events, you’ll have to also pay attention to the initial conditions. Okay, so next we’ve got events and we have conditions. Now, your oral plays around with a bunch of different vocabularies. I’m going to propose we use a vocabular that is becoming pretty standardized in the philosophy of science and the philosophy of biology for talking about this distinction. Let’s say that we’ll use the word cause for this. When we’re talking about the relationship between events, we’re talking about causes. So when we’re talking about conditions, this is the term that’s used, constraint. Now, causes are events, but constraints are conditions. Now, as always, these are relative terms because conditions might be made out of a more micro-events, but the micro-events might be then relying on more deep conditions and it goes all the way down. So it looks like physics is not going to bottom out in events because that’s not the way it’s going. Relativity and quantum stuff are ultimately conditions rather than events. So it looks like in the end, this may be more primary than this, which means science is already using both of these ideas. So events are changes in actuality. Conditions are changes in possibility. Changes in possibility are what allow you to speak of probability or potentiality. 
So you can change conditions so that an event becomes more probable, which isn't the same thing as causing the event. Okay, now before we go on, we have to unpack another thing that got collapsed with the Newtonian model. Two words have become synonymous: real and actual. You will often use the terms interchangeably; you'll use the word actually to mean really. That actually happened; it really happened. But the thing about the word really is that it doesn't necessarily mean real anymore. It just means more. It's an intensifier, like very. I told you about this, right? If I say it's very green, I mean it's intensely green, a lot of green. But very came from verily, which meant truly: truly green as opposed to not truly green. We lost that, and very became just an intensifier. So then we invented another word for when something was real as opposed to false: real, and from it, really. But the problem is that really just became like very: really green now means intensely green. So now we're doing it again with yet another term. Literally was supposed to mean: really, real, in reality. And my son will come up to me and say, I was playing a video game and it literally blew my head off. And I'll say to him: you still have your head. We do this weird thing where we keep taking a qualitative marker of the difference between real and not real and turning it into a quantitative marker of intensity. I don't know why we do that. It would be interesting to know; is there anybody in linguistics here? That's a real pattern, right? Yes? I think there are still subtle distinctions between how people use the intensifier really versus literally. You can still use really green to mean intensely green. Yes, you can. But literally, specifically, people use not just as an intensifier but as a figurative intensifier. If you were drinking orange juice and someone said it literally tastes like orange juice, they wouldn't mean it's very orangey. Spencer has started to use it that way. Yeah, the younger kids are. Which is bizarre, because before literally drifted into being a plain intensifier, it was a figurative intensifier: it implied it's almost as if it literally happened, even though it didn't. Which kind of makes sense, though it's annoying, because it follows from people trying to emphasize that something is almost the same as if it had really happened. So, at least for now, the two aren't used as intensifiers in the same way: if you say that's really green, you don't mean that's figuratively green in an intense way. I think what's changing this is that the kids spend a lot of time in the virtual world, and that's why literally is starting to become just a synonym for the plain intensifier: the border between the fictional world and the real world is breaking down for them. We're weird about all of this. Look, we use this word: she's very pretty. And then we say: that's pretty dark. What, it's kind of a beautiful dark? No, it's a mildly intense dark. Pretty means mildly intense.
It’s a messed up language. We’re all insane. Language is a virus from outer space, as William S. goes. Because the problem with this is that actual, right, the way the word was created by Aristotle, actual is in contrast to potential or possible. So these are contrasts. Now if you make these synonyms, then this can’t occur for you. Real possibility. Possibilities can’t be real. Because if actuality means real and possibility is contrary to actuality, then possibilities can’t be real. But science depends on real possibility. Okay, so conservation of energy, right? Yes, yes, of course, of course. Well here it is. Look, here’s the kinetic energy. It’s gone. It’s gone. I just destroyed the energy. Nothing’s happening. Nothing’s happening. I made the energy go away. What do you say to me? You’re an idiot. The kinetic energy has become what kind of energy? Okay, there’s lots of examples of this, okay. So science depends on conditions. It depends on real possibilities. Look. As I mentioned, relativity is a description of conditional relationships. Is that an event? Is that an event? E equals mc squared. Is that like every Tuesday? E equals mc squared. That’s the wrong way to think about it. That’s a categoristic. That’s not an event. That describes how possibilities for events are shaped in this universe. Because this can describe multiple things. It can describe what happened at Hiroshima and Nagasaki. But it can describe all kinds of things, like time violation as you travel close to the speed of light. It’s not an event. It’s a shaping of the possibility space for events. Does that make sense? I’m just trying to show you how this way of talking, this is Iorado’s point, is deeply pervasive and essential to science. But we lose sight of it when we fixate on the Newtonian model because the Newtonian model is exclusively fixated on events. Is that okay? All right. All right. Now, the next thing, to go back to the craziness of language, is I need you to hear this in an unmarked fashion. The problem with when you hear constraint is you only hear it in its limiting sense. In its restrictive sense. Okay, now what do I mean by marked and unmarked? This is unmarked. This is marked. This is unmarked. This is marked. Unmarked. Marked. Why? Because I can ask you how tall it is and I’m not assuming a height. If I say, how tall is she? You can say, oh, she’s quite short. But if I say, how short is she? What am I assuming? Short. Tall can be used for both cases. Short can’t. What am I assuming? I can ask you how wide the road is and you can say, oh, it’s quite narrow. But if I ask you how narrow the road is, then implies that it’s what? Narrow. Narrow. Okay, do you understand? I need you to hear this word because this is how it’s being used by your own mothers in this sense. Not in this sense. Yes? Can I just probably use language? Like we could probably have English where short is unmarked. Yeah, of course. I’m not making a metaphysical point. I’m not saying, there’s nothing to hear other than this is what we do. I’m just asking you to not, what we do is we hear constraint only in a marked sense and I want you to hear it in an unmarked sense. Okay. Okay, now let’s go back to this. Because not only do we have a distinction between events, sorry, between causes and conditions, we have a distinction between two types of constraints. Relative to a system, we can have constraints that limit the possibilities for a system. Your RO calls those selective constraints. 
And then we have constraints that can open up the possibilities for a system: those are enabling constraints. So there are two types of constraints: selective constraints that limit, and enabling constraints that open up. Obviously these are spatial metaphors; we're talking about modal changes. Now, Juarrero's point is that if you have all of these distinctions, Kant's problem goes away. What dynamical systems theory does is allow you to use all of these distinctions to trace feedback systems, self-organizing processes, without falling into circular explanation. So let's do Kant's tree. There's a bunch of biochemical events, and they cause the tree to have a particular structure. Why do trees have the structure they do? Most trees, with variations, and plausibly the trees Kant was looking at: the branches spread out and the leaves spread out like this. Why? Is it because God wanted us to have shade? Why do trees have that structure? Any ideas? You should think about things like this. So they can optimize the amount of sun coming in from all directions? Right, although it's not even optimizing; the tree is trying to increase the amount of sun coming in from all directions. It's trying to increase the probability, listen to my language, that a photon will have a causal interaction with a chlorophyll molecule. The tree has the shape it does, and it has that shape even at night, because what it's doing is creating a condition that shapes the possibility space, increasing the chance of a particular causal event occurring. Does that make sense? So what you can say is: there are these biochemical events that cause the structure, and then the structure alters the probability space, constrains the probability of the events occurring. The structure enables certain events and limits others; the structure also blocks the light for things around the tree, which limits weeds from growing and competing with the tree. Now, this isn't a circular explanation, because I'm shifting: I talk first about events and causes, then I shift to conditions, structures, and constraints, and then I come back. Moving that way isn't a circle, because I'm not repeating myself; I'm moving between fundamentally different metaphysical categories. I'm not saying that vision is caused by vision. So here's Thomas. Thomas, and all of us, but I'll pick on Thomas: Thomas is a sack. There's a bunch of biochemical events that have a very low probability of occurring out here in this environment. Inside the sack, those events have a high probability of occurring, because the sack creates a certain set of conditions. And events that are very low in probability inside the sack are very high in probability outside it. That's why Thomas likes to keep the sack in place; he doesn't want the sack punctured. It's true. He looks uneasy, but it's true. So the sack alters the conditions, and the probability of events gets terrifically skewed. Those events then do what? They put energy into the structure. The structure then alters the probability of the events; the events make the structure; the structure alters the probability of the events; and so on. Do you see?
It’s a self-organizing process, but I’m not talking in a circle. Because I’m not just saying the sack makes the sack. See, that’s what life basically is. There’s a little bit more to it, and we need to know this, because living things are not just self-organizing like tornadoes. They’re autopoetic. Autopoetic things are a subspecies of self-organizing things. Autopoetic things, using this language, autopoetic things are self-organized such that their structure functions to seek out the conditions that will increase the probability of the structure existing. Look, a tornado is self-organizing, right? You’ve given up the biblical notion that there’s a desert god that makes tornadoes, right? I hope you don’t. We don’t need that anymore to explain tornadoes, or hurricanes. But see, tornadoes, although they’re self-organizing, their structure doesn’t function to have them seek out the conditions that will maintain their own existence. Tornadoes will happily, this is anthropomorphic, crazy, they have to, they will happily go into conditions that will destroy them. Because they are not self-organized to seek out the conditions that will actually increase the probability of their existence. Does that make sense? Autopoetic, auto, I mean, autopoesis, self-making. Living things are autopoetic. They are, they are structured so that the structure functions such that they seek out the conditions that help to improve, right, the longevity of that structure. Does that make sense? Okay. So with the notion, with all of these in place, right, we can talk about dynamical systems on self-organizing, and more importantly, ultimately, autopoetic systems. Okay. So now, let’s try and apply this to something Kant said he couldn’t do, which is biology, to one of the foundational theories of biology. And this is what Yoraro does in the book. And it’s probably given away by the use of this term, selective constraints versus enabling constraints. And this is where, all right, we can pick up on one of the things, probably, we’ll say. All right. So probably the first and one of the greatest dynamical systems theories in science is in biology, and it’s one of the foundational theories of biology. It’s Darwin’s theory of evolution by natural selection. Now, I think Darwin’s theory of evolution by natural selection is one of the great theories. I mean, in terms of its scope and power and its evidential base, it’s as good or better than, I think, the atomic theory. It’s way better than relativity. We just have way more direct evidence at explanatory scope and precision than we do with relativity. It’s a great theory. There was a BBC series called The Voyage of Charles Darwin. It was on the 70s. Any of you seen it? When I watched it, I was like, I want to be Charles Darwin, which, of course, is impossible. But think about that. What did you do? Oh, I sailed around the world and went to all these exotic places, and then when I came back, I created a world-changing theory. That’s a great life, eh? So I ordered the DVD and everything, and I did it. I was so stupid. I went on Amazon, and oh, there it is, and I didn’t look carefully. I did the one click, and I ordered it, of course, and it came from Britain. And then, of course, I can’t watch it because it’s coded for Europe. So I have the DVD case just sitting there. Yes? You can just watch it on your computer. My new Mac doesn’t have any way of watching people do this. I know, I thought that too. I know. Sometimes life is tragic. Okay, so that’s all we’re going to get to today. 
I’m just going to try and show you, I’m going to do two things. I’m going to try and show you how you can apply this to Darwin’s theory, and what does a dynamical system theory look like? And then I’m going to suggest to you how we could use all of this to start thinking about what relevance realization is. And then next class, we’ll tie it all together. Okay, so what you need is you need a feedback system. A feedback system is any system where the output becomes input into the system, and you know what that is for biological things. The feedback system is reproduction. That’s why it’s called reproduction. Here’s the output from a system, goats. And what do they then do? They are the input into the system. They make more goats that then make goats. So the output from the system is the input to the system. You have some feedback processes. All right, so what Darwin did is he had this brilliant insight. Again, we tend to explain, well, I understand, because you’re trying to do biology when you’re talking about Darwin, and you’re trying to do psychology. But we should also step back and do the psychology and compare what was going on, like get at Darwin’s insight. See, before Darwin, lots of people were trying to explain morphological design, why organisms had the design they did, the structural functional organization. And here’s the thing. Most of the people that were doing this work, the naturalists, were clergymen. Easily. That’s easily to establish fact. Why? In fact, Darwin was considering going into the clergy so he could study, be a naturalist. Now why? That sounds bizarre. Why are people before Darwin, the people who want to study the natural world and species and the morphological design of things, why are most of them clergymen? Any ideas? Yes? Because the explanation wouldn’t have to do with God? Kind of. Yeah, it’s the idea that, right, they seem to be so perfectly designed. It was an idea of perfection of design. And there must be some sort of central secret in there. If we could just figure out, there must be some essence of perfection to design. If we studied it long enough, we could get the formula for design. We could figure out the essence of design, and then we would know the mind of God. And you have to say it that way and put your hands in the floor. The mind of God. Now it turned out that that was exactly the wrong way to frame that. I want you to remember that. Because it turns out that looking for the essence of design or the perfection in all design is a misframing of a phenomenon. Because Darwin actually didn’t try and do that. He didn’t try and come up with a theory of design. He tried to come up with something different. He tried to come up with a constantly self-organized process of designing, which is a different theory, fundamentally different theory. Design is only emerging out of previously contextually sensitive design in a self-organizing fashion. All right, so you know the theory of evolution by natural selection. And we now know that evolution is not driven just by natural selection. It’s also driven by genetic drift and autocadalytic processes and then neutral transformations. There’s other mechanisms that work. I get all of that. But by and large, natural selection still does a lot of the heavy lifting in biology. Darwin’s theory of natural selection and Mendelian genetics are the foundational theories of biology. Okay, what’s Darwin’s theory? Well, Darwin’s theory is a theory of conditions that constrain this feedback process. 
So let’s use Urroto’s language to talk about this. What we have are we have selective constraints. These are conditions that limit the possibilities for the system, prevent a lot of possibilities from becoming actual. The condition is scarcity of resources. Scarcity means that not all life forms can survive. If there was unlimited biochemical soup that we were swimming in as a single-cell creature, and if natural selection was the only process, so give me that because we’re focusing just on that, if that’s all there was and there were single-cellular organisms in an infinite soup of nutrients, evolution wouldn’t occur. So this is selective, right? That’s why she uses this term. This is natural selection. Out of all the possibilities, only some are selected. It’s narrowed down. So this is selective constraint. It limits the options on design. But of course, there’s also enabling constraints there. These are conditions that increase the possibilities for design. So look around the room. What do you notice about all of us? We’re very different. There’s a lot of variation. Now there are lots of things that cause variation, but that’s the condition that we’re running onto. Mutation is one of them. Sexual reproduction is another. In fact, there’s a current theory right now that the sole function of sexual reproduction is to shift our genetic makeup. We’re constantly shuffling the decks so that we stay ahead of the viruses, sort of gaining ground. I don’t know if it’s going to turn out to win or get consensus. People are talking about this right now. I don’t know. But the important point is this is an enabling constraint. This increases the number, the options, the possibilities for the system. Now, Uraro does something, and she only goes halfway through, and my co-authors and I think we should extend the metaphor. She calls these kinds of constraints a virtual governor. Virtual because we’re talking about possibility rather than actuality. That’s what virtual means. Okay, so you know what an actual governor is, right? You have a steam engine. Say this shaft is turning, and you have, like, you’ve seen this kind of thing before. And because of inertia, or what you’ll often call centrifugal force, as this spins faster, the balls move out, and you’ll have that, like, attached to some lever, and that will decrease the amount of fuel going to the engine. And then that means the shaft spins slower, the balls drop, and then that increases the amount of fuel. And this self-organizes very quickly to give you a stable, right, speed. That’s a governor. It limits the options on speed very narrowly. Is that okay? It’s because it’s using a lot of negative feed time, but that doesn’t matter if you’re not doing this hypernetics. Okay, and she stops there. And we said, well, you should finish the metaphor because this is a useful way of naming this. If a virtual governor limits your options, what should we call something that makes more options? Well, that’s a virtual generator. Now, what Darwin has is he has, of course, a systematic relationship between these conditions. And what do you have when you have a systematic relationship between a governor and a generator? You have an engine. This is a virtual engine. A virtual engine is made up of a virtual governor, a set of selective constraints, and a virtual generator, a set of enabling constraints. So this is what a dynamical systems theory does. 
A dynamical systems theory does what Darwin's theory does: it finds, discovers, postulates, gets confirmation for, a virtual engine that regulates a feedback process. That's what Darwin's theory of evolution is. Evolution by natural selection is a self-organizing process in which a feedback system is regulated by a virtual engine, so that the design of organisms is constantly being shifted in a contextually sensitive manner. Does that make sense? That's what you're doing in dynamical systems theory. Darwin isn't saying something vacuously circular. He's not just saying goats come from other goats. He's saying that birds come from dinosaurs, and that's cool. Even though there are people in the United States who don't believe that, and now they've got their own president. You think I'm joking? Look up his vice-president and look up the legislation he's most trying to push. Pence is trying to get creation science taught on an equal basis throughout the American educational system. That's his big thing; that's what he most wants. We're in Canada. Okay. So we're almost out of time, and if I introduce another theoretical move, several people are just going to explode. So all I'm going to do now is gesture, and it's going to be vague and promissory, but I'm going to try to tighten it up. First, this is the kind of thinking you can use to explain the more specific notions of self-organizing criticality and the self-organization of networks. But then what I'm going to argue is that I can use dynamical systems theory to explain relevance realization in a non-vacuous way, and then link that very tightly to the notions of self-organizing criticality and small world network formation, so that we get a very tight integration between our theory of insight and our theory of the machinery at work in relevance realization. And here's the final thing I'm going to suggest to you. What if relevance realization is a kind of cognitive analog to evolution? Evolution is constantly adjusting your biological fitness to the environment. What if relevance realization is the way, within information processing, in which the brain's problem-solving ability is constantly evolving its fittedness to its environment? What if what we need in order to understand relevance realization is something like a dynamical systems theory of the feedback relationship between the organism and its environment, and of how that relationship is constantly evolving your problem-solving fittedness to the environment? If we could do all of that, then we could nest the theory of insight within a theory of relevance realization, within a neuroscientific theory that would ultimately ground out in biology. That would give us a base rigorous enough to get all of the benefits of the search inference framework while starting to talk about the missing machinery that the gestaltists were flailing at in a theoretically impotent and vague manner. Okay, that's it for today. There are some uncollected tests here. Thank you for your patience. This was a very hard lecture; the rest of the lectures get easier.