https://youtubetranscript.com/?v=YjBV_6Vv5gg

I wanted to explain to everybody who’s listening a little bit more about this idea of entropy, just so that it can be made more understandable. So imagine that you’re driving to work in your car and your car isn’t bothering you. You’re not attending to your car, apart from the fact that you have to drive it, and the reason you’re not attending to it is that it’s performing its proper function as a car in relationship to your goal: it’s moving you down the road reliably. Now imagine what happens when your car stops, say on a busy highway. What’s happened is that the path length to your destination, and to multiple other potential destinations, has now become indeterminately large, and the search space opens up. Now you’re off to the side of the road with your car. Your first set of problems is that your whole day is now messed up. How are you going to get to work? You have to compute a whole variety of potential pathways in the world just in relationship to your day. And then you have the broader problem that your car is no longer a car. It’s a useless chunk of metal that you’re trapped in, in a dangerous situation, and you have no idea how to fix it, and maybe no idea where to take it. So the collapse of the simplicity of your car as an affordance in relationship to a proximal goal has exposed you to entropy, and entropy is the multiplication of the problems that now beset you. And category collapse does that.

And so if you understand this, if you understand that your perception of the car is dependent on the maintenance of its function in relationship to a goal, you start to understand something very fundamental about categories themselves, because everything you see in the world has this nature. It’s a unity of form, which is something the empiricists can concentrate on, but it’s a unity of form in relationship to a goal, and that’s built right into the perception of the so-called object itself. So your object perception is constraining entropy by organizing the world into categories that are functionally relevant to goals you maintain either explicitly or, even more importantly, implicitly, and category collapse produces this increase in entropy. Now, you feel positive emotion when you see yourself moving towards a valued goal, and you feel negative emotion when some uncertainty in relationship to that goal manifests itself, or when you encounter, say, a determinate obstacle that you have to walk around. And that’s part of the way, to go back to an earlier section of this discussion, that you can relate emotion to both cognition and categorization. So this issue of entropy reduction is crucially important, because it’s at the basis of categorization itself.
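To make the path-length point concrete, here is a toy sketch in Python, purely illustrative and not from the discussion itself: the commute is modeled as a small graph of states, the hypothetical "in_car" edge stands for the car functioning as an affordance, and category collapse deletes that edge. All of the state names are invented for the example.

```python
# Toy sketch: "category collapse" as the deletion of a reliable edge,
# which both lengthens the path to the goal and multiplies the options
# that have to be searched. All state names are illustrative.
from collections import deque

def shortest_path_len(graph, start, goal):
    """Breadth-first search: number of steps from start to goal."""
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == goal:
            return dist
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, dist + 1))
    return float("inf")  # unreachable: path length is indeterminately large

# While the car works, the world stays simple: one short, reliable route.
world = {
    "home": ["in_car"],
    "in_car": ["work"],          # the car-as-affordance edge
    "roadside": ["call_tow", "call_friend", "walk_to_bus", "try_repair"],
    "call_tow": ["garage"],
    "call_friend": ["work"],
    "walk_to_bus": ["bus_stop"],
    "bus_stop": ["work"],
    "try_repair": ["roadside"],  # you can't fix it: a dead-end loop
    "garage": ["work"],
}

print(shortest_path_len(world, "home", "work"))  # 2 steps: simple affordance

# The car stops: the reliable edge collapses into "stranded at the roadside",
# dumping you into a wider search space with more branches to weigh.
world["in_car"] = ["roadside"]
print(shortest_path_len(world, "home", "work"))  # 4 steps, found by wider search
print(len(world["roadside"]))                    # 4 new problems to compute at once
```

The point is not the particular numbers: it is that one collapsed edge simultaneously lengthens the path and multiplies the branches that have to be searched, which is the entropy increase described above.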
Now, the reason I asked Friston about categories as micro-narratives is that I was very curious, and I’m still very curious, and this is probably more relevant to your work on spirituality. One of the things you point out in the recent lecture you did for Ralston College is that even the perception of a given object is dependent on some sense of oneness. Piaget was very interested in this: why is this one thing? Right, because, you know, break it and now it’s two things. And things don’t have to be physically contiguous to be one thing.

Right, right, and so the question is what constitutes the oneness of the thing, given that it’s fractionable in an infinite number of ways. And another question that emerges out of that is what makes two cell phones members of the same category. Okay, so let me run a hypothesis by you and tell me what you think of it. I think that things are one, first of all, if you can use them for a specific purpose with a specific sequence of actions in relationship to a given goal. And they’re intersubstitutable, so they’re the same, if you can replace them functionally in the same pattern of operations with no transformation of the path. So they’re the same because they’re functionally equivalent in relationship to a goal, not because they share a set of features. Anything that’s swappable is the same. But that is dependent on a teleology. It’s necessarily dependent on a teleology. I mean, and this is not a criticism, this is the classical notion of multiple realizability. I can have the same program, Excel, and run it on many different machines. The actual physical instantiation can be different as long as I’m reliably getting the same generative model, as long as I’ve got the same formal system running. That’s why, in fact, you don’t think of it as one program here and a different program there. Think about this abstract entity, a computer program, or even a file. You move it, but what space are you moving it through? The language comes so readily to us. You say you’re moving it from one computer to another because of exactly that: because the generative model here and here, and this is an important qualification, has no relevant difference. Yes. For example, this one might run a little bit slower on this computer than on that one, but if it doesn’t impact how you can use it, then… Then it’s the same enough. Yes.
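As a toy illustration of that swappability, and this is only a sketch, not anyone’s published formalism: in code, two physically different realizers fall into the same category when either can be substituted into the same pattern of operations with no change to the goal. The class and function names here are invented for the example.

```python
# A toy sketch of multiple realizability / functional equivalence.
# Transport, Car, Bus, and commute are invented names for illustration.
from typing import Protocol

class Transport(Protocol):
    """The functional role: anything that can move you toward the goal."""
    def move(self, origin: str, destination: str) -> str: ...

class Car:
    def move(self, origin: str, destination: str) -> str:
        return f"drove from {origin} to {destination}"

class Bus:
    def move(self, origin: str, destination: str) -> str:
        return f"rode the bus from {origin} to {destination}"

def commute(vehicle: Transport) -> str:
    # The pattern of operations cares only about the role, not the realizer.
    return vehicle.move("home", "work")

# Physically different instantiations, functionally equivalent relative
# to the goal, hence members of the same category: swappable is same.
print(commute(Car()))
print(commute(Bus()))
```

Nothing about the shared category depends on Car and Bus sharing features; it depends entirely on the role they play relative to the goal, which is the teleology point.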
Now, I wanted to introduce something, and this will help get us in a little deeper. I recently published a paper with Brett Andersen and Mark Miller on integrating the relevance realization framework and the predictive processing framework. You want to do entropy reduction, and if you look at network theory, the way you explained it in terms of path reduction is really important here. There are three basic kinds of networks. Networks are just ways in which things are connected: sequences, the way an airline’s routes are connected, the way the internet is connected, the way neurons are connected, functional connectivity. First, there’s a regular network, where nodes are connected only to their neighbors, so every connection is just one step away, node to node. Then there’s what’s called a random network, where you can have very long distance connections. So I don’t have to fly from Savannah to Atlanta to… I can just fly directly from Savannah to Toronto, something like that. The regular network is highly inefficient. The way you measure efficiency is called mean path distance: for all possible pairs of points, you ask how many steps it takes to get from this point to that point, and then you average all of those together. That gives you the mean path distance, the average path distance between any two points. In a regular network it’s very, very high, because you have to go through a lot of steps; all the connections are local.

When you look at a regular network, it looks beautiful. It’s highly ordered, because all the lines are the same length and everything. But it’s highly inefficient. The random network is highly efficient, because you have a lot of these long distance connections that collapse your path length, your mean path distance. But the brain doesn’t go for either one of those, because there’s a trade-off relationship: as I make the network more random to make it more efficient, which sounds like a contradiction in terms but isn’t, I lose robustness in the system. Think about it. When you have a lot of those little local connections, there’s lots of redundancy, so I can lose a lot of stuff and still get graceful degradation, only a small reduction in functionality. Whereas if I have a random network, I can take out one link and entire nodes can become isolated from each other. So that’s the danger of efficiency versus redundancy. Yeah, and so what the brain does is what’s called a small world network. A small world network is mostly regular, with one or two long distance connections. I pointed this out before.

And is that associated with the manner in which the cortical columns organize themselves? Because there are a lot of micro-connections within cortical columns that are very fast and efficient, and relatively sparse connections between cortical columns. The cortex, by the way, is made up of these cortical columns, which are replicated units of, I think it’s about 100,000 neurons each, with something like 10,000 connections per neuron. And then that structure is replicated, and that makes up the cortical sheet. Now, everything we’re talking about right now is in one sense controversial. I’m not saying anything that doesn’t have a lot of good empirical evidence for it, but we’re relying on technologies like fMRI and dense EEG that still don’t give us the precision we’d want. So I want to say that I’m not saying anything ridiculous here, but I don’t want to claim that it’s settled. Subject to revision. Yeah, yeah, right. But it looks like the brain is organized this way at multiple levels of analysis, not only top down, but back to front and inside out.
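To make the mean path distance idea concrete, here is a minimal sketch using Python’s networkx library; the node count and wiring parameters are arbitrary choices for illustration, not claims about any real network.

```python
# Minimal sketch of mean path distance, using the networkx library.
# The sizes here are arbitrary, chosen only for illustration.
import networkx as nx

n, k = 30, 4  # 30 nodes, each wired to its 4 nearest neighbors

# Regular network: a ring lattice where every connection is local.
regular = nx.watts_strogatz_graph(n, k, p=0.0)

# Mean path distance: average, over all pairs of nodes, of the number
# of steps needed to get from one node to the other.
print(nx.average_shortest_path_length(regular))  # high: many hops

# Random network: rewire every link into a long-distance shortcut
# (the direct Savannah-to-Toronto flight). Mean path distance collapses.
random_net = nx.connected_watts_strogatz_graph(n, k, p=1.0, tries=100)
print(nx.average_shortest_path_length(random_net))  # much lower
```

Here p is the probability of rewiring each local link into a random long-range one, so p=0 leaves the pure ring lattice intact and p=1 gives the fully rewired random case.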
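And a sketch of the trade-off itself, under the same illustrative assumptions: at a small rewiring probability the network keeps most of the regular network’s redundant local clustering while its mean path distance falls toward the random network’s, and under accumulating damage the redundant networks tend to degrade more gracefully.

```python
# Sketch of the small-world trade-off and graceful degradation
# (illustrative parameters, not brain-derived), using networkx.
import random
import networkx as nx

n, k = 30, 4
nets = {
    "regular":     nx.watts_strogatz_graph(n, k, p=0.0),
    "small world": nx.connected_watts_strogatz_graph(n, k, p=0.1, tries=100),
    "random":      nx.connected_watts_strogatz_graph(n, k, p=1.0, tries=100),
}

for name, g in nets.items():
    # Clustering measures redundant local structure; mean path distance
    # measures efficiency. Small world networks tend to score well on both.
    print(f"{name:12s} mean path distance {nx.average_shortest_path_length(g):.2f}, "
          f"clustering {nx.average_clustering(g):.2f}")

# Graceful degradation: knock out the same number of links in each
# network and check whether any nodes have been cut off entirely.
rng = random.Random(1)
for name, g in nets.items():
    damaged = g.copy()
    damaged.remove_edges_from(rng.sample(sorted(damaged.edges()), 15))
    print(name, "still fully connected after damage:", nx.is_connected(damaged))
```

Exact outputs vary with the random seed; the qualitative pattern, high clustering together with short paths at a small rewiring probability, is the small world signature.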