https://youtubetranscript.com/?v=9aFdZ5lo5iY

Welcome to Untangling the World Knot of Consciousness, wrestling with the hard problems of mind and meaning in the modern scientific age. My name is John Vervaeke. I'm a cognitive psychologist and a cognitive scientist at the University of Toronto in Canada. Throughout the entire series, I will be joined in dialogue by my good friend and colleague, Gregg Henriques, from James Madison University in the United States. Throughout, we are going to wrestle with the hard problem of how we can give an account of a phenomenon like consciousness within the scientific worldview, and how we can wrestle with that problem in conjunction with the problem that Gregg calls the problem of psychology, which is pervasive throughout psychology: that psychology has no unified descriptive metaphysics by which it talks about mind and/or behavior. Throughout this, we will be talking about some of the most important philosophical, cognitive scientific, and neuroscientific accounts of consciousness. So I hope you'll join us throughout. I'm really excited about it. I've been embodying my modeling of various things throughout the last couple of weeks. So I've been feeling it. I'm looking forward to the continuation. So last time we did a very long analysis. One of the few times I use slides, I don't usually like to use them, but we did an analysis of an argument that I've been working on with Dan Chiappe about the use of the rovers on Mars. We basically used that as a test case and as a vehicle for explicating, first of all, the nature of perspectival and participatory knowing, and then the relationship between them. We brought in Montague's idea of mutual modeling, and consciousness as involving mutual modeling between participatory and perspectival knowing. And so that's sort of where we're at right now. And what I'd like to do now is take the next step forward and set up a course of argument that will allow us together, you and I, to enter into dialogue, not only with each other, but with some of the most prominent theories of consciousness on the market right now, on the psychological, cognitive scientific and neuroscientific market, if I can put it that way. And so I have a proposal about how to do this. And what I wanna do is I wanna propose a particular- Go ahead. Let me just pop in real fast. So as a meta-theorist, okay, so me, I'm really looking to see how various frames assimilate and integrate key insights from various models. Yeah. And one of the things I'm so attracted to about this model is that it seems to precisely do this. Okay. So what we're setting up, from my vantage point, and what my ear is in part listening for, is the assimilation and integration capacity, and how it pulls the strengths of various models together and generates coherence. That's what I'm often paying attention to. And this is what I love so much about your formulation. Thank you, Gregg. That was very helpful. That was very helpful, especially for the listeners. So thank you for that. Yeah. That's exactly right, in the sense that that's what I will be endeavoring to demonstrate as much as possible. So I have a particular proposal about a particular strategy. I wanna try and give a cognitive scientific argument precisely because cognitive science does the thing that Gregg just said.
At least, I argue for a version of cognitive science in which this kind of meta-theoretical integration is central to the practice of cognitive science, because we're trying to bridge between work in artificial intelligence, neuroscience, psychology and philosophy, especially phenomenological philosophy, about the nature and function of consciousness. So one of the strategies we use in cognitive science, one that is different from the strategies typically used in psychology or in neuroscience, and that overlaps with strategies used in artificial intelligence, is called the design stance. And the design stance is this idea: you try to figure out what a phenomenon is by reverse-engineering it. You try to figure out, well, what if I had to build a machine that did this? Until recently, this was not a strategy people could generally use, right? But now we're getting to the place where this has become a plausibly viable strategy. So say I wanna understand intelligence, general intelligence. And this is a project that's on right now, the project of artificial general intelligence. I try to make a machine that has it; I try to design it. And by doing that, I often have to wrestle with some of the deepest and hardest problems of figuring out what the phenomenon is. And it gives me a new way of trying to understand it that can be valuable to people in psychology, who perhaps might be taking a more experimental approach, or in neuroscience, who are taking a more direct measurement approach with fMRI, EEG, et cetera. So that's what I'm gonna try and offer here, because I think that will allow us to powerfully bridge. Yeah, no, I was just gonna say, to me, this is again why we think so much alike. I mean, I'm basically reverse-engineering propositional knowing with justification systems theory. And again, this is this big-picture coherence capacity: when you reverse-engineer, then all of a sudden you have a design feature and you get a lock-and-key kind of relation potential. And then when you see things from different perspectives, it deepens, it raises questions. And also, I think, when it works, you catch that lock-and-key correspondence. So anyway. No, no, that's well said, that's very well said. So I'm gonna make two, what I think are plausible, presumptions in order to try and take the design stance. I'm gonna take it that AGI is a progressively successful project. I'm not saying we've made autonomous AI or anything ridiculous like that. But I think a very reasonable conclusion to draw from the last 15 years is that this project is progressing, and it's not even progressing in a linear fashion, it's progressing in a nonlinear fashion. And then there's a philosophical justification for that. It's basically an argument something like this: if we can't succeed in getting intelligence out of the physical, then I think we're doomed in trying to get consciousness out of the physical. So what I'm saying is it's plausible that this project, AGI, is progressing, and if it doesn't succeed, I think we're really doomed in trying to give an account of consciousness that can fit into the scientific worldview. So those are my two methodological presuppositions. And then what I wanna do is I wanna say, okay, let's say we wanted to build a machine that had AGI. And so here's another part of what I'm arguing for.
If we take the design stance, we will always, and this is what the design stance does, it makes you always move between, and you alluded to this, and try and integrate the function issue and the nature issue. And when I'm trying to design something, I'm obviously trying to make it function, but if I'm doing the genuine design stance, I'm also trying to bring it into existence. So the design stance is linking the nature and function things together. And by trying to design artificial general intelligence, we're gonna try and explicate and justify the intuition that there's a deep connection between intelligence and consciousness. Yes. Okay, so that's sort of the methodological justification of my method. So it's a methodological justification. So what I want to start with is the idea that what's at the core of intelligence is a capacity for relevance realization. And we've talked about this a lot. I've argued this in many places elsewhere. You've published on it, we've published on it together. So I'm not gonna recapitulate that argument. It's out there, there's a lot of material on that. And we've also touched on it and discussed it at length here. So I'm just gonna assume that that's one huge component of what is needed in order to be an intelligent system. And when we say relevance realization, I sometimes wanna put in a third R and make it RRR, because we're actually talking about a massively recursive process, because we've been talking about that also throughout this whole argument. That's right. Massively recursive relevance realization. And then that's gonna relate to things like the dynamic interrelation of perspectival and participatory modeling across lots of people. Yep, exactly, exactly, well said. It's really good to be doing this with you, man. It's just like, it's really good. Like, you do those little things and they're wonderful, they're seamless, but they also tie things together. So I just wanna compliment you on that because- Thank you, friend. Appreciate that. Great, it's fun to be here. Okay, so what I wanna do now is make another step. And there's a sense in which I don't need to go into this in great detail for the argument you and I are gonna explore today. But I do want to indicate to people that the deep version of this argument is forthcoming. And I'm working on this argument right now with a former student of mine, who is also a student of Andy Clark's, Mark Miller, and then another person I just met because he reached out to me because of the Awakening from the Meaning Crisis series, Brett Andersen. And what we're really working on very hard is the idea that we can integrate relevance realization theory with the predictive processing models, especially those that have come out around Clark's work and Friston's work. And I know you're very familiar with them. Yep. Familiar with them too. And then the idea here is, the version of predictive processing that Mark especially thinks will help make this work is not so much the standard computational one, but very much a 4E cognitive science, where embodiment, enaction and all that is the best interpretation of how it works. That's super exciting. For me and my 30,000-foot view that I tend to take, when I'm thinking about what information is, I talk about an information processing system, that's input, recursive computation and output systems. I talk about semantic information, and I talk about information in the information-theoretic sense.
So now if we have relevance realization pulling information processing and semantic information together, and then we loop it into the information-theoretic and the predictive capacity of extracting and reducing uncertainty, man, you are pulling things together. So I think some of the most recent work on integrating the information-theoretic approach with the semantic information idea works exactly along these lines. And perhaps that's something we could get back to. But again, assimilation, integration, and connecting with coherence. Yeah, it's all converging in that way. So first of all, in case people are not aware of it, let me give a very quick primer on predictive processing. The predictive processing model is the idea that what the brain is primarily trying to do is predict. Now notice how that immediately sounds like we're at the propositional level. And that's why I took time to indicate that, no, where Brett and Mark and I are working is at a deeper level, very much more at the perspectival and participatory level. Actually, I shared on my list a little five-minute clip about a spider web — is a spider's web an extension of its mind? — and how it utilizes that to model and sense and expand its sensory feedback. Oh, that's very cool. So we can make that, yeah, we're talking about a level of potentially a spider here, okay, so in relation, right? Not at the level of narrative prediction and justification. No, exactly, exactly. And at some point I wanna talk to you about that when we move to narrative, how that starts to afford the justification problem. But well, again, too many candies. I'll take that one. I'll take that one. Too many candies. Okay, so the basic idea of the predictive processing model is that what the brain is constantly trying to do is predict its environment. The problem with that, the idea is, is it can't really directly predict its environment. What it can directly do is predict itself. So the idea is, and I think this is central to it, it's making use of sort of a Bayesian idea of how you alter the assigned probability of a belief based on evidence. I don't like this language, because it's all very propositional, and we're trying to talk below that level. So I'm not gonna keep saying that. I'm gonna accept that people understand that we're just using these words as stand-ins. So the idea is, what the brain does is, think of the brain as hierarchically organized, and Gregg has emphasized this multiple times, and I think there's a fairly strong consensus view about that. It's a neural network in which, here's the lowest level; this is the level that's in direct sensory-motor contact with the world. And then the level above it, what it tries to do is predict that level. And then, right, there's a level above that that tries to predict that level, and so on and so forth. And then the idea is, by trying to reduce how surprised it is by what the lower levels are doing, indirectly it will start to zero in on and predict the real patterns in the world. Is that fair enough so far, Gregg? Absolutely, and let's just asterisk the word surprise there. Yeah, yeah. Because when you get surprised, there's gonna be energy to try to model that surprise and then readjust.
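[Editor's sketch, not from the dialogue: a minimal illustration of the "alter the assigned probability of a belief based on evidence" idea and of surprise as the quantity the system works to reduce. The hidden cause, likelihoods and numbers are made-up assumptions chosen only to show the mechanics.]

```python
import math

def bayes_update(prior, likelihood_if_cause, likelihood_if_not):
    """Return the posterior probability of a hidden cause after one observation,
    plus the surprise (-log probability) that observation carried."""
    evidence = prior * likelihood_if_cause + (1 - prior) * likelihood_if_not
    posterior = prior * likelihood_if_cause / evidence
    surprise = -math.log(evidence)   # how unexpected the observation was
    return posterior, surprise

# Hypothetical observations: the first two favor the hidden cause, the third doesn't.
belief = 0.2                         # prior assigned to the hidden cause
for obs_likelihoods in [(0.8, 0.3), (0.8, 0.3), (0.1, 0.7)]:
    belief, surprise = bayes_update(belief, *obs_likelihoods)
    print(f"belief={belief:.2f}  surprise={surprise:.2f}")
```

[Higher surprise is the pressure to revise; stacking levels so that each one is doing this to the level below it is what the next stretch of the conversation describes.]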
Right, and so these are places, by the way, where relevance realization comes in, because predictive processing talks about revising the model or making alternative models. And that's actually an issue around relevance and relevance realization, because what the system is also facing as a problem is that it's modeling the hidden causes in the world. What I mean by hidden causes: by predicting what's gonna happen at the lowest level, it's actually creating at least a procedural model of the causal patterns in the world that are making those patterns in the sensory-motor level. And the system can adjust by altering its predictions or by altering the world to make the world conform to the predictions. And so you can see how it gets you right into the sensory-motor. But the idea there is, okay, it's often having to predict things at multiple temporal scales, because the question of what it should model is of course the relevance realization question. What's happening right now, what's happening a little deeper in the environment, longitudinally? And then the idea is, the layers in some sense correspond to these temporal and spatial scopes. And then what you're doing is this hugely dynamic, massively recursive process in which error signals from the layers percolate up and then correction signals or action signals percolate down. And now notice what we've got, Gregg. We've got what we've been talking about a lot, right? We've got a machine that is set up to do an important kind of relevance realization. Why? Because as it goes up, it's doing data compression. It's doing a massive data compression. As it comes down, it's doing massive data particularization. It's doing this in a hugely complex process of dynamical self-organization. And what is it doing? It's regulating the sensory-motor loop, because it's updating its models, but it's also changing the world with action. And so you've got this loop and this loop happening. So you can see, I mean, I'm not gonna go into the technical details, but you can see how relevance realization theory and predictive processing theory just go really, really nicely together. And you can just, I mean, for me at least, you see the fractal line, the fractal line back and forth, basically. Yeah, exactly. Well said. And so what we're seeing here, and I wanna say this carefully, because, like you said, I'm making an assimilation space for where the mutual modeling kind of idea can come in. But what we've got, notice how the brain is actually modeling itself in this massively recursive manner in order to model the world. And in modeling the world, it's always also modeling itself. This goes to the work of Hohwy and Michael, their work on the predictive processing models of the self. And you and I are gonna talk about the self at some point too. Indeed. Because one of the error signals I get in the environment is precisely because of the way my body and my actions are altering my perception. For example, I'm blinking right now. Okay. Yep. So my system has to learn to discount that I'm a blinker. It has to learn to discount that I have a limited visual scope. So what it does is it's also reciprocally modeling itself as it models the world. So it's modeling itself in these layers, but it's also creating a model of itself as it's modeling the world. So you see massive mutual modeling going on here. Massive.
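[Editor's sketch, to make the "errors percolate up, predictions come down, compression on the way up, particularization on the way down" picture concrete. This is not anything the speakers or the predictive processing literature specify: the layer sizes, single static input, and plain error-driven updates are illustrative assumptions only.]

```python
import numpy as np

rng = np.random.default_rng(0)

# Layer sizes shrink going up: ascending traffic compresses, descending
# traffic particularizes. W[i] maps layer i+1's state into a prediction of layer i.
sizes = [16, 8, 4, 2]                 # sensory layer first, most abstract last
states = [np.zeros(n) for n in sizes]
W = [rng.normal(0, 0.1, (sizes[i], sizes[i + 1])) for i in range(len(sizes) - 1)]

def settle(sensory_input, lr=0.05):
    """One sweep of error-driven updating on a single sensory pattern."""
    states[0] = sensory_input
    for i in range(1, len(sizes)):
        prediction = W[i - 1] @ states[i]            # top-down prediction of the layer below
        error = states[i - 1] - prediction           # bottom-up error signal
        states[i] += lr * (W[i - 1].T @ error)       # higher layer revises its compressed state
        W[i - 1] += lr * np.outer(error, states[i])  # ...and its generative mapping
    # Descend again: unpack the most abstract state back into a sensory-level guess.
    guess = states[-1]
    for i in range(len(sizes) - 1, 0, -1):
        guess = W[i - 1] @ guess
    return guess

world = np.sin(np.arange(16))                        # a fixed "pattern in the world"
for _ in range(1000):
    guess = settle(world)
print("remaining prediction error:", round(float(np.linalg.norm(world - guess)), 3))
```

[The point of the sketch is only the shape of the traffic: each ascent throws detail away, each descent re-particularizes, and the whole thing is steered by how badly the layer below was predicted — including the parts of that traffic generated by the system's own activity, which is the self-modeling point being developed here.]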
It's gotta have a model so it can factor itself out as it shifts, so that it maintains the signal relative to the noise that's created by its own shifts. So you can think of it, even, that while it is doing this massively recursive self-organization, it's also modeling itself in a sense. Well, this is why I often talk about the animal-environment relation. That's what gets modeled: the animal-environment relation. You gotta model the animal, model the environment, and then model the dynamic relation. And that's in order to maintain the necessary hierarchical control feedback systems across time. Those are the variables, definitely. Exactly. And so ultimately what is constraining all of this is, again, well, that doesn't completely answer what is modeled and how it is modeled. Right. It's ultimately about how is it relevant, this notion of self-relevance, how is it ultimately relevant to an adaptive, autonomous, autopoietic system? And again, that's another place in which the relevance realization machinery comes in. It has to do, technically, more specifically, with the way the predictive processing model talks about how attention is altering salience, but I won't get into the details. Right. So what we've got is, okay, we've already got massively recursive relevance realization, predictive processing, multilayer and multidimensional mutual modeling going on. Is that okay so far? I'm there. Okay, so let's say we've got predictive processing networks, and let's say, with the work that I've been doing over all these years, we've got a naturalistic account of relevance realization. So everything I've described so far could be given to a machine. Okay, now we're ready to start. Can we just set the machine in the world? No, because we're gonna face some problems. Now, with these problems, I'm not trying to convey that people in predictive processing are not aware of them. I'm building this out step by step so we as a group, you and I and potential viewers, can track the development of the argument, okay? So we encounter this problem. Okay, we've got this system and it's gotta be doing all this, but at some point it's taking in signal from the world. At some point it's engaged in the project, like you said already, of taking information in the information-theoretic sense and converting it into something that is meaningful and useful to it. But at the very beginning, we have a problem that is called the signal detection problem, okay? And the reason I'm stopping here is because one of the prominent neuroscientific theories of consciousness runs directly off this signal detection problem. That's why I'm gonna stop at this certain roadblock. The roadblock is associated with, maybe roadblock isn't the right word, but anyways. A guidepost, and then we wanna stop and organize this, and these are the pieces that we can home in on, pull from, assimilate and integrate. Exactly, exactly. So what's the signal detection problem? The signal detection problem is the idea that there's information I want, there's information that is in some sense relevant to the organism. But the problem is, because information in the technical sense is just co-variations between events in the world, right, the information I want is always enmeshed and overlapped with the information I don't want.
The way this is referred to in the literature is signal and noise. Noise doesn't mean just auditory noise. It means any event in the environment, anything that you could potentially take as a signal, but that you don't want. Let me give a concrete example, okay? So you're a gazelle, right? And you hear a noise in the bush, okay? Now, the problem facing you is this: is that a signal for a leopard? That would be signal. So does that sound, we also use the word sound, does that sound from the bush, is it signal? Is it predicting that a leopard is there? Remember the predictive processing stuff? Or is it being caused by the wind, and all it's predicting is that the wind is going to continue to blow? Right. You can't tell, right? You can't tell. So the idea is, if this is the population of all the signals, it overlaps with the population of the noise. So there's lots of times, and this is a perennial problem for us, where we have to face this kind of pollution of signal by noise. Now, what you can say is, well, what I'll do is I'll gather more information. You can, but the problem is the problem regresses as you try to gather more information. When you step back and try to get information to get certainty about the first thing, you'll encounter this same problem again and again and again. More noise comes along. Yeah, exactly. And in fact, there's some versions of this where the noise expands exponentially as you try and explore for the signal that will resolve your primary signal. You get a combinatorial explosion. Yeah, exactly, exactly. So what does signal detection theory say we do? Signal detection theory says what we do is we set a criterion. So this is a top-down act. What we do is, and I'm gonna have to use anthropomorphic language, okay, but that's because language is anthropomorphic, somehow the brain decides, right, that it's gonna count, so let's say here's the two graphs overlapping, it's gonna set a criterion. And what that means is it's gonna count anything below a certain threshold as noise and anything above that threshold as signal. Yep. Now let's take a look at the gazelle, because for the gazelle, the two errors it can make are not equal in terms of their relevance to the gazelle. Mm-hmm. So if the gazelle makes this mistake, it hears the sound and takes it to be the wind and it's a leopard — whoa, that's a very dangerous mistake. You don't make too many of those mistakes. Yeah, right? But there's another mistake it can make. It could hear the sound, predict a leopard, and it turns out to just be the wind, but now it's running around and all the other gazelles laugh at it. Now it can't do this infinitely, because it will burn up all of its caloric energy. So it can't just say, always run away; that won't work. But what do gazelles do? Again, not anthropomorphically, even though the language sounds that way: they set the criterion, depending on how you draw your graph, I'll talk about it this way, they set the criterion very low, which means they'll count almost all of the sounds as signaling the leopard, right? Because it's much more dangerous for them to miss a leopard than to mistake the wind for a leopard. But notice how this is a relevance realization project. Depending on the situation and depending on my state, that can be completely inverted, where a false alarm is much worse for me, much more relevant to me, than a miss. So it's not like you can set the criterion once and leave it there, right?
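[Editor's sketch of the gazelle's predicament. Nothing here beyond the setup comes from the dialogue: the distributions, costs and prior are made-up numbers, chosen only to show how an asymmetry in the cost of the two errors pushes the criterion down.]

```python
import random

random.seed(1)
# Sounds caused by wind and sounds caused by a leopard come from overlapping
# distributions; the animal can only set a criterion on the measured loudness.
wind    = [random.gauss(0.0, 1.0) for _ in range(10_000)]   # noise distribution
leopard = [random.gauss(1.0, 1.0) for _ in range(10_000)]   # signal distribution

COST_MISS        = 100.0   # failing to flee from a real leopard
COST_FALSE_ALARM = 1.0     # fleeing from the wind (wasted calories, embarrassment)
P_LEOPARD        = 0.05    # leopards are rare

def expected_cost(criterion):
    miss_rate        = sum(x < criterion for x in leopard) / len(leopard)
    false_alarm_rate = sum(x >= criterion for x in wind) / len(wind)
    return (P_LEOPARD * miss_rate * COST_MISS
            + (1 - P_LEOPARD) * false_alarm_rate * COST_FALSE_ALARM)

criteria = [c / 10 for c in range(-20, 31)]
best = min(criteria, key=expected_cost)
print(f"best criterion = {best:.1f}")   # lands low: most sounds get treated as 'leopard'
```

[Change COST_MISS, COST_FALSE_ALARM or P_LEOPARD and the minimum moves, which is exactly the point made next: the criterion is not set once, it has to be continually re-set as state and context change.]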
You have to constantly adjust and move the criterion. And you're constantly doing that in terms, well, I would argue, of relevance realization. What is your state? How are things relevant and important to you? What's the context? What's the comparative relevance of the two risks? This is known as error management theory. Right, right. And sensitivity, specificity? Yep, yep, all those things. And so people even use error management theory, and I think plausibly, to try and explain some of our cognitive biases, why we're biased to make the kinds of mistakes we do. Let me give one that's, I think, non-controversial. When things are approaching you, you see them approaching in an illusory fashion: you see them coming towards you much faster than they actually are. But when they're receding from you, you're accurate. It's like, well, why? Well, because if I screw this one up, it's very costly. Duck. Right, yeah, yeah, right. So the idea is, and you can even talk about confirmation bias and other biases in these terms. Again, those heuristic functions that help us with relevance realization. Yep. Is this okay so far? Totally. I'll just throw in: so we got a dog named Benji. He's very skittish, okay? So basically every loud noise, everything, there's a very low threshold for threat. So he's just like, all of a sudden, jumping to escape and running back to hide underneath the bed. So that's just where he's set at. I've seen other dogs unbelievably relaxed. Actually, that gets into a tendency towards neuroticism at a dispositional level. But anyway, that's a- And we can talk about the degree to which personality is a very longitudinal setting of some of these things. Right, we'll flag that for later. Yeah, because, well, yeah, let's just do everything. No, let's just do it all. Okay, so here's where we get a theory of consciousness, actually, in the literature. And it's by Lau, L-A-U. And this goes to some work I've done with Richard Wu, especially Richard Wu, but also Anderson Todd. And so what Lau argues is that what any good theory of consciousness has to do is to be able to tell you the difference between consciousness and blindsight. So remember what blindsight is? Blindsight is that phenomenon where people can intelligently pick up on the co-variations in the environment, but they are phenomenologically blind. So they don't have any adjectival qualia, but you can do stuff like this. You can put a stick in front of them, right, and you say, can you see the stick? And they'll say, what's wrong with you? You're an asshole, I'm blind. You say, well, where do you think the stick is? Oh, it's there. I'm just guessing. And you can even say, well, continue guessing. What angle do you think the stick is at? Oh, 45 degrees. They can do stuff like that, okay? That's why Weiskrantz is the primary guy; Consciousness Lost and Found is a really good book on that. Yeah, exactly. So what Lau proposes is the following: that there's two things that go into signal detection. There's what he calls the sensitivity of the system. That's just its ability to pick up on the co-variations. And he says, that's what blindsight is. Blindsight is a reliable picking up on the co-variations. Notice the language from Descartes coming in here, right? You're picking up on the co-variations, but you don't actually have anything like a representation. Right?
And so what Lau says is, what turns that blindsight detection of co-variation into consciousness is the second part of signal detection, which is setting the criterion. So when the brain sets the criterion, and think about this, Descartes would love this, this is a way in which the co-variations are being made ready for reason, because they're being sorted, right, in order to generate behavior that is relevant to the organism. Isn't that, I mean, Descartes would go, oh my gosh, that's on to something. Right, no, absolutely. And so the idea here is, the difference between consciousness and blindsight is the setting of the criterion, which in an important way is now allowing you to sort the co-variations. You're doing a kind of, like I said, looking through the co-variations. You're starting to sort them in terms of their behavioral and task relevance. Right. And so I think that's beautiful. Now, Lau doesn't say, and I don't think he thinks, that this is a comprehensive theory of consciousness. He, by the way, thinks that the setting of the criterion is, remember we talked about Rosenthal and the higher-order theory? He thinks that that is the higher-order act, right? Yeah, exactly. Right. That fits very nicely. But he doesn't think that this is a comprehensive account of consciousness. It doesn't explain most of the phenomenal properties of consciousness or anything like that. But notice what happened. We're just trying to make an organism intelligent. We're just trying to give it the ability to do signal detection, and already, boom, we're starting to have to bring in a lot of the machinery that is intimately associated with the function and perhaps even the nature of consciousness. Now, here's the point that I make with the help of Richard and Anderson: Lau does not give us any account of how the criterion is set. And remember, the setting of the criterion is dynamic. The criterion has to be constantly set. And not only that, the way I'm talking about this is unidimensional. I'm talking as if I'm setting one criterion. But what I'm doing is setting criteria for many different signals in terms of many dimensions, right? So I'm gonna be needing to set this criterion in a massively recursive, right, dynamic, self-organizing manner. Go ahead. So essentially, okay, so we're gonna pull some sensitivity and then pull in some criterion that's gonna have a figure-ground relation, okay? And then what are we gonna aspectualize about the figure relative to the ground? Well, that's gonna change enormously depending on a whole bunch of different kinds of contexts. Exactly, exactly. And you can see how what you're gonna need is relevance realization within this massively recursive processing model. Okay, so intelligence and consciousness are already co-emerging together. And I wanna stop and take note of that. Right at the very beginning of the project of trying to make a system, give a system, AGI, we're starting to do that. Okay, so now let's go into that idea of prediction. And let's talk about some of the stuff we've already alluded to. What we're actually talking about, and I think this is a much better term, by the way, is anticipation. We're talking about the system, in a highly integrated fashion, predicting the world, predicting itself, and predicting the relationship between those. That's really nice.
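[Editor's illustration of the distinction Lau's proposal leans on. This is just the standard signal detection decomposition into sensitivity (d′) and criterion (c), with made-up hit and false-alarm rates; it is not Lau's own model.]

```python
from statistics import NormalDist

z = NormalDist().inv_cdf   # the probit transform used in signal detection theory

def sdt_parameters(hit_rate, false_alarm_rate):
    """Decompose detection performance into sensitivity (d') and criterion placement (c)."""
    d_prime   = z(hit_rate) - z(false_alarm_rate)
    criterion = -0.5 * (z(hit_rate) + z(false_alarm_rate))
    return round(d_prime, 2), round(criterion, 2)

# Two hypothetical observers who track the co-variations equally well (same d')
# but park the criterion in very different places:
print(sdt_parameters(0.69, 0.07))   # (1.97, 0.49)  -- conservative: rarely says 'signal'
print(sdt_parameters(0.93, 0.31))   # (1.97, -0.49) -- liberal: says 'signal' readily
```

[On Lau's picture, the first number is what blindsight preserves; the second, and the machinery that keeps resetting it, is where conscious access comes in — and that resetting, as just noted, is exactly what his account leaves unexplained.]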
Actually, I hadn't heard that framing, anticipation, but it immediately resonates, because it does get you out of the propositional and into the intuitive, perspectival, participatory. You anticipate much more than, oh, I'm gonna hypothesize and predict. I mean, at least at an intuitive level, it resonates more. Well, no, and I think it goes very well with your behavioral investment theory. And here's how I think so. So the system is doing this prediction, and within that, it's already doing what we talked about. It has to be setting the criterion, it's doing all this stuff. But it's also forming a model of itself, it has to be. And those are being mutually modeled. And notice what it has to do. We can think of anticipation as made up of prediction, which is actually picking up on these patterns and then setting the criterion, and we can also think of it, and here's where I think it comes in, as preparation. Yes, anticipation is prediction plus preparation. And preparation is not just to have a model of the self; it is to use that model to shape how you're gonna invest. That's right. That's right. And when you're actually engaged in the investment, what you're actually doing is breaking up little parts of investment, because a huge amount of what your goals are, often, are nested hierarchies of goals. Exactly. And so then you have to anticipate: there's the immediate goal, and then there's a mid-level goal, and there's a higher-order goal, and all of that's gotta be sorted. And then you're organizing and coordinating the pattern of dynamic investment relations. Excellent. So let's take all of that. And what we've got is this hierarchical, multi-scale system that's doing exactly what Gregg just said, and it's doing the relevance realization, the predictive processing, but also the criterion setting. And, did you drop signal there for a second? I did. No, I'm here. Okay, I'm fine. So I think we're okay. Okay, so you're doing all of that. And so what we're basically doing is what I've already alluded to. We're picking up these co-variations, and they're actually picking up on affordances, because they're not just predictions of the world, they're anticipations. So this is picking up on co-variations that are affordances that — my signal for you keeps dropping. I'm not gonna shut you off. But you're not getting any problem on your end, right? Actually, my end's been clear, so. Okay, well, I'll just assume that it's just something insignificant then. And so notice what we've got. We've got co-variations that are actually affordances, because we're talking about anticipation, not just pure prediction. And then we're doing this massively, massively recursive process of compressing and particularizing them. And that's constraining the sensory-motor loop. And I propose to you, that is a very powerful way in which we're doing what Descartes wanted, right? We're doing some significant compression, we're actually doing deep learning on the co-variations, and we're detecting through them, which is exactly what the predictive processing model says. We're detecting through. We're actually figuring out what the deep causes are. Really? Let's put it actually in the here and now. So as you track the screen, right, you're making anticipations about what my movements will be or what you're expecting. And when it freezes, right, then you get surprised. Then you activate: well, okay, what's actually happening? You shift the frame of reference, right?
Because the scale of modeling across the various perspectival and participatory levels gets off, and then all of a sudden you have to wait. Right, right. Because I don't want to treat all the surprises as equally salient or relevant. When you freeze, right, that's important. But if your hair moves in a way that I didn't really anticipate, because I didn't realize that you've got a haircut, right, I don't go, oh — I mean, I might if it's a particularly beautiful haircut, but, you know, it's not gonna do much for me. Certainly not with me. So, as Gregg said a few minutes ago, we're talking about how we're getting basically at least proto-aspectualization, proto-representation. We're getting this process in which we are starting to do that whole process of sizing up. We're starting to get the very beginnings of perspectival knowing. But notice how much it's enmeshed in participatory knowing, because the system models itself to model the world, and in modeling the world, it models itself. There is a deep knowing by conformity there. Amen. Okay. So now we get to the next theory, because we talked about how this is so massively recursive and self-organizing, which means it's inherently dynamical, which means it's inherently developmental. It's inherently developmental, as both relevance realization theory and predictive processing theory are. So this takes us to Cleeremans. And Cleeremans' theory is probably the theory of consciousness that gives pride of place to developmental explanations. He has what he calls the radical plasticity hypothesis, where plasticity is a system's capacity to literally redesign itself, reshape itself, in order to make itself more adaptively fitted to the environment. There's been a lot over the last two decades about how plastic the brain is, and he's invoking that. And so he has an idea, he has an argument, and this is one where many people go, wait, really? He thinks consciousness is something that develops. Now, where it begins in development, of course, he doesn't say, so it's not gonna resolve any of our ethical issues about when consciousness begins. But what he says is that consciousness is actually something you learn to do, which is really interesting. This is the part where Descartes is gonna get very stern: what do you mean? How can you learn consciousness? As long as you have the immaterial mind in your head, you're conscious. And Cleeremans is saying, well, no, what actually has to happen, and think about what we said, is all this machinery has to train up and evolve before it's gonna get some of the capacities that are needed for consciousness. So he, like Lau, follows Rosenthal — that's why he spends so much time on Rosenthal, making the deep connections between Rosenthal's theory and aspectualization. He says, well, and notice how this fits in with the predictive processing, what's happening is, right, you're doing this training up and the higher levels are learning to better represent the lower levels. We don't have to be bound to that language, because we can already use this other language. And what's interesting about it: he acknowledges that it's not enough for the upper levels to represent or even to track the lower levels. He says that what turns that into consciousness is that they care about the lower levels, right? There's an emotional, or at least an affective, component. Heidegger would appreciate that.
Yeah, I thought you would. Mark does too, because Mark thinks that the affective components within the self-organization of predictive processing are doing a lot of the important work there. Actually, I said Heidegger would appreciate it, but I appreciate it also. Okay, oh, yeah. Like Martin Heidegger, in terms of the core of our being and caring. Yes, and caring. And, yes, but I appreciate it. That's such a synchronistic segue because, of course, I follow Dreyfus in arguing that Heidegger's primordial notion of care is relevance realization, yay, right? And also Haugeland, who says, you know, the difference between our processing and the computer's processing is that it doesn't care about the information, right? Because it doesn't have to take care of itself. I was trying to show care by setting you up. You did it, you did it extremely well. So, as soon as we get to this theory, right, notice how it works. Because the system is developmental, and development means the system is autopoietic. It's taking care of itself. It's making itself. And because of that, it cares for itself, and at higher levels of processing it cares about, right? And so you've got, right, relevance realization within that's affording relevance realization without. And so, okay, so we're there. Yeah, I'll throw in a couple of things. There are six principles of behavioral investment theory that organize it. Principle four is our computational control principle, which is basically the hardware. Principle five is the constant iterative learning process. Yeah, okay. And principle six is the lifespan developmental history process. Yeah. Okay. And the sort of example that I use in relationship to my own personal world, in terms of reorganizing: anybody that's been through adolescence may remember how you get reorganized. So when I was 11, I thought of the world in one way, and let's say I thought of, say, girls one way. And then at 12 and 13, I thought very differently, as some sort of reorganization of relevance realization happened in that two-year period. So that's definitely happening, how development and consciousness relate. When you put it in those terms, you realize how radically your consciousness can shift depending on developmental and learning histories. Excellent. So notice again why we had to bring in development. Because if we're gonna make an intelligent system, we're gonna give it recursive relevance realization. We're gonna give it deep learning in hierarchical predictive processing networks. And those are inherently self-organizing, inherently developmental, et cetera, et cetera. Autopoiesis is inherently developmental. So we're giving it the basic machinery of intelligence: relevance realization, predictive processing, signal detection and the capacity for plasticity and development. And we're just building in more and more of a case for the machinery of consciousness. Yep. Okay, so now let's go to something we talked about last time, which is that you're starting to get higher levels of coordination. You're trying to coordinate, right, all this processing. And I think we're now at a place where we can start talking about, well, you've got all this mutual modeling going on, but there's a sense in which you're gonna get a very higher-order sort of thing. The mutual modeling is also going to lead to the abstraction of a very generalized model of oneself and the world, a space, if you'll allow me a metaphor, in which that can be coordinated to various different degrees.
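[Editor's toy, loosely in the spirit of the "higher levels learning to redescribe, track and care about the lower levels" idea just discussed; it is not Cleeremans' own simulation work. A second-order model is trained on a first-order system's internal states to predict when that system can be trusted, and everything here, from the noise level to the learning rate, is an illustrative assumption.]

```python
import numpy as np

rng = np.random.default_rng(0)

# A crude first-order "perceiver": a noisy readout of a stimulus.
def first_order(stimulus):
    evidence = stimulus + rng.normal(0, 1.0)   # internal state (noisy)
    decision = evidence > 0                    # its behavioral response
    return evidence, decision

# A second-order model learns, from the first-order internal state alone,
# to predict whether that response will be right -- a toy stand-in for a
# higher level coming to evaluate the lower level it sits on.
X, y = [], []
for _ in range(5_000):
    stim = rng.choice([-1.0, 1.0])
    evidence, decision = first_order(stim)
    X.append([abs(evidence)])                  # how strong the state was
    y.append(float(decision == (stim > 0)))    # was the response correct?

X, y = np.array(X), np.array(y)
w, b = 0.0, 0.0
for _ in range(2_000):                         # plain logistic regression
    p = 1 / (1 + np.exp(-(X[:, 0] * w + b)))
    w -= 0.1 * np.mean((p - y) * X[:, 0])
    b -= 0.1 * np.mean(p - y)
print(f"learned to trust strong states: w = {w:.2f}")   # w comes out positive
```

[The second-order readout adds nothing the first-order system needs for the discrimination itself; what it adds is an appraisal of the first-order states, which is the flavor of higher-order move that both Cleeremans and the global workspace discussion that follows trade on.]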
And so this automatically sets up, and you know where this is gonna go, this sets up the global workspace theory, which is the idea that consciousness is like the desktop of your computer. And so what's the model here? Okay, so the model is, you have all this processing going on, think about all these layers, right, and all these dimensions. And most of that processing is unconscious, like most of the files in your computer are unconscious. But what you can do is you can activate them by bringing them onto your desktop. Your desktop can potentially access all, or at least most, of the files. Yep, okay. So you can access that. And then when you make it active, what can you do? Well, you basically can make it interact and reintegrate with other pieces of information. And then you can broadcast it back. You can store it, you can store it in multiple locations. And so, the idea is, he also uses the metaphor of a theater, right? What consciousness is: the stage is working memory. And then, I don't like this model of attention, but I can put it aside for now, there's the spotlight of attention. I don't mind it, so I'll take the spotlight a little bit. No, that's fine. I think the model of attention that comes out of, you know, Watzl and Christopher Mole has much more to do with relevance realization and integration than just shining, because I don't think attention is just shining a light. I think it's much more complex. Yeah, my frame of broadband attention and the attentional filter, iterative processes. Anyway, whatever, we can talk about that. We can come back. Anyway, I think we're — yeah, we'll come back. We can potentially come back to consciousness and attention later. But let's go with it: so you have this model where, you know, the people in the audience are unconscious and the people behind the stage are also like the unconscious. But the stage people are more like the top-down processing and the audience is more like the bottom-up processing. And then you've got the stage of working memory, and then the spotlight of attention shines on it, and that's consciousness. So the idea is that, right, to use the computer metaphor, you have all your files, they're the unconscious. You draw them onto the desktop, and when you're manipulating them, that's shining the spotlight of attention and restructuring them — remember that, restructuring them — and then sending them back to memory. That's what the function of consciousness is. Yep. This is really cool because, obviously, now, the originator of this theory and its current defender, although there are many people who defend it, it's a very prominent theory, is Bernard Baars. It's called the global workspace theory. And he explicitly argues, and I demonstrated that in the model, that there's this terrific overlap between working memory, attention and consciousness. Yep. And, okay, so I wanna pause right here right now. Why is that part of this argument? Well, what are measures of working memory really highly correlated with? Measures of general intelligence. Yeah, exactly. And what does working memory do? We used to think it was just a holding space, and that's a bit of what's conveyed by the stage model. But the work of my colleague Lynn Hasher at the University of Toronto, really brilliant work, has basically said, no, no, that's too simplistic. What working memory is, is a higher-order filter for relevance. And you know how you know that that's what working memory does?
Because of the phenomenon of chunking. Remember, we've talked about this: if I give you a string of letters and there's no pattern in it and there's like 12 of them, and I ask you to remember that string of letters, you'll remember four or five. But if I turn that string of 12 letters into four words, like pig and cat and dog and sit, then you can do it, because they're chunked, right? And so chunking shows that what working memory is doing is some kind of higher-order relevance filtering. So notice what we're getting. This theory is deeply connecting consciousness and intelligence together. And it's doing it in terms of a function strongly associated with working memory, which is relevance realization. And that is explicitly what Baars says the global workspace is set up to do. Right, are you still there? Are you back? I did lose you there. That time my screen was fine. So, are we okay? Hello, yeah. It looks like we're stable. How far did you hear? Good question. Relevance, working memory, chunking — yeah, it was basically at the point of chunking. Right, so basically at the point of chunking. At the point of chunking. Right. So the point I'm making is, notice what we've got in this theory. We've got a deep interconnection between intelligence and consciousness via working memory. And that's in terms of recursive relevance realization going on at the level of working memory, its capacity to do chunking and restructuring. Right, right. And then Baars, and also Shanahan and Baars, have explicitly argued that that is the function of the global workspace. They specifically argue that the function of the global workspace is to solve the frame problem, which Shanahan has argued, he specifically says this, is basically the relevance problem. It is the deep problem of relevance. And they argue that the function of the global workspace is in fact to do relevance realization. Yep, yep. I don't totally agree with their solution, but we don't need that right now. I just happen to have this on my desk. So this is Dehaene and the global neuronal workspace. Okay, so just for people, that linkage immediately then gets into the brain. So what he does there, some of the beautiful stuff that he does there, is he talks about the consciousness ignition switch, okay? So he does lots of subliminal processing studies. If you give something about 200 milliseconds, you start to get networking, but you do not get, call it maybe relevance realization networks, that thing, right? But at 300 milliseconds, when you get enough top-down, you get an ignition switch between the parietal and frontal lobes. And that cascade very much corresponds to conscious access, where you can then say, oh, this is what I now have on my screen. This got onto the stage. And so now you get that kind of linkage. Excellent, excellent. I'm gonna pause here because you're frozen again, Gregg. Yeah, the connection seems to be weak. I don't think it's on my end. My signal looks really good. Yeah, maybe we're just getting interference somewhere, but we seem to be back. So the thing you just did, I hope we got it all. You brought in Dehaene and you brought in the 200 milliseconds, 300 milliseconds, you get the consciousness ignition when you get the prefrontal and the parietal, the frontal and the parietal, linking up. I wanna also, again, go the other way. One of the most prominent theories of general intelligence is the P-FIT theory.
The frontal-parietal connection is what makes us intelligent. That's what g is. So again, the g machinery and the consciousness machinery go like this, like this. Both at the level of the theory, relevance realization, and at the level of the anatomy. All right. Okay, so what we're doing is, I wanna just draw this out again. We're showing all the way through that as we're building intelligence, the machinery of relevance realization and the machinery of perspectival and participatory knowing are coming along. We're getting the machinery of consciousness, the functionality, and also some of the phenomenology is emerging, because we've spent a lot of time showing how you can get a lot of the phenomenology, the adverbial quality of the perspectival knowing, the participatory knowing — you can get all of that out of this kind of relevance realization machinery. Yeah, we seem to be getting some intermittent interference here. There's definitely, right, we're having trouble mutually modeling to maintain the connection. No, I think what it is is we're on the verge of solving this problem. Right. The Illuminati don't want us to know. Go figure out consciousness. Oh, God, they're right on the cusp. Somebody interrupt the signal, quick. Okay, so what I'll do, because I have a pretty hard out in about five minutes, is one more theory and then we'll stop there. We're sort of halfway through this argument. So the next theory, a prominent theory of consciousness, because it follows directly on this, is work not by Baars, but by Bor and Seth. And they basically argue — they do this thing about what consciousness is for, and when we need processes to be conscious and when they can move into the unconscious. And so they talk about the process of how you make your behavior automatic. What they mean by automatic is things you can do without having any conscious awareness of them, or with very minimal subsidiary awareness, like when you're typing, right? Or it can get very bad, like highway hypnotism, where you've been driving a long stretch and you realize you haven't been paying attention or been aware of the road at all. You've been off mind wandering. Who would have thought it? But your zombie was doing a really good job keeping you alive. Right, it's kept me alive so far, John, so that's good. And so they talk about, well, what the evidence seems to converge on is when we need consciousness. So the opposite: we don't need consciousness when we can really proceduralize and make our interaction automatic. So what's the opposite of that? Well, we need consciousness where there's high complexity, novelty, ill-definedness of the situation. Exactly the situations where you need a lot of what? Well, relevance realization. And what do they say consciousness does? Again, the fact that these are convergent: they say the job of consciousness is restructuring. It is doing that higher-order recursive relevance realization, because what that does is it enhances your ability to sift through the data, find the patterns, do the deep learning. So again, what are they saying consciousness functions to do? It's higher-order recursive relevance realization that allows us to do sophisticated problem solving, makes us more intelligent. And it's what areas of the brain? Oh, look, it's mostly frontal, but it's blah, blah, blah. All this, like, convergence, convergence, convergence. So just to put this in real-world stuff: so in my kitchen, we had the silverware, I don't think I've told this story, in one drawer.
Then we got a new dishwasher, and it had a handle on it so we couldn't open that damn drawer anymore. So we had to move the silverware. It took me like six months, because it was so procedurally, habitually downloaded — you go over to the silverware drawer. I knew, my system knew, how to do that. But then every time I'd open the drawer, I'd be like, when are you gonna learn? And that's the surprise of seeing the drawer empty, and then I had to re-wire it. And now, three years later, I never go to that drawer anymore. So that gives you the frame for how that works. That's excellent. And notice how Bor and Seth feed back into Cleeremans and the whole developmental thing. You had to relearn, consciously, where to find it. And so you can see Bor and Seth also being very, very, well, consistent and coherent with Cleeremans and the developmental story. And then that's also consistent, as we said, with this recursive relevance realization: massively hierarchical, dynamically self-organizing, predictive processing, the anticipatory relationship. And notice we are getting so, so much of the function of consciousness, at least, converging on the idea of consciousness as relevance realization within predictive processing. And notice how we aren't trying to make consciousness. All we're trying to do is make artificial general intelligence, but we keep building in all the machinery, the functionality, of consciousness, and we keep tapping into the same areas of the brain that are deeply associated with intelligence and behavioral flexibility. And I have the intelligence to learn where our silverware drawer is after three months. So I am conscious at some level, John. That's good news. I'm sure you are. So, what I wanna do next — we covered off some of the main theories. Next time I wanna take a look at the last remaining theory and draw it all together, which is Tononi's integrated information theory. I won't go into great detail. I wanna mention a few ideas from the information closure theory, but I think they gel very well also with the model we're building here. And then we will have done the canvassing. And what the overall arc of this argument has been, and you've been helpful, is how we can draw all these things together in such a coherent fashion. And it's a completely naturalistic argument, because we're doing it from the design stance: every step along the way, this is what we would add to the machine, this is what we would add to the machine, this is what we would add to the machine. I wanna now just take a moment and say one sort of final thing that helps also to bridge a bit between the function and the phenomenology. Because I would argue that salience is relevance to the global workspace. You have all this relevance realization going on, but when information is relevant to the global workspace, such that it gets the working memory processing, it calls for restructuring, or it potentially demands restructuring within the global workspace — I think that's when we're now talking about salience. And then once we have that, we now have the possibility of things like demonstrative indexicality, salience tagging, all that stuff now coming online. That's a lot of pieces, it's coming together. That's a lot of pieces coming together. So we've got a bit more to go. Then we will have some time to draw it all together. Then you're gonna take over, and you're gonna make an important argument again. What did you say, 30,000 feet?
You're gonna go up to 30,000 feet and draw it all together again. And then we'll probably have one or two conversations around the relationship between this problem and the hard problems of meaning, the meaning crisis, and then the relationship between that problem and what you call the problem of psychology. We can come back to all that and see what progress we've made on it. So that's our work for today. That's our work for today. And I apologize to anybody watching for any of the little pauses in the signal, but technology is the god that limps. And of course, our high god right now is the internet, and high gods always disappoint you when you most need them. So there you go. Right, but we're still sacrificing. We still need you. All righty. So as always, thank you, my good friend. Again, you did it. You just do these little things, and it was like a light of fire and connection just sparked for me. It was just fantastic. Thank you very, very much. Amen, I really enjoyed it. Thank you.