https://youtubetranscript.com/?v=bbuNwss-6Cg
Welcome to Untangling the World Knot of Consciousness, wrestling with the hard problems of mind and meaning in the modern scientific age. My name is John Vervaeke. I'm a cognitive psychologist and cognitive scientist at the University of Toronto in Canada. Throughout the entire series, I will be joined in dialogue by my good friend and colleague, Greg Henriques, from James Madison University in the United States. Throughout, we are going to wrestle with the hard problem of how we can give an account of phenomenal, what-it-is-like consciousness within the scientific worldview, and how we can wrestle with that problem in conjunction with what Greg calls the problem of psychology, which is pervasive throughout the discipline: that psychology has no unified descriptive metaphysics by which it talks about mind and/or behavior. Throughout this, we will be talking about some of the most important philosophical, cognitive scientific, and neuroscientific accounts of consciousness. So I hope you'll join us throughout. It's great to be back here, John. We're getting to the tail end of this thing. I'm excited about it. So I thought what I'd do is, Greg, I'd review a bit of what we did last time. In fact, I'm gonna go a little bit further and give a sense of the overall overarching argument. And then what I wanna do, once that's done, is go into, for us at least, the last of the big theories, the current neuroscientific theories out there, Tononi's integrated information theory, discuss the relationship between that and the other big theory, the global workspace theory, and then also make the argument, like we made for global workspace theory, and the Bor and Seth theory, and the radical plasticity theory, and the criterion-setting theory of Lau, et cetera, that what we've got is a theory that's basically running off a hierarchical, recursive, dynamical relevance realization. So that's what I wanna do. And then we're gonna turn things over to you.
You'll take the lead for a while, which I'm looking forward to. Absolutely, yes, I'm looking forward to it. I'll take the hot potato, and then I'll come at it from my problem-of-psychology view. And I hope that folks can see just what a catalytic reaction this feels like to me, in such a synergistic way. Thank you for saying that, Greg, and I agree. And then Greg and I are planning to do at least one video with a more open dialogue about the connection between the hard problem of mind, or consciousness, and the hard problem of meaning. Greg and I have, not different takes, but converging lines on the meaning crisis, and Greg has explicated quite a few of those along the way. We wanna draw that all together into a discussion. And then we also would like to have a final video where we'll look through some of the comments and questions that have been made in the previous videos and address them, so that we do at least as much as we can for due diligence there. I can't promise to get to everybody's question or comment, but we will do the best we can. All right, so we started this. I won't go all the way back to Descartes and all that, but, well, this part does reach back to Descartes. We started with this whole idea, this particular line of argument that sparks off his idea of aspectualization, and the sense that what consciousness does is see through the co-variations. And that's the aspectualization process. And then we talked about how that gets us into perspectival knowing. And perspectival knowing, which is a function of relevance realization, gives us the adverbial qualia, the hereness, the nowness, the togetherness. And it does that in terms of aspectuality, centrality, and temporality. And so we get the perspectival knowing. And then we talked about integrating the perspectival knowing and the participatory knowing.
And we did all this stuff with the robots on Mars. The reason why we're doing this is that consciousness has both this perspectival and this participatory knowing aspect to it. And we developed, again out of 4E cognitive science, what this participatory knowing is. It's the knowing through mutual shaping, being coupled, conforming, the generation of affordances. So the participatory knowing generates the affordances, and then the perspectival knowing selects them as salient within the salience landscaping. You get the sizing up. And then we talked about the mutual modeling, the coordination that's needed, and that what we have is a mutual modeling between perspectival and participatory knowing. And that helps to explain that you have these two aspects, sorry for that term, to consciousness. You have the what-it's-like-to-be, which is the perspectival, but you also have the fact that you know that you're conscious by being conscious. The distinction between knowing that you're conscious and being conscious breaks down. And you see that, of course, in things like the pure consciousness event, when you're into this very pure participatory knowing and a very rarefied form of perspectival knowing. But the point is, the perspectival knowing with its adverbial qualia doesn't go away even in the pure consciousness event. Brilliant. So we have that machinery. Go ahead, Greg. I'll just make a comment. So we're pulling the nervous system activity up into an adjectival that then has an adverbial perspective that aspectualizes and then places it in dynamic relation to participatory knowing. For those that are following, I just wanna say, yeah, that's a really nice puzzle piece of description. And I'll be emphasizing that later to get the puzzle pieces of description.
And then once we have the puzzle pieces of description in relation, we'll watch the dynamic interface between the domains and have the whole thing come alive. Right, right. I agree with what Greg said, although with the caveat, which Greg has agreed to, that we seem to have argument and evidence from the pure consciousness event that adjectival qualia are not necessary. Right. In consciousness. Yes, exactly. So last time, what Greg and I did is we tried to build on something that we've constantly been returning to throughout, which is two intuitions that we've inherited, one from Descartes and one that, I don't know, I think has just come to us, maybe in terms of the functioning of our cognition. The one we inherited from Descartes is the idea that we should give an interconnected, integrated answer about the function of consciousness and the nature of consciousness. And that's exactly what this relevance realization model is doing, because it simultaneously is giving us functionality and phenomenology at the same time. The second intuition, which I think Descartes had, but which he also to some degree challenged, is a pervasive intuition. And you see it, if you pay careful attention to how people attribute consciousness: that there's a deep interconnection between intelligence, at least fluid intelligence, and consciousness. And that's buttressed by the fact that the areas of the brain that seem to be active in fluid intelligence overlap with the areas that are active in attention and working memory, and with what seems to be at least strongly associated with the experience of consciousness. So all of that we dealt with. So we built on that, and we took the assumption that AGI is a succeeding project, that we are making progress. I just read about some pretty astonishing stuff yesterday that's happening in AGI work.
And so we take very seriously the probability, I would now say, that we are going to get a completely naturalistic account of intelligence. And part of what I'm arguing along the way, and it's getting more and more convergent with other people's work, is that what we're getting are machines that are progressively getting better at implementing dynamically recursive relevance realization. And so what we did was we said, let's reverse engineer intelligence. The idea is we take the design stance. What would we need to build in progressively to make a system intelligent? And then from that, what we'll see is, do we get the functionality and the phenomenology of consciousness emerging along the way? And that's exactly the argument I tried to advance. So I started with the idea that at the core of AGI is, as we've said, this massively recursive relevance realization capacity, and that that is grounded in an adaptive, autonomous, autopoietic thing. All relevance realization is ultimately self-relevant, which is not the same thing as being relevant to a conscious self; rather, relevance is relevance to an adaptive, autopoietic system. Right? It's embedded in an investment value system that's regulating the flow of animal-environment relations. Yep. So ultimately you're talking about a way of understanding agency in terms of adaptive, autonomous autopoiesis, and relevance realization as grounded in that recursive relevance realization. And then I made the argument, and this goes to work I'm doing with Brett Anderson and Mark Miller, that there are two theories which would sort of need each other, and that you can see them being very integratable: we could integrate relevance realization theory with predictive processing.
Then we talked about the idea that the system's trying to reduce surprise, to anticipate. It's trying to anticipate its world, it's trying to predict its world and prepare for it. And this predictive processing machinery teaches us two points that converge on something. It teaches us that the system is modeling itself in order to model the world, handling that difficult problem. So the trick the brain does is that higher levels of the brain, higher in the hierarchy, try to predict what lower levels are going to do, and it turns out that if they get good at that, they actually get good at the indirect, but that doesn't mean inaccurate, indirect prediction of the world. And that converges with the fact that the system also has to model itself to discount how the actions that can be taken to prepare can distort the information coming in from the world. So the system, for these two reasons, is modeling itself very comprehensively in order to be intelligent. And that's converging with, again, emerging work in artificial intelligence: if you want to get a robot to learn, you let the robot spend a lot of time, and it looks like a baby flailing, it's really kind of creepy, you let the robot just play until it develops a very good model of itself, and then its capacity to learn and interact goes up dramatically. I pointed out that as you're moving up the hierarchy in this predictive processing, you're doing something like data compression, this massive integration, and as you flow down the hierarchy, you're doing all the variation, and you're getting that process that's central, the constant evolution of the sensorimotor optimal grip on the world. So the predictive processing that's anticipating is also doing relevance realization, deep learning.
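The core move described here, a higher level anticipating a lower-level signal and updating itself to shrink prediction error, can be illustrated with a toy sketch. This is my minimal illustration, not Vervaeke's model or any published predictive-processing implementation; the learning rate and signal are illustrative assumptions.

```python
# Toy two-level predictive loop: the "higher level" keeps an estimate of
# the "lower level's" signal and nudges that estimate toward whatever it
# observes, so its surprise (prediction error) shrinks over time.
# All names and parameters here are illustrative assumptions.

def run_predictive_loop(signal, learning_rate=0.2):
    """Track a lower-level signal with a simple error-driven update."""
    estimate = 0.0
    errors = []
    for observation in signal:
        prediction_error = observation - estimate      # surprise at this step
        estimate += learning_rate * prediction_error   # reduce future surprise
        errors.append(abs(prediction_error))
    return estimate, errors

# A steady "world" signal: the higher level converges toward it,
# and the prediction errors shrink as it gets better at anticipating.
estimate, errors = run_predictive_loop([1.0] * 50)
print(abs(estimate - 1.0) < 1e-3)  # True: estimate has converged on the signal
print(errors[0] > errors[-1])      # True: surprise decreases over time
```

The design choice worth noting is that the higher level never sees "the world" directly; it only ever sees the lower-level signal, which is the point being made about indirect but not inaccurate prediction.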
These are all very deeply integratable together, and so what you're getting is this relevance realization at multiple scales, multiple levels of anticipation, being massively mutually modeled and coordinated together. As the system is modeling itself, it's modeling the world, and in modeling the world, it has to model itself, and so we're already getting a lot of the functionality and the phenomenology of consciousness here. So, R cubed, recursive relevance realization, scaling. Very relevant, really. So we went into that, and we talked about what that gives us: we talked about the enactive prediction and the enactive preparation. So what is the system doing? And we brought back Descartes, right? The system is coupled to the world and it's getting co-variations, but then it's also preparing for those. And the idea is we're getting what Descartes sort of said: we're getting a very sophisticated realizing, through the co-variations, of what is being modeled in the world, the ultimate causes, or at least models of the causes, and model doesn't necessarily mean a picture here, models of the causes of what's happening in our sensorimotor patterns as we are also participating in those sensorimotor patterns. Right. So that gave us proto-aspectualization with proto-representation within deep learning. So we're getting a lot already. Then we went to the idea from Lau, we might have done this first, I'm not quite sure about the order, but it doesn't really matter for the argument, that the system has to be very good at signal detection. And then we talked about the signal detection problem: that relevant information, and notice the word, information relevant to the agent, is always mixed up with irrelevant information. And this is something to remember: in the technical sense of the word, they are both information, right? But there's relevant information and there's irrelevant information. Signal is relevant information.
And then we talked about the fact that these are always confusable together, and that what we have to do, and this is Lau's point, is set the criterion. And where we set it depends on the relevance to us of the different kinds of errors we can make, and that is always changing, in flux. And Lau's idea, and it's a higher-order thought idea, although it's more like a higher-order deciding theory, is that consciousness is different from blindsight. Blindsight is just picking up the co-variations. Consciousness is when you set the criterion; when you're setting the criterion, that snaps it into consciousness. Although I don't think that's a sufficient account, I think it is a necessary account. And notice that we're doing that when we're just trying to make a system intelligent, capable of good signal detection. Right. And I'm just gonna say, then, that by setting the criterion you get a tipping point, you bring the top down to the bottom up, and you know I love my duck-rabbit. Yeah, I see your duck-rabbit. That's right, that's good. And so what you and I discussed is that the setting of the criterion is of course contextually sensitive; it's gotta be dynamic, it's gotta be constantly varying. So it has to be implementing relevance realization. We then went into the idea of this recursive layering, and that brought us to Cleeremans' idea, one of the few developmental theories of consciousness, the radical plasticity hypothesis. And his idea goes back to Rosenthal: both Lau and Cleeremans are making use of Rosenthal's higher-order thought idea. And what Cleeremans is arguing is that what makes us conscious is when the higher levels represent the lower levels. We noted the problems that had, remember the intimacy condition, and the intimacy condition is satisfied by Cleeremans when the higher levels care about, are affectively related to, the lower levels.
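The criterion-setting point can be made concrete with a standard signal detection sketch. This is a toy illustration of classic signal detection theory, not Lau's actual model; the distribution means and criterion values are illustrative assumptions. The same detector, with the same sensitivity, makes different trade-offs between misses and false alarms purely by where it sets the criterion, which is why the setting has to track which errors currently matter to the agent.

```python
import random

# Toy signal detection: noise-only trials and signal-plus-noise trials
# overlap, so errors are unavoidable; the criterion decides WHICH errors
# we make. Distribution parameters are illustrative assumptions.

def detection_rates(criterion, n_trials=20000, seed=0):
    """Hit rate and false-alarm rate for a fixed decision criterion."""
    rng = random.Random(seed)
    hits = false_alarms = 0
    for _ in range(n_trials):
        noise = rng.gauss(0.0, 1.0)    # evidence on a noise-only trial
        signal = rng.gauss(1.5, 1.0)   # evidence on a signal-plus-noise trial
        false_alarms += noise > criterion
        hits += signal > criterion
    return hits / n_trials, false_alarms / n_trials

# Liberal criterion: more hits, but more false alarms.
# Conservative criterion: fewer false alarms, but more misses.
lib_hit, lib_fa = detection_rates(criterion=0.0)
con_hit, con_fa = detection_rates(criterion=1.5)
print(lib_hit > con_hit)  # True
print(lib_fa > con_fa)    # True
```

Nothing about the world changed between the two runs, only the criterion, which is the sense in which criterion-setting is a decision the system makes about itself rather than a fact it reads off the input.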
This has to do with the fact that relevance realization is never cold calculation. It's always, as you say, behavioral investment; it involves arousal, it involves commitment of precious and limited time and cognitive processing resources. I like the way you put it. It's always an investment. And as with all investments, it's a way of caring. Right. And it's a way of being at risk. And then the point is, again, we care about information because we have to take care of ourselves. Caring is ultimately an irremovable component of relevance realization. And so what Cleeremans is basically arguing is that the higher levels are not just meta-representing but are doing relevance realization. And so what we could say is the higher levels are modeling the lower levels, predictive processing, but they're also implementing relevance realization, the caring, and then we get the radical plasticity hypothesis. We actually learn to be conscious by doing this internal modeling and this internal caring. And so I'll just make the note here: we're assimilating and integrating and creating convergent validity for coherence that gives you a holistic descriptive explanation process. So the pieces of the puzzle cohere, and then you actually get the systemic alignment of the model with the phenomenon. Excellent. Well said, Greg. Thank you for reminding everybody of that. Then we went into Baars's global workspace theory, and we explained what that is. The idea is, and you can see it emerging out of this higher-order modeling of lower orders, you have something analogous to your desktop. There's a space, and all of your unconscious processes are like your computer files, and then you can bring anything from any of the files onto your desktop, and then they're active, and you can manipulate them and transform them, and globally broadcast them back to the degree to which you want.
And then we noted that that was really interesting, because Baars says there are important overlaps between the global workspace model of consciousness and working memory. Something like the global workspace is the part of working memory that's lit up by the spotlight of attention. I don't totally like the spotlight of attention, but what he's saying basically is that when relevant information becomes salient information, then we have consciousness. I propose to you that what he's basically saying, and I think this is an insightful idea, is that salience is when information is relevant to active working memory, and that's what it is for something to be salient as opposed to just relevant. Absolutely. I'll drop a little point there, because I think that once we get into active working memory, we'll get into episodic storage of memory. Yeah. Okay. And then that's gonna connect to what we might call a core self. Yeah, yeah, yeah. So I'll leave that as a little teaser for a conversation. Yeah, we can talk about that. So, that's a good teaser. The point that I wanted to make was that you've got this deep overlap between consciousness, attention, and working memory. But having this overlap between consciousness and working memory is really important, because your capacity to use working memory, which is deeply influenced by attention, by the way, is one of the best predictors: it is the measure that is most highly correlated, sometimes approaching identity, with measures of general intelligence. So basically, what we're measuring when we measure your general intelligence is your capacity of working memory to basically find things salient. That's why measures of the activity of the salience network are also highly correlated with measures of intelligence. That's particularly true of fluid intelligence. Yes, yeah. Sort of your online problem solving in novel environments.
Yeah, well, we could maybe talk about this another time, Greg. I'm very critical of the theoretical construct of crystallized intelligence, because I think it's indistinguishable from the notion of knowledge. Wow. Ah, right. And if you make crystallized intelligence a form of intelligence and it's equal to knowledge, then you lose the capacity to explain the acquisition of knowledge in terms of intelligence. All right. What you're saying is, you know how I get knowledge? I have knowledge. And that's not very helpful at all. We could do that another time. Yeah, we can. I can feel my knowledge and fluid intelligence wheels turning here, but no, that's an interesting point. But Greg's point is the point that we need to take right now, which is that we're talking about fluid intelligence. In fact, what we're seeing is that there's a deep overlap between consciousness, fluid intelligence, working memory, and attention. Now, here's the interesting thing. Working memory, we used to think of it just as a holding space, but given the more recent work of my colleague at U of T, Lynn Hasher, we now know that, well, sorry, that's too strong. We now have good evidence and good theoretical reason to believe that what working memory is, is a higher-order relevance realizer. The fact that working memory finds things salient, out of the global workspace theory, also has to do with one of the primary phenomena of working memory, which is that you can move more information through it if you engage in chunking. And chunking is a process of higher-order relevance realization: finding how pieces of information fit together so you can get an optimal grip on them for your processing. That's exactly what working memory is doing. So we're seeing that the global workspace theory is gonna overlap with intelligence and relevance realization.
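The chunking point can be illustrated with a small sketch. This is my own toy illustration, not a model from the working memory literature: the same ten digits exceed a rough seven-item span when held as raw items, but pass through easily once grouped into a few familiar chunks, each of which occupies one slot. The span value and the chunk inventory are illustrative assumptions.

```python
# Toy illustration of chunking in working memory: grouping items into
# already-known patterns reduces the number of slots they occupy.
# The 7-item "span" and the chunk inventory are illustrative assumptions.

SPAN = 7  # classic rough estimate of working-memory capacity

def chunk(items, known_chunks):
    """Greedily compress a sequence using already-known chunks."""
    result, i = [], 0
    while i < len(items):
        for c in sorted(known_chunks, key=len, reverse=True):
            if items[i:i + len(c)] == list(c):
                result.append(c)        # one familiar chunk = one slot
                i += len(c)
                break
        else:
            result.append(items[i])     # unchunked item = one slot each
            i += 1
    return result

digits = list("4161234202")  # 10 raw digits: over the toy span
chunked = chunk(digits, [tuple("416"), tuple("1234"), tuple("202")])
print(len(digits) > SPAN)        # True: raw digits exceed the span
print(len(chunked) <= SPAN)      # True: 3 chunks fit easily
```

The relevance-realization gloss is in what the chunk inventory represents: the compression only works because the system already cares about, and has learned, which groupings of items belong together.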
And that's fine, because Shanahan and Baars, in separate publications, have argued that the function of the global workspace is to solve what is called the frame problem. And the frame problem, as Shanahan now argues, is basically the deep problem of relevance, the problem of relevance realization. So deep convergence there. Also convergence with the next theory we looked at. What Shanahan and Baars say is that the main function of consciousness is to solve the frame problem. There was a technical aspect of the frame problem, but the deep problem that Shanahan argues remains is what he calls explicitly the relevance problem, the problem of sorting relevant from irrelevant information. He calls that a deep problem. And so the fact that I'm arguing that consciousness is doing relevance realization, drawing on Lynn Hasher's work on working memory, lines up with their explicit proposal that the function of consciousness is relevance realization. And then the point about working memory doing chunking, and that being a higher-order relevance realization, links up with the next theory we took a look at, the theory of Bor and Seth. And they make a very good argument, which is: look at the difference between when processing moves from being conscious to being automatic, mostly non-conscious, and when we need consciousness. I'm moving to this idea of chunking as a central thing we see working memory doing. And this is the higher-order relevance realization that leads us into the theory of Bor and Seth. Bor and Seth make the very powerful argument that you should look for a theory of consciousness that explains when processing moves from being conscious to being automatic and non-conscious, like when you're in highway hypnosis, or you're not really paying attention to what you're doing when you're moving your fingers.
When you're typing, that's called automaticity. Or when things have to come out of automaticity. So the question is, when do we need consciousness, and when don't we need consciousness? And the argument is, we need consciousness for situations that are ill-defined, as I would say: they're novel, they're complex, there's a lot of dynamic change happening in the situation. That's when we need consciousness. When the situation is very well-defined and very routine, like typing, or driving on a highway when there's nobody around, we don't need consciousness. So they're taking a look at the situations that need consciousness and the situations that don't. And we need consciousness basically in situations in which insight might be needed: they're ill-defined, they're complex, they're dynamically changing, they have a high degree of novelty. When situations are very well-defined, routine, not so dynamically changing, we don't need consciousness, and we can act automatically without almost any conscious awareness or the involvement of working memory. So what they then asked was, okay, well, what do we see happening there? What's going on? And what they argued is that the frontal cortex seems to be needed when we are in situations where we can't rely on automatic behavior. And in fact, that's what the frontal cortex seems to have evolved to do. It gives us flexibility, it gives us online intervention, it allows us to deal with novelty, it allows us to deal with complexity, et cetera. And then the idea is, what do we see this cortex largely doing? Well, what we see it largely doing is chunking, right? We see it finding implicit patterns in information and making them explicit, doing something basically like higher-order relevance realization, the very thing that helps information pass through working memory.
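The routing question above, when does a situation get handled automatically and when does it recruit slow, conscious-style processing, can be caricatured in a few lines. This is purely my illustrative sketch, not Bor and Seth's model; the feature names, scores, and threshold are all assumptions, chosen only to mirror the four properties the argument emphasizes.

```python
# Toy sketch of the Bor-and-Seth routing point: routine, well-defined
# situations run on autopilot; novel, complex, fast-changing, ill-defined
# ones get routed to deliberate processing. The features, scores (0..1),
# and threshold are illustrative assumptions, not their model.

def needs_deliberation(situation, threshold=2.0):
    """Score a situation on the features the argument emphasizes."""
    score = (situation["novelty"] + situation["complexity"]
             + situation["rate_of_change"] + situation["ill_defined"])
    return score > threshold

highway_driving = {"novelty": 0.1, "complexity": 0.3,
                   "rate_of_change": 0.2, "ill_defined": 0.1}
merging_in_storm = {"novelty": 0.8, "complexity": 0.9,
                    "rate_of_change": 0.9, "ill_defined": 0.7}
print(needs_deliberation(highway_driving))   # False: stay on autopilot
print(needs_deliberation(merging_in_storm))  # True: recruit the workspace
```

The caricature makes one thing visible: the routing decision itself is a relevance judgment, which is the convergence being argued for.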
Then they have basically a theory of consciousness that has to do with the interaction between the prefrontal cortex and other areas, the parietal areas, which exactly overlaps with one of the prominent theories of intelligence, the P-FIT theory, on which general intelligence is about how the prefrontal cortex basically constantly accesses and reconfigures the processing, particularly in parietal and sometimes occipital areas of the brain. And so again, high overlap between relevance realization, the function of consciousness, and one of the powerful theories of intelligence that's out there on the market. So convergence, convergence, convergence, convergence. So that's sort of where we got to. And I'm sorry, Greg, the review has been a little bit delayed by the glitches, because I wanna take a little bit of time and get into the final theory. But did you wanna say anything now? It looks like our signal is stabilizing too. I think our signal is stabilizing, that's good. I mean, thankfully I have a pretty good frame of reference for catching that. I think it's really helpful to just piece those elements together and see how many elements are collectively coming together to create such a rich picture. And I think if we now add the integrated information piece, that would go well. So let's do that. So, the two biggest theories on the market. Although I've tried to cover some of the most prominent ones, and especially ones that ask questions that the other theories don't ask: Bor and Seth asked when do we need consciousness and when don't we; Cleeremans asked how does it develop; Lau asked what's the difference between consciousness and really intelligent detection. But the two biggest theories are Baars's global workspace theory and Tononi's integrated information theory. And the thing is, the theories divide up the Cartesian problem, or at least the Cartesian intuition, pretty cleanly.
Baars, in fact, is explicit that he doesn't have much to say about the nature of consciousness. Global workspace theory is almost purely about the function of consciousness. And Tononi is very clear that his integrated information theory is a theory about the nature of consciousness, or what we call the generation problem: how does consciousness emerge out of non-conscious stuff? He only indirectly ever talks about the function of consciousness, but when he does, it's actually kind of important. So what I think we need is a theory, here's the surprise we weren't expecting, a theory that integrates the global workspace and the integrated information theory together, which of course is what is on offer right now. So what is the integrated information theory? Like I said, it's a theory about the nature of consciousness, the generation of consciousness. It's a very mathematically rich theory. I won't go into the math; I'll try and explain it as intuitively as possible. And it's the idea that consciousness emerges when you have a high degree of coupled information, or mutual information, but dynamically mutual. So let's try and build this up. Let's compare your TV screen that has a scene presented on it with what's going on in your retina, which actually gives rise to visual experience, your retina plus your brain, your visual system. And so Tononi says, well, if you look at the TV screen, although there's a lot of information there, there's not a lot of mutual information. What does that mean? The way to think of mutual information, although we've talked already about it with mutual modeling, ha ha ha, right? The way to think about it is in terms of relations of prediction. Here's an even better example. Think about it like prediction within an experimental manipulation. In an experiment, we have the independent variable, and this is Mill's method of difference.
We wanna do something more than see if they're just correlated. And think again about going from co-variations to actually modeling the world, right? So what do I do in an experiment? Well, here's my independent variable. I manipulate it. I remove it and see if my dependent variable changes. If I remove this, does it go away or at least shrink? If my dependent variable isn't there and I bring this in, does it then appear? And if I vary this, do they vary together? So you're doing that. And if you get that kind of really tight predictive relationship, we think there's causation rather than just correlation. Is that okay so far? Absolutely. So the mutual interdependencies are high. And by varying one, we can vary the other and demonstrate those mutual dependencies, especially in the experimental method by active causal manipulation, and that suggests the right sorts of mechanism possibilities. So now think of that going both ways, which Greg just alluded to: instead of just an independent variable, the variables are such that they have this relationship in both directions. Okay, now if I take a look at the pixels on your TV, they don't have that relationship to each other. They don't have that causal relationship to each other where changing one will change another or vice versa. They are not causally related to each other, so they don't have that mutual information. So although there's a lot of information, the information is not integrated together. But if you take a look at what's going on in the retina of the eye, because of all of the patterns of causal connections between the neurons, excitation and inhibition and layering, the opposite is the case in the retina.
What's happening in one neuron is highly mutually informative of what's happening in the other neurons, because they're actually affecting each other and modifying each other's behavior. So there's a high degree of integrated information in the retina, but there's very little integrated information on the TV screen. Is that okay so far? 100%. And so Tononi basically argues, and he has a more-or-less model, and you can think of Cleeremans here, that the more integrated information you have, the more likely you are to have consciousness, right? Now, what's interesting right away, before we get into it, is that that overlaps with how we talk about general intelligence, because what Spearman found was in fact that you have this mutual information; it's called a positive manifold. You have a strong positive manifold of mutual information between various tasks. In fact, that was Spearman's big discovery. He was discovering how kids were doing in one subject was predicting how they were doing in another. And in fact, he found that how they were doing in all the subjects was mutually predictive. Same thing we do when we're testing for intelligence: we give people a whole bunch of tasks, and they're all mutually predictive of each other. They're mutually informative. Intelligence is also this massive information integration process. So you see the convergence happening there. Okay, so back to Tononi. The idea is that when a system gets very powerfully integrated in this fashion, it becomes conscious. He actually has a way of mathematically measuring that. So he claims to be able to measure whether or not a system is conscious. Phi. Yeah, the phi measurement. I won't go into the details about that. He then argues that he can explain the qualia. This is where most people are not quite as happy with him.
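The TV-screen versus retina contrast can be put in numbers with a toy sketch. Two caveats: this is my illustration, not Tononi's, and plain pairwise mutual information is only a rough stand-in for IIT's phi, which is defined over partitions of the whole system and is far more involved. Independent "pixels" carry essentially no mutual information about each other, while coupled "neurons", where one unit's state shapes the other's, carry a lot. The coupling strength and sequence lengths are illustrative assumptions.

```python
import math
import random
from collections import Counter

# Toy contrast between "TV pixels" (independent units) and "coupled
# neurons" (one unit's state shapes the other's). Pairwise mutual
# information is only a crude stand-in for IIT's phi.

def mutual_information(xs, ys):
    """Empirical mutual information (in bits) between two sequences."""
    n = len(xs)
    pxy = Counter(zip(xs, ys))
    px, py = Counter(xs), Counter(ys)
    mi = 0.0
    for (x, y), count in pxy.items():
        p_joint = count / n
        mi += p_joint * math.log2(p_joint / ((px[x] / n) * (py[y] / n)))
    return mi

rng = random.Random(0)
n = 10000
# "TV pixels": two independent random binary units.
a = [rng.randint(0, 1) for _ in range(n)]
b = [rng.randint(0, 1) for _ in range(n)]
# "Coupled neurons": the second unit copies the first 90% of the time.
c = [x if rng.random() < 0.9 else 1 - x for x in a]

print(mutual_information(a, b) < 0.01)  # True: pixels barely inform each other
print(mutual_information(a, c) > 0.4)   # True: coupled units share information
```

The same calculation also gives a feel for Spearman's positive manifold mentioned above: scores on mutually predictive tasks behave like the coupled pair, not like the independent pixels.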
The idea is this: imagine we had all of the neurons in the retina, and I assigned the relationship between any two as an axis on a graph. You can imagine a Cartesian graph that’s multi-dimensional, maybe millions of dimensions, right? Then the overall pattern of mutual information between them is gonna be a particular idiosyncratic shape within that multi-dimensional Cartesian space. He thinks that’s exactly what qualia are, though he only talks about adjectival qualia anyways. So he thinks the theory simultaneously explains how consciousness emerges and why we have adjectival qualia within it. Yeah. So maybe you know this; I’m familiar with that, and I’ve seen it, but I haven’t dug deep into the question of, well, what if we go to computers, or any number of certain kinds of complex adaptive dynamic systems? It seems to me that you can gain lots of integrated information, going from fairly limited to very complex integrated information, but certainly not hit what I feel like we’re talking about with phenomenology. So that’s exactly it. More recently, Tononi has been arguing for a version of panpsychism, precisely, I think, as his response to the sort of counterexamples people were giving: well, the internet is a really highly integrated information organization, and it’s organized by these massively recursive layers of small-world networks, just like the brain. Is there consciousness on the internet? And I think Tononi would now say, yeah, there is. And that’s where you and I probably go, hmm. That’s taking the definition and turning it around and saying, well, of course. But that’s not the original thing, is it? Yeah. And so the problem with the panpsychist response, and we talked about this, I think we did, I can’t remember if we did.
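The qualia-as-shape idea can be caricatured in code. This is a crude sketch, not Tononi's formal construction: the 4-unit network and its coupling strengths are invented. Treat each pairwise relation as one coordinate, and the whole system's pattern of relations becomes a single point, a "shape", in a higher-dimensional space.

```python
import itertools
import math
import random

random.seed(2)
n = 50_000

# A toy 4-unit network: unit 0 drives units 1 and 2 with different
# (invented) coupling strengths; unit 3 is uncoupled noise.
u0 = [random.randint(0, 1) for _ in range(n)]
u1 = [x if random.random() < 0.9 else 1 - x for x in u0]  # strong coupling
u2 = [x if random.random() < 0.7 else 1 - x for x in u0]  # weaker coupling
u3 = [random.randint(0, 1) for _ in range(n)]             # independent
units = [u0, u1, u2, u3]

def mi(x, y):
    """Empirical mutual information (in bits) between two binary series."""
    total = 0.0
    for i, j in itertools.product((0, 1), repeat=2):
        pxy = sum(1 for a, b in zip(x, y) if a == i and b == j) / n
        px = sum(1 for a in x if a == i) / n
        py = sum(1 for b in y if b == j) / n
        if pxy > 0:
            total += pxy * math.log2(pxy / (px * py))
    return total

# One coordinate per pair of units: this system's relational pattern
# is a single point in a 6-dimensional "relation space".
shape = [mi(units[i], units[j]) for i, j in itertools.combinations(range(4), 2)]
print([round(v, 3) for v in shape])
```

Different coupling patterns would land at different points, which is the sense in which the shape is idiosyncratic to the system; with millions of neurons the space has millions of axes, as described above.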
Yeah, we did a little bit. Yeah, the problem with panpsychism is you just get a version of the problem you were supposed to originally solve. The original problem is how to explain the relationship between conscious processes and non-conscious processes. And then the idea is, well, it’s consciousness all the way down. But I have a consciousness that’s highly associated with intelligence, and this thing clearly doesn’t. So now you need to explain to me the relationship between the consciousnesses that are intimately interwoven with intelligence and those that aren’t. And then we’re back to what looks like exactly the same problem. My problem with panpsychism is I don’t think it solves the problem that we’re trying to solve. Yeah, I totally agree. And, if we get to it today or whatever, where I get into the descriptive metaphysics of our language system, I think that’s also in the background here. Like we use the term consciousness, it’s tied up in mind, but then it’s not, and then it moves around. With the proper conceptual grammar, I think we could gain a lot of clarity about the proper boundaries of the fields that we’re talking about here. Yeah, this comes up especially in distributed cognition, where people are willing to say, like, remember the scientists and the rovers on Mars, that that’s a distributed cognition, and only the entire system is solving the problem, but they talk about it as a zombie. They talk about the fact that it’s an intelligent system, but it doesn’t have the right kind of temporal processing to possess consciousness, et cetera, et cetera. Right, we can certainly see functional awareness and responses of systems, but that doesn’t mean they have phenomenology. Yeah, exactly. Okay, so let’s go back: what’s going on in Tononi?
Well, he calls it the integrated information theory, but that’s a little bit misleading, because the informational entities that result, the things measured by phi, he calls complexes. He thinks the thalamus is central to consciousness, not just because it’s integrated, but because it integrates across significant amounts of differentiation, because it integrates different streams together, right? Sometimes referred to as the executive secretary of the brain. Yep, right. So let me just stop there before I go on. I think that when you read him carefully, paying special attention to those two points, what he’s actually offering is an information complexification theory of consciousness: as a system becomes simultaneously highly integrated and highly differentiated, you’re gonna get consciousness. And that makes sense. It also goes with the fact that a lot of the modeling of what’s going on dynamically in the brain that seems to correlate with intelligence involves systems whose processing is highly metastable, in which there’s simultaneously a lot of integration and differentiation going on. So again, that lines up nicely. So what am I saying? If he’s doing complexification, with integration and differentiation happening in parallel in this recursive, dynamic fashion, guess what they’re implementing? Relevance realization. And then when you ask him how he actually wants to test whether a system is conscious, he says you show it pictures and see if it can detect the pictures that are inappropriate, where things aren’t readily, well, chunkable together in a way that is optimally grippable by you, that doesn’t make sense to you. So he invokes intelligence and relevance realization as how you actually test for consciousness.
So I would argue that his theory also has an implicit account of function, which is relevance realization, and an implicit account of nature, because his account of the nature of consciousness, which is basically a complexification account, is also an account that presupposes relevance realization. So what do we get? We get that his theory, too, is ultimately a theory of consciousness in terms of relevance realization, which means it can be deeply integrated in those terms with the global workspace theory. So what we’ve got, and I’ve tried to argue this, is that as we tried to design intelligence, we kept banging into the central theories of consciousness, the machinery and the phenomenology of consciousness, and we get to a point where, with that emerging framework, we can integrate some of the most prominent theories about the nature and function of consciousness together and give a comprehensive account of why we have the subjectivity of consciousness, in terms of perspectival and participatory knowing; why we have the ownership, because the brain is modeling itself as it’s modeling the world, and that mutual modeling is why it has ownership; and we can at least account for the adverbial qualia, which are the qualia that are necessary for consciousness. I admit that I can’t account for adjectival qualia, but they are neither necessary nor sufficient for consciousness. And so that’s the overall argument of how we can account for consciousness within a naturalistic framework. Now I’m gonna stop talking so much.
All right, well, I mean, the capacity to yoke together recursive relevance realization across scale, to see the pattern by which so many different approaches to intelligence and consciousness actually have this as an underlying theme, and then to demonstrate its connection with phenomenology, and to pull that together to create this scale-invariant, multilayered, modeling, recursive relevance realization picture of the functional intersection of intelligence and consciousness, is to me a brilliant assimilative, integrative way of thinking. Thank you, thank you. And from my vantage point, what I would then say is, well, can we add to this? I’m coming at this from a slightly different angle, but it fits like a lock and a key. Right, right. And so what I thought I would do for the audience is shift a little bit and say, okay, here’s a little bit of where I’m coming from, and this is why, when you start to take John’s perspective, it fills in this beautiful gap in my own frame, which, if you then flip that around, means Greg’s frame can hold John’s perspective around it. I like that way of putting it, Greg. It’s nice, it sounds like friendship. So that’s great. It’s love, brother. Agape at least. Okay, so in order to do that, one of the things I was thinking today is that I was gonna flash through some slides. Everybody knows that I’m obsessed with the problem of psychology. Right, right. And that is, I get into the field and it’s like, wait a minute, there are all these different paradigms. It claims to be a science of mind and behavior, yet we don’t really know what we mean by those terms. And the identity of what it means to be a psychologist? There are like 55 different divisions, ranging from humanistic psychology to behavioral neuroscience, to neuropsychology, to LGBTQ issues.
I mean, it’s just a beautiful but chaotic system. And to claim that that’s a science always bugged me. Right, right, as it should. Right? Because science isn’t only, oh, let’s apply this empirical method; it actually should be the case that our concepts and categories have some degree of intersubjective objectivity, so that we know what we’re talking about. So if by mind you mean something radically different than I do, how can we say that we’re working on a scientific enterprise? It’s just equivocation all the way around. All the way around. Equivocation or fragmentation. Equivocation or fragmentation, okay? So maybe psychology is just a chaotic thing. Maybe it doesn’t correspond to anything in nature. I actually believe that there is something real to be had here, okay? So one of the things that I’d like to do is just flash through some slides. Please, please. And my goal here for the audience is to descriptively frame a couple of key pieces. And when I say descriptively, a fancier word is metaphysically: getting into the descriptive metaphysics of our terms. And then next time, what I’d like to do is come back and add some meta-theoretical bracketing, both from something called behavioral investment theory, which we’ve talked some about, of course, and from the theory of justification, and then connect relevance realization to really be the thread that can yoke those together. Great, that sounds great, Greg. Yeah, perhaps I can go a little bit longer than normal, because we’ve also lost some chunks of time because of the glitches. Yeah, no doubt. We might need to cut some of those, I don’t know. Yeah, I’ll try to sort them out. Okay, all right. So let me just share this real fast. You guys see my screen okay? Yeah. Okay, so I’m gonna now basically frame what I just introduced in terms of the BM3 problem, okay?
Which is gonna set the stage so that we actually develop a shared descriptive language for behavior and mental process. Right, right. So the BM3 problem refers to, another way of saying this, the problem of psychology: specifically, that there’s no shared meta-paradigm that tells you what mind and behavior are. Right, right, right. And I’m gonna specify and say we can name it the BM3 problem because I’m gonna argue there’s the concept of behavior and three different concepts of mind that are actually operative when we disentangle the mind-behavior issue, okay? And the 30,000-foot view of the unified theory is about organizing our concepts of behavior, doing that very broadly, and then realizing that psychology is actually concerned with a subset of behavior. Once we look at that subset of behavioral patterns, we can call those mental behaviors, and then we’ll delineate the domain of mental behaviors in terms of three different kinds of mental process. Ultimately, we’ll be at a place where we’re gonna delineate mind two, which is phenomenology, consciousness, and then your theory fundamentally allows us to link the underlying processes of neurocognition into a coherent cognitive-phenomenological view of mind two. Right, right, right. So we have this BM3 problem, and I have some things on this that I’m not gonna go into, but: mind as behavior, mind as cognition or neuro information processing, mind as phenomenology, and mind as self-conscious narrative. Of course, self-conscious narrative we owe to Rene Descartes in particular. He obviously was concerned about phenomenology to some extent, but if you think about “I think therefore I am” and his appeal to reason and that kind of justification, that’s a center point of his concern. So why is this so hard? Well, we’ve talked about the hard problems of mind and meaning, okay.
I believe that the emergence of the Enlightenment, in both the way science emerged as a particular kind of knowledge system and language game, and the kind of philosophies that emerged from Hume to Kant, created an Enlightenment understanding that ultimately has a foundational gap in it, and that foundational gap is really where the problem of psychology arises. Those two elements of the foundation are a confusion, or an inability, to define mind and matter and their relation, and also the relation between social knowledge and scientific knowledge. In order to address that, the unified theory says we need a clear descriptive metaphysics, that is, a system of concepts and categories that organizes our language and then can organize our empirical findings, and it’s that alignment of concepts and categories with empirical findings that gives rise to a coherent understanding. That idea is represented in my garden, where the center of the tree is a metaphysical-empirical model. And both you and I know that scientists have long sort of eschewed metaphysics, which I think they should do at the level of what I call pure metaphysics, but we actually need a descriptive metaphysics to organize our empirical findings. And I would argue that when quantum mechanics emerged on the scene, there were a lot of metaphysical questions that got activated there. And I believe a lot of metaphysical questions are activated by how we talk about mind versus matter. And so the tree of knowledge system attempts to offer that, and we’ve talked some about that. It divides the world into an ontic reality and the scientific ontology and epistemology in relationship to that. And it works to solve the Enlightenment gap because it at least provides a big picture of the mind-matter relation to clarify all these understandings.
It also specifies science as a kind of justification system that’s modernist as opposed to pre-modern, empirical as opposed to purely metaphysical, natural as opposed to supernatural, and that invokes scientific methodologies like experimentation and quantification. Turning now to the BM3 problem: according to the frame the unified theory argues for, the essence of science, and you talk about looking through things, is to look at the world through behavioral glasses. That’s the fundamental argument. What that means is that the epistemological stance of science is to be positioned as a third-person observer that tracks object-field change. Scientists classify that object-field change, they quantify it, and then they develop models, maps, theories to explain the cause-effect sequences in that change. What this means, though, is that it sets the stage for the idea that the conceptual grammar of science is organized by the conceptual grammar of behavior, and that the language of science is committed to an exterior empirical epistemology. Why is that really important for us? Well, for two reasons. One is that one of the real challenges of a science of psychology is to get at first-person experience. Behaviorism emerges in my field for two reasons. One is the epistemological gap, which I’m talking about here: if I’m on the outside, I can’t look at the first person. I was trained in a behavioral tradition, and Watson is like, well, you can’t look and see what the other person is experiencing, so what’s going on? Basically, the language of science, and the behaviorist argument in particular, was: well, we’re committed to the exterior and a physical ontology. That was actually really problematic, because over the next 20, 30, 40 years, the emergence of cognition as an information processing ontology would of course overthrow that.
But in terms of the history of the tradition of psychological science, that’s really key. And the term behavior then gets entangled with psychology very, very profoundly. So much so that your discipline, cognitive science, essentially jettisons the field in some ways, because part of the history of psychology is that it tried to reduce mind to behavior. And the cognitive people are like, you don’t need to do that. And then a whole bunch of other things happen, like artificial intelligence, like linguistics, like philosophy of mind, and they intersected. And then there was a break with the cognitive revolution and this awkward relationship between cognitive psychology and the emergence of all these other disciplines. And ultimately you get cognitive science, which is the interdisciplinary connection across a wide variety of different domains. But for our purposes, what I wanna say, and this is maybe more relevant to me as a psychologist trained in the behavioral tradition, but I wanna emphasize it’s important, is that this issue of what we mean by behavior turns out to be part of our problem in trying to sort things out. So what is behavior? According to behaviorism, psychology becomes the science of behavior. And I’m gonna argue that this is a very, very important misconception, all right? What they conflated here was the epistemology: we had to study behavior, from an epistemological perspective, because we’re a science. And they failed to realize the obvious implication: if, to be a science, we have to look at the world through behavioral glasses, because that’s what physicists do, then it must also be that physicists look at the world as behavior. Okay? So you get a very, very confusing thing of: we’re a science because, like physicists, we look at the world as behavior, and yet we use the term behavior to differentiate what we study from what physicists study.
Yeah, right, equivocation. Yeah, so this is a massive equivocation. My point will be very clear here: it’s not so much that we study behavior in general, it’s that psychology is concerned with a particular kind of behavior. Right. Okay? And indeed, I’m gonna argue that the ontology and epistemology of science is to consider the universe as a whole as behavior. Indeed, the systematic study of structure and process, which together, I would argue, make up objects, their relations, and change, that is, behavior, is science. So science becomes the study of behavior writ large. And I’m gonna argue that we can see this more clearly, okay? If you just check the language of physicists: electrons behave when they interact with force fields, organic molecules behave, planets behave, you get this, okay? Thus, psychologists can think of behavior in general, while being interested in a particular kind of behavior. Right. Okay? And according to the tree of knowledge, if we think about this epistemologically, and this is half of Wilber’s quadrants here, he emphasizes that science adopts an exterior epistemology and identifies both individual and systemic or collective systems. So behaviors of individual entities and behaviors of systems represent the fundamental framing of natural science, okay? And behavior can then be defined as observed object-field change from the outside, from that exterior epistemology. The tree of knowledge makes this particularly clear, I think, when we elaborate it a little further. You’re familiar with the matter, life, mind, culture dimension, right?
But actually, we can go further and say, in addition, that that gives rise to the general levels of object-field relations: object and field in physics; organism and ecology in life; animal and environment in mind; and person and society in culture. Okay? But there are also these primary levels, okay? If we go into physics: the particle, into the atom, into the molecule. And ultimately, what this does is say: if there are primary levels of part-whole relations, okay, and science is about delineating the fundamental vocabulary of nature at its primary levels, we should then be able to see the organization of science in this table, meaning we should see the clusters and domains of science classified across these levels and dimensions, okay? So ultimately, what you’re gonna see here is a three by four: there are four different columns and three different levels in each one of those columns, giving rise to a twelve-floor representation of science, okay? And I’m gonna argue that if we do that, we can think about the sciences being classified in relationship to these dimensions, levels, and scale. And if we have that, we’ll have basically a taxonomy both of behaviors in nature and of the different kinds of sciences that organize them. And that’s what the tree of knowledge suggests: it’s actually a map of both the ontic reality and the scientific systems that map that ontic reality. Right, right. Okay, and then basically you can say, okay, here are the physical sciences, and they start at the level of the quantum and go all the way up through molecules, and then normal-size, rock-size objects, up to the moon, up to the sun, and ultimately to the visible universe. And then more specifically, we can say, okay, well, particle physics is at the base, and then we have atomic physics, and then we have chemistry. Right.
So that provides a way of thinking about the different scales of the material sciences, geological sciences, et cetera. Now we have basically a way of thinking about the base of the physical sciences across scale. I argue the same process can operate for biology, such that we see the basics of biology, its genetics and biochemistry, into molecular biology, and then we see physiology, okay. You know, and the interesting position of viruses in between matter and life. Right. And then we see larger systems, so we get the development of the plant sciences and mycology, the study of fungi. But then I will argue that we now have a framework of understanding, okay, in which the physical sciences blend into the biological sciences, which blend into the neurosciences, and that is consilient and organized. But it is here, at this juncture, that a lot of equivocation starts to happen. Right. We get neuroscience into the animal behavioral sciences, okay, and then all of a sudden we’re into some vague set of sciences. And I’m gonna argue that things go soft, okay. The argument is that where psychology meets animal behavioral science, and then where psychology moves into cognitive science and the social sciences, a lot more equivocation in terminology emerges. Right. So if we now shift and say, well, what about the lens afforded by the tree of knowledge? It introduces the concept of Mind, capital M Mind, as the terminology we would use when we are observing animal behavior. Mind refers then to the set of mental behaviors, okay. And so Mind corresponds to that complex adaptive plane, and it’s gonna be an integration of the mind, brain, and animal behavior sciences, okay. Another term, and I don’t know that it’s a great term, but I use it because it keeps my coherence, is the basic psychological sciences. And the basic psychological sciences would then track from neuroscience up across the complex embodied scale of animal activity, right.
And in this regard, we would see neuroscience being the base of it. Now, one of the interesting things that happens is the relationship with what’s called zoology, which clearly is a biological discipline that tracks the evolution of animals as organisms and their classification, okay. But according to this, we should also look at the subset of what animals do on top of being organisms, in other words, their mental behavioral patterns, okay. That is a subset of zoology called ethology, which is the natural science of animal behavior in the wild, so to speak. Right. Okay. But the interesting thing is that ethology is an animal behavioral science that is considered biology, yet when we study animal behavior in the lab, traditionally, that’s comparative psychology, okay. Comparative psychology is the science of mental behavioral processes across the animal kingdom. It’s not just comparing animals to humans; it’s how different animals behave and act, okay. And of course, we take animals into the lab, right. Psychologists were studying independent and dependent variables in the behavior of rats going down a T-maze. So isn’t it interesting that we have psychologists studying animals in the lab, yet ethologists, as biologists, studying them in nature? That tells you something is off: there’s no good reason that experimental control should be one discipline while a natural accounting of the same behavior in nature is a totally different discipline, okay. So I wanna argue that we should be thinking of the cognitive behavioral neurosciences as both the experimental and the natural analysis of animal mental behavior in general, right.
And actually, this is what the unified theory identifies as mind one, which is about developing a neurocognitive functional account of mental behavior, okay: an account on which we study the nervous system as a hierarchically arranged information processing system that enables the dynamic interaction of animals with their environment. And this would even say, and this is interesting, things like sociobiology should actually be a subset of comparative psychology, and other things. So now I’ve just jumped over to the human social sciences. And the root of this is going to be linguistics, okay. And then the development of the human mind into human sociality, studied by anthropology, sociology, economics, and political science. Right, right. And we can see that human cognitive science is the base; this is now like the 10th floor. It’s the intersection of understanding how the human mind processes information and then interacts to create cognitive structures. Right. And we can include linguistics in there. And before I knew you, I put the architecture of the mind there, but we can now put other kinds of information processing systems there too. Then we move up a level to the whole individual. So as a clinician, when somebody comes to me, I look at their whole person in developmental context. Right. That’s the level of human personality and small group relations. And then finally you get the primary social sciences: anthropology, political science, economics, and sociology. Right. Okay. So what I’ve just done is I’ve said, hey, there are four main branches and then twelve different floors. Right. And we can see some of these floors, like right here: very few people dispute the fact that there are the human social sciences. You can debate where exactly the lines are, but there’s anthropology, political science, economics, and sociology.
And they are about the ways in which people get together and the cultural and social implications of that. Right. Okay. And right beneath that is human psychology. Although my field, our field, whatever, psychology, never properly differentiated its effective role at the level of the human versus the basic. Okay. So ultimately, what I’m gonna argue is that behavior writ large is a lens through which scientists view the world. We develop ontological theories of that reality. We develop scientific systems of justification to legitimize them: quantification, experimentation, inference to the best explanation. And ultimately, from a scientific realism perspective, we are mapping the ontic reality. Right. And behavior writ large is central. Okay. So this ultimately sets the stage for: well, what are mental processes? Okay. We have to first keep in mind that the periodic table shows that mental processes reside in a behavioral universe. Right. Okay. And psychologists completely miss this. I’m gonna move quickly. Here’s just a quote: since science studies observable phenomena, and the mind is not directly observable, we expand this to the study of mind and behavior. In other words, because of our epistemology, we have to turn mind into behavior. This is the way a lot of psychologists think of it. I wanna suggest that once we realize that behavior writ large is what science is studying, then we realize, huh, there’s actually a subset of behaviors that we’re interested in. And I wanna suggest that if we were to watch a cat fall, we would be watching a behavior. Okay. But when the cat falls and hits the ground and lands on its feet and runs away, what you were watching there is mental behavior. Okay.
So the differentiation here is that we can watch behavior in its totality, but if we factor out the physical causes, and even the physiological causes at the biological level of complex action, we can then see what the animal is doing in terms of its functional behavior as a whole. And we can think about that in terms of mental behavior. Right, right. And then ultimately, we’re gonna argue that we can now fundamentally set up mind as a kind of behavior. Mind, which is basically the collection of behaviors of the animal as a whole, will be defined as mental behavior. Neurocognition is the way in which the nervous system functions to process information in hierarchically, computationally controlled systems. And we have shown that a fundamental feature of that is recursive relevance realization. That’s also gonna be connected to the emergence of phenomenology, and this is all the beautiful work that John is doing. And then there’s mind as self-conscious reflection and knowing through language and proposition, which I will make a point about. Right, right. So ultimately, what I’m arguing here, my point basically, is that we don’t have a good definitional conceptual system, so I’m doing some work on that. We can recognize where our epistemology is, so a neurocognitive functional analysis of mental behavior combines brain and cognition as neuro information processing with a systematic analysis of the behavior of the organism as an animal as a whole. That’s an exterior scientific position, and it gives us one conception of mind. A second is the internal first-person experience of being that emerges sometime in the animal kingdom, is certainly present in dogs, may or may not be present in my fish over there, may or may not be present in insects. These are interesting questions that we can discuss.
We have provided a human phenomenological account in this narrative, and we'll then be able to trace, I believe, the evolution of brain and cognition to get a much more delineated frame of reference for the emergence of subjective phenomenology. Right. And there's also this whole other meaning of self-conscious narrative that philosophers often think about. I have a friend, Lawrence Calhoun, a philosopher, who says that when philosophers say philosophy of mind, they really should say philosophy of the human mind, because 99% of the time they're talking about higher cognitive reason and self-conscious reflection: where does that come from, how does that influence the physical world? And that's a whole other definition of mind. So ultimately, I'm gonna argue, and this is where I'll end today, that your frame of the four Ps and this description sets the stage for us to really align and be clear about how our concepts fit together, very beautifully, like puzzle pieces. Okay. And so ultimately, what the three domains of mind yield is what I call the map of Mind 1, 2, 3. It orients us to the exterior and interior epistemological considerations. Mind 1a refers to the neurocognitive, neuro-information processing that's operative. Mind 1b is the overt activity that can be observed. Mind 2 is the subjective phenomenology. Mind 3a is private self-conscious justification. And Mind 3b is what I share publicly. I like this, Greg, this is beautiful. Okay. So this enables us to see that there are three different domains, but when we add the interior and the exterior epistemological positions, we can separate them further into these five different domains. Now what's really cool is that when I listen to you, I can say, hey, John has four different Ps of cognition, okay? One of which is procedural.
And notice that we talk very much about the procedural as being what doesn't need consciousness once you have a fixed action-selection procedural pattern. We know from the studies of H.M. and all those other things that this is pretty far removed from consciousness. And what we really need are models by which the nervous system maps the neuromuscular system in very specific procedural, stimulus-response type sequences, okay? We also then have, on top of that, propositional knowledge, okay? Which we do not share with other animals, which makes us quite unique in the animal kingdom. And my first insight that led to the unified theory is that propositional questions set the stage for a question-answer dynamic, which generated the problem of justification, which then had an accuracy dimension, a personal dimension, and a social dimension, and this would give rise to the emergence of culture and the culture-person plane of existence, okay? But for our purposes, what you are showing is the connection between the first-person perspectival and the yoking of the participatory, which of course is the engagement, the animal-arena relation, where the animal is actually tracking itself as it builds things. If a beaver is building a dam, it's watching itself engage in that, and participatorily it has to model what's happening in the environment and what it's doing. And the yoking of that dynamic perspective with participation is, to me, a brilliant articulation of what it is that's driving the consciousness-monitoring, global workspace, integrated information process to determine recursively what's relevant so it can realize a path of behavioral investment. That's cool, that's really cool. And the other thing you note that's super cool is that we get so trapped in our objective-versus-subjective dichotomy; this feedback loop between our actions and our perceptions enables us to attend to a particular kind of epistemology that I think is overlooked: the transjective.
Right, thank you for that. Very beautiful. So my basic frame here, and I'll stop sharing now, is basically this: if we see the Enlightenment gap as an inability to effectively define matter and mind in a cohesive, coherent way, then I think we're on the cusp of developing the proper metaphysics, and I'll talk more next time about why I see a meta-theoretical synthesis. Right. And is that where the relevance realization stuff comes in? Exactly. So what I'm gonna argue is that behavioral investment theory is a meta-theory, and I'll get much more into the descriptive and explanatory systems, okay? And what was missing in my system, which went from neurocognitive functionalism and then said, yeah, something sort of magic happens with phenomenology, I mean, I have some notions about that, okay, and then you have justification on top of that. But we don't need any magic any longer; we can take the scale-invariant, multilayered modeling with recursive relevance realization to yoke together integrated information and the global neuronal workspace and provide a cognitive-phenomenological picture that pulls so many pieces together. That's fantastic, that's really beautiful work, Greg. And I can see why this would also give us a platform for discussing the interconnection between the hard problem of consciousness and the hard problem of meaning. Totally. Because, I mean, let's just put it this way: what happens if there's not really a frame of reference for understanding mind versus matter, okay? And I think I've told you this at one point, but a friend of mine's son, a young adult, and I were having a really engaged conversation. What did you learn your first semester in college? And his response to me was, oh, I took a couple of classes, and I learned I was just a bunch of chemicals. Okay, so to me, that's a microcosm, okay, of the Enlightenment gap, right? I'm like, what is the metaphysics of love?
Zach Stein will talk about this. What is the purpose that we have? How do we realize the liminal space? I mean, it just goes on and on. If you undercut mind and then fail to clarify what we are, okay, the narrative, and of course science undercuts lots of religion and meaning-making too. So we'll get into that, but fundamentally the Enlightenment gap is not only a scientific problem; it's a problem for everyday meaning-making and the values we have for living. That's beautiful, Greg. I look forward to the next time we're together. Hopefully the imp of the perverse won't be infecting us with so many glitches next time. He was dancing on us, and I don't know if it's on my end or not; the last 24 hours have been a little sketchy. I'll do what I can to investigate. Okay, I appreciate that. Anyways, I gotta jump, because I've got another thing to do, but this was fantastic. So thank you very much. I'll see if I can edit out some of the glitches and some of the pauses, so it's a little bit more watchable for our audience. Amen. Normally it's a lot clearer, so we're sorry about that. Yeah, very much. We'll do our best with this; it was really interesting. That stuff you did at the end, Greg, is just beautiful. Thank you for that. Thank you, friend. Okay, talk to you next time.