https://youtubetranscript.com/?v=IZyWuD9UqI4

Welcome back to Awakening from the Meaning Crisis. So last time we were taking a look at the centrality of relevance realization: how many central processes, central to our intelligence and possibly also to at least the functionality of our consciousness, presuppose, require, are dependent upon relevance realization. We had gotten to a point where we saw how many things fed into this, and then made the argument that it is probably, at some fundamental level, a unified phenomenon, because it comports well with the phenomenon of general intelligence, which is a very robust and reliable finding about human beings. And then I proposed to you that we need to do two things. We need to try to give a naturalistic account of this, and then show that, if we have naturalized it, we can use it in an elegant manner to explain a lot of the central features of human spirituality. I already indicated in the last lecture how some of that was being strongly suggested: an account of self-transcendence that comes out of the dynamic emergence being created by the ongoing complexification. And this has to do with the very nature of relevance realization as this ongoing, evolving fittedness of your sensory-motor loop to its environment, under the virtual engineering of bioeconomic, logistical constraints: efficiency, which tends to compress, integrate, and assimilate, and resiliency, which tends to particularize and differentiate. And when those are happening in a dynamically coupled and integrated fashion within an ongoing opponent processing, then you get complexification that produces self-transcendence. But of course much more is needed.

Now I would like to proceed to address this, though I can't do it comprehensively, not in a way that would satisfy everybody who is potentially watching. This is very difficult, because there are aspects of this argument that would get incredibly technical, and making the argument comprehensive is beyond what I think I have time to do here today. I'll put up notes for things you can read, things I'll point you to if you want to go into it more deeply. What I want to do is give an exemplary argument, an argument by example, of how you could try to bring about a plausible naturalistic account of relevance realization. Now we've gone a long way towards doing that, because we've already got this worked out in terms of information-processing processes. But could we see them potentially realized in the brain? And one more time I want to advertise for the brain. I understand why people want to resist the urge of a simplistic reduction, the claim that human beings are nothing but their brain. That's a very bad way of talking. That's like saying a table is nothing but its atoms; that doesn't ultimately make any sense. It's also the structural functional organization of the atoms, the way that structural functional organization interacts with the world, how it unfolds through time. So simplistic reductionism should definitely be questioned. On the other hand, we also have to appreciate how incredibly complex, dynamic, self-creating, plastic, and capable of very significant qualitative development the brain actually is. So I proposed to you that one aspect of relevance realization, the aspect that has to do with trading between being able to generalize and being able to specialize, is, as I've argued, a system going through compression and particularization.
Compression is something like what you're doing with a line of best fit, and particularization is when your function is more tightly fitted to the contextually specific data set. And again, compression gives you efficiency, particularization gives you resiliency; the one tends to integrate and assimilate, the other tends to differentiate and accommodate. Okay, so try to keep that in mind.

Now what I want to try to do is argue that there is suggestive evidence for this. It's by no means definitive, and I want it clearly understood that I am not proposing to prove anything here. That's not my endeavor. My endeavor is to show that there is suggestive evidence for something, and all I need is that it makes it plausible that there will be a way to empirically explain relevance realization. So let's talk about what this looks like. There's increasing evidence that when neurons fire in synchrony together, they're doing something like compression. So if you give somebody, for example, a picture that they can't quite make out and you're looking at how the brain is firing, the areas of the visual cortex (if it's a visual picture) are firing asynchronously. And then when the person gets it and goes "aha," you get large areas that fire in synchrony together. And finally, there's even increasing evidence that when human beings are cooperating in joint attention and joint activity, their brains are getting into patterns of synchrony. That opens up the possibility for a very serious account of distributed cognition; I'll come back to that much later.

Now, what we know about what's going on in the cortex — and this is a point that I think is very important — is that this is scale invariant. What that means is that at many levels of analysis you will see this process happening. Why is that important? Well, if you remember, relevance realization has to be something that's happening very locally and very globally; it has to be happening pervasively throughout all of your cognitive processing. So the fact that this process I'm describing is also scale invariant in the brain is suggestive that it can be implementing relevance realization. Okay, now what happens at many levels of analysis is that you have this pattern where neurons fire in synchrony and then become asynchronous, and then they fire in synchrony, and then they become asynchronous, and they're doing this in a rapidly oscillating manner. This is an instance of what's called self-organizing criticality. It's a particular kind of opponent processing, a particular kind of self-organization, so we're getting more precision in our account of the self-organizing nature, potentially, of relevance realization.

Okay, so let's talk a little bit about this first, and then we'll come back to its particular instantiation in the brain. Self-organizing criticality goes originally back to the work of Per Bak. So let's say you have grains of sand falling, like in an hourglass. Initially it's random — random from our point of view — where, within a zone, individual grains will end up; we don't know where, because they'll bounce and all that. But over time, because there's a virtual engine there — friction and gravity, but also the bounce, so the bouncing introduces variation while the friction and the gravity put on constraint — what happens is the sand grains self-organize.
There's no little elf that runs in and shapes the sand into a mold; it self-organizes into a mound, and it keeps doing this and keeps doing this. Now at some point it enters a critical phase. Criticality means the system is close to potentially breaking down. See, when it's self-organized like this, it demonstrates a high degree of order. Order means that as this mound takes shape, the position of any one grain of sand gives me a lot of information about where the other grains are likely to be, because they're so tightly organized; it's highly ordered. But then what happens is that order breaks down, and you get an avalanche. It avalanches down. And if this is too great, if the criticality becomes too great, the system will collapse. There are people who argue that civilizations collapse due to what's called general systems failure, which is that these entropic forces actually overwhelm the structure of the system, and the system just collapses. So collapse is a possibility with criticality.

However, what can happen is the following. The sand spreads out due to the avalanche, and that introduces variation, important changes in the structural functional organization of the sand mound. Because now there's a bigger base, and what that means is that a new mound forms, and it can go much higher than the previous mound. It has an emergent capacity that didn't exist in the previous system. And then it cycles like this, it cycles like this. Now at any point — again, there's no telos to this — the criticality can overwhelm the system, and it can collapse. At any point, the criticality within you can overwhelm the system, and you can die, right?

But what you see is the brain cycling in this manner, self-organizing criticality. The neurons come into synchrony together, that's like the mound forming, and then they go asynchronous — this is sometimes even called a neural avalanche — and then they reconfigure into a new synchrony, and then they go asynchronous. So do you see what's happening here? The brain is oscillating like this, and what it's doing with self-organizing criticality is data compression; then it does a neural avalanche, which opens up, introduces variation into the system, which allows a new structure to reconfigure, one that is momentarily fitted to the situation. Then that breaks up, right? Now do you see what it's doing? Constantly, moment by moment — this is happening in milliseconds — it's evolving its fittedness. It's complexifying its structural functional organization. It is doing compression and particularization, which means it's constantly, moment by moment, evolving its sensory-motor fittedness to the environment. It's doing relevance realization, I would argue.

Now, what does that mean? Well, one thing we should be careful of: when I'm doing this, I'm using words and gestures to convey and make sense of it, but what you have to understand is that this is happening at a myriad of levels. There's this self-organizing criticality doing this fittedness at one level, and it's interacting with another one doing it at another level, all the way up to the whole brain, all the way down to individual sets of neurons. So this is a highly recursive, highly complex, very dynamic evolving fittedness, and I would argue that it is thereby implementing relevance realization. There is some evidence to support this.
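Before turning to that evidence, a brief aside for readers who want to see the dynamic concretely. Below is a minimal Python sketch of the Bak–Tang–Wiesenfeld sandpile, the standard toy model of self-organized criticality that the sand-mound story gestures at. The grid size, toppling threshold, and number of dropped grains are arbitrary illustrative choices, not anything specified in the lecture.

```python
import random

SIZE, THRESHOLD = 15, 4
grid = [[0] * SIZE for _ in range(SIZE)]

def topple(grid):
    """Relax every over-threshold site; return the avalanche size (number of topplings)."""
    avalanche = 0
    unstable = True
    while unstable:
        unstable = False
        for i in range(SIZE):
            for j in range(SIZE):
                if grid[i][j] >= THRESHOLD:
                    unstable = True
                    avalanche += 1
                    grid[i][j] -= THRESHOLD
                    # Pass one grain to each neighbour; grains that fall off
                    # the edge are lost (the open boundary of the table).
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < SIZE and 0 <= nj < SIZE:
                            grid[ni][nj] += 1
    return avalanche

avalanche_sizes = []
for _ in range(20000):                      # drop grains one at a time
    i, j = random.randrange(SIZE), random.randrange(SIZE)
    grid[i][j] += 1
    avalanche_sizes.append(topple(grid))

# After a transient the pile hovers near criticality: most drops trigger tiny
# avalanches, but occasionally one reorganizes a large part of the system.
print("largest avalanche:", max(avalanche_sizes))
print("avalanches involving > 50 topplings:", sum(s > 50 for s in avalanche_sizes))
```

Running something like this shows the pattern described above: long stretches of small, local readjustments punctuated by occasional large avalanches that restructure much of the pile, after which a new, differently shaped mound builds up.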
So Thatcher et al. did some important work in 2008 and 2009 pointing towards this. Here's the argument I'm making: I'm making the argument that RR can be implemented by this — not that it's completely identical to it, because remember there's also exploration and exploitation — but it can be implemented by this. And I've also, last time, made the argument that relevance realization is your general intelligence. If this is correct, then we should see measurable relationships between these two. Of course, we've known how to measure general intelligence psychometrically for a very long time, and now we're getting ways of measuring this self-organization in the brain. And what Thatcher et al. found was exactly that: a strong relationship between measures of this self-organization and how intelligent we are. Specifically, what they found was that the more flexibility there is in this process, the more intelligent you are; the more it demonstrates a kind of dynamic evolvability, the more intelligent you are. Is this a conclusive thing? No. There's lots of controversy around this, and I don't want to misrepresent that. However, I would point out that there was a very good article by Hesse and Gross in 2014 giving a comprehensive review of self-organized criticality as a fundamental property of neural systems. And they, I think, made a very good case that it's highly plausible that self-organizing criticality is functional in the brain in a fundamental way, and that lines up — it's convergent — with this.

So what we've got is a possibility, and this carries some implications with it. I'm hesitant here because, by drawing out the implications, I don't want to thereby say that this has been proven. I'm not saying that. So remember the "if." But if this is right, it has important implications. It says that we may be able to move from psychometric measures of intelligence to direct measures in the brain — in that sense, much more objective measures. Secondly, if this is on the right track — and remember, a lot of these ideas were derived from emerging features of artificial intelligence — it may feed back into that work and help develop artificial intelligence. So there's a lot of potential here, for both good and ill. If you'll allow me a brief aside: I'm hoping by this project that I'm engaged in to link, as much as I can, and as much as the people I work with, my lab, and my colleagues can, this emergent scientific understanding very tightly to the spiritual project of addressing the meaning crisis, rather than letting it just run rampant willy-nilly.

All right. So that's a way in which we could give a naturalistic account of RR in terms of how neurons are firing — these are firing patterns. Now I need another scale-invariant thing, but I need it to deal not with how neurons are firing but with how they're wiring: what kinds of networks they're forming. I'm not particularly happy with the wiring metaphor, but it has become pervasive in our culture and it's mnemonically useful because firing and wiring rhyme. So there is a relatively new way of thinking about how we can look at networks. It's called graph theory or network theory. It's gotten very complex in a very short amount of time, so I just want to go through the core basic idea with you: that there are three kinds of networks. And this is neutral — it doesn't mean just networks in the brain.
It can mean networks like how the internet is a network. It could mean how an airline is a network, a railway system, et cetera. This analysis, this theoretical machinery, is applicable to all kinds of networks, which is part of its power. So you have nodes — these are the things that are connected — and then you have connections. I'm drawing two connections here; this isn't a single thick one, these are two individual connections. And there's the same number of connections and nodes for each network.

So there are three kinds of networks. This one is called a regular network. It's regular because all of the connections are short-distance connections, and you'll notice that there's a lot of redundancy in this network: everything is doubly connected. This one is called a random or chaotic network; it's a mixture of short and long connections. And this one is called a small world network. The name comes from the Disney song "It's a Small World (After All)," because this kind of network was originally discovered by Milgram when he was studying patterns of social connectedness — and it's a small world after all. Now, originally people were just talking about these three, but it's now understood that these are names for broad families of different kinds of networks that can be analyzed into many different subspecies. I won't get into that detail, because I'm just trying to make an overarching core argument.

So remember I said that the regular network has a lot of redundancy in it? That's really important, because it means that this network is terrifically resilient. I can do a lot of damage to this network and no node gets isolated; nothing falls out of communication. It's tremendously resilient. But you pay a price for all that redundancy: this is actually a very inefficient network. Now your brain might trick you here, because it looks so well ordered. It looks like a nice clean room, and clean rooms look like they're really highly ordered, so you think this must be the most efficient, because cleanliness is orderliness and orderliness is efficiency. You can't let that mislead you. You actually measure how efficient a network is by calculating what's called its mean path distance. I calculate the number of steps between all the pairs of nodes: how many steps do I have to go through to get from here to here? One, two. How many from here to here? One, two, three, four. I do that for all the pairs and then I take the average. The mean path distance measures how efficient your network is at communicating information. Regular networks have a very, very high mean path distance, so they're very inefficient. You pay a price for all that redundancy, and that's of course because redundancy and efficiency are in a trade-off relationship.

Now the random network — and here's where again your brain is going to say, "this is so messy" — it turns out is actually very efficient. Because it has so many long-distance connections, it has a very low mean path distance. But because efficiency and resiliency are in a trade-off relationship, it's not resilient; it's very poor in resiliency. So notice what we're getting here: these networks are being constrained in their functionality by the trade-off in the bioeconomics of efficiency and resiliency. Markus Brede has mathematical proofs about this in his work on network configuration.
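As a brief aside, here is a small Python sketch of the trade-off just described. It uses the networkx library and the Watts–Strogatz model — my choice of tools, not something named in the lecture — where rewiring probability p = 0 gives a regular ring lattice, p = 1 an essentially random network, and an intermediate p the small world case discussed next. The node count and degree are arbitrary.

```python
# Compare mean path distance (efficiency) and clustering (a rough proxy for
# the redundancy/resiliency of short, doubled-up local wiring) across the
# three network families, via the Watts-Strogatz model.
import networkx as nx

n, k = 200, 6  # 200 nodes, each initially wired to its 6 nearest neighbours
for label, p in [("regular", 0.0), ("small-world", 0.1), ("random", 1.0)]:
    G = nx.connected_watts_strogatz_graph(n, k, p, seed=42)
    mean_path = nx.average_shortest_path_length(G)  # lower = more efficient
    clustering = nx.average_clustering(G)           # higher = more redundant
    print(f"{label:12s} mean path distance = {mean_path:5.2f}, "
          f"clustering = {clustering:.2f}")
```

Typical output shows the regular lattice with a long mean path distance but high clustering, the random network with short paths but almost no clustering, and the small world network getting most of both, which is the sense in which it optimizes between efficiency and resiliency.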
Now what about this one, the small world network? It's more efficient than the regular network but less efficient than the random network, and it's more resilient than the random network but less resilient than the regular network. But you know what it is? It's optimal. It gets the optimal amount of both: it optimizes for efficiency and resiliency. Now that's interesting, because it would mean that if your brain is doing relevance realization by trading between efficiency and resiliency, it's going to tend to generate small world networks. And not only that: small world networks are going to be associated with the highest functionality in your brain. And there's increasing evidence that this is in fact the case. There was research done by Langer et al. in 2012 that did something similar to what Thatcher did. So here we've got this again: RR is G, and it looks like RR might be implemented by — this is what I'm putting here — small world networks. And what Langer et al. found is a relationship between these: the more your brain is wired like a small world network, the better your intelligence. Again, is this conclusive? No. Still controversial — that's precisely why it's cutting edge. However, increasingly we're finding that these kinds of patterns of organization make sense. Markus Brede was doing work just looking at artificial networks, neural networks, showing that you want to optimize between these. So you're getting design arguments out of artificial intelligence, and you're starting to get these arguments emerging out of neuroscience.

Interestingly, Langer et al. did a second experiment in 2013: when you put extra task demands on working memory, you see that working memory becomes even more organized like a small world network. Hilger et al. in 2016 found that there is a specific kind of small world organization having to do with efficient hubs. Their paper is entitled "Efficient hubs in the intelligent brain," and it shows that the efficiency of hub regions in the salience network is correlated with general intelligence. So what seems to be going on — again, suggestive, not conclusive — is that you've got the Langer work, where working memory goes more small-world, and then you've got this very sophisticated kind, a species of small world network, in recent research, correlated with the salience network in the brain. Do you see that? As your brain is moving to a specific species of small world network within the salience network, you become more intelligent. And the salience network is precisely that network by which things are salient to you, stand out for you, grab your attention.

One more time, is this conclusive? No. I'm presenting to you stuff that's literally happening in the last two or three years, and, as it should be, there's tremendous controversy in science. However, this is what I'm pretty confident of: that the controversy is progressive, it's ongoing, it's getting better and better, such that it is plausible that we will increasingly be able to explain relevance realization in terms of the firing and the wiring, and that this will be increasingly convergent with the ongoing progress in artificial intelligence. Remember, the firing is self-organizing criticality, and the wiring is small world networks. And here's something else that's really suggestive: the more a system fires this way, the more it wires this way.
So if a system is firing in a self-organizing critical fashion, it will tend to network as a small world network; and the more it wires this way, the more it is wired like a small world network, the more likely it is to fire in that self-organizing critical pattern. These two things mutually reinforce each other's development. So let's try to put this all together. It's hard to grok this, I get that, but remember: the firing is happening in a scale-invariant, massively recursive, complex, self-organizing fashion; the wiring is also happening, scale invariant, in a very complex, self-organizing, recursive fashion; and the two are deeply interpenetrating, affording and affecting each other, in ways that have to do directly with engineering the evolving fittedness of your salience landscaping and your relevance realization within your sensory-motor interaction with the world. This is, I think, strongly suggestive that relevance realization is going to be given a completely naturalistic explanation.

Okay, notice what I'm doing here. I'm giving a theoretical structural functional organization for how this can operate. So we had, over the last couple of times, the strong convergence argument to relevance realization; we have a naturalistic account of it, or at least the rational promise that one is forthcoming; and now we're getting an idea of how we can get a structural functional organization of it in terms of the firing and wiring machinery. Now this is, like I said, both very exciting and potentially scary, because it does carry with it the real potential to give a natural explanation of the fundamental guts of our intelligence.

I want to go a little bit further and suggest that not only may this help to give us a naturalistic account of general intelligence, it may point towards a naturalistic account at least of the functionality, but perhaps also of some of the phenomenology, of consciousness. This again is even more controversial. But again, my endeavor here is not to convince you that this is the final account or theory; it's to make plausible the possibility of a naturalistic explanation. So let's remember a couple of things. There's a deep relationship between the functionality of consciousness — remember the global workspace theory — and working memory; global workspace theory overlaps a lot with working memory. And we already know that there are important overlaps in the brain areas that have to do with general intelligence, working memory, attention, and salience, and also that measures of these and measures of their functionality are highly correlated with each other. That's now pretty well established. We also know from Lynn Hasher's work that working memory is doing relevance realization. You remember I also gave you the argument, when we talked about the functionality of consciousness, that many of the best accounts of the function of consciousness say that it's doing relevance realization. And so this should all hang together, such that the machinery of intelligence and the functionality of consciousness should be deeply integrated together in terms of relevance realization. We do know that there seem to be some important relationships between consciousness and self-organizing criticality. This has to do with the work of Cosmelli et al. and others, work that is ongoing. Their work was in 2004.
They did what's called the binocular rivalry experiment. Basically you present two images to somebody, positioned in such a way that they go to different visual fields, and they compete with each other because of their design. So what happens in people's visual experience — let's say it's a triangle and a cross — is that experientially they go: I'm seeing a cross; oh, now I'm seeing a triangle; I'm seeing a cross; now I'm seeing a triangle. Now, that's not obscure to you. You know the Necker cube, right? When you watch the Necker cube, it flips. This face can be the front, with the cube going back this way, or you can flip it and see it the other way, where that face is the front and it goes the other way. You are constantly flipping between these and you can't see them both at the same time. So that's what binocular rivalry is. In the experiment, though, you do this in a more controlled way: you present the images to different visual fields, and so to different areas of the brain. And then you can see what happens. When the person is seeing the triangle, one part of the brain goes into synchrony. And then as soon as the triangle drops out of experience, that part goes asynchronous, and the other part of the brain, the part that's picking up on the cross — because that's a different area of the brain — goes into synchrony. And what you can see is that as the person flips back and forth in experience, different areas of the brain are going into synchrony or asynchrony. So that is suggestive of a relationship between consciousness and self-organizing criticality. Again, suggestive. But we've already got independent evidence, a lot of convergent evidence, that the functionality of consciousness is to do relevance realization, which explains its strong correlation, via working memory, with measures of general intelligence. And now we know that this is plausibly associated with self-organizing criticality. So again, convincing? No. Suggestively convergent? Yes.

There's another set of experiments, done by Monti et al. in 2013. What you're basically doing is giving people a general anesthetic and then observing their brain as they pass out of consciousness and back into consciousness. And what did they find? They found that as the brain passes out of consciousness, it loses its overall structure as a small world network and breaks down into more local networks. And then as it returns to consciousness, it goes into a small world network formation again. So consciousness seems to be strongly associated with the degree to which the brain is wiring as a small world network.

Now I want to try to bring these together in a more concrete instance where you can see the intelligence, the consciousness, and this dynamic process of self-organization all at work. So let's get back to the machinery of insight. If you remember, we talked about this when we talked about the use of disruptive strategies and the work of Stephen and Dixon. Do you remember what they found? They found a way — a very sophisticated but nevertheless very reliable way — of measuring how much entropy is in people's processing when they're trying to solve an insight problem.
Remember, they were tracing through the gear figures, and what they found is that entropy goes up right before the insight, and then it drops and the behavior — not the brain, that was a mistake on my part — becomes even more organized. Now that's plausibly, and they suggested this, an instance of self-organizing criticality: what's happening is you're getting the neural avalanche, it's breaking up, and then that allows a restructuring which goes with the restructuring of the problem. So you're breaking frame with the neural avalanche, and then you're making frame, like the new mound forming, as you restructure your problem framing, you get the insight, and you get a solution to your problem. So this links insight to self-organizing criticality very clearly.

Now, interestingly enough, Schilling has a mathematical model from 2005 linking insight to small world networks. She argues quite persuasively — and this is very interesting — that what you can see happening in an insight is that people's information is initially organized like a regular network; just think about that intuitively. So my information is integrated here with local organization; all I've got is a regular network. But what can happen is that a long-distance connection forms. My regular network is suddenly altered into a small world network, which means I lose some resiliency, but I gain a massive spike in efficiency; I suddenly get more powerful. So insight is when a regular network is being converted into a small world network, and that is a process of optimization, because remember, the small world network is more optimal than the regular network. And you can see that in how people's information is organized in an insight. Think about how metaphor affords insight: you take two domains — "Sam is a pig" — and you suddenly get this connection between them, and those two regular networks are now coalesced into a small world network.

Okay, so in some of the work I've done with other people, I've been suggesting, because of this, the following. What happens in insight, à la Stephen and Dixon, is you get self-organizing criticality, and that self-organizing criticality breaks up a regular network and converts it into a small world network. So what you're getting is a sudden enhancement, an increased optimization, of your relevance realization. And what is it accompanied by? It's accompanied by a flash of salience, remember, and then that can be extended in the flow experience. You're getting an alteration of consciousness, an alteration of your intelligence, an optimization of your fittedness to the problem space. Okay, I'm going to say this again: I'm trying to give you stuff that makes this plausible. I'm sure that in its specifics it's going to turn out to be false, because that's how science works, but that's not what I need right now. What I've tried to show you is how progressive the project of naturalizing this is, and how much is converging towards it, such that it is plausible that this will be something we can scientifically explain — and more than scientifically explain, something we'll be able to create as we create autonomous artificial general intelligence. Okay, let's return back.
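One more brief aside before returning: here is a toy Python sketch of Schilling's basic point as described above, again using networkx (my choice of tool; the network sizes are arbitrary). Starting from a regular ring lattice, adding even a single long-distance connection produces a measurable drop in mean path distance while leaving the local structure almost untouched.

```python
# Start with a regular ring lattice and add a single long-range "insight"
# connection; measure how the mean path distance (efficiency) changes.
import networkx as nx

n, k = 100, 4
G = nx.watts_strogatz_graph(n, k, 0.0, seed=1)   # p = 0: a purely regular network
before = nx.average_shortest_path_length(G)

G.add_edge(0, n // 2)                            # one long-distance connection
after = nx.average_shortest_path_length(G)

print(f"mean path distance before: {before:.2f}, after one long edge: {after:.2f}")
```

Each further long-range link compounds the gain, and the classic Watts–Strogatz result is that only a handful of such links is enough to push a regular lattice into the small world regime: a spike in efficiency while keeping most of the local redundancy.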
If I've at least made it plausible that there's a deep connection between relevance realization and consciousness, I want to try to point out some aspects of relevance realization and why it creates a tremendously textured, dynamically flowing salience landscape. Okay, so remember how relevance realization is happening at multiple interacting levels? We can think about this where you're just getting features being picked up — remember the multiple object tracking — basic salience assignment. This is based on work originally from Matson in 1976, his book Sentience, which I've mentioned before, and then some work that I did with Jeb Marshman and Steve Pierce, and later work that I did with Anderson Todd and Richard Wu. The featurization is feeding up into foregrounding and feeding back: a bunch of these features, and then presumably I'm foregrounded and other stuff is backgrounded. This then feeds up into figuration: you're configuring me together and figuring me out — think of that language — so that I have a structural functional organization, I'm aspectualized for you. That's feeding back, and of course there's feedback down to the levels below. And then that of course feeds up to framing, how you're framing your problems, and we've talked a lot about that, and that feeds back. So you've got all of this happening, and it's giving you this very dynamic and textured salience landscape.

And then you have to think about how that's the core machinery of your perspectival knowing. Notice what I'm suggesting to you here. You've got the relevance realization that is the core machinery of your participatory knowing — it's how you are getting coupled to the world so that coevolution, reciprocal realization, can occur. That's your participatory knowing. This feeds up to, and gets feedback from, your salience landscaping. This is your perspectival knowing. This is what gives you your dynamic situational awareness, this textured salience landscaping. This, of course — and we'll talk more about it — is going to open up an affordance landscape for you. Certain connections, certain affordances, are going to become obvious to you. And you say, "oh man, this is so abstract." Well, this is how people are trying to wrestle with this now. Here's an article from Frontiers in Human Neuroscience: "Self-organization, free energy minimization, and optimal grip on a field of affordances" — free energy minimization is Friston's work, and it ultimately has to do with getting your processing as efficient as possible — using all of this language that I'm using with you right now. That's by Bruineberg and Rietveld, from 2014, in Frontiers in Human Neuroscience. Just one example among many.

Okay, so this is feeding up, and what it's basically giving you is affordance obviation: certain affordances are being selected and made obvious to you. That, of course, is going to be the basis of your procedural knowing, knowing how to interact. And I think there might be a way in which that interacts more directly here, maybe through kinds of implicit learning, but I'm not going to go into that. We'll come back later to how propositional knowing relates to all of this. I'm putting it aside, because this is where we do most of our talking about consciousness — with this, I think, at the core: the perspectival knowing.
But it's the perspectival knowing that's grounded in our participatory knowing. And it's the perspectival knowing — your situational awareness that obviates affordances — that is what you need in order to train your skills. That's how you train your skills. And we know that consciousness is about doing this higher-order relevance realization, because that's what this is: higher-order relevance realization that affords you solving your problems. Okay, so I mean all of this when I'm talking about your salience landscaping. I'm talking about it as the nexus between your relevance realization and participatory knowing on one side, and your affordance obviation and procedural knowing, your skill development, on the other, with perspectival knowing at the core.

And then what's happening in here is this. If that's the case, then you can think of your salience landscape as having at least three dimensions to it. One is pretty obvious to you, which is aspectuality. Your salience landscape, as I said, is aspectualizing things: features are being foregrounded and configured, and they're being framed. So take this marker: it is aspectualized. Remember, whenever I'm representing or categorizing it, I'm not capturing all of its properties; I'm just capturing an aspect. So it's aspectualized. Everything is aspectualized for me.

There's another dimension here of centrality. I'll come back to this later, but this has to do with the way relevance realization works. Relevance realization is ultimately grounded in how things are relevant to you — literally, literally how they are important to you. You import, right? How they are constitutive. At some level, the sensory-motor stuff is to get stuff that you literally need to import materially, and then at a higher level you literally need to import information to be constitutive of your cognition. We'll come back to that transition later. But what you have in the perspectival knowing is aspectuality, and then everything is centered; it's not non-valenced, it's vectored onto me.

And then it has temporality, because this is a dynamic process of ongoing evolution. Timing — small differences in time — makes huge impacts, huge differences, in such dynamical processing. Kairos is really, really central. When you're intervening in these massively recursive, dynamically coupled systems, small variations in time can unexpectedly produce major changes. So things have a central relevance in terms of their timing, not just their place in time. So think of your salience landscape as unfolding in these three dimensions of aspectuality, centrality, and temporality — there's an acronym there, ACT. This is an enacted kind of perspectival knowing.

So you've got consciousness, and what it's doing for me functionally is all of this, and what it's doing in that functionality is all of this. What that's giving me is perspectival knowing that's grounded in participatory knowing, that affords procedural training, and that has a salience landscape with aspectuality, centrality, and temporality. Look at what it has. Centrality is the here-ness: my consciousness is here, because it is indexed on me. Of course it has now-ness, because timing is central to it. And it has togetherness, unity, how everything fits together.
I don't want to say unity, because unity makes it sound like there's a single thing — but there's a oneness to your consciousness; it's all together. You have the here-ness, the now-ness, the togetherness, the salience, the perspectival knowing, how it is centered on you. A lot of the phenomenology of your consciousness is explained along with the functionality of your consciousness. Is that a complete account? No. But it's a lot of what your consciousness does and is.

So I would argue that at least what that gives us is an account that we're going to need for the right-hand side of the diagram: why altered states of consciousness can have such a profound effect on you, reaching down into your identity and up into your agency; why they can be linked to things like a profound sense of insight — we've talked about this before when we talked about higher states of consciousness; how they can feel like a dramatic coupling to your environment — that's the participatory coupling that we found in flow. This all, I think, hangs together extremely well, which means it looks like there's a very strong link between the two, and that I have the machinery I need to talk about that right-hand side of the diagram.

Before I do that, I want to make a couple of important points to remind you of things. Relevance realization is not cold calculation. It is always about how your body is making risky, affect-laden choices about what to do with its precious but limited cognitive, metabolic, and temporal resources. Relevance realization is deeply, deeply, always — and think about how this also connects to consciousness — always an aspect of caring. That's what Read Montague, the neuroscientist, argues in his book Your Brain Is (Almost) Perfect: that what makes us fundamentally different from computers is that, because we are in the finitary predicament, we care about our information processing and care about the information processed therein. So this is always affect. Things are salient; they're catching your attention; they're arousing, changing your level of arousal — remember how arousal is an ongoing, evolving part of this. They are constantly creating affect, motivation, emotion, moving you towards action. You have to hear how, at the guts of consciousness and intelligence, there is also caring. That's very important, because it brings back, I think, a central notion from Heidegger — I know many of you are wondering why I haven't spoken about him yet; I'm going to speak about him later — that at the core of our being in the world is a foundational kind of caring. And this connection I'm making is not far-fetched. Look at somebody deeply influenced by Heidegger who is central to third-generation or 4E cognitive science: the work of Dreyfus and others. Dreyfus has a long and important history of reminding us that our knowing is not just propositional knowing; it's also procedural and, ultimately I think, perspectival and participatory. He doesn't quite use that language, but he points towards it. He talks a lot about optimal gripping. And importantly, if you take a look at his work on Heidegger, Being-in-the-World, when he's talking about things like caring he's invoking, in central passages, the notion of relevance.
And when he talked about What Computers Can't Do, and later on What Computers Still Can't Do, what computers are basically lacking is this Heideggerian machinery of caring, which he explicates in Being-in-the-World in terms of the ability to find things relevant. And this of course points again towards Heidegger's notion of Dasein: that our being in the world is, to use my language, inherently transjective, because all of this machinery is inherently transjective. And it is something that we do not make. We and our intelligible world co-emerge from it; we participate in it. I want to take a look at what that means for our spirituality next time. Thank you very much for your time and attention. Thank you for watching!