https://youtubetranscript.com/?v=TxZdwrjM96I

Thank you for watching. This YouTube and podcast series is by the Vervaeke Foundation, which in addition to supporting my work also offers courses, practices, workshops, and other projects dedicated to responding to the meaning crisis. If you would like to support this work, please consider joining our Patreon. You can find the link in the show notes.

Welcome everyone to another Voices with Vervaeke. I'm excited to be here. This is the first time that Sam Tideman is going to be on my channel. I've been on his channel multiple times, but it's a great pleasure to have him here. We're going to talk, well, we're going to begin to talk about AGI, but that will no doubt lead into other things. I'm really excited about this. As many of you know, I have a book out, Mentoring the Machines, which is for a general lay audience about how we can most wisely, at least I'm arguing a proposal for how we can most wisely, deal with this looming significant change. To pretend it's not going to change things, I think, is self-deceptive. Then for those of you who are interested in a more academically rigorous presentation, I have a video essay on YouTube and a bunch of video essays in response to it, where other people have come in and given their counterarguments. We've discussed it, and Sam is going to be in that august company. First of all, Sam, welcome, and maybe take a moment to introduce yourself to my audience, because they might not know you as well as your audience knows me.

Sure. Well, I really appreciate the invitation, John, and I'm honored and flattered to be here. A little bit about who I am. I have my own YouTube channel. It's called Transfigured. It mostly deals with theology and philosophy and stuff. I would say I'm a resident member of this little corner of the internet. Yes, for sure. I first got introduced to you through Paul VanderKlay. I've been a fan and regular listener and a regular guest on his podcast. Then you came on his channel and talked about the salience landscape, and I've been following your work closely ever since. I guess I'm a normal suburban dad in some senses. I live in the Chicago area. What's interesting is that most of the time on YouTube, I feel like I'm talking about theology, Christianity, philosophy, something like that. My professional background is in what you could call statistics, biostatistics, machine learning, AI, something along those lines. I feel like this is going to be one of the few times where I get to talk about my professional or academic background and those sorts of interests, as opposed to something that more often seems like a hobby than my main focus. A couple of other things about me. I feel like I'm halfway above a rando, but not quite to whatever the level is that is above a rando, whatever level you and Paul and others are on, so I'm in whatever that intermediary space is. I lead the Chicagoland Bridges of Meaning group, which is one of the estuaries that is part of Paul VanderKlay and John Van Dong's network. We are a group that gets together once a month and practices dialogue together. We talk about various things that we're interested in, but we try to do it in a way that is respectful and engaging and mutually edifying. I've been honored to lead that group for a couple of years now with my friend Hank. We're still going strong. Actually, our numbers are as high as ever.
If anyone out there in John Vervaeke's audience hasn't heard of that before: if you look for it on meetup.com and you live in the Chicago area, I'm sure you'll be able to find it, and we'd love to have you. If you like listening to John's channel, then you would be a perfect fit, I'm sure. So that's a little bit about who I am. To be a little more specific, my academic background is specifically in biostatistics, and I've often been at the intersection of healthcare and machine learning and predictive modeling, trying to apply these technologies in the healthcare space for medical research and practice improvement at the same time, often trying to integrate them into the point of care where nurses and doctors are using them. The pandemic altered the course of that work quite dramatically. I was a COVID vaccine researcher during most of the pandemic, because most of my regular work was thrown offline. I currently work at Google. I left healthcare because COVID frustrated some of my career plans, you could say, and the opportunity of remote work was a little too tempting to ignore. I don't really work in the healthcare space anymore, but working at Google gives me a bit of an inside view into all of this machine learning and large language model stuff that's going on now. So that's a little bit about my background.

And I've really enjoyed your work, John. When Paul first showed it to me, I devoured your Awakening from the Meaning Crisis videos. I've really enjoyed your conversations with Bishop Maximus; I thought those were excellent. I like your conversations with Jordan Hall; I feel like I really resonate with those. It's very hard to find people who can talk fluently about artificial intelligence and Christianity and Neoplatonic philosophy all at the same time, so I'm really excited to get to talk to you about these things. We've had, I think, three previous conversations, two just between you and me and a third with VanderKlay present. We've focused on an intersection of, you could say, evolution, Neoplatonism, and teleology. And I think a lot of what we've talked about before will be relevant as we try to bring some of those same ideas to bear on the question of artificial intelligence. What is it? What isn't it? What could be dangerous about it? What might not be dangerous about it? What potential for good is there? What potential for evil, et cetera?

Thanks. Yeah, I agree, Sam. Not to be overly systematic about it, but I do think that we should always be addressing what I call the three dimensions of the discussion: the scientific import and impact of these machines, the philosophical import and impact, and the spiritual import and impact. And I agree with you that discussions about Neoplatonism and evolution might seem irrelevant or tangential, but I think they will bear upon our discussion of all three of these dimensions. I know that you've watched the video essay, and I appreciate you doing that. Thank you. And maybe you've seen a couple of the video responses too. But I'd like to give you some space right now for an initial presentation: what points do you want to bring forward for discussion?
What's your sort of, what's the argumentative genesis of those points? Things like that.

Yeah, sure. I think I'll tell a little bit of a story about some projects I've worked on in my career, not in any way to flatter myself, but because I think it's a very poignant example. It'll help explain where I'm coming from, and it shapes what I think are the possibilities and limitations of AI. So basically, I started out my career working at a company that made software for hospitals, and I found my way to being sort of in charge of collecting and analyzing the data that looked at how emergency departments were performing. This was back in the days when Obamacare was first coming into effect in the United States, and there was a big political question of whether states that expanded Medicaid as part of Obamacare would see emergency room visits decrease, stay the same, or increase. This was a hot-button political topic at the time. I thought, you know, I might have a large enough amount of data to answer that question just as well as anybody. So I looked at the question: all right, for the states that have expanded Medicaid, do we see their number of emergency room visits going up or down? And I compared them to the states that hadn't. So basically it was like Republican states versus Democratic states. And what I noticed is that they were almost perfectly correlated, even on a daily basis. I was like, why would a busy day at a California emergency department be a busy day at a Texas emergency department? Those would seem to be uncorrelated, or at least loosely correlated. And so that planted a bug in my mind: well, maybe there's some underlying process that makes emergency departments busy or not busy.

So I went and got my master's degree at Harvard, and I made a predictive model, a machine learning model, an AI model, whatever vocabulary you like; I think most of those words are basically interchangeable, since they're all basically statistical regressions at some level. It tried to forecast emergency department visits for the Boston Children's emergency department. And it was a pretty successful model. We used some new techniques and some new data sources and things like that. And I thought, well, great. Once we have figured out how to forecast emergency room visits and how busy or not busy an emergency department will be, or maybe even things like what sorts of patients they should expect or not expect, then you can just make emergency departments so much better. It'll just be obvious, once you've forecasted the inputs, how to optimize the system accordingly. So I tried to start a startup with my friend Carl, shout out to Carl, that was trying to sell these forecasts to emergency departments, on the idea that it would be valuable to them to know when they'll be busy and not busy, such that they could do things to optimize themselves and improve their quality and processes and so on. And that turns out to be nowhere near as true as I thought it was. I was dealing with this problem around the same time I started listening to you, when you were talking about the problems of relevance realization and the constraints of embodiment and how this is what enables you to see what's important. Because part of it, one part of it, is that emergency rooms just don't really like to change, and I think there are reasons for that. But part of it is like, okay, so things are going to be busy.
So what? And the question is, what makes an emergency department better or worse? There are so many different trade-offs being juggled by an emergency department at any given time. What's the trade-off between how long patients wait in the waiting room to see a doctor and the financial profitability of the department? What's the trade-off between the number of patients who pass away in the emergency department and the career satisfaction of the doctors? What's the trade-off between how completely you screen all of your patients for domestic violence and the patient satisfaction score they give at the end of their visit? There are so many different things going on. An emergency department is such a complex place that it's like, well, what do you mean by making it better? And that is really the question where I went, oh, I actually don't know. I can tell you that next Tuesday is going to be 25% busier than average for reasons, reasons, reasons. But it's like, okay, so maybe we could staff more nurses then. But is that better or worse? And all these sorts of questions.

I think what I was bumping up against was this. Part of me thought the next step was obvious: here's what we'll do. We'll make a utility function that captures the goodness of the emergency department. We'll have all sorts of key performance metrics that we care about: patient wait time, throughput, cost, revenue, all the things you can imagine, some good things you want to maximize and some bad things you want to minimize, thrown together into a utility function. Then you take your predictive model and the variables you have control over, like the number of nurses, the physician schedule, whether or not to get another x-ray machine, whether to change this room into this other kind of room, whatever. Okay, you've got your inputs, you've got the variables you can control, and then you've got your utility function, and you just go to town, right? And it is just nowhere near that simple, partly for reasons that are hard to articulate, but also because people get really uncomfortable talking about the trade-off between, say, money and well-being, or money and health, or money and death. For obvious reasons it makes us squeamish; it makes humans feel funny to even think that there is such a trade-off, even though the emergency department is in some sense embodying that trade-off at some level or another, whether or not it has made a self-aware approximation that we value a human life at 3.4 or 4.3 million dollars, or whatever it would be. All these things are embodied in the structure of the emergency department, but it is not exactly clear what they are, how they relate to each other, or what the trade-offs are. So it was really this problem: I didn't even know how to take the next step. I didn't know how to use machine learning and artificial intelligence to improve the morality, or I guess you could say the utility, if you want to be a little more cold-hearted about it, of an emergency department. Being able to forecast the inputs with greater statistical accuracy, it was entirely unclear how to connect the dots between that and improving the department.
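As a purely illustrative aside, here is a minimal sketch of the naive "forecast plus utility function" idea described above. Everything in it, the metric weights, the toy operations model, and the candidate staffing levels, is a hypothetical stand-in rather than anything from a real hospital system; the sketch only exists to show where the moral weight actually sits.

```python
# A minimal sketch of the naive "utility function plus forecast" idea described above.
# All weights and the toy operations model are hypothetical illustrations; the hard
# part in practice is that nobody can defensibly specify the weights.

def utility(wait_minutes: float, cost_dollars: float, patients_seen: int) -> float:
    # Hypothetical weights: how many dollars is a minute of patient waiting "worth"?
    # This line is exactly the trade-off people are (rightly) squeamish about making explicit.
    W_WAIT, W_COST, W_THROUGHPUT = -50.0, -1.0, 200.0
    return W_WAIT * wait_minutes + W_COST * cost_dollars + W_THROUGHPUT * patients_seen

def simulate_day(forecast_visits: int, nurses_on_shift: int) -> tuple[float, float, int]:
    # Toy stand-in for a real queueing/operations model of the department.
    wait = max(5.0, 4.0 * forecast_visits / nurses_on_shift)  # more nurses, shorter waits
    cost = 600.0 * nurses_on_shift                            # staffing cost for the shift
    seen = forecast_visits                                    # assume everyone is eventually seen
    return wait, cost, seen

forecast_for_tuesday = 125  # e.g. "next Tuesday will be 25% busier than average"

# "Go to town": grid-search the one controllable variable against the utility function.
best = max(range(4, 21), key=lambda n: utility(*simulate_day(forecast_for_tuesday, n)))
print(f"Staff {best} nurses? Only as defensible as the made-up weights above.")
```

The forecast and the search are the easy part; the weights are the whole moral question, and they are nowhere in the data, which is the gap being described here.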
And this is just one example from my career that I think shows the strengths and the limitations of AI: you butt up against, I guess you could say, the positive butting up against the normative. I don't think there is some super strong Humean divide between is and ought, but artificial intelligence certainly struggles with the distinction between is and ought. There is a difficulty translating between those two things, even if there isn't some indivisible wall between them. And I think this sort of lesson has implications for all sorts of things: what artificial intelligence could be good at and what it could be bad at, how to understand these large language models, how to understand potentially other use cases of AI, and all those sorts of things. Once we get to the level of application, we're at the level of morality or utility or purpose accomplishment at some level, and it is entirely unclear how to train these models to get them to be better at that. Part of me almost wonders how we have functional emergency departments at all. So part of what I was thinking about was: how do we ever come up with an emergency department that works, if I can't figure out how to get machines to make it work better? I'm not trying to argue that humans have been and will be the only ones who ever have this ability. But at least at the present time, one ability that we have, and that we haven't come up with any artificial or computational version of, is judging a purpose, and our ability to accomplish that purpose, at our own sort of level, without necessarily being told what the purpose is. We can look at an emergency room and say, well, I kind of have an idea in my mind, even if it's somewhat hard to articulate, of what a good and a bad emergency department is. Sometimes I can quantify that, but a lot of the time maybe I can't, and yet I can still kind of know it when I see it. Maybe my knowing it when I see it isn't perfect, and maybe someone who has more experience is better at judging this sort of thing than someone who has less experience. But I think emergency departments have relied on that, and again, emergency departments are just a particularly poignant example of what I'm trying to talk about.

Yeah, excellent. This is excellent.

Yeah, I think emergency departments are a good example because we sort of understand that they're very morally fraught, that there are a lot of very ethically difficult questions. They literally triage patients: which ones should be seen more quickly than others, which ones deserve more attention and resources than others. And that's happening on a minute-by-minute, hour-by-hour basis. So there are all of these morally fraught questions, but it's also something that's a fairly contained system. Anyone who's been to an emergency department has probably wondered, man, this could work a lot better than it does. I'm sure a lot of people have had minor or major horror stories at emergency departments. But then again, people have also had their lives saved at emergency departments. Emergency departments are probably saving thousands or tens of thousands of lives across the country daily.
So it's not like they aren't good at something, but they are not great at other things. Anyway, getting back to the question: how do humans make emergency departments better, if I can't figure out how to have a machine make an emergency department better? I think we sort of abstract the purpose of an emergency department through some ability that I don't quite know how to describe, but something like purpose ascertainment. I think humans are particularly good at seeing what the purpose of something is or should be. Even if you see someone who's struggling to open a door, a human goes, oh, that person, their purpose is to open that door. They're struggling. I could help them. They've got bags in their hands. My hands are free. I'm going to open the door for them to allow them to get through. That immediate ascertainment of someone's intentions or purpose or goal. And once we have that, we are pretty good at judging how far along something is towards that purpose and maybe how to get it there better. And so slowly, over the course of decades, we've probably allowed emergency departments to get a little bit better. Like, oh, you know, we could do this process over here and it wouldn't mess up everything else. And also, emergency departments are extremely conservative in that they really don't like changing very much. Only every once in a while is there some new, better idea that they integrate into their processes; most of the time they just do not want to do that. And that's because there's probably a lot of wisdom built up over the lifetime of the institution. Even if it's not a particular emergency department, all the emergency departments of the past have built up some kind of know-how and procedures and rules and processes, et cetera, that work, even if there isn't anyone who fully understands all the details of why they work together. And it's much easier to mess it up than it is to improve it. So there's this kind of slow, reluctant process improvement using human judgment and things like that. But I have no idea, for the life of me, how an AI system would do that sort of thing. At its best, we humans can probably use these AI systems to help improve some small piece here or there that is a little bit complex. But we are often giving these models the goal that they are trying to optimize. They haven't decided for themselves, obviously, what their goal should be, or what the variable is that they're optimizing. Why are they optimizing that variable? What are the constraints on that variable? Every time, the engineer, the designer, et cetera, have to give them these things. And then maybe they can find some way to improve the target variable that a human hadn't realized, and maybe that can improve things. But even then, I've been a part of a lot of projects like that in healthcare, and honestly, most of the time they fail. They either fail because they did not improve the process at all, or because the cost of improving the process was not worth it. Basically, they weren't profitable; they were more costly than they were beneficial, so the project gets shut down. Or, oftentimes, there's a funny way this goes: we improved something, we give a presentation about it at some conference, everyone cheers and claps their hands, and then the project goes into the dustbin and never gets implemented because it was unworkable.
But we pretend like it was a success, and on our performance review we brag about what a success it was, when in all practicality it actually wasn't a success. So when I hear a lot of people talk about AI, I often don't entirely recognize the thing that they're talking about. I've been working hands-on with this AI stuff my whole career; it hasn't been a very long career, but whatever length it's been. And most of the time it's extremely frustrating, extremely tedious, and mostly ends in failure. And then people are talking about these AI machines taking over the world, and I'm like, guys, I can't even get them to improve an emergency department by 1%, and exactly why are we worried about them taking over the world?

That's not to say that I'm one of those people at the other extreme. I feel like there are a couple of positions out there on AI. One is: it's the end of the world as we know it, who knows how soon. One is: it's the utopia of the world, who knows how soon, any minute now. And then there are some people who say the AI is completely stupid and it's not really going to do anything. I'm sort of in between. There are things that I'm worried about. There are very real dangers. There are very real bad things that I think will come about in the short- and medium-term future. Honestly, one of my worries is that AI won't live up to its potential for doing good, and that it will have an easier time doing bad than doing good. But these bad things aren't going to be conquering human civilization and subjugating us or annihilating us. It'll be various negative effects, kind of like the negative effects of social media or something like that. In fact, many of the negative effects of social media are the negative effects of artificial intelligence, and vice versa. So it'll be something between a mild annoyance and a medium-sized problem in various ways, and not just some single manifestation of a problem, but a multi-headed hydra of problems generated by AI. But it's hard for me to imagine it really getting to the level of sentience or self-driven purpose or anything like that, unless there are categorical changes and leaps in the technology that are completely different from the way it works at present or anything we've seen in the past.

So that's sort of my presentation: AI is good at the positive aspect of things. Yes, you can use it to, say, forecast how busy an emergency department is going to be, and it can get darn good at that sort of task. But it's not good at moving to the normative, and it's difficult to get these things to actually improve processes, especially morally complex processes. I guess I'll take just one or two more minutes before I pass it back to you. I think one thing that we see about AI is that it is best at playing games, and this is because games have a clearly defined normative landscape. You can have an AI get really good at chess, or really good at Go, or even Mario Kart or whatever. But that's because these games have a very clearly, numerically defined moral landscape. And I think that numerical definition of goodness is absolutely necessary for an AI to get good at something. But in the real world, most of our definitions of goodness do not have clear mathematical definitions.
Because an AI machine needs a clear, and I mean perfectly, 100% clear, mathematically defined objective that it is trying to optimize. It cannot optimize a non-mathematically defined thing. But yet we humans seem to be able to do that, and I don't quite know exactly how we do it. I think it does have something to do with this ability of purpose ascertainment and purpose accomplishment judgment, or something like that. So anyway, that's sort of my presentation and my intro for the conversation.

That was excellent. Thank you. So I'm going to say what was sparked in me, and we can negotiate how accurately that reflects what you said; it might not, but nevertheless it might be a good response. We can negotiate between exegesis and response together. Let me just go through four things that came up for me in that, and how I think they connect to some of the main arguments I made in the video essay. I know you have both agreement and disagreement with that essay, and we'll get into that, but let's start here. So the first thing I would get at is this idea of the normative, and the problem there. I think you're right: there's a tension around the normative. I think the Humean cleavage is inaccurate. The arguments of Putnam, K-Spear and others, and my own work: relevance is both a causal thing and a normative judgment. Rationality is both descriptive and normative, right? We have these thick terms that bridge between them and presuppose an interconnection. And if Hume's divide were absolute, then relevance and rationality technically wouldn't be possible for us, and things like that. So, good. Okay, so I don't have to unpack those arguments in any depth. And so for me, and I think for you, we're saying the same thing. I mean, purpose attainment is a conjunction of achieving a goal, which is problem solving, but also doing it in a normatively appropriate fashion. And that's where I see purpose as one of these terms, is what I'm saying. Purpose can just mean, like, the purpose of a refrigerator is to keep things cold, right? But people also talk about purpose, like how it gives meaning to their life, because it has a normative aspect to it. We're trying to achieve the good life. So I'm making an argument to you that purpose sits right in that same kind of place, and that's why it's deeply embedded with relevance realization. First of all, how does that land with you?

I think that's exactly right. And there's some current inability of these machines to do that, that sort of bridging back and forth between the positive and the normative, the purpose judgment and the purpose ascertainment. And I think one thing that would be necessary for these machines to be better than they are is to be better at that specific task. So it would be worth exploring: what exactly is that task? How in the world is it that we can do it? Why is it that they can't do it? Is it possible to program a machine to do it? Or is there something perhaps uniquely special about organic brains that is just fundamentally different from how a computing machine could ever work? Or are there imaginable ways that artificial intelligence could be given this ability? It certainly seems like, as of yet, it doesn't have it, as best as I can tell.

Right. And so let's flag that and come back to that.
So I think that's a good framing. I'm going to make a proposal: I think 4E cognitive science has something tremendously important to say about that, and also that the current work I'm doing, integrating predictive processing and relevance realization and grounding them in 4E cog sci, is specifically trying to address it. And I think the LLMs only have a tiny, tiny bit of all of that theoretical work worked out, and the rest of it is basically parasitic on us and our abilities. So I'll come back to that. The second point I heard you saying, and it's related to the point we just made, is that the machines don't have intentionality, not just in the sense of working towards something; they don't care. They don't care about what's true, good, and beautiful. They are in no way rational agents. They are not motivated to care about the meaning of what they're doing, or whether or not they're self-deceived, or whether they're in error. So all of that capacity for caring, and I think that overlaps with relevance realization, is missing. It's not just that we haven't figured out how to bridge between the normative and the causal. It's also that we have failed, I would argue, largely to recognize how much that's bound up with a capacity for caring about the true, the good, and the beautiful in a way that matters to the agent and not to some external authority.

Or caring about anything.

Well, that's it. That was the point I was driving towards. They don't care about anything. And what does that mean? The next thing I heard you saying, and this is of course a standard Vervaekean point, is that a lot of what we do is not propositional. This is, I get this from Plato, this is why you can't define courage. This is why you can at best exemplify it in a life that is in deep dialogue with other lives. There's the non-propositional, and the LLMs of course have nothing like procedural know-how, nothing like perspectival knowing, nothing like participatory knowing. Now, the question we could ask is whether it's theoretically possible for them to have that or not; we can come back to that. But I heard you saying that. And then that overlaps with something else I heard you saying, which is that it's not any one person in the emergency room. There's a collective intelligence of the distributed cognition, which has a life of its own and is evolving over time, that is capable of grokking this very complex environment, this hyperdimensional environment of good health care. And that is also not in these machines, except in a pantomime way. They sort of cannibalize our distributed cognition and mechanize it in the implementation. How does all of that land as a response to what you're saying?

Yeah, I agree with everything that you've said, and I think that's a very good articulation of the points that I was trying to bring up.

Excellent. Okay, so that's good common ground, because I want to see where we pull. I like to use a Greek word here, not just because I like to use highfalutin words: I use the Greek word tonos because the English word tension has only a negative connotation, whereas tonos is like the tonos of the lyre, the tonos of the bow. I'm looking for the tonos between us on this so we can get the proper attunement, and I'm now playing with the language. So I think there are deep reasons why organic brains can do this and the current machines can't.
And I think this is because we are autonomous: we generate norms that we bind ourselves to, individually and collectively. We're adaptive, and that doesn't just mean that we learn; there's more to it than that. And those two things are ultimately grounded in the fact that we're autopoietic, we are self-making beings. So: autonomy, adaptivity, autopoiesis, such that we're properly understood as agents, not just as things, as entities. This of course is the standard 4E cog-sci argument, and the argument is that without that constellation you can't get organisms, well, I'll be neutral, you can't get entities that can care, because they are not constituted by caring, by generating norms that they bind themselves to, by having real needs which make them real agents, et cetera. So this is where I say, and I want to be careful here, I don't think we have any special sauce in the sense of some crypto-dualism or ghostly ether or ectoplasm inside of us, or any of that. I'm deeply suspicious of the arguments for that kind of thing. But I do think that, being biological, we have these properties of autopoiesis, adaptivity, and autonomy that these machines do not possess, and therefore they cannot ever possess that caring. And so the question then becomes: is it possible that we could make devices that were properly autopoietic, adaptive, and autonomous? That for me becomes the question. What do you think of that reframing? Is there something missing, or a way you want to push back on it?

Yeah, I do think that that is the really important question. I myself don't necessarily limit myself to physicalist or materialist explanations, and I might be willing to say that there is something like a soul or a spirit that might be something like a special sauce. But I mean, I'm a member of the 21st century too. It's hard for me not to think pretty materialistically, as much as I lean into Christian theology sometimes. I still at heart lean pretty far in a materialist direction, and I would always want to say that I would need to be convinced that we can't do it materially before I conclude a priori that we can't.

Right. And I think we're in agreement. I mean, I think we may have different orientations in terms of our plausibility judgments, but, first of all, we will get to the idea of thresholds rather than predictions. I don't make predictions, because all the prediction models are based on stupid univariate measures and, you know, stupid graphing. And we can put that aside, because many of these so-called predictions have already been disconfirmed. We'll come back to that. But just to say how I actually am in agreement with you, even though we may have different plausibility judgments: by the way, if materialism means reductionism, I'm not a materialist, and if materialism means you can only derive your ontology from physics, I'm not a materialist. I'm committed to extended naturalism, which is non-reductive, and you also have to talk about what your science presupposes, not just what can be derived from it. But maybe that's still not enough, and I'm open to that. I enter into good-faith discussions about these things, and I'm open to the possibility that it could be enough.

So yeah. And that's good-faith dialogue, and I like the fact that we can do that.
So I say, and tell me if this is basically what you're saying, that before we conclude that we have the metaphysically special sauce that can't be captured by any naturalistic framework, and I'll call it non-natural special sauce to be as neutral as possible, not pejorative, we should try what I'm proposing. I mean, scientifically, if we create machines that have all of these properties and we still see that they seem fundamentally incapable of tackling the emergency room problem, then we would have started to build a very rigorous and powerful argument for there being a non-natural special sauce. And that may be the case. But if we are going to do that fairly and not make it a loaded test, we should be open to the fact that if it works, we have to conclude that maybe human beings don't have a non-natural special sauce. That's the proposal I actually want to put on the table.

Yeah, and I agree with that. So I will tell another, hopefully much shorter, medical parable about artificial intelligence. In the previous one I was talking about, you have a forecasting machine that forecasts how busy the emergency department is going to be. That doesn't really seem embodied. It doesn't seem to care about anything. Maybe in some sense, you could say in a non-phenomenological way, it cares about being accurate in its forecasts.

Yeah, I don't think that's true.

Okay, right, and I kind of agree with you. But another project that I worked on, and I think this was another step up in the direction of something that is more intelligent and more caring, or more like an agent. So we were building this predictive model. A patient comes into the neurology department. They are newly experiencing headaches, migraine headaches. They've never taken a medication for their migraines before; maybe they've taken some aspirin or something, but they've never been on a prescription. Now, there are a handful of drugs that neurologists will prescribe for headaches, but no one really knows which one works better. Or, even more interestingly, no one knows whether some of these drugs work better for specific kinds of patients or specific presentations of symptoms or what have you. That question has just never really been researched. So we would have a patient come in, and they would fill out a form or questionnaire, basically, on their symptoms and who they are and their medical history and things like that. We feed these variables into a predictive model. And it's actually not even a predictive model; I would call it a prescriptive model. It then uses its artificial intelligence programming to calculate the probability that each drug has the highest chance of success for that patient. And so this is a really interesting thing. Imagine I've got drugs A, B, and C, and the machine decides that there is a 70% chance that drug A will work the best, a 20% chance that drug B will work the best, and a 10% chance that drug C will work the best. What's really interesting is if you design the model such that it's getting feedback on its own predictions, such that every time it gives a prescription to a patient, after enough time has passed that the success or failure can be judged, you basically turn that into a row in the data set and retrain the model, so that it's getting this real-world feedback on its own predictions.
What's interesting is that if you want the model to get better as fast as it can, what you do is not simply give the highest-probability drug each time. What you do is flip a weighted coin, so to speak: generate a random number, and 70% of the time it will give drug A, 20% drug B, 10% drug C, et cetera. If you match the probability of taking each action to those predicted probabilities, it actually creates a richer, more powerful data set in a statistical sense, and the learning process happens faster and faster. So a little bit of agency, or a little bit of randomness, in the prescriptive model actually improves the performance, such that by the time you're seeing your 500th patient, the model will be better if it had been trained that way than if it had either just cycled through A, B, C, A, B, C, or given the highest-probability drug the whole time. And that's an interesting thing. I think this starts to get closer to the question of agency, what agency is for, and what agency can accomplish that non-agency can't. Because if you ask, at some level, what was the reason why patient number 267 was given drug B, give me the causal explanation, well, you'd have to say: okay, patients one through 266 created a data set that looked like this; here are the input variables, here were the drugs they were given, and here was whether it was a failure or a success for each patient. All right, that trained a model that had these parameters in it, these weights for the various associations between the input variables and the output variables. But then you also have to say: oh, and also, the random number generator, at the very instant the program was triggered for patient 267, gave random number X, and that random number X was put through this probability calculation such that it prescribed drug B. And that starts to seem like something closer to agency. I'm not going to use the words free will, but something where the causal explanation is within the machine itself, or within the process itself.

Okay, so let me pick up on that, because I think that's a really important point, and it actually goes to some of the very cutting edge of the work I'm doing right now. In fact, this has to do with a paper I published at the end of last year with Brett Andersen and Mark Miller, and a talk I gave in Leiden just a few months ago, where I got horribly sick after being there, but we'll put that aside. So this comes up, and this is the classic bias-variance problem. As soon as your models are predictive, you have the problem of either underfitting or overfitting to your data. If you're overfitting, you have to throw in noise; if you're underfitting, you have to increase sensitivity; and they are in an inevitable trade-off relationship. And there is no a priori way, this is the work I'm doing with Anna Riedl, there's no a priori way of deciding that. Now, the argument I've made is that if you have reinforcement and predictive processing, the kind of thing you were putting in the machine, where the machine is just trying to reduce surprise, what will happen is it will inevitably get bound into these problematic trade-off relationships, which you mentioned earlier in your discussion, and then internalize those as opponent processing. And once you get that, the machinery of relevance realization kicks in.
You get opponent processing and you get this dynamical evolution of optimal gripping on the environment. But that is always relative to the environment. So, for example, if the environment changes in how rapidly it changes, if it becomes more volatile, less stable, the opponent processing has to keep evolving so it can shift. And that also leads to another point: you can't have one machine. You can't have Skynet, because there are different environments with different induction caution parameters, different abduction rates, volatility, complexity, novelty, ill-definedness. Exactly. Right. And so you'll need multiple machines, which gets us into the distributed cognition thing, which means you have to understand: how will multiple machines, which will literally have different perspectives, because they'll have different salience landscapes, because they will be internalizing different opponent processing on the bias-variance trade-off, how will they interact? And again, we bump into that, right? Not only is it the distributed cognition problem, it's the causal-normative problem. How will they balance that, and not just individually? And all of this is not in our current machines. This is part of what I argue, and for me, this is a completely formal argument I just made to you. And the way we make these LLMs masks a lot of that, because it sucks from our distributed cognition; it gets reinforced by our relevance judgments, right? We make the data sets, all that sort of stuff. And that is masking the fact that, as you said, the causal factor for these machines is not actually in the machines themselves. Does that land with you?

Yes, absolutely. And I don't think it should be that surprising that there probably isn't going to be one machine to rule them all, so to speak, because it's not like there's one human to rule them all. To build a company or to run an emergency department or to do anything, we often have humans that are distributed, that have specific tasks, that get specifically good at different things and then combine their skill sets into a functioning whole. And I don't think it can or will be any different when we start integrating more and more of these artificial machines and agents and so on. They will have to be focused on specific problems, specific tasks, because each specific task will have different sorts of trade-offs in all the various kinds of statistical machinery that you were talking about. There will need to be different values for different purposes. So it's not like there could be one thing that will do all of that. But part of why I'm telling that story about the thing that can prescribe the medicines is that there is some extra step closer to autopoiesis, or goal accomplishment. That ability to learn from your own predictions is a very powerful ability that a lot of these machines don't really have yet. And that's partly because it needs a goal that it is trying to accomplish in order to learn from its own success or failure. And in the case of that migraine thing, there's still a human nurse who calls the patient and asks them a series of questions to figure out if the drug worked for them, and then the patient is judging for themselves whether the drug worked. And then that turns into a zero or a one that the machine gets. So it doesn't do any of that work. It just gets a zero or a one at the end.
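To make the loop Sam is describing concrete, here is a minimal sketch of a probability-matching prescription model: score each drug for the patient, sample the prescription in proportion to those scores rather than always taking the top one, then fold the observed zero-or-one outcome back into the training data and refit. The features, the three drugs, and the logistic model are hypothetical stand-ins, not the actual system he built.

```python
# A minimal sketch (hypothetical data and model) of the probability-matching
# prescription loop described above: score each drug, sample the prescription in
# proportion to the scores, then fold the observed outcome back into the training data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
DRUGS = ["A", "B", "C"]

# Rows: (patient features..., drug index) -> outcome 0/1. Start with a small seed set.
X_hist = rng.normal(size=(30, 4))                      # 3 symptom features + drug index
X_hist[:, 3] = rng.integers(0, 3, size=30)
y_hist = rng.integers(0, 2, size=30)                   # hypothetical past successes/failures

model = LogisticRegression(max_iter=1000).fit(X_hist, y_hist)

def prescribe(patient_features: np.ndarray) -> int:
    """Probability matching: sample a drug with probability proportional to its
    predicted chance of success, instead of always picking the argmax."""
    candidates = np.column_stack([np.tile(patient_features, (3, 1)), np.arange(3)])
    p_success = model.predict_proba(candidates)[:, 1]
    weights = p_success / p_success.sum()
    return rng.choice(3, p=weights)

def record_outcome(patient_features: np.ndarray, drug: int, success: int) -> None:
    """Append the real-world feedback as a new row and retrain the model."""
    global X_hist, y_hist, model
    X_hist = np.vstack([X_hist, np.append(patient_features, drug)])
    y_hist = np.append(y_hist, success)
    model = LogisticRegression(max_iter=1000).fit(X_hist, y_hist)

# Usage: a new patient arrives, gets a sampled prescription, and the outcome comes back later.
patient = rng.normal(size=3)
drug = prescribe(patient)
record_outcome(patient, drug, success=1)               # e.g. nurse follow-up says it worked
print("Prescribed drug", DRUGS[drug])
```

The exploration lives in the sampled choice and the learning lives in the retraining step, while the "did it work" signal still arrives as a bare zero or one from outside the system, which is exactly the division of labour Sam points to.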
Well, okay, so two points come from what you just said. One is, and this is a point I made in the essay, I think, and I just made an argument for it; for those who want it, we're going to release the Leiden talk soon. If we can get this proper deep integration of predictive processing and relevance realization, which I think is very analogous to the grand synthesis in biology between genetics and Darwinian theory, and I make that argument, I think that starts to address what you're talking about here. And the LLMs have only one sort of domain of predictivity, which is the probability rates between appearances of tokens. So it's not the kind of complex, world-engaging predictive processing. Right. Yes. And they only have a little bit of relevance realization within the deep learning machinery. So they're missing most of what we're talking about. That's the first thing: there's a theoretical thing that hasn't happened for them. But the other one is this, what you just said: we dump noise into our system. Moderate distraction makes us have insight. We dream at night. There's all kinds of stuff where the brain seems to be doing this, and you see the same thing in machine learning: you use dropout or you throw noise in, and all that sort of stuff, and they're doing that. But again, they don't do it theoretically. They have temperature, they throw randomness in, but they just hack it. They literally hack it until they get human beings saying, I like that. That's not a theoretical model. So again, that point: this is hacked, not theory-driven, so it's not explanatory. The final judgment is in humans, and we have no idea how we know if it's working well or not; we just know that we can know that. Right. And then that brings me back to the point that also overlaps with distributed cognition, which is the non-propositional, the Platonic point that most of wisdom and virtue is carried non-propositionally. It has to do with our embodiment, our embeddedness, our enactment, and the way we are extended through machines, through technology, through each other, and through psychotechnologies. And the thing that's interesting here is that these machines actually, to a significant extent, and this is how they may exacerbate the meaning crisis, I would say they accentuate the propositional tyranny, because language looked like the thing that breaks AI out of the silo problem. The silo problem is that AI systems were only single-domain, single-problem solvers: I can make a game-playing machine, but it can't play tennis, or something like that. Right. And it looked like, oh, but wait, when you give it language, language is what breaks it out of the silo. And you can hear Descartes sort of cheering, right: oh yeah, see, I told you, as long as you have language and math, you can do everything. But that, I think, is actually false. I think it's completely false.

Yeah, yeah. I agree.

And I think it's masked by the fact that, again, we have figured out how to encode into the statistical relationships between tokens our often implicit, semi-conscious know-how, our perspective taking, and our identity formations. We have done that. But what I'm saying is that these machines are squeezing the juice out of the correlation between the non-propositional and the propositional, which is not the same thing as giving them the non-propositional knowing. What do you think about that argument?
I think that's absolutely correct. I think a lot of people don't know that these LLMs, like ChatGPT or Bard or whatever, are almost completely trained to be, it's almost like they're trained to be, a crossword puzzle solver; that's essentially what they're like. Because every single artificial intelligence model or machine learning model will, at the end of the day, with some constraints and so on, have one variable that it is optimizing. And the way you train an LLM, what it is optimizing, is basically this: you give it a couple of paragraphs of random text from Wikipedia or wherever, and then you have it try to predict the next word, or maybe two words, but that's really as far as these things go, predicting one or two words into the future, and you train it with an ungodly amount of computing power and training data and electricity, et cetera.

And energy. Yeah.

Yeah. This is one thing that we talked a little bit about: the amount of electricity that it takes to train a new version of ChatGPT or a competitor is about the same amount of electricity that a large city like Chicago or Toronto might use in a week or two. It is an ungodly, unfathomable amount of electricity. If you were to convert that into pounds of coal or barrels of oil or what have you, it would be almost impossible to imagine how much. It's not like us at all.

Right. It's not like this; this runs on about as much as a 60-watt light bulb, right?

Yeah, exactly. Our brains can run on a box of mac and cheese, not the amount of electricity that it takes to power an entire city like these things do. And part of the race for these things will be getting access to electricity for data centers and that sort of thing. But anyway, setting aside the complete, unimaginable disparity between the amount of energy it takes to make a Bard and the amount of energy it takes to make a five-year-old human baby that can also talk, these models are maniacally focused on training on: I'm given some body of text, and now I need to predict the next word in the body of text. That's the game these things are optimizing. It's like, I don't know, they play Wheel of Fortune or they play crossword puzzles all day, or something like that. And it is amazing what can happen if you train them on a giant corpus of text from the internet: it does seem, in some sense, to mathematically approximate a huge amount of human knowledge and know-how and things like that. But I think most of the time it's fooling us about what it's actually doing, or the mimicry causes a lot of people to misattribute all sorts of abilities to it. And I think this will be a common story over and over again in this sort of AI age: we will mistake one form of ability for other forms of ability. That will cause confusion. Most of the time it'll cause overestimation of the actual abilities of the thing, but there will probably be some dangerous forms of misunderstanding too. And I think people will get freaked out in all sorts of weird ways and confused in all sorts of weird ways, because we're used to thinking that anything that can talk is smart; anything that can talk as smart as or smarter than me must be smarter than me, so it must be able to do all the things that someone smarter than me can do.
But all it's doing is guessing word by word by word. There are certain things you can do to make it stop after a paragraph or something like that, but basically you give ChatGPT a prompt and it's trying to predict word by word by word, and it uses its own feedback, its own words, to guess the next words, and then it has various rules about when to stop. But that's all it's doing. It's playing a giant crossword puzzle, or something like that, with you.

Now, as I say, what I think it's made us realize is an appreciation of what human distributed cognition has done, and how we have embedded that in the way the internet is connected together, how words are connected together, and how we create and collate bodies of representations of knowledge. We've spent a long time, both in evolution and in culture, building up these very intricate correlations. But that's not the same thing as causation. And you can see this. First of all, the energy-to-cognition ratio is way different from ours, and that really matters, especially if embodiment matters. So, first, that point. Secondly, we have general intelligence: how you do in one domain is very predictive of how you would do in other domains. That's not the case for these machines. These machines can be in the top ten percent on the exam for getting into Harvard Law, but if you ask them to write a philosophy paper, they'll be at about the grade 11 high school level. That's not how you would perform. And they can generate code, but they are actually not that capable. There was a recent paper saying that the machines were getting stupid; I think that was a mistake. It's that the machines have never been good at, on their own, generating good inferential reasoning. They can also generate tremendous moral argumentation, but they're not capable of making a moral decision. Right. All of these things. I guess what I'm saying is we're not paying enough attention to the ways in which they're fundamentally different. We are overawed, as I think you're saying, by the similarities and the mimicries.

Yeah. An interesting thing: you could have imagined that if we train a model with hundreds of billions of parameters, as these things have, off a giant amount of the corpus of the digitized writings of mankind, it would in some sense encode certain things, maybe about the structure of the universe or truth itself, in those parameters. And then, once it got that mathematical approximation, it could extrapolate into new areas that it hadn't been trained on. But what's interesting is that ChatGPT and its competitors get worse there. It's at its best when it's dealing with something that is very similar to a lot of its training data. The reason it can be good at a Harvard law exam is that I'm sure the internet has tons of study materials about how to pass the bar exam or the LSAT or whatever, so it has a huge corpus of something similar to train on. But if I say, could you write me a grad-school-level A-plus paper summarizing John Vervaeke's newest work, and I've done that, by the way; I'm using that example because I've heard you mention it, it can't do it, because it doesn't have the training data.
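As a purely illustrative sketch of that token-by-token loop (a toy bigram counter standing in for a network with hundreds of billions of parameters; nothing here is how ChatGPT is actually implemented), the whole "game" is: score the next token given the text so far, sample one, append it to the output, and repeat until a stop rule fires.

```python
# A toy illustration (not any real LLM) of the loop described above: the model only
# ever assigns probabilities to the next token given the text so far, samples one,
# appends it to its own output, and repeats until a stop rule fires.
import random
from collections import Counter, defaultdict

corpus = "the patient came in . the nurse saw the patient . the doctor saw the nurse .".split()

# "Training": count which token follows which. A bigram table stands in here for a
# network with billions of parameters; the objective, next-token prediction, is the same.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_token(prev: str, temperature: float = 1.0) -> str:
    counts = follows[prev]
    # Temperature reshapes the distribution: low = near-greedy, high = more random.
    weights = [c ** (1.0 / temperature) for c in counts.values()]
    return random.choices(list(counts.keys()), weights=weights)[0]

def generate(prompt: str, max_tokens: int = 12, temperature: float = 1.0) -> str:
    out = prompt.split()
    for _ in range(max_tokens):
        tok = next_token(out[-1], temperature)
        out.append(tok)           # the model feeds on its own output
        if tok == ".":            # a stop rule, analogous to an end-of-sequence token
            break
    return " ".join(out)

print(generate("the nurse", temperature=0.8))
```

Temperature here is the knob mentioned earlier: it just reshapes the sampling distribution. It also shows why, procedurally, "hallucinating" and "being right" are the same operation, a point taken up below.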
And what's interesting is that the mathematical approximation of knowledge, or what have you, that it got trained to have can't extrapolate into new areas. It can only mimic areas of familiarity. I find this when I ask it some coding question that I'm sure ten thousand people have asked the internet: what happens when I get this error in this software language? Okay, it's pretty good at answering that. But if you ask it to do something novel, it can't. And that's a very interesting thing that speaks to the limitations and abilities of this mathematical procedure at large.

I call this the Plato-Merleau-Ponty problem, which is: given your intelligence, if I give you a chance to read Plato and Merleau-Ponty, you'll come up with roughly equivalent argumentative interpretations of each. But if I ask ChatGPT, it'll give me a pretty good thing on Plato, not great, and a pretty bad one on Merleau-Ponty, because there's so much more written about Plato and so much less written about Merleau-Ponty. So you can make predictions about how good it will be just from the quantitative amount of information. But human beings, and this is something we've known for quite some time, don't work that way. In fact, this is one of the interesting things: we can actually come to very quick apprehensions and appreciations of things. So I want to pick up on this generalization point, because, and this is another way of maybe getting people to see it, given the way these machines are trained, given what we've been saying about how language-dependent they are, they can't possibly give us an explanation of the intelligence of a chimpanzee. They can't generalize. So they are scientifically inept. If we're looking at numbers of species, not numbers of individuals, their N is one. Can they explain intelligence more broadly? I don't think they can. The one thing that they model is us; they only model us. They can't model the intelligence of a chimpanzee. But any good instantiation of intelligence would presumably generalize in that way, and it doesn't. And I think that's also very, very telling.

And then there's also the comprehension question. I don't mean to go too far down a Chinese room rabbit hole, you know what I mean by that, but, okay: ChatGPT, could you give me an excellent recipe for the best apple pie? Great, it'll give you a recipe for a really good apple pie, and I bet if you followed that recipe, it would turn into a pretty good apple pie. ChatGPT, can you make an apple pie? Absolutely not. It isn't even 1% closer to being able to make an apple pie because it can generate an apple pie recipe. So there's a complete lack of comprehension, which seems to be part of how we can go from the propositional to the procedural and embodied and all those sorts of things.

Exactly, I agree. And I've taken a look, though not in the last four months, because I've been extremely busy, at where people are trying to make these connections, and then you have to have some sort of central hub thing, and then it runs into all of these problems: the relevance realization problem, the normativity problem. Right. And then that overlaps with a final point that I think was implicit, but I want to bring it out.
And I’ve been making this argument, I made it way back in, what, 2016 or 2017 or something like that, that from all of the data on us, intelligence is only weakly predictive of rationality, because rationality is not about logicality. These machines are logical, right, but that doesn’t mean they’re rational, because rationality is about overcoming self-deception. And that means you have to care about the true, the good, and the beautiful. You have to care about yourself. You have to care about a good life. So, as Frankfurt argues, our rationality also ultimately depends on a capacity for love, which is very non-Cartesian, very Platonic. And I said you’ll make machines that are highly intelligent and highly irrational, which is exactly what we have. We have machines that confabulate and hallucinate, and more importantly, they don’t care that they do so. They are not trying to fix or correct that. So they are not in any way rational, and they don’t get it. Yeah. Well, a quick thing on the hallucination problem: ChatGPT is always hallucinating. It’s just that, hopefully, most of the time those hallucinations are accurate. It’s not like it’s doing something different when it’s hallucinating versus when it’s giving you accurate information. It’s doing the same thing in both circumstances. It’s one giant hallucination that hopefully is as close to reality as possible. I think some people are like, oh man, it was doing so well, and then it started hallucinating. No, it didn’t start doing something different once it was wrong. It was doing the same thing; it just came apart. Right. It came apart. There’s a gap that we can notice, but procedurally it was operating exactly the same way, whether it was hallucinating or not hallucinating. Exactly. That’s an important thing. Good. And this is where I think the perspectival argument comes in. I think you can make a very strong argument that rationality requires self-reflective metacognition, working memory. It requires consciousness in a really profound way. Unconscious intelligence makes a lot of sense to me; unconscious rationality doesn’t make any sense to me. And because it lacks perspectival knowing, because it has no function like working memory in it, it’s not doing anything like perspectival knowing. I think these machines, in principle, as they are now, do not have the capacity for rationality. We can jury-rig some things in, right? Well, what we’ll do is tweak it so it doesn’t hallucinate, and then we’ll come up against the bias-variance problem, et cetera. So I think that’s also a deep problem. And I’m concerned about this possible dilemma. Again, this is not a prediction; this is a threshold, which is a different thing, right? We may decide, well, we just want to keep the machines running the way they are, in which case we have all kinds of monstrosities, because these machines do all of these kinds of noxious things, and if we give them more power, that toxicity and noxious behavior will magnify. That’s one version of the alignment problem. But then we may decide, oh, we need to give them the ability to really self-correct for their own sake. And then we have to build in, right, that they’re autopoietic, they’re social, they have all of this. And sensory input. Sensory input, yeah, all of that. And I think in the end it’ll need some sort of goal.
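The remark earlier in this exchange that the model “is doing the same thing in both circumstances” can be seen in the shape of the generation step itself: it is always just sampling from a probability distribution over next tokens, and nothing in that step carries a truth flag. The toy vocabulary and probabilities below are made up for illustration.

```python
# Toy illustration: whether the continuation happens to be factually right
# ("Paris") or wrong ("Toronto"), the procedure that produced it is identical.
import random

vocab = ["Paris", "Lyon", "Toronto"]
probs = [0.90, 0.06, 0.04]   # hypothetical model output for "The capital of France is"

def sample_next(vocab: list[str], probs: list[float]) -> str:
    # The same call runs every time; there is no separate "hallucination mode"
    # and no check against reality anywhere in the step.
    return random.choices(vocab, weights=probs, k=1)[0]

print(sample_next(vocab, probs))   # usually "Paris", occasionally not
```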
And I think so too, but it can’t be a goal we give it. It has to be a goal that autopoietically arises for itself. And that’s a totally different alignment problem, right? There’s not one alignment problem. We can go down the road where we say, well, we’re not going to give them those abilities, and then we have something like the paperclip problem. But the other road is, okay, we’re going to try to make it so we don’t crash into that alignment problem, and then we get, oh no, we’re making autonomous, autopoietic, rational agents, and that’s a very different alignment problem. And I think we will face that choice point. I think talking about the alignment problem is a mistake. Yeah, there will be as many alignment problems as there are AI machines. There isn’t one alignment problem. And when I look at the way my kids learned how to speak, the first words that a baby learns are very purpose-driven. They’re always trying to get something. They learn mama and dada first, because the first things they need are the attention of their parents or the people around them. Quickly after that follow words for water and food and no, but they’re always motivationally driven words that are trying to cause something in the world that they either need or don’t want, whether it’s the attention of a person, their favorite snack, their favorite toy, or the other things that follow after that. It’s always based on this ability to use language to have a causal effect in your environment to meet your desires and needs. Exactly. And the language model doesn’t do anything like that, and that’s why it can’t self-correct. That’s right, I agree. I think that’s an excellent way of saying it doesn’t have 4E cognition. It doesn’t have the non-propositional, it’s not embedded in a sociocultural matrix, and it’s not biologically embodied. All of these things are not trivial. These are deeply constitutive of our capacity for general intelligence, rationality, and normative behavior. One of the things, I think Hegel, and I don’t think Hegel is completely right, but I don’t think we should ignore the fact that normativity is bound up with the fact that we have to reciprocally recognize each other as bearing both responsibility and authority. And that is only generated in a reciprocal recognition process that is socially and culturally constructed. That’s the core of Hegel’s argument. And again, if you make these machines from a Cartesian monological model, you’re not getting any of that, and the normativity just isn’t possible for you, I think in a very deep way. Yeah. So Sam, I’m going to end it here, not because we’re done. Sure. I’m going to end it here because I think we have a lot more to talk about, and I think I should come on your channel, and we’ll get everybody that’s here to come to your channel, because I’d like to talk more about some of these things. Like we already agreed there are possibly multiple alignment problems, and what would happen if we start going down the road where we’re going to try to make genuinely sociocultural, normativity-appreciating rational agents, and what that alignment problem might be.
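As a contrast with the purpose-driven, self-generated goals described above, and as a preview of the “formalizable in a utility function” point that comes up next, here is a toy of what an externally given, fully formalized goal looks like. The state representation and scoring rule are invented for the example; the point is only that the agent’s entire “purpose” is whatever the designer managed to write down as a number.

```python
# Toy illustration (not any specific system) of a goal that is given from
# outside and fully captured by a formula: whatever can be reduced to a
# score can be maximized; anything that can't be scored simply doesn't
# exist for the agent.
from typing import Callable

State = dict[str, float]

def choose(outcomes: list[State], utility: Callable[[State], float]) -> State:
    """Pick whichever outcome scores highest under the given utility function."""
    return max(outcomes, key=utility)

# The agent's entire "purpose" is this one formula (a paperclip-style objective).
paperclip_utility = lambda s: s["paperclips"] - 0.1 * s["energy_used"]

outcomes = [
    {"paperclips": 10, "energy_used": 5},
    {"paperclips": 12, "energy_used": 40},
]
print(choose(outcomes, paperclip_utility))  # picks the first outcome: 9.5 beats 8.0
```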
And I think that’s a very real possibility. Especially if the first alignment problem gets really noxious, we will be strongly motivated to push the machines toward becoming as self-corrective as we are, in the ways that we are, and I think that will push us into the second alignment problem. And then, as you know, I have a proposal about how I think that needs to be addressed, in which, and this is a very odd sentence, something like theology ends up playing a very significant role. And I know that would be where your other interests could come to bear. So I’d like to explore that. I think we’ve got a lot of agreement to this point, and then I’d like to go there, because I know you have some disagreement with me on that, and explore that second half of the argument with you. How does that sound as a proposal? That sounds great. In general, talking about the possibilities and difficulties of making an AI sage, as you’ve said, I think that would be... Yes, exactly. Yeah. So I always like to leave my guests with the opportunity for the last word. It can be summative, it can be cumulative, it can be provocative, it can be inspirational, it can be reflective. Let’s see here. I guess my one-sentence summary, and this is something that maybe could be a teaser for the next conversation, is that I think purpose itself is not capturable in a mathematical formula. There is something about purpose that transcends the ability to be described with a mathematical encapsulation. And AI, as we know it, can only accomplish things that are encapsulatable, if that’s quite the right word, within a mathematical formula. Formalizable. Yeah, formalizable in a utility function. And this will be a cause for both the disappointment and the evils that these machines generate: they are stuck trying to accomplish mathematical tasks, but the most important and final purposes are not fundamentally capturable in mathematics. That’s a whole teaser for a couple more hours, but I think that’s the conclusion I’ve come to. I think that’s right, especially our agreement that purpose is one of those thick terms that bridges between cause and normativity. I think we should bring in all of the components of meaning in life. I think these machines also don’t have significance, mattering, and coherence in the proper sense. I don’t think they would be upset by cognitive dissonance or absurdity in any way. So I would want to broaden it beyond purpose to all of the meaning-in-life factors, and I’d like to explore that with you as well. Excellent. That sounds great.