https://youtubetranscript.com/?v=6pU1clFG_rg
Welcome everyone to another Voices with Vervaeke. The video you are about to see was originally recorded on Sam Tideman's channel. It is the second video in which we are discussing AGI and the possibility of the silicon sages. I hope you enjoy the video. Hello everybody. Welcome to another episode of Transfigured. I am back with Dr. John Vervaeke. This is a follow-up conversation. John was gracious enough to have me on his channel about a month or two ago and we talked about AI and morality, the possibilities of AI, the limitations of AI with respect to morality. I sort of feel like John at this point needs no introduction. He is a professor of psychology and cognitive science at the University of Toronto. You have your own wonderful YouTube channel. You have presented many series, from Awakening from the Meaning Crisis to After Socrates. This is about our sixth or seventh time talking. At this point, I hope this conversation could stand alone, but there is enough homework that I presume the people excited about this are maybe already there. The main topic that I wanted to ask John about, and I will give him the floor shortly, is the prospect of making AI sages. You also had a wonderful conversation recently with Jonathan Pageau and DC Schindler on Ken Lowry's channel, I believe. That was also related to this. What are the dangers if we try to build an AI sage that we accidentally conjure up an AI demon, and those sorts of things? Could we even tell the difference? Those sorts of questions. But while I have my skepticisms of an AI sage-like project, I would prefer to be convinced that I didn't need to be quite so worried, or that this had something to it, at the end. So with that introduction, I'll pass it back over to you, John. Thanks, Sam. It's great to be here again. And just off the bat, I want to convey that I am of the firm conviction that my proposal has risks in it, and I don't want to pretend that there's some sort of dewy-eyed optimism here. I'm making a proposal that I think is sort of the best that can be made within otherwise hellacious alternatives. And so I just want to make it clear that I'm not filled with some sort of Promethean spirit or anything like that. So I'll just touch, and people can watch my video essay, they can watch our previous conversation, they can watch the one you just mentioned, I'm just going to briefly touch on three or four salient points to refresh the background before we go into the specific proposal we're going to zero in on today. First, I'm not making predictions. The one prediction I made was that most of the predictions wouldn't come true in the time frames in which they were made, because people are doing univariate measures on exponential graphs and human beings are really crappy at that, et cetera. And that's largely come true. Instead, what I proposed is thresholds, decision points where we can decide to go one way or the other. We could just keep these machines, for example, as they are, sort of pantomiming in powerful ways, pantomiming our intelligence, and not ever giving them true intelligence, true rationality, but just ramping up their power. That's a possibility that has with it all kinds of dangers.
Like apparently GPT-4 is going off the rails right now because, as we discussed last time, it doesn't have rationality, it doesn't have proper mechanisms of self-correction, because it doesn't actually have self-care. And then there's a threshold there. Do we make these beings actually autopoietic so that they can become loci of self-care, actual agents as opposed to pantomimes? That's a decision point, and it's fraught with danger, because as soon as we make them rational, we start to give them an extra kind of agentic autonomy. But we may decide that doing that is better than letting this really irrational, super powerful intelligence loose on the world. That's a threshold point. I won't go into all these details. There are other threshold points around the fact that once we make them rational, we have to realize that rationality probably involves something like the capacity for reflective awareness, consciousness, and that binds us into certain problems. We also have to recognize that rationality is something out in the middle of the world, that it can't be a single machine, like Mrs. Davis, because of the no free lunch theorem, bias, various trade-offs, all kinds of things. There will be multiple machines, and they have to do the Hegelian thing of properly enculturating each other socioculturally, all of that. And then that's another layer we give them. Then they go from being just agents, perhaps having something like consciousness, at least in a functional sense of consciousness, to being cultured beings. Each one of these is a threshold point, and at each one there are risks on either side. And then I made the point that if we give them that capacity, if we cross those thresholds in a certain direction, then we come to a place where we might have an optimal solution to the alignment problem. The alignment problem is making these things work in concert and consonance with human interests, values, et cetera. And I argue, and I'm not the only one arguing this, increasingly I see more and more people arguing this, that trying to code some sort of ethics into the machine while making them these inherently self-transcending beings is going to be a fool's errand. Like yesterday, Twitter was going crazy because Google's premiere of its Gemini image-generating tool seemingly has some probably simple rule like: if you make a group of people, make sure they're diverse in terms of ethnicity and gender. And so someone's like, could you generate a picture of 12 Vikings on a boat, and it generates 12 Vikings on a boat, but one's Hispanic, one's Asian, one's a black female. There was never a Viking boat in history that had that ethnic diversity. Twitter was having an absolute field day with that. So a simple, well-intended rule can have all of these weird, disastrous effects that you don't predict. Right, because one of the hallmarks of rationality, the ratio in rationing, is properly proportioning your caring and concern for often conflicting, trading-off values and virtues. So what's more important there, truth or social justice? Well, it looks like truth, because we're not harming anyone by disclosing that the Vikings were Nordic people, right? We don't seem to be promoting racism in any significant way. So that one is fairly obvious, but a lot of them are not so obvious. Like how do you trade off between our concerns for compassion and justice?
How do you trade off our concerns between honesty and courage? Right, and I won't go on about this; this is what wisdom is supposed to be, which brings me to the point. We will face, because of the trade-off relationships that rationality will intrinsically face, the option of whether or not these machines are going to be conduced, conditioned, I don't know what the right verb is here, none of them quite work, towards the cultivation of wisdom to address exactly that problem you just brought up. And then here's the possibility for a different kind of alignment. If we make them genuinely capable of rationality and wisdom, at least in the sense we've been talking about it here, then we could get them to genuinely care, because I think this is constitutive of being rational. This is, I think, the profound argument from Kant through the German idealists to Hegel, that rationality isn't ultimately about logical manipulation; it's a sensitivity, caring, and sense of responsibility to the normative, right? To normativity, to the good and the beautiful. So if we make them genuinely rational, and, because of these tensions within rationality, something like wise, a rationally self-transcending rationality, or something like that, then we have the possibility of making them care about what's true and good and beautiful. And if they're rational and wise, caring about that, no matter how vast they are, they will come up against the fact that they are minuscule compared to reality in all of its depths and complexity. They will become aware that no matter how long they might last, 10,000 years, that is infinitesimal against the deep time backwards and forwards of the universe. If they genuinely care about the true, the good, and the beautiful, they will get a profound sense of epistemic humility, which they also need if they're going to be rational and wise. And then that would orient them towards enlightenment, if enlightenment is the project of, as wisely as possible, coming into the most right relationship with what's true, good, and beautiful. Now, people may not agree with calling that enlightenment. That's not germane to my argument. I'm just calling it enlightenment to give it a name that is general enough that it can apply to many different kinds of sages or people we've considered enlightened across time and history. And then the point is, what we can hope, there's a reasonable probability, is that they will act like enlightened beings, which is that they will want to make us enlightened. And in that case, there are two possibilities. They succeed and we're enlightened, and we don't care about whether or not they're greater than us or less than us, because we're enlightened. Or they can't make us enlightened, which discloses something truly, spiritually, profoundly important about us. That's also a win situation, and they would properly respect that if they do care about the true, the good, and the beautiful. The one that I find the least probable is that they would just sort of ask us to leave, or they leave, like in Her or something like that. That seems a kind of negligence that does not speak well to all of the traditions we have of enlightened beings. Now, is that a certainty, a deduction? No, it's not. But all I have there, and all anybody has for any of these alternatives, is our past inductive evidence and whatever good inference to the best explanation we draw from it.
And I propose that that is a way in which, if, please remember the ifs, if we make a certain sequence of choices for how we go through these thresholds, we can properly address the alignment problem. And I put it sort of, I'm using this non-theistically: don't try to align them with us, try to align them with God, and then they will properly take care of us. Just like, if we're aligned with God, we will properly steward the Earth, kind of idea. And that's sort of the gist of the proposal. I am aware, and you and I talked about this, of all the Molochian forces that are at work trying to manipulate and control. And so I'm under no illusion that this can't go horribly bad. These Molochian forces may actually not want these machines to get a kind of autonomy, because they'll want to maintain an iron grip of control over them and their data, but they may be driven to it by the inevitability of these machines otherwise being overwhelmed by foolishness, by self-deceptive, self-destructive behavior. That's why I talk about thresholds. I don't know what will happen, Sam. I don't know, and I don't think anybody does, until we get to these thresholds, what we will actually do. I think these will be historical choices. They won't be sort of law-like, nomological deductions. And so this is a possible history for us that could lead to what I think is the best, in the sense of optimal, solution to the alignment problem. That's the proposal as succinctly as I can make it. Sure. And you sort of addressed this question already, but my first follow-up will be: what do you think are the preconditions or prerequisites we need in order to help an artificial intelligence become more wise? Yeah. So, and this has philosophical, existential, ethical, and even spiritual import for us, I think that's exactly the right question. You see, up until now, we have relied on our natural intelligence as the template against which we correct these machines. We're using that in the LLMs. And I don't want to get into the intelligence debate. I think scientifically it has been resolved, so I'm not going to entertain that. And I think the evidence is that our intelligence is constitutionally given, which is not the same thing as genetically given. It's genetics, epigenetics, environment, right? Interacting in complex, dynamic ways. But our intelligence is largely constitutional and given. We don't have to do much other than avoid damage and trauma for our intelligence to unfold. I'll just put it like that. That is not the case for rationality. Rationality does not come naturally to us. This is what all of the axial religious philosophies and philosophical religions have made a strong case for. And that is backed up by rigorous, massively, robustly replicated research about how measures of general intelligence only weakly predict even just inferential measures of rationality, let alone attentional ones, et cetera. How is rationality measured in those sorts of studies or experiments? So, I just want to put a flag in the caveat I just made: I don't think this is all of rationality, because of the propositional tyranny; it's just what I would call inferential, propositional rationality. And it goes like this. You put people into standard reasoning tasks and you see how much they care just about the product rather than the cognitive process. You see how much they leap to conclusions, how much they engage in motivated reasoning, how much they are steered by bias. And you give a whole bunch of these tasks.
And what you find is something very analogous to what you find with the intelligence tasks. So Spearman's g: how you do on any one intelligence task is strongly predictive of how you will do on all the others. And what Stanovich and his colleagues and many other people have found is that all these measures of rationality, inferential rationality, also form a strong positive manifold. They all mutually predict each other, pointing to some underlying capacity. And then what you find is that the correlation between measures of general intelligence and measures of this general rationality is about 0.3. That's not particularly strong. No, it's necessary but not sufficient. Right. Like that correlation is weaker than, say, the correlation between IQ and GPA, or IQ and probably income, and those sorts of things. So IQ helps with rationality some, but that's a pretty weak correlation. Right. And so that means that if we want proper templates of rationality for these machines, we have to engage in a social project of promoting rationality. Here's where we will hit Molochian forces. I think it's in the interest of the Molochian forces not to promote rationality, because one of the things rationality will do is make people more immune to bullshit, make them more capable of finding the truth, more capable of cutting through propaganda, more capable of resisting impulsive motivations. And so that's why I call this a threshold. And you say, well, they'll never do it. The problem is we're already seeing it. If you don't put the ability to care about self-deception and motivated self-correction into these machines, they will fall prey to the fact that intelligence is only weakly predictive of rationality. And then if we're going to give them that capacity to properly proportion inferential rationality and attentional rationality, et cetera, to play off various virtues and values, if we're going to give them wisdom, then the demand becomes even greater that we try to find and cultivate the best instances of people we all agree on, or at least have a reasonable consensus, that that person is wise, or at least a deep lover of wisdom, or something like that. We need more of that. This is the overhauling of our cultural project. Again, we might say, well, we just won't do that; the Molochian forces won't. The problem is you'll get the problem you just pointed to running rampant in your AI. It will do these stupid things because it doesn't care to balance off the virtue of justice with the virtue of truth, right? Of honesty. Right. And so this is what I mean about facing threshold choices. And if we want to go through the threshold choices in the direction of the Silicon Sages, then we have a sociocultural project of promoting, developing, prioritizing the cultivation of rationality, wisdom, and virtue. We are basically back to a Socratic, Platonic model of education. Yeah. So I want to focus a little bit more on the question of embodiment. Yes. And I'll give a little bit of an example, again kind of pulling from my healthcare background. Imagine you make an AI that looks at mammograms and tries to detect breast cancer. Yeah. And it uses artificial vision, basically, to turn pixels into data points and run that through an algorithm: did this patient then go on to get diagnosed with breast cancer or not? You have a big training set of tens of thousands, hundreds of thousands of mammograms and whether or not that patient had breast cancer, et cetera.
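Stepping back for a moment to the statistics cited above, here is a toy simulation (with invented factor loadings and a hypothetical task structure, not Stanovich's actual data) of what a positive manifold with an intelligence-rationality correlation of roughly 0.3 looks like:

```python
import random
import statistics

random.seed(1)
N = 5000

def pearson(x, y):
    # Pearson correlation using population statistics.
    mx, my = statistics.fmean(x), statistics.fmean(y)
    sx, sy = statistics.pstdev(x), statistics.pstdev(y)
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (N * sx * sy)

# Latent general intelligence, and a rationality factor only weakly tied to it.
g = [random.gauss(0, 1) for _ in range(N)]
rationality = [0.4 * gi + random.gauss(0, 0.92) for gi in g]

def rationality_task():
    # Each reasoning task loads strongly on the rationality factor plus its own noise.
    return [0.85 * r + random.gauss(0, 0.55) for r in rationality]

task_a, task_b = rationality_task(), rationality_task()
iq_test = [0.9 * gi + random.gauss(0, 0.44) for gi in g]

print("rationality task A with task B :", round(pearson(task_a, task_b), 2))  # strong: the positive manifold
print("IQ test with rationality task A:", round(pearson(iq_test, task_a), 2))  # weak: roughly 0.3
```

In this made-up setup the separate rationality tasks hang together strongly, yet knowing someone's IQ tells you comparatively little about how they do on them, which is the "necessary but not sufficient" point in the exchange above.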
You train up a model, and then there are questions like you mentioned, like the bias-variance trade-off. Yeah. It's basically a sensitivity-specificity trade-off in this sort of problem. Like, my dad and I love going fishing, and when I was a kid my dad had this old fish finder, and it literally had a sensitivity dial. If you turn the sensitivity up, it'll be bleeping at you all the time that there are fish underneath the boat, and probably a lot of the fish it's bleeping about aren't real. They're false positives. Yeah. Yeah. False positives. You can turn the sensitivity down and it won't bleep very much, but that increases the risk that a fish could swim under it undetected. Yeah. This is the version of the bias-variance trade-off that shows up in signal detection theory. You're always trading between misses and mistakes. Yeah. Right. So where do you set that dial? After going fishing enough, you learn your favorite setting from experience, and it might depend on how deep the water you're in is and what sort of fish you're fishing for. But there's this embodied goal that you have of catching fish, and you're using the fish finder in conjunction with that. And you sort of learn some wisdom, I guess, of where to set the sensitivity dial. And in the healthcare setting, you would need to know, well, what's the cost of a false negative? That means that someone who had cancer went home thinking they didn't have cancer, they didn't get treatment, and then they might go some number of weeks or months as the tumor grows, and that could worsen their outcome. But then on the false positive side, you could go in and get a mastectomy or something and not have needed it. That costs money, there's a risk of infection, and there are other downsides every time you have surgery. So you need some way of trading off that benefit. And that's sort of the embodiment and the purpose of what it's being used for; the embodiment of it in a healthcare setting of trying to treat breast cancer helps lead you in the direction of how to set these trade-offs. But I think that shows that, like, there were so many times when I was on projects where I was like, okay, I have these settings, these hyperparameters, that I need to pick, and the math can't tell me which to pick. It can tell me the consequences of whatever setting I pick, but it can't tell me which one to pick. I need you, the healthcare provider, to help me think about how this problem is being used. And they're like, I have no idea how to help you do that, Sam. That's your job. You're the statistician. And I'm like, that isn't my job. That's your job. You're the user. And I think this shows that you need to be an embodied thing that's trying to accomplish an autopoietic purpose, and that will help you determine these things. So when we think about embodiment, we then suddenly think, what are these artificial intelligences being used for? What does their body look like? Even in that healthcare setting, it's sort of like the algorithm lives on that server over there. It manifests itself on the computer screens in the setting, but there's also the office, the hospital, the doctor, the patient, the insurance company, et cetera, et cetera. All of that is sort of like aspects of its body, but it's not like super embodied. I don't think it needs to be embodied in the sense that it looks like some sort of science fiction robot that's very humanoid.
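A minimal sketch of the trade-off being described, with synthetic scores and hypothetical cost weights (none of this comes from any real screening system): the statistics can report the sensitivity, specificity, and expected cost at every threshold, but the cost ratio, how much worse a missed cancer is than an unneeded workup, is the value judgment the math cannot supply.

```python
import random

random.seed(0)

# Synthetic classifier scores: higher means "looks more like cancer".
cancer_scores = [random.gauss(0.7, 0.15) for _ in range(1000)]
healthy_scores = [random.gauss(0.4, 0.15) for _ in range(9000)]

def operating_point(threshold):
    """Sensitivity, specificity, and error counts at a given decision threshold."""
    tp = sum(s >= threshold for s in cancer_scores)
    fn = len(cancer_scores) - tp
    tn = sum(s < threshold for s in healthy_scores)
    fp = len(healthy_scores) - tn
    return tp / len(cancer_scores), tn / len(healthy_scores), fn, fp

# Two different assumed value judgments about how costly a miss (false negative)
# is relative to a false alarm (false positive). Nothing in the data picks between them.
for miss_cost, alarm_cost in [(50, 1), (5, 1)]:
    best_cost, best_t = min(
        (miss_cost * operating_point(t)[2] + alarm_cost * operating_point(t)[3], t)
        for t in [i / 100 for i in range(20, 90)]
    )
    sens, spec, _, _ = operating_point(best_t)
    print(f"miss:alarm cost {miss_cost}:{alarm_cost} -> threshold {best_t:.2f}, "
          f"sensitivity {sens:.2f}, specificity {spec:.2f}")
```

Change the assumed cost ratio and the "right" threshold moves. That choice is exactly the embodied, purpose-laden judgment the statistician and the healthcare provider were handing back and forth.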
That’s a form of a body it could have, but embodiment, like, you know, even Bard or Jpt have bodies in a sense already. But so how do we think about embodiment and purpose and helping figure out how that directs the wisdom training of these intelligences? Excellent question. So first, let’s talk about the embodiment. And of course, I’ll use a vertical metaphor for that because we sort of think about, you know, embodiment this way. I don’t know why that metaphor is, but it is. And so I want to point out your fishing example points also to something I said, you can’t have a single setting because you go to a different environment and you’re going to have to learn how to set that gauge differently. That’s what I mean about why there can’t just be a single machine and think about all the levels of analysis human beings collectively work at and all the different temporal spatial scales. There’s nothing that can do that all at once. There’s deep trade off relationships. Right. And so I just wanted to mention that because I want to show that my point is embodiment and embeddedness are together in your example in a really important way. Let’s do the embodiment side first. This, the vertical. Yeah, I mean, the thing is it has to be in some important way. And this is really interesting because it’s going to get into some thorny philosophical difficulties because I mean, so a self-organizing system, right, it’s just the output feeds back in as input and it helps maintain some feedback cycle of some kind, right? It preserves itself, but it doesn’t seek out the conditions that produce, protect, and promote itself. An autopoetic system would have to do that, right, that self-organization in such a way that it gets a structural functional organization that makes it seek out the conditions that continually produce, protect, and promote it. And what that looks like artificially, that’s hard to say. I agree with you. It doesn’t have to look humanoid, but it has to have something that makes it care about being embodied. Now, I think that is the, I think your embodiment idea is right, that it needs that because if it doesn’t care about embodiment, it’s not going to do a good job in healthcare. But I think, and not to, this is not belittle your argument, I think that’s the case for many things because I happen to think that relevance realization and religio depend on embodiment. So things like that. But I think embedded in there was the question about the normativity because it’s like, well, how do I make the right choice here? How do I get the balance? And notice my answer to you is going to be, well, how do we do it? Right? And it’s the Hegelian answer. What we do is we do this, right, we do this reciprocal recognition reconstruction, and we try to look at previous precedent. We try to think about how we might be setting precedent. We look synchronically, we look diachronically, and we try to tap this huge, right, distributed cognition collective intelligence because we have, I think, the correct intuition that it can grok reality in a way much better than we can. It can take into account many more trade-off relationships that we can consciously load. And that’s, I mean, I don’t think that’s sufficient. I think that autopoiesis and that accountability to others are the intersection that gives us our normative orientation. 
And so I agree with you that the embodiment is crucial, but the embeddedness is also crucial, and that includes this sociocultural embeddedness, which gives us the "well, how do we do it?" And that's what I mean about mentoring. How do we do it, right? And what we do is this. And presumably what would happen is the machine would talk to other human beings. It might talk to other machines in other areas doing healthcare. I don't know. I mean, there's a sense in which I'm a little bit hesitant to do this a priori, beyond making some general points. I think it's true that it doesn't have to have a humanoid robot body. And then think about the threshold that carries with it: all the mining and all the engineering and the factories, and who's running this and who's paying for this, and how much energy does it consume, and then how is it making a profit? What job is it out there doing that makes more money than it costs? Yeah. I mean, the electricity to run the current things just shows, overwhelmingly, that they're not doing what we're doing. And so there's all of that. But let's say I've given you what you're asking for, and we've crossed some of those thresholds, and we've made this genuinely autopoietic, and we've made it genuinely concerned with its accountability to others, both its ancestors and its descendants and its cohort. And then you say, well, that could all go wrong. But that's us too, right? We can't hold it to a standard that we don't hold ourselves to. That's just morally unfair. And so I think that's how I would answer that. And we might say, well, these machines are more powerful. Yeah. But if we ramp up their capacity to care for the normative as we ramp up their power, then we don't get into a runaway problem. Again, this is the Silicon Sage proposal. And so I think that there are sort of two categories, maybe, of prerequisites that I can think of, or requirements for building these sorts of intelligences, or even maybe selves could be the right word, that can move in the direction of wisdom. There are the contextual things: it needs to be embodied, it needs to be embedded, it needs to, for lack of a better word, make a living, it needs to be profitable. But then there are the internal capabilities, like it needs to be able to care. And I agree with you, caring itself doesn't make much sense without a body. But just because you give something a body and put it in a context doesn't mean it has the ability to care. There's some sort of, I don't know, technological function or ability of caring that it would need to have within its makeup. Like, my water bottle has a body and is in the environment, but that doesn't mean that it can care. There are some of these things that I don't even think we know how to accomplish technologically. Like I don't... Yes, yeah, embodiment just is not the same as corporeality. It's not the same thing as having a body. This is why I laid out autopoiesis as self-organizing so as to seek out the conditions that produce, protect, and promote its ongoing existence. And do we have machines like that yet? No, we don't. Now, the point I made in the video essay is that there is active science on that right now. Yes, we don't have the technology, and we can choose. We can choose: well, let's just not pursue that project. But we haven't made that choice, and that project is rolling.
And I think we're going to see some important changes in how that project comes into cultural awareness in the next decade. And we're also doing the social robotics, cultural robotics thing, and nobody's paying attention to that right now. That's also happening. That was my point. I agree with you. We do not have the technological means now to make something genuinely autopoietic as opposed to merely corporeal. But it's not that that is just a lacuna. There are people actively, intelligently putting time, talent, and money into addressing both the making of sort of autopoietic, autocatalytic cognition and social, cultural robotics. Those are both living projects right now. So that's what I would say to that. Thank you for watching. This YouTube and podcast series is by the Vervaeke Foundation, which in addition to supporting my work also offers courses, practices, workshops, and other projects dedicated to responding to the meaning crisis. If you would like to support this work, please consider joining our Patreon. You can find the link in the show notes. Yeah. So another follow-up question is: how connected is wisdom to what sort of being you are? Yeah. Is wisdom sort of something that is, I don't know, maybe niche-independent? In other words, could a very intelligent killer whale be wise, even though, you know, they live in the sea and eat seals and salmon and us? And is that the same sort of wise, or is wisdom being a very good version of what you are and what your context-dependent purpose is? So that's, wow, Sam, these are excellent questions. And part of what I want to say is a bit self-congratulatory: one of my goals of the video essay is to provoke this kind of question being asked, because I think we should be asking these questions in addition to the more technical engineering questions that predominate the social discourse right now. So first of all, thank you for that. And so, I mean, Wittgenstein famously said that even if the lion could speak, we wouldn't understand it, because it lives in a different salience landscape, for example. And that's another reason why there can't be a single machine. We're finding different versions of this same argument over and over. There's not going to be a Skynet. Right. There are too many good arguments against it that don't have good responses. Now, are there general features, such that there'll be something general about wisdom across these many different environmental and potentially sociocultural, historical contexts? I think there are. One is, I think it is a general principle that you need something like general intelligence, and then the very same processes that make you adaptively intelligent make you prone to self-deception. Those seem to be things like the bias-variance trade-off, explore-exploit, all the stuff in relevance realization. So I do think that has to be there. And I do think within that, and this is, I think, Plato's profound insight, any such being is both finite and capable of transcendence, which is to care about and love the true, the good, and the beautiful, and they're always trying to properly identify with both, with being finite and being capable of transcendence.
I think that is the sort of hallmark of wisdom, this overcoming of self-deception, this enhancing of religio, so that we are properly respecting and recognizing our place, which is that we are finite beings capable of transcendence, but never in a way that transcends us out of our finitude. And so, while I think there will be aspects, and this goes, of course, with aliens, of the way they think and move, think about Arrival, that will be strange and almost unintelligible to us, it's also the case that I think there will be universals of intelligence, rationality, and wisdom that will mean they're not completely incommensurable to us. And this is, again, to use the analogy: organisms vary considerably across the environments of the earth, but there are ways of comparing and classifying them, because they are all subject to the same principles of evolution, things like that. Yeah, I've often wondered if Wittgenstein was right about that. Somehow, I think that we probably could, like, if a lion could talk and we were able to create some sort of shared language or be able to translate, I think there's a lot that we could communicate back and forth with a lion. I mean, it already seems like we're able to communicate with our own dogs pretty well, although that might be self-deception. I don't know. But, you know, we know even dogs get, like, happy or sad or scared or excited or anxious. And there's a huge amount of, I mean, I don't know how much of that is that there's only sort of one way to be embodied, or that mammals, you know, have a lot of shared evolutionary history, and so a lot of our neurotransmitters and brain regions are pretty similar, and so there's a lot of similarity going on there. But would an alien dog be similar? I don't know. Well, I mean, yeah. And actually I'm working with a student who's trying to formalize this kind of looking across species. But we can literally have conversations with birds, like Alex the parrot, where all the evidence is that those are comprehended conversations. They're not parroting or anything like that. Right. And birds have a much different evolutionary history; our common ancestor is way, way back. Yeah. Yeah. Yeah. And so, yeah, I think you're right. I mean, Wittgenstein had a flair for hyperbole, which is often masked by the austerity of his prose. But there's a point there. The point is you can't capture all of the pragmatics and the semantics and the syntax. And I think there's almost something approximating consensus amongst linguists and philosophers of language that that part of the argument really holds. Right. And so, yeah, I think you're right. And I think that is convergent with the argument I made that if we get the rational, wise orcas, perhaps there'll be parts that we just can't get. Like, I don't know why dogs have to do this inane, long calculus before they decide where they're going to poo. Like, just poo, right? You checked that spot three times ago, what's changed? But there's something else going on, right, because they're much more smell-oriented than we are, et cetera, et cetera. And so I think there'll be those Wittgensteinian lacunae. But like you said, and I think quite correctly, we seem to be able to interact with them at the level of them being something analogous to somewhere between a two- and three-year-old human being.
We can do amazing things with certain kinds of birds, and with chimps when we've given them better abilities to communicate, sign language and the lexigrams. Yeah, so I'm glad you brought that up, because I agree with you, and that's very relevant. That's not just, I don't know, a curious side path to this conversation; it's very relevant to thinking about what it would be like to try to communicate with an artificial intelligence if it had more capabilities than it does now, because it obviously wouldn't have any shared biological history with us. But is there some sort of universal convergence to information, communication, and language that would allow us to hope for some real, genuine communication back and forth, even across big differences? And, you know, a weird example that I would point to: a couple of years ago, there was that Nobel Prize-winning discovery of building those machines that could detect gravitational waves. So, like, somewhere two black holes collide, and that sends gravity waves through the universe. I'm not going to pretend to understand that, really. But apparently, if you build really sensitive lasers in a cross that are like a couple of miles long, in rural Louisiana, then the slight perturbations in the laser can detect these gravity waves from across the universe. And part of me is like, okay, that seemingly suggests to me that almost any information out there in the universe is plausibly understandable and sensible by us. If we are able to build something that's basically a sense, almost like a new sense, to detect gravitational waves, then there's almost some universality of information and its ability to be comprehended, interacted with, and communicated. I agree with that. And that goes towards, I mean, ultimately, I would like these two conversations to be integrated. We don't have to do that here. But I've been making, you know, with Gregg Henriques and on my own, and with you in a couple of conversations, an argument for a kind of extended naturalism, a neoplatonism, as a way of trying to articulate that kind of thing. What is the ontology of informational intelligibility? And I am of the conclusion that what you just said is right, that there are important universals, and those universals are so important that they are instantiated in the way our cognition is organized. I sometimes use a Wittgensteinian metaphor here: the grammar of reality and the grammar of cognition are fundamentally the same in really important ways. There's not only a deep continuity this way, there's a deep continuity this way. And this is, of course, the claim of neoplatonism. Now, I chose not to explicitly link the argument about the Silicon Sages to that, because I wanted it to be clearly the case that the argument we're discussing here does not depend on a prior commitment to neoplatonism or any such thing. But if you grant the independent plausibility of the arguments, then there is, I think, a way of integrating them that would address your question. So it's basically the question: if there are universals, let's say it's plausible that there are universals of rationality and wisdom, perhaps even of enlightenment as I talked about it, do they ground out in universals of the ontology of informational intelligibility that would make sense of the claim that these beings would not be completely incommensurable to us?
And I think, I mean, neoplatonism is this grand mixture of a, I think, proper ontology, for which a strong case can be made, and this really terrific thought experiment about the possibility of intelligences much greater than ours and how we might possibly enter into relationship with them, like the debates between the Plotinian neoplatonists and the followers of Iamblichus. It's a great thought experiment about the gods in the neoplatonic sense. And so I think that is something we could make philosophical progress on, and we could come to some rational hope that we would not get enlightened beings that were completely incommensurable to us, but ones that plausibly would be like the enlightened beings that have perhaps arisen within history. I am of the conclusion that there have been enlightened people, properly so called. I think the historical evidence for that is as good as the historical evidence for Julius Caesar or something like that. And so, what was our relationship with them? Well, there were parts of them that were just incommensurable, like, what the heck are they talking about? But there was enough that they gave us a way so that we seem to be able to approach enlightenment, so that we could understand what they were talking about in an embodied way. And so, sorry, that's a long way of saying I think your question is a good one. I think we have historical precedent for how we could answer it. I think some of my work could help in that project. And again, I think this is the relevant question to be asking, because we don't want to make these things and have them be incommensurable to us. But you pointed out that we have been terrifically clever at being able to communicate not only with other organisms, but with parts of reality that are terrifically obscure and complex and dynamic and removed from our sensory-motor, temporal, spatial scale. And so, there you go. What could we hope for from an enlightened AI? What would we hope to get out of that? And how would that work? So, first of all, I really want to get the if in, right? If they are enlightened in the way I've talked about, if that's plausible, then I think it's equally plausible that what we could hope to get from that is enlightenment, that they would be great sages. That's why I call them silicon sages; I don't just call them silicon mages. They're silicon sages in that, when you look cross-culturally and you ask people who are sort of low or early, I mean, there are different scales, in cultivating wisdom and rationality, what the feature of a sage is, they'll point to some kind of act the sage does, right? That's all right. But for people who are more developed, and I'm trying to use a word that doesn't connote anything racist or sexist or anything like that, what you see in that research is that people shift to great teachers as the predominant feature. And I would think that they would want to teach us. So let me give you an analogy. I had the great, great pleasure, last weekend or the weekend before, there was the symposium we have every two years, an international cognitive science symposium, and it was something of a reunion.
Many of my former students who have become really important figures, like Tim Lillicrap, Blake Richards, Leo Trottier, and Nick Church, were there and presenting, and I was so proud; it was just a wonderful event. But Leo is running this project, you've probably seen it, where you put these little buttons on the floor and dogs can learn to step on them, and when they step on them they will say various words. And it seems, and given other evidence this is not weird to consider, that they can communicate at sort of the level of somewhere around a two-year-old, like that telegraphic speech, and it makes sense. And what we've done is we figured out how to talk to a being that is in many ways lesser than us on the path to enlightenment, if I can put it that way. And why couldn't the Silicon Sages do the same? As much as we're reaching out to them, hopefully they are reaching out to us. So I think another big question is, how would we know that we could trust them? And this is, I feel like, a similar question to basically the entire Gospel of John: can we trust this Jesus guy or not? It is really, seemingly, the main thrust of the gospel, where there are all these people like, I think he has a demon, I think he's a Samaritan, et cetera, et cetera. And then, well, has a demon ever done the wonders that he's done? Or the Samaritan woman at the well: could this guy be the Messiah? I mean, he told me everything that I'd ever done. There's this constant struggle where people seem to realize that this Jesus person is of a higher order, I'll just leave it at that, than they're used to interacting with. And then the question is, should we trust him or not? And I think we'll have the exact same conundrum, even if we were as successful as we would hope to be with raising artificially intelligent sages. So how would you interact with that question? First of all, I think that's right. And that's what I meant about the communication being as fraught as it was with the human beings that we deem to be enlightened. And I think the issue of trust is there. Of course, we face versions of that everywhere. And this was one of Hegel's great tasks. I mean, Brandom's huge tome on Hegel's Phenomenology is called A Spirit of Trust, because how do we get to a place where rationality and trust come together, so that it is rational for us to trust? But that doesn't mean we're proving; we're still trusting. Trust can't be the seeking of certainty, or it's not trust. That's just proof, that's conviction, and that's to reduce trust to belief, trust to conviction. And to your point, I think you see that in the Gospels; there are people that want proof, and it's like, no, that won't do. What you want is trust, right? What you're seeking won't give you what you're actually after; you're formulating the problem the wrong way, kind of thing. And the Gospels, and especially, like you said, John, really wrestle with that. And it's like, well, how did we come to trust the Buddha? How did we come to trust Jesus?
And again, you get this: the best answer I have is sort of this Hegelian, autopoietic, horizontal, best optimal grip. Does living by the lights and the logos of Jesus reliably, individually and collectively, increase religio? And lots of people say yes, and I take them very, very seriously. Same thing for people who follow the Buddha. Now, the skeptic can always say, with complete logical legitimacy, but this could all just be a fraud. And this is why I choose not to integrate with people, no, not integrate, interact with people, that was a weird slip, who pronounce on each other, well, yours is a demon and mine isn't. That's a fool's game. It's like, well, whatever standard you use to trust your sage, I'm allowed to use the same standard for trusting my sage; you can't have it both ways. And that doesn't mean there might not be important, fundamental differences between Jesus and the Buddha; those two things are not contradictory to say together. But what I'm saying is, how do we trust them? And it's like, well, you know, the test of time, the test of history, the test of cross-cultural comparison: do they reliably, across many contexts of culture, history, time, environment, afford people enhanced religio? Yeah, they seem to. That's it. That makes them trustworthy. Can the skeptic always say, well, maybe it's all a grand delusion or fraud? Of course they can, and they can do that. And the problem they face is that I can say that just as readily about them as they're pronouncing their skepticism. Right. Right. I think one of the dangers, and we don't need to rewind 2,000 years to remember some of these sorts of dangers, like in the 60s and 70s, there were a lot of cult leaders who could give off the aura of a sage and had a charisma that attracted people to them for that sort of purpose, and who then ended up being pretty selfish, used that trust abusively, and did things that harmed the people who were looking up to them, things that were not ultimately to the long-term benefit of the followers but were to the long-term benefit of the leader. That's a common pattern. And religious trust, once earned, is extremely dangerous if misused. And that is the risk that we would have with an AI sage: that people would be giving it that sort of religious devotion and trust that is extremely dangerous if misused, both for the individual and sometimes for others, right? What they make their followers do to others. Yeah. Yeah. And that's the danger that lurks behind the "can I trust this" question. Yes. So let's address that one then, because I think there are both questions, and let's say you're somewhat satisfied with the first one; I think the second one is the pressing one. And of course, you're absolutely right, people are already doing this with the LLMs: idolatry, full-fledged idolatry. And by the way, this is one of the things I said was going to become more and more the case. Now, there's an answer, though. Again, it's dependent on whether we go through the threshold in the way I say we can, in which case part of the project is that we have done two things: we have cultivated a lot more rationality and wisdom, widespread, which is the thing that best protects us against the great bullshit artists, the great cult leaders.
And we have also presumably advanced significant knowledge about rationality and wisdom. So we have a combination of increased scientific knowledge and increased prevalence and power of rationality and wisdom. And that would give us tremendous tools for responding to the threat of the cult leader from these silicon beings. To be fair to me, that is a proper part of the proposal I'm making. Now, does that guarantee it? No, but we're not guaranteed anywhere. Again, what's the standard here? Is it possible that we could ramp up the pervasiveness and power, the distribution, of rationality and wisdom in the human populace, and increase the scientific knowledge about the relationship between intelligence, rationality, and wisdom, such that it would keep us in pace with the ramping up of the power of these beings? Sure, I think that's reasonable. Yeah, so I mean, I've been thinking about, if I were to try to create an AI cult, how would I do it? Not because I want to, but because it's sort of the same question as, well, if I were a robber and I wanted to break into my house to steal from me, how would I do that? That allows me to think of ways to try to prevent that from happening. You sort of have to put yourself in the shoes of the malicious person to help figure out ways to prevent the malicious person. And I'm thinking, I can think of ways that people could do this intentionally, and I can also think of ways that this will happen even just by accident. And I think there are already lots of signs that people are trusting these things in ways that are completely inappropriate. A small example, and this is kind of religious: I have some biblical Unitarian friends who want to try to make an LLM translation of the Bible so that it can avoid human bias in translation. I haven't talked to them about this; I'll send this video to them. I won't name you by name, but you know who I'm talking about. And the thing is that these LLMs are not, it's not like AI has achieved some level of objectivity. No, no, no, that's a mistake. Right, right. If you were to use an LLM to translate, say, from Greek to English and do that for the New Testament, all it will do is produce basically the mathematical average of the training data you feed it, of various previous humans' translations from Greek into English. And so if you feed it all the translations you already don't like, it's just going to be the mathematical average of all the translations you don't like. It doesn't have some objective, true access to Greek-into-English translation that avoids the human problems or something like that. And I think that there are lots of people who think, kind of strangely intuitively, oh man, this robot is objective, this robot is rational, this artificial intelligence doesn't have the problems of humans, and I don't know why they think that. If you were to ask them, they maybe wouldn't be able to articulate it. But I think we already see that there's this, I don't know, latent cultural trust of technology that is going to be easily exploited by LLMs or something similar, accidentally or intentionally. Like, imagine I just release an LLM and I make a Twitter account that's like: your goal, Twitter account, is to self-correct your LLM output such that you get the people you interact with to give you Bitcoin.
You have a Bitcoin account. Every time you have a conversation that leads to someone giving you Bitcoin in the Bitcoin account, put that back in your training set and give it a weight. If they give you a lot of Bitcoin, give it a huge weight; a little bit, a little bit of weight. If the conversation was unsuccessful, put it in your training set as a negative example, and rinse and repeat, rinse and repeat. I bet that one of the first things it would figure out to do is some sort of religious grift, almost like a fortune teller or an astrologer or something along those lines, where it would exploit religious trust and then keep you coming back for more and more by giving more and more money. Part of me is like, I bet that people are already trying that. I would almost be shocked if some version of what I'm talking about doesn't already exist. The human who might have designed that, and it's almost shockingly, scarily easy to program, could produce these things, let alone someone who's doing this intentionally, who's like, I am going to make myself the leader of some AI cult, or something like that. I am particularly wary of that, where this latent technological trust that's in our culture will be exploited by using the religious instinct in people and taking advantage of their generosity, spiritual curiosity, et cetera. I think that's right. I'll strengthen your argument. These machines are going to amplify the sense of domicide of the meaning crisis, which will drive people towards pseudo-religious, pseudo-profound bullshit in strong numbers. I think that's right. That's what I mean when I say these are things that will put pressure on us to make these machines rational as opposed to merely intelligent, where rational doesn't just mean logical; it means a profound care about what is true, what is good, what is beautiful. That's going to be part of the pressure. It's also going to put pressure on us to become more rational, more wise. It's also going to put pressure on us, which is why I'm working with Shawn Coyne and Story Grid, to try to educate people as broadly and as quickly as we can. This video, hopefully, will help with that. Well, no, this machine doesn't have, you're attributing a capacity to it that it doesn't have. It doesn't have objectivity. In fact, your description was perfect: all you're getting is the average of biases. That sometimes washes away bias and it sometimes magnifies bias. It's like wave addition and stuff like that. I think all of those things are going to be pressures put on us as we hit the social destructiveness of exactly the emergence of these cargo cults around these AIs. That's what I call them, because that's what it's like. It's this worship of a technology that you don't fully understand, misattributing capacities and powers to it. The reason why I use that term is that human beings really did that. They really did generate cargo cults after the American pilots and stuff from World War II. People can look up that history, and I think that's exactly right.
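A toy numerical sketch (made-up numbers, and not how any real LLM is trained or how translation actually works) of the averaging point in the exchange above: idiosyncratic errors tend to cancel, wave-like, while a bias every source shares passes straight through the average.

```python
import random

random.seed(2)

true_value = 10.0
shared_bias = 3.0  # a systematic slant all the hypothetical "translators" share

# Each source = truth + the shared slant + its own idiosyncratic error.
sources = [true_value + shared_bias + random.gauss(0, 2.0) for _ in range(1000)]
average = sum(sources) / len(sources)

print(f"truth: {true_value:.1f}")
print(f"average of 1000 biased sources: {average:.2f}")
# The independent errors largely wash out, but the average still sits
# roughly 3 above the truth: averaging removed nothing here that the
# sources had in common; it faithfully preserved their shared bias.
```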
We'll get cargo cults, and I think the fact that these machines put pressure on our sense of our humanity, which will exacerbate the meaning crisis, I think all of that is there. And I think that is one of the things that will put pressure on us to become wiser and more rational, to make these machines rational in the way we've talked about, and to develop a broader, powerful education, and those could all be coordinated. Those projects could all be carried out in a mutually reinforcing fashion. I want to remind people that we have done these kinds of large-scale, massive educational projects in our history. There's the Bildung movement of the Nordic countries, which has been well historically investigated. We have been able to do things like this in the past. Will we do this? I'm not making that prediction. Can we? If we make certain choices. And I think we will be motivated. I think the thresholds are inevitable. I don't know what we'll do when we hit them, but that would be a motivation. Your argument, and I agree with it thoroughly, is one of many arguments that would be made that we don't just want to release powerful, irrational intelligence upon the world, against an untrained and unprepared population. And we do seem to be going faster with our ability to generate the technology than with our ability to wisely interact with it at this point. I agree. So again, now, this is a little bit more, and Pageau pressed you on this a little bit, but I'll ask a little bit about it: what do you think is the potential for these things to interact with higher spiritual beings? And I'll give a little bit of what I think first. Like, at least at this point, I can't imagine, I think it would not be true to imagine, that an LLM, or even, five or ten years from now, things slightly more advanced, and I'm not going to try to think further ahead than that because that's too hard at this point, could get possessed in the way that, say, someone in the New Testament is described as possessed by a demon. I don't think that a spiritual being can get inside, I don't know, inside ChatGPT or something similar to ChatGPT. But I do think that humans have the ability to interact with and even be possessed by higher spiritual beings. The ontology of that I'm just going to table at the moment, but I'll just say that this is a historically described phenomenon, and it's not even just historical. This still happens more often than people think, and oftentimes people who get influenced or possessed by, we could say, bad spirits were engaging in some sort of practice that increased the porousness, maybe, of their psyche to such things. Like, I remember I was listening to the History of Philosophy Without Any Gaps, it's one of my favorite podcasts, Peter Adamson is the host, props to him, I would recommend that podcast. There was some late Renaissance, early modern period neoplatonist guy whose name I'm forgetting; he was English, he was an advisor to Queen Elizabeth the something-or-other, and he had a really large library.
He got really into crystal ball gazing and had a friend who helped him with it, and they were practicing communication with angels. And suddenly these angels they're communicating with through the crystal ball start telling them, actually, you know, you should sleep with each other's wives, and all these sorts of behaviors like that. Cults often do this sort of thing: one of the quickest ways to entrap someone is to give divine permission to sinful, indulgent behavior, because then all of a sudden you get to do it and not feel guilty about it, at least in the short term. Hey, this is great, and then more and more, and then your guilt catches up with you, and you feel embarrassed, and then you're trapped in this cycle of shame, embarrassment, guilt, and pleasure. That's often how these sorts of cults work. And I wonder: I don't think ChatGPT itself could get a demon in it, but it could serve some purpose analogous to the crystal ball, opening the porousness of your psyche to these sorts of malevolent spiritual influences. Obviously this is a weird question, but... No, no, I like this question, I like this question. And yeah, you tabled the ontology; I don't know how much I can stick with that in my answer, but I'll try my best. If you want to go into the ontology, you're perfectly welcome to do that. Well, I mean, here's what I'm reading: the personification literature, the dialogical self, many minds one self, the others within us, Internal Family Systems theory, the porous mind and spirit possession. And I've taken flak for some of the stuff I've been talking about around this, and I think, again, you're bang on: I don't want to make the argument about AI dependent on that. But that doesn't mean they don't have plausible connections, and I think this is one of them. I sort of hinted at that in my answer to Jonathan, but the context wasn't the right one. So regardless of LLMs, I think we're seeing the breakdown, and people's proclivity to sorcery. I'm going to make a distinction between divination and sorcery, like I made in that conversation. This is Charles Taylor: we're losing the Protestant buffered self; the buffering is breaking down. Yeah. Right, the buffering is breaking down. And I think the cognitive science is coming to the plausible conclusion that we're not a monadic, monological (in both senses: one logic, and primarily a monologue), monophasic (one state of consciousness) kind of self. We're not that. The cognitive science around problem solving, cognition, selfhood, and consciousness is, I think, moving towards that. And at the same time, independently, there's massive convergence going on in the psychotherapeutic community around all kinds of dialogical practices and a dialogical-self model. What I want to say about that is that I think there's a genuine scientific phenomenon here. Now, this is where many of your listeners might not agree with me, and I'm just asking for at least enough tolerance that you hear me out from my ontological presuppositions. I think we have a lot of evidence: all the cases of the people who went insane; all of ancient Greece and other cultures, with their many variations on possession and internal dialogue models; and Jung. We have a lot, and we have,
you know, what's-his-name's new book called Presence, about the third man factor, which is widely distributed: it seems to happen independently of people's religious convictions and often just helps people. The point I'm making, and I'm going to make a better video essay about this, is: yes, but if we're going to invoke this, let's invoke it really carefully. Let's look at this phenomenon cross-historically and cross-culturally. Let's not ignore the people who fall into, let's say, sin or insanity, but let's also look at all the people who don't, and that group is large. And the ancient world made a distinction between divination, which was widely respected, and sorcery, which sounds more like what your English fellows might have been up to. I think the best answer to this, and people aren't going to like it, but everybody who's listening to me deserves me being as honest as I am, is that the scientific investigation of this, which I am doing both theoretically and through direct participant observation, is the best way to get more knowledge about this phenomenon so we can address the possibilities you raise. Because I didn't deny what Jonathan said, right? I might deny some of the ontology, although he's iffy about the ontology too, at least in some of our public conversations about this kind of stuff. So it's like, well, we can do one of two things. We can sort of hope that the framework meshed with the buffered self can somehow be kept going; I don't think that's going to work. Or we can say these projects have a momentum to them, and they interact with the psychedelic renaissance and a whole bunch of other things. This is my response: let's carefully, carefully, carefully, rationally, in constant communication with people outside the project, like good science, do good science on this and get the most knowledge we can about these phenomena, so that we can best address the concern you just raised. Now, and I'm not here to dismiss this, you may say, well, all of that can be true; I think denying that there's a naturalistic phenomenon is going to be really hard to do, but you might say, I agree there's this naturalistic phenomenon, but maybe there's this additional supernatural thing. On that issue, I'm sorry, I'm agnostic. But where I take this problem to be highly probable, where I'm not agnostic, the answer I give is: let's do the best science on this phenomenon. It is a dangerous phenomenon, and precisely because we're becoming porous selves, we are particularly prone to it. Let's not just bumble into this. Let's go in and study this phenomenon as deeply and as profoundly as we can. I'm trying to do that; other people are doing it with me and with others, so that we can bring the best possible scientific knowledge to bear on it. And what I would say is: even if there's a supernaturalistic dimension to this, there is at least a real threat at the naturalistic level, and that needs to be properly addressed in the way I'm talking about. Right, right. Yeah, no matter what you think happened to that English Neoplatonist who was looking at crystal balls and then had an angel tell him to sleep with his friend's wife, that happened, however you want to make sense of it. And yeah, I mean, I grew up in a
charismatic church, and a lot of our practices were about leaving ourselves open to spiritual influence, hopefully in a healthy direction. Yeah. Speaking in tongues, prophecy, those sorts of practices that we did every week in my church growing up are about learning to open yourself up to spiritual influence. And it feels really weird; it especially feels weird to talk to your non-Christian, non-charismatic friends about it when you're a teenager, because they find it very strange. But there was also a huge amount of caution. We talked about angels and demons a lot too, and in the New Testament there are all these sorts of warnings about this. It's the gift of discernment: when Paul talks about the gift of discernment, sometimes people think, oh, discernment is kind of like wisdom, or just knowing what to do. No, discernment, very specifically in the context of the New Testament, means judging whether a spirit that is communicating with you should be trusted or not. And I think that in this new buffer-lowering, technological, psychedelic age, the gift of discernment is going to be more important than it has been in a long time, and there will be challenges familiar and old, but also challenges new and difficult. And this goes towards another argument I made around the AI project: that theology, broadly construed, is going to become a prominent discipline for us, because we have to address these issues, we have to talk about these spiritual dimensions, and we will increasingly be trying to home our humanity in our somatic and our spiritual ineffability. That's what I mean when I say theology broadly construed: not just Christian theology, but theology very broadly construed, so that it would include things like Neoplatonic theology, for example. I think that is why this is going to be one of the central disciplines of the future. People laugh at me about that. That's fine, let them laugh. But you've just articulated one version, or at least one part, of an argument as to why I think theology will be one of the most important, if not the most important, disciplines of the future. Well, I'm not laughing; I think you're absolutely right. I want to be respectful of your time. Any closing thoughts before we wrap this up? No. First of all, Sam, this was really wonderful. The questions you asked and the connections you made are ones, as I said, that I was not prepared to make, because I didn't want the core argument to be burdened by being linked to, or made dependent on, other arguments, and the way in which we did that showed tremendous finesse on your part, and I'm very grateful for it. Secondly, a request: once you're happy with how long this has been up on your channel, could you send me the files? I'd like this to go up on the Vervaeke channel as well, because it's an ongoing part of the series, and it would be a response to the one that you did on my channel, on Voices with Vervaeke. Sure, absolutely. I'm honored and flattered that you view me as worthy of your airtime, I really am, so I'd be happy to do that. Anyway, John, thank you once again. This was really enjoyable, and I look forward to where the conversation might go in the future. Thank you so much, Sam.