Welcome everyone. The video you’re about to watch was originally on the Into the Impossible channel hosted by Dr. Brian Keating. I’m joined in that discussion by my good friend and colleague Sean Coyne as we discuss with Dr. Keating the advent of AGI and some of the issues that we explore in our book Mentoring the Machines. I hope you enjoy it. Sean Coyne and John Vervaeke have together authored a new book, Mentoring the Machines. It’s a book about artificial intelligence and the path forward. It further develops the arguments for how to align artificial intelligence with human flourishing, and it sets those arguments into beautiful and accessible writing. What is it about artificial intelligence that drives tech giants like Elon Musk, Marc Andreessen, Mark Zuckerberg, Bill Gates, and Sam Altman almost to distraction? Why are they racing to develop and own these thinking machines while unsure of the harm they may cause us? Can we trust nation-states and non-governmental organizations to use their totalitarian strategies when they don’t truly understand the problems we face? The very processes that make us adaptively intelligent make us perennially prone to self-deception. Today we’re featuring John Vervaeke and Sean Coyne, two extraordinarily bright minds in the field of artificial intelligence. John and Sean have recently published Mentoring the Machines, a book in which they discuss the things that others like to sweep under the rug. This is very much a deep dive into the nature, structure, and function of being, knowledge, reality, and behavior. Today’s episode of Into the Impossible will tackle these questions with them and more. Join us for what promises to be another great deep dive into the world of groundbreaking technology, AI, and human stupidity. Welcome, everybody, to an artificially intelligent version of the Into the Impossible podcast with two amazing guests. It’s John Vervaeke and Sean Coyne joining us from various parts of the eastern seaboard, where I think it’s warmer than it is here in California. How are you guys doing today? I doubt that it’s warmer here than it is in California. Yeah. It feels warmer relative to your baseline temperature. Here it feels pretty starkly cold. When it gets below 60 in San Diego, we get upset. You guys have written a series of books, as I understand it. As I told you before we started recording, we always like to do the thing you’re never supposed to do, which is to judge a book by its cover. But, you know, what else do you have to go on? If my audience isn’t familiar with your work, they only have this very mesmerizing cover and title to go on. So I thought we’d start with that. If you guys would be so kind as to explain the origin, first of all, of your collaboration. I’m familiar with John’s work, less so with Sean’s work, but tell me, where did the collaboration come from? Maybe start with John on that. And then Sean, you can explain the title, subtitle, and cover art of this book, Mentoring the Machines. Well, the collaboration predates the book in that I was looking to turn my video series Awakening from the Meaning Crisis into a book, and I was reaching out to various publishers and having publishers reach out to me. And I met with Sean, and first of all, I came to appreciate that he has a profound theory of narrative, and that had a huge influence on me. And then he said something that really went through to the core of me. He said, I won’t make your book a bestseller. 
And I went, oh, and he said, I want to make it a permanent seller. And I said, you’re the guy. You’re the guy. You’re the guy. And so we started working together on that. Then a while back, after GPT-4 came out, I did an online video essay on YouTube where I tried to talk about the scientific, the philosophical, and the spiritual import and impact of these machines, or at least their potential descendants. And then Sean reached out to me almost immediately and said, I think what this essay is doing is really important, but it is pitched at quite an academic level, which is of course one of my besetting sins. But he said, I can promise you, because that’s what I’m good at, that I can make it more accessible, stepping it down without dumbing it down. And I mean, I already had confidence in Sean because of what I’d seen, but I was still very doubtful about him being able to do that. And he said, well, let me give it a try. And he went and he wrote the first book to come out and showed me the manuscript. And I went, wow, you really did it. You really pulled it off. And so we just went forward from there. Sean is responsible for most of the narrative and the metaphor and the use of analogy. My job is to sort of track the argumentative structure, the scientific merit, et cetera. That’s how we’re working. And so far, I think we both are deeply appreciating the collaboration. Great. And Sean, can you talk about the title and the cover and the theme of this? I understand it’s intended to be a four-part series. It may already be out in all four parts, but I read the second one. Yes. When we started the project, we had to figure out, how do we position this book for everyday people who are concerned about what the artificially intelligent future is going to bring us? And especially after GPT-4, as John mentioned, we really felt that it was important that people could track the origins and the development of artificial intelligence in a way that made sense to them, as opposed to getting into the technicalities of it. When we were coming up with a title, things like the war of AI came up and things of that sort, until I finally had an idea based upon what John had done in his series, in his talks about AI. And that is the way John positioned the emergence of artificial intelligence metaphorically as a parent-child relationship, so that we should really be framing artificial intelligence as the mentoring of intelligent beings who have the capability and the potential of becoming perhaps even better than we are. So that’s where the title came from, Mentoring the Machines. And then the subtitle, Surviving the Deep Impact of the Artificially Intelligent Tomorrow, speaks to the breadth and depth of the arguments that we’re making in this series of books, because it is a pretty complex story to tell. In fact, it’s getting ever more complex with the third project that I’m working on now, which is called Thresholds, because very much the structure of these four books is mimicking the structure of John’s original talk. So book one was sort of an introduction to the orientation and the framing that we need when we’re thinking about artificial intelligence. The second one is the origin story. Thresholds, this third one that I’m working on now, that’s really about what sorts of possibilities we can envision in the future for the development and evolution of these machines: not predictions, as John says, but thresholds of possibility. 
So this is very much a deep dive into the nature, structure, and function of being, knowledge, reality, and behavior. It’s a very large project. My goal is to make it read in such a way that people can understand those concepts and those four ways in which we make sense of the world. As for the cover, that was inspired by 2001, the film by Kubrick, and it’s the moment when the Keir Dullea character is going into the belly of the machine to sort of shut it down. And it’s a very chilling moment, and it’s so brilliantly described visually by Kubrick that we couldn’t help but mimic it for the covers. Well, yeah, if you look in the back of my office here, you’ll see an homage to 2001, of course, with the Open the Pod Bay Doors, which is the theme of the podcast. We’ll open with that because the word podcast comes from that very movie. I don’t know if you guys are aware of that, but my audience has heard me say it many times that if it wasn’t for 2001 and Arthur C. Clarke, whose center I co-direct at UCSD, we wouldn’t have the name podcast. It would be called something else, probably ear blog or something, but it wouldn’t be called podcast. So let me turn to some individual questions for the two of you, because you’re each fascinating and kind of complementary in this Arthur C. Clarke sense of arts and sciences. So, John, you’re especially known in psychological circles, I guess, if I could say that. When we think about artificial intelligence, I don’t normally associate it with the notion of psychology, except for the projection by human beings and anthropomorphization. Can we expect, not a robot AI psychologist serving as our psychologist, but can we think of ourselves as being their psychologists and somehow educating them as we do with children? So what is this strange intersection of AI and psychology? What does it mean to you? Well, it means a lot. I mean, I’m both in psychology and cognitive science. So it’s both psychology and then, within cognitive science, it’s philosophy, machine learning, linguistics, anthropology, et cetera. There are two ways of answering that question. There’s the psychologist who’s what you might call an academic researcher, and there’s the psychologist who’s doing something like therapy or mentoring. On the first, I mean, there are serious questions about what we mean by intelligence. What does it mean for a system to be actually intelligent for itself, as opposed to merely simulating or mimicking it? How does that relate to issues of consciousness, issues of selfhood? And those are psychological, philosophical, cognitive scientific questions. And I think they should be addressed with the sort of most plausible work we can bring to bear on them. The psychotherapeutic mentoring thing is: if, after the first, we come to certain conclusions about potential ways in which these machines can develop, then that would have an impact on how we properly try to afford their development, which is very much like what we do when we’re doing pedagogy, or we’re doing therapy, or that weird combination that goes into the idea of mentoring, which is the title. One of the main claims of the book is that as we get a good answer to the first type of question, it sort of gives us guidance on how to answer the second question, which is a good way of dealing with what’s called the alignment problem. How do we get these machines to align with human flourishing and values? 
And in that sense, yeah, the alignment, you know, I’ve had Nick Bostrom on the podcast a couple of years back, famous for the paperclip problem. I often feel like the alignment, you know, it’s sort of extrapolation ad absurdum in a sense. And I wonder about you, as a much more expert purveyor of these ideas than I certainly am. But, you know, when you think about that, these things are often done without the context of physical limitations. So if you think about, you know, we live on a planet, we live on a spinning rock that only has so much metallurgical density per cubic meter of soil. And so, you know, unless you’re willing to say that these artificial intelligences can not only instantiate themselves in the physical world, but also can then manufacture new materials ab initio that will then placate their insatiable urge for growth. So talk about alignment for a bit, and then, you know, how the ethics play into this, because I’ve often said on this channel, you know, I’m an experimental physicist, so we make the computer chips, we make the sensors; I’ve got a couple of patents for lithographed objects for various functions. But, you know, I’ve often wondered whether it’s possible to teach a machine something like that without inculcating it with this notion of what we as human beings learn from, which is pain avoidance. And obviously, you’re much more expert than I am. But can we expect to train an artificial, in silico life form or intelligence if it doesn’t have the same kind of visceral reactions of pain avoidance or pleasure seeking? Okay, this is an excellent and profound question. And it also goes to the point that Sean made about why we talk about thresholds rather than predictions. I don’t like these graphs; they’re usually univariate, and human beings are very bad at making exponential predictions off of univariate graphs. I found all of those predictions, from both the zoomers and the doomers, and we try to distinguish ourselves from both, I found them silly. And so instead, we talk about something that I think you’re actually pointing towards: there’s a threshold. And the threshold, one of the threshold points, is: are we going to make these entities embodied entities of some kind? That does not follow inevitably from sort of the development of these machines as they are right now. That’s a choice point. And as you said, it involves huge logistical issues about mining rare metals and transportation. It’s a huge commitment. So this is why we talk about thresholds. These are choice points. They won’t just sort of happen inevitably, the way the earth orbits the sun or something like that. We will have to choose; we will have to create projects. Now, the question we could then ask, if we reformulate it as a threshold, is: why might we want to do that? And then that goes to your second question, which I think is very astute, which is the deep question of whether or not being embodied is an essential feature of being an intelligent agent, a cognitive agent. Now, I have a lot of argument and work towards that. I belong to a version of cognitive science called 4E CogSci, which in fact explicitly argues, and we can get into the details if you want, I’m just answering your question at this larger level right now, that cognition has to be embodied in something that is properly autopoietic, self-making, not just merely self-organizing, but self-making so that it has real needs. 
We might want to do that because we might want machines that are capable of being rational as opposed to just being able to manipulate information in certain ways. We could get into that. The reason why it’s a threshold is that there are different alternatives. We could say, well, we’ll never embody these machines. And then that means we’ll have a certain kind of future. And then, as you said, we might face the choice of, well, we may need to embody them, because if we don’t, we won’t actually get proper cognitive agents. We might want proper cognitive agents because our argument, and this is part of Mentoring the Machines, is that it’s only when you have genuine intelligence that is for the actual system or entity itself, an autopoietic system, a system that cares about information because it’s taking care of itself on a moment-by-moment basis, only then could you have something that would actually care about the true, the good, or the beautiful. If you can’t get caring in this profound way interwoven with intelligence, you’re not going to have an entity that is in any way responsible to normative demands. You can try all the programming you want, but what you’re ultimately relying on is the fact that somehow that programming won’t self-organize beyond your purview. And I’m suspicious of that ever being a workable goal. Whereas if we can get them, like we do with our children, and there are no guarantees with them, but if we can get them to properly internalize a care for normativity, then that is how we could solve the alignment problem. Now notice what I’ve done there, Brian. I have not made predictions. I’ve said: here are choice points we face, here’s why we might make them, and here’s why we might go this way rather than that way. Sean, I think of you as kind of a gold standard for editing and publishing. I want to talk about the practicalities of this. I wrote two books before, you know, ChatGPT came on board, and I’m kind of glad that I did, because now I don’t think I would tell my story in exactly the same way, because I have this, you know, kind of tireless research assistant with a rapacious appetite for work, and I can just assign it things. And I’m not talking about my graduate students, you know, I don’t abuse them as much as I’ve used ChatGPT. First question for you: how is it affecting publishing? As an editor, a master editor and publisher of, you know, hundreds of books co-authored or produced through your handiwork, how has it changed your practice and your craft? And what are some of the kind of ethical implications, you know, of dealing with these as a ghostwriter assistant and so forth? As I said, my books would look a lot different if they were written in 2024 instead of 2018. So talk to that, please. I’m using ChatGPT, you know, to check a lot of my propositions as I’m working on this project. So this is a really good question for me, because I’m going through this right now, and it’s been my experience that it’s a wonderful resource to check against propositions that have already been circulating in the collective cultural grammar, if you will. Right. What it’s not so great at doing is making the binding metaphorical connections that are instrumental in creating a compelling story. So what does that mean? It’s not very good, nor would I expect it to be very good, because it doesn’t care about the larger questions that stories are after answering. Right. So your earlier books would be different to a certain extent. 
They probably would have more footnotes. They would probably have even more concrete argumentation. But I think you would still have been required to come up with the structure, function, and organization of the whole yourself, because it’s not going to tell you your point of view about a particular subject matter or area of investigation. So in terms of publishing, if you ask ChatGPT to tell you a story, it gives you very, very coarse, non-engaging things, like tell me a story about a boy who was made of concrete. I actually did this with my son as a joke, because I used to tell him a little story; I used to call him concrete boy. Right. So I asked my son, hey, why don’t you write a story? And then he tricked me and he went to ChatGPT, and it came up with a story, and it was just very pedestrian, because it was really only about the on-the-surface changes, not the mind changes or the spiritual changes. And, as John was talking about, until these machines do have rationality and have the ability to care, and then if we can even step them up a little bit more and give them intentionality and make them their own cognitive agents, that’s the point where the storytelling could become pretty compelling. And I think we’re quite a ways from there. And I think people who rely upon ChatGPT or large language models to help them tell stories are only operating on a very superficial level that will not remain all that interesting for all that long a period of time. So when John introduced me by saying that I told him I didn’t want to make his work a bestseller, I wanted to make it last, I wanted his project to sell for all time, that’s what I was talking about. And that requires a lot of work, a lot of thought. Yeah. And John, you know, one thing that I often talk about is the difference between the notion of knowledge and wisdom. And of course, you know, the word Homo sapiens in Latin means one who is wise, someone who has sapience; it doesn’t mean homo knowledge, which in Latin would be scientia, science. So we as scientists have this notion that enough knowledge makes us wise. But I feel like that’s complete, you know, balderdash, because some of the dumbest people I know are geniuses, right? And I coined a term, you know, like Lenin’s useful idiots, I believe, except it’s useless geniuses, because we have so much of an intellectual desert. And you’ve coined this term, I believe, the wisdom famine. Can you talk about that? What is a wisdom famine? How are we being afflicted by it in this age of almost infinite, instantaneous, democratized knowledge? Separate the science from the sapience, John: what do you mean by a wisdom famine? There are sort of two parts to this answer. One is a distinguishing of wisdom from knowledge, or at least from intelligence. And then the other is, well, why is that lacking in our culture right now? Part of it has to do with this idea that your adaptive intelligence is the result of some very complex dynamical systems. They’re massively recursive, and you’re connected and coupled to the environment in this really complex way. And again, where you want detail, I’ll supply it; I won’t inflict it on people. 
What comes out of this, for some very good reasons and arguments that I’ve published on multiple times, is that the very processes that make us adaptively intelligent make us perennially prone to self-deception. This has to do with a core thing I talk about, which is relevance realization, and related phenomena like anticipation. But this has to do with the fact that the amount of information that you could be paying attention to in the external environment is overwhelming; the amount of information you have in long-term memory, and I’m also talking about possible combinations, is overwhelmingly vast, as are the possible options for your behavior. This goes by various names; another name is the frame problem, et cetera. And the issue is, what you don’t do is search all this information and judge whether it’s relevant or not to what you’re doing. And this is where it sounds almost like a Zen koan: you’re intelligent by actually ignoring most of the information. So it’s obvious to you what you should be paying attention to, what you should be remembering, and what you should be doing. And that’s fine for common sense. But when we’re trying to build these machines, there’s no physics of obviousness; we have to try and figure out how the brain generates that intelligent, adaptive obviousness. But the problem is, it works by making things obvious and having you ignore vast amounts of information. There is always a great risk of self-deception there: that you will be ignoring the needed information, and you will not be noticing what is actually true or relevant in that situation. And so we are endlessly prey to self-deception. You can’t just sort of say, well, what I’ll do is I’ll shut off my adaptive intelligence machinery; that’s not going to work. So you have to do something a little more interesting. You have to try and intervene in this very complex dynamical system that’s working at multiple levels in multiple ways. And it’s very self-organizing and it’s adaptive. So when you try and intervene, it has the capacity to just reorganize itself. This is why just knowing that you should stop smoking is largely useless, for example. Right. And so what cultures have done across time and history is they’ve figured out sets of practices; I call them ecologies of practices. And there’s a reason for that. They’re not just a set, they’re bound together in a sort of self-correcting, self-constraining fashion. They’ve created ecologies of practices for ameliorating that self-deception and helping to enhance our ability to connect to what is most relevant. And when you get that, when you get people who can see through illusion, see through self-deception, and zero in on what is most relevant, especially in complex, ill-defined situations with emergent novelty, we have tended, and I think with good reason, to call them wise. Those are wise individuals. Now, that’s the first part. And the second part is, why the famine? Well, the problem we face, and I’m hoping that there’ll be some charity here because I’m addressing the problem and I’m not making carte blanche claims, is that across cultures these wisdom ecologies of practices have generally been housed within religious traditions. And because we’ve gone through this very unique process of secularization, very unique in world history, that religious framework is largely non-viable for a lot of people. I’m not here to argue about the metaphysics unless you want to. 
I’m not proselytizing for religion, nor am I here because I’m allergic to it. I’m just saying that’s what used to do it, but for most people, that’s not viable. And so if I ask you, and I do this with my students, where do you go for information? They’ll pick up their phone like the cyborgs that they are, right? I’ll say, well, where do you go for knowledge? And they’ll be a little bit more squeamish because they’ve read some postmodernism. They’ll say the university, science maybe. And then I’ll say, where do you go for wisdom? And there’s a deafening, anxious silence, because they don’t have an answer. And they’re anxious because they know they should have an answer, because they know they’re foolish. They know they’re self-destructive. And they know that they’re not as connected as they want to be. They know, at least intuitively, they have far fewer close friends. They’re less able to listen to advice from other people, all these kinds of things for which there’s empirical evidence. They have some intuitive sense of that. Well, that’s the wisdom famine. And part of that has to do with the fact that we lost the religious framework. Another part has to do with the fact that a lot of this meaning-making machinery, relevance realization machinery, is not carried by our propositional knowing, which is, and I’m a scientist like yourself, so I value this, the ability to assert propositions like cats are mammals. And that’s propositional knowing. But a lot of that adaptive intelligence is carried by non-propositional knowing: by procedural knowing how, by perspectival knowing what it’s like, and participatory knowing by being the kind of being you are. And so because of that, we’re in this kind of propositional tyranny. We don’t know where to practice. And we don’t really have much cultural education about the non-propositional aspects of our cognition and intelligence that are so central to the cultivation of wisdom and meaning. That’s the shortest answer I could give you. I’m sorry, that was a bit of a ramble. No, I think it strikes at the heart of a lot of this. Ultimately, the extrapolation to the singularity of Ray Kurzweil, who’s coming out with a new book, The Singularity Is Nearer, I think it’s called. So it got near, then it was here, now it’s nearer. I don’t know how all that works. But he sent me, I think it was a third of an acre of rainforest in Brazil, a printout of the 500-page book. But anyway, he’s coming on the podcast soon, so I’ll ask him a little bit more about that. But the ultimate extrapolation to the singularity is something that seems to me to be, if not a surrogate for God, maybe an actual God, these entities in the simulation hypothesis that can reconstruct and recreate all visceral experiences in a way that Descartes could hardly have dreamed about. But I want to bring it back to something that I think will involve both of you. And maybe I’ll start with Sean, because he’s such a master in the art of storytelling. I claim that Einstein has this deserved impact on not just science, not just physics, but on society. And I actually think it’s because of his impact within, and the respect he earned within, the physics community, unlike, say, Neil deGrasse Tyson, who’s an excellent storyteller but doesn’t have the bona fides of an Einstein. Obviously, I don’t think he would say that. But the legitimacy that Einstein achieved through his works gives him a deserved crossover into the popular mentality. 
So in the popular mentality, he was viewed as Time’s Man of the Century. And yet one of his greatest gifts was telling stories and being imaginative and doing what he called Gedanken experiments, thought experiments, experiments that exist in the mind. And in fact, the thing that he said gave him palpitations, a titillating, visceral feeling, what he called his happiest thought, was the notion and the understanding of an observer in freefall. I can’t find it; I usually have a finger puppet of Einstein around here. I’ll use Carl Sagan until we have you guys. Oh no, there, I’m stepping on him, how rude of me. So Einstein made the following observation purely mentally, Sean: that if an observer were in freefall, he would experience no gravitational field. There’s my laboratory demonstration. So I want to ask you both, starting with Sean: A, to what extent can a machine have a happiest thought? And B, what does it mean to have a happy thought? And maybe, John, you’ll comment on that after you hear what Sean has to say. So how can a machine have a happy thought? What does that even mean? And, if you like, what would it mean for a machine to even have the visceral sensation of freefall without a body, without embodiment? I had Noam Chomsky on; I’m not name-dropping. I’m just saying, obviously, he believes a lot of communication is generated by physical and visceral bodily sensations and embodied consciousness. So how can a computer trapped in a wafer of silicon have a reaction that would lead it to have a happy thought? Sean, and then John. Big fan of Einstein, big fan of the Gedanken experiments. The way I would answer it: whether a machine can have a happy thought or an experience would require, in my estimation, embodiment. I would sign up for the fact that a machine would have to have a form of consciousness, a frame, its own umwelt, if you will, before that would sort of emerge. I’m in the same camp as John in the 4E cognitive science realm, and I’ll give you the four E’s right now: you’ve got to be embedded, embodied, extended, and enactive. So in order for a machine to enact original behavior, meaning novel behavior that has never existed in the universe before, in order for it to actually be able to do what we do all the time, and I’m doing it right now. What I’m saying has never been said before in the universe in the way that I’m saying it. And this is the beautiful part of us as beings, engines of creation, living life as engines of the creation of novelty. For a machine to create novelty, I do believe it has to experience its own frame. It has to have an umwelt, and then on top of that, it would have to have some cognition, some intentionality. So I see it as sensing, feeling, thinking, acting. It’s from that sort of quartet that we have happy thoughts and sad thoughts and metaphorical thoughts and Gedanken experiments, et cetera. So I do think that the possibility of a machine acquiring embodiment, extension, it already sort of has some extension because it’s operating off of the corpus of human language. But in order for it to actually reach a place where it can enact original behavior, make art, if you will, it would require a frame. That is, as John was describing, the frame problem. And Marvin Minsky and others from the 1960s sort of ran into this in the grand story of where we are currently in artificial intelligence. So that’s my answer. I think they do need a body. John, what do you think? 
Can you have a Gedanken experiment that’s disconnected? Can we blow a capacitor every now and then to make it feel pain, to teach it? Einstein, of course, is famous not only for physics; he’s also famous because he was able to say things about science and knowledge and imagination. And that’s because he was also a Spinozist. He was quite a devoted follower of the philosophy of Spinoza. And I think Spinoza is very helpful at this point, because Spinoza stood in contrast to Descartes. He argued that these two things were always inseparably found together: mind and extension, or mind and body. And he was very much interested in the way the body was. He called it the conatus. The body was self-organizing, but not just the way a fire is; it was self-organizing to seek out the ongoing conditions of its own existence, like a paramecium. That’s the difference between a paramecium and a fire, right? The paramecium will move towards something because it’s food and move away from something because it is poison. And as soon as you start talking that way, you can start to talk about very, very rudimentary senses of pleasure. But I think if we’re talking about what Einstein meant by his happy thought, he means something beyond visceral pleasure, not separate, not completely independent from it, given the argument I’ve just made, but exapted from it, the way my tongue has been exapted for speech, even though it didn’t originally evolve for speech. What we’ve done is we’ve taken that ability to care and to find pleasure or distress around that caring, and we’ve exapted it up so that we care about the true, the good, and the beautiful. And when we talk about happiness in terms of eudaimonia and not just pleasure, we’re talking about, like, how much is your life connected to what’s real? How good is your life? How much beauty is there in your life? And then connected with that is that you, unlike the paramecium, which is an agent that can detect the consequences of its behavior and alter its behavior to alter the consequences, and that’s how it differs from a thermostat, for example, but unlike a paramecium, you do more than that. You don’t just self-organize, you organize into a self. You have this sense of self. We have some good evidence that other organisms have it, and we have a lot of evidence that a lot of organisms don’t. So merely being an intelligent, even quite intelligent organism, like a cat or a dog, is probably not enough for selfhood. But I think Einstein has a place where he can judge that he’s had a profoundly true thought that is elegant and beautiful in a way, and that will be good for other people to think. This is what I mean. He cares deeply, profoundly, normatively, and this makes his life more meaningful, because these things are deeply relevant to himself as a self, as a being that is seeking its own meaningful existence. And I think all of that requires, again, for reasons we’ve already articulated and I’ve sort of repeated again, the Spinozistic idea that if cognition is not embodied, it’s not going to be capable of this profound kind of caring about what’s true, good, and beautiful. It cannot have selfhood, right, in any sense, and therefore that kind of happiness that Einstein is articulating is not going to be possible for it. 
Do you feel like, I’ve often heard the joke, I’ll say it this way, that consciousness researchers are as necessary to the theory of artificial intelligence as ornithologists are to birds, without any pejorative meant. But in what sense can we make any progress, John? I’ve been incredibly frustrated, although it’s exhilarating to talk to brilliant people like you and David Chalmers and Nick Bostrom and many, many other geniuses, but I always come down to the fundamental, the so-called hard problem. And if I couldn’t define what a planet was, it would make my job as an astrophysicist that much harder and less believable. And in fact, as the great Nobel Prize winner Lev Landau said, cosmologists are often in error, but never in doubt. How can we even think about generalized intelligence or conscious machines that can be mentored if we don’t even have a definition of what consciousness is? So respectfully, John, what do you make of this? Are we putting Descartes before the horse? Well, I want to be a little bit careful here about the philosophy of science. Yes, you can sort of define what planets are, but that’s because there were something like four centuries or more of philosophical work into what bodies are. And that didn’t just drop intuitively onto the scene; it had to be worked out and it had to be revised. And we went from thinking that bodies had to be completely inert and completely solid to, et cetera, et cetera, et cetera. And so I hesitate to say that science starts with definitions. I think you start to get definitions when your science is up and running as something that is recognized by other sciences as a science. I don’t think there is a science of consciousness right now. That doesn’t mean that we’re just messing about with common sense intuition. I think people are trying to get clear. Now, I think there are a couple of different questions. And one of the things we need to do, at least I argue, is get clear about what the questions are and what the relationships between the questions are. We have to ask, what is consciousness? And this is like the hard problem: how can something like consciousness exist in a universe that seems otherwise not conscious? There’s the function question, which is, given that so much of your intelligent behavior happens unconsciously, what’s consciousness for? What does it do? And then here’s a question, and this is where I perhaps differ from David Chalmers and others: what’s the relationship between those questions? I think, like Descartes, those two questions have to be answered in an interdependent fashion. That if you try and answer the nature question without also developing an answer to the function question, or vice versa, you’re actually spinning your wheels in an important sense. Now, what I just said is controversial, but I have argument for that. So for me, I think if we pursue the function question, we’ll be amazed to find out that there’s actually a growing consensus, a convergence, about what consciousness does. And then if we were to start from there and then try and work our way back into the nature question, we can actually say a lot about the phenomenology of consciousness from that. Now, there are two issues there. Will that be sufficient to give us a science of consciousness? That I don’t know. It’s not developed enough. The second question is, would that be enough that we could give better than just gut answers to whether these machines are conscious? 
I think that’s a reasonable proposal. And, and I’ve published and presented on this, I think the emerging convergence about what the function of consciousness is, is that it’s higher-order relevance realization that is used in situations that are high in novelty, complexity, and ill-definedness. And what we see, therefore, is that, and I understand why we have to do it analytically, but these four things are bound up together. They’re not identical, but they’re not separable. They’re inter-defining: consciousness, selective attention, working memory, and fluid intelligence. And all of those are sort of zeroing in on relevance realization as the core functionality. Once you see that, then you can start to ask, well, how do we determine if a system is actually doing relevance realization in these kinds of environments for itself? And then I think that would be a way in which we could start to make plausible arguments for the attribution of consciousness to these machines. I think you can go on after that to talk about the hard problem. For example, once you do that, you can make a distinction between types of qualia. One of the problems we have is we’ve sort of bound the notion of consciousness to adjectival qualia, the greenness of green and the blueness of blue. The problem with that is we have a lot of well-established reports, and I’m not making an argument from authority, I’m just putting my hat in the ring, of people who through various long-term meditative practices get to the pure consciousness event, in which there are no adjectival qualia. There are no colors, there are no sounds, there’s no sense of self even, there’s just pure consciousness. The adjectival qualia all go away, but the consciousness doesn’t go away. What remain are the adverbial qualia: that sense of presence, that sense of here-ness, now-ness, togetherness, but it’s hyperbolic, it’s eternity and pure now-ness and pure here-ness and pure oneness. All those things don’t go away, which means they seem to be necessary and plausibly also sufficient for consciousness. Now, the thing is, you can explain all of those in terms of relevance realization. Now-ness is temporal relevance realization, here-ness is spatial relevance realization, and togetherness is relevance realization as it’s relevant to whatever problem or task you’re trying to solve. You understand what I’m not doing: I’m not claiming to have solved the hard problem, because I already said I’m not doing that. What I’m saying is that I think we can make a lot of progress on the function question integrated with the nature question, such that we can make plausible judgments about whether or not we should attribute consciousness to these machines. So, John, you and I are both in the professoriate, which is a branch of society that garners great respect but is also almost as sclerotic as the arteries of anyone around the In-N-Out Burgers that are popular in Southern California. Academia really hasn’t changed much in a thousand years. So the year 1088 was a good year for our profession. It’s basically when the first Western university popped up in Bologna, Italy. And back then they had this practice where there’d be some guy and he would stand up with a piece of rock and he would scribble on another piece of rock, a blackboard. And that was the sage on the stage. And guess what? Now we have this magical thing called PowerPoint and we basically do the exact same damn thing. 
Is AI likely to have an impact on education beyond the online things that are popping up and so forth? Are we likely to have an AI? I’ve often said, why learn general relativity from Brian Keating when you can learn it from this guy? And there’s plenty of material in written form, LLMs that you could make. I made an LLM based just on the Feynman Lectures. And it’s pretty reasonable, except it tries to be a little too funny and use a Brooklyn accent or whatever with Feynman. But tell me, Sean, are John’s and my days numbered, in a sense? And what are the opportunities, threats, and things we should be concerned about as we as educators move forward? I think it’s going to transform into less of a broadcast situation, more of a dialogical feedback mechanism. So I run a company, or I’m the founder of a company, called Story Grid, which is a publishing company. It’s also a writing instruction and editing instruction company. So what we discovered over the last year or so is that selling the guy, you know, scratching things on the rock and doing the PowerPoints, doesn’t have much utility, right? Because I can tell you how to swim and I can give you all the instructions, but until you get in the pool, you’re not going to be able to swim. So what we discovered at Story Grid, and it’s really revolutionized and changed the way we’re looking at making people better writers and better editors, is the feedback mechanism. What’s required for a feedback mechanism, though, is to have a worldview of an optimal sort of place to aspire to. So one of the tricky things about writing and editing is that when I got into this business in 1991, I was expecting to be handed the keys to the car, which would tell me the optimization process of how people tell better stories. And I discovered there wasn’t one. So I actually had to go back to Aristotle and start from ground zero in the Western world, learning the principles of storytelling itself. Now, what we’re doing at Story Grid is we’re really stressing feedback. So instead of pushing and jamming a lot of mental, you know, stuff into somebody’s head, we’re giving bite-sized, sort of minimum viable information and saying, now you try it. And then the students come back with what they’ve tried, and we can give them a quick feedback loop instead of: in 19 weeks, you’re going to get a final exam, I’m not checking in with you until that final thing, and I’m going to give you a grade, and I’m never talking to you, which I always found to be a pretty ridiculous format for learning. So what does this have to do with AI? AI, with the right kind of instruction and software, will be able to provide reasonable feedback much faster, feedback that can be delivered to a person before they actually have an interaction with the professor. So I think that might be one of the very useful ways AI can help, in that students can get quick feedback and improve and optimize their understandings in a much faster process than they do currently. And John, as a fellow professor, what have you perceived as sort of the threats and opportunities, the so-called SWOT analysis, of AI? Are we likely to be replaced? As I say, why learn from me instead of Carl Sagan? You’re a better teacher than I am, and what you do is different from what I do. But I do feel like a lot of this could be superseded. 
I think you and I may both be involved with Peterson Academy, which is your fellow professor Jordan’s endeavor, which aims to reduce the cost of accredited degrees to a minuscule fraction of what it is and make them affordable without the campus threats and harassment that many of us have faced over recent months. What do you see as the threats to academia, or the opportunities for it as well, just restricted to what you and I do? The thing that I think I want to say first is, and I understand why people use it because it rhymes nicely, the sage on the stage, but I don’t think we’ve actually had sages on stages. I think we’ve had people giving lots of propositional information. We’ve had very little, although there has been an increasing awareness of this over, I would say, the last 20 to 30 years of pedagogy, sometimes informed by good science, sometimes not, of the idea that we should also be teaching know-how. We should be teaching skills. Skills aren’t true or false. They’re powerful or not. But in addition to that, there’s perspectival knowing, that’s knowing what it is like. I mean, knowing what it’s like is about orientation. It’s about the kind of knowing by noticing. I’m sizing up the situation in a certain way. I’m foregrounding and backgrounding. I’m sculpting salience in certain ways. And that is drawing from me certain skills and having me assume certain identities and roles. All of that is going on. And you say, well, why should I care about that? Because, you know, and this is where epistemology is now overlapping with wisdom, because we are also role modeling. We are modeling virtues, intellectual and otherwise, to our students. And that ultimately gets down to something that Sean said, the participatory knowing, the way I know myself, because I’m aspiring to be a certain kind of self, and I’m aspiring to be more rational than I am. And that requires actual transformation. And how do you do that? You can’t, like, you know, L.A. Paul literally wrote the book on transformative experience, you can’t infer your way through real transformative experiences, because you’re going to be a different person with different sets of identities and commitments and values. So how do you model that? How do you model aspiration for your students? How do you get them to aspire? And I think these questions are not at all being addressed, and these non-propositional kinds of knowing are not even being instantiated, by our current LLMs. Now, I did not say that in principle we couldn’t make machines that have procedural knowing, perspectival knowing, and participatory knowing. We could, in principle; I’m not some crypto-dualist or something like that. But those, again, are threshold points. You know, there’s no inevitability to it; we can choose it. And I think until we make those choices, we are responsible for a lot of the wisdom, the skills, the virtues, the character traits, the identity formation that go into a proper Platonic, Socratic education, which, as Sean said, come out much more in Socratic dialogue than they come out in just presenting semantic information. And I think if you talk to the students, they write to me. You know, I’ve been teaching 30 years at the University of Toronto, and I get students who email me 10, 15 years later. And they say, you know, most of what you taught me propositionally, I don’t remember it, it’s obsolete. But you taught me how to think. You taught me how to care. You’re a role model of what it’s like. 
And I’m sorry, this sounds self-promotional, but I’m trying to make a point, right? You are a role model of what it’s like to try and be a good thinker, a responsible scientist. And that matters a lot more. And that transfers to their lives, even if they haven’t taken up the life of a scientist. I think if we were genuinely sages on the stage, rather than just proposition machines, we would be doing something that won’t be replaced by these machines, unless we go through certain threshold choices as a society. As we wrap up, I’m going to have to meet with some real-life, flesh-and-blood students in a few minutes. So we’ll schedule the next episode when the final volume in the oeuvre is complete and I’ve had a chance to read it. But I guess I want to conclude with a question for each one of you individually. And, you know, I can’t resist when I have a literary, publishing, and editing expert like Sean, one of the foremost experts in such things. I feel a little bit of guilt, and maybe, John, your psychological training will come into play. Every night, I used to read books to my kids. And now I just, you know, push the voice recorder, and I say, tell me a story about a girl named this and a boy named that, and they have this adventure in a magical unicorn forest, and then the unicorn goes to that, you know. And they’re really engaged. And it’s kind of repetitive; the novelty, I think, is the appeal, that their names are mentioned, unlike, you know, Blueberries for Sal or reading some Dr. Seuss special like Green Eggs and Ham, that they’re actually in the story. But it kind of feels, you know, very pre-digested. It’s almost always the same, and kind of playing on tropes, et cetera. A, should I feel guilty for resorting to such tools? But B, is there a new genre, you know, kind of tailored audiobooks, tailored paperbacks, whatever, for all ages? Is this on the horizon in the publishing industry? No, you shouldn’t feel guilty, because while you are constructing the story, I believe that the things that are starting to bore you are probably places where you need to innovate. And so you probably came up with some pretty cool stuff at the beginning. And now you’re kind of hitting a wall, and you’re falling back on tropes and, you know, cliches that you’ve heard in your past. And so should you be guilty about that? Well, that’s kind of up to you. But this is kind of the problem that people have when they construct stories: they think it’s going to be magical, fun, and easy, because they’ve been exposed to stories their entire lives. And boy, isn’t it wonderful that I was able to read The Hobbit and get such a wonderful experience from it. It must have taken Tolkien, what, probably at least a week to write that, right? And they don’t understand that it took that person so long to create that story, and so many fits and starts, that it takes quite a while to write a very resonant story, because you need to confront tropes that are sort of, you know, little waystations from mythology that are pushing you to go beyond them. And that requires some reflection on your own life, saying, well, what would it have been like if I were the princess and the unicorn did such-and-such, instead of going, oh, what happens next? Well, I guess the unicorn breaks her leg. So the thing about storytelling is that it is a serious business. 
It’s a serious thing. Whether or not you ever publish anything, just running the script in your mind and thinking through the what-would-happen-ifs is an extraordinarily wonderful thing where lots of insight cascades happen. What will happen in the future for storytelling, it’s my dream that every single week, maybe every day, a new Hobbit comes out, a brand new Pride and Prejudice, because people will be empowered with artificially intelligent helpers that can enable them to think about what could push them past the tropes and cliches that are coming into their minds. So the future of storytelling is as bright as the sun can possibly be, because we have not even gotten close to the edges of just how powerfully we can use stories to communicate what it means to be, you know, members of society, to be ourselves, and also to be members of the universe. So I have great hopes for the future of stories. Book publishing and things like that are going to transform. It’s like a threshold. I can’t predict what will happen, but something’s going to happen that’ll be pretty magical, I think, and pretty wonderful. And John, just finishing up with your field of expertise, a similar kind of thematic pursuit here. You know, looking forward, what do you think are the most exciting or promising intersections between AI and psychology? I had a Commodore 64 in the 80s. I was not wealthy enough to afford a tape drive. And so my older brother, when he was particularly pissed off at me, would come into the room and turn off the switch. And it would always be right as I finished programming Eliza or, you know, Snake or something like that. So I got a lot of practice with what’s called copy work, you know, redoing the classics. But I remember Eliza, and I would say ChatGPT, when I’ve tried to prompt it as a therapist, is not orders of magnitude better than Eliza. Is that more a limitation of mine, my lack of growth in the intervening 40 years? Or do you see promise for, you know, a dedicated app, a true therapist that’s not going to give these bland, you know, responses like, well, how does that make you feel? Unless that’s the cutting edge in psychology. I don’t know, I don’t really see therapists. But tell me, John, what’s the future of the intersection of AI and psychotherapy, perhaps psychology in general? Given a lot of the arguments we’ve made, there are two things that good storytellers are probing artistically. Tolkien built this entire world behind The Hobbit. And of course, you also have, you know, you mentioned Austen, you have people who build selves in a very powerful way. And these machines are not possessed of selfhood and they are not situated in worlds. And so they can’t get to those depths. And for similar reasons, to the degree that therapy is pushing towards selfhood and a self that is situated within a lived world, and I use the term now the way it should be used, existential issues, these machines are not going to be capable of good therapy. We’ve recently published a paper with Gary Hovnissian about how you can probably explain the Big Five personality theory in terms of relevance realization. He’s been integrating that, and he just got his PhD, about how you can integrate this into psychotherapy in terms of ways in which human beings try to get an optimal grip on their sense of what a world is. 
And to the degree that these machines don’t have those senses, because they don’t actually instantiate them, I don’t think they’re going to be capable of the deepest, transformative kinds of psychotherapy. Now, could we use them for sort of triage and diagnostic purposes? I expect this is going to happen very soon. And I expect that’s not just going to be in psychotherapy. I think we’re even going to see that in physical medicine. We’re going to see a lot of triaging and initial diagnosis being done by these machines. I think that’s happening very, very soon. I think there’s a dark side to the psychology of this. I mean, I think way before these machines become viable selves within a world, in, like Sean said, an umwelt, a Heideggerian world, we’re going to be facing the issues around threats to our own personhood and selfhood. Deepfakes are getting very good. As we start to align deepfakes with LLMs, which is just around the corner, we’re going to have a lot of serious issues. And people are going to discover in very many ways that, because there’s a wisdom famine, they have not developed the skills, sensibilities, sensitivities, and character traits that are needed to deal with the tsunami of shit that’s going to be flooding over us when this happens. And I think that’s going to be especially important in a way that people are not yet foreseeing. In terms of academic psychology, artificial intelligence has been playing a crucial role in cognitive science since its inception. Newell and Simon were as much pivotal figures in artificial intelligence as they were in psychology and problem solving. My own work has been, and continues to be, profoundly influenced. I mean, how could I not be? Geoff Hinton was here at the University of Toronto, and he’s one of the godfathers of this AI, right? And so I think, to the degree that psychology has paid attention to what we’ve been learning from machine learning, these machines are going to have an impact. I think a lot of the stuff that gets done now is going to be seen as not being very helpful. Let me just give you one quick example. What role will neuroscience play? Because let’s say we get viable, human-equivalent artificial general intelligence that’s on a substrate completely other than an organic brain. I think it has to be autopoietic for reasons I’ve already argued, but it doesn’t have to have our evolutionary history, and it doesn’t have to have our particular substrate architecture. What does that tell us? Is neuroscience still relevant? Is it important? Should psychology be trying to ground itself in an understanding of the brain? I’m not going to try and answer that right here right now, but these are the questions that I think are going to become very pronounced. They were already there in the philosophers, but nobody paid very much attention to the philosophers talking about this stuff, except I did, because I teach it in my Intro to Cog Psy course. But I think that will no longer be a merely philosophical endeavor. We will have to answer some very serious questions as psychologists. When we’re studying intelligence, are we studying something that we think is actually seriously brain-dependent? If not, what is it we are studying, and what are we doing that’s different from what the machine learning person is doing, and why and how? I think these questions are going to come to the fore very, very quickly. Well, guys, this has been a true delight. 
I’ve talked to two experts and gotten the synoptic overview of this project, this massive project that you guys are embarking on. I know my audience is going to love it. Tell me, how can folks get to know more about your work? I’ll let Sean go first. Yeah, Sean, where’s the best place for people to contact you? Storygrid.com? Yes, Storygrid.com, and also my business partner and CEO, Tim Grahl, has a channel on YouTube, and he does all kinds of instruction from our corpus of intellectual endeavor, whatever you want to call it. But yeah, Storygrid.com is where it all is. For my work, if you’re interested in my academic work, you can just, of course, Google Scholar me. If you’re interested in the kind of work that I’m doing that’s trying to bridge between these scientific issues and existential and spiritual issues, then I have a YouTube channel. I have Awakening from the Meaning Crisis. I have After Socrates. I have the Cognitive Science Show. I have a bunch of these video essays about AI. If you’re interested in the cultivation of wisdom, we have Awakening to Meaning, which is a platform where you can go to learn mindfulness practices. You can learn dialogical practices. You can learn imaginal practices. If you’re interested in the work in general that I do, there’s the Vervaeke Foundation, vervaekefoundation.org. I do not own it. I only have legal veto rights over its intellectual content. And that is where you can go if you’re interested in supporting this work via Patreon or donation or volunteering. You can also see the kinds of things we’re doing, like Awakening to Meaning, et cetera. So there are a lot of ways to connect with me, depending on how you wish to connect with me. Guys, Mentoring the Machines, pick it up in the format most convenient to you. I like the hardcover, in case I need to be more polite to these machines and say thank you and please when I give them prompts. Gentlemen, this has been a great privilege, and I hope to talk to you guys again soon. Thank you so much, Brian. Wonderful, wonderful. Excellent questions. Thank you, guys.