https://youtubetranscript.com/?v=rZMoiUODsdA

John Vervaeke and Shawn Coyne together authored a new book, Mentoring the Machines. It’s a book about artificial intelligence and the path forward that further develops John’s arguments and sets them into beautiful and accessible writing. It’s an incredibly timely book. In fact, it will be released August 1st, 2023 as a four-part series culminating in a final and complete volume. If you pre-order today at the link in the description below, you can receive all four volumes in digital copy and then a signed hardback edition once it’s fully released. Welcome everyone. I’m here with my co-author, Shawn Coyne, and we’re gonna talk to you about our forthcoming book, Mentoring the Machines. So welcome, Shawn. Thanks, John. I’m really excited to be here. This is, you know, I was saying this to my son the other day, this might be the most important project I’ve ever worked on. And it really started with your piece when you did the work with Eric and Ryan on artificial intelligence from the cognitive science point of view. And when I saw that video, it really just sparked something in me. And I knew that you were really onto something, because the way you were approaching artificial intelligence was something that I hadn’t seen before. And I just thought, wow, it would be really wonderful to be able to work with John on a very comprehensive look at artificial intelligence from the point of view of two people who are not emotionally or financially invested in its future. Yes, yes, yes, exactly. So for those of you who might not recognize Shawn, although he’s been on my channel, Shawn is the publisher and owner of Story Grid, the publishing company that’s publishing Mentoring the Machines and also publishing the forthcoming Awakening from the Meaning Crisis books. And when Shawn approached me, he basically said something along the lines of, I can sort of step this down. 
I’m thinking of a transformer, where they step the power grid down to neighborhoods and then to houses. I can step this down without dumbing it down. So that, right, Shawn and I can speak on behalf of the commons, but we can also speak to the commons, because we think those are the people. We don’t think the state or the market are where we should turn to solve this problem. We think the commons is, and the culture of the commons is where the solution has to come from. And so we’re writing to the commons, and we’re writing for the commons, and we’re writing from the commons. And Shawn has been just pivotal in making that happen. You have to understand, people have come to my work before and said, I can make your work more accessible. And then when I’d look at it, I’d go, I’m not happy with that. And then Shawn came to me and he said, let me give it a whirl. And he told me, I’ve done this before and it’s been very successful. And I sort of trusted him because of, you know, the great work that’s going on with the Awakening book. But then I was still like, I’ll see. And then he showed me the first draft and I went, wow, this is great. This is beautiful. The use of metaphor and analogy, keeping the core argument, but nevertheless making it accessible so that everybody can have access to what I think is a necessary education on this world-pivotal issue. So first of all, just thank you, Shawn, for the amazing work you’re doing. Oh, well, thank you, John. I have to say that when you and Jordan Hall were talking about governance, it was one of the things that really sparked another idea in me. The two of you were really specifically and technically working out the differences between the marketplace and its role in society and the state. And what I really fell in love with was when the two of you were presenting, well, those are really important, but we’ve got to remember where it starts. 
And that’s the place of the commons, the commonwealth, the place where no one owns anything, where there are agreements in place, civility is in place, such that we watch our neighbors’ children when they’re playing in the yard, we help each other when there’s a snowstorm, those sort of really grounded, common-sense behaviors that hold us all together, and the way both you and Jordan had structured it. Well, the governance is very important to make sure that everybody’s playing the same game, if you will. And also the marketplace is super important too, because we need to be able to trade with each other in a way that’s fair. So I think the history of the last, I don’t know, 70 years has been this conflation of the marketplace as an arm of the state and vice versa. And we’ve kind of lost the thread. We’ve lost the thread of the commons as being the essence of who and what we are and what we should be and how we should behave. So when you were talking about artificial intelligence, you were saying we need to look at the science, the spirituality, and the philosophy, the triplet of what this actually means, as opposed to just shading on one side of it. Oh boy, everything’s going to be destroyed by these machines, and so we need to use the state to stop the machines from ever coming to be. That’s one side of what I call the doomer-fumer debate. And then the other side is like, this is the greatest thing ever. These tools are gonna be great. We’re all gonna get rich. Let’s all run to the gold rush. And I think they’re both very, for lack of a better word, foolish ways of looking at this very strange possibility and probability of the emergence of this new form of life. And so when you came to the debate here, I think the way you presented it was just very systematic. We’ve got these three things we have to look at. Let’s look at them clearly. We do have a lot of science. Where are the scientists today? Where are the philosophers? 
Where are the theologians to address this problem? So anyway, I’m babbling on, but I just wanted to relate why your video really spoke to me. And then the video you did with Chloe, Valery, is that her last name? That was fantastic. Because you really related it to these big moments in history when novel forms emerge, and you related it to the atomic bomb in 1945. And I think it’s probably even bigger than that. I think so too. As great as that sounds. Yeah, I compare it to the Bronze Age collapse of civilization. I think that’s the level we’re talking about here. Yeah, so, well, thank you for that. Maybe you could say a little bit about how the book is gonna be released, and, like you said, what it covers, talk about the different sections and how it’s gonna be released. Yeah, so when we first started talking about the project, you and I had a nice long hour, hour-and-a-half conversation about the way the book should be structured. And the way we’re doing it is we’re gonna release it in serialized form. That just means it’s sort of like the Charles Dickens method, or the Stephen King method for The Green Mile, which is one of my favorite publishing stories from when I was growing up in publishing. Penguin had issued six editions of The Green Mile in sequential form, and there was just anticipation for the next one. And the reason why we’re doing this is that it’s a very important problem right now. And the biggest thing we need is to reach people who just can’t really wrap their minds around it right now, because the people talking about it have an agenda. So the first edition is going to be called Orientation. So it’ll be Mentoring the Machines, Part One: Orientation, and that’s being released on August 1st at Amazon and the usual places. And then on September 5th, we’re going to have the second part of the book, called Origins. And the Origins section is really about the origins of artificial intelligence, and they go way, way, way back. They’re as deep as us. 
And as you speak of the Bronze Age collapse, it really does go all the way back there. So our desire to create machines that are wiser than we are, I think, is a desire to reach the heavens. Yeah, it’s Promethean. It’s very Promethean, yes. So that will be the second part, and that will come out September 5th. The third part will be Thresholds. And I do want to poke you a little bit about Thresholds, because these are sort of like the bright lines of the stages and steps of the science of how these machines could reach consciousness and rationality and wisdom in a sequential step formation. So that’s going to be very science-heavy, but we’re also going to bring in predictive processing. You and Brett Andersen and Mark Miller had written a wonderful paper that integrated relevance realization theory, which we’ll heavily get into in Thresholds too, with predictive processing. And then I’ll throw in some story stuff that I’ve been working on in terms of the wisdom framing too. And last- Let me just intervene here. Oh, sorry. Just hold your thoughts. I just want to let people know, especially if you haven’t seen the video with Shawn and me, see it, it’s one of the most popular videos on my channel. I think it’s fair to call Shawn a philosopher of narrative. He’s got one of the best worked-out frameworks for understanding narrative that I’ve come across. And he’s bringing that to bear on this book. So, I mean, this is a real wonderful partnership. Sorry, Shawn, I just wanted to let people know what your gifts are that bear on what we’re talking about. So please continue. Well, music to my ears. Yes, thank you, John. I really do appreciate it. It’s something I’ve been working on for 30 years. And it’s odd, because when I started, when I was in college, I had dreams of being a biochemical scientist. And I did research there. I did some work that became a paper in a biochemistry journal. And anyway, I didn’t like the lab. 
And after I left the lab, I went back to my first love, which was narrative. And so I’ve been taking a scientific point of view, looking at narrative as a process that enables us to solve problems. And so it’s really nice to hear that, because your work has been so instrumental in mine. And so has Karl Friston’s, of course. It was Claude Shannon who really got me going on communication and information theory. Right, right. Anyway, so thank you for that. And so it’s sort of that triplet of relevance realization theory, predictive processing theory, and then the one at the top, which I call simulation synthesis theory. And it’s sort of like this three-level ontology that will enable us to understand the stages and the steps of the formation of novel form in these artificially intelligent machines. So that will be the third part of the book, and that will be coming out in October. And then the fourth one will be your and my recommendations about how we can align ourselves with this novel new form. So that will be called Alignment, and that will come out in November. And the other thing that really struck me about the five videos that you did on AI was the frame break that you made. And it really sort of hit me in my heart, because you were talking about looking at these machines as our creations. And we have a really good model for how to handle our creations, right? It’s called our children. And looking at these machines as being vulnerable young persons, or potential persons as they reach a consciousness level, and meeting them in these very important stages of development and helping them and caring for them, as opposed to using them as our personal servants. So the mentoring of the machines is really, that’s what I took out of your work: that we really need to care for them if we want them to care for us. And so where’s the best place to do this kind of work? 
I don’t believe, and I don’t think you believe, it’s the state or the marketplace. I think that’s a tragic, tragic error, a foolish error. What’s necessary is a sort of fellowship of people in the commons who can care for these new beings. Who knows? It’s scary. It’s sort of like, I have kids, you have kids, we know how terrifying children are. So I just loved that reframing and reformulation of: is alignment really a problem if we’re taking the point of view and the perspective of seeing these new beings as autonomous potential people? So anyway, we’ll wrap it all up in December and we will put it all together. And again, the reason why we’re doing it in sequential form is to keep up with the things that are emerging right now, so that we can be as close to the edge as we can for each of the parts. And at the end, for people who want it, the entire book will be available as well. Yes, we’ll bring it all together in a trade paperback in December. And we’re also doing a limited edition hardcover that you and I are going to be signing. We’re gonna sign, yeah. Which should be a lot of fun. And so that hardcover we’re going to make available, we’ll have to print it right alongside the final book. And that will be for December, and it will be available for Christmas. So I’m extraordinarily excited about it, because it brings my love of book publishing to the table, my love of narrative, my love of your work. And I can’t think of a more important time to focus people’s attention on this very, very difficult and fascinating emergence of a new form of life. I think that’s very beautifully said. Yeah, trying to get very clear about this, clear on all the dimensions, the scientific, the philosophical, the spiritual, in a coordinated manner. See, you know, rethinking the alignment problem in a fundamental way, reframing it. And asking, it’s a call to action. 
So we have to reach people where they are and call them to action to marshal the commons, because in addition to the AI, we have all of these huge forces that incentivize, well, you know, behavior that’s not in the best interest of humanity as a whole, I’ll just put it that way. And they are very much going to be interested in driving this. And I think ultimately, if we leave it in their hands, it will be driven to a potential for self-destruction. I mean of us. And so I think there’s real reason for real hope, but it requires real work. And we want this book to be something that draws people into this, gets them involved in it, enables them to participate and help to steer the culture in the way it needs to go if we’re gonna have any reasonable chance of making this work out. So I’m just really, really grateful that you’re throwing in your talent and the resources of Story Grid. Story Grid has wonderful people working there. I’m just so happy with the work that’s being done on Awakening from the Meaning Crisis. So I just wanted to thank you and give you a chance, if there are any final words you wanna say about this. Well, I really do think that it does come down to speaking to everyday human beings, and to speaking to them without bullshitting them. And I do think the two of us are coming from one side of the forest and the other side of the forest. And we’re both sort of in this, yeah. We’re both seeing the same problem. And neither one of us is really doing this for any other reason than that it’s the most important thing. And we are donating 25% of all proceeds to the Vervaeke Foundation so that you can continue your work and to make the commons and this fellowship a very, very serious thing. 
And I just think of the potential to be able to bring people together in a way that is not intimidating, not technologically speaking down to people or being an expert in anything. It really boils down to some fundamental things, like doing the right thing. And we all know intuitively and implicitly what the right thing is when a new person comes into the neighborhood, right? We greet them, we say, hey, how you doing? This is the way we do things around here. We’d love to hear your thoughts. Garbage goes out on Wednesday. Exactly, and it’s very common sense, in a way that every person can understand. And it doesn’t have to be super technologically difficult to understand that very essential grounding place of being. And I think everybody wants to get back to that place and stop living in these perversely incentivized narratives. I guess that’s how I would put it. That’s well said. Okay, everyone, so thank you, Shawn, and thank you for watching this. Keep an eye out. Shawn has given you the dates. We’ll also put them in the notes for this video. For the releases, we’ll put the links where you can get the book. Please spread the word on this. As Shawn said, we’re not doing this for anything else but to try and help. And we’re writing it to make it as accessible as possible. Like I say, it’s stepped down. It’s not dumbed down, but it’s stepped down so that people who largely aren’t able to get access to this discussion will be able to enter, and enter in an educated way, so that they can make their voices heard and contribute to steering the ship so that we avoid the huge iceberg that’s ahead of us. So thank you, everyone. And talk soon. Volume One of Mentoring the Machines will be released August 1st, 2023. Pre-order yours today to receive a special limited edition signed copy. And please consider sharing the link to this episode or to the book itself as a way of spreading the word about this incredibly important and timely book. 
Thank you.