https://youtubetranscript.com/?v=0ll5c50MrPs
So the Hebrews created history as we know it. You don't get away with anything. And so you might think you can bend the fabric of reality and that you can treat people instrumentally and that you can bow to the tyrant and violate your conscience without cost. You will pay the piper. It's going to call you out of that slavery into freedom, even if that pulls you into the desert. And we're going to see that there's something else going on here that is far more cosmic and deeper than what you can imagine. The highest ethical spirit to which we're beholden is presented precisely as that spirit that allies itself with the cause of freedom against tyranny. Yes, exactly. I want villains to get punished. But do you want the villains to learn before they have to pay the ultimate price? That's such a Christian question. That has to do with attention, by the way. It has to do with a subsidiary hierarchy, like a hierarchy of attention, which is set up in a way in which all the levels can have room to exist, let's say. And so, the new systems, the new way, let's say the new urbanist movement, similar to what you're talking about, that's what they've understood. It's like we need places of intimacy in terms of the house. We need places of communion in terms of parks and alleyways and buildings where we meet and a church, all these places that kind of manifest our communion together. So, those existed coherently for long periods of time. And then the abundance post-World War II and some ideas about what life could be like caused this big change. And that change satisfied some needs, people got houses, but broke community needs. And then new sets of ideas about what's the synthesis, what's the possibility of having your own home, but also having community, not having to drive 15 minutes for every single thing. And some people live in those worlds and some people don't. Do you think we'll be smart? So one of the problems is… Well, why were we smart enough to solve some of those problems? Because we had 20 years. But now, one of the things that's happening, as you pointed out earlier, is we're going to be producing equally revolutionary transformations, but at a much smaller scale of time. What's natural to our children is so different than what's natural to us, but what was natural to us was very different from our parents. So some changes get accepted generationally really fast. So what's made you so optimistic?

Hello everyone watching on YouTube or listening on associated platforms. I'm very excited today to be bringing you two of the people I admire most intellectually, I would say, and morally for that matter, Jonathan Pageau and Jim Keller, very different thinkers. Jonathan Pageau is a French-Canadian liturgical artist and icon carver known for his work featured in museums across the world. He carves Eastern Orthodox icons, among other traditional images, and teaches an online carving class. He also runs a YouTube channel, The Symbolic World, dedicated to the exploration of symbolism across history and religion. Jonathan is one of the deepest religious thinkers I've ever met. Jim Keller is a microprocessor engineer known very well in the relevant communities and beyond them for his work at Apple and AMD among other corporations. He served in the role of architect for numerous game-changing processors, has co-authored multiple instruction sets for highly complicated designs, and is credited for being the key player behind AMD's renewed ability to compete with Intel in the high-end CPU market.
In 2016, Keller joined Tesla, becoming vice president of autopilot hardware engineering. In 2018, he became a senior vice president for Intel. In 2020, he resigned due to disagreements over outsourcing production, but quickly found a new position at Tenstorrent as chief technical officer. We're going to sit today and discuss the perils and promise of artificial intelligence, and it's a conversation I'm very much looking forward to. So welcome to all of you watching and listening. I thought it would be interesting to have a three-way conversation. Jonathan and I have been talking a lot lately, especially with John Vervaeke and some other people as well, about the fact that it seems necessary for us to view, for human beings to view the world through a story. In fact, when we describe the structure that governs our action and our perception, that is a story. And so we've been trying to puzzle out, I would say to some degree on the religious front, what might be the deepest stories. And I'm very curious about the fact that we perceive the world through a story, human beings do, and that seems to be a fundamental part of our cognitive architecture and of cognitive architecture in general, according to some of the world's top neuroscientists. And I'm curious, and I know Jim is interested in cognitive processing and in building systems that in some sense seem to run in a manner analogous to the manner in which our brains run. So I'm curious about the overlap between the notion that we have to view the world through a story and what's happening on the AI front. There's all sorts of other places that we can take the conversation. So maybe I'll start with you, Jim. Do you want to tell people what you've been working on and maybe give a bit of a background to everyone about how you conceptualize artificial intelligence? Yeah, sure. So first, I'll say technically I'm not an artificial intelligence researcher. I'm a computer architect. And I'd say my skill set goes from somewhere around the atom up to the program. So we make transistors out of atoms, we make logical gates out of transistors, we make computers out of logical gates, we run programs on those. And recently, we've been able to run programs fast enough to do something called an artificial intelligence model or a neural network, depending on how you say it. And then we're building chips now that run artificial intelligence models fast. And we have a novel way to do it at the company I work at. But lots of people are working on it. And I think we were sort of taken by surprise by what's happened in the last five years, how quickly models started to do interesting and intelligent-seeming things. There's been an estimate that human brains do about 10 to the 18th operations a second. It sounds like a lot. It's a billion billion operations a second. And a little computer, the processor in your phone, probably does 10 billion operations a second-ish. And then if you use the GPU, maybe 100 billion, something like that. And big modern AI computers like OpenAI uses, or Google or somebody, they're doing like 10 to the 16th, maybe slightly more, operations a second. So they're within a factor of 100 of a human brain's raw computational ability. And by the way, that could be completely wrong. Our understanding of how the human brain does computation could be wrong. But lots of people have estimated, based on number of neurons, number of connections, how fast neurons fire, how many operations a neuron firing seems to involve.
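To put the round numbers Jim cites side by side, here is a quick back-of-the-envelope sketch in Python. The figures are the rough, order-of-magnitude estimates quoted in the conversation, not measurements, and the comparison inherits all the uncertainty Jim flags.

```python
# Rough orders of magnitude quoted in the conversation (operations per second).
# These are illustrative estimates only, not measured values.
human_brain = 1e18   # roughly a billion billion ops/s, by some estimates
phone_cpu   = 1e10   # roughly 10 billion ops/s
phone_gpu   = 1e11   # roughly 100 billion ops/s
big_ai_rig  = 1e16   # large training clusters, roughly

print(f"Brain vs. big AI computer: ~{human_brain / big_ai_rig:.0f}x")  # ~100x
print(f"Brain vs. phone CPU:       ~{human_brain / phone_cpu:.0e}x")   # ~1e+08x
```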
I mean, the estimates range by a couple of orders of magnitude. When our computers got fast enough, we started to build things called language models and image models that do fairly remarkable things. So what have you seen in the last few years that's been indicative of the change that you describe as revolutionary? What are computers doing now that you found surprising because of this increase in speed? Yeah, you can have a language model read a 200,000-word book and summarize it fairly accurately. So it can extract out the gist? The gist of it. Can it do that with fiction? Yeah. Yeah, and I'm going to introduce you to a friend who took a language model and changed it and fine-tuned it with Shakespeare and used it to write screenplays that are pretty good. And these kinds of things are really interesting. And then we were talking about this a little bit earlier. So when computers do computations, a program will say add: a = b + c. The computer does those operations on representations of information, ones and zeros. It doesn't understand them at all. The computer has no understanding of it. But what we call a language model translates information like words and images and ideas into a space where the program, the ideas and the operation it does on them are all essentially the same thing.

We'll be right back with Jonathan Pageau and Jim Keller. First we wanted to give you a sneak peek at Jordan's new documentary, Logos & Literacy. I was very much struck by how the translation of the biblical writings jump-started the development of literacy across the entire world. Literacy was the norm. The pastor's home was the first school. And every morning it would begin with singing. The Christian faith is a singing religion. Probably 80% of scripture memorization today exists only because of what is sung. This is amazing. Here we have a Gutenberg Bible printed on the press of Johann Gutenberg. Science and religion are seen as opposing forces in the world, but historically that has not been the case. Now the book is available to everyone. From Shakespeare to modern education and medicine and science to civilization itself. It is the most influential book in all of history and hopefully people can walk away with at least a sense of that.

Right, so a language model can produce words and then use those words as inputs. And it seems to have an understanding of what those words are. Which is very different from how a computer operates on data. About the language models. My sense of, at least in part, how we understand a story is that maybe we're watching a movie, let's say, and we get some sense of the character's goals and then we see the manner in which that character perceives the world. And we, in some sense, adopt his goals. Which is to identify with the character. And then we play out a panoply of emotions and motivations on our body because we now inhabit that goal space. And we understand the character as a consequence of mimicking the character with our own physiology. And you have computers that can summarize the gist of a story, but they don't have that underlying physiology. Well, first of all, it's a theory that your physiology has anything to do with it. You could understand the character's goals and then get involved in the details of the story. And then you're predicting the path of the story and also having expectations and hopes for the story.
And a good story kind of takes you on a ride because it teases you with doing some of the things you expect, but also doing things that are unexpected. And possibly that creates emotional… Yeah, it does. So in an AI model, you can easily have a set of goals. So you have your personal goals. And then when you watch the story, you have those goals. You put those together. Like how many goals is that? Like the story's goals and your goals, hundreds, thousands? Those are small numbers, right? Then you have the story. The AI model can predict the story too, just as well as you can. How do you… And… As the story progresses, it can look at the error between what it predicted and what actually happened and then iterate on that. So you would call that emotional excitement, disappointment… Anxiety. Anxiety. Yeah, definitely. Well, a big part of what anxiety seems to be is discrepancy. Like some of those states are manifested in your body because you trigger hormone cascades and a bunch of stuff. But you also can just scan your brain and see that stuff move around. Right. And the AI model can have an error function and look at the difference between what it expected and not. And you could call that the emotional state. Yeah, well, I just talked with the… That's speculation, but… No, no, I think that's accurate. But we can make an AI model that could predict the result of a story probably better than the average person. So one of the things… Some people are really good at… They're really well educated about stories or they know the genre or something. But these things… And what they see today is that the capacity of the models is limited: if you ask one to start describing something at length, it will make sense for a while, but it will slowly stop making sense. But that's possible. That is simply the capacity of the model right now. And the model is not well grounded enough in a set of, let's say, goals and reality or something to keep making sense for long. So what do you think would happen, Jonathan? This is, I think, associated with the kind of things that we've talked through to some degree. So one of my hypotheses, let's say, about deep stories is that they're meta-gists in some sense. So you could imagine 100 people telling you a tragic story, and then you could reduce each of those tragic stories to the gist of the tragic story. And then you could aggregate the gists and then you'd have something like a meta-tragedy. And I would say the deeper the gist, the more religious-like the story gets. And that's part of… That idea is part of the reason that I wanted to bring you guys together. I mean, one of the things that what you just said makes me wonder is: imagine that you took Shakespeare and you took Dante and you took the canonical Western writers and you trained an AI system to understand the structure of each of them. And then now you could pull out the summaries of those structures, the gists, and then couldn't you pull out another gist out of that? So it would be like the essential element of Dante and Shakespeare. I want to hear what Jonathan's done so far. So here's one funny thing to think about. You use the word pull out. So when you train a model to know something, you can't just look in it and say, what is it? No, you have to query it. Right? You have to ask. Right. Right. What's the next sentence in this paragraph? What's the answer to this question? There's this thing on the internet now called prompt engineering. And it's the same way I can't look in your brain to see what you think.
I have to ask you what you think. Because if I killed you and scanned your brain and got the current state of all the synapses and stuff, A, you'd be dead, which would be sad. And B, I wouldn't know anything about your thoughts. Your thoughts are embedded in this model that your brain carries around. And you can express it in a lot of ways. And so how do you train? This is my big question. Because the way that I've been seeing it until now is that artificial intelligence, it's based on us. It doesn't exist independently from humans. And it doesn't have care. The question would be, why does the computer care? That's not true. Why does the computer care to get the gist of the story? Well, yeah. So I think you're asking kind of the wrong question. So you can train an AI model on the physics and reality and images of the world just with images. And there are people who are figuring out how to train a model with just images. But the model itself still conceptualizes things like tree and dog and action and run. Because those all exist in the world. And you can actually train. So when you train a model with all the language and words, so all information has structure. And I know you're a structure guy from your video. So if you look around you at any image, every single point you see makes sense. Yeah. It's a teleological structure. It's a purpose-laden structure. So this is something we talk about. So it turns out all the words that have ever been spoken by human beings also have structure. And so physics has structure. And it turns out that some of the deep structure of images and actions and words and sentences are related. There's actually a common core of, imagine there's a knowledge space. And sure, there's details of humanity where they prefer this accent versus that. Those are kind of details. But they're coherent in the language model. The language models themselves are coherent with our world ideas. And humans are trained in the world just the way the AI models are trained in the world. Like a little baby, as it's looking around, it's training on everything it sees when it's very young. And then its training rate goes down and it starts interacting with what it's learning and interacting with the people around it. But it's trying to survive. It's trying to live. The infant or the child is. The weights and the neurons aren't trying to live. What they're trying to do is reduce the error. So neural networks generally are predictive things. Like what's coming next? What makes sense? How does this work? And when you train an AI model, you're training it to reduce the error in the model. And if your model's big… OK, let me ask you about that. Well, first of all… So babies are doing the same thing. They're looking at stuff go around and in the beginning their neurons are just randomly firing. But as it starts to get object permanence and look at stuff, it starts predicting what will make sense for that thing to do. And when it doesn't make sense, it'll update its model. So basically it compares its prediction to the events and then it will adjust its prediction. So in a story prediction model, the AI would predict the story, then compare what actually happens to its prediction, and then fine-tune itself slowly as it trains itself. Or, in reverse, you could ask it to, say, given a set of things, tell the rest of the story, and it could do that. And the state of it right now is there are people having conversations with this that are pretty good.
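As a concrete illustration of the predict-compare-update loop described just above, here is a minimal toy sketch in Python. The data stream, the single-parameter "model," and the learning rate are all invented for illustration; a real network does the same thing across millions of weights.

```python
# Toy illustration of prediction-error learning: a one-parameter model
# predicts the next value in a stream and updates itself from its error.
# The data and learning rate are made up purely for illustration.

stream = [2.0, 2.1, 1.9, 2.0, 2.2, 2.1, 2.0]
weight = 0.0          # the model's single "synapse": its guess for the next value
learning_rate = 0.3

for observed in stream:
    prediction = weight                # inference: what the model expects next
    error = observed - prediction      # discrepancy between expectation and reality
    weight += learning_rate * error    # "training": nudge the model to reduce the error
    print(f"predicted {prediction:.2f}, saw {observed:.2f}, error {error:+.2f}")
```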
So I talked to Karl Friston about this prediction idea in some detail. And so Friston, for those of you who are watching and listening, is one of the world's top neuroscientists. And he's developed an entropy enclosure model of conceptualization, which is analogous to one that I was working on, I suppose, across approximately the same time frame. So the first issue, and this has been well established in the neuropsychological literature for quite a long time, is that anxiety is an indicator of discrepancy between prediction and actuality. And then positive emotion also looks like a discrepancy reduction indicator. So imagine that you're moving towards a goal and then you evaluate what happens as you move towards the goal. And if you're moving in the right direction, what happens is, you might say, what you expect to happen. And that produces positive emotion. And it's actually an indicator of reduction in entropy. That's one way of looking at it. And the point is… Yeah, you have a bunch of words in there that are psychological definitions of states. But you could say there's a prediction and an error in the prediction. Yes. And you're reducing error. Yes. But what I'm trying to make a case for is that your emotions directly map that: both positive and negative emotion look like they're signifiers of discrepancy reduction, on the positive and the negative side. But then there's a complexity that I think is germane to part of Jonathan's query, which is that… So the neuropsychologists and the cognitive scientists have talked a long time about expectation, prediction and discrepancy reduction. But one of the things they haven't talked about is it isn't exactly that you expect things. It's that you desire them. You want them to happen. Because you could imagine that there's, in some sense, a literally infinite number of things you could expect. And we don't strive only to match prediction. We strive to bring about what it is that we want. And so we have these preset systems that are teleological, that are motivational systems. Well, I mean, it depends. If you're sitting idly on the beach, like if a bird flies by, you expect it to fly along in a regular path. You don't really want that to happen. Yeah, but you don't want it to turn into something that could peck out your eyes either. Sure. So that's a want. But you're kind of following it with your expectation to look for discrepancy. Yes. Now, you'll also have, you know, depending on the person, somewhere between 10 and a million desires, right? And then you also have fears and avoidance. And those are context. So if you're sitting on the beach with some anxiety that the birds are going to swerve at you and peck your eyes out, then you might be watching it much more attentively than somebody who doesn't have that worry, for example. But both of you can predict where it's going to fly and you'll both notice a discrepancy. One way of conceptualizing fundamental motivations is that they're like a priori prediction domains. And so that helps us narrow our attentional focus, because when you're sitting and you're not motivated in any sense, you can be doing just, in some sense, trivial expectation computations, but often we're in a highly motivated state. And what we're expecting is bounded by what we desire and what we desire is oriented, as Jonathan pointed out, towards the fact that we want to exist.
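To make the distinction being drawn here (discrepancy measured against a bare expectation versus against a desired outcome) concrete, here is a toy calculation. The states and numbers are invented purely for illustration and are not a model of any real emotional system.

```python
# Toy illustration: the same outcome yields different "emotion-like" signals
# depending on whether it is scored against a neutral expectation or a desired goal.
# All values are invented for illustration.

expected = 5.0   # what a detached observer would predict
desired  = 9.0   # what the motivated agent wants to happen
outcome  = 6.0   # what actually happens

prediction_error = outcome - expected   # small surprise relative to expectation
goal_discrepancy = desired - outcome    # larger remaining gap relative to desire

print(f"surprise relative to expectation: {prediction_error:+.1f}")
print(f"remaining gap relative to desire: {goal_discrepancy:+.1f}")
```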
And one of the things I don't understand and wanted to talk about today is how the computer models, the AI models, can generate intelligible sense without this, without mimicking that sense of motivation. As you said, for example, they can just derive the patterns from observations of the objective world, but there's a… So again, I don't want to do all the talking, but AI, generally speaking, when I first learned about it, had two behaviors. They call it inference and training. So inference is: you have a trained model. So say you give it a picture and say, is there a cat in it? And it tells you where the cat is. That's inference. The model has been trained to know where a cat is. And training is the process of giving it an input and an expected output. And when you first start training the model, it gives you garbage out. It's like an untrained brain would. And then you take the difference between the garbage output and the expected output and call that the error. And then the big revelation they invented was something called backpropagation with gradient descent. But that means take the error and divide it up across the layers and correct those calculations so that when you put a new thing in, it gives you a better answer. And then, somewhat to my astonishment, if you have a model of sufficient capacity and you train it with a hundred million images, if you give it a novel image and say, tell me where the cat is, it can do it. So training is the process of doing a pass with an expected output and propagating an error back through the network, and inference is the behavior of putting something in and getting an output. I think I'm really pulling… But there's a third piece, which is what the new models do, which is called generative. It's called a generative model. So for example, say you put in a sentence and you say, predict the next word. This is the simplest thing. So it predicts the next word. So you add that word to the input and now say, predict the next word. So it contains the original sentence and the word you generated. And it keeps generating words that make sense in the context of the original words and the additional words. This is the simplest basis. And then it turns out you can train this to do lots of things. You can train it to summarize a sentence. You can train it to answer a question. There's a big thing about… Like Google every day has hundreds of millions of people asking it questions and giving answers and then rating the results. You can train a model with that information. So you can ask it a question and it gives you a sensible answer. But in what you said, there's actually the issue that has been going through my mind so much, which is when you said people put in the question and then they rate the answer. My intuition is that the intelligence still comes from humans in the sense that it seems like in order to train whatever AI, you have to be able to give it a lot of power and then say at the beginning, this is good, this is bad, this is good, this is bad, like reject certain things, accept certain things, in order to then reach a point where you've trained the AI. That's what I mean about the care. So the care will come from humans, because the one who cares is the one giving it the value, saying this is what is valuable, this is what is not valuable in your calculation. So there's a program called AlphaGo that learned how to play Go better than a human. So there are two ways to train the model.
One is they have a huge database of lots of Go games with good winning moves. So they train the model with that and that worked pretty good. And they also took two simulations of Go and they did random moves. And all that happened was these two simulators played one Go game and they just recorded whichever moves happened to win and it started out really horrible. And they just started training the model and this is called adversarial learning. It's not particularly adversarial. You make your moves randomly and you train a model, and so they train multiple models, and over time those models got very good and they actually got better than human players. Because the humans have limitations about what they know whereas the models could experiment in a really random space and go very far. Yeah, but experiment towards the purpose of winning the game. Yes, well, but you can experiment towards all kinds of things it turns out. And humans are also trained that way. Like when you were learning to read, you were told: this is a good book, this is a bad book, this is good sentence construction, this is good spelling. So you've gotten so many error signals over your life. Well, that's what culture does in large part. And then culture does that, religion does that, your everyday experience does that, your family. So we embody that and everything that happens to us we process it on the inference pass which generates outputs. And then sometimes we look at that and say hey, that's unexpected, or that got a bad result, or that got bad feedback, and then we backpropagate that and update our models. So really well-trained models can then train other models. So humans, right now, are the smartest models in the world. So the biggest question that comes now, based on what you said, is, because my main point is to try to show how it seems like artificial intelligence is always an extension of human intelligence. Like it remains an extension of human intelligence. And maybe the way to… That won't be true at all. So do you think that at some point the artificial intelligence will be able to, because the goals, recognizing cats, writing plays, all these goals are goals which are based on embodied human existence. Could an AI at some point develop a goal which would be incomprehensible to humans because of its own existence? Yeah, for example there's a small population of humans that enjoy math and they are pursuing adventures in math space that are incomprehensible to 99.99% of humans. But they're interested in it. And you can imagine like an AI program working with those mathematicians and coming up with very novel math ideas and then interacting with them. But they could also, you know, if some AIs were elaborating out really interesting and detailed stories, they could come up with stories that are really interesting. We're going to see it pretty soon, like all of art, movie making and everything. Could there be a story that is interesting only to the AI and not interesting to us? That's possible. So stories are, I think, like some high-level information space. So in the computing age of big data, there's all this data running on computers, but only humans understood it, right? Computers don't. So AI programs are now at the state where the information, the processing and the feedback loops are all kind of in the same space. They're still relatively rudimentary compared to humans. I guess some AI programs at certain things are better than humans already, but for the most part they're not. But it's moving really fast.
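Going back to the generative loop Jim sketched a moment ago (predict a word, append it to the input, predict again), here is a minimal illustration in Python. The `predict_next_word` function is a hypothetical stand-in for a trained language model, not a real API; a real model would score an entire vocabulary at each step.

```python
def predict_next_word(context):
    """Hypothetical stand-in for a trained language model: returns the next
    word given everything generated so far. A real model would score a whole
    vocabulary; this toy just follows a canned continuation."""
    continuation = ["sat", "on", "the", "mat", "."]
    already_generated = len(context) - 3   # 3 tokens in the prompt below
    return continuation[already_generated] if already_generated < len(continuation) else "."

prompt = ["the", "cat", "quietly"]
tokens = list(prompt)
for _ in range(5):                  # generate five more tokens, one at a time
    tokens.append(predict_next_word(tokens))
print(" ".join(tokens))             # -> the cat quietly sat on the mat .
```

The design point is the feedback: each generated word becomes part of the input for the next prediction, which is why the output stays coherent with both the original prompt and everything generated since.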
So you could imagine, you know, I think in five or ten years, most people's best friends will be AIs. And you know, they'll know you really well and be interested in you. Unlike your real friends. Real friends are problematic. They're only interested in you when you're interesting. Yeah, real friends are. The AI systems will love you even when you're dull and miserable. Well, there's so much idea space to explore. And humans have a wide range. Some humans like to go through their everyday life doing their everyday things. And some people spend a lot of time, like you, a lot of time reading and thinking and talking and arguing and debating. You know, and you know, there's going to be a diversity of possibilities with what a thinking thing can do when the thinking is fairly unlimited. So I'm curious about, I'm still curious about pursuing this issue that Jonathan has been developing. So there's a literally infinite number of ways, virtually infinite number of ways that we could take images of this room. Right now if a human being is taking images of this room, they're going to sample a very small space of that infinite range of possibilities. Because if I was taking pictures in this room, in all likelihood, I would take pictures of objects that are identifiable to human beings, that are functional to human beings, at a level of focus that makes those objects clear. And so then you could imagine that the set of all images on the internet has that implicit structure of perception built into it. And that's a function of what human beings find useful. I mean, I could take a photo of you where the focal depth was here and here and here and here and here and two inches past you. And now I suppose you could… There's a technology for that called light fields. OK. So then you could, if you had that picture properly done, then you could move around it and imagine and see. But yeah, fair enough. I get your point. Like the human-recorded data has… Has our biology built into it. Has our biology built into it, but also an unbelievably detailed encoding of how physical reality works. Right. So every single pixel in those pictures, even though you kind of selected the view, the focus, the frame, it still encoded a lot more information than you were processing. Right. And if you take a large… It turns out if you take a large number of images of things in general… So you've seen these things where you take a 2D image and turn it into a 3D image. Yeah. The reason that works is even in the 2D image, the 3D structure of the room actually got embedded in that picture in a way. Then if you have the right understanding of how physics and reality works, you can reconstruct the 3D model. OK, so this reminds me… And AI scientists may cruise around the world with infrared and radio wave cameras and they might take pictures of all different kinds of things and every once in a while they'd show up and go, hey, the sun, you know, I've been staring at the sun in the ultraviolet and radio waves for the last month. And it's way different than anybody thought, because humans tend to look at light in the visible spectrum and, you know, there could be some really novel things coming out of that. But humans also, we live in the spectrum we live in because it's a pretty good one for planet Earth. Like, it wouldn't be obvious that AI would start in some different place. Like, visible spectrum is interesting for a whole bunch of reasons. Right.
So in a set of images that are human-derived, you're saying that there's… The way I would conceptualize that is that there's two kinds of logos embedded in that. One would be that you could extract out from that set of images what was relevant to human beings. But you're saying that the fine structure of the objective world outside of human concern is also embedded in the set of images and that an AI system could extract out a representation of the world, but also a representation of what's motivating to human beings. Yes. And then some human scientists already do look at the sun in radio waves and other things because they're trying to get different angles on how things work. Yeah. Well, I guess it… It's a curious thing. It's like the same with buildings and architecture. They mostly fit people. Well, the other… Yeah, there's a reason for that. The reason why I keep coming back to hammering the same point is that even in terms of the development of the AI, that is, developing AI requires an immense amount of money, energy, and time. And so… That's a transient thing. In 30 years, it won't cost anything. So that's going to change so fast, it's amazing. So that's a… Supercomputers used to cost millions of dollars and now your phone is the supercomputer. So the time between millions of dollars and a $10 or $50 program you can use is short. Yeah. Right? And then at some point, they're also, you know, let's say a difficult company, and they made money off a lot of people and became extremely valuable. Now, for the most part, they haven't been that directional in telling you what to do and think and how to do it. But they are a money-making company. You know, Apple created the App Store, which is great, but then they also take 30% of the App Store profits, and there's a whole section of the internet that's fighting with Apple about their control of that platform. And in Europe, you know, they've decided to regulate some of that, and that should be a social-cultural conversation about how that should work. Yeah. So do you see the more likely, certainly the more desirable, future as something like a set of distributed AIs, many of which are in personal relationship in some sense, the same way that we're in personal relationship with our phones and our computers, and that that would give people the chance to fight back, so to speak, against the same? And there's lots of people really interested in distributed platforms. And one of the interesting things about the AI world is, you know, there's a company called OpenAI, and they open-source a lot of it. The AI research is amazingly open. It's all done in public. People publish new models all the time. You can try them out. There's a lot of startups doing AI in all different kinds of places. You know, it's a very curious phenomenon. Yeah. There are two. And it's kind of like a big, huge wave. It's like, you can't stop a wave with your hand. Yeah. Well, when you think about the waves, there are two actually in the book of Revelation, which describes the end, or describes the finality of all things, or the totality of all things, which is maybe a way for people who are more secular to kind of understand it. And in that book, there are two images, interesting images about technology. One is that there's a dragon that falls from the heavens, and that dragon makes a beast. And then that beast makes an image of the beast. And then the image speaks.
And when the image speaks, then people are so mesmerized by the speaking image that they worship the beast ultimately. So that is one image of, let's say, making and technology in Scripture, in Revelation. But there's another image, which is the image of the heavenly Jerusalem. And that image is more an image of balance. It's an image of the city which comes down from heaven with a garden in the center, and then becomes this glorious city. And it says, the glory of all the kings is gathered into the city. So the glory of all the nations is gathered into this city. So now you see a technology which is at the service of human flourishing and takes the best of humans and brings it into itself in order to kind of manifest. And it also has hierarchy, which means it has the natural at the center and then has the artificial as serving the natural, you could say. So those two images seem to reflect these two waves that we see and this kind of idea of an artificial intelligence which will be ruling over us or speaking over us. But there's a secret person controlling it, even in Revelation. It's like there's a beast controlling it and making it speak. So now we're mesmerized by it. And then this other image. So I don't know, Jordan, if you ever thought about those two images in Revelation as being related to technology, let's say. Well, I don't think I've thought about those two images in the specific manner that you described, but I would say that the work that I've been doing, and I think the work you've been doing too, on the public front, reflects the dichotomy between those images. And it's relevant to the points that Jim has been making. I mean, we are definitely increasing our technological power. And you can imagine that that'll increase our capacity for tyranny and also our capacity for abundance. And then the question becomes, what do we need to do in order to increase the probability that we tilt the future towards Jerusalem and away from the beast? And the reason that I've been concentrating on helping people bolster their individual morality, to the degree that I've managed that, is because I think that whether the outcome is the positive outcome that in some sense Jim has been outlining or the negative outcomes that we've been querying him about, I think that's going to be dependent on the individual ethical choices of people, well, at the individual level, but then cumulatively, right? So if we decide that we're gonna worship the image of the beast, so to speak, because we're mesmerized by our own reflection, that's another way of thinking about it, and we wanna be the victim of our own dark desires, then the AI revolution is gonna go very, very badly. But if we decide that we're going to aim up in some positive way and we make the right micro decisions, well, then maybe we can harness this technology to produce a time of abundance in the manner that Jim is hopeful about. Yeah, and let me make two funny points. So one is, I think there's going to be a continuum, like the word artificial intelligence won't actually make any sense. Right, so humans collectively, like individuals know stuff, but collectively we know a lot more, right? And the thing that's really good is, in a diverse society with lots of people pursuing individual, interesting ideas and worlds, like, we have a lot of things, and more people, more independence, generates more diversity. And that's a good thing, whereas the totalitarian society where everybody's told to wear the same shirt is, like, inherently boring.
Like the beast speaking through the monster is inherently dull, right? But in an intelligent world, not only can we have more intelligent things, but in some places go far beyond what most humans are capable of, in pursuit of interesting variety. And like, I believe the information, well, let's say intelligence, is essentially unlimited, right? And the unlimited intelligence won't be this shiny thing that tells everybody what to do. That's sort of the opposite of interesting intelligence. Interesting intelligence will be more diverse, not less diverse. Like that's a good future. And your second description, that seems like a future worth working for and also worth fighting for. And that means concrete things today. And also, it's a good conceptualization. Like, I see the messages my kids are taught: don't have children, the world's gonna end, we're gonna run out of everything, you're a bad person, why do you even exist? It's like, these messages are terrible. The opposite is true. More people would be better. We live in a world of potential abundance, right? It's right in front of us. Like, there's so much energy available. It's just amazing. It's possible to build technology without pollution consequences. That's called externalizing costs. Like, we know how to do that. We can have very good clean technology. We can do lots of interesting things. So if the goal is maximum diversity, then the line between human intelligence and artificial intelligence that we draw, like, you'll see all these kinds of really interesting partnerships and all kinds of things, and more people doing what they want, which is the world I want to live in. Yeah. But to me, it seems like the question is going to be related to attention, ultimately. That is, what are humans attending to at their highest? What is it that humans care for in the highest? In some ways, you could say, what are humans worshiping? And like, depending on what humans worship, their actions will play out in the technology that they're creating, in the increase in power that they're creating. Well, that's, well, and if we're guided by the negative vision, the sort of thing that Jim laid out that is being taught to his children, you can imagine that we're in for a pretty damn dismal future, right? Human beings are a cancer on the face of the planet. There's too many of us. We have to accept top-down compelled limits to growth. There's not enough for everybody. A bunch of us have to go because there are too many people on the planet. We have to raise up the price of energy so that we don't burn the planet up with carbon dioxide pollution, et cetera. It's a pretty damn dismal view of the potential that's in front of us. And so- Yeah, the world should be exciting and the future should be exciting. Well, we've been sitting here for about 90 minutes bandying back and forth both visions of abundance and visions of apocalypse. And I mean, I've been heartened, I would say, over the decades talking to Jim about what he's doing on the technological front. And I think part of the reason I've been heartened is because I do think that his vision is guided primarily by a desire to help bring about something approximating life more abundant. And I would rather see people on the AI front who are guided by that vision working on this technology. But I also think it's useful to do what you and I have been doing in this conversation, Jonathan, and acting in some sense as friendly critics and hopefully learning something in the interim.
Do you have anything you want to say in conclusion? I mean, I just think that the question is linked very directly to what we've been talking about now for several years, which is the question of attention, the question of what is the highest attention. And I think the reason why I have more alarm, let's say, than Jim is that I've noticed that in some ways human beings have now come to, let's say, worship their own desires. And even the strange thing of worshiping their own desires has actually led to an anti-human narrative. This is a weird idea; it's almost a suicidal desire that humans have. And so, seeing all of that together with the increase of power, I do worry that the image of the beast is closer to what will manifest itself. And I feel like during COVID that sense in me was accelerated tenfold, noticing to what extent technology was used, especially in Canada, to instigate something which looked like authoritarian systems. And so I am worried about it, but I think, like Jim, honestly, although I say that, I do believe that in the end truth wins. I do believe that in the end, these things will level themselves out. But I think that, because I see people rushing towards AI, almost like lemmings going off a cliff, I feel like it is important to sound the alarm once in a while and say, we need to orient our desire before we go towards this extreme power. So I think that's the thing that worries me the most and that preoccupies me the most, but I think that ultimately, in the end, I do share Jim's positive vision. And I do think that, I do believe the story has a happy ending. It's just, we might have to go through hell before we get there, I hope not. Mm-hmm. So Jim, how about you? What have you got to say in closing? A couple of years ago, a friend who's my age said, oh, kids coming out of college, they don't know anything anymore, they're lazy. And I thought, I work at Tesla. I was working at Tesla at the time. And we hired kids out of college and they couldn't wait to make things. They were like, it's a hands-on place, it's a great place. And I've told people, like if you're not at a place where you're doing stuff, it's growing, it's making things, you need to go somewhere else. And also, I think you're right, the mindset of, if people are feeling this is a productive, creative technology that's really cool, they're going to go build cool stuff. And if they think it's a shitty job and they're just tuning the algorithm so they can get more clicks, they're going to make something beastly, perhaps. And the stories, our cultural tradition is super useful, both cautionary and explanatory about something good. And I think it's up to us to go do something about this. And I know people are working really hard to make the internet a more open place, to make sure information is distributed, to make sure AI isn't a winner-take-all thing. And these are real things and people should be talking about them and then they should be worrying, but the upside's really high. And we've faced these kinds of technological changes before. Like, this is a big change. Like, AI is bigger than the internet. Like I've said this publicly, the internet was pretty big. And this is bigger, it's true. But the possibilities are amazing. And so, with some sense we could actually- So then, together, we could utilize them? Yeah, with some sense we could achieve it. And the world is interesting. I think it'll be a more interesting place.
Well, that’s an extraordinarily, cynically optimistic place to end. I’d like to thank everybody who is watching and listening. And thank you, Jonathan, for participating in the conversation. It’s much appreciated as always. I’m gonna talk to Jim Keller for another half an hour on the Daily Wire Plus platform. I use that extra half an hour to usually walk people through their biography. I’m very interested in how people develop successful careers and lives. And how their destiny unfolded in front of them. And so, for all of those of you who are watching and listening, who might be interested in that, consider heading over to the Daily Wire Plus platform and partaking in that. And otherwise, Jonathan, we’ll see you in Miami in a month and a half to finish up the Exodus Seminar. We’re gonna release the first half of the Exodus Seminar we recorded in Miami on November 25th, by the way. So that looks like it’s in the can. Yeah, can’t wait to see it. The rest of you, yeah, yeah, yeah, absolutely. I’m really excited about it. And just for everyone watching and listening, I brought a group of scholars together about two and a half months ago. We spent a week in Miami. Some of the smartest people I could gather around me to walk through the book of Exodus. We only got through halfway because it turns out there’s more information there than I had originally considered, but it went exceptionally well and I learned a lot. And Exodus means ex-hodos. That means the way forward. And well, that’s very much relevant to everyone today as we strive to find our way forward through all these complex issues, such as the ones we were talking about today. So I would also encourage people to check that out when it launches on November 25th. I learned more in that seminar than any seminar I ever took in my life, I would say. So it was good to see you there. We’ll see you in a month and a half. Jim, we’re gonna talk a little bit more on the Daily Wear Plus platform. And I’m looking forward to meeting the rest of the people in your AI-oriented community tomorrow and learning more about, well, what seems to be an optimistic version of a life more abundant. And to all of you watching and listening, thank you very much. Your attention isn’t taken for granted and it’s much appreciated. Hello, everyone. I would encourage you to continue listening to my conversation with my guests on dailywireplus.com.