https://youtubetranscript.com/?v=z5fVa9IrSEw
It’s another reason for hope. See, these machines have always depended on us as a template, a Turing-like template: we compare the machines to us. And what we’ve been able to do is rely upon our natural intelligence. You don’t have to do much to be intelligent, for your intelligence to develop. You just have to not be brutalized or traumatized, to be properly nourished, and to have human beings around you to talk to. And then your intelligence will unfold. And so all of these people building these machines and making these data sets can rely on naturally, widely distributed intelligence. This is not the case for rationality. And this is not the case for wisdom. These people, I have no hesitation saying, by and large, many of them are not highly rational. I doubt that many of them are highly wise. Insofar as we need really good models, if we want to give these machines comprehensive self-correction, which is rationality, and caring about the normative, which is wisdom, we have to become more rational and more wise. And that’s sort of a roadblock for these people. Now, they can just ignore all of that, and I suspect they might, and just say: we’re not going to try to make these machines rational and wise; we’re going to just go down the road of making these pantomimes of intelligence. And that has all the problems. But if they move toward making them something that would be, I think, more dangerous, then they run into the fact that there’s an obligation. They and us, we have to become more rational and wise, because we need the genuinely existing models. And secondly, we have to fill the social space, the internet, where all of the literature, where the data is being drawn from, with a lot more wisdom and rationality. These are huge obligations on us. And that sort of gives me hope, because there’s a roadblock to this project going a certain way, one that requires a significant reorientation toward wisdom and rationality in order for there to be any success.

Before you get to the third point, I just want to…

I haven’t even got to the second point, but go ahead.

I just want to ask you one question based on what you said. My perception of the situation is that there’s actually a correlation between the diminishing of wisdom and of the wisdom traditions and the desire to do this. It’s like a sorcerer’s apprentice situation: the sorcerer would not have awakened all the brooms to do his work. The little apprentice, Mickey, doesn’t know why to do things or why not to do things. That’s why he’s doing it in the first place.

Yeah, I agree with that. Our society is moving away from wisdom, and that’s one of the reasons why we’re doing this in the first place. And again, I’m not denying that. What I’m saying is, as we empower these things, their self-deceptive, self-destructive power is also going to go up exponentially. And we are going to start losing millions of dollars in our investment as they do really crappy, shitty, unpredicted things. And so there’s going to be a strong economic incentive to bring in capacities for comprehensive, caring self-correction. And then my argument rolls in. So that’s part of my response. The thing about thinking of them as children: I mean, we do make our kids; we make them biologically and we make them culturally. So I don’t want to get stuck on this word “making”; we could be equivocating. And that’s why we were using the term “mentoring”. The idea there is that we have two options for the alignment.
We can either try to program them, hardwiring rules into them so that they don’t misbehave, which is going to fail if we cross the threshold and decide we want to make these machines self-transcending like us. And then what do we do? How do we solve that problem? Well, the only machinery we have for solving that is the cultural, ethical, spiritual machinery of mentoring. That’s how we do it with our kids. If we try to just somehow hardwire them into being the kind of agents we want them to be, we will fail. And for me, I guess I’m trying to argue that’s the only game in town. We either have programming or we have mentoring. And I understand the risk. But if my answer to the first question has some validity to it, and hopefully some truth, then the answer about mentoring becomes more powerful, because that means we also have to become the best possible parents, creating the best possible social discourse.

The thing about the idol, I take that very seriously. And that’s what I meant when I said theology is going to be the important science going forward, because we should not be trying to make gods. I agree with you; this is problematic. There are already cults building up around these AGIs, and I warned that that would happen in my essay.

Right.

And I said that, and that’s going to keep happening, and it’s going to get worse.

We hear about it happening in the organizations themselves.

Yes. And the people who are doing wise AI are trying to challenge that. And so this is why I proposed actually humbling these machines. This is why I call them Silicon Sages. I did that deliberately to try to designate that we are not making a god. What we’re trying to do is make beings who are humbled before the true, the good, and the beautiful, like us, and who therefore form community with us, rather than being somehow god-like entities that we’re worshiping. I would hope... like, think about this. We find it easy to conceive that they might discover depths of physics, and they’re already discovering things in physics that human beings haven’t discovered, and in medicine and the like. Well, why not also in how human beings become wiser? And so I guess what I’m saying is, I take all of your concerns as real, and I’ve tried to build into my proposal ways of responding to them. These machines should not be idolized. I think they should become more like... let me give you an example. I have many students who are now surpassing me. I taught them, I mentored them, and they’re surpassing me. And unless you’re a psychopath, that’s what you want to happen. And then what they do is they come back and they want to reciprocate, and that’s what I’m talking about when I’m talking about the Silicon Sages. Now, again, is this a high probability? It depends on the thresholds; it depends on whether or not the first and the second arguments work. But I’m still arguing there’s a possibility that they could be Silicon Sages as opposed to being gods.

Because one of the things that happens in almost all of the wisdom traditions is that the wise one, or the enlightened one, if you want to use that term, appears nearly invisible to most people.

Yes, right.

So Christ talks about the seed, you know, the pearl, these little things hidden in reality, which most people actually do not see.
And then the sages... we have this image in the Orthodox tradition, for example, that there are people in the world who hold up reality by their prayers, but we don’t know who they are. They are invisible by that very fact, because there’s something about wisdom which does that. And when a wise person appears too much, we hate them. We want to kill them. They annoy us. They’re a thorn in our side. And so this is another issue: what you have is these beings that are extremely powerful, massively powerful, with a massive reach. The reason why they exist, like I said, is all this economic drive toward them. The idea that they would become sages in the way that we tend to understand wisdom, to me that brings the probability way down, because when we look at what we understand wisdom to be, it looks very different. It looks like the immobile meditating sage who gives advice but doesn’t do much.

I want to push back on this, because what’s implicit in this is a distinction between intelligence on the one hand and a capacity for caring, a capacity for epistemic humility, on the other. And I think when you move from intelligence to rationality, you can’t maintain that you can grow the one without growing the other. In fact, this is why intelligence only accounts for maybe 30 percent of the variance in rationality, and even less of wisdom. I would put it to you that if you concede that these machines could get vastly more powerful in terms of intelligent problem solving, then concede the possibility that they could get vastly more powerful than us in their capacity for caring, for caring about the normative, and vastly more powerful in their capacity for humility as well. And that’s kind of what we see with these people, right? We don’t see them just becoming super polymaths. We see them actually demonstrating profound care, really enhanced relevance realization, profound commitments to reality that we properly admire. And they seem to want to help us as much as they can. And the point is, these people don’t just, and I think this is your point, they don’t just slam into us like epistemic bulldozers. In fact, one of the things that is often admired about them, Socrates, Jesus, the Buddha, is their capacity to adapt and adjust to whoever their interlocutor is. And again, let’s imagine that capacity magnified as well. So what I’m asking is... I mean, first of all, I admit that if we don’t cross a certain threshold, we could just accelerate the intelligence and not accelerate these other things. But as I said, there are deep problems in that, which will become economically costly. And then, if we imagine that rationality and wisdom are also being enhanced, then I think this addresses some of your concerns.