https://youtubetranscript.com/?v=V-7KNrF16Zs

But if you look at a lot of what's happening with things like social media and Facebook, there's an underclass; there's all sorts of hidden work being farmed out to human beings who are actually doing it. Appalling jobs doing content moderation, which are generally really, really poorly paid, and a job that no human being should have to do, having to look at material that's just horrendous. And a lot of the stuff in machine learning is farmed out to people who are labelling images; it's actually people doing the labour. So in a sense it's a bit of a con to say it's machines doing it, because there are armies and armies of people doing it. Really low-grade, really poorly paid people.

So that's a really interesting point because, like you said, the intelligence actually comes from humans, real intelligence. Yes. So what they're doing is farming intelligence from people. Yes. They're feeding it to the machine. Yes, yes, yes. That's really fascinating. And it's interesting, like you said, that it inevitably happens that the people doing this, the content moderation and the image labelling, are like semi-slaves; they're at the bottom of the social sphere. They get paid minimum wage or whatever, they work at night, and they end up looking like the dregs of society. And so it makes even more sense to imagine it like the Matrix, really like the Matrix. But instead of farming bodies, which is what the Matrix suggests, that they're getting energy from the humans, it's the opposite: they're actually farming intelligence, farming the capacity to identify quality, because that's what the machine doesn't have. So you get that from the humans.
But I also think one of the big problems, one of the background problems when people are trying to think about the ethical issues here, is the paucity of how they're thinking about it. A lot of the work that I see around AI ethics is basically built into what you might broadly call utilitarian or broadly consequentialist ways of looking at things, which also fits into the ways in which intelligence is understood. So those are really interesting narratives, I think, that have been running behind AI. There are lots and lots of disputed definitions of how you might define intelligence, and lots of different forms of intelligence. But the form that's often used, which fits really well with computing, is the idea that intelligence is something along the lines of the capacity to reach your goals, to be able to take steps to reach your goals. So intelligence is just instrumental; it's an instrumental account of means-end rationality, where you have a goal that you're aiming towards. Then it fits really neatly, really naturally, into a utilitarian way of looking at things, where you're going to try to produce as much benefit as possible, given that we have certain goals and preferences. So it fits into notions of trying to maximize happiness or minimize unhappiness, or looking at how we might achieve our desires or our preferences. And one of the problems with that is that it fits into an idea of reason that sits really well with, say, an Enlightenment ideal of reason: that we're just making progress because we're having more and more rationality, or achieving more and more of our goals. Like a Steven Pinker type view.
The thing is, I don't actually mind that way of seeing intelligence, as the capacity to reach your goals; it's just that there are higher goals and there are lower goals. And so the materialist world, the Steven Pinker world, is a lower world, a world of whims and desires and pleasures and pains, whereas virtue and the good, let's say in a Christian sense or a Platonic sense, that's the goal, right? That's actually the actual goal of reality. And not only is it the goal of reality, but it is also the Achilles heel of artificial intelligence, which is why it has to farm, because what it's farming from people is not just information. It's the good. It's farming quality, the capacity to recognize quality and to engage in quality. Yes, that's the purpose. Yes. So it's flipping everything upside down: it's instrumentalizing the capacity to perceive quality in order to attain practical, money-making, desire-fulfilling goals.