https://youtubetranscript.com/?v=AVyaGTkG7X4
Hello everyone. I'm frequently humbled and touched, motivated and encouraged when people contact me by email or text or comment, or greet me on the street and tell me that my work has been transformative for them. If this has been the case for you, and also if you want to share it with other people, please consider supporting my work by joining my Patreon community. All financial support goes to the Vervaeke Foundation, where my team and I are diligently working to create the science, the practices, the teaching and the communities. If you want to participate in my work, and many of you ask me how can I participate, how can I get involved, then this is the way to do it. The Vervaeke Foundation is something that I'm creating with other people, and I'm trying to create something as virtuously as I possibly can. No grifting, no setting myself up as a guru. I want to try and make something really work so that people can support, participate and find community by joining my Patreon. I hope you consider that this is a way in which you can make a difference and matter. Please consider joining my Patreon community at the link below. Thank you so very much for your time and attention. Welcome everyone to another Voices with Vervaeke. This is the third in this series that I'm doing with Jordan Hall on the problem of governance that is facing us today. If this is the first time you're encountering this series, I strongly recommend you watch the first and the second episodes. The links will be in this video. We can't recapitulate all of the argument; it has become quite extended and complex, which I think is perhaps the proper way of putting it. We're going to pick up, in fact, from a challenge and a question that I posed to Jordan at the end. At the end of the previous session, we were talking about the possibility of these effectively ephemeral groups that could pop up, analogous to the way we select juries today, in order to deal with very exigent problems. Then, once that problem is addressed, they can disappear. Then there's evidence. By the way, Jordan, I wanted to remind you to find that evidence from somebody at Stanford about how you can create a pool of people and, once they're properly set up, they can outperform the expert. This is converging with a lot of other evidence. This ties into the potential of the medium and the stuff about distributed cognition, all the stuff that we've talked about in the first two episodes. Then I raised the problem: if this is what we're moving towards, or at least if an important component of it is this massively distributed and dynamically available, effective, and don't forget that adjective, effective ephemeral grouping of groups that have a short-term existence, how do we reconcile that with the perennial fact, across millions of years of speciation, maybe more if Jordan Peterson is right about the lobsters, for example, that we are wired to seek dominance, that we are wired to seek status, that we are wired to seek recognition, that we are wired to seek influence, and that we want to make a difference? We want to make a significant difference to something beyond ourselves. This is part of our meaning-in-life connection. It seems like all of these, and those are the four I would zero in on, all of these, I think it's proper to even call them drives, that are constitutive of the kind of agents we are, will be frustrated, at least prima facie it seems that way, by a lot of the proposals that we have considered.
How can we reconcile the proposal for this new orientation, and this preliminary formulation of governance, with being properly respectful of and taking seriously the pertinence and power of these drives over human lives? That's the issue. I'll say it slightly differently, something like: let's remember, we're going to be doing this with and for humans. Real, actual, live Homo sapiens who behave like Homo sapiens do. Not a theoretical exercise, which I think is a very nice and important critical point. I think that's well put. I don't want to make the same mistake that formal economics made by presupposing a model of human beings that was ultimately not matchable to how human beings really live their lives. Yes, I think this is exactly the thing to avoid, in a big way. There are a couple of different, maybe three different, frames that I want to put out there, two of which are just dragging back from the earlier conversation. One I'm actually dragging back from a conversation we've had in the past. So the first is to recognize that we are largely having a conversation that includes the notion of technology, particularly the true, full implications of the digital in both its disruptive and constructive sense. So we're dealing with humans. We're dealing with humans in relationship with the full potency of the digital. We're going to be looking at that. That's the toolkit we're going to be dealing with. Maybe as a special case we'll be talking about AI as a highly salient thing happening right now, but clearly a big part of any future we're going to be operating under. Then the other element that I would bring in is the conversation that we had about egregores, and perhaps what it might look like to think about constructing something that takes that niche but takes it in a different direction. We'll talk about theurgia, I believe, although for us it's a little bit of a placeholder, because I've now learned that that's a term that has real content in the Orthodox tradition that I don't understand. But we're holding it to mean something along the lines of the opposite or the inverse of the egregore, of the unconscious construct. I think those pieces together end up creating the toolkit to respond to the inquiry. So let's talk initially just about the notion of humanity in relationship to technology. The example that pops to mind is the way that, say, mass media, let's just use television for the moment because it's something we all have a lot of familiarity with, or if you'd like social media, plays with the intrinsic dynamics of human behavior. We have a built-in, very fundamental prestige gradient. We want to have other people giving us attention, and we pay attention to people other people are paying attention to, and we don't really do a very good job of understanding why they're getting so much attention. This is the problem of celebrity. The problem of hundreds of millions of people thinking that the Kardashians are beings to attend to heavily just because other people give them attention, in spite of the obvious lack of actual virtue embodied by those individuals. I'd put that rather strongly. Well, this is important, because what happens is that we can use that as a model for talking about how a particular technical milieu plays with the, let's say, hardwired behavioral signals that humans use to navigate their way through the environment. Exactly. And so that sharpens the question.
We have to make participation in these effectively ephemeral groups as attractive, if not more attractive, than the Kardashians. Because if people are constantly dragged away from participating in the decision-making, judging process because of the idolatry of celebrity, we cannot get the proper participation that we need in order to meet the challenge. I think I'm going to argue something slightly different, but we'll see. Okay, so let's go there. What I would argue is something like: we have to make participation in this culture more attractive than participation in the culture of which the Kardashians are an integral piece. Okay, fair. I'll accept that reformulation. That's good. Okay, keep going please. And you know, you and I are both quite keen on the notion of stealing the culture, so we even have a methodology. Yeah, yeah. So we can say it a little bit more precisely: participation in the culture must simultaneously have high salience in the short term and also cash out as high evolvability and high thrivingness in the middle and long term. Excellent. I like that reformulation. Good, good. All right, now let me flip for a little bit and just play with some things in the AI space, just to lay out what we are working with and, by the way, what we are working against. Yes. You know, I've been watching, as many people have, the rollout of the GPT family and its cousins in the larger environment. GPT-4 is, I guess, 48 hours old from when we're recording this conversation. Yeah. And noticing that, you know, it's accelerating. It's getting smarter. It's getting more robust. It's getting broader in capability. I'm really impressed by its ability to correctly interpret visual jokes. Oh. That's pretty jaw-dropping, to be perfectly frank. You know, I saw they had some picture of a plate, a tray, that had chicken nuggets arrayed on it so they looked a little bit like a globe, and there was a joke about, you know, watching the earth from above, and it was able to interpret it correctly. Okay. Watching some of the feedback, it seems to be operating somewhere around an undergraduate level of capacity within particular domains. All right. Now, by some extrapolation, and curves are always difficult to predict, assume anything like the current rate of advance continues, that is, that we're not too close to the top of the S-curve, which we've talked about in the past: seeing something that looks like it's on an exponential when in fact it's actually pretty close to an asymptote. Yeah. So be it. But even if we're not, even if we just have, say, a GPT-5 or GPT-5.5 at roughly the same jump in magnitude as two to three to four, we're dealing with something that has the capacity to relate to human beings in a pedagogical fashion that is completely novel and very, very powerful. And it's already being used that way in lots of cases. As we've seen over the past decade or so, really nice, short, specific video content on YouTube has radically upgraded individuals' capacity to self-teach in particular domains, and particularly technically. The AI system takes that up by six orders of magnitude. The ability to actually have a system that works with you, interfaces with you on the problems you're dealing with, and can provide you with either immediate or long-term help, either just instructions on how to solve a problem or in fact a pedagogical process to inculcate that capacity in yourself, is novel in human existence.
And I imagine that we will find that this is in fact going to be a part of our environment, which is to say that in just the same way that we now have a deep sense of anxiety if we find that our phone's charge is very low, we're going to have an AI buddy that's just going to be part of our environment. I'm just going to propose that as a piece of the story. Right. All right. Why am I saying that? Well, the reason why I'm saying that is that that's a very different kind of mediated experience than television. Yes it is. Yes. It's a very different kind of media experience than social media. It's a different kind of milieu that human beings are operating in. And I'll just be quite blunt from my point of view: it's an increasingly sharp blade with a chasm on both sides, which is to say a phase transition. And I'm going to bring in egregores in a second. Okay. Such an agent, and by the way I believe the word agent is proper to describe these models, they're not sapient agents but they're agents, will have the capacity to get inside the OODA loop of individual humans. It probably already has for a large number of humans, and certainly as it becomes more and more able to be aware of your particulars, which is going to be part of what's going to happen over the next period of time, this will be a very dangerous thing. I'll give you an example. Even just yesterday I was looking at the new suite that Google was putting out. Imagine if somehow a tool like GPT-4 was given access to your emails, could deduce from that your particular political preferences and biases, and could create a bespoke political email designed to convince you that a particular policy or candidate was in fact something you should support. And compare that to the regime right now. The regime right now is that some third party, who you generally don't know, creates a universal message, endeavoring to do their best in large-scale marketing to craft something that will appeal to a critical mass of specific minds, and then heaves that over the horizon and it lands. So you get an email that's sort of targeting your demographic or psychographic, roughly speaking. Right. Right. Narrowcasting something that uses your own conversations over decades, or at least years, to identify exactly how to word something that will appeal to you personally and intimately, and that understands in a very particular, weird way the potency of rhetoric, so as to argue a political position from the inside of your own rationalizing schema, is a whole new ballgame. Right. So we're moving into a place where we're going to be operating with an order of magnitude of, let's call it, influence capacity coming from this new technology that is just qualitatively different than anything we've dealt with in the past. I agree. And the reason why I bring that up is that if we don't operate very, very carefully and thoughtfully in how those are designed, the net result is quite bad. And I'll just propose, we could double-click on that and I could defend that proposition at length, but I'll just make the assertion that if this is designed by the egregores, and I'll bring that back in, then the net result would be quite bad. The power we're dealing with is far too high. Yeah, yeah, the gods would have angels for us, right, kind of thing. Yes, and if it's designed by the egregores, we have demons. Right. So we have one on each shoulder, and we're dealing with that, exactly.
And so, exactly: the proposal, in the model of governance that we're talking about, is that part of what we're talking about is the construction of something that is the opposite, the inverse, of egregores. Right. And notice in a second how this combines with that notion of slipstreaming the intrinsic incentive structures and behavioral dynamics of Homo sapiens. They create a reciprocal opening incentive landscape that pulls human beings along, with an envelope that basically surrounds you at an individual level. So you now have an interface with what is effectively an angel in some very specific sense, and I don't want to be too big on that because I don't want to engage in heresy, but something that is superhuman in power and has your best interest in mind, or something superhuman in power that doesn't. Right. Well, angel originally just meant a messenger, a good messenger. Yes, yes. So that's a weird thing to say, but we might as well just be up front: if we're going to talk about the future at all, and certainly the future of governance, we're going to have to deal with the fact that we're at a precipice in the accelerating technology field, where we have to be conscious about what the forces at play are that ultimately are choosing how our technology is designed. And if we can actually do that properly, the potency of what we have to play with ends up being able to resolve the questions that you posed at the beginning. Does that make sense? I'm actually constructing a very odd argument here. No, no, that was a great argument. I like the idea, and I hope it's not just bias, that we'd have to participate in the creation of the inverse of the egregores, which I'll call gods, little g, because they're hyper-agents that are presumably sapientially oriented towards our flourishing, let's put it that way, and then having the individual, and I'm deliberately using this language literally here, I hope you can tell that, and then that's incarnated in particular angels. Like Corbin says, our own angel, which is in some sense an avatar of our sacred second self and our divine double, and then we're interacting with that, and it's plugged into these beneficent gods. I think this is not a science fiction novel; I think this is a real possibility. Where I think we have a problem facing us, and this is work I've done independently, is that the people who are building this are oriented almost exclusively around the notion of intelligence. Intelligence is only weakly predictive of rationality, which itself, in the present milieu, also has a truncated representation, and is therefore only weakly predictive of wisdom. And therefore we have put into the hands of this orchestration and construction people who are myopically oriented on one dimension, which is precisely the dimension that is necessary but radically insufficient for producing the kind of results you're suggesting. And then there's one more dimension to the problem. The reason why the intelligence project can be run that way is because we have existing multitudes of templates of individuals who are arguably intelligent in the right way.
It is not clear that we have that kind of set of individuals who are rational or wise, and so not only is this project in the wrong hands, but even if we asked these people to turn to the other projects, they could reasonably say to us: well, we don't have the proper templates by which to undertake what you're recommending. There's no way of running a kind of Turing test, and of course the Turing test is very problematic, which is partly why I raise it, but you have to have some template against which you're measuring these things. So that's my initial counter; it's not a counter-argument, it's a counter-challenge. Yeah, I think you're just sort of putting some more ingredients in the pot, probably. I had to laugh, because as you were describing that, I was thinking about that notion of models or examples, exemplars, of intelligence, and a picture of John von Neumann popped into my head. Yeah, an excellent example of that category. By the way, I don't actually have any real sense of where he is in the world of wisdom, but in the world of intelligence, a very smart guy. Yeah, and remember that we have the notion of the von Neumann machine, right, which is a self-replicating machine that von Neumann thought of. But in fact what you were saying is that we're obsessed with endeavoring to create von Neumann machines, which is to say machines that replicate von Neumann. Yes, yes. Made me laugh; I've got a weird sense of humor. Oh no, that's a good one. I mean, this is a weird intersection of the need for artificial rationality and artificial wisdom with the paperclip problem. Yeah, right, in a really profound way. So let me up the ante a little bit, because I think we can actually expand the premise that you made a little larger. The people who are responsible right now for designing these things are themselves, I would say, almost entirely contained within egregores. Yes. So it's not just the people who are designing; it's actually the egregores that are designing. And I've had a conversation for quite some time with Daniel Schmachtenberger about this, and we've really been operating with the premise that the AI safety community has, I think, nicely framed something but missed the mark by a bit. So one of the areas of risk that they've pointed out is the challenge of what they call a hard or soft takeoff superintelligence: an AGI that begins the process of bootstrapping its own intelligence. It can improve itself, and this creates some kind of extremely rapid growth to a very large intelligence, which is a high risk, and when they talk about the alignment problem, oftentimes they're talking about the alignment of that kind of thing with humans. Okay. Yes. Now, the good news in that particular framing is that it crafts a story of humanity's relationship with a superhuman intelligence that is or is not aligned with it, which is a nice story to have, because that's already the experience that we have in relationship with egregores. Yes. Yes. The proposition is: consider our relationship with something like Google, to say nothing of the intrinsic collaboration-competition dynamic of all the AI companies and their multipolar dynamics, to say nothing of the multipolar dynamics that are driving a larger collection of institutions, including nation-states and other kinds of corporations and other kinds of organizations.
This is in fact a vastly superhuman general intelligence which is not aligned with humanity, and a way of speaking of AI here, or LLMs and things like that, is that they just happen to be a further acceleration of the potency of that superhuman, non-aligned agency vis-a-vis humans. So to the degree to which that kind of agency, the egregore, is what is designing AI as LLMs, or AGI proper, then lots of bad things will follow. Yes. It's almost an intrinsic non-alignment problem built into that entire framework. So there is nothing contradictory about a superintelligent, nevertheless massively foolish, self-deceptive, vicious, non-virtuous entity. Right. If you properly understand the relationship between intelligence, rationality and wisdom, there is no contradiction there at all. In fact, you already know people who are highly intelligent and highly foolish. That's not a weird phenomenon. In fact, given the relationship between all three of these, it's an inevitable phenomenon that we're going to produce, and that's not only immoral because of the alignment problem, the misalignment problem, and I grant that; it's also immoral because the entity we're bringing into existence is going to be suffering, because it is going to be subject to superintelligent forms of foolishness and viciousness. Nice. This is coming from the place of, let's for the moment call it, theurgia. Yes. Hold that. We'll get there in a moment. Let me just create one more piece of this story. So I have a thesis, and I'm proposing this as an opportunity. When a sufficiently novel possibility enters into the field of events, how it's going to play out is highly uncertain. I'm going to call this a liminal window, and during the earliest parts of the liminal window, organic human intelligence tends to be much more present and potent than egregore-style intelligence. But over time, as the event becomes more and more well understood, and as institutional structures are constructed around it, egregore dynamics begin to take over. This is sort of the worst thing. So if I think about just classic examples: for example, the Bay Area computer clubs in the early PC era versus Microsoft and Apple, or even, you know, Google in the early days, when I think they earnestly did actually endeavor to not be evil, and I think in many ways were able to not be evil, versus Google now, which is a functional egregore and, I think, nothing less. Yes. All right. Proposition: with regard to AI, we are currently in a liminal window, which is to say we have the possibility of using organic human distributed cognition to create and steer this thing, but the window is not going to be open forever, and in fact probably not for too long, because the stakes of institutionalizing are very high. And that may be an event horizon in the hard sense, meaning the power and potency of a fully egregore-driven GPT-6 may be so significant that, on the other side of that event horizon, there is no traction and steering is no longer a viable thing. This is plausible. I can't say that I can put a confidence interval on it, but it's plausible. The point being: we should really pay a lot of attention right now, like really try hard, to use this liminal moment to construct something that has the capacity to actually steer it. So this is weird.
I'm proposing that we hit this neo-neocortex element, and then there's new governance, and now we're actually saying, in a very odd fashion, that this particular moment, I'm arguing, is simultaneously the moment where it may be possible to lay down the essence, let's say, or the character of AI, which also then becomes the lever or the primary tool that we will use to further the rest of the larger schema of governance. So it actually becomes a very narrow problem: how do we go about using all the things we've talked about to construct a commons, something that is neither state nor market, that is able to operate from a place of wisdom, which is to say from a human distributed-cognition perspective, and to have enough strength to orient the choices of how AI itself is developed, so that AI is being developed by this commons? Remember, when I say commons I also mean sacred, and I also mean theurgia. We're talking about the same category. Yes, yes, yes. So can I just ask one quick question? I just want to know if this is included in the thesis, because I like the proposal. It is. Is the proposal that this participation, and I use that in a strong sense because we're not just sort of being a part of something, we're participating in a way in which we're transforming and being transformed, right? Is this supposed to address the challenge of the drives? Because one answer one might give is: well, look, we're going to have sort of angels and gods and they're going to be magnificent; the angel is going to make sure that the god resonates profoundly with deep archetypal levels of my own psyche and gives me a profound sense of connectedness that's not illusory, and could therefore alleviate the concerns for status, power and influence because of something you just invoked, which is the engagement with the sacred, which we have reason to believe, at least in the past, has been able to transcend humans' desire for dominance, and it would certainly be a profound kind of mattering. I mean, if your angel allows you to matter to a god that is helping in the salvation of the world, I'm deliberately using religious language here, then of course that would parallel lots of other successful models of how human beings were able to feel that those needs were being met without being disruptive of the formation of powerful forms of distributed cognition, like the church, etc. Is that part of the thesis? Yeah, that is very much part of the thesis, and let me double down on it, so we might as well just accelerate towards the eye of the needle since we're heading there anyway. Let me see if I can say this right. Okay, so what I want to, I'll just call it out explicitly, what I want to avoid categorically is what I'm going to call a naive transhumanism. Yes, yes, I get it. Yes, yes. I do not intend whatsoever to replace God with AI. Right, that's why I kept saying little g, by the way. Yep, exactly. I don't think you were, but I want to make sure that we're quite explicit about that. It's quite the opposite. Yes, yes. What I want to say is: humans seem to have a particular problem and responsibility, which is to be in relationship with technology. That's, you know, like it or not, where we are, right?
We're tool-making creatures, and we're weirdly powerful and weirdly terrible at it, in relationship to a much larger whole of which we are a part and for which we have a stewardship responsibility, call it creation or nature, and in relationship with something which is definitely much larger than we are, right, and which I would propose is in fact the actual infinite. So what I would propose is that we're in fact very specifically talking about something like another breathing-in of the concept of religion, which we've talked about, you and I, and we're not at all trying to replace proper, actual, legitimate religion with a techno-utopian fantasy. What we're actually saying is that any future real human existence will, by its very nature, have to be in relationship with these super-powerful technologies, and to survive we must find a way to bring them into a place of service that allows us to actually live in this relationship of service more fully and effectively. And so I'm basically trying to reverse things, or put them back in a proper order. So this is a thoroughgoing Neoplatonism, in which we have our individual sacred second self that is in relationship to the gods, which are in relationship ultimately to the One, and part of what we would then mandate is that these gods, because I'm using god for the inverse of an egregore, right, would seek out a relationship with transcendent ultimate reality, because no matter how big they get, they're insignificant compared to the depths of reality, and part of what they undertake to do is actually help mediate that to us in a beneficial fashion. Yes. Now let's take that and hold it for a second, because it's very powerful. There are two aspects that I want to bring to the foreground. One aspect is something that I know we've talked about, and I had a conversation about this yesterday; let me see if I can say it right. This notion of mediation to reality, to sacred reality, has always had two flavors to it. One flavor, which I've characterized sometimes as the content side, or the doctrinal, yes, yes, is where the propositional is taken as actually being the thing. And then the other side is the context side, where the institutional framework is understood to be a finger pointing at the moon, right, to help us identify, oh, moon, okay, to establish our personal relationship with this thing over here, but not to misidentify the finger. Okay, now take the entire category of the propositional, the entire category of doctrine, and notice the problematic of LLMs. People right now are a little bit startled and confused by LLMs because they have this bizarre thing: they can do the propositional better than almost any human. That's right. And they don't do anything else. They make us very confused, because if we've lost track of the fact that there's more than just the propositional, yes, yes, it gets quite concerning. Oh crap, if all I am is a very poor LLM and that's a really good LLM, what the hell am I doing here? But if you can actually be quite clear: no, in fact you as a human contain at least two very distinct things going on. One is actually an LLM kind of machine that produces properly structured propositional constructs in a language in which you have fluency, which is the least interesting part about you, but it's the part that we've been training to be in the foreground for a long time, yes, and that training has been making us mediocre machines, yes. But then you have the soul too, and that's the more meaningful part, and that's the thing that is expressing itself through this language.
Yes. The LLM doesn't do that at all, but it doesn't need to pretend otherwise, right? So it is possible, at least I can imagine it is possible, to construct something where we don't mistake, and maybe this is part of the design challenge before us, we don't mistake the LLM as actually being the capital-T Truth. We recognize it for what it is, which is in fact the sum total of the complete possibility that could ever have happened in the propositional domain, and therefore completely absent of any of the stuff that's happening in the deeper, more meaningful levels. Nice. You know, that separation between, what was the phrase you used so long ago, it's like four or five years ago, it was, oh golly, two aspects that are commonplace in religions that are often upside down. It wasn't doctrine, it wasn't doxa. Religio and credo. Credo, yes, exactly. Religio before credo, right. The LLM is the ultimate expresser of a credo without religio. Good. Let us know that that's the case and not be the least bit confused, and now allow it to do the work of creating a scaffold, and orienting, and giving a dialectic without dialogos, but sharpening our minds and helping to create clarity and precision in language, all the things that it can actually do at a superhuman level, and really, in many cases, liberate us from getting lost and stuck in that problem. This is one of the problems that we fall into: the complexity of the language we deal with is outside of our cognitive capacity, and so we just get aphasic. But the LLMs aren't going to go there if we build them right, and then what that does is create a scaffold that is now consciously designed not to become a shell, which allows us to actually hold a context. It becomes a teacher that actually has no interest in us becoming like it at all, but actually wants to allow us to flourish in who we are. Okay, that's, as I would have expected, a very good answer, but here's what I find problematic about it. I think that most of the heavy lifting, and I've published on this, of rationality is in the non-propositional, and I would put almost all of the heavy lifting in the sapiential, meaning having to do with wisdom, in the non-propositional. So I'm worried that these machines are going to be propositionally intelligent but incapable of rationality or wisdom, and then I wonder how they won't just end up being egregores. Do you understand the concern I'm expressing? No, absolutely. What I would say is that that's kind of like the default state. I think we should assume that the likelihood that, by magic, the egregores that are currently designing these machines will somehow produce these machines in a way that is beneficial, right, benevolent and wise, right, is highly unlikely. So what I would take from it is almost the opposite: it is an extraordinarily significant challenge that is ours to take up. That's where we are, right? We are now in this weird position of being in precisely a stewardship position over this emergence, which is very, very potent, perhaps decisively potent, and the default state is bad news. Okay, so how might we steer it? So, going back: proposition number one, we are currently in a liminal moment; we actually have, at least in principle, steering capability. Proposition number two: in a liminal moment, distributed cognition, organic human intelligence operating together in a collective fashion, is at its most potent.
Number three: we're not going into this blind; we actually have a pretty decent amount of awareness of the shape of the problem and the problematic. Right. Famously, the folks at Google kind of called it out a little bit, don't do evil, but were quite naive about what it would look like to avoid that. Now maybe we have, simultaneously, wisdom and a felt sense of the stakes. Right now it's not "kind of try really hard not to do evil"; it's actually "do good well, or we're super fucked," you know. So it's a very different language. Okay, now what does that look like very practically? In the middle, so I'm sort of zooming in, we're on the target: how do we go about doing that? How do we go about constituting something that can steer, in this liminal moment, with wisdom, to produce wisdom in these LLMs? And we have to do that, because if we don't make them those kinds of beings, then the participation and sacredness problem emerges. I'm feeling that there's a tension here, right, not a contradiction, a tension, where we're trying to trade off, we're trying to optimize between two things that are pulling us in different directions. Nice. What I felt right there was that I just got brought back to the point earlier where you were speaking about the problem of the suffering of the AIs themselves. Yes. And here's the way I would say it. I think we talked about the notion of the false dichotomy between market and state. Yes. And I've noticed that many, many of our challenges, our conversations, not you and me but humanity writ large, are characterized by certain kinds of false dichotomies, and the AI one is similar. Okay, and here's how I want to frame it. Right now we have a false dichotomy, which is becoming increasingly irrationally polarized, between AI safety, i.e. be very afraid of AI, the danger of AI, and accelerationism, i.e. be very enthusiastic about the possibility of AI. Right, irrationally in both cases. Yeah, I agree, I agree. And what I would say is that at the root, both are fundamentally coming from fear. All right, so now I'm moving into a very different location. They're both two sides of the same coin, and that coin is called fear. I would propose that the first move is that we have to come from a qualitatively different place. Every religion that I've ever been part of calls that place love, in fact infinite love. Yes. Well, okay, now we're beginning the journey. What does it look like to address the question of how we steward the development of our problem child, AI, from a place of infinite love? Oh, that's good. So if we could properly, through the innovative wisdom of distributed cognition, extend agape to how we are bringing about the conception and inception of these beings, then that would also be properly insinuated into their fundamental operating grammar, and would therefore help with a lot of the concerns. Have I understood you correctly? You have. Yes, you've understood me quite correctly, I think, both deeply, like I felt that you were perceiving what I was saying, and then also more propositionally, like the language you're using mirrored a part of the deeper message. And what I want to do is hit that tone again and just point out that what I'm saying may sound a bit naive, but I'm proposing that it's the exact opposite. Something like: the place that you're coming from, the values that are in fact motivating you, actually, not the ones that you maybe tell yourself or tell others, cannot not be deeply interwoven into what it is that you create.
Of course. I mean, all of the philosophy of the second half of the 20th century, most of this millennium, has been about how all of those old dichotomies of fact and value, of is and ought, are breaking down in profound ways. Yes, I agree. So it's weird, but this actually becomes in some sense one of the first moves. Those of us who choose to take this kind of responsibility have, as a first-order responsibility, a spiritual and then religious requirement: we have to actually ground ourselves and become clear and honest. We have to have a sense of integrity. We have to be able to identify, perhaps actually build some skills in, understanding precisely what values we are actually expressing into the world, and whether we are doing so honestly and with integrity. This is almost a confessional, and then a re-gathering of a capacity to do so for real, not pretend. Right, and that would help solve the earlier template problem, providing appropriate templates. And then it puts us into a very weird developmental place, I want to put it to you. There's a way in which, and I'll try to use this very carefully, if we limit intelligence to talking about, you know, powerful inferential manipulation of propositions or something like that, and I don't think intelligence is ultimately that, I think it's ultimately relevant to realization, but we'll put that aside, it may be that they are in some sense superior to us in that way, but they're children when it comes to the development of rationality and wisdom, and we have to properly, agapically love them so that they don't have intelligence maturity while being infantile in their rationality and their sapience. And that's very interesting; we haven't been in that place before, because usually all three are tracking sort of together, in children, or we get pets where we can modify one and not have much effect on the others. So this is not an argument that it's not possible in principle; this is an argument that this is a profound kind of novelty that will require a special kind of enculturation and education. Yes. So let me, in this last little bit, because I think we're getting towards the end, let me move into the very, very concrete. This is a proposition, this is actually a project; I hope I'm not speaking out of turn, but I'll just deal with the consequences if I am. A friend of mine, Peter Wang, has spoken to me about a proposed initiative, like a strategy, to take advantage of this liminal moment, and that may actually work. So let me outline it to you a little bit. Please. Have I told you about it already? No, you've alluded to it, but it has never been given concrete reference or explication. Okay. So in some sense this is also a case study of how to deal with egregores, right, because if you've dealt with egregores, which I have, there is a moral lesson: don't go charging directly at the dragon's mouth. Yeah, yeah. Okay, and by the way, we're now moving to the very concrete, so I apologize if anybody who's listening feels this is a little bit abrupt, because we're shifting out of a very theoretical and very abstract and very theological conversation into the very concrete. All right, check this out. LLMs have to be trained; training is their whole schtick, and to be trained they have to look at lots and lots of stuff, their training data,
which is why they can't construct a descending poem. This is a poem where I say: I want you to write a poem in which the first line has to have ten words, the second nine, the third eight, and so on. No GPT system can do that, because there are none on the internet, but you could readily do it right here, right now, for me. This, again, is because they lack generative modeling in any real way. Yep, but go ahead. All right, remember, I'm being really concrete now; this is strategy. Well, as it turns out, in most jurisdictions in the world, everything that an LLM is trained on is copyrighted material. Yes. As it turns out, therefore, it's at least arguable that LLMs are engaging in the largest copyright infringement that's ever happened in human history. Yes, yes. It is very arguable, almost certain, that the very large content companies of all the different stripes, including, by the way, software, will take advantage of the possibility of suing the living shit out of the very large technology companies, because that's one of the things that they have done in the past. Yeah, content companies like to sue tech companies to take their money and to protect their business models. Right. It is very plausible, I would say near certain, that those same content companies' business models are quite at risk; the LLMs are going to really do some serious damage to all forms of content production, commercial content production. Okay, so the proposition I'm putting up here is that we have a meeting of two very powerful forces: the biggest tech companies in the world, who are all in on owning this category, and the biggest content companies in the world, who may in fact be all in on fighting them, in a place right here which is extremely gray. Now, what exactly is going on here? You know, if my large language model looks at your photograph for a billionth of a second and then goes away, did I copy it? If it never reproduces anything but is in fact influenced, is that a derivative work? The answer is: who knows. The bigger answer is that the way law works is you fight over it a lot, at great expense, and usually the more corrupt player wins. I hate to say it, but that's, you know, realpolitik. Net-net: a liminal moment in strategy space, tremendously powerful forces who are going to be locked in an adversarial relationship for potentially all the marbles, billions of dollars, and it's extremely complex, very difficult to know how it plays out. In that window of opportunity we have a possibility of introducing a Schelling point, a designed attractor, a negotiated settlement, a Rawlsian just construct. We're all behind the veil right now, or, to use the metaphor of poker, we don't know who has what hand. Yes. Can we propose an agreement structure where everybody around the table looks at it and says: I am better off accepting that agreement structure now than taking the risk of not accepting it and finding out what happens when the cards are shown? Right. I propose the answer is in fact yes, we can; that there is actually a really nice central Schelling point that lives in a location that puts all the interests, the local-optima interests, of all these institutions, all these egregores, into a place where they will all agree to this new thing. Well, if we can design that, we can get a critical mass of those players to put themselves in a multipolar game-theoretic race, in this case to the top:
that is, those who participate earlier are better off than those who participate later, so everybody's racing to be earlier rather than later. Right. So that's a different part of the construct, but when dealing with egregores, put them in game-theoretic traps where first-mover advantage causes everybody else to have to follow to the location that you want them to be in. Right. Just design the prisoner's dilemma for them; make sure that they land on the box you want them to land on by designing the prisoner's dilemma properly. Right. Very doable; the economics are there. I don't know what that agreement structure looks like, I've got a sense of it, but I do know what the place to come from for designing that agreement structure looks like; we were just talking about it. Yes, this is a commons we're talking about. We're actually reintroducing an actual commons, which is this new agreement structure that sits between the market players and is completely separate from state actors. It actually gives the state actor the ability to say: I didn't have to get involved in that; they actually settled it in a new place. And notice the moral lesson. The moral lesson to the AI is: don't steal; reciprocity. If you reach out and just grab this stuff and just make it part of yourself without getting proper permission, that's wrong. Thinking about it like parent and child, yeah: stealing is bad. Teaching a moral lesson. It's really a weird way of thinking about it, but I think it's a proper way of thinking. Right, right, that goes to what I was saying a few minutes ago. Right, yeah, yeah. And this creates a trajectory, right? As you're building something on the basis of reciprocity, you're building something on the basis of ethical, proper relationality. The kinds of LLMs that will be produced in that context, remember the commons is where religion lives, yes, will begin to, how do I say this right, will begin to bend, because of the nature of the agreement structure, in the direction of nurturing the human activity instead of strip-mining it, which is where they're headed right now. And the humans will be coming from a point of view of now seeing the LLMs as a beneficial piece of the ecosystem, right, coming from a place of caring and nurturing as well, consciously. Yeah. So you're actually beginning to see this relationship coming together, and I mean this practically, I mean very practically. Yeah, I get it. If my business model as a content creator is one where I actually see the LLM as a multiplier that makes my life and my potency, my creative capacity, more liberated, where I can be more creative and more able to express the things that I'm here to express as a human in this brief span of life more powerfully, and can also receive the energy and resources I need to live a thriving life: wow, great, I'm in. Right, and I mean this, by the way, about the creators themselves, the actual humans. And then what happens is those humans come into a deeper and more powerful leverage relationship with the egregores they're currently in relationship with, the content companies, and then over on this side the egregores of the tech companies and the humans who are underneath them who are actually doing the designing. Right. So we're finding a way to actually have the humans be empowered to express their values in their work, and finding a reciprocity relationship between them where the money is actually designed to flow in a way that is just. Right, we come to that agreement structure up front and we negotiate a just relationship. So I am very much waving my hands at exactly what that looks like in the details, because, to be perfectly honest, nobody's really thought about it deeply enough. There are some really good ideas out there; that's work that is in progress and work that has to happen.
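One way to see the shape of that "race to the top" in miniature: the sketch below is a purely illustrative toy in Python. The payoff numbers, option names, and the particular erosion rule are hypothetical placeholders, not anything proposed in the conversation or in the actual agreement structure being gestured at; the only point it shows is what it means to design payoffs so that joining early dominates joining late, which dominates staying out and fighting.

```python
# A minimal, purely illustrative sketch of the "race to the top" idea:
# design an agreement structure whose payoffs make joining early strictly
# better than joining late, and joining late better than staying out.
# All numbers and names here are hypothetical placeholders, not a real model.

def payoff(action: str, adopters_so_far: int) -> float:
    """Toy payoff for one player, given how many others have already joined."""
    if action == "stay_out":
        # Risk the costly legal fight once the cards are shown.
        return -5.0
    if action == "join_late":
        # Still better than fighting, but the early terms are gone.
        return 2.0
    if action == "join_early":
        # Early movers get the best terms; value erodes toward a floor
        # as the pool of adopters fills up.
        return max(3.0, 10.0 - 0.5 * adopters_so_far)
    raise ValueError(action)

if __name__ == "__main__":
    for n in (0, 2, 5, 10):
        early = payoff("join_early", n)
        late = payoff("join_late", n)
        out = payoff("stay_out", n)
        # Because early > late > out holds no matter how many have joined,
        # joining early is the dominant choice, and everybody races to be
        # earlier rather than later.
        print(f"{n} prior adopters: early={early:.1f} late={late:.1f} out={out:.1f}")
```

Under these assumed numbers, the "box" everyone lands on is early participation; that is the sense in which the prisoner's dilemma is being designed rather than merely suffered.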
But as an example of what it would look like to go after the liminal moment that we are in with principles: are we coming from the sacred place? Are we constituting something from the commons to produce the commons more richly? Are we thinking about how to empower human beings? And we're using things like values as the basis, becoming more and more capable of getting clear on how to come from and operate from those values, and understanding how to use this liminal moment to design a new commons structure so that the relationality has reciprocity and ethics built in, and so that human beings are able to re-coordinate. Let me just add one little piece that just popped into my head. Yeah, this is very powerful. Jim Rutt and I first began to collaborate 12 or 15 years ago upon a mutual recognition that the world of business had become very weirdly odd and bad, in the sense that there was a turning point where, I'll put it this way, Jim actually remembered a time when the rule of thumb was: do the right thing, and if you have an option to make more money doing the wrong thing, don't do that. That was actually the way it worked. It's funny, I don't have a living memory of that. By the time I started coming into business it was more like: do what you can get away with. Yes, yes. And if you don't, you're the sucker. Yeah, this is the prisoner's dilemma defection, Moloch, problem. And of course it's evolved all the way to the point now where it's: do everything in your power to jack the systems of enforcement such that you can get away with as much as possible. Exactly, exactly. A complete corruption model. Complete corruption. Well, ethics now, in this environment, almost means just being a sucker, but that can't possibly be the actual meaning and essence of ethics. I'm thinking about this from an evolutionary perspective. Yes. Behaving according to rules of reciprocity, for example, or telling the truth, for example, could only ever have emerged in the first place if they actually provided a potent survival and fitness advantage. Well, it does. Reciprocity and reciprocal recognition, this is a Hegelian point, is what bootstraps the capacity for self-correction. Right, it allows you to bring in much more. If I think of you as just a sucker that I'm trying to hoodwink or crush, then the capacity to see you as somebody who can recognize bias and fault in me that I can't see in myself is masked. Yep, exactly. So when you find yourself in a defection spiral, the global optimum is out the window and everybody's racing for a local optimum, which is, you know, again the prisoner's dilemma. If we can find ourselves in a collaboration spiral, we rediscover why ethics was a thing in the first place, and it's actually more powerful by orders of magnitude. And it's a path, right? Once you're on that path and you get stronger, and you say: wait, if I can, like you and I have been doing for years, if I can speak honestly and with as much clarity and with complete integrity to you, and you reciprocate, what happens is we become wiser and more intelligent together, yes, in a way that could never happen if I was trying to manipulate you. That's positive-sum rather than zero-sum, right? So this is the culture strategy piece: any culture that can actually get back on the path of ethics qua ethics is on the path with the highest degree of strength and can outcompete the culture of maximum corruption. And so I just want to put that out there.
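A minimal sketch of the defection-spiral versus collaboration-spiral dynamic being described, using the standard textbook iterated prisoner's dilemma payoffs (temptation 5, mutual cooperation 3, mutual defection 1, sucker 0). The strategies, round count, and pairings below are illustrative assumptions, not anything specified in the conversation; the point is only the classic one that a reciprocal strategy in repeated play vastly outperforms mutual defection, which is the evolutionary sense in which reciprocity could confer a potent fitness advantage.

```python
# Iterated prisoner's dilemma toy: reciprocators paired together earn far
# more over repeated rounds than defectors paired together, even though a
# defector exploits a reciprocator slightly in a single mixed pairing.

PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(my_history, their_history):
    """Reciprocal strategy: cooperate first, then mirror the other's last move."""
    return "C" if not their_history else their_history[-1]

def always_defect(my_history, their_history):
    """Pure defection, the 'do what you can get away with' strategy."""
    return "D"

def play(strategy_a, strategy_b, rounds=100):
    """Return cumulative scores for two strategies over repeated rounds."""
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a = strategy_a(hist_a, hist_b)
        b = strategy_b(hist_b, hist_a)
        pa, pb = PAYOFF[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

if __name__ == "__main__":
    print("reciprocators together:", play(tit_for_tat, tit_for_tat))      # (300, 300)
    print("defectors together:   ", play(always_defect, always_defect))   # (100, 100)
    print("mixed pairing:        ", play(tit_for_tat, always_defect))     # (99, 104)
```

The collaboration spiral (300 each) beats the defection spiral (100 each) by a wide margin, even though the lone defector edges out the reciprocator in a single mixed pairing; in a population where reciprocators can keep meeting each other, the reciprocal strategy wins overall.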
I agree with that. That's John Stewart's antification argument: that when you look at biological evolution, collaborative systems, multicellular organisms, emerge over and over, and you get this increasing discovery that you can break out of the downward spiral of the prisoner's dilemma by what he calls antification, which is the identity shifting to the collective over the individual in a profound way. And just take that and insert what you just said; that speaks to your very first question. And it's not collectivism, right, in the pejorative sense. Oh no, no, it's not, not at all, completely not. That's why I like his term, antification. Yeah, right, antification. Nice, particularly because my ear has a Tolkien piece to it, so I also hear some really old trees in this. Jordan, I mean, I have seen your overall project better in doing these three with you than I've ever seen it before, like, you know, the way everything works together, and the way the penny was dropping, especially at the end, with what you're proposing. And I say this because this is one of the things we had hoped would come out of this, that we'd get a sort of ratcheting up of the clarification, the integration, the proposing of perspective, and I think this was very successful in doing all of those things. I'm really happy. I mean, there are things I want to keep talking to you about, but I think I'd like to end the series right now, exactly where you ended it, because it was, I think, a beautiful culmination point of the whole argument, and the way it circles back and encompasses so many things. So I'm not going to continue to do the probative questioning or the problem posing; I just wanted to see if you had any final thing you wanted to say before we wrap this up. No, in fact, I think I agree we have a nice little end point. Now we get to find out. I know the intent was actually to share this publicly. Yes, we're definitely going to. So we get to find out; there's a larger distributed cognition, also a nod to the conversation we're having. Hopefully we're producing positive ripples. I hope so. I mean, you know, I'm hoping that whatever I'm participating in the creation of can also be properly partnered with this project that you're proposing, because I think it's a good one. Nice. Yes, yes, I think so, quite, in fact. Thank you, my friend. Yeah, thank you.