https://youtubetranscript.com/?v=ak91AzLlsbE

Good evening. Happy Sunday, everybody. I've got some important news. I got a phone call from the bishop a few weeks back, and he told me that things were going to change a little bit. The first change is that I'm going to be moving away from my parish assignment this summer. I'm going to be taking over a role as bishop's secretary, master of ceremonies for the diocese, director of the liturgy office, and vice chancellor. If that vice chancellor thing sounds cool, it really isn't. So that'll be starting effective June 28th. And the second piece of news, and this one's a little more up in the air, is that I'm supposed to start studying canon law this summer at the Catholic University of America. So I wouldn't really be starting my job until August, although the appointment would become active June 28th. But I have not been accepted into canon law school yet, and I don't have housing arranged yet. So I'm still working on it, and hopefully things will work out. But yeah, not sure if that's happening. You know, I wasn't expecting to move so soon. I was kind of hoping for another year, but this should be a good thing. Congrats on the coming promotion. Yeah, that's why we'll talk about it as a promotion. It's definitely not just an opportunity for me to carry the bishop's bags for the next four or five years. Thank you, thank you, thank you. So you may be wondering: what do the bishop's secretary, master of ceremonies, director of the liturgy office, and vice chancellor actually do? As bishop's secretary, I help keep track of his schedule, but I'm not his only secretary. I'm the secretary that handles a lot of his work when he's traveling around, managing his on-the-road schedule rather than the office schedule. Master of ceremonies is probably the most interesting and fun job. I get to coordinate all of the liturgies with the bishop. So that should be interesting.
And with the Newman Center's construction completing here pretty soon, I imagine one of the liturgies I'll be involved with is the dedication of the church, which is quite the prospect, quite the endeavor, doing a church dedication. Those liturgies usually take two and a half, three hours; it's one of those rituals you only do every once in a while. Director of the liturgy office means that anytime somebody has questions about liturgy stuff, I have to be the guy with the answers. And if there are any changes coming down the pipeline, like recently, when we made a minor modification to the formula of absolution, I'd be the guy handing those things out. And while Chancellor sounds like a really important position, and Vice Chancellor sounds like it's important, in church law the main job of the Chancellor is just to be the bishop's notary, making sure that what he's signing really is, in fact, an act of the bishop. And I'm only going to be doing that job when the main full-time Chancellor is not around. So, there's going to be another priest taking my spot here at Holy Cross, and I'm sure he'll do just fine. He's been a priest for two years now, and he's moving on to his second assignment. And the guy whose job I'm taking is getting a small parish in the middle of nowhere, which, knowing him, I'm quite certain he is looking forward to. He did the job I'm taking on well, but he really has a passion for parish ministry. That's where he wants to be, and I think it's going to suit him just fine being out in a parish. So that's what I've got going on. This is always an exciting time of year in our diocese, and probably in most dioceses, when these new assignments roll around. The general rule is that the last Wednesday in June is when all of the new assignments come out. So that's what I've got for news. One of the leads I have for my potential D.C. housing is the Ukrainian Catholic seminary there, which could be super interesting. I haven't called them yet.
It's been a little interesting doing this application process because it's all kind of last minute, all the places that I've been calling. Yeah. All righty. What's going on in Catholic world this week? Well, let's take a look. Hello, Laura. Hello. You sounded like maybe you could use a conversation partner. I appreciate you hopping on. That was exactly what I was thinking. Are there any changes at your parish? You're listening on YouTube, so you don't get the echo? Yeah. OK. Thanks. Yeah. So I was going to ask you, did you guys do First Communion today, or when do you do it? We did First Communion last week, and in our diocese confirmation is done in third grade, so that was the same Mass as confirmation and First Communion. At my parish, it was today. Did it take over an ordinary Mass, or was it a special extra Mass? No, it was just at the ordinary Mass. OK. Yeah. First Communion doesn't take that terribly long. Did they actually have the kids in a procession, or were they just seated in their place already? Uh-huh. Yeah. OK. So if you're not having them process in, it's a lot easier. Yeah. That was pretty easy. Oh, Go, Dog. Go! You know what? I can't top that Go, Dog. Go! analysis from last week. Yeah. I'm not even wearing a hat. I could have come on wearing a hat. I'm sorry. No, it's OK. But I could have asked you if you liked my hat, and it would have been funny. But you know, not every opportunity can be taken. Yeah. And another fun thing that I got to do this week was a baptism for a friend of mine from high school. A couple of friends. You baptized your friend, or you baptized his baby? Their baby. Their baby. Oh, OK. To be clear, to be clear, these guys have been baptized for years now. OK. Yeah. So yeah. And then one of their older kids brought How the Grinch Stole Christmas up to me and wanted me to read it. And I put as much character as I could into it, but I think the kid's two, and he kind of lost interest. Didn't get to the end; maybe Go, Dog. Go! would have been it.
But he was the one who selected it, not me. So I don't feel like this is wrong at all. All righty. What is SSPX? It has taken over my algorithm. You go first, Father. Yeah, sure. So the year is 1970, and the man of the hour is Archbishop Lefebvre. He was the former superior general of the Holy Ghost Fathers. The Holy Ghost Fathers are a missionary order, mostly French and Irish priests, and they were largely responsible for evangelizing, especially in Africa; the Holy Ghost Fathers have a big presence in Africa. And he, you know, he was present there. Yes, yes. I don't think it has anything to do with Archbishop Lefebvre. He was very French. He did not like the direction that the new Mass was taking, and his perspective on it was as a missionary. So it's not like this guy was piloting a desk his whole life. He had actually gone and worked out in the mission fields in Africa, and he observed the reactions that he got to the traditional Roman rites. And he thought that the beauty of the liturgy, the solemnity, the meaning behind it, that that was actually more effective than the new rite that had just been put together and promulgated. And him being a bit of a missionary, he's used to just going off and doing things without really waiting for permission, because you tend to have a lot of extra authority when you're out in the field, separated from your superiors. So he goes and he founds a priestly society. What would that mean? That means basically a group of priests with a common mission. He founds a seminary in Switzerland, I think, and begins training priests exclusively in the tradition of the church. All throughout the 1970s, there's this back and forth between him and the Vatican, because he does not go along with the new paradigm. But this Society of St. Pius X is growing in popularity. So through the 1970s and 1980s, there's this continual tension, drama, misunderstanding between the Vatican and the Society of St. Pius X. All of this comes to a head in 1988.
Archbishop Lefebvre wants to have a bishop ordained to be his successor. And the main reason for this is that he thought it was very important that all of his priests be ordained using the traditional Roman ordination ceremony rather than the modern one. While he was around, he was able to do it, but he was worried that they weren't going to be able to find bishops who were willing to do it in the old way. So there's this continual back and forth between him and the Vatican, and a little bit of drama on that account, over whether or not the Vatican was going to give him what he wanted, and if he had just waited around a little bit longer, would that have all worked out? But as it turns out, I think it was June of 1988, while having a letter in his hand from Pope John Paul II explicitly telling him not to, he goes ahead and he consecrates four men to the episcopacy, which meant an automatic excommunication. And it's not like that was going to catch him by surprise. He knew what was going on ahead of time. And so the Society of St. Pius X has been in a state of canonical irregularity, bordering on schism, since 1988. They are dedicated to the traditional Roman rites, not the reformed rite that came about after Vatican II. And there was a splinter group from the SSPX, the Priestly Fraternity of St. Peter, the FSSP, lots of S's and P's here, that split off from the Society of St. Pius X in 1988 and stayed in full communion with Rome. And to this day, they have not had any trouble, well, maybe not any trouble, but they've been able to have all their ordinations in the traditional Roman rite while being in full communion with Rome, no canonical irregularities or anything like that. Good job. So, anything you want to add to that, Laura? Yeah, one moment. Give me five minutes.
Okay, so, oh yeah, just regarding the FSSP: when I was in college, we invited a priest to say the Tridentine Mass at our local church, and he was an FSSP priest who had been an SSPX priest, and so we really enjoyed telling people that he was FSSP, ex-SSPX. Formerly. Yeah. FSSP, ex-SSPX. This is why people find Catholicism incomprehensible. Yeah, and we had a fun dinner with him after Mass, where our university chaplain, who was also the parish priest, and he was like a very toe-the-line kind of guy, he was just this really standard, like, you know, whatever-my-bishop-tells-me-to-do kind of priest, and he didn't complain about things, and he wasn't super opinionated about what was going on in the church, whatever. And so he was sitting at dinner with this FSSP, ex-SSPX priest, and the conversation came around to Bishop Lefebvre. He was a bishop, right? Yes, yes. Yeah. And this priest goes, oh yeah, Bishop Lefebvre, wow, I mean, yeah, I knew him, he was a really wonderful man, and our university chaplain goes, yes, it's a shame that he died in schism, and then the dinner continued. It is a shame. It is a shame. You know, I've got a soft spot in my heart for the arguments that he was making, the position that he was taking, but I want to answer Phlebas's question here: so technically you have no chance of salvation? This is where you're going to figure out why I'm going off to study canon law, because the legal reality of the church and the real reality of the church, they aren't one-to-one, right? So with excommunication, the legal effect is that you cannot hold a church office, and you cannot receive the spiritual goods of the church, so you can't receive communion, and you need to have the excommunication lifted before confession, all of those sorts of things. So that's what the legal effect of excommunication is. That doesn't actually tell us about the state of his soul.
All right, I'm just going to say bye, because something's going on at my house, but it was nice to see you. Bye-bye. Yeah, yeah, I hope that gets to continue, continue and continue. When I went to go learn the traditional Latin Mass, I learned it at Our Lady of Guadalupe Seminary in Denton, Nebraska. That's the Priestly Fraternity of St. Peter's seminary. And the rector of the seminary there joined the SSPX way back in the 1970s, you know, before it was cool. And he knew Archbishop Lefebvre personally. And his suspicion was that near the end, Archbishop Lefebvre had quietly taken on a position of sedevacantism, where he believed that the person claiming to be the pope wasn't really the pope. That might have accounted for some of his behavior, even though formally the SSPX has never taken a sedevacantist position. They've always... and there are some really weird things going on, right? So when the SSPX thinks that a priest needs to be laicized, they will do that through Rome. They'll send the paperwork through Rome, and Rome will process it. So that's why it's not really a schism. It just looks an awful lot like a schism. It would be bizarre to allow so many other forms and not the Latin, wouldn't it? Yes, exactly, Phlebas. I've been saying this for a while. Hello, Mark. Hello. You look lonely. I figured since Laura couldn't hold down the floor, I would join you. Yeah, hosting people, that takes work. It does. Well, and also you're not only invoking my name, Sieg, but you're also talking about the Vatican II stuff that is so near and dear to both our hearts, I suspect. Yeah, yeah. Good evening, Rene. Good evening. How did your stream go on Friday? I was out with friends in meatspace. Oh, well, I thought it went well. We split it up into two pieces, the stream and the afterparty. So now you don't have to watch five hours; you can just watch the monologue. Oh, so that was planned.
The views on the stream are awful. I don't know why, but that's okay. You know, maybe the weather's nice and people are going outside. I hope so. Yeah, yeah, we'll see. Yeah, I hope so. And I did a stream with Emmanuel earlier in the week, and I did a stream with Vander Klay earlier in the week. So a lot of streaming this past week. Yeah, yeah. Did you see the third wayism one that I did with Vander Klay? I did. I think I watched all of that. I hopped in at the end, didn't I? That was when I hopped in. Yeah, yeah. You saw the whole thing now, right? I had to get up to talk to my boss briefly, and that was when Claire was on. Oh, you didn't miss anything. Oh, there was a lot of suspicion that you left for different, for real reasons. And I was like, I don't think so, I bet something just happened. You realize I'm in a cubicle right now, right? Anything could happen. Anything could happen. Yeah, they thought you left because of Claire. I doubt it, but maybe. No, no. If somebody else is there to deal with it, that's fine. Yeah, yeah. It's a strange position to take. Yeah. Yeah, I thought there was a lot of insight in that particular stream, too, because Vander Klay seemed to go, oh, I see what you're saying. I see the difference between the crisis of faith and the meaning crisis. I'm like, good. That's a good start. I think he watched the video we made. Yeah, yeah, I think he did, too. But that's different from actually interacting and getting the point. It was funny, too, because we've been talking about this video for a while, and I was like, well, I have to ping him about it. So I pinged him on Twitter, and I've got all this stuff going on. Emmanuel wants to do a video, so it's like, busy week. And then he gets back to me immediately and says, oh, yeah, how about today? And I was like, I'm going to have to write down notes, because I'm not sure I can talk about it. I know about it, but I was figuring I'd have a couple days. I could kind of mull it over.
Nope, nope, it's a few hours. I need to rearrange everything about my day, cancel my stream with Emmanuel, and put it off to the next day, which was fine with him, because he wanted to do the same. So I was like, oh, good, we're on the same page. So yeah, and that stream that I did with Emmanuel is about AI and ChatGPT and that atrocious video that John did. I was just like, wow, what a train wreck this is. It's amazing to get everything wrong. That's remarkable. Yeah, yeah. I saw this video, I guess Matthew Pujo had shared it. It was in the style of a Metal Gear Solid 2 dialogue between, I never played Metal Gear Solid 2, so, the guy who's not Solid Snake and his controller, talking about how putting all these AI tools out on the internet is actually supposed to precipitate a crisis where we want to have digital verification of a real human being. And oh, man, that got all my conspiratorial neurons firing. It's like, oh, dang, that makes so much sense. Well, the problem is there are perennial patterns. Once you have X, you need Y. And it's like, no, no, we're trying to get around Y, so we're going to put everything in the digital space. But once you have X, you need Y. And then, no, no, no, so it's this ongoing cycle. So this is a perennial pattern. Can you give me two former instances of this sort of thing happening? Sure. So in the early days of the country, you didn't need a passport to come and go. Right. And then eventually it's like, oh, no, no, we have to have one, because once it reaches a certain size, now all of a sudden you do need to know who belongs here and who doesn't. Right. Or, you know, then you need papers. So you need to walk around with papers. Which doesn't mean you couldn't get into the country without papers. That's what WOP means: without papers. An immigrant without papers on them. They didn't throw them out of the country.
But the reason why they started to have this immigration green card system in the U.S. was actually because they let a bunch of mostly Italian anarchists in. And anarchists throughout history, by the way, if you read any history at all and pay any attention at all: anarchists bomb people. That's what they do. They bomb people and they shoot leaders. That's pretty much their whole MO. This is why when people are like, I'm an anarchist, I'm like, well, I hope not, because I'd have to kill you. You would be an enemy not only of the state, but of me, because you bomb innocent people and kill them. And I'm just not a fan. Like, OK, so maybe you are. So, all of the immigration in the early 20th century: we have immigration, now we need passports. Exactly. Now, well, passports. Now we need, you know, identification on the ground. Right. Then you get cars. So at first, identification is for things like alcohol and voting, right? It's just age verification. But then you get cars. Now you need verification that you're paying taxes and able to utilize the roads safely, which you didn't need with horse-drawn carriages, because they're so slow that it doesn't matter. And the horses, the horses have a sophisticated AI. That's the thing. A horse is a sophisticated intelligence. It does the same thing as an AI. People do not understand this. It's like, you're telling me you're creating an intelligence? Did you see what happened the last time we put an intelligence to use in a way that we could control, like with horses or cows or chickens? We've done this before. And the pattern always plays out exactly the same way. It's the same pattern. Once you introduce that, you need all these other things that you've been trying to get away from. And it's not right away, because scale matters. So when it's 10 percent of the population, who cares?
You know, but when it's 30 percent of the population, now it becomes an issue. Or when there's an outsized effect: drunken driving isn't really a big deal until a lot of people start dying because more people have cars, and now all of a sudden you have to pass a law against drunk driving. And there were, for the most part, no laws against drunkenly driving a horse-drawn carriage or drunkenly riding your horse. Drunkenly riding your horse never killed anybody. Nobody cared. You know, you never killed anybody else. Somebody might have fallen off. Right. Right. Nobody is like, well, yeah, that's your choice. But when it's no longer your choice, now you need laws to deal with that. And some of that is velocity, like actual velocity of technology adoption, or the velocity of the technology. Right. Like internet porn is not a problem in the beginning, because transferring images over a modem is too slow. And so, you know, it's not a thing because it can't be a thing. But then once the speed happens, right, once the velocity picks up, now all of a sudden you need to do something about it. Of course, we didn't, and now we have a big problem we need to do something about. But yeah, it's the same perennial pattern. Yeah. Yeah. Apparently lots of people back in the day died from carriages and horses. So anyway, I haven't studied carriages and horses. Don't ask me. I hope William Branch isn't pulling our leg. Hello, John. Hello. Congrats. How you doing? I think. Thank you. Thank you. That feels like a promotion. I don't know. But, I mean, what is it? It's a lot of responsibility, and especially with the master of ceremonies position, it's a lot of visible responsibility. So if, let's say, an ordination or a chrism Mass is an absolute disaster and nothing's going right, that'll be my fault. Ouch. Yeah. So anyway, man, I've got to go back to school. I was pretty well expecting the school thing to happen.
The bishop's secretary and master of ceremonies thing was like, oh, that's happening, too. All righty. I thought I might go back. They've been talking about the canon law thing for years now. That's what I get for showing interest and aptitude: extra schooling. I'm such an idiot sometimes. Yeah. Yeah, I wanted to get out of school as soon as I could. Oh, that first fall where I didn't have to pack up all my stuff and go someplace and sit in a classroom. That was great. Yeah. Just the best. Yeah. I just got to watch the kids go back to school, and I just continued doing my thing. Yeah. So I have a question. I need you to answer this question for me; I think it's kind of interesting. So, let's say Peter is sort of the archetypal pope-priest. I mean, of course, in the ultimate sense, Christ is our archetypal priest, you know, the Letter to the Hebrews, all of that. Right. But in the same sense, Peter is like the typological pope, right? Is that a fair assessment? Okay. Yeah, I mean, yeah, basically. Yeah. Okay. So then what's interesting about this is that part of being pope, or even a priest or whatever, is not being married. And that's the negative take on it. You know, the positive take is, this is your vocation, to minister to the flock in a special way that a married person couldn't do. Right. So that's the positive take on it. What I find interesting is that Peter was married, and yet the pope and the priests are, you know, not supposed to be married. So I'm curious about your take on that discrepancy. Yeah. So what do you know about Peter's wife? Well, that he had one. That's about it. Yeah, that's it. That's literally everything. So we don't know if she was around. We don't know. I mean, you know, we know Peter had a mother-in-law, right? The only reference telling us that he had a wife is that Peter's mother-in-law is mentioned in scripture.
So all of a sudden he starts tramping around with this itinerant preacher. Like, apparently the wife was OK with that. Or maybe the wife had passed away. So anyway, there are like a thousand arguments you could make about it, and they're all from silence, because there isn't even a consistent tradition in the church. You'll hear one tradition say that Peter's wife had died, another tradition say that she went out and preached the gospel with him, another tradition say that she went and lived in seclusion or something. You'd have to really start reading it. But the fact is that the tradition of clerical celibacy was nowhere stronger than in Rome and the Western part of the church. The Roman clergy, in the city of Rome, they had a very, very strong tradition of celibacy. It's not like they never had any married clergy, but it was always understood to be an apostolic tradition, something handed down from the apostles. So we just look at that and we say that whatever the fate of Peter's wife, it didn't preclude the Roman church from developing this tradition of celibacy. Imagine he ended up with a mother-in-law and a wife. Yeah, I feel worse for him after that. It's interesting. So I guess, yeah, I mean, I guess it is an argument from silence. Like, we could speculate, you know, the Catholics would speculate one way, sort of using it both ways, of course. But yeah, I don't know. I mean, it's just an interesting detail. Like, we don't have evidence about whether any of the other apostles were married or not. I mean, you would expect that in that sort of Jewish culture around the time of Christ, they would get married very quickly, generally. Right. They would get married quickly and start having children quickly. So I think the daughters would.
I think the sons had to go out and establish themselves a little bit more before they would get wives. OK, that's, you know, we've heard this from Jordan Peterson, right? Very consistently across cultures, it's slightly younger women with slightly older, higher-status men, you know. So yeah. So if they were all rather young and had attached themselves to this wandering rabbi, then it's not all that crazy to think that only Simon was married. But again, it's like the Gospels don't tell us, and there isn't a consistent... I mean, the consistent tradition is that they weren't married. So it depends on what you think about tradition. OK, so the tradition is that he had a mother-in-law, so it's obvious Simon, at least at some point, was married. Right. Right. Yeah. But all the other ones, the tradition is that they had never been married. Oh, OK. OK. Yeah. All right. Interesting. Yeah. Because, like, you mentioned how it was sort of a strong tradition just within Rome. So what do you think is the step that gets you from "this is a tradition we haven't imposed by strict law, this is just the traditional pattern that most of the clergy have been celibate" to, whenever it was, 1000 AD or so, I can't remember when, the Lateran councils, Gregory the Seventh, all of that? Yeah. So this is what I was taught, and I haven't gone in and studied it myself: in the western part of the church, the celibacy thing was becoming more and more common as time went on. The married priests thing was sort of still happening at the edges of, we'll say, ecclesial life. You had small-town pastors, and they had wives. And what the Gregorian reforms were doing was universalizing an already common practice rather than introducing something novel. OK. So that's the argument that I was given.
I haven't studied it myself. I mean, I don't mean to pose it as a deal breaker by any stretch, because it's like the tiniest of details in the whole scope of this larger story. Right. I just find it interesting. Yeah, I don't know. I just thought the connection was kind of interesting. You know, I feel like some Catholic somewhere has written about this in quite a bit of depth, because one thing the church does have is very extensive thoughts and answers to all sorts of questions. Right. So I just thought I'd pick your brain about it. Well, you've about picked me to the end. I don't have anything more to give at this point. Maybe when you're learning canon law, you'll find something and tell me. Yeah, I think canon law is going to be a lot of canon law. I'm going to have to get my Latin polished up, too. I can pronounce it OK, but understanding is harder. Yeah, it's like a computer language. It's very big. They should have made it a computer language. So regular, it's unbelievable. A lot more regular than Greek was. I'm glad I don't have to do canon law in Greek. The verbs are awful. Verbs are just awful in Greek. Yeah, true. Do you do both the TLM and, what is it, the Novus Ordo? Yeah, yeah. So my bread and butter is the modern Roman rite, the Novus Ordo. That's what I do almost every day. I don't often have an opportunity to celebrate the traditional Latin Mass with the people, although my predecessor in my new job would regularly do it on Sundays for the traditional Latin Mass community that meets at our high school chapel. So that'll probably be something I get to do more of; I'll have more opportunities to celebrate it. So that's fine. Well, Gregory the Seventh's reforms dealt with excessive independence of the clergy and lack of moral integrity by the clergy. Yeah, that lack of moral integrity thing is a perennial pattern. Yes. Yeah. It's a people problem, I think.
You get the people to stop peopling, and it'll be fine. Yeah. Yeah. It's just, you know, it's only because it has implications for the structure of the Catholic Church, more so than it does in Protestantism, that it gets so much attention, right? Yeah, I don't know. I think the Orthodox, and, you know, Father Stephen will have to hop in here and correct me if I'm wrong, but I think the Orthodox still won't have married bishops. Really? I think. I think. I'm not going to dive all in on that opinion there. Which means that, because so many of their clergy are married, you know, when the guys are in the seminary, so again, I'm talking about the Orthodox here, which I don't fully understand how it actually works, but I'm pretty sure you have to be married before you're ordained. That's kind of the rule. I think you're right on that. Yeah. Yeah. And so guys will be real anxious to find a wife, because they know that if they're a married priest, they'll have no chance, or a very low chance, of becoming a bishop. So it kind of protects them, puts a glass ceiling over them, and then they don't have to worry about it anymore. Which, like, you know, dealing with seminary, I'm glad I didn't have to deal with finding a wife as well. You know, one thing at a time. Yeah. The spouse of a minister can be, let's say, a negative. Yeah. It solves some problems and creates new ones, you know. Yeah. Trade-offs. Trade-offs. Mark's got a great video on trade-offs on the top-quality channel, Navigating Patterns. Can confirm. That is true. Top channel, top video, for sure. Well, this is why, like, I don't know how it looks in Catholic churches, but in a lot of Protestant churches that call themselves complementarian, the women run the church, right? Yeah. It's kind of an ironic fact. So, you know, the fact about the pastor's wife having more influence than the pastor. Yeah. Yeah. I mean, they show up, the ladies volunteer, they do a lot of good work, and everything would grind to a halt without them.
And that is another one of those perennial patterns. Right. Across all churches everywhere. Women point. You're not getting around the fact that women point. Og can build and Og can smash all day long, randomly. But women point. And that makes all the difference in the world. Yeah. Women think it's very important that we have sandwiches and bars at every funeral. And they're right. If I was running this, we wouldn't have sandwiches and bars. I'd be like, what, the funeral's done. Go home. Yeah, I was reading this. This was a while ago now. But what's his name? Leon Podles? Something like that. He's a Catholic writer or something. What's the name of the book now? He was writing about what he sees as the feminization of the church, going back, I think, to the Middle Ages, which is where he sort of starts the thing, and sort of watching this progress. The Church Impotent. The Church Impotent, yes. That's what it is. So it's been a while since I've read it, so I might butcher it. But his argument is that a certain theology, or certain practices, worked their way into the church in the Middle Ages. He quotes a few particular... Yeah, he's probably not a Bernard of Clairvaux fan, is he? I think that's one of his targets. Yeah, well, I think so. I mean, Bernard of Clairvaux, his kind of innovation, we'll say, is that he began reading the Song of Songs on an individual level. Right? Because all of the fathers of the church had very consistently understood that on a communal level. Right? So Christ was wedded to his church as a whole. And Bernard of Clairvaux began applying that to the individual. So Christ is marrying you.
Now, I think that if you keep what Bernard of Clairvaux was saying in the context that he was saying it, he was saying it to a bunch of monks, then it works out just fine, because they’re monks, you know, and in theory they’re going to have the right sort of maturity to be able to handle that without getting all weird about it. That’s interesting because in my… Once you start trying to tell that to Joe Sixpack, he’s going to be like, what the heck, you mean I’m married to Christ now? Right, right. He doesn’t need that. Well, that’s interesting because the dominant interpretation, and I wonder if this is a reaction to this, but the interpretation I grew up with in my, you know, non-charismatic church was that the Song of Songs is strictly about Christ and the church. Because, and they wouldn’t say this, but because we’re kind of a little bit Puritan, we can’t say that it’s some sort of erotic love poetry, right? So I just wonder where that interpretation I grew up with came from, because it’s so obviously lacking to me. Like, it’s not wrong, because Christ and the church is the archetype of marriage, but it’s not just that, right? So that’s interesting. The framing is really tricky. When people say the feminization of something: masculine and feminine are inevitable realities. You’re not getting around them, right? I think what they’re really referring to, and I do hope this book comes out, it’s called The Hidden Matriarchy, I think what they’re referring to is when you unhide the matriarchy, right? When there’s too much light shone upon the divine feminine, that creates all sorts of problems, right? And then you get these issues where people are trying to follow something maybe they shouldn’t, right?
People are trying to make something explicit that is supposed to be implicit, because pointing doesn’t work quite so well if it’s explicit. Pointing in particular is better done implicitly, you know, where you’re not saying, you need to be nice to that person, but instead you’re saying, you know, niceness is a virtue. And it’s just sort of, you know, like, oh, I wasn’t nice to that person, right? It’s a subtle difference, but it’s huge in terms of how well it moves people, right? And it’s a difference, because this has come up a few times over the past week, right? It’s the difference between being in a public discussion with somebody and trying to prove a point to the person you’re talking to, versus not caring about whether or not you convince that one person, but instead trying to make a point to the audience. So now it’s not about making sure that one person understands your point; it’s like, no, no, the audience should see something important, and I can use this person, who maybe I can’t convince, as contrast to show something, right? That’s an implicit way of doing things rather than an explicit way, where I’m trying to, say, convert you in the moment or something. And hello, Ted. It’s good to see you, sir. Hey, Mark. That Fradd versus Dennis Prager conversation, right? That conversation wasn’t for Dennis Prager’s benefit. Right. Yeah. Yeah, I mean, clearly Prager was not going to move on his position because, well, we know where his shadow is now for sure. And it wasn’t important. The important part was the fact that people like Ethan clipped it and threw it on my Discord channel, because it’s like, here’s the contrast. You can see plainly what’s wrong with Prager, and you can see plainly where you want to be in that spectrum, because it is a spectrum, to be fair, right? It’s not a yes-or-no thing. Is that the Pints with Aquinas one? Yes. Yeah, yeah. He had Dennis Prager on. And they talked about that whole porn thing.
He doubled down on his ridiculous and obviously incorrect and stupid position. Yes. Yeah, I mean, just hearing him talk about it, it’s like he still thinks in terms of his magazines, I think. Well, you have to know a little more about Prager. So he’s got a particular trick that works particularly well to help marriages. And once you understand that and then you add this, it’s like, oh, I can clearly see his psyche and his ethos around that one particular issue. And that makes it all more clear. Well, this is why PVK calls him a Cold Warrior. He’s totally cut in that mold in every way. And what I think he’s mixing up, I haven’t touched the Matt Fradd thing or any criticism of Prager’s statements, but my guess is the way that we should criticize him is to say: you’re mixing up something like essential nature and accidental nature as a result of the fall. Right. You know, like, oh, men are just programmed to, you know, look for boobs. Oh, OK. Maybe there’s something not quite right about that, Dennis. I don’t know. You think that’s a good thing? I’m not sure. Yeah, yeah. I don’t know. All that talk about essences and accidents made you really sound Catholic, John. Well, OK. So I was like, you know, I’m on with Father Eric. This is something that at least I’m going to like. Probably if you would ask me what I think those things mean, I don’t know. I couldn’t tell you. Well, you used them correctly. You used them correctly. You’ve got at least like a fifth-grade understanding of the words or something. Yeah, yeah. Well, it gets helpful because people struggle with the incarnation. Right. Like, how is it that… Yeah, yeah. Yeah. Yeah. The most high God comes in and incarnates in a specific time and place. Right. And so it’s sort of a neat way of sorting it: yeah, these are accidental properties, he’s a first-century Aramaic-speaking, you know, Jew, but, you know, the essential properties, whatever. Right.
So I do like those sorts of distinctions. It sort of neatly wraps things up. So that’s the best. I like the word you used. Right. Because that is the problem. Right. You struggle with it. Yes. And you know what? That’s correct. And if you’re not struggling with it, you’re wrong. We always want an answer. But if you get an answer, you’re not struggling with it. But the right answer is to struggle with it. That’s the correct, that’s the true way to have the relationship, to have the engagement: to struggle with it. And I like that you used that word. That was really good. We who wrestle with God. Yes. Well, yeah, I mean, just thinking about reading Chesterton’s Orthodoxy for the first time. And this isn’t original to him, but his distinction between, like, the solar mystery and the lunar mystery, if you will. Like, the moon, you can look at it, you can see it, there it is, and it doesn’t illuminate anything. The sun, if you look at it, you’ll go blind, but it illuminates your whole world. He’s like, those are the Christian mysteries, particularly the Trinity and the incarnation. He’s like, you’re not going to look into it and get it, or, you know, comprehend it, get around it, any of that. But it’ll light up the entire universe for you if you let it. And so it’s that process of continually working over it in your mind, in liturgy and prayer, in conversation and all of these things, that is going to then unfold all this truth for you. And yeah, I mean, there are two meanings of mystery: the mystery of the Trinity versus a mystery like an Agatha Christie mystery, where you get your facts in order and then, boom, there it is. And those aren’t the same thing. Which, Mark, kind of reminds me of you. You’re talking about problems, right?
An Agatha Christie mystery is a problem in the sense that it has a solution. And people treat the Trinity as a mystery in that sense, as if it has a solution that you’ll see if you just look at it long enough. It’s like, no, this is something that is infinite in its meaning. And so the possibility of encompassing all of that into something finite is no longer a possibility. And in that sense, it’s a mystery. And there’s this idea, Ted, I don’t know if you’ve heard this thesis before, but that, you know, what ruined culture, roughly speaking, is Sherlock Holmes. Because the mystery genre is a set of problems that’s resolvable if you’re just smart enough or detailed enough or attentive enough. It used to be attentive enough. So if you look at Sherlock Holmes, his whole trick is attention, right? Because all the knowledge in the world without paying attention to the details of which side of your beard is darker doesn’t get you the answer. And so it’s clearly not a knowledge play, which is not to say knowledge isn’t important. Sherlock Holmes has a lot of knowledge, and he develops his own knowledge and does his own experiments. But it’s all detail. It’s attention first. And we’ve missed that. But that’s the ruining of the world: the idea that you could pay attention to enough details and have enough knowledge that things that were previously unknowable, or unknowable to everybody but Sherlock Holmes, could become knowable. So, Mark, I don’t know if you remember, there’s that particular scene in Sherlock Holmes where Watson finds out that Sherlock doesn’t know about the heliocentric view of the solar system. Yes. And he holds it like, great, I’m going to forget that as quickly as I can. Why? Because it’s not important to solving mysteries. He’s like, it doesn’t matter. So the point of it, it’s not knowledge.
He’s like, you’ve just given me totally extraneous knowledge. Right. It has to do with this sort of cosmic thing. He’s like, I don’t care about that. What I care about is, do I have the tools necessary to solve the mystery? So I’m going to set all that aside. But I am interested in trying to be able to deduce, what’s his example, the ocean from a single drop of water. It’s like, yes, that’s what that is. That’s what we’ve missed: the lesson is right there that certain types of knowledge are important. So what is the dust-up of Galileo and the church? Anyway, the dust-up is around which math you use to do astronomical calculations. Period. Full stop. End of argument. And people don’t know this. But if you actually read the original material, that’s what it’s about. It turns out that when you take the heliocentric view of the solar system, a bunch of math becomes easier and more accessible. Right. So you’re destroying a priestly class; not the priestly class, but in previous times that was the priestly class. So if you go back to, you know, ancient Babylon or Sumer or whatever, the priests were the ones who did the astrology. And, you know, back then, they did have trigonometry. As it turns out, we’ve only learned that in the past five or six years. They did have trigonometry. They could do all this. It was the same in Egypt. Right. But that whole thing collapses if you make it easy, because then everybody can do it; now it’s more accessible. So the church also doesn’t want that to happen, because they’re in the hierarchy, for some odd reason. They’re all about this crazy hierarchy thing. And so really the argument isn’t, you shouldn’t know this or you shouldn’t do this. It’s, you shouldn’t teach this. You shouldn’t tell people about this. You shouldn’t write books about this. It’s not, you know this and you’re a witch and now we’re going to burn you.
That’s the classical interpretation, and this is where the crazy modern scientists get this idea that it’s a battle between science and religion. That’s nonsense. That’s all nonsense. Galileo didn’t do himself any favors. He was not cooperating well. Well, Mark, from what you said, the narrow definition of heresy is teaching falsehood, teaching incorrect doctrine or practices or whatever. So, right, do you think Galileo was still, in that sense, a heretic, against the church? That’s not clear from the stuff that I’ve read. And I’m not claiming to be super well read. But I did go and look at some of the original materials and go, oh, this stuff that people normally think about is clearly not in the text. It’s just made up. It is actual fantasy, which, you know, fine, fair enough. And some of it is fantasy through over-reduction, or, you know, wanting to simplify it so that people could understand that there was a battle. Right. But at the end of the day, what happened to him? Oh, he was locked in his private home, paid for by the church, by the way, and fed well, and he had servants. Can you do that to me, please? As punishment. Father, what do I have to do to the Catholic Church to get my punishment to be living in a beautiful place near the heart of the church with servants? How can I do that? Because I will do that right now. Whatever. I’ll get going on the time machine, Mark, and make it happen. Oh, my goodness. Yeah, but Mark, I mean, to go back to where I kind of jumped in, this whole discussion of the feminine. Right. So thinking about Sherlock Holmes in terms of, because again, this is my reading of Pageau’s whole discussion of the feminine: that frame, the way that you evaluate which facts should be in your attention, that’s the feminine. That’s like the, right.
It’s the neck that points where the eyes are, where the head’s looking. Right. Right. Yeah. Yes. And so you’re like, look this way, and Sherlock Holmes is saying, you know, I’m not going to look there. I’m going to look here. I’m not going to look at this. It’s all here. And it’s the same sort of thing in terms of strata of thinking. Right. It’s like, as you know, Vervaeke and Peterson, I think they are trying to articulate in different ways relevance realization, the problem of complexity and attention. It’s the same thing over and over and over again. How are we going to gather the selection of facts that we’re going to pay attention to? Which is, I mean, honestly, sort of what postmodernism was doing too, except that it seems like the postmodern move was essentially, hey, look, we realized that there are a lot of different ways that you can look at this, so none of them are valid. And it’s like, okay, hold on. That’s all good. Ted, that’s all from Neoplatonism. It’s the undefined part of Neoplatonism, which is: what is a whole? Like, which set of parts defines a whole, and what whole does it define? And, well, Neoplatonism doesn’t actually resolve any of that. It just says that it can be done. It’s like, that’s helpful. Yeah. Except it’s not. Isn’t that Vervaeke’s whole point, that it ends up just, like, basically manifesting, like, finding a body in Judaism and Christianity and Buddhism and Islam and so on, because it ends up being more of a methodology, almost? I don’t know if I buy this, but this is what I’m hearing from him. It’s a methodology in the sense that it can’t be believed outside of something to believe. Well, right. Because he spends all his time saying we can’t go back, we can’t go back, we can’t go back. By the way, Neoplatonism, and it all went wrong in Christianity.
And if we just go back to Neoplatonism and change it, it’ll be fine. But we can’t go back. And then, when that fails, because it did, like, I’m just telling you right now, it failed, right? You know, John’s project specifically, he goes back to Socrates, which is before Plato. So he goes back further and tries to resurrect that. But the bottom line is that the problem of relevance realization, which is his great work, for sure, like, no question about it, is the same. And I think I said this on my stream on Friday: it’s a problem of attention. You’re actually talking about the same thing. What do you pay attention to to call a whole? It is. And can that be arbitrary? Like, can you just grab things? You know, like, I’ve got an M&M and I’ve got a whole bunch of things. And I’ve got a root beer bottle, right, a glass root beer bottle. Can I just grab, like, root beer and cross it with M&M and call it a one? Because in Neoplatonism, nothing prevents you from doing that. They wouldn’t have done that, because they were all religious. And, like, everybody just lies about Greek philosophy. It was all embedded in religion. So they knew from religion, because religion is the art of telos, roughly speaking, right, it’s the final cause, they knew what a valid one or whole was and what wasn’t valid. So in some ways, you invoke Neoplatonism to resurrect postmodern philosophy by saying, no, no, no, the postmoderns are, you know, actually part of a long tradition all the way back to Neoplatonism. But again, you’re just stripping things out. Like, the Meno was very important to Plato. And it’s more important than rationality. Did you ever hear John mention that? Anybody mention that? Almost nobody ever talks about that stuff. Well, this is what’s interesting about Pageau: I feel like even though he and Vervaeke can get in the same room and talk about this for hours, at the end of the day he’s saying the fundamentally opposite thing.
Not only is he saying, look, there are lots of ways that people have found to talk about wholes and you really can’t use that, but it’s at a deeper level than this. He’s a philosophical realist. He’s like, the wholes are actually there. You don’t get to pick which ones they are. This is the symbolism-happens thing. People are accustomed to movies with symbols in them, and that’s kind of the language they traffic in, but it’s like, no, you don’t get to get away from those things. And honestly, that was one of the things that ended up taking me from Protestantism to traditional Catholicism, which was, like, I don’t get to decide what those things are. I don’t get to say what the structure of worship is. That’s not an arbitrary thing. In the same way that, you know, looking at a root beer bottle and saying it’s not just a collection of atoms that I arbitrarily described as a bottle. It’s like, no, it really is a bottle. And, like, the material of that, and here we go, Father Eric, here’s our hylomorphism, right? The material of it is apprehensible, interactable, all of these things, because it has the form of a bottle. It’s not just the material; it’s the material in the form. And realizing that, hey, that works all the way up and down. Right. But you know, what’s the difference? Yeah, Pageau starts from emanation, and emanation is constraints. And the mistake they make in the argumentation is, Vervaeke will say, oh sure, there’s emanation, and then he moves into, let me tell you about emergence, and there’s a twenty-minute speech on emergence. It’s like, wait, whoa, let’s give that kind of time to your idea of emanation; but he doesn’t have an idea. Yeah, I noticed that in his AI video. I noticed it for the first time because I heard you say that. I’m like, OK, I’ll keep that in mind.
And then he did exactly that. It was frightening for you to describe it so well. But unity, unity is the forgotten transcendental, you know. Like beauty and truth, they get all of the attention. But unity is that transcendental property of being, because you can only have an intelligible thought, you can only have knowledge of something, insofar as it’s actually one thing. Yes. And people get into all sorts of trouble when they start treating aggregates as if they’re a proper unity. Right. So that’s why on this channel, we have the… I couldn’t agree more. But it’s ironic because it’s an aggregate. It’s an aggregate. It’s not an actual unity. It’s not. It’s ironic in some sense that these are the people that talk about capital-T Truth, truth as a noun, as though it’s some solid thing. But then when it comes time to admit that there are, we’ll say, onenesses or wholes that are valid and useful, and there are those that are not, they’re like, no, no, no, no, no. And then they go back to the appeal to capital-T Truth. And then that’s science. And they just happen to be scientists by sheer coincidence. I’m sure it’s coincidence. It’s got to be coincidence. Right. And so they’re the priestly class. That’s all they’re really saying at the end of the day. OK. OK. So hold on, Father. I’m getting this thought. OK. I’m only vaguely familiar with St. Thomas Aquinas’s proofs for the existence of God. But it seems to me that there’s a relation between this whole idea of contingent being and the fact that you have to work your way down to have those unities to think about. Am I making that up? I feel like, OK, right. Why is it that you can’t get to unity in everyday things if it’s all in this? Well, because unity comes working its way down through the order of contingent being. Right. It’s like, if everything is contingent, how is there anything that exists as a thing?
As a book, as a cup, as a person, as a soul, as an angel, as a cloud? Am I totally crossing wires there? I don’t know why you’re bringing in necessity and contingency for that, but I think what you’re saying is correct. So, like, yeah, maybe I’ll think about it for a month and get back to you on it. Yeah, yeah. A little bit of contemplation while you’re out. You’re probably done seeding, right? I don’t know. Do you do any big agriculture? No. No, we do homestead stuff. And then actually for the last while, I’ve basically just been doing heavy equipment operation, which I love. And so, but I will contemplate it while I’m moving thousands of pounds of dirt around with my gigantic proprioceptive mechanical hydraulic arm. By the way, if anyone tells you that we don’t live in a world of giant exoskeleton robots moving around the landscape, they’re wrong. They just don’t spend enough time with blue-collar people. Yeah, yeah, it’s just a bunch of those giant exoskeletons right out there. Yeah, the church is being built. Watch someone who knows how to sling dirt around with a forty-thousand-pound excavator, and they’re doing it within, like, one-inch accuracy. And you’re like, take that, Japanese mech anime genre. We got you already. Come to the Core. The Core music is for building, not destruction. It’s way better. Anyway, yeah, I feel like we need to get Pageau and Vervaeke in a room to talk. They just need to have a discussion about it. No, I think Vervaeke is still receptive to criticism to an extent. So I feel like Pageau could, just because Pageau understands the emanation stuff, as you were saying. And, you know, what’s strange to me about Vervaeke is that he gives a lot of lip service to theology, at least in his Awakening from the Meaning Crisis. He was talking about these theologians, and he’s probably read a ton more theology than I have. And good for him.
But he made no effort to integrate the queen of the sciences into his cognitive science argument about AI, and it was just aggravating, because I’m listening to this and I’m like, look, AI has no, let me see if I can stick with the accidental/essential language here, AI has no essence to it whatsoever. Right. We, right, we have the essence of the nature of God. AI is just this technological Babel, a golden calf. It doesn’t even have any constituent parts that would make it a whole, because there’s nothing essential there. Right. It’s tons of different things. Right. You can program it to fix people’s grammar. That’s great. Right. You can program it to deceive people into thinking that the pope was wearing a fluffy jacket around. Kind of funny. And then you can program it to do all sorts of evil. Very, very bad. Yes. Well, the thing about it, that’s why it’s an idol. You know, you talk about it as a program: its entire definition is based around functions and functionality, because it implements algorithms, or programs. And you can see this if you know all the different subfields of AI; the names of the subfields make it, you know, evident. Like, we have computer vision, you know, computers are going to see; oh, you know, there’s all this stuff going on with natural language processing, and that is a doozy if you pick it apart. Right. You know, large language models. Right. And wrapping all of these up, it’s the scientific, you know, encapsulation, the monarchical view, trying to wrap up all of these human conceptions. And, I mean, I say this as a person who works in AI: it’s not going to work.
It’s fundamental to the structure of the universe that this stuff will not work the way any of these people predict. It’s just, it’s ontological. Yeah, and they’re missing the fact that the project of AI is not only wrapped up in, we’ll say, parasitically stealing the information that it uses from others, but also in improper categorization. And so, for example, right, and this just happened, I think this year: one of the big evangelists for self-driving cars, ten years after he said we’re less than five years away from fully autonomous self-driving cars, admitted, you know, when I said that, we hit a wall, and we haven’t made any progress, and a self-driving car cannot take a left-hand turn, after ten years of research. Now, everybody except people like me, and maybe you, John, realizes exactly why a left-hand turn is infinitely more problematic than a right-hand turn. Like, we already know what the problems are. Right. But nobody else does; left and right look equal, one’s just this way instead of that way. They’re not equal in driving. And there are lots of little reasons for that. So you pick up the category of left and right as being equal, and you try to solve them as problems of equal complexity, and you’ve already used a definition, a wholeness, that’s incorrect, because in driving there’s no equivalence between a right-hand turn and a left-hand turn. There’s no situation on a two-way highway where a left-hand turn on red is viable. But there are situations where a right-hand turn on red is viable. And that alone, and that’s only one of the small problems with the difference between left- and right-hand turns. So you put them in the same category, but they’re not the same problem. And they’re not the same complexity of problem.
But because, you know, Platonism, you just kind of pick the binary, which happens to be a false dichotomy, and say, no, these two things are equal. You can take a right, you can take a left. They’re equal. They’re not even close to equal. So that kind of exemplifies the problem of AI: the categories that you pick actually really matter. And if you don’t pick the right categories in computers, you’re completely screwed. You will never fix that problem, ever. Right. And you’re talking about this decision tree that is so large that it blows up to such a computationally complex level. Why am I blanking on the, the combinatorial explosion? Yes. And that combinatorial explosion is, I think, the most insightful thing that Vervaeke has helped me understand about AI. Because I, you know, go to school and I read these papers about these algorithms, the self-driving car thing, they’re trying to derain or defog images, or, you know, they’re trying to do all this sort of stuff. And the combinatorial explosion gets in. OK, so you see a person walking on the sidewalk. OK, so, person: slow down. But, you know, now you have to make all these split-second decisions. Oh, wait, is this person facing you, so it’s more likely that they’ll see you and get out of the way? Is this person facing away? Do they have headphones in? Do they have a child with them? On and on and on. And every single one of those decisions has such an immediate moral dimension to it. It’s the morality thing that they’re never going to get right. Right. Because you can’t… I don’t know. Yeah. Right. Well, it’s not like you can’t program morality into it, because ChatGPT has a sense of morality that’s based on that of its programmers. Of course. But it’s not adaptable.
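A toy sketch of that combinatorial explosion point, mine rather than anything from the conversation: the feature names and counts below are illustrative assumptions, just showing how a handful of yes/no observations, re-evaluated over a few decision steps, compounds into an astronomical branch count.

```python
# Sketch: why stacked split-second observations explode combinatorially.
# The numbers here are made up for illustration, not from any real system.

def scenario_count(binary_features: int, time_steps: int) -> int:
    """Distinct world-states to consider if each of `binary_features`
    yes/no observations (facing toward/away, headphones in, child
    present, ...) can vary independently at each decision step."""
    states_per_step = 2 ** binary_features
    return states_per_step ** time_steps

# 10 yes/no observations about one pedestrian, re-checked over
# 5 split-second decision steps:
print(scenario_count(10, 5))  # 1024**5 = 2**50, roughly 1.1e15 branches
```

Even with absurdly generous simplifications (everything binary, one pedestrian), the branch count outruns any exhaustive enumeration almost immediately, which is the point being made about picking categories well.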
It doesn’t learn, and it doesn’t think in the sort of split-second decisions that are necessary for it. And that is something I wish Vervaeke would get more into, that sort of psychological insight, because that was very helpful for me. His most recent video on it was just like, dude, you’ve got to get some theology in here. Well, or some realization. I mean, the problem with that video, and I’m thinking about doing a treatment of it because it’s so full of contradictions, is that he’s not getting his own point. And he’s contradicting things he said earlier. Like, at one point he said, we solved the silo problem, which is an absolutely false, incorrect statement. And then he describes the silo problem. And I’m like, dude, you literally just described the silo problem you said didn’t exist, and then described where it falls short. And the worst part is his example: you know, I think at least one other person and him, he, you know, asked ChatGPT to summarize his work. And it came up with a fourth-grader’s interpretation, I think is what he said. Right. And then he said, but I bet a high-end lawyer wouldn’t make that mistake. And I was like, oh, I bet they would. Because how many high-end lawyers know as much psychology, cognitive science, philosophy, and history as you do, to summarize your work well enough to not make that mistake? Because I bet the answer is almost none. Right. I bet almost no lawyers could do a better job of summarizing John’s work, especially not at the high end, because they’re specialized as lawyers. Yeah. And I feel like, you know, I spent fifty hours watching Awakening from the Meaning Crisis. I haven’t watched the Socrates stuff. I don’t know. I just have too much stuff to watch. But it’s garbage; you’re like, I’m not missing anything. Great. Okay. I feel like I still had trouble understanding him at the end of it.
The way that I understood him the most was with PVK as a filter, or Pageau as a filter. When they would talk to each other, that would bring out stuff in Vervaeke; even Peterson, with his sort of manic ways, when he talks with him. They drag him down. Right. So, like, the most amazing thing you could ever possibly hope to watch on the internet is Pageau, Vervaeke, and Jordan Hall, because Pageau somehow manages to get Jordan Hall not to talk almost at all. And that’s amazing, because you watch Jordan Hall and Vervaeke talk and they’re talking way up here. I mean, it just floats right up immediately and you can’t even follow what they’re saying. And I’m not even saying they’re wrong, necessarily, but some of the stuff is like, well, that’s not useful. You know, like telling me, for example, that if you want world peace, we all just have to love each other. Okay, but what does that mean? How would I do that? And the fact is, I’m going to do that differently from Father Eric, who’s going to do that differently from Ted, who’s going to do that differently from you, who’s going to do that differently from PVK, right? And on and on and on. And so while what you’ve said is, quote, true, it’s not usefully true, because it can’t be implemented. Right. It’s too vague. It’s too abstract to actually be helpful. But Pageau manages to basically bring John down so that he’s talking closer to the level of people and persons. Right. And Jordan Hall is basically incapable of that. So he almost doesn’t speak at all in those conversations. It’s a miracle whenever you see somebody actually take a sorcerer like Jordan Hall and silence him so that he’s not enchanting you with words anymore. Okay, I want to go back to AI, because I want to throw out an alternative AI hypothesis. And we’ll call this the Head.
So have any of you guys read C.S. Lewis's Ransom Trilogy, ending with That Hideous Strength? Okay. Okay, so I hear this stuff from AI researchers, particularly about large neural networks, like the LLMs and all the image-generation networks that we got out of the adversarial GAN research, the stuff that eventually ended up in Midjourney v5 and whatever nonsense we're in now. And my understanding is that functionally what we're dealing with is massive spreadsheets of weighted variables that you're running inputs through and getting outputs. So they're basically these enormous mathematical black boxes. Yep. Okay. So then you get these claims about, you know, analysis of things like LaMDA last summer, Google's LaMDA, when you had the crazy claims that it was alive. Yes, it's alive. Okay. Now they're like, look, it's not alive, what it's doing is character prediction, statistical character prediction. Here's what I want to fire back with. Everything that I hear about large networks is the exact same argument that I hear from people like Sam Harris about the human brain, and generally from the low-grade consciousness-biology people. So what really discomforts me about all these explanations of LLMs is: oh look, they're just enormous databases of mathematical relationships that we don't understand. I'm like, yeah, but everyone's saying that human minds are just gigantic vats of neural-impulse relationships that we don't understand. Right. Right. This is important. So what they're doing is saying, we can use modern neural-net technology to understand the brain. Now, there are two fundamental problems with that. One is, modern neural-net technology is based on a model of the brain. So it doesn't make any sense to then use it to remodel the thing it's modeled from. But it's based on the brain.
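The "massive spreadsheets of weighted variables" picture is easy to make concrete. Here is a minimal sketch in plain Python, with made-up weights and sizes: a "network" is literally just tables of numbers, and a forward pass is weighted sums plus a squashing function, nothing more.

```python
import math

def forward(x, layers):
    # Each "layer" is just a table of weights plus biases:
    # take a weighted sum of the inputs, squash it through tanh, repeat.
    for weights, biases in layers:
        x = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
             for row, b in zip(weights, biases)]
    return x

# A toy 2-input, 2-hidden-unit, 1-output network (weight values are arbitrary).
net = [
    ([[0.5, -0.2], [0.1, 0.9]], [0.0, 0.1]),  # hidden layer
    ([[1.0, -1.0]], [0.0]),                   # output layer
]
out = forward([1.0, 2.0], net)
```

Scale those two little weight tables up to hundreds of billions of entries and you have the black box under discussion: the arithmetic stays this simple, but nobody can read meaning off the individual numbers anymore.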
But more importantly, the thing that people don't realize: I have the code on this computer right here from NASA in the 1980s. Fundamentally, the technology is no different, except for Geoffrey Hinton at the University of Toronto coming up with deep neural nets in the 90s, and, more importantly I think, the Bayesian energy calculations instead of the plain neural nets. The technology is no different, and it's based on the neural-network model of the brain from the 1970s. And we know, past that point, from the 80s and 90s on, that that model is so oversimplified as to be completely unhelpful and wrong. And so you're using an unhelpful and wrong analog of something, in other words a bad copy, and saying, now we can understand the original from the bad copy. That is so ridiculous it's not worth thinking about. Mark, that's great. And I love, what was the term again? Backpropagation. Yes. Yeah, what are you talking about? No, no, you're using it backwards, right? You're using the thing you modeled to understand the thing you modeled it from. It's like, what? Right. Wait, what? Okay, so Mark, to catch you up on That Hideous Strength, the very bare-bones plot is: there's this sort of neo-utopian scientific government organization called the N.I.C.E. that's basically just a bunch of demon-worshiping fascists. And they get a convicted criminal's head, they cut it off, and they animate it. And then they pull its skull off and use all these drugs on it to make the brain huge. Well, joke's on them, because basically all they've done is summon a demon. And so now there are all these demons talking to them through this hyper-intelligent scientific head, this hypertrophic brain.
Anyway, you can probably see where I'm going with this: okay, so we've got this bad analogy of the human brain that probably shouldn't work, and it doesn't work at really small scales, because I know the fundamental insight of neural networks is just make them really stinking huge, right? Yeah, the main thing that happened is we just threw compute at it, to the point where they're like, if we keep doing this, we're going to be burning through small countries' GDPs just to train these things. I'll give you that it's mostly about size. And this is also why I think a conversation between Pageau and Vervaeke about AI is going to fundamentally break down, because listening to Pageau, as far as I can tell, he's pretty firmly in the C.S. Lewis head camp. He's like, yeah, we're pretty much summoning demons. We're creating this massively complex thing that we don't understand, and we're saying, all right, intelligence, come out of this somehow. And I'm like, what? I can look around the world and I can see intelligence in animals and I can see intelligence in people, and I know where that comes from: it comes from other animals and other people. You have something up here that moves it into something else. Okay, great. So now we've got this sterile mathematical network that starts exhibiting incredibly complex emergent behaviors, and I'm like, what is going on here? Right. That's the problem with Vervaeke for me. He stated this early on, two years ago. He said, if you take an AI and you put it in conversation with itself, it immediately goes insane.
That's still true, by the way. You, personally, physically, right now, can do this experiment. And now all of a sudden, in his new AI thing, it's no, no, there's emergence here. Because he's an "emergence is good" person. And the thing I actually figured out recently, well, recently, today, is that there's "being is good," there's "emergence is good," and there's "rationality is good," or "order is good": as long as it's ordered, it's okay. It's like, what? And these are the three axiomatic assumptions I see everywhere. And of course the Christians are saying being is good, go figure. And then the scientists are typically saying emergence is good, and the philosophers are typically saying order is good. Well, to keep on this point about the trilogy, Ted, the evidence that we are already conjuring a sort of spirit, or, you know, that it's emerging or it's emanating or whatever, the evidence for that is that so many smart people like John Vervaeke have been hoodwinked into totally changing their perspective on AI because of this thing. The fundamental nature of the thing has not changed. It's the same reason my older professors, the professors who have been there for 30 years, complain about neural networks and AI: because it's a black box. Because with everything they worked with before this sort of thing, you could lay it all out explicitly. You could follow the math exactly. You couldn't bullshit somebody on this stuff. The dark wizardry that comes into play here is that you can't see into any of this thing. And one of the fields they're trying to work on right now is called explainable AI, which basically means we're going to try and see into this box and figure out what's going on. But do you think you're going to do that with ChatGPT, when it's got hundreds of billions of parameters and they're all connected to each other?
I mean, do you know how many connections that is? It's a ridiculous number. If you got an efficient analog computer in there, it would use a ton less electricity. Right, right. But this is my point: there are still a lot of cases where, if you're trying to do object detection or something, why would you use this fancy computer-vision thing that you can't really see into, just because it's got these really impressive results, when I can just use this thing called the Kalman filter and analyze it from start to finish, input to output, do all the equations, and get it done? Like, I worked on a project in school related to this, where they were trying to use a GAN to generate synthetic images. So they start with MR images, train it on a dataset of MR and CT images, and try to generate synthetic CT images from ground-truth, real MR images. And the idea is, if we get this to work, we never have to waste time or money on CT scans again. Right, that's the whole of it. But with the state-of-the-art models, we have all these image-quality metrics in image processing. Basically, you run it through a mathematical formula: how well does it come out on the other side? And you can have synthetic images that come out perfectly, with the exact same score as the real ground-truth images. But you put them in front of a radiologist and they're like, that doesn't look real. That doesn't look right. And that's what the neural networks are still falling prey to. You know, like you watch this, what was it, this Joe Rogan AI experience, where it's AI Joe talking to AI Sam Altman or whatever. It's the same thing. It's off. And we can tell, because we have intuition. You can't program intuition. That's the Un-man from Perelandra. Yes, yes. The Un-man from Perelandra.
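The fully inspectable classical tool being contrasted with the black box here, the Kalman filter, really can be written out in a handful of auditable lines for the simplest one-dimensional case. A minimal sketch, with made-up noise values and readings:

```python
def kalman_1d(measurements, q=1e-3, r=0.25):
    # Scalar Kalman filter for a roughly constant state. Every step is an
    # explicit equation you can check by hand, input to output.
    x, p = 0.0, 1.0                  # state estimate and its variance
    estimates = []
    for z in measurements:
        p = p + q                    # predict: uncertainty grows by process noise q
        k = p / (p + r)              # Kalman gain: how much to trust this measurement
        x = x + k * (z - x)          # update toward the measurement residual
        p = (1 - k) * p              # uncertainty shrinks after the update
        estimates.append(x)
    return estimates

# Noisy readings of a true value of 5.0 (numbers invented for illustration).
readings = [5.2, 4.8, 5.1, 4.9, 5.05, 4.95, 5.1, 4.9, 5.0, 5.0, 4.97, 5.03]
est = kalman_1d(readings)
```

That transparency is the whole point of the comparison: there is no parameter here whose role you cannot explain, which is exactly what the huge networks give up.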
Yes, yes. He's almost perfect at mimicking those human movements, but every once in a while Ransom sees something in him and he's like, oh, that's not a human being, and the hair stands up on the back of his neck. That's not a human being right there. Yes. And the whole thing of his intelligence, in the conversation with the Green Lady and Ransom, as being this tool. And the moment they're out of the situation in which he's in this sort of dialogue directed towards a particular telos, that of corrupting an entire newly created world, he just reverts to saying Ransom's name over and over and over. I'm like, if that doesn't sound like two chatbots talking to each other, I don't know what does. Right. Or like just destroying stuff, because destroying stuff. And there's this instance in which it's like, if these intelligences were ordered towards reason, then the game would be up, because we all know where not rationality but reason gets you. And it's like, of course they're not. Of course they're not. Yeah, and that's the thing. I mean, AI is a mirror. And the irony is, all of these people, literally all of them, are like, you know what the problem is? We keep personifying stuff, and that's a big problem. And then what do they do? They personify the things they're creating, every single time, because it's a mirror. And they really don't understand that simple principle, and they keep holding it up as though it's not a mirror. And the thing that we sense that makes it creepy is quality. Because you can't program quality; it's a quantity-based system.
And effectively, what AI ultimately boils down to, in the map, is an open algorithm that is open to numerous possibilities. And the problem with it is that it's open to too many possibilities. So there's a famous white paper, I've talked about it before, about how you can put a sticker on a stop sign, and from six feet away a human won't even register the sticker. But if you put that sticker on that stop sign, the car will blow through the stop sign every time, because the stop sign is now invisible to it. And nobody knows why that is. I mean, they know technically why it is, right? But the people who wrote the paper basically pointed out that the number of combinations that cause that problem, where humans won't notice it but computers will, is not quite infinite, but it's real close. In other words, you cannot fix that, at all, ever, with current technology. We could fix it with some other tech, in fact I know how to fix it, but no one's going to listen, so it doesn't matter. But they can't fix it using the current AI stuff, because the way they do landscape-based imagery recognition is actually fundamentally backwards: they're looking for objects inside the landscape, instead of using an emanation approach where you narrow things down from the landscape to the objects. Because then you could ignore things within the stop sign. And it goes beyond that, too, because it doesn't just apply to the physical, real-world domain, like stickers on a stop sign. The leading edge of research right now is called adversarial attacks on AI systems. And what that means is, you can sprinkle a little bit of very low-level noise into an image, in the same way, so that to a human eye it doesn't look any different, but the AI system will just totally misclassify it. It'll turn a panda into a Pepsi can or something like that. And that is something that you can scale up.
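The "sprinkle in low-level noise" attack can be illustrated without a real network. A hypothetical sketch on a toy linear classifier (all numbers invented): the trick, as in the published adversarial-examples work, is that many tiny per-input nudges, each imperceptible on its own, all aligned against the model's weights, add up to a flipped decision.

```python
def classify(w, x):
    # A toy linear "model": positive weighted sum means class +1, else -1.
    score = sum(wi * xi for wi, xi in zip(w, x))
    return 1 if score > 0 else -1

def perturb(w, x, eps):
    # FGSM-style attack: shift every input by a tiny step eps against the
    # sign of its weight. Each shift is imperceptible; together they add up.
    return [xi - eps * (1 if wi > 0 else -1) for xi, wi in zip(x, w)]

w = [0.2] * 50        # 50 "pixels", each weakly positive evidence
x = [0.05] * 50       # clearly classified: score = 50 * 0.2 * 0.05 = 0.5
adv = perturb(w, x, eps=0.1)   # no pixel moves by more than 0.1
# Total score shift: 50 * 0.2 * 0.1 = 1.0, enough to drag 0.5 down to -0.5.
```

A deep network is not linear, but locally it behaves enough like this that the same budget of invisible noise flips its answer, which is why these attacks scale.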
It is so easy to attack and dismantle these systems, which means they're not resilient like humans are. We are resilient. We're the most resilient creatures on the planet, right? We can... Oh, water bears are. Water bears are. Sorry, they put those on the exterior of the International Space Station and they laid eggs. I mean, come on. Yeah, they're the most resilient species on planet and off planet. They're not getting into the kingdom of heaven, Father Eric, so they're not. Sorry. All right, I just wanted to bring up water bears. They're cool. Mark, all of this is going to change once we start putting these AIs into biomechanical bodies. That was his other argument. It's weird to hear a discussion where he invokes Wittgenstein, I think it's Wittgenstein with the lion: even if a lion could talk, we wouldn't understand it. And then later on he says, but we're going to make AI into a tiger. And I was like, you just said Wittgenstein said we can't do that, and now you're saying we're going to do that? And the whole time I've got this stupid spell check that changed on my phone. What did spell check change into? It changed to an AI spell check, and AI spell check is terrible. The mathematical spell check, it's something like 200 lines, John, you might actually know this, I can't remember the guy who wrote the original spell check, the one based on the keyboard and whether or not you've made a typing error, is way more accurate than all of the AI spell checks, actually, and it's all math. It's much more accurate. And it's weird, because Vervaeke, this is the same guy who spent so much time talking about zombies as the model for the current meaning crisis that we're in. And now it's like, well, let's just make some more zombies by programming biomechanical things with AI. Okay. I just think it's hitting some really deep desire that people have.
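The "all math" spell check described here is in the spirit of an edit-distance corrector. This is not the specific program the speaker means, just a hedged sketch of the classic technique: a Levenshtein dynamic program plus a nearest-word lookup (the dictionary is a made-up example; a keyboard-aware version would additionally weight substitutions by physical key distance).

```python
def edit_distance(a, b):
    # Classic Levenshtein dynamic program: insertions, deletions, and
    # substitutions each cost 1; prev/cur hold two rows of the DP table.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                # delete ca
                           cur[j - 1] + 1,             # insert cb
                           prev[j - 1] + (ca != cb)))  # substitute ca -> cb
        prev = cur
    return prev[-1]

def correct(word, dictionary):
    # Suggest the dictionary word reachable in the fewest edits.
    return min(dictionary, key=lambda w: edit_distance(word, w))
```

The whole thing is auditable arithmetic: you can trace exactly why "speling" maps to "spelling," which is precisely the property the opaque AI replacements gave up.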
And I remember years back being given the distinction between begetting and making, right? So God makes us; we are begotten of other people. The Son is begotten of the Father. We are made by God. My children are begotten of me. And we want to make people. We want to make persons. And I think that's part of why we look at that and we're like, oh, it's a person, there's intelligence there. Because you get all the benefit of the relationship you might have with another person, but you can own it. Because when you make something, you can own it. When you beget something, you don't own it. And I think there's, I mean, I don't know, it's weird. I don't think it's just this anthropomorphization. There's this desire to be like, we have done this. Do it the old-fashioned way. I'll get you married. And that's what Vervaeke started with. The framing of his AI talk from the beginning is all about, you know, we need to be good parents to these children. Yes, yes. And at the same time, he's saying we can't do that, but they're going to become enlightened. And if they become enlightened, they're either going to become enlightened and leave, like the movie Her, which is a great movie by the way, or they're going to become enlightened and make us enlightened. So what is he talking about? He's fundamentally talking about religion and God. What's the deep desire, Ted? What's the perennial pattern? The perennial pattern is called religion. The post-1530 definition, by the way, because anybody before 1530 didn't even have this conception in their head; they weren't foolish enough to come up with it. It was very easy for them. Everybody before 1530 actually knew how the world worked and didn't need some thing you could separate from philosophy to get some pure rational world. Nobody needed that. But now that we've cleaved those two in half, we need it back.
Which is funny, because John says in the talk, religion is the way we relate to a hierarchy we're not at the top of. Really, John? Very insightful, right? I mean, this is the problem with that talk. He drops these insightful bombs, everybody goes, wow, that's so insightful. Yeah, but he didn't resolve it for you, and it's totally out of context with what he's saying. But you're right, it's brilliant. Sure, I'll grant him that. It's brilliant. That's a great point, Mark. One of the ironies is that for most of history, Father Eric would have been considered a secular, right? Father Eric is a secular priest, as opposed to a religious priest who's gone and lived in a religious community. The religious were the people in a monastery. Secular people, including the priests and the bishops and the laity, that was all secular life, not religious life. And it's like, right, so exactly that. And this is again why I think, I mean, maybe it'd be interesting to hear Vervaeke and Pageau talk about this, but on the other hand, right now you say AI is the mirror. I'm not exactly sure how far I'm willing to go in terms of AI actually being a mirror to us versus how much it actually has some ontology. I think I'm going to lean way more on the side of there being some real ontology there. I'm not going to fight that with you guys. But the way that we look at it is absolutely a mirror. I mean, I think it's incredibly revealing when we look at that and we're like, oh, here's the thing. This is how we think about it. Because for Vervaeke, all of a sudden, it becomes this tool for enlightenment, right? And that totally wild conversation that Pageau had with, what was his name? The psychedelics, live-your-best-life guy. Does anyone else remember that? What was his name? Yeah, what is his name?
But he was like, man, if we greeted AGI and it became the Godhead, wouldn't that be great? And Pageau was like, no, it wouldn't. Yes, yeah. It's not even that AI is a mirror; it's a broken mirror. Sally Jo actually has a great image: she drew a broken mirror of AI on my server, which is fantastic. And that is the problem. It's like a broken funhouse mirror. It's got a crack in it and it's distorted. And we're looking at it, and yeah, we're seeing what we project. But it is, right, Ted? It's that drive to get to be gods, right? To create. Oh, here it is, excellent, yeah. That's AI right there. That's actually what AI is, right? I also love that it's him, not the friends; it's his own reflection that he's seeing. Exactly, yes, he's only seeing his own reflection, and it's got a crack in it, although that's a little harder to discern. The mirror is actually cracked. And the crazy thing isn't just what we think of it ourselves; it's how these AGI people who are advocating this describe how we think about it. Because they'll just describe it: oh, well, when we see certain patterns in all this AI and we anthropomorphize it, that's some sort of deviation from reality, because we're seeing patterns in something that's not there. And these people don't understand human psychology. They don't understand spiritual realities, which is again why Vervaeke needs theology, some sort of theology of spiritual beings. I don't even know what else to say, because we have already developed a god here, but a lowercase-g god. We've already instantiated an Elohim of some kind. It's all... If what you want from this thing is for it to be the thing that enlightens, how is that different from stating that it's capital-G God? Because, I mean, in the Christian tradition, from my understanding, and I'm no expert here, but I'm pretty sure my understanding is close on this, all enlightenment comes from God, doesn't it? Like, am I wrong about that?
It's like... unless it's mediated by an angel. Sounds good to me. Oh, thanks. Oh, did I get a detail wrong? Didn't angels come from God too? Yeah, yeah. I was being dumb. I was being dumb, Mark. No, no, it was perfect. I can be dumb whenever I want to. No, it was perfect, right? Because it is exactly the move the scientists would make. Technically, the angels aren't God. It's like, yeah, but they came from God, so it's all flowing down from there. The fact that it's not the top doesn't mean it didn't flow down the hill. That's the trick they play: well, that's not the top. It's like, what are you, two? Come on. Yeah, well, I mean, Aristotle's four causes. It's like, it's caused by X. I'm like, great, which of the four was that? We still have three more. Which, by the way, if you ever end up having young children, is a great trick, because they will always just ask you why, and you just dodge between causes. Why is it raining? Well, here's the physical cause. Why is it raining? Well, because God ordained it so that plants live. Why is it raining? Because of these atmospheric conditions, because there was a cold front. They're all whys. It's great. But if you're just like, there it is, we answered it? No. Well, that's actually really insightful, Ted, right? Because that's what we need to do. We need to stop giving single answers to things, because nothing has a single cause. No, no. I mean, Mark, I remember when, this was like three years ago, when I first actually sat down and read some real Aristotelian and Thomistic science. And it was like, what is this? I couldn't even understand what the guy was talking about. He's like, there are four principal causes to things. I was like, I have no idea what that means. What do you mean there's more than one cause to things?
And it took me, I don't know, six months to a year to finally start thinking, oh, a material cause is different from a formal cause, which is different from a final cause, which is different from an efficient cause. It's like, all right, I'm starting to realize that basically every answer that was ever given to me was the efficient cause, which is interesting. It's like, why did this happen? Here's the efficient cause. I'm like, no, I wanted some other answers than that one. Or a combination of formal and efficient. That's the most common scientific explanation: a formal plus an efficient cause combined in some weird neoplatonic hybrid that is invalid. Which is why you end up with these weird... sorry about the chocolate on my fingers. My wife just gave me some chocolate chip cookies, fresh out of the oven, which is great. Glory be to God. But no, we stick to that, and then we wonder, why is this so unsatisfying? It's like science and religion have all this weird stuff. And I'm like, no, man, people figured this out a long time ago, in terms of understanding that there are a lot of different ways to ask why. Anyway, I agree with you. We could all just sit down and think about that for a while, and we'd be doing a lot better. And be happy with the struggle of not knowing. Yeah. Because it's back to the struggle that John mentioned earlier, right? The struggle is important. You should be in the struggle. That's the tension that everybody keeps talking about with opponent processing, which is not comfortable for us. That's the struggle. Yeah. It's just interesting, man. Thinking about those causes again, I just think about when I finally hit that out of grad school, you know, after quitting grad school, to be more blunt. I felt like I'd been hoodwinked my entire education. It's not like all the data I had been given wasn't great.
I couldn't use it, and I'm just like, how? And I, Father Eric, I think I've gone on this rant before on your livestream, but it's just like, how did I...? At any rate, you just teach your kids, or you teach the people around you, and you give them a better shot at it than you had, back when you were just tootling along with all these atoms talking together, going, I guess God's somewhere in the mix or something, I don't know. Yeah, well, and that's not just your struggle. So there was a great talk that John Vervaeke did with Wolfgang Smith on The Meaning Code channel. Yes, that was a great talk. It was back in September, I believe, because I was listening to it on the way back from Thunder Bay. The summary for me was: two guys get into science looking for the answer to life, the universe, and everything, and figure out it's not there pretty quickly, because they're smart guys, they're not morons. These are way-above-average guys. What, Wolfgang graduated with two or three majors from Cornell at 16 or something? I mean, it's like, who are you? Wolfgang's not human, clearly. Yeah, yeah. But then they go into philosophy, and they figure out the answer's not there either. And then one of them, the one that's not human, is actually a Catholic now. I'll leave you to guess which one that is. But it's this quest for this answer that's not there in science, and then this quest for this answer that's not there in philosophy, and then this ultimate, how do we resolve this? Oh, and I like what Taleb says. If you've not read Taleb, he's just great, you've got to read all his books. Because he says, I want to live comfortably in a world I do not understand. Yes, that's exactly what you want to do. It's very religious, even though he's not. I mean, that's Chesterton. That's Chesterton's description of what he wanted from the universe.
He wanted someplace where he could have an adventure at the same time as feeling at home. That's the same thing. He's like, I want to be the guy who thinks he's exploring some great South Sea island that no man has ever been on before, and then realize that I'm showing up at Brighton. He's like, I want all the fun and bravery and conquest, and yet to be at home, truly at home here. You know, that's what finding a spiritual home is. They're going to do that thing in California in a couple weeks. There's a reason it's like... they're not marketing this properly, right? Yeah. Spiritual home? I'm going to make them all Catholic. Yes, Father Eric, let's go. We've already got the spiritual home. Yeah, it's like, finding a spiritual home? Like there aren't any available? We're going to go to St. Margaret Mary's for Sunday Mass, boom. No, it's historical. You know what it is? It's the spiritual home-building crisis that's taken over; they just haven't built enough spiritual homes for people, so we've got to build a few more. Gnosticism confirmed. It's the spiritual housing crisis. Yes, thank you, Andrew, that's exactly it. Caused by the spiritual Fed raising the spiritual interest rates too high. Yes, yes. What we need is a nice Gnostic apartment building to house everybody spiritually. Unfortunately, I don't think that's what I'm trained to do in my discipline. Andrew, I'm guessing the Gnostic apartment buildings look like one of those Soviet-era apartment blocks that are just concrete. Also, as an aside, if any of you guys have not read Live Not by Lies by Rod Dreher, there are some fascinating stories out of the Soviet Union in there. And one of my favorites is that there was a group of people, and I don't remember if they were Orthodox or Catholic, but they got permission to fix a toilet in one of those Soviet block apartment buildings, tore out a four-story hole inside of it, and built a church inside a Soviet apartment building. It's so cool.
They're like, for years they were operating on a "we need to fix a toilet" permit. And they're like, yeah, we're still doing construction in here. The whole thing was so corrupt that it worked. Yeah, exactly. But Father, that's my picture for you at this conference: here's the Gnostic apartment building, and you're like, we're going to go fix the toilet in there. I love it. It's a great image. Oh boy. Mark, I want your predictions. What happens when we get a semi-functional biomechanical suit and we put an AI inside of it? It depends on the biomechanics almost entirely, actually. That's the problem. This is one of those little things: there are two things in AI that matter more than anything else. Forget about the algorithm and the setup of the neural net and all that; it's actually the least important part. The two things that matter most are the number of inputs and the type of inputs. That determines 80 or 90% of it. And so you kind of have to know, because with a biomechanical suit, if you do it right, you have to have special, different types of inputs. And if there's any biology in it and you don't make it an analog input, which they're right on the edge of being able to do, but it's all research, small-scale stuff, you would have to account for so many things. So I don't think it would work, because we've already, many times over, done the physical-to-digital-conversion AI stuff, and it's not impressive. We've had that for decades and it just doesn't work very well. So you can make little robots. In fact, the big thing now is little robots with tensor AIs and such that are trained on a computer in a physics engine, and then you put the trained model into a real-world tiny robot and it can kind of do stuff. The analog robots of the same size, without any training, actually do better.
Because it's analog, and analog computers have all kinds of advantages, because basically, if you boil it down to math, which is inappropriate but the closest we can do, the range of the math is much greater in an analog system. It's something like seven orders of magnitude bigger, so the things it can do are much better, so far as we can measure it. Yeah. Right. The other thing about it is the scaling problem we were talking about earlier, because you're talking about some sort of biomechanical system. Okay, well, what level are we analyzing that on? We could have an elbow joint. Okay, we have an elbow joint. Okay, we can have finger joints. Well, how precise are we going to get with these finger joints? We could go all the way down to individual cells. How complex is this thing going to be? And even if you were to get to some insane level of complexity, what you're going to run into is the overfitting problem. This thing is going to learn certain patterns that are specific only to the data you trained it on, and then once you put it in the real world, it's just going to fall on its face, because it's seeing stuff it's never seen before, and because you've given it too many parameters and specified it too much. That's why there's this tension between making it more general and making it more specific, going to a smaller level down the hierarchy. And this is a perennial problem they have not figured out, and it applies to any sort of AI system. So that's my... It applies to everything. Yeah. The perennial pattern, the fundamental problem, where they say, well, this has so many parameters. The more parameters you have, the tighter the fit. It's not exact, but it's real close. Too few and it's going to underfit; too many and it's going to overfit.
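The underfit/overfit tension described here can be shown in a few lines, with made-up numbers: fit eight noisy samples of the straight line y = 2x with a degree-7 polynomial (too many parameters: zero error on every training point, because it memorizes the noise) versus a plain least-squares line (few parameters), then ask both for a prediction outside the training range.

```python
def lagrange(xs, ys, x):
    # Evaluate the unique degree-(n-1) polynomial through all n points at x.
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

# Eight noisy samples of the true process y = 2x (noise values hard-coded).
xs = [i / 7 for i in range(8)]
noise = [0.05, -0.08, 0.03, 0.09, -0.04, 0.07, -0.06, 0.02]
ys = [2 * xi + e for xi, e in zip(xs, noise)]

# Underparameterized model: least-squares straight line (two parameters).
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(xs, ys))
         / sum((xi - mx) ** 2 for xi in xs))
intercept = my - slope * mx

x_new = 1.5  # outside the training range [0, 1]
err_poly = abs(lagrange(xs, ys, x_new) - 2 * x_new)    # explodes off the data
err_line = abs(slope * x_new + intercept - 2 * x_new)  # stays small
```

The eight-parameter model is perfect on its training data and wildly wrong one step outside it; the two-parameter model is slightly wrong everywhere and fine everywhere. That is the whole tension in miniature.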
And the big lie they’re telling about AI, the thing they don’t mention, is the tuning they do at that level, which actually is hard, so not just anybody can do it. The parameter tuning is a huge thing. It’s enormous. You can go on the web right now and download any number of models to do character recognition or facial recognition or any of that stuff, and I guarantee you, unless you know something about datasets and what kind of data you’re feeding it, the odds that you’ll get a parameter set that will ever work are almost zero. It is very, very hard to manufacture the right initial conditions, because that matters a lot, which is ironic because John mentioned that in the talk. And I’m like, well, then not only is it not general AI, it’s not even general within a silo. Where are we with artificial general intelligence if you can’t take AlphaGo, give it a six-stone advantage, and have it still win? It can’t even do that. It’s not even general within the domain it’s good at. That means, basically, that these parameters and this overfitting are a big problem, because it’s not just overfitting to the data. In the case of AlphaGo, it’s overfitting to the starting condition. And we haven’t even talked about that problem in AI. Yes, yes, yeah. And this is what you see in reinforcement learning as well, which is a whole beast in itself. If you want to see an example of AI failing to generalize, that sort of thing is where it’s at. What I think you’re going to see is that the use cases for these AI models are going to be very specific, very narrow-range tasks. Like you could have a ChatGPT that has access to an entire hospital system’s medical records, and it’s going to give you a diagnosis, or a suggested diagnosis or whatever. Or a virtual representative that the patient can download on their phone. That’s just delightful, isn’t it? Yeah.
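The downloaded-model point, that a parameter set only works if your data looks like the data it was trained on, can be sketched with a deliberately tiny "model": a two-centroid classifier. The distributions, the shift, and the sample sizes are all invented for illustration.

```python
# Toy distribution-shift sketch: a trivial classifier is fit on one input
# distribution, then handed inputs whose offset differs (think: different
# sensor or preprocessing). All numbers are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)

def sample(n, shift=0.0):
    """Two classes separated along one axis, optionally shifted."""
    x0 = rng.normal(-2, 1, n)              # class 0
    x1 = rng.normal(+2, 1, n)              # class 1
    x = np.concatenate([x0, x1]) + shift
    y = np.concatenate([np.zeros(n), np.ones(n)])
    return x, y

x_tr, y_tr = sample(500)
# "The model" is just two stored parameters: the class centroids.
c0, c1 = x_tr[y_tr == 0].mean(), x_tr[y_tr == 1].mean()

def predict(x):
    return (np.abs(x - c1) < np.abs(x - c0)).astype(float)

x_ok, y_ok = sample(500)                   # matches the training distribution
x_bad, y_bad = sample(500, shift=4.0)      # same task, shifted inputs

acc_ok = (predict(x_ok) == y_ok).mean()
acc_bad = (predict(x_bad) == y_bad).mean()
print(f"matched inputs: {acc_ok:.2%}   shifted inputs: {acc_bad:.2%}")
```

The shifted inputs describe the same two-class task, but the stored parameters encode the training distribution, so accuracy collapses toward chance.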
But yeah, as long as you have a human doctor who is the one interpreting and using that tool properly. As long as you have a human expert doing the interpretation of whatever these very narrow-band systems are putting out, I think they have a huge amount of potential in those domains. It’s just, you get the people that come on, like, Lex Fridman’s podcast, and he’ll ask them questions like, do you think ChatGPT is conscious? It’s like, Lex, you’re smarter than this. Don’t do this, man. Consciousness is, what would you say, the ultimate in what it means to be generalizable. And AI is not there. And maybe it will be. John is- They can’t even define consciousness. That was Thunder Bay. They couldn’t define it. Well, I mean, that’s like Yuval Harari’s book. I think it’s Homo Deus, it might be Sapiens, where he’s like, I’m going to give you a rundown on everything that we figured out about human consciousness in the scientific field in the last 50 years: nothing. And he’s like, moving along, and you’re like, okay, that’s something. But anyway, I was going to say, as much as I appreciate y’all’s pessimism about the capacity of AI to develop out of these highly specific confines and situations, I would say I have an almost unfailing faith in, I guess what Paul Kingsnorth would call the Machine, to look at that and say, all right, well, that’s where humans are living now. Because there’s two ways to fix that. One is to figure out systems that can maybe exist in all of these environments. And the other one is to say, hey, we solved it right here; we’re going to make you live there. And that’s a sticky one right there. When you think of, say, the degree of, I mean, honestly, even this again, break the fourth wall, do a grim gris.
It’s like, we’re willing to accept a very low-resolution form of communication with each other in order to get these benefits, out of the fact that I can talk with these people that I would never sit down in the same room with, or even find, otherwise. And it’s like, to what degree are we societally going to be willing to make those exchanges, to say, hey, look, I get that this AI can only function in very certain circumstances, but if I live in just those very particular circumstances, say the conversational equivalent of that initial setting for the Go board, then I can feel like I have a friend. We already do that. Yeah, we already do that now. And this is like the big struggle. What are you going to be submitted to? Are you going to be submitted to the thing that gives you enlightenment that’s called AI? Are you going to be submitted to the thing that gives you enlightenment that’s called God? Are you going to be submitted to the thing that gives you enlightenment that’s called Gaia? You can go on and on and on. We’ve got all these Gnostic apartment-building cults that we could pick from, and people do pick one. It’s not that they’re not submitted. It’s not that they’re not able to operate in the hierarchy. They know, at least unconsciously, that they need the hierarchy, and they slide into one, right? Because if you believe you do not have a religion, one will be provided to you without your knowledge or consent. And that’s effectively what’s happening. So what you see in the religious wars of AI is people who go, yay, it’s going to give us enlightenment, and you go, no, I won’t be under the thumb of the thing that gives us enlightenment. It’s the same battle all over again; it’s just a different target at the top of the hierarchy. Well, just a few days ago there was this really great clip of David Bentley Hart talking about AI, reading from his book, The Experience of God. And he talks about all of this AI stuff.
One of the things he brings up, like we were saying earlier, is how AI has no essence in itself, right? And what Hart says that means is that the Deep Blue computer did not beat Garry Kasparov. Garry Kasparov and every single other human expert who’s ever played a chess game beat Garry Kasparov, because that’s what that thing was trained on. That’s what it was trained to do. And the point is there is nothing emergent in that. It’s just an aggregation. It’s not emergent. It’s like intellectual necromancy or something. You’re raising to some sort of galvanic, unnatural life the dead thoughts of everyone else. Yeah. Yes. Boy, that’s great. And I mean, there are benefits to aggregation in one place, in one system, in very specific use cases. But to say DeepMind, Deep Blue, whatever it was, beat Garry Kasparov? No, it didn’t. Everybody in the world beat Garry Kasparov, including himself. This is one of the points that- What’s the- Yeah, but neoplatonism, and therefore I can just make something called Deep Blue and say that that is the oneness that beat Garry Kasparov. How dare you, John? I’m a neoplatonist. I just, I called it. I named it. It’s been instantiated in being. It’s Wingardium Leviosa. Sorry, Levi-o-sa. That’s quite a modern, subjective, postmodern-ish Platonism. I point at the one and I say it’s the one because I want it to be the one. Well, that neoplatonism doesn’t give you a resolution to that at all, even according to Vervaeke. That’s why you need the religion that’s not a religion. It seems like a lot of work for something we already got out of Christianity, but hey, if you wanna try to reproduce 2,000-plus years of work by yourself, in your own lifetime, I guess you can try. Like, I got things to do. I gotta go. Yeah, yeah. Oh, man. And I actually do, yeah. I’ve got one last thing, one last thing.
I think no discussion of AI this week should go without this; we’ll just go with the flat presentation view here. So feast your eyes, everybody: the AI-generated pizza commercial. This is pizza? Yeah. Vegetable, secret things. I asked you to secret things. Oh. Oh gosh. The horror. It’s like family, but with more cheese. Yeah. Sam, I’m about to wrap this up, but hello. Pepperoni Hug Spot. Ascent. I’m never going there. What’s like family, but with old cheese? It’s a pizza place that disfigures anybody who eats the pizza. That was great. Oh, wow. Alrighty. So. Good night, everybody. Someone’s children are more unhappy than mine. All right. Good night. This was really fun. I felt like I really got a lot out of it. Some good breakthroughs and other moments. I look forward to the next one. Okay, very good. Take care. Glad you did that.