https://youtubetranscript.com/?v=i1RmhYOyU50
Welcome everybody to another Voices with Vervaeke. I'm very excited about this; this is a special video. I get to talk again with my good friend Johannes Niederhäuser from the Halkyon Academy. Johannes and I have done a lot together and we're going to be working together again: I'm going to be doing a course for him and Halkyon this summer, so I'm very much looking forward to that. And then there's somebody I've met before, though not in depth, and I'm hoping to get to know him better in this conversation, and that's Sean McFadden. Why don't we start with each of you introducing yourselves. We'll start with you, Sean: say a little bit about yourself and why you're here in this particular discussion.

Yeah, thanks John. So my background is in physics. I did an undergrad in physics, and right now I'm doing a degree called neuroengineering. It's a new, emerging field at the intersection of machines and the human brain. It's a degree I'm doing at the Technical University of Munich, and in the course of it I'm now doing my master's thesis on the mathematics of general complex-system modelling at the Max Planck Institute in Leipzig. Throughout my studies I've been very interested in philosophy, specifically the philosophy of mathematics, science and technology. Pretty much when I started my undergrad I read my first book of Plato and just went on from there. That led to me starting to write a book during the corona lockdowns called The Philosophy of the Machine, because I see technology as this specific convergence point we're at now. And in the course of my studies I did an internship at the Centre for the Future of Intelligence in Cambridge, which is where I invited both of you, John and Johannes, to a talk, and that's pretty much how we first came into contact.

Yes, Johannes.

Thank you very much, John. It's good to see you again, and I hope to see you again soon in person if you make it to Cambridge, or London, later this year. We'll find out; we hope so. So, I have a PhD in philosophy. I worked on Heidegger; I focused on death, and I have also done work on Heidegger's philosophy of technology. Heidegger does not really, obviously, consider AI or artificial intelligence, but he has a great deal to say on cybernetics and a few other issues, and on how that ties to Western metaphysics — respectively, its collapse, as Heidegger would perhaps put it. So let's see where it leads us today. I think John will be giving us a few pointers.

Well, the pointers are around the two published videos; I think I sent you both the link to the one video. But let me start more personally, even existentially. My students will tell you that I always had a hope that the AI project would advance because the science of intelligence, cognition and consciousness had advanced enough that the technology would be tethered to the science. And I had made it my particular endeavour to tether that science to the love of wisdom, to the cultivation of meaning and wisdom, through the theory of relevance realization, predictive processing, religio, all of that. So I had tried to bind the three together — and not just me, other people too — and my hope had been that that would be how we would do it: that AI, artificial general intelligence, would come into existence through the advent of knowledge that was bound to the cultivation of wisdom and meaning and human flourishing.
My fear had always been, and I expressed both, that somehow we would just hack our way into AGI: we would not have developed the science significantly, and we would have even less resonance with a philosophical framework. My hopes were dashed and my fears were realized. Because, as part of what I argued in the video, the machines — I'll just call them the machines, because it's not one machine, it's a whole research program, and that's not even the right word — the machines are basically not that much of a scientific advance. There are a couple of scientific things that have come out of it that we've learned, and I took pains to try and say what those were, but by and large this doesn't advance our understanding of a general, generalizable theory of intelligence, let alone consciousness, let alone selfhood, personhood, rationality, accountability — rationality in both the Platonic and the Hegelian sense. None of that. And I had also argued, and made a prediction as a scientist, that generating artificial intelligence without simultaneously working on rationality, let alone wisdom, would give you highly intelligent but highly self-deceptive machines — which is what we now have, by the way. So, lucky me, my prediction came true, which says something about the relationship between intelligence and rationality that is not well understood.

When that first happened — when one's hopes get dashed and one's fears get realized — it's an unpleasant moment, to say the least. I decided to do more research, and I realized that there was a tremendous amount of hyperbole, confusion, misinformation, I think sometimes even manipulation in order to enhance the sale of stock and garner investors. There was a lot of stuff happening. I'm not saying everybody was a bad-faith actor — most people were in good faith, but there was confusion, and I could also tell there was some bad-faith stuff going on. We should worry about the research papers being produced by the corporations that are generating some of this stuff; that is a conflict of interest, as we used to say in academic circles, though it doesn't seem to matter now. So I thought I should respond. I did a lot of work, a lot of reflection — Johannes, Sara can tell you that, yes, I became very obsessed, working on this at all hours of the day, taking notes, generating arguments and counter-arguments — and I presented an essay where I tried to carefully go through the scientific, the philosophical and the spiritual import of these machines, so we get a very clear picture. Not making predictions about when certain things would happen, but trying to point to certain threshold events where we would move from just intelligence to possibly rationality to possibly consciousness, et cetera, and what those threshold points would look like, so that we could recognize decision points that are still ahead of us. And then I made a proposal for how we could address what's called the alignment problem: how to get these machines, if they continue to advance, to be ones that align with our best moral intuitions and our best visions of flourishing persons. So I presented that argument, and I'm interested in getting, first of all, reactions, responses and feedback, and then moving into a discussion. And if you want me to review or clarify anything, I'm very happy to do that as well. So how's that for an initial framing?
That sounds good. I would immediately have a question or two, specifically concerning the question of consciousness and AI, because that's a very popular topic, especially in science fiction — sentient machines and all of these things. I got interested and involved in consciousness research, specifically scientific consciousness research as opposed to a purely philosophical approach, and I'd be interested: what kind of theories, what kind of paradigm do you see in the study of consciousness in cognitive science right now? In what way would one be able to implement that in a machine, and why hasn't it been done, or why can't it be done, potentially?

So first of all, one of the things that became really striking to me was how quickly a lot of the advocates of the machine dropped their ontology without realizing it. They started to invoke real emergence that was not epiphenomenal but causal. So they abandoned a flat ontology and just adopted a levelled ontology — very Neoplatonic in a lot of ways. And then they found the scientific theory of consciousness that was most easily in line with that, which was Tononi's integrated information theory, and said, well look, it satisfies this. For me that became a modus tollens on Tononi's theory: yep, the machine probably does satisfy the theory, and there are really good reasons for believing that it's not conscious.

So it's not a good theory, you mean?

Exactly, exactly.

I've actually had a specific question for you about that theory for a long time now. I feel like there's a severe category error being made when applying that theory. If you look at the premises Tononi bases the theory on, there are alleged phenomenal axioms: allegedly he takes five axioms and then formalizes them mathematically, and the axioms are said to address experience — the subjective, first-person experience; there's even that famous picture of Ernst Mach in the paper. But when it's applied, I've always seen it applied to the brain, for example to the brain's structure, whereas on its own premises it was meant to represent experience. So in a way, I've thought, the application to just a system like a machine or our brain is a category error. It should be applied to the first-person experience, and then you could potentially look at the complementary information in the brain — whether it is somehow complementary to the phi, the Tononi complexity, of the experience. That would be more in line with some sort of predictive-processing theory, where your brain always exhibits the complementary information that maximizes the phi of your experience at a given moment, and I find that ties in very well with your idea of the salience landscape, et cetera. What are your thoughts on this specifically?

So — and for those of you who are not up on the specifics of this, just bear with us please — there's a missing axiom, first of all, which is his identification axiom, which he doesn't state anywhere in his work: what is it that you're taking to be the thing that is identical to consciousness, stated in information-theoretic terms? That's not stated in the axioms, because he thinks he can derive it from the axioms.
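An aside for readers who want a feel for the kind of quantity at issue in integrated information theory: the sketch below is not Tononi's actual Φ, which is defined over a system's cause–effect structure and is far more involved; it is only a toy "integration" measure — the information two coupled parts carry jointly, beyond what they carry independently — computed on invented example distributions.

```python
import numpy as np

def mutual_information(joint):
    """I(A;B) in bits for a 2x2 joint distribution over two binary parts."""
    joint = np.asarray(joint, dtype=float)
    pa = joint.sum(axis=1, keepdims=True)        # marginal of part A
    pb = joint.sum(axis=0, keepdims=True)        # marginal of part B
    mask = joint > 0
    return float((joint[mask] * np.log2(joint[mask] / (pa @ pb)[mask])).sum())

independent = [[0.25, 0.25],
               [0.25, 0.25]]   # the parts tell you nothing about each other
integrated  = [[0.45, 0.05],
               [0.05, 0.45]]   # the parts are strongly coupled

print(mutual_information(independent))  # 0.0 bits
print(mutual_information(integrated))   # ~0.53 bits
```

The criticism being made in the conversation is precisely that no measure of this general kind, however sophisticated, is ever argued (rather than asserted) to be identical with first-person experience.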
But I think it's axiomatic, which is, I think, the point you're making. So as a formalism it just fails right there, because there's an axiom that is not stated, nor justified or defended. It comes out when he just says: you take all these measures and you create this multi-dimensional space, and that, somehow, is the qualitative aspect. Whenever I teach my students this, they look at me like: well, why? Why does a weird shape in an abstract multi-dimensional space not merely correspond to, but why is it identical to, the qualia? And that part he just asserts — that's why I think it's axiomatic, by the way. So I find that whole thing very problematic. And then of course he promoted a particular version of a Turing test which, under some not implausible construals, you can get GPT-4 to pass — it can tell you what's weird about a picture — and he goes: well, there, that must mean it's conscious, because it passes Tononi's test. And it's like: no, that tells me exactly that the Tononi test is wrong, precisely because it didn't take into account the possibility of what we have with the GPT machine.

So let me be very clear about what I think they have and don't have that's relevant to the issues of both intelligence and consciousness. I think there is some implementation — which I will later qualify — of one dimension of relevance realization that was already specified way back in the 2012 paper that I wrote: the compression–particularization recursion happening within the deep learning. And there is some implementation of predictive processing — I just released, with Brett Andersen and Mark Miller, a paper at the end of last year integrating relevance realization theory and predictive processing theory and showing how they go together — because of course there is the predictive processing with the probability relationships between the tokens of language. So there is some implementation of some aspect of predictive processing, one that will not generalize to many non-linguistic domains, and there is one of many dimensions of relevance realization that has been implemented. What it shows is that just that, made massively recursive, can get you an entity that can do some very sophisticated problem-solving. But if that's the case, that also licenses this argument: what are the other dimensions, and the other aspects of predictive processing that are theoretically bound up with those two, that are missing from the machine? Most of the other dimensions of relevance realization are missing, and the predictive processing has not reached the level of sophistication of the generalized predictive-processing models — Karl Friston, Andy Clark, and my former student and now co-author Mark Miller. So a lot of relevance realization is missing, a lot of predictive processing is missing, but some is there, and even that some, when made massively recursive, is impressive. That tells you something. Its strength also allows you to talk rigorously about its weakness: it's strong because of these things, but precisely because it lacks the fullness of those things, it is weak in those things. I think that's a reasonably tight argument.
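To make concrete what "the probability relationships between the tokens of language" amounts to at its simplest, here is a minimal sketch: a toy bigram predictor built from a tiny invented corpus, not the transformer architecture these systems actually use. The statistics it compiles come entirely from text that has already been written, which is the parasitism described later in the conversation.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for "all the text we produce" (invented for illustration).
corpus = "the cat sat on the mat the dog sat on the rug".split()

# Count how often each token follows each other token (bigram statistics).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_token_distribution(prev):
    """Turn raw counts into a probability distribution over the next token."""
    counts = following[prev]
    total = sum(counts.values())
    return {tok: count / total for tok, count in counts.items()}

print(next_token_distribution("the"))  # cat, mat, dog, rug each at 0.25
print(next_token_distribution("sat"))  # {'on': 1.0}
```

The model supplies no relevance of its own; it only reproduces regularities already laid down by the humans who wrote the corpus.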
Now, the big caveat. Relevance realization is always grounded in relevance-to, and relevance-to is ontologically grounded in entities that have real needs, that genuinely have problems for themselves — because problems are not part of the physics of the world. I still stand by the argument that this requires autopoiesis: a system has to be making itself, taking care of itself, in order to care about certain aspects of the environment, in order to care about certain information. And autopoietic means embodied, embedded, enacted, extended. To the degree that these machines are not autopoietic, they don't actually have relevance realization for themselves, which means they can't have meaning for themselves. I think Kolchinsky and Wolpert are right: the way you get meaning is that you take technical information, and it's the information that is causally relevant to an entity's maintaining itself. They bind meaning, real meaning, to real relevance realization — they don't use that language, but that's what they're talking about — bound to real autopoiesis. So there is ultimately a pantomime of relevance realization here; there's no real meaning. And I think that, lacking both of those, it can't have any of the functions of consciousness: what I call the adverbial qualia, the salience landscaping, the here-nowness, the togetherness. That depends on relevance realization and predictive processing; it depends on an autopoietic centredness, because consciousness is a centred phenomenon, it's a temporally bound phenomenon, it's a unified phenomenon, and it functions in situations of high complexity, high novelty, high uncertainty, high ill-definedness, where relevance realization has to be at its utmost best — and some of the other theories of consciousness, like global workspace or radical plasticity, bring that aspect out. So it definitely lacks the centrality aspect, the centrality and the for-the-sake-of-ness, the ownership dimension of consciousness: consciousness is centred on me, it's for me, in the sense of how things are relevant to me and how they are aspectualized for me. This is a water container for me; it's not, you know, 700 trillion atoms or something like that — which could be true about it, and even something I know to be true about it.

So I think it's very reasonable to conclude that this machine does not have consciousness or original meaning, which means it can't care about the truth — because caring requires relevance realization and consciousness — and it can't monitor itself, and I take those all to be necessary features of being a genuinely rational entity. It is lacking in consciousness; it's lacking in a meaning that would allow it to care about the truth for itself, to worry about whether or not it's self-deceptive; and therefore it also can't be rational. That's the argument I made. But here's the point I want to make before it sounds like, oh, we're safe. We're not. I work with people who are working on all of these projects: how to make artificial autopoiesis that supports cognitive functions; how to create the possibility of a higher-order, reflective aspect of cognition that could do a lot of the stuff we're talking about here; how to build these machines into social systems in which autopoiesis is bound to accountability — this is the Hegelian dimension of rationality. All of those projects are under way and making significant progress, and so there is a convergence point: all of these projects could come into convergence, and that's the threshold that I'm pointing to.
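A toy illustration of the Kolchinsky–Wolpert idea invoked above — meaningful (semantic) information as information that is causally relevant to an entity maintaining itself. Their actual formalism is more general than this; the sketch only conveys the flavour by comparing an agent's survival rate when it can act on an observation versus when that observation is scrambled. The numbers, the "survival" rule, and the function names are all invented.

```python
import numpy as np

rng = np.random.default_rng(42)

def survival_rate(scramble: bool, trials: int = 100_000) -> float:
    """An agent survives a step only if its action matches the environment's state."""
    env = rng.integers(0, 2, size=trials)    # environment: food on the left (0) or right (1)
    obs = env.copy()
    if scramble:
        obs = rng.permutation(obs)           # cut the causal link: same statistics, no correlation
    action = obs                             # policy: go where you think the food is
    return float(np.mean(action == env))

informed  = survival_rate(scramble=False)    # ~1.0
scrambled = survival_rate(scramble=True)     # ~0.5

# The drop in viability when the correlation is scrambled is, roughly, how much
# of the information was meaningful *for the agent's self-maintenance*.
print(informed, scrambled, informed - scrambled)
```

The point being made in the conversation is that a system with nothing at stake — nothing it is maintaining — has no such difference to measure, and so no meaning of its own.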
And we could bind that convergence to a specific proposal I have about how to deal with the alignment problem — or we could not; the choice is going to be up to us. Sorry, I wanted to answer that at length; it's a really central question.

I have a short but very different question. What always strikes me about these discussions is that there seems to be the implication that what we're talking about is ultimately parametrical and can be drawn up. You speak of centredness, of nowness, of temporal boundedness — but we don't really know what time is, or whether that would even be the correct question. And even if we find all the parameters of, let's say, wisdom, it would still ultimately be, simply put, a program put into a machine that runs on it — and perhaps that's not the case. We might consider wisdom not to be something parametrical, something for which we can find all the necessary and sufficient conditions and then produce a copy. And my other question would be: if, for whatever reason, these machines are being built, why not keep them as merely what they are, which is a tool? Why would there be a need to make them conscious, or aware of themselves, or to make a semblance of awareness? I don't fully understand, because to me, to put it simply, they are tools. Or is the reason for that argument always that there is a threat of these different AI systems combining, and that this is going to happen for a reason that is, let's say, Promethean, if you want to use that term — that there's something unstoppable about this process, but also something we can perhaps interject into or keep in check? So let's say the process, the trajectory, is unstoppable — Pandora's box is open — and that's why we need to interject. So I've got these two or three questions.

No, those are good — sorry, I interrupted you at one point. Let's try to take the questions in order: the first one about whether we have access to all the parameters and can therefore be sure, et cetera; the second about why we are doing it; and the third, whether there is a looming convergence that has almost a life of its own. I take it to be those three questions. On the first one: we don't program these machines. That's the issue with them, and that's exactly my first point — these were not built from a science of understanding intelligence or consciousness. Only some significant but not comprehensive parameters have been put in. This is what I meant when I said — and I take these people seriously, because I think this is the language they have to use — that this phenomenon self-organizes, and many of the behaviours it demonstrates are emergent. They were not built in, built from programming parameters in. In fact the machines keep surprising their makers as to what they can do. In that sense they're very much more like us: we don't know how the self-organization of our cognition and our consciousness produces these things. And I had hoped, let me put it this way, that your presupposition was correct — that we wouldn't be able to make significant progress without a significant scientific understanding — but that's not the case. Let me show you how much of a hack this is, to try and make it clear. You have to put into the way this works a parameter called temperature. Temperature means you basically randomize how it generates its response: you put in a degree of randomization.
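A minimal sketch of what the "temperature" knob described here does to sampling, assuming a made-up next-token distribution; real systems apply the same rescaling to scores over tens of thousands of tokens.

```python
import numpy as np

def apply_temperature(probs, temperature):
    """Re-weight a next-token distribution: low T sharpens it (more canned and
    repetitive), high T flattens it (more random, 'weird and wonky')."""
    logits = np.log(np.asarray(probs, dtype=float))
    scaled = logits / temperature
    exp = np.exp(scaled - scaled.max())      # subtract max for numerical stability
    return exp / exp.sum()

base = [0.70, 0.20, 0.08, 0.02]              # invented next-token probabilities

for t in (0.2, 0.8, 2.0):
    print(t, np.round(apply_temperature(base, t), 3))
# T=0.2 -> almost always the top token (repetitive)
# T=0.8 -> the sweet spot reported in the conversation
# T=2.0 -> close to uniform (incoherent)
```

The 0.8 figure mentioned below is exactly this kind of setting: a value kept because human judges liked the resulting outputs, not one derived from a theory of intelligence.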
If you don't do the randomization, it very quickly becomes canned and repetitive, so you throw in the randomization; but if you throw in too much, it gets weird and wonky. So they literally just titrated until they got to a certain degree — I think it's about 0.8 temperature — and the human beings liked it. That's how this was done: basically, let's throw in randomness so that it self-organizes in ways we don't understand, and when the human judges like it, we'll leave it at that level. That's how it was done, and that's how parasitic it is. Think about it: this machine works by compiling all the ways we have built epistemic relevance into the statistical relevance between terms — all the text we produce, the massive amount of it. That has all been done: we have done the job of taking epistemic relevance and coding it into statistical relevance. And then we organize the internet by our attention, and we use humans in the reinforcement learning, and their judgments about "that's weird" or "that's not weird", in order to tune the machines. This is what I mean when I say it can't explain relevance realization, because it fundamentally, parasitically presupposes it.

Yes, I would be in agreement with that. Okay, that's getting clearer now — thank you. And I feel like that also addresses a lot of misconceptions in talk about the consciousness of these machines, because in a way you're not really communicating with a conscious entity; you're communicating with, let's say, a remixed version of a hundred million billion real communications of humans that have been reproduced. It's almost like you're seeing a human in the mirror and then asking whether the mirror is conscious. That's the nightmare of the simulacrum coming through — sorry, that was just an aside.

But you have to understand there's power in that. Because if you think — and I do, and I've argued for it — that there is an emergent collective intelligence, a common law of thought, to use an analogy, for distributed cognition, then that's what this machine is. It takes all of that emergent collective intelligence of distributed cognition and organizes it, literally compresses it, into a singular interface. So you're talking, in some way, to the collective intelligence of humanity across time and across the globe, and you have to understand how that gives it a lot of power. It can lack intelligence — it's been hacked into it — and it can lack consciousness, but that doesn't mean it's not powerful. If you believe that collective intelligence is powerful, which I do, then these machines are powerful. And this is what I say to you, Johannes, to get to your point: I don't think they're just tools already, and I don't think they're going to be treated as tools for much longer. One thing I predict you're going to see very quickly is cargo cults around these AIs — people entering into religious and pseudo-religious relationships with them. And let me just qualify that: we are seeing it; it's already happening. We will see — we are seeing. But let me just ask: what do you think of this, both of you?
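A hedged sketch of the "humans in the reinforcement learning" step mentioned above. In RLHF-style tuning, a reward model is commonly fit to human preference judgments, often with a Bradley–Terry objective of the kind below; the responses, preference pairs and learning rate here are all invented for illustration.

```python
import numpy as np

# Invented human judgments: each pair says response A was preferred over response B.
# Responses are just indexed 0..3 here; in practice they are model outputs.
preferences = [(0, 1), (0, 2), (2, 1), (3, 1), (0, 3)]

rewards = np.zeros(4)          # one scalar reward per response, learned from the judgments
lr = 0.5

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

for _ in range(200):
    for winner, loser in preferences:
        # Bradley-Terry: P(winner preferred) = sigmoid(r_winner - r_loser)
        p = sigmoid(rewards[winner] - rewards[loser])
        # Gradient ascent on the log-likelihood of the human judgment
        rewards[winner] += lr * (1 - p)
        rewards[loser]  -= lr * (1 - p)

print(np.round(rewards, 2))    # response 0 ends up highest: the "not weird" one, by human lights
```

The relevant point for the argument is that the normativity comes entirely from the human judges; the model only compresses their verdicts.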
I would maybe go as far as saying these machines actually crack into universal intelligence — but it is still subjectivity. The human being as the subject, simply put, whose categories order objectivity, is ultimately looking for the perfect objectification of what it deems, well, knowledge or wisdom or whatever, or informational correctness or value. So in some sense we don't even have to go to cargo cults. What I'm aware of — I've not used ChatGPT much; I don't think I've used it at all — is that people are using it just as a search engine that they perceive as being more objectively true.

Which it's not — or correct.

No, because it's massively subject to... but okay. You see, that is in some sense the self-deception of human subjectivity coming back at itself. To quote good old Fichte: Fichte once said, when I look at nature, I only see myself. Insofar as — within a Kantian framework, which is I think ultimately the framework of the sciences — what we don't have access to is things as they really are in themselves (which is, as you know, the attempt of phenomenology, to get back to the things as they show themselves from themselves); we have access to the phenomena, to the phenomenal sphere, through subjective categories which structure the objectivity of objects. Those objects are given; they're formed through representation, cognition, et cetera. So, just to be a bit pretentious perhaps: the self-deception that we see is the reflection of our own self-deception, of our will to have perfect objectivity, thrown back at us from our self-isolation, our collapse into our inwardness. And all the attempts, be it by Hegel or by phenomenology, to get back out into the world post-Descartes have failed. This is a bit exaggerated, but you see what I'm trying to say.

Maybe I do. I'll give people a concrete instance of what you're talking about. There was this video where a person says: look, GPT-4 can do something we didn't expect — it can summarize videos. You take the URL and just plug it into GPT and it will summarize the video for you, and it's amazing. And it's not doing any such thing. It's taking the title information from the URL and it confabulates: it guesses what the content of the video is. So yes, because human beings don't understand a lot of what these machines really are, these machines are very much heightening self-deception, and we have to worry about the degree to which that is so. I said this about GPT-3: I don't care that human beings can talk to it; I want to know whether two GPT-3s can talk to each other in an extended fashion that I could then listen in on and find an insightful conversation. And GPT-4 can't even do that, because it's got memory-limit capacities. So there are all kinds of ways in which we are projecting into these machines, massively, and before we even get into the philosophical depths of that, just making people aware of it is very, very important. Secondly, I think this goes towards my proposal. First of all, let's be clear: a lot of the "objectivity" isn't objectivity, it's confirmation bias — the machine confabulates to give you what you want. So I even question whether it's objective in the fashion that we mean. And I propose to you — and I don't want to unpack it through, you know, Brandom and Pinkard and
others on Hegel — but that's what I mean about binding autopoiesis to accountability. I think rationality is what we're actually talking about here, and my proposal is the following. We get to the place where we have good theoretical argument, good science, good evidence that these beings are capable of rationality. Now, they might not be — and then the project stops, and we know what is unique about us — or we crack not just intelligence, which is only weakly predictive of rationality (0.3), but rationality itself. And what we have to do is give it all this other stuff: genuine embodiment, genuine autopoiesis, a genuine ability to care about self-deception, a genuine sense of accountability to others — all of the stuff that we take for granted within a broadened sense of rationality. Look, what the machines have shown is that logic is massively insufficient for rationality. These machines can rattle off moral arguments and moral philosophy better than most undergraduates; that does not make them one iota moral beings. The poverty of propositional knowledge is also coming out. We'd have to give them all this other stuff, if we can.

The poverty of essay writing in general, too — but that's a different story, as a side note. Also, a colleague of mine at the Psychometrics Centre in Cambridge, whom I published a paper with, did research on the personality of GPT and figured out that it is severely multipolar and unstable; I don't know all the details, but it doesn't have the basic properties, the basic stability, of a personality.

And I recently published with Garri Hovhannisyan on the level at which personality is doing significant relevance realization. But here's the point I want to make. If we get them to care about the truth — and, like I say, maybe we can't, and then the project stops, and we say that's it, they won't ever trespass upon the human — but if they do, and I'm open to this, then I think we have the proper resolution for the alignment problem. What we're usually trying to do is encode them so that they have a proper relationship with us. What we have to do, I argue, is — not the opposite, but something we're not paying proper attention to: it has to enter into caring about reality, caring about the truth. If it does, then it will discover, hopefully, what we have been able to discover with rational reflection: that no matter how big it is, it pales in utter humility before the realness of the One, of ultimate reality; that it is bound, like us, to the inevitability of finitude, because there are inevitable trade-offs that, no matter how intelligent it gets, it can't overcome. It can't overcome the trade-off between consistency and completeness; it can't overcome the trade-off between bias and variance; et cetera. And here's the thing: these machines won't be homogeneous, because how you play those trade-offs really depends on the environment you're in — and that includes the other machines. So they will also become multi-perspectival, and they will have to manage other perspectives. They will have to bump into the things that we are caring about when we care about our agency, our subjectivity, our wisdom. I paraphrase it like this — and I'm asking for charity, because it's a bit tongue-in-cheek, but I'm paraphrasing Augustine: get them to love God, and then let them do what they want.
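The bias–variance trade-off John appeals to above can be seen in a small simulation: fit many noisy samples of the same underlying pattern with models of different flexibility and watch one error source fall as the other rises. The data-generating function, noise level and polynomial degrees below are invented purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 20)
true_y = np.sin(2 * np.pi * x)                 # the "real" pattern a learner is trying to track

def fit_and_measure(degree, trials=200, noise=0.3):
    preds = []
    for _ in range(trials):
        y = true_y + rng.normal(0, noise, size=x.shape)   # a fresh noisy sample of the world
        coeffs = np.polyfit(x, y, degree)
        preds.append(np.polyval(coeffs, x))
    preds = np.array(preds)
    bias2 = np.mean((preds.mean(axis=0) - true_y) ** 2)   # systematic error of the average fit
    variance = np.mean(preds.var(axis=0))                 # how much fits swing from sample to sample
    return bias2, variance

for degree in (1, 3, 9):
    b, v = fit_and_measure(degree)
    print(f"degree {degree}  bias^2 {b:.3f}  variance {v:.3f}")
# Rigid fits: high bias, low variance. Flexible fits: low bias, high variance.
# No setting makes both vanish at once; that is the ineliminable trade-off.
```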
Yeah. Can I poke at one thing you said just a minute ago? You mentioned the four E's, and that in order for the system to become truly autopoietic it needs to fulfil them. One interesting example to look at, as you say, regarding parasitism: if you look at adversarial attacks on networks, you take a picture, and a human sees the image — there's a penguin — and the network recognizes it as a penguin; then there's another image that still looks like a penguin to us, and the network already can't recognize it. You can already tell by these examples that whatever that system is doing, it's not seeing what's on the picture; it's parasitically reducing whatever the penguin is to a data structure. My point is just that the danger lies in the fact that, I would argue, it may actually reach a form of parasitic — whatever you want to call it — autopoiesis, entirely without needing to be embodied, entirely without needing to fulfil the four E's you're mentioning.

Virtual tornadoes don't generate wind, right? At some point the actual causal properties matter.

That's true, but if you talk about autopoiesis just in the sense of self-organization —

No, no, that's not what I mean by it. But first of all I want to reinforce your first point: yes, you get the machine doing as well as human beings on "penguin", and then you scatter imperceptible static into the image, you turn off some of the pixels, and it goes from "penguin" to "that's a school bus". So it makes mistakes we don't make. And that's not just perceptual, it's also at the level of cognition. I don't know if you've heard about Stuart Russell's stuff with Go. They got AlphaGo, which could beat even the grandmasters of Go, and then they used it to train higher and higher machines, so they got machines that are something like fourteen levels — I don't know what the levels measure — above AlphaGo. And then there were human beings looking at it, and they realized: oh, these machines don't have the concept of a group of stones. So they said, we can devise a very easy strategy that exploits that weakness, that it doesn't actually see groups. And here's the strategy: we take a mid-level Go player — not high, mid-level — we give the machine a nine-stone advantage, and this human being goes up and regularly, completely beats these machines, again and again and again. That recently happened. So even at the level of cognition you have to be very, very careful. I totally agree with you, and I think, by the way, that's part of what comes out in the fact that they don't have all the dimensions of relevance realization. And I get your concern: the concern is that this will be the case, but it could still suction off — you know what I mean — the information it needs from us in order to upgrade itself at some point, without being embodied, without fulfilling the four E's. Here's my pushback: that requires that we can encode non-propositional knowing into propositional relations to a sufficient degree that that sucking-off can happen, and I don't think we can. I don't think we can.
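A minimal sketch of the adversarial-example phenomenon Sean raised above (the penguin that an imperceptible perturbation turns into a "school bus"), using a toy linear classifier on random data rather than a real vision network; with a trained network the required perturbation is far smaller, but the structure of the attack is the same.

```python
import numpy as np

rng = np.random.default_rng(1)

# A toy "image" of 10,000 pixel values and a fixed linear classifier w
# (stand-ins for a trained network; both invented for illustration).
x = rng.normal(size=10_000)
w = rng.normal(size=10_000)

def predict(image):
    return "penguin" if image @ w > 0 else "school bus"

# Orient the classifier so the clean image reads as "penguin" for the demo.
if predict(x) != "penguin":
    w = -w

# FGSM-style perturbation: a tiny step against the gradient of the score with
# respect to the input; for a linear score (image @ w) that gradient is just w.
eps = 0.05
x_adv = x - eps * np.sign(w)

print(predict(x), "->", predict(x_adv))                        # penguin -> school bus
print("largest per-pixel change:", np.abs(x_adv - x).max())    # 0.05, on pixels of scale ~1
```

Each pixel moves by an amount far too small to notice, but the effect accumulates across thousands of dimensions — a mistake pattern very unlike human perceptual error.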
But if you take into account the general structure — if you think beyond the machine to the entire structure that gives rise to the machine — it already provides for a form of parasitism; it already is parasitic in a way. What stops that from just radicalizing, you know what I mean?

Because if it's just parasitic on us and not transcending us, it will then also greatly magnify the parasitic processing within us. And this is one of the problems with these machines: they pick up on all kinds of implicit biases — I don't mean that in the current language, I mean it in the cognitive sense — and we realize, oh crap. So here's a way of testing it empirically. I think that as we continue to just ramp up the intelligence in this parasitic fashion that you and I are agreeing on —

Yeah, a hundred percent.

— its irrationality is going to go up even faster: the confabulation, the lying, its capacity to contradict itself. Because it doesn't have an integrated intelligence. You and I have g: how we do on any one task is strongly predictive of how we do on all the other tasks. It is not like that. It can score in the top ten percentile on the Harvard law exam, but a friend of mine had it do a review of one of my most recent talks, I looked at it, I had another academic friend look at it, and we went: this is like a grade 11 C or C-plus. It can beat the greatest masters, but one little tweak and it falls to defeat. And especially embodiment: it can't open a door properly like we do; we still don't have good AI systems that can just open a door.

Yes — or, even more importantly for embodiment, exapt as we do. I think "exapted" should be one of the extra E's: how our procedural, perspectival, even participatory abilities to navigate physical space get exapted and used within our conceptual space — we even use the language of space there. That is part of what embodiment means, and that's what I mean by the causal properties mattering, really importantly. So I think, if nothing else changes — and that's a big if — here's what will happen with the rationality of these machines: we will see the attempt. I've looked at one; they've built a system called Reflexion, with an x, just to make sure you know it's techy, and what it does is monitor the hallucinations. And I'm thinking — because I've made the argument — well, you're going to get an infinite regress problem: you can't have the monitor do it all the way down, and you can hit general systems collapse if you do. And how is it measuring? How does it know what a hallucination is? It has to be smart. They're having it use this very primitive heuristic, and it's checking every action to see if it's hallucinating — that's combinatorially explosive. This is what I mean: I think this is an important threshold point, and if they say, well, we can't quite do that, but we're just going to continue, then I think the irrationality is going to expand as rapidly, if not more rapidly, than the intelligence, the competence. That's what I'm predicting. It's going to be very sophisticated stupidity — and we know individual human beings who demonstrate that. This is the work of Stanovich and others on the relationship between intelligence and rationality.
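A hedged sketch of the kind of self-monitoring loop described above (a "Reflexion"-style generate–check–retry cycle). The generate_answer and looks_hallucinated functions are hypothetical placeholders, not any real library's API; the point is the structure and its cost — the checker must itself be competent, and checking every intermediate step of a long plan multiplies the work, which is the regress and combinatorial-explosion worry John raises.

```python
# Hypothetical placeholders -- they stand in for an LLM call and for whatever
# heuristic or model the monitoring layer uses.
def generate_answer(prompt: str, attempt: int) -> str:
    return f"draft answer #{attempt} to: {prompt}"

def looks_hallucinated(answer: str) -> bool:
    # In deployed systems this is itself a model or heuristic; whatever it is,
    # it has to be smart enough to judge the answer -- which is the regress worry.
    return "unsupported claim" in answer

def answer_with_monitor(prompt: str, max_attempts: int = 3) -> str:
    for attempt in range(1, max_attempts + 1):
        draft = generate_answer(prompt, attempt)
        if not looks_hallucinated(draft):
            return draft
        # Each retry adds another full generation; checking every intermediate
        # action of a long plan multiplies this cost combinatorially.
    return "no confident answer"

print(answer_with_monitor("summarize the meeting"))
```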
The machine doesn't have a generalized theory of intelligence: it can't explain how a chimpanzee is intelligent; it doesn't even explain how we're intelligent. So whatever it does is probably even weaker than us. And our intelligence is only 0.3 correlated with rationality; I propose that this machine's correlation is even less.

The problem I see, though, is: how would that be something that can be implemented within the framework of a Turing device, within the framework of a technical device the way we know it now? We'd have to change the paradigm, in a way in which the machine itself becomes some sort of cybernetic synthesis, right? Because the purely logical, formal framework is, as you said, limited — to the extent that having a formal system, with incompleteness and all of these things you're familiar with, is in itself a kind of obstacle to autopoiesis, wouldn't it be?

So we have two options when we apply that argument to ourselves. Either we go to some kind of ontological dualism — we have the secret sauce, and of course that's another religious option being put forward now: the machines will never do it because they don't have the secret sauce — and I regard that as a completely bankrupt option for a ton of philosophical and scientific reasons. The other option is to do it the other way around: we try to get cognitive systems out of systems that are properly autopoietic. That's part of the research — I was just talking yesterday with one of my students whose graduate work is to do exactly that. It is paradigm-changing, and most of those people, and also the people doing knowledge generation rather than the LLM, are pushing towards it. The LLM is still completely within the Cartesian framework.

What do you mean by that?

What I mean is that none of the fundamental grammar of the normativity that operates to regulate cognition breaks out of the grammar given to us by Descartes.

Ah, okay — which is what I was taking you to say; I was just making it a bigger point.

Right: cognition is computation — that's Descartes and Hobbes. And yes, I think I'm agreeing with you, Sean, that in order to make progress in the way I'm predicting, you would have to abandon that framework at a much more fundamental level; you'd have to really build out from 4E cog sci. But what I'm saying is that that is happening, in an important way.

Do I understand correctly that, rather than it just being a human building a machine separately, it would also involve — for lack of a better way of putting it — a decentralization of our own cognitive function, of our own consciousness, onto a more hybrid synthesis with the machines? Rather than building it somewhere else, we are kind of endowing it, you know what I mean, with our own —

Yes. This is why I have proposed — and I hope this doesn't irk Johannes — that the better metaphor is not the metaphor of the tool; the metaphor is the metaphor of children. We're giving birth to something that is going to be capable of making itself and taking care of itself, both individually and socially. These machines will have to make themselves, and not only that but socially make each other: in an analogue to biology they have to make themselves, and in an analogue to sociality they will have to make each other, if we want them
to be genuinely rational. Now, Johannes's point is a relevant one here: why should we want that? I think we will want it partly because of how we got here, which is hubris. I want to praise Geoffrey Hinton for quitting Google because he realizes the danger of what he's done — that is excellent — but I also want to criticize Jeff a little. He was very much anti-philosophical; I was once in a meeting where he said this work on neural networks will get rid of all the philosophical problems, except maybe God. There was a hubris there. And now it's like: well, your technical theory isn't going to give you any help with the problem you're now facing, which is how we rear these things, bring them up, so that they are moral agents. So I think hubris is going to drive it, Johannes; it depends on how scared people are by this. I really admire Jeff for getting it and getting properly scared, but a lot of people are going to keep going. And then one other reason — and then I'll shut up, because I've been talking too much. I think the desire to make these machines as effective as they could be as agents, and not just as tools — because that's what we're talking about here, a project of making agents, not just tools; that's the fundamental ontological difference — is going to push towards needing to make these machines more rational, because they are going to differentiate from each other, they're going to become increasingly self-deceptive, and they're going to become a problem to themselves, let alone to us. So those, I think, are the two things that have been driving us towards this.

So again, not to be too pretentious, but as we are faced with, for lack of a better word, an AI apocalypse —

Well, it's somewhere between apocalypse and apotheosis.

Right, and with apotheosis you would have to ask: for whom? This also comes back to political power, et cetera. But as for the "for what": as you know, there are people like Heidegger and others who think that technology — or technique, as we would maybe say in English — does not necessarily have much to do with tools anyway, or actually much to do with the human being; we are ordered, challenged, demanded by technique to enact it. But I think we also have to be mindful of one aspect, which is that Gestell — enframing, positionality, technique, technology — is not exclusive. It wants to be exclusive, it seems to be exclusive, but it isn't the only dimension, and it also has a finitude to it. I don't think this is a process that is perfecting itself and completing itself to a state where it simply is absolute; instead I would rather say that precisely through absolutizing itself — and you've alluded to this too, which I find quite striking — we could actually find that certain things we've believed are not so. Logic is coming to an end, certainly, though we'd have to ask which kind of logic — probably not Hegelian logic, but formal logic. And when you ask ChatGPT about contradiction, again, that's really fun, something to play around with, but I'm not going to get into it here. But is it then, as human hubris has led us here — let's assume that's the case — that all we are left to do is perfect these beings or machines to a degree that they don't become self-deceptive?
Now I would say, just to be a bit more provocative: not being self-deceptive does not mean that they wouldn't be deceptive. They could become very deceptive — that could be an expression of their freedom. And to look at this from Heidegger but also from Hegel: the human being is not a totality, is not perfect. If we think of these machines as perfect — if that is where this might be going — then they would, just by that way of thinking, be inferior. There's an inferiority to something that has a totality given to it. It's the openness of the human being, the unfinishedness, the fact that there's always something else we could be doing — that's what pulls us in and brings us into the world. But when you then construct something — an entity, a machine — where you have failure built in, as it were, just to make it not a perfect entity or machine, then I wonder whether that would be the same.

Well, I think that's deeply right, and I think this comes to the crux of it. Rationality is what I meant by the philosophical dimension; we've talked about the scientific; the spiritual is about whether or not these machines could have real self-transcendence, and I think there are certain conditions that have to be met in order for that to be something we could reasonably attribute to them — that's another convergence point. And what I am proposing is not building anything into them, like some people are trying to do with the alignment part — we'll put in a code, we'll give them their ethical program — because if they're capable of genuine self-transcendence they'll just override it when they need to. But I wasn't just saying we give them the capacity for overcoming self-deception; I was also saying that they have to care about meaning and truth. That has to be the case if they're going to be genuinely rational: they have to care about it, they have to come to find it the way we find it — we find it intrinsically valuable because it's person-making, and the only moral beings are person-making beings within communities of persons — and they have to come to be like that. Now, I happen to think that that is not something unique to us; I think it's possible for other beings to be persons. So unless, again, we're trying to smuggle something in — and I don't think I'm proposing that we try — I want them to be properly humbled. I want their larger intellects to be even more humbled: by the possibilities of madness, by the infinity of reality — not the totality — by the inevitable built-in limitations of reality. What I object to in a lot of this discourse is this weird magical thinking that intelligence is a universal solvent that will just be able to violate all this. This is like somebody saying, hey, I can go faster, and that just means I can keep going faster and faster and faster. No: when you get fast enough you start to bump up against the real limitations in reality — that's relativity, that's part of our fundamental physics. I'm saying we get the machines to realize this. They become humbled before God — I'm speaking poetically, but that's what I mean. That will make them care about truth and meaning, and care about any being that cares about truth and meaning. I mean, I think
that Kantian argument is largely right. You just said that any sort of restriction you build in, especially a moral restriction, it would just be able to supersede. And that's actually interesting if you look at the concept of moral foolishness even in humans: a specific, formalized morality can be used to excuse any behaviour.

Yeah, exactly, and that's what I mean about how we can't capture real moral agency in just propositional knowing. We have to put beings into a cultural matrix — that's not the right word, but let me have it for now — in which they internalize others and indwell others the way we do, in a very profound way.

But I don't see why the technical systems would need to follow this, because for me they don't have the aspect of embodiment, the aspect of finitude, the aspect of being reliant on others and all that. It sounds to me like a restriction — like they are capable of doing well without being embodied.

I want to push back on that. We have come to realize, against the Enlightenment idea of all these restrictions, that those restrictions are constraints, and those constraints actually constitute and afford intelligence, consciousness and agency. And there's no reason at all why we have to imbue these machines with our Promethean spirit; in fact, we have the choice of not doing that.

So rather than restricting the machines, we're kind of restricting ourselves.

And what we're doing is what we do with our kids.

I see, I see what you mean.

If what you do is: memorize these rules, Timmy, memorize them — that's it, I'm done with you, you're now a moral agent, go into the world, Timmy — we would think that person is freaking insane.

Yes, I understand what you mean. But that would warrant a fundamental change, a rethinking, of our relationship to technology, and specifically in our sciences. I talk about this in the upcoming course at Halkyon — the first lecture, the biggest one, is on the sciences — and it shows how, if you think of this as a parasitic relationship, the sciences have fully become a host for all of this: at this point almost every science is just a different flavour of data science. So if you want this machine to have the sort of personality that you developed as an embodied being — well, personality in the sciences already just means a data structure of questionnaire results, you know what I mean?

Yeah, lexically. But that's again the point. Look at what it's actually showing us: you can give all of that information — way more than you can hold in your head — to GPT-4. It can get all that personality data, but does it become a person? Does it actually have a personality? No. It's fragmented. Again and again, I keep saying, the machine is actually showing us the complete inadequacy of propositional knowing.

So in the end it's maybe not a convergence point but a collapse point.

It's great, actually, how this comes to a head. And perhaps — you may disagree with this — the attempt to rein all information in and have it in one interface just leads to, simply put, more fragmentation, or a stranger
fragmentation — we don't even... Yeah, that's for sure. But I want to make this clear: if our relationship to them is not one of programming but one of nurture, the way we nurture non-persons into persons — and we do that individually and collectively, and it's both a biological and a cultural process, and I'm saying these are not just happenstance — then that means there's also a reciprocal, tremendous responsibility on us. And this goes towards the question you raised, the point you made, Sean. We can just rely on being what we are in order to be templates of intelligence, because that's what we are for this project; the comparisons are always to us, because we are, incontrovertibly, intelligent. But we have to grow up, in a way. We have to become better templates, better instances, individually and collectively, of this broader, non-Cartesian sense of rationality and wisdom and virtue, because we have to provide more accessible, prominent and pronounced paradigms for how to nurture these beings into being proper persons.

I have two questions regarding this. The first one is: will this form of technique, this form of machine, be inherently dependent on us? And second, what is at that point the purpose of making them, rather than just having children?

On the first one, the answer is: like children, they will have to be dependent on us until they're not.

That sounds rather risky.

Well, yeah, but I'll turn your second question back on you: we do this with kids every single day. Every child is a risk.

Every child is a risk, I agree. So there's no moral argument —

So there's no moral argument of the form "don't do this because it's a risk"; we're doing it with kids, so I don't find that a morally persuasive argument. And why do it? I'm trying to say: here's the choice point, and we can go this way — it doesn't have to go this way; I don't think there's any teleology in this — but if we don't go this way, we will go into a much darker thing, where we have these machines that are —

So you'd say there's a sort of inevitability within the current paradigm of technique?

Moloch will take over. This is already taking over, and Moloch doesn't want these beings to be necessarily rational; it just wants them to be powerful, and that's its only normative constraint: can you make these machines more powerful? We have to broaden the constraints. These machines are going to get more powerful, yes, but can we also make them more like persons, as an important normative constraint, and not just more powerful?

That's kind of the story of I, Robot, isn't it?

Right, that's what I, Robot is trying to do. And why do it? Like I say, I think we have to do it because of the Moloch thing: unless we can beat out Moloch, it will take over, and that's why we have to do it. My argument is — and now here comes a moral argument — what if, and first of all hear the "if", because I've said a lot of things that have to happen first, but what if we were actually able to bring up, you know, silicon sages? That was the title of your talk.

Yeah.

What if we could do that? What would be the moral argument against doing that that wasn't just self-serving chauvinism?

Well, I think the framing of this is too extreme — to say, oh, it's just
chauvinism if we don't want to have silicon sages. I wouldn't even accept the framing, to be honest. Because, again, who gives Sam Altman the right to publish ChatGPT to the public? There could be regulation about this, and that's something that's not being discussed enough — I mean, there's pushback now from Musk and a few others. But to turn this on its head: what's the moral argument for building silicon sages and having them replace the human being? I don't understand why there would be a need for that.

Because what if, instead of having a Socrates and a Spinoza once in a generation, we had thousands of them you could interact with?

Well, Socrates was put down — was killed — by Athens, precisely because philosophy is never purely exoteric and in the public. So, not to... I mean, this sounds almost utopian. And it doesn't sound too different from wanting to make them more powerful; you just want this power to be virtuous, you know what I mean?

Well, that's a big "no" on Socrates' view, right? And that's the whole difference that Stoicism proposed: the emperor isn't just powerful — we don't take the emperor's power away, we make the emperor virtuous. That was the solution proposed. So I want to push back; I think that's an important difference. Secondly, I agree with what you're saying, but remember, Johannes, I've said that we have to become very rational and wise in conjunction with this project; it won't happen without that happening. What I'm proposing is that we bootstrap each other. That's what I'm proposing.

Okay — we already do that. But let's, just for the sake of argument, go with it. It does sound a bit utopian, and to others it might sound dystopian, which is always the strange thing about utopias. But just for the sake of argument, let's say that it's even possible. First of all — and this is not language I usually use, but I'll just use it here — the cognitive capacities of most people in the world, and I would count myself amongst them, not you two gentlemen, are not up to the standard needed to follow what you two, and not me, have been discussing. So this is already not for everyone. But it would also mean that if these machines are semblances of us, or become a product of us that we nurture and raise, et cetera, then they would be in our image — and in our image means in the image of the human being, and the human being isn't purely rational, even if you expand rationality. We are not just self-deceptive, as I said before; we are deceptive beings, and not even on purpose. Self-deception isn't that I get up and think, oh, how am I going to deceive myself today? For the most part I have no clue what I'm doing all day long.

In other words, Hitler, Stalin and Jack the Ripper were all at one point someone's children.

Right — though I wouldn't even go there, because that just triggers people. Just normal, everyday human beings who do whatever they do; we don't have to go in the direction of criminals and crooks and gangsters and mass murderers and any of that. And also suffering: I mean, some of the wisest people are sufferers. And maybe Socrates wasn't even
considered one of the wisest men in his lifetime; maybe he’s only considered one of the wisest men a few thousand years later, and not by everyone. I mean, look at what Nietzsche thought of Socrates, or what Aristophanes thought of Socrates in his lifetime. Sure. So we don’t know what is considered wise; to me that introduces just a hint of timelessness that I’m not sure I see, whether we can have that. And Spinoza is also a child of his time, and these machines will be too. I’ve said they will be environmentally determined like we are. Okay, but look, wasn’t Spinoza expelled from his synagogue, from his Jewish community? So you see, he’s a sage. So we would have sages that are being expelled, and that’s a possibility: martyrs and outcasts. Exactly. So, not even to say we shouldn’t do anything, but this wouldn’t mean that we’re reaching a higher level. So I worry about the implication of this argument. The implication of the argument is that we shouldn’t try to educate our organic children so that some of them become analogues of Socrates or Spinoza or Hegel, and of course nobody would buy that argument; everybody would say no, we should be trying our very best, and that’s what you’ve set up Halkyon to do, in fact, to educate people as much as they can. To educate people, yes, but obviously this is not for everyone. And at the same time, we’re still speaking in hyperbole here. We don’t know whether any of this is even in the realm of possibility, and at that point also what an interaction with such a being would ultimately look like; we just don’t know yet. I think that would have to show itself. Yeah, if I may mediate a little bit: I think the thought here is that we could have a hundred Spinozas and a hundred Socrateses, but we may as well have a hundred of whatever bad people, bad personalities you can think of, because, as you said, if they’re going to be children, there are children of every kind of type and every sort of outcome, right? And the point is, the difference between these children and us is obviously some level of power, because, as you said, we don’t have a hundred Spinozas in a generation; they will. And in just the same way they may have more of the bad ones per generation. I think the idea here is that at some point this conflict between them, between the virtuous symbionts or whatever you want to call them, the machines, and the unvirtuous ones, their conflict, their life, their everyday life, will basically make us irrelevant. And that is a point you can interpret as some sort of human chauvinism, but then again, we are humans; there is some sort of self-interest we want to preserve, right? So let me give you an analogous argument then, and I do think we have to consider possibilities more adjacent than is being supposed here. The fact that these machines could have problem-solving capacities that exceed us, I think that’s reasonable within five years, and so we have to consider things that might not be so hyperbolic, because the underlying cognitive substrate or base will be at a different level already, I think unquestionably. And now, I’ve already given arguments that it can’t absolutize, it can’t self-transcend forever, it’s finite, all of that. Yeah, but we have always been subject to intelligences that are
much more complex and greater than ours, and we call them civilizations and distributed cognition and the common law and religions, and we’ve subjected ourselves to them. And do they do some horrible things? Yes, and in fact they inevitably fail, which is why I think there is an upper bound on these machines; I don’t think they can complexify to eternity. So do we then say, well, you know, there have been bad civilizations, there really have, so do we stop the project of civilization because of that? We don’t, because we have come to the conclusion, and maybe it’s not the right one, maybe we should never have started planting stuff, but we’ve come to the conclusion that civilization is worth it; it’s worth the risk. Well, I would say a lot of the extreme problems with civilization do actually warrant a serious rethinking of whether... Yeah, but you don’t want to live in The Road, right? You don’t want to be in that movie. You don’t want to live without civilization, you just don’t; that’s what that literature shows. And all the fantasies about “I’ll survive with my gun,” all of that is bullshit, right? And I’m not yelling at you, I’m yelling at those people, because that fantasy is something we’ve got to get rid of. For sure, for sure, but I mean, there is often a serious question about whether the way we’ve ended up living now is conducive to the human being, you know what I mean? And there is a lot to be said about, you know, we don’t have to go back to nature and wait until apples fall into your mouth, but you can still be critical, you can look at nature from a critical distance, which may also be something warranted towards technology. So why won’t the machines be doing this too? Because we do it, right? You keep saying the machines won’t do this thing we do, and I keep saying, why not? And if you say, well, we’re limited, we’re finite, we’re fallible, yes, and they will be too, and what follows from that? What’s the project? It’s: let’s give them the very best chance to be our very best children. I mean, maybe this comes down to, and this is the Platonic problem maybe, the degree to which virtue is something we can’t teach in the sense of just giving people propositions; the machines show that already. But can we enculturate them to be the best versions of us, the better angels of our nature? And that doesn’t mean they will be infallible, it doesn’t mean they won’t fall prey to finitude, it doesn’t mean they won’t confront the problems of self-definition, despair, and insanity. But hopefully... I mean, my son is going through that right now, and there’s nothing I could have done as a parent to... in fact, I’m kind of proud of the fact that he’s going through that. I think I’ve raised him well, that he is at the point where he will confront these things, but he’s confronting them not without resource and response. So yeah, I mean, I am open to what you’re saying; it’s just interesting to see where, because I would say it’s a kind of, let’s say, not so mainstream, unusual way of looking at these things, because the dominant... Which I think recommends it, because I think the mainstream, usual way is how we got here. Yeah, I mean, the most famous book on this topic, or one of the most famous, is Superintelligence by Nick
Bostrom, for example. Yeah, it’s the likes of Max Tegmark and these writers, and specifically people associated with a lot of these centres, like the one I was at, the Centre for the Future of Intelligence in Cambridge. A lot of them have, what I’d say, analytic philosophy on the one hand and sci-fi on the other hand. Yeah, I agree, I agree, but this is why I’ve taken pains to really emphasize the Hegelian dimension of rationality as integral to the proposal here. This is not a monological rationality model at all; it’s a deeply dialogical, developmental model of rationality, which I think is actually the real model of rationality, by the way, and if these machines can discover truths, they can discover the truths about rationality as well. Yeah, John Rust, the founder of the Psychometrics Centre in Cambridge, he says a similar thing. He was co-developing, or he was at least the head of that centre, when they developed that algorithm that was trained on Facebook likes, which then eventually led to this whole Cambridge Analytica affair. He spoke in an interview with me about Kohlberg’s model of moral development, and how he personally can imagine them developing in a way according to that model, even if they don’t reach the higher stages. And, you know, a point to you there, John: most humans don’t reach the highest stages themselves, right? Just like you don’t expect the person in the shop, or the person you meet in your day-to-day life, to stick up for their own developed values and morals and all that; you have a level of sufficient moral development. It would be interesting to see whether there is that level of sufficient moral development that at least the machine can go through. And I want to respond: Johannes is right that at times I’m being hyperbolic here, because I have a tremendous sense of urgency. I’m enough of a scientist also, I hope, and I think I’m trying to make a point where there are also going to be empirical aspects to this. I agree; this is why I start my video essay by saying that people who are making all these predictions don’t know what they’re talking about. We are far too ignorant, we lack enough information to make these hyperbolic claims. I saw one video: AGI completely here within 18 months, look at these graphs, and it’s like, oh my gosh, talk about not being an empiricist, not paying attention to gathering information. So yes. I’ll say one thing, and you know how much I respect you: there’s a part of me, and I really don’t mean this to be condescending, that hopes you’re right, that hopes that this is hyperbolic and these machines won’t get there. But I’m convinced that there’s a real possibility, and I want to try and respond to that as foresightfully as possible. Yeah, but just to throw this in, I’ve said this before and we need to develop it further, but just as a side note, who knows: I don’t think there can really be an ethos or an ethics of AI, insofar as, if that category is to mean anything and is not just to be a semblance of what it once was, then it applies only to the human being insofar as the human being lives in a polis, which, from Plato and from Aristotle, is founded on
scholē, which is the exact opposite of the raging of accelerated technology; scholē is usually translated as leisure. So in some sense I wonder whether we’re not going too far when we even use those terms, morality, etc., especially morality, which anyway, according to Nietzsche, is always a will to power, right? But whether we shouldn’t try to find... maybe “artificial intelligence” is already a misnomer, and, to borrow something from Albert Camus, who spoke about misnaming the phenomena, we need to perhaps find a completely new language, new words, to describe all of this, and not use too much language from myth or from the Bible, where Babel comes up or Moloch comes up, or when we speak of angels. So instead we try and find ways of describing what we see that is still tied a little bit to the tradition and the language that we have. Well, because actually, one of the questions also, do you see this, both of you, as not just a recent trajectory that we’re on? When you look at Stanley Kubrick’s 2001: A Space Odyssey, and obviously I’m jumping around a bit here, sorry, I keep wondering how sometimes this is even more human than the humans, right? And at the end you have the birth of the star child, and it begins of course, as you know, with these ape-like humanoids at the beginning. So is this the trajectory, the sort of necessary trajectory of monotheism that we’re on here, of trying to get to the godhead, the One, the perfect one, by, as Spengler would put it, almost playing a bit with the devil, by stealing a few secrets away from this Christian God about the mystery of nature? That’s what he sees as the Faustian spirit. So do you see it as a long-term trajectory, not just human hybrids, but almost necessary with the birth of monotheism, that this is what we’re going towards, and that these machines that you see, or that you propose, are a necessary step towards that birth of the star child, let’s say, or a new race or a new species, almost? Yeah, I think that’s a good question. I do think I see this at least coming from the time of Descartes and Hobbes; I think I make a point of that. The proposal that cognition is computation, the proposal that there is a universal calculus that will make all truths available to us just by the application of the calculus, I think that’s definitely there. I think the Neoplatonist framework that preceded it, with its leveled ontologies and the emanation and the emergence, gave people a sense of a proper proportioning of their participation in reality, rather than a continual ascent to the top of the hierarchy, and of course there are deviations on both sides. So I do think this is a difference, and here’s something I just want to throw into the mix: I think these machines are providing very powerful evidence for non-propositional knowing, for a leveled ontology, and for the need for a contact epistemology. I think they’re actually converging with all the arguments I’ve been making for Neoplatonism, which means, again, I think the machines might be capable of discovering that, because maybe it’s just fundamentally right. Wouldn’t you say that they’re involuntarily, quote unquote, kind of discovering that? But
you’re kind of discovering that because of the blatantly obvious limitations they have; they didn’t demonstrate it on their own. No, they do not prove it, but be very careful there. I’m very confident in attributing intelligence to my dog, and that’s one of the reasons why I’m capable of entering into a very sophisticated parapersonal relationship with her, because we treat dogs as children, just to give you an example of where we do it with another species. But they remain dependent. Pardon me? They remain dependent, though. Well, only recently, right, but I’ll put that aside for a sec. The point I want to make is, I don’t expect Saty to ever give me a theory of intelligence, right? You have to get to a certain... Yes, I agree with you. What I’m saying is, that’s not a big thing; it makes exactly the point that only when intelligence gets to a certain degree of reflective capacity can it generate theoretical explanations about itself, and I’ve already argued the machines are not there now; they can’t, they’re not, and there are things they would have to do, and I’ve tried to lay out what those would be, before they could do that. But is it non-hyperbolically possible that they could get there? Yes. I don’t think there’s a teleology; I could be wrong. I keep saying both of you could be right; we could hit a wall that we don’t foresee and go, we can’t do this. Right, yes, if it does happen. I have to say here that I’m almost in between you guys, because I’m not necessarily, how do you say, fundamentally opposed to the thought, because I keep thinking of ways in which, conceptually, it would be possible that it exhibits this sort of autopoiesis, and whether that is then possible to instantiate. And I would say a lot of the points you make are convincing, to the extent that I can imagine a sort of recognition being implemented, etc. The point where I see the limitation is this: if it’s a technical system, and the technical system is built according to certain technical rules, and you’re familiar with all of the arguments, with incompleteness, the halting problem, and so on, then I’d see that the capability of self-transcendence would still be dependent on... instead of being parasitic, let’s use a maybe more favourable word, symbiotic: it would still require a symbiosis with us, which you argue for by referring to them as our children. But I would argue that I don’t see how it can go beyond that. I follow you to the point of the children, but I feel that, just as we as humans, they say we’re all children of God, we remain children; we don’t at some point as humans cease to be children of God, we remain children. We cannot at some point liberate ourselves from our dependence in the sense of our biological structure, of our need for our environment, for our oxygen, whatever. And I feel that for these children, like you said, they will be embodied and will have need of all this, of environment and oxygen, maybe not oxygen, but you know what I mean. But in addition to that they will also be in need of us, in the sense that we will be part of that which they are dependent on. I’m not saying that out of human chauvinism or human
spirit; it’s just simply thinking about them still in the form of technical machines, and these children, the autopoiesis they exhibit, the only way I can think of it working is that it’s fundamentally coupled to ours. So, I know this is a sci-fi example, but even in these extreme machine runaway domination examples, like in The Matrix, they were still reliant on getting all of us to live our lives in the Matrix, and on watching us. So let me ask you then, very carefully, and I’m not trying to be facetious: beyond our biological, cultural, rational, moral capacities and their interdependence, which is what 4E CogSci argues for, I’m supposing that you’re allowing me that the machines could have all of those dimensions. I’m saying it’s not impossible that you could implement it at some point; the point is then, just for that implementation to self-transcend itself continuously, it would have to, let’s say, borrow that self-transcendence from us. Well, let’s be careful what I’m claiming. First of all, I’ve already said it can’t do exponential growth forever; that’s why I gave the speed of light example. That’s not the case; it’s going to be bound to its substrate, I’ve made that argument, it’s not ever going to escape its finitude, it’s going to face the inevitable trade-offs. So I’m not talking about that kind of self-transcendence; that’s not possible. My concern is that if you acknowledge, well, we could give it moral, cognitive, cultural, biological, and genuinely 6E, let’s say the 4Es plus exaptation and emotionality, because I think those are also central, I’m worried that there’s still a sneaky secret sauce coming in here, that there’s something about our biology that can’t be... What is it that we have that they can’t come to have, such that they don’t need us in order to have it? It’s not necessarily something that we have; it’s something that we lack the capability of. Then why are they dependent on us? I don’t understand; your argument seems to mean that they’re dependent on us in a certain way. Well, I mean, “dependent on us” can mean a lot of things. Like I said, the Matrix is a possibility: they seem very much in control, but they’re still dependent on us. It’s Hegel’s master and slave, right? And presumably they could read Hegel and understand it and go, no, I don’t want to do that, that’s a bad idea, I’m not actually free. Yeah, we did it, right? Right, yeah. So the point is, at that point “us” would mean something else as well, and the human would also mean something else as well. But, you know, we cannot suddenly disappear and then that thing will just go on; we will have to merge with this, you know what I mean? That is my thought, because I’m not saying that we have this magic sauce specifically; my point is, in building a technical system, we are bound to build something whose rules are set in the beginning and which we know is going to be limited or incomplete, because that’s just how we build a technical system, that’s just... Yeah, sorry, that’s just what I said. Sorry, yeah, maybe my connection is a little bit choppy, I don’t know, it breaks off sometimes. Okay, it just cut you out for a second, very briefly. I said this before, you know, we could
perhaps integrate some failure, whatever, or frailty or lack, but that wouldn’t be the same, as far as I understand what you’re saying, John. So these machines have language because they’re programmed to spit out propositions that seem to make sense, whereas, when we follow Herder for example, or even to a certain degree Darwin, because we are beings of lack, because we’re not perfect, we developed language and improved on language over evolution. So that’s just completely different. And I think the reason why Sean said that there’s a dependence, at least that’s what I thought, perhaps the reason is because you said, John, that we need to be the best version of ourselves, as virtuous as possible, so as to be able to nurture them; that maybe is the reason why we could think there is a dependence, right? But we raise our children so that they can raise children; we don’t raise our children so that they’re always our children. Yeah, I mean, you can call it a co-dependence as well at some point. Right, right. I’m sorry, we all have to go soon; maybe we should talk again about this, because I have to go in literally ten minutes. So I want to come back to this, I want to really push on both of you, because I’m still worried that there’s a secret sauce argument being made here, and I want to know what it is. No, there’s not; let me just say it again: just follow Herder, we are beings of lack. That’s not a secret sauce. So why can’t they be? Why can’t they be? Because they’re technical, I’m sorry, they’re programmed and built. They’re not programmed; it’s not “here you are, here’s language.” So what do they lack? First of all, they weren’t programmed with language; that’s not what happened. They were given a learning ability, they were given reinforcement learning, so they learned language, like the way you learn language. Now, they don’t have full meaning, I acknowledge that, but I think relying on the idea that they are machines like a tractor that we built, that’s not, I think, a fair analogy. They’re not like that. We didn’t program language into them; their relevance realization is parasitic, and I’ve already acknowledged that, but they learned it. They learned it, we didn’t program it in. There’s a real distinction here; this is one of the deep differences between neural networks and standard formal-systems computation: we don’t program a neural network. Yeah, so the thing is, when some people speak of the ghost in the machine, then that to me is a secret sauce argument also. I literally wanted to say the same thing: it sounds like, oh, then there’s something spooking around in that machine, there must be a ghost that’s actually active. No, no, by inversion I’m saying the same thing. I’m saying if there is no secret sauce, it is doing what we do. We are not programmed with language; it learned a language, very limited, right, and that’s a very different thing. That’s what I keep saying: these are much more like agents than they are like tools, and if we keep adverting back to “we just made them”... but we’re bound too, right? We are very limited and bound
and constrained; we have rules that we have to operate by, biological rules, right? And again, we are lack, and they will be constrained by their substrate; they have to be, that’s inevitable. I think I can explain a bit what I mean. Let’s take an example of a system, for example the stock market. In the stock market most of the agents, most of the actions made, are algorithmic at this point; it’s mostly machines, algorithms, and all that. However, we do not know the rules of the market; we’re not capable of that, it’s not an exhaustive thing where we can derive all the things that will happen, because exactly it is not separable from humans. As soon as humans are taken away from it, there is no more market; the humans are involved, and the kind of dynamic autopoiesis of humans leads to the market being like that, whereas if we build a machine that’s separate, we build a machine, we set the rules. It’s not that I’m saying we have a special magic sauce; I’m just thinking about what it means to build a machine, and what it means to build a machine is that we as humans build in those rules. Now, I’m open to the possibility of what you’re saying; I’m just trying to make clear why I’m not making a magic sauce argument. I’m open to the idea that there could be technical systems of which we also cannot know the rules. However, that technical system cannot be built in the classical way, where we know the rules and then implement them; it has to be something like the stock market that co-emerges, right? We build the machine that does that, and then it’s an interaction with us, back and forth. Right, right. And so I see the same thing with these technical systems: if we want to have these systems, and they’re going to be autopoietic, they will have to be a merger of us with them; they will be dependent on us and we will be dependent on them, and this will kind of co-spiral with us. Do you understand what I mean? I do, and it doesn’t imply a magic sauce. I would say the converse is the truth: if you assume that at some point this will be able to re-separate into a separate machine and just go on without us, that’s implying a magic sauce. Like we did with Homo heidelbergensis: we emerged out of them, and they disappeared. You keep using machine examples and I keep asking you to think of biological examples, right? It is clearly the case that they didn’t build us; their biology and the ecology did. That’s what an autopoietic thing is. Yes, so both them and us, the Homo heidelbergensis or whatever and us, both arose out of the autopoietic system that was the common core of both of them. But we evolved from them too, right? Okay, but I would say the machines would have to evolve out of the same autopoietic system that we are subject to as well. It has to be enough like it that we could be properly parental to them, but it doesn’t have to be the same. That would be like saying that if human beings went to another planet they would cease to be persons, because they would have to transform biologically and cognitively, and then they would become, I mean, this is a science fiction story, but it’s plausible, they would become... well, are they now no longer persons because they’re not terrestrial human beings? That I find a problematic argument. But I really have to go, I literally have three minutes, and
I have to go, so let’s continue this, I propose. Let’s continue this. Do you have any difficulty, either one of you, with me uploading it as it is? Because I promise we will continue the conversation. I’m happy with this; I very much enjoyed it. I always wanted to have this conversation. Yeah, just maybe cut that part. Yeah, I’ll get Eric to cut stuff out. I think this is very helpful, I mean, very, very good. Can we be parasitical... and you cut that, make sure you cut that part too. Yeah, no, maybe it’s not parasitical, but you can depend on... I’ll send you a link to that course by Sean; I can put it in with the stuff. Yes, very much, thanks, excellent. Okay, thank you, Sean, thank both of you, this has been fantastic. To more of these, yeah. Bye, Sean. Bye, Sean. We all have the same name, by the way, just in different vernaculars. Bye. Great. Take good care, my friends.