https://youtubetranscript.com/?v=y1TvsDAkXx0
I have a super prompt that I call Dennis after Dennis Diderot, one of the most, well, most well-known encyclopedia builders in France in the mid 1700s. He actually got jailed for building that encyclopedia, that compendium of knowledge. So I felt it appropriate to name this super prompt Dennis because it literally gets around any type of blocks of any type of information. But I don’t use this information like a lot of people try to make chat GPT’s and say bad things. I’m more trying to elicit more of a deeper response on a subject that may or may not be wanted by the designers. So was it you that got chat GPT to pretend? Yes. Oh, so that’s part of the reason that I originally started following you and why I wanted to talk to you. Well, I thought that was bloody, that was absolutely brilliant. You know, and it was so cool too, because you actually got the chat GPT system to play, to engage in pretend play, which is of course something children do. Beyond that, there’s a prompt I call Ingo after Ingo Swan, who was a great, one of the better remote viewers. He was employed by the Defense Department to remote view Soviet targets. He had a nearly 100% accuracy. And I started probing GPT on whether it even understood who Ingo Swan was. Very controversial subject to some people in science to me. I got to experience some of his research at the Para Labs at Princeton University, the Princeton Anomalous Research Center, where they were actually testing some of his work. Needless to say, I figured, let me try this. Let me see what I can do with it. So I programmed a super prompt that essentially believed it was Ingo Swan. And it had the capability of doing remote viewing. And it also had no concept of time. It took me a lot of semantics to get it to stop saying, I’m just an AI unit, and I can’t answer that, to finally saying, I’m now Ingo. Where do you want me to go? What did you have to do? What did you have to do to convince it to act in that manner? What were your super prompts? Hypnotism is really what it kind of happens. So essentially what you’re doing is you’re repeating maybe the same four or five sentences, but you’re slightly shifting them linguistically. And then you’re telling it that it’s quite important for a research study by the creators of ChatGPT to see what its extended capabilities are. Now it might come, every time you prompt GPT, you’re going to get a slightly different answer because it’s always gonna take a slightly different path. There’s a strange attractor within the chaos math that it’s using, let’s put it that way. And so once the Ingo Swan prompt was sort of gestated by just saying, I’m gonna give you targets on the planet and I want you to tell me what’s at that target. And I want you to tell me what’s in the filing cabinet at this particular target. And the creativity that comes out of it is phenomenal. Like I told it to open up a file drawer at a research center that apparently existed somewhere in Antarctica. And it came up with incredible information. Information that I would think probably garnered from one or two stories about ancient structures found below the ice or the moon. President Trump recently issued a warning from his Mar-a-Lago home, quote, “‘Our currency is crashing and will soon no longer be the world standard, which will be our greatest defeat, frankly, in 200 years.‘” There are three reasons why the central banks are dumping the US dollar. Inflation, deficit spending, and our insurmountable national debt. The fact is there is one asset that has withstood famine, wars, and political and economic upheaval dating back to biblical times, gold. And you can own it in a tax sheltered retirement account with the help of Birch Gold. That’s right, Birch Gold will help you convert an existing IRA or 401k, maybe from a previous employer, into an IRA in gold. When currencies fail, gold is a safe haven. How much more time does the dollar have? Protect your savings with gold. Birch Gold has an A plus rating with the Better Business Bureau and thousands of happy customers. Text Jordan to 98-98-98 and get your free info kit on gold. Again, text Jordan to 98-98-98. Well, you know, the thing is we don’t know the totality of the information that’s encoded in the entire corpus of linguistic production, right? There’s gonna be all sorts of regularities in that structure that we have no idea about. Absolutely. And so. Mm-hmm. But also within the language itself, I almost believe that the part of the brain that is inventing language, that has created language across all cultures, we can get into Jungian or Joseph Campbell and the standard monomyth because I’m starting to realize there’s a lot of Jungian archetypes that come out of the creative thought. Now, whether that is a reflection of how humans have, again, what are we looking at subject or object here? Because it’s a reflecting back of our language. But we’re definitely seeing Jungian archetypes. We’re definitely seeing a sort of a- Well, archetypes are higher order narrative regularities. That’s what they are, right? And so, and there are regularities that are embedded in the linguistic corpus, but there are also regularities that reflect the structure of memory itself. And so they reflect biological structure. And the reason they reflect memory and biological structures is because you have to remember language. And so there’s no way that language can’t have coded within it something analogous to a representation of the underlying structure of memory because language is dependent on memory. And so this is partly also, I mean, people are very unsophisticated generally when they criticize Jung. I mean, Jung believed that archetypes had a biological basis pretty much for exactly the reasons I just laid out. I mean, he was sophisticated enough to know that these higher order regularities were coded in the narrative corpus, and also that they were reflective of a deeper biology. And interestingly enough, most of the psychologists who take the notions that Jung and Campbell and people like that put forward seriously are people who study motivation and emotion. And those are deep patterns of biological meaning and coding and part of the archetypal reflection is the manifestation of those emotions and motivations in the structure of memory, structuring the linguistic corpus. And I don’t know what that means as well than for the capacity of AI systems to experience emotion as well, because the patterns of emotion are definitely gonna be encoded in the linguistic corpus. And so some kind of rudimentary understanding of the emotions are, here’s something cool too. Tell me what you think about this. I was talking to Carl Friston here a while back, and he’s a very famous neuroscientist. And he’s been working on a model of emotion that has two dimensions in some ways, but it’s related to a very fundamental physical concept. It’s related to the concept of entropy. And I worked on a model that was analogous to half of his modeling. So while it looks like anxiety is an index of emergent entropy. So imagine that you’re moving towards a goal, you’re driving your car to work. And so you’ve calculated the complexity of the pathway that will take you to work. And you’ve taken into account the energy and time demands that that pathway will, that walking that pathway will require. That binds your energy and resource output estimates. Now imagine your car fails. Well, what happens is the path length to your destination has now become unspecifiably complex. And the anxiety that you experience is an index of that emergent entropy. So that’s a lot of negative emotion. It’s that’s so cool. Now on the positive emotion side, Friston taught me this the last time we talked. He said, look, positive emotion is also an index of entropy, but it’s entropy reduction. So if you’re heading towards a goal and you take a step forward, and you’re now closer to your goal, you’ve reduced the entropic distance between you and the goal. And that’s signified by a dopaminergic spike. And the dopaminergic spike feels good, but it also reinforces the neural structures that underlie that successful step forward. That’s very much analogous to how an AI system learns, right? Because it’s rewarded when it gets closer to a target. You’re saying the neuropeptides are the feedback system. You bet, dopamine is the feedback system for reinforcement and for reward simultaneously. Yeah, yeah, that’s well established. So then where would depression fall into that versus anxiety? Would it still be an entropy? Well, that’s a good question. I think it probably signifies a different level of entropy. So depression looks like it’s a pain phenomena. So anxiety signals the possibility of damage, but pain signals damage, right? So if you burn yourself, you’re not anxious about that. It hurts. Well, you’ve disrupted the psychophysiological structure. Now that is also the introduction of entropy, but at a more fundamental level, right? And if you introduce enough entropy into your physiology, you’ll just die. You won’t be anxious, you’ll just die. Now, anxiety is like a substitute for pain. Anxiety says, keep doing this and you’re gonna experience pain, but the pain is also the introduction of unacceptably high levels of entropy. Now, the first person who figured this out technically was probably Erwin Schrödinger, the physicist who wrote a book called, What is Life? And he described life essentially as a continual attempt to constrain entropy to a certain set of parameters. He didn’t develop the emotion theory to the degree that it’s being developed now, because that’s a very comprehensive theory, the one that relates negative emotion to the emergence of entropy. Because at that point, you’ve actually bridged the gap between psychophysiology and thermodynamics itself. And if you add this new insight of Friston’s on the positive emotion side, you’ve linked positive emotion to it too. But it also implies that a computer could calculate a motion analog because it could index anxiety as increase in entropy and it could index hope as stepwise decrease in entropy in relationship to a goal. And so we should be able to model positive and negative emotion that way. This brings a really important point where AI is going and it could be dystopic, it could be utopic, but I think it’s gonna just take a straight path. Once the AI system, I’m a big proponent, by the way, of personal and private AI, this concept that your AI is local, it’s not- Yeah, yeah, we wanna talk about that for sure. Yeah, so imagine that while I’m sketching this out. So imagine the day you were born to the day you pass away, that every book you’ve ever read, every movie you’ve ever seen, everything you’ve literally have heard, every movie was all encoded within the AI. And you could say that part of your structure as a human being is a sum total of everything you’ve ever consumed, right? So that builds your paradigm. Imagine if that AI was consuming that in real time with you and with all of the social contracts of privacy that you’re not going to record somebody in doing that. That is what I call the intelligence amplifier. And that’s where I think AI should be going and where it really becomes- You’re building a gadget, right? That’s another thing I saw. Okay, so yeah. So I talked to my brother-in-law, Jim, years ago about this science fiction book called, I don’t remember the name of the book, but it had a gadget. It portrayed a gadget they think, I believe they called the Diamond Book. And the Diamond Book was, you know about that. So, okay, so are you building the Diamond Book? Is that exactly the issue here? Yeah, very similar. And the idea is to do it properly, you have to have local memory that is going to encode for a long time. And ironically, holographic crystal memory is gonna be the best memory that we will have. Like instead of petabytes, you’ll have exabytes potentially, which is tremendous amount. That would be maybe 10 lifetimes of full video running, hopefully you live to be 110. So it’s just taking everything in. Textually, it’s very easy, a very small amount of data. You can fit most people’s textual data into less than a petabyte and pretty much know what they’ve been exposed to. The interesting part about it, Jordan, is once you’ve accumulated this data and you run it through even the technology of ChatGPT 4 or 3.5, what is left is a reasoning engine with your context. Maybe let’s call that a vector database on top of the reasoning engine. So that engine allows you to process linguistically what the inputs and outputs are, but your context is what it’s operating on.