https://youtubetranscript.com/?v=udlkps-81JM
Welcome back to Awakening from the Meaning Crisis. So last time I tried to make some tentative suggestions as to what this religion that's not a religion would look like, and how it can make use of and be integrated with an ecology of psychotechnologies for addressing the perennial problems, and with a cognitive scientific worldview that can legitimate and situate that ecology of practices. And then I made some suggestions as to the relationship between credo and religio in our determination of our mythos, and the issue of criterion setting. I made again another argument for an open-ended and, in that sense, Gnostic mythos — a mythos that always puts the credo in service of the religio, that is always directed top-down towards the propositional being ultimately grounded in the participatory, and that also affords the emergence up out of the participatory, through the perspectival and the procedural, into the propositional. I suggested some ways in which we might set up a way of engineering credo, something analogous to a wiki, and create a distributed co-op structure facilitated by things like the internet. And so, again, I remind you, I was not trying to offer anything definitive or set myself up in any kind of way. That is not what I want to do. I want to try and help facilitate the people who are already doing this so that they have ways of talking to each other, coordinating with each other, and facilitating each other's development and growth.

I then turned towards one of the culminating things we need to do, taking up one of the deepest relationships that meaning has, which is the relationship between meaning and wisdom. We need wisdom, of course, as I've argued, because it's the meta-virtue for the virtues, and we need it in order to give the individual pole of the relationship with the collective creation and cultivation of the meta-psychotechnology for creating the ecology of psychotechnologies. We also, of course, need wisdom before, during, and after the quest for enlightenment, the quest for a systematic and reliable response to the perennial problems. I then proposed to take a look at the cognitive science of wisdom. And we did that by taking note of an important article that comes out after the first decade and a half of the resurgence of scientific interest in wisdom, and that's the article by McKee and Barber. They're doing something consonant with what we've been trying to do in this series: they're trying, in a sense, to salvage what we can from the philosophical theories, the legacy of the actual ages of wisdom, and the psychological theories that were emerging at that time. They set them into dialogue with each other, in a process of reflective equilibrium, trying to get a convergence between them. And they argue that all of these theories, the philosophical and the psychological, converge on a central feature of wisdom. And then, following work that I did with Leo Ferraro in 2013, we can expand beyond the explicit claim to what is also set alongside their phrase and directly implied by it. And so a central feature of wisdom is the systematic seeing through illusion and into reality, at least comparatively so. This, of course, is insight, but it is a fundamental insight, a systematic insight — an insight not just into a particular problem but into a family of problems.
And then McKee and Barber make use of a point that I made use of when I was talking about systematic insight in higher states of consciousness: they make use of the work of Piaget. If you remember, Piaget found systematic errors in the way children are seeing the world — things like failing at conservation tasks when counting numbers or pouring liquids. So you have these systematic errors, which reflect a systematic way in which the children have over-constrained their cognition. They have to constrain their cognition; it's adaptive, but they have to go through that process of assimilation and accommodation, constantly optimizing and complexifying their system of constraints. But what we see with the children is that eventually they get a systematic insight — and we've all done it. We go through qualitative change, qualitative development. There's an actual change in our competence, because it's not an insight into this problem, this instance where I'm failing to conserve, or this instance where I'm egocentric; it's an insight into failures of conservation as a kind of error, into egocentrism as a kind of error. It's an insight not just at the level of framing but at the level of transframing, because it not only reframes the problem, it transforms my competence — so it is a transframing insight, a systematic insight. What it gives you is sensibility transcendence. That's literally what's happening to the children: their sensibility is going through a form of transcendence. That's exactly what development is.

And they use that as a way of explaining what they mean. Of course, without realizing it, they're making use of one of the paradigmatic metaphors for talking about wisdom: as the child is to the adult, the adult is to the sage. Just as the adult has gone through development, through systematic transframing, so that compared to the child they much more systematically see through illusion and into what's real, so the sage, in comparison to the adult, systematically sees through illusion, in a transframing fashion, and into reality. So this is a core constitutive feature of what it is to be wise. And you can see — this is not something that McKee and Barber say, but I would argue it — how this is automatically connected to the project of enlightenment in some very important fashion.

All right. What are a couple of other important things that McKee and Barber talk about? They point out — and this is the beginning of the important distinction between wisdom and knowledge that we've also been making use of throughout the course — that wisdom is not about what you know; wisdom has to do with how you know it. And there are two senses of "how" that I want to explicate, which they leave rather implicit. There's how you know it in the sense of how you have come to know it: what's the processing involved, as opposed to the product? Wisdom has a lot more to do with the process than with the product. Knowledge is often the product: I know this, and I know this and this and this. But wisdom is: how am I knowing?
So definitely that, and that's going to be pivotal, because it immediately links wisdom to rationality. One of the key features of rationality — I've mentioned this before, and we're going to come back to it with the work of Stanovich — is that a rational person is not only fixated on the products of their cognition; they pay attention to and find value in the processing of their cognition. That's what it is to be rational. So that's one aspect of what they mean by the how. And then there's another aspect of how you know, and this goes to a point made by John Kekes, an excellent philosopher who does work on wisdom. Kekes makes a distinction between descriptive knowledge and interpretive knowledge. I often prefer to use the word knowing rather than knowledge, but that's what we're talking about. Descriptive knowledge is grasping the facts. Whereas interpretive knowledge — and this points towards an aspect of wisdom that we're going to have to come back to — has to do with understanding; it is to grasp the significance of what you know. And of course relevance realization is being invoked here: grasping the significance connects to relevance realization. Understanding is grasping the significance. So part of what we're talking about with wisdom, when we talk about the how rather than the what, is the process rather than the product — not the description of the facts, but your understanding, your grasping of the significance of the facts that you have. So wisdom has to do with these things. It has deep connections to understanding, which has to do with relevance realization. It has to do with the process rather than the product. And that is all tied into this systematic, transframing realization of what's real.

They then point to one other important feature of wisdom. They point out that there's a perspectival, participatory aspect to wisdom. They talk about what's called a pragmatic self-contradiction. A pragmatic self-contradiction is not a contradiction in what you state; it's a contradiction in how you state it — in the perspective from which you make the statement and the degree of identity you have in making the statement. Let me give you a non-controversial example: "I am asleep." There is nothing logically wrong with that. If I'm pointing to the fact of John being asleep, there's no conceptual contradiction in John being asleep. But it is a pragmatic self-contradiction, because uttering it means I'm uttering it from the perspective of somebody who is awake — I have to be awake in order to say it. And of course there's a sense in which I'm not just pointing out a fact, I'm actually pointing to myself with it, and that's the degree to which I'm participating in the fact that's being disclosed. Now, that's very different, by the way, from lucidity in dreaming, where people can realize in a dream, "Oh, I am dreaming." You can realize you're dreaming and remain in the dream; there is nothing pragmatically self-contradictory about that. Now they point out — and I think you can just hear Socrates in this — that "I am wise" carries with it a very strong intuition of a pragmatic self-contradiction. To state that you are wise seems to be an indication that you are in a perspective, and have an identity, that is precisely not that of being wise.
And of course this is part of the Socratic "I know what I don't know" idea. This is part of how I've argued for awe — awe as this two-faced thing between horror and wonder — and what it does is it brings out, and I'm using this in the original meaning of the word, not what we mean by it now, humiliation: the inculcation of humility. And what that tells us right away is that wisdom has these perspectival and participatory aspects to it, such that it's not just a matter of having true beliefs. It's a matter of what perspective you can take, what perspectives and identities you are assuming and assigning. So the participatory and the perspectival are also very central to wisdom. And that of course makes sense, given that wisdom has much more to do with the how than the what. And of course this is also perspectival and participatory because I'm seeing through a mis-framing and going through transframing — I'm actually going through developmental change. My world is opening up, and I am, in a coordinated and resonant manner, opening up to it and opening up through it, which is of course what wonder and awe are all about.

Okay, so that gives us some very important things to take note of. I've already indicated a connection to Stanovich with the idea of paying attention to process rather than product. And we can strengthen that connection by noting that at the core of wisdom is the capacity for overcoming self-deception. Now, Stanovich himself has published about at least overcoming foolishness, and therefore at least by implication about what it is to become wise. But he normally talks about this ability to systematically overcome self-deception with another term, and this is the term rationality. And throughout I've been proposing to you that part of what we need to do to rehabilitate wisdom is also, in a coordinated fashion, to rehabilitate what it means to be rational. Rationality cannot be equated with facility in syllogistic reasoning; rationality cannot be reduced to logic. So let's broaden the notion right away and make it connect to what we're talking about: what we mean by rationality is the capacity to overcome self-deception in a reliable manner. So what I'm going to mean by rationality is reliably and systematically — I'll say what I mean by those in a second — overcoming self-deception. And in a lot of the work on rationality, especially by people like Stanovich, it is also about affording flourishing, which is afforded by some process of optimizing your cognitive processing. What I mean by "reliably" is that it can't operate according to a standard of perfection, completion, or certainty; it does mean, though, a high probability of functioning successfully. "Systematically" means it's not operational in just one domain.

So let's compare rationality with expertise. I can become an expert in, let's say, tennis — I'm not. (I've written "tennis" with one n; my dysgraphia is bad today. Whatever — maybe it's two n's.) I can become an expert in this. We have to be careful, because we equivocate on this term: there is one sense in which expertise is something we can study, and another in which it's just a synonym for being good at something. I'm not using it in that second sense. I'm using it in the sense in which it makes sense to say somebody is an expert in tennis: they have acquired a high proficiency in a set of skills such that they have an authority about tennis playing.
That's what we mean — you can become a legal expert, et cetera. (There are two n's in tennis; my brain is settling down.) So a person can become an expert in tennis or, in the law, for example, become a legal expert. What happens in expertise is precisely this: you find a bounded domain that has a reliable set of very complex, very difficult, but nevertheless well-defined — or at least eventually well-definable for you — patterns and problems. You know it's expertise precisely because it doesn't transfer. My expertise in tennis won't transfer even to things that are close; in fact, it will interfere when I try to play squash. My expertise in golf will interfere when I try to play hockey. So not only does it not transfer, it will often interfere with transfer even to things that are relevantly similar to your area of expertise.

Now, this is again a way in which we have to pay more attention to the ways in which we can bullshit ourselves, because we don't pay careful attention to how we're using similarity, and so we often confuse the scope of people's expertise. What do I mean by that? Here's somebody who's an expert in a particular domain, maybe in physics. They have expertise there. And of course physics is about knowledge and about getting at what's real, and so that seems to be similar to philosophy. And so presumably somebody in physics can just transfer their expertise to philosophy and make pronouncements about philosophy and metaphysics — perhaps pronouncing that philosophy is dead or useless or some such thing, which of course is itself a philosophical statement and pragmatically self-contradictory. And if we don't pay attention to this fact about expertise, we may fail to see that the similarity between physics and philosophy may actually be good reason for believing that these people are the worst people to listen to for philosophical advice, because their expertise in physics may in fact be interfering with expertise in philosophy — at least academic philosophy — just the way that expertise in tennis actually interferes when you try to play squash.

Okay. So expertise is not systematic; it is limited in its domain. Rationality is supposed to be apt within each domain and apply across many domains. Somebody is rational if they can note self-deception in their daily life, in their professional work, when they're engaged in friendship, when they're engaged in romantic relationships. And this is an important thing to remember: rationality is in this sense a domain-general notion, whereas expertise tends to be domain-specific. Now, of course, this is a continuum. The more systematic somebody is, the more rational we can claim them to be. Somebody might be very rational in a couple of domains and irrational in others, so on balance they're not that rational a person. And of course I'm not claiming that everybody is rational in a domain-general way; I'm claiming that that is the achievement we are aspiring to. So rationality is to reliably and systematically overcome self-deception, also affording flourishing and optimization: you optimize a set of procedures for achieving the goals you want. But — and Stanovich doesn't talk enough about this —
other people do talk about it, like Agnes Callard when she talks about aspirational rationality: part of it is also that as you start to optimize your cognition, it will tend to shift and change the goals you are pursuing. So the goals also tend to come under revision as we pursue this reliable and systematic overcoming of self-deception and the attempt to optimize our functioning so that we can afford flourishing.

Okay. So, given that that's what I'm talking about, we can then take a look at Stanovich's work and other people's work. And the way to do this is to situate it within the cognitive science of rationality, and that is to take a look at the rationality debate. The rationality debate was driven by a whole bunch of experimental results that seem to show that human beings are irrational. I'm not going to go into this at great length — I recommend you read Stanovich's work — I'm just going to show you a couple of examples of the kind of experiments you do and then point out their features. So you give people certain problems to solve, and then you note certain things about how they solve them.

Here's one problem. Here's a pond of water, and there are lily pads growing on it. It starts with one lily pad, and every day the lily pads double: on day one there's one, on day two there are two, and so forth. And then I tell you that on day 20 the surface of the pond is completely covered. On what day was the pond half covered? And people say, oh, on the tenth day — halfway through, it's half covered. No: on day 19 the pond is half covered, because a doubling of half is what gets you to full, so the day before the pond is full it must be half covered. Now, what's interesting here is to notice how there's machinery, like your insight machinery, that's making you leap to a conclusion. It feels like an insight, but it's actually causing you to mis-leap — you're jumping to a conclusion that's actually incorrect. Please note how that adaptive machinery, which often causes you to have an insight, is actually thwarting you in an important way. People reliably fail on this kind of task.

Or you can give people this kind of task. You give a preliminary test and you find propositions that they strongly agree with or strongly disagree with. Let's say that some person strongly believes B — I'm not taking a stand here on any particular issue; maybe they strongly believe that abortion is wrong, or they strongly believe that capital punishment is wrong. Now what you do is you give them two arguments. You give them a good argument — in the sense of a logically valid argument — that leads to not-B. And you give them a bad, very poorly constructed argument that leads to B. And you ask them: take a look at these and tell me which one is a good argument. And notice, as I said earlier, how this points to what Stanovich argues, that part of rationality is your ability to remove your fixation on the product of your cognition.
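Before looking at how people actually do on that task, here is a minimal sketch of the lily-pad arithmetic from a moment ago, in Python; the specific numbers are simply the ones assumed in the example (full coverage on day 20, coverage doubling every day):

```python
# A minimal sketch of the lily-pad puzzle described above, assuming (as in
# the example) that coverage doubles every day and the pond is fully
# covered on day 20.
FULL_DAY = 20
coverage = 1.0  # fraction of the pond covered on the final day

# Work backwards one day at a time, halving the coverage, until we reach
# the day on which the pond was only half covered.
day = FULL_DAY
while coverage > 0.5:
    coverage /= 2  # one day earlier, the pads covered half as much
    day -= 1

print(day)  # prints 19 -- not the intuitive answer of day 10
```

Working backwards makes the point vivid: a single halving from a full pond already lands you on day 19.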
Being fixated on the product is like being locked in the nine-dot problem; rationality requires being able to direct your attention to, and care about, the processing for its own sake. This is critical detachment. And what you find reliably is that many people will pick the bad argument — the one that leads to the conclusion they already believe — and say, oh, well, this is the good argument. They fail at critical detachment. Now, here's the thing, and I'll give you a couple more of these: notice that when I showed you the right answer in the pond example, you went, oh, yes, of course. So you acknowledge the principle you should be using, but you don't actually reliably apply it. You know what the right reasoning principle is, but you don't reliably apply it. You know that you should be able to evaluate an argument independently of what it leads to, because if you can't do that, then there is no rationality possible: if you can't independently evaluate the argument, then you can't use the argument to evaluate the conclusion, and therefore I could never persuade you by argument. So you know that you should evaluate the argument independently from the conclusion, but we reliably fail to do that. You see what the pattern is? We know what the principle is, we acquiesce in it when it is stated to us, but in experiment after experiment we reliably fail to follow it.

Let me give you one more example — there are so many of these. Look up the conjunction fallacy. Look up confirmation bias. Look up the Wason selection task. Some of you can read some of my work elsewhere. I'll give you one more example just because it's, again, so interesting. Here's a principle we all acquiesce in, I believe, because whenever you ask people, they say, yes, yes, of course, that's the rule we should be using. Here's the rule: I've got some evidence, and the evidence is the basis for my belief; if the evidence is undermined, I should change my belief. Of course. Now, we can have disputes about what counts as evidence and so on, but the principle stands: if the evidence for my belief changes, I should change my belief. Now, the problem with testing that experimentally is that your beliefs are based upon all kinds of background evidence and information you've got, so testing it in an experimental situation is sometimes difficult. But this is what they did in an experiment. You try to create a belief just within the experimental situation — you're trying to create a new belief in the person right there in the experiment, so that the experiment is actually the place in which you're providing the evidence. So what did they do? They brought a bunch of people in and told them about an important skill that they wanted to see if they possessed, which is the ability to detect authentic suicide notes. Many of us have no experience with this, and that's why it's plausible that this is going to be a situation in which a new belief is going to emerge. So the idea is: I'm going to give you a bunch of notes, and you have to be able to tell me which ones are authentic and which ones are fraudulent. And this, of course, is presented as a very valuable skill, because it can help first interveners, it can help prevent real suicide, it can help us determine which people are just faking it, et cetera.
And so what you do is you give people a bunch of notes, and they make their judgments — I think this is real; no, I think this is fraudulent — and then you give them feedback: yes, that's right, or no, that's incorrect. And then later you reveal to people that the following has happened. People were randomly assigned to group A or group B. If they were in group A, they were told they were very good at this task. If they were in group B, they were told they were very bad at this task. Of course there's also going to be a group C, the control group, which is just going to be neutral, and you use them as a control — I'm not going to go into that, because that's just good experimental design. And so these people come to believe, on the basis of the evidence in the experiment, that they're good; these people come to believe they're bad. Once they self-evaluate and say, yeah, I'm good — look, I keep doing well on this — or, no, I'm bad at this, I keep doing badly on this, then you say, aha, and you debrief them. You show them two things: all of the notes were fakes, none of them were real; and the feedback was given only on the arbitrary, completely random factor of whether you had been assigned to group A or group B. What that means is that the belief that you are good at this or bad at this should be completely undermined, because the evidence for it — that some of these are real suicide notes and that I'm getting feedback based on my performance — has been completely undermined. Then you give people a bunch of distractor tasks, so they're doing other things, and then you come back and ask them: how do you think you would do on this in real life? These people reliably report, "I'll be bad at it," and these people, "I'll be good at it." Or you ask them how they would do on a very analogous task — how well they'd be able to distinguish between fraudulent and legitimate marriage proposals, something like that — and these people say, "Oh, I'll be really good at it," and these people say, "I'll be really bad at it." This is known as belief perseverance: people maintain the belief even though the only evidence for it has been completely and directly undermined right in front of them.

So once again, what do we see here? People acquiesce in a principle. They say — notice my language — I acknowledge and accept that I should use the principle that if the evidence is undermined, I should revise the belief. And yet they reliably do not do that. So again and again you get all these experiments, and there are a lot of them — I've just given you three examples, and there are something like fifteen kinds of experiments you can run, and tens, sometimes hundreds, of versions of each. People acknowledge the principle and then reliably fail to act on it. So they suffer — notice my language here — from systematic illusion, systematic self-deception. All right.
So a bunch of psychologists, cognitive scientists, and philosophers were coming to the conclusion that human beings are just irrational — that this idea we've carried throughout our history, from Aristotle on, that human beings are the rational animal, is ultimately flawed; human beings are not rational. Now, that's very problematic. Think about what that means. If you were convinced that that was deeply correct, that human beings are not rational, then you'd have a very tough time justifying democracy: if human beings are reliably irrational, democracy is a very bad idea; you should find the few people who are reliably rational and let them rule, for example. I'm not saying this, I'm not advocating this — I'm trying to show you the consequences. Our legal system is also based on the idea that people are fundamentally reasonable, reliably rational. But if that's not the case, can we hold people responsible for their actions? The way they're connecting evidence to belief to action is seriously problematic. Morality depends — and this is something that Kant famously argued for — morality depends on rationality. People can only be held morally responsible if they can also be deemed rational. If you keep doing the right thing because of luck or because of coercion, we don't think you're moral; but if you do the right thing because you have reasoned it out and come to the conclusion that it is the right thing to do, then of course we do deem you moral.

So, as you can imagine, a debate arose, and this is a very good thing for science. Notice what's going on here with rationality. Rationality isn't just a fact out in the world, like whether or not the earth is round. Because it is so deeply tied to perspectival and participatory knowing, rationality goes deeply to who and what I am. And that has implications for what kind of political citizenship I can have, what kind of moral status I can have, what kind of legal status I can have. Even your judgments of, for example, whether I'm mature or immature are going to be vectored through how you assess how rational I am. Rationality is a deeply existential thing.

So a debate ensued around how we should interpret these experiments — and note, they are robust and reliable; they are not suffering the replication crisis. These experiments are robust and reliable, but there is a division in the scientific community, and there always should be debate in science about how you interpret your experiments. Should we interpret these experiments to mean that human beings are fundamentally irrational? That debate is very important, and I want to go through it. Why are we doing this? Well, first of all, I'm trying to show you what is at stake in the claim that human beings are fundamentally irrational, and to show you the existential and political and moral import of rationality. And I'm also trying to get you to consider expanding and revising the notion of rationality in a way that will help us come back and deepen our understanding of wisdom.
Why are we trying to understand wisdom? Because wisdom is deeply associated with meaning, and wisdom is deeply needed for cultivating enlightenment, for addressing the problems that have driven the meaning crisis.

Okay. So, the rationality debate. The first major response is by Cohen. Cohen makes a very important argument, an argument that we have to go through carefully. And see, this is what I mean: there has been so much deep work put into the notion of rationality that we should not take the self-proclaimed promoters of rationality on YouTube to be clear examples of what rationality is. We have to do this more carefully, cautiously, reflectively, paying much more attention to the scientific evidence, the empirical evidence, and the debate. So Cohen argued that there's a problem with concluding that human beings are fundamentally irrational, and his argument comes down to a couple of very key points.

Cohen says: to be rational is to acknowledge and to follow a set of standards. And we noted that. We can only attribute irrationality to someone or something if it acknowledges the standards and then fails to meet them. To say that this book is irrational makes no sense, because the book does not acknowledge the authority of those standards, so the fact that it fails to meet them is no reason for calling it irrational; the book is a-rational. So Cohen stops right there and says, well, let's slow down. Let's ask ourselves, where do we get these standards? The way he asks this is: how do we come up with our normative theory? Normative here does not mean statistically normal; it means the theory about the standards to which we should hold ourselves accountable when we're reasoning. So where does our normative theory come from?

Then he makes use of an argument that goes back to Plato and runs all the way through to Kant: there's a deep sense in which reason has to be autonomous. Suppose I believed that my standards were given to me by some divine being, in the sense that they are commanded of me — there is some Moses of rationality who comes back with the commandments for how we're supposed to reason. If we follow these just because we are commanded to do so, that is ultimately not a rational act; that is just to give in to authority, to give in to fear, and we would be doing the same thing regardless of what those standards were. Whereas if we follow the standards because we acknowledge that they're good and right, that means we already possess the standards. This is an old argument that goes back to Plato — it's in the Euthyphro dialogue — that normativity has to be deeply autonomous. If something is only good because the gods say it, then the gods aren't good in saying it. Look: if God says to you, do X, and X isn't independently good to do, then God's saying "do X" does not make God good, because it would only make God good to command X if doing X were independently good. And if we only do something because we're commanded to do it, not because we independently accept that it is the good or the right thing to do, then we are also acting arbitrarily and not acting in a good manner. So we have to possess the standards. This is an argument that is crucial in Kant: reason is ultimately autonomous.
Not in the sense that people misunderstand it — that reason is like a god, or that it has absolute authority — but in the sense that reason has to be the source of the very norms that constitute and govern reason, because that's how reason operates. Okay, so we have to be the source of the standards. There's another way of seeing this: ought implies can. I'm giving you two separate arguments for this idea. Ought implies can. If I lay a standard upon you — you ought to do this — then you have to be able to do it. It makes no sense to apply a standard to you that you do not have the competence to fulfill. Suppose I said: you ought always to say only what is certain and perfectly true, and if you don't, you are failing, you're immoral in some fashion. But that's of course impossible. You can't do it. You can't lay on anybody the obligation to speak all and only what is true, because everybody has false beliefs — most of our beliefs are false — and nobody can act comprehensively according to standards of certainty. If I lay that standard on you, it's a mistake, because you don't have the competence to fulfill it. So there is a lot of argument that converges on this point: we are the source of the standards. That's of course why we so readily acquiesce in them.

But then of course you should immediately say: what the experiments show is that, yes, people acknowledge the standards, but they fail to satisfy them. Well, here Cohen does something very interesting. He says we have to be careful — people make two kinds of mistakes — and what we have to do is make a distinction between competence and performance. Let me give you an example. This goes back to Chomsky, and we talked about it when we talked about systematic error; let's do it again just to bring it back into the argument. Competence is what you're capable of doing. Performance is what you've actually done. You have a competence that greatly exceeds what you've actually done; you have the competence to speak so many sentences that you will never speak. So: it is false that I have held my breath underwater for 17 days while listening to Beethoven's Fifth Symphony in the company of superintelligent starfish. That sentence happens to be true, by the way. The fact that I uttered it is bizarre — I probably would never otherwise have uttered it in my life — but I have the competence to generate it, and you have the competence to understand it. So competence is what you're capable of doing; performance is what you actually do.

Now, the thing is, in between your competence and your performance there are all the implementation processes. Remember this? I have the competence to speak English, but if I'm extremely tired, the implementation processes falter and the English in me comes out garbled — I start slurring my speech — or perhaps if I were very drunk or something. Now, you don't think that when I'm very drunk or very tired I've lost the competence. You just think, rightly by the way, that there's something interfering with the implementation processes. But if I get in a car accident and my brain is damaged and I'm slurring my speech all the time, then you go, oh no, John's lost English. That's a different thing. Now, Cohen does something really clever here. He says: how do we come up with our normative theory? Well, we have to be the source of it, and it has to be something that we can hold ourselves to — ought implies can. So where do we come up with these standards?
Well, this is how we come up with all of our normative theories. We look at our performance and we try to subtract from it all of the errors that are due to implementation — implementation errors, or as they're often called, performance errors: errors in how I'm implementing my competence. And so, by this process of systematic idealization, I try to come up with an account of what my competence looks like completely free of performance errors. What would I have to have in my head so that I could reliably speak and understand English all the time in a perfect manner? Now, of course, all the time I'm speaking there are performance errors because of implementation processes: I sometimes stammer, I sometimes stutter, there are gaps, I speak elliptically. Notice that I just went "I, I" — those are performance errors, and you read through them. So what we do is we take our performance, we put it through a process of idealization, we try to subtract all the performance errors that come from the implementation, and then we get a purified account of our competence — an idealized account in the sense that it's purified of distortion by performance errors. And then that is the standard to which we hold ourselves. That's how we come up with a normative theory. That shows how we can be the source of it and how we're ultimately capable of meeting it, but also how we can nevertheless, a lot of the time, fail to meet it.

So what Cohen argues — brilliantly, though we're going to see there are problems with it — is that all of the errors in these experiments have to be performance errors: all of the mistakes that people are making are like the slips of the tongue that pervade my speech. Why? Because people have to be the source of the standards, and they have to be capable of meeting those standards. So we must have, at the level of our competence, all of the rational standards. We must be, at the level of our competence, rational beings. The only reason we're making those mistakes is performance errors, which means that human beings are not fundamentally irrational after all; they are rational.

Now, what I want to show you next time is what's right about that argument and what's deeply wrong about it, how the work of Stanovich and West replies to this argument in a really brilliant way, and what that shows us, again, about the nature of human rationality. Human rationality is much more comprehensive than facility with syllogistic logic. It is the reliable and systematic overcoming of self-deception, and that has to do with us not just theoretically but existentially. And therefore this notion of rationality deeply overlaps with — and, I'm going to argue, is a component of — what it is to be a wise person: to be able to systematically see through self-deception and into reality in such a way that, in linking rationality with wisdom, we can actually afford meaning in life. Thank you very much for your time and attention. Thank you.