Stephen Fleming: Limits of self knowledge

Stephen is a Professor of Cognitive Neuroscience whose work focuses on metacognition and the computational and neural basis of subjective experience.

Stephen Fleming is a professor of cognitive neuroscience whose work focuses on metacognition - what people think or know about their own minds - and the computational and neural basis of subjective experience. He’s the author of the book Know Thyself: The Science of Self-Awareness.

Today’s topics include the possibility of self-deception; cognitive biases and what we can do to guard against them; the benefits and drawbacks of improved metacognition; the relationship between metacognition and conscious experience; the theoretical limits of self-knowledge; and other topics.

Watch on YouTube. Listen on Spotify, Apple Podcasts, or any other podcast platform. Read the full transcript here. Follow me on LinkedIn or Twitter/X for episodes and infrequent social commentary.

Episode links

  • Twitter: @smfleming

  • Book:

  • The Metalab:


0:00:00 Intro

0:01:20 Self deception

0:08:08 Cognitive biases and overconfidence

0:14:40 Evolution of metacognitive biases

0:18:18 Dunning-Kruger effect

0:20:52 Split brain experiments and self narratives

0:25:32 Delusion of self understanding

0:29:54 Isolation of losing touch with reality

0:34:38 How good is our metacognition?

0:38:53 Metacognition vs performance

0:42:46 How trainable is metacognition?

0:46:51 Limits of self knowledge

0:50:52 Theory of self vs others

0:54:16 Benefits from improving metacognition

1:01:40 Psychosis and reality vs imagination

1:14:15 The hard problem of consciousness

1:27:05 Book recommendations

1:33:35 Who should represent humanity to an AI superintelligence?


Importance of self-knowledge

Having robust self-knowledge and metacognition, or at minimum a good understanding of the mind's biases and failure modes, is probably more important today than in the past.

As Stephen and I discuss in today’s conversation, we’re living in a world in which technology, and the incentive systems surrounding it, amplify the consequences of our cognitive biases and of poor metacognition. To name just one example, there is the infamous Dunning-Kruger effect, in which people with less expertise in a particular domain tend to overestimate their ability and present with overconfidence. Conversely, true experts tend to be more aware of their limitations and present externally with less confidence.

And in today’s world, this can lead to a range of problems that weren’t as significant in the past. Overconfidence is now often differentially rewarded and amplified, on social media and elsewhere, at a scale that we’ve never really seen before. More than ever, it’s the loud and confident voices that proliferate rather than the expert ones; or at least these voices form robust and often quite large echo chambers. And so we have the fairly perverse result in which the voices of true experts are often suppressed in favour of more confident non-experts. This is really not the most effective way to tweak the volume dials of the voices in our society.

All this means it’s increasingly important that we’re able to decouple the truth value of the things being said from the confidence with which they are said, whether those things are said by other people or by our own internal self-narratives.

Anyway this was an informative conversation and I hope you find it valuable. If so, please consider subscribing and sharing this episode with friends and family.



This transcript is AI-generated and may contain errors. It will be corrected and annotated with links and citations over time.

[00:00:00] Matt: Let's start with the topic, not of self-knowledge, but of self-deception. I think it's commonly believed that people are able to lie to themselves. This is a common notion.

It's used in our language; we say it all the time. But I've always questioned this notion from a first-principles perspective, because typically deception requires some sort of information asymmetry between deceiver and deceived. I have to know something that you don't know, and those can't be the same person in normal everyday life.

So to what extent do you feel it's possible to genuinely deceive oneself?

[00:00:36] Stephen: Yeah, I think it is possible, and I can unpack the reasons why we think that is plausible from a neuroscientific perspective, from a cognitive science perspective. One useful place to start answering that question is to think more broadly about what we think the brain is trying to do.

What problem is it trying to solve? And in that we've been very influenced by this broad notion that the brain is a model-building machine. It's trying to essentially understand its environment. It's locked inside a dark skull; all it has access to is information streaming in through the senses.

And this is kind of strange when you think about it. The retina, in the case of vision, is a 2D sheet. We're saccading around all the time. The perception of the world is likely to be some kind of inference; we don't have direct contact with the outside world. And this is increasingly being studied in lots of different ways in cognitive science and neuroscience, this notion of perception as a process of solving some inverse problem.

We're trying to build a model of what is out there. And our working hypothesis, when we're thinking about self-awareness, self-knowledge, what we call metacognition, is that essentially the same thing is happening, but now the model that's being built is about ourselves, right? So it includes our own skills and capacities and failings.

It includes what we like and don't like. It includes how we might react in certain ways to certain situations. So we have this broad sense of who we are. And just like perception can be fooled by various illusions, it might be set up to not actually create a veridical picture of reality.

In fact, lots of perception science shows that it probably isn't creating a veridical picture; instead it's creating a useful summary of what's out there. So we think metacognition and self-knowledge is doing something similar. And if you take that view, then consider this model that we're building of ourselves.

It needs to be largely accurate, otherwise it will not be useful, right? It needs to have some contact with the reality of our skills and capacities and so on. But a distorted picture can often be useful in lots of different ways. So there are interesting ideas about potential evolutionary reasons why metacognition might be distorted in certain ways.

There's nice work suggesting, for instance, that overconfidence might actually be adaptive, because it helps us engage in scenarios that we otherwise wouldn't have engaged in in the first place if we had a more accurate belief about our capacities. So those kinds of deceptions, if you like, are not deceptions in the sense of trying to negate something that is true.

But they are distortions, and I think those kinds of distortions are widespread and part and parcel of the picture of how we think metacognition works.

[00:04:12] Matt: Yeah, interesting. And I guess in classical deception, the better somebody knows you, the harder it is to deceive them. And I presume in this description of metacognition there would be something similar: the more one knows oneself, the harder it is to deceive oneself, right?

There's kind of less place for the deceptive information to hide. Is that consistent with your view of how metacognition works?

[00:04:40] Stephen: Yes, it is. I think one way of thinking about that is: where might there be flex in the system for deception to get in? In the case of perception, because our sensorimotor apparatus is connected to the outside world, there's less room for it to completely decouple from reality.

You know, if I always perceive my coffee cup as twice as big as it is, then I'll always knock it over and I won't be able to drink my coffee, and so on. So there are clear reasons why perception and action should home in on a reasonably accurate picture of the outside world. But metacognition has more flex, and people like Chris Frith and Daniel Yon have written about this recently, suggesting that when you go to these higher levels, there's less constraint from the outside world.

So there's more room for maneuver, if you like, in terms of how that model gets built. And it's probably also computationally harder to build in the first place, because you need to integrate over lots of different sources of information. It's probably multimodal, drawing on all the different aspects of the mind to build up this self-model.

And so it's probably a lot harder to make it accurate in the first place. In a sense, then, you might have two sources of inaccuracy or self-deception. One could be just a sheer computational limitation: it's hard to build a good self-model, so there are always going to be noise and inaccuracies in it.

And the other one could be coming from a more motivated or adaptive standpoint: that it actually helps us to have a somewhat distorted or deceptive self-model, as in the case I described of overconfidence.

[00:06:43] Matt: Yeah, let's dig into the question of overconfidence and the biases that might creep into our self-models. I guess in other contexts these are well understood; we're very familiar with several cognitive biases. Even at the level of perception, for example, I've heard that we have a tendency to overestimate heights when looking down from above, and we do so more than when looking up from below. And it makes sense from an adaptive perspective why something like that might evolve, right?

You want to prevent falling off cliffs, and all the negative consequences that arise from that. But in something like metacognition, it feels like it could be a mixed bag, because on the one hand, a really good self-understanding should allow someone to operate very effectively in the world.

They have a good understanding of what they can and can't do. But then, as you said, there are other forces, like confidence. You want to display confidence, and maybe even a bit of overconfidence is a good thing.

[00:07:47] Stephen: So as you say, I think we can separate these into two categories. One would be more about imprecision in how that self-model gets built. This is what in metacognition research we call metacognitive sensitivity: how well you can track your behavior or performance on a moment-to-moment basis.

Am I aware of just making an error? Do I realize when I might not know the answer to something? Those are kind of little sparks of introspection where we realize, hang on, I don't have the full picture here. And we can measure that in the lab as the connection between your fluctuations in confidence over time and your fluctuations in performance over time.

So if I have good metacognitive sensitivity, I tend to be more confident when I'm actually right and less confident when I'm wrong. And in a lot of the studies we do, and other labs do around the world, we study exactly that process: this connection between performance and confidence.
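
As an illustration of how that confidence-performance coupling can be quantified, here's a toy sketch, not the lab's actual pipeline (real measures like meta-d' are more involved), computing the type-2 AUROC, a standard nonparametric index of metacognitive sensitivity. The simulated observer and all parameter values are made up for illustration:

```python
import random

def type2_auroc(correct, confidence):
    """Type-2 AUROC: the probability that a randomly chosen correct trial
    carries higher confidence than a randomly chosen incorrect trial.
    0.5 means confidence carries no information about accuracy;
    1.0 means perfect trial-by-trial insight."""
    hits = [c for ok, c in zip(correct, confidence) if ok]
    misses = [c for ok, c in zip(correct, confidence) if not ok]
    wins = sum((h > m) + 0.5 * (h == m) for h in hits for m in misses)
    return wins / (len(hits) * len(misses))

# Simulated observer: 75% accurate, with confidence that noisily tracks
# whether each answer was actually right (i.e., decent sensitivity).
rng = random.Random(1)
correct = [rng.random() < 0.75 for _ in range(2000)]
confidence = [min(1.0, max(0.0, (0.8 if ok else 0.4) + rng.gauss(0, 0.15)))
              for ok in correct]
sensitivity = type2_auroc(correct, confidence)
```

Flattening the link between confidence and accuracy (e.g., reporting the same confidence on every trial) drives the score back toward 0.5, which is the kind of decoupling described later in the dementia example.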

What is it that determines good metacognitive sensitivity? There's a long and interesting story about the factors that influence that, but broadly speaking, the field has kind of homed in on this notion that there is a domain-general resource that humans bring to bear on metacognitive acuity.

So we have this higher-level process that we think is supported by subregions of the prefrontal cortex talking to other areas of the brain, essentially building up this picture of when we might've made an error, when we're doing well, and so on. So there's this kind of process of self-monitoring going on in the background.

So there are various sources of noise that can get into that, sources of bias and inaccuracy, at lots of different stages of the system. If you have disease or damage to particular mental faculties, as for instance in the case of dementia, where your memory starts to fail, then you might just not get the signals that metacognition needs to realize that your memory is failing.

And so you might retain a kind of fixed belief that your memory is still fine, even though it's not. You might then start to have this decoupling, if you like, where your metacognition starts to become detached from the reality of your performance. So that's one source of error or decoupling that can come into the system: when it becomes harder for you to track uncertainty in different domains.

And then another source of error that can come in is what we call metacognitive bias. This is the general sense of: do I tend to be more or less confident? Do I tend to see myself in a particular light overall? And we think that this is much more pervasive. Generally, even if you have a perfectly well-functioning metacognitive system, it seems that people tend to hold slightly overconfident views of themselves, and again, there's a really interesting story about why that might be the case.

On the specific case of overconfidence I mentioned earlier, and the evolutionary explanation for it, there was a really neat study published a few years ago by computer scientists at Harvard, where they ran these evolutionary simulations.

So lots of little agents competing for resources in a computer game. And the cool thing they did was that rather than making the decision about whether to engage in the competition conditional on actual performance, they made that decision conditional on the agent's metacognitive belief about their performance.

And what they found was that the most successful agents, in terms of fitness, in terms of gathering resources, were the ones that had metacognitively slightly distorted views of themselves. They were the ones who were slightly overconfident. And the explanation was that that hint of overconfidence gave them a little kick to engage in competitions that were uncertain.

They didn't know whether they were going to win them or not, but just having the belief that they were slightly better than they really were got them into the game in the first place, and over many generations that helped them succeed. So there's a neat story there: these distortions might not just make us feel good (though they can, because you have a good view of yourself); they might actually have a functional role to play in how we get out of bed in the morning and go and do things with uncertain payoffs. We need to see ourselves in a slightly better light in order to engage in those activities in the first place.
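
The logic of the simulation Stephen describes can be sketched in a few lines. This is a toy model loosely inspired by that idea, not the published study: agents claim a contested resource based on a biased belief about their own strength, an unopposed claim is a free gain, and a contested claim triggers a costly fight. All parameters here are made up for illustration:

```python
import random

def mean_payoff(my_bias, rival_bias=0.0, rounds=20000,
                reward=2.0, cost=1.0, noise=0.3, seed=0):
    """Average resources per round for an agent whose belief about its own
    strength is shifted by `my_bias`. Each round, both sides decide whether
    to claim a contested resource based on a noisy estimate of the rival;
    an uncontested claim wins the resource for free, while a contested
    claim leads to a fight that the truly stronger agent wins."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(rounds):
        me, rival = rng.random(), rng.random()               # true strengths
        i_claim = me + my_bias > rival + rng.gauss(0, noise)  # biased belief
        they_claim = rival + rival_bias > me + rng.gauss(0, noise)
        if i_claim and not they_claim:
            total += reward                                   # free gain
        elif i_claim and they_claim:
            total += reward if me > rival else -cost          # costly fight
    return total / rounds

payoffs = {b: mean_payoff(b) for b in (-0.2, 0.0, 0.1, 0.5)}
```

In this toy setup, with the reward larger than the fighting cost, a modestly positive bias out-earns underconfidence (which forfeits resources nobody else claimed) and tends to beat perfect accuracy too, while an extreme bias loses ground to all the fights it cannot win, which mirrors the bounded overconfidence that the evolutionary argument predicts.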

[00:13:16] Matt: Yeah, when topics like that arise, there's always this question of: to what extent are those biases, those adaptations that once served us well, still serving us well today? Versus, has the environment changed, and are these now things we need to somehow transcend and let go of? In the case of confidence, I'm not sure what the answer is.

Maybe it's serving us just as well. And in particular, are there any other specific issues with our metacognitive biases that were perhaps very functional at some point in our evolutionary history, and now, you know, it's time to move on and do something about them?

What are your views here?

[00:13:59] Stephen: Well, I think another perspective on the influence of these biases in human society is that these kinds of distortions in how we see ourselves may have roots in the process I just described, in terms of promoting action, promoting behaviors that we might otherwise not engage in.

But what's interesting is that once they are in place, they then seem to also have social consequences. There's nice research in social psychology suggesting that people who project more confidence, who project more security in their own capacities, are better liked. They tend to get on in life more.

They get better jobs. They get promoted more often. There were nice studies done in a business-school setting suggesting that the ratings you gave to your classmates in your business-school class were driven largely by the way people projected themselves, and not by their actual underlying grades in terms of their skills and capacities.

So there seems to be something deeply social about the notion of projecting a rose-tinted metacognitive bias. You want to not only think of yourself in that way, you want to project that out to others as well. And I think there's a really interesting intersection here with what I talked about in the first part of the answer, which was metacognitive sensitivity.

For adaptive behavior, you want the ability to recognize when you might've gone off track, when you might've made an error. And those things are in tension, right? If you want to always realize when you might have been wrong, then you also can't just have this kind of flat-lining high level of confidence throughout your day; you need to somehow modulate that.

And so I think there are interesting stories that come out of the psychology of leadership. The people who are most effective at motivating others tend to be those who can somehow do both. They project confidence to others, so they get liked; they project this sense of reassurance. But at the same time, for themselves, they're also intensely introspective about whether things might be going right or wrong.

So there seems to be this interesting duality to the kind of optimal set point for metacognition.

[00:16:55] Matt: Yeah. As you're speaking, I'm envisioning in my mind this sort of two-by-two, the classic confidence-versus-competence matrix. And obviously you would hope that those things track very well, but then you get classic cases like the Dunning-Kruger effect, where there's a bit of an inverse correlation: people with very low competence tend to overrate their capabilities, and people with very high competence tend to somewhat underrate theirs.

And so...


[00:17:27] Stephen: So there's an interesting story there on the Dunning-Kruger, which is the recent research. The phenomenon is absolutely as you describe it: this kind of disconnection between average confidence and average performance. But there's been recent, intense debate in social psychology and cognitive psychology about what the source of this is.

So one idea is that the source of the Dunning-Kruger effect is a metacognitive one: that people who don't perform so well also lack the metacognitive ability to realize that they're not performing so well. And another explanation that has been advanced is that it's essentially a statistical artifact known as regression to the mean.

So when you plot two variables against each other, you tend to flatten out the slope of the line, because everything regresses to the mean of the variable that you're conditioning on. And so there's been this almost-fight among social psychologists about what the meaning of the Dunning-Kruger effect is.

Everyone agrees it exists, but there's been this interesting fight about what the source of it is. Is it an interesting source or an artifactual one? My view on this: we recently wrote a commentary, because there was a high-profile paper that came out trying to resolve it. And what they found is that essentially a large percentage of the effect was artifactual, but there was a tiny portion of the variance that seemed to line up with the original idea of the Dunning-Kruger.

So there does seem to be some reality to the idea that if you don't perform well in a domain, if you're low-skilled in a particular task, you might also lack the metacognition to realize that you're low-skilled. But the proportion of the variance in the actual data that that effect explains is actually quite small.
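
The regression-to-the-mean argument is easy to see in simulation. A minimal sketch (illustrative only, not a reanalysis of any real dataset): give everyone an unbiased self-estimate and a noisy test score, both driven by the same underlying skill, and the classic Dunning-Kruger pattern still appears once you bin people by measured score:

```python
import random
import statistics

rng = random.Random(0)
n = 20000
true_skill = [rng.gauss(50, 10) for _ in range(n)]
# Test score and self-estimate are independent noisy readouts of the SAME
# skill; by construction, nobody's self-assessment is biased.
score = [s + rng.gauss(0, 10) for s in true_skill]
self_est = [s + rng.gauss(0, 10) for s in true_skill]

# Bin people into quartiles by *measured* score, as the classic plots do.
people = sorted(zip(score, self_est))
q = n // 4
quartiles = []
for i in range(4):
    chunk = people[i * q:(i + 1) * q]
    quartiles.append((statistics.mean(s for s, _ in chunk),    # mean score
                      statistics.mean(e for _, e in chunk)))   # mean estimate
```

The bottom quartile's self-estimates sit well above its scores and the top quartile's sit below, purely because selecting on a noisy score also selects the noise; any genuinely metacognitive contribution is whatever small gap remains after this artifact is accounted for.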

[00:19:29] Matt: Yeah, well, this immediately casts doubt on... you know, there are a lot of interesting studies of this nature, or even anecdotes that people comment on from studies in this field, and it does cast doubt on those. One that always comes to mind for me, and that I've thought about a lot, goes back to Roger Sperry and Michael Gazzaniga's split-brain experiments.

I mean, you would describe it much better than I would, but very famously, you have humans who have had the corpus callosum, connecting the two hemispheres of the brain, severed. And this means that information shown to one eye, for example, would only pass to one of the hemispheres.

It's not communicated in the normal way between both. And because we have a hemisphere that can interpret language and another hemisphere that doesn't interpret language in the same way, it means you could, you know, show instructions to one eye while blocking the other, and have the person follow that instruction without understanding that they've done so.

And, you know, these patients are otherwise completely normal. But you show one of them an instruction to stand up and leave the room, and they do so, and when you ask them why, they make up some reason. They say, you know, "going to get a glass of water"; I think that's the actual example from these studies.

And in this case they can say that with full confidence, and they believe it, and there's no other reason to believe that they have low metacognition. But of course the experimenters understand that this is completely wrong. And those sorts of studies, to me, cast doubt on the whole notion of contrasting confidence with performance and with understanding of our actions.

Does that experiment fall into the same class of experiments that you've just described, or is there something more there?

[00:21:21] Stephen: I think that's a good question. You described it very nicely there: the remarkable suggestion is that there's a split psychology in patients who have had the corpus callosum severed. Because, as you say, the way the visual system is wired up is that one hemifield of space goes to the contralateral hemisphere.

So you can set things up psychophysically where people are fixating centrally and you flash an image in one half of the screen, and you can know that that image will be sequestered in the hemisphere of the brain that doesn't have the capacity to respond to it linguistically.

And so the implication of those results, the original Gazzaniga results, is that there's some interpretation that the (typically left) hemisphere is performing on our behavior, on our actions: it's creating some narrative about why we did things. Now, I think there's good evidence to suggest that even in an intact, healthy brain we're continually creating these self-narratives that don't always cohere with reality. But there does seem to be something quite remarkable about how dissociated narratives can become from behavior in the case of split-brain syndrome.

The difficulty, though, is that following that up with the techniques of modern cognitive science and brain imaging is really hard, because these patients are so rare. And my understanding, and I'm not a neurologist, is that those surgeries are much less common now.

They used to be done to try and resolve intractable epilepsy, but completely severing the two hemispheres is now very rarely done. I have a colleague, Yair Pinto, who has tried to find some of these remaining patients and study them psychophysically. And I know that the picture has got more complicated, in the sense that the idea of a complete split of consciousness between the two sides seems to be less secure than it used to be based on the original Gazzaniga results.

But as far as I know, no one questions the fact that those original results do suggest this process of narrative self-interpretation.

[00:24:08] Matt: Yeah, I mean, you can almost abstract away from this specific case. Studying the patients with this particular sort of brain atypicality, that's important, but more striking to me is the fact that these people themselves feel normal, and they behave normally in most other ways. That fact alone I find riveting, because you start to question yourself: you could be walking around with some sort of brain atypicality, or human brains in general could be doing something of a similar nature. What does that do to how you think about yourself, your agency over your actions, your understanding of why you behave in certain ways?

[00:24:56] Stephen: That's a good question. I think that studying these topics, studying metacognition and self-awareness, has made me more cognizant of the potential for self-distortion, the potential for creating a narrative about how things went that might not be the way someone else would see it.

And so on an interpersonal level, I think it does give you a bit more empathy for the fact that everyone else is walking around trying to do this to themselves as well. We are all trying to make our way in the world and interpret our journey. We're trying to create a narrative that makes sense.

We're trying to tell ourselves a story about our life, essentially. And that's a hard thing to do; it's not always going to line up with the way other people see it. Just knowing that is, I think, quite a powerful life lesson, because it gives you a bit more empathy.

If you're interacting with someone who you find difficult, or there's tension with, then you can start having a bit more theory of mind, thinking: well, they have a different life to me. They've come from a different perspective. They're creating a different self-narrative about how this goes. I often find this useful to keep in my head when I'm interacting with colleagues at work over something. You know, we're all coming at it from different angles, and it doesn't necessarily mean that someone's engaging in a fight with you about something; it means they just have a different perspective on the situation.

But then, all of that is easy to say, and it doesn't change the experience in the moment. I often find that even being a researcher in this area doesn't make me immune to all the metacognitive illusions and metacognitive biases that we've been talking about. And that's where I think leaning on an external perspective is incredibly helpful. Whether that's formally, by engaging in therapy or coaching, having someone sit down with you for an hour to figure out the narrative and make you see things differently, or informally, with friends and family and colleagues and so on.

So I think those external perspectives are incredibly helpful. And as a bit of a side note, I felt during the pandemic that this was one of the things I really lost. A lot of things we were able to do, and continue to do, in terms of the way we run science, run the lab, run things at the university, have now switched online, like a lot of workplaces. But I think what you lose there is the kind of informal interaction that might just give you that check on how you're seeing things.

It's incredibly important, but, I think, incredibly underestimated.

[00:28:30] Matt: So it's your sense, then, that when you're interacting with people on a day-to-day basis, there's a sort of feedback between people that keeps everyone in sync. How quickly do you feel the mind starts to lose touch when you don't have that? Looking at people who, for example, isolate themselves from society: how long does it take for that internal spiraling to happen?

[00:29:00] Stephen: Good question. I don't think we know. Unraveling that would take some detailed, and probably very hard to achieve, empirical study. I guess analogs of this can be studied through the lens of psychiatric conditions, where there's this interplay between social interaction and how self-models, and also models of the world, are built.

The example I'm thinking about here is psychosis, schizophrenia, where there's this complex interplay. If you are suffering from a disorder of the brain, that can essentially lead to distorted models getting built. I mean, that's one hypothesis about what psychosis is: this whole model-building machinery is misfiring.

It's generating somewhat distorted models, both of other people, so you might start having delusions about other people's intentions, and also perceptually: you might literally hallucinate, hear voices, and so on. We don't fully understand the sources of those distortions in the model-building process, but it seems to be a reasonably good characterization of the phenomenology.

And so you can immediately start seeing how, if that is distorted, it might lead you to lose touch with reality, and therefore also with other people, because the way we socially interact is grounded in shared assumptions about what is real, what is common knowledge.

And when we lose that common, literally common, sense, the belief in a common set of external properties, then that can socially isolate you. And there's been interesting work suggesting that this then creates a spiral: if you're isolated, it leads to further distortions in the model, and so on.

So I think that would be the closest analog we have to try and answer your question. But zooming out a bit further, a colleague of mine, Cecilia Heyes in Oxford, has thought very hard about the notion that a lot of things we think of as properties of our cognitive system, including metacognition, may in fact be culturally grounded. The idea is that metacognition is analogous to something like reading.

So we teach our kids to read. If we didn't teach our kids to read, they wouldn't be able to do it, even though once we've taught them there is a clear brain basis for reading, because the brain creates specializations for things we do again and again. Now, Cecilia's view on metacognition is slightly different.

We don't have formal educational programs for metacognition. Perhaps we should, but we don't. And yet somehow, in the course of bringing up our kids and being parents, teachers, family members and so on, we impart the skills that may be needed for building this self model. Now, that's a hypothesis.

I'm not claiming it's definitely true, but I think it's an attractive hypothesis. And one implication of it is that if you were growing up on a desert island with no cultural grounding, you might never develop self-awareness in the way that we can talk about it today.

[00:33:15] Matt: You mentioned this concept of a feedback loop there, and it's almost a very unfortunate fact about these sorts of things. Somebody who has very low metacognition would also not be aware that they have low metacognition, and as your metacognition increases, you become more aware of the gap.

So it feels like there is this very unfortunate pairing of a virtuous and a disastrous spiral: if you've got very high metacognition, you're poised to be able to improve it, and if you've got low metacognition, you're not. I would love to understand your thinking on that state of affairs. Reading is actually a good example, because we've got benchmarks so you can understand where you sit relative to some sort of scale.

With metacognition, I think most people just have an inherent feeling, a sense; presumably your approach is to be much more systematic, but people don't have that. So my question is: where do people sit along this spectrum, and how does one know where they sit on the metacognitive spectrum?

[00:34:32] Stephen: Yeah. One of the things we've been pursuing over the past few years is trying to define this more quantitatively and provide measures of metacognition that are objective, that we can derive from data we collect in the lab. This comes back to the notion of metacognitive sensitivity I described earlier: essentially, what we're trying to do is build up a statistical picture of how well your metacognitive judgments cohere with your performance, how well they track the reality of your skills and abilities. And we do that typically by asking people to make metacognitive judgments on a moment-to-moment basis.

We literally ask people: how confident do you feel about getting that right? Or perhaps: did you feel like you made an error just then? If we do that multiple times, we can build up these statistical pictures of how metacognitive judgments track performance. We've now tested hundreds or even thousands of people on these kinds of tasks.
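
A common way to quantify metacognitive sensitivity from exactly this kind of trial-by-trial data is the type-2 ROC area: how well confidence ratings discriminate correct from incorrect responses. This is a standard measure in the field rather than necessarily the exact statistic used in any particular study from the lab; a minimal sketch:

```python
from itertools import product

def type2_auroc(confidence, correct):
    """Area under the type-2 ROC: the probability that a randomly chosen
    correct trial carries higher confidence than a randomly chosen
    incorrect trial (0.5 = no metacognitive sensitivity)."""
    corrects = [c for c, ok in zip(confidence, correct) if ok]
    errors = [c for c, ok in zip(confidence, correct) if not ok]
    if not corrects or not errors:
        raise ValueError("need both correct and error trials")
    wins = 0.0
    for hi, lo in product(corrects, errors):
        if hi > lo:
            wins += 1.0
        elif hi == lo:
            wins += 0.5  # ties count as half
    return wins / (len(corrects) * len(errors))

# Toy data: confidence on a 1-4 scale (1 = guess, 4 = certain)
conf = [4, 3, 1, 2, 4, 1, 3, 2]
acc  = [1, 1, 0, 0, 1, 0, 1, 1]
print(round(type2_auroc(conf, acc), 3))  # -> 0.967
```

A value of 0.5 means confidence carries no information about accuracy; values approaching 1 mean high confidence reliably accompanies correct responses, regardless of how good first-order performance is.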

And what we find is that, first of all, there are systematic individual differences in metacognitive sensitivity that are not explained by first-order performance on the task. So you might actually be performing a task quite well, but be unaware of how you're doing on a moment-to-moment basis.

Conversely, you might be performing a task relatively poorly, but be acutely aware that you're making lots of errors. So metacognition and performance seem to decouple in interesting ways, and that metacognitive sensitivity parameter we derive from the data is a meaningful individual difference.

It's relatively stable over time. It correlates with brain structure and function. And, interestingly, it's not predicted by markers of general intelligence: you can be someone who's very smart based on classical IQ tests, but actually have quite poor metacognition. So it seems to be a meaningful thing that we can measure in the lab.

And so we do now have, I think, a robust science of metacognition from which we can develop benchmarks similar to what you were describing for reading. That opens up a lot of the research we're pursuing in the lab: using this approach to answer questions about the structure of our metacognitive capacities.

For example, does having good metacognition on one task predict having good metacognition on another task? It turns out it generally does, so that's some evidence for the idea that there's some general metacognitive resource we bring to the table. Sorry, I'm forgetting the original question now, but hopefully that's a reasonable overview of the science we're pursuing at the moment.

[00:37:29] Matt: Yeah, no, that's great. Actually, as you were talking about the relationship between metacognition and performance, there's a view on which you might think they would be inversely correlated. If you imagine the mind as a computer dedicating some portion of its resources to performing a task, and you dedicated a separate portion of processing power to self monitoring, then less would be going to the task itself. I would presume the same is true for some types of human performance, so you would have some sort of trade-off.

And if that's true, the question arises as to what the right level is. You actually don't want too much metacognition, because then you're just thinking about thinking about thinking, and not performing. Do you have a view here? Maybe it's very task dependent, but how does one think about the optimal level of metacognition?

[00:38:26] Stephen: Yeah. I think there is some evidence that when a task is very well practiced, particularly in skilled performance, so in athletic or musical performance, when things become very automatized, then metacognitive insight on a moment-to-moment basis seems to drop off. There have been some neat studies on this.

So you get this picture where metacognition seems to be most important for optimizing performance when you're engaged in something novel, when you're learning a new task, when you do need to be aware of potential errors so that you can correct them. Also in a social setting: if you're engaged in some group activity, there's neat research suggesting that one important thing for optimizing group performance is being able to share metacognitive estimates with each other.

One example I like here is a referee and a line judge in a football match conferring about whether they saw a foul on the pitch. Even though they might not use that exact language, they're sharing estimates of what they think they saw. So that's a metacognitive conversation going on. And I think a lot of areas of life are like that, where we're essentially trying to figure out our best joint picture of how to solve a problem, or the best way to go about something. In those kinds of settings, metacognition seems to be important, and if we don't have it, then we suffer.

But in other settings, where things are very practiced, where we're doing a solitary activity, when we're hitting a golf ball or playing an instrument, it seems like there's less need for that self monitoring, because it's so well practiced we just want to let it unfurl. And there's been some neat research, as I mentioned, suggesting that if you force people to engage in metacognition about those skilled activities, then performance decreases. Now, I don't think we have a good understanding of why that is. It seems intuitive; it's this idea that you reflect on it and then you start screwing up your performance.

Perhaps one reason is that, because everything's so interconnected and dynamic, as soon as we start building this model of how we're doing something, we just can't help ourselves trying to change it. And if it's already very well practiced and very skilled, we shouldn't be doing that; we should just leave it alone. So that's one idea, but I don't think we have a good scientific understanding of why bringing metacognition back online seems to harm performance in some situations. The data suggest that it does.

[00:41:23] Matt: And to what extent is metacognition malleable and trainable? The golf example is a great one because, when you start out, everyone is terrible, and the levels you can achieve by practice are phenomenal; the improvement is huge. But for mental training, for brain training, I think people just have this intuition that things are less malleable. You can learn facts, but can you really change the performance of your mind and your brain itself? How malleable is metacognition? How trainable is it?

[00:42:00] Stephen: Yeah. I think the broader brain training debate is an interesting one. One thing we do know, which has shifted both the neuroscientific orthodoxy and, increasingly, the public perception of how we understand the brain, is that the brain remains plastic into adult life.

Everything we learn changes the brain in some way. If you told me a new fact today about Sydney, that would somehow get stored in some pattern of weights, in a way we don't fully understand. Everything we're doing is, in some small way, changing the brain.

But then the question becomes: okay, fair enough in terms of semantics, in terms of our memories, but is there some way of shaping the way the cognitive system itself operates? And I think the story that comes out of the studies of brain training is that yes, you can improve in relatively narrow domains: if you play a game for multiple hours, that will improve your performance on that particular task or game, but it doesn't seem to transfer to other aspects of life.

On metacognition, I think the jury is still out. I've written in the past about how perhaps we could be more optimistic about the potential for metacognitive training, because metacognition seems to be such a broad resource. If we can improve it via training on one particular task, then because metacognition seems to be relatively domain general, there's some optimism that it might transfer to other areas of life.

And we've done a couple of studies on this, showing that if you give people 20 minutes of training a day on how well their metacognitive judgments are lining up with their performance, people can get better: they improve their metacognitive sensitivity. And what was interesting for us is that when we trained this on one particular task, then at the end of training, if we gave them a brand new task they hadn't been trained on before, their metacognition seemed to be a bit better on that new task as well.

So that suggests some transfer, some generalization, of the training effect. But that's just one study. There have been a couple of follow-ups, nuancing the picture in terms of what the actual source of that effect is: is it a real metacognitive shift, or is it more about how we communicate our judgments? It's promising, and I think it needs to be followed up.

Another angle on this is via other, incidental activities that might tap into similar mechanisms to the ones we use for self monitoring. One focus here has been on meditation, and there have been some interesting studies suggesting that engaging in regular meditation practice has benefits for these objective measures of metacognition that we can measure in the lab.

Again, it's early days for that research area. There have only been four or five published papers on this, which again seem promising, but I think understanding how and why that works is still an open question.

[00:45:28] Matt: And what are your views on just how good metacognition can get? From a purely theoretical perspective there is obviously a limit, right? If your mind needs to have a self model of itself, it can't be a perfect self model, because then you get an infinite regress: the model models the model, and so on.

So from that perspective there's a theoretical limit, and I think there are other reasons too. But presumably we could get a lot better than the average person is today. What are your views on how much runway there is to improve one's metacognition?

[00:46:05] Stephen: I think we have a reasonably good understanding of this, in the sense that our metrics of metacognitive sensitivity, in the latest models, are in units of performance. In one popular model, known as the meta-d' model, the statistic we get out of the data is in the same units as performance itself.

The nice thing about that is that you can then simply compute a ratio, which we call metacognitive efficiency: meta-d' divided by performance. That tells you essentially how much headroom you have for improvement. The equations say that the ceiling of the ratio is one: you can't do better than first-order performance, you can't gain more information, although there are interesting possible exceptions to that.

When we measure it in the lab, these ratios usually come out around 0.7 or 0.8. So it seems like people's metacognition is using around 70 or 80 percent of the information available from performance itself to engage in this self monitoring. So there does seem to be room for maneuver.
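
For concreteness, the ratio logic works like this. Estimating meta-d' itself requires fitting a model to the confidence data, which is beyond a short sketch, so the meta-d' value below is a hypothetical fitted estimate; the d' computation and the efficiency ratio are standard signal-detection arithmetic:

```python
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    """First-order sensitivity from a yes/no task:
    d' = z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(false_alarm_rate)

def metacognitive_efficiency(meta_d, d):
    """meta-d'/d' ratio: 1.0 means confidence uses all the information
    available in first-order performance; ~0.7-0.8 is typical."""
    return meta_d / d

# Toy example: 80% hits and 20% false alarms give d' of about 1.68.
# Suppose a meta-d' fit to the confidence ratings came out at 1.30
# (a hypothetical value for illustration).
d = d_prime(hit_rate=0.80, false_alarm_rate=0.20)
print(round(d, 2))                                   # -> 1.68
print(round(metacognitive_efficiency(1.30, d), 2))   # -> 0.77
```

So an observer with this profile would be using roughly 77 percent of the information in their own performance when monitoring themselves.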

That was the kind of headroom we were trying to encroach upon when we were doing these training studies. But then, as you say, the broader question becomes: do we want to get to ceiling? Do we want to move away from the set point we have at the moment in the population?

And there are interesting arguments, which come back to what we were talking about at the start about the flex you have in the self model, suggesting that maybe we actually want to retain some imprecision. Chris Frith, a British psychologist who was one of my PhD advisors, has argued that one interesting feature of metacognition is that it's socially sensitive. If I'm trying to be a constructive friend and say, you know what, I think maybe a better strategy was X rather than Y, and you had "perfect" metacognition about yourself, then perhaps you just wouldn't care what your friend says. You'd just think: well, I know how it went, and I'm happy with that.

So there's an interesting tension here between having some flex to allow some social influence, so that we get a better collective view of the situation, which is what Chris has written about, and having a kind of solipsistic introspective accuracy that says: I know the picture, both of the world and of my own mind, and I don't need to rely on others. Again, I don't think we have strong empirical evidence that that's the reason for poorer metacognitive sensitivity, but it's a really neat hypothesis that could be followed up on.

[00:49:29] Matt: Yeah, certainly. And then there's also this question of applying that same measure to other people. One has an assessment of their own confidence and what they can do; one also has a theory of mind for those around them, and vice versa. I'd love to understand how those two things track and correlate.

These days, I think many people would believe, for example, that tech companies can predict what we're going to do better than we can ourselves. I'm not sure that's quite a nuanced enough statement, but there are beliefs of that nature: that others understand us increasingly better than we understand ourselves. How do these things track? Have you studied that?

[00:50:18] Stephen: Yeah. In development, there does seem to be an intriguing association between understanding of other minds and metacognition. They are both relatively late-developing capacities: it's not until kids reach the age of three or four that they start to pass explicit tests of metacognition about themselves, realizing when they know something and when they don't.

At a similar age, they start to pass tests of understanding false beliefs, realizing that someone else might have a different, and potentially false, view of the world compared to them. And there have been some neat experiments run recently suggesting there are commonalities between these two processes.

One hypothesis, developed by the philosopher Peter Carruthers, is that an important part of human metacognition relies on theory of mind. Essentially, as kids we build up the skill of realizing that someone else has a different view of the situation to ourselves, and then we apply that same skill to ourselves: we realize that we might not have the full view of the situation, that we might be wrong. So it's essentially applying false-belief understanding to yourself in a recursive way. That remained a hypothesis for a number of years, and I think it's only recently that psychologists have started to try to test it empirically.

There's been some recent data suggesting that if you interfere with the process of thinking about someone else, you also cause problems for self-directed metacognition. And with a master's student a couple of years ago, Anthony Vaccaro, we surveyed the literature on metacognition and theory of mind and did a meta-analysis of the brain imaging studies on these two topics.

While there were distinct networks that tended to show up in functional imaging studies for metacognition and for theory of mind, there was some overlap in the prefrontal cortex. A meta-analysis doesn't tell you very much about mechanism, but it does at least give some suggestive evidence that there is commonality in the neural substrates for thinking about yourself and thinking about someone else.

[00:52:53] Matt: And how much benefit do you think there is to be gained from increasing metacognition? I guess if theory of mind of others comes along for the ride, that's all the more reason to improve it. But how does the world change if people improve their metacognition?

[00:53:12] Stephen: I think, as you say, if there is some commonality between the processes involved in metacognition about yourself and the fidelity with which you can build good models of other minds, then there should be myriad social benefits from developing more accurate metacognition. Psychologists have typically tended to put metacognitive benefits in two boxes.

One is intrapersonal: the benefits of good metacognition for your own success and wellbeing. This is something that has been written about a lot in the context of education. We'd like our kids to cultivate sophisticated metacognition about what they know and don't know, so they can guide their own learning and study and fill gaps in their knowledge. That's becoming increasingly popular in educational psychology.

There's also an area of intrapersonal metacognition that we've been focused on in my lab, which is the contribution it might make to information seeking. You can think of this as like education, but now out in the real world: once we're in society, we've got all these decisions to make about whether or not to seek out new knowledge on a particular topic. And we've done some experiments suggesting that one of the predictors of whether you do seek out new knowledge on a topic is metacognitive sensitivity.

That makes some sense, I think, because if you have this sense that, hang on, you don't have the full picture, you might be wrong about this, other people might have something to tell you, then you should go and seek out that new information. Together with a former PhD student of mine, Max Rollwage, we studied this in large population samples. We measured metacognitive sensitivity on a really boring task that was just about deciding which of two boxes had more dots.

But we also measured political beliefs about a range of issues. This was a U.S. sample, so we were asking questions about political leanings, but also attitudes towards gun control, abortion and so on. And what we were able to do was extract a measure from all those questions that told us something not about what the person's beliefs were, but about how strongly held they were: how much they thought they were right and everyone else was wrong.

What was interesting is that that factor from the political questions, which we call dogmatism, predicted metacognitive sensitivity on the dots task. People who were slightly worse at realizing when they might be wrong on the dots task also tended to have very strong and rigid beliefs about political issues.

That's just a correlation; we can't establish causation there. But I think it's suggestive that perhaps one role metacognition plays in our adult lives is essentially prompting us to rethink whether we have all the answers.

So that was more on the intrapersonal side. Then there are all these interpersonal, or social, functions of metacognition that we've talked a bit about already.

One benefit of cultivating better metacognition is that it allows you to interact more effectively in a social group, because you'll be able to communicate your degree of belief that you have the answer to the problem the group is trying to solve, rather than just wading in saying, I know how to do this.

If you're more aware of the gaps in your knowledge, you'll be able to tailor your advice and your contributions to the group in an appropriate way. And there have been some nice experiments suggesting that that idea holds true.

[00:57:35] Matt: I do worry sometimes that aspects of the world these days are set up to emphasize the negative aspects of good metacognition. As an example, we mentioned that people with higher levels of metacognition would be better able to self-assess their knowledge gaps, and would therefore hold their views with less confidence.

In many contexts that's great, but in some contexts it's really not, especially when we have technologies that can accentuate views, and when high levels of confidence can get people really far. So it feels like there are certain aspects of the world that are really not set up to manage this fact about the human mind very well.

[00:58:29] Stephen: A hundred percent. And I think it goes back to the tension we talked about towards the start of the conversation: ideally you'd want this capacity to have good awareness of knowledge gaps, errors, failings and so on, and yet still be able to project confidence in a way that means your voice is heard. Navigating that tension is really difficult.

And as you say, I think it's becoming even more difficult in a modern world where we're suffused with opinion from all quarters, via social media, and via the fact that society is ever more interconnected. So trying to navigate this tension, between having enough confidence to contribute to a conversation and maintaining enough metacognitive awareness to realize when you might be wrong, is a really hard problem to solve, but it's becoming ever more important in modern society.

[00:59:44] Matt: Yeah, and that's the type of question that I guess your lab deals with, either directly or as a downstream implication: very big questions. One of the most interesting ones I noticed quite recently was with, I think you mentioned her earlier on, Nadine Dijkstra.

I hope I'm pronouncing that correctly. But you recently

[01:00:05] Stephen: I think it's, yeah, she's from the Netherlands originally. So I think it's Dijkstra, but then I'm also maybe not getting it perfect.

[01:00:15] Matt: Well, you two recently published a very interesting paper called Subjective Signal Strength Distinguishes Reality from Imagination. It actually goes back to the topic we mentioned very early on, psychosis, where somebody's perception of reality detaches from what's really there; they're unable to distinguish imagination from what's really out there.

It's noted in that paper that, very interestingly, some of the mental hardware, the mental processing, for imagination is similar to what's used for perception, or at least there's an overlap there. And it's actually a profound question how one distinguishes those two things: we're getting signals from the outside world, but what we're actually experiencing is a model, and likewise imagination is a model. And yet people feel they have a very good grasp of these two things and that they can distinguish them.

I thought this paper was very interesting. Could you share a bit about the paper and the insights that came out of it?

[01:01:29] Stephen: Yeah, of course. I think it does start with what you described just then: this idea that imagination relies on the same kind of machinery that is trying to infer what is out there in the world. It goes back to what we were talking about at the start of the call, that perception seems to work not as a process of just taking in sensory input and processing it in various ways, which was how I was taught perception works as a psychology undergrad 20 years ago.

The picture has shifted quite substantially since then. We don't think of perception as a bottom-up process anymore; we think of it as a constructive process, and the anatomy of the different perceptual systems seems to bear this out. We have a lot of what we call top-down projections, going from higher levels of the brain down to the sensory systems, even more so than we have neural pathways going from the sensory systems up into the brain. So there seems to be this very active, constructive aspect to the perceptual machinery.

One proposal, then, is that imagination is the process of running that machinery backwards. Rather than taking an input and trying to infer what's out there, you're generating internally: you're taking samples from these generative models that you've built to reflect the external world. A lot of Nadine's work when she was completing her PhD at the Donders Institute in the Netherlands looked at this question using brain imaging.

What she found was that when people imagine things, they recruit the same neural resources as when they perceive the actual objects. There seems to be this overlap in the brain between imagination and perception, and that raises a natural question: if imagination and perception differ in degree but not in kind, how is the brain able to tell them apart?

And can we even tell them apart? That was the question behind our paper. There was some really interesting work conducted back in the early 1900s by a pioneering female psychologist working in the US called Mary Cheves West Perky. What she did was ask people in her lab to imagine things on a screen, and then use a setup with a lamp and a colored filter to project a faint image of what they were being asked to imagine onto that screen.

So if she asked you to imagine a banana on the screen, she would project a little faint patch of yellow onto the screen, and then ask the subjects: how did that feel? They would often tell her: wow, my imagination was so vivid, I really saw a banana there. They couldn't tell that she had essentially tricked them with this "real" banana on the screen.

So we wanted to follow that up. That was really the inspiration for our study, because those were somewhat anecdotal results.

We wanted to see whether we could bring the toolbox of psychophysics and cognitive science to bear on that question. And it's a tricky one, because as soon as people realize there are real stimuli in play in your experiment, the game is up: they're going to realize you're trying to trick them by fading in real things. So we had to somehow circumvent that.

The way Nadine devised to solve this problem takes advantage of the fact that we can now run experiments over the web on very large numbers of people, much larger than we could by actually bringing them into the lab. We gave a very short experiment to each person we recruited into the study: first, they had to imagine tilted lines embedded in dynamic noise. Then, on the very last trial of the whole experiment, for some subjects we faded in the thing they were being asked to imagine, and for other subjects we faded in something different.

And then we simply asked them two questions on that very last trial. One was: how vivid was your imagination? A bit like the question Perky asked in the 1900s. The other was: did you detect any real stimulus on the screen on that last trial? And we had two models of this.

One idea, similar to the Perky experiment, is that if you imagine something, perhaps what you're doing is suppressing any influence of the outside world; you're essentially ignoring the possibility that something might be out there. If that were the case, then you should actually be less likely to detect a real stimulus when you're being asked to imagine that same thing.

Whereas another model, based on this more modern view of perceptual generative models, is that these signals are all just getting intermixed: if imagination is just another version of perception, driven from within, then it should actually be very hard for you to tell them apart. The signals should summate, and you should actually be more likely to say something real was out there if your imagination on that trial was vivid and if it matched what we were fading in.

And that's exactly what we found. The best model of our data was that when we faded in a real stimulus, two things happened: people felt their imagination was more vivid, but they were also more likely to say that something was out there.
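[Editor's aside: the contrast between the suppression and summation accounts can be caricatured as a toy signal-detection simulation. This is purely an illustrative sketch, not the model from the paper; the signal strengths, Gaussian noise, and decision threshold are all assumptions.]

```python
import random

def run_trial(model, imagery_strength=1.0, stimulus_strength=1.0,
              noise_sd=1.0, threshold=2.0):
    """One 'last trial': the subject imagines a pattern while a matching
    real stimulus may be faded in. Returns (vividness, detected)."""
    imagery = imagery_strength + random.gauss(0, noise_sd)
    sensory = stimulus_strength + random.gauss(0, noise_sd)
    if model == "suppression":
        # Perky-style account: imagining gates out external input,
        # so the faded-in stimulus contributes nothing to the decision.
        evidence = imagery
    else:  # "summation"
        # Generative-model account: internally and externally driven
        # signals intermix, so matching evidence adds up.
        evidence = imagery + sensory
    vividness = evidence               # rated vividness tracks total signal
    detected = evidence > threshold    # "did you detect a real stimulus?"
    return vividness, detected

def detection_rate(model, stimulus_strength, n=20000):
    random.seed(0)  # same noise sequence for a fair comparison
    return sum(run_trial(model, stimulus_strength=stimulus_strength)[1]
               for _ in range(n)) / n

# Under summation, fading in a matching real stimulus should raise
# 'real' reports; under suppression it should make no difference.
for model in ("suppression", "summation"):
    absent = detection_rate(model, stimulus_strength=0.0)
    present = detection_rate(model, stimulus_strength=1.0)
    print(f"{model:11s} P(report real | absent)={absent:.2f} "
          f"P(report real | present)={present:.2f}")
```

Because the same evidence variable drives both the detection report and the vividness rating, the summation model reproduces the qualitative pattern described: a matching faded-in stimulus raises both, while under suppression it changes nothing.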

So those signals seem to be intermixed, and I think this is consistent with a broader hypothesis we're pursuing in the lab: that what makes the difference for conscious experience is a process of figuring out how reliable the inference going on in my perceptual model is.

If I have strong and reliable signals, it doesn't matter where they've come from; I will just tag them as being real, as being out there in the world. And that's a hypothesis that has been put forward by a colleague of mine in Japan, Hakwan Lau, known as perceptual reality monitoring. The idea is that conscious experience is grounded in a process of essentially tagging the operation of those perceptual models as either reflecting reality, because we have strong and reliable signals, or as reflecting something going on internally.

And really, the reason consciousness exists at all is because the system is trying to solve this problem. It gives us a tag; it tells us, if you like, that the process going on in my brain right now is reflecting external reality. And that gives us this very strong belief that there is a reality out there.

And that's what we call conscious experience.

[01:08:52] Matt: That last part of your answer, on the source of consciousness and what consciousness actually is, was striking. I didn't expect you to say that. I'd love to dig into it. Could you expand on what you mean?

[01:09:05] Stephen: Yeah. I think the best way of answering that question is to go back to this notion of perceptual generative models: the idea that the brain is trying to build a model of its external world.

But what's interesting about that approach is that there's nothing in that modeling framework that seems to distinguish between conscious and unconscious perception. And in fact, some of the proponents of this idea going back to the 19th century, like Helmholtz, suggested that a lot of this process of perceptual inference is unconscious.

It does its thing and serves up the results to us as conscious experience. So this raises an interesting question: if we assume that some aspects of this perceptual processing can take place unconsciously, then what is it that makes the difference between conscious and unconscious perception?

What is it in those architectures that makes the difference? One insight into this, and here, as I mentioned, I've been very influenced by Hakwan Lau's views, is that it really comes down to a functional need for the system to distinguish simulation, internal imagination, from the external world.

So we wouldn't want to act on our imaginations. We wouldn't want to treat an imagined coffee cup as similar to a real coffee cup and behave accordingly. Somehow the system has to tell these apart, and Hakwan's proposal, which I think is a very powerful one, is that perceptual representations become conscious when they're identified as reliable reflections of the external world by this internal process of reality monitoring.

And in a sense, there's nothing left over to explain, because what we mean by conscious experience is this incontrovertible belief that there is an external reality. And when I say external reality, I mean one that includes ourselves, includes our bodies. It just means that our brains have tagged something as reflecting the world as it is now, including ourselves in that world, and distinguished it from things that are going on internally: simulations, plans, imaginations, and so on.

And the idea is that those are typically not phenomenally experienced; they're going on under the hood. So when we're simulating possible trajectories through our environment, those simulations are not phenomenally experienced. Imagination is an intermediate case, because sometimes it is phenomenally experienced.

And the idea, also from Nadine's work, is that the reason for that is that in a sense imagination is fooling this reality monitoring system. It's an intermediate case: the system says, hang on, maybe that is somewhat real. So it gives us conscious experience, but there's potentially some even higher cognitive level that says, I am imagining, so I can discount that and not treat it as real.

But in terms of the way this reality monitor operates, the idea is that there's some kind of constant monitoring going on of whether the outputs of this perceptual model reflect external reality or not. And that's what we call conscious experience.

[01:12:52] Matt: I think some people would still, I mean, it all makes logical sense, but I think many people would still feel that you could have that same level of feedback and control without a notion of subjectivity or experience. You could just ask: why couldn't that control mechanism be happening with the lights off?

Um, how do you think about that question?

[01:13:15] Stephen: Yeah, I think this is really what it comes down to. If you're a functionalist about consciousness, and I am, then if we do arrive at a neuroscientific account of consciousness, it's going to need to be in functional terms.

We can't have something that floats free of the way the brain and mind work. And so you end up coming back to the phenomenology, and this encroaches on a position in philosophy known as illusionism, which I think this approach is somewhat aligned with. I've heard people like Andy Clark talk about revisionist illusionism, the idea that you're trying to explain the way conscious experience feels to us.

So it's real; we're not saying it's an illusion. But it is an illusion in the sense that once we've explained the source of why it feels so incontrovertibly real to us, then we're done. It's a third-person explanation where we say: okay, I've got the cognitive science machinery to explain why you feel you have such a real conscious experience, and why I feel it's so real, and so on.

And once we've explained that sense of incontrovertible reality, that there is a conscious experience, that's all we can do. It does encroach on the illusionist position, but it doesn't go as far. Some people interpret illusionism as saying consciousness is not real, and I actually don't think the illusionists are saying that, but that's a whole different debate.

So that's some of the philosophical background. And then, as you say, there needs to be some account of why a system like us would need to be solving that kind of problem. There it comes down, I think, to thinking about the benefits, and you can tell an evolutionary story here as well: what are the benefits for a system of being able to do all this offline stuff, to simulate and plan and imagine and so on? And this encroaches on the science of decision making, where there's lots of interesting work suggesting that in certain scenarios, the way we navigate through the world is by simulating possible futures.

So, you know, I know on an implicit level that if I want to go and get a coffee, I can walk out the door, turn right, and so on; I simulate the possible future. And there's been some lovely work done in rodent models suggesting that when mice and rats are planning their path through a maze, at the choice point you actually see place cells essentially propagating out ahead of the rodent, very quickly, on the order of a few hundred milliseconds.

And so there's this internal representation of the possible futures the rodent could take, and you can see it in the neural data. Introspectively, that kind of thing doesn't feel conscious to me, and I think there are lots of interesting empirical questions to ask about it.

So those kinds of simulations seem to all be running under the hood. You know, if I turn up at a new place, I might think, this is the way I'm going to go, and perhaps my brain has run lots of different simulations about where I could have gone, but didn't. And so the idea is that if those simulations are being run in perceptual space, then there's some imperative to keep the model of the world we're using as a basis for the simulations, and the simulations themselves, separate. We don't want to confuse the two. We don't want to simulate going down this path and then experience going down that path if we haven't.

Right. So that's the idea: you need to keep those things separate. The model of the world we're building, which is also dynamic and changing over time, is one thing, and that's tagged as reality. And all the simulations and planning we're doing within that model of the world is another thing, and that's tagged as internal, and we don't experience it.

[01:18:05] Matt: I guess these are the big questions your lab is looking at now. I'd like to get your views on what a solution to the understanding of consciousness could even look like. Because I think there's one camp of people who so buy into the philosophical hard problem of consciousness that they almost assume one could never have an intuitive grasp of consciousness.

They take it so seriously that they think we would never be able to solve and fully understand consciousness. And I think there's another camp of people, and I'm one of them, that looks at this problem no differently from other scientific problems. It's a hard problem, but consider, for example, understanding gravity.

You know, if I ask you why gravity works the way it does, eventually you always hit a bottom where we don't understand it; it's fundamentally mysterious. And the historical progress of science in other fields has gone down that same route: eventually, at some point, people feel, okay, I have an intuitive enough understanding of this thing that I no longer consider it not understood, but at bottom it's always mysterious.

And my personal view is that consciousness is probably going to turn out to be similar in some way: eventually we'll understand it well enough that the hard problem ends up hiding in a smaller and smaller place. But what are your views here? What do you think a solution to this problem could look like?

[01:19:39] Stephen: Yeah. I mean, I can certainly understand the pull of the hard problem. I can understand why people pose it in those terms, like the original David Chalmers formulation, and people have discussed that issue since. I guess my view is that I'm a psychologist: I want to understand why things feel the way they do, including why we think there's a hard problem of consciousness.

That's what Dave Chalmers has recently described, very beautifully, as the meta-problem: essentially, the problem of explaining why people think there's a hard problem. I think that's much more scientifically tractable. We can do nice research on trying to understand why consciousness seems to hold this sway over us.

We think it's a property so distinct from other aspects of how we understand brain function. Things that seem more innocuous, like memory and decision making, we can all have a more third-person understanding of, but why is it that conscious experience seems so grabby from the first-person perspective?

And that, to me, seems like a perfectly good and interesting psychological problem to try and solve. I think some of the answer could come along the lines I described earlier: trying to understand why perceptual experience comes with such a strong sense of reality, a sense of confidence that this must be the world as it is now. And I don't think it has been fully explained, but I think once we do start explaining that, and once we start showing how that works at the level of brain function.

Then people will start to think: huh, that's a bit like a perceptual illusion, but now at the metacognitive level. And this is where I think metacognition and subjective experience are intimately connected, because essentially the kinds of things I've been describing are meta-level processes that tag a perceptual model as being reliable, as being a confident description of the world as it is right now, good enough for us to use as a rational basis for action.

Essentially, if the story I told earlier is on the right lines, there should be a tight link between tagging a perceptual model as real and then using it as a basis within which to plan and act. And this is interestingly related to data that have been emerging recently suggesting that conscious experience has a relatively slow timescale, slower than would be useful for acting in the moment.

So what seems to be happening, at least based on the psychophysics, is that we're integrating over a relatively long window of time. You can think of this as: every quarter of a second or so, you get served up a new dynamic movie of what's happening in the world. So the integration is slow, but what you experience is dynamic, because the little model that's been built includes things that are moving, and so on.

And so we're trying to think of experiments where we can take that kind of psychophysics and probe how this model that's getting built over a few hundred milliseconds connects to subjective experience and subjective report, but also how it connects to the capacity to plan and to use the model of the world you have right now as a basis for rational action. So I think all of that is going to become part of an account of the functions of consciousness: what is it doing for us, what problem is it solving? And coming back to the more metaphysical stuff we were talking about earlier, I think once we have a good account of the functional side, when we realize that it's helpful for the system to essentially say, this is definitely external reality now, because I can use it to act and plan, when we see that picture fleshed out from a third-person perspective, I get the sense that a lot of the pull of things like the hard problem will start to weaken.

Obviously that's just speculation, but my sense is that it will seem less mysterious when we realize that what's going on is potentially a functional process generating this feeling of continuous experience with a very strong sense of reality.

So those are the ideas we're pursuing at the moment.
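[Editor's aside: the claim that integration can be slow while the experienced content stays dynamic can be illustrated with a toy sketch. This is purely an illustrative construction, not the actual psychophysical model; the sampling rate, noise level, and least-squares fit are all assumptions. Noisy samples are integrated over quarter-second windows, and each window is summarized as a single "frame" that nonetheless encodes motion.]

```python
import random

def integrate_window(samples, dt):
    """Least-squares fit of position and velocity over one window:
    one slow 'frame' that nonetheless describes a moving world."""
    n = len(samples)
    ts = [i * dt for i in range(n)]
    t_mean = sum(ts) / n
    x_mean = sum(samples) / n
    cov = sum((t - t_mean) * (x - x_mean) for t, x in zip(ts, samples))
    var = sum((t - t_mean) ** 2 for t in ts)
    velocity = cov / var
    position = x_mean + velocity * (ts[-1] - t_mean)  # estimate at window end
    return position, velocity

random.seed(1)
dt = 0.01                       # 100 Hz input samples
true_velocity = 2.0             # object drifting at 2 units/s
stream = [true_velocity * i * dt + random.gauss(0, 0.2) for i in range(100)]

# Serve up one 'frame' per quarter second (25 samples): the integration
# is slow, but each frame contains position AND velocity, so the
# experienced content is dynamic.
frames = [integrate_window(stream[i:i + 25], dt) for i in range(0, 100, 25)]
for k, (pos, vel) in enumerate(frames):
    print(f"frame {k}: position ~ {pos:.2f}, velocity ~ {vel:.2f}")
```

One second of input yields only four frames, yet each frame recovers the drift velocity, which is the sense in which a slowly integrated model can still present a moving world.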

[01:25:21] Matt: Yeah, well, I look forward to reading more about the work that comes out of the lab, and hopefully a future book one day on consciousness.

[01:25:30] Stephen: Maybe, we'll see. Yeah.

[01:25:33] Matt: On the topic of books, as we come to a wrap: one of the questions I love to ask towards the end of the conversation is about books. My question for you is, which book have you most gifted to other people, and why?

[01:25:48] Stephen: Yeah. When I was thinking about this question, I wondered, should I go highbrow for the sake of the podcast and say, this is the kind of thing I like to disseminate to others? But what I've really tended to give, or suggest people get copies of, recently, well, there are two I have in mind.

So the first one is a beautiful graphic novel, I've actually got it right next to me here, by my former advisor Chris Frith and his wife Uta Frith, both incredibly influential British psychologists. What they did is essentially an autobiographical account of their life in science together, but it's also an amazing introduction to cognitive science, to cognitive neuroscience, to the way you do experiments.

There are lots of very cool descriptions of real science; it's in no way dumbed down, but it's super accessible. I've loved reading it with my kids, but also just flicking through it myself. It beautifully communicates what it means to be a scientific psychologist: to pose questions about human nature and answer them with experiments, in a fun and accessible way.

So that's one that I've often recommended. And then the other one is a book called Four Thousand Weeks by Oliver Burkeman. I think part of the reason I've become quite attached to this book is that the title comes from the idea that the average human lifespan is approximately four thousand weeks.

The book is essentially an antidote to the freneticism of modern life and to advice on how to manage your time and get ever more done. I became really attached to it when my kids were born; they're now four and two. Because I realized, as we're all very busy trying to achieve more, trying to pack more into life, well, I'm now falling back on mega-cliches, but life is short and time passes. He brings together lots of philosophical insight, but really practical insight too, about how to take a healthy attitude towards the passage of time. I think it's very accessible, but also a very practically helpful book to read.

[01:28:53] Matt: Yeah, I'll second that second book recommendation, because I've absolutely loved that book. It actually played more than a small part in my reason for pulling the trigger on starting this podcast in the first place. The idea sat on the back burner for a while, and it was shortly after reading that book that I finally got my act together and did it.

So, great recommendation. And yeah, that book is full of advice; I learned a lot from it. The next question also relates to advice. I wanted to ask what advice you'd have for people to improve their metacognition, but maybe it's more general now.

You know, what advice would you give to people as somebody who really understands how the human mind works and the human condition? What do people need to hear?

[01:29:42] Stephen: You know, one thing that learning more about how the mind works gives you is, I think, a more empathetic perspective on how other people are seeing the world. And understanding metacognition as a brain process, as part and parcel of that model-building process, makes you realize that people might be unaware of their failings, might not realize that they've gone off track in a particular way. As a society, there are good reasons to apportion praise and blame and so on. But on an interpersonal level, I think it gives you a little more tolerance for the fact that we all bring a lot of history to the way we see the world, and we build different models.

And if we can understand that each of us might be building a different model for different reasons, then I think that provides a bit more of an empathetic perspective. It can loosen the shouting matches that you often see on social media and so on.

When it comes to metacognition more specifically, it is difficult, because, as we talked about briefly, there's a limit on this nested recursion: we can't build a fully fledged model of how our own metacognition is working. That's often the root of a lot of these illusions. But I think one thing we can do is learn a bit more about how it works.

And that was part of the reason I wrote the book: to try and distill some of this science into a form that might be useful, so that if we can understand how our metacognitive systems are working, we can take steps to avoid the pitfalls when they go wrong.

So I think it's hard to give pithy one-line advice on how to improve metacognition, but a more general encouragement to learn a bit more about how it works is, I think, a good piece of advice.

[01:32:03] Matt: Yeah, sage advice. The last question; I don't know whether it will be lighthearted or not. My question is: who should represent humanity to a future AI superintelligence?

[01:32:17] Stephen: Yeah. When I was thinking about this, I thought there's a whole conversation we could have about superintelligence. I'm skeptical that we can define something that is super intelligent; I think we can certainly define things that are different. I mean, there are plenty of examples already of AI architectures outperforming us in lots of different domains.

Right. So I would rephrase the question slightly and think: okay, imagine we have an alien intelligence. We didn't build it, but we encounter it. How should we interact with that new, unknown intelligence? How should we figure it out? And I think the only thing I would say is that I would be pretty worried if it was one person.

I think we would certainly need a group, given what I mentioned about how we all come with our idiosyncratic biases and our idiosyncratic ways of understanding how the world works. So: a group, all of whom should have pretty good metacognition, so they can share and pool their knowledge and so on.

We might even want some of our engineering-side AIs to come along for the ride as well. We might want a large language model available that we could ask questions, so that we can give our alien intelligence comprehensive accounts of what humans are and what knowledge we have.

But, you know, I think having a group, at a bare minimum, would be important.

[01:34:05] Matt: It sounds like we're going to send the folks from the MetaLab.

[01:34:08] Stephen: Well, I'm not sure about that, but, um,

[01:34:12] Matt: uh, Steve, it's been an absolute pleasure speaking with you today. Thank you so much for making the time.

[01:34:17] Stephen: Thanks, Matt. It's been great fun.

Conversations with the world's deepest thinkers in philosophy, science, and technology. A global top 10% podcast by Matt Geleta.