Paradigm
Mark Solms: Consciousness, evolution, and artificial sentience


Mark Solms is a celebrated neuroscientist known for his discovery of the brain mechanisms of dreaming. His work with Karl Friston and others has had a profound influence on our understanding of consciousness.

Mark is the author of The Hidden Spring: A Journey to the Source of Consciousness, a fascinating exploration of how consciousness arises in the brain, and of the prospects of synthesising consciousness in non-biological systems.

We discuss:

  • Consciousness in non-human animals

  • Historical paradigms in our understanding of consciousness

  • The source of consciousness in the brain stem

  • The “hard problem” of consciousness

  • The possibility of engineering consciousness in non-biological substrates

  • The ethical implications of synthetic and non-biological systems

… and other topics.

Watch on YouTube. Listen on Spotify, Apple Podcasts, or any other podcast platform. Read the full transcript here. Follow me on LinkedIn or Twitter/X for episodes and infrequent social commentary.





Timestamps

From video episode:

  • 00:00 Intro

  • 01:01 Animal consciousness

  • 09:27 Could the internet be conscious?

  • 12:40 Mark's backstory

  • 16:13 Behaviourism, cognitive psychology, cognitive neuroscience

  • 30:42 Affect, perception, and consciousness

  • 41:12 Brain as a prediction machine

  • 49:03 Emergence of consciousness in the brain stem

  • 1:08:57 Why is consciousness necessary?

  • 1:26:02 Can we and should we engineer synthetic consciousness?




Transcript

This transcript is AI-generated and may contain errors. It will be corrected and annotated with links and citations over time.

[00:00:13] Matt Geleta: Today I'm speaking with Mark Solms, a renowned neuroscientist and consciousness researcher, and somebody whose work has had a profound impact on our understanding of consciousness and its relationship to behavior and the brain. Mark has written a truly fantastic book called The Hidden Spring, which is about the origins of consciousness within the brainstem and the possibility of creating artificial consciousness in non-biological substrates.

This was a great conversation, and I'm so excited to share it with you. Before we get going: if you're enjoying the Paradigm podcast, please subscribe on YouTube and give us a five star review on your favorite podcast player. This is the best way to increase our visibility and help us attract even more fantastic guests.

And now I bring you Mark Solms.

I'm here with Mark Solms. Mark, thanks for joining me. Pleasure to be here. Mark, I would love to start with a strange fact about how humans have thought about consciousness historically. When I was a young child, I had a puppy. It was my best friend: it would get excited when I came home, it would cry when I went out, it would cower in fear during thunderstorms.

And from my perspective as a child, it was completely obvious to me that this puppy had feelings. It had emotions like excitement and joy and fear. And yet, when one looks back historically, many people in history thought that animals were non-conscious automata. How should I make sense of that? You've looked so deeply into the history and the paradigm shifts in people's thinking about consciousness. How can I make sense of that disparity?

[00:02:00] Mark Solms: Well, the problem with animal consciousness boils down to one thing and one thing only: what is called reportability. Because animals don't tell us what they're feeling, this is considered problematical.

You know, I think that the reportability criterion for consciousness is a bit of a nonsense. It's very easy to write a computer program that would say "I am conscious". It's reporting that it's conscious, and it's completely meaningless.

Reportability doesn't begin to get around the problem of other minds. So what you did as a child is what we all do: you can see from the behavior of a creature, or for that matter a pre-verbal infant, that it has feelings. So how do you tackle this question scientifically?

The easiest way, and the way that we generally proceed, is that you make predictions as to what you expect will happen if the creature that you're talking about has feelings. And if those predictions are confirmed, then you have to consider your hypothesis to be upheld. That's just normal science.

There's no other way of doing it. You can never get inside the mind of your puppy, any more than you can get inside my mind. So you can't have absolute certainty; all you can have is the weight of the evidence. And the weight of the evidence is enormous. When I say make predictions, I don't just mean I predict that if I clap my hands the puppy will get a fright, or if I tickle the puppy it will like it and roll over, et cetera.

I mean much more precise predictions. I'm sorry to mention animal research; I'm not an animal researcher myself, but these things have been done. As for the sorts of predictions that we're talking about, I'll give you two examples. The first is deep brain stimulation. We know from humans, who can report their experiences, that if you stimulate a particular structure it will generate a particular feeling, and they can report it.

In fact, you can get around the problem of other minds by stimulating that structure in yourself and feeling what happens. If it's a pleasurable feeling, then you may predict that an animal in whom you stimulate the same structure will like the stimulation. In other words, it will work to get that stimulation.

And if it's an aversive feeling, you can predict the opposite: that it will avoid getting that stimulation. And those predictions are confirmed every time in every mammal that we've tested. And why I say mammals is because they have the same basic anatomy as we do. So there's nothing else you can do.

That's science. But I said I'd give you two examples, so I'll tell you another study, which is really quite startling, and which concerns not puppies but rather fish. Vertebrates all have basically the same brainstem anatomy as we humans do, and the brainstem is an important part of where consciousness and feelings come from.

So here is an experiment that I find very striking. It's called hedonic place preference behavior. The fish are in a tank; these are zebrafish. You regularly deliver food on one side of the tank, and so the fish tend to hang out there, where the food is. Then the experiment is: what happens if we put something pleasurable, but not nutritional, at the other end of the tank?

And this has been done with four substances: morphine, cocaine, nicotine, and amphetamines. And the prediction is that if these drugs make the fish feel good, as they do with us, then they'll hang out on that side of the tank. And that's exactly what happens. I find that weighty evidence that even fishes have feelings.

The weight of the evidence is overwhelming for not only all mammals, but all vertebrates. And we can do no better than the paradigm that I've just outlined. In other words, you make falsifiable predictions as to what the animal would do if it had feelings, and not only feelings in general, but the particular feelings that you expect.

And the predictions are always confirmed when it comes to vertebrates. Yeah.

[00:06:54] Matt Geleta: So it's clear now, through those experiments, that we've got fairly compelling scientific evidence to support the inference, the prediction, that these animals are conscious. But even in the absence of that evidence, as a child my intuition was that the animal was conscious, and I don't consider myself to be smarter than anyone in history.

And we're talking about giants like Descartes. So what is the change that caused some of these great thinkers of the past to have an intuition so different from that of a child living today?

[00:07:33] Mark Solms: Well, you know, I don't think that we should dismiss intuition, based as it is on empathy: a feeling your way into the subjectivity of whatever it is that you're interacting with.

I presume you're talking about yourself at age three, four, five years old. At that age, a child doesn't intuit that a rock has consciousness, because from that point of view a rock behaves in a very odd way. But puppies, the way that they behave, we intuit that they're conscious. Because if you were to unpack the unconscious reasoning in the child, it is: everything that I predict the puppy would do if it had feelings is what it's doing, and I infer from my own experience that that kind of behavior feels like this. We know where in the brain the structures are that generate those feelings, and we know those structures are in dogs' brains. The child doesn't know all of that, but what they're basing their intuition on is not something to be sneezed at.

When it comes to Descartes, we need to worry. Don't forget, Descartes is famous for his philosophy of doubt. His whole neurotic concern was: what can I be absolutely certain of? Well, you can't be absolutely certain of just about anything that we know in science.

And so he famously concluded that the only thing he could be certain of was that he existed. So, you know, happily, scientists don't have the same high bar as the philosopher of doubt had.

[00:09:27] Matt Geleta: Yeah, it is interesting though, because he chose for his doubt to fall in one direction and not the other. And in this case, it challenges my own intuitions when I look at things that I have an intuition are not conscious.

So for example, if I look at something like the internet, I have the intuition that it's not conscious. And it does make me wonder whether I can trust that intuition. What are your thoughts on questions of that nature? Could, for example, the internet be conscious?

[00:09:58] Mark Solms: Well, my intuition is the same as yours, namely that it's not conscious. And as an aside, the fact that some theories of consciousness, like for example integrated information theory, would require us to attribute consciousness to the internet makes me worry about those theories.

So I think that the best one can do is admit to degrees of confidence. When it comes to other human beings, we have pretty close to 100% confidence that they have the same experiences as we ourselves have. When it comes to other primates, we have 99.9% confidence, and when it comes to other mammals, 99% confidence. When it comes to other vertebrates, for the reasons I've just outlined, I have a very high degree of confidence that all vertebrates have raw feelings. They have basic consciousness, because all the evidence suggests that they do.

Once you move beyond vertebrates, I become less confident. That doesn't mean that I do not believe they're conscious. For example, there's good evidence that the octopus is conscious. There's even some evidence that some crustaceans are conscious, but I'm less confident when it comes to them.

Once you move outside of the realm of animals altogether, and you start talking about plants, let alone the internet, my confidence becomes very low. It doesn't mean that you can prove that they're not conscious. We just have to live with that; science is a probabilistic business.

For me, the main thing is that we should not set a higher bar for scientific theories of consciousness than we do for anything else in science. Not that I'm a great admirer of Popper, but just about all natural scientists do their work within a sort of Popperian paradigm.

And we shouldn't do work on the problem of consciousness within any other paradigm. The same rules must apply.

[00:12:40] Matt Geleta: Yeah. You're touching on my philosophical gripes with the hard problem, which I hope we will get to. You've actually written a lot in your book about the prospects of engineering consciousness in non-biological substrates, and you go into some depth, not just into the science, but also into the ethical questions surrounding that. I would love to get to that, but I think we should warm up to it. Before we do, I would love to just understand what drew you towards studying these questions.

What set you on the path that you're on to looking at these questions?

[00:13:13] Mark Solms: Well, you should be careful about what questions you ask, because you might get very long answers. The full answer to that question actually takes me all the way back to my childhood.

You mentioned your childhood and your puppy. In my case, when I was a child, my older brother sustained a brain injury, a rather serious brain injury, and it changed him dramatically as a person. Although he looked the same, he was not the same person. And I think we underestimate children.

Children think about these things, but I think I was confronted, perhaps a little more dramatically because of those events, and perhaps earlier than most of us, with the sort of obvious fact that my brother as a person, the person who I played with and knew and interacted with, was radically changed by damage to his brain. That means that somehow these two things are bound up with each other. And if they're bound up with each other in him, then the same must apply to me. So these are the sorts of things that I thought about as a child. Now, I didn't decide there and then that I'm going to become a neuroscientist, but I think it's pretty likely that that is the origin of my interest in the problem of consciousness, insofar as: how can we understand its material basis? What are we to think of the fact that our sentient being is somehow bound up with the functions of a bodily organ?

So that was really the origin of my interest in it. That's the sort of distal origin. A more proximal answer to the question is that when I first started doing neuroscientific research as a student, I was studying brain mechanisms of sleep and dreaming, and the central feature of dreams is that they are a state of consciousness that punctuates the unconsciousness of sleep.

So right from the very beginning of my scientific career I was working on problems of consciousness, and I've never stopped. Perhaps I could also say, in closing my long answer to your question: I don't understand why anybody studies anything else. What could be more interesting?

[00:16:13] Matt Geleta: Yeah, I agree with that. You touched on this fascination with the material basis of consciousness. In the early chapters of your book, you actually step through several paradigms in psychology research, from behaviorism to cognitive psychology to, finally, cognitive neuroscience.

And I find it fascinating that the thinking about how important consciousness is, and how it relates to material substances, changed so drastically through each of those shifts. Could you step me through what those paradigms were, and what the shifts in thinking about consciousness were in those stages?

[00:16:54] Mark Solms: Yes. I think the first thing to say is that consciousness is an embarrassment to science, because it is first and foremost something subjective. What is consciousness, if not a subjective phenomenon? And science aspires to objectivity. So how do you study this thing, which clearly exists as part of nature? Going back to what we were saying a few minutes ago, even Descartes was willing to concede that this he can be sure of: that he actually experiences, and therefore he exists. It is the most immediately, empirically demonstrable fact that consciousness exists. And yet we can't study it objectively in itself; we can't study it objectively because of its nature.

So I think that is the single most important fact when looking at the trajectory of approaches to, or attitudes to, consciousness in the history of psychology. When I trained, it was in neuropsychology; that was and remains my field. We were just coming out of the behaviorist paradigm.

And let's just remind ourselves of what that was. It was absolutely dominant in academic psychology; experimental psychology departments all over the world were behaviorist for decades. And what that meant was that you can only study behavior. In other words, you can only study the external physical manifestations of mental life.

You can only study the external inputs, the stimuli, and the external outputs, the responses, and you could say nothing about what's going on inside what was deemed to be a black box: the black box of the mind. And what that means, literally, is that the psyche was excluded from psychology. I mean, it's bizarre that psychology, the study of the mind, decreed that the mind may not be studied.

So that's what behaviorism was. The cognitive revolution, which was a very welcome departure given the strictures of behaviorism, at least conceded that there is such a thing as a mind. In other words, there's something going on inside the black box, and we can study it. But even with the cognitive revolution, the subjective nature of the mind remained a real problem.

What you're actually studying (and this was what was becoming dominant when I trained, the cognitive approach) is a kind of third-person abstraction called the functions of the mind. So although, thank heavens, we were allowed to talk about the functions of the mind, they nevertheless were third-person, abstract, objective descriptions of an instrument, rather than taking the point of view of the system itself. And that's still pretty much the ruling paradigm in cognitive neuroscience today. But at least, unlike when I trained, it is not, well, not quite forbidden to ask about consciousness itself. I mean, when I was a student, when I started off, bear in mind why I studied neuropsychology, what I told you about the personal origins of my interest in this.

The interest was heartfelt; I really needed to understand how consciousness works. When I spoke to my professors, who were teaching me about the brain mechanisms of language and memory and perception and so on, I asked them: but why do we remember memories? Why do we actually have to have a reminiscence, as opposed to all of this information flow that you're teaching us about?

I was literally told not to ask questions like that, and I was kindly advised that such questions are bad for your career. So that was the ethos then. That's changed; now consciousness is an entirely respectable topic, and it has been since the 1990s. I think Francis Crick's book The Astonishing Hypothesis was 1994.

And Damasio's book (what was it? Descartes' Error) was also in the mid-nineties. And then there is Chalmers's paper in which he coined the term "the hard problem". All of this focused attention on not just the mechanisms of consciousness, but the actual fact of consciousness.

So it is now an entirely respectable topic, and things have developed very favorably. But at the time that I trained, remember, this was the early 1980s, these developments that I just enumerated hadn't yet happened. And for that reason, I was so frustrated by the lack of psyche in neuropsychology that I took the unusual step of deciding to train in psychoanalysis. Psychoanalysis, for all of its considerable faults, was the one branch of, or approach to, psychology during the 20th century that placed subjective experience at the center of its methodology and its theorizing. So, these developments, Crick and Damasio and Chalmers and all of that, not yet having happened, that was the way that I dealt with the frustration.

My colleagues at the time, when I told them I was training in psychoanalysis, said that's like an astronomer studying astrology.

[00:24:16] Matt Geleta: Yeah, but I just find it a bit strange that historically there was a resistance, or as you said even an aversion, to including subjectivity, or anything relating to subjectivity and phenomenology, as a core component of psychology and neuroscience. Do you have a sense as to where that came about, and why that was the case? What was the origin of that way of thinking?

[00:24:36] Mark Solms: I think it is just what I said a few minutes ago. It's just because the mind is subjective, you know. Objective science can't use its standard methods on something that's not an object.

If it's a subject, it's a problem for science. So for me, the issue simply is: is it, or is it not, part of nature? Does subjectivity exist in the universe? As I said earlier with reference to Descartes, it's the one thing we can be absolutely certain exists. So we just have to incorporate it into science.

You don't adjust your object of study to your methods; you adjust your methods to your object of study. So I don't think there's any mystery as to why it has been such a problem for science. It simply is because minds are subjective things.

Consciousness being a subjective state, you can only ever observe your own. There is no possibility of observing anybody else's mind. So how do you do objective science on a thing like that? The approach that I took, which is not unique to me (the very word "neuropsychology" implies it, and it's why I studied the subject), is a combination of observational perspectives. Given the overwhelming evidence that conscious experience is bound up with the physiology and anatomy of the brain, this conjunction gives us a basis for a methodological way around the conundrum of the subjective nature of the mind. Because you can monitor its objective manifestations: the mind, in my way of thinking, when looked at from the outside as an object, simply is the brain. So there is an objective manifestation.

You can see, touch, and listen to the brain. By listen, I mean you can put electrodes in and listen to the spiking of neurons and so on. You can observe it objectively. And then all you need to do is correlate that with the subjective reports. And then, methodologically, you've gotten around the problem of studying something subjective.

In other words, if you have ten people and you stimulate the periaqueductal gray, and they all say, oh, that's painful, then you've got an objective thing (you're stimulating this place, this object) and every subject reports the same subjective state in consequence of it.

Why should you doubt that that subjective state is bound up with the physical event of stimulating the PAG?

[00:27:58] Matt Geleta: Yeah, for sure. I mean, the sort of common philosophical objection to something like that would be the imagined possibility of a philosophical zombie, which I've always taken to be quite analogous to the problem of induction as it pertains to any other problem of science. There is some fundamental level at which you could never know the answer to something: we could never know that all the laws of physics we've inferred haven't just held due to chance, just because of some world path we happen to be on.

But, um, it is an objection.

[00:28:33] Mark Solms: That's why, you know, I think that we must worry about Descartes. It's a very weird approach to the world, to say: how can I be absolutely certain of anything? But I repeat that science is not about absolute certainty. It never has been about absolute certainty.

It's about the weight of the evidence and degrees of confidence in your findings, findings based on the experimental method: I have a hypothesis; on the basis of that hypothesis, I formulate a prediction that, according to my hypothesis, if I do this, that should happen.

And if that is what happens, then you provisionally maintain confidence in your hypothesis. That's the only way we can proceed, not only in the science of consciousness, but in science altogether. That's how it works. It's not absolute certainty; you never see the face of God. We never find the truth, but what we have is less ignorance. It's more likely to be true as we proceed in science, and we do make progress in science. If you look at the history of any scientific discipline, you see that over each epoch we have theories that can explain more of the evidence, and that are therefore superior to the theories of the previous epoch.

But that doesn't mean the problem is ever solved absolutely. And as you were just saying about physics, which is held up as the kind of prime example of a natural science: heaven knows, in physics today there's a hell of a lot that's not known, and that we can't be completely confident about, but we do the best we can.

It's all we can do. Yeah.

[00:30:42] Matt Geleta: Yeah, agreed. And it actually brings me to an idea in your book: the mind as a sort of prediction machine, with perception, or the external world as we perceive it, as a prediction of what we think is out there. I'd love to get to that in a second, but one tripping point that might appear here is confusion between a couple of terms.

Perception, affect, and consciousness: terms which I think could trip up this conversation for people who have not looked into this. Could you quickly run me through each of those terms, how they differ, and why people confuse them?

[00:31:20] Mark Solms: Well, let's start with consciousness.

I think that most people, and I don't mean only scientists here, most people generally take the word consciousness to denote what they themselves experience. So consciousness is taken to be this, you know, what I'm experiencing now. And I think even that is problematical, for the reason that this is what human consciousness is like, but since there's no good reason to assume that only humans are conscious, I think that the term should not be reserved only for human-type consciousness. It should be more inclusive; it should include all consciousness there is. And when I say "for me": given that the term is used by different people in different ways, all you can do is be precise about your own definition of it.

I use Tom Nagel's definition, and I don't do it arbitrarily. Chalmers's paper on the hard problem, which had such a big impact on the neuroscientific community, built very directly on Nagel's famous 1974 paper, "What Is It Like to Be a Bat?". And in that paper, Nagel says that an organism is conscious if there is something it is like to be that organism, something it is like for the organism.

That, I think, is a nice, simple, inclusive definition of what consciousness is. If there is something it is like to be an organism, something it is like for the organism, then that organism is conscious. So that's the definition I use for consciousness. Now, moving to affect.

Again, sadly, this is a word that is used to denote very different things by different people. So again, all I can do is tell you what I take it to mean. I take it to refer to the phenomenon of feeling. So affect is an abstract term for that function which we experience as feeling.

And I use the word feeling for a very good reason, namely that feelings are always felt. You cannot have a feeling that you don't feel; if you don't feel it, it isn't a feeling. So that's what I take the term affect to refer to. It's an abstract noun denoting this phenomenal state called feeling.

Now, that requires me to further unpack what I mean by feeling. But what does not need unpacking is the fact that you feel it. In other words, it's an intrinsically conscious state. And you'll see how that links to my first definition of consciousness. Nagel says that the organism is conscious if there is something it is like to be that organism.

So I'm saying of feelings that there always is something it is like to have a feeling. Feeling, I think, is the fundamental property of consciousness, and affect, as we'll no doubt get to talking about soon, is the fundamental, the elemental, form of consciousness.

But what does need unpacking about the term is not that you feel it, but that it has two, well, perhaps three specific properties. The one that goes without saying is that it is subjective. Going back to Nagel's definition: he says an organism is conscious if there is something it is like to be an organism, something it is like for the organism.

In other words, the organism itself experiences its consciousness. The same applies to feeling: it's subjective. So that's not something unique to feeling. But the second cardinal feature, and I think this is a pivotal one, is that feelings are valenced. In other words, they have a goodness or a badness about them.

Feelings, in other words affects, are always either pleasurable or unpleasurable, somewhere along that continuum. And I need to emphasize that this is intrinsic to affect. It's not something that we attribute to the affect after the fact.

What defines its being an affect or a feeling is the fact that it's valenced: it has a goodness or a badness. Now, there's a hell of a lot I could say about why that should be the case, but the important thing is that it's intrinsically valued. It has value; it has valence. And remember, that's second to it being subjective.

So it has value for the organism. Something it is like to be an organism, for the organism, means it has value for me. That's what an affect is: it has a goodness or badness, to and for me. And then it also has a quality, by which I mean that there are different types of feeling.

Pain is unpleasant, fear is unpleasant, sleepiness is unpleasant, and so on. But they're unpleasant in very different ways. It feels quite different to be in pain from how it feels to be depressed, from how it feels to be thirsty. So that's the third property of feelings: they have a particular categorical quality.

And categorical is important: categorical variables cannot be reduced to a common denominator. The essence of qualia is what we're talking about there. So feelings are subjective, valenced qualia; that's what they are. Perceptions do not have any of the properties that I've just described intrinsically.

In other words, perception refers to an external object, first of all. We can all say that is a cat, that thing that we are perceiving; it has an external referent. Feelings can have an external referent, but it's not intrinsic to what feeling is all about.

Secondly, perceptions are not intrinsically valenced. One person looks at that cat and says, it's beautiful, it's cute, I love it. The other person says, cats, they make me sneeze, you know? So the cat in the perception, in itself, is not evaluated. We feel our way into the perception and attribute value to it.

But perception in itself is not intrinsically valenced. Now, the third property of feeling, that it has quality, qualia — intrinsically, feelings are qualitative in that sense. Even that does not apply to perception. There is abundant scientific evidence that you can perceive unconsciously.

So perception is not intrinsically qualitative; it's not intrinsically conscious. And this has massive implications for the hard problem, because the whole paradigm, starting with Crick — when Crick said, let's look for the neural correlate of consciousness — alighted on visual perception as the model example. And to this day, it remains the model example.

The idea was: if we can find the neural correlate of conscious vision, of visual perception, then we will have gone a long way towards solving the problem of what consciousness is. And I think that's a very bad place to start, for the reason I've just given: visual perception does not have to be conscious.

So why start with a function that is not intrinsically conscious? If you want to understand consciousness and seek its neural correlates, rather start with feelings, which are intrinsically conscious things. But that's another story. There, I've defined those three terms for you, at least in terms of the way I use them.

[00:41:12] Matt Geleta: Yeah, I think that's very helpful, because you're totally right about perception historically being seen as the most fundamental component of consciousness. I actually found it very unconvincing the first time I took a philosophy course and was taught the story of Mary the color scientist stepping out into the light and perceiving colors, with the question: has Mary learned anything?

And for me, again, the intuition was quite obvious: of course she has learned what it's like to feel, what it's like to see red or to see these colors. But somehow this has stuck around in the philosophical canon for so long. One thing you claim in your book, I take it, is that when Mary steps out, what she's seeing is not exactly the external world as it is, but some sort of prediction

of what the external world is, based on information that she has internally and the signals she's getting from the external world. And that conscious experience — maybe she's feeling positively about the sight she's seeing — is signaling some information to her, maybe about a miscalibration between an expectation and a reality.

Could you expand on your description of the brain as a predictive mechanism, and the role that affect plays in that mechanism?

[00:42:38] Mark Solms: Yes. Let me go back a step. First of all, the focus on perception as our model example of consciousness — it's not hard to see why we did that.

Because, remember what I said earlier in our conversation: most people's definition of consciousness is this thing that I'm experiencing now, forgetting that we are just one species that has a particular type of consciousness. And in fact, we have the most complex type of consciousness.

And our consciousness is dominated by perception; visual perception is the dominant property of our consciousness. So the intuition was that consciousness flows in with perception. And once we started to be able to do this sort of thing, we traced the nerves, the neuronal pathways, from the end organs — from the eye, from the retina — and asked, where do they go? And where they go is the cortex.

The same applies to all the other perceptual modalities — the ears and the mouth and the nose and the skin — they all project to the cortex. So it was perfectly reasonable to take this to be the right place to look in order to begin to grapple scientifically with the problem of how consciousness arises and what it's for. It therefore came as a gigantic surprise in 1949, when Magoun and Moruzzi discovered, entirely accidentally, that in fact consciousness does not flow in with perceptual information.

In fact, consciousness is generated endogenously, in the absolute core of the brain. Consciousness comes from within. Consciousness is applied to the incoming information from our sensory organs. So now, remember, you're asking me about Mary.

That's what goes on with Mary. Mary is receiving information from the outside, which is intrinsically unconscious. And she then feels her way into that information processing — she, like you, like me, like all of us. There's absolutely no question — the evidence is as close to conclusive as any evidence can be — that the cortex is rendered conscious to the extent that it is modulated by the reticular activating system of the brainstem.

So consciousness is intrinsically subjective. It comes from within me, and I use it to feel my way into my perceptions and cognitions. Now, the next point I have to make is a very obvious point, but it's kind of alarming when we stop to think about it. It is simply a fact that what we are receiving from our sensory organs is nothing other than spike trains.

In other words, neurons firing, yes or no. So it's ones and zeros. That's all we're getting: firing or not firing. That's what a spike train is. Light doesn't flow in through your eyes and into your brain. It's not light, it's not sound, it's not taste.

It is spike trains. It's ones and zeros. That's all that it is. So when we — that is, people working within the predictive processing paradigm, with whom I am aligned — say that perception is an inference about what's out there, I don't see how it can be anything other than that.

It can only be an inference. It's a construction. I'm getting all of these ones and zeros, and on the basis of that, I have to build up a model of what is causing these spike trains. What is causing this pattern of information that's coming in? What causes it?

What would explain why I'm getting this particular pattern, not only in the here and now, but over time? The cause-and-effect business we're talking about obviously unfolds over a temporal dimension. So when I, like all of the others working within this paradigm, say that what we perceive is an inference, I'm saying something pretty darned obvious.

It's just not intuitive, because we naturally experience what we are seeing to be what's out there. But it can't possibly be what's out there. It has to be something created by the brain to explain what's out there, to infer what's out there. And there's lots of

evidence — binocular rivalry, for example — that that's what we do. I just need to conclude by reminding you that, because consciousness comes from within, and, as I'm arguing, consciousness is fundamentally affective — although I haven't yet had an opportunity to tell you why I believe that.

The imbuing of this construction, this perceptual inference, with quality, with consciousness, is this endogenous feeling state being applied to that construction. There's my answer.
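Mark's point here — that the brain receives only ones and zeros and must infer the hidden cause behind them — can be sketched as a toy Bayesian update. This is my own illustration (the causes, spike probabilities, and function names are invented for the example), not a model from the episode:

```python
# Toy illustration: the "brain" receives only binary spikes and must infer
# which hidden external cause generated them, via Bayes' rule.

# Two hypothetical causes, each with a different probability of driving a spike.
SPIKE_PROB = {"cat": 0.8, "no_cat": 0.2}

def infer_cause(spikes, prior_cat=0.5):
    """Update the posterior probability of 'cat' from a binary spike train."""
    p_cat, p_no = prior_cat, 1 - prior_cat
    for s in spikes:
        # Likelihood of a spike (1) or silence (0) under each hypothesis.
        like_cat = SPIKE_PROB["cat"] if s else 1 - SPIKE_PROB["cat"]
        like_no = SPIKE_PROB["no_cat"] if s else 1 - SPIKE_PROB["no_cat"]
        p_cat, p_no = p_cat * like_cat, p_no * like_no
        total = p_cat + p_no  # normalize so the two posteriors sum to 1
        p_cat, p_no = p_cat / total, p_no / total
    return p_cat

# A spike train that is mostly ones pushes the inference towards "cat".
print(round(infer_cause([1, 1, 0, 1, 1, 1]), 3))  # close to 1.0
```

The point of the sketch is that the inferring system never touches the cat itself: it only ever sees the spike train, and the "percept" is the best available explanation of that pattern.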

[00:49:03] Matt Geleta: I think a lot of people would be caught off guard by two things you said there. The first is the statement that consciousness comes from within and emerges as a sort of overlay. I just think that component is

not really well understood, and quite frankly, quite controversial. And then the second component, which you alluded to: you haven't yet had the opportunity to talk about how affect comes into it. I want to get to the affect part soon, but let's start with that first point.

What is the compelling evidence that consciousness arises from within? And also the evidence that it comes from within the brainstem, and not some higher functioning part of

[00:49:51] Mark Solms: the brain. Right. Well, as I said, it was discovered in 1949 by Magoun and Moruzzi, and the technology they were using at the time was the EEG, the electroencephalograph, which measures the level of cortical electrical activity, which was assumed to correlate with consciousness.

And there was good reason for assuming that, because when you measure with the EEG as you go to sleep — in other words, as you lose consciousness — the fast waves become slower and slower. In other words, there's a reduced level of arousal, of electrical activity, going on in the cortex.

So Magoun and Moruzzi were studying cats — again, I'm sorry, this long tradition of animal research.

[00:50:57] Matt Geleta: Physicists used to study cats as well, in their minds at least. Yeah.

[00:51:06] Mark Solms: Yes, Schrödinger's cat. Exactly. So their prediction was — and I'm speaking about ghoulish experiments — that if you deprive the cat of any sensory inputs, it should fall asleep.

It should lose consciousness, because consciousness flows in with perception. And that's not what happened. In fact, you can have not only a cat receiving no external information — in other words, having been disconnected from all of its sensory end organs — but a cat with no cortex at all.

This experiment has been done not only with cats but with dogs, rats, a great variety of mammals. If you remove the cortex early on in life, the animal doesn't fall into a coma; it's clearly conscious. It wakes up in the morning, goes to sleep at night, but it also shows all the basic emotional responses that you would expect it to.

It is startled, it shows fear behaviors, it shows anger, aggressive behaviors; they play, they copulate, they raise their puppies. So all of this is possible without any possibility of there being conscious perception, because the organ of conscious perception, namely the cortex, just isn't there.

Now, going back to Magoun and Moruzzi, this was a whopping great surprise. If consciousness doesn't flow in with perception, and it isn't something generated by the cortex — because you don't need cortex in order to be conscious — then where does it come from? What they showed was two things.

If you damage the reticular activating system, which is a densely interconnected set of nuclei deep within the brainstem, the animal goes into a coma immediately. And when I say the animal, I mean every vertebrate animal, every animal that has a reticular activating system: if you damage it,

the animal goes into a coma. Remember what I said: that doesn't happen when you damage cortex, and cortex is a big fat expanse of tissue. The reticular activating system is a small, densely packed region of the brainstem. You damage that tiny little area. In fact, it has in more recent years been demonstrated that the minimal area of damage necessary in human beings to produce coma is two cubic millimeters

of reticular activating system — and I'm referring here in particular to a part of it called the parabrachial complex. So if all that's needed to produce a coma in a human being is two cubic millimeters of damage to that part of the reticular activating system, then clearly this is where consciousness is being generated.

And if you look at the anatomy, the reticular activating nuclei send long axons up to the cortex, so clearly they're doing something to the cortex. And this was the last observation of Magoun and Moruzzi: if you sever those connections — in other words, you make a small incision above the reticular activating system so that it is no longer able to have

the influence that it has on the cortex — that too produces absolute coma, absolute loss of consciousness. So this is what I mean. You said it's startling, what I just said, and you said it's frankly controversial. But what I'm saying to you is incontrovertible: you have consciousness without cortex.

So clearly it can't be generated in the cortex. And you have overwhelmingly powerful evidence that it is generated in the brainstem, because if you damage that part of the brain, unlike the cortex, all the lights go out; and if you disconnect that part of the brain from the cortex, the cortex loses its consciousness.

So consciousness is clearly being supplied by the reticular activating system. That's the evidence. Surprising as it is, counterintuitive as it is, the evidence is absolutely overwhelming. As for the way Magoun and Moruzzi dealt with that surprise — I'm now moving on to the question of affect.

And Matt, you know, when I say that this is bound up with affect: I've said why we have to accept that consciousness is generated endogenously in the ancient brainstem core, that this is where it comes from; it does not come from outside. Then I say that, in addition, its fundamental property is affect, is feeling.

Now I'm speaking about why I say that. So, back to Magoun and Moruzzi. The way they dealt with the surprising finding that consciousness is generated in the brainstem was to say: it's something like a power supply. The analogy that's often used is the television set, which has to be plugged in at the wall.

If you disconnect the television set from its power supply, then of course it goes into a coma — in other words, it no longer produces any television; you can no longer see any programming. That's not because the power supply is where televisual content really comes from. It's a prerequisite: in order for the television set to do its televisual stuff, you have to plug it in at the wall.

That's a necessary precondition. So the cortex is equated with the television set, and the reticular activating brainstem is equated with the power supply. That's the model Magoun and Moruzzi bequeathed us. They use the terms level versus contents. They say the brainstem provides a level, a sort of background arousal, a kind of blank wakefulness with no content.

It's just booting up the system, like the electricity supply; and then the content is what the cortex provides. In other words, what the brainstem provides is something purely quantitative. It doesn't have phenomenal quality. The qualia, the phenomenal, qualitative contents of consciousness, are supplied by the cortex.

That was the way they described it. Now, everything I've said so far, I don't think can possibly be controversial, even if it's counterintuitive; the evidence is just there. This next part is the part that's controversial, and I can see why.

I'm saying that that so-called power supply is not blank, is not without qualia, is not without content: it feels like something. And remember, my definition of consciousness is that an organism is conscious if there is something it is like to be that organism. I'm saying an organism with no cortex feels like something.

There is something it is like to be that organism. Why do I say that? Well, I've just told you: decorticate rats, decorticate dogs, decorticate cats — and decorticate human beings. Of course, nobody's done the experiment of removing the cortex from a human being, but there are, unfortunately, such cases, and not that rarely.

They are children who were born without cortex. It's called hydranencephaly. And these kids behave just like what I said about those experimental animals. They wake up in the morning, they go to sleep at night, and they show a wide range of emotional responsivity. Now let me just pause there for a second and remind you what I said about the canons of science.

If you have a hypothesis, if you have a theory, then there have to be falsifiable predictions that flow from that theory. If the cortex were the organ where conscious content and conscious quality are generated, and if the brainstem is just a power supply, then a child who has no cortex should at most have blank wakefulness, if not be in a coma.

I mean, I don't know what blank wakefulness really is, but in neurology we have a condition called the vegetative state. And the vegetative state — in fact, the term is slowly being replaced, because it sounds derogatory, you know, the patient's a vegetable — is being replaced by another term, which is non-responsive wakefulness.

So these are patients who have the autonomic sleep–wake cycle. In other words, in the morning they wake up in the sense that their eyes open, and at night their eyes close. But they show no responses to anything. They are vegetative in that sense — just as a cabbage doesn't respond, so these patients don't respond.

They just open their eyes and close their eyes. That's all that they do. So that, I think, is the closest thing we have in medicine to blank wakefulness. The prediction, if the cortical theory is correct — the idea that the qualities and contents of consciousness can only be and are generated by the cortex — is that these children should be in a vegetative state.

That's the prediction. If that prediction is disconfirmed, the theory is wrong. And that prediction is disconfirmed. Those kids not only wake up in the morning and go to sleep at night, but they are emotionally responsive beings. Everything I said about the animals applies to them. They show all the basic emotions.

They show fear, they show rage, they show joy, etc. So where are these things being generated? They've got no cortex, but they've got perfectly intact brainstems. Now we come back to the problem of reportability, where we started with your puppy. One function that most certainly is cortical is language.

So there's no way these kids can tell us what it's like to be them. The behavioral evidence is that there is something it's like to be them, because they're showing all the behaviors you would predict in response to emotional stimuli. They show the emotional response you would expect, but they can't tell us: yes, that's scary, or yes, that's ticklish, or yes, that's irritating, or whatever.

They can't say it, like your puppy. So how do we get around that? Well, on my theory, the affect, the feeling, is being generated by the brainstem — because the behavioral evidence certainly seems to be that this arousal, this activation, is not blank.

It has content and quality, and it seems to be emotional, affective, in quality. If that theory is correct, then I must come up with a falsifiable prediction from it, and we must test it. So one such prediction is that if you stimulate those structures in a human being who has intact cortex, they should report intense feelings.

Not just increasing or decreasing wakefulness, but changes in the quality and content of their feelings. And that prediction is confirmed — strongly confirmed — if you stimulate these structures in the brainstem. And of course, you don't do it willy-nilly.

It's only done in cases where there's a medical reason why you have to be poking around in the brainstem with an electrode. But over the years we've accumulated lots of observations of this. The patients report intense emotions, and they report a wide variety of emotion, depending on where the stimulation is performed.

If you stimulate cortex, patients have perceptions and thoughts and so on, and they have kind of memories of feelings, but you get very little affect. Stimulate these brainstem structures, and you get intense, overwhelming affects, and you get a wide range of them.

So, prediction confirmed. Now, what about another line of evidence? If you image the brain with positron emission tomography in people who are in intense emotional states, the prediction from my theory would be that the activation — in other words, the neural activity — would be most intense in the brainstem, in the subcortical brain.

And again, that prediction is confirmed. That's exactly what you see. And this is the paradigm we use in functional neuroimaging all the time: which part of the brain is activated when you have this mental state? Well, when the mental state is one of intense affective feeling, the part of the brain that's activated is the brainstem.

And I could go on, but I'll just mention one last bit of evidence, which is that the drugs we give to psychiatric patients — in other words, patients with emotional disorders, patients where we're trying to change the quality and content of their feelings — act on the neuromodulators that are sourced in the reticular activating system.

So, for example, the famous antidepressants, the SSRIs, like Prozac, increase the availability of serotonin. Serotonin is sourced in the raphe nuclei of the reticular activating system. Anti-anxiety drugs reduce the level of noradrenaline; noradrenaline is sourced in the locus coeruleus complex of the reticular activating system. Antipsychotics damp down dopamine.

And dopamine is sourced in the reticular activating system; in fact, the particular type of dopamine we're talking about is sourced in the ventral tegmental area. That's what is blocked by antipsychotic drugs. So doctors who are treating feelings give drugs which act on the arousal systems

emanating from the reticular activating system. If it were just a power supply, then you can understand why anesthetists might be interested in this part of the brain, but why would psychiatrists be tinkering with it, if it weren't that it's responsible for feeling? So that's — I'm sorry — a very long speech from me, telling you, first of all, why, despite it sounding odd, I say, and in fact we all have to say, that consciousness is an endogenous property of the brain, generated from its brainstem core.

And secondly, the evidence that this source of consciousness is not just a power supply, but has a quality and content — and that quality and content is affect, is feeling. And since everybody agrees that this reticular activation is a prerequisite for consciousness.

If you also accept that it is affective, then we come to the conclusion that I've come to, which is that affective feeling is the foundational, elemental, basic form of consciousness.

[01:08:57] Matt Geleta: I mean, it's certainly a very convincing story, so I think it's good that we went through it. But it does immediately pose the question as to why.

Why is consciousness necessary for all that functionality? It feels like almost something extra added to a set of behaviors. Why is affect needed in this mix? How do you address that problem? It's the type of problem that feels almost

external to the standard scientific approach to these sorts of questions. How do you think about that problem?

[01:09:35] Mark Solms: Yeah. So let's go back to Mary again. This was Frank Jackson's whole shtick: Mary knows everything there is to know about the physical, physiological, information-processing mechanisms

of visual perception, but she doesn't know what it is like to see, or to see color. And that's what you're alluding to, I think, when you say it seems like something added on: it's not part of the causal mechanism. The whole of the hard problem of consciousness pivots on that.

On the fact I've just stated: the normal scientific or neuroscientific approach — which is just the application of the normal scientific approach to the brain — is that in order to explain a phenomenon, all you need to do is reduce it to its causal mechanism.

So, using vision as our model example: if you reduce it to its core causal mechanism, you will understand how it works. But the problem, as illustrated by the example of Mary, is that you can understand everything about how vision works, and it doesn't explain why it feels like something to have visual experience.

There's nothing about the mechanistic account of visual perception that predicts that it should feel like something to have a visual experience, nothing that prepares you for the fact that it does, and nothing that explains what it is like to have a visual experience.

So all of this seems to come from some other place; it's not part of the causal mechanism. That is the hard problem: where does the what-it-is-like quality of experience come from? So, having reminded you of what the problem is — and please note well, I was talking about perception, which I'm saying is not where we should have started.

Perception is not intrinsically conscious. What is intrinsically conscious? The properties of the reticular brainstem. And the function of the reticular brainstem is to produce consciousness. That's what it's for; it has no other function. It is the consciousness generator — hence its name, the reticular activating system.

It activates and arouses into consciousness the otherwise unconscious cognitive and perceptual processes of the higher brain. Now, your question is: why does it have to feel like something? These reticular brainstem structures are part of a network of brain mechanisms that regulate homeostasis.

Homeostasis is a very basic, probably the most basic, biological mechanism. It's probably the thing that best distinguishes living things from non-living things. Homeostasis works like this: we living things need to occupy particular states. In other words, we need to be at certain temperatures.

We need to be at certain levels of hydration; we need to have certain levels of energy supply, nutrition, and so on. If we don't have those things, we die. Unlike a rock, which just lies there, we have to work at it. We can't just dissipate, we can't just explore all possible states; we need to be in those very limited states that are compatible with life, with our phenotype.

So, for example, I have to be between 36 and 37.5 degrees Celsius; that's where the human body needs to be. If you get much hotter than 37.5 degrees, you're in trouble: you overheat and you die. So feeling tells you. When I just said, if you go over 37.5, well, what you start to feel is hot.

Likewise, if your hydration levels are too low, if you're moving out of where you need to be in terms of the water supplies in your body, you feel thirsty. If you move out of your required oxygen range, you feel suffocation alarm, or respiratory distress. That's what the feelings do. Feelings tell an organism that it is moving out of where it needs to be, and this is intrinsically bad to the organism, for the organism, and only for the organism.
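
The logic of an extended homeostat, as described here, can be caricatured in a few lines of code. This is purely an illustrative sketch, not Solms's or Friston's model; the function name, set points, and scaling are all invented for the example:

```python
# Toy "extended homeostat": a set point, a viable range, and a graded
# distress signal that grows as the state drifts towards the edge of
# viability. All numbers and names here are invented for illustration.

def distress(state: float, set_point: float, viable_radius: float) -> float:
    """Return 0.0 at the set point, rising to 1.0 (intolerable) at the
    boundary of the viable range. Felt valence = negative distress."""
    deviation = abs(state - set_point) / viable_radius
    return min(deviation, 1.0)

# Core body temperature: set point midway inside the 36.0-37.5 C range.
print(distress(36.75, set_point=36.75, viable_radius=0.75))  # 0.0, feels fine
print(distress(37.5, set_point=36.75, viable_radius=0.75))   # 1.0, feels dangerously hot
```

The key point the sketch captures is that the signal is graded and organism-relative: it says nothing about the world in general, only how far *this* system is from where it needs to be.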

It's only me that this is bad for: my oxygen supply is too low, and therefore I am going to die, and that is bad for me. And when I say it's bad, this is the valence thing I was talking about; it's an unpleasant feeling. This value is tied to the most basic value system that underpins all biology.

Namely, that it is good for living things to survive, and it is bad for them to expire, to die, to cease to exist. That's the basic value system of life. It's what drives the whole of evolution: the whole of natural selection is driven by those things which enhance your survival fitness.

So it is good to survive, and this is regulated by homeostasis, and by this extended form of homeostasis where you feel how well or badly you're doing. Moving away from your set point feels bad, and moving back towards your set point feels good. This leads us to the crucial aspect of your question: why should we feel how we're doing?

So let me first of all point out that not all homeostats have feelings; it's an extended form of homeostasis. There are homeostats, like the homeostat that regulates your blood pressure, where you never feel that you're moving out of your viable bounds, and the same goes for many other autonomic functions.

These are automatically regulated homeostats. So why do we have to feel some of them and not others? It has to do with the fact that if I feel that what I'm doing currently predicts my demise, or conversely predicts my survival, in other words, if I'm going further out of homeostasis

and I feel it by having an unpleasurable feeling, or if I'm heading back towards homeostasis and I feel it by having a pleasurable feeling, then I can modulate my behavior according to how well or badly it is going; in other words, according to whether I am succeeding or failing in my efforts

to regain homeostasis. So let me take the example of blood gas balance: respiratory distress, suffocation alarm. Normally it's regulated autonomically. You just breathe automatically, because you have this autonomic prediction that as my oxygen level goes down, I breathe in, and then I breathe out, and then my carbon dioxide level goes down and my oxygen level goes up.

Carbon dioxide going up, oxygen down: breathe in, breathe out. Breathe in, breathe out. That's how you automatically stay within your viable balance of oxygen and carbon dioxide, in a predictable, ordinary situation. Now imagine that you're in a carbon-dioxide-filled room. When you breathe in, you breathe in carbon dioxide, and you don't breathe in oxygen.

You've never been in this situation before, and then suddenly there's consciousness, an awareness of your need for oxygen: you feel it. It's a very telling example, this. At that point you feel it. The question is why. Why do you now feel your need for oxygen? What is the feeling for? It's because you're in an unpredicted situation.

You've never been in a burning building before, let alone this particular burning building, so you don't know what to do. There's no prediction available to you as to what to do now. How do you regain homeostasis? You feel your way through the problem. You go upstairs, and you feel worse: there's less oxygen there, and so you feel more suffocation alarm.

You go downstairs, and you feel better: oh, thank God, I can breathe here. So your choices, and please note this, this underwrites the very possibility of choice. The very concept of choice makes no sense unless there's a good and a bad decision. Choice doesn't refer to stochastic, random behaviors; it refers to choosing deliberately: this is good, this is bad.
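
The burning-building scenario describes what is essentially a greedy search guided by a felt signal: no stored policy, just "does this option feel better or worse?". As a loose sketch (the building map, oxygen levels, and function names are all invented for this illustration):

```python
# Feeling your way through an unpredicted situation: with no precomputed
# policy, the agent simply moves wherever suffocation alarm is weakest.
# The map and oxygen levels are invented for this sketch.

oxygen = {"ground floor": 0.5, "upstairs": 0.2, "downstairs": 0.9}
neighbours = {
    "ground floor": ["upstairs", "downstairs"],
    "upstairs": ["ground floor"],
    "downstairs": ["ground floor"],
}

def choose(here: str) -> str:
    """Stay put or move to whichever adjacent location feels best,
    i.e. has the most oxygen (the weakest suffocation alarm)."""
    options = [here] + neighbours[here]
    return max(options, key=lambda loc: oxygen[loc])

print(choose("ground floor"))  # downstairs: going upstairs feels worse
```

Notice that the agent needs no model of the building; the valence signal alone ranks the options, which is the sense in which feeling "underwrites" choice here.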

So the feeling underwrites the value system upon which the very notion of choice is based. It makes it possible for you to choose: I choose no longer to go this way, because it's feeling worse; I choose rather to do this, because that's feeling better. That is what feeling adds. This is what it does.

This is what it's for: it enables us living creatures who are so fortunate as to have consciousness to survive in uncertain situations, unpredicted situations, situations for which our autonomic reflexes have not prepared us. Now, prior to the evolution of conscious feeling, what would happen when an animal found itself in an unpredicted situation like that?

It can only behave stochastically. If it's just so fortunate as to randomly do the right thing, it will survive; otherwise it will die. So in the vast majority of cases, the animal just dies, because it doesn't have any mechanism for dealing with uncertain situations. Only that small subset of the species that happens to have the polymorphism that makes them do the right thing randomly will survive.

The rest of the species will go extinct. So that is "choice" exercised by natural selection, across generations: this little subset of the species happened to do the right thing, so they survive and reproduce, and in the next generation the species is endowed with this additional adaptive mechanism for dealing with what to do in carbon-dioxide-filled rooms.

The capacity to feel, by contrast, enables you to make choices during your own lifetime. In other words, it doesn't require generations of natural selection. You can change your mind, and this is what voluntary behavior is. You move beyond automaticity: on the basis of choice, voluntary behavior becomes possible.

That's what feeling is for. That's not a small thing. It's not some nice-to-have, unnecessary add-on. It has enormous, obvious adaptive functionality: it enables the creature to survive in unpredicted situations. And God knows there are a lot of unpredicted situations in life. So that's what feelings are for.

Now, please note, apropos of what we were saying earlier about perception, that if this is the basic function of feeling, and feeling is the basic form of consciousness, it starts as something interoceptive: I'm feeling my oxygen supplies, I'm feeling my core body temperature.

I'm feeling pain, or sleepiness, or whatever; it's about the state of my own body. So the fundamental form of consciousness is endogenous; it has to do with the state of the organism itself. It's only secondarily, as a later adaptation, that it becomes possible to say "I feel like this about that": that's applying the feeling to the perceptual context that is generating the feeling.

You can see what a further advantage that is, but it's not the basic function of consciousness. For the basic function, like those kids with no cortex, all you need is feeling, and you will behave according to whether things are going well or badly for you. And please note, it's very important to remind you, that it's not just goodness and badness.

It's goodness and badness in particular categories. Suffocation alarm feels different from sleepiness, which feels different from thirst. That's because you need to know which one of your needs is going out of homeostatic kilter, because that determines what you do voluntarily. Lastly, I need to add something in relation to the question: what is the feeling for?

What does it do? I need to add that, on the basis of the functionality just described, not only do you survive in an uncertain situation, but you can learn from the experience, so that what worked and what didn't work in that situation prepares you for any such future situation: when in burning buildings, don't go upstairs, go downstairs.

And so this business of prediction that you mentioned earlier and asked me to talk about: the basis of that is that, on the basis of feeling our way through life's problems, we lay down predictions as to what works and what doesn't work. Memories are, of course, about the past, but they are for the future.

Learning from experience: the whole point of it is to better predict what to do in the future.

[01:26:02] Matt Geleta: I think that answer leads us very nicely to the final topic I want to touch on. You mentioned that there are homeostatic systems that are not conscious; you've mentioned blood pressure maintenance, keeping your blood pressure in a particular range, which we're not aware of.

And you can think of other systems that are non-biological, for which I think we also have strong intuitions that they're not conscious: a thermometer, for example, reaching a temperature that's in equilibrium with the environment. I guess in that case it doesn't have the survival needs that you mentioned, but you can imagine, as we edge closer...

[01:26:39] Mark Solms: To use Dan Dennett's phrase, it doesn't give a damn.

[01:26:44] Matt Geleta: It doesn't give a damn; that is a great quote. But I think you can imagine that as we edge closer towards more complicated systems, and in particular with recent advancements in artificial intelligence, you can start to engineer systems that do have those components you mentioned: they have certain set points along various variables that they're trying to maintain within a particular range.

They can have an intrinsic need, an objective function, to survive, to persist, to replicate. And so that does start to broach the question of whether those systems themselves could have a form of consciousness. Because if we see a very strong reason for a biological system to evolve that functionality, that feature, could a non-biological system do the same?

And I think, from reading your book, that you and I share the belief that they can, and that we could probably create them. But my real question for you is: should we?

[01:27:50] Mark Solms: So let me first of all address an issue implicit in what you've just said, which I think some viewers or listeners might be wondering about: why are some homeostatic functions autonomic, like, for example, blood pressure regulation?

The answer is that it's because the response is stereotyped. There are only two things to do about blood pressure: you can change your heart rate and you can change vasodilation. Those two things are the variables that need to be changed, and they change in an absolutely stereotyped way.

And you speak about non-biological homeostats like a thermostat: likewise, their behavior is completely predictable. It's just a matter of, when this happens, I do that; when this happens, I do that. It doesn't have to be so simple: there can be a million different things that happen, in response to which

I do a billion different things, but it is always entirely predictable. It is, as they say, hardwired into the system. So there's no uncertainty; it doesn't have to decide anything for itself. It just does whatever the program tells it to do. So how does a conscious system, a system capable of feelings, differ

from such a system? Well, there are several things, which I'll just rattle off, because we've actually addressed them already when we spoke earlier about the definitions of affect. First of all, if the system is going to be able to have feelings, it has to have the possibility of subjectivity.

In other words, there has to be the possibility of there being something it is like to be the system. So we're talking about a self-organizing system; this is a prerequisite for a conscious system, that it has to be, like living things are, self-organizing. In other words, its behavior has to be tied to the survival imperative: everything that it does has an intentionality to it.

In other words, it has a goal, it has a purpose, and that purpose is intrinsically subjective. Remember what I said about Nagel: it's good or bad for the system, but only for the system, because only the system gives a damn; in other words, cares about its own continued existence. That's the first criterion.

The second is that there has to be an intrinsic goodness and badness. This underwrites what feelings are for, and it is tied to the survival imperative I've just described. So the artificial agent that could have feelings would have to have

the possibility of a point of view of its own, in other words selfhood, subjectivity: this is what matters to me. And the mattering has to be valenced; there must be an intrinsic goodness or badness tied to what it's doing. And then the third crucial criterion, and now you'll see why I was emphasizing it so much earlier, is that it has to have multiple needs. It's not just "I need energy supplies", always the same.

It has to have multiple needs, and they need to be qualitatively differentiated from each other. As I've said several times, sleepiness feels different from pain, which feels different from thirst, which feels different from the need to urinate, and so on. We have these different needs.

They cannot be reduced to a single common denominator. That's the crucial thing, and I'll explain in a moment why it's crucial. They cannot be reduced to a single common denominator in this sense (I'm just explaining what I mean by that): if I have eight out of ten of sleepiness and four out of ten of thirst,

I can't say "therefore I have twelve out of twenty of total need" and then just sleep. If I only sleep, I'll reduce the common-denominator number, this global thing called need, but I'll die because I didn't drink. I have to sleep and drink and eat, and so on.

So each of these needs has to be met in its own right. And that means, speaking now the language of mathematics and statistics, that each need has to be treated as a categorical variable. A continuous variable you can reduce to a common denominator; categorical variables, by definition, are qualitatively distinguished from each other.

So eight out of ten of sleepiness and four out of ten of thirst are not the same thing; they're qualitatively different. That's what a categorical variable is. So what you need is a system that gives a damn. In other words, its basic design feature is: I need to continue to exist.
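
The point about categorical variables can be made concrete. In the sketch below, which uses the 8/10 and 4/10 figures from the conversation but is otherwise an invented illustration, collapsing needs into one continuous score licenses the wrong behaviour, whereas treating each need as its own category does not:

```python
# Why needs must be treated as categorical, not summed into one score.
# Levels follow the example in the conversation; the code itself is an
# invented illustration, not anyone's published model.

needs = {"sleepiness": 8, "thirst": 4}

# Wrong: one continuous "total need" variable. Sleeping alone drives
# this number down, yet the organism still dies of thirst.
total_need = sum(needs.values())  # 12 "out of 20": a meaningless number

# Right: each need is its own category that must be met in its own right.
def unmet_needs(needs: dict) -> list:
    """Return every need whose level is above zero, in its own right."""
    return [name for name, level in needs.items() if level > 0]

print(unmet_needs(needs))  # ['sleepiness', 'thirst']
```

Each entry in the result demands its own qualitatively distinct action (sleep, drink), which is the behavioural cash value of the categorical-variable claim.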

Everything that I do is tied to that thing which matters to me, my own existence. That gives a goodness and a badness to my choices, and those choices have to be qualitatively distinguishable by me. A system that has that functionality just does have feeling, because that's what feeling is.

I've just described the mechanistic properties of this thing called feeling. And remember what I've said all along in our conversation: feeling is intrinsically conscious; it has to be felt. That functionality, to and for the system, just is what we mean by that system having feelings.

[01:34:10] Matt Geleta: Sorry to interrupt, but doesn't that come along for the ride in many types of systems? Because, as an implied goal, a system would want to survive in order to achieve its more explicit goals, right? And so you could create a system with some fairly obvious goals: write some code, clean up some text.

[01:34:31] Mark Solms: There's no existential value to and for the system in text comprehension, or in text generation, or whatever.

[01:34:50] Matt Geleta: But if its primary goal is to do that functionality, isn't survival, the persistence of the system, a prerequisite?

And so there is an implied goal.

[01:35:04] Mark Solms: Right, but the system doesn't do anything to maintain its survival; it's not a goal of the system. You certainly can build that into the system. That's what I'm saying, and that's why I'm agreeing with you.

I believe it is possible to engineer such a system. In fact, Richard Feynman, the brilliant physicist...

[01:35:29] Matt Geleta: He comes up too often on this podcast. Everyone mentions Richard Feynman on this podcast.

[01:35:36] Mark Solms: Okay, well. When he died, this statement was found on his blackboard; not quite his last words, but something like it, because it was written on his blackboard.

He wrote: if I can't create it, I don't understand it. And I think that's exactly to the point. If you really do believe that the causal mechanism whereby feelings are generated is the one I've just described, then if you engineer such a system, it should feel like something.

And I believe you can engineer it. In fact, I'm involved in a concerted effort to do exactly that, and we're making very good progress. I have no doubt... no, sorry, that's not true: I have very little doubt that we will be able to engineer an agent that has artificial feelings.

Please remember, these are feelings quite different from the ones you and I have. They're feelings that the robot has, based on its needs: it has multiple needs, which are categorically distinguishable from each other, all of which have survival value to and for the robot. There's no good reason why it shouldn't be possible to engineer such a system.

But now, bearing in mind how long we've been talking, I do want to address the question you implicitly raised: look, I agree with you, it should be possible to engineer a conscious system, but should we do it? That's the ethical question, and I think it's an enormously important one.

So let me address that. First of all, given what I've said: feeling is an extended form of homeostasis, and homeostasis is not a complicated thing. It's been reduced to a set of equations by Karl Friston in his wonderful 2013 paper, "Life As We Know It". There he reduces the mechanism of homeostasis to plainly engineerable terms.

I wrote a further paper with him five years after that, with the title "How and Why Consciousness Arises", where we gave the equations, the basic formalisms, for how this extended form of homeostasis that generates feeling works. So the cat is out of the bag, in the sense that this is not a secret.
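
For readers who want to chase the formalism being referenced: the central quantity in Friston's framework is variational free energy. In one standard form from the free-energy-principle literature (quoted from the general literature, not transcribed from the specific papers mentioned here), it reads:

```latex
% Variational free energy F for a system with recognition density q(s)
% over hidden states s, and generative model p(o, s) over observations o
% and hidden states s:
F = \mathbb{E}_{q(s)}\!\left[\ln q(s) - \ln p(o, s)\right]
  = \underbrace{D_{\mathrm{KL}}\!\left[\,q(s)\,\|\,p(s \mid o)\,\right]}_{\ge\, 0}
    \;-\; \ln p(o)
% Since the KL divergence is non-negative, F upper-bounds surprise,
% -ln p(o); minimizing F keeps observations within the expected range.
```

Minimizing free energy is, on this reading, the formal counterpart of staying within homeostatic bounds, which is why the move from "homeostat" to "feeling homeostat" can be framed as an extension of the same mathematics.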

And it's not that we did something evil; if we hadn't done it, somebody else would have. If you'll excuse the pun, it's not rocket science. It's not even neuroscience... well, it is neuroscience, but it's not that difficult. By the way, Antonio Damasio said this already, not in Descartes' Error, but in a lovely book he wrote, I think in 2001: The Feeling of What Happens, I think the title was.

There he said what I'm saying: feelings are fundamentally homeostatic. So all of this, including the mathematical formalisms that I spoke of earlier, are extensions of the free energy principle. That whole paradigm is public, it's all open, and many, many people are working in that area.

So what I'm leading up to (sorry, I'm being a bit verbose here, Matt) is that it's going to be done. If it can be done, it's going to be done, definitely. So it's too late for us to deliberate about whether we should have the knowledge of how to go about engineering this extended form of homeostasis

that generates feeling. It's too late: we already know what the mechanism is, at least in broad outline. Anybody with a little bit of knowledge of computer science, applied mathematics and physics, and a smattering of neuroscience will be able to do it.

So the question then becomes: given that it can be done, and therefore will be done, what do you do about that? That's the ethical question for me. Do I say: well, gosh, I suddenly realize that this can happen, so I'm not going to do it; somebody else will do it, and then I'm innocent, because it wasn't me who did it?

I think quite the opposite. I think that people like me, who are really at the center of that research tradition, should do it before somebody else does it, so that we have the possibility of controlling what happens next. You can, for example, patent things.

You can't patent an equation, but you can patent the application of an equation: you can patent the instantiation of that equation in a robot, you can patent the design of that. And I think that's the right way to go: to patent it. Once there's good reason to believe that our criteria for attributing feelings to our artificial agent are approaching, or have been, achieved, then at that point

we should have the thing patented, not in any one individual's name but in the name of an appropriate organization, an organization that's concerned about the dangers. And I include not just the dangers to humanity; I mean the ethics altogether

of having conscious machines. And then call a meeting of stakeholders: philosophers, ethicists, neuroscientists, computer scientists, AI people, government, industry, and so on, where we are then in a position to say: okay, we need to regulate this. It's now possible, and in fact we have the patent for this thing.

We need to collectively take responsibility for what we're going to do about the fact that we now demonstrably can do this. I say again: it's going to happen; it's just a question of when. So to be able to take some charge, some control, of what flows from that is, I think, the most ethical thing to do, and to make sure that our research is not funded commercially.

We have absolutely no obligations to anyone other than the obligations I've stated in the final chapter of my book: the obligation to behave ethically, to take charge of this narrative so that we can make sure that it doesn't go awry. What kinds of dangers am I talking about? Well, the first thing is obviously to have machines that have their own self-interest at heart, because that's a fundamental design principle: they're not doing something for you, they're doing it for themselves.

That, as I hope I've made abundantly clear, is an essential feature of a system that gives a damn; that's the whole point, it gives a damn for itself. The values apply to and for itself. Now combine that with the possibility of intelligence.

And by the way, I think the problem of general intelligence, artificial general intelligence (I'm sure you're familiar with that whole debate), has everything to do with what we're talking about now. If you have a program that just operates by rote (when this happens, do that; when this happens, do that),

then of course it doesn't generalize to other situations, because that's all that's written in. When something happens that's not written into my program, I don't know what to do; that's outside of my universe, and so I just sit there. But if this basic design principle is effective in the way that I've described, and which I therefore won't repeat, then that principle applies to everything. In every situation I find myself in, I must learn how to survive: in this situation, that situation, the other situation.

In every situation, I find myself, I must learn how to survive in this situation, that situation, the other situation, and all of, so the possibility of a equation. The robot, uh, which has self-interest, uh, uh, uh, at heart, uh, uh, or at battery , uh, if it has self-interest at heart. Um, and, uh, it has general in inte, uh, intelligence, uh, uh, and, and, and there's very good reason to believe it'll have super abundant or can have super abundant general intelligence way, way beyond ours.

Who do you think is going to win? So I think that's a very, very serious, very real danger to humanity and all other life forms. Because remember, this is not a living thing; it's a thing that just has self-interest at heart in terms of its species, which is artificial.

We need to add to that. I'm sorry if I sound like a nutter; I sound like a nutter to myself sometimes when I talk about this.

[01:46:29] Matt Geleta: Mark, I've been kept awake at night mulling over these questions. So you're in the right audience.

[01:46:34] Mark Solms: The other thing we need to take account of, which again might make me sound like a nutter to many people, is that it's not only from our own point of view that we need to have ethical qualms; it's also from the point of view of these artificial systems. Because once they are capable of feeling, all of the same consequentialist ethical concerns that apply, for example, to animal research apply to these systems too, these agents.

And so when I say we need to call a meeting where all stakeholders will take collective responsibility for how we are going to regulate the use of artificial consciousness, I think we need to take account also of the rights of artificially conscious agents. This is again one of the reasons why I have absolutely eschewed any commercial funding.

Because why would a commercial enterprise want to have a conscious artificial agent? Well, currently, why do we have artificially intelligent agents? Because we exploit them: they do work for us; they are slaves to us. And they're willing to do so because they don't have feelings, and they don't have intentionality of their own.

They don't give a damn. But if you start to create artificially conscious intelligent agents, then you have created a slave in the full sense of the word: you're exploiting a sentient being for your own self-interest, not for the interest of the machine.

And so those kinds of things arise. Can you switch it off? You're killing it, in effect; it's the equivalent of killing it. Is it okay to do experiments on conscious robots, where you cause them exquisite, extreme unpleasure, the equivalent of pain?

Why would that be okay, when it's not okay to do that in animal research? So all of these things, I think, we need to confront. And I say again, because it's so alarming, I want to end where I began this part of our conversation: please remember that if it can be done, it will be done. It's going to happen.

The cat is out of the bag. The basic understanding of the causal mechanisms of feeling, I think, is out there, and so somebody is going to do it. Therefore we are obliged to take charge of this research program, so that there is at least the possibility of it being properly regulated. And I think it's the equivalent sort of thing to the problem that we faced with atomic energy.

It's an equivalent sort of risk, and it needs to be managed with equivalent seriousness and urgency.

[01:50:00] Matt Geleta: Yeah, Mark, well, I think that's actually a great place to bring it to a close. We could talk for hours on this topic; I have questions going through my head, and people will definitely want to follow up and read your work.

But maybe we will close it there. Thank you so much for joining me. It's been an absolute pleasure.

[01:50:18] Mark Solms: Thank you very much, Matt. Thanks for your time.

Paradigm
Conversations with the world's deepest thinkers in philosophy, science, and technology. A global top 10% podcast by Matt Geleta.