Paradigm

David Krakauer: Free Will and Complexity

David Krakauer is an evolutionary biologist whose research explores the evolution of intelligence and stupidity on Earth. He is currently President of the Santa Fe Institute.

Good News and A Few Words of Thanks…

Hello Everyone, I want to share some good news and a few words of thanks.

Good News: Paradigm has reached a point where it now gets multiple new listeners every minute, every day, all around the world. I've been overwhelmed and humbled by this response, and it’s given me the confidence to invest even more into Paradigm.

As such, you may notice that I’ve just published all episodes right here on matthewgeleta.com, and enabled community features (e.g. comments and likes).

Please jump in and like, comment, and share - it would mean a lot to me ❤️

Words of Thanks: As a gesture of thanks to YOU, my early supporters, I’ll be offering all current as well as the next 500 subscribers free access to Paradigm, forever. (That’s almost as good as the Founding Membership I’ve just offered to some of my Paradigm guests!) Beyond that point there will be a paywall on certain content.

Thanks for your support, and enjoy this mind-expanding conversation!

Matt




Episode Notes

David Krakauer is President of the Santa Fe Institute, the preeminent institution dedicated to the study of complex systems, including computational, biological, and social systems.

David was named as one of the “Fifty People Who Will Change the World” by Wired Magazine, and he was included in Entrepreneur Magazine’s list of visionary leaders advancing global research and business. He holds a PhD in evolutionary theory from the University of Oxford, as well as degrees in biology and computer science from the University of London.

We discuss:

  • Determinism and free will

  • Reductionism and fundamental vs effective physical theories

  • Broken symmetries in the laws of physics

  • The purpose of science

  • Developing a new science of complexity

  • Paradigms and paradigm shifts

… and other topics

Watch on YouTube. Listen on Spotify, Apple Podcasts, or any other podcast platform. Read the full transcript here. Follow me on LinkedIn or Twitter/X for episodes and infrequent commentary.





Timestamps

Timestamps are for the video episode

00:00 The Limits of AI and Predictability

02:10 Fundamental Laws and Predictive Power

04:33 Laplace's Determinism and Its Challenges

07:22 Epistemic Horizons and Free Will

21:13 Symmetry Breaking and Quantum Fluctuations

39:41 Emergence and Effective Theories

49:43 Human Intuition and AI

51:14 The Evolution of Intelligence

51:21 Physical and Cognitive Tools for Human Enhancement

53:52 Science: Humanistic vs. Utilitarian

01:05:50 Complexity Science and Its Applications

01:12:18 The Future of Complexity Science

01:27:57 Books and Resources on Complexity

01:34:06 Final Thoughts and Reflections


I hope you’re enjoying Paradigm. This post is public, so please share it with others.



Transcript

This transcript is AI-generated and may contain errors.

Introduction and Setting the Stage

[00:00:00] Matt: I'm here with David Krakauer. David, thank you for joining me.

[00:00:02] David Krakauer: Thank you. This is going to be fun.

[00:00:04] Matt: Definitely will.

The Limits of AI and Predictability

[00:00:05] Matt: Um, David, let's, let's start with the topic of intelligence and predictability, which I know you've thought a lot about. We live in an age where large companies are spending millions of dollars to develop AI models to better understand human behavior, make recommendations, and make predictions about what we'll do.

Um, I would wager that the people who end up listening to this conversation are probably doing so because an algorithm has recommended it to them. Um, and as we know, these models are getting much more powerful and we're deferring more and more of our autonomy to them. My question to you is, do you feel like there is a limit to how well they will be able to predict our behavior and our choices?

[00:00:47] David Krakauer: There are several issues here, right? One issue relates to the fundamental, fundamental limitations to prediction based on fundamental theory, right? Um, what can physics predict on its own? The Laplacian conceit, which we should talk about. And then there is what you can predict at mesoscopic and macroscopic scales that are of greater interest to us. And there, of course, we kind of know the answer, because there are coarse-grained things we can predict quite effectively: average things, supply and demand.

For example, if I charge 10 times as much for a toothbrush, you're going to be more careful with the toothbrush that you already own, um, as opposed to halving its price. And so, macroscopic prediction of that sort is quite strong. But when it comes to the specificities of preference, or the behavior of organisms at a microscopic scale, then of course we do terribly.

So I think we do very well at the fundamental level, and then it gets worse and worse until we have a principled reason to average, and then it gets better and better.

[00:02:06] Matt: Yeah, well, let's stick with the fundamental level then for a second.

Fundamental Laws and Predictive Power

[00:02:10] Matt: I think a, um, I mean, this is an age-old philosophical question, the question of, you know, determinism and whether the universe itself is in some sense, in principle, predictable. Many physicists, I think, do hold the intuition quite strongly that there should be at least some set of underlying fundamental laws

that would in principle allow us to predict what the universe does. Do you feel like such a set of laws exists? Is there any reason to think that they do or they don't exist?

[00:02:40] David Krakauer: Right. So I think several things there. So I think the laws exist. Um, I think there is a fundamental set of laws. It's just, I don't think they're very useful. Um, I mean, take physics alone for a second. There is no unified theory of physics, right? So, we don't know how to go from quantum mechanics to the continuum limits, to

a theory of gravity, so that's a fail. We don't know how to go from classical mechanics to statistical mechanics, that's a trick, the so-called ergodic hypothesis. And we don't know how to go from really fundamental physics to mesoscopic observables because of the problem of degenerate ground states or vacuum states. You know, string theory, you could argue, is perhaps one of the most fundamental theories we have.

But it has, you know, 10 to the 500, if not 10 to the 5,000, solutions, all of which are compatible with the laws, right? The laws don't make a distinction between these solutions, but mesoscopically, in the world that we live in, they would make a difference. So even within physics, right? When you say, is there a fundamental predictive theory? It depends at what scale you're asking the question, right? So, okay, that's the first point. But then there's a whole really fascinating set of issues that give rise to what have been called epistemic horizons, which is how, you know, in some sense, like a real horizon, how far can you see into the future? This is a temporal horizon.

And there are, we should discuss these, many theories and insights that were not available to, say, Laplace, who is typically quoted in relation to the predictive power of fundamental theory, by the way historically inaccurately, and perhaps we can also discuss that, right? Because that's

Laplace's Determinism and Its Challenges

[00:04:33] David Krakauer: also not, yeah, well, actually, let's go there first, because it's just sort of fun, you know. When Laplace was making his statements about a superintelligent being aware of the state of every possible particle at every scale at some time, such that we can extrapolate into the future.

I mean, he was making those statements at the beginning of the 19th century, based on an incomplete understanding of systems of differential equations. Laplace did not know that if you had perfect initial conditions and laws, there would be unique solutions, right, in other words, that you could predict.

That actually wasn't proved until several decades after Laplace, right? In other words, um, so that result, which is a technical result, was not known to him. Um, and that's sometimes called Lipschitz continuity, the condition that we now associate with this notion of unique solutions to systems of differential equations.
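
For readers who want the result being alluded to here, a minimal statement of the uniqueness theorem for ordinary differential equations (the Picard–Lindelöf theorem, proved well after Laplace) is, in LaTeX notation:

$$\dot{x}(t) = f(t, x(t)), \qquad x(t_0) = x_0 .$$

If $f$ is continuous in $t$ and Lipschitz continuous in $x$, i.e. $\|f(t,x) - f(t,y)\| \le L\,\|x - y\|$ for some constant $L$, then there is exactly one solution $x(t)$ on an interval around $t_0$. Perfect laws plus perfect initial conditions give a unique trajectory; that is the technical content behind the determinism Laplace assumed on metaphysical grounds.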

So Laplace's position was not based on physics, it was based on metaphysics. And this is often forgotten, right? So the two big influences on Laplace came, on the one hand, from Condorcet, who was very interested in what he called, you know, um, necessary versus contingent laws. He thought Newton's laws were contingent, you know, whereas the universe was necessary. So the laws could be different. And, um, what are the implications of having universal contingent laws? And there Laplace was borrowing from Leibniz, right? The principle of sufficient reason and the principle of continuity. In a nutshell, the principle of sufficient reason says every observable, every event, has a cause. Every effect has a cause. And the principle of continuity says that if you were to observe more microscopically, and subdivide the time between the original cause and its effect, you'd find another cause and its effect in between. So in other words, the principle of sufficient reason, that the universe is causal, plus the continuity assumptions, gave rise to Laplace's metaphysical belief, not mathematics and not physics. And my view on Laplace is that if Laplace lived now, someone as smart as that, um, he would never have made that statement. It would have seemed ludicrous to him. And I think the reasons for that are precisely all of the various contributors

Epistemic Horizons and Free Will

[00:07:22] David Krakauer: to what we think of as this epistemic horizon. Um, and so, okay, let me just list a few, just to make that explicit. First of all, Laplace assumed an infinitely intelligent being who had infinite resources to measure initial conditions. That's impossible. It would require really a deity in order for that to be done. Because if you can't measure initial conditions perfectly, you are measuring volumes. And if you're measuring volumes, you are dealing with probabilistic trajectories. Okay, so that's gone. So determinism in the simple sense has gone; then you have deterministic chaos, right?

Which is that at every point in a trajectory, all trajectories diverge exponentially. So once again, uh, at any point where you have made a measurement, if it were not perfect, you would be inaccurate in your projection. So these are epistemological limitations on Laplace. So a lot of people say, well, who cares about that?

Laplace had in mind an infinitely powerful being. Forget all this nonsense about human limitations. Well, now there are logical objections. Um, Turing showed us about undecidability in computing. Maybe the future state of the universe is Turing undecidable. Maybe the future state of the universe is

Kolmogorov incompressible. Maybe you have to simulate it to know where it goes. You can't predict it, you have to run it. We know plenty of systems where that's true. So those are logical problems. And then there are ontological problems. As you know, like Heisenberg uncertainty relations. Okay, I measure this perfectly, but then I don't know anything about that.

Measure position, I don't know momentum. I measure momentum, I don't know position. And then there's, which we should talk about in a bit, spontaneous symmetry breaking, that these systems can fall into, like the string theory vacuum states, they can fall into alternative solutions that are all compatible with the same fundamental laws.

So these, we now understand, in the early 21st century, go way beyond the science and math available to Laplace, and they make a mockery, I think, of Laplace's determinism. So I think for me it's completely moot at this point.
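
A minimal numerical sketch of the sensitive dependence being described here (the logistic map and the specific numbers are illustrative choices, not anything discussed in the episode): two trajectories of a fully deterministic system that start a hair's breadth apart soon disagree completely.

```python
# Deterministic chaos in miniature: the logistic map x_{n+1} = r * x_n * (1 - x_n)
# with r = 4 is fully deterministic, yet nearby trajectories diverge exponentially.
def trajectory(x0, r=4.0, steps=60):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = trajectory(0.2)
b = trajectory(0.2 + 1e-10)  # a measurement error of one part in ten billion

for n in (0, 10, 20, 30, 40, 50):
    print(f"step {n:2d}: |a - b| = {abs(a[n] - b[n]):.3e}")
# The separation grows roughly exponentially, so any imperfect measurement of
# the initial condition eventually carries no information about the future state.
```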

[00:09:48] Matt: Yeah, well, let's, let's, um, very soon get to the topic of symmetry breaking, but sticking with Laplace for one second, you know, um, as you said, and I didn't know this, actually, very interesting, the theory of differential equations hadn't yet been developed. And it was not known that, you know, given

a set of initial conditions and the laws that then govern the evolution, a unique solution results. But that is, that is a very intuitive thing to think, if you think about dynamics, and, and again, as you said, you know, Laplace is imagining an infinitely intelligent something that understands all of this.

But I think that the, the question to me still does remain, you know, forget about any observer, forget about any being thinking about this evolution, the fact of the matter is that a set of initial conditions and some set of laws that govern what happens next does determine what happens next. Modulo quantum uncertainty and so on.

And modulo practical issues with chaos, uh, if those initial conditions have a bit of noise or something like that. It does still feel like, in principle, the future is determined from, from the past. So is what we're saying, is, is that statement in and of itself contestable, incontestable?

[00:11:07] David Krakauer: Um, well again, for the reasons I gave, um, that I think is true of the entire universe because it's the only system that's truly closed,

[00:11:21] Matt: Yes.

[00:11:22] David Krakauer: And I think that, so, I think I would be willing to accept that if you say it of the whole universe. Um, but for any entity in that universe, given what we know about the structure of spacetime and the fact that you live on a world line and the limitations of the speed of light, it follows that no entity, no subset of the universe, can be

aware of all other states in the universe. Forget limitations of epistemology. It's not about agents not having big enough brains. It's a fact

of relativity. And so that's something that is often forgotten. So any point in space has a limited causal past and causal future. And I think that's the ultimate constraint on local Laplacian determinism. Because it means that there are variables that you can't know. Right? And hence, since world lines might eventually intersect, right, um, there are causes that you can locally not be aware of that will matter in the future. Hence prediction cannot be perfect. It can be perfect if you had a digital twin of the whole universe, right, with infinite computational power simulating it. But it's a very different thought experiment, I think, to the thought experiment of Laplace,

[00:12:58] Matt: Yes. I mean, you would need, you would need a computer at least the size of the universe, if not bigger, to have a digital twin of that nature,

[00:13:09] David Krakauer: Exactly, one that lived outside of it, that wanted to predict a part of it. And, and I think this is why, in the end, um, it's an interesting metaphysical null hypothesis, you know, it's an interesting thing to have in one's mind in order to adduce limitations on the concept through epistemic horizons of the kind that I've mentioned.

Um, and, but unfortunately, as you know, that thought experiment has in fact been mobilized as a defense of the no free will thesis, which is really problematic because that's where these epistemic horizons really come into their own. I mean, so just for example, a metaphor that I find useful for this is imagine that you're in a city and you want to get from point A to point B. Let's imagine furthermore that you have a perfect map, and furthermore, that there is one, there are unique best paths between those two points. That's fundamental physics. It says that there is a, and again, if you want me to clarify any concept here, you know, I don't know who knows.

We can, oh yeah, we can extremize the action, right?

We can optimize in such a way as to discover what path will minimize time or minimize energy between A and B. Now let's imagine that there are two paths which take the same amount of energy or time. Which one do you take? Well, fundamental theory can't tell you. So what it says is you take them with equal probability, half and half. If I were to observe the system, um, the symmetry of the law implies symmetry of state, but it can't predict exactly which one. Now imagine that you're an agent that has a partial map of the city. So not only do you have what's called the degeneracy problem, that is, two paths which have the same, uh, minimum, but now you can't even optimize, so you have to use a rule or a heuristic, a search algorithm, which is no longer in any way within the purview of physical law. It's a different set of concepts. And we're dealing, I think, with almost everything we care about with partial maps, with degenerate solutions. And so all of the free will debate actually turns on the idea that there is determinism in the world of mind, that we have a full map with a complete and unique best solution, the Laplacian world.

So the Laplacian thought experiment seems benign and perhaps only of interest to physics, but it turns out it's been recruited in defense of a bad argument attacking free will. But free will, I would say, is the name we give to partial maps with degenerate routes. That's what is meant by free will.
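
A toy version of the city metaphor (the graph and the costs below are invented purely for illustration): minimizing cost is a well-posed "law," but when two routes tie for the minimum, the law itself cannot say which one is taken.

```python
# Two routes from A to B with identical total cost: a degenerate optimum.
routes = {
    "north route": [("A", "N"), ("N", "B")],
    "south route": [("A", "S"), ("S", "B")],
    "long detour": [("A", "N"), ("N", "S"), ("S", "B")],
}
cost = {("A", "N"): 1.0, ("N", "B"): 1.0,
        ("A", "S"): 1.0, ("S", "B"): 1.0,
        ("N", "S"): 1.5}

totals = {name: sum(cost[leg] for leg in legs) for name, legs in routes.items()}
best = min(totals.values())
optimal = [name for name, total in totals.items() if total == best]

print(totals)   # {'north route': 2.0, 'south route': 2.0, 'long detour': 3.5}
print(optimal)  # ['north route', 'south route'] -- the optimization is symmetric
                # between the two, so something outside it must break the tie.
```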

[00:16:44] Matt: Yeah, so that might be true, but I think a lot of people would, the reaction would be that this is a watered-down version of free will, and they would feel like, okay, well we've, we've, we've redefined what, uh, what we mean by free will and this is not quite satisfactory. So just, just to sum up, what we're saying is, you know: yes, uh, in principle, potentially the laws of physics might, in a particular case, be able to fully predict what is going to happen, um, but in the case of limited agents with limited minds and limited information, uh, that is not the case, and it can, in principle, never be the case because we're never fully closed systems in this universe, um, and so there is this sort of space of uncertainty, of unpredictability, and we have freedom to operate in that space, and agents have freedom to, to act in that space in a way that is inherently never going to be 100 percent predictable.

But is that, is that the free will that people feel that they have? Uh, you know,

[00:17:42] David Krakauer: I think it's the free will that we're referring to. It's interesting you say that because I do think that, um, the, so let's take the contrasting position, the no free will, full determinism, Laplacian psychology

[00:18:02] Matt: I should say just as a brief interjection, I think even without determinism, there is still a class of no free will arguments that are worth exploring. But

[00:18:12] David Krakauer: Yes. Okay.

Oh,

[00:18:14] Matt: let's,

let's stick with the determinism

[00:18:15] David Krakauer: Okay. That would be interesting. Okay. Let's go there. I just want to make the point that the standard argument, right, is that free will is an illusion, a state of mind superimposed on top of a deterministic machine. And okay, I would like to say that free will is not an illusion, but the name that we give to perfectly reasonable theories of epistemic horizons, right, and that that single term refers to that entire class of phenomena, which are not necessarily about limited human capability.

As I said, some of them are about spontaneous symmetry breaking. Some of them are about the Heisenberg uncertainty relations and so on, right? Some of them are fundamentally ontological, some of them about Turing's undecidability. So that whole class of phenomena, I think, contributes to our feeling of self-determination.

I don't think it's one thing. And Dan Dennett in his book, Freedom Evolves, talks about compatibilism, right? This idea that you can have Laplacian universes and still have free will. And I think the way he justifies it is he says that free will evolves, right? Mesoscopic order, macroscopic order evolves, which makes all those epistemic horizons stronger.

And I think that's also true. But, uh, but I think that one way of actually resolving this debate is to ground agency in a physical theory of the epistemic horizon. And then we get a principled notion of free will that would satisfy the physicalists who don't like human fallibility.

[00:20:09] Matt: Does it, does it, does it satisfy that sort of inner desire that people have to feel like they're truly agents of their own destiny or is it, is it what we're saying is, uh, you know, you're of limited mind and you have a physical theory of that limited mind and within that sort of conception, you're free, you're free, uh,

[00:20:30] David Krakauer: it satisfies me. It satisfies me. You see, I, I, because I think it's not quite the same as saying, I lack imagination, or I lack computing power.

Right, what Turing showed us is that you could have all the computing power in the universe, and there are still functions that can't be computed.

So if you allow for this more expansive sense of limitation, I think it's slightly less offensive to our sense of agency.

[00:20:58] Matt: Yes. Do you, do you feel like the, the sort of like, I mean you mentioned Heisenberg uncertainty and, and uh, there was also the question of sort of inherent quantum randomness, you know, so some, some outcomes.

Symmetry Breaking and Quantum Fluctuations

[00:21:13] Matt: inherently not being determined. Does that have bearing on this question beyond, beyond, as you said, the epistemic horizon?

[00:21:22] David Krakauer: I think it might, um, in the evolutionary sense of broken symmetries. Right, that, so again, I, just to explain this idea, I think the city metaphor works. Let's say there are two paths that you could take, and you have the full map. So this is not about imperfect information anymore. This is about degenerate ground states, that there are two equally good paths to take between A and B.

And we now know, of course, that particles come into existence, molecules come into existence, organisms come into existence through breaking those symmetries, right? RNA and DNA are right-handed molecules. They have chirality, as do proteins and other molecules; most of those are left-handed. And the fact of them being left-handed or right-handed is why we have a biological world, because if that wasn't true, they couldn't interact.

But that left-handedness and right-handedness is symmetric with respect to the laws of physics, right? So what broke the symmetry? Why did we get some molecules right-handed? And quite consistently so. And I think that it could be that at the level of fundamental interactions or particles, fluctuations of the kind that come from the quantum domain might be important.

I think as you go up into the mesoscopic scale, we're really talking about thermal fluctuations. which then would be compatible with Laplacian principles, right? Because you'd say, ah, that molecule that broke the symmetry, I know, I knew it was going to break it towards the left. And I, so I think quantum might have a role to play in a much more fundamental sense of spontaneous symmetry breaking.

[00:23:11] Matt: Yes. Got it. And so just again to, to play it back for those listening, the idea here is, you know, when we look at, let's say, amino acids, they all have, they're, they're all, um, they have one chirality, but it could have been otherwise. There is no underlying reason. And if you look at the laws of particle physics, I guess that would have determined them.

They don't have this asymmetry in them, but through the evolutionary process, there are quantum fluctuations. And at some point one path was taken, and this is what we've ended up with.

[00:23:39] David Krakauer: Exactly. It's a, exactly. So in other words, the, I mean, this is one of the great mysteries, right? The, the, the symmetric laws of physics don't correspond to the asymmetric population of physical states.

[00:23:54] Matt: yes, yes. Does the, does that broken symmetry then, again, does that, does that consist in not having sufficient information about what those underlying laws are? Or does the symmetry, like in what sense does that broken symmetry exist in and of itself?

[00:24:14] David Krakauer: Yeah, so that is an interesting question. Um, so the broken symmetries, as you know, I mean, are real ontological effects. Um, this is what, this will take us to emergence, right? Because it's one of the most fundamental concepts in the theory of emergence. Philip Anderson, in 1972, in a very famous paper, More is Different, wrote extensively about this problem, which he thought was at the root of all complexity. And he gave the example of very small molecules that have more than one ground state, more than one configuration corresponding to the minimum free energy, which will fluctuate between the two.

And so the symmetric laws of motion, the molecular dynamics, will produce a symmetric distribution. So if you took a measurement of the real world, you'd find it in A 50%, B 50%. He gives the example of ammonia, NH3, a small molecule. But if you have a slightly bigger molecule, like phosphine, PH3, now, the state that it lives in is the state that it started in.

Same symmetric laws of motion. So you would predict 50-50, but unfortunately the energy barrier that has to be overcome to move from one state to the other is large enough that it gets confined in its initial state. And so what happens is that the unknown parameter, what establishes the initial state, is dominant and the law subservient.

And if you think about physical theory, after Eugene Wigner, who wrote extensively about this, he said all physical theory tries to maximize the contribution of law and minimize the contribution of initial conditions, because those are the unknowns, those are parameters that come from nowhere. And unfortunately, as matter gets larger, the state you observe is more and more consistent with the initial conditions and less and less predicted by the law.

And, um, and that's not about ignorance. It's about ontology. But as I said earlier, what breaks that symmetry could be some tiny fluctuation. Maybe it's fundamentally irreducibly random and quantum, or maybe it's thermal.
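
A back-of-the-envelope sketch of the barrier argument (the barrier heights below are made up for illustration; they are not the measured ammonia or phosphine values): with an Arrhenius-style hopping rate, a modest increase in the barrier turns a molecule that flips freely between its two symmetric states into one that is effectively frozen in whichever state it started in.

```python
import math

def hopping_rate(barrier_ev, temperature_k=300.0, attempt_hz=1e13):
    """Arrhenius-style estimate of how often a thermal fluctuation crosses a barrier."""
    k_b_ev_per_k = 8.617e-5  # Boltzmann constant in eV/K
    return attempt_hz * math.exp(-barrier_ev / (k_b_ev_per_k * temperature_k))

# Illustrative barriers only:
for label, barrier_ev in [("small molecule, low barrier", 0.25),
                          ("larger molecule, high barrier", 1.50)]:
    rate = hopping_rate(barrier_ev)
    print(f"{label}: ~{rate:.2e} flips/s, mean wait ~{1.0 / rate:.2e} s")

# Low barrier: the molecule flips constantly, an ensemble looks 50/50, and the
# symmetric law does the predictive work. High barrier: the waiting time dwarfs
# any observation, so what you see is set by the initial condition, not the law.
```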

[00:26:45] Matt: It, um, it always surprises me that this, this way of thinking about broken symmetries doesn't feel to be widely known or acknowledged even in, let's say, I don't know, maybe in some parts of the physics community, the physics community is not one thing, but I think there is still this, this idea of, okay, you know, we have, let's say, laws of particle physics and they give rise to laws of chemistry and they give rise to laws of biochemistry and so on.

And there's a very clear sort of hierarchy. Uh, we want to go up all the way to even maybe the social sciences, but in principle, it's possible to come all the way back down. And in principle, all the higher-level things are determined, um, by the underlying, um, sort of more fundamental laws. Um, the, the idea of, like, you know, just how important these broken symmetries along the chain can be does not feel to be very acknowledged.

Why, why is that the case?

[00:27:40] David Krakauer: I don't know. It's very interesting. I, um, I remember when I was at the Institute for Advanced Study in Princeton and I would have lunch with Phil Anderson, who had won the Nobel Prize for work in condensed matter, and at the table next to ours was Ed Witten's table, the string theory table, and Phil used to refer to them as theologians. And, um, the reason he referred to them as theologians is for this reason, actually. The condensed matter physics community, so people who work on molecules and larger structures, you know, crystals and fluids and bodies, even soft condensed matter, um, they understand this perfectly well.

You know, the high energy physicists, the particle physicists, seem to have forgotten. And I think the reason is a kind of what a psychologist would call selection bias. Because if the things you measure and study exist at that subatomic level, then these kinds of considerations are less important, right? But if you study crystal lattices, right, if you study larger objects, then these considerations become vital. And I think there's a sort of history in which the high energy physicists were this sort of preeminent intellectual community in physics, right? You had, you know, my colleague, you know, Murray Gell-Mann, you had the Feynmans, you had the Julian Schwingers, you had Max Born, you had Niels Bohr, all the really smart people seemed to be doing that stuff.

And, and I think, but that's not about the structure of reality. It's about the institutions of physics, right? And I think that's really a part of it quite seriously. Um, and then the other thing has to do with what falls under the purview of explanation, because I think for a lot of people, things that are not fundamental are, are sort of accidents. And they don't recognize that there are effective laws, right, like Darwin's or Mendel's or, you know, the laws of condensed matter physics, fluctuation dissipation theorems and so on, that are also laws. It's just that they don't operate at that fundamental level.

and exploit symmetry to the same degree, which tends to be a kind of almost spiritual quality of certain theories. And so those aren't recognized as law-like, even though of course they are. And many people have been trying to make the claim, certainly in our community, that computational theories, rule-based reasoning (people like Stephen Wolfram have been making this claim for many years)

have another foundational character. Not everything has to look like, you know, quantum field theory. And so there's all sorts of things that are going on that I think have made that community a bit blind to this reality.

[00:30:42] Matt: Do you feel that the, uh, just for a moment with, let's say, particle physics or string theory, or pick any, pick any flavor of sort of underlying fundamental theory. Do you feel like it is, are any, uh, is there any theory that is truly fundamental? Because, to say it one way, every theory has to have some sort of conceptual framework, and that framework has to exist in something, or potentially it has to exist in a mind, um, and you know, with that, can you have something that is truly fundamental in and of itself?

Or is everything in some sense effective?

[00:31:23] David Krakauer: Yeah, I mean, that's a really deep question that we could debate forever. And I think I probably switched my mind a hundred times in our argument. I think that, um, the famous statement on this was made by Thomas Nagel in 1986 in a book he wrote called The View from Nowhere. And the view, the view from nowhere is the conceit that there can be an absolute objective understanding of reality, right?

That's the sort of, Laplace's is one version of the view from nowhere. And what you're saying is, any entity in the universe capable of intellection, of reasoning about the universe, is subjective, and that fact imposes a non-fundamental character on all knowledge. And I'm sympathetic to that to some extent, but I'm also sympathetic actually with the view from nowhere when it comes to the entirety of the universe, right?

If you said, not the agentic universe, but the entirety of the universe. I actually don't study that. I study parts of it that do things we call intelligent, right? Or stupid,

both. Um, so actually David Wolpert and I have proposed an alternative framework for this, um, that we call the reality Ouroboros, which is a pluralistic, non subjective, framework and it works like this. Let's say that you were like Roger Penrose, a Platonist, like Paul Dirac, like most physicists, actually. Um, so mathematics comes first, not physical reality.

The most extreme version of this that annoys the physicist is John Wheeler's It from Bit. I mean, in a sense, It from Bit is Platonism, post George Boole, right? I mean, it's a sort of, it's an odd binary Platonism, but okay. But let's say we start with that world, out of which comes approximate representations of Platonic perfection, physical theory, let's say, okay?

Out of which comes chemistry, out of which comes biology, out of which comes psychology, out of which comes society, But now what happens is that that society through cultural evolution develops mathematics. So it's gone in a circle. And what David and I say is that you can start at any point in that circle, but you then have to follow the circle rigorously around clockwise.

So you could start with mind. Let's say mind is fundamental. Okay, simulation theory says that, says all of reality as we know it was made by someone else, a super programmer. Okay, that's mind. Mind first. Okay, you get mind. What does mind do? Mind develops mathematics. Then it develops physics using the mathematics it developed, and on it goes around the circle again until you have a theory of mind.

So, we have this notion of actually a symmetric theory of reality in which any point in the Ouroboros can be defined as fundamental, but after which everything has to follow rigorously according to your laws and experiments and regularities. So there is no one fundamental, there's an infinite number. And I think it's interesting that particular perspective does no less work, right?

In other words, you still have all the theories of physics. You'd still be able to predict things in the real world. It's not a philosophical exercise with no implications. It's consistent with science as we practice it. And it's also consistent with the plurality of ways of being in the world, right? If you're a poet, I would say within this framework, you're no less fundamental as long as you can follow the Ouroboros around.

So it's a less greedy position to adopt. It's a more plural position to adopt, but preserves, if you like, the objective rigor of the deductive sequence that ensues from pursuing the Ouroboros around in a full cycle.

[00:35:29] Matt: It reminds me a lot of, uh, Douglas Hofs. That's, uh, strange loops sort of idea. I dunno if you're familiar, familiar with it.

[00:35:37] David Krakauer: I am, I am, I, I'm very fond of Doug's work. And of course, Doug applies that to consciousness and the origin of self-awareness in particular, but, but this idea that the universe is full of these self-referential, if you like, loops, it's very appealing, yes.

[00:35:54] Matt: Yeah. I mean, there is one sense in which you kind of feel like it, it almost has to, you have to think of it either this way, or you have to think at some point there is just a fundamental mystery, um, where at bottom something is assumed and, and that's that, and it has to be taken as fundamental. And I guess in, in this, this framing.

It is still, you know, the existence of this Ouroboros in itself is, will always remain a, a mystery in the sense that you can't stand outside of it and, and explain it in any other way. Um, but, you know, I mean, maybe this is just a personal question. Does it? Does it bother you? Does it give you discomfort?

What is the feeling you get thinking about the fact that there always has to be some, just, some part of this that will always remain fundamentally mysterious? And maybe I'm assuming, maybe I'm assuming the answer there. Will some part of this always remain fundamentally mysterious?

[00:36:54] David Krakauer: I, I

want to answer that in two ways. Um, so first of all, I have nothing against mystery. I like murder mystery. You know, usually there's a culprit, but okay. Um, I even like fantasy where there isn't, right? So, um, but I don't think you have to call it mysterious because it's a little bit like the Heisenberg uncertainty relations.

You could say they were mysterious, right? Because there's always something you won't know if you know something else. But the thing about the Ouroboros is it's simply saying: the subjective move in this game of chess is to pick where on the Ouroboros you start. But once you do that, you can trace out very rigorously the entire cycle back to the initial point.

And the point is what we call fundamental. Um, so I don't know if I'd call that mysterious, I think because there is an ontological necessity that that Ouroboros traces out a particular set of paths. That's not subjective. Um, I think what we're saying in that argument is the word fundamental is a value judgment.

It's the, it is the mysterious part. For you to declare that point of the Ouroboros is fundamental, you're being mysterious. We want to say, no, it's symmetric. The correct non-mysterious solution to the problem is to accept the infinity of the reals, right? That there is any point at which you can insert.

I consider that non mysterious because I'm not unhappy with infinity.

[00:38:35] Matt: think I was more referring to the view from nowhere type of mystery where you could imagine different. Ouroboros, I don't know what the plural is here, but,

[00:38:45] David Krakauer: Yeah, Ouroboros is

[00:38:46] Matt: in,

[00:38:47] David Krakauer: yeah.

[00:38:47] Matt: and, and, and we, and we happen to find ourselves in, in one of them. And, uh, it feels sort of like, I'm

not sure if there could ever be something that really intuitively that.

[00:39:00] David Krakauer: I think we are in all of them. I want to make that clear. We're in all of them. We're in all of them, the full infinity. Because, um, you, in a sense, I mean, this brings it down to earth a bit, makes it slightly more quotidian, but the definition of a discipline within this framing is the insertion point in the Ouroboros.

That's what it means to be an English professor or a chemist. or a mathematician, right? It's, we are, we do coexist in all of those realities. There is a professional commitment to any one of them based on finite time, right? But we do live in that infinite space.

[00:39:40] Matt: Okay.

Emergence and Effective Theories

[00:39:41] Matt: Well, let's, let's, um, let's then move on to what we mentioned just before this, which was emergence. So we talked about effective, we talked about fundamental theories. We talked about effective theories, which are sort of higher-level, coarse-grained descriptions in a sense, but also, you know, for all practical purposes, very useful, and can also be in and of themselves, sort of, I guess, complete, in the sense that, you know, you don't need to look at something more fundamental to do everything you can with this theory.

Um, let's, let's explore this topic then of, uh, emergence. Um, how would you, how would you think about where emergence comes from? Almost, almost reflecting the question of itself. How does emergence emerge?

[00:40:25] David Krakauer: Yeah. I mean, again, it goes back to that. If you were talking to Phil Anderson, he would say the roots of emergence are in broken symmetry. Right, that, um,

that point at which the fundamental laws no longer do the predictive work that they have been charged with. Right? I mean, that's the thing about scientists as opposed to metaphysicians is that we're pragmatic, right? So science, we like it because it does something. If it doesn't do anything except entertain us as a series of thought experiments, then it's not science anymore.

And, um, and Phil rightly points out that, At the point that symmetry has been broken, the fundamental law no longer does work. So what do you do? And so we can, look, I mean, let's take a DNA molecule. It's made up of nucleotides, A, C, G, T, T, C, A, G, and so on. All of those permutations are compatible with physical law, right?

They can't tell you about those permutations. They can tell you nothing about it. They can't see them, right? They don't operate at that level, at that mesoscopic scale. So I want to understand them though. Why do you have ACCGT and a mouse has ACCTG and a fly has GGCAT and so on? We want to understand it.

We want to predict it. It turns out that to do that, we need another theory. And it's the theory that takes us from DNA to RNA to proteins to protein interactions to phenotypes and to behavior. And those are what we would call effective theories. They work. It's very principled, right? The theory of protein folding.

It's very rigorous, it's just not based on fundamental theories of physics. And, um, that was a failed enterprise. That's why, by the way, machine learning is so good at protein folding, because it's not fundamental. If you, that's a funny thing, if you try to do ab initio protein folding, you get to about a hundred atoms spanning about a picosecond, right?

So forget it. There is no fundamental theory of protein folding. And so there are effective ones, which means you put a lot of bias in, you put a lot of constraint, a lot of what would now be called priors. And, um, so again, at any given level, you have a set of models and theories and dynamics. You push them as far as they go.

until they can no longer explain the distribution over the observed states. And then you say, okay, a symmetry has been broken in that theory. I need a new theory that operates at that new level, which can explain the distribution of observed states. And you push that as far as it can go until that breaks.

And then you put another one in place. So there's actually a very rigorous iterative process of model building. And every break point is a point of emergence, right? And, and another way to say that would be that if you were to stay with the model you have at a higher level, there would just be too many parameters that you couldn't account for.

Right, that, that's its, that's its failing. So the ratio of the dynamics to the initial conditions would start to skew. And in the limit, it would become infinite. Because everything that you need to explain has to be explained in terms of parameters whose origins you don't know. And when that ratio starts to behave that way, it's a clue.

It says, well, you know, you need a new law, new model, or new theory. And that's the non-mysterious way of talking about emergence.

[00:44:21] Matt: Why is it that you think that so many people think about, um, the idea of reductionism, which we were talking about previously, as somehow in conflict with the idea of emergence? Because as you've just described it, then, um, emergence is almost a consequence of, of reductionism in a sense, but I think a lot, a lot of people think about it as

As, as sort of these two ideas that, you know, it's either reducible or, or it's, or it's, it's not reducible. Where does that come from?

[00:44:54] David Krakauer: Yeah. I mean, good question. You know, I think there are two, the first confusion is the confusion over what gets called ontological reductionism versus epistemological reductionism. The argument between, say, complexity science and reductionism is an argument between complexity science and ontological reductionism, meaning that the only way to understand the universe is to put things in super colliders, right?

That the Higgs field is what really we want and, uh, none of that nonsense about neuroscience, that's just all epiphenomenal on physics. And that's an argument about, that has been won because fundamental physics doesn't do work at that scale, right? In other words, it doesn't predict anything. And if a theory doesn't predict or explain anything, it's useless.

It's become philosophy. So I think that's easy. Where things are more interesting is epistemological reductionism. And I think that's where we all agree because, um, and sometimes that's called parsimony or elegance or minimality, um, regardless of the level that you work on. you would like your explanation to be intelligible, right?

And so hence the preference for compressed representations of reality at all scales. And there is no argument, I don't think. There was a very beautiful, underappreciated paper, in fact, in a book series that I just edited, by an Argentinian-Canadian philosopher of science, Mario Bunge, who worked at McGill.

And it was called the Complexity of Simplicity. And he pointed out that one reason why there's all this disagreement is because parsimony is actually complicated. We think it's easy. Oh, just choose the simplest. But he said, well, actually, simple is really hard. Simple is complex. And he said, you know, very nice idea.

He said, he called it the four-dimensional manifold theory of simplicity. So what are the four dimensions, right? So one of them is what we've discussed, you know, um, the parts versus the whole. What, what, what are the mereological units of my theory? What's the basic constituent of reductionism? That kind of, so simplicity means breaking it into its Lego pieces.

That's what simplicity is.

Exploring Epistemological Simplicity

[00:47:34] David Krakauer: Then there's the other one, which we said, the epistemological. No, simplicity is not about that. Simplicity is, can I write my theory on the back of an envelope? We said there are two others.

Pragmatic and Psychological Minimality

[00:47:46] David Krakauer: The other one is pragmatic. And he invokes Mach's concept there, that theory is sense experience economically arranged.

What we would call compression, okay? Um, that needn't be fundamental or a theory per se. It could be statistically Zipf compression or MPEG compression. That's not a theory as we understand it. That's another kind of minimality, right? And the other one was psychological, which was, Does it resonate with my understanding of how the world works?

Is it simple to my mind? Is it? And those four different dimensions actually compete.

Large Language Models and Minimality

[00:48:29] David Krakauer: Take something like a large language model. It's hardly epistemologically minimal, sort of trillion parameters for Christ's sake, right? In other words. So on the other hand, maybe it's as minimal as it needs to be to explain the phenomenon of interest.

So its ontological minimality is actually there, it's just we don't epistemologically grasp it, right? Um, it's clearly not intelligible, so it's not psychologically minimal, and so on, and I think it's very useful, and I suspect that the reason why there's been so much disagreement is because we've been living in a higher dimensional space than we think, and one consequence is we're speaking at cross purposes.

Right? And so the critique, the criticism of machine learning: oh, it's just unwieldy and it's a mess. Well, yes, if your criterion is that particular epistemological minimality, which I think is a reasonable one to have. I'm not putting any weight on them; I'm just saying that I think that is part of the enduring, um, source of this disagreement.

Human Intuition and AI

[00:49:43] Matt: That's a very interesting perspective and I do want to get to the book, uh, the book series and the paper series in a, in a minute, but what you've just said brings up this question to me: our intuition is constrained in a, in a particular way, or we seek, we seek theories that feel, uh, intuitive or graspable to us.

Um, but. I mean, clearly there are different ways of thinking, there are different ways, the human way of thinking is not the only way of thinking and, um, you know, basically are we optimizing for the wrong thing, uh, in many cases, um, you know, in particular, I think the case of large language models is, is great because from a practical standpoint, what, what they can do is, I would say already phenomenal, um, but it, it feels completely opaque.

to us. And we think of that as a big problem. And I do wonder quite often if the thing that we're optimizing for, uh, is just kind of, sort of, often an unfortunate fact of just the way our minds work. And there is a whole realm of theory, of intelligence, of whatever you call it, that we're not looking for.

Um, is that, is that something you, you think about at all? You know, different ways of thinking, non-human ways of thinking, and actually how much of the, the problems that we've been discussing actually just relate to that versus being inherent in the, in the thing itself?

The Evolution of Intelligence

[00:51:14] David Krakauer: Yes, that goes to the evolution of intelligence. And one of the questions is this one, and I'll give you two analogies that might clarify it, perhaps.

Sport, Tools, and Human Enhancement

[00:51:21] David Krakauer: Let's consider the current debate about sporting events that are either enhanced or not enhanced by drugs.

Okay, there are people out there who'd say all that matters is watching someone run very fast. Okay, in which case, who cares? The limit of that, of course, is they just get in a Ferrari and drive 100 meters. Okay, which none of us would find interesting. But it is in some sense just an extrapolation from biochemical enhancement of physiology.

And there are others who say, No, actually, there's something about pushing the human to its limits, that interests me. And I think when it comes to sport, I'm probably in that camp, I kind of don't think I want to see people who are cyborg. I mean, it would be an interesting event in itself, but I still would be fascinated by the Olympic Games.

There's still something about it. Okay. And of course, I mentioned that as an analogue to an intelligible theory, making sense of the world with our own minds. That's like running the race without chemical enhancement. Let's take another example. Um, the use of tools, which is something I think about a lot.

I call it exbodiment, which is enhancing our function with tools. I use a knife and a fork to eat. Somehow I don't feel that that's compromising in the way that taking performance-enhancing drugs is. You know, if I went to a meal with you and you ate with your hands or with your mouth, I might actually give you a knife and a fork.

You know, I think, you know, given the objective,

you'd be better off being enhanced by a tool that exceeds the capability of your kinematics, right? Why is using a knife and fork different from using performance-enhancing drugs? And I think it comes down to the objective function, if you like: the objective function of the first is human performance; the objective function of the second is metabolic intake. And of course, mathematics also fits into this story, because mathematics is like a knife and a fork.

Once you become facile with those particular instruments, you don't have to think about how they work, and they're clearly enhancing our ability to imagine and compute.

So the punchline here is, what is science?

Science: Humanistic vs. Utilitarian

[00:53:52] David Krakauer: Is science a human activity to grasp the universe the way an athlete performs in an event? Or is science utilitarian exercise that seeks to maximize the extraction of free energy from the universe, right? And I think it's both. I think that's the thing. I think it's both.

And, um, it's a humanistic exercise and it is a utilitarian exercise. And we just have to somehow be clear about, in each case, which it is. Um, the arts suffer less from this because they don't have such an obvious utilitarian value. I think they do have utilitarian values, but it's not quite as obvious, I think, as science.

And I think that's what lies at the root of this problem, and people haven't been clear enough on those distinctions, and hence, endless, pointless debate.

[00:54:45] Matt: That's a fascinating perspective. I think, um, you know, we talked about Ed Witten and the string theorists' table earlier, and clearly they would fall more into the camp of, um, understanding is the thing, versus utilitarian. And I guess a lot of people do find, some people find this more objectionable in a sense, you know, they feel science should be directed more towards practical.

More practical, outcome-driven matters. Um, I think complexity science, the area that you work in, something that I like very much about it is that it does seem, more than any other way of doing things or thinking, it bridges the various levels and the various disciplines, and I actually feel it does bring very fundamental ideas out into, into the practical world in

[00:55:37] David Krakauer: Let me, yeah, can I give you, uh, my middle ground on those two positions now? Which I think perhaps resolves the dialectic a bit. So let's now take the example, not of performance-enhancing drugs, not of a kitchen utensil, but a musical instrument. Let's take the example of a violin. Okay. Clearly, it's performance-enhancing, right?

There are things you can do with a violin that you can't do by whistling, presumably, right? Well, someone might be able to, so, okay. Might not sound very good. But the thing about a violin is it also permits human expertise, in the way that an athlete can be expert, in a way that, I'd say, a fork can't so readily.

So the artifacts and enhancements that I'm interested in that have utilitarian value are the ones that also allow for humanistic expression. And the question I think about LLMs is whether they are violins. Right? I have no problem with something which is different and enhancing above the baseline of human performance.

At its maximum. But I want that thing to be able to allow me to express myself in novel ways, like a violin. And I think the jury is out. I think this is, for me at least, why I am on the fence. Not about the fact that they do amazing things, but what their contribution to human civilization will be. And, um, that hasn't had enough debate and discussion.

There is some discussion, for example, around the idea of, you know, context window coding. That what will replace coding is being really good at, having a facility with, asking the right questions in some sense. And maybe that's true. It doesn't quite feel like a violin to me, but you know, maybe that's my lack of imagination.

Maybe that's just a question of time. But, but I think that is how we should think about, um,

resolving, on the one hand, as you say, the humanistic pursuit of science through the pursuit of an intelligible universe, and the more engineering-like ambitions of science to have instrumental value in the world. And I'm really interested in the things that have instrumental value, quite literally instruments, that also are humanity-expanding, and thinking about that space more carefully.

It would be worthwhile.

[00:58:22] Matt: Yes. Yeah. And there's so, there's so many things that come to mind here and so many historical examples of this. Um, but you know, one, one framing here, um, you know, I love the framing: are LLMs violins? The fact of the matter is an LLM, well, machine learning, it doesn't have to be an LLM, but an AI, could produce music that, from the, the listener's perspective, was better than anything that they'd heard before.

And from the subjective perspective of the listener, so forget the person who produced it, it sounded creative, it sounded all of these good things. Um, you know, that being the case, there does become this sort of trade-off to be made, and we're getting a little bit into the ethical weeds here, but whether, um, in that case, you know, the, the human is, is holding that violin back.

And, um, and, and is, is that really the right thing to be worrying about, versus the output, you know, the, the beautiful music that can be made from that violin?

[00:59:26] David Krakauer: Yeah, I mean, I think it's a really, this is a question that has obsessed. Humanist for a long time. And the, the, the essay that is always mentioned in, in, in this connection is Walter Benjamin's fantastic essay on the, the role of art in the age of mechanical re reproducibility. Um, if I said to you, Matt, look, here are two paintings.

I dunno what's in the background there, but let's say it was something like a Robert Rauschenberg like painting.

[00:59:58] Matt: wish,

[00:59:59] David Krakauer: Yeah, that's why I said like, you know, and um, and here in, you know, given a choice between the original Rauschenberg and one that's, you know, identical as far as you're concerned. The fact of the matter is you would much rather have the Rauschenberg, not just because of the resale value, which would be a good reason to prefer it, but because it's aesthetically more pleasing.

Why? Because human beings are interested in provenance. We're interested in priming. We're interested in what Walter Benjamin called the aura of the original artwork. Now, you can deny that and say, no, the only experience I have of an artwork comes through my eyes. But that seems to be a very depauperate conception of what a nervous system is.

Because it's what comes through your eyes and what comes through your knowledge, right? Your reading of art history and so on. Those things are as real as sensory perception, and they integrate into a quasi-unified position on the aesthetic value of an object. So I've always thought it was a bit silly, because it's strange to say that sensation has primacy over memory.

And I think that's what Walter Benjamin was trying to say. So when you talk about a violin producing better work, yes, there's nothing wrong with that, as long as you allow for the possibility that people also care, even if it's not present in the physical sensation, that it was performed on a Stradivarius, right?

And so forth. And I think, again, this just encourages us to have a more complete understanding of the nature of human aesthetics, and that the thought experiment is actually limited in its application, because it denies everything other than primary visual perception or, you know, auditory perception.

[01:02:00] Matt: I really liked your framing of, what was the word you used for the fork, for the tool? You said

[01:02:08] David Krakauer: Exbodiment.

[01:02:09] Matt: Exbodiment, I really liked that framing. And, you know, the fascinating thing about physical and cultural evolution is that, with exbodiment, with tools, over time this actually reflects back into the user of the tool itself in a very real way.

You know, our minds change over generations; our bodies and minds change. Do you worry about what the existence of these sorts of, I don't know, cognitive substitutes, cognitive augmentations, cognitive exbodiments,

[01:02:44] David Krakauer: Yeah.

[01:02:45] Matt: tools out there, will do to us over the longer term.

[01:02:50] David Krakauer: I think it is what makes us human. I don't worry about them; I consider them defining of the human condition. Cecilia Heyes at Oxford calls them cognitive gadgets, like mathematics, right? Like language. If you ask what makes us special, it is precisely these things. And I think drums and violins and paintbrushes and so forth are the essence of the human condition.

And so I don't worry about them. I worry when they don't have this character that you're referring to, which I obsess over, which is getting good at them, you know. I'm very interested in things that you can get good at, right, paintbrushes and violins and sport and all those things that are enabling of self-expression or collective expression.

So I'm not worried about them; I'm bullish about them. But when do they become a problem? I'll give a good example: let's take the automobile. It's a very interesting, complicated case, because on the one hand there are what economists would call the externalities of driving: covering beautiful landscapes with asphalt, running countless millions of people over, and leading to an epidemic of obesity because people don't walk, run, or cycle. So there are obvious negatives.

On the other hand, who's going to deny the beauty of an Aston Martin? Who's going to deny the extraordinary skill of a Senna in Formula One racing? Everything is complicated. And my approach has always been to dissect things in such a way as to understand all of those quite distinctly, the aesthetic contribution, the practical contribution, the cognitive contribution, and to think about them carefully.

And the question for me, right, is: will there be a Steph Curry of LLMs? Will there be an extraordinary athlete of that world? Will there be an extraordinary mathematician? Or is it something else? Is it more like a really badly designed car that just emits a huge amount of pollution and will never be an elegant machine?

And I think it's fair to say the jury is out at this point.

[01:05:32] Matt: I would agree. Yeah, I would agree, but I'm waiting with bated breath. David, we've covered such a broad range of topics today, and sort of peered into rabbit holes and pulled out again, and I think this is representative of

Complexity Science and Its Applications

[01:05:50] Matt: the field that you're mostly known for and that you're working in, this field of complexity and the work being done at the Santa Fe Institute, and the book volume that you're putting together.

And I must say I've very fortunately got to read the very large and expansive introduction, and it is

[01:06:09] David Krakauer: Oh, I should tell you quickly, the introduction is coming out in book form and will be available within the next week, for those of you interested; it's called The Complex World. So that bit that you read is a standalone book as well as being the introduction to volume one of the four volumes.

[01:06:25] Matt: Ah, fantastic. I will link it in the notes. And it's just exactly the right blend of science and philosophy that this podcast is about, so I think people will enjoy it. I almost don't know where to start with this, because it is such a wide-ranging piece.

Would you mind just giving me a little bit of the background here, the series, the development? What is the origin of this series?

[01:06:52] David Krakauer: You know, there's this question people rightly ask: what is complexity? Is it a discipline? Is it like chemistry? Is it like physics? Is it like biology? What is it? Is it nonsense, you know, some New Age set of ideas? What is it? And I think it's a totally reasonable question to ask of any field, quite frankly.

And so it started there, with me trying to reflect on what it is. I went to all my colleagues, and the Santa Fe Institute, for those who don't know, is based in Santa Fe, New Mexico, on a mountaintop, on two campuses actually. And I started asking my colleagues: if there was one paper that I really would have to read, that you think is in some sense constitutive of what complexity science is all about, what would it be?

It's not an easy question to answer, by the way. People would say, well, let me think about it. Some people never answered me, right? Some answered immediately, others, you know. And so I started accumulating these papers, and they span 100 years. They start in 1922 with Lotka, then there's a famous paper on Maxwell's demon in 1929, and they end with Bob Laughlin, who won the Nobel Prize in condensed matter physics, and Elinor Ostrom, who won the Nobel Prize in economics for collective action.

So it's a hundred years, around 90 papers. And I asked each of these folks, and there are many, to also write an essay placing each of those papers in context. Why was it important? Why did you recommend it? What was the enduring influence? And I got all these papers and started to see, oh my god, there's actually a coherent pattern that runs through all of them.

There is a centroid here that we call complexity. I really wasn't sure, right? In other words, I have to be gung-ho about it; I do run this sort of institute, and there are many things I could have said before, but now I really believe it in a way I'm not sure I even did before. What we essentially work on is systems that are either self-organizing or selected, far-from-equilibrium, dissipative structures, with either short or long memories of the past that are used to make predictions about the future.

We work on teleonomic matter, purposeful matter, and purposeful matter is everywhere, right? It's biology, obviously, and it's also in engineering. So why aren't we just biologists and economists and engineers? Because we try to look for the principles that span all of them. That's what makes our approach a bit different.

It says, instead of putting the phenomenon in the primary position, put the principles in the primary position and ask about all the phenomena that can somehow be explained or explicated or understood in terms of those principles. So that's what we do, and that's where I started going. And then I realized, well, what came before?

You know, what was the history of these ideas? And it comes out amazingly clearly. As I say in that book, in the same way that modern physics and, let's say, inorganic chemistry come out of the scientific revolution of the 17th century, complexity science comes out of the industrial revolution. What we study are machines.

We study machines that were either evolved or that were built. The prequels to what we do are, you know, thermodynamics, statistical mechanics, the theory of evolution, the theory of nonlinear dynamics, and the theory of logic and computation. You know, Boole, Babbage, Maxwell, Clausius, Boltzmann, Darwin, Mendel, Wallace, Poincaré.

That crew were essentially defining the circumference of what would, in the 20th century, start to be connected in movements like cybernetics and others that were trying to connect information theory to dynamics to computation to physics. And as I said, the center of that endeavor is trying to understand purposeful matter.

That's what comes out through the superposition of all of these papers. And I think we've just started.

[01:11:50] Matt: Yeah, I mean, the name of this podcast is Paradigm, and I don't know if it's a consequence of that or what, but very often we end up talking about paradigms, looking not just at what we know but at the whole system of knowledge in which we know it, sort of examining that. And this whole complexity business does seem to be at the start of defining a very new paradigm, a new way of thinking about things.

The Future of Complexity Science

[01:12:18] Matt: As you said, it's matter with purpose. What is your vision for what this looks like over the coming years? What does a complexity paradigm look like?

[01:12:29] David Krakauer: Yeah, I mean, this is why I was so interested in your podcast. In a sense, my last several years have been spent trying to understand the nature of a particular paradigm, or an emerging paradigm, to use that word as well. Most people are familiar with that term from Thomas Kuhn, from The Structure of Scientific Revolutions: that revolutions are periods of time in which paradigms emerge.

It's difficult to understand what that means, but if you look more carefully, he wrote several more papers after that book where he talks about a paradigm as the structure of a disciplinary matrix, or, if you like, a set of ideas that connect harmoniously. And then there's a set of ideas that just don't connect.

It's like a disconnected graph, and the connected graph is what we mean by the paradigm. Periods of normal science are where you just add vertices to that graph and connect them. And periods of revolution are where you make an observation and it just blows the graph apart; it doesn't want to live in there.

You know, that's exactly like quantum mechanics and classical mechanics. It doesn't fit. I need to build an entirely new set of graphs, right? Because it's not going to fit in the classical one. And the world is just full of these disconnected graphs. It's very interesting. And part of what we try to do is connect them.

And complexity is this set of ideas that are in themselves challenging, like non-equilibrium statistical physics, adaptive dynamics, nonlinear control theory, all these things, right? Thermodynamics, computation. They themselves are hard, and to reconcile them is even harder. So when you say, what is the future?

That's the future. We need a theory of the dissipative system called the brain, which is a complex, high-dimensional, nonlinear dynamical system that somehow does computation in a non-standard way, right, in order to be fitness-enhancing, in other words, to be an evolutionary agent. So you can sort of see how it has to come together, but I don't think we have a clue.

It's very new, and I think that's one of the other reasons why people have asked the perfectly reasonable question of, what is it? Because that graph is very incomplete. But just to say we're interested in purposeful matter, and that we think markets and minds and machines have a lot more in common than we ever suspected, is something to which a lot of people would say, yes, I think you're probably right.

And we need that kind of paradigm to understand the modern world, which is hybrid, right, in a way that it never was. So I think that's partly why there's renewed interest, because it's practical now.

[01:15:46] Matt: Do you have strong perspectives on the types of problems we will be able to solve, I guess, with a complexity paradigm? Looking again historically, let's take something like quantum mechanics. At the time, we didn't know that this would be the thing that gives us the sort of mental technology to do things like build atomic weapons, or to build semiconductors, and all those other things.

So it is hard to speculate into the future, but I would imagine that having an effective complexity paradigm would open up a whole new class of problems that become solvable through the use of these tools. Do you have a vision of what that might look like?

[01:16:29] David Krakauer: Yeah, I do, probably many. One way to play this game would be to go through every area where we've been successful but have hit a boundary, or have been unsuccessful, right? Successful but hitting a boundary is the miniaturization of transistors, because of the energy dissipation problem and the problem of building effective heat sinks for miniaturized transistors.

And there's an interest in reversible computation, right, where, as we know from Landauer and others, we don't have to pay a cost in the way that we do for irreversible computation, because we don't need to erase bits. So that's an area where there's actually active work: could we build more efficient computer technology based on new ideas from the thermodynamics of computation, which is one of the fields we've worked on a great deal?
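For reference, the erasure cost David alludes to is usually stated as Landauer's bound; a minimal statement of it (added here for context, not quoted from the conversation) is:

```latex
% Landauer's principle: erasing one bit of information in an environment at
% temperature T dissipates at least k_B * T * ln(2) of energy as heat.
% Reversible (logically invertible) computation can in principle avoid this
% lower bound, because no information is erased.
E_{\min} \;\ge\; k_B T \ln 2 \;\approx\; 2.9 \times 10^{-21}\ \text{J per bit at } T = 300\ \text{K}
```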

Another area is healthcare and medicine. Most silver-bullet pharmacological interventions are a disaster. In certain areas, mental health care, it's witchcraft, right? Those are clearly systems that require new principles to be understood. We don't know how to do interventions into 86 billion agents, which is the number of neurons in your head.

We don't know at what scale to do the intervention. There are people who say we should do psychoanalysis, right? There are those who just do pharma. Well, this seems like a perfect question for a science where emergence has principled definitions. And then there are the biggest questions of all, and it's the reason why we finished those four volumes with Elinor Ostrom's work on collective action: how do we solve climate problems?

How do we solve global conflict? Now maybe you can't, right? But we've never built systems of coordination that operate effectively at that scale. I mean, there are treaties, there are organizations like the United Nations, but these are not based on fundamental principles of complex systems.

They're based on intuition. Sometimes that intuition is amazing; more often than not, it's rubbish. And so our belief is that, just as we built better transistors, just as we have built more efficient buildings, there are insights into building more efficient societies, ideas that we could be using so as to coordinate our information.

So I think those are the really big ones that unavoidably require a new discipline. You don't go to a physics department, you don't get a chemistry degree, you don't get an economics degree with a view to solving problems at that scale. We need to connect those fields. Who's connecting those fields?

Well, we are, and there should be others, right? There should be many others doing it. I don't know why we are so relatively unique; it's nuts as far as I'm concerned. But I think the next, if you like, civilizational paradigm will be this: the level of knowledge integration required to solve collective action problems at the global scale.

[01:20:08] Matt: I mean, potentially one of the reasons why there are so few people working on this is, I guess, historical contingency: the way our knowledge institutions have progressed is that we have departments that focus on very specific things, and for good historical reasons, because these things are difficult.

There's a lot of learning to be done to even get a grasp of those particular paradigms to work within. And I guess there just aren't that many people who can sit across all of these disciplines and work together in this way.

[01:20:42] David Krakauer: Well, look at you; I mean, can I just quickly interject and call out to you and people who are doing what you're doing? Because in some sense, right, if the universities and departments were doing this well, what you do might not be so necessary. In other words, there's a real sense in which what you're doing is applied complexity, in the sense that you're saying, look, there are ideas that need to be connected and communicated.

This is not being done. And this whole fascinating, technologically enabled sector that has emerged over the last several decades is, I think, a homeostatic response to the deficiencies of the existing institutions, which is kind of a wonderful thing.

I mean, it's as if it's saying, we have to take over, enough of that nonsense, it's time to address the problem seriously. And the way it's been happening, quite naturally, in a decentralized fashion, I think is kind of fascinating.

[01:21:46] Matt: Yes, I agree. I will say, just by the nature of how things are set up, it is difficult and uncomfortable, because it means always feeling sort of uneducated, like not an expert across every different topic. But I think you're totally right, it does serve that function. And I think the evidence shows that there is a need for it,

which is why these things are working. Do you see, in, I guess, 50, 100, however many years' time, the structure of our knowledge institutions still being organized along these departmental lines? Is complexity science sort of the cross-disciplinary unit that brings them together, or do things fundamentally change?

[01:22:36] David Krakauer: Yeah, I don't know, that's really interesting. Are we Wittgenstein's ladder? That is, once you get to the top shelf or window, you can throw it away because you've reached your target. And that's very interesting, in which case, wonderful; if that's true, if it helps us reach that point, I have no problem.

I don't think it is, because I think it will evolve, as all disciplines must, as all fields must. It's just that the problems we're dealing with, I think, are genuinely difficult. I think they're really hard. I mean, if you think about it in these terms, most fields are explained in terms of new observations that had to be accounted for, right?

In other words, you know, nonsense stories like Newton and his apple, right? It's not as if he hadn't seen objects fall before he was taking a snooze in his mum's garden, but okay. But more interesting ones, like the double-slit experiment, right? That's, oh, what happened there? That was weird, right?

Or Rosalind Franklin's Photo 51, which was the basis of the Crick-Watson inference of the double helix. So most science works that way: you find something and it challenges you, or challenges a paradigm. The emergence of the complexity paradigm is a bit different, because the things that we're observing are hidden in plain sight, as I like to say.

Society, markets, human intelligence: these were not things hiding somewhere that needed a high-energy laser to be revealed. So it's a different kind of revolution. It's a revolution in coming to understand the things that we took for granted, the things that we kind of thought weren't interesting, almost.

Sort of weird. I mean, why is society less interesting than what an electron does? It seems to be more interesting. But it was so much a part of our everyday experience that it didn't seem to warrant the same kind of rigorous attention. And I think that has to change, right? The ordinary has to become extraordinary for the complexity paradigm to succeed.

[01:25:10] Matt: It feels in some ways almost like a greater openness to philosophy within the sciences than what we've seen in many fields. I don't know if you share this perspective, but complexity scientists, complexity researchers writing on complexity science, tend, at least from my perspective, to have a slightly more philosophical flavor than what one might find elsewhere.

[01:25:33] David Krakauer: I think that's partly because of our embryonic character. You know, it's interesting; I used to talk to Murray Gell-Mann about this a lot, about reading mathematics books that were written by, say, Poincaré, versus texts that were written by Grothendieck, right? And what changed there was the loss of the philosophical context in which the rigorous questions are being asked.

And so if you pick up a modern text in category theory or what have you, it jumps right into definitions and you're a little bit lost as to what the question is. You know, why does this matter? Why are you even bothering? I go to so many talks that begin in medias res. It's like, okay, consider the following group.

And you think, well, hold on. What you call philosophy, I think, is an effort to understand the structure of the problem, its relevance, and its connections to other things that we care about. And I think that happens a lot in the early phases of the emergence of a new paradigm.

Like I said, compare a 19th-century math text to a 21st-century one; the same is true of physics, right? If you read Heisenberg's or Schrödinger's books on quantum mechanics, they're actually very readable. I'm not saying they're not technical, but if you pick up a 21st-century textbook in quantum mechanics, it's dry as dust. You know, you get beautiful books like Tony Zee's, which are more readable, but by and large it's a big shame, because what happens is you anneal prematurely, right?

It's almost as if all of the questions have been asked, and now we're just chasing down the answers. And what you're calling philosophy is keeping the questions in the air.

[01:27:32] Matt: Yes. Yes, it's good that you mentioned Tony Zee, because I believe there are at least two of his books right here.

[01:27:39] David Krakauer: Ha ha ha, great, you see. Yeah,

[01:27:41] Matt: Shout out to Tony. On the topic of books, you know, complexity science is a big topic. There are many places one could get into it and come to it from, and it can be very technical.

Books and Resources on Complexity

[01:27:57] Matt: Maybe I'll ask: apart from the book that we've just mentioned and that you're working on, if somebody wanted to get an early introduction, to get started, you know, not as a topic expert, does anything come to mind as a good starting point? Books, resources; where does one get started with this field?

[01:28:18] David Krakauer: Well, I think there are several, and I'm obviously completely partisan about this, but I think my colleague Melanie Mitchell's book on complexity, an introduction to complexity, is a very nice book. It has a particular perspective that I don't entirely share, incidentally, but I think it's a very nice

introduction. In fact, in my book, I have a table of all of the previous books, ranked according to how technical they are. So that might be of interest to people.

At this point, as I said, I try to be generous that way and list a lot of books that I think are interesting in different ways. You know, for me, like a lot of people, I remember reading Doug, you mentioned Doug earlier, Doug Hofstadter's book, Gödel, Escher, Bach. And I still think that is actually, wait, is it behind you as well, next to the Tony Zee book?

Yeah.

[01:29:12] Matt: It is, it's there,

[01:29:13] David Krakauer: Right, it's there, of course it's there. And I think he's dealing with a lot of these issues: recursion, undecidability, cognition, representation, encoding, a lot of the stuff. He's not doing the social stuff, economics; he's not doing archaeology, not doing history, but he touches on it. And then at the same time, you know, at the end of the 70s and the early 80s, there were these amazing books by people like Prigogine, right, on complexity.

And Manfred Eigen's beautiful book called Laws of the Game, where he basically tries to take Hermann Hesse's novel The Glass Bead Game and use a Go-like game to explain all of complex reality. It's an under-read book, by the way, which I think is wonderful. And then of course, now the field has textbooks.

Any number of them. Mark Newman has a beautiful textbook on networks, and on it goes. I think my colleague Geoffrey West's book Scale covers a very interesting area, one part of complexity science that shows its enormous application to issues of energetics and urbanization. There are many, and I could go on and on.

I would encourage people to come to the Santa Fe Institute webpage. We do have resource webpages for people who are developing curricula or who are just interested. We have a MOOC platform called Complexity Explorer, which is a greater commitment of time, because there's some expectation you would take a course, and not everyone has that time.

So we do have lots of resources online for people to look at. But, you know, I don't want to claim that SFI is the only place doing complexity. I am unapologetically biased, but if you went to Amazon and searched, I'm sure you'd find all sorts of books that would be of interest, given whatever one's own perspective is.

Matt: Fantastic. Have there been any particular books, and they don't have to be about complexity, books in general, that have been particularly influential for you personally?

[01:31:22] David Krakauer: Many. It's funny, it's a question lots of people ask, and I even have an article on it, actually with my brother; my brother and I were interviewed on exactly that question, and there's an article, I'll send you a link. I've spent my entire life devouring fiction. I love novels, I love science fiction, I love fiction. And if you ask me what the most important books for me that way were, I'd say, and this is something I shared with my former colleague Cormac McCarthy, who was here forever, a wonderful novelist, Moby-Dick. I think Melville, in that book, kind of touches on everything we've touched on, in a sort of weird way.

I mean, you have the Ahab character, who's just despondent and enraged by the fact that he doesn't understand everything. You have biology, you have teamwork, you have currency exchanges; it's a sort of extraordinary microcosm. I think that's partly why I love it. I love Robert Musil's The Man Without Qualities, which is much less read, but to me it's really the modern Moby-Dick.

He was Mach's student, actually, worked on the philosophy of physics, and then became a novelist. So I think novels, for some reason; it's why I'm such a fan of people like Neal Stephenson, who is also here, and Ted Chiang, who is also here, and I talk to them a lot about these topics.

In the non-fiction world, of course there are books like Doug's, but there I tend to read more technical books. I have a sort of limited tolerance for popular science books, and that's simply because I'm in it, you know what I mean? I feel as if I'm either going to read the technical books and the technical papers or I'm going to read novels. Only with very, very few exceptions do I enjoy reading popular science.

You know, there are great books by Richard Dawkins and Roger Penrose, and all these people are just wonderful. But it's sort of a problem: I'm a little bit too close to it to be satisfied by that level of dilution. It's kind of been ruined for me somehow.

It would be like being a sommelier or something, going to a restaurant and spitting out all the wine they give you. It's not that it's bad, it's just that you've been spoiled by having drunk too many good bottles of wine.

[01:33:58] Matt: Yes, yeah, totally. Well, I think those are great recommendations, and I will link them in the episode notes.

Final Thoughts and Reflections

[01:34:06] Matt: David, as we bring it to a wrap, one question that I'd like to end on. We've talked about many things, but we've talked about intelligence, we've talked about artificial intelligence. Suppose we were one day to be visited by, or to create, an AI superintelligence, and we had to choose one person, past or present, to represent us to this superintelligent other. Who would we send? Who should we send?

[01:34:37] David Krakauer: You know, up until I read Benjamín Labatut's The MANIAC, I would have said someone like John von Neumann,

[01:34:43] Matt: Hmm.

[01:34:44] David Krakauer: but now that I know more about him, he's the last person I would send. In other words, incredibly clever, but with the morality of a sort of sea squirt. So part of me wants to say Thelonious Monk. Part of me wants to say Nelson Mandela. Part of me wants to say von Neumann. I mean, what makes the question impossible, of course, is the fact that we are all so limited in our abilities. And according to which side of the bed I get out of, I either think being ethically deep is more important or being scientifically deep is more important.

And I don't think I can break the symmetry. I've sort of met so many extraordinary people, and one of the things about getting to know extraordinary people is coming to understand how limited they are. So, in the spirit of SFI, which works on collectives, it's going to have to be a group. I'm so sorry, it has to be a group of extraordinary people.

It's going to be one of those sort of balloon-type contests. I'm going to send an artist and a musician and a scientist and an activist. I'm putting them all together, you know, and maybe one day the collective intelligence that we call an LLM, or its future descendants, will contain multitudes.

And, you know, maybe we'll send that thing.

[01:36:28] Matt: Well, I think that's a beautiful idea and a beautiful place to wrap up. David, thank you so much for the conversation today. It's been fantastic.

[01:36:38] David Krakauer: You're welcome. Thank you so much.
