Joel is a renowned mathematician and philosopher whose work covers a wide range of important topics, including logic, computability theory, game theory, the philosophy of infinity, and more. He's also the top-rated user by reputation on MathOverflow.

Joel is the author of several books, including Lectures on the Philosophy of Mathematics and The Book of Infinity, which he's publishing in serialised form on his Substack, Infinitely More.

**Topics**:

the concept of truth in maths and elsewhere

the nature of proof in mathematics

the celebrated completeness and incompleteness theorems

the relationship between mathematical thinking and the human mind

… and other topics.

Watch on YouTube. Listen on Spotify, Apple Podcasts, or any other podcast platform. Read the full transcript here. Follow me on LinkedIn or Twitter/X for episodes and infrequent social commentary.

**Episode links**

Lectures on the Philosophy of Mathematics: https://amzn.to/4dhMh14

Substack: https://www.infinitelymore.xyz/

The Book of Infinity: https://www.infinitelymore.xyz/s/the-book-of-infinity

Panorama of Logic: https://www.infinitelymore.xyz/s/panorama-of-logic

Other books: https://www.infinitelymore.xyz/p/books

**Timestamps**

0:00 Intro

1:17 Truth

8:38 Intuition vs objective truth

13:15 Proof

20:39 Completeness

30:18 Incompleteness

37:20 Is incompleteness a 'problem'?

43:07 Hierarchies of logical systems

48:44 Axioms and where they come from

1:03:50 Motivations for studying pure mathematics

1:19:57 Joel's books

1:22:58 Who should represent humanity to an AI superintelligence?

**Transcript**

*This transcript is AI-generated and may contain errors. It will be corrected and annotated with links and citations over time.*

[00:00:12] **Matt Geleta:** I'm here with Joel David Hamkins. Joel, thanks for joining me.

[00:00:15] **Joel David Hamkins:** Oh, it's a pleasure to be here.

[00:00:17] **Matt Geleta:** Let's start off with the concept of truth. This is a central concept in mathematics and elsewhere, and I think it's commonly believed that in mathematics, mathematical truths have a different character to truths in other contexts and in other fields. How do you think about the concept of truth within mathematics?

[00:00:39] **Joel David Hamkins:** Right. So I guess it's not only the concept of truth that's different; really, the concept of existence is different in mathematics than it is in many other domains of knowledge. I mean, mathematics is concerned with what are regarded as abstract objects, in contrast with, say, physics and so on, which is concerned with the nature of physical existence, and it's a totally different character of existence, isn't it?

I mean, you know, if you think about numbers, what is a number? What are numbers? Well, they're these abstractions of a certain kind, and it's maybe confusing to think about their nature, because it seems different from the nature of existence of an apple, or a few apples on your desk, or something like that. Of course, we also have abstractions in the physical world, like beauty, for example. Does beauty exist? There are beautiful things, but what about beauty itself? That's a kind of abstraction. And in what sense does the abstraction exist? Does it reduce to the individual items that instantiate that concept? Maybe the same thing's going on with numbers, right? And so when one is talking about truth, it's just another layer of abstraction, of course. I mean, if we talk about certain existence assertions about certain kinds of numbers, are there infinitely many primes, or is there a number that solves a given equation and so on, then what is this "exists"? What does it mean? And it's not at all clear what we might mean. Benacerraf wrote a famous paper about what he viewed as a kind of fundamental problem about this abstract existence, namely the problem of causal interaction with this abstract realm. I mean, of course, the idea of this sort of platonic realm of mathematics, or of ideal forms, is quite old, but the problem with causality is that we seem to gain knowledge and make truth assertions about that realm, but how is it that we're able to interact with the abstract realm at all?

This is Benacerraf's challenge, right? There can't seem to be any sort of causality flowing in either direction, either from us, you know, as physical beings, to the platonic realm, or in the other direction. There's sort of this impassable divide between what we experience and the nature of the objects in that abstract realm.

And so this was viewed as a kind of puzzle for how we can ever come to mathematical knowledge about that realm or make truth assertions about it, because we can't seem to get across this bridge. But okay, so a lot of people take this objection quite seriously. I mean, there are hundreds of papers written about this idea, but other philosophers often tend to reject the objection itself.

For example, Barbara Montero argues that it's not like we have such a great understanding of causality in the first place. I mean, causality itself, even in the physical realm, is quite mysterious. And if you think about the problems of causality in, say, relativity theory, it's quite confusing. So it's not at all the case that we have a perfect understanding of causality, such that this unbridgeable gap between the platonic realm and our universe is a clear-cut violation. So it seems like Benacerraf is making this concept of causality do too much work, in a way that our imperfect understanding of causality just can't support. And so maybe we don't have to take the objection so seriously as that. There are other kinds of issues, though, that come up with this abstraction. Namely, it's quite commonly said that the sort of difficulty of the mathematical existence assertions has to do with their abstract nature, but that if we could give a kind of account that reduces the abstractions to, say, assertions about the physical world, then everything would be good. That idea is somehow presuming that our understanding and our accounts of physical existence are totally clear, and that it's the abstract existence that's the worrisome one. But I've argued that maybe we have it backwards on that point, because it seems like with physical existence, the more physics you know, the less clear it is,

[00:05:51] **Matt Geleta:** [laughs]

[00:05:52] **Joel David Hamkins:** what with the strangeness of quantum mechanics.

And so the deeper you dig down into the fundamental nature of physical existence, the more incoherent it seems. It's like we can't really give a very coherent or full account of the nature of existence of, say, electrons, or an apple on my desk, if we really want to give a complete account of what it means to say that such a thing exists. Whereas with certain simple kinds of mathematical existence assertions, like the existence of the empty set, which is an example I like to use, it seems like we can give a pretty satisfactory account of what it means to say that the empty set exists. We can talk about properties that are impossible to instantiate.

And, you know, the individuals that fall under such a predicate are exactly the members of the empty set, and okay, we can talk in this kind of way. And it seems to me that's a more comprehensive account of what it means for that abstract thing to exist than is possible in the physical case.

I can't even imagine the nature of a corresponding full account of the existence of an electron, say, that was as complete. So what I think is the mysterious one is physical existence, and abstract existence is maybe much easier. From this point of view, maybe these mathematical truth assertions are a little easier to come by than the physical ones.

[00:07:38] **Matt Geleta:** Yeah, I think, I think it definitely connects with a lot of people's intuitions about mathematical truths, at least being somehow objectively more true in a way, or, um, more fundamental in some sense than these physical truths that you mentioned. But it does strike me if you look at it from first principles, even the physical truths, like we're using concepts to talk about them.

And in some sense, they are all abstractions that live in some mental realm or some mind space, and it's actually not very clear how to differentiate the spaces, right? You mentioned electrons and, you know, an apple on your desk. Those are just words, or ideas floating in my mind, in the same way as a symbol on a page.

And, you know, yeah, I think you're totally right. How does one then connect the idea in the mind to whatever it is it's meant to be referring to outside of the mind? It feels like there's a bit of a mind projection fallacy at play here. But that leads me to the idea then: is it easy to mistake our presumed knowledge of something's truth for its actual truth value?

So, you know, certain things might seem intuitively very clearly true to us, and intuition can only take you so far. Do you think it's a common problem to mistake the strength of intuition about something's truth value for the objective truth value out there, in either the real world or the platonic realm?

How do you think about that role of intuition in truth?

[00:09:23] **Joel David Hamkins:** It's interesting that you use that word intuition in that particular way, because this is a common way to talk about the sort of historical rise of intuitionism, as in intuitionistic logic. In classical logic, typically, one makes a clear distinction between what's true and sort of our knowledge or reasoning about what's true.

[00:09:47] **Matt Geleta:** mm,

[00:09:48] **Joel David Hamkins:** Those are just totally different things. And there's this idea that there's an objective nature to what's really true, sort of independently of what we might know about it, or reason about it, or come to deduce about it, or observe about it, or whatever. And so we have a very clear separation between ontology and epistemology.

[00:10:10] **Matt Geleta:** mm,

[00:10:10] **Joel David Hamkins:** Whereas in intuitionistic logic (I mean historically; the more contemporary mathematicians working with constructive logic don't have this view), the view was, in a way, to mix up what's true with our way of coming to know what's true, by

[00:10:33] **Matt Geleta:** mm

[00:10:34] **Joel David Hamkins:** replacing truth conditions with what amount to assertability conditions. So in intuitionistic logic, for example, we only assert p or q if we're also prepared to say which one.

And

[00:10:50] **Matt Geleta:** hmm,

[00:10:50] **Joel David Hamkins:** Whereas in classical logic, of course, we can have a disjunction, p or q, and positively assert it even when we don't know which one is true. Maybe we're going to argue: look, it has to be one or the other, because if both of them are false, then we get a contradiction or

something

[00:11:06] **Matt Geleta:** mm hmm.

[00:11:07] **Joel David Hamkins:** and whereas in intuitionistic logic, the sort of assertability criterion would be that you assert the disjunction only when you're also prepared to assert one of the disjuncts. And there's this way of viewing that as mixing up the concept of truth with the concept of knowledge, or our way of coming to know the truth. And as I said, the same is true for the other logical connectives in intuitionistic logic. But there's also this sort of independent contemporary program of using constructive mathematics and topos theory and so on. In that situation, it's not so much mixing up epistemology and so on.
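The assertability condition described here is exactly how constructive proof assistants behave. A minimal illustration in Lean (my own example, not one from the episode):

```lean
-- In a constructive system, proving a disjunction means supplying a
-- specific disjunct: `Or.inl` commits to the left one.
example : 2 + 2 = 4 ∨ 2 + 2 = 5 := Or.inl rfl

-- Classical reasoning, such as asserting `p ∨ ¬p` with no witness,
-- requires invoking the excluded-middle axiom explicitly:
example (p : Prop) : p ∨ ¬p := Classical.em p
```

The first example would not typecheck if we merely argued that one of the two equations must hold; we have to say which.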

It's rather just that the nature of truth in these non-classical mathematical realms obeys intuitionistic logic. And so they're calculating with that logic because that's the nature of truth exhibited by those mathematical structures. So it's not burdened by the kind of philosophical objection that I was just making.

[00:12:15] **Matt Geleta:** It does, it does lead very naturally then to the question of how we do know, um, that something is, is true or not true. Um, and on the one hand, um, you know, you hope that at times those two different ideas, the ontology of something and, the epistemology coincide. You know, our intuitions are strong enough and clear enough that we can trust them.

And we can trust them to indicate something that's true out there in the real world. But another approach, or another methodology, or I don't even know what to call it, in maths is proof. The concept of proof and the process of proof. Maybe taking a step back and just looking at that concept sort of zoomed out:

How do you think about the notion of proof, or how should people think about the notion of proof in mathematics?

[00:13:08] **Joel David Hamkins:** Right. It's actually a fascinating puzzle, I mean, problem to think about. What does it mean to prove something? If you ask mathematicians what a proof is, I mean, mathematicians who haven't studied, say, logic very much, but are, you know, expert in mathematics, then they're often hard-pressed to say exactly what it means to prove something, I mean,

to,

[00:13:32] **Matt Geleta:** Mm.

[00:13:33] **Joel David Hamkins:** to give you a definite account, you know, of what they mean. It's one of these things you recognize in practice, and what it's going to boil down to in the end is something like a very convincing argument that makes clear what the logical steps of reasoning are, or something like this. For example, I wrote a book called Proof and the Art of Mathematics, an introductory book teaching undergraduate students how to write proofs. It's not a logic book, and I didn't give a formal definition of proof. I said a proof is a clear and convincing argument that logically demonstrates that the conclusion follows from the premises. It's an informal definition, but it's workable in practice; when mathematicians are writing proofs, that's what they mean.

But in the subject of mathematical logic, we have a concept of formal proof, which is different from proof as it's used by mathematicians, because most mathematicians, when they prove a theorem, are not giving a formal proof. They're giving a convincing argument that demonstrates that the conclusion is, you know, a logical consequence of the premises, but

it's

[00:14:45] **Matt Geleta:** Mm.

[00:14:46] **Joel David Hamkins:** not a formal proof in the sense of mathematical logic. For a formal proof, one has to set up a kind of context. There's a formal language of mathematics in which these proofs are taking place, and so we have a definition of what counts as a formal language of mathematics: there are certain kinds of relation symbols and variable symbols and logical connectives and so on. We can talk about formal expressions in this formal language, and then a proof is a certain arrangement of those formal expressions that constitutes a proof. And there's a huge variety

[00:15:26] **Matt Geleta:** Mm.

[00:15:27] **Joel David Hamkins:** of proof systems, but in some of them we might have some axioms, some logical axioms, or then there's the axioms that we're going to be reasoning from. In the proof, and then maybe there's some deduction rules that tell you, for example, one of the most common deduction rules is called modus ponens. And this is the rule that says, if you have a statement P, and you have an implication P implies Q, then on the basis of those two statements, you can deduce Q. So this is a classical rule. deduction rule, goes back to Aristotle and so on, um, and it's usually part of most of the proof systems. And, and so a formal proof is a kind of arrangement of statements in the formal language that either are using the, the, the formal axioms that were allowed or at each step they're using. the formal inference rules that were explicitly stated to be allowed and such that at the bottom, at the end of the proof is the statement, the theorem that's being proved. So we have this concept of formal proof

[00:16:28] **Matt Geleta:** Mm

[00:16:29] **Joel David Hamkins:** And the way I think about formal proof is something like the way I think about Turing machines, as a theoretical model.

[00:16:40] **Matt Geleta:** hm. Mm hm. Mm hm.

[00:16:42] **Joel David Hamkins:** I mean, Alan Turing designed the theoretical model of computation called Turing machines in 1936. He abstracted away from what a human being does when sitting at a desk undertaking a computational process, putting marks on a piece of paper and so on, and was led to this concept of a Turing machine, which ultimately is a kind of machine with a paper tape. It can put marks on that tape, it moves in very specific ways back and forth over the tape and looks at what it had previously written, it's in one of finitely many states, and so on, and it provides a model of computability. And when you first learn about Turing machines, maybe you think, well, this is a totally primitive machine

[00:17:30] **Matt Geleta:** hm.

[00:17:31] **Joel David Hamkins:** that could probably hardly do anything at all.

It shouldn't be very useful. But remarkably, it turns out, and this is part of what Turing had done, he proved that Turing machines are amazingly powerful. In principle, the kinds of computations that these very simple machines can undertake include simulations of essentially any kind of computational process that you can imagine using some other, more powerful, say, computer language.

You know, the operation of Python programs or C or whatever, all of this can ultimately be simulated by these Turing machines. Okay, so why am I talking about Turing machines now? Well, the reason is that nobody uses Turing machines for actual computation. We study Turing machines as a theoretical model of computation in order to come to a deeper understanding of the nature of computation. And this has led to a huge number of insights and conceptual foundations of the subject, in terms of the P versus NP problem, computability, the halting problem, the complexity hierarchy, and so on. All of these are based ultimately on the kind of theoretical framework that that model of computability provides. So we're not using Turing machines to compute, but we're using them to understand the nature of computation. And that's how I think about formal proofs. Not everyone thinks about formal proofs this way, but this is how I like to think about them. We don't use formal proofs to prove things. We use informal proofs to prove things. The informal proofs are the arguments that mathematicians use to understand mathematical ideas and to communicate with one another; we use formal proofs to understand the nature of proof, the concepts of logical independence and consistency and so on.

These are the ideas that flow out of having a formal concept of proof and that provide a kind of framework of our understanding of the nature of proof.
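The tape-and-states picture described above is easy to simulate. A minimal sketch (the "add one in unary" machine and its encoding are my own illustrative assumptions, not anything from the conversation):

```python
# A minimal Turing machine simulator: a tape of symbols, a head, finitely
# many states, and a transition table mapping
# (state, symbol) -> (symbol to write, move L/R, next state).

def run(transitions, tape, state="start", steps=10_000):
    tape = dict(enumerate(tape))   # sparse tape; blank cells read as "_"
    head = 0
    for _ in range(steps):
        if state == "halt":
            break
        symbol = tape.get(head, "_")
        write, move, state = transitions[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape)).strip("_")

# Toy machine: move right past a block of 1s, write a 1 on the first
# blank cell, and halt -- "add one" in unary notation.
add_one = {
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("1", "R", "halt"),
}
print(run(add_one, "111"))  # 1111
```

Despite how primitive the model looks, this is the machine class Turing proved can simulate any computational process expressible in a more powerful language.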

[00:19:39] **Matt Geleta:** No, absolutely. It, um, it, it reminds me of, it's actually sort of, it's somewhat unrelated example, but I think it illustrates one of the points, at least of the utility of, you know, spending so much time studying a formal system and sort of abstracting insights up to some higher level system that we use in everyday.

So for example, in natural language, in speaking, our intuitions sort of coincide with what you said earlier: if P implies Q, and P, then Q. I think that's a very natural thing to understand in everyday life, and yet we include it in a formal system. The example that came to mind was actually a fairly silly one, but I think it's called something like the wobbly table theorem. It's this idea: you sometimes see someone sitting in a restaurant, and the table is wobbly, and you've got a flat surface, and they're sort of trying to gerrymander this table around to try and get it to be static.

But,

if you rotate the table, you know, there will be a point at which you have all the legs touching the ground at the same time.

And it's again this idea that, at a very deep mathematical level, you can prove something about surfaces and points, and it abstracts to something very useful in the real world.

And again, the concept of a Turing machine, which I mentioned, has very much this quality. It's an idea of computation that would be completely impractical to instantiate on an actual physical computer; no one would use it. Yet the things that you can prove about the Turing machine apply to computation that happens in other contexts.

What I always found very groundbreaking, or at least intuition-bending, about the concept of a Turing machine, and some of the implications that come from it, are the incompleteness theorems, and, you know, the relationship between what you can prove about whether or not a program can halt on a certain problem,

and what this means about being able to verify statements in mathematics. Could you set the picture here a little bit? How does one go from looking at the idea of a Turing machine to what many people consider very profound conclusions about provability in mathematics and in proof theory?

[00:22:26] **Joel David Hamkins:** Right. So you mentioned the incompleteness theorem, but the question that you just asked is maybe more connected with what's called the completeness theorem. So Gödel, it's sort of funny, Gödel proved the Completeness Theorem, and he also proved the Incompleteness Theorem, but they don't contradict each other, of course. The Completeness Theorem says the following. Suppose you have a theory, sort of a set of statements in a formal language, that's what a theory is, and it has a certain entailment: it implies a certain statement, and I mean this not because there's a proof, but rather because it implies it logically, in the sense that in any mathematical structure in which the theory is true, the statement also is true. For example, maybe we have a theory of certain kinds of orders, and we make a statement in the language of orders, like that the order is dense, or something like that. Then, if every order satisfying the axioms satisfies the conclusion, that's what I mean by saying that the statement is a logical consequence of that theory. And the completeness theorem says: whenever a statement is a logical consequence of a theory, that's equivalent to there being a proof of that statement in this formal sense of proof. And it is just amazing that this could be true, because it means, for example, if a statement in the language of groups, a certain kind of mathematical structure called a group, is true in all groups, then there's a proof of it, a finite proof of it in this formal sense, from the group axioms. And it is just astounding that that could be true, because the statement about it being true in all groups is referring to this vast realm of different mathematical structures, including uncountable groups of enormous cardinality.
And the mere fact that the statement happens to be true in all of those different groups means that there's a proof, a finite, combinatorial, formal object. The way I think about it is: okay, certainly if there is a proof, then, because the proof system involves only sound reasoning steps, the statement should be true in all the groups. That direction seems totally clear. If we have a proof, then it's going to be true in all the models of the theory, whatever the theory is. It's the other direction that's profound: if it happens to be true in all the mathematical structures in which that theory holds, then there's a proof, a reason. It's saying, basically, that everything that's true is true for a reason. The reason is the proof. And I just find this remarkable. It's what's building the connection, traversing from this sort of finitary land of proofs and formal statements and symbols arranged in a certain way, to the semantic land of models and what's true in them, including these enormous uncountable models of the theory. The fact that there's this equivalence, I just find it amazing.
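In standard notation, the equivalence being described reads as follows (a textbook restatement for reference, not a formula from the episode):

```latex
% Gödel's completeness theorem (1929): a statement \varphi is true in
% every model of a theory T (semantic consequence, \models) exactly
% when T formally proves \varphi (derivability, \vdash).
T \models \varphi \quad \Longleftrightarrow \quad T \vdash \varphi
```

The left side quantifies over all models of $T$, including uncountable ones; the right side is a finite combinatorial object.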

[00:25:53] **Matt Geleta:** It's, it's, uh, just,

[00:25:55] **Joel David Hamkins:** I'll go ahead.

[00:25:56] **Matt Geleta:** it's just, it's, it's interesting that, um, you know, I think if you speak to many mathematicians or people who are sort of familiar with mathematics, but have not studied it in the depth that you have, I, my, my experience is that the intuition falls the other direction. Uh, they find it, um, They expect that true statements will be provable.

And that seems to be a deeply held intuition, certainly in early years when people are studying mathematics. So I didn't mean to interrupt you there, but I would love to understand how your intuition on that point differs so much from at least my experience of speaking to people about this issue.

[00:26:35] **Joel David Hamkins:** I think, okay, there's a difference when you're talking about true statements being provable, and it's this: the completeness theorem is about theories that define a class of models, namely all the models of that theory, say all partial orders, or all groups, or all lattices, or whatever kind of mathematical structure you're talking about. You can often write down a theory, axiomatize it, and you're defining the class of models of that theory. And the completeness theorem is about this kind of truth, namely, being true in all the models of the theory is equivalent to being provable. That's what the completeness theorem says. But oftentimes, mathematicians are not working with this sort of class of all models of a theory. Rather, they're working with a particular structure, like the integers, or the real numbers, or the complex field, or something like this. And those structures are not, in general, characterized by a first-order formal theory. They cannot be, because of what's called the Löwenheim-Skolem theorem, one of the fundamental results in logic, which shows that if a first-order theory has an infinite model, then it has a lot of different models that aren't isomorphic to each other. So when you're talking about a particular infinite structure, you can never uniquely characterize it by a first-order theory. You can give categoricity results in second-order logic, and, for example, the integers and the complex numbers and the real numbers all enjoy categorical characterizations in second-order logic. The problem is, we don't have a sound and complete proof system in second-order logic. So there's no analogue of the completeness theorem in second-order logic.
And so the issue is that if you're thinking about, say, arithmetic truth, and you want every arithmetic statement that's true to have a proof from some fixed theory,

it can't be a first-order theory, because then you're only looking at one model instead of the whole class of models of that theory, which is going to be enormous. And if you're looking at second-order logic, then you don't have a formal proof system that's sound and complete, and the completeness theorem breaks down. I don't know if that's clarifying or not, but it's sort of how I think about that difference.
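The theorem invoked here is standardly stated as follows (a textbook formulation, for reference):

```latex
% Löwenheim-Skolem: a first-order theory T in a language L that has an
% infinite model has models of every infinite cardinality
% \kappa \ge |L| + \aleph_0, so no single infinite structure is pinned
% down, up to isomorphism, by its first-order theory.
T \text{ has an infinite model} \;\Longrightarrow\;
\forall \kappa \ge |L| + \aleph_0, \ T \text{ has a model of cardinality } \kappa
```

This is why a first-order theory of "the" natural numbers inevitably also describes non-standard models alongside the intended one.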

[00:29:03] **Matt Geleta:** Yeah, no, it, uh, it is, it is, and, um, I feel like you were on the cusp of something profound when I rudely interrupted, so,

um, let's, uh,

so let's get back to that stream of thought.

[00:29:18] **Joel David Hamkins:** So maybe we should talk about the incompleteness theorem, which is the assertion, right, that there are true but unprovable statements, say, in arithmetic. And what does that mean? Because I just said that every true statement is provable, right? That's the completeness theorem, and the incompleteness theorem says that there are true statements that aren't provable. But it's exactly this difference, because in the context of the incompleteness theorem, when we say there's a true statement, we mean it's true in a particular model, say, the standard model of arithmetic, which has the natural numbers, with zero and one as constants, and addition and multiplication and the order. This is the language of arithmetic that's used for the Peano axioms. Maybe it's natural to go back to the beginning of the 20th century, when the answers to many of these questions weren't yet known. At the end of the 19th century, Peano, based on Dedekind's work, had presented this beautiful theory of arithmetic, what's now called the Peano theory of arithmetic. And he showed us how, on the basis of very few principles, but including especially the induction principle, one can prove essentially all of the standard classical theory of arithmetic. You can prove the infinitude of primes and the fundamental theorem of arithmetic; you can develop the whole theory of elementary number theory on the basis of the Peano arithmetic axioms. And so it seems quite natural to wonder, well, maybe those axioms are complete. Maybe those axioms settle every question in arithmetic. It could be.
We can write down the list of axioms. You know, they're all totally ordinary axioms about the nature of addition and multiplication and how they interact and so on, plus the induction principle, which says: if you have a statement, and it's true at zero, and whenever it's true at a number it's also true at the successor, then it should be true for all the numbers, right?
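In symbols, this induction principle is the usual first-order schema of Peano arithmetic, with one instance for each formula (standard notation, added for reference):

```latex
% Induction schema: for every formula \varphi(n) in the language of
% arithmetic, if \varphi holds at 0 and is preserved by successor,
% then \varphi holds of every natural number.
\bigl( \varphi(0) \;\land\; \forall n \, \bigl( \varphi(n) \rightarrow \varphi(n+1) \bigr) \bigr)
\;\rightarrow\; \forall n \, \varphi(n)
```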

That's sort of the common induction principle. And one might wonder, well, maybe this theory is complete, right? But it follows from Gödel's theorem that it isn't. Gödel's theorem is exactly the claim that there can be no such theory, that is, no theory whose axioms we can write down which settles all the statements of arithmetic. There can't be any such theory, so in particular the Peano theory isn't such a theory. And one way of proving that, sort of my favorite elementary proof of the incompleteness theorem, I can give you right here. Oftentimes when I teach mathematical logic, I like to give five or six different proofs of the incompleteness theorem, but the first one I always give is based on Turing's halting problem.

Namely, you first talk about Turing machines and computability, as we just were, and you prove that the halting problem is not computably decidable. So this is the question: given a Turing machine program and an input for that program, will it halt or not? That's a kind of family of infinitely many different questions that we could ask. And to say that it's not decidable is to say that there's no program that will correctly give you the answers to all those questions. So there's no computable procedure you can use that will tell you, yes or no, in all cases, whether or not a given program will halt on a given input. And it's not difficult to prove that theorem, because suppose toward contradiction that there were such a procedure. Then you design a certain program that asks, about another program, whether it would halt on itself, given itself as input, yeah? It's kind of a weird thing to use a program as input to a program, but it's understandable, because the program is just this funny sequence of instructions.

And so we can think of using that program as input to another program. So we make a program that would check whether a given program would halt on itself or not. And if the halting problem were decidable, then we could answer that question. Then what we do is we make a program which, given an input, asks: does that program halt on itself? And if it does, then our program should do the opposite. So we're either going to go into an infinite loop, or we're going to halt immediately, in exactly the opposite way to the answer to that question. Okay, so now we've made a program that does that, and then we feed that program to itself. And the point now is that that program would halt on itself if and only if it does the opposite of that, because that's how the program is designed.
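The diagonal construction described here can be mimicked in Python. This is an illustrative sketch only, with invented names: a genuine `halts` decider cannot exist, so we hand the construction a deliberately naive candidate decider and watch it mispredict the very program built from it.

```python
def make_diagonal(claimed_halts):
    """Given a purported halting decider claimed_halts(prog, inp) -> bool,
    build the program that defeats it, following Turing's construction."""
    def diagonal(prog):
        if claimed_halts(prog, prog):
            # The decider predicts prog halts on itself, so do the opposite: loop.
            while True:
                pass
        # The decider predicts prog loops on itself, so halt immediately.
        return "halted"
    return diagonal

# A (necessarily wrong) candidate decider: it claims every program loops forever.
def naive_decider(prog, inp):
    return False

diagonal = make_diagonal(naive_decider)

# Feed the diagonal program to itself, as in the proof.
prediction = naive_decider(diagonal, diagonal)  # the decider says: it loops
actual = diagonal(diagonal)                     # but it halts immediately
print(prediction, actual)  # prints: False halted
```

Whatever candidate decider you supply, it must mispredict `diagonal`'s behavior on itself: if it says "halts", `diagonal` loops; if it says "loops", `diagonal` halts. Since every candidate fails on some input, no halting decider can exist.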

And that's a contradiction. So there can't be any such program, and therefore the halting problem can't be decidable. Okay, so that's basically Turing's 1936 proof of the undecidability of the halting problem. But now let's come to the Gödel theorem. Suppose that Peano arithmetic were complete. Now, look, what we can do is design a procedure that's going to look for proofs that flow from that theory. And we can design a procedure that will systematically try out all possible proofs successively. So it's going to be basically enumerating all the possible theorems of those axioms, all of them, all and only the theorems. I think about it as a box with a crank on it. We're going to turn this crank, and it's going to spit out more and more theorems of Peano arithmetic, and it's going to spit out all the theorems, all and only the theorems. So now, if the theory were complete, then given any program and input, I can formulate the assertion that the program halts on that input, and I can turn the crank and see if that statement ever shows up, or if the negation of that statement ever shows up. And if the theory were complete, one of them would have to show up. And when it does, I could answer the halting problem. That would be a computable solution of the halting problem, and that's a contradiction, because we already argued that there isn't any such thing, so therefore the theory cannot be complete. So this is a kind of reduction of the incompleteness theorem to the halting problem, right? Basically, there can't be a complete theory of arithmetic with a computable set of axioms, because if there were, then we could use it to solve the halting problem, but that's impossible, so that would be a contradiction. I don't know if that was clear.

[00:36:20] **Matt Geleta:** It absolutely is. And again, for listeners who want to dig deeper, I think you do this very nicely, well, both in your book and in your set of lectures online, which you very kindly put up on YouTube, and which I've watched and enjoyed very much. What strikes me as very curious about this whole thing is that the proof that you just gave is actually not that complex once you understand the concept of a Turing machine and the proof of the undecidability of the halting problem.

It follows quite naturally, this idea of incompleteness. Yet if you look historically at the context, when this was first understood broadly, it was seen as very profound. And I think even today many people think of it as, in some sense, a problem, not just a fact of mathematics, which, you know, it is a fact of mathematics.

Everything that's true in mathematics is a fact of mathematics, but many people perceive this to be a problem. You know, they have an emotional valence associated with this thing. Does it make sense to refer to or think of incompleteness as a problem, rather than just a fact like any other?

[00:37:37] **Joel David Hamkins:** Oh, I see. Oh, that's very interesting. I mean, I don't view it as a problem. It's a theorem. It's a fundamental fact of the nature of mathematical reality, is how I think about it. And one is advised to take it on board, because we've established its truth. It's a fundamental feature. I remember, my doctoral supervisor was Hugh Woodin, who was at Berkeley at the time, but is now at Harvard, and I admire him very much. One of the things that I admire most about him was his ability to take on board new results immediately and then start using them. He incorporated the new things into his thinking. And I observed this many times, you know, interacting with him in our weekly meetings and so on. I would bring in a proof of something that he hadn't known about, and then immediately he was doing x, y, z and making further steps in a way that I, as a young mathematician at the time, wasn't up to speed with.

But what I learned was how important it is to take on board this sort of new knowledge and then proceed further from it. And that's how I think about it: if one just stays in the state of looking at it as a kind of profound mystery, rather than taking the further steps that flow from the knowledge that the incompleteness theorem is a fundamental phenomenon, then I think you would be missing out.

So I guess I would reject the proposal that you just made and rather take it fully on board. The incompleteness theorem is a fundamental feature of mathematical reality. We just can't have a computable axiomatization of even arithmetic truth, and all kinds of things flow from that. It means, for example... I mean, the second incompleteness theorem is a kind of refinement of the first incompleteness theorem.

It says that no consistent computably axiomatizable theory can prove its own consistency. One of the statements that you're not going to be able to prove is the statement that the theory itself is consistent. And that leads immediately, if you follow the process that I'm describing, to the consistency hierarchy.

Namely, you have a theory that you like; maybe it's Peano arithmetic. That theory is not going to prove its own consistency, which is presumably something that you believe is true, because you like the theory, so you should probably think the theory is consistent, too. So therefore we can add the consistency assertion to the theory.

That's going to give us a stronger theory. But that theory doesn't prove its own consistency, so we add the consistency assertion of that one as well. That makes a third theory, a stronger one. So when we keep adding the consistency assertion, we get stronger and stronger theories. We get this hierarchy of theories. Oh, you might say, well, that's it, we made the consistency hierarchy, and we would be done at that stage. But that's not true, because if I look at the resulting infinite extension, adding all of those consistency assertions at every finite stage, that's a perfectly good theory also, which is computably axiomatizable. And it is also incomplete and doesn't prove its own consistency, and so we go to step omega, omega plus one, omega plus two, into the ordinals. We get this enormously tall hierarchy, this consistency tower. And so Gödel's theorem is saying, look, whatever your theory is, there's going to be this tower of consistency theories that are stronger than it, towering over it. And then we can look at other parts of mathematics, and we can see that sometimes we have these sorts of towers of theories. And we can even prove, for example in the large cardinal hierarchy, that certain parts of this hierarchy instantiate this increase in consistency strength, so that higher theories in the tower prove the consistency of lower levels of the tower, and so on, in a very natural way.
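The tower described here can be annotated symbolically, writing $\mathrm{Con}(T)$ for the arithmetic sentence asserting the consistency of a theory $T$:

```latex
\begin{aligned}
T_0 &= \mathrm{PA}, \\
T_{n+1} &= T_n + \mathrm{Con}(T_n), \\
T_\omega &= \bigcup_{n<\omega} T_n, \qquad
T_{\omega+1} = T_\omega + \mathrm{Con}(T_\omega), \;\ldots
\end{aligned}
```

Each stage is computably axiomatizable, so the second incompleteness theorem applies to it again, and the tower continues through the ordinals.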

So these are naturally arising towers of theories that are getting stronger and stronger, and they instantiate this phenomenon that's totally predicted by the second incompleteness theorem and the tower of consistency strength.

[00:42:07] **Matt Geleta:** One of the things you said there was interesting: this idea of creating a hierarchy, of basically adding things into our mathematical formalism in order to prove consistency at lower levels. And in some sense, I think for many people, this would feel almost like plugging holes, or resorting to things outside of what's absolutely necessary.

You know, beyond what the mathematical system absolutely requires. And kind of adding fragility, I guess, to the system: you continue to build something more complex, build on top, build on top. And I think many people do worry that in the grand edifice of mathematics we now have a completely unwieldy, very large, complex system on which many things that we think are true depend.

And there's kind of no one around who can have the full picture in mind and have full confidence that there aren't these holes floating around. Is this something that concerns you at all? That we've grown mathematics into something that's very, very large, beyond the comprehension of any one individual.

And, you know, we're running at full speed down many paths. But down at the bottom, low down the hierarchy, there might be things that we're missing. There might be things that we have wrong. Is that something that concerns you at all?

[00:43:41] **Joel David Hamkins:** I guess it does. I mean, there are sort of two aspects I'm going to pull apart in your question. I don't really view this building of the hierarchy as plugging holes in the way that you describe, but rather it's opening us up to realize that there's this sort of new realm awaiting us to explore. I mean, the fact of the matter is that even a century ago, as I said, almost all of elementary number theory was provable, you know, at the bottom level. We could already prove so much in that base theory that we thought it might be complete, right?

This was the question that the incompleteness theorem is answering, right? And the discovery that it's not complete means that there are these statements that are true in the natural number structure but not provable on the basis of those axioms. And these statements are very hard to come by, but when you find them, they're fascinating. And not just the consistency statements, but other instances of independence. When you can say that a statement is definitely independent, this is a kind of fascinating situation which greatly enlarges your mathematical understanding. So it's not at all plugging a hole; rather, it's revealing this higher realm that you would have totally missed if you hadn't been undertaking this process. And so that's the sense in which I'm not at all concerned about it. I mean, for the set-theoretic realm, as opposed to arithmetic, we have the Zermelo-Fraenkel axioms of set theory, which were written down, you know, a century ago or so. And we've now observed that an enormous variety of statements are known to be not settleable on the basis of the standard axioms of set theory. For example, the continuum hypothesis and the axiom of choice cannot be settled from the other axioms, and [unclear] hypothesis, and the existence of large cardinals, and Suslin's hypothesis, and so on.

There's an enormous variety, hundreds, thousands of different mathematical statements that are independent of ZFC, and the expectation is that basically any statement in infinite combinatorics is either trivial or else independent of ZFC. That's the kind of experience that we have. So many statements are not settleable in the axioms of set theory that way. But there was another aspect to your question, about whether one person can be master of all of mathematics. Mathematics is just too vast today for a single person really to survey the whole subject. It's just impossible.

It's too big. I mean, you can't even know all of logic, or all of algebra, or something like this. The subjects individually are also so vast, and mathematics has become so specialized. But I don't view it as a problem. That's going to be the nature of any extremely successful intellectual endeavor, right?

Once the amount of knowledge in the subject is so enormous, of course it's going to have areas that are more specialized than others, and so on. And it just won't be possible for a person to be expert in all those different specialized areas. So it's not particularly concerning. It's just the nature, I think, of any intellectual activity that's extremely successful and has produced so much knowledge that there's just too much of it for one person to be the master of.

[00:47:44] **Matt Geleta:** Yeah, yeah. You mentioned a couple of minutes ago several axioms, you know, the continuum hypothesis, and then you mentioned the axiom of choice. There's a famous example that I think is well known amongst physicists, and mathematicians would know it as well, in geometry, where, I can't remember which of the five axioms of geometry it was, I think it was probably the fifth one.

Exactly. Exactly. So, you know, five axioms of geometry that all make a lot of intuitive sense, and we built a whole edifice of geometry off the back of them, with the fifth one being some statement to the effect of: you have a line, and a point off that line; there is a unique line parallel to the first that passes through that point. And it feels very intuitive, but people worried about this for a long time. And nonetheless, whole theories of geometry were built off of this, and they had implications in physics and elsewhere.

And at some point this axiom was relaxed, and new types of geometry emerged, and these turned out to be very important, you know, for our understanding of spacetime, for example. But that's one example of this idea where, because of how we've evolved and the world we live in, we develop intuitions for axioms, as to what should be true in something that's a useful piece of mathematics.

And it turns out that we could in some sense be mistaken. And the axiom of choice, for me, stood out as one potential example of that, because it is used in so many other parts of mathematics. But that's the type of thing I was referring to when I was worrying that we were building grand edifices on things that at the end of the day rely on intuition.

You know, at the end of the day, when push comes to shove, at bottom there is an intuitive choice being made. Does that concern you?

[00:50:02] **Joel David Hamkins:** Well, I mean, of course, I think you've convinced me, I guess. So, there's a debate in

[00:50:06] **Matt Geleta:** Ha ha ha.

[00:50:06] **Joel David Hamkins:** the philosophy of set theory about the nature of the various axioms of mathematics, and in particular the axioms of set theory. One of the common distinctions that's made is the distinction between what's called intrinsic justification for an axiom versus extrinsic justification for an axiom.

And the idea is that an axiom enjoys intrinsic support, or intrinsic justification, if the axiom is expressing a fundamental idea that we can see, to use the sort of intuitive language that you were just mentioning. I think it's most tightly connected with that idea.

When it's part of the nature of the concepts that we're talking about that this principle should be true. So, for example, in set theory, the axiom of extensionality is the assertion that two sets are equal if and only if they have the same members. And this is expressing a core idea about what we mean by sets.

I mean, what we mean by a set is that it's a collection of objects. And so if you have a set X and another set Y, and they have the same members, then they're the same set. That's sort of what we mean by sets. And the axiom of extensionality is expressing that idea in a quite clear way, and therefore it enjoys intrinsic support. But other axioms enjoy what's called extrinsic support; maybe it's sort of consequentialist support. On the basis of the axiom, we can prove a lot of things that, say, generalize known things. And so we might say, well, look, that's a reason to believe in the axiom. It's almost a scientific way of proceeding, right? You're saying, look, this axiom implies all these things that we like very much and that we know are true in many, many instances. And so it's a kind of consequentialism: we judge the truth of the original axiom on the basis of the fact that it's correctly making these predictions, sort of like experimental evidence a little bit. And it's a fundamentally different character, a fundamentally different kind of reason to accept an axiom, if it has this extrinsic support only. It's sort of like: you think it's probably true, but you don't really know why. But it has all these consequences that you want to keep, and it's a unifying way of organizing those consequences.
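In symbols, the extensionality axiom mentioned above reads:

```latex
\forall x\,\forall y\,\bigl(\forall z\,(z \in x \leftrightarrow z \in y) \rightarrow x = y\bigr)
```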

And so maybe that's reason to believe that it's true. Sometimes people talk about the axiom of choice as having this extrinsic support. It's very useful in mathematics; it's used all over the subject, as you mentioned, and we find all these consequences of it that we like and that give us reason to believe in it. But my view is that actually the principal reason and principal justification for the axiom of choice is one that's intrinsic. We have this idea of a sort of arbitrary collection of objects. The axiom of choice says that if you have a family of such sets, then there's a way of choosing an element, one from each member of the family. And so if we have, say, a family of disjoint sets, then there should be a set whose intersection with each of those sets has exactly one element; that's the choice-set formulation. And the intrinsic way of thinking about this is that, well, of course there should be such a set, because I don't care how the choices are made.
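The choice-set formulation just described can be annotated as follows: for any family $F$ of pairwise disjoint nonempty sets,

```latex
\exists C\;\forall a \in F\;\; |C \cap a| = 1
```

that is, some set $C$ meets each member of the family in exactly one element.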

And my conception of what sets there are is that sets come in all possible ways, regardless of whether they follow a definition or a procedure, or whether they're constructive in some way. All of the sets, whether they're constructive or not, are part of the set-theoretic realm.

And so one of those sets is going to be one that makes such choices. And that's why one could believe the axiom of choice on intrinsic grounds. And so it's a kind of debate, though, about: do we believe in axioms? Why should we believe in axioms? What are the grounds for accepting one axiom rather than another?

And furthermore, what does it really mean to accept an axiom? Does it mean you can never reason from a different, incompatible axiom again? What if you want the axiom of choice on Monday, but then on Tuesday you want to look at, you know, consequences of the failure of the axiom of choice? Then it becomes incoherent if you insist on still keeping the axiom of choice on the Tuesday, right? So what does it mean to adopt an axiom? Does it mean that forever you have to use those axioms and only those ones? Well, that seems absurd. It seems like we can reason in different theories at different times. And maybe it's not so urgent to have a final list, the official list of axioms that we're going to use. Rather, we just have a lot of different theories, and sometimes we use some of them and sometimes we use the others, depending on the nature of our argument, or whether we need to use the axiom in our argument, or what we feel like doing. Why not? So it seems like maybe there isn't so much urgency in settling what the final official list of axioms should be, and we can be more open-minded about the kinds of mathematical theories we're willing to undertake investigations in.

[00:56:03] **Matt Geleta:** I do question that dichotomy between intrinsic versus extrinsic to some extent, and whether it can be decoupled from basically the human mind, the way we think about things. So, for example, take a statement that almost everyone would have the strongest gut belief is true.

You know, 1 plus 1 equals 2. Everyone understands that to be true. And I understand that to actually formally prove that is quite some work. But it's a statement that people take to be true. But if I took two 60-digit numbers and multiplied them together and displayed the answer, that would be true in just the same way.

So, independently of humans, that answer would be true in the same way. But almost nobody would have anything like the same level of intuition for that statement being true. And so I do wonder about the truth value of... I mean, we want to choose axioms that are useful.

And it's basically our intelligence that lets us intuit which are the right ones. And if we were vastly more intelligent, the two 60-digit numbers being multiplied together, that answer would be intuitive in the same way as 1 plus 1 equals 2 is intuitive to most people.

And so it does feel to me that those two concepts you mentioned are related. But the through line seems to be some sort of level of intelligence, some human understanding. Given a sufficiently smart person, things that would have this quality of being extrinsically true feel like they would fall into the intrinsic bucket.

What do you think is the relationship between the human mind and mathematical truth, or the selection of the axioms of mathematics?

[00:58:12] **Joel David Hamkins:** I might push back against, you know, what you said a little bit. Because it seems to me that this intrinsic/extrinsic distinction is kind of getting at this human way of thinking. I mean, in the dispute between intrinsic justification and extrinsic justification, there's the idea that, look, it's the intrinsically justified ones.

That's a better justification, a more satisfactory one. We can't always achieve it, because some of our axioms don't seem to be intrinsically justified, and they only have this extrinsic support, which is a lesser kind of support. But what it means to be intrinsically justified is that the truth of it is something that is intuitively the case.

And so it fits into your intuitive category. The way I think about it is that to provide intrinsic support for an axiom is to explain why it's part of our human understanding of the concept in a very direct way. That's what it means to have intrinsic support for the axiom. Whereas the extrinsic support happens, or we care about it, really, only when we're not able to achieve the intrinsic kind. Maybe there are mathematical statements where our intuition is failing us. We can't tell if it's true or not, but if it has extrinsic support, that's still evidence. You know, we have this conundrum: should we adopt this axiom anyway, even though we can't intuit whether it's part of the concept or not? And this is a very common thing when you're working with very strong theories in set theory or whatever. You have mathematical principles that express a clear idea, but you don't know if it's true or not. But you can fall back on this extrinsic support as a way of trying to answer the question whether that principle is true or not. And so I don't think it's actually so different from the sort of intuitive way that you were talking about. That's just the intrinsic support category. And it's a pity, it's this kind of regrettable fact, that there are many mathematical statements that we can't tell whether they're true or not. And so we're struggling to see, well, should we adopt them or their negation as an axiom? Do we think they're true or not? And so we're forced to find other means of deciding such questions. Extrinsic support is one way of talking about that. Ultimately the question is what to do if you have a statement and it's independent, in the way that the parallel postulate is independent. So that's what we're really talking about.
We have a very strong theory, say Zermelo-Fraenkel set theory, or strengthenings of it, and we have statements that are neither provable nor refutable, just like the parallel postulate is neither provable nor refutable from the other axioms of geometry. And one way of proving that a statement is independent is to exhibit models in which the statement is true and models in which the statement is false, while all the other axioms are true in both cases. And that exists in geometry. For example, we have Euclidean geometry, which satisfies all the geometry axioms, including the parallel postulate, and we have these various non-Euclidean geometries, like hyperbolic space or spherical geometry on the sphere.

And so on, in which all the axioms are true except for the parallel postulate, which is false in these non-Euclidean geometries. So we have these models, and they satisfy all the axioms except the one that's independent. In one case it's true, in another case it's false. The exact same thing happens in set theory.

For example, we can give models of set theory where all the Zermelo-Fraenkel axioms are true, including, say, the axiom of choice, but the continuum hypothesis is true in one and false in another, and therefore the continuum hypothesis is independent. So we can often prove these kinds of independence results.

The independence phenomenon is pervasive. And for many of these independent statements, we just don't know if they're true or not. Should we take them as true or not? And so we're grappling with the question. And it's a non-mathematical question, because the statement is independent; we can't prove it or refute it. So we're not going to answer it by proof. It has to be some other sort of philosophical justification or reason to adopt the statement, or its negation, or to study both, or neither,

or,

[01:02:47] **Matt Geleta:** Hmm?

[01:02:48] **Joel David Hamkins:** it's how it goes.

[01:02:50] **Matt Geleta:** Yeah, I guess at the bottom of it, in examples like that, an aspect of it is just: which paths do humans find interesting? What paths are we drawn to? Which areas of mathematics do we explore? Because, as you mentioned earlier, it is so vast, and in fact infinitely vast in some unimaginable way.

And so human mathematicians do have to make a choice, and it's not quite the same in the more practical sciences, where that choice is often driven by things out in the real world. You know, for a cancer researcher, the motivation for studying oncology can be justified in many ways.

The motivation for choosing a particular area of pure mathematics to study, something that's very abstract, far removed from the practicalities of life, is a more nuanced and complex thing. And an answer that you hear a lot of the time from people in these fields is, you know, the sense of beauty, the intricateness of it all.

It sort of feels like it's touching something deep within people. I have a question for you there. Firstly, whether you feel the same way, and whether that is what's drawn you to the questions that you look at and study. But I also have a follow-up question. The more general question is, you know, how do you think about

how mathematicians should be choosing the areas to investigate in such an abstract realm? By what criteria should we be deciding which questions to pursue and not, given that these are often very far removed from the practicalities of life, at least in the near term, at least in the foreseeable future?

[01:04:44] **Joel David Hamkins:** Right. So that's a very difficult question to answer. For my own part, let me just answer personally: I've always followed the practice of working on whatever mathematical questions I find interesting. And that's basically the only criterion. If I find it interesting, or if I'm curious about a mathematical phenomenon and want to understand it more deeply, or want to gain some insight into some mathematical context or something, then that's enough for me. I just work on it. And I try to adopt, and I recommend to all my students, including undergraduate students, this kind of idea of playing with one's ideas. And I think this is true in any realm, not just mathematics. I always say, look, just play around with your ideas. Maybe you learn a new concept; well, then you should play around with it. You should look at examples and tweak things a little bit and put them together and see what happens. Can you make observations or deduce consequences? I think it's so important, for making advances in these intellectual realms, to have people who are playing. Okay, but it also means, sort of personally, that I've often worked on some kind of non-standard or quirky topics that aren't part of the mainstream. It's sort of what I'm a little bit known for, not totally wacky things, but unusual topics. Let me just give an example. I spent a long time, a lot of work, studying infinite computation, the infinite-time Turing machines paper that I wrote with Andy Lewis. I had started this when I was a grad student; I was still a student and I was working on it then. And I had just got my Ph.D. and left, and I was thinking about whether to really get more deeply into it and develop the theory much more fully.
And I had talked to various colleagues, and some of them told me not to do it. I really thought a lot about it, and I decided, well, to hell with them, I think it's interesting, and I'm going to do it. And I did it, and now this paper is my most highly cited paper. It has hundreds of citations, and there have been dozens of master's theses and PhD dissertations written following up on this topic, and many dozens of papers, and conferences and so on. It's one of my most successful projects ever, and I'm really glad that I did it, because first of all, it's super interesting.

And I really learned a lot, including from other people who took the ideas further, and all of that was just fascinating. So if people tell you not to work on something because of some reason having to do with expectations or something, then my advice is to just ignore that totally and work on whatever you're interested in. There's another example. My son, when he was in, I don't remember, third grade or something like that, was learning about prime numbers in his school, and his teacher sent him this exercise. The question was: can you think of a number which is prime, whose digits add up to 10, and which has a 3 in the tens place? That was the question. And probably the teacher was thinking about two-digit numbers, because 37 is prime, its digits add to 10, it has a 3 in the tens place, and it's the only two-digit number like that. The conditions are even a little redundant, because if the digits add to 10 and it's a two-digit number with a 3 in the tens place, then the other digit has to be a 7, so it would have to be 37 anyway. But the question didn't say two digits, and so my son and I were at the cafe, and I said, well, what about three-digit numbers, and so on. We have, right, 433 is prime and its digits add up to 10, and also 631, and 1,531, and also 100,333. And so I went to a list of primes, up to a billion or whatever, and found more and more instances.

Of course, they have to have lots of zeros when they have many digits, in order for the digits to still add up to 10; most of the digits are going to have to be zero. And I realized, well, how many examples are there? I just didn't know, and I didn't even know what methods one would use to prove that there are infinitely many. So I asked this question on MathStackExchange: how can we come to know this? And pretty soon people were posting answers with 100-digit primes, 200-digit primes, with mostly zeros, but with digits adding up to 10 and a 3 in the tens place and so on. So there was this international, collaborative effort to produce more and more huge examples answering my son's teacher's question. Let's see, fourth grade, I guess. So that's an example of play. It's a silly example, but actually it got into what I view as quite sophisticated ideas about how one can come to analyze such number-theoretic assertions, and as far as I know, it's still an open question whether or not there are infinitely many examples of that phenomenon. I recommend play.
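The small cases of the puzzle are easy to enumerate by brute force. Here is a minimal sketch (not from the conversation; the helper names are my own) that searches for primes whose digits sum to 10 and whose tens digit is 3:

```python
# Brute-force search for the school puzzle: primes whose digits
# add up to 10 and which have a 3 in the tens place. This only
# enumerates small cases; whether there are infinitely many such
# primes is, per the conversation, still an open question.

def is_prime(n: int) -> bool:
    """Trial-division primality test, adequate for small n."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

def qualifies(n: int) -> bool:
    """Tens digit is 3, digit sum is 10, and n is prime."""
    return (n // 10) % 10 == 3 and sum(map(int, str(n))) == 10 and is_prime(n)

matches = [n for n in range(10, 200_000) if qualifies(n)]
print(matches[:4])  # → [37, 433, 631, 1531]
```

The search confirms the examples named in the conversation, including 100,333. As the digit count grows, almost all digits must be zero for the sum to stay at 10, so qualifying numbers thin out rapidly, which is part of why proving infinitude is hard.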

[01:10:55] **Matt Geleta:** Yeah, it's a beautiful example of play, of following what naturally draws us. And very often historically, if you look at the progress of mathematics, that has actually led to things that have been very powerful, of great practical use, whether in the sciences or more generally.

And I think that often, when one is asked about the value of studying these very abstract mathematical questions, one answer given is that we don't know what practical use might come out of them in the future. The other side of the coin is that it doesn't matter, and one does not need to apologize for what one is studying.

[01:11:45] **Joel David Hamkins:** One doesn't ask the composer, what's the practical use of your symphony? Or the artist: is your work of art going to help us make a better toaster? That's a kind of demand that we just don't put on artists or novelists or the other great thinkers. So why should it be required, in order for mathematicians to justify spending a lot of time working out their ideas, that it should lead to some practical thing? I just don't agree with that principle in the first place. And I think it's a kind of cultural achievement that we're coming to understand, in a very deep and profound way, these mathematical questions that have confused people for centuries. This is cultural advance, and it doesn't need these practical applications, as far as I'm concerned, in order to justify it. Now, it happens that mathematics is very useful and has many, many practical applications. But I'm going to be studying infinite chess, and infinite computation, and infinite other things, even if they don't have any applications, which probably those examples don't. Still, they're interesting, and there are open questions to be looked into, and they're fascinating, and I encourage anyone who's interested in those questions to join me and take a look, and let's figure it out together.

[01:13:25] **Matt Geleta:** Yeah, I fully agree. And certainly one does not need to justify these pursuits to others, but I do think it's important for one to think clearly about one's own motivations. One analogy that comes to mind: we are drawn to these questions, if you boil it down, because of the constitution of our minds, which did not evolve to do mathematics necessarily; it evolved for lots of other things. Take the toy example of letting a child loose in a grocery store. They would tend to naturally gravitate towards the candy aisle. They don't yet have the broader perspective of what they want to be doing with their lives; there are evolutionary impulses drawing them towards certain parts of the store, and so they're drawn towards the candy aisle.

I do sometimes wonder whether, in the pursuit of abstract questions, we're like children being drawn towards the candy aisle, drawn by interest and drawn by play, and maybe not putting enough emphasis on where that motivation is coming from within the mind. Fortunately, it seems to bear fruit and to lead to good places. But I do wonder: our minds did not evolve to do mathematics, so what really is it that we're being drawn to? Is it possible that we're fumbling around in the candy aisle of some mental space and missing the fruits and vegetables, missing something that could actually be more meaningful? Do you think about the questions you focus on in that sort of way at all?

[01:15:34] **Joel David Hamkins:** Yeah, I totally do. I work on a lot of different topics, and I wouldn't say any of them are frivolous, although the book I'm writing about infinite games is called Infinite Games: Frivolities of the Gods, because I'm going to be looking at infinite chess and infinite draughts and infinite hex, infinitary versions of all of our familiar games. But that title is kind of an excuse, because what's really going on, even in that frivolous-seeming topic, is a careful analysis of the nature of strategy and strategic reasoning in this infinitary realm. And I think one gains insight that is larger than the particular games being studied, unifying principles for our understanding of the nature of strategic reasoning in these infinitary realms.

And furthermore, there are connections to foundational issues when you get into the axiom of determinacy and so on, which has profound mathematical consequences and extremely high consistency strength. So it builds this connection with deeper philosophical questions in the foundations of mathematics. Even though it started off seeming possibly frivolous or weird, it's built into this subject, which has insightful consequences for fundamental questions on the nature of mathematical reality. And that happens again and again, so much that you can almost count on it: in logic, in mathematical logic in particular, questions are pretty likely to have these deep connections with something really fundamental. So it's easy to find justification on those general mathematical grounds for almost any of these questions in the subject. If you go back and look at writings from 150 years ago, people were hopelessly confused about the nature of truth and proof, and maybe they didn't even distinguish between those very carefully, even in the early twentieth century. It's shocking how much confusion there was in people's writings about proof and truth. But nowadays we're totally clear on this, in part because of the formal analysis of these notions that has arisen from mathematical logic and philosophical logic. It's added so much clarity and depth to our understanding of the foundational questions that we can't help but think there's been huge progress in those realms.

[01:18:57] **Matt Geleta:** And the other great benefit, I think, that you alluded to is that other people are also finding this very joyful and fun. I know you've got a very interesting online presence, both on Substack and in various books, which people are finding a lot of value in. Maybe, as we bring things towards a close, we can turn to what you're doing there. For the past year or so, or at least this year, you've been serializing a very interesting book on Substack. Why don't you tell us about that book and your endeavors there?

[01:19:32] **Joel David Hamkins:** It's the Book of Infinity. I started my Substack in January of this year, because I was teaching a new class called Infinity. It's a sort of undergraduate-level class for students here at Notre Dame; it can actually fulfill the second philosophy requirement. So I had a whole bunch of STEM majors, all different majors, mixed in in that class. And I decided to write the book I wanted to teach them from

[01:20:09] **Matt Geleta:** Heh heh heh.

[01:20:09] **Joel David Hamkins:** while the course was proceeding. And so I was always a bit ahead, posting the chapters on the Substack as they were completed, and everything went really well.

And so I was able to cover all of my favorite conundrums and paradoxes and puzzles and examples, including a lot of historical material, going back to Aristotle and Archimedes and so on, Zeno's paradox, a lot of Galileo, and then getting into more contemporary things as well, but also, of course, Cantor and Gödel and everybody. So it was really quite a lot of fun, the Book of Infinity. The name of the Substack is infinitelymore.xyz; if you go to infinitelymore.xyz, you can find all the books there. And I'm still putting new material into the Book of Infinity; there's going to be a chapter released soon on the surreal numbers. I've also begun serializing a separate book project called Panorama of Logic, which is a kind of introduction to topics in logic, and that's proceeding. My book on infinite games will also be serialized there, and I have another project, Math for Seven-Year-Olds, which is a bunch of projects I have developed over the years, activities to undertake with young people who are interested in mathematics. I used to go into my son's school and my daughter's school and do little math projects with the students in the classes, so I have collections of various things, and I'm going to put them all together on the Substack. It's called Math for Seven-Year-Olds, but actually it's for people of any age.

[01:21:57] **Matt Geleta:** The last one: we've talked a bit about intelligence, and we've definitely talked a lot about intelligent people. One of the topics that's very widely discussed at the moment is the concept of AI superintelligence, and whether and when we'll be visited by an AI superintelligence.

My question to you is: if you were to imagine that we would be, and you had to pick one representative from humanity, past or present, to represent us to that AI superintelligence, who comes to mind? Who would you pick?

[01:22:37] **Joel David Hamkins:** Well, of course, there are huge numbers of people that I admire very much, in my subject but also artists and so on. But I think it might be a kind of mistake to try to pick an extremely smart person; the smartest person that I know maybe isn't necessarily the best person for such a task, right?

So I remember this essay that I read a while ago. An alien intelligence comes to Earth, a kind of oracle, and offers to answer any one question that would be asked. And so humanity was supposed to organize and decide which question would be put to the oracle. At first people were proposing questions, and a kind of conference was held to discuss the proposals and ultimately to decide. People were proposing engineering questions or medical questions, say, what's the cure for cancer, or something of that nature. But then someone said, well, what if there isn't actually a cure for cancer? Then the answer won't be so useful to us.

And someone had the idea: well, why don't we ask, what is the answer to the best possible question that we could ask? And people said, yeah, that's great, because whatever the best question is, that answer will be really useful to us. But then people objected and said, well, no, if we just ask it like that, then maybe the answer is going to be, you know, 42 or something, and we won't know what the question is.

And so the proposal was made that the question should be: what is the ordered pair whose first entry is the best question that we could

[01:24:22] **Matt Geleta:** Yeah.

[01:24:22] **Joel David Hamkins:** ask, and whose second entry is the answer to that question? And this had almost unanimous support at the conference; that's the question that we should ask.

Okay, and so it was voted on and approved, and that was the final question. And so they put it to the oracle, and the oracle beamed down, or however it was, and answered the question, and said: the ordered pair whose first coordinate is the best possible question you could ask me, and whose second coordinate is the answer to that question, is the ordered pair whose first coordinate is the question that you in fact asked, and whose second coordinate is the answer which I am now providing.

[01:25:09] **Matt Geleta:** Oh, that's fantastic.

[01:25:12] **Joel David Hamkins:** And I found that so hilarious.

But it also brings up this question of who to pick. It's not necessarily who you'd think; it may come down to skills that really don't have anything to do with intelligence. It's really some other kind of criterion that one should be thinking about in order to select the representative of humanity for such a purpose, in my view.

[01:25:40] **Matt Geleta:** That's a very uplifting and lovely place to end it, I think. Joel, thank you so much for making the time to speak to me. It's been an absolute pleasure.

[01:25:47] **Joel David Hamkins:** Yeah, thank you so much for inviting me.

[01:25:53] **Matt Geleta:** Thanks for listening to this episode of the Paradigm podcast. If you're enjoying this podcast, please subscribe on YouTube and give us a five-star review on your favorite podcast player. This goes a long way towards increasing our visibility, and that helps us attract even more fantastic guests. You can also head over to our website, where you'll be able to submit questions for our guests, get access to special ask-me-anything episodes, and some other nice perks.

The Paradigm podcast is free, but donations are very much welcome. Thanks for listening, and I hope you'll join me again next time.
