Paradigm
Junaid Mubeen: Artificial intelligence and the future of education



Junaid Mubeen holds a PhD in maths from Oxford and a Masters in Education from Harvard. He has spent over a decade working on innovative learning technologies, including as Head of Product and Director of Education at Whizz Education, and as Chief Operating Officer at Write the World.

Junaid is the author of the book “Mathematical Intelligence”, which explores the role that creativity plays in maths, and the edge this currently gives human mathematicians over artificial intelligence. He’s also working with acclaimed science communicator, Simon Singh, in developing the world's largest online maths community. And, as a cherry on top, Junaid once earned fleeting fame as a winner of the TV game show series, Countdown.

We discuss

  • AI in chess and other games

  • AI performance on standardised tests

  • Problems with the education system and the role AI might play in alleviating these troubles

  • Risks in the use of AI as tutors and teachers

… and other topics

Listen on Spotify, Apple Podcasts, or any other podcast platform. Read the full transcript here. Follow me on LinkedIn or Twitter/X for episodes and infrequent social commentary.


Episode links


Timestamps

0:00 Intro

1:24 Garry Kasparov & artificial intelligence in chess

5:32 GPT-4 performance on standardised tests

11:21 Ken Robinson & risks of AI in traditional education

17:03 Khan Academy and advancements in EdTech

38:19 Government regulation of educational AIs

44:20 Will AIs replace teachers and tutors?

55:55 Big picture view of AI in education

1:02:49 Identifying high-potential young mathematicians

1:11:04 Junaid's work with Simon Singh

1:16:06 Advice for young ambitious people

1:19:44 Book recommendations

1:23:14 Who should represent humanity to a superintelligent AI?

1:25:05 Thanks and wrap-up


Transcript

This transcript is AI-generated and may contain errors. It will be corrected and annotated with links and citations over time.

[00:00:12] Matt Geleta: Today I'm speaking with Junaid Mubeen. Junaid holds a PhD in mathematics from the University of Oxford and a master's in education from Harvard. He has spent over a decade working on innovative learning technologies as director and head of product at Whizz Education and as chief operating officer at Write the World.

Junaid is an author of the book Mathematical Intelligence, which is about the role that creativity plays in maths. And the edge that this currently gives human mathematicians over AIs.

He's also working with acclaimed science communicator Simon Singh in developing the world's largest online maths community.

And, as a cherry on top, Junaid once earned fleeting fame as the winner of the TV series Countdown. Today we dive deep into the latest developments in generative AI and how this might impact the education space. We discuss AI in chess and other games, AI performance on standardized tests, problems with the education system and the role that AI could play in alleviating some of those problems, the risks in the use of AI as tutors and teachers, and other topics.

As always, if you're enjoying the Paradigm Podcast, please subscribe on YouTube or give us a 5 star review on your favorite podcast player. This goes a long way towards increasing our visibility, and that helps us attract even more fantastic guests. And now I bring you, Junaid Mubeen.

[00:01:41] Matt Geleta: Junaid, thanks for joining me.

[00:01:42] Junaid Mubeen: Thanks for having me. Pleased to be here.

[00:01:45] Matt Geleta: I'd like to start by talking about chess, actually. Chess grandmaster Garry Kasparov was very famously defeated by IBM's Deep Blue AI in the late 1990s, and ever since then chess AIs have been far superior to human beings. Today a human being will basically never beat even the best chess-playing AI. And yet, since that point, Garry has continuously claimed that the best performance will always come from some combination of an AI and a human being, an AI with a human in the loop of some sort. And I've always really wondered about that statement. In essence, he thinks a human in the loop is necessary for optimal performance of an AI in basically any context. I know you've thought a lot about AI in the context of education. What do you make of Garry's claim?

[00:02:37] Junaid Mubeen: It's something I've grappled with, because it's something you yearn to be true. It would be great if, as humans, we could find a persuasive argument that we ourselves are necessary in the loop in order to achieve optimal performance. It's a tough sell with chess, though. It was true for a while, in the immediate aftermath of his defeat to Deep Blue. This form of freestyle chess emerged, where you'd have teams of humans and computers collaborating and competing against other teams. And they found, on quite a few occasions, that the most effective teams weren't the ones with the highest-powered computers or the human chess grandmasters, but teams of amateur chess players who just knew how to work alongside computers and effectively get the best of both.

So that edict, the idea that the right combination of human and machine will supersede either one, was certainly true for a while. But I think now, as AI becomes more sophisticated and you have systems like AlphaZero, which plays a whole range of games, it's becoming a harder sell. There was this fleeting period where it was true, and what I've concluded is that it probably tells us more about chess as a discipline than it does about our human potential.

In the very early years of AI, there was a belief that if you could develop a computer program to master chess, that would be the pathway to AGI, to artificial general intelligence. I don't think they called it that back then, but chess was seen to be endlessly vast and deep in its scope: master chess and you'll master the world, I guess. But Deep Blue itself wasn't very sophisticated. It really operated by brute force: it scanned a database of millions of human chess moves and had an evaluation algorithm for figuring out the best one at any given moment.

So what that showed is that you can actually achieve superhuman levels of performance in chess without having any real sophistication. This wasn't machine learning; it wasn't getting smarter as it played. And what's true even of today's machine-learning-based systems is that, while they're very good at chess, that's literally all they can do; it's all they're programmed to do.

So I think there's a very key distinction to be made between mastering chess and mastering all of life's other challenges. You can understand why somebody like Kasparov might want to use chess as a metaphor for life: his entire career and identity is steeped in it. But I think that is a tough sell.

[00:05:32] Matt Geleta: Yeah, I mean, your answer is actually a very good segue into something I've been thinking about, and that we've been seeing a lot in AI applications in more general contexts. Recently we've seen large language models like GPT-4 very famously outperforming humans on a wide range of well-known standardized tests. It's smashing the bar exam; it's doing very well on the GAMSAT, the GMAT, the SAT. Many of these are tests that are meant to put human knowledge and human intelligence to the test, and they've always been seen as gateways into very coveted positions that require very high levels of general intelligence: Ivy League universities, medical school, law school. And now we're seeing AI performing very, very well on those. What do you make of that? Is it similar to the chess case?

[00:06:30] Junaid Mubeen: So, there's obviously a lot of buzz and excitement when GPT-4 comes out and just smashes all of these traditional exams. But we then need to examine how it's gone about achieving those benchmarks. GPT-4 has been trained on a huge amount of data. No one knows exactly how much, or how large the model is, although more information seems to be coming out. But there's certainly a sense that it's prepared for those tests, and prepared for them in a way that doesn't really lend itself to comparison with humans, because the amount of information these models are able to ingest as part of their pre-training would just be insurmountable for a human. It would be akin to a human preparing for a test by spending hours and days and weeks and months and years absorbing all the information the internet has to offer.

Every past paper for those particular exams is included within that. So it's not altogether surprising that when they're then confronted with one of these standardized exams, they're able to perform reasonably well, because they've basically seen the questions before, or close approximations of them. But there is a bit more to it.

I don't think we can dismiss large language models and the intelligence they have. I don't think we can just characterize it as memorization; I think that's a bit too blunt. But it is revealing to see where they show their brittleness. If you change the wording of certain questions, it's amazing how they suddenly go from solving them flawlessly to basically going off the rails.

I've examined in close detail how they perform on different types of math problems, and it's fascinating. If you cherry-pick certain Math Olympiad problems, which are some of the toughest problems a high school student could attempt, there are examples, and this came out in the white paper, where GPT-4 is able to solve them. Then you give it a rudimentary arithmetic problem, and it goes off the rails. So you've then got to query: what kind of intelligence is that? It seems almost arbitrary in terms of the kinds of problems it can solve, so it doesn't seem to lend itself to comparison with human performance.

But one of the benefits of AI, I think, is that it does shine a light on how we conceptualize our own intelligence. As it smashes through these benchmarks of standardized exams, a lot of the analysis is about how the models' training is contaminated with the data they've been tested on, or how brittle they turn out to be when you change the wording of questions. And the conclusion in the AI community is that these benchmarks aren't suitable for computers, that we need to give them different types of tests, because they essentially prepare for those tests differently to humans. I think that's true, but it misses another key insight, which is: are all those tests actually fit for purpose for humans? Because everything I've just said, levelling that critique at GPT-4 or whatever, could just as well be levelled at humans.

It could certainly be levelled at me. I passed a number of exams as a high school student, hell, even as an undergraduate, really just through brute force. Not memorisation per se, but by exposing myself to a whole load of past papers and anticipating what questions were going to come up. So certainly my pre-training in preparing for those exams would have to be subject to the same critiques.

What you're seeking in a curriculum, and through assessment, is to teach students knowledge and skills that they can transfer to other settings. Cognitive psychologists talk about transfer, and there's near transfer and far transfer. There's a suggestion that GPT-4 is showing near transfer: it can solve problems, and it can solve other problems that are similar to ones it's seen before. But if you drift too far away, give it a problem that's completely different, then it struggles.

But quite often the same is true of students. They absolutely nail the SAT, or the GRE, or the GCSEs, whatever exam you choose. But then give them a question they've not seen before, whose structure is fundamentally different, and it's amazing how often they just come apart.

And so I would hope it's a moment for us to pause as educators and ask whether these benchmarks are actually fit for students as well.

[00:11:21] Matt Geleta: Yeah, I mean, that brings us straight to the question of the education system in general, and whether we are preparing students to understand and learn in the right way, for the right things. If we take a step back and talk about the education system more broadly, I think everyone listening or watching will have had some form of traditional education, involving classrooms, age-based peer groups, standardized testing, a rigid curriculum, the whole lot. And I think many people believe the system has things in it that are broken. If I remember correctly, Ken Robinson's TED talk on the education system, on how schools kill creativity or something like that it's called, is still probably the most widely watched TED talk. And I do wonder, if we're deploying AI learning technologies that operate within this system, whether we're optimizing within the wrong paradigm. What are your thoughts here? Is there a risk that AI teaching technologies, learning aids operating within a paradigm we think might be broken, could just further entrench that paradigm?

[00:12:32] Junaid Mubeen: So I think what Sir Ken Robinson did very well was just beautifully articulate the pitfalls of one-size-fits-all education, and how it just doesn't give students enough opportunities to express themselves in all the different ways they can. Where I felt Sir Ken Robinson was less effective, just through reading his work, was in actually elucidating a solution to that, an alternative.

And it is tempting to think that AI is the natural solution. We know that one-to-one tutoring is incredibly effective; there's the two-sigma study from Benjamin Bloom that kind of put that debate to bed some 30 or 40 years ago. So there's a view that with AI, if you can simulate the behavior of a human tutor, you can replicate those benefits at scale and maybe offset many of the issues with mainstream education.

But, as you say, if you have any designs on bringing AI into mainstream education, you have to fit it into what I think it was Larry Cuban who called the grammar of schooling. And I've had experience with this in my previous roles, where you go into a classroom environment and you're working with a teacher who is skilled in what they do, incredibly well intentioned, but facing the most severe constraints: trying to usher 30 or more students through a very rigid curriculum, usually at the same pace, teaching them all in the same style, and preparing them for a high-stakes exam at the end of it. And then you come along with a value proposition that you have this AI-based virtual tutoring system that's going to somehow liberate every student to learn at their own individual pace, and at the same time empower the teacher with real-time insights into how their students are learning and where their knowledge gaps are.

I spent many years trying to figure out how to square that circle, because it's difficult for a teacher, even a teacher who buys in to the premise of what large-scale AI tutoring can do, to figure out how to fit it into their day-to-day demands.

So it's very easy to espouse AI as the silver-bullet solution, but then you have to ask how you're going to fit it within the current structures of schooling, and there's a degree of disruptiveness that those technologies imply. What I've seen happen more often than not is that the vendors behind these technologies, as well intentioned as they are, have to retrofit their innovative technologies into the pre-existing structures. So in the end, and I think this is what you're getting at, this AI tutoring system, and there are many of these out there, ends up just being a conduit to teach students the same old material in the same old way. There may be some differences, it may offer individualized pacing, but in the end it's really just preparing them for the same standardized exams.

Now, there's another reason why AI tutoring systems have, up until very recently at least, been limited to that, and that's because of the way they operate. An AI tutoring system has, until very recently, been restricted to very structured kinds of content, because it relies on the ability to automatically assess the student. And that's often the pitch to teachers: you don't need to mark students' work now, the system will automatically do that for you. But that relies on having objective responses from students that you can reliably grade. So the questions are usually very structured, very closed, often multiple choice or short-answer input. And that represents a very tiny sliver of all the different questions you can ask of students. In fact, it tends to lean towards the same kind of rigid content that we recognize is so limiting in the first place.

So there are many ways in which AI tutoring systems perpetuate the issues within education: in terms of the content, in terms of how they assess, in terms of ultimately guiding students through the same old curriculum. There is a sense that that might be poised to change with large language models. Now, with generative AI, maybe tutoring systems can handle more than multiple-choice and short-answer input questions. But that's not without its issues either.

[00:17:03] Matt Geleta: Yeah. I would love to get into some of those risks and how you see them playing out in the future. But before we do, maybe it's good to set the scene a little about how the space has developed. So let's get to the AI-assisted educational technologies we've seen. Over the past few months I've seen an explosion of activity in this space in the news. Many of the big educational companies are announcing AI-powered tools. I think the most well-known example here is probably Khan Academy's Khanmigo, which I think runs on GPT-4, and there are several others that have been announced. Could you set the scene for me a little more generally? What are we seeing in the edtech space?

[00:17:48] Junaid Mubeen: In some sense, we're seeing nothing new. It's very tempting, if you're just getting into this space, to think that this is the defining moment in history where technology will finally disrupt education and bring about the promise of personalized learning at scale. The funny thing is that I've been in this space for just over 10 years, and if we were having this conversation 10 years ago, it's uncanny how similar the dialogue would be. We would be talking about disruption. We would be talking about how technology is poised to revolutionize education. Khan Academy was actually on the rise 10 years ago; it had launched its adaptive tutoring platform, Sal Khan was giving his TED talks, and they were very inspiring. There are certain pivots they made along the way.

But this idea of personalized learning through technology has been around for more than just 10 years, because what I realized is that there's a long history that ties technology to these narratives. This idea that you can use technology as a teaching machine to ultimately bring about high-quality instruction at scale goes back many decades. Thomas Edison said that educational television was going to be the future of learning, that just having eyeballs on a screen would fundamentally do away with the need for textbooks. He basically said textbooks would be redundant within a generation. And of course, that didn't happen.

Then we had electromechanical devices, B.F. Skinner's teaching machines in the 1960s, and they were very rudimentary: they would serve up a multiple-choice question, you'd give your answer, and if you got the correct answer, it would move on and give you the next question. If you didn't get the correct answer, it would keep you where you are. So again, it was adaptive instruction, albeit in a fairly primitive delivery vehicle. But it was still the same underlying pedagogy, if you like: issue a series of questions to students and make them progressively more or less difficult based on how they're tracking against those questions.

And so you do need to ask why that promise has not been fulfilled. For many decades now, people have been saying that these technologies are going to disrupt education, bring about high-quality teaching, and drive better learning outcomes. What's different this time? And the only answer you can come up with is that the technology is now better; now, with generative AI, the technology is finally at a level of sophistication that allows us to achieve that potential. I find that slightly disingenuous, because 10 years ago the same pitches were being made. And I should say it wasn't just Khan Academy; there were many others out there. I was working for a company in this space, Whizz Education, and it was just flooded with competitors, and everyone was making the same pitch. I didn't see any hint in all the marketing, all the TED talks, that the technology wasn't quite ready. The suggestion was that the technology was ready right now. So I find it disingenuous that generative AI has come along and we're saying now is the tipping point, when many, many people were saying that 10 years ago.

I think the reason many providers felt that conviction 10 years ago is that, as limiting as their content was, as limiting as their algorithms were, it was fit for purpose for the means of guiding students through a traditional curriculum. The pitch I'd want to see with generative AI now is that it's actually an opportunity to finally rise above and beyond traditional curriculum and assessment. It's not just about efficiency of learning gains and guiding students through the curriculum more quickly, as you say, optimizing within a particular flawed paradigm, but actually thinking about how we can bring about the kinds of interactive learning experiences and content that traditional schooling has lacked for so long. But the moment you invoke generative AI in any context, and it is so early in its journey, you have to reckon with the trade-offs and with some of its current limitations. And I think the biggest one we're all aware of is its tendency to, well, some call it hallucination, others call it confabulation.

In an educational context, that is pretty serious. As a working professional, if you're aware of those limitations and you have a baseline of knowledge and expertise to rein them in, then you have the opportunity to get the best out of these tools while warding off their worst tendencies. But in an educational setting, when you're unleashing these tools onto students without equipping them with the knowledge or expertise to understand how to probe them, how to rein them in, how to apply appropriate guardrails, there is a risk that we're going to go too fast.

So I feel there has to be room in mainstream education for generative AI. We can't just bury our heads in the sand and pretend these tools don't exist. But it's also not as straightforward as saying that we now have AI tutors, because no human tutor in their right mind would go to a student and issue 80 or 90 percent valid statements and 10 percent confabulations. So I do think the bar is being raised with generative AI tutors, but we need to think about how we then offset their very worst behaviors.

[00:24:04] Matt Geleta: Yeah. Do you have kids of your own?

[00:24:08] Junaid Mubeen: I do. I've got a four and a half year old daughter and I have a two and a bit year old son.

[00:24:13] Matt Geleta: And what would your personal posture be towards letting them learn from some of the newer generative AI tools that are out there?

[00:24:25] Junaid Mubeen: It's not something I feel I need to worry about immediately, because at the age they're at, the most important dimension to their learning is exploration and play. With Lena, my daughter, we are bringing in some stuff that's a bit more formal, we're doing a bit of phonics here or there, but that's overwhelmed by our emphasis on just reading in general, and reading for pleasure, and similarly with numbers.

You know, I'm getting her to count up in twos and threes, but really I just want her to play with numbers, so I'm always looking for different sorts of physical manipulatives and different types of games we can play, just finding ways to bring those concepts into her everyday life. And so the idea of putting her in front of an AI tutor just feels so remote and so unnecessary.

We are very judicious in the kinds of technology we expose them to. There are a couple of apps we use, but they're not AI-based. I think over the last 10 years or so, edtech has largely been dominated by platforms fueled by data and analytics and clever algorithms, and there's not enough emphasis on content. And I think content is absolutely king, especially at that age, but actually all through the learning journey: the data means very little if the content isn't robust. There is some really good content out there, and that really tends to be what I prioritize.

I will also say, just very plainly, that we're very privileged. We're a middle-class family; the kids have two parents who are educated and can support them with their learning. And we need to be very careful when we evaluate these tools and determine where they can have value. You see a lot of pilot programs. Khanmigo, for example, is used at Khan Academy's laboratory school, and the demographics of the parents at that school are presumably very similar to what I've just described. Which means that the gaps in those AI-based systems, the knowledge gaps, that tendency to confabulate and make things up, can more readily be overcome, because you've got nurturing parents, and teachers who in that case are working with very small class sizes, who can continue to give students the personalized attention they need and actually support them in a very hands-on way to use those tools effectively, and to keep them at arm's length when it's appropriate to do so. In other words, to regulate their ways of learning.

So it can work in a context where you already have a base level of support for students. But it is dangerous to then assume that you can just transplant that model to contexts where students don't have that support, because their parents aren't equipped or just don't have the time to support them at home, or they're in large class sizes where they don't receive that personalized attention. The idea that you can just parachute an AI tutor in there without checks and balances, particularly now in the context of generative AI, where these systems are issuing a lot of mistruths, is potentially quite dangerous.

And especially in an international context, certainly, there's a massive shortfall of teachers. UNESCO says we need to recruit 69 million teachers just to achieve basic levels of education universally by 2030, as measured by their Sustainable Development Goals. So, 69 million teachers, and it's very tempting in that context to think, well, we'll just address that shortfall through AI. But I think it's potentially reckless, and certainly unethical, to unleash these very unpredictable, brittle, slightly black-box tutoring systems on students who don't then have the support to identify and address those limitations.

[00:28:44] Matt Geleta: Yeah, I mean, there's... sorry, go for it.

[00:28:47] Junaid Mubeen: Well, I was just going to wrap up by saying that what we may end up seeing is a widening of disparity. The previous digital divide was between those who had access to technology and those who didn't. Now there's a sense that technology is becoming more ubiquitous, and the divide might instead be defined in terms of those who have the technology but also have the human support to use it productively, versus those who have access to AI tools but are using them without the support and knowledge of how to actually rein them in.

[00:29:18] Matt Geleta: Yeah. You pointed to several different risks there that have been bubbling up in my mind, and one of them in particular, this disparity gap, is actually where you just closed. One risk I see is in how the incentives work here, because these tools are inherently so scalable, and there are so many scale economies baked into their development. One thing I do worry about is a very small number of very effective AI tutoring and education tools developed by a handful of companies located in one area. There would be inherent bias in that: bias in the content, bias in ways of learning. I wonder if there's a significant risk of that introducing very large-scale bias into the education system in general. What do you make of that risk? Is that something you think about?

[00:30:17] Junaid Mubeen: It's interesting, because that risk hasn't really manifested in the previous generation of AI-based systems. They've all, in the end, converged towards the same form of content. As I said earlier, it's very structured, because it was predicated on the need to automatically assess students.

So you haven't seen much plurality in the types of content. Some content tends to be more interactive than others; some is quite animated and engaging while other content is really quite static; some tends to be a bit more expositional and rooted in direct instruction. But it seems to me that they're all variations on the same theme.

Very few of these systems are able to really promote things like inquiry learning, actually getting students to explore concepts for themselves in the way that a good tutor can. Now, I would say that with generative AI the possibilities are increasing, so we should expect to see a greater plurality of approaches.

So I think what's happened in the previous generation is that a small number of players have really made their mark, but ultimately by aligning with mainstream approaches. And I think we're going to continue to see that trend. I think we're going to see a lot of the potential of generative AI lost, because in the end it's going to have to be aligned to the way that curriculum and assessment is done.

Until there's a fundamental change to the way that we imagine schooling, it's hard to imagine generative AI spreading its wings too far. But I think we will see some bright sparks, and there are already a couple of examples out there that predate generative AI. I hope there will be space for platforms that don't just assess students on right-or-wrong questions, but actually give them space to explore ideas, work in a sandbox of ideas, and foster more inquiry-based approaches.

And my feeling is that even if those approaches don't take off within a formal schooling environment, because they don't align to core curriculum and assessment practices, hopefully there's enough appetite and recognition outside of formal schooling, among parents and wider society, that that's the kind of learning students need.

But then I would go back to my earlier concern, which is that that may end up just widening inequalities, because the kinds of parents who would be alert to such pedagogies and approaches are probably the ones least in need of such tools in the first place.

[00:32:56] Matt Geleta: Yeah.

Yeah, that's interesting. Out of interest, what are the areas where you see this most likely to happen? What is it that you're not seeing in the traditional edtech, the assistive technologies we're hearing about in the media today, that you think we should be hearing about?

[00:33:17] Junaid Mubeen: Yeah, so what does a good human tutor look like? Let's take that as the starting point. I think there are two dimensions. One is a cognitive dimension: how we want students to learn. There's this huge debate, often not very productive, around whether we just need to teach students core knowledge and facts through direct instruction, or whether we need them to take control of their own learning and engage in more inquiry-based approaches.

In the end, you need a mix of the two, and I think the best thing you could say about the AI tutoring systems of the past is that they do the first bit very well, the direct instruction piece, teaching students core knowledge. The value proposition could be that this then frees up students and teachers to do more of the really rich, interactive, inquiry-based learning.

But that always felt like a slight cop-out to me: you're still deferring the most meaningful and richest learning experiences to someone else. Now I would hope that these systems take on richer learning content. A lot of the work I do in my day-to-day role is supporting high-potential students, as young as 11, as young as 10, in fact.

So 10- to 16-year-olds, helping them to really develop themselves as mathematicians. That means going above and beyond the curriculum, supporting them to become better mathematical thinkers and problem solvers, and exposing them to really rich tasks, which quite often have a right or wrong answer but can be approached in many different ways.

And actually exposing them to ideas that are often a lot more open-ended and unstructured. I just hope to see a lot more of that with generative AI, given its potential to grapple with more open-ended content, notwithstanding some of its present-day limitations.

And I'll just give an example of a platform that I think does this really well: Brilliant.org. I think it really goes out of its way to put inquiry and exploration at its core. It doesn't just feed you concepts; it actually has you rolling up your sleeves and grappling with problems for yourself, and then brings in relevant concepts at relevant times.

And there is an earnest attempt there to really open up the learning experience, to have students thinking in very expansive ways. So hopefully more of that with generative AI. But then the other dimension of good tutoring is a more human one: it's the relationship between a tutor and a student.

It's that ability to inspire a student, to show that you take genuine interest in their learning, to have such an intimate understanding of how they learn, what motivates them, what kinds of misconceptions they have. So it's a deeply social experience; it's an emotive experience. And I think in the end your charge as a tutor is to instill in your student not just the knowledge and skills to get through the next exam, which is often what you're paid for, but hopefully also a love of learning and an intrinsic appreciation for knowledge and expertise, so that they have something to carry beyond the next academic milestone.

I'm a bit wary as to how far we allow AI to go in becoming that socio-emotional support for students, because I'm certainly persuaded that it can simulate some of those behaviors. An AI tutor will eventually, if indeed not already, be able to convince the student that it does care about their interests, and it may be able to inspire them in a certain sense. But I think it's important to call out what's real and what isn't, and to recognize the difference between a flesh-and-blood human tutor who genuinely has sentience and consciousness and actually cares about your learning and your general prosperity, and an artificial construct which seems like it cares, which seems able to pull all the right triggers, but is ultimately operating off a set of algorithms, with no sentience and no genuine sense of caring.

So I think this is a very murky area, because we're already seeing how a lot of chatbots, for instance, are managing to woo their human subjects under this guise of caring about them and developing a relationship with them. I think that does raise ethical questions, particularly when we put them in front of students.

[00:38:19] Matt Geleta: Yeah, for sure. It's also just a very difficult question to answer, where we do want to draw the line. Because if the objective at the end of the day is to deliver fantastic education, to get people motivated and get students learning, and we think that developing a relationship with whatever is educating you, making you feel like it cares, is an effective way to do that, then it does become very difficult to draw that line. Particularly in the case where, as you said, maybe this stuff is deployed in lower-resource settings because it's much cheaper than a tutor, or there aren't enough tutors. So how do you think about that trade-off being made? Who, first of all, should be making those decisions at the large scale? Is this a place for government to step in?

[00:39:10] Junaid Mubeen: Yeah, so absolutely. There was a thing at the White House this week with a number of representatives from the big AI players, who agreed on certain concrete measures. One of them was around watermarking, just to be able to get a better handle on AI-generated content and its potential misuses around the spread of misinformation.

And wherever that goes, at least it signals some intent to hold these systems accountable, but most of all to give users an awareness of what kind of content they're coming across. And I think this is something we all have a stake in right now, as we're exposed to more and more AI-generated content.

I think we want to be aware of when something is AI-generated and when it isn't, because our entire interpretation of that piece of content may well change depending on where it's come from. And I think the simplest thing we can do is just apply giant labels on anything that is AI-based.

So it doesn't matter how compellingly human an AI tutoring system may seem. It may have an avatar that is indistinguishable from a human; it may speak and engage in conversation in a way that is indistinguishable from a human. But I think we should just keep reminding the learner at the other end that they are actually engaging with an artificial system.

Whether it's through a watermark or just some very clear and regular reminder that this thing on some level isn't real, I think that alone would go some way. I don't think that is sufficient, but without it I'd worry that we're going to end up cheapening what we understand good teaching to be. At the moment, it seems that a lot of the people who think AI tutoring is now going to disrupt education for good have a very loose understanding of what constitutes good teaching, and often they just conceptualize it in terms of efficiency of knowledge acquisition. So the value proposition is being able to learn at three, four, five times the rate, which has never really been the problem with education.

I think a much bigger problem is that education leaves many students indifferent towards learning. And that's not so much a function of how much information they have or haven't acquired at school, but of how they've acquired it, how they orient themselves to knowledge, and how they're motivated to acquire knowledge for themselves.

And I do think we relate to authentic experiences, and we're inspired by things rooted in real, lived human experience. Maybe we will come to be inspired by synthetic experiences too; maybe we'll see an AI doing things in a virtual setting and think, great, I can take inspiration from there.

But I think what we want to avoid is the conflation between the two. So just having very clear, transparent labels that remind us we're dealing with AI-based systems would go a long way. And I think we should also be a lot clearer about the limitations these systems have. They all have their caveats, but it's usually just a footnote at the bottom: you know, this system may occasionally generate a false output, beware.

That needs to be like a giant heading when it's applied in an educational context. And it's not just a heading; it's in itself a prompt to the learner not to see this tool as a tutor. "Tutor" itself may be the wrong label. See it just as a tool, maybe a collaborator, but a collaborator that offers you a lot of benefits while also coming with all of these potentially hazardous side effects. To engage with these tools, there's a level of awareness and a level of knowledge and expertise that you need in order to use them productively. So I think that should be a requirement too: you don't just put these tools out there; they've got to come with manuals, and you've got to take learners on a journey.

So that the education isn't just using these tools to answer questions, but actually understanding these tools well enough to know what questions to ask of them, to rein them in, to probe them. I think that really ought to be front and center of the curriculum now: how to interact with AI, with critiquing and questioning at its core.

And these are just fundamental thinking skills anyway. So I think there is a context where generative AI can help students, but it has to be coupled with all of these priors to make sure that students are equipped to get the best out of them.

[00:44:20] Matt Geleta: Yeah, it is interesting when you talk about the 10%, 5% of answers that might be wrong, might be hallucinated. I do wonder, if you play the statistical game with the average teacher or the average tutor out there, how much of what comes out of their mouths could also be wrong, and that does not come with that label. And I wonder if at some point there is a tipping point. In many fields that involve AI there is this question; in autonomous vehicles, the question is whether they eventually become safe enough that they become the default and we don't have humans behind the wheel. I do wonder if something similar might happen in education at some stage.

[00:45:03] Junaid Mubeen: Yeah, I think that's a fair question, because if the hit rate of chatbots went up to 99.9%, we'd probably say the incidence rate of spurious outputs is so low that on balance we're willing to accept that trade-off. I think right now we're far from that. You look at just how rudimentary some of the mistakes are that these chatbots make, particularly in a mathematical context where, as I say, they trip up on very basic arithmetic at times.
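[Editor's note: the trade-off being weighed here is easy to make concrete with back-of-the-envelope arithmetic. The sketch below is purely illustrative; the hit rates and question counts are invented for the example, not measurements of any real chatbot.]

```python
# How many spurious answers would a student actually encounter at
# different chatbot "hit rates"? All numbers are hypothetical.

def expected_errors(hit_rate, questions_per_week, weeks):
    """Expected number of wrong or hallucinated answers a student sees."""
    return (1 - hit_rate) * questions_per_week * weeks

# A student asking 50 questions a week over a 40-week school year:
for rate in (0.90, 0.99, 0.999):
    errs = expected_errors(rate, 50, 40)
    print(f"hit rate {rate:.1%}: ~{errs:.0f} wrong answers per year")
```

Even at 99% accuracy the student still meets roughly 20 spurious answers a year; at 99.9% it drops to about two, which is one way to frame the "reasonable threshold" question discussed here.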

So whatever we consider a reasonable threshold, we're far from reaching it, but I think that's a good conversation to have. I do think there is still a difference with a human tutor, though. I've tutored students for over 15 years, and I would never want them to think of me as this omniscient sage who's going to deliver a truth at every turn.

I think what you always want as a tutor or a teacher is a degree of humility, to recognize that you don't have all the answers, and then to model that behavior of humility and self-reflection for students: where you have uncertainty, you're able to express it. And I face this all the time, because at the moment a lot of my time goes on helping students prepare for maths challenge papers. The questions are really very hard; even though they're pitched at 13- and 14-year-olds, they're very difficult at times. And if I'm faced with a question in the moment that I can't solve right away, I'm not going to bullshit.

I'm not going to pretend I have the answer and just shoot from the hip. I'm going to be very transparent with students: actually, that one has me stuck as well. And that's just a great teachable moment, to then show them how we can work through that struggle. It may or may not lead to the correct answer, but modeling that uncertainty and showing them the processes you undertake to bridge that gap in your knowledge, that's where many of your teachable moments come from.

Chatbots could plausibly behave that way. Right now they don't; right now they just very confidently spout answers indiscriminately. But you could imagine it, and I know this is the kind of fine-tuning that Khanmigo is trying to implement, where it's trying to regulate its own impulse to just give the answer.

Part of that is about giving students the space to think for themselves rather than spoon-feeding them, but part of it should also be about reining themselves in and saying: I'm actually not entirely sure about this particular question. And the thing is, this shouldn't be difficult at all, because the outputs of large language models are all based on a probabilistic model. They're predicting the next token in a sequence based on an assignment of probabilities. So if that probability falls below a certain threshold, they ought to say: look, here's my answer, but you should know that I'm not entirely certain about it. And I think that's a perfectly legitimate response.
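[Editor's note: the mechanism described here can be sketched in a few lines. This is a hypothetical illustration, not how Khanmigo or any production tutor works: the token probabilities are invented, where a real system would read per-token log-probabilities from its model API.]

```python
import math

def confidence(token_probs):
    """Geometric mean of per-token probabilities: a crude whole-answer confidence."""
    log_sum = sum(math.log(p) for p in token_probs)
    return math.exp(log_sum / len(token_probs))

def hedge(answer, token_probs, threshold=0.8):
    """Prefix a caveat when the model's own confidence falls below the threshold."""
    if confidence(token_probs) < threshold:
        return f"I'm not entirely certain about this, but: {answer}"
    return answer

# Invented probabilities for illustration:
print(hedge("7 x 8 = 56", [0.99, 0.98, 0.99]))          # high confidence: stated plainly
print(hedge("The answer is pi/4", [0.6, 0.4, 0.7]))     # low confidence: caveat added
```

One caveat on the caveat: raw token probability measures something closer to fluency than factual accuracy, so real systems would need calibration beyond a simple threshold; this only illustrates that the signal exists.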

Now, in a student-tutor relationship, what would happen next? Well, you'd want to work with that student and say: here's how I've come to this part of the solution, which I'm not entirely sure about, but maybe there's something there. Maybe there's a semblance of an idea.

Maybe my solution isn't quite robust, but maybe there's an ingredient in there somewhere that we can run with. And that's an incredibly valuable skill to impart to students: knowing how to take a large output from one of these models, interrogate it step by step, pick out the salient details and ignore the rest.

So the lack of omniscience amongst human tutors can be channeled into a really positive learning experience. Chatbots would have to be fine-tuned in such a way that they turn that into a strength; right now it's a major weakness of theirs that they just indiscriminately give answers, modeling themselves as a sage.

But I think that's just a design flaw. It doesn't seem like a fundamental limitation of chatbots; it's just how we present them in an interface.

[00:49:27] Matt Geleta: I do wonder what the long-term limit of that is. It's actually a topic very close to my heart in another context: one of the things I do is work with a medical AI company that builds tools for radiology and pathology. In that setting we're also faced with this problem of having a high enough hit rate. It's a medical context, so the hit rate has to be extremely high to pass regulatory muster and to be accepted basically anywhere. And in this context, we are there; we've done it. People are now starting to wonder: okay, if that's true, if you've got AIs performing extremely well in these contexts, does that put the role of the radiologist or pathologist at risk, or does it significantly change it? I think fortunately in the medical context there is such a lack of medical capacity that that's not really a problem.

There are just not enough radiologists and pathologists, and having AIs that can do a lot of that work is basically always a good thing. But in the long-term limit I can see that changing, where they become so good and so comprehensive that a lot of that work becomes significantly less in demand for humans to do. And I wonder how the role of the teacher and tutor changes in the long-term limit of educational AI. Say we have AIs performing with a fantastic hit rate; when they're not sure, they can communicate it; they have this semblance of humility, all the good things we'd want. How do you see the role of the teacher or tutor transforming or developing over the coming period?

[00:51:09] Junaid Mubeen: My knowledge of healthcare is very limited. I'm aware that Geoffrey Hinton...

[00:51:13] Matt Geleta: Mmm.

[00:51:14] Junaid Mubeen: back in 2015, said radiologists would be redundant within five years. Now he's saying his prediction was just off by a few years, but it'll still happen. The counter view is that it's actually just going to supercharge the capabilities of radiologists, because it makes them more efficient and picks up on their blind spots.

So I don't have an informed view on whether the net effect of generative AI is ultimately going to be a positive one for radiologists; perhaps you do. It sounds like you're saying that in the short term it will be, but in the long run the number of patients each radiologist can serve is going to vastly increase, which is great, right?

That's an efficiency in healthcare that I think we would all welcome in society. The thing is, that efficiency metric may not apply in an educational setting. Do you really want to have one teacher for a hundred students? It's often touted as the only sustainable model for education, given the shortfall of teachers.

But ask yourself: if that's your child, do you want them to have one hundredth of their teacher's attention? One hundredth of their love and personal care? My daughter's just finished nursery, and the class sizes there were very small; it's just incredible the degree of connection she had with each of her teachers.

And that's something I hope she'll never lose, but I accept it's just a trade-off you face as you go through schooling and deal with these constraints. So I can say: if we think the role of teaching is just to impart knowledge, then there are efficiency gains to be made, and you can have fewer teachers for more students. But it comes at an expense, and I feel there has to be a natural limit, just as there's a limit to how many personal relationships you can develop. I think the answer to that is about 130, isn't it?

[00:53:10] Matt Geleta: Yeah. Yeah. I think so.

[00:53:11] Junaid Mubeen: You can have... there's a study that says you can have about 130

[00:53:14] Matt Geleta: It's the Dunbar number, I think it's called. Yeah.

[00:53:16] Junaid Mubeen: Yeah.

before you plateau. My own personal number is much lower, I should say; everyone has their own threshold, I guess. But for a teacher there's just a fundamental limit, and it's the limit of time, of attention, of emotional capacity and bandwidth. I don't know what that limit is, but I'm very nervous about the idea of just allowing class sizes to balloon.

Because even if, and it's a big if, you can make that work from a knowledge-transfer point of view, all of those other dimensions of teaching would, I think, be lost. I guess one of the claims of AI systems is that, because they take knowledge off the plate of the teacher, they free teachers up to give more of themselves to students.

There's a kernel of truth in that: teachers waste a lot of time marking work, and they waste a lot of time teaching students things they already know, so having a much more targeted approach to instruction definitely makes sense. But actually, those other dimensions of learning, inspiring students and having that emotional connection, are all situated within a context of knowledge transfer anyway. Teachers aren't just going to sit there and act as role models completely removed from the curriculum. You inspire students by teaching them, by talking through concepts, and the idea that you're going to just outsource that bit to an AI completely doesn't make sense to me.

So I think in the end there has to be a division of labor: which bits do we really want to leave to these AI systems, which bits do we actually want to leave to a teacher, and how do we then do those justice? And I just can't see how, beyond a class size of even 15 or 20, we can expect teachers to develop that intimate understanding of what makes each student tick and how to support each student.

So I'm not sure efficiency is the metric I'd be optimizing for in a classroom setting.

[00:55:20] Matt Geleta: Yeah.

[00:55:21] Junaid Mubeen: And I should also say, it seems to imply a fundamentally different model of schooling, one that's not classroom-based but more like campus-based, where you go into a lecture hall and you're surrounded by hundreds of students receiving instruction.

But the problem with that, of course, is that it's a very passive experience. It's impossible for the lecturer to meaningfully engage every student in an interactive way. So I'm not really sure what a 1-to-100 ratio means for a student beyond a passive knowledge-transmission exercise.

[00:55:55] Matt Geleta: Yeah. It does take me back to that question we mentioned earlier, of whether we're optimising in the wrong paradigm, optimising very myopically for small things. An analogy that comes to mind is Henry Ford's famous quote. I don't actually know if he said it, but he was thought to have said that if he'd asked customers at the time what they wanted by way of transport, they would have said faster horses, because the automobile wasn't in their consciousness. It didn't exist yet.

And obviously that was the wrong answer: the car was a massive, revolutionary step. I do wonder, in this context of AI tools and education, whether there's some similar dynamic at play, where we're thinking about how we could optimise the process of learning to do well on this particular test, or optimise how this curriculum is delivered, but missing the much bigger picture, the revolutionary idea that is actually enabled by the technology we have today. Do you have any views as to whether that's the case, and whether there are more revolutionary, big-picture things that are not receiving the attention they should be receiving?

[00:57:07] Junaid Mubeen: So I don't think one needs to be a visionary to understand what world-class, high-quality learning looks like. It's only a question of how you deliver that at scale. On an individual basis, as a parent, if you've got the means, the financial means, perhaps, and enough social and cultural capital, you can support your child outside of a schooling setting in a way that will really set them up for long-term success. So you will get them tutors, you will get them coaches: a chess coach, a sports coach; you'll get them playing a musical instrument with piano lessons.

The thing all of those have in common is that the child is able to learn in a very intimate setting under the hand of a bona fide expert who knows how to nurture their individual talent and how to respond to their individual cues. So this path towards mastery, I think, is quite well understood. And I should say that in many of those contexts it's also about allowing your child to interact with other students and work in a collaborative setting. Parents who have the means do this almost by instinct; they just realize that this is the path to mastery. It's expensive and it's not very scalable, but that's not something those parents have to worry about.

And so we know what high-quality learning and teaching looks like; it's really only a problem of delivery and scale. How do you do that when you've got 300 students under your charge rather than just a couple? So I think we need to make sure that when we're using technology to solve the scale problem at the delivery end, we're not losing sight of what the actual product is.

And we should remind ourselves of what those same parents do. I mean, there's an irony in Silicon Valley, which is that quite often Silicon Valley executives don't get high on their own supply,

[00:59:37] Matt Geleta: Hehehe. Yeah.

[00:59:39] Junaid Mubeen: right? So they will, they will develop their products in certain ways.

But the way they expect people to use their product differs from the way they use it with their own children. At times they'll just outright ban their own children from using certain devices or accessing certain social media websites, even though that's where they earn their living.

So there is a hypocrisy in that. But there's also just a recognition that the way these technologies are used very much depends on the context of one's environment. And I think if you're developing an AI tutoring system, it's not hypocrisy in itself to say: well, we're going to use this to address teacher shortfalls in developing countries; we're going to use this to scale up high-quality instruction across the state schooling system.

But there is a hypocrisy if you're not willing to deploy the same AI systems with your own children under the same constraints.

And it's difficult, because the other thing you then realize is that your own children, as I've mentioned before, are privileged. They have you. They have the means to really get the best out of those tools.

So it's very important that when we develop systems designed for large-scale delivery models like schooling, we understand what the end point looks like, and that we have that reflective moment to compare it with how we're educating our own children, who are not so constrained by large class sizes and lack of home support.

And that we're making the effort to bridge that gap. My worry is that AI systems are just going to end up diluting the quality of learning and perpetuating existing practices in the mainstream, while the people developing those systems continue to support their own students in ways that aren't so scalable.

And the harder question to address is: how do we take what we know is working with our own children and develop the technologies to scale that up? There aren't any easy answers there. How do you scale up high-quality tutoring, which isn't just imparting knowledge but also mentoring and coaching and developing an emotional bond?

It's not a given to me how you develop that at scale with generative AI. And again, it's just a question of how you conceive of education. It's very easy to reduce it to an exercise in knowledge transmission, and in that case you can see the role generative AI has to play. But we do aspire to more, don't we?

As parents, we want our children to learn more than just a discrete set of facts. We want them to develop a whole array of skills, and there are many things we'll do to make sure they have that support. How you then use technology to scale that up is, I think, a much harder question.

[01:02:48] Matt Geleta: Yeah. I guess the meta solution to a lot of this is more human intelligence: finding individuals who can solve very difficult problems of this nature. And it actually makes me think of one of the things I know you're doing, which is basically looking for those individuals, or helping create them. For people who have been involved in the mathematical world (I have a couple of maths degrees, and it's something I do in my spare time), the story of Ramanujan is something we all know. We all think about a person who was very unlikely to be given educational opportunities and do something great, but who by chance was discovered. And I think it's almost a statistical inevitability that there are people like this all over the world who are not being discovered, not being given the chance to rise to the top and make the contributions that they would be making. As far as I understand from some of the work you're doing, you're looking for those real high performers in maths at a young age.

I would love to hear more about that: what is the work you're doing, and what are you trying to achieve with it?

[01:04:09] Junaid Mubeen: Yeah, so, I mean, Ramanujan sets a very high bar. I don't think we expect all of our students to become Ramanujans, but that is the ethos: there is a lot of untapped intelligence out there, and we're focused very much on identifying students that might otherwise slip through the net.

So I've always considered myself as the one that somehow didn't slip through. You know, I came from a working-class background, went to a very standard state school, and managed to buck the trend. And I should say I had a major advantage, in that my older brother was actually the one that bucked the trend.

So I was the one that then just followed in his footsteps. But somewhere along the line, there was just a disruption to historical trends at that school. And I think a lot of that came about because of the support we were given in our home environment. Our parents weren't very well educated.

It's not that they were helping us with our homework, but they applied pressure and high expectations to make sure that, basically, failure wasn't an option. And as tough and rigid as that may seem, in hindsight it feels very necessary for the context we had at the time, where the vast majority of students around us would spend their evenings just hanging out on street corners and really not doing very much.

So the question I've always asked is: how do we make sure that success for people like myself isn't an anomaly, and that the talent you possess, which most students do, is nurtured? Now, if we had the means to support every student in society, we would. But we've decided to focus on a subset of students that show very clear potential for mathematics at a young age, and that can be demonstrated in any number of ways. These are students who come from marginalized backgrounds and who, in all likelihood, under the status quo, won't go on to have a successful STEM-related career, because somewhere along the way that talent is going to be snuffed out, not through malintent, but just through lack of resources, support, and awareness.

I mean, the number of things we had to do that just went against the grain. You know, I was teaching myself parts of the curriculum because my school wasn't able to offer those subjects. And it shouldn't be that way, right? It takes an extraordinary amount of initiative, and actually quite a lot of luck, to overcome some of those restrictions. We just want to make sure that students who have that potential from a young age continue to develop it and don't have to rely on luck, so that by the time they're 15 or 16, they're already set up for success.

So, you know, if you're an aspiring athlete or footballer, you'll join your local athletics club. If you want to become a professional musician, you'll learn to play an instrument and join an orchestra. But what do you do if you're a talented mathematician, and there's some aspiration there to make maths part of your long-term future?

And that aspiration may not come from you yourself; it may come from a parent or a teacher. It's not a given that just because you get taught maths at school, that's going to be your pathway to success. In fact, in many cases, it's going to take you in the opposite direction. So we're just there as a layer of support and mentorship, working with students in a very close-knit way, week on week, over five, six years, developing their problem-solving skills and exposing them to things that they just won't see in school.

We assume that they're quite well covered with the curriculum, and that's a real luxury for us: we don't need to teach them the basics, because that's almost a prerequisite for joining our program. It's then about taking them from good to great, so that they're on a level playing field with their most affluent peers. Whatever then comes of their long-term prospects, we've given them every chance to make maths a big part of their lives.

And I'd say another part of what we do is run large online maths circles, which are a lot more inclusive in their ethos. Anyone can join; we get hundreds of students joining at once, and there's no reason that couldn't be thousands. But we wouldn't pretend to have deep educational impact there. That's more for recreational value, just to offer something to the masses. This tutoring initiative, though, is one that we think could really transform students' prospects.

And this is the first year that we're running it. We're actually still in the phase of bringing students on, so it's still very small scale at this point. But I think it would be an amazing achievement if we can say that regardless of your background, your color, your creed, your parents' income levels, if you demonstrate an enthusiasm for mathematics and some potential as a nine-, ten-year-old, we can set you up on a path to success.

And it may be that technology does have a role here, because it's all done online, so there's a lot we have to figure out there. It may be that AI has a role as well, in terms of helping us to scale this offering. But when I think about the bar we're setting for our maths tutors, generative AI is so far from reaching it that it's just unthinkable to me that we would bring in a chatbot or whatever and have it take on the role of a human tutor. But I think we can ask ourselves, as time goes on, how we might leverage these technologies to facilitate what we're doing. I mean, I use ChatGPT all the time. I use it when I'm doing data tasks; I use it when I'm doing admin stuff.

It just speeds up a lot of my work. So I don't have a dismissive attitude towards these tools, but I think they've got to be used in their proper context. There is one context where I use them with students: occasionally, when I give them a maths problem, I'll feed it into ChatGPT as well and compare the responses. And I'll have the students look at the responses too, so that they can get used to evaluating what comes out of these systems. But that's using it as a tool, as a way of teaching students critical thinking and reasoning. It's not using it as a tool that's going to actually do the tutoring for us, which I think it's a long way off from.

[01:11:04] Matt Geleta: Yeah. That mission actually really resonates with me, because I had a relatively similar experience. I was born in a very small mining town in South Africa, moved around from school to school, very non-privileged places, and I think for a long time it would have been very unlikely for me to discover science and maths, which very fortunately I stumbled on. And I think many, many people are in a similar position. I once wrote a little article that went viral about this problem. The question was: how did you get into Oxford? And basically the answer was, I applied. I didn't have self-belief when I was younger, because I was not in an environment that exposed me to people who could go to these places. By happenstance I ended up on a path that led me to learning that it was achievable, and it completely transformed the direction of my life. I think the reason the article was so successful is that so many people around the world are in the exact same position. So I think it's a fantastic mission, and I really hope it's successful. And I think I saw that you were working with a name I know very well, too: Simon Singh.

[01:12:20] Junaid Mubeen: I should have mentioned that. So, Simon Singh is known for his books most of all: Fermat's Last Theorem, The Simpsons and Their Mathematical Secrets, The Code Book, all bestsellers. Over the last ten years or so, he's turned his attention towards education initiatives, and he's developed this project called Parallel, and I joined him about two years ago.

To launch the circles, in the first instance: this was during lockdown, and I think there was just a real appetite for giving students something to do. I was always captivated by this vision that we would take the amazing kind of mathematics that Simon's books promote, and that you'll find now across the popular maths and science genre, and turn it into lesson material and teach it to the masses. So that's what the circles do. We're on summer break, but come September we'll be running them again, different circles for different year groups. And it's all totally free: parallel.org.uk.

But now there's this tutoring arm of what we do, again all through this charity and, certainly for now, at no cost to students. It's driven by this desire to have really meaningful and measurable impact on students, and we've come to the conclusion that you need to have that very close, intimate contact with them on a regular basis, as is true of any mentoring and coaching scheme. Your own example is very powerful, because it often just takes that first step, doesn't it, to put yourself out there, and that willingness to fail. And you also realize, when you take that step, that the prospect wasn't as daunting as it may have seemed, and that the things many people believe about places like Oxford and Cambridge are actually quite incredible.

Now, some of those things actually turn out to be true: Oxford is incredibly weird in its own way. But actually most universities have their own quirky personalities. What I found really endearing about Oxford in my time there was just how diverse it is, and how you can find your feet there regardless of your background.

And I've got to say, the perception of Oxford in this country, in the United Kingdom, is dominated by politicians who all happen to have studied PPE there. There is an arm of Oxford that lends itself to that kind of prestige and privilege, but that's not the side of Oxford that I promote when I tell students about what they're capable of.

I also have to remind them that the standard isn't as high as they may think, that Oxford isn't full of geniuses, and that as long as they commit themselves to their subject of study, they'll give themselves a good chance. And if you can catch them early, at the age of 11 or 12, you've got a long time to work with them, to mould them into the kinds of students who, if they then set their sights on studying at a place like Oxford or Cambridge, will have a good chance.

It's just about giving them relatable examples and role models, and other students that they can then interact with. And again, you could give them a chatbot that behaves as if it went to Oxford and as if it cares about them and their prospects, and maybe gives them all the right cues, but I do think we've got to make a distinction between the real thing and a simulated version.

[01:16:05] Matt Geleta: Yeah. Well, I think that's a nice place to bring us to some rapid-fire questions that I'd like to close with. One thing I like to ask my guests is what advice they would give to people, and I think you in particular, based on the work and the thinking you do, have probably thought about this question quite a lot. So maybe let's narrow it down specifically to younger people who are looking to make their way down a more prestigious, more competitive path, who want to really push themselves academically. What advice do you give to someone like that? They know they've got it in them, and they really want to give it a go.

[01:16:49] Junaid Mubeen: Yeah, I'd say don't ever compromise on enjoyment. It sounds trivial, but if you focus on the things you genuinely enjoy, it'll take you a long way. So when I was thinking about my university options, I knew that mathematics was the subject I really enjoyed, but I was being encouraged by a lot of well-intended people to apply for a different subject: chemistry.

Why? Because the success rate of applicants in chemistry was much higher; it was just a subject that wasn't quite as in demand as mathematics at the time. So go and do chemistry, and that way you'll get into Oxford, you'll enjoy all the benefits of Oxford, and you'll get on just fine with your degree.

And I went along with that thinking. I actually went as far as writing a personal statement, and I was almost ready to submit it. Then I had an almost last-minute change of heart, and it was partly through reading Simon Singh's books that I thought: this subject, mathematics, is just on a whole different level for me.

It just makes me tick in a way that no other subject does. I don't know if I'm going to make it into Oxford or not, but I don't really care, because either way I'll end up doing mathematics, even if I have to do it somewhere else. And the thing is, if you enjoy what you do and you get a real intrinsic buzz out of it, you'll never have to work a day in your life.

That isn't quite true; there's always a degree of graft in anything you do. But even in those moments of graft, you will feel a higher purpose, and I don't think there's any purpose higher than just enjoying what you do. Especially if it's as intellectually satisfying as maths. In fact, actually, any subject can be intellectually satisfying.

Whatever scientists may tell you, there's no hierarchy to subjects. So I think enjoyment is underrated, and even if you can't yet join the dots and see how that might lead to economic prosperity in the future, you just have to trust that it will work out, because you're ultimately going to be recruited for the thing that you're best at.

And the thing you're going to be best at is the thing you enjoy the most, because that's where you're willing to put in all the extra hours without feeling resentful, and where you're going to produce your most creative work. And also, nobody has a reliable outlook on the future of jobs; there's no consensus. So it is folly to choose a subject now on the basis that it's guaranteed to lead you to a certain set of career prospects. Obviously it's different if you have your heart set on becoming a medic or a lawyer or whatever, but even there, there are many pathways to getting there. So right now, if you're in school and you're evaluating your options, your next step should just be a function of what you enjoy. And I really do believe everything else will follow.

[01:19:44] Matt Geleta: Yeah. It's amazing, as you said there, how much finding the right book at the right time can really influence someone and change their direction. And it leads very nicely to the next question I want to ask you: what book have you most gifted to somebody else, and why?

[01:20:04] Junaid Mubeen: Yeah, good question. Well, there's an accurate answer to that, which isn't a very inspiring one: my own book.

[01:20:12] Matt Geleta: That answer has been said before on this podcast, so you're in good company.

[01:20:16] Junaid Mubeen: So, okay, the obvious answer is my own book, Mathematical Intelligence: What We Have That Machines Don't. And obviously it's an absolute privilege and pleasure to be able to do that.

But I feel that doesn't capture the spirit of your question. So what book have I gifted that I haven't myself written? Can I mention two?

[01:20:44] Matt Geleta: You can.

[01:20:46] Junaid Mubeen: So one of them is Mindstorms by Seymour Papert, who we haven't actually mentioned in this conversation, though we probably should have, because he had a lot of very interesting ideas about how technology could be used. He was one of the big pioneers of early AI, and he had a big interest in education. Mindstorms is all about how you can use technology and computers to teach students how to think critically and creatively. He had this idea that the child would program the computer rather than the computer programming the child, and he had this system called Logo, which, I don't know if you've ever played with Scratch or similar tools today, was along those lines.

It would be interesting to see, if he were still alive today, what he would make of generative AI, because generative AI in many ways removes that coding layer. Now the skill you need is just inputting a natural-language prompt, whereas he was a big proponent of actually having students think through, step by step, how these systems work and how your individual commands will result in specific outputs.

He was able to relate that to mathematical thinking in a way that I found quite compelling, so I've often gifted that to colleagues. But another book, the one I gift the most to other parents and anyone who works with young children, is a wonderful book called The Man Who Counted, by Malba Tahan. That's the pseudonym of a Brazilian mathematician who had a big interest in the Middle East and Islamic culture, but who was also a mathematician who loved maths. So it brings together a lot of my interests. It's basically a mix of Arabian Nights and popular maths, and it's just a wonderfully intoxicating combination. As a maths aficionado, I found it amazing to see some of my favorite, and even unfamiliar, puzzles play out in this mysterious universe that he'd created, and it's just so beautifully written.

I'd say it's ideal for nine-, ten-year-olds. The prose is quite challenging, but gosh, the payoff is incredible when you get to the maths.

[01:23:06] Matt Geleta: Amazing. Well, I can link it in the show notes, and I'll probably also link your book in the show notes as well.

[01:23:13] Junaid Mubeen: I'd appreciate that, yeah.

[01:23:14] Matt Geleta: So, the final question here. We've talked a lot about intelligence and about exceptional people, and we've also talked about advances in AI and progress towards something that could even resemble AGI. Now I wonder: if we look far into the future, suppose we were faced with an AGI or an artificial superintelligence, and we had to pick one representative from humanity, past or present, to represent us to this superintelligent other. Who would you pick?

[01:23:48] Junaid Mubeen: Gosh. I kind of reject the premise of the question. You can't imbue any individual with that responsibility.

[01:23:58] Matt Geleta: I've had all manner of answers, from Buddha to Greta Thunberg.

[01:24:03] Junaid Mubeen: Yeah, I don't know that I would name any individual, because I think our biggest hope against AI is our collective intelligence, and therefore we would need a plurality of voices and perspectives. So if the AGI or ASI insists that we send one representative, I think I would tell them that that's a contradiction in terms, because in order to represent a broad church of human perspectives, skill, and experience, we're going to need a team of people.

And even then I'm not sure who I would pick, but they would have to be very different from one another and very complementary to one another. So, yeah, maybe just the Avengers.

[01:24:52] Matt Geleta: I think it's a good answer, and that's a great place to wrap it up. Junaid, thank you so much for joining me today.

It's been an absolute pleasure.

[01:24:59] Junaid Mubeen: Cool, thanks for having me. Really enjoyed it.

Paradigm
Conversations with the world's deepest thinkers in philosophy, science, and technology. A global top 10% podcast by Matt Geleta.