Neil Johnson: Bad Actor AI & the Online Battlefield

Neil Johnson is a professor of physics whose work explores the impact of generative AI tools like ChatGPT on online battlefields (e.g. Twitter, Facebook, YouTube and more).

Episode Notes

Neil Johnson is a professor of physics at George Washington University. He heads up the Dynamic Online Networks Lab, which combines modern data science with cross-disciplinary fundamental research to tackle problems such as the spread of online misinformation, and the impact of bad-actor generative AI tools in online battlefields.

Neil is a Fellow of the American Physical Society (APS), a former Research Fellow at the University of Cambridge, and a former Professor of Physics at the University of Oxford. His published books include Financial Market Complexity and Simply Complexity: A Clear Guide to Complexity Theory.

We discuss:

  • bad-actor artificial intelligence in online misinformation

  • mapping the online information battlefield

  • impact of AI in global elections

  • challenges of controlling bad-actor AI

  • timing and nature of AI-driven threats

  • relevance of complexity science and interdisciplinary education

  • cross-disciplinary societal issues like climate change and human conflict

… and other topics


Watch on YouTube. Listen on Spotify, Apple Podcasts, or any other podcast platform. Read the full transcript here. Follow me on LinkedIn or Twitter/X for episodes and infrequent commentary.

Episode links


Timestamps

Timestamps are for the video episode

00:00 Understanding the AI Battlefield

01:21 Global Context and Online Mapping

02:38 Challenges of Online Bad Actors

03:58 Congressional Hearings and Platform Responsibilities

05:49 The Role of Smaller Platforms

07:10 AI's Impact on Content Creation

08:36 Defining Bad Actor AI

20:18 Ethical Considerations and Access to AI Models

39:35 Distrust Subset and Community Influence

44:51 Navigating Distrust in Online Information

45:13 The Growing Distrust Subset

47:58 Quantum Dots and Vaccine Myths

54:58 The Complexity of Bad Actor AI

56:03 Predicting the Frequency of AI Attacks

58:01 The Red Queen Hypothesis

01:04:17 Endemic AI and Control Strategies

01:09:06 The Role of Complexity Science

01:20:12 Encouraging Interdisciplinary Studies

01:24:41 Book Recommendations and Final Thoughts




Transcript

This transcript is AI-generated and may contain errors.

[00:00:00] Matt: I'm here with Neil Johnson. Neil, thank you for joining me.

[00:00:02] Neil Johnson: Thank you so much for inviting me, Matt.

[00:00:05] Matt: Uh, Neil, in a recent interview with Elaine Dawson, you said, uh, if humans are in a battle with AI, then there needs to be a deeper understanding of the battlefield. What is the, the battlefield and, and are we in a battle with AI?

[00:00:19] Neil Johnson: Yeah, that's a fantastic question. Of course, for any kind of battle, I mean, whoever won a battle without a map of the battlefield? So what does that battlefield look like for AI? To answer that: AI is going to be used online. It's like steroids for all the kinds of myths and disinformation that we might see online.

The battle isn't against AI itself; of course we want to understand AI, but that's not the battle. The battle is: how can we stop bad actors using AI to the detriment of society? And to know where and how and when they're going to use AI, you really need to know about that online battlefield.

So that's what I mean by the battlefield. Because it's in the online space that AI will come into its own.

[00:01:21] Matt: Yeah, I guess, um, just in terms of global context for this issue, many people would be aware of the fact that 2024 is a super election year. You know, I think it's something like half of the world's population is in countries where there will be an election this year, if I'm not mistaken.

[00:01:37] Neil Johnson: Correct. I mean, it's unbelievable, but that's the truth. And so, you know, the world's kind of going into this blind, in the sense that, you know, even before AI, we didn't really... if you ask most people, even myself, I mean, when we started our study, we tried to map this out.

But, you know, it's like, well, You know, picture Europe, picture Australia, picture, yeah, I can, I, we all know what the map of the world looks like more or less, you know, I'll get the countries wrong, a lot of people get continents wrong, but we've got a general sense, but if you say to someone, okay, now do the same thing for the online world, oh, and by the way, continents are now different platforms.

And, you know, where they sit with respect to each other. Most people, I know I would, kind of draw a blank, maybe imagine some kind of storm clouds, you know, some good guys in the middle and a whole bunch of bad stuff on the periphery. So that's what we set out to investigate.

[00:02:38] Matt: Yeah, and then there's even this, um, you know, how we think about the structure of this battlefield. I think there's a complete collapsing of distance as well, you know, even if it is geographically bound actors, for example, in the online space, there is no distance between, between individuals and there's quite a different type of battlefield, isn't it?

[00:02:55] Neil Johnson: Correct. And that's the thing: unlike any kind of battlefield map you could possibly imagine, everything's connected to everything in principle. And so what does that look like? My own intuition, when we started trying to map out bad-actor activity, and I can talk about what that was, but, you know, what do people do when they're doing something bad online?

We kind of imagined that there'd be a whole bunch of good stuff, wholesome stuff, going on in the middle. Somehow, like a kind of beehive, you know, there's all the good stuff being made and talked about, nice, sweet, honey-type stuff.

And then buzzing around that, we thought we'd find the kind of random bad stuff, you know, people trying to cause trouble, but with no kind of coordination. But what we found was the complete opposite of that.

So when we looked... and this is kind of set up by, I mean, how many of us have seen footage of, you know, I sit here in D.C. and there are endless congressional hearings of the platforms, of what they need to do to, you know, battle bad actors and hate and extremism and the far right. You know, Europe's got this problem, Australia's got a bit of this problem, the U.S. has this problem, you know, what they can do, how they should, you know, manage the problem.

And these congressional hearings, well, the most recent one was the largest by any means, because it had five platforms represented, you know, not just Facebook and X, which used to be Twitter, but it also had Discord, which is a kind of gaming channel used by teens.

And it had TikTok, and, you know, that's kind of unusual. But what we found when we mapped it out (and by the way, we'd assumed that others had mapped it out; they hadn't) was that these platforms are almost like the receivers, the receiving end, of where the bad stuff actually is.

I mean, most, it's a curious bit of science because, you know, most kind of, the younger you are, the more you know about it, because, you know, ask any 15 year old, they're using a lot of platforms that most people have never heard of. And certainly Congress has never heard of, because they never invite them to these discussions.

And they're key, because they provide the kind of glue that holds it together, that provides the strength and makes the bad-actor activity so robust. So, all of these issues about, oh, Facebook, you need to do more; Facebook replies, we are doing more. How can they both be right? Well, it turns out they're both right and they're both wrong, because all Facebook is seeing is the end result of this incredibly interconnected network of bad-actor communities on many, many smaller platforms that many of us have never heard of.

They interlink with each other and that gives it a kind of web. And then they pump stuff out to the main platforms. And so Facebook's forever playing this kind of whack a mole game where they're kind of knocking stuff off. And then it really, it's like, It's like, you know, imagine you live in a neighborhood where there's a kind of bug problem, infestation problem.

Yeah, there definitely is. Here in DC we've got mice and rats all over the place. So imagine, you know, you've got mice coming in or rats coming in, and, you know, the first thing you do is blame the neighbors. Well, the thing is, as you've said, online everyone's a neighbor, so you don't know where it's coming from

if you don't have a map of the neighborhood. It's exactly the same thing for the map of the online space. You need to know how it's plumbed together, how it's wired together, in order to do something about it.
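
To make the "plumbing" idea concrete: below is a minimal illustrative sketch, not taken from Neil's paper, of how such a community-level map can be represented as a network, with communities as nodes and community-to-community links as edges. All platform names, community names and links in the code are made-up assumptions, purely for illustration.

```python
# Illustrative sketch only: a toy community-level map of the online space,
# where nodes are communities (not individuals) and an edge means one
# community links into another, including across platforms.
import networkx as nx

G = nx.Graph()

# Nodes: communities, tagged with the (hypothetical) platform they live on.
communities = [
    ("facebook/community_1", "Facebook"),
    ("youtube/community_2", "YouTube"),
    ("small_platform_A/community_3", "SmallPlatformA"),
    ("small_platform_B/community_4", "SmallPlatformB"),
]
for name, platform in communities:
    G.add_node(name, platform=platform)

# Edges: community-to-community links, e.g. a post in one community pointing
# into another. Smaller-platform communities interlink with each other and
# "pump" material out to communities on the big platforms.
G.add_edges_from([
    ("small_platform_A/community_3", "small_platform_B/community_4"),
    ("small_platform_A/community_3", "facebook/community_1"),
    ("small_platform_B/community_4", "facebook/community_1"),
    ("small_platform_B/community_4", "youtube/community_2"),
])

# With a map like this, structural questions become computable, e.g. which
# communities sit on the most shortest paths between others and so act as
# the "glue" holding the web together (betweenness centrality).
glue = nx.betweenness_centrality(G)
print(sorted(glue.items(), key=lambda kv: -kv[1]))
```

In this toy example the small-platform community that bridges into both big platforms scores highest, which is the structural point being made here: the big platforms sit at the receiving end, while the interlinked smaller communities provide the glue.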

[00:07:09] Matt: Yeah. And then I guess, on top of all that, we have the increasing role of AI acting within this map, where, you know, everybody, everyone listening, will know that AI-generated content is proliferating in a really unbelievable way. I think the last I looked, 20 states in the U.S. had passed regulations against things like deepfakes within elections.

Um, I think on the federal level this remains a bit stalled; no one really knows how to approach this problem. But certainly it's something that you've addressed in a fairly head-on way in a recent paper on this topic, 'Controlling bad actor artificial intelligence activity at scale across online battlefields.'

So maybe we can turn to that paper.

[00:07:56] Neil Johnson: Yeah, sure. I mean, it was literally a year ago that we first put this online, and we were thinking, you know, around that time there weren't so many people talking about the latest ChatGPT; the GPT version at the time wasn't so good that everyone was thinking it was an immediate threat.

And so we literally just asked the simple questions of, you know: what kind of bad-actor AI will appear? Where will it appear? When? And what might be done to control it?

[00:08:36] Matt: I mean, maybe even taking a step back in terms of defining what bad-actor AI even is. Because this is a little bit of a tricky question, and I would imagine that there would be disagreement between people on any particular case, because, you know, for example, an AI that promotes a particular political view might be seen by some as a bad actor if they don't agree with that view.

Whereas others who do agree with that view might not define it as a bad actor; you know, it's just promoting something that they believe in. How do you even think about defining what bad-actor AI is?

[00:09:12] Neil Johnson: Yeah, that's a fantastic question. And you know, it's a little bit like the discussion of what's misinformation and what's disinformation, even back to the idea of defining terms like, in the real world, violent terrorist versus freedom fighter. All of these definitions are really hard, but we take a very simple view.

When AI appeared, it was like when COVID appeared. There were already communities online that were beyond the kind of distrust scale of 'we don't quite trust what the health authorities are telling us.' They were beyond that; they were in a kind of state of 'what we are being told is wrong.'

And I'm going to go and tell you what's right now. Why is that? That's not a bad actor necessarily. I mean, actually a lot of, you know, parenting groups do that online because they're trying to do the best thing for their kids. Um, so that itself is not bad, bad actor. Taking that further, there are communities that purposely look to stir up things.

They bring in racism, they bring in, you know, misogynistic content, and they push towards the kind of hate and extremism area. And that is what we define. So when we do our studies and we collect data, we collect data on communities that, if the Department of Justice were looking at them, it would say they have used hate speech or they have incited extremism.

And so those are fairly clear in the law, at least in the U.S. And so once the content of their community hits that, okay, slightly fuzzy, bar, once it gets up there, that's what we call a bad-actor community. And you might think those communities might just focus on, as I said, racism, misogynistic content, these kinds of things.

But if they're stirring things up and again, maybe, maybe, maybe some of them think they're doing right. Maybe they do, but it's certainly hate speech and extreme, inciting extremism, they need to create content. And so they are the ones, and we've already seen it starting to happen, that are going to run to a tool like AI to generate content and spread it more widely.

And the curious thing, the interesting thing, is that one might think of AI as the latest GPT, etc. They only need some simple version of it to create content 24/7 that, by all the definitions I've just given, or at least those kinds of parameters, counts as AI produced by a bad-actor community, so therefore bad-actor AI.

[00:12:41] Matt: Well, let's dig into that final point then. So again, in the paper you address four very important questions: what kind of bad-actor AI will happen, where will it happen, when will it happen, and what we can do about it. On the question of what kind of bad-actor AI we will see, I thought that last point was really interesting. You know, I feel like there is this idea in the general public that the most damage will come from very powerful models, for example LLMs like GPT-4 and higher, because the content will be extremely convincing, you know, very sophisticated.

Um, but actually in the paper you've claimed that more basic tools like GPT-2, rather than more sophisticated ones, are probably more likely to cause a lot of the damage. Let's dig into that claim.

So what is the basis for, for this insight?

[00:13:35] Neil Johnson: It's a fascinating issue, because it turns out that things like GPT-2, now, people may or may not know this, those things can run on a laptop. They can even run, I think now, on your cell phone, and they have no filters, in the sense that they were early versions.

All the latest ones, you know, I'm sure OpenAI and all the other companies say they're putting on all these filters, so that if you ask it a question about something that pushes towards hate, extremism, you know, stirring up trouble basically, and again we'd have to get into the details because I'm not sure exactly what filters, but you could just try it out yourself.

It will come back with, I don't have opinions or beliefs on that or something else. You give that to GPT 2 and it will tell you. Now, GPT 2, as we all know, is only trained on a small corpus of content, and it's basically like a really, really early version, and so it's not very good, you know, it will never write a literary classic.

But online content isn't literary classics, as anyone knows. I mean, it has spelling mistakes, it has repetition. I mean, that's what people put online when they're putting stuff online. And so GPT 2, it turns out, as we showed in the paper, it's very good at just producing content that looks good. Like it's human, online human content.

So basically, running GPT-2 on my laptop or on a phone, I could pump this kind of content 24/7 into any community that I'm linked into, and it would be very hard to tell the difference. You know, long gone are the days of the supposed Russian bot of 2016, just repeating stuff:

Trump is good, Trump is good, Trump is bad, Trump is bad. Those days have gone. It's easy to produce short text that looks like it could have come from anyone, you know, standing at a bus stop, and to pump this stuff out.

[00:15:58] Matt: Yeah. Do you think, um, I mean, So, if you look at the world in current state, I think you can say, you know, these tools like GPT 2, very basic, but can produce content that is, it seems like it's human generated and, um, you know, it's not very easily detectable yet, um, but presumably we will get better at detecting these things and I don't know how.

There might be things that a human eye can't, that can't detect that um, online filters will be able to and I, I mean, I would imagine that in the limit, um, you know, our, our detection methodology would be better at identifying things that were produced by less sophisticated models and, um, less effective against more sophisticated ones.

What is your sense as to how well we will be able to spot and correct and filter, down the track, these more basic tools? Or have we reached a point where the output is so human-like that that's already kind of an impossible task?

[00:17:03] Neil Johnson: I'll tell you a quick story. So, um, my wife actually teaches in a social science, and she and a lot of the other faculty in her institution have realized that actually the hardest essays to grade now are the kind of B-minus ones, because you can't tell if it was done by a student at 1 a.m., you know, five hours before the deadline, or by some basic GPT.

Um, it's the more sophisticated ones, where you're showing kind of critical thinking or there's some kind of logical steps, which are the easier ones. Ah, yeah, there's no way that that could be done. I mean, that would be amazing if that was done by a machine.

It's the more kind of mundane stuff. Which is much easier for a machine to mimic a human. And online, a lot of the harms we're worried about, could be misinformation, could be misinformation about vaccine, could be misinformation about mpox. That's the latest stuff we're seeing, of course. Um, you know, that's very mediocre kind of sentences.

Mpox will come, you know, it comes from this, and it will cause you that, et cetera, et cetera. Doesn't need to be much of a sentence, doesn't need to be sophisticated.

[00:18:26] Matt: And I guess there's even compounded by the issue that, I mean, it doesn't even have to be false statements and misleading statements in and of themselves. It can even just be skewing the distribution of what's out there. You know, for example, the mpox one, you could fill the internet all day with correct statements and true facts about, about this, um, about this particular phenomenon.

Um, and it's actually just the act of sort of amplifying the volume that, that leads to, um, sort of negative impacts as well. Um, and you certainly don't need a very sophisticated model to do that.

[00:19:00] Neil Johnson: Matt, I just lost you there. I don't know what

[00:19:03] Matt: You're back.

[00:19:04] Neil Johnson: We have, yeah, yeah, yeah, we have a thunderstorm here. I'm not quite sure what happened there. Or, or, conspiracy theory would say that somebody was

[00:19:14] Matt: But, you know, it's funny, it's funny you say that. I've had several conversations about these sorts of topics, and in those conversations, all of them, and in only those conversations have I

[00:19:24] Neil Johnson: ha

[00:19:25] Matt: issues.

[00:19:27] Neil Johnson: That proves it!

[00:19:28] Matt: That proves it.

[00:19:30] Neil Johnson: Ha ha!

[00:19:31] Matt: The question I was asking was, Um, or maybe it was, it was more of a, more of a point, you know, uh, it, it is one thing to, to ask whether we can detect, um, you know, if there is bad actor information being shared. But I mean, one of, one of the other issues is that it doesn't even need to be false information.

It can just be an amplification, you know, the mpox example. One could post true statements, facts, all day about this thing, and just by ramping up the volume, that could already have deleterious effects. And certainly you don't need anything very sophisticated or powerful; you don't even need a large language model to do that.

[00:20:12] Neil Johnson: You're absolutely right. I completely agree with that.

[00:20:18] Matt: Um, I mean, there is a sort of, it may be a rabbit hole question, but an ethical consideration here about the extent to which individuals should be allowed to access these models, even the most basic ones. To take an analogous example, if you look at actual weapons, you know, nobody would argue that individuals should have the right to bear bazookas or nuclear weapons. That's obvious to everyone.

Further down the scale, depending on where you are, the right to bear handguns is a different matter, and it brings in questions of individual liberty and so on. But I think people do still kind of draw the line somewhere on the spectrum of utility and danger, and that is certainly a factor.

And I guess with tools such as these large language models, I feel like there is a similar dynamic at play. You know, at the extreme, think about the most powerful large language model that has ever been created, or that will be created in the next 50 years. I think people find it obvious that any individual off the street should not have access to that by default. But then there is this question, you know, where does one draw the line? Is it at GPT-2? Is it at GPT-6? Do you have any views as to how to think about this problem of who should be able to access these various tools?

[00:21:41] Neil Johnson: Um, I, I think that, um, again, my concern, my major concern is with the mediocre models. Um, because there are even, you know, there are versions of GPT 2 that are trained on hate speech that are just trained on hate speech and they're commercially available online.

Unbelievably. So although all the attention is being placed on, you know, what's the biggest model, it reminds me a little bit of a kind of Cold War thing: who's got the biggest missile? Well, you know, when it came to the war in Iraq, it turns out that garage door openers were key, because they were the ones that triggered IEDs.

And the US and the coalition forces had nothing against that. They had to go and create garage-opener blockers and other things, and even those didn't really work. And so it's kind of like the simplest technology can actually be the most dangerous when it's used at scale. So, to give an idea of scale:

In these communities, one of these bad-actor communities, there are about a hundred thousand people in each. And these people can be from anywhere in the world, and that's part of the power of it. You know, it can be someone in Europe, someone in the US, someone in wherever. A hundred thousand per community.

And then what we found is that on each platform there are about a thousand of these communities that are important, in the sense that they really are bad-actor communities that connect to each other. And so they create the web. So, doing the math in my head, a thousand times a hundred thousand is a hundred million.

There are 10 platforms. Actually there are many more than 10 platforms, but let's just take a hundred million. A hundred million times 10 is a billion. And so, even if all those people were not themselves, you know, doing bad stuff, there you've got a billion or more people across the planet who are immediately exposed. And so it's these mediocre tools, the more mediocre tools, at scale. Of course the latest version could be more powerful, but the latest version needs to run on some other server, and I haven't got control of that server, so they can block that.
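
Spelling out that back-of-the-envelope estimate explicitly (the figures are the rough, order-of-magnitude numbers quoted above, not exact counts):

1,000 important communities per platform × 100,000 people per community = 100,000,000 people per platform

100,000,000 people per platform × 10 platforms = 1,000,000,000 (a billion) people within reach of this material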

But a more mediocre one, like the garage door opener, I can hold that in my hand, and so can an insurgent, and they don't need to connect to some base station or anything. That's the power there. So I think we often get carried away with the science of it. You know, what's the most powerful scientifically?

Doesn't necessarily mean the most dangerous for society. There's a little bit of a trade-off there with scale.

[00:24:54] Matt: Yeah. And that ties in very nicely to the subsequent question you discuss and answer in the paper, which is: where will it happen? And here again, I think there is a natural assumption that it is the very largest platforms that would be the place, you know, because people congregate there.

There are a lot of people on there; it's the YouTubes of the world where this will be most impactful, and therefore where most attention should be invested in controlling bad-actor AI and its impacts. Then, almost counterintuitively at first, your work suggests this is actually maybe not the case.

And that smaller platforms might have a very substantial role to play in the impacts of bad-actor AI. I mean, I'll put it to you: why is it that smaller platforms would play such a critical role, in your view?

[00:25:51] Neil Johnson: Yeah, I mean, first of all, there's a lot of them, and that now includes platforms that run on blockchain. So you can never actually shut down their content, because it's stored there forever and it doesn't actually exist on any particular server. So if I take lots of individuals.

Again, it's almost like an insurgency, lots of individuals that are, are, are strongly connected to each other. Suddenly, I've got something that's, first of all, it's decentralized, so I've got no one thing that I could take out that would then make the whole network fall apart. So it has this kind of decentralized strength.

And together they're actually bigger than any one of the other individual platforms in terms of their connectivity into the general public. So, you know, Facebook certainly has a huge audience. Not among 16-year-olds, though; 16-year-olds are using something else. And 16-year-olds in a couple of years' time will be voters.

So, as this goes forward to future elections, these elections, future elections, people growing up on these other platforms, using these other platforms, they're already being exposed to this material. And so, getting Facebook in front of Congress is great if you want to stop your grandmother or grandfather seeing wrong information.

But if you want the next wave of voters to stop seeing it, you've got to do something else.

[00:27:30] Matt: Yeah. And it also brings up the question of, you know, a lot of policy discussions have contemplated ideas like breaking up large platforms: they're too large, too much power, they pose too much of a risk. And again, this might just be a consequence of how we think,

in these sort of geographical terms. I'm not sure what it is, but networks are different, and if what you're saying is true, there are some ways in which it might even be the wrong decision, and sort of more dangerous, to break up large platforms into many, many smaller fragments.

[00:28:08] Neil Johnson: Correct. Imagine you drop a glass. Done it many times in the kitchen. If it's, you know, one of those glasses, I can't remember if it's the cheap ones or the expensive ones, but anyway, one set only breaks into a couple of pieces. Yeah, it's a pain, but you can pick them up. They're dangerous, but you can pick them up.

You can see them and you can pick them up. Now imagine the other ones that shatter into a million tiny pieces. No one of those pieces on its own is going to really hurt you badly. But you take all of those and leave them around, and that is going to create a problem. All those shattered shards on the ground.

Now you try and pick them up. How long is that going to take? Shard by shard instead of picking up big pieces. Okay, it may be hard to fight one big piece, but at least you know where it is. You can surround it in some way and you can remove it. But now you're picking up thousands and thousands of small pieces.

That's the analogy for what would happen if you start to break things up. I mean, we already see it now with those small communities, I'm sorry, small platforms. And one has to remember about bad-actor communities: why would they make a noise?

Well, they always want new recruits. They always want to spread their message. So they're public; they're secret, but hiding in plain sight, as it were. They're putting out their message. And so, you know, you've got platforms with lots of communities in them, and the platforms are small, but they're interconnected with each other, and they do that so that they can attract more recruits.

That is a substantial thing to try and beat. And even in the kind of military sense, let's face it, even the U. S. is not very good at beating delocalized, decentralized opponents. They're very good at standing off with some big piece of glass, but not with thousands and thousands of little pieces where you can't even decide what the head is.

[00:30:18] Matt: yeah. I do wonder actually if this plays into, if there's this almost a bit of a bias here that plays into how discussions on these topics are done. Um, you know, if, um, you look at the discussions that have been just out there, they do tend to make large platform assumptions. Um, again, I think on the one hand, it's just because this is initially the natural thing to do, but I wonder if, if another part of it is also just because the other problem is so difficult to understand.

Um, I guess we don't really know how to approach, you know, hundreds of millions of bad actors, versus one large platform with many bad actors. Is that your sense? And, I mean, a more general question: what is it that has stopped others from thinking about the problem in this way and focusing more on the many fragments of small platforms and bad actors, versus the larger ones?

[00:31:15] Neil Johnson: Yeah, nobody has a map. Nobody has a map of what that looks like. And we get back to our initial discussion. I ask, you know, if you think to yourself, well, what does the internet look like? Now, maybe we're imagining lots of little things connected together and, but only because that's what we found when we, when we mapped it out.

So it's that lack of clarity. And I'm, I'm always amazed by this because. You know, could you imagine going to, uh, you know, a doctor and saying, well, you know, I've got a, my wrist hurts or something like this, or I've got a pain in my side. And the doctor kind of just guessing, no, you expect the doctor or the specialist to know how the pieces of the body are connected together, even though they, you know, you can't see them and nor can they, but there's some knowledge, somebody has mapped it out.

So that they can tell, well, if I do this, it might have an impact on this, which may impact this, you know, they, they, they can see that, but guessing that went, you know, centuries ago, thank goodness. So we expected people to have mapped this out, but it turns out they haven't. And the reason is that there are many reasons.

First of all, it's actually really hard. You know, platforms don't just turn around and say, hey, yeah, we'll give you our information. They don't want to, you know, from Facebook down. And maybe quite rightly; it's like turning up at, you know, Coca-Cola and saying, give me the list of all your customers. They're not going to do that. So getting information is hard. That's why we focus on the communities and not the individuals.

And they're not going to do that. Um, so getting information is hard. That's why we focus on the communities and not the individuals. These communities of 100, 000 people, because actually it's been shown, first of all, then we're not running into privacy issues, but second of all, it's been shown that, um, that's where trust develops, you know, all of us humans, we like to be in communities, we like to be in groups, and online communities, as it showed you in COVID, et cetera, people turn to their online communities, good people and people trying to do bad things, turn to the online community for kind of support, My favorite example is there's a, like, one of the first studies showing trust in communities is a group, you know, looking at the community of, they call them SADs, stay at home dads, who on Twitter, and this illustrates the problem with something like just looking at Twitter, which a lot of people do.

In, in academic work, because it used to be available. Um, Twitter, you go on, the dad goes on and says, yeah, I've changed the nappy. I've changed the diaper and the, you know, I'm, I'm a star. And then in Facebook, they're in their community saying, well, you know, I'm worried about this. I'm worried about the relationship side.

So the concerns. are within these communities, the shout out announcements are on Twitter. So unless we unravel or map out how those communities trust. I mean, we know that they trust everyone in there. Who are they linking to? Who are they talking to and who are they listening to? So in this study that we're talking about now.

We know that the places, the communities that will start to use AI first are the ones that want to get out a message for good or for bad reasons. And those communities sprung into life early in COVID. The same communities tend to spring into life, whatever the crisis is, and they will be the ones that use AI.

Um, for getting their message out and so early in our, our, our look at the communities that were concerned about COVID, they tend to be less, they're not really hate communities, not extremism communities, but they're certainly things like parenting communities. They sprang into action early, connecting to each other and connecting to, well, what other communities are talking about this new unknown disease.

Who were those other communities? The bad-actor communities. They were saying, oh yeah, this was a Chinese thing, you know, everyone Asian has created this; somehow this is a disease that's going to take out people of colour. I mean, all of those racist things started immediately, as soon as the rumours started about a disease coming out of China.

[00:36:00] Matt: Yeah, the, um, the COVID example is really interesting because, um, it brings up this, this question, you know, in, in mapping the online battlefield, um, this is almost like a sort of Damocles type thing where it is on the one hand, extremely useful for good actors to, to understand what this looks like because it gives us some control, but it is also very dangerous knowledge.

Um, and we certainly wouldn't want this information about the structure of all these online spaces to be, again, openly available. The reason COVID reminded me of this is because, again, there was a lot of talk of gain-of-function research, and there's an argument that gain-of-function research is extremely important because it is what's going to enable us, arm us with the knowledge, to protect ourselves against future diseases and so on.

But at the same time, it also arms us with dangerous knowledge and has its own sort of dangerous consequences if it goes wrong. And so there's sort of a divide between people as to, you know, should we do this, shouldn't we? And I would imagine there are similar dynamics at play in mapping these online spaces; there are questions of who should get access to this information.

You know, there's geopolitical considerations, there are institutional considerations. How do you see us like rallying together to, to both develop this, uh, this accurate, um, map of these online spaces, but also doing it in a safe way, in a way that doesn't expose us to potentially very dangerous negative consequences of having the sensitive information out there available and available to bad actors.

[00:37:33] Neil Johnson: That is absolutely a fantastic question. In our research, we never get into the details of, you know, who's in what communities, etc. It's more like when water comes to the boil: looking at the bubbles and how close they are to each other, rather than what the atoms inside each of those bubbles are.

We never do that. And actually, to know when something's boiling, you just need to know about the bubbles. So, you know, all of our research, just to reassure everybody, all of our research is absolutely anonymous. Um, surrounding, surrounding that, but you raise a fantastic point because in the end it becomes a tool like all things, like a, like a car that can be used for good and bad.

And so we hope, our goal is, that this will at some stage open up a dialogue, maybe in the next congressional hearing, who knows, where, instead of having five people at a table staring at the, you know, members of Congress, there's a great big picture of the map in the background.

And you know, they can close it and put it, you know, turn off the TV cameras or whatever, but to have the discussions with that in the background, because, you know, how else are you going to have the discussion? It's always otherwise rhetorical. Do more. We are doing more.

[00:39:02] Matt: No, I think, I mean, that's a, that's a fantastic idea. Um, I guess, uh, from a, from a personal perspective, a question that comes to mind then is like, you know, where, where do I fit into this map? And am I, am I, am I at risk? Am I vulnerable? Um, I do think a lot of people, are aware of these issues and they try to avoid these communities.

Um, but an interesting point that you brought up in the paper was this idea of a distrust subset, which feels like somewhat of a slippery slope into quite a vulnerable community. What is the distrust subset?

[00:39:37] Neil Johnson: Yeah, so if we imagine, you know, everybody talks about bad actors and the rest of us that are presumably good; of course it's a spectrum, like a colour rainbow spectrum. If we talk about communities that promote hate, that actually use hate speech and extremism, obviously they're at one end of the spectrum.

If we talk about communities that are focused on, you know, something absolutely wholesome, that's the other end of the spectrum. But in between, and this is where COVID plays a role, there are a lot of active communities with people who want to know things: they want to share things, they've got a common interest, they want to know about things.

And, you know, it can often be communities where people share because they've got someone in the family with a particular disease, or, like I mentioned, parenting communities, which are, by the way, an incredible source of influence, of course, because parents usually have parents above them and kids below them.

And so they actually tap into a lot, and they want to know. And there's a subset of them that we call the distrust subset, which were the communities that, as soon as official information was coming in about COVID, distrusted it. Now, were they being bad?

No, they're on this spectrum. They're not bad actors. But they're also not just talking about benign things; they are distrusting what they're told about vaccines. I'm not saying that's good or bad. I'm just saying that there's an element of distrust in there.

They want to do what's good for them and, I mean, for their kids. They're saying, hey, all of science is not correct, science keeps correcting itself, et cetera. They're trying to pick up on the latest things in science and they're trying to share information. And let's face it, in early COVID there was no information coming out from the science community.

So they were filling that void. But it turned into distrust of, say, public health entities, particularly in the US, but in the UK, Europe, and Australia there were a lot of those communities too, and they connected to each other. So, for example, there was a very active community in Saskatchewan, Canada, connected directly into a community in Australia, connected directly into one in the UK, connected directly into one in the middle of Ohio, and they would share information, and it turned into a kind of distrust of what they were being told. And that is a little bit of that slippery slope that you mentioned.

I'm not saying they did, but you only have to go a little bit further down that spectrum and you start to get to communities that were worried about saving the children. What's happening with the children? We've got to protect the children. Suddenly you hit the QAnon movement in the US.

Saving the children also emerged in the UK riots a few weeks ago. If you look online, which we have been doing, there's a lot of activity around that, because the riots were apparently kicked off by, you know, the stabbing of some children, tragic, but it kicked off a lot of the immigration, far-right, extremism content, and those communities started to connect. So it's exactly as you said, there is this

slippery slope with communities. I mean, take a group of three or four people sitting around, we've all been there. You know, sometimes the conversations can get off track. Usually when you're in a room doing that, somebody's got to go home or the Uber arrives or, you know, something.

Or you get fed up and you need to go. Doesn't happen online. Nobody goes. They just keep coming back. And so those communities can find their narratives dragged down. And then to the point, if you've got now AI adding into this, the AI is linking in and getting in, for example, this is what people really have to be careful of under posts in communities, look at the replies and the comments, because that's where a lot of times it can start to get down that slope.

[00:44:36] Matt: Yeah, I mean, I have personal experience with that if you go and look at some of the comments on various YouTube videos that I've got. I mean, we're often talking about topics that are getting towards the frontiers of, you know, scientific knowledge, and sometimes toying with ideas that may be overstepping the mark.

Um, you've got to act in that space. And, and certainly, um, I would say at least 10 percent of the comments that come through tend to be pulling towards this, uh, this distrust subset.

[00:45:07] Neil Johnson: Same, same with my emails. Same with my emails Matt.

[00:45:11] Matt: it's, uh, it's quite something. Um, I mean, one thing that I think about and worry about is I feel like the distrust subset is becoming I mean, the word subset makes us think of something that's relatively small, but I feel like the subset is growing and it's becoming maybe a fairly large proportion of all communities out there.

And at some point, I feel like, and this might be happening already, our default relationship with online content becomes one of distrust. You know, if you had asked me 10 years ago to go and fact-check something, I would have had no issues with Googling it and reading some articles, and that would have answered it for me.

Today, if I'm asked to do the same thing, I actually don't trust this process. And I think there was a recent paper in Nature from last year, by Kevin Aslett and some colleagues, that actually showed that fact-checking online, by sort of Googling false statements, can actually reinforce the wrong view, because of how these algorithms work and what you'll be shown and what's then boosted.

And so actually fact checking could be the exact wrong thing to do. Um, and so in a space like that, it feels to me, okay, well this distrust subset, um, or the, or the default position to distrust online information is very large. How do you, how do you think about that? And like, do you, do you have a sense as to, you know, just currently how big this, um, distrust subset is?

Okay. Thanks.

[00:46:44] Neil Johnson: you're absolutely right. I actually think it's, um,

I think it's quite large. Um, and it's not so much that it's, there's a lot of people putting stuff out all the time. It's that they are, those communities that are putting stuff out are linked to the huge mainstream who usually are talking about other things.

Sport. You know, where they go on holidays, vacations, et cetera, like that. But they're still getting comments, replies, people coming in, material coming into them. And so when's the next crisis? I mean, how many of us had mentioned the word vaccine, for example, before COVID? I don't think I'd said that word, you know, maybe when one of the kids needed a vaccine, but none, that was it.

And yet they were ready. You know, they were kind of ready, already kind of primed. And so, yeah, I think this is actually a very large subset. Sometimes for good reason. I mean, it amazed me that, well, let me start by saying there are no microchips in vaccines. Okay. There are no microchips.

It turns out, though, I did my PhD on quantum dots, which are tiny little semiconductors that have kind of atomic-like levels; you can use transitions between a couple of levels to detect them, basically. And it turns out that in December 2019, which was really the start of COVID, even though it didn't have a name and wasn't a pandemic yet, in Science Translational Medicine, I think, a group at MIT

put out an amazing piece of science, I have to say, an amazing piece of science, where they injected into rats, mice, a liquid containing these quantum dots, and then showed that you could actually pick up the resonance. And I think it was Scientific American that had an article reporting the science, saying, oh, you know, this could be great; we could use it for kids to detect whether they've had certain vaccines, and wouldn't that be amazing.

Well, yeah, it sounds amazing, but run forward a few months and it sounds terrible because suddenly you've got the idea that, yeah, so quantum dots could go into vaccines. Doesn't mean they are. It just means that there's a scientific capability. And also, unfortunately, talk about, you know, kind of series of unfortunate events.

That work was funded by the Gates Foundation, and of course Bill Gates is a target for this, and by the Chinese National Science Foundation. And so these communities that we call the distrust subset were picking up on things and then being told, for example, I've got, you know, I keep to one side a screenshot of the BBC trying to say to people, that's ridiculous.

And they showed a syringe next to some huge printed old circuit board, basically saying, how could you get that circuit board into a vaccine? Well, you can't, but that wasn't the point of the science. So the distrust communities, some of them knew about this, the work of the MIT group. And so when they were being told, when they were getting the message, oh, don't be silly, because you can't put a printed circuit board into a vaccine.

It made them distrust the kind of official messaging even more. You know, that's what they think of us; we're onto something. And once you get a group of people thinking they're onto something, you can find online other groups that think they're onto something as well. And it doesn't have to be flat-earth people.

It can be some people are thinking, you know, um, you know, kind of particular types of organic food, climate change, some, suddenly all these other topics are now causes for doubt. So this is, there needs to be a science of this at scale. I mean, that's what we're trying to contribute to for that reason.

[00:51:29] Matt: I mean, when one realizes just how many of these communities there are, the scale of them, and how many people do find themselves in these communities, I guess one reaction is just to think it's very unfortunate for those people. But there is the other reaction that then says, okay, well, if this is really happening at such scale, how do I personally know

that what I'm seeing is reliable? How do I know that I'm not in one of these communities? I mean, I would say currently the majority of people probably get the majority of their information from online sources, and a very large number of people get news from social media. And so I'd love to ask you, as an individual, how you think about protecting yourself from finding yourself in these sorts of information environments that could maybe end up misleading you.

Mmm.

[00:52:26] Neil Johnson: That is the holy grail kind of question. And a lot of people have tried to think, oh, well, it's like a virus, this thing, and therefore I'll just kind of vaccinate people against misinformation and disinformation and mal-influence, but it doesn't work like that. I mean, we all probably have, in our family somewhere, someone that kind of turns up and says, oh, I read this.

And, you know, they, over here, I don't know why they do it, but they call him a kind of Uncle John, you know, at the Thanksgiving table, suddenly turns up and spouts out something that they've read. And others say, Oh yeah, that goes along with something else that I read. Of course, they're probably reading the same thing or something.

And so I think, you know, in that context, we're all used to hearing that, so we have actually got our own kind of defenses over the years. I'm sure we don't always believe our Uncle John on those things. And so I think it's just that same skepticism that we might bring to, well, goodness knows how many times I've heard UK governments over the years say this is true and it turns out not to be true.

So, you know, that kind of skepticism that we'd hold from that, it doesn't mean I can go and then go and fact check it. I don't know what it is. I'm just skeptical. Um, we can be skeptical about skepticism in some sense. Um, so yet then the question is, well, where does the truth come from? Yeah. Okay. Um, now we're, you know, now we're getting into difficult territory.

Um, again, I get back to the thing: if you don't know where it's coming from. You know, we may know it comes from Uncle John because he walks through the door and says it, but online I'm not actually sure where it's coming from, but I know it's in a community that I trust on other things. So why wouldn't I trust it?

Uncle John, I only see him once a year, so I don't really trust him, but people in my community I've trusted on other things, so how am I not going to trust them? That is a tricky one. I mean, that is the conversation that should be being had, but as a prelude to that, you need the map.

[00:54:45] Matt: I mean, that could certainly open up a very large discussion that is its own conversation, so perhaps not digging too much more into that particular line of thought. The third question addressed in your paper is the question of when it will happen, and the frequency with which we'll see these bad-actor attacks.

Um, this is very interesting because I think a misconception, again, that people have is that because all of this is so new, it's so speculative, uh, conversations around bad actor AI, tend to be quite qualitative. And, um, I think there's an assumption that we don't really have the tools to say anything very concrete and, um, quantitative about future predictions.

So, for example, on the frequency of bad-actor AI attacks. And actually this is really interesting: it's a classic complexity science approach, where you look at the situation at a slightly different level and actually do make some very concrete predictions. And so I'd love to turn to that question, and even look at some of the methodology behind predicting the future frequency of these bad-actor attacks.

Uh, maybe let's start with the prediction itself though. Um, what, what does the paper say about the, the expected frequency of these attacks in the future?

[00:56:08] Neil Johnson: Yeah, we applied some, and it's very interesting, the science behind it that we built on. But we made the prediction. So when we put this paper online in the middle of, or late, summer last year, we did this triangulation that I can talk about in a moment, and we pretty much worked out that it would be, you know, very broadly, around the middle of 2024.

And it's very interesting that OpenAI in particular has come out and said that they've uncovered use of their AI tools by what the US would call foreign actors, foreign influence, state influence from outside, using AI tools, as of summer 2024. They've just come out with that.

There are various articles in the news about that finding by OpenAI, of the use of their AI technology by bad actors. So, okay, how did we do that? The story starts with, I think, someone observing post office workers nearly 100 years ago, looking at how quickly they did tasks.

Um, but they, how quickly they did tasks. So they do a task and then they repeat it. And every time they repeated it, they got quicker. And they, this was called a kind of learning curve. You know, it's a simple thing, kind of, you know, if you do something first time, it takes you a while, second time, less time, et cetera.

Okay. Now imagine you're doing that with an opponent stopping you. Now it gets into the idea that, and this is where we lent on the idea of a red queen. So, you know, as in, you know, kind of Alice in Wonderland, you know, the red queen kind of runs on the spot just to keep up. So now imagine the red queen is red, usually the color associated with the bad actor.

So red can pull ahead. Of the opponent, which we called blue. It doesn't matter. But anyway, so there's a relative Advance relative distance that red has, and that's an advantage over blue instantaneously. So this is beyond the kind of Alice in Wonderland, where the Red Queen was just on one one spot, just staying there to keep up.

So now the Red Queen is pulling ahead, the bad actor pulling ahead, and we say that the distance they pull ahead is like their relative advantage, and that will set, um, kind of the rate at which they'll be able to progress with the technology, the next one and the next one and the next one, and get quicker and quicker and quicker.

So now let's imagine that the opponent, you know, the state or OpenAI, whoever it is who's trying to pull them back, also kind of makes gains, and so it's like a give and take, like a tussle, like a tug of war. And to get a handle on this, we literally took the question: how far ahead am I if I'm undergoing a kind of random walk with respect to a wall that I'm walking away from, the so-called drunken walk? It goes like the square root of the number of steps. And so that is the rate, the average rate, at which I'll be able to progress with these attacks: it goes like the square root.

So that's to the power of 0.5. So the time between attacks will go like one over that, which is, you know, to the power of minus 0.5. And so when we looked at how other technologies had advanced, we used two examples. We used the example of algorithmic attacks on the financial market, so algorithmic trading hitting a price in a negative way. That also pretty much followed this. And so did a completely different setting: Chinese state attacks, they're known to be Chinese, that's why I say it, cyber attacks on U.S. infrastructure.

That also had this minus 0.5. So we knew we were onto something, because it looks like the advances in technology that a bad actor, a Red Queen, can have with respect to an opponent, the state or whoever's trying to stop them, follow this learning curve. We call it a progress curve, because they're not learning, they're just trying to get there; they're adapting to what the enemy,

um, what the opponent's doing, and of course the opponent is counter-adapting, then they're counter-counter-adapting, etc., etc. Hence the random walk. And it matched, and so we could make a prediction of when this would come down to the daily scale. And the prediction, when we did the numbers, came out as the middle of 2024.

And pretty much, bingo, mid-2024, that's when OpenAI was starting to report, hey, we're getting regular attacks now, um, from outside, using our AI technology. And you might think, well, I haven't seen a story of a big attack. Well, it's almost like death by a thousand cuts, or a million cuts, or whatever it is.

These are little attacks, all the time. And they're getting faster and faster and faster. And in the end, it will be like a thousand shards of glass. It's overwhelming for a system. So no one attack, no one piece of glass, is going to quote unquote kill us. But lots of them, again, swamping the system, will overwhelm the system.

It will overwhelm the defenses. So that's the win of the AI use.
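To make the progress-curve argument above concrete, here is a minimal sketch of the extrapolation, not the paper's actual calculation: assume the interval between successive attacks shrinks like n to the power of minus 0.5, pick a hypothetical starting date and first interval, and step forward until the interval drops below one day. The starting date and first interval below are illustrative assumptions only.

```python
# Minimal sketch (not the paper's calculation) of the Red Queen "progress curve":
# if the interval between the n-th and (n+1)-th bad-actor AI attack scales like
# tau_n = tau_1 * n**(-0.5), attacks eventually become a daily occurrence.
# tau_1_days and the start date are hypothetical, chosen only for illustration.

from datetime import datetime, timedelta

tau_1_days = 14.0              # assumed interval after the first attack (hypothetical)
t = datetime(2023, 5, 1)       # assumed date of the first observed attack (hypothetical)
exponent = -0.5                # the drunkard's-walk scaling discussed above

n = 1
while True:
    gap = tau_1_days * n ** exponent   # interval (in days) before the next attack
    if gap <= 1.0:                     # attacks now arrive at least daily
        break
    t += timedelta(days=gap)
    n += 1

print(f"With these toy numbers, attacks become daily around {t.date()}, "
      f"after roughly {n} escalation steps.")
```

With these particular toy numbers the crossover lands around May 2024, but the shape of the curve is the point, not the specific date; the paper's own estimate came from fitting this kind of progress curve to data from the two earlier technology examples Neil mentions.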

[01:02:32] Matt: Yeah. And so just to make sure I'm sort of fully understanding the thinking here, like to make a very concrete example within the context of, let's say open AI, um, you know, they have, uh, they've, they've released the model. It's got some certain protections, but people figure out ways to jailbreak and use it for malicious purposes.

They do. So OpenAI figures out and sort of adds more protections and counterbalances, and there's this back and forth, but the consequence of this is that over time the collection of bad actors progresses in such a way that the attacks become more frequent, um, to the point where, you know, the prediction is that now this is a daily occurrence.

Is that a correct understanding?

[01:03:10] Neil Johnson: Correct. Exactly that.

[01:03:13] Matt: Yeah. What is the um, what is the sort of very long term limit of, of this? Because presumably, um, I mean, something would have to break at some point. Um, you know, how far can we extrapolate this out? You know, is this something that will continue to become just a bigger and bigger problem, more death by a thousand cuts, a million cuts into, you know, 2026, 2027.

How do we think about the longer-term version of this?

[01:03:43] Neil Johnson: Yeah, I think that, um, definitely there's a kind of ramping up, maybe there was a kind of slowness on the side of OpenAI, um, to respond, um, they were probably pushing their top end technology, I don't, I don't, I can't speak for them, but that's what I imagine, so they've ramped their game up. So the others, you know, the bad actors have to ramp their game up again.

It's back to this, what we call the dynamic Red Queen. Um, I think we need to have a kind of change of mind about this. So just as, again, we've been talking about COVID, I'll just use it as an analogy: I think for some reason society has decided that having endemic COVID is okay. You know, it's kind of okay to have it rattling around.

And I mean, that seems to be the word, the kind of official position. I think it's going to be the same thing with AI. Again, because of the large number of communities on small platforms, and the large number of those small platforms, and how they're kind of interconnected into the web, you can't eradicate them.

Again, like the shards of glass, I can't go around picking them all up. There'll be others by the time I've finished, and I'll exhaust my resources. In fact, as we know, it's so hard to pick up one shard of glass. You could spend ages trying to do it because it's so tricky to pick up. You've wasted all your time.

So it will be the same thing. There'll be a level of endemic bad-actor AI presence. Um, and I always qualify it, as you said exactly right: bad actor AI is a big, big label. It can mean all sorts of things. We tried to define what that was: basically, AI being used in a purposely bad way. There'll be an endemic level.

And I think that's it. So it's more about keeping it under control, which gets us to the last part of the paper: how do you keep it under control? And the point of that last part of the paper was precisely that. It was a calculation, not based on an analogy, but I'll give you the analogy again. If I spend all my resources trying to look first for the largest shard, and then the second largest shard, and so on, we did this calculation of how long it would take, and, you know, the universe is over by the time we've picked them all up, basically.

Um, so it's more a case of, once you accept that you've got this endemic state, just making sure that the shards don't connect to each other to try and reconstruct some bigger piece. So it's more about breaking the links. You don't want to shut down communities. Nobody wants their community shut down, whether they're doing good or bad, and it's not right.

You know, it's kind of free speech and all this kind of thing, and we won't get into that, but there's a lot being said about that. This solution isn't about that. It's more about: okay, you've got a right to do what you do, maybe in your community. You haven't necessarily got a right to link into all sorts of other communities.

I mean, that's just a facility given by the platforms. In fact, it's not even given. Facebook don't even know the links that are coming into their communities from other platforms. They don't know; they have no control over it. They'd have to shut off their system completely to stop those links coming in.

Um, they can control links going out, but they can't control the ones coming in. So there needs to be kind of bilateral agreements between platforms saying, hey, I've got a lot of links coming to you and you've got links coming to me, so let's agree that we'll police these. And so it's more about policing the links, not policing what people are saying, not policing the

community, but policing where the links go, because the links are controlled by the platforms.
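As a toy illustration of the "police the links, not the communities" idea, here is a minimal sketch, not the paper's model: a small graph of communities tagged with a hypothetical home platform, where removing only the cross-platform links leaves every community and every within-platform link intact but stops the pieces from reassembling into one large connected cluster. The platform names and link structure are invented for the example.

```python
# Toy sketch of breaking cross-platform links while leaving communities intact.
# Requires the networkx library. All names here are hypothetical.

import networkx as nx

G = nx.Graph()
communities = {                      # community id -> hypothetical home platform
    "c1": "PlatformA", "c2": "PlatformA",
    "c3": "PlatformB", "c4": "PlatformB",
    "c5": "PlatformC", "c6": "PlatformC",
}
for node, platform in communities.items():
    G.add_node(node, platform=platform)

G.add_edges_from([
    ("c1", "c2"), ("c3", "c4"), ("c5", "c6"),   # links within a platform
    ("c2", "c3"), ("c4", "c5"),                 # links crossing platforms
])

def largest_cluster(graph):
    return max((len(c) for c in nx.connected_components(graph)), default=0)

print("Largest connected cluster before:", largest_cluster(G))   # 6

# The "bilateral agreement": each pair of platforms polices the links between
# them, i.e. the cross-platform edges are removed; no community is shut down.
cross_platform = [(u, v) for u, v in G.edges()
                  if G.nodes[u]["platform"] != G.nodes[v]["platform"]]
G.remove_edges_from(cross_platform)

print("Largest connected cluster after:", largest_cluster(G))    # 2
```

The design point is that the intervention acts only on edges the platforms actually control (the links), not on the nodes (the communities or their content).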

[01:08:03] Matt: Cold, clear-eyed analysis leads to what seems to be quite an unfortunate conclusion, which, um, I take as, you know, essentially: we're not going to be able to fully, or even maybe very substantially, control the proliferation of bad actor AI. And as you said, um, with the COVID example, at some point there has to be some acceptance that it does become endemic.

Um, but on the other side of that, I think it also raises the question: is the whole way we're thinking about controlling it, the methodology of, you know, containment, for example removing particular bad actors who are the worst, is that the right way, or is there something broader? Uh, and this is a very open question to you: in terms of the sort of very broad methodologies, the ways we might approach controlling this, what else is there to consider that hasn't been considered in this particular paper?

[01:09:06] Neil Johnson: In my view, the missing piece, the missing piece from policy, et cetera, is the complexity of it. It's the big picture of, um, how do I control a system with feedback, with internal feedbacks, with memory, with active objects?

Um, how do you control a kind of complex system? You're not going to control the individual pieces, one piece at a time. I mean, you don't stop water boiling by taking out one bubble or, you know, one particularly fast molecule. And so it's got to be this kind of, you know, the whole is more than the sum of the parts.

We all know that. Um, so it's more that kind of thinking, which of course brings us up to the hard problem in the science of complexity: what's control theory for a complex system? I have a kind of inkling of what that might be. I think it's a kind of soft control.

Instead of it being kind of go in and take out one piece, or take out five pieces, it's more of a continual kind of nudging of the system, working out where its trajectories might be heading and then nudging away from danger areas. So you haven't got perfect control, but you're just kind of nudging the system one way or the other.

Um, I think the closest thing to the situation we have in the online battlefield, et cetera, small pieces, delocalized, surging up, is an insurgency. It's the first time we've ever had it online. And so it's like a cyber insurgency, but on some big scale, whereby, as in many insurgencies, civilians are involved, people like us are involved, and we don't even really know what side we're on.

Um, so it's almost like that new thinking about insurgency again. You know, the U.S. and the U.K. and all their allies did spectacularly poorly, maybe through no fault of their own, in trying to do a head-down, top-down approach in recent insurgencies. That didn't work.

And it was actually David Kilcullen, I think he was an advisor to the strategists, he's Australian I think, who has an interesting kind of paper or book out on, um, you know, looking at the complexity of the system. It doesn't solve it, but it points out that that was the kind of missing piece.

I think that's the missing piece here. Now, what does that mean? Probably jobs for complexity scientists, you know, but there certainly should be one sitting at the end of that table in the congressional hearings, you know, with the map in the background, one sitting on the end there with all these heads of the different platforms.

That, that's the only way you're going to get the kind of system level dynamic taken into consideration.

[01:12:23] Matt: Yeah, yeah, it's actually great to see complexity science increasingly starting to have very, very real-world applications showing up all over the place. I think for a very long time, people who have been in the field or adjacent to the field have had this view that it's going to have a very substantial impact and application area in the real world.

But perhaps it's been a bit delayed in making its way into doing that. And I would love to get your thoughts as to why that is, because, um, even going back to some of the earlier work, one of my favorite books is Geoffrey West's book on scaling laws. It's, I don't know, ten years old, and it blew my mind, and I thought this would change how biology is done, this would change how city planning is done, all these things.

And it doesn't feel like it has done that yet, um, and I feel that complexity science in general somewhat falls into this category. Do you feel like we're at the cusp of a sort of more foundational change, complexity science getting out there and doing its thing in the real world?

[01:13:32] Neil Johnson: Yeah. I mean, it's a fantastic point. I see it as a kind of People problem. And what I mean is the following, um, you know, a lot of these and I've experienced it myself. I'm sure a lot of others have as well. Um, you go barging into a topic like conflicts or city design or something with a new idea. You're going to, you're challenging basically the status quo.

Yeah. And just like, in a sense, the bad actor communities invading into a space, you're trying to invade a space. The problem is that the thinking of the status quo on the academic and research and science side is entrenched in how we structure universities, grant funding, the names of degrees.

You have a degree in this and you have a degree in that, so don't talk to the person who has the degree in that. Um, so I think we shoot ourselves in the foot, the academic community. You know, a city isn't like a sandpile. Sandpiles are fantastic, but they've got to be taken forward with data, got to be connected to the real situation.

You know, my brother was a town planner, and I remember him telling me a story of having a complexity scientist turn up and tell him about, you know, kind of scaling in cities and all that. And apparently they just looked at each other and then headed out the door, and it was kind of a free afternoon, but they weren't going to do anything.

Why? It wasn't that the idea wasn't good. They didn't get it. They didn't understand it. It wasn't presented to them in a way that was immediately accessible. So, to answer your question, it's like a, yeah, it's like a huge debate in itself. But in some sense, complexity scientists like me have shot ourselves in the foot by kind of barging into different areas.

But what else are we going to do? We don't know how else to do this. So I think there's a lot, because of the people structure of universities, the academic world, grants, that's not conducive to complexity science expanding. However, the whole rise in data, from systems that you never thought would have data, is a way forward.

And if we can just make all of those concepts more concrete and more applied, without using this idea that everything's the same, you know, in the end you get the spherical cow view of the whole world. So without saying everything is completely unified, but instead saying, well, for example, those scaling laws, they're like a benchmark.

So instead of guessing that things could be anywhere in some space that you can't even imagine, there's a kind of benchmark. And things are near that benchmark. You know, planes are near a paper plane in some sense. So I think, I think of all those models from complexity science that we all know, all this stuff, they're each like a paper plane.

No one's going to go in it. No one's going to, you know, sit in first class or in economy. There isn't going to be any, you know, kind of captain to fly it, because it's not got the right structure, but it's got the right idea. So I think there's a lot of work to be done by complexity scientists, obviously the next generation, because people like me will soon be out the door, unfortunately, before a lot of this has been transferred over. But there really is work now.

And now there's a need, there's a societal need: climate change, online misinformation, the online-offline influence. These are big questions, and they clearly involve all the components of a complex system.

[01:17:41] Matt: Do you have a vision or a hope for, over the next, let's say, five to ten years, what it might look like, and maybe concretely what specific issues, we've talked about one of them today, and you've just mentioned climate change, we might be able to address and tackle with a mature sort of complexity science? Is anything beyond what we talked about today very front of mind as an issue that we might be able to materially improve or solve?

[01:18:10] Neil Johnson: I particularly think of the area of human conflict, because it was already known back in the years of Richardson, in the mid-1900s, that conflicts follow these kinds of scaling laws. And we ourselves have done quite a bit of work on this. It does look as though there are patterns of organizational behavior.

After all, conflict is done by humans against humans. So it's not like it depends on some meteor arriving or a change in temperature. Maybe the change in temperature affects how much we fight each other, but it's a human organizational issue. And I think those human organizational things are prime candidates, particularly ones where you've got one human organization of some type against another.

And those patterns, I think, are crucial for making sense of what we see as just kind of random numbers coming out of whatever conflict, and there'll always be conflicts, and for thinking about tipping points, interventions, and, you know, unfortunately, casualty risks.
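To give a flavor of the kind of scaling-law pattern being referred to, here is a small sketch, not a result from any particular conflict dataset: event sizes are drawn from a power law P(x) proportional to x to the minus alpha, and the exponent is then recovered from the synthetic data with the standard maximum-likelihood estimator. The value alpha = 2.5 is illustrative, in the rough range discussed in the conflict-scaling literature.

```python
# Sketch of a Richardson-style scaling law: heavy-tailed event sizes from a
# power law, with the exponent estimated back from the sample. The exponent
# and sample size are illustrative assumptions, not fitted to real data.

import math
import random

random.seed(0)

alpha_true = 2.5   # assumed power-law exponent (illustrative)
x_min = 1.0        # smallest event size
n_events = 50_000

# Inverse-transform sampling: x = x_min * (1 - u)**(-1 / (alpha - 1))
events = [x_min * (1.0 - random.random()) ** (-1.0 / (alpha_true - 1.0))
          for _ in range(n_events)]

# Maximum-likelihood (Hill) estimate of the exponent from the sampled sizes
alpha_hat = 1.0 + n_events / sum(math.log(x / x_min) for x in events)

print(f"true exponent: {alpha_true}, estimated from data: {alpha_hat:.3f}")
print(f"largest simulated event: {max(events):.0f} "
      f"(most events are small, a few are enormous)")
```

The heavy tail is the point: the exponent, rather than any single event, is the reproducible organizational pattern that tipping points, interventions, and risk estimates can be reasoned about.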

[01:19:23] Matt: Yeah. Yeah. That's, um, I'll actually, it just reminds me, I listened to an interview of yours with Sean Carroll, um, which was I think a couple of years ago now, but you, you were talking about this, this topic and I thought it was a very interesting interview. So I will, um, I'll link that in the, in the episode notes here.

Um, okay. Well, maybe sort of as we bring it towards a close. You know, we've talked about a lot of things, and a lot of them pertain to what individuals can do or should do. Um, I think this is relevant at many different levels, from individuals to institutions and so on. And I'd want to give you the chance for a very open, I guess, call to action: you know, someone who's listening, who's interested, who wants to follow up, where would you send them?

What should they, what would be a good next step for, for someone listening?

[01:20:12] Neil Johnson: Well, I'll start with saying: if there are any university, um, kind of not administrators, but, you know, chairs of departments, heads of schools and things, listening, the first thing is, allow people to get degrees across subjects. And I don't mean, oh, I've got a major in this and a minor in that. Allow them to do things that mix disciplines to attack a concrete real-world problem.

Knowing that they won't have, you know, reproduced what Einstein did if they were doing physics, and they won't have written some amazing treatise on whatever it is, on conflict. And yet they've combined elements in a way that nobody's done before.

In every institution I've been in, that's been impossible. You know, having somebody who wants to do, you know, kind of physics models of conflicts or something. Okay, yeah, they've got to take nuclear physics. Why? So imagine now they're taking a year of that, and so they can't take courses on conflicts, and they'll never get out, basically, and when they get in front of their thesis committee, they'll just be slammed because, quote unquote, it's not physics.

So, I think there has to be a huge rethinking there. And, hey, the good news is, that's what students want to do. Students, my perception of students is they actually want to do something that matters now. They don't want to do things just because their parents did things, or their predecessors did things.

They want to do things because it leads somewhere, and it kind of matters in some way. And those things matter. The trouble is, the answer to them is not within one discipline. So that, that would be my little message out there to anybody. And then my message to anyone who's interested in getting into this, go and push for that.

Push for: I'd like to do a project. I'd like to do something. I'd like to be involved with this, but I also want to be involved with that, you know, two different disciplines, and I want to look at this type of problem. And be prepared to be pushed down and all these kinds of things. But probably you'll be doing something new.

In fact, you'll know it's new because of the level of difficulty you'll have in getting it approved. Usually when things have been done before, they're easy to approve because there's a precedent. When you're the first one doing it, you will know, because there'll be all sorts of things in your way. And in my own experience, you don't have to go very far from what's known to hit something that isn't known.

I think, you know, you and I both know that, and everyone listening will know that: you go and visit a country and go down a street you've never been to, even if it's your own country or another city, you go down another street, and you've never seen that stuff before.

Um, so, for anyone listening who wants to get into these kinds of real-world problems that are going to need more than one tool, in other words more than one discipline: I think the future is yours in that sense, but be prepared to be pushed back by being told that you've got to go and take two years of nuclear physics.

[01:23:34] Matt: Yeah, fantastic, um, and, and very much, very much resonates with, with me. I mean, I, I, I seldom do this, but to sort of echo some advice from personal experience, you know, my, my own path started off very much in the exploitation phase, you know, going very deep into physics and maths and eventually ending up in a PhD in pure maths.

Um, and probably not enough exploration in the early stages. And the advice I always give to younger people now is, you know, in the earlier stages, the best optimization algorithms are ones that explore a lot and get a good map of the landscape before exploiting. And I hope that societally we start doing that a lot more, because otherwise we all end up optimizing in very small parts of this landscape and we don't end up on the best path for ourselves.

[01:24:18] Neil Johnson: We certainly don't need more people like me. We need people in the future who've done, maybe even stronger bridges between these, between disciplines.

[01:24:29] Matt: Yes. Yeah. I totally agree. Um, two, two questions that I like to ask, uh, in closing, um, first one is on book recommendations. So people that I speak to on this podcast tend to have read many books, often have written many books and have certainly been influenced by books. And I would love to ask you which books come to mind as ones that have most influenced you personally.

Okay.

[01:24:55] Neil Johnson: I actually don't have one particular book to point to, but I do have the recommendation that on YouTube now, there's an enormous number of these snippets of people saying things like I'm saying. And that is actually a great place to start even for some research. You know, you can put two and two together there between what two different people are saying.

I think search around on YouTube to see people like me mentioning the words complexity, complex systems, systems thinking, this kind of thing. Look at those, and that's actually better than any one book that I can think of, because I can't actually think of one book that does enough of this kind of bridging.

It all tends to kind of go down one path. So, um, not avoiding the question on the one book, but it's more the case of, I think this is such an active field now that people just haven't come up with those books. And so I do recommend people just exploring around on YouTube.

[01:26:02] Matt: I actually think that's a great recommendation. I mean, if you think about the function of books historically, I mean, as great as they are, a lot of it was serving the purpose of transferring information through space and time and so on, in absence of other mechanisms for doing so. And I genuinely do think the content that you can find on YouTube, podcast conversations, all of these, can perhaps even in cases be richer experiences.

I certainly probably get most of my information through those channels. And so I certainly echo that point. That's actually a great sidestepping of the question, and, well, very...

[01:26:36] Neil Johnson: no, and people should listen to your podcast completely in full for all the episodes, because I think absolutely you're hitting on questions. Well, look at it. You've asked me questions that nobody's ever asked me before. And so, um, how could I put that in a book? In fact, if I was writing a book on it, I probably would have avoided those questions, even if they'd have occurred to me because they were difficult to answer.

So I think you've done a great job. That, that's what people should do. And of course there are, and there are other people doing podcasts. Of course there are, you know, and there are some amazing things out there.

[01:27:08] Matt: Oh, well, sorry for putting you on the spot, but thank you for, uh, for being a great sport. Um, last, last question, a bit of a fun one. We've talked a lot about, um, Bad actor AI and development of powerful AI models. My question is, if we were to develop an AI superintelligence, and we had to pick one person, either past or present, to represent humanity to the superintelligence, who comes to mind for you?

Who should we pick?

[01:27:35] Neil Johnson: Neil Armstrong. And not because I've heard him speak very much. I mean, I heard him, but what he did, he did something. You know, nowadays, imagine what it would be like: there'd be a lead-up, there'd be a TV series, we'd hear about the family, we'd watch them at the end.

We never heard of a Neil Armstrong until, you know, he stepped on the moon, one small step, et cetera. Um, he did something. So I think that footage, which was formative for me, is why I'm still over here in the U.S., because it was just, oh my goodness, there's a country that does this. Um, I think, to me, the doing more than the saying: the fact that Neil Armstrong did that, the only person, sorry, the first one, to have done that, that for me is the person.

[01:28:34] Matt: That is a spectacular answer. And I think a really great place to, to wrap up. Um, Neil, thank you so much for, for joining me. It's been fantastic.

[01:28:41] Neil Johnson: Thank you so much, Matt. It's been a pleasure. Fantastic. Thank you. ​
