
Scientific Curation and Clickbait - The Best Interview You'll Watch Today: Cailin O'Connor

Full Ad-Free Video | Cailin O'Connor is a philosopher and evolutionary game theorist whose work explores journalistic practices and the spread of information.

Podcast Update

Hi Everyone, I want to share some good news - Paradigm now gets multiple new listeners every minute!

As a gesture of thanks to early supporters, I’m offering all current subscribers, as well as the next 500, free access to Paradigm, forever. Beyond that point, a paywall will apply to certain content.

So, if someone you know might enjoy Paradigm, please consider sharing your favourite episode and encourage them to subscribe for free.

Thank you! Now please enjoy today’s conversation.

Matt

p.s. My next guest is Oliver Burkeman, author of Four Thousand Weeks - one of my favourite books of all time. Subscribers can scroll down to access a Q&A form to submit questions for this conversation (and others).




Episode Notes

Cailin O’Connor is a philosopher of science and evolutionary game theorist. She’s a Professor in the Department of Logic and Philosophy of Science at UC Irvine.

We discuss:

  • news media and exaggerated and misleading headlines

  • scientific and journalistic practices, and the incentives behind them

  • how content curation impacts public perception and understanding of what is true

  • the role of algorithms in perpetuating misleading content on social media

  • practical suggestions for improving our information environment

  • personal reflections and recommendations on navigating misinformation

… and other topics

Watch on YouTube. Listen on Spotify, Apple, or Substack. Read the full transcript below. Follow me on LinkedIn for infrequent social commentary.

Subscribe for free to never miss an episode.


Episode links


Subscribers only

The below section is for subscribers. It includes:

  • Q&A form for my conversation with Oliver Burkeman

  • Timestamps and transcript for the current episode


Q&A form

Here’s a Q&A form: Q&A form - Oliver Burkeman

Timestamps

00:00 Misleading Titles & Headlines

02:59 Journalistic Practices

04:02 Curation in Journalism

11:41 Challenges in Fair Reporting

22:08 Hyperbole and Extreme Headlines

26:42 Confirmation Bias and Trust in Sources

32:09 Best Practices for Science Journalism

35:50 Need for Algorithm Regulation

38:03 Case for Social Media Regulation

43:01 Role of Incentives in Algorithm Design

44:45 Challenges of Algorithm Transparency

47:09 Improving Social Media Platforms

53:54 User Responsibility vs. Systemic Change

56:20 Future of Misinformation

01:03:17 Navigating Information in the Digital Age

01:09:37 Books and Personal Influences


Thank you for reading Paradigm. If you enjoyed this post, please share it.



Transcript

This transcript is AI-generated and may contain errors. It will be corrected and annotated with links and citations over time.

[00:00:00] Matt: I'm here with Cailin O'Connor. Cailin, thank you for joining me.

[00:00:02] Cailin: Yeah, it's my pleasure to be here.

[00:00:04] Matt: Cailin, we're going to be talking about our information ecosystem and scientific and journalistic practices. Um, but I thought it'd be fun to start with a game. So before the discussion, I went onto the homepage of a well known news website called Science News Daily, and I copied a few of the headlines.

Um, and I should emphasize, I didn't go digging, uh, for anything specific. It was literally from the homepage. I just took a couple headlines. I want to read them out, and I would love for you, using your knowledge about curation and journalistic practices, to take a guess as to, you know, what the article is about and just how significant the findings in that article are.

So the first one is titled, Generative AI could break the internet, researchers find. What do you reckon that article might be about?

[00:00:51] Cailin: That, I mean, there's a lot of ways that generative AI could break the internet. That one, I, I have really very little idea. I mean, maybe something about how AI could create too much content or content that's too misleading or, yeah, I don't know. What, what is it?

[00:01:12] Matt: Yeah, so the actual article, I mean, it relates to that. It is about feedback loops and AI training on AI-generated content sort of going off the rails. And so, um, from my perspective, breaking the internet is, is quite a far cry from what the paper is actually about.

[00:01:31] Cailin: Yeah, I mean, it sounds like an example of how we often see exaggeration or click baity titles or things being, um, made to sound sort of bigger than they are when science journalists write up studies.

[00:01:46] Matt: Exactly. Yeah, exactly. And I mean, honestly, I think there were, there were probably 20 articles on that page. I've got five titles here. I won't read, I won't read all of them. Um, Oh, maybe I'll just read all of them. We can stop the game. But scientists lay out a revolutionary method to warm Mars. And this is basically about polluting the Martian atmosphere to make it warmer.

[00:02:08] Cailin: Oh, okay, maybe we could pump our carbon over to Mars.

[00:02:11] Matt: Yeah, exactly. That's, that's what it is. Um, uh, there is one here that says, if you snore, you could be three times more likely to die of coronavirus.

[00:02:19] Cailin: Oh, yeah, that, I mean, I want to say that the pandemic was like one of the best times ever for really overly hyped and misleading clickbait titles and this brings me back to that.

[00:02:34] Matt: Yeah. Yeah, exactly. Um, and a title that was not on that page, but that you would be familiar with, was entitled "The best paper you'll read today", which you'll know all about.

[00:02:44] Cailin: Because I, I helped write that paper.

[00:02:49] Matt: It was in fact the best paper I read that day, so, um, it was actually not misleading in the end.

[00:02:54] Cailin: Did you read that paper?

[00:02:55] Matt: I read that paper, I did.

[00:02:57] Cailin: That's what I suspected.

[00:02:59] Matt: Um, but it brings us very nicely to, to this topic of, um, journalistic practices and curation because I mean, Science News Daily is regarded by many as a very reliable source.

You know, it's a, it's a source that reports on university research findings, and people do use this, uh, this source for their science news, their science information. Um, and so let's, let's pull up and discuss that concept of curation. Um, I think it's, it's for me not surprising that popular news websites can have exaggerated titles because, um, you know, everyone is competing for attention and, and these things happen.

Um, but I think these practices probably exist in a broader way in our information ecosystem than many people realize, and, um, probably many of them are even implicit, in, in sort of hidden ways, and they could lead to very real consequences. And I know this is a topic that you've thought about a lot and explored a lot.

Where else do you see this sort of phenomenon, uh, happening and just how big of a problem is it?

[00:04:02] Cailin: Yeah, so some co-authors and I got interested in thinking about curation writ very broadly because, um, I mean, we were working on things related to misinformation. There's all these attempts to control or regulate or prevent the harms of misinformation online, and that's really good. But we thought then, and I still think, that curation, the way we take accurate, real data, real scientific findings, real events that happen, and then just shape and select which ones people see, can be just as important to understand, just as important to think about and regulate, if we want people to have accurate data. Um, that was how we got into that topic, this sort of lack of attention to curation.

I mean, one of the reasons it matters so much is that there's tons and tons of things that are happening in the world all the time, every day. There's no way we can pay attention to all of them. We're only going to find out about some small portion of these. And that's going to really shape our feelings about how the world works and what's out there.

Right? Um, it's going to shape our beliefs about, say, how dangerous COVID is, or how much climate change is happening, or are windmills killing birds? Uh, it matters as much to our beliefs, or more, I think, as whether we get accurate or inaccurate information.

[00:05:38] Matt: Is it, um, do you have a sense as to how big of an issue this is in practice? I mean, no doubt it happens, and we've just talked about some examples on a particular news website. But if we had to say, on, you know, the scale of problems that are sort of impacting society today, how significant is this?

[00:05:58] Cailin: Well, curation is happening literally all the time. You know, as I pointed out, there's so much stuff happening that we couldn't possibly know about all of it. There's so many scientific articles coming out. We can't know about all of them. So in some way, it's just a massive scale. You know, when science journalists pick out just some studies to report on, when social media algorithms pick just some things to promote, when we pick just some things to tell to other people, when people write textbooks or teachers prepare courses and they just share some information but not other information: curation is happening in all of those places.

Of course, it's not always going to be a problem. There's lots of things we don't actually need to know about. But part of what we've done in our research is pointed to a bunch of places where curation really does cause problems. And in the paper that you were mentioning, we used models to show how even really good learners, people who are ideally rational learners, could come to develop very confused views of how the world works just on the basis of curation.

[00:07:04] Matt: Well, let's, let's dig into the paper then. I found it very interesting, and, you know, there in that paper you presented three sort of broad categories of curation, hyperbole, extremity bias, and fair reporting, and analyzed them in turn. Um, could you run me through those different types of, of practices and, and sort of what they mean and where we might see them?

[00:07:26] Cailin: Yeah. And I should make clear, you know, this paper was about science journalism in particular. So we were trying to think specifically, when it comes to science journalism, what are the ways that science journalists tend to curate, out of all the studies out there, what people will see? Um, so there's lots of other things that happen in curation, but of these ones, one thing, um, that science journalists tend to do is try to be fair or balanced, and all journalists tend to do this.

This is part of the codes of ethics and norms for good journalism. And what that means is that when you're talking about some controversial issue, you try to present the two sides either with equal weight or in a fair way. In practice, this often means just giving equal air time or equal print space to two sides of a controversial issue.

Now, generally, that's a really good norm, to try to present things in a fair way, because it helps you avoid things like highly partisan news or highly partisan opinion. Um, but people have pointed out that when it comes to science reporting, it can be a problem, because you end up sometimes with false balance.

And we saw this a lot traditionally with climate change reporting, where there's a controversy about is climate change happening or not. It's not really a scientific controversy, it's a social controversy, but to report in a fair way, people would try to present both sides of that issue, giving them equal weight, whereas in fact one of them had much, much more scientific evidence behind it. So that fair reporting would be kind of falsely propping up the side that's wrong, if that makes sense. So that's fairness. The other two things we talk about were less driven by journalistic norms than by the incentives that journalists face.

So literally, in order to, to be a journalist, to survive, to have a paper, um, to not be fired, you have to get attention as a journalist. If people aren't putting eyes on your column, you know, what, what's your function, right? Which means that there are these huge incentives to create things that are interesting or novel or surprising or noteworthy or newsworthy and that grab people's attention.

That's why we see those clickbaity headlines. But that often shapes how journalists curate what they report on. So one thing that we talk about is what we call extremity bias, which means that when you're looking at stuff happening in the world, scientific events, you see reporting much more on things that are on the extremes, meaning they somehow surprise people.

So for example, I've just been looking at a case of, like, are male and female brains inherently, importantly different? Almost every study on male and female brains that measures, you know, a lot of things, say all the connections that they can measure with MRI in the brain, is going to find some statistical differences and then also a lot of overlap.

Um, so the truth is in this middle space, right? But when you see reporting on it, you see people reporting on the studies that are like, we found that men's brains are just really different from women's, or else the ones being like, there is no difference between male and female brains. So these extremes get reported much more than the kind of stable middle core, if that makes sense.

Hyperbole or exaggeration is also something we see driven by those incentives for attention, um, to take things that are really happening and just make them sound a little more extreme or exciting. Like we're going to warm Mars up or we're going to break the Internet with AI.

[00:11:13] Matt: Yeah, that, that's fascinating. I mean, the, the fair reporting example is really interesting because, uh, you know, before thinking about this very deeply, that, that one would strike many people, I think, as the obvious best thing to do, you know. I think maybe even the language there of fair reporting is a bit misleading itself, because fairness means something very particular in this case.

Before jumping into the results of the paper, I mean, maybe to linger on that point a little bit. Um, you know, fair reporting: imagine yourself as a journalist choosing what to report on. As a journalist or as a scientist, you also don't have access to non-curated information, in, in some sense. I mean, so for example, reporting on climate change, there is some sort of abstract notion of the truth of the matter and, you know, what actual evidence exists out there.

Um, but really what you are reporting on as a journalist is research that's been done, um, and, uh, other articles that have been posted and so on. And those things are already, in some sense, curated in various ways. And so how does a, how does a journalist, if they wanted to even do something like fair reporting, how would they even think about approaching, you know, taking an accurate sample of the sort of, like, underlying truth space versus this curated world that is in front of them?

[00:12:36] Cailin: Yeah, I mean, that is a massively difficult problem. It's one that we, in this work, kind of ignore, which is, as you're pointing out, the stuff that gets produced in science is itself already shaped by all sorts of things. The values of scientists and the values of funding agencies, you know, what does the NSF want to fund or whoever, um, and just randomness, you know, what do people happen to work on?

What study did they happen to produce? You know, all of that shapes the science that already exists. I can't even begin to say how hard it is to somehow think we could create science that's sort of perfectly sampling from the things in the world. I would say we've got to just put that aside. Starting from the point of view of, like, a science journalist:

What do you have? You have some set of scientific studies out there, right? Um, that's what you have to draw on, you know. In some ways, those are already going to be distorted, but that's your best starting point in thinking about fair reporting. Usually, what you want to do is not to try to think of something as controversial, where there's a yes and a no side, and then you're going to give those equal weight, but you want to ask, you know, what sorts of evidence exist, in what distributions, and what would be the most accurate way to report that.

So, for example, in the pandemic, there was a lot of reporting on how dangerous is COVID. What's the infection fatality rate? And sometimes you'd see, you know, an article being like, a new study found that it was way lower than we thought.

Or, a new study found it was way higher than we thought. What you would want to do there to be fair or balanced is look at all the studies and then give some sort of overview of, like, across all these studies, what's the distribution? How trustworthy are those different studies? Were they well done or poorly done?

What was the average? Um, that would be the best way to be fair to the data that exists.

[00:14:34] Matt: Yeah. Um, not, not to get too deeply into this sort of, um, replication crisis rabbit hole and those sorts of things. But, you know, even in the cases of meta-analyses that do look at, let's say, thousands of studies that have been done, maybe many small, low-powered studies, you know, and drawing inferences, there is also that issue of the, the boring studies that didn't have any interesting findings never making it to publication. And, and so the whole meta-analysis itself is, you know, terribly skewed, um, towards the extremes. Uh, how does a, how does a science communicator think about, um, uh, sort of not being caught out by, by that issue?

[00:15:13] Cailin: Yeah, so I mean, what you're talking about, people sometimes call the file drawer effect, where, um, if you're studying some topic, say you get a finding that's not very interesting, or it's not statistically significant, you know, you don't find any association between two things, um, did this cause cancer?

Well, sometimes you just don't get any sort of clear answer out of your data. You find no link. Um, and then a lot of times those things don't get published. It's that journals like things that have, like, positive associations. So, when people are doing meta-analyses, there are techniques they use sometimes to try to recover the missing data.

You know, one thing people will do is look at a whole distribution of findings, and they'll say, okay, if we see these ones printed that were statistically significant in this direction, and these that were statistically significant in that direction, we can infer that there was something in between that just never got printed, and we can try to, you know, get some signal out of the noise there.

Um, that is really complicated to do. I don't think it's something we should expect just your everyday science journalist to be able to do, to try to infer from the data that exists, plus this file drawer effect, what data should be there. Um, of course, when you, when you can look at a meta-analysis or review paper where someone's already done that, that is usually a pretty good source of accurate information to share or report on.
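To make the file drawer effect concrete, here is a small simulation sketch. Everything in it is invented for illustration (the effect size, study size, and significance cutoff are arbitrary choices, not figures from the conversation or from the paper): many small, low-powered studies of a modest true effect are run, only the statistically significant ones get "published", and the naive average over the published studies lands well above the truth.

```python
# Illustrative file-drawer simulation (hypothetical numbers, not from the episode).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

TRUE_EFFECT = 0.1    # small true mean difference, in standardized units
N_PER_STUDY = 30     # small, low-powered studies
N_STUDIES = 500

published, all_estimates = [], []
for _ in range(N_STUDIES):
    treatment = rng.normal(TRUE_EFFECT, 1.0, N_PER_STUDY)
    control = rng.normal(0.0, 1.0, N_PER_STUDY)
    _, p = stats.ttest_ind(treatment, control)
    estimate = treatment.mean() - control.mean()
    all_estimates.append(estimate)
    if p < 0.05:                     # only "interesting" results reach a journal
        published.append(estimate)

print(f"true effect:                 {TRUE_EFFECT:.2f}")
print(f"mean over ALL studies:       {np.mean(all_estimates):.2f}")
print(f"mean over PUBLISHED studies: {np.mean(published):.2f} "
      f"({len(published)}/{N_STUDIES} published)")
```

A meta-analysis computed only over the published column inherits that inflation, which is why the recovery techniques Cailin mentions try to infer the shape of the missing, unpublished middle.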

[00:16:51] Matt: Well, let's, uh, let's dig into the, to the paper and the, um, the model and the inferences that we can draw from it. Um, so as we said, the, the paper is titled "The best paper you'll read today". Um, and we looked at those three categories of curation, um, I don't know if curation is, is the right word for all of them, but hyperbole, extremity bias and fair reporting.

Um, could you give me an overview of the, um, the model? And, um, and then the inferences that we can, can draw from it, from the model?

[00:17:24] Cailin: Yeah, so what we wanted to do with this paper was set up a situation where otherwise we would expect learning to go well. So we're modeling science journalists, right? We're modeling some series of information they're drawing from. And in our model, we're just assuming, like, the information they get is accurate. They're getting a good distribution of data.

We're just ignoring the possibility, as we were talking about, that science is already distorting data. Um, so we say they start with access to great data. Then they're going to report to people who are really good learners, people who, if they get good reporting, are going to be able to learn how the world works.

And then we just add to that the possibility of curation. Now just make this journalist fair, in this particular both-sides, equal-weight type of way, or just make this journalist an extreme reporter, they only report the extremes of events that happen, or make them a little bit hyperbolic, they report a real event but they just make it a little more extreme or novel or surprising.

Um, so the strategy was to say, set up an otherwise perfect scenario, then just add this possibility of curation and see how it messes up learning. And then, of course, indeed it does. It does mess up learning.
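As a rough illustration of the kind of setup being described (a sketch under my own assumptions, not the paper's actual model; the distributions, the fair line, and the learner are all simplified stand-ins): true events are draws from a normal distribution, a "fair" journalist reports equal numbers of events from either side of a socially agreed fair line, and an otherwise ideal learner simply averages what gets reported.

```python
# Toy sketch of fair-reporting curation (assumptions and numbers are mine).
import numpy as np

rng = np.random.default_rng(1)

TRUE_MEAN = 2.0      # e.g. degrees of expected warming
FAIR_LINE = 0.0      # the socially set "is it happening at all?" line
N_EVENTS = 10_000

events = rng.normal(TRUE_MEAN, 1.5, N_EVENTS)   # the actual world

def fair_curator(events, fair_line, n_reports=1_000):
    """Report an equal number of events from each side of the fair line."""
    above = events[events > fair_line]
    below = events[events <= fair_line]
    k = min(n_reports // 2, len(above), len(below))
    return np.concatenate([rng.choice(above, k, replace=False),
                           rng.choice(below, k, replace=False)])

reports = fair_curator(events, FAIR_LINE)

print(f"true mean of events:       {events.mean():.2f}")
print(f"learner's estimate (fair): {reports.mean():.2f}")  # pulled toward the fair line
```

With these made-up numbers the learner's estimate lands well below the true mean, which is the pull toward the fair line discussed next.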

[00:18:41] Matt: Yeah. Yeah, fascinating. Well, let's, let's, let's dig into some of the implications. So let's take, for example, um, the, uh, fair reporting. So again, a good example here is, is, uh, climate change. Um, you know, media outlets, science reporters who, let's say, only report, um, evidence that, you know, there is human-made climate change are often told that, um, they are not giving the other side, uh, any sort of airtime.

Um, but of course, you know, almost all climate scientists fall on the side of thinking that there is anthropogenic climate change. And so, you know, an equal distribution of airtime would, would mean very little for, for the side that says there isn't. So suppose that journalist then makes, makes that adjustment and does give somewhat of, you know, equal airtime, or something like it, to both sides.

Um, in, in this model, what is the, what is the implication? What is the result of that type of curation?

[00:19:38] Cailin: Yeah. So what we find is this can end up with people just having a distorted picture of the average sort of thing happening in the world. So there's something that's like a little technical I have to explain about the model, which is that we're saying these journalists, they're looking at some series of events.

Like, we could imagine those events are this temperature in Melbourne and that temperature in Madrid and that temperature wherever, you know, these different temperatures. They're going to report these, and they're going to try to do it in some way, right? Like fair, right? Um, we assume in the model that there's some social idea about what counts as fair.

And when we look at climate change, the place people drew that line, what counts as fair, was the line, like, is it happening or is it not? Basically, the line was like zero degrees of average warming, and it would be fair to report evidence on one side of that and evidence on the other side of that and give them equal weight.

So we assume there's, like, a fair line set up by society. Then we say, okay, the journalists are going to report stuff from either side of that fair line equally. And basically what it does is it makes people think that the world is kind of closer to what this fair line is than it really is. Right? So, in the climate change case, that would be people coming away thinking, like, we're closer to expecting zero degrees of warming than we really are, when, you know, the actual expectation should be whatever, two degrees of warming by this date.

And so they're kind of, they're kind of pulled back into the social expectation that already exists, in that case.

[00:21:13] Matt: Yeah, and I mean, I dunno if you've looked at, at sort of actual real-world practical examples of this. I mean, this is what the, what the, the model says, and it points to a very clear picture. And we've talked about climate change, but are there particular examples of this that you've seen where we can actually say, well, look, this does pan out in the real world, this has happened? Um, you know, we can point to it and, and, and we're seeing this actually happening, no?

[00:21:37] Cailin: Well, I mean, I think climate change really is one of those cases. There was a reason everyone got upset about fair reporting in climate change, for that very reason. Journalists have become much more aware of this, and much better at not doing it, over the last decade or so, um, in part because of, like, the 2016 election.

And so I don't know that I can pull out of my hat, like, another great example.

[00:22:06] Matt: That, that, that, that's okay. Um, let's, let's look at, um, the, um, hyperbole example.

Then, because again, I think this is, I think this is one that is very obviously present everywhere. Um, basically everything that, that you, you read online is going to, um, be impacted in some way by a bit of, um, hyperbole or exaggeration, anything that is trying to attract reader attention. So we'll have, we'll have somewhat of this.

There is even an argument from a scientific perspective that, you know, if you can imagine good science reporting competing against the universe of all other information out there, um, maybe it's even a good thing to be a little bit hyperbolic if it attracts, uh, viewership and makes it entertaining, because, you know, it is providing better education than what else is out there.

Um, but at the same time, you know, there are negative consequences to exaggerating things. Um, what did the, what did the model find on the topic of hyperbole?

[00:23:07] Cailin: Yeah. So what we found, and of course we modeled hyperbole in this really particular way, where we said, okay, once again, there are social expectations for what's going to happen in the world, and to make something hyperbolic is to take some real thing that happened and kind of push it away from those social expectations.

So maybe we expect that the fire season in California this year is so bad, and then when you report it, you're like, it was even worse than that. Right? So, when reporters are doing that, what we find is that the people getting that information end up thinking the world is just more extreme than it actually is, which is sort of what you'd expect, right?

So people think, like, wow, we're having a lot of really terrible fire seasons, and then a lot that just, like, don't even show up as a blip on my radar. Um, or, you know, that extreme weather events are just much more common than you would have thought they were could be another thing you could come away with, or that instances of extreme violence are much more common than you would have thought.

So basically, people end up sort of seeing the world as distorted out towards these events that in reality tend to be pretty rare.
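Again as a toy sketch with invented numbers, not the paper's model: if every reported event is nudged a fixed amount away from the social expectation, a reader who takes the reports at face value will overestimate how common genuinely extreme events are.

```python
# Toy sketch of hyperbolic reporting (parameters are hypothetical).
import numpy as np

rng = np.random.default_rng(2)

EXPECTATION = 0.0        # what society already expects
EXAGGERATION = 0.75      # how far each report is pushed away from expectation
events = rng.normal(EXPECTATION, 1.0, 10_000)   # the actual world

# push each event further from the expectation, in whichever direction it lies
reports = events + EXAGGERATION * np.sign(events - EXPECTATION)

threshold = 2.0          # call anything more than 2 units from expectation "extreme"
print(f"actual share of extreme events:    {np.mean(np.abs(events) > threshold):.1%}")
print(f"share the reader infers from news: {np.mean(np.abs(reports) > threshold):.1%}")
```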

[00:24:22] Matt: Did, did you look in the model, um, at, you know, again, you have this, this imagined journalistic layer that has real access to the actual underlying distribution of things and reports on it to learners who learn perfectly, and, because of the curation, they develop these distorted views. Did you model out the effects at all of, well, actually, you know, those, those consumers are also journalists, um, if you think about it? The journalists are also consuming that, and, and so, you know, you can iterate that forward, and, um, you can imagine just layers and layers of reporting, which is actually what happens. What happens after sort of many iterations of this type of, of curation? And, um, you know, there is a question as to just how significant each one of these things is, and which ones are most sensitive and which ones aren't.

And you can imagine if we were to tune our journalistic practices, being aware of all of these different things, how would we choose to tune them and where should we be most worried? Um, did you, did you look at, um, at that, uh, sort of iterative curation at all?

[00:25:34] Cailin: So could you clarify for me what's iterating there? The journalists learn something and then...

[00:25:42] Matt: So imagine the

[00:25:43] Cailin: the loop?

[00:25:45] Matt: In the initial phase of the model, you imagine the journalist has actual, um, access to the true underlying distribution of things, and they curate and report on that. This defines a new distribution, which is reported on again and again. Um, you know, you can imagine it's news media picking up, uh, what other journalists have said, and, and so on and so on.

And eventually, down the track, most of the reported content would actually have gone through multiple layers of this curation.

[00:26:16] Cailin: We did not model that, no, and I'm not sure what would happen. I mean, you know, our strategy was like, what we want to do is really isolate just this one little thing that you would think is not so bad, like, not so bad to report fairly, or not so bad to report the extremes, who cares about anything but the extremes in stuff that happens?

So we didn't look at that. One thing we did look at was slightly imperfect consumers, which, what we modeled were people who engage in confirmation bias, which is a type of reasoning bias that we all engage in where we're more likely to trust or believe information that fits with what we already believe. And in that version of the model, we made it so that, um, the people learning from these science reports, if they heard something that already made sense with their picture of the world, they'd be more likely to change their beliefs on the basis of that.

More likely to learn from that. And they'd be less likely to learn from things that didn't fit with their picture of the world, and so in that way they tend to double down on what they already believe. And that was interesting, because in that sort of scenario, when you have people reporting extreme things or, um, engaging in hyperbole, you end up where everyone can kind of double down on what they already believed anyway.

Like, you already think climate change is happening? Well, you see a lot of reporting saying that it is. You already think it's not happening? You see lots of reporting saying, saying it isn't. You think that vaccines are safe? You see a lot of reporting on that. You think vaccines are dangerous? You see these, like, stories about any worry about vaccine harms.

Right? Um, and so that was one variation we looked at, with these kind of, like, a little more realistic dynamics.
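A rough sketch of that variation, with my own parameterization rather than the paper's: only extreme events get reported, and each reader mostly updates on reports that already sit close to what they believe, so readers who start on opposite sides both harden roughly where they began.

```python
# Toy sketch: extremity-biased reporting plus confirmation-biased readers
# (all thresholds and rates are invented for illustration).
import numpy as np

rng = np.random.default_rng(3)

TRUE_MEAN = 1.0
events = rng.normal(TRUE_MEAN, 1.0, 5_000)
reports = events[np.abs(events - TRUE_MEAN) > 1.5]   # only the extremes get reported

def biased_learner(prior, reports, tolerance=1.0, rate=0.05):
    """Move toward a report only when it roughly agrees with current belief."""
    belief = prior
    for r in reports:
        if abs(r - belief) < tolerance:      # "fits my picture of the world"
            belief += rate * (r - belief)
    return belief

print(f"true mean:                     {TRUE_MEAN:.2f}")
print(f"skeptic (prior -1) ends up at: {biased_learner(-1.0, reports):.2f}")
print(f"alarmist (prior 3) ends up at: {biased_learner(3.0, reports):.2f}")
```

Both simulated readers finish far from the true mean and close to where they started, which is the doubling-down pattern described above.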

[00:28:11] Matt: Yeah. Yeah, and I think that's a, that's a very relevant one and a, and a, um, one that we do see in the real world a lot. I mean, it's not exactly confirmation bias, but it's related. You know, there is also the question of whether one trusts, or the degree to which one trusts, various sources, um, even to the extent where, if one distrusts a particular source and that source reports accurate information, um, one could actually actively then distrust that.

So as, as an example, um, if you fall on one side of a political line, um, let's say you're, in the States, sort of a die-hard Democrat, and Fox News reports on something which might be true. Um, it's sort of like a disconfirmation bias. You have this inherent assumption that this is not a trustworthy source.

And that can actually push you further away from the truth. So I definitely think that's one you see playing out a lot in the real world.

[00:29:11] Cailin: There's a couple of things going on in what you just said. So I mean, confirmation bias and, like, what people call the backfire effect, um, they're about the way people respond to evidence that they get or information that they get. So it's, is this information stuff I tend to believe or disbelieve?

And there's, it's kind of controversial, but some people do find that people actually, like, if they get, you know, stuff that goes against their beliefs, will sometimes, like, dig in their heels and go further in the other direction on these polarized topics, like climate change or vaccines. What you were talking about, though, is also something different, like trust in sources, right?

Uh, how I respond to a person who's sharing something with me based on, like, do I think they're part of my in-group or out-group? Or do I trust them? Or do I think they're an expert or not? Um, and one thing I think is really interesting that you pointed out is this kind of phenomenon where a really mistrusted source, by saying something, can make you almost doubt that thing more.

[00:30:15] Matt: Yes. Yeah, exactly. I mean, this is going really technical, but, um, you know, there, there, I think there is this belief that people have where, as long as everybody has access to the same information, then our beliefs and views of the world will converge. And it kind of, it feels, it feels intuitive. But there is a very well known probability textbook by E. T. Jaynes, uh, where he gives an example, where he says, well, so imagine two people with different priors, uh, where one distrusts the information source and one trusts it. And that source presents them the same information: what happens to their worldviews? And, um, they, they actually diverge, because they, they interpret that information differently.

Which is really interesting. I, I don't know if it has practical implications for journalistic curation practices, um, but it means even presenting, uh, information in a, in a particular way could lead to different people receiving it, um, differently and, and diverging in their worldviews on that basis, which is quite counterintuitive.
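A toy version of that Jaynes-style point (the numbers are mine, chosen only to make the arithmetic visible): two Bayesians hear the same source assert a claim A, but because they disagree about how reliable the source is, the identical report moves their beliefs in opposite directions.

```python
# Toy divergence-under-shared-evidence example (hypothetical likelihoods).
def update(prior_A, p_report_if_true, p_report_if_false):
    """Posterior P(A | source asserts A), by Bayes' rule."""
    numerator = p_report_if_true * prior_A
    return numerator / (numerator + p_report_if_false * (1 - prior_A))

prior = 0.5  # both start agnostic about the claim A itself

# Truster: thinks the source mostly asserts A only when A is true.
truster = update(prior, p_report_if_true=0.9, p_report_if_false=0.2)

# Distruster: thinks the source is more likely to assert A when A is false,
# i.e. treats the source as actively misleading.
distruster = update(prior, p_report_if_true=0.3, p_report_if_false=0.7)

print(f"truster:    P(A) {prior:.2f} -> {truster:.2f}")    # belief rises
print(f"distruster: P(A) {prior:.2f} -> {distruster:.2f}") # belief falls
```

The divergence comes entirely from the likelihoods each person assigns to the source's behaviour, not from the content of the report itself.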

[00:31:18] Cailin: Yeah, I think that that's right, and that that can happen. It can even happen for other reasons, just having more to do with in-groupism and out-groupism and these trust dynamics between people. An elite just shared that with me: you know, should I ever believe anything an elite says, or whatever, or, um, someone on the opposite political team? Uh, to some degree, you know, I think it just can't be on science journalists to try to game out what nutty things people are going to do.

You know, I, I think that at the end of the day, if you're a journalist or someone who's preparing information for the public, like, you should do it in the most responsible way you can, and then that's all you can do.

[00:32:03] Matt: Yeah, I agree, we can't always put it on the shoulders of the science communicators. Um, I guess it does bring up the question, though: you know, as a science communicator, what is the best thing to do? Is there sort of, like, a theoretically, um, correct way to, to do, do curation? Um, and, and again, there are other considerations, um, versus just accurate information, um, sort of dissemination, because, um, again, like, a journalist does want their, um, you know, if they think, they think they've got important science to communicate,

they do want people to read it. So it should be interesting, it should be written well. Um, so how, how do you think about that, that problem? Um, and, and whether there is sort of, like, are there theoretically correct principles?

[00:32:51] Cailin: Oh yeah. I, I mean, I tend to think, like, if, if you're doing science journalism, hopefully you're writing on a topic where you can present a survey of the data in that area in a way that's...

So I think there should be just much more focus on reporting on some kind of consensus or an emerging understanding rather than reporting on, say, individual studies or particular events, because when you do the first thing, usually you can give an understanding that's more nuanced, that's going to be more accurate, that's already emerged out of a scientific literature, a whole process of scientists coming to understand the world, you know. Um, they've already done tons of tests on climate change or on vaccines or whatever it is, and then you can report on the good understanding that's come out.

The thing I think people generally shouldn't be doing is selecting, like, one study without the proper context, without an understanding of what are the other studies in that literature, and then just reporting on that. So that kind of practice during the COVID-19 pandemic was extremely harmful and misleading.

There was an example where, you know, this California research team found an extremely low infection fatality rate, and just their one study got reported all over the place, because it was very exciting, because it wasn't what people expected, it was at the extremes of what people would expect. There are lots of ways to present things in an interesting way where you're still covering, like, a whole area. Like, here's the things scientists as a group have come to figure out about the way trees grow. Maybe that, that's interesting to me. That's probably not a good example of things that are interesting to other people.

But I, I think you see what I'm saying.

[00:34:55] Matt: Yeah, yeah, no, I, I, I do. I do. Um, well, let's maybe pull up from the case of a particular science journalist, let's say, and, and to institutions, um, because I, I feel like the, the rules may play out a little bit differently at that level. Um, and one, one sort of analogy that comes to mind here: if I were to take something like, um, the medical industry or the pharmaceutical industry, you know, to, to get a new drug or a new medical device to market or something.

Um, a lot needs to happen. Um, it needs, often there is a theoretical step. You have to have some theoretical basis for believing that this thing will work and is safe. And there would be often, um, some modeling work involved, similar to what we've just talked about from the curation basis. And then there might be actual real world clinical trials and so on.

And eventually, if there's a strong enough belief that this thing is good and safe, you might get the new drug or the new device, um, to market. Um, when it comes to things like, um, curation algorithms, you know, things that would, for example, um, curate one's social media feed, you know, there is no such process that works as well, to, to my knowledge. Um, you know, these things are just, like, let out into the world, um, and I'm not even sure the extent to which these companies really, really do think about the impacts in the same way as you've done in your, in your work.

Do you think that there is space for something like that? You know, some sort of, again, if you treat the, treat the curation algorithm as a, as a new medical device, just as an example, you know, um, is there space, is there a need, for the, the kind of work that you've done, looking at how the different algorithms could lead to different, um, sort of impacts in the world before these things are released out there?

[00:36:49] Cailin: I mean, we would hope that the people running social media platforms, before they make a new algorithm, before they make whatever other changes to their platform, would think really hard first about: how is this going to impact users? How is it going to impact the spread of information? Is it going to create a problem for disinformation, whatever?

Um, I think sometimes they do, but as you're pointing out, it's not like we have any regulatory body that's saying, you have to do that. You have to be careful how your algorithm works. You have to figure out what impacts it's going to have before you use it. Um, if we want to keep up the medical analogy, that used to be true of medicine too.

There wasn't always an FDA. Um, and before we had government regulation of drugs, there were a lot of people creating all sorts of wild things that they were giving to patients or selling as cures for diseases that, sometimes they didn't work, and sometimes they would actually hurt you, or sometimes they had mercury in them, or, or, um, cocaine or whatever.

Uh, so we might think that what we're going to want for social media is to have something like the FDA, but where what it does is work with platforms to say, whatever you're rolling out or, um, whatever new challenges you're facing, we're going to work with you so that you comply with certain standards to protect users from misinformation or the spread of bad information or bad curation or whatever it is.

[00:38:32] Matt: Yeah, I guess for people to take the idea of, um, I mean, for people to not be so resistant to the idea of regulation in this way, you have to believe that the consequences of not regulating are significant enough. And again, to take the medical analogy even further, you know, there are cases, let's say the case of thalidomide, for example. Um, this is a case where, um, for, for people who don't know, um, you know, a drug used for morning sickness, and many, many years later it transpired that it, um, it also resulted in, um, birth defects, and children born with, you know, malformed limbs and things like that.

Uh, and so it was, it was very delayed, you know, the drug was in market, used in the real world, and then the consequences emerged down the track, and, and then after that the regulation came in. Uh, but there are other cases where, just on a purely theoretical basis, we know, um, now, based on the knowledge we have, that something is likely to be dangerous.

So again, you mentioned mercury. Now that we know that mercury is poisonous to humans, um, we know that already, we don't have to go and do clinical trials with things that are very mercury-laden. We know, um, based on, based on, um, based on theory alone, uh, that drugs shouldn't contain too high levels of mercury.

Um, and I, I, I wonder, in the case of, um, the, the sort of curation algorithms: do we know enough purely on a theoretical basis to justify very seriously looking at, um, at regulation? Again, so suppose there were just those three types of curation that we've talked about, um, you know, hyperbole, extremity bias, fair reporting.

Just imagine that the algorithms just had some blend of those things. On a purely theoretical basis, do we, do we know enough, based on the sort of modeling work that you've done and so on, that regulation would be, would be necessary?

[00:40:30] Cailin: Well, I just want to point out, I've, I never have thought, and do not think, that social media algorithms are, like, facing the same incentives as journalists, or curate in the same way that journalists do. I don't think that. I do think they tend to select for extreme content of certain sorts, but it wouldn't be in the same way as science journalists.

Um, now, just kind of stepping back and getting at, like, more of the heart of that question: certainly, we know that there are, there are and have been extremely serious harms from internet disinformation, things that have killed people and, you know, harmed democracy, stuff like this. Uh, you know, in the U.S. a lot, a lot of people have taken ivermectin to treat COVID; you know, it doesn't treat COVID and it's not supposed to be for humans.

It's a dangerous thing to do. You know, again, in the US, we had, uh, an insurrection at the US Capitol, in part driven by QAnon conspiracy content online. So we know that real harms can come out of social media misinformation. Um, I would think that alone is enough to think we need to take regulation seriously.

I think, you know, when people feel scared or resistant when we talk about social media regulation, it's because of free speech laws and free speech norms, and free speech is an incredibly important thing to protect in any country. But when it comes to things like a social media algorithm, and this isn't my point, this is something many people have pointed out, uh, they're already making choices. It's not like you're just getting some magical perfect bubble of speech, or whatever, a random selection of what everyone's saying. It's picking things to show you and not show you, and some things are getting platformed and some things are getting deplatformed, so it's already making all these choices. The question is: do we want to have controls, people who care about public health, who care about democratic functioning, who are saying, once you have these algorithms shaping what people see, what information gets sent out to people, what gets curated, um, do we want that to be done in a way that's good for us, the users?

[00:42:53] Matt: Yeah. Yeah, I would, I would love to get into the specifics there, but maybe just lingering on the point of the, the incentive system. So you, you mentioned you, you're not claiming the incentives faced by science journalists are the same as the incentives that, um, sort of shape social media algorithms. Um, are, are there...

Presumably, like, there are some incentives that are very helpful and some that are not and some that are worse than others. Are there any sort of, um, uh, sort of specific or very material ways that you feel that they differ, that are, that are important?

[00:43:25] Cailin: Yeah, well, so first of all, you know, these social media algorithms don't face, like, ethical norms the way science journalists do. So something like fair reporting is completely out the window. I do think there tends to be selection for extreme content, where extreme means the things that are surprising to people given what they think about the world right now. So I think that actually is quite similar.

Um, something that I think is quite different is that there have been studies showing that the algorithms on some social media platforms tend to actively select misleading information or false information to promote, compared to accurate information. And the reason that happens is that often misinformation is more surprising, because it's false.

Um, so it tends to be stuff that seems weirder, and people are more interested, and then the algorithm picks up on that. Whereas I think science journalists tend to be the opposite; you know, of course they want to make things more exciting and they want to report the, you know, novel or extreme science, but they're choosing things that are by and large accurate and good information to report on.

So there are real disanalogies there.

[00:44:45] Matt: Do you, do you feel like, well, how much of a problem is it that we might not fully understand what is actually happening in a curation algorithm, versus, um, what's happening with the science journalist? Again, with the science journalist, you could speak to them, they could explain; it might not actually faithfully represent what is actually happening in this sort of curation practice,

but I think we have much, much more insight than we would if it's just a very, very large black-box algorithm. Um, I mean, how big of an issue is that? Just the pure fact that these things are very opaque and very complicated and we don't really know how they're working under the hood.

[00:45:22] Cailin: Yeah, I mean, I think that is an issue. In some cases, you know, we can get under-the-hood information about algorithms. I can't remember if it was Twitter, I mean, someone at some point released, like, here's how our algorithm works. But the other thing about social media systems is that they aren't just algorithms, you know; it's a system where you have sometimes millions of users.

They have connections between each other. They're interacting with content in ways that then shape whether the algorithm picks it up. So it's this extremely complicated extended system where you have real humans. You have this online platform. You have a set of rules. You have a computer algorithm.

Sometimes you have AI involved in that as well. And so for that reason, it is really hard to understand, like, what's getting picked to go where and why, in some cases.

[00:46:16] Matt: Yeah. Yeah. And again, like, um, and none of this is sort of new thinking, but, you know, different social media platforms, all of them are living in a world where their businesses are driven by attention and attracting users. And so, like, the whole existence of these businesses does require that, um, and, uh, they are in a sense competing on that basis.

Um, and so there are, you know, there are certain sort of business constraints, business considerations. Um, but, but even given that, are, are there things that you think, um, again, like, back to, back to principles, are there any things that you think could change that would both allow these businesses to operate as successful businesses, but also lead to meaningful improvement in what they're doing to our information environment?

[00:47:09] Cailin: Yes, there are. Um, so, it's funny, I was just talking to someone about, like, well, could we have some kind of neutral algorithms or neutral news feeds that, that aren't distorting content in whatever way, potentially harmful ways? And the answer is, like, well, social media would be much more boring if we tried to do that, and when platforms have tried to do that, they've been pretty boring.

Um, and so obviously, like, both platforms and users don't want that. But if we're thinking about what we have right now, certainly we can take what we have right now and make it better. And there's a lot of ways we can make it better. And there are things that various platforms have already done that do that, like, for example, community notes or context notes on things, um, you know, these will be added information.

So it's not a threat to free speech. They're not taking stuff down. They're adding information, giving context to the, to whatever you're seeing. I think these are great. You know, if we sort of turn back to the curation project, in a way, those things are often giving information about, like, what's the rest of the distribution of events?

What are the other things that happened? How can we help you better interpret this limited piece of data you're seeing? Um, those are great. So that's just one example of something that actually has been added that has improved social media sites. Another thing, I think, you know, it's been very well established that most really misleading content tends to come from a small number of users. I think that most sites should just have rules saying, like, when you sign up, you sign an agreement that says, if I send around too much highly inaccurate content, I'm just removed from the site.

You know, it's just an agreement that this is the kind of space where you have to not share too much highly misleading stuff. I think that's a change that every social media platform should make. I also think, when we're talking about curation, there are ways that you can try to make your algorithm, um, track distributions of information in more, like, less misleading ways.

So for example, it's well established that high emotion content tends to get picked up and amplified by algorithms, right? Because it's interesting. And so here, it's a little hard to get rid of that because people like high emotion content, but you could make your algorithm just a little less interesting, you know, take that really angry stuff and just send it a little less far or, you know, promote the high emotion joy content, which people also like a little more, um.

Sorry, we're getting away from science journalism for sure, but I, I have lots of thoughts about all sorts of misinformation. So maybe if you want, we can kind of come back on topic.
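One hypothetical way to picture the "send the angry stuff a little less far" idea (the field names and weights below are made up for illustration, not any platform's actual ranking formula): keep engagement as the main signal, but shrink the reach of high-anger items slightly and give joyful items a small boost.

```python
# Hypothetical ranking tweak: engagement-driven score with gentle emotion damping.
from dataclasses import dataclass

@dataclass
class Post:
    engagement: float   # predicted clicks/shares, whatever the platform optimizes
    anger: float        # 0..1 score from some emotion classifier (assumed)
    joy: float          # 0..1

def rank_score(post: Post, anger_damping: float = 0.3, joy_boost: float = 0.1) -> float:
    """Engagement-driven score, mildly penalizing anger and rewarding joy."""
    return post.engagement * (1.0 - anger_damping * post.anger + joy_boost * post.joy)

posts = [Post(engagement=100, anger=0.9, joy=0.0),   # very engaging, very angry
         Post(engagement=90, anger=0.1, joy=0.8)]    # slightly less engaging, calmer
for p in sorted(posts, key=rank_score, reverse=True):
    print(p, round(rank_score(p), 1))
```

With these invented weights the calmer post outranks the angrier one despite lower raw engagement, which is the "a little less interesting" trade-off being described.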

[00:50:12] Matt: No, no, no. I think, I don't think that there is a, I don't think we have to confine ourselves to science. The posture I take is, like, follow the, follow where the interesting conversation goes and what's most important. So no, for sure, let's, let's go there. I mean, have, have, have there been any practical examples that

you've seen, again, sticking with, with social media, or it doesn't have to be social media, but of this concept, where there have been active measures taken across different platforms and then we've had time to see the results?

Um, so again, like, not all social media platforms are, are equal. TikTok's algorithm is extremely addictive for users compared to some of the others. And, um, uh, you know, different, different platforms have tried different things. Have we seen any real-world results of these changes being made in certain areas, and then what happened, um, as, as a result?

[00:51:04] Cailin: It's a little hard to know, because, uh, it's hard to study when one platform, like, makes a specific change in this highly complicated system where people are on multiple platforms and all this stuff is happening. It's hard to know exactly what happened. I think there are a few cases where you could say, like, we saw a platform decision and an impact.

So for example, in the week of the January 6th insurrection, a lot of platforms kicked off QAnon posters, um, and kind of the leaders of that movement. And it seemed, a lot of people thought, based on the evidence that they could gather after that, that that actually had a measurable impact on the ability of that community to spread the misinformation they were peddling and to organize.

And so that's a kind of extreme example where you're like, yeah, what the platforms did really did have an impact, but obviously that wouldn't be the kind of thing we're usually talking about. But a lot of people do do studies where they try to get, you know, a controlled population and check how certain types of changes on platforms would then impact that group.

So, um, one thing that's interesting that a lot of people have studied is friction on social media platforms, which is where you make it just a little harder to share things, like you add friction to people's behaviors. And I think this is pretty well confirmed by the evidence, that just adding friction tends to decrease how much people share false content or bad content.

And so when you slow people down a little bit, it turns out they're actually not that bad at identifying what is going to be misinformation. And with just a little more thinking, they tend not to share it. Even better is stuff like, you know, these little alerts you get on some sites being like, you didn't actually read this article,

are you sure you want to share it? Are you sure you want to repost it, or whatever? Uh, and so there are some things that, from experimental evidence, it seems that they can actually improve sharing or make a difference. There are some platforms, though... I mean, it's very hard to study platforms like TikTok, where the content is videos, because the content is, like, more complex than what you might get on these other platforms where it's words or words and a picture.

Um, and so I think when we're looking at TikTok, and I think kind of video content is the future for the next while, um, it's much less understood, I think, how to stop or decrease the spread of misinformation on that kind of platform.

[00:53:54] Matt: Yeah. And I guess the flip side that we haven't really talked about in all this: you know, we've talked a lot about the, the role of the curators themselves, whether it's a journalist, whether it's an institution, whether it's a platform. Um, we haven't spoken that much about the actions that can be taken by the consumers of that information, the, the learners, um, and the users of these platforms, um, which is another side of the coin.

I mean, do you have views on, um, you know, what are the things that individuals, for example, can do to, I guess, protect themselves from some of the consequences that we've been talking about? Um, you know, being impacted by hyperbole, um, extremity bias, all these things. Does it, does anything stand out, uh, from the individual perspective?

[00:54:45] Cailin: Yeah, so there certainly are things users can do to improve issues around misinformation. That side of things is not usually the one I like to focus on. And the way I think about it, so, I think that it's just not that effective for us all to try to learn to be, like, really information savvy, compared to just having good information environments we live in.

And for me, the analogy I like to think about is everyone carrying metal straws. I don't know if you... this was a big thing with the people around me, 'cause I hang out with a lot of, yeah, okay, I hang out with a lot of environmentalists. It was like, no more plastic straws, everyone get your own metal straw.

And it's like, or, we could just have regulation around what kind of single-use plastic people are allowed to produce. And if we did that one change to the whole system, we don't all have to do this really stupid thing of carrying our own metal straws. Um, so people can learn a lot about how misinformation works, they can learn how to share less misinformation, people should do that, it's not like there's any reason not to do that, but it's just so much more effective to have changes in government or regulation or on platforms that protect, you know, a million users at once, if that makes sense.

[00:56:17] Matt: Yeah, for sure. No, for sure, it does. Um, then let's think about, you know, how this might pan out in the future. So I think people do have the sense that this problem has gotten a lot worse over the past decade or so. I don't actually know if that's true. Do you know?

Has someone looked at how much worse it actually has gotten, or, you know, do we just feel it's worse? Is it a worse problem than it was before?

[00:56:44] Cailin: You know, I think it is. I don't know of, um, yeah, I don't know of, like, actual empirical data where people did real studies looking at how much disinformation is there or how bad the algorithms are, but part of the reason I think it's gotten worse is that, you know, when these new areas of media were created, I just think people had not yet realized how much they could be used for the purposes of disinformation, and then had not yet built up the skills and tools necessary to use them in that way.

And you also see all these things happening where, for example, after the Brexit vote in 2016 and after the U.S. election, I think there's lots of reason to think that, uh, you know, people quite self-consciously were like, oh, well, if other people can do that, I can do that too. So I, I think that there's lots of reason to think that,

in fact, it has gotten worse. Just more people have realized this is something they could do. Um, the techniques people have been able to use have gotten more savvy over time. At the same time, we increasingly see attempts to prevent or regulate the harms of disinformation. So there is pressure on the other side of this system, um, trying to improve things.

[00:58:18] Matt: Yeah. And I guess the critical question is which side will win out and, like, where the pressure is mounting more. You know, it feels from my perspective that there is a huge degree of uncertainty about how things will pan out, because, you know, on the one hand there is this increased awareness of how big the problem is, but on the other hand there is also a lot going on out there in the world.

You know, we talked about generative AI breaking the internet. There's a lot of content out there now that is generated by generative AI, and it's very hard to spot, it's very easy to publish content, there's very low friction to get content spread all over the world in a very scalable way.

And so it feels like there is a great tension between these two sides. How optimistic are you, I guess, as to how this might play out? How do you see it playing out?

[00:59:14] Cailin: Yeah, I guess I kind of have a mix of optimism and pessimism. Um, there's always been misinformation as long as there have been humans, you know; whenever people can transfer information from one person to another, they're sometimes going to be sharing things that are false or misleading. So it's not like that's going away.

I think the question just is sort of, how bad is it, or how much is it happening at once, at a scale that's unlike things that have come before? Um, the optimistic thing is that if you look at the history of media, I think you see a lot of cases where, uh, there are new information technologies, like better printing presses or new kinds of newspapers or whatever, or the radio, and then you see the spread of misinformation

via these new technologies. And then you see people kind of figuring out how to regulate it, or protect themselves, or develop new norms to solve those social problems. So there's, like, this, you know, history of people solving this same kind of problem. Uh, the pessimism part comes from the fact that the speed of digital technology change is just so fast now, that it's not just that social media was invented, it's that every few years there's a new platform

that people are jumping into, you know, there was Facebook, and then there was Twitter, and then there was Instagram, and then there was WhatsApp, and then there was TikTok, and, um, each one of these is different, you know, they have different rules for how information can be shared, they have different sorts of information being shared. You know, the difference between TikTok, where everything is videos and there are all these specific rules for how people can stitch with others, or repeat things, or copy them, uh, the difference between that and something like Twitter, where it's, whatever, 200 characters of text and maybe a picture or link, is very big.

So I think the question is, can we figure out how to regulate or control misinformation, given all of these new platforms constantly emerging?

[01:01:28] Matt: Are there any emerging technologies that you think will be particularly important to think about and focus on in this space? I mean, generative AI is a very big, it's a very broad term, it means a lot of things, but, you know, automatically generated video content is an example of something it can do.

We've seen deepfakes, we've seen very, very personalized content. Is there anything that jumps out to you as particularly troublesome in this, uh, in this fast-moving space?

[01:02:00] Cailin: I mean, I, I honestly, I'm not sure. And I'm not really an AI person. Uh, one thing that a lot of people have worried about, which seems right to me, is that when you can make generative AI, it decreases people's trust in all sorts of content. So now people become much less sure if a video they're seeing was a real video, or if it was an AI video, or if a photo could be a deepfake, and so it decreases the information value in normal media in a way that seems worrying.

What I'm guessing, just based on how things have gone, is that we just are not going to have any idea what the real threats are until they happen. Uh, I think we're gonna look back in 10 years and be like, wow, we just really weren't expecting that. I mean, that's, that's what's happened on the internet at every stage.

Uh, we thought it was gonna help us never be wrong again, and then that's really not what happened. Uh, or, you know, with the origins of Facebook, it was like, oh, this is going to be a fun little silly thing for the youth to connect with their friends on, and then social media just became something totally different from what we would have expected.

[01:03:17] Matt: Yeah, yeah, no, it's, it's tricky. Um, I mean, what, what do you do, uh, personally to navigate this space and to protect yourself? Again, there are some obvious things, you know, trusted sources, do somewhat of your own research and so on, but is there anything that, that you incorporate in, in your sort of personal life and in professional life to, uh, navigate this effectively?

[01:03:44] Cailin: Yeah, I mean, when it comes to news, I pay attention to source quite a lot and tend to go with mainstream news sources, you know, the Washington Post or whatever, something of that sort. I think the, the place, you know, I'm, I'm a person who does, like, misinformation research, so I tend to know more about, like, how to deal with particular kinds of misleading content.

I think the place that even for me is extremely challenging actually goes back to curation, but now curation based on our preferences, right? Where, like everyone else, I tend to see the content that tickles me, that I find enjoyable or uplifting or confirming of my worldview. Um, and that means that there are a lot of things I'm not seeing, especially on the other side of the political aisle, and I'm not seeing the opinions that really differ from my own.

And, you know, I try to kind of extrapolate out in my mind, like, remember, you know, this is just my echo chamber, there's all this other stuff I'm missing, but I find that extremely hard. Um, it's actually a place where I worry. I don't think I can do it, and I sort of don't think anyone can do it, to really understand what's going on outside of their own little bubble when you're only seeing what's in your bubble.

[01:05:06] Matt: Yeah, no, I worry about that too. I mean, going back to your, um, the original paper we were talking about, one of the key assumptions in that model was that the journalists had access to sort of an accurate representation of the underlying distribution.

And, and I feel like before the digital age, at least to some extent, this was true in the sense that information came through kind of geographically constrained networks. You would bump into people on the street, you knew people in your area, you know, of course you could get bubbles within the village.

Um, but, um, you know, I think the environment forced somewhat of a bigger spread of the ideas we would get exposed to, versus, as you said, today, with a higher degree of personalization based on individual preferences, it's almost impossible for one to know, um, you know, to what extent what they're seeing is representative.

As much of your own research as you want to do, um, you know, how does one know whether they're getting somewhat of an accurate sampling of the underlying truth space? I'm not trying to solve that problem, but it's something I worry about as well.
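
As a rough illustration of that worry, here is a small hypothetical sketch: opinions sit on a -1 to 1 spectrum, a "curated" feed preferentially surfaces content close to the user's own position, and the user's estimate of the average opinion drifts toward themselves. The weighting rule is an arbitrary stand-in for engagement-based ranking, not a model of any real platform.

```python
import random

random.seed(1)

# Hypothetical setup: opinions are points on a -1..1 spectrum, so the
# "true" population average is roughly 0.
population = [random.uniform(-1, 1) for _ in range(100_000)]
my_position = 0.6

def feed_sample(pop, user_pos, k=200):
    """Preference-based curation: content close to the user's position is far
    more likely to be surfaced (a crude stand-in for engagement ranking)."""
    sample = []
    while len(sample) < k:
        opinion = random.choice(pop)
        closeness = 1 - abs(opinion - user_pos) / 2   # 1 = identical, lower = further away
        if random.random() < closeness ** 4:          # sharply favour similar content
            sample.append(opinion)
    return sample

random_sample = random.sample(population, 200)
curated_sample = feed_sample(population, my_position)

print("true average:   ", sum(population) / len(population))
print("random sample:  ", sum(random_sample) / len(random_sample))
print("curated sample: ", sum(curated_sample) / len(curated_sample))
```

The curated estimate lands well to the user's side of the true average, which is the sense in which a personalized feed makes it hard to recover the underlying distribution from what you happen to see.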

[01:06:20] Cailin: Yeah, how quirky and individual is the information, or the opinions, or whatever, that you yourself are seeing. It's just hard to know.

[01:06:27] Matt: Exactly. Yeah, exactly.

[01:06:29] Cailin: I mean, in some ways, if you think about, like, pre-internet information spread, it's not that this wasn't a problem. It was just that it was a different problem, you know, people were, as you say, more influenced by their geographic locations and the people there. And there are really interesting, like, formative studies showing that that's the case.

For example, I think there was one on MIT students in their little, like, housing groups that showed people would have more similar attitudes. Um, so you're still having some kinds of, like, effects of space, but now space is different. Now space has to do with these virtual environments and how close you are in virtual space, rather than your literal physical environment.

[01:07:16] Matt: Mm. Yeah, and I think there is also, I mean, one of the biases of curation that you talked about was the extremity bias. Or maybe, like, cherry picking is a better example here, where there is a lot of stuff going on in the world. And, you know, previously, um, you know, you could sample events all day and you just simply would not get exposed to the number of extreme events that we do today.

Today, because the information that we're getting is globalized, I think it is possible to spend all day, every day, just getting a sample of very, very extreme events on any particular topic. And I don't know if our psychology can, uh, can quite handle that, really unwind just how skewed that distribution is.

And I'm not sure that we're able to do that.

[01:08:01] Cailin: Yeah, and I think even with traditional media, that was something people were worried about, you know. Uh, do people have a really, um, skewed view of how common crime is, for example, because crime gets reported so often? And then on social media, you can go even further on this. Like, if you're, if you're interested in situations where, like, a cat mom raises squirrel babies, you can go find a hundred videos today of that and maybe get a really skewed distribution of how often that happens.

I don't know. On my, on my social media sites, that happens pretty often, surprisingly often. So now I kind of have an idea that, like, there's a lot of cats raising squirrels and a lot of chickens with little kittens under them.
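
A similar toy sketch for the cherry-picking point: if the "severity" of events follows a bell curve, a globalized feed that surfaces only the most extreme items makes the typical event you see look far more extreme than the typical event that actually happens. The distribution and the feed rule here are assumptions for illustration only, not estimates of any real feed.

```python
import random

random.seed(2)

# Toy assumption: event "severity" is normally distributed around 0.
events = [random.gauss(0, 1) for _ in range(10_000)]

def local_exposure(events, k=20):
    """Pre-globalization stand-in: you happen upon a handful of events at random."""
    return random.sample(events, k)

def globalized_feed(events, k=20):
    """Cherry-picked feed: of everything happening anywhere, you only see the
    k most extreme items."""
    return sorted(events, key=abs, reverse=True)[:k]

def typical_severity(sample):
    return sum(abs(e) for e in sample) / len(sample)

print("typical severity, random local sample:", typical_severity(local_exposure(events)))
print("typical severity, extreme-only feed:  ", typical_severity(globalized_feed(events)))
```

The gap between the two printed numbers is the skew being discussed: the feed's "typical" event sits several standard deviations out, even though nothing about the underlying world changed.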

[01:08:51] Matt: Yeah, well, I almost, I almost introed, almost suggested that we intro this conversation by comparing our news feeds on social media sites. But I thought that it was probably, it was probably a bit risky.

[01:09:04] Cailin: Yeah, it's a little risky. It would definitely reveal my political preferences more than I try to do in like professional spaces.

[01:09:12] Matt: I mean, I mean, the other, the other interesting thing there is that, um, uh, you know, quite a small sample of, of what people are receiving on their curated sites can reveal a lot about their political preferences and other preferences because of how tightly these things cluster, which is maybe, maybe a topic for a, for a different conversation, but also, also an issue.

Yeah. Um, one of the places where I personally, um, try to get sort of higher-quality, well-curated information is, of course, books. Um, I think, you know, people spend a lot of time, uh, to develop a, a book that's well written. Um, and books have been absolutely critical for my life. I'm sure, I mean, you've, you're the author of many papers and, and some books as well.

Um, I'd love to turn to the, the topic of, of books. Um, and if you have any books that jump to mind that have most influenced you in your, um, you know, professional or personal life.

[01:10:11] Cailin: That's like such a, Matt, that's like such a big topic jump, especially when you threw personal life in there, and I'm just, like, reeling. Uh,

[01:10:22] Matt: Let's, let's iterate, let's, let's do, let's do something that's much more closely related to

[01:10:29] Cailin: Okay, yeah, yeah, pull it back.

[01:10:31] Matt: Yeah.

[01:10:33] Cailin: Yeah, okay, right, alright, books that have influenced my professional life, I mean, a lot, but, you know, so, I do this stuff on, um, misinformation, public belief, false belief, um, social network spread, but my actual discipline is philosophy, so a lot of what I read is actually philosophy books, which, you know, they're, they're not always for everyone.

So, like, the first things that pop to mind are, you know, these, like, very esoteric types of things, like, I love David Lewis's work on what social conventions are, but like, you hear what I'm saying.

It's not that relevant here. As far as stuff relevant to the topic we've been talking about, um, C. Thi Nguyen's work on, like, games and gamification recently has been pretty fascinating.

Uh, lately, what I've been reading tons and tons of are, like, I'm trying to read all the books on how, uh, our particular beliefs in a society influence the way we produce science, because I'm working on a project relating to that, so I'm reading all this stuff about how, like, our beliefs about gender influence science, and our beliefs about fat and, you know, uh, disability and race and, yeah, I don't, yeah,

[01:12:02] Matt: Yeah. No, I mean, a huge, a huge topic, and very relevant for this podcast actually, because, I mean, the name of the podcast is, um, Paradigms, or Paradigm, and it's often about looking at the paradigms in which we work, um, not just working within them, but actually looking at them. And, um, they do shift, and many of the conversations I've had have addressed questions just like that. You know, when one gets started in science, you often see it as something that's largely uninfluenced by, um, politics and society and so on, uh, and that is just completely not true, that is

[01:12:39] Cailin: Yeah, yeah, in fact it's quite deeply influenced and shaped by the people who are producing it, and the way they've been raised, and the culture they're existing in, and all these factors.

[01:12:50] Matt: Yeah. Exactly. Um, to then make a smaller leap from, uh, books that have professionally influenced you to, to, uh, books that have influenced you more broadly. And maybe it's the same because philosophy does that, but, um, does anything come to mind as, as books that have, uh, that have influenced you in a more general way?

[01:13:07] Cailin: Gosh, I am really, like, an obsessive reader, so this is, like, a stressful question for me. I am just gonna go for the first thing that pops up. I have, I mean, for my entire adult life, been absolutely obsessed with Kahlil Gibran's The Prophet. You know this book?

[01:13:25] Matt: Yes, yes, yes. For sure. Yeah. All, all the, all the wedding, uh, all the wedding readings come from that book.

[01:13:31] Cailin: A lot of them do, uh, but, I mean, it's just, it's so beautiful and so deep. I mean, the part on, like, joy and sorrow, I go and read, like, every month or so. Yeah. I love that book.

[01:13:46] Matt: Amazing. That's a great recommendation. I will, I will definitely link that one here. Last two questions. Firstly, just a, just like a, you know, a call to action for the audience, I guess. I mean, if people want to look more into this stuff, uh, if they want to get more involved, if they want to find your work, anything, um, you know, any, any words to share with, uh, with the audience?

[01:14:07] Cailin: If people are interested in things like, um, misinformation and especially disinformation, there are good books, recent books. I mean, um, Network Propaganda by, uh, Kathleen Jamison is really great, for example. Uh, if people are interested in looking at my work, I have a website, cailinoconnor.com.

It's really easy to find. I am on Facebook and Twitter, but I mostly just go on there to post, like, when students in our graduate program have gotten jobs, and occasionally to ask questions to help me in my research. Yeah, I don't know what other sorts of resources might be useful or what people usually share?

[01:14:54] Matt: Uh, yeah, usually, usually books. Maybe we can link your, your own book, that would be a good idea.

[01:14:59] Cailin: Yeah, so I wrote a book on misinformation called The Misinformation Age with Jim Weatherall, who's my colleague and also my husband. Um, we talk a little bit about curation in there, not very much; most of the curation stuff we started developing later. Uh, that book mostly uses network models, so models where you, um, represent a whole social network in a computer simulation and then try to look at how various information or ideas, um, spread between people. So we use those, and then a lot of, like, historical cases of false beliefs, to try to understand various features of how people share information and where they go wrong.
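
For readers curious what a network model looks like in code, here is a deliberately minimal contagion-style sketch, not the richer models from The Misinformation Age (where agents also update beliefs from evidence); it only shows the basic mechanic of simulating how an idea spreads across a social network. It assumes the networkx library is available; the network shape and transmission probability are arbitrary illustrative choices.

```python
import random

import networkx as nx

random.seed(3)

# A small-world social network: 200 people, each tied to a few neighbours,
# with some long-range "rewired" connections.
G = nx.watts_strogatz_graph(n=200, k=6, p=0.1)

believers = {0}      # one person starts out holding the claim
P_TRANSMIT = 0.2     # per-step chance a believer passes the claim to a neighbour

for step in range(15):
    newly_convinced = set()
    for person in believers:
        for neighbour in G.neighbors(person):
            if neighbour not in believers and random.random() < P_TRANSMIT:
                newly_convinced.add(neighbour)
    believers |= newly_convinced
    print(f"step {step:2d}: {len(believers)} people hold the claim")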

[01:15:41] Matt: Yeah. Fantastic. Yeah, I'll, I'll link that as well. Um, last one, I'll give you a prior warning. It is a big jump from the topics that we've been talking about, but it's, it's one that I always end with. Um, we talked a little bit about generative AI, and there's a lot of talk about, um, the prospect of developing an AI superintelligence.

Um, and my question is, if we were to create one and we had to pick a person, either past or present, to represent humanity to the, to the AI superintelligence, who should we pick?

[01:16:13] Cailin: Oh, Dolly Parton, obviously. Just...

[01:16:23] Matt: Ah, very good. Um,

[01:16:28] Cailin: just all around very lovable, you know

[01:16:30] Matt: She is, she is. Uh, I have a funny story about Dolly Parton, but again, it's one for a different time.
