The big idea: do we worry too much about misinformation?


On 30 October 1938, a US radio station broadcast a dramatisation of HG Wells’s apocalyptic novel The War of the Worlds. Some listeners, so we’re told, failed to realise what they had tuned into; reports soon emerged of panicked audiences who had mistaken it for a news bulletin. A subsequent academic study estimated that more than a million people believed they were experiencing an actual Martian invasion.

A startling example of how easily misinformation can take hold, perhaps. But the story is not all it appears to be. Despite oft-repeated claims, the mass panic almost certainly didn’t happen. In national radio audience surveys, only 2% reported listening to anything resembling The War of the Worlds at the time of the broadcast. Those who did seemed to be aware that it was fiction. Many referred to “the play” or its narrator Orson Welles, with no mention of a news broadcast. It turned out that the academic analysis had misinterpreted listener accounts of being frightened by the drama as panic about a real-life invasion.

Almost a century later, the idea of large-scale misinformation is, if anything, more salient. We regularly see headlines about the millions who have been exposed to falsehoods online. In a 2018 Gallup survey of Americans, respondents reckoned on average that two-thirds of the news they encountered on social media was misinformation. But, as with that War of the Worlds broadcast, misinformation isn’t necessarily the problem we think it is. As Covid spread during spring 2020, monthly visits to English-language news websites labelled “untrustworthy” by the rating service NewsGuard – such as Breitbart and the Daily Wire – increased from 163m to 194m. But during the same period, visits to “trustworthy” sources, such as the BBC and the Guardian, grew from 5bn to 8bn. In other words, credible websites received 40 times more visits in early 2020 than questionable ones.

Outright misinformation may be rarer than we think; it is also only part of the problem when it comes to navigating fact and fiction. There are two errors we must avoid if we want to get closer to the truth: we shouldn’t believe things that are false, and we shouldn’t discount things that are true. If we focus solely on reducing belief in false content, as current efforts tend to do, we risk targeting one error at the expense of the other. Clamping down on misinformation may have the effect of undermining belief in things that are true as well. After all, the easiest way to never fall for misinformation is to simply never believe anything.

When I supervise students new to scientific research, I often see a change in their attitudes over time. Early on, they will treat papers in established academic journals as almost sacred. Because the paper has been published and peer-reviewed, goes the logic, it must be accurate. Then, as students realise these papers are often flawed, and occasionally outright fraudulent, doubt sets in. Everything could be wrong; nothing can be trusted.

This is not a new problem. At the turn of the 20th century, the mathematician Henri Poincaré cautioned against both excessive trust and excessive doubt. “To doubt everything or to believe everything are two equally convenient solutions; both dispense with the necessity of reflection,” he warned.

Rather than either embracing or shunning all that we see, we must instead find ways to manage the risk that comes with trusting that something is correct. For example, in medicine, we typically design clinical trials in a way that reduces both the risk of concluding something works when it doesn’t, and the risk of concluding something doesn’t work when it does. We can never have total certainty in a result, but we can still build enough confidence in what we uncover for it to be useful.

The damaging effects of overscepticism have made it a popular tool for those looking to undermine common knowledge. In 1969, with concerns about the harms of smoking on the increase, a tobacco industry memo stated: “Doubt is our product since it is the best means of competing with the ‘body of fact’ that exists in the minds of the general public.” They weren’t trying to get people to believe different facts; they were trying to undermine the idea there could ever be enough evidence to act.

Often, it is not outright falsehoods that sow doubt online. Last year a study found that, among vaccine-related links viewed on Facebook during the spring 2021 Covid vaccine rollout, only 0.3% were flagged as false or out-of-context by factcheckers. Crucially, the posts that had the biggest overall impact on vaccine confidence were factually accurate, but potentially open to misinterpretation. For example, the most viewed link – which reached seven times more people than all the fact-checked misinformation combined – was this Chicago Tribune headline: “A Healthy Doctor Died Two Weeks After Getting a Covid vaccine; CDC [Centers for Disease Control] Is Investigating Why”. Strictly speaking, all of this was true. But it didn’t provide enough information to draw meaningful conclusions about the safety of vaccines or their relative risk compared to Covid.

When I’ve encountered conspiracy theorists, one of the things I’ve found surprising is how much of the evidence they have to hand is technically true. In other words, it’s not always the underlying facts that are false, but the beliefs that have been derived from them. Sure enough, there will be a logical fallacy or out-of-context interpretation propping them up somewhere. But it’s made me realise that it’s not enough to brand something “misinformation”: more important is the ability to find and address the flawed assumptions hiding among voluminous facts. We must give people the conceptual tools they need in order to spot skewed framing, sleight of hand, cherry-picked data, or muddled claims of cause and effect.

That means moving away from the idea that people are threatened by a tsunami of falsehoods. Branding technically accurate information as untrue merely undermines trust. And if we issue warnings that most of the content you find on the internet is made up, it will distract from the bigger challenge of ensuring that technically accurate information is correctly interpreted.

To borrow from Poincaré, believing that falsehoods are widespread and easily identified, or believing that most content is accurate and hence requires no further thought, are two equally convenient solutions. Both could harm our ability to tackle the much thornier reality of mistaken beliefs and misplaced trust online.

Adam Kucharski is a professor at the London School of Hygiene and Tropical Medicine, and author of Proof: The Uncertain Science of Certainty (Profile).

Further reading

Misbelief: What Makes Rational People Believe Irrational Things by Dan Ariely (Heligo, £10.99)

The Age of Magical Overthinking by Amanda Montell (Atria, £10.99)

The Art of Uncertainty: How to Navigate Chance, Ignorance, Risk and Luck by David Spiegelhalter (Pelican, £12.99)


