
Are a few people ruining the internet for the rest of us?


When I scroll through social media, I often leave demoralized, with the sense that the entire world is on fire and people are inflamed with hatred towards one another. Yet, when I step outside into the streets of New York City to grab a coffee or meet a friend for lunch, it feels downright tranquil. The contrast between the online world and my daily reality has only gotten more jarring.

Since my own work is focused on topics such as intergroup conflict, misinformation, technology and climate change, I’m aware of the many challenges facing humanity. Yet it is striking that people online appear just as furious about the finale of The White Lotus or the latest scandal involving a YouTuber. Everything is either the best thing ever or the absolute worst, no matter how trivial. Is that really what most of us are feeling? No, as it turns out. Our latest research suggests that what we’re seeing online is a warped image created by a very small group of highly active users.

In a paper I recently published with Claire Robertson and Kareena del Rosario, we found extensive evidence that social media is less like a neutral reflection of society and more like a funhouse mirror. It amplifies the loudest and most extreme voices while muting the moderate, the nuanced and the boringly reasonable. And much of that distortion, it turns out, can be traced back to a handful of hyperactive online voices. Just 10% of users produce roughly 97% of political tweets.

Let’s take Elon Musk’s own platform, X, as an example. Although it is home to hundreds of millions of users, a tiny fraction of them generates the vast majority of political content. For instance, Musk posted 1,494 times in his first 15 days of implementing government cuts for the so-called department of government efficiency (Doge) earlier this year. He was, essentially, writing non-stop. And many of his posts spread misinformation to his 221 million followers.

On 2 February he wrote, “Did you know that USAID, using YOUR tax dollars, funded bioweapon research, including Covid-19, that killed millions of people?” His behaviour fits the pattern of many misinformation super-spreaders. A mere 0.1% of users share 80% of fake news. Twelve accounts – known as the “disinformation dozen” – created most of the vaccine misinformation on Facebook during the pandemic. These few hyperactive users produced enough content to create the false perception that many people were vaccine hesitant.

Similar patterns can be observed across the internet. Only a small percentage of users engage in truly toxic behaviour, but they’re responsible for a disproportionate share of hostile or misleading content on nearly every platform, from Facebook to Reddit. Most people aren’t posting, arguing, or fuelling the outrage machine. But because the super-users are so active and visible, they dominate our collective impression of the internet.

The resulting problems don’t stay confined to this small cohort; they distort how the rest of us make sense of the world. Humans create mental models of what other people think and do. It’s how we figure out social norms and navigate groups. But on social media, this shortcut backfires. We don’t get a representative sample of opinions. Instead, we see a flood of extreme, emotionally charged content.

In this way, many of us are led to believe that society is far more polarised, angry, and deluded than it really is. We think everyone on the other side of the generation gap, political spectrum, or fandom community is radical, malicious, or just plain dumb. Our information diet is shaped by a sliver of humanity whose job, identity, or obsession is to post constantly.

This distortion fuels pluralistic ignorance – when we misperceive what others believe or do – and can shift our own behaviour accordingly. Think of voters who see only the angriest hot takes about immigration or climate change and assume there’s no common ground to be found.

The problem isn’t just the individual extremists, of course – it’s the platform design and algorithms that amplify their content. These algorithms are built to maximise engagement, which means they privilege content that is surprising or divisive. The system is optimised to promote the very users who are most likely to distort our shared perception of reality.

It gets worse. Imagine you’re sitting in a busy restaurant, having to speak a little louder just to be heard. Before long, everyone is shouting. These same dynamics happen online. People exaggerate their beliefs or repeat outrageous narratives to get attention and approval. In other words, even people who aren’t especially extreme may start acting that way online, because it gets rewarded.

Most of us aren’t spending time on our phones trolling our foes. We’re busy working, raising families, spending time with friends, or simply trying to find some harmless entertainment on the internet. Yet, our voices are drowned out. We have effectively handed over a megaphone to the most obnoxious people and let them tell us what to believe and how to act.

With over 5 billion people now on social media, this technology isn’t going away. But the toxic dynamic I’ve described doesn’t have to hold sway. The first step is to see through the illusion and understand that a silent majority often lurks behind each incendiary thread. And we, as users, can take back some control – by curating our feeds, resisting the outrage bait, and refusing to amplify the nonsense. Think of it like deciding to follow a healthier, less processed diet.

In a recent series of experiments, we paid people a few dollars to unfollow the most divisive political accounts on X. After a month, they reported feeling 23% less animosity towards other political groups. In fact, their experience was so positive that nearly half of them declined to refollow those hostile accounts after the study was over. And those who maintained their healthier newsfeeds reported less animosity a full 11 months after the study ended.

Platforms could easily redesign their algorithms to stop promoting the most outrageous voices and prioritise more representative or nuanced content. Indeed, this is what most people want. The internet is a powerful and often valuable tool. But if we keep letting it reflect only the funhouse mirror world created by the most extreme users, we’ll all suffer the consequences.

Jay Van Bavel is a professor of psychology at New York University.

Further reading

The Righteous Mind by Jonathan Haidt (Penguin, £12.99)

Going Mainstream by Julia Ebner (Ithaka, £10.99)

The Chaos Machine by Max Fisher (Quercus, £12.99)


