Like many people, I was thinking about this problem after the 2016 election. The argument usually goes that online media, especially social media, polarizes voters by making it easy for them to retreat into ideological bubbles, although not everyone agrees.
Sixty years ago, people got most of their information from a handful of newspapers and magazines. News was comparatively uncontroversial, because that was better for business. Today, as technology drives distribution costs toward zero, even small players can attract a global readership. This makes carving out a niche, for example an ideological niche, a much more viable strategy, because there are billions of potential readers.
Another driving force is the increasing number of options. With specialized tech news, political news, soccer news, and fake news (not the kind you’re thinking of; I mean The Onion; I’ll get back to the other kind) all a click away, there’s less need for sources with broad but bland perspectives, like CNN.
The priority is differentiation, and it may be causing a sort of Cambrian explosion of news phyla. The trend reinforces itself: content targets smaller niches of readers; readers get a taste for specialized content; content targets yet smaller niches.
Given these incentives, we can expect political news to get more diverse, and thus more polarized. This doesn't bode well for establishing political consensus, but I don't see it as fatal to democracy. More worrying is that the rise of news with little or no basis in fact is also part of this increasing diversity.
Taking after President Trump, let's call it fake news. It didn't take long for the world's tech entrepreneurs to see the potential in fake news. Forays into journalism by Macedonian teenagers or an American college grad looking to pay off his loans make for fascinating reads, but they are stories of opportunism rather than electioneering.
Their method, unfortunately, can be weaponized. Check out the summary of this article, about a so-called Firehose of Falsehood.
The Russian propaganda model is high-volume and multichannel, and it disseminates messages without regard for the truth. It is also rapid, continuous, and repetitive, and it lacks commitment to consistency.
What the Russian government understands is that if you muddy the waters enough, facts can't stand out by virtue of being true. Think of facts, in the engineering sense, as the signal. If you drown them in noise, if the message is too garbled, the important part becomes inaudible.
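The signal-and-noise metaphor can be made literal. Here's a toy sketch (my own illustration, not from any propaganda analysis; the signal shape and noise levels are arbitrary choices): a fixed "fact" signal gets buried as the volume of surrounding noise grows, which we can quantify with the standard signal-to-noise ratio.

```python
import math
import random

random.seed(0)

def snr_db(signal, noise):
    """Signal-to-noise ratio in decibels: 10 * log10(P_signal / P_noise)."""
    p_signal = sum(s * s for s in signal) / len(signal)
    p_noise = sum(n * n for n in noise) / len(noise)
    return 10 * math.log10(p_signal / p_noise)

# The "facts": a fixed, unit-amplitude sine wave. It never changes.
signal = [math.sin(2 * math.pi * t / 100) for t in range(1000)]

# Cranking up the noise makes the same unchanged signal harder to pick out.
for noise_level in (0.1, 1.0, 10.0):
    noise = [random.gauss(0, noise_level) for _ in range(1000)]
    print(f"noise level {noise_level:>4}: SNR = {snr_db(signal, noise):6.1f} dB")
```

The signal's power is constant throughout; only the noise grows, and the SNR falls from clearly audible to hopelessly buried. That's the Firehose of Falsehood in one number: you don't need to refute the facts, just outproduce them.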
Fake News and Machine Learning
Now read a bit about Tay, whom you’ve probably heard of.
[Tay] caused subsequent controversy when the bot began to post inflammatory and offensive tweets through its Twitter account, forcing Microsoft to shut down the service only 16 hours after its launch.
Here’s where things get scary.
If there are already bots whose speech is real enough to offend people, soon there will be bots whose speech is real enough to mislead people. If the average person can be misled by a bot, noise is free. Electioneering is scalable.
I’m sure there are governments working on this right now. I bet there are startups working on it. What do you think?