Social media is “a nuance destruction machine,” as Jeff Bezos concisely put it in his recent, widely reported Congressional testimony. On another recent occasion he expanded on the idea (quoting his son), saying “…if you really want your meme on the Internet to go far and travel fast, you have to eliminate all nuance in it, and add conflict.”

Many people (including legislators) are increasingly aware of this. But it seems the only remedies being considered amount to swimming against the tide. They want to censor bad memes — those that lack nuance and create conflict.

That is an attempt to treat the symptom, not the disease. The disease is a distribution system designed to favor the elimination of nuance and the elevation of conflict. Social media algorithms favor viral content and conflict generation because that increases engagement, which increases ad views and earns the platforms billions in added revenue.

Zuckerberg contended at the hearing that Facebook doesn’t profit from hate speech, that it “is very focused on fighting against” it, and that it “has invested billions of dollars in that.” The argument that the company doesn’t profit even though fake news increases engagement does not impress many experts. It also remains clear that Facebook is not doing enough; many view its efforts as cosmetic and ineffective.

As long as social media’s financial incentives favor engagement (or “enragement”) over quality, its filtering algorithms will be designed to favor messages of hate and fear. And as long as that happens at Internet speed, bolted-on efforts to add back nuance and limit conflict will be futile. We will waste time and resources with little result, and democracy may drown in the undertow.

How can we swim out of this tide?

  1. Change the incentives.
  2. Let the new incentives motivate the platforms to redesign their algorithms to filter for nuance, and against incivility and hate speech.

How to change incentives? The internet is not inherently a nuance destruction machine. The advertising model of certain platforms is what motivates them to become nuance destruction machines. (And for all of Bezos’ quote-meistering, that model exists on parts of Amazon as well.) It is very hard to get social media to favor nuance and disfavor conflict when doing nothing lets them sell more ads.

What we need is a subscription model that makes the user the customer. People should pay for the services they enjoy, with some of that cost defrayed by compensation for their data and their attention. Today, as is often said, users (and their data) are the product, and the true customers are the advertisers.

If we compensate users rather than exploit them, the platforms will be motivated to serve that customer. This can be done in a way that still allows advertising: ads can defray subscription costs by crediting users for their attention and data, at negotiated rates that each user opts into, or not. If the user is the customer, advertisers will be motivated to make their ads valuable and acceptable to consumers.
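To make the arithmetic concrete, here is a minimal sketch of an opt-in ad-credit bill (the function, rates, and floor-at-zero rule are assumptions for illustration, not any platform’s actual pricing): the subscription fee is reduced by credits the user has explicitly agreed to earn with attention and data.

```python
# Illustrative sketch only: the names, rates, and structure here are assumptions,
# not a description of any real platform's billing system.

def monthly_bill(base_subscription: float,
                 ad_views: int,
                 credit_per_ad_view: float,
                 data_sharing_opted_in: bool,
                 data_sharing_credit: float) -> float:
    """Net amount a user pays: the subscription minus credits they opted into."""
    credits = ad_views * credit_per_ad_view
    if data_sharing_opted_in:
        credits += data_sharing_credit
    # Credits defray the fee but never turn the user back into the product:
    # the bill bottoms out at zero rather than paying users to be targeted.
    return max(base_subscription - credits, 0.0)

# Example: a $6/month plan, 150 opted-in ad views at 2 cents each,
# plus a $1 credit for sharing anonymized usage data.
print(monthly_bill(6.00, 150, 0.02, True, 1.00))  # -> 2.0
```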

Many, especially investors, worry that subscription fees would mean far fewer users and much lower revenue. But without the easy money of today’s “involuntarily extractive” ad model, the platforms will be motivated to find subscription models that consumers can afford. Market-based solutions could drive innovation in pricing nuance (as I previously suggested in Techonomy), which, combined with opt-in ad credits, could make some level of service affordable to all who want it. If the platforms cannot be convinced to see the merit in that, we can regulate to force such innovation (in ways I also suggested here at Techonomy), much as society put laws in place to require fuel economy in cars, though there is likely to be a cost to profit margins. Swimming across the tide may not seem the most direct route, but sometimes it is the only route.

How then to change the algorithms? Once the platforms are motivated to filter the content they distribute for quality, they will do better at it. Google’s original search algorithm proved decades ago that the internet can select for quality and nuance by harnessing the subtle signals of vast numbers of human users, and that it can do so at internet speed. But the obscene profits of the ad model were too seductive; quality stopped being the goal, and it was lost.
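That idea can be sketched in a few lines. Below is a minimal, simplified version of the PageRank approach (the toy graph and parameter values are invented for illustration): each link acts as a human vote, and votes from well-regarded pages carry more weight, so quality emerges from aggregated human judgment rather than raw engagement.

```python
# A minimal sketch of the link-analysis idea behind Google's original PageRank:
# each link is treated as a human vote for quality, and votes from well-regarded
# pages count for more. The toy graph below is invented for illustration.

def pagerank(links: dict[str, list[str]], damping: float = 0.85, iterations: int = 50) -> dict[str, float]:
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, outgoing in links.items():
            if not outgoing:  # dangling page: spread its rank evenly
                for p in pages:
                    new_rank[p] += damping * rank[page] / len(pages)
            else:
                for target in outgoing:
                    new_rank[target] += damping * rank[page] / len(outgoing)
        rank = new_rank
    return rank

# Toy web of four pages linking to one another.
toy_links = {
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
    "D": ["C"],
}
for page, score in sorted(pagerank(toy_links).items(), key=lambda kv: -kv[1]):
    print(page, round(score, 3))
```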

There are clear paths to creating quality-seeking algorithms, as I have outlined on my blog. Computers can deal with nuance when programmers want them to. Computers can mitigate conflict when programmers want them to. The shape of digital systems reflects the values of their makers.
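As one hypothetical sketch of what a quality-seeking filter could look like (the signal names and weights below are invented for illustration, not a specific design from my blog), consider a feed-ranking score that rewards quality signals, penalizes conflict signals, and caps the influence of raw engagement.

```python
# Hypothetical illustration only: the signal names and weights are invented.
# The point is that a ranking score can be made to reward quality signals and
# penalize conflict signals, if its makers choose to.

def rank_score(item: dict) -> float:
    """Score a post for feed ranking, favoring quality over raw engagement."""
    engagement = item.get("clicks", 0) + item.get("shares", 0)
    quality = (
        2.0 * item.get("trusted_source_score", 0.0)     # e.g. reputation of the author or outlet
        + 1.5 * item.get("reader_rated_nuance", 0.0)    # e.g. "this was thoughtful" feedback
    )
    conflict = 3.0 * item.get("incivility_score", 0.0)  # e.g. classifier estimate of hostility
    # Engagement still counts, but it is capped so outrage cannot dominate.
    return quality - conflict + min(engagement, 100) / 100.0

posts = [
    {"clicks": 9000, "shares": 4000, "incivility_score": 0.9,
     "trusted_source_score": 0.1, "reader_rated_nuance": 0.1},
    {"clicks": 300, "shares": 40, "incivility_score": 0.05,
     "trusted_source_score": 0.8, "reader_rated_nuance": 0.7},
]
print(sorted(posts, key=rank_score, reverse=True)[0]["clicks"])  # the calmer post ranks first -> 300
```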

Bezos and his son have put their fingers on the core problem. Understanding the problem points to how to solve it. Let’s hope we have not already been swept too far out to sea.