This is the cover story of the fall 2018 issue of Techonomy Magazine.
Never before has one company’s failure had such a devastating impact on the world. Facebook’s presence is truly global, as are the consequences of its failure to anticipate how its platform could be misused and abused. And yet never before has a single company been asked to fix a global problem.

Racists, autocrats, and purveyors of hate and disorder have found Facebook the perfect medium for spewing poison, normalizing it, and gaining adherents. Facebook is a broadcast platform for anyone, including those who would break the rules, fake the news, lie, and mislead the community. Societies around the world are reeling from the consequences. Politics and democracy are under duress. And thus far, Facebook does not have an effective way to fight back. It is only beginning to try, though a company spokesperson showed surprising contrition during a lengthy interview for this story.

What mystifies many is why Facebook did not take action sooner. “I am disappointed in how slowly they acted,” says Wael Ghonim, the Egyptian citizen who, as a Google employee, helped catalyze the Facebook-driven revolt in Egypt during the Arab Spring in 2011. “They should have been paying more attention as their system was aggregating more power globally. This is a very profitable company.”
Facebook received repeated warnings for at least six years from informed observers around the world that things were going wrong. But it appears to have almost entirely disregarded them. Gaining more users in any and all geographies was the company’s overarching priority, and it succeeded. The service was growing at an incredible pace into just about every country on earth. It is now a dominant medium of communication, often the single most dominant, in about 190 countries.
It expanded at full tilt into innumerable geographies where it lacked local expertise. But how could it possibly ensure its rules were being followed when nobody at the company spoke the local language or understood the nuances of the local culture? It couldn’t.
Yet Facebook suddenly became what amounts to the global town square. Sanjana Hattotuwa, a longtime human rights and digital activist based in Colombo, Sri Lanka, is appalled at Facebook’s persistent failures, which have periodically led to chaos and bloodshed there. But, he says, “I remain engaged with Facebook because I see it as so critical to the future of my country. It is intertwined with elections and democracy. It is the primary driver of news and information. It has no equal here.”

(Illustration by Mike McQuade)

There is tragedy and irony in this company’s history. Facebook empowers those who use it. That’s why so many of us found its emergence heartening. (I wrote a generally laudatory history in 2010, entitled The Facebook Effect.) But the company’s leaders, and particularly its all-powerful leader Mark Zuckerberg, have been so enamored of how it gave ordinary people new tools and contributed to building communities that they overlooked how much it also could undermine or even destroy those communities. The company badly underestimated what it would take to govern a “community” of 2.25 billion people. Facebook does engender connection, friendship, community building, and user empowerment for billions. But that does not reduce the gravity of the disastrous epidemic of misuse.
Finally, this year, Facebook recognized just how badly its platform was being abused by evil politicians and the worst kind of rabble-rousers. The company insists it is going all out to remedy the weaknesses it has identified and to improve oversight. Activists engaged in countering online hate and disinformation mostly agree that the company is trying hard to address the problems. But nobody except Mark Zuckerberg and Sheryl Sandberg seems to believe the measures being adopted now are sufficient.
Facebook began admitting there was a serious problem only after the 2016 U.S. presidential election, and even then reluctantly and slowly. The public heat grew unbearable once it became clear that false Russian government-sponsored propaganda and paid advertising on Facebook had flowed unimpeded, and may even have helped alter the election’s outcome. (A new book, Cyberwar: How Russian Hackers and Trolls Helped Elect a President, compellingly argues that it did.) The crisis deepened when we all learned that large amounts of personal data had been illicitly obtained from Facebook by the British political consulting firm Cambridge Analytica, and that some of it was probably used to inform the Trump campaign’s strategy. The scrutiny of congressional investigations, hearings, and the probing of Special Counsel Robert Mueller forced the company to pay attention.
For all that, Facebook’s public rhetoric has until recently implied that this was primarily a U.S. problem. In fact, it is a momentous crisis in just about every country. The service is being manipulated by evildoers in most if not all of the 133 languages in which it operates. In many less-affluent countries, huge numbers of people have come onto the internet for the first time only recently, on mobile phones, and are often naively credulous when faced with digital fiction. Facebook’s future growth will be greatest among the inexperienced and previously disconnected, which makes getting a handle on this problem all the more urgent.
Hattotuwa has been involved in both human rights and internet issues ever since he set up the first Facebook page in South Asia devoted to media, in 2007. “But after the civil war in Sri Lanka ended, around 2012,” he says, “people began coming onto the platform and using it to promote hate and harm. The company had zero—zero—oversight capability on the platform in my language. I started telling them that, back then.”
But Facebook didn’t listen, even as Hattotuwa and the Centre for Policy Alternatives, where he worked, began periodically sending Facebook detailed, well-documented formal reports about unregulated hate speech on the service in Sri Lanka. One report came in 2014, a second in 2015, and a third in 2016, on how hate speech had affected local elections in 2015. The final report includes this line: “The relatively unshackled freedom of expression found in social media… invited unchecked expressions of hateful and defamatory material targeting candidates.” It was prophetic of what the U.S. itself would experience the following year. But, says Hattotuwa, “We never heard back from the company.” He was not alone in raising alarms. Other activists and experts in Vietnam, the Philippines, India, the Balkans, and Mexico, among other places, were warning that Facebook was being weaponized. “It was like giving knives to toddlers,” says Hattotuwa.
Over time, he says, the company’s response actually became worse than nothing. By 2018 Facebook had finally hired at least some moderators in Sinhala, the language of the mostly Buddhist people who constitute Sri Lanka’s majority. But they were not doing their job. A series of false and incendiary posts on Facebook demonized the country’s Muslim minority and helped precipitate a horrific outbreak of ethnic fighting. “At the height of the violence, a Facebook post called for a pregnant mother to be killed—it was very explicit—‘cut open her womb and kill her unborn children like dogs,’” says Hattotuwa. “We complained, and Facebook got back a couple of days later and said it was perfectly fine. That said to us that not only was the company’s oversight insufficient, but that its moderators were themselves racist.”
Only when Hattotuwa and other activists in Sri Lanka and several other countries wrote a series of open letters to Mark Zuckerberg in the spring of 2018 did Facebook finally respond. (Here is the letter from Vietnamese activists.) Media heat was focused on the company because of congressional hearings where Zuckerberg was testifying, making the letters harder for Facebook to ignore. “The company did not want to look at what was happening beyond user growth,” continues Hattotuwa. “I get pissed off when Sheryl Sandberg and the company constantly repeat that they were slow to respond. That suggests they had identified the problem and for some reason did not do what they should have. But I saw no response at all until this past March. That to me is a moral failure.”
Asked about Hattotuwa’s story that Facebook didn’t remove a call to kill mothers and babies, Debbie Frost, who for over a decade has headed Facebook communications and public affairs outside the United States, acknowledged the company’s failure. “I can’t explain that to you,” she says. “I wish we had done a better job and not been so slow and had paid closer attention. He’s exactly right. Only recently have we been deploying the full weight of our integrity efforts to do a better job.”
Sri Lanka was particularly vulnerable to voices of hate because of a history of deep ethnic division and a horrific recent civil war. But its experience was not unique. In Myanmar, India, and Indonesia, just to name three countries, there have also been repeated incidents of ethnic violence and killings encouraged by incendiary and dishonest posts on Facebook.
Most recently, among numerous related stories emerging worldwide, the New York Times has documented in shocking detail how adroitly and malignly the mostly Buddhist military leaders of Myanmar misused Facebook and violated its unenforced rules in order to encourage genocidal hatred towards the country’s Rohingya Muslim minority. The article reports that “Myanmar military personnel…turned the social network into a tool for ethnic cleansing.” Facebook is Myanmar’s overwhelmingly dominant media platform (along with Facebook Messenger), and it was used to encourage hatred on both sides, not only to foster genocidal killing of the minority but, cynically, to create a perception that only the military could maintain order. As many as 43,000 Rohingya may have died in the conflict, and over 700,000 of them have fled the country to neighboring Bangladesh. (The image at top is based on a photo of panicking Rohingya Muslim refugees.)
And Facebook’s ungoverned platform has harmed society in another disturbing and parallel way. In country after country, dishonest, corrupt, and anti-democratic political parties and leaders have been coming to the fore, with few scruples about using any means to win elections and retain power. People like Recep Tayyip Erdogan in Turkey, Rodrigo Duterte in the Philippines, Viktor Orban in Hungary, and ultra-right or neo-fascist leaders in Poland, Slovenia, Austria, and elsewhere are all aggressive users of Facebook, both in the electoral fray and once in office. Similar leaders are advancing in Brazil and Italy. Using both paid advertising and unceasing propaganda posts, such leaders, parties, and their allies use Facebook to advance their agendas, slander opponents, and typically seek to whip up a nationalist frenzy.
To its credit, the company no longer denies that it has done a poor job of preventing such abuse. Facebook spokesperson Frost spent over an hour on the phone being confronted with many of these points. She did not argue. “I accept everything you’ve said. We can only move forward and move quickly,” she said. “There were gaps in our offering that made it very difficult for people to understand our rules, for them to report things, and for us to effectively deal with the reports.”
Facebook is now hiring more content reviewers, especially in countries where oversight failures have until now been so grievous, like Sri Lanka and Myanmar. It has embarked on new ways to promote digital literacy and to educate users about the platform’s policies. (It hadn’t even translated its community standards into many languages until years after use took off in those regions.) It has committed to increasing the number of Sinhala-language moderators in Sri Lanka sevenfold, though Hattotuwa complains the company will not even tell his group how many there are. In Sri Lanka, it also temporarily took the unusual step of putting its “community standards” at the top of every single user’s news feed, in three languages. In Myanmar it created a cartoon-style version of the standards for people with low literacy. It has taken similar measures in the Philippines. To the extent that these steps help, Facebook may deploy them in other countries and languages around the world.
The company has made it easier to report abuse and to get a response. On Facebook Messenger in Myanmar, for example, the company made it possible for the first time to report abusive group messages directly inside the service. For political advertising, it has in several countries introduced new transparency requirements that make clear who placed the ads. It has increased security for political pages. And it has set up what it calls a “war room,” so engineers and other employees at the company’s headquarters in Menlo Park, California, can monitor and protect electoral campaigns around the world. The company deserves credit for trying.
But for all the progress, Facebook’s excessive caution in admitting mistakes and taking aggressive remedies has become a problem in itself, one that undermines its credibility and raises serious questions about how fast problems can be resolved. On Techonomy’s own stage two days after the 2016 U.S. presidential election, Zuckerberg famously said it was a “crazy idea” that fake news on Facebook had affected the vote, even though others at the company were by then already becoming aware of the problem.
The pattern has continued. In her September testimony to the Senate Intelligence Committee, COO Sheryl Sandberg bragged about being able to combat fake news and hate speech in 50 languages. That sounds good, except that Facebook operates in 133 languages. The company’s own initial reports in early 2017 suggested that only 10 million Americans had seen fake Russian campaign ads and disinformation about the presidential election. It later upped the figure to 126 million. In Sri Lanka, it took the government actually shutting off access to the service across the entire country, in early 2018, before Facebook responded to repeated, desperate complaints from civil society activists and government officials about the service’s role in encouraging ethnic violence. In Myanmar, it removed the ruling general’s Facebook account for promoting racist hatred, but only after the United Nations had issued a series of scathing reports and recommended the general be prosecuted at the International Criminal Court.
And some of Zuckerberg’s own claims have smacked of naivete, even self-congratulation. He has enormous confidence in the power of artificial intelligence to help govern speech and prevent abuse, and let’s hope he’s right. But in a July interview with Kara Swisher of Recode, he suggested that by the end of 2019 the company will have been able to “fully retool everything at Facebook to be on top of all the content issues and security issues.” The reality is that, given the resourcefulness of miscreants, this problem will never be fully solved.
Zuckerberg, as I know from numerous and extensive interviews with him since 2006, is meticulous and methodical, and not prone to looking back at errors. Rather, he focuses on what’s next. The evidence suggests he is making this effort a top priority. Every conversation with Facebook executives, on and off the record, underscores that they realize the untenable situation their company is in. But to continue battling this problem with the necessary urgency will be extremely expensive, and will almost certainly continue to reduce the company’s earnings. Luckily for Facebook, advertisers still have no better medium for targeting their messages at consumers. It will likely remain extremely profitable for some time to come.
The company recently hired former British Deputy Prime Minister Nick Clegg, indicating a deepening seriousness of intent. But his arrival hardly guarantees success in remediating such profound weaknesses in Facebook’s systems. And ironically, Clegg’s focus until recently has been opposing the UK’s exit from the European Union, which he calls “the greatest act of self-harm committed by a mature democracy.” Evidence increasingly suggests that the country’s vote for Brexit was substantially affected by fake news and distorted and dishonest advertising on Facebook, partly paid for by Russian government agents.
The last 150 years of global progress towards universal democracy may be imperiled. But it’s not only Facebook’s fault. And the company can’t fix the problems alone. Karen Kornbluh served as U.S. ambassador to the Organization for Economic Cooperation and Development (OECD) under President Barack Obama and is now senior fellow for digital policy at the Council on Foreign Relations. “The leaders of Facebook are being asked by the market to generate growth and continued profits,” Kornbluh explains, “but so far there’s no clear ask from society or government to do anything different. Their motto of ‘move fast and break things’ made sense for an internet that was a tiny piece of the economy and society. But when our whole lives moved online, we needed to have a societal conversation. And we didn’t have that. Shame on all of us. So the question, really, is what is society going to do?”
A panel on this topic will close out the upcoming Techonomy 2018 conference on November 13 in Half Moon Bay, California, moderated by the author and including Wael Ghonim, Sanjana Hattotuwa, and Roger McNamee, author of the upcoming Zucked: The Education of an Unlikely Activist. (Facebook said it has no executives with time to come and speak on the company’s behalf.)

Kirkpatrick is founder of Techonomy and author of The Facebook Effect: The Inside Story of the Company that is Connecting the World.