In recent years, Big Tech has faced a barrage of criticism, for good reason. There’s the news of Russia’s interference in U.S. elections. There’s growing concern about social platforms, particularly Facebook, tracking consumers’ behavior for profit and even undermining democracy. Google, meanwhile, has seen its reputation plunge over a perceived lack of transparency and very public employee walkouts over its handling of sexual harassment, gender inequity, and LGBTQ rights. It isn’t entirely surprising, then, that only one pure tech company – Microsoft – appears on the Reputation Institute’s list of the world’s most reputable firms. On a variety of fronts, Microsoft (number 5 on the list) has operated with remarkable transparency and ethics, and has worked hard to link its products and services to a higher social purpose.
But the ultimate question, at the moment, is this: What is tech’s responsibility to society? It’s a complicated question, and there are no easy, singular answers.
The question comes up frequently in the realm of artificial intelligence, where many wonder how much of AI’s well-documented bias is rooted in the fact that most of the technology is still developed by monochromatic, overwhelmingly male teams.
Kai-Fu Lee – the former president of Google China, and an author whose work partly centers on AI – says the technology’s bias may not be as bad as we think because, well, humans are deeply biased too. He points to one key example: Israeli judges were found to hand down harsher sentences when they were hungry. He also points to banks, whose loan-approval AI, trained on customer data, favors the applicants who appear most likely to repay. “What if gender or race correlates with the likelihood to give back?” Lee asked during a recent Techonomy discussion in Wipro’s pavilion in Davos, on the sidelines of the World Economic Forum. In Lee’s view, there are solutions to bias in AI: we can remove biased elements that society won’t tolerate, like gender or racial bias. “It’s not hard,” Lee says, adding: “You just hold out some data, and check the…data against certain parameters. One could create a tool that gives warning about bias. You’d just need to train engineers.”
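Lee doesn’t spell out how such a tool would work, but the kind of check he describes can be sketched in a few lines: score a model’s decisions on held-out data, compare outcome rates across a sensitive attribute, and emit a warning when the gap is large. Everything here – the field names, the loan example, and the 0.8 disparity threshold (the so-called “four-fifths rule” used in U.S. employment law) – is an illustrative assumption, not anything Lee or Microsoft has published.

```python
# Sketch of a bias-warning check on held-out data: compare a model's
# approval rates across groups and flag any group whose rate falls far
# below the best-treated group's rate. Field names and the 0.8
# threshold are assumptions for illustration.
from collections import defaultdict

def approval_rates(records, group_key="gender", decision_key="approved"):
    """Fraction of positive decisions for each group in the held-out set."""
    totals = defaultdict(int)
    approvals = defaultdict(int)
    for r in records:
        totals[r[group_key]] += 1
        approvals[r[group_key]] += 1 if r[decision_key] else 0
    return {g: approvals[g] / totals[g] for g in totals}

def bias_warnings(records, group_key="gender", decision_key="approved",
                  threshold=0.8):
    """Warn when a group's rate is below `threshold` times the top rate."""
    rates = approval_rates(records, group_key, decision_key)
    best = max(rates.values())
    return [f"{g}: approval rate {rate:.2f} is under {threshold:.0%} "
            f"of the top rate {best:.2f}"
            for g, rate in rates.items() if rate < threshold * best]

# Example: loan decisions on a tiny held-out sample
held_out = [
    {"gender": "F", "approved": True},  {"gender": "F", "approved": False},
    {"gender": "F", "approved": False}, {"gender": "M", "approved": True},
    {"gender": "M", "approved": True},  {"gender": "M", "approved": False},
]
print(bias_warnings(held_out))  # flags the "F" group (1/3 vs. 2/3)
```

A real audit would go further – checking error rates as well as approval rates, and proxies that merely correlate with the sensitive attribute – but the core of Lee’s point holds: the mechanics of the warning itself are simple arithmetic on held-out data.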
Lee’s view is particularly interesting, given that he co-chairs the World Economic Forum’s AI ethics task force with Brad Smith, Microsoft’s president and top lawyer. Smith appeared with Lee at the session in Davos. Microsoft is among the handful of companies that have generated headlines for refusing to sell law enforcement authorities access to its facial recognition technology, mainly because of concerns about potential human rights abuses. Facial recognition systems have been found to have a high error rate, especially among people of color. The company is taking positive action on this front in its home state, Washington, and globally. “This is an area where there’s substantial cause for optimism,” Smith says.
During the conversation, Smith warned that while technology is a powerful tool that connects people from disparate backgrounds, it has too often been weaponized through cyberattacks, voter suppression, and other threats to democracy. This is especially evident in journalism, which technology has disrupted for much of the last quarter century. While technology has created platforms for an ever-broadening range of voices to create and distribute content, it has also given rise to fake news. “We should be pessimistic about the ability of tech, in the long-term, to detect fake news,” Smith says. “But we should be optimistic about tech’s ability to protect legitimate media in new ways. We shouldn’t assume that tech can completely solve every problem. But it’s a fundamental tool.”