In 2019, AI policy analyst Mutale Nkonde wrote an essay for Harvard Business Review entitled, “Is AI Bias a Corporate Social Responsibility Issue?” In it, she referenced some headline instances, such as an Amazon hiring algorithm that disadvantaged women, but then focused on the upside. “There’s an opportunity here for businesses that want a first-mover advantage in differentiating themselves in the marketplace by using fair and accurate AI,” wrote Nkonde, who went on to found the responsible AI organization AI for the People.

The response was overwhelming, though not in a good way. “Everybody laughed at this,” said Nkonde. “They were like, ‘Oh my god, Debbie Downer…She doesn’t want us to make any money,’” she recounted at the Techonomy 23: The Promise and Peril of AI conference. (Disclosure: Nkonde is a personal friend.)

But there was one exception—the highest-valued company in the world. “The reason that I got an email from Apple’s marketing team…was that their point of market difference is the privacy policy,” said Nkonde, who later consulted with the company on AI strategy.

[wrth-embed link="https://www.youtube.com/embed/7QJm46q_VSY?si=bzWjkAKmsNy72Aix"]

In AI development, 2019 was a geologic era ago, and the zeitgeist in government and business has changed dramatically. With the explosion of ChatGPT for text, Midjourney for images, and the ever-growing list of their competitors, everyone is coming to see the power of AI for good and for bad.

But the dangers were far more than theoretical, even four years ago. Algorithms have long been involved in deciding who gets hired, gets a mortgage, gets medical care, and even goes to prison. “With colleagues at Stanford and MIT, [we] were looking at the margins of the margins,” said Nkonde. “So, what does this mean for women? What does this mean for people of color? What does this mean for poor people?”

To address these issues, Nkonde collaborated with U.S. Representative Yvette Clarke (D-NY) to draft the Algorithmic Accountability Act of 2019, which would require companies to assess “high-risk systems” that involve personal information or make automated decisions. Nkonde described the approach as similar to the way the FDA evaluates the safety of products we consume. AI is “techno-social,” said Nkonde, meaning that it’s not simply a technology in a vacuum, but a phenomenon that affects people’s lives in very real ways.

But the time wasn’t right—in Congress or the White House. “We were dealing with [Senate majority leader] Mitch McConnell and [President] Trump—not that this is a political thing,” said Nkonde. “But I was going in and basically saying, ‘Don’t make AI racist.’ And they were just all looking at me and being like, ‘It’s not racist, go away, stop bothering us.’” Even with 31 cosponsors (all Democrats), the act died in the 116th Congress.

Four years later, the Biden Administration has issued a sweeping executive order on “safe, secure, and trustworthy artificial intelligence,” setting guidelines for the development of the technology. And the Algorithmic Accountability Act has been reintroduced by Rep. Clarke in the House and by Democratic Senators Ron Wyden (OR) and Cory Booker (NJ) in the Senate. The momentum still comes mostly from the liberal side, but tech companies are realizing that they have to take issues of fairness, bias, and accuracy into account (as the existential turmoil at OpenAI over the pace of AI development shows).

Like it or not, companies employing AI will have to build in privacy and fairness safeguards if the industry is to keep growing. For instance, investors and AI developers are excited about using consumer data to further personalize services and advertising, said Nkonde.

“But in order to personalize products, we have to give up so much personal data that…then it becomes an aggregate total of data for which there is no governing system,” she said.

“Everything that’s left over could be used at any other point and turned into another technology. What do we do with that? Is that something that we want to collect? How do we protect innovation?” she asked. “Because we don’t want to say, ‘Stop collecting data.’ Because what we’re then saying is ‘Stop developing AI.’ That’s not a national priority, because China and other governments are going to keep doing it.”

Nkonde also posited that responsible AI has to be a priority throughout the industry, not just at the established companies and startups developing the technology.

“This is something that we speak to LPs [limited partner investors] and VCs [about] all the time,” she said. “We need to go to the top of the cap table, and then we need to set standards there.”

But responsible AI isn’t something that most of the industry would have initiated and developed on its own, according to Nkonde. There’s a role for government in setting standards, although the approach can vary. The E.U., for instance, is highly detailed and prescriptive in its draft AI Act. That doesn’t fit the light-touch approach favored in the U.S., however.

“The intellectual property that we have in this country is unparalleled. So how can we put it towards these problems? And then I think we want opportunities,” she said, providing her take on the U.S. government’s philosophy. “America is a capitalist country, and people want to get rich. So how are we going to get rich from this? And then how are we going to be richer than everybody else?”