Online data privacy has always been a contentious issue, but generative AI has turned it into a minefield. As AI becomes more sophisticated, citizens will want to know just how much of their data is available as a training resource. Providing these answers will require both private companies and government agencies to cooperate and come up with guidelines as the nascent technology develops. However, expecting AI to handle private data responsibly might be premature, considering that humans themselves often fall short on that count.

Three experts weighed in on this topic at the Techonomy 23: The Promise and Peril of AI conference in Orlando, Florida. Cosmin Andriescu (cofounder of Lumenova AI), Nia Castelly (cofounder of Checks by Google), and Bryan McGowan (U.S. trusted AI lead at KPMG) focused mostly on the ethical growth of AI, and what role governments might play in regulating it. Analyzing private user data, for better or worse, is one way that AI language models can grow.

Castelly’s platform, Checks, uses AI to ensure data privacy compliance in the Google Play Store. “I think it’s about information,” she said. “If the consumer knows how you’re going to use the technology, and how it’s going to benefit them, then that’s an informed decision, and an easy tradeoff.”

Castelly also discussed the most common problem she’s encountered in companies’ AI compliance work: workers are simply strapped for time.

“You don’t have time to talk. ‘Oh, I thought I meant to tell you, we should change this,’” she said. “Or, ‘I thought they actually took that out.’ The amount of time that you need to take to do these, whether it’s spreadsheets, or multiple meetings, time is something most of us don’t have. That’s where the mistakes or things that you overlooked can happen.”

Of course, compliance standards can vary tremendously, depending on your company, your industry, and even where you work. The experts compared July’s European Union AI Act with October’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence in the U.S.

“I think our recent executive order goes a long way in at least aligning the U.S. in what we should focus on,” said McGowan. “In terms of being closer to, I’ll call it a ‘published’ framework or set of regulations, the EU AI Act is out ahead of where we are today.”

One difference between the two regions that McGowan pointed out was sustainability. While both documents call for AI to be developed in an environmentally responsible way, investors in the U.S. seem more concerned with ESG (environmental, social, and governance) factors.

“Most of the funders are asking for term sheets related to sustainability now. It’s also coming up in terms of questions from investors and activists,” he said. “You’re now building these energy-consuming large language models. How do you balance that with the statements that you’ve made publicly around ESG, sustainability, et cetera?”