At Techonomy this year, the conversation about artificial intelligence kept circling the same conclusion: we’re heading into a period where the federal government simply isn’t going to regulate AI. Not meaningfully. Not soon. That vacuum isn’t hypothetical—it’s already shaping how the industry builds, invests, and competes.

During a panel on building trustworthy AI, Amba Kak, executive director of the AI Now Institute, reframed the debate in a way that stuck with me. “Rather than have trustworthy AI or be asked to trust companies more,” she said, “we create a market environment that doesn’t require us to trust companies.” In her view, the challenge isn’t whether AI systems behave as intended; it’s whether the public has any real leverage over the systems that increasingly govern their digital lives. “If there’s a crisis today in the AI market,” she added, “it’s one of agency.”

And agency is precisely what the federal government has stepped away from.

Over the summer, Congress came close to passing a decade-long moratorium that would have barred states from enforcing their own AI regulations. Justin Hendrix, C.E.O. of Tech Policy Press, reminded the audience that it “made it almost to the finish line” before being pulled at the last moment. Even so, the near-miss revealed Washington’s posture: regulate less, innovate faster, and trust the market to sort itself out later.

Hendrix pointed to the White House’s own language—an explicit commitment to “achieve and maintain unquestioned and unchallenged global technological dominance.” That’s not regulatory guidance; it’s a geopolitical mission statement. In that frame, domestic guardrails look less like protections and more like self-inflicted handicaps.

Kak went further, describing what she and her colleagues call the “AI bailout.” She noted that the federal government is offering “premature and exceptional…red carpet treatment” to the industry. This includes opening “thousands of acres of federal land” for new data centers, fast-tracking permits, and even providing loans to foreign governments so they can buy American AI systems.

This isn’t a regulatory environment. It’s an industrial policy designed by and for the AI industry.

The States Step Forward—And Hit a Wall

With Washington stepping back, states have become the only meaningful arenas for AI governance. And despite the usual assumptions, the emerging picture isn’t just blue states making rules while red states opt out.

Kak noted that Texas passed a Responsible AI framework that’s “on par with what we’ve seen in Colorado and California.” Bipartisan interest exists. But so does bipartisan pressure.

She described “an army of tech lobbyists” working the hallways of state legislatures, stripping bills down to “the floor, not the ceiling” of what’s needed. Hendrix added a concrete example: when a New York assembly member backed a modest transparency bill, a PAC backed by Andreessen Horowitz announced plans to target him with part of its $100 million campaign fund. “The industry appears to have the view that there is no appropriate regulation of artificial intelligence,” he said, “or at least none they’ve seen yet.”

The message is clear: states may experiment, but only within boundaries the AI industry will tolerate.

A Stack Built for Concentration

The pace of innovation is not the only issue. The structure of the market is narrowing who gets to innovate at all. Hendrix captured the stakes in a single line: “AI is perhaps the greatest technology ever invented for concentrating power.”

Today, the companies that control cloud infrastructure also build the leading models, own the distribution platforms, and dominate the consumer interfaces. It’s a stack that reinforces itself. Kak described it as “one of the most toxic structural elements of the AI market,” noting that firms operating the infrastructure “play at every single level of the stack.”

Her solution—structural separation—sounds extreme only because we’ve forgotten that the U.S. has done this before in railroads, telecom, and energy. “If you play in the cloud market, you can’t play in other markets,” she said. It’s a straightforward principle, but one that would require a federal government willing to challenge entrenched platforms.

Whenever regulation surfaces, the conversation inevitably shifts to China. Kak called “What about China?” one of the “great successes of the big tech lobbying machine,” a question invoked “anytime you ask for any kind of accountability.” Yet the idea that American AI succeeds only if unregulated is already eroding. Models like DeepSeek—built with orders of magnitude less spending—suggest that efficiency can be as disruptive as scale. Kak argued this shows the U.S. industry is not “competing on the merits,” but is instead being “coddled by government.”

Meanwhile, China’s own AI action plan struck what Hendrix described as a “more open and conciliatory and diplomatic and multilateral approach.” In other words, the geopolitical narrative may not be as zero-sum as advertised.

The First to Feel the Impact: Kids

The human consequences of this regulatory gap become clearest when the discussion turns to children—a population already shaped by the last generation of unregulated platforms.

Kak pointed to the market incentives now governing AI firms. As anxiety about an AI bubble grew louder, the leader of one major AI company announced plans to “turn the dial up again on sycophantic” behavior—because users liked it—and even floated “age-gated erotica chatbots.” It was, she said, “an obvious sign of an industry…having to prove a revenue case…whatever it takes,” and “where young people have the most to lose.”

Hendrix urged the audience to read the lawsuits involving minors and AI-fueled harms. The transcripts, he said, pose a blunt challenge: “Take a hard look at what these men have built… and then ask yourself if you want to be in business with them.”

The Unregulated Years Ahead

For the next three years, AI innovation in the United States will operate in the wide-open space created by federal inaction. States will try to fill the gap. Lobbyists will test how far they can narrow it. And the public—already downstream of every major digital shift—will navigate the consequences.

We’ve chosen not to decide, and that choice has its own trajectory. The question now is whether we’ll recognize where it’s taking us before the next wave of AI systems decides for us.

Watch our full conversation below.