When we read the history books decades from now, November 30, 2022 will mark a turning point. That was the day OpenAI unleashed ChatGPT on an unsuspecting public. Five days later, ChatGPT had 1 million users. And just like that, we entered the biggest and possibly most consequential beta test of humanity.

Reports of hallucinating, obstreperous chatbots and talk of stochastic parrots set off a media frenzy. Five months into the experiment, we have hundreds of competing AI chatbots vying to do everything from ruling on legal matters and diagnosing our medical conditions to investing our money, hiring us for our jobs, and yes, writing this column.


Like stages of grief, we are cycling through the stages of public experimentation with AI. Very publicly.

Stage 1: Marvel. “Isn’t this amazing?”

Stage 2: Mastery. “We can break this because we’re smarter.”

Stage 3: Rational terror. “Gee, this is getting scary. They’re learning faster than I am and I have no idea where they’re learning what they know.”

Stage 4: Backlash. “We’ve got to put some guardrails around this thing before it does serious damage.”

The Backlash Scenarios

Backlashes come in all shapes and forms, from mild concern to dramatic overreaction. It’s useful to look at the broad range of reactions we’re seeing to the birth of publicly available AI. Here’s a look at the primary concerns.

Information Will No Longer Be Free

Large language models like ChatGPT are trained on as much of the Internet as they can gobble up. Vast repositories of human information housed on places like Reddit, Twitter, Quora, and Wikipedia, as well as books, academic papers, and chat logs, are scraped as training grounds for these models. In the past, many platforms have made their Application Programming Interfaces (APIs), the means by which outside entities can tap into a program’s data, freely available.
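To see just how low the barrier has been, here is a minimal sketch of that kind of free access, using Reddit’s public JSON endpoint. It is an illustration only; the endpoint’s terms and rate limits are Reddit’s to change, and the User-Agent string here is invented:

```python
# Minimal sketch: pulling public posts from Reddit's open JSON endpoint.
# Illustration only; access terms and rate limits are set by Reddit.
import requests

def fetch_top_posts(subreddit: str, limit: int = 5) -> list[str]:
    """Return the titles of a subreddit's current top posts."""
    url = f"https://www.reddit.com/r/{subreddit}/top.json"
    resp = requests.get(
        url,
        params={"limit": limit},
        headers={"User-Agent": "example-scraper/0.1"},  # Reddit asks for a descriptive UA
        timeout=10,
    )
    resp.raise_for_status()
    posts = resp.json()["data"]["children"]
    return [post["data"]["title"] for post in posts]

if __name__ == "__main__":
    for title in fetch_top_posts("technology"):
        print(title)
```

Multiply those few lines across thousands of subreddits and you have a no-cost training corpus.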

Suddenly, “freely available” means no-cost training data for companies like OpenAI. Should companies whose businesses rely on their bodies of knowledge be giving this away? It’s doubtful “free” will continue as companies demand compensation for the data feeding the AI beast.


The NYT recently reported that Reddit wants to begin charging companies for access to its API. Twitter is also cracking down on the use of its API, which thousands of companies and independent developers use to track millions of conversations. Microsoft announced a plan to offer large businesses private servers for a fee, so they could keep sensitive data from being used to train ChatGPT’s language model and prevent inadvertent data leaks.

Then there are businesses that will opt out or silo their data and communications, fearful of feeding ChatGPT’s unquenchable thirst. Companies including Samsung have banned the use of generative AI tools like ChatGPT on their internal networks and company-owned devices. Financial firms including JP Morgan, Bank of America, Citigroup, Goldman Sachs, Wells Fargo, and Deutsche Bank have either banned or restricted the use of AI tools.

The implications of both scenarios are vast. Large platforms have, in the past, made their APIs available to the public for research, third-party development, and more. If they begin asking for compensation, all bets are off. The entire ecosystem of third-party development could be upended. So could the very nature of an open Internet.

The natural extension of opting out is that every news outlet, blogger, and other contributor to the web either starts asking for compensation or silos its information. Information would be held hostage.

Great Minds Call for a Moratorium

Dr. Geoffrey Hinton, a Google researcher often called “the Godfather of AI,” very publicly announced he would leave Google so that he could speak more freely about the risks of AI.

Before Hinton’s departure came that well-publicized “bro” letter, in which industry insiders including Steve Wozniak, Elon Musk, and other typically impetuous thinkers publicly asked OpenAI’s founders to pause further training on its learning model until it was better understood. That’s probably not going to happen. ChatGPT has already opened Pandora’s box and let the genie out of the bottle.

Regulation

Italy banned ChatGPT. So did China, Cuba, and a host of other countries. Regulators and legislators worldwide are scurrying. The EU has been leading the way for some time: a 2020 European Commission white paper, “On Artificial Intelligence—A European Approach to Excellence and Trust,” and its subsequent 2021 proposal for an AI legal framework examine the risks posed by AI. Following suit, the White House put forth its own Blueprint for an AI Bill of Rights. It is not codified into law; rather, it is a framework created by the White House Office of Science and Technology Policy to protect the American public in the age of artificial intelligence. This week the White House made its next move, announcing a series of steps to fund AI research institutes, systematically assess generative AI systems, and meet with the CEOs of large AI platforms to inform policy.

Chamath Palihapitiya, the founder and CEO of Social Capital, a large venture fund, argues that “given the emergence of so many different AI players and platforms, we need a public gatekeeper. With an effective regulatory framework, enforced by a new federal oversight body, we would be able to investigate new AI models, stress-test them for worst-case scenarios, and determine whether they are safe for use in the wild.”

Regulators are still debating what to do about everything from existing social media platforms to cryptocurrency. They are unlikely candidates to move swiftly on the AI issue.

You Can’t Regulate What You Can’t Understand

Looking elsewhere, one of the most clear-headed analyses comes from Tim O’Reilly, author and publisher of O’Reilly Media. Like Palihapitiya, O’Reilly penned an open letter calling for increased AI transparency and oversight, though not necessarily governmental. The first step, says O’Reilly, is “robust institutions for auditing and reporting.” As a starting point, he asks for detailed disclosure by AI creators in some form of public ledger.
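To make O’Reilly’s proposal concrete, here is a hypothetical sketch of what a single entry in such a ledger might record. Every field name is invented for illustration; the open letter calls for disclosure and auditing but prescribes no schema:

```python
# Hypothetical sketch of one entry in a public AI-disclosure ledger.
# All field names are invented; no actual schema has been proposed.
from dataclasses import dataclass, field

@dataclass
class ModelDisclosure:
    model_name: str                   # e.g. "ExampleLM-1"
    developer: str                    # organization releasing the model
    training_data_sources: list[str]  # high-level description of training corpora
    known_limitations: list[str]      # failure modes surfaced during testing
    audit_reports: list[str] = field(default_factory=list)  # links to third-party audits

# One invented entry, purely to show the shape such a disclosure might take.
entry = ModelDisclosure(
    model_name="ExampleLM-1",
    developer="Example AI Co.",
    training_data_sources=["public web crawl", "licensed book corpus"],
    known_limitations=["fabricates citations", "weak arithmetic"],
)
print(entry.model_name, entry.training_data_sources)
```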

There’s also a sweet irony in all these scenarios. Technology cheerleaders who have always been leery of regulation are now begging for some regulatory body to step up to the plate. Cynics might call this enlightened self-interest, but I suspect it’s because they truly understand the magnitude of this technology.

There’s a long-standing precedent of oversight by committees when new technologies hit the market. A recent example is the Metaverse Standards Forum, which I’ve written about before, where important issues from interoperability to privacy are tackled by committee. In the world of biology and genetics, CRISPR technology is also wending its way towards a regulatory framework.

The definition of a backlash is “a strong, adverse reaction to a controversial issue.” Economic models of compensation will be worked out. An outright ban is folly. A moratorium seems unlikely. Regulators will forever be behind the eight ball. That leaves us with oversight, transparency, and a public ledger as our best defense. The backlash to AI is not a human hallucination. It’s real, and deservedly so.