AI could be a dream come true for unethical marketers, as it makes spamming, misinformation, and hyper-targeted ads easier than ever before.

Kevin Lee, CEO of the eMarketing Association, took the theme of Techonomy 23, “The Promise and Peril of AI,” to heart. His talk, “Marketing in the Age of Machines,” covered how AI could make marketing more effective than ever, for both good and ill. On the one hand, AI algorithms can help advertisers optimize their websites, discover underserved markets, and compete with rival firms. On the other, the same tools can fill the web with spam, deceive the public, and misuse personal information.

“Now, the more sophisticated SEO spammers are not just using a single AI to generate content,” Lee said. “They’re actually using several AIs to generate and then re-synopsize the content. Then they’re putting it through a second AI to introduce errors on purpose, so it looks more human.”
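To make the pipeline Lee describes concrete, here is a minimal Python sketch of its structure. The function names (generate_draft, synopsize, inject_errors) are hypothetical stand-ins for calls to separate generative models, not real APIs or actual spam tooling:

```python
import random

def generate_draft(topic: str) -> str:
    """Stage 1: a first AI drafts keyword-stuffed content (stubbed here)."""
    return f"An article about {topic}. " * 3

def synopsize(text: str) -> str:
    """Stage 2: a second AI rewrites and condenses the draft so it no
    longer matches the first model's detectable style (stubbed here)."""
    return text.replace("An article about", "Everything you need to know about")

def inject_errors(text: str, rate: float = 0.02) -> str:
    """Stage 3: deliberately introduce small typos so the output
    looks more human to AI-content detectors."""
    chars = list(text)
    for i in range(len(chars)):
        if chars[i].isalpha() and random.random() < rate:
            chars[i] = random.choice("abcdefghijklmnopqrstuvwxyz")
    return "".join(chars)

def spam_pipeline(topic: str) -> str:
    # Chain the stages exactly as Lee outlines: generate, re-synopsize,
    # then corrupt on purpose.
    return inject_errors(synopsize(generate_draft(topic)))

print(spam_pipeline("budget travel"))
```

The point of the multi-stage design is evasion: each pass strips away the statistical fingerprints of the model before it, which is what makes this kind of spam hard for detectors to flag.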

Lee believes that Google will have to develop better anti-spam tactics to deal with this influx of AI-generated nonsense. However, there’s no guarantee that Google will gain—or retain—the upper hand.

“There certainly have been situations where the spammers have won,” he said. “In your email inbox, or your SMS inbox, or your Google search result, at times.”

Lee also addressed how marketing companies can gather vast amounts of personal user data, which lets them craft “hyper-targeted” ads for individuals. Responsible companies, Lee argued, ask for consent before sending hyper-targeted ads, allow users to opt out, and clearly mark any AI-generated content. But not every marketing company operates that way.

“Just because you know something about the individual doesn’t necessarily mean you should transmit it in a hyper-personalized message,” he said.

As an example, Lee proposed a hypothetical scenario in which a PAC or political party uses targeted AI spam to sway the outcome of an election. Marketers could reach individuals at a granular level thanks to “zip-plus-four” labeling: nine-digit ZIP codes that can cover an area as small as a city block. If the messages were subtle and their origin obscured, the press might never even notice.

[wrth-embed link="https://www.youtube.com/embed/lwLBzxAMw_Q?si=0uSz74wm0EPg0Q43"]

“When the anti-spamming laws were passed, they exempted PACs and nonprofits…The level of transparency in knowing who’s exactly sending you that message is not always clear,” he explained. “You can take a zip-plus-four, and just inundate people…That means you could end up influencing an election, and no one even knows how it was influenced.”
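For a sense of the granularity involved, here is a minimal Python sketch of grouping a contact list by ZIP+4. The sample data is invented; the point is that each bucket is roughly one city block, small enough to tailor a message to a handful of households:

```python
from collections import defaultdict

# Invented sample contacts. A ZIP+4 code is the five-digit ZIP plus a
# four-digit suffix that can narrow delivery to a single block or building.
contacts = [
    {"name": "A. Smith", "zip4": "10001-4356"},
    {"name": "B. Jones", "zip4": "10001-4356"},
    {"name": "C. Wu",    "zip4": "10001-7291"},
]

# Group recipients by ZIP+4 so each bucket can get its own message.
by_block = defaultdict(list)
for person in contacts:
    by_block[person["zip4"]].append(person["name"])

for block, names in sorted(by_block.items()):
    print(f"{block}: {len(names)} recipient(s) -> {names}")
```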

Lee also pointed out that AI can convincingly replicate people’s voices. It’s not hard to imagine how an unethical firm might use that technology to blur the line between “questionable marketing” and “spear phishing.”

“I’ve done 60 podcasts,” he said. “You could easily deepfake my voice, call my wife, tell her whatever you want. I could get in big trouble.”

The good news, Lee reminded the audience, is that government regulations are in place to prevent such practices. Big companies such as Google and Microsoft usually follow them to the letter. But enforcement is scattershot when it comes to startups and individuals, who may have more to gain from breaking the rules.

“The scrappier the brand or startup, or entity of any kind—their incentive is to cheat,” he said. “If they just started the corporation four days ago, or there is no corporation, what’s their downside? It’s sort of like speeding. The faster you speed down the highway, the quicker you get to the other destination, but the penalties go up.”

The only way for everyday users to protect themselves, Lee concluded, is to adopt a skeptical attitude toward just about anything they see online that doesn’t come from a known source.

“It used to be ‘trust but verify.’ Now it’s ‘don’t trust and verify,’” he said.