X Eyée has had an unusual career path into the AI space. After dropping out of school, they joined the U.S. Army, served in Afghanistan, developed blockchain technology for Microsoft, and managed Google’s “Responsible AI” department. As such, they’re an expert on tech in both the public and private sectors. And, in light of a new executive order from the White House, they see a lot of potential pitfalls at the intersection of the two.

Eyée’s latest commentary comes from Techonomy 23: The Promise and Peril of AI, a conference held in Orlando, Florida. They joined Nicholas Dirks, president of the New York Academy of Sciences, and Dan Costa, chief content officer at Worth, in a town hall conversation called “Promise and Peril.” When the talk turned to President Biden’s executive order on “safe, secure, and trustworthy artificial intelligence,” Eyée reminded the audience that a little government oversight can be a healthy thing. That’s because powerful organizations already use AI to make potentially life-altering decisions.

[Watch the “Promise and Peril” town hall: https://www.youtube.com/embed/SmreWg0ylMM?si=LzineLmv9U1ef1BG]

For example, Child Protective Services agencies in 14 states use algorithms to estimate how likely a parent is to harm a child. But these algorithms, Eyée said, are both ineffective and racially biased. In situations like this, where AI could cause real harm, a human being must ultimately take responsibility for the decision.

“That’s something that I really love about the executive order,” they said. “One of the very first provisions is that if your model is doing something that could be considered a concern to national safety, then you have to produce testing reports … You try to make it do things that will violate civil rights. You try to make it do things that will violate constitutional rights. And you provide those reports to the government.”

“I really don’t care if Amazon recommends a book I do not want to read,” they said. “But I really do care if I’m trying to go in and get conditions treated, and they tell me I don’t need to see a specialist because of an algorithm.”

An AI that denies medical care is not just a hypothetical scenario. Later in the talk, Eyée related a debacle involving UnitedHealthcare. A “resource allocation problem” in the insurance company’s algorithm prevented people from getting the treatments they needed.

“The algorithm had all of the most unbiased data. It had all races, all genders, all sexual identities, all zip codes,” they said. “But the decision point that they used to decide whether or not you needed to see a specialist was how much you had previously spent on that condition.”

Here, the audience let out a collective groan, and Eyée knew exactly why.

“Culturally, depending on your ethnicity, your race, your socioeconomic status, your willingness to go to a doctor is going to change,” they said. “Consistently, [the algorithm] was recommending that healthier White patients see the specialist over sicker Black patients.”
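
The mechanism is worth spelling out. Below is a minimal simulation in Python with made-up numbers (an illustration of proxy bias in general, not UnitedHealthcare’s actual model): two groups of patients have identical underlying medical need, but one group has historically spent less on care for the same need, so a spend-based referral cutoff sends them to specialists far less often.

```python
# Illustrative sketch of proxy bias; all numbers are hypothetical.
import random

random.seed(0)

def simulate_patient(group):
    """Return (true_need, prior_spend) for a simulated patient."""
    true_need = random.uniform(0, 1)  # same severity distribution for both groups
    # Assumed access gap: group B historically spends less for the same need.
    access = 1.0 if group == "A" else 0.6
    prior_spend = true_need * access * 10_000
    return true_need, prior_spend

THRESHOLD = 5_000  # spend-based referral cutoff (hypothetical)

for group in ("A", "B"):
    patients = [simulate_patient(group) for _ in range(10_000)]
    referred = sum(1 for _, spend in patients if spend >= THRESHOLD)
    print(f"group {group} referral rate: {referred / len(patients):.0%}")

# Output is roughly 50% for group A and 17% for group B: the proxy
# (prior spending) measures access to care, not severity of illness.
```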

Eyée also reminded the audience that there will always be some tension between private companies, which tend to prioritize profitability, and public entities, which tend to prioritize individual rights. To ensure that AI remains accountable to the public, everyday people will have to demand transparency as well.

“As humans, who are all active participants in this society, it is our job to say, ‘Look, I might not know the technicals of a recommendation versus a machine learning model versus a deep learning model. But if you’re using AI, you need to tell me. You need to have the ability to redress harms or failures that it causes me … I think we all have a collective responsibility to actually be intentional about shaping the future, instead of being respondents to a future that other people’s imagination shapes for us.”