Humans want to believe that there’s something unique about our species. However, for the past few decades, science has chipped away at that idea—and now technology is following suit. As AI masters ever more complicated tasks, humans will have to rethink how we define intelligence. We’ll also have to revisit some perennially contentious questions about consciousness, up to and including whether or not we have souls.

Nicholas Dirks, president of the New York Academy of Sciences, discussed these heady issues at the Techonomy 23: The Promise and Peril of AI conference in Orlando, Florida. His talk, entitled “Mind, Machine, and Meaning: The Confluence of AI and Humanity,” raised philosophical questions about AI, in addition to the more prosaic technological ones.

Dirks first discussed the idea of consciousness. Humans used to believe that our self-awareness set us apart from the rest of the animal kingdom, as did our raw intelligence and subtle emotions. However, other animals seem to exhibit consciousness, and our own technology threatens to exceed human intelligence, if it hasn’t already done so.

[wrth-embed link="https://www.youtube.com/embed/-8H4HxzcbPo?si=JgtMlZf2QYxraTBF"]

“When we thought about what it was to be a human, and what it was, therefore, to be conscious as a human, one thought about intelligence. It was the delta between even the chimpanzee, or the most advanced primate, and the human, that defined what it was to be human,” he said. “We have language, well… some blurring of the lines there. There was sentience, emotion—well, not so clear there.

“Now we have machines that arguably are more intelligent than we are. And if they’re not more intelligent about everything now, they seem to be, every two or three months, gaining skills and competencies.”

Dirks pointed out that AIs outperforming humans in complex tasks is not new. IBM’s Deep Blue defeated world chess champion Garry Kasparov in 1997, and DeepMind’s AlphaGo beat top-ranked Go player Lee Sedol in 2016. AIs can now ace an MCAT exam and analyze the properties of black holes, and they don’t have to spend years acquiring the knowledge to do so.

If humans aren’t unique in terms of language, emotion, sentience, or intelligence, Dirks argued, then perhaps we want to believe we are unique because we have souls. But a soul is a tricky thing to define, even for philosophers.

“What is consciousness? Is it something outside you? Or ultimately, is it of yourself?” he asked. “You go from notions of the self to notions of the soul. Now, we’ve mostly jettisoned the idea of the soul. But it’s come back. And, of course, it’s now fundamental to the way in which so many people are beginning to think about what it means to be human, in relationship to a machine that’s going to be smarter, it’s going to be more versatile, it’s going to be able to do all kinds of things that we can’t do.”

The rest of the talk covered more traditional AI topics, such as medical research, national security concerns, and data provenance. However, Dirks maintained that the human element was the common thread—and the common fear—that bound these ideas together.

“I think that we all want to find ways in which the human is not only involved, but critical to this new world of technological possibility and power,” he said.