
Artificially Intelligent Healthcare

Session Description: 
How will AI and analytics alter healthcare? What are the near-term impacts? The craziest possibilities? Will we need fewer doctors, or will AI give us a more complex understanding of "health" and what we can achieve?
Below is an excerpt of the Artificially Intelligent Healthcare panel. The full transcript can be found here.
David Duncan: Everybody here represents some aspect of AI, of healthcare, entrepreneurism, you know, lots of different points of view. Starting with Walter, just say who you are, and I’d like you to answer, in a fairly short, concise way, your greatest hopes and greatest fears about AI and health.
Walter De Brouwer: Walter De Brouwer, I’m a computational linguist. My greatest hope for AI is that it comes very soon, and my greatest fear is that it doesn’t come soon enough, because we are actually losing neurons, sextillions of neurons by the second, because of, you know, the fertility rate going down and life expectancy going up. And I think we now have this great technology where we don’t have to use organic substrates but can actually put intelligence straight into silicon, and that intelligence has to go somewhere.
So my company is a deep language company. We are a business-to-business company.
Duncan: Walter goes way back with the Internet, coming out of IT, and he’s actually a linguist by training. He started the company Scanadu with his wife Sam, who’s here, and they have this new venture, which I want to hear a little bit more about. So, Ron?
Ron Gutman: I’m the founder and CEO of HealthTap, now a network of more than 108,000 physicians here in the United States, another 2,700 healthcare professionals in New Zealand, and expanding this year all over the world. You know, we connect doctors with hundreds of millions of people all over the world to help them access better care on their terms. And you know, we’re very excited about the opportunity to work not only with individuals in healthcare, through HealthTap’s consumer application that you can access by downloading the app or going to our website, but we’re actually working now with large insurance companies, providers, healthcare systems, and governments to help them manage the health of their populations.
You asked what I’m most excited about in AI and what I’m concerned about. I’ll start with what I’m most excited about. What I’m most excited about in AI is finally making healthcare smart and helping us personalize the right kind of programs to people so they can actually live healthier, happier, longer lives, and I’m happy to elaborate on that and on what we’re doing with Dr. AI and some of the other things that we are doing.
What I’m concerned about with AI is that it turns into the wild west. And you know, HealthTap builds its AI applications with knowledge that was collected over the past seven years from a network of more than 107,000 physicians. And everything that we are doing related to artificial intelligence and machine learning is always, always being scrutinized and tested by thousands and thousands of physicians before we actually release anything to the market. We open what we’re doing to them, we get their advice, we get their vote of confidence, and only then release it to the market. So I’m concerned that somebody who comes to this won’t take this kind of approach, because we are dealing with people’s lives, right? So it is very important.
Duncan: No, in fact, that entrepreneurial issue, that’s a huge issue. A lot of people come from IT and build these wonderful apps and devices, and when they take them to doctors and patients, they go, “We don’t know what to do with this.” So, you know, there is a lot of just figuring out how you position these. So that’s our entrepreneurial voice, and Walter is a bit too. Maria?
Maria Luisa Pineda: I am the cofounder and CEO of Envisagenics. Envisagenics is one of the latest startups out of Cold Spring Harbor Laboratory. We have developed a platform for drug discovery using RNA sequencing and machine learning algorithms to find and discover brand-new targets and therapies for patients with diseases like cancer and other genetic disorders.
What I’m most excited about with AI, which I guess I’m part of because we use machine learning algorithms to get better targets, is the fact that we can now ask big, big questions because of the amount of data that’s being generated, at least in sequencing data. And for patients, the only kind of emotional part is that we are able to deliver new therapies hopefully sooner and faster for them and their families.
What I’m scared about for the AI movement is the limits that we’re going to be facing soon in computing and in some of the technologies and the advances coming in.
Duncan: Okay.
John Mattison: I was a marine biologist and evolutionary biologist before I decided to go to medical school, practiced in the fee-for-service and academic world for a while, and then joined a small startup called Kaiser Permanente. [LAUGHTER] And that was because of the incentive model, because you get what you incentivize. And I think, back to David’s question about what is our greatest hope and our greatest fear, my greatest hope is that we follow Einstein’s advice that if he had an hour to solve a problem, he’d spend 55 minutes deciding what the right question is first and then five minutes solving the problem.
I think a lot of what we’re seeing in AI today is not understanding what the right questions are, and I think the right questions were alluded to earlier this morning by both Esther Dyson and Arianna and others: we already know from adverse childhood experiences that our patterns are deeply set for our health span and our lifespan. And so how do we begin to focus some of our work more on children, in a pan-generational and transgenerational approach, to solving some of the health problems? And I think we can do that, and I think we have enough knowledge today—so we’re data-rich and wisdom-poor, as Arianna said.
We have enough data today to change how we practice medicine for the next 20 years, but we need a lot more wisdom about how we do that. Where I believe AI can play a role is in helping us realize what works for which individual and linking that with a very mature field of motivational science, which was also spoken about this morning, in ways that rely not just on what we tell people, but on the narrative and how we put it together in a multisensory toolkit, in what I call a motivational library of motivicons, which are much broader than what most people think of as an emoticon. So I think AI is really going to help us in personalized medicine to customize how we deliver messages in a motivational framework, and the younger, the better.
What I worry about the most is the current lack of insight into what’s going on inside these systems. If you haven’t read Weapons of Math Destruction by Cathy O’Neil, I highly recommend it. It talks about the inherent bias in machine learning, because, after all, it’s humans who do the supervising in supervised machine learning. And so my concern is about what I call the “dyadarity.” We all know what the singularity is. The dyadarity is a term I coined to refer to having transparency into what’s going on in the black box and being able to ask specific questions about it. My fear is that the dyadarity will not be instrumented soon enough, although there is promise on both the software and the hardware side. There are people working on transparency into the black box with some very creative, clever tools today.
Duncan: Fantastic. So, Bud?
Mishra: I’m a professor of mathematics at the Courant Institute of Mathematical Sciences. I also have a bunch of other affiliations with Cold Spring Harbor Lab, Mt. Sinai—I forget the other ones.
Duncan: He’s one of these people, he has 17 titles.
Mishra: Yeah, I am a tenured professor. I sleep for eight hours. [LAUGHTER] I go to the gym for two hours. I follow one kind of hard rule of never working for more than four hours. Well, I don’t go watch cricket, but I could do that. [LAUGHTER]
So AI has been sort of part of my life. When I started my PhD at Carnegie Mellon, we had to take a qualifier on AI. I passed it with a good grade without learning a thing, by using what I call weak methods, the same thing that Siri and Alexa use. Didn’t understand a word they said, but aced the exam.
But there is actually—historically, there is an interesting question. In mathematics, Aristotle started what I call deductive logic: you start with self-evident truths and derive other truths, theorems and things like that. But there is another side, inductive logic, that never really took off. And the reasons are two. One is something called Goodman’s paradox, or the grue paradox, because a lot of things are time-dependent. And then there’s Simpson’s paradox: not all samples are the same. So if you take patients and do not stratify them, you could get contradictory results. Or if you’re looking over time, you could get results, causalities, that are not correct.
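Mishra’s point about unstratified patient samples is the textbook setup for Simpson’s paradox. A minimal sketch in Python, using made-up recovery counts (none of these numbers come from the panel), shows how pooling two unevenly sized strata can reverse which treatment looks better:

```python
# Minimal sketch of Simpson's paradox with made-up recovery counts.
# Treatment A beats B inside every severity stratum, yet looks worse
# once the strata are pooled, because the strata are unevenly sized.

def rate(recovered, total):
    return recovered / total

# (recovered, total) per stratum -- purely illustrative numbers
a_mild, a_severe = (9, 10), (30, 100)    # A mostly given to severe cases
b_mild, b_severe = (80, 100), (2, 10)    # B mostly given to mild cases

for name, mild, severe in [("A", a_mild, a_severe), ("B", b_mild, b_severe)]:
    pooled = (mild[0] + severe[0], mild[1] + severe[1])
    print(f"Treatment {name}: mild {rate(*mild):.2f}, "
          f"severe {rate(*severe):.2f}, pooled {rate(*pooled):.2f}")

# Treatment A: mild 0.90, severe 0.30, pooled 0.35
# Treatment B: mild 0.80, severe 0.20, pooled 0.75
# A wins in each stratum, B wins pooled: skipping stratification flips the answer.
```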

Participants

Bud Mishra

Professor, NYU

John Mattison

Chief Medical Information Officer, Kaiser Permanente

Ron Gutman

Co-Founder and Co-CEO, Intrivo

Walter De Brouwer

Chief Executive Officer, doc.ai

David Ewing Duncan

Cofounder, Curator, and CEO, Arc Fusion
