DARPA’s Arati Prabhakar

Brainworks I: Man and Machine
What are the implications for humanity as neuro-tech improves, artificial intelligence becomes less “artificial” and the symbiosis between humans and our machines increases? What is DARPA doing?
 
Kirkpatrick: Arati is the director of DARPA, and someone who’s been in and out of government and the private sector for a long time. Was the head of NIST, the National Institute of Standards and Technology, is that right?
Prabhakar: Yes.
Kirkpatrick: —for a number of years, and then went into the private sector and was a venture capitalist for quite a number of years. And took over DARPA when exactly?
Prabhakar: I came back to DARPA three years ago.
Kirkpatrick: And you had been at DARPA before you were at NIST.
Prabhakar: My hair was a different color in that—
Kirkpatrick: Back in those days, yes. Well, so, what is DARPA all about? What’s its mission?
Prabhakar: DARPA is the Defense Advanced Research Projects Agency. We were born in the Cold War, when the Soviets launched Sputnik. That was not something we enjoyed in the United States because it was such a surprise. And DARPA was created to prevent that kind of technological surprise. And dating back to the late ‘50s, people at DARPA quickly realized the best way to prevent surprises was to create a few surprises of our own, and that’s what we’ve been doing. And I sort of think of it as two different modes.
One is sometimes we go after really tough military problems for the Defense Department, and that’s where stealth aircraft came from. That’s where Precision Strike came from, and that shaped how we fight. But over decades, we’ve also invested in the underlying technologies that are so important for national security as well. And those include advanced materials, and new chip technologies and MEMS and most famously, of course, the Arpanet and the Internet, and this entire information revolution that—you know, we planted those seeds, but then it was this huge private investment and entrepreneurship that drove all of that as well.
Kirkpatrick: So, what are the seeds you’re planting now that you think might grow into the biggest trees down the road?
Prabhakar: That’s the question we ask ourselves all the time. And maybe just to stick with the core technology side for a moment. First of all, the information revolution is in full swing right now, and, you know, a lot of people in this room are driving it. So, one thing we think about is how we make sure, number one, that in the national security world we’re able to trust the information that we’re so reliant on. So, issues of cybersecurity and data integrity are very, very core to us. And then, you don’t want to just be safe, you actually want to use all that information for something. So, like everyone else, in the national security context we’re working on how you harness big data and find its value without getting confused by the noise, because there’s often a lot of noise in it as well.
So those are some of the things in the information technology realm. And then, you know, unlike the ‘50s and ‘60s and prior decades at DARPA, now we’re living in a time in which biology is intersecting with information technology, with physical science and engineering. And that is, for DARPA, an area we’ve been slowly building up. It’s a major focus in terms of some of the areas that we think are going to be transformative in the next few decades.
Kirkpatrick: You know, we hear a lot, including on this stage yesterday, of criticism that the U.S. is not investing enough in its future, and that the U.S. government is not doing enough R&D in general. Is DARPA’s budget higher than it’s been—how are you doing relative to the past in terms of the resources you need?
Prabhakar: Yes, but just a couple of pieces of context. DARPA is 2% of the federal investment in R&D. We are really, really tiny. We have outsize impact because of our mission and our ability to work, in this really unbounded way, with a huge technical community. All of our work gets done with that community. So, we’re only a small piece of the whole, and the rest of the ecosystem also has to be healthy.
Kirkpatrick: Which it isn’t. But go on.
Prabhakar: Well, you know. I think it’s, I think it is always a matter of concern and something we should all be very focused on. And it’s changed in some important ways. I’m actually quite optimistic about our country’s ability to innovate.
That’s one of our core capabilities. DARPA’s budget specifically declined about 20% in real terms between ‘09 and ‘13. That was just part of the defense budget coming down after a period of great growth. And then, you know, with sequestration, like everyone else in government, we’ve had this sort of crazy way of operating because of the problems we’ve been having with budgets and partisan politics. But subsequent to 2013, our budget’s been pretty stable, so it’s been okay. We’re at $2.9 billion right now, as an example.
Kirkpatrick: That’s not bad. Well, let’s briefly talk about this optimism thing, because that wasn’t on my list, but I’m happy to hear it.
Prabhakar: No one’s talking about optimism.
Kirkpatrick: I’m happy to hear it, just elaborate a little more why you’re optimistic about America’s future and our technological future.
Prabhakar: Well, you know, first of all, I think some of the facts are very promising. The environment, the ecosystem for innovation is so different today than it was in prior decades. And I think it’s easy to see the things that we don’t feel as good about, but let’s talk about a few things that are actually great. It used to be, in the post–World War II era, that federally funded R&D was the dominant share of R&D done in this country. And, you know, that was very much to our benefit: for national security, for laying a basic research foundation, for all of those important federal missions. As a percentage of GDP, federal R&D has declined quite substantially since then, partly for good reason, partly because of the end of the Cold War and shifting priorities. The really good news, in my mind, is that private sector investment in R&D in this country is growing faster than GDP and has been for a very, very long time. We know we’re living in a much more innovation-intensive economy, and that’s reflected in those R&D numbers. Now, that is much more product development; it’s much less basic research. So, you know, it’s a different model. But I think that’s something we shouldn’t overlook when we talk about how the system is today.
Kirkpatrick: And basic research is your forte, really, basic and applied, sophisticated, cutting-edge research.
Prabhakar: We live in, we live in the national security world, and the $2.9 billion at DARPA is spread across quite basic research all the way to systems prototypes. So we’re, we sort of cut across. NSF and NIH are the people that really, you know, own the job of the core foundation of basic research.
Kirkpatrick: So the main things you’re putting money into right now are what?
Prabhakar: Three major topics. One is rethinking complex military systems so that we can actually meet the threats of the future without bankrupting the country. We think that would be a good thing to do. The second, which I’ve mentioned, is the information revolution: cybersecurity, big data, the next chapter in AI. And the third is biology, but also anywhere we see research that we think has the potential to yield game-changing technologies, be it synthetic biology, neurotechnology, or advances in math and algorithms to manage complexity. Those areas would be more at the basic research end of our portfolio today.
Kirkpatrick: Well, neuroscience is one that I think a lot of people might say, “Why is DARPA doing that?” So why are you doing that, and what are you doing?
Prabhakar: Well, two reasons for the why. One is simply that a lot of my program managers—we’re a very bottom-up organization. My program managers were disturbed, distressed and inspired by watching our wounded warriors come home with mental injuries, with physical injuries, with lost limbs. And they wanted to do something that would make a difference for these individuals, and not just, you know, barely adequate solutions, but things that would really restore function and allow them to live full lives again. That led to our work in revolutionizing prosthetics, to work we’re doing to repair the kind of memory loss that can come with traumatic brain injury, to work we’re doing to understand neuropsychiatric disease by looking at neural misfirings in the brain. That’s one big reason.
The other big reason is we’re always looking for the places that are going to create just hugely disruptive change. Where the research potential is so promising that we think big technologies are going to come out of it. And I think as we understand the human brain, that is going to be one of those areas.
Kirkpatrick: And when you say that, you’re not thinking only of its application to AI, although that’s part of it, right?
Prabhakar: Yes.
Kirkpatrick: It’s also human augmentation in our actual functioning.
Prabhakar: Well, it’s the restoration of function and the maintenance of it—you know, our ability to maintain good health, mental health, but also, you know, your brain regulates everything your body does. And then, as you learn to restore and maintain, of course, you’re also learning the things that could allow for augmentation. And those become very interesting to think about as well.
Kirkpatrick: When I asked you on the phone, I’ll see if you say it again, and if you don’t, I’ll direct you.
Prabhakar: Tell me what I said.
Kirkpatrick: Well, what you said—You said that in 10 years we might be able to have the ability to—a person could turn their own sort of functions, what was it, metabolic functions—
Prabhakar: Yes, their physiological—
Kirkpatrick: —in effect, up and down deliberately, depending on their choice or need. Talk about that.
Prabhakar: Yes, so this is one aspect of what we’re doing in neurotechnology. So, the work here is about understanding how the peripheral nervous system maintains the health of our organs. So, number one, can we use that peripheral nervous system to monitor how our various physiological functions are operating, and can you restore function, can you maintain health? But then, if you can do those things, then possibly, in some future, you might be able to directly regulate your blood pressure without having to take medication for it. Or, you know, one thing we think about is, would a war fighter in an intense battle situation be able to focus, to fire, by regulating his heart rate?
Kirkpatrick: Really?
Prabhakar: We don’t know if we can do these things yet, but those are the kinds of possibilities when you start doing this research.
Kirkpatrick: And of course, you presume there are civilian spinoffs to that sort of thing, or do you? Do you care, when you go at that area, do you think, “Oh, this is one where we could help the fighters and the wounded warriors, and also American society and possibly global society”?
Prabhakar: Yes. I think these things often are very, very intimately linked. I mean, the information explosion that we’re all living in today has so many of its roots back in Defense Department investment, because, you know, our predecessors at DARPA, people like Licklider, the heroes of this revolution, knew that these were the technologies that could be revolutionary for research, for military use, and for us all as humans, to extend our capabilities. And I think I see all of those elements in the things we’re looking at for the future now.
Kirkpatrick: Well, I wish there was more of the defense budget in this, but anyway. What about mental illness, which is something I think you briefly mentioned before, neuropsychiatric disease. What might we be able to do in the future, based on the research you’re seeing now?
Prabhakar: A couple of examples. One of our programs is looking specifically at a kind of memory loss that sometimes accompanies traumatic brain injury; it involves declarative memory. And this is a case where I think people have a thesis for what the impairment is. It’s a particular signal that isn’t transmitted. There’s a lesion in the hippocampus, as I understand it. This is not my area, so I’m telling you what my neurotechnology people tell me. But there’s a lesion in the hippocampus that interrupts the flow of this signal that tells you to go do a sequence of things that you’ve learned to do. And so, one of the directions of research is to read the signal. It’s just an electrical signal, right? I mean, it’s a complicated one that’s buried in your brain, but it’s an electrical signal. So the question is, can we read it on one side of the lesion and basically put jumper cables to the other side of the lesion and feed the signal to the neurons on the other side.
Kirkpatrick: By understanding the whole pathways of the brain, so you’re going to re-route things, in effect.
Prabhakar: Right, and look, I mean, we’re going to be understanding how the human brain works for decades, right? I mean, my grandchildren will still be figuring this out. But along the way, as we understand pieces of it and we develop the technology—it’s Big Data, it’s analytics, it’s all the information technology, it’s nanotechnology and new electronics. All of those come together and allow us to do specific new things all along this 50-year journey.
Kirkpatrick: Well, this is good. I want to go to another brain-oriented thing that I believe you’re working on, which is learning. And where could we go with that?
Prabhakar: Yes. So I mean, I think one dimension of learning is actually tied to what I was just talking about. Because if you can figure out how to repair that kind of memory impairment, potentially you might learn something about how to accelerate that learning process, right? I mean, think about how all of us learn complex tasks today, how important it is to do them over and over and over again, and train ourselves. That’s what we do from infancy, right? But again, as we start to learn that, there may be elements that can be accelerated, either through the kinds of approaches in the domain I was just talking about. There’s also some early thinking about how our peripheral nervous system, again, might play a role in that.
But you know, even something as—it’s so simple, the word ‘learning’ is so simple, but it represents so many different kinds of things that go on in our brains, right? So I think we’re going to be unwrapping that and figuring out what all the dimensions are, for a long time.
Kirkpatrick: Well, that’s good, because we have a lot we need to accomplish, right?
Prabhakar: That’s right.
Kirkpatrick: You know, one of the things, I’d just be curious to get your thoughts because I know AI is a lot of your work, and did you see the last session yesterday afternoon by any chance?
Prabhakar: Gods in Boxes? Yes, I did.
Kirkpatrick: What did you think about the idea that these things may go off on their own despite whatever we might want them to do? I mean, it didn’t come up on the panel, but a lot of people afterwards were saying, “That kind of reminds me of HAL and the whole nightmare scenarios that a lot of movies have presented.”
Prabhakar: Yes. There are a lot of great movies about that, actually.
Kirkpatrick: Well, it wasn’t quite as apocalyptic on the stage, but even at the level they were describing it, does that worry you?
Prabhakar: Let’s see. First of all, I think that’s a really important question to ask, though I’m not sure we’re asking it in as productive a way as we could right now. I like to separate two things. One is the capability of the technology itself. You know, we’re in this explosion of information capability. Technology is moving at this phenomenal pace. It’s all the drivers, from Moore’s Law to the information revolution, now accelerated with deep learning and what’s happening with machine learning. That’s one huge area.
The second thing, which I think we way too frequently conflate with the first—the second, distinct topic is what we human beings choose to allow technology to do.
Kirkpatrick: Right. That goes back to the first session in a way, but go on.
Prabhakar: Which I didn’t hear, so maybe you’ve covered it.
Kirkpatrick: Well, there was a lot about empathy and ethics, and, you know, values.
Prabhakar: Right, and whether we do it explicitly or implicitly, there are human choices about how these technologies are used. So, if it’s a company and the algorithms they use to serve up whatever you’re looking at online, there are human beings making those choices, machines implement them, and then there are human beings on the receiving side. I mean, I’m not forced to do searches on Google, I’m not forced to use Facebook. I’m making an implicit choice that says I will accept the way that they’re presenting that information to me. So, I think in that world, those are some important issues. In my world, one thing we talk a lot about is that we know the power of technology is going to drive the level of war fighting capability up and up. That’s the history of humanity. So now we’re at this new threshold. We ask these questions about the agency we have over technology in the military context all the time. And I will tell you that the person who is least interested in letting go of control of the situation is the war fighter. That is the last thing any person in a uniform ever wants to do. So they become very real issues: how do you build powerful technology, how do you understand what it’s doing? And I think that’s harder today, with machine learning in particular.
And then, how do we as humans set bounds on what we do and don’t want it to do under different circumstances. That’s the issue of autonomy, and I think it’s important to think about that in parallel, but not conflate it with just the power of the technology itself.
Kirkpatrick: So many things I’d love to ask you, but I want to give the audience a question—a chance to ask things. Jessie, I was hoping you’d get your hand up, so. And I know you know Arati. Can we get the mic over here? Jessie’s from “Wired.”
Hempel: So as I think about the relationship between DARPA and the military, the nature of our enemy is changing significantly. It used to be nation-states; now we go to war with nation-states and also individuals, and also sometimes they’re on our team. And the tools that they’re using are much more rudimentary. I mean, my iPhone and a social media platform can get me into conflict with a nation-state as large as China or the United States. How does that change the way that you think about development, or are you so forward-thinking that it doesn’t really change it at all?
Prabhakar: No, your comment is absolutely core to our context. So, I talked about DARPA starting in the Cold War, but that was actually a very painful time to live through, and I think it was not a great time in global history, certainly for the United States. It did have clarity of purpose, though, because that one overarching, monolithic, existential threat just loomed so, you know, in your face. And I think that was all we really thought about. Certainly in my first tour at DARPA, ‘86–‘93, that’s all we ever thought about.
Today’s world has all the threats you talked about—violent extremism, cross-border criminal activity, terrorist activity, with participants who are tapping into globally available technology. The obvious part is social media, but it’s also how easy it is to order components, powerful semiconductors, nuclear materials, and move them around the world. That is a very different reality. And at the same time, from a national security perspective, we also need to be understanding what China is doing, what Russia is doing, and making sure that we can deter, or defeat if necessary, that kind of much more sophisticated adversary who’s also taking advantage of global technology. So I think in that sense, the business is much more complicated. And at the end of the day, the Defense Department doesn’t really have the luxury of just solving one of those problems. You have to deal with today’s realities and try to keep the lid on that bubbling pot of noxious stew, and make sure you’re doing the longer-term work that puts us in a strong position for whatever other future might come. So yes, we think about that all the time.
Kirkpatrick: Kind of combining that and going back to this thing you said before—are we going to a world where it’s going to just be robots fighting each other on battlefields, or not?
Prabhakar: Well, we’re certainly moving in the direction of more—of conflict that’s going to be accomplished through machines that keep the individual war fighter farther away. I think one dimension of war is moving in that direction.
Kirkpatrick: It does sort of underscore the futility of the whole thing, not that human death didn’t, but—
Prabhakar: You know, you can talk about the futility of war all the way back to the beginning of humanity.
Kirkpatrick: Yes, and I would love to, but.
Prabhakar: Yes, but you know, what’s interesting to me is there’s this high-end thing that’s happening that’s very deeply technology-intensive, and some future grand conflict may have that shape. But you see what’s happening on the ground in Syria today as well. And it’s the other end of the spectrum.
Kirkpatrick: It’s the other end, yeah.
Prabhakar: You know, it’s medieval, right?
Kirkpatrick: Well, I guess another quick question, and it sort of goes to what you were saying to Jessie. Given that that is so medieval, and that ISIS is right now considered maybe our number one adversary in some weird way, and they are so—even though they’re good at social media, they’re not good at robotic weapon development, right? Do you think that, over time, we’re going to have fundamental advantages to contain that kind of problem because of the sophistication of our weaponry?
Prabhakar: Well, except that is a conflict that has so many different dimensions to it, because it is being played out on social media. And there, I would tell you that my impression as a technologist is that the U.S. response is restrained by our values and our concern that we not overstep in responding. So I think we’re still figuring out how to navigate in that environment.
Kirkpatrick: In other words, if they’re not using robots, we shouldn’t either, kind of?
Prabhakar: Well, just on social media. I mean, I think that’s the issue.
Kirkpatrick: Oh, okay.
Prabhakar: Then, there’s what’s happening in kinetic warfare on the ground. And by the way, they are using these technologies, right? I mean, they are using drones, they are flying cameras overhead, everything that’s commercially available—and you can inflict a lot of damage with those technologies, right? So even at that end, some of that is beginning to happen. So, I don’t think that there’s a simple sort of straight-ahead war-fighting capability that’s going to contain that problem. It just has many more dimensions, and I think you just have to recognize that. So the military response is only going to be a piece of it.
Washington: Ken Washington, Ford Motor Company. I think we’ve actually met. In fact, I know we have. And that’s related to my question. So, I’ve had the privilege of working on both the defense and national security side of the issue, and also the commercial side of the issue, which is where I work now. And my question to you is, I’d like to ask you to talk a little bit about your thoughts about how the commercial community can—are we fully leveraging the intersection between government-sponsored research and corporate-sponsored research?
Kirkpatrick: Oh, that’s a good one, because I wanted to get to that cooperation issue. Thank you.
Washington: I think it is just accidentally sort of on the periphery today, as opposed to much more strategically and intentionally.
Kirkpatrick: It’s an issue of strong concern to you.
Prabhakar: Yes, it absolutely is. And I think there’s much more opportunity. I mean, good things are happening. Every day that I see some of our research being picked up by a commercial company and moved forward, I’m happy. But I would love to see more of that. I think I mentioned that DARPA is only 2% of federal R&D; we’re a very small part, even within the Defense Department. And my program managers are on a plane to wherever there is good technology, around the country and around the world. So the model at DARPA is built around program managers who rotate through, and we don’t do any of our work inside our four walls; it’s all contracted out into the technical community. That has allowed us to keep very fresh and to stay very connected. But we’re always pushing ourselves to reach farther into entrepreneurial communities or farther into research at universities that might not otherwise work with us.
The rest of the Defense Department doesn’t have those built-in mechanisms and I think one thing you’re seeing right now with Secretary Carter is a big push to try to get the department to be more open and to realize that—and there’s this very natural progression, right? Because in DOD, we’re solving national security problems, it has to be—we can’t talk about it, because it’s national security. And there are a lot of gravitational forces that drive you to not connecting with the commercial sector, which meanwhile is two-thirds now of the country’s R&D, because it’s accelerated so much. So I think he’s very right to shine a light on that, and I don’t think that’s as much of an issue at DARPA, although we keep pushing ourselves. We’re eager to see the rest of the department get much better at that.
Rowan: David from Wired, with a British accent. Can I ask about your innovation models? We wouldn’t have self-driving cars advancing at the speed they are if there hadn’t been a car called Stanley that won a DARPA Grand Challenge. Are there any other innovative approaches to generating innovation you can share with us?
Prabhakar: Well, we think about that all the time, and at DARPA at any moment in time, we typically have about 200 different programs. Some of the most visible are these big Grand Challenges that we’ve done. We just did one for rescue robots a few months ago. The next big DARPA Grand Challenge that’s coming up is in the cyber domain. It’s called the Cyber Grand Challenge. It poses the question of whether machines can learn to reason about cybersecurity to identify vulnerabilities and penetrations in a system and develop patches automatically.
Kirkpatrick: Oh.
Prabhakar: There’s actually a game to do that; it’s called Capture the Flag, at Defcon. So basically we’re building a challenge that’s a “League of Their Own” for machines to play Capture the Flag. And that competition will be the day before Defcon in Las Vegas next summer. And I think that one’s going to be a hoot, actually. We have some amazing technologists and hackers coming together, and seven teams have qualified for that competition. That challenge model is a great way, when you have enough pieces of technology that you can integrate them and actually run a competition, to drive people to see what’s really going to work effectively in that kind of environment. It’s not a substitute for doing a lot of the much more underlying research. The neurotechnology things that we’re working on are not going to be in a Challenge competition any time soon, because that work is deep in the lab right now. So, I think you need a lot of innovation models, and I think you need to keep developing new ones, and then for whatever it is you’re trying to do, you want this rich toolkit to pick the methods that are going to be the most effective. And that’s what we try to do.
Kirkpatrick: But it is a great point, you know—the Internet was one of your real home runs, and autonomous vehicles really are another DARPA home run, because we would not be where we are today, with this technology really poised to transform modern transportation, war fighting and a lot of other things, if you hadn’t started them.
Prabhakar: Yes, and I think it’s a great example, because we did that early work but, you know, that was a decade ago, and look at the amount of private investment.
Kirkpatrick: Exactly.
Prabhakar: And the creativity that followed. So you need all of that.
