Innovating Our Selves

Wohlsen: Welcome everyone to this session of The Techonomy Labs, this is “Innovating Our Selves.” You know, I first encountered the idea of the post-human back in, I think, the mid-90s. I think I read a story in the “New York Times Magazine,” and the impression I was left with was of some people who were really into body-building and taking a lot of vitamins and they were going to live forever. And you know some of that was my own lack, I think, of sophistication and understanding what was really going on and what was at stake. But it’s also been remarkable, you know, in the past even 20 years, what’s happened in terms of the thinking about self-augmentation, the possibilities of using technology to alter, change, improve upon the quote-unquote “self.” And a concept which itself has become much more complicated with the rise of different technologies. So the way of thinking about that has changed kind of in parallel to what we’ve observed in terms of the kind of exponential growth and the power of the technologies that we have access to, and so every time something new becomes possible in technology, it almost goes without saying that some people are going to start to ask, well so what does this mean for me? What can I actually do in terms of my body, in terms of my brain that’s different now? How does this technology, how can it work for me, and even—and then this is what we’ll start to get into today, I think—how can I integrate it with me?
There’s a sense in which we all have become a little more cybernetic, the way we’re all married to our devices. We haven’t actually literally plugged them into ourselves yet, but you know sometimes it sure looks like we have. But that doesn’t mean that there’s still not that step or that possibility, it’s sort of impossible not to keep thinking in that way. And you know this is both something that arises in terms of digital technology and biotechnology, and as we talked about yesterday, and have seen in different venues, there’s a convergence there that also continues to just open up this sense of possibility and a sense of anxiety.
And so those are some of the things we’re going to talk here about today. And we’ve got some really, really cool people to talk about it.
To my left is Carlos Olguin, and he—let me get this right—he heads the Bio/Nano/Programmable Matter group at Autodesk Research. For those of you who aren’t especially familiar with Autodesk, they make design software that has done amazing things for the hardware industry, and Carlos’s team, and I’ll let him talk about this a lot more, is thinking through how to use a similar approach but with materials that we might not typically associate with fabrication or industrial processes, so bio-printing is a very significant part of what his team works on.
We have Drew Purves who’s here from Cambridge in the UK where he is head of the Computational Ecology and Environmental Science group, it’s a division of Microsoft Research there. Drew is at the forefront of thinking through how to use this massive increase in computing power that we’ve seen and our ability to process data to model entire ecosystems in new ways, and to think about what it means when we’re able to build these models with a kind of exponentially greater degree of complexity than we used to be able to. And so he’s here to help us think through what that looks like, when we start to apply it to the self.
And last but not least is Eri Gentry, she is the Technology Horizons Research Manager at the Institute for the Future. She’s also the co-founder and President of BioCurious, which is the world’s first DIY bio-hacker space here in Silicon Valley. I wrote a book several years ago about the rise of the DIY biotech movement, and I first met Eri then, and at the time she and a colleague had used their own blood to see how the human immune system would respond to cancer cells. So she’s somebody who’s fearless when it comes to self-experimentation and also thinking about the ways that technology can be used to improve the self and the way we ourselves can be empowered to use that technology to improve ourselves.
So welcome everyone and thanks everyone for being here.
So I thought what I wanted to do first was to just kind of broadly frame the discussion in terms of thinking through, you know when we talk about innovating the self, what are we really talking about? What are sort of the key themes and possibilities that we need to think through when we want to try to answer the question of what technology can do for us and what it should do for us?
Eri, I think it would be best to start with you, because you know you’ve spent so much time thinking kind of big picture about this. What’s most interesting to you, what’s most compelling when you think about this concept of self-innovation?
Gentry: The most interesting thing about recent years is the people who are starting to approach self-augmentation or using technology for health or medicine or for self-tracking, and what I talk about is that it’s not the typical players, so it’s not your healthcare professionals, it’s not necessarily people who have ever been interested in health, but there are a lot of engineers who are looking at a problem, and they see a problem and they see opportunity and potential and usually think, “Oh, well I know how to develop apps, I could probably make an app for that problem.” And that creates this interesting mix of people who are trying to develop, for a new audience, for problems that they haven’t thought about before, for diabetes and adherence to medication, or to an exercise protocol, for example. So a known problem, and there’s lots of money and opportunity in the space, but when computer scientists and designers start looking at that, the solution tends to look a lot different than something a doctor might prescribe. And I think that it looks a lot better. But as this sort of side industry for innovation across health tech starts to evolve, you see that they’re brushing up against regulation and this is creating this really interesting dilemma, like how do we innovate against FDA regulations, for example?
Wohlsen: But so to you it’s not so much about these kind of grand ambitions around radical life extension, or kind of computer-human interfaces in the brain or something like that, it’s actually much more immediate and in a way much more compelling, because it’s something that presents possibility right now.
Gentry: It is all of these things, and in a sense I wanted to see what people were interested in talking about, because all of these ideas about extreme life extension—so those are happening … and to your point about first hearing about body-building and people who had this desire to experiment for a very measurable human performance increase, that was one of the first things I ever got interested in too, because it was out there, it was accessible. I remember reading copies of my dad’s Men’s Fitness magazines because there were experiments embedded within. People can take that a lot further today. I’m reading a lot about transcranial direct current stimulation and people hacking 9-volt batteries and getting wet sponges to try to stimulate a part of their brain to enhance their learning.
[LAUGHTER]
It’s fascinating, it’s a little bit scary, I kind of want to try it.
[LAUGHTER]
Or people who are, you know, moving more into the world around what I do with biology, who are thinking about solutions for engineering plants or even engineering themselves. And to bring it back down to earth, where I started from is people are dealing with real serious challenges today and they might not learn as well as they want, hence transcranial stimulation, but they also might be suffering from not being able to afford medication and looking for hacks around that, or they can’t remember when they should inject insulin. So a guy I know, actually, hacked his insulin pump so that it would—he used Bluetooth low energy and he could use his iPhone to basically direct when he should have something. That would never pass FDA inspection, but people are designing out of curiosity, out of the desire to be better, but also because they really can’t live without their hacks in some cases.
Wohlsen: Carlos, when you hear Eri describe some of these projects or even just that sort of sensibility about wanting to do it yourself, because you have that ambition and that sort of set of assumptions that come from building apps and so on, maybe a good way to get into talking about the kind of tools that you’re working on is—I guess what I’m curious about is, is part of the idea behind what you’re doing to open up, you know in the way that Autodesk does for other kinds of creativity, open up these materials to a broader set of creators?
Olguin: Yeah, so maybe I should clarify, so Autodesk is not doing any self-augmentation tools that I’m aware of, but it’s true that the group I’m part of looks at life as a new design frontier, and not just life, but other forms of programmable matter. In a way, to step back a little bit: whether it’s for your own self-innovation needs or for other applications, the reality is that right now biology is a technology that we don’t fully understand. We’re reverse engineering it to try to characterize it. It’s almost like taking an archeological approach to biology and trying to understand how we can better characterize it.
So in that sense, we are trying to create tools that help extract some of the things that we already know about biology, and as we do that we hopefully enable scientists to accelerate their own innovation, by not having to focus on the more laborious work that is needed to be able to create anything. And as that happens, that maybe goes, we hope, into a recursive loop that just produces better tools, better design tools, which in themselves then create more knowledge. So that’s more or less what we’re trying to uncover.
It’s almost as if you were saying, you know, when we’re talking about creating apps—so I don’t think we have yet the critical mass to understand biology to create those kinds of apps. But we’re heading in that direction. And one example I would put is imagine creating tools right now for your iPhone and you need to write them in self-assembly code. You just, you wouldn’t have the economies that are growing around the mobile industry, if you couldn’t write them in more intuitive development environments and we’re so far away from even being into the self-assembly level, into the—not self-assembly, I was going in a different direction—into assembly code of biology. We still need to understand it a little bit, or a lot, more. So that’s what we’re focusing on right now, trying to understand that.
We do work on art projects, like bio-printing of the missing ear of Van Gogh, which is actually printed using actual DNA from Van Gogh, apparently recovered from a stamp that Van Gogh licked. But those are art projects, and as we do them, they help us get a more intimate understanding of those workflows that we want to then extrapolate into actual applications.
Wohlsen: Well I just want to make sure everybody caught that, what Carlos just described was trying to use Vincent Van Gogh’s actual DNA from a stamp that he licked to recreate his famous ear that went missing. And so I certainly appreciate what you’re saying about the early days, but that sounds incredible. I mean, that sounds like, you know—I mean if the ability to do that is early days, it’s very striking to me what that suggests might be possible.
Olguin: Yeah and you could—so to clarify, this is a project led by Diemut Strebe, and a number of scientists. We are supporting that effort, but we’re not taking credit for the actual science in it. But you can imagine thinking about self-augmentation, imagine you print tissue that has your DNA and now you can expose it to all kinds of things and learn from it. One of our team members—and I’ll finish with this—actually is working on a company that is all about bioremediation. So he wants to maybe think of futures where you introduce pathways in human tissue that allow you to absorb radiation, for example. So it’s almost like you put tissue you don’t want—you imagine 20 years from now, maybe—you have those pathways in your body, and when you’re exposed to a nuclear bomb or whatever situation you might be exposed, it’s almost like you already have the upgrade in your system, and those pathways taken from bacteria are able to digest gamma rays. And again, this is all speculation at this point, to be very clear. But it is true that we’re experimenting right now to better understand what that future might mean.
Wohlsen: So just absorbing radiation and being able to—I think I saw that in a movie. So I mean, one of the things that’s fascinating about that, to me, is that—and I want to use this to segue into some of what we’ve been talking about here with Drew, is that one of the reasons that I think we can start to talk about these things is that we have this new—not new anymore, but in recent years, this incredibly more robust set of computing tools to be able to analyze those pathways and to be able to sort of test, discard and revise our models so that we can figure out how these kinds of complex systems will work. Because, you know, when we’re working on the genetic level, the level of complexity is of an entirely different degree than most other systems.
And so Drew, that to me, then that becomes your sort of area of expertise—and one of the things that he and I have been talking about prior to just now, is both in terms of genetics but also in terms of what we’ve sort of come to call the quantified self, where we’re able to measure so many things about ourselves that we couldn’t before, and store and track. And so I’m wondering if you can talk a little bit about some of what we had been discussing before about what that ability kind of suggests about what we might be able to know about ourselves that we couldn’t know before.
Purves: I think it’s helpful to differentiate between two things in the kind of augmented-self space, at least from my perspective. One is the ability to take drastically larger amounts of data and different kinds of data from your body, little deployed sensors that are floating around your blood and monitoring your glucose levels or even monitoring gene expression and this kind of stuff. And so this might enable us to understand and come to predict things about your body—I’ll come back to that in a second. But the other side, which is more the bio-hacking stuff I guess you hear about, is more like actuators, where you design new devices that go into your body to do things, so that’s the intervention, a new kind of drug, or it might be a little more like your fantastic journey, we heard about that yesterday, you know the cell that goes around and does something.
And there’s even been some talk about combining these two, so these two things would be happening at once. A computer-in-the-cell model would actually be a little set of mini-computers that are floating around your body, possibly built with DNA rather than silicon, monitoring things and actually making decisions. So if they find, for instance, patterns of gene expression that indicate cancer, they might go and terminate those cells. I think I’d be very nervous to deploy millions of those in my body at the moment, but maybe one day that will be reliable enough.
But I think the part that we thought about more in the ecosystem space is this idea of taking lots of data and using that to understand complex systems better. So it’s a special kind of machine learning. I’m sure everyone here has heard about big data and machine learning; it’s the idea that you can take data and make some predictions. So you might be able to make better predictions about whether you’re going to get a certain kind of cancer, let’s say, in this space. But traditional machine learning is based on models that have been designed for computational convenience, so that we can very rapidly deploy them on large, complex data sets, and they’re the sort of statistical, correlational models. But there’s been a kind of quiet revolution in the computational sciences in the last few years, where we can now instead create something that’s more like a simulacrum, a sort of real virtual example of the system that you’re interested in, and train that against data in the same way. So it’s less like a neural network as a machine learning method, which just takes input and gives you output. Instead you’ve got a kind of living, breathing, virtual version of something.
And if we apply that to the human case, which is what we’re talking about here—actually, you know, it’s strange to think there are already lots and lots of models of ourselves floating around on the Internet and inside companies. We don’t think about them, but you know our life insurance companies have a little virtual version of us that’s making a prediction about how likely it is we’re alive tomorrow, or next year, and in five and ten years’ time. But it’s just a very, very simple model. It’s probably just based on a few things like, you know, how old we are and things like that.
But if we could take lots and lots more data about people, eventually we could get to the state where we have quite a faithful, realistic, virtual me existing somewhere that’s being fed with our data. And the more and more data we give to that thing, the more complex that model can get, and the more faithful to us it can get. And that could really change—you know, it seems to me that could really change the way we interact with medicine, obviously, because we go from saying things like, “Oh, well, based on some relatively crude scan of your body mass and age and maybe a few things about your genes, you’ve got this certain probability of this happening to you.” And we would—Eri, we were talking earlier about women having mastectomies based on a couple of gene association studies, but most of the women that are having those mastectomies would not have got breast cancer. It’s just that there’s enough of a chance of getting breast cancer that it’s worth doing. But with more and more data and a more and more accurate version that we have of a virtual you, we’d really be able to say much more accurately, “No, you will, or you won’t, get it.” And you can make a much more informed decision about something which is a really major decision for someone.
And so we’ve been doing that in the ecosystem space and taking lots of data from nature and trying to build realistic models of forests and so on. So for me, it’s quite interesting to wonder what that might mean for us humans.
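[A minimal sketch of the modeling approach Drew describes: fitting a mechanistic model to data rather than a purely correlational one. The data, the logistic-growth model, and all parameter values below are my own illustrative choices, not anything from Microsoft Research’s actual tools.]

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic "observations": noisy measurements of something that saturates
# over time (think forest biomass, a microbe population, a biomarker).
t = np.linspace(0, 10, 30)
rng = np.random.default_rng(0)
observed = 100 / (1 + 9 * np.exp(-0.8 * t)) + rng.normal(0, 3, t.size)

# Correlational approach: a cubic polynomial chosen purely for convenience.
# It fits the observed range fine, but its coefficients say nothing about
# the system and it extrapolates poorly.
poly = np.polynomial.Polynomial.fit(t, observed, deg=3)
print("polynomial forecast at t=20:", round(poly(20.0), 1))

# Mechanistic approach: logistic growth, whose parameters (carrying
# capacity K, growth rate r, initial size n0) each mean something real.
def logistic(t, K, r, n0):
    return K * n0 * np.exp(r * t) / (K + n0 * (np.exp(r * t) - 1))

params, _ = curve_fit(logistic, t, observed, p0=[80.0, 0.5, 5.0])
K, r, n0 = params
print(f"fitted carrying capacity K={K:.1f}, growth rate r={r:.2f}")

# Because the fitted object is a model *of the system* rather than just an
# input-output mapping, it can be simulated forward beyond the data.
print("mechanistic forecast at t=20:", round(logistic(20.0, *params), 1))
```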
Wohlsen: Could you say a little bit about what you were describing to me as what you might imagine as a kind of helpful assistant?
Purves: Yes, so it seems to me that one of the key problems, actually, that we share, both in terms of our individual lives, from a medical perspective or actually several other perspectives—one of the core problems, and it also occurs in the environment, is the contraction, the mismatch between the time scale of the real problem and our perception of that problem. So I think we evolved, for instance, to assume that we’re not going to be here in 30 years’ time. You know, up until very recently, you wouldn’t be. The life expectancy was very low. And that explains a lot about mismatched behaviors in individuals, you know, we often drink too much or make bad decisions about finances, because—we call it discounting in economics. So our future selves are viewed by us as very far into the future, and actually, they’re really not us anymore. The old Drew is not the young Drew, is not my problem. Until I get there one day and I realize that—so I kind of wondered, if you have this virtual version of me, you know, the virtual me that I could simulate out into the future, then I could look at that virtual Drew, and he could maybe even communicate back to me and say things like—you know, so when I pick up that second drink at night, he texts me and goes, “Hey, dude, you’re killing me. You know you’re seriously killing me. Can you just put that down please?” And so I wonder whether somehow this idea of being able to kind of realistically—I mean, it’s all about it being realistic—realistically project your own life into the future and interact with that, the virtual me, might somehow psychologically be helpful, I suppose.
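[A rough illustration of the discounting Drew mentions. The formulas are standard, but the rates and the 100-unit benefit are arbitrary numbers of my own, purely to show how steeply a payoff 30 years out shrinks.]

```python
# How a benefit worth 100 today is valued when it arrives years from now,
# under two common discounting forms.

def exponential_discount(value, years, rate=0.05):
    # Standard economic discounting: value / (1 + rate) ** years.
    return value / (1 + rate) ** years

def hyperbolic_discount(value, years, k=0.5):
    # Hyperbolic discounting, often used to model present bias.
    return value / (1 + k * years)

for years in (0, 1, 5, 30):
    print(f"{years:>2} years out: "
          f"exponential {exponential_discount(100, years):6.1f}, "
          f"hyperbolic {hyperbolic_discount(100, years):6.1f}")
```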
Wohlsen: Eri, I’m wondering if you could respond to that a little bit and—I don’t know if the right way to frame the question is—one thing that strikes me is that when you talk about kind of virtual augmentation, like a future version of yourself that sends you text messages, that that is a way of potentially—like, I don’t feel like that’s going to draw the sort of regulatory wrath as quickly as something that involves actual sort of manipulation of tissue or you know your body itself. What do you think is more important? Do you think it’s a distinction even worth making? Do you think that it’s important that people push equally hard on—what’s the right way to even frame the question in terms of what we’re able to do, and what we have sort of the right to do, with ourselves, to ourselves, and sharing that? Versus the sort of different technological platforms that we might use to do that?
Gentry: Right, and this is an ongoing argument, for or against paternalism: how much do we have the right to know? Should the doctor be delivering information about a cancer diagnosis, or if there’s no known cure, just let you die and you’ll have a happier life? So this right to choose comes up almost every time we have a new technology, and where I work, at the Institute for the Future, we developed a ten-year forecast around brain augmentation and how we need a Magna Cortica. We need five different principles to help us, societally, all of us, determine how to use a new brain technology. And to wrap it all up, it’s basically the right to choose whether we can be augmented, the right to choose whether our children may or may not be augmented, and also the right to refuse augmentation if suggested. And the right to know who has been augmented.
I thought that last one was the really interesting piece. It came up yesterday that we had this sort of attachment to secrecy or even sometimes to lie, or the ability to lie. And if the system were completely transparent, it would totally change dynamics, but it would be a very scary situation, like a “1984,” which I just re-read. It’s different as an adult than when you read it as a kid, much more dense than I realized.
But anyway, back to the original question about which technology we use, I think it speaks to what we need to use as humans to actually make a change. So in this conversation earlier about, you know, “The older Drew is not the Drew that I am, that he is now, or that he cares about”—and we need to hack that motivation and hack psychology, and there are obviously better ways to do that than a letter written by your doctor or some dense text that you’re going to find on WebMD. But you know visual cues matter or peer pressure or something that just tricks your psychology for a moment, and some examples I’ve seen that are pretty interesting are a bar that put a distorted mirror in the bathroom to show you what you’re looking like after a few drinks, so it’s all twisty-turny and it just is that smack in the face. It’s that additional thing that you need. There’s also this bracelet—so we’re all familiar with wearables and the advent of putting very simple trackers in there and how that changes our lives and our behaviors—and this is beyond an accelerometer, it actually puts shock therapy into the bracelet, so when you do something that you’re not supposed to do, it just delivers this quite unpleasant shock to your arm.
But these are the sort of triggers that we actually need to get going. And in my opinion, that’s where the low-hanging fruit is, like take all of the things that we already know that we want to do to be better humans, like there is a place for thinking about augmenting ourselves to the extreme end, but what about augmenting to be, you know, like leaner, and fitter, and sleep more, and be the person that I actually want to be today, or to be a parent or employee, etcetera, and find ways that entertain us to actually help us?
So I think in the near term, that means that entertainment and gamification and especially this idea of the avatar, of the Siri for health, or of using Kinect to create an avatar of everybody in the room, and even creating the social situation that encourages you to be the person that you want to be are the real near-term evolutions that are going to change how we act, which is the key.
Olguin: Can I say?
Wohlsen: Yeah, please, of course.
Olguin: So I was just curious, so Drew, you were talking about the contracting time scales, and I thought that was really interesting. Maybe, Eri, it’s really just the fact that things are happening faster, but obviously we’ve been evolving and now we’re evolving with our own intervention as opposed to just evolutionarily speaking, and maybe those are just key differences that make it so in our faces right now, this time that we’re going through. And you know that reminds me of this article, I think, “Why the Future Doesn’t Need Us,” from Bill Joy, you remember that one? Do you guys remember that article, “Why the Future Doesn’t Need Us”? It’s about robots taking over our world, and humans are not needed anymore, and it’s a very famous article, 2001 I think, something like that. And if you think about that article, it’s almost like, well, the future doesn’t need us in the same way that there are no Cro-Magnons around us today, and therefore, you know, they’re not needed, or at least according to one view. They’re not around here. Sometimes that might not be true, but I just think that we will just continue evolving. That’s, I guess, the main point I’m trying to say, and right now we’re experimenting with what it could mean when you’re not just relying on traditional evolutionary streams of continuous change. So that was all.
Purves: Yeah, compared with other species, that’s one of the things that sets humans apart, is that, you know, genetically we’re very, very similar to what we were like 1,500, 2,000 years ago, even longer than that. And yet we’re also incredibly different, and that’s been achieved without any biological, genetic change. So you’re—I totally agree, this is like just another extension of that. It’s amazing when you think of things like penicillin, that actually there isn’t a single human that has had penicillin as a very young child and got to age 90. Because we basically didn’t have it 90 years ago. It’s incredible. So we think of that as an established technology. Not really. And if we had been too risk-averse, you know, we never would have given anyone antibiotics. So it seems inherently scary to put these new things inside your body, but I presume that’s what people felt like with things like penicillin, or Tylenol, Paracetamol, and the list goes on. And there have been some disasters over the years, but people very quickly get comfortable with it, don’t they? And they are kind of part of our evolution.
Wohlsen: How does that look to you as an ecologist? I mean, how do you model a system where there’s that kind of—it seems like a very difficult to quantify variable, of that kind of disruptive technology as an avenue for evolution?
Purves: Yeah, it is a strange one. I mean I think one of the things that—so what sets us apart is this idea that we have this vertical transmission of information, as we say. So one generation of humans can transmit an incredible amount of information and knowledge to the next generation, rather than everyone having to learn through experience. And things like Wikipedia, I mean, what an unbelievable development, you know, everyone says the same thing, but it’s true. We don’t even have to pass it verbally anymore. It’s available for everyone on Wikipedia.
I think at the same time, some of the environmental concerns we have, things like climate change, for example, or even antibiotic resistance, things like this, they do come from, in part, an increasing disconnect between humans and the natural world. So by and large, new technology, as it’s come in, has at least made us convinced that we’re further and further removed from the natural world. And it can—especially from the Industrial Revolution, in a sense, I mean, you could say the Industrial Revolution almost had a thesis underneath it that you could separate the economy and ecology. And that’s led to some of the problems we have. So again, I do wonder in some way whether these new technologies, like, are they going to continue that trend? And are we just going to be all the more removed from nature now we can deploy a DNA computer into our bodies and you know harvest minerals from asteroids, and I don’t know, whatever it might be? Or could we in some way use them to improve our connection with nature again? I don’t know what that might be. You know, that wristband giving you an electric shock, is that some way for us to somehow understand where we fit?
Because the truth is, our economy does fit within an ecology. We do have real boundaries to our economy in terms of materials. And I was quite inspired by some of the talks we had earlier at Techonomy, and we saw the concrete example earlier, you know, young people actually innovating around things as—if you like—as apparently boring as concrete. But you know I mean it shows you how exciting it can be. And so whether some of these technologies could somehow help us to understand where we sit within the boundaries of our own lives, so in terms of resources, in terms of our impacts on biodiversity, food sustainability, and things like that. I have no idea how you do that, I can’t imagine, but it would be an interesting challenge for someone to think about.
Gentry: And to pick up on that, I like to remind people that most of what we know as life is yet to be undiscovered, or yet to be discovered. So it’s only in the 1950s that we began to be able to culture living cells inside of labs, and kind of get a sense for what life really looked like, except for what lives in an extreme environment like our mouths. And we can scrape that off and look at it under a microscope and then begin to speculate about that. But we really couldn’t grow cells until the ‘50s and that meant that all of the life around us that you can’t grow in a laboratory environment, we just have no way of determining whether it’s even out there unless we can somehow see it. So sequencing, sort of, genetics is finally giving us the ability to know everything that’s out there. So we’re talking about how Craig Venter went on a sailing trip, he’s just scooping up bits of seawater, and in every scoop, he’s able to find new life. And this whole territory of what’s living, what’s out there, is a new field. And it means that the speculation from any random person, like us, is fair game. We first have to actually sequence what’s out there, and the biggest efforts are happening in China with the Beijing Genomics Institute. Anybody who’s kind of a laggard, who is thinking about the ethics around it is really missing the boat. The leaders in this field are going to be those that have amassed those petabytes of data.
And to give you an example, I was at BGI and they told me that they had just sequenced a rice genome, and they were sending it off to a collaborator to help further analyze it, and they have to ship it in a box full of hard drives, because, you know, that’s the current state of the art. Even our—I don’t know anything about data science, but however you send the data across the waves, like even that infrastructure is not set up for the amount of data that’s coming out of genetics. And you mentioning Wikipedia made me remember efforts like uBiome, and American Gut. They’re small startups that are creating kits to basically crowd-source the sequencing of people’s guts. If you’ve heard of microbiome, so that’s this really exciting area where we’ve discovered that the cells that are around us and in us, they make up more of who we are than our human cells, so human cells about 10 percent and bacteria about 90 percent. We’re just getting the ability to sequence those, and the major efforts are coming from startup people out of Silicon Valley and out of universities. And again in terms of scale, so the microbiome research project that came out of the NIH, I believe, it was 10 years long, it cost over $100 million dollars to do, and they sequenced the guts of 100 people. So these crowd-source efforts, uBiome and American Gut, they raise money on IndieGoGo, about $300,000 dollars in both cases, and they’ve got data from more than 6,000 people. So they’re doing massively more than the US government has done for massively less money.
And the reason that I’m excited about this is that will give us the ability to have the data of thousands of people, and hopefully soon, millions of people. And begin to have guys like Drew model who we are, what’s in us, what’s on us, how does that respond to changes in our diet, to our environment, to drugs that we take. And even model that into the future. What are those future selves going to look like? And maybe this eventual—to tie-in all these pieces—like the Siri for health, that’s going to be able to tell us in a way that we can analyze in an instant, “Hey if you have that extra piece of cake, here’s what you’re looking at in terms of microbiome and your health and your skin, etcetera.” And I think that this is the huge revolution area and the leaders are really independent small, startup companies. And it’s this fascinating dark horse that we all didn’t expect. But that’s what I’m watching, what I’m putting my money on.
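[Back-of-the-envelope arithmetic on the figures Eri quotes above; these are her round numbers, not independently verified.]

```python
nih_cost, nih_people = 100_000_000, 100      # ~$100 million, ~100 people
crowd_cost, crowd_people = 300_000, 6_000    # ~$300,000 raised, ~6,000 people

print(f"NIH-scale project: ~${nih_cost / nih_people:,.0f} per person")     # ~$1,000,000
print(f"Crowdsourced kits: ~${crowd_cost / crowd_people:,.0f} per person")  # ~$50
```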
Wohlsen: Well I’m glad you brought up the microbiome, that’s one thing I wanted to ask about specifically, because it really, in a way, it fundamentally calls into question our notion of the self. We think of ourselves as this kind of unitary singular organism, but we’re actually made up of many different species that inhabit us—and just correct me if I’m not understanding this correctly, but my understanding is—so right now we’re sort of at the stage of kind of simply saying what’s there. And then following on that, would be a desire to say, “Well what does it do?” And what I’m wondering is, are people talking about and imagining a time when if we feel we’ve achieved some kind of thorough understanding of that, that then we hack that? That we genetically manipulate the microbiome if we want to do other things to ourselves?
Gentry: Yeah, I mean, it’s what we do already, but we do it in this very generalized way. You know like when Drew was talking about reducing your salt intake and that’s this advice that, you know, you go online and how do I be healthy? Like the top recommendation is reduce red meat, reduce salt intake, but why? And should I actually do that?
And microbiome is one part of mapping out what an individual’s profile is like. And we’ve tried to do that for ages, like you go to the doctor and you describe what’s up in a narrative fashion, and they try to look into the recesses of their mind and their doctoral training and try to identify what category you fit in. So we’re talking about doing that same thing in a much more rigorous way.
Olguin: When you’re talking about … okay, not to the extent that Eri is describing, which is great and down the road, but even today concepts like fecal transplants actually exist, as weird as that sounds. You know, things are running in parallel; this panel was about self-augmentation, but we already pointed out here that we don’t even understand who the self is in the first place. And even on the microbiome, I was just reading—I attended the last personal genomics conference, and some of the microbiome that lives on our faces is actually not coming directly from microbes on our faces but actually from mites that host those microbes.
[LAUGHTER]
So there is this whole ecosystem that we’re starting to understand, and sequencing gives us the initial cues, but it’s a static description of the reality around us. It doesn’t capture the dynamics of all the organisms interacting with each other, and obviously that’s further down the road, but that’s still going in parallel too, so these are very interesting times to be part of right now, mind you.
Gentry: Yeah, it’s as much a problem of how do you create the Internet of Things? Or how do you create a smart home? You know, have some idea of what’s going on and what’s in our environment and how that impacts us.
Purves: I think, as he said, you know, we think of ourselves as unified things. I’d like to—I suppose I have a disconnect. I think of myself in that way, I suppose. But intellectually, when I think about the human body, I suppose for me it is like an ecosystem. And it’s amazing when you think that your hands are the same size, and yet there was no direct communication between them when I was developing, so this is an interaction between cells. And it’s incredible that the single thing that all cells share, at least most types of cells, is that they can replicate themselves. What’s amazing is not that we get cancer but how little cancer we have. So what stopped my fingers developing so that they’re the same size is absolutely unbelievable. So when you see that, you think there must be, in this ecosystem that is our body, so many negative feedbacks and checks and balances that keep us stable. There are ones that regulate our mental health, so most of us, for instance, if we’re getting tired we’ll have more sleep and it’s like a negative feedback, so we get less tired, like a thermostat. Our blood sugar levels and the number of cells in our body, I mean the skin is continually creating new cells, and then losing cells, and yet my skin hasn’t grown without bounds to get six feet thick, and it hasn’t gone to zero, so it’s just kept the same thickness for decades. So on the one hand it’s—you know, and then of course all of these systems can go wrong. So actually one of—there’s a problem with an epidemic of mental illness at the moment, so stress and things. And bipolar disorder for example is when the equilibrium of mental health goes wrong; you’ve got our sugar systems, when they go wrong that’s diabetes, and when the systems regulating cell numbers go wrong, that’s cancer. And we really don’t understand those systems very well, so I think with these—we’re now in an era with the computational modeling, with the data, and with all the sequencing and so on that if we can understand those cycles, then we can understand—then we can begin to predict when they might go wrong for different people under different situations, but also we can understand how to put these hacking devices in.
You know weirdly, one of the great examples, we shouldn’t forget, is the pacemaker. I mean, if that’s not like the augmented self, what is it? It’s been around for ages, and that’s another cycle, the heartbeat, and we worked out how to actually intervene to keep our—with electronics! I mean, unbelievable. So you know, but could we do the same with mental illness? Could we detect when someone’s going off the rails and help them to change their behavior and so on? So I guess that’s—yes, it still freaks me out. So there’s so much to learn, and then you add on all the microbiomes, like you mentioned, Eri, and it’s just—yeah, it’s unbelievably complex, I suppose. But it should be by degrees. I mean, if you want to say virtual self, obviously, you know, in a 100 years’ time maybe? Maybe a long time before that, you can’t tell with biology. It’s racing ahead. But you know when you’d actually have that in the virtual human, all of that going on. But in the nearer term, it might be something much simpler, you know, just tracking a few basic things to do with your blood sugar levels and hormones and things like that.
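[A toy sketch of the negative feedback Drew describes, in the spirit of his thermostat analogy: a simple proportional correction with made-up numbers, not a model of any real physiology.]

```python
set_point = 90.0   # target level, arbitrary units
level = 140.0      # start well above the set point
gain = 0.3         # how strongly the feedback responds to the error

for step in range(12):
    error = level - set_point        # distance from equilibrium
    level -= gain * error            # correction proportional to the error
    if step % 3 == 0:
        level += 10.0                # occasional disturbance (a meal, say)
    print(f"step {step:2d}: level = {level:6.1f}")

# With gain > 0 the level settles close to the set point despite the
# disturbances; with gain = 0 the same disturbances just accumulate.
```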
Gentry: Well the existing tests that are out there, just the nature of getting them more frequently than at the point of care. So I won’t rail on the medical system too much, but if you think about when you get sick, you have to schedule a doctor’s appointment, you have to go in for a test, and then you might have to wait another day while you’re fasting to get a blood test. And so all of this time, biology is racing ahead inside of your body, and a virus could have manipulated your entire system. So what you know when you get the blood test doesn’t really tell you much about the progression of disease, if that’s the case for you. But if you have access to these facilities where you can test yourselves pretty regularly, you can learn some interesting information. And the best example of this is Michael Snyder’s lab at Stanford. He is one of these trackers but also a physician and research scientist. And he was able to test his blood multiple times throughout the day and he happened to pick up a virus, you know, a pretty common virus. He was having cold and flu-like symptoms, but what started happening is that his blood sugar levels were elevating, he started having inflammatory markers, and he actually developed diabetes. It was onset by that very common virus, and that had never been shown in the data before. So he just revealed this new model of how a person can get diabetes. And he has been able to correct and to monitor the correction of it through the same simple tests that you can get, complete blood count. So it speaks to what if we just gave this very simple capability of a spot blood test and Theranos, probably many of you have seen the CEO on the cover of “Forbes” and they were in stealth for 10 years and now just revealed that, with just a pinprick of blood, they can basically do a complete blood count on you. So for a technologist this is a great opportunity to see about getting into the consumer domain, this ability to just do the pinprick, even, you know, a ring. So it’s unobtrusive, it’s passive, and then you’re able to get these predictive models built on top of people, and their behavior, and etcetera that ties into all this other data that we’re tracking, just fascinating applications.
Purves: And one more perspective I’d just sort of remind myself about as much as anything, I suppose when you think about going into the developing world as well, because you know most people in the world don’t have access to a GP, and the most expensive kind of augmented self device that you can imagine at the moment is still a lot cheaper than a GP, than a doctor. And if you look at sub-Saharan Africa, the population predictions are that we’re looking at an extra 1,000 cities of 500,000 people each within the next 15 years. And you know, where are they going to get their medical care from? And so what you’re talking about, a pinprick of blood combined with a publicly available web service that you can access from your phone that’s then giving you the information you need, you know—and it’s not to imply that that’s necessarily better than a GP. I would hope GPs will increasingly use that information here in the West, and then you can argue about whether they’ll be supplanted by that. That’s a separate debate. But the fact is there is no GP for those people and yet they could still, in theory, get really good advice.
Gentry: Absolutely. It’s one of those things where they can kind of jump the curve. So we see this with fiber Internet: those who started with sort of the dial-up and the old telephone infrastructure are kind of building incrementally faster networks, but in places that had no infrastructure, they can get what is the fastest state of the art. So that’s what you see in some of these so-called developing countries, that they actually have a much faster broadband connection than we do in the US. And so that’s an interesting speculation about what happens if we have sort of smartphone- or cellphone-enabled medical services or medical knowledge and information gathering. And because of the slowness due to medical regulations, again in the US, the state of the art could be somewhere else that we’re not even thinking of.
Audience: Can you explain the Magna Corpa again?
Oh, I’m talking about—my voice is low but loud enough. Could you explain the Magna Corpa again, what that was and what the five tenets were?
Gentry: Absolutely. So it was the Magna Cortica, and it’s kind of a set of five guiding principles for how we can treat brain augmentation in the future. And basically it is the right to augment yourself, the right to refuse augmentation, the right to decide whether your child may or may not have augmentation, and the right to know who has been augmented.
Audience: And where did the idea of that originate again?
Gentry: It was out of a lot of our conversations like this at the Institute for the Future and sort of our research on people who are augmenting their own brains, the history of it, the potential of it, and there is a lot to speculate on—the brain is sort of this core of who we are and how we think, and our motivation, what we’re capable of doing. You know there’s tons of fascinating research on accidents, of people, like, accidentally, you know, getting into a car accident and a piece of metal getting stuck in their brain, and now they have this ability that they never had before, they can draw intricate designs from memory, you know. So it’s an interesting area we don’t know a lot about, but new technologies are allowing us to know more and more or enhance ourselves through chemicals, you know, Provigil and Adderall are already out there, so there’s use and there’s abuse. And speaking to the future of the development of smart drugs and other technologies that might stimulate specific areas of the brain, we started developing this ethical model around it. That’s really the stopper. In most of the conversations I’m in, no matter how exciting the conversation gets, or the state-of-the-art, or what people are experimenting on, there’s a stop-gap of, “But what about the ethics?” So you always have to put a little something out there.
Wohlsen: And so it’s kind of, the idea was kind of if you have a set of guidelines that have been really thoroughly thought through and worked out and discussed, that then that could help kind of smooth—instead of running up against that every time, you have a framework. Is that sort of the thinking?
Gentry: Right, it sort of frees you. I guess policy can do that in a way. At least it’s more clear-cut whether you’re breaking rules or not.
Wohlsen: Anyone else want to jump in? We still have a few more minutes here. Anybody have any other questions? Anything else?
Audience: Eri, I have a question. You’ve mentioned—
Wohlsen: Hold on. To the mic, so we can all hear.
Audience: You mentioned that some of the space, in the hacking sense I guess, you’re seeing more on the engineering side. What percentage would you say of the interest you’re seeing is more bio-based versus more from the engineering side?
Gentry: What I see more is the separation between medical professionals and engineers. So there are different things going on. There’s a big rise in health technology or digital health. It’s that space that isn’t regulated presently, although there’s a lot going on about that. And incubators are popping up, or accelerators, like Rock Health for instance or Blueprint Health, and they’re only accepting these digital health startups. And typically you would see those started by an engineer or by someone who just hasn’t been in the health domain before. In the area of bio that is more applicable, you’ve either got the extreme end of Pharma, which is incredibly expensive to get into and entirely slow—and it’s some of the frustrations with that system, of spending $30 billion on cancer research and really not getting far, that kind of frustrated me into finding a different avenue for doing research. But then on the other side you have data scientists, so really where there’s progress and where there is need is in bioinformatics. So usually this is somebody with a computer science type of background who would analyze the data that’s coming out of a genome synthesis, or who is maybe a different kind of an engineer who is designing new ways to synthesize more quickly. We can’t do a lot on the human side at BioCurious, which is the lab that I helped found, because we would get into a lot of trouble for that, regulation-wise.
I don’t know if that answered your question. Okay.
Do people know about grinders? Has anyone heard this term? Like, not the app. So I figured that when we had this conversation around innovating ourselves, that people would want to know about grinders and people hacking their own bodies, but since you don’t know [LAUGHS] I may as well tell you.
Hackers are starting to implant their own devices. I’ll go back to Fitbit, so this $100 device that you can wear around your wrist or snap onto your belt and it tracks your steps. Well, this accelerometry technology just costs a few dollars, and I’m shocked that people are so excited by it; just the ability to, like, take it out of your smartphone and put it on your pants and create an app around it makes me think differently about business models. But just going back to the cheapening cost of sensors, it’s totally within our reach to design our own sensor kit and put it into our bodies.
And most of the people who do this thing, they’re called grinders, so look them up. You’ll see some really interesting pictures and even diagrams on how these people surgically implanted the devices themselves. So you have all of these different steps that a person has to become sort of an amateur expert in. So designing the sensor package themselves, figuring out how you’re going to make that sort of live in a [LAUGHS] human biology-friendly way inside of your body without getting infections, etcetera, and then actually cutting open your own arm or fingertip, usually, and then putting that device in and sewing it back up. No doctor is allowed to perform this, which means that you can’t have anesthesia for it, and you can think about how really motivated a person has to be in order to do this. They got tons of coverage in the last year, so check that out. And I’m fascinated by it, not that I would do it myself necessarily, but through this ethical question of—like, we really need to be ethical, we really need to take care of ourselves, which means that you will never see this sort of experiment being taken on by medical professionals. And in a way this is an amazing opportunity to see what happens. Because people are willing to do it, and it prompts the question should we actually encourage this type of thing, and maybe even, god knows, give some incentive structure so we can, as a society, or at least as a health care system, get those data back? Because there are all of these mysteries about what happens when we take these devices and put them into the body, and it costs tens of millions of dollars to do a clinical trial. But people are raring [LAUGHS] to do this themselves.
So just some things for you to self-study then are, let’s see, grinders and what else was I going to talk about?
Purves: Do you know is it legal to sell things, like so if I had a device that I thought grinders would like to implant into their arm, is it legal for me to sell that, and then say it’s up to you whether you implant it or not? Just out of—I mean, not that I’m planning to do that. Microsoft—I work for Microsoft. I’m not planning to do it.
[LAUGHTER]
But just to examine that ethical question, you know, I mean would that be okay? Or like for instance could you create open source designs for grinders and things like that?
Gentry: Yeah.
Purves: You know, could you be prosecuted by people?
Gentry: I think as long as you’re careful about language, and that you’re not implying somebody use it for a medical purpose. So the rules around practicing medicine on the web come down to language. So one of my colleagues at Scanadu, he had built the first physician website in the ’90s and the US government didn’t know how to deal with this, so he actually sat on and managed a panel for policy around physicians on the web. So a good example of how you talk to somebody who’s looking for medical information is, you know, a mom of like a two-year-old will write in saying, “My daughter has a temperature of 104. What do I do? Does she have a fever?” And you can’t diagnose it as a fever. You can say, “Okay, the literature says that a fever is technically a temperature of 101 or over, and commonly prescribed medications are so-and-so.”
And even at Scanadu we talked about how there’s this extreme barrier to entry for medical devices, but there isn’t for dogs. And there actually is this right now: there’s an EEG headset for dogs, billions of dollars are going into this area, people who want to connect more and more with animals with this headset. It promises to read the brainwaves of your pets and to turn this into language that we can understand.
Wohlsen: Well I think from the augmented self to the augmented dog back to the self, there’s obviously a lot to keep thinking about here. So thanks everybody for being here. I appreciate it. It was great, great conversation. Thanks.
Olguin: Thank you.
Gentry: Thank you.

Participants

Carlos Olguin

Co-Founder, LogicInk

Drew Purves

Google DeepMind

Eri Gentry

Technology Horizons Research Manager, Institute for the Future

Marcus Wohlsen

Staff Writer, WIRED
