
Governing in the Age of the Internet of Things

Session Description:
If more and more "things" are interconnected, intelligent and self-aware, what does that mean for government? If we can track the speed of any vehicle, should we? What would be the public reaction? This is not an abstract example. It's possible today. As new efficiencies emerge everywhere from our bathrooms to our power plants, who controls this world? What regulations and policies will we need?
Chui: Why don’t I ask our panelists to introduce themselves, name, rank, and IP address—no, I’m kidding. And what do you find to be the most important intersection between the Internet of Things and policy?
O'Connor: Good afternoon, I’m Nuala O’Connor. I’m the president and CEO of the Center for Democracy and Technology. We are studying many things and advocating for many things around your digital daily lives, but in the Internet of Things space we are concerned, obviously, about the opaque decision making about you impacting your worldview, your opportunity, your ability to advance individually.
Hawkinson: I’m Alex Hawkinson. I’m founder and CEO of a company called SmartThings. We were acquired by Samsung Electronics in August last year, so now we’re steering IoT at Samsung more broadly. In IoT in general, at least in the consumer space, privacy and the implications around it are probably the biggest area where, as much as I like a lack of regulation, and self-regulation in a lot of spaces, there are some imperatives around having policy that can adapt to this technology wave.
Haley: Hi, I’m Dan Haley, and I’m in charge of government and regulatory policy for athenahealth, which is a cloud-based information technology services provider primarily to doctors and medical groups. For me the most interesting intersection is the degree to which policy is responsible for the fact that health information technology lags roughly a decade behind the rest of the information economy. It’s interesting to me to hear about all of these fantastic things, the Internet of Things, that arise from the ubiquity of information, and in healthcare we haven’t yet gotten to the ubiquity of information. The primary means of information exchange for care providers—still, true fact—is the fax machine. To a hammer every problem looks like a nail; to a policy guy, everything looks like a policy failure. But I am interested in the degree to which well-intentioned policy has frozen healthcare in amber somewhere in the 1990s.
Espinel: My name is Victoria Espinel. I’m the president and CEO of The Software Alliance. So when you asked that question, I was debating between two things and I couldn’t pick, so I’m just going to go with both of them, one of which is a little derivative. As a personal matter, as someone who was formerly in public service, there’s sort of a meta policy issue about how governments will use the Internet of Things and whether that’s going to help governments make better policy about all sorts of different things, not necessarily related to technology per se. So that’s something I just personally find really interesting.
In terms of policy areas where the Internet of Things will have a big impact, I would align myself with privacy, and maybe to put a little more specificity on one particular subset of the privacy issues, how governments are accessing data. We are clearly living in a world where the laws are—to say they’re outdated is kind. And that issue, which the US and governments around the world are struggling with, I think is a really fundamentally important one.
Chui: Great. Well, I think we described these as lunch labs, so these are meant to be interactive workshops. So before we actually jump into the panel part, I just want to give folks who are still eating the opportunity to say what they want to address. Is there something that you wanted to get out of the session? Yes?
Audience 1: DRM inside the devices.
Chui: DRM inside devices, what else? Anything else? Any other reason you came up here other than you thought the food was better on the eighth floor? No? All right. We’ll just dive into it here and hopefully that’ll provoke you. Either that or you hoped for better food up here. All right. That’s great. Why don’t we start with privacy? First of all, how many people have heard of the Internet of Things before? Okay good, people are still alive.
How many people are sick of hearing about it? Maybe. How many people think it’s hyped, over-hyped? Getting there.
Just by way of background, I lead some of our research at McKinsey Global Institute on the impact of long-term technology trends. We wrote about IoT about five years ago, and we’ll have a new report coming out in a few weeks. Our point of view is roughly the following: in terms of the value that can be created, IoT is not over-hyped. In fact, there’s probably more value than most people think. But it’s going to take a lot of work to get there, right? Overcoming some of these policy issues is part of what will have to happen in order to create the $4 trillion to $11 trillion worth of value that we think it could catalyze over the next 10 years or so.
So with that, privacy sounds like a good topic. Alex, I think you raised it first. Can you talk a little bit about the challenges that you see? Particularly, you know, you have a company that arguably can monitor people a lot of the time—and of course, the broader trend allows that to happen. What do you see, number one, as the issues, and number two, as the policy alternatives to look at?
Hawkinson: I don’t know if I have all the answers, but I’ll share some of the ideas, just some of the things that trigger it for me. In our average households today, people are growing the number of devices pretty dramatically. They start with a handful of connected things, and then, probably similar to what you went through with apps on your smartphone, you sort of get used to this way of existing and you add more and more connected things. For us, an average household sort of doubles its device count in the first three months, and then keeps on going from there. And then there are lots of households with 100-plus connected devices in the home—
Chui: How do you get to a hundred? That sounds like a lot.
Hawkinson: Well, the light switches and stuff. I mean, there are a lot of things. If you counted the number of things you interact with every day that have electricity running through them, there are more than you think. So in many cases it’s a lot of these simple devices. It’s not all the face-recognizing cat feeders, which are pretty rare. But I loved the example.
At any rate, there’s just a tremendous amount of information that comes out of it, and there’s more to it than meets the eye. Take a motion sensor in a house. You can tell not just when you’re away from the home, for security purposes, but when you’re in the home: are you in that room, how often, and you can use that for energy management or a variety of other things. And at an eldercare center you could track movement patterns in a home and whether they’re healthy patterns, and so on. And that’s just one device. As you look at the sum across all these things, there’s just incredible richness in what you can tell about a family, who’s living there, and their patterns.
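A minimal sketch, in Python, of the kind of motion-sensor pattern analysis described above: build a baseline routine from timestamped motion events, then flag hours of the day that are normally active but show no movement, as in the eldercare example. The event format, room names, and thresholds are illustrative assumptions, not SmartThings' actual API.

```python
# Illustrative sketch only: build a daily activity baseline from hypothetical
# motion-sensor events and flag (hour, room) slots that are normally active
# but show no motion today. Event format and thresholds are assumptions.
from collections import Counter
from datetime import datetime

# Hypothetical events: (ISO timestamp, room), one per sensor trip.
baseline_events = [
    ("2015-06-01T07:12:00", "kitchen"),
    ("2015-06-01T07:45:00", "bathroom"),
    ("2015-06-01T13:02:00", "kitchen"),
    ("2015-06-02T07:20:00", "kitchen"),
    ("2015-06-02T07:50:00", "bathroom"),
    ("2015-06-02T12:55:00", "kitchen"),
]

def hourly_profile(events):
    """Count motion events per (hour of day, room) to describe a routine."""
    profile = Counter()
    for timestamp, room in events:
        profile[(datetime.fromisoformat(timestamp).hour, room)] += 1
    return profile

def missing_activity(today_events, baseline, min_count=2):
    """Return (hour, room) slots that are usually active but saw no motion today."""
    today = hourly_profile(today_events)
    return sorted(slot for slot, count in baseline.items()
                  if count >= min_count and today[slot] == 0)

baseline = hourly_profile(baseline_events)
today = [("2015-06-03T13:10:00", "kitchen")]   # no morning movement at all
print(missing_activity(today, baseline))        # -> [(7, 'bathroom'), (7, 'kitchen')]
```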
And so we have a lot of opportunities where big home insurers, as an example, come to us and say, “Well, gee, we’d love to give away your technology to our home insurance customers because it’s going to give them richer insights about problems that are emerging and let them get in front of it, and we can discount their home insurance policy and all these great things.” On the other hand, how do you feel about the insights from your data, all of your home patterns and so on, going to your insurance company? So I think that intersection point, that’s just one example. There’s a million facets to it.
So we’ve tended to take the stance that it’s first principles, privacy as a feature set. The pattern we’ve been building into our platform is that users own their own data: what does that mean in terms of protecting it at all levels, for application providers, service providers that might want to get access to it, and so on? I’ll open it up so I don’t dominate, but I think it’s just an explosive open issue, where if we’re too constrictive the innovation’s not going to happen, and if we’re not protective, people could stumble into a place where some company suddenly has incredible insights about them that they didn’t intend to be out there.
O'Connor: You sound like you’re making my pitch for why we should all be afraid. So I will put on my advocate hat and talk a little bit about why, first of all, we are thinking about these issues at CDT. And I’ll tell this story, and some of you have heard it already, that my eight-year-old daughter asked me, “Mommy, why did you leave Amazon to take this job?” You know, it’s a lot easier to explain to people what GE does and what Amazon does, where I used to work, as opposed to the Center for Democracy and Technology. And I happened to be holding my cellphone at the time, and said to her, “Well Maggie, you know, when I was your age, I remember vividly reading a book in second grade that said someday you’ll be able to see the person you’re talking to on the phone while you’re talking to them. And this was a long, long time ago and it really blew my mind”—let me just not even put a date on that. And I said, “Well honey, that was my future. My future is now. I talk to you all the time on that. We Skype and we FaceTime and whatever.” And she asked the quintessential question, which is, “Mommy, what does my future look like?” And that is the great thing at CDT and at all the great companies: we get to work on what the future’s going to look like.
And I remember another story when I was at Amazon and one of the engineers called me up and said, “Well we have this device and it’s going to be shaped like this and it’s going to do this and it’s going to collect this kind of data.” And I said, “Guys, I can write you a disclosure that covers that, but who is going to buy that? Who would want that thing that is going to sit on my kitchen counter and collect all this data or whatever?” And they said, “Well that’s what they said about Kindle 20 years ago.” And so this is why I’m not Jeff Bezos and I’m not running Amazon, because I don’t have that kind of future cast.
But the answer to my daughter was, “Honey, your future looks like a world where you get up in the morning and the walls are screens, right? And you don’t even have to touch them. You just eyeball them and you say, ‘What’s on my calendar today?’ while you’re making your cup of coffee, and it’s kind of more Star Trek than, hopefully, Star Wars.” But that means everything is going to be on all the time. You’re right. It’s all there, especially inside the curtilage—anyone go to law school? Anyone remember what curtilage is? I love that word. No one understands it; it sounds kind of naughty. But it’s about what’s being collected about you inside the boundaries of your home and who has the rights to it. The American construct of data is ownership. We think of it as my data, myself, the digital self, the digital human. But where are the boundaries between the companies and the individual? Who owns the data, or who has rights to that data? And Victoria very well pointed out the compelling legal policy issue right now, which is: just because I’ve given that data freely to a company with which I am doing business, should it end up in the hands of the government? We need harder boundaries between the private sector and the government on the most intimate, most granular data about our daily lives. Again, it’s all great to have a wired house that can think for me and tell Safeway I need more milk. I don’t necessarily want my federal government knowing what I’m eating and who’s in my house and when I come and go, because judgments could be made about me. And judgments could be made about me in the private sector as well, and that’s certainly worth talking about. But there have got to be boundaries, and there have got to be rules, whether regulation is enough, or even legislation, or whatever. There has got to be a sense of self and dignity of individuals in this always on, always connected world.
Chui: Victoria, I’d love to get your perspective now that you’ve been called out by a fellow panelist.
O'Connor: In a good way.
Espinel: Thank you. So I’ll start by saying I completely agree with something you said at the outset, which I’m going to paraphrase. The Internet of Things can, and I think will, bring amazing benefits. But I also think the policy issues need to be overcome first, or that won’t happen, or it will happen in a way that’s greatly diminished from the promise. And clearly the privacy issue is one of the biggest, if not the biggest, policy issues to overcome. A lot of data is going to be collected that is interesting to lots of people but not particularly interesting to the government, and probably not as sensitive as the sort of individual information that we’ve been talking about here, right?
Chui: Such as what?
Espinel: You know, so like information from—you mean what’s less interesting?
Chui: Yes.
Espinel: So I think information from jet engines. You know, when I fly to Brussels, there are 45,000 sensors on that jet engine sending information back so that the plane flies more efficiently and more safely. And that’s incredibly important to the airlines and to all the people on that particular flight, but it doesn’t raise the same kind of privacy issues as your heart rate or how fast you were driving or your location. And I think those—
Chui: Or who you’re with.
Espinel: Yes, who you’re with. Or, what I think Nuala alluded to: there are individual bits of information that you can aggregate and combine in a way that creates profiles, a depth of information that goes far beyond anything that would be observable by the normal human eye or normal human capacity. And that is a real concern. And then there’s how this is treated by companies; we should probably spend a little more time talking about that. But just to refer back to the issue that I raised, and that Nuala’s talked about as well, I would say we are in a place where the laws are clearly outdated. I think government access to personal data is not something that can be left to self-regulation. I don’t think we want to trust governments around the world to make the right choices there. Certainly, speaking for software, I don’t think it’s the optimal outcome for society for software companies, or any type of companies, to be making those decisions and to be put in an awkward position in the middle. Our companies want to work with law enforcement. Our companies take national security issues incredibly seriously. But right now, because the rules are unclear, they are often put in a place where they have to make a decision about what’s appropriate, and that is not what we as a society want. I think we want the governments that we elect to come up with legal processes that are clear and predictable and consistent with our values of due process and civil liberties.
And the last thing I’ll say before I get off my soapbox and hand over the mic is that this is a really important conversation for us to be having in the United States, but we are not the only government that is struggling with this issue. And I think it makes no sense, and it will not be the best outcome, if the United States has that conversation internally in isolation. I think it’s difficult, but incredibly important, for the United States to be working with other governments, at least some subset of other governments where we have common traditions, to try to come to a consensus on what the right balance is there.
Chui: So just to—while you’re still on your soapbox—to be very specific, you used the adjective outdated. What would it mean to have an up-to-date set of regulations or policies?
Espinel: Yeah, I was going to say, I’ll kick it to Nuala because I’m—and I’m happy to take it back as well, because I think there’s actually a few different areas.
O'Connor: I’m so glad to see us in concert on this. This is wonderful. Our organizations have worked together on a number of issues. But we had a huge win, and I’m going to toot our horn because we’ve been really bad at taking credit for this kind of work: it was my team at CDT that helped draft what is now law, the USA Freedom Act that passed last week. And it is the right compromise. It is the right set of laws right now that allows for limited law enforcement and national security access: limited, not overreaching, not overwrought. And we’re also not a group that will call for defunding the NSA. But Victoria’s 100% right. We can’t get ECPA reform. It’s simply inconceivable. That should have been the easy one, and USA Freedom should have been the harder one. ECPA is the Electronic Communications Privacy Act, which allows government intrusion into your emails once they have simply been stored for more than 180 days, because when the law was passed nobody used email. Email seemed like something the guys at MIT were doing and nobody else, and so it seemed irrelevant. Now that email is an ordinary form of communication in this country, the law really does need to be updated. And let me stop by saying I’m not a big one for believing that just because a technology is new the law has to change. But in this case the law is very specific and very, very dated. So that’s a very good example.
Those are two examples. And USA Freedom is just the first in a number of revisions and modifications that need to happen to reflect, as Victoria says, a global dialogue on national security, a global conversation around what is legitimate and limited national security and law enforcement in the digital world.
Chui: So let’s take this back to IoT, integrated sensors and devices. It’s a world connected by networks and computers, and, you know, some of that data could be sent by email but a bunch of it wouldn’t be. One of the places where there are a lot of sensors is the medical field, where people have worried about privacy for a very, very long time. Comments about what you’ve learned in terms of the approaches that people have taken to privacy, particularly around device data?
Haley: I’d actually look back to something that Alex started with, which is the observation that if you’re too restrictive, innovation doesn’t happen. Healthcare is a perfect example, I think, of where we’re too restrictive in terms of how we deal with data privacy. And that’s for an understandable reason: there’s something instinctively private about your health data, even though, if you take it up to 10,000 feet, you might care less about whether someone knows what your blood pressure is than you do that they know where you go at every time of day or where your kids go to school. But in healthcare, again because of policy, there’s a bias against the easy sharing of information, and actually a bias toward much less efficient and much more dangerous practices. For example, it’s no exaggeration to say there’s a preference in healthcare for printing out my medical records, stuffing them in a HIPAA-compliant envelope, and handing it to a heroin-addled bike messenger to pedal across town, as opposed to sending them via secure electronic means. Nothing against bike messengers, but the risk profiles of those two modes of transmission are very, very different.
So I would love to get to a place in healthcare where the issue at bar is how we better integrate device data into, for example, electronic health records. Right now the issue at bar is how we get health data flowing from care facility to care facility, and then how we, in a way that is safe and private, integrate all of this incredible potential, where it’s not your refrigerator populating data into your consumer life, it’s your blood glucose monitor populating information into your health record. That’s an incredibly valuable thing. But the legal paradigm in healthcare, I would argue, is even more retrograde than outside it, and more in need of reform.
Chui: So Nuala, let’s come back to you. You were going to tell us how an updated law would be better, so—
O'Connor: Well, he’s talking about HIPAA, and I completely agree: 10,000 pages of regulation have only served to make individual consumers really annoyed when they get to the doctor and have to sign things they don’t understand. We’re working on another project at CDT, actually in partnership with Fitbit and funded by the Robert Wood Johnson Foundation, and I’m so proud of this work: studying flows of healthcare data in the regulated and nonregulated space and how to make this better. And this is about making people healthier and improving outcomes and improving technology. Again, referring back to my GE days, we had the GE Healthcare system in Salt Lake City, one of the oldest mainframe computers collecting health data and predicting and improving outcomes in a fairly closed population. And what they were able to do with that dataset in helping doctors diagnose, get to outcomes faster, and get to lower health costs and improved outcomes was incredible, absolutely incredible. And you’re 100% right that the regulatory structure in that world has not helped; it has actually hindered. And if you think that’s bad, you should come over and visit our folks working on education privacy, because that’s even worse. I hate to tell you, but we are abusing kids, generations of kids who could be helped by big data and technology and devices in the classroom, because of fear, fear and loathing in the dialogue in that space as well.
So I don’t have all the silver bullets either, but I mean a simplified, streamlined approach that says here are, as Victoria said, first principles, basic principles about fairness and transparency and accountability. I was just on the phone with a “New York Times” reporter having this very conversation, and he said, “Well, what’s the answer?” I said, well, the first answer is letting people know what data’s being collected. And that’s incredibly hard to do in the Internet of Things space—another phrase I hate, really. It’s the Internet of People, people. It’s not the Internet of Things, right? But how do you let people know, as these sensors become ubiquitous, what is being collected about them as they walk down the street, by the government, by the private sector? It is a harder conversation. It’s not like going to a privacy policy on a website or sending them those horrible disclosures that the banks all send us once a year about what data they are taking. It is an ongoing dialogue. I still think, however, we can be creative in the technology community about helping the individual consumer understand what is happening to them, what the deal is, how they can modify and control and have the ownership over their data that they need. But the companies that are building these great devices have some responsibility to help explain and educate and reveal and have that dialogue, and it’s one of customer dignity and trust. My last title at Amazon was VP of customer trust, which shows you how much emphasis they put on data, but also that respect for the individual customer is part of the mindset in the right kind of places.
Hawkinson: But I think what gets lost for a lot of people, too, is just how impactful IoT can be in individual daily lives, and sort of the imperative. I don’t know if everybody feels it, but everybody has different challenges in their world context. If you have a history of health problems in your family, there are the two million data points that can be collected on you per day by one of these Simbands that can really help your doctor, if they have it in advance, predict patterns of emerging health issues for you individually. If you have a parent that is reaching that threshold where they’re getting a little older and you’re—I said it up on stage—considering pushing them out of their home prematurely: how much higher quality of life could a whole generation of elders have, enabled by these simple technologies? I’m moving to California, to a house with a pool, and I’ve got young kids. One of the leading causes of death in kids between the ages of five and ten is drowning in pools. It’s incredibly temporal information, and it’s a very simple problem to solve to know whether somebody falls in the pool and whether an adult is around. There are these imperatives for people’s individual lives where, if the information doesn’t get shared, it doesn’t get enabled. I feel like there’s a lot at stake as well, maybe more so than in many of these other spaces.
Chui: Is there a point at which not only the sharing of information should be restricted but actually the sharing should be required?
Haley: Sure. Yeah, healthcare.
Chui: Say more.
Haley: When you talk about data privacy in healthcare, it’s very important always to think about the cost-benefit, obviously. “Information is power” is a cliché for a reason, and the notion that we have the ability to vastly improve outcomes simply by looking at patterns, but are prohibited from doing so in service to legal paradigms created before the capability existed, is appalling. When you think about—not to get melodramatic—well, to get melodramatic—these are life-and-death issues in healthcare every single day. And every single day across the country, in world-class care environments, people die. Two years ago I was diagnosed with cancer and sent to the lab, where I was teed up for a pregnancy test—literal truth—just because the wrong box was checked on the fax. And that was kind of funny, and it kind of shook me out of a bad state of mind at the moment. But that could have been anything. It could easily have been something that was not obviously an error. It could have made a bad day a lot worse. And this was at Brigham and Women’s Hospital in Boston, one of the best care facilities in the world. This happens all the time, and people die. So, yes, there is a moral imperative to share information in healthcare to a much broader extent than we do now.
Espinel: So on the moral imperative, I think the question is: should that be accompanied by a legal imperative to share information in certain spaces, and what happens to notice and consent in the world that we live in? I have an opinion on that I’ll offer. I think Nuala raised one of the issues, that information is going to be collected in all sorts of ways and at a rate at which I think the notice and consent paradigm will break down. I think there are some spaces, and healthcare is actually one, where notice and consent might still work to some extent, because you’re having very personalized, specific interactions with human beings. And there are amazing stories that come out of healthcare, and I agree with what you say; I think a lot of them really resonate with people because of their own personal experience. I was in Tokyo and learned the Japanese government was kicking off an initiative where they were going to use big data to go through thousands of medical journals to look for predictors for Alzheimer’s, and because of my own family history, that really resonated with me. There’s a famous experiment Watson did in Canada with premature babies, using sensors to collect—I think they were collecting not 500, but maybe 20 or 30 data points on premature babies. And they found, and they don’t know why, that when premature babies’ vital signs stabilize, they are actually getting close to a crash, which is totally counterintuitive. So doctors understandably see vital signs stabilize and think, “Okay, I can focus on another patient.” Now, in fact, they have learned that that is a tremendous risk factor, so they know to focus on premature babies when their vital signs are stabilizing. Again, they don’t know why, but there’s a very high correlation there. And to someone who had a child who was, fortunately very briefly, in the NICU, you read that and it brings home to you in a very personal way the benefits.
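A toy sketch of the counterintuitive signal described here: flag stretches where a vital sign's variability collapses. The window size, threshold, and simulated heart-rate values are arbitrary illustrations, not clinical parameters or the actual method used in the Canadian study.

```python
# Illustrative only: flag windows where the rolling standard deviation of a
# vital sign drops below a threshold, i.e., the sign "stabilizes." Window size,
# threshold, and data are made up for the example, not clinical values.
from statistics import pstdev

def stabilization_alerts(samples, window=10, sd_threshold=1.0):
    """Return indices where the trailing window's standard deviation is very low."""
    alerts = []
    for i in range(window, len(samples) + 1):
        if pstdev(samples[i - window:i]) < sd_threshold:
            alerts.append(i - 1)  # index of the last sample in the quiet window
    return alerts

# Simulated per-minute heart-rate readings: normal variability, then a flat stretch.
heart_rate = [152, 148, 155, 150, 147, 153, 149, 151, 154, 148,
              150, 150, 151, 150, 150, 150, 150, 151, 150, 150]
print(stabilization_alerts(heart_rate))  # flags the later, "too stable" minutes
```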
But I think healthcare potentially is an area, because it is very individualized, where notice and consent can be helpful. In other areas, notice and consent is going to break down. I would argue that notice and consent as it stands today is already problematic, because I don’t know how effectively the notice is actually given, and in at least some circumstances the consent doesn’t mean much at all. But because of the way the Internet of Things is going to work, because of the myriad points of collection, I don’t think it’s going to work in the same way.
And the second thing I’ll say, or the last point I’ll make before I turn over the mic, is that I think a lot of the benefits we’re going to see will come from data being used and aggregated in ways that are very hard for us to sit here and predict. This point has been made before. And that does not work in a notice and consent world. I think what we need to be looking at are costs and benefits and risks. Policy doesn’t move overnight, so it will probably take us some time to make that evolution, but I think that’s clearly where we need to be heading.
Chui: So notice and consent doesn’t work because these lights are paying attention to me, and this microphone is, and everything else. How do you, if you, quote, “own the data” or you should have some say in terms of how it’s used, how do you do that when we’re being monitored all the time?
Espinel: So I’ll say two things briefly, and then I’ll kick it over to others because I’m sure people have lots of interesting thoughts on this. One is transparency and accountability for those that are collecting the data, and that applies to private companies and to governments as well. The second is looking at the cost-benefit, or risk, or however you want to phrase it: weighing the upsides against potential downsides to human dignity, to people’s conception of privacy, and to the reasonable expectation of privacy. And we could have a really interesting conversation about what’s happening to that expectation of privacy and how it is going to mutate over the next decade. But I think if you take as your baseline parameters not what people have been told and what you believe they have consented to, but instead infuse principles of transparency and accountability into those that hold data, and then think about that particular data and the cost-benefit or the risk to personal dignity in how it’s being used, I think that gets us a long way. But I would actually be delighted to hear from others on the panel because I know there’s a lot of smart thinking here on this.
Hawkinson: I don’t know exactly how to define it. I’m sure there’s some official term for notice and consent, and I don’t know if this is a variant on that. But we’ve tried to take the approach of a lot of user education, and then, when you’re setting up these things and the apps that work with them, putting you in control of what is being shared with what. That can get complicated for people as well. As an example, we’re going to track the temperature patterns on your thermostat and correlate that with the weather outside your house over a period of time. It turns out that from that you get a lot of great insights about how to save energy in your home, whether it might be new insulation, whether you need new air filters in your HVAC system, and all sorts of other stuff. So by walking through those examples, you try to make users aware, and then they see the benefits. But where there’s no silver bullet is, again, what I’ll call the creepy factor. It’s self-governance at this point that keeps us from advertising air filters of a specific type on the back of that information, or using it in other ways. So there’s sort of a big Mandelbrot set of issues that unfolds behind each one of those things. But I think there is a lot of education, a lot of what I’ll call notifying: putting it in front of users so they understand the general use cases around the information that they’re agreeing to share.
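A minimal sketch of the thermostat-versus-weather correlation described above, using hypothetical hourly readings taken while the HVAC is assumed idle. The data and the 0.8 cutoff are illustrative only; the idea is that a home whose indoor temperature closely tracks the outdoors is probably losing heat quickly and may benefit from better insulation or fresh filters.

```python
# Illustrative only: hypothetical hourly readings taken while the HVAC was idle.
# statistics.correlation requires Python 3.10+; data and the 0.8 cutoff are made up.
from statistics import correlation

outdoor_f = [40, 38, 35, 33, 31, 30, 29, 28, 30, 33]
indoor_f  = [68, 67, 66, 64, 63, 62, 61, 60, 61, 62]

r = correlation(outdoor_f, indoor_f)
if r > 0.8:
    print(f"Indoor temperature tracks the outdoors closely (r={r:.2f}); "
          "better insulation or new air filters may pay off.")
else:
    print(f"Indoor temperature is well decoupled from the outdoors (r={r:.2f}).")
```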
Chui: And can we dig into that a little bit? We talked a little bit about the responsibilities of government versus private actors, but many of us feel that the private actors in our lives are much more powerful, or at least much more direct in the way they influence our lives. So you talked about what your company does. Other thoughts from other folks on the panel in terms of what the responsibilities are, or should be, for companies when it comes to IoT data?
O'Connor: So two thoughts. I agree the big debate right now is collection versus use: should the company collect the data at all, or should it use it, how should it use it, and should that use be germane to the initial collection? I don’t think notice and choice is dead, but I think it needs to be reimagined and reinvigorated. Again, referring to my previous employers, of which I’m very proud: at Amazon we would have conversations around questions like, should the thing have lights that light up? What color should they be? Should they rotate to the left or to the right? Think about your iPhone or your smartphone. When you go to voice-activate it, it shows you the little microphone. My daughter actually had to ask me, “Mommy, what is that?” She’d never seen an old-school microphone, but we all know what the icon means, right? So it doesn’t necessarily have to be text. It can be visual, it can be iconographic, it can be lighting or sounds or whatever. One of the last projects I worked on at Amazon was Amazon Fire, and I’m very proud that we embedded our disclosures right into the OOBE, the out-of-box experience: when you plug in that Fire TV, a cartoon character pops up and talks to you. It’s a voice-activated television. It’s talking to you about the disclosures in voice, in exactly the medium and communication style that you are using, and hopefully in the right language, depending on what country you bought it in.
But I wanted to refer back to the larger question, which is where there should be no notice and choice. And I think that’s actually the harder question. And surprisingly, I would agree with you that there are places in society—what are the big societal problems that we are trying to solve, and what do we as a country or a world want to do with this technology and this data to make our lives better? And I would say healthcare is one of the smaller slices. Obviously, coming from a civil liberties group, I would veer very far over toward individual liberty versus the rights of the government. But there are very good examples in healthcare where peeling people out of the dataset will damage the data, and we will not know the accurate outcomes. And so I would say there are very small and, I hate to say sectoral, but individually negotiated areas of society where we want the data, and where we would then also want the protections of de-identification and anonymization, and protections for the individuals so that there isn’t any blowback to them, any harm to them, because they have not voluntarily given their dataset.
I’ll just leave you with one last thought. A great technology company we were just talking to recently has new technology for seatbelts. Now, we all love and use our seatbelts, right? But what this technology did was embed in the fibers the ability to detect the alcohol content in your blood. Now, that is—again, back to health, but also private sector collection of data—fascinating and amazing technology when you think of the drunk driving deaths in this country: the fact that the car wouldn’t start because you pulled on your seatbelt, because you touched it. Anyone who’s had any experience with alcoholism or drunk driving knows courts can order you to have breathalyzers or whatever. I mean, there’s all sorts of technology that exists right now in the legal space, but this would be embedded in your car, right? And they went through their whole development process and said, “Well, first we actually just had it floating in the air, so that if anyone was breathing alcohol out into the air, the car wouldn’t start. But then we realized that would ding the car for all the passengers as well, so that wasn’t fair because the ambient air would get everybody, blah, blah, blah, so we just embedded the technology in the seatbelt of the driver.” So now we have the question: do you want this in every car in America? I mean, we’ve decided the speed limit is 55, and maybe we break that. We’ve decided that drunk driving is a horrible crime and should be punished. Do we want this embedded technology in the seatbelts of every car? Is that the right thing for industry? I’m not saying I have an answer. I just think it’s a fascinating societal question. Do we want the government to mandate that? Should there be a law? Do we want that data going to your insurance company or the police immediately when you put on your seatbelt and you’re in the car and you’re drunk? It’s a fascinating opportunity to save lives, but also to invade what many people in America, at least, think of as their personal space inside the four doors of their car. And these are the great technology versus society issues we have.
Hawkinson: Luckily that one has a no-brainer answer, which is self-driving cars.
Haley: To answer one of the earlier questions about what policy should do. Actually, what I’m getting out of this conversation is a better definition in my own head of what policy is doing wrong right now, which is regulating from a perspective of protecting against the worst possible outcomes: risk-based regulation that assumes the worst possible risk, rather than policy that punishes and proscribes misuse of data or irresponsible uses of data but doesn’t make it unnecessarily difficult to make responsible and societally beneficial use of data. So again, I’ll come back to my own experience in healthcare. It is more difficult for my company to use data in a highly protected, highly secure way than it is for other actors in our space to use data in a much less protected, much more risky way, and that’s because of the biases in policy against data sharing, as well as the fact that policy in this area was largely written before today’s technology existed; it’s not contextually appropriate anymore. We need to get to a place where we look at what we want as a society and how data can be beneficial in achieving those ends, and then proscribe what we don’t want.
Chui: Did you go to law school?
Haley: I did.
Audience 1: This is a question about the panel topic, but just before we get there: downstairs, when I asked whether people believe the NSA will stop surveilling even though a law was passed, nobody raised their hand. So we need to be really careful in thinking that the laws we pass are actually going to deliver the results that we want.
Secondly, on the privacy area, nobody is talking about metadata. You can sign all the terms of service you want that protect you, but if you think your privacy is saved, that has nothing to do with how the metadata is being used by corporations or governments or police, in ways that allow them to track what you’re doing and that have nothing to do with what you agreed to. Which brings me to the point of DRM. The Internet of Things is an amazingly aspirational goal that would make the efficiency of the plant better, maybe help us fight crime, keep us from driving drunk, etcetera. But we are embedding technologies in these devices that are controlled by the companies and that do not allow us to control the devices once we own them. And there isn’t even a policy framework for thinking about it, because nobody in Congress would understand what I just said. So on the Internet of Things, could you, the panel, please address what we can do to ensure that DRM is not embedded in the Internet of Things, but the lives of people are?
Chui: Alex? You build these things.
Hawkinson: Yeah, I’d love to talk about that more. It’s a really tough one, because the definition of what a thing is, is sort of changing. Take my laptop when I don’t have Internet connectivity: it’s effectively useless. Yes, there are some cached presentations and some other stuff I can do, but so much of what I use it for as a communication tool is intertwined with all the services I’m tying into. And a lot of it happens not embedded on a single thing; it’s applications, or otherwise, in this higher order of the connectivity space between a bunch of things. So, not to rush over it; it’s totally a place I want to dig into and unpack more myself so I can understand your point. But I think one of the challenges is that it can’t be isolated to the rights to use an individual device when so many of the applications of these things spread past the individual device itself.
Chui: Do you think policy should tell you how to build a device so that it can achieve the things that have been suggested?
Hawkinson: No. I mean, I think there are policies for things like, we shouldn’t use nuclear radioactive transmission stuff to make the range on these things go higher, but in a lot of spaces you can’t hold back the tidal wave of amazing innovation that’s happening, that enables all these underlying technologies. So it’s a tough set of problems.
Chui: Other questions?
Audience 2: What about policies regarding the aggregation of data? The thing I don’t hear a lot about is that government on its own can’t collect certain information, but if you voluntarily give it up and it’s collected by a third-party aggregator, that aggregator can sell all that information to the government. We don’t hear about that a lot, and it’s a big workaround for government to get information that it’s not able to get directly. So God forbid the usage policy allows a light switch to collect information that is sent to an aggregator, and before you know it, you have somebody’s entire day mapped out, and it’s legal for law enforcement to come in and say, “Yeah, we know when this guy wakes up, when this guy goes to bed. We know the whole day, because all these little sensors have now been aggregated.”
Hawkinson: That’s a great example. It’s the type of stuff we have our own internal paranoid discussions around, I’ll say. And we have tended toward policies. Basically, what we’re thinking about is whether there’s a self-certification framework, a common set of policies around how a company treats data, where we share it, what it’s used for, and so on, that could be almost a stamp, a certification at some level, that we would encourage others in the industry to follow. So we haven’t done any of those deals with the insurers, as an example, literally because, yes, an individual household could get this technology less expensively, but you know what, it’s not that expensive now, and it starts to fall into that potential creepy factor that we just don’t want to get to. But you’re stepping into a huge area that I think needs a lot of thought.
Espinel: So I would agree with that. I think it does need a lot of thought, and I think, you know, you said information is voluntarily given over, and that’s true. But I don’t think that necessarily means that people have a full understanding or expectation of how it’s going to be used. So sort of going back to the reasonable expectation of privacy and how consumers believe that information is going to be used when they voluntarily give it up to a company.
Chui: After not reading the terms of use.
Espinel: Or maybe they even do read the terms of use, but I think there’s still sort of an understanding that—I don’t think the consumer expectation is as broad as some of the potential uses out there could be. And I think, therefore, when we are talking about law enforcement and what they have access to, the legal rules in the United States, which are supposed to protect the reasonable expectation of privacy from unreasonable searches and seizures, are one of the areas that have not been kept up to date. And this is a very live conversation, not so much about aggregation of data but about the bigger issue of reasonable expectation of privacy. It’s something the Supreme Court has had some interesting things to say about, and we expect that it will have more interesting things to say about how the reasonable expectation of privacy and the collection of data work in a digital world. The Supreme Court has already signaled that the digital world, and a cellphone certainly, should be treated in a way that is very different from a cigarette pack or a note that you happen to be carrying on your physical person if you’re stopped for an arrest. So my own personal expectation is that the Supreme Court and our larger legal framework will learn to treat that type of information in a way that is different from the way physical information has been treated. I could go on at length, so I’m going to stop.
Hawkinson: If I could add one more curveball to all this, just to freak people out further: some of this is also going to come not from explicit sensors that you install yourself. Let’s take small business examples. You put a camera in a small business, you can track patterns of people walking through the aisles or where they’re spending their time, and then you can learn to recognize somebody again and again: this is a repeat customer. The person up front gets a notification that customer X, who is a very loyal customer, is in, and gives them their free latte or whatever. There are all these environments you’ll be in, your friends’ houses, where a ton of this information is being collected that you never explicitly agreed to in any way. And that’s just, again, mixed into the complexity of the space. And the answer isn’t to shut it all down, because of all these imperatives that hit people individually, where there’s this opportunity to really improve lives. So just to add another big pickle.
Espinel: So I will just say—I will take the microphone back briefly. I think in the government space part of that goes to our understanding of what a search is, and what a reasonable search is, and that, under our system of law, has profound implications for what law enforcement can do and what they have to do in advance to get that information.
I think in the private sector space it goes back to what we were talking about before in terms of risks. You were talking about the bike courier versus the electronic system. I think we need to think about risk in a different way. Risk is sometimes used to mean the risk of a data breach or the risk of mishandling, but I think we should be looking at risk as a sort of larger context of cost and benefit and sort of what the potential harms are to dignity and what the potential harms are to society if information can be shared or not.
Chui: We have to wrap it up. We’ll take one more.
Audience 3: I was hoping people could talk a little bit, building on this idea of risk and the conversations about use versus collection. One of the things that’s very important is how the aggregated sets of information are going to be used to make decisions, whether that’s decisions about allocation of resources, opportunities being offered to people, etcetera. So that’s one question in itself. And then there’s the question of scale: these are the things we want to monitor for appropriate use, but if we have an essentially infinite number of devices and a very large number of uses, how can we use technology or other systems to monitor for what we societally decide is appropriate?
Chui: Any response?
O'Connor: Those are two huge questions, so thank you for asking them. I know we’ve got the head of the organization here looking at us to wrap up, so I would simply say this: what we are fighting for at CDT is to make sure that democratic values and equality are embedded in digital decisions. We have an old project called Digital Decisions about algorithms and the responsible use of data to improve society, as opposed to creating further disparity. And there’s all sorts of research and all sorts of great thinking around the balkanization or fragmentation of society based on decisions that are embedded in the technology today. And I think good companies are thinking about these things, and you see a trend toward hiring sociologists and psychologists in technology and in advocacy to think about the effect on the human. The questions are so big that they really are decade-long conversations we have to have as a country and as a society, as a globe.
I’ll leave with one quick response to the last conversation we had, which is that the answers are actually easier when it comes to the private sector versus the government: tell them to come back with a warrant.
Chui: All right, three quick questions for everybody. What’s the worst IoT policy that could be enacted?
O'Connor: Collect everything all the time and don’t ever think about it.
Haley: Worst IoT policy?
Chui: Yeah.
Hawkinson: Gosh. Be too restrictive on telling consumers what they can and cannot buy.
Haley: Status quo.
Espinel: In the law enforcement space I’d say the status quo. I think we need to update the laws so that there are proper restrictions on what law enforcement can do, and clear and predictable paths. The status quo we have right now is quite bad.
In terms of enacting policy, I think putting blanket restrictions on what companies are allowed to do, staying in the current construct, would be a real damper on innovation.
Chui: If you could do one thing in policy, what would it be, to enhance IoT’s benefits?
Espinel: Well, time to go back to law enforcement. I think we need to update the laws around law enforcement.
Haley: Reset across the board to account for the Internet.
Chui: Alex?
Hawkinson: I’d say something similar, but I also think there’s so much opportunity in each of these spaces that I’d lean on what this guy was talking about, in terms of a total revamp of how healthcare information is shared. Nothing could be closer to home for every human being, eventually.
Chui: Nuala, last word.
O'Connor: A simplified legal structure that allows for innovation but protects the dignity of the human.
Chui: Great. Please thank our panelists.

Participants

Alex Hawkinson

Founder and CEO, SmartThings

Dan Haley

VP, Government and Regulatory Affairs and Assistant General Counsel, athenahealth

Michael Chui

Partner, McKinsey Global Institute, McKinsey & Company

Victoria Espinel

President and CEO, The Software Alliance

Nuala O'Connor

President and CEO, Center for Democracy and Technology
