
Privacy, Trust and The Future of Digital Relationships

Session Description:
How does culture alter our technology? Two trained anthropologists–one an eminent and longtime student of tech-influenced social systems, the other a senior business journalist–discuss the social, economic and political biases we bake into our systems. How may that affect how people behave? If we’re moving to a world of sensors, analytics, and a consequently altered society, will it be a fair one?
 
Kirkpatrick: Many times during the day this issue of trust and privacy and who’s really getting the benefit has come up, and I hope in this final session it’ll sort of really go on steroids, so to speak. We’re going to start out with danah boyd talking a little bit about some of her own thinking about trust and technology, and then Gillian Tett, another leading thinker about these issues, is going to join danah onstage and I’m going to come up and ask a few questions.
boyd: So I have the honor of being able to give a provocation.
So thank you so much for coming out today. I think this has been a phenomenal set of conversations. My goal today is to talk a little bit more about how to think critically about some of the data-related issues that we’ve been talking about, because I live and breathe the sort of downfalls and the challenges there.
I want to begin by saying that big data is a myth. And I don’t mean that there’s not data that’s big or that there’s not data that’s useful, but that the frame we hear when we hear big data is actually wrapped up with this ideology, as though we could magically solve all of the world’s problems if we just had more data. And that frame has prompted a huge drive in innovation without a willingness to understand where that goes terribly wrong or what some of the unintended consequences are. In fact, it’s gone wrong so notably that we’ve reached the point where big data kind of feels surveillant—surveillant to the point where it’s kind of creepy.
So in the last couple of years, we’ve moved away from big data as the dominant frame to artificial intelligence. And one of the things that I’ve learned about artificial intelligence is that it often means three different things, depending on who you’re talking to. To the public, it means any time a computer does something that seems remotely intelligent, and to the industry at this point, it means any form of statistical processing and modeling, any form of machine learning. And the researchers down the street are scratching their heads going, “No, it’s a very narrow set of technical interventions.” But what it means at the public level is this idea of machines coming to replace us, and we’ve heard that as a tension throughout today.
Now, I’m a big believer in actually using data analytics in order to get smarter, in order to figure out how you can do what you do and do it well. And I’m going to give you an example from my own work in the things that I’ve been doing. I’m on the board of Crisis Text Line, which is an amazing service here in town that allows people who are in crisis to send text messages to counselors. So think about an old hotline, except that this is now actually done via text. It’s pretty phenomenal to watch it play out.
Now, what’s intriguing about this is that it’s an amazing service, but it’s also a data analytics process. We take all of the data from people who are writing in and talking to counselors and we process it in order to empower counselors. We don’t automate what’s going to be said in any way. We allow counselors to see correlations that we can see statistically in order to become better counselors. We give them references, we give them pinpoints. A lot of it is about empowering humans. And this is important because a lot of the conversation that tends to lead these discussions is about replacing humans, about assuming that technology can be smarter than humans, rather than realizing the degree to which it can be a tool and be empowering.
Now, I love this quote from Geoffrey Bowker. He argues that, “Raw data is both an oxymoron and a bad idea; to the contrary, data should be cooked with care.” And I think it’s a phenomenal way of thinking about it because it’s not just the data. In fact, the data means nothing. It’s the model. And that means the data plus the algorithm plus the interpretation. And there’s an amazing number of things that can go wrong in that process. The data can be deeply biased in unexpected ways, and in deeply concerning ways. The algorithm may not be appropriate for the data that you’re working with. And the model and the level of interpretation that come out of this can have all sorts of social ramifications, such that even a model built for one purpose can be quite harmful when used for a different purpose.
I’m going to give you an example from Latanya Sweeney. She’s a professor of computer science at Harvard University and she was doing an ego search, something that we’ve all done, searched for our names to see what would come up, and she found that when she searched for her name, she got all sorts of criminal justice-related products: “Latanya Sweeney arrested.” And she was like, “What the heck is this?”
Now, as a computer scientist, she realized that the fun thing to do was to script this. Would this occur for all names or just her name? And so she went and grabbed baby names over many, many years and found that, not surprisingly, black names were more likely to get these criminal justice-related products than white names.
Now, the immediate intuition for somebody who’s not working in technical circles would be like, oh my gosh, Google is discriminating against black people. But Latanya knew better, and she was really intrigued by it. Because what happens in these ad models is that we put something out, you know, an advertiser goes and places ads against something like names, and then, depending on who clicks on what, it iterates. It evolves. And so what happened was that when people were searching for black names, they were far more likely to click on those criminal justice-related ads. And it didn’t even require Google to know whether those names were predominantly black or white names in the US. It just required them to understand a graph structure, and the result is that those names seemed to be a better fit.
So what happens is that Google learned all of the prejudices of the American public, which is a deeply racist public, and fed it right back at us. And I think that this is important to realize because at no point in this process did Google go out of its way to be discriminatory. But a discriminatory society fed into a model will result in all sorts of discriminatory outputs.
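To make that feedback loop concrete, here is a minimal, hypothetical Python sketch. It is not Google’s actual ad system; the group names, ad copy, and click probabilities are all invented. It only illustrates the mechanism boyd describes: a platform that never sees race and only optimizes for clicks can still end up serving the “arrest” copy to whichever set of names clicks on it more.

```python
# Hypothetical simulation of a click-optimizing ad loop (illustrative only).
import random

random.seed(0)

ARREST_AD, NEUTRAL_AD = "arrest-record ad", "neutral ad"
ADS = [ARREST_AD, NEUTRAL_AD]

# Made-up click probabilities: searchers click the "arrest" copy slightly more
# often for group_a names and much less often for group_b names.
click_prob = {
    ("group_a", ARREST_AD): 0.12, ("group_a", NEUTRAL_AD): 0.10,
    ("group_b", ARREST_AD): 0.04, ("group_b", NEUTRAL_AD): 0.10,
}

# The platform only tracks impressions and clicks per (group, ad);
# it never sees race, only which copy earns more clicks for which names.
stats = {(g, a): {"shown": 0, "clicked": 0}
         for g in ("group_a", "group_b") for a in ADS}

def ctr(group, ad):
    """Observed click-through rate for this ad copy on this group of names."""
    s = stats[(group, ad)]
    return s["clicked"] / s["shown"] if s["shown"] else 0.0

def choose_ad(group, epsilon=0.1):
    """Epsilon-greedy: mostly serve the copy with the best observed click rate."""
    if random.random() < epsilon:
        return random.choice(ADS)
    return max(ADS, key=lambda ad: ctr(group, ad))

for _ in range(50_000):
    group = random.choice(["group_a", "group_b"])
    ad = choose_ad(group)
    stats[(group, ad)]["shown"] += 1
    if random.random() < click_prob[(group, ad)]:
        stats[(group, ad)]["clicked"] += 1

for group in ("group_a", "group_b"):
    winner = max(ADS, key=lambda ad: stats[(group, ad)]["shown"])
    print(f"{group}: most-served copy is the {winner}")
```

Running this, the “arrest” copy comes to dominate for one group purely because of who clicked on what, which is the sense in which the system learns the public’s prejudices and feeds them back.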
Now, this is funny with ads and we can look at it as problematic, and it’s very connected to some broad racial politics in the US. But it has huge ramifications when we think about these same models being deployed in a variety of other contexts. I’ve been really interested in risk assessment tools and many of the people in my research community have been deeply invested in trying to understand how this is working. For those of you who don’t spend a lot of time in criminal justice, this is the information that judges get about whether or not somebody is a risk, whether or not they should get bail or bond, whether or not they should get certain kinds of punishment. And these are models that are based on a variety of prior information, such as the likelihood that this person will commit a crime again, based on their history of committing crimes.
The challenge is that these models are not just based on the individual. They’re based on understanding the individual in the context of the available data. For those who aren’t aware of our very discriminatory criminal justice system, we arrest people of color at far greater rates than we arrest white folks, even for the same crimes. And when we try to build out these understandings, we assume people of color to be much more at risk, much more likely to be repeat offenders, even though statistically that is not true. In fact, one of the things that’s intriguing, if you read Michelle Alexander’s book, “The New Jim Crow,” is that she details in full how we got there. But one of the things we know is that white people are more likely to use and sell all sorts of drugs, yet the rate at which we arrest black people for selling and using drugs is far greater.
So we have a really biased dataset to begin with. We feed that biased dataset into a risk assessment model and, surprise, we get a really biased output. And one of the things that’s intriguing about telling you this at this moment in time is that ProPublica has worked with some of my researchers to do a deep dive on this for the past couple of months, and they just released all of their materials, or a first chunk of it, including the data. I strongly recommend you check it out. They’ve got all the Broward County data as well as the models that they used to try to discern what’s going on.
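As a rough sketch of how a biased dataset produces a biased output, here is a toy Python simulation. It is not any real risk-assessment product; the offense rates, arrest probabilities, and group names are invented for illustration. Both groups offend at the same rate, but because one group is arrested more often, a model fit to arrest records learns a higher risk score for that group.

```python
# Toy illustration of a biased dataset producing a biased risk score (made-up numbers).
import random

random.seed(1)

TRUE_OFFENSE_RATE = 0.10  # identical underlying behavior for both groups
ARREST_GIVEN_OFFENSE = {"group_a": 0.2, "group_b": 0.6}  # assumed enforcement bias

def simulate_records(n=100_000):
    """Generate (group, arrested) records; arrests, not offenses, are what gets logged."""
    records = []
    for _ in range(n):
        group = random.choice(["group_a", "group_b"])
        offended = random.random() < TRUE_OFFENSE_RATE
        arrested = offended and random.random() < ARREST_GIVEN_OFFENSE[group]
        records.append((group, arrested))
    return records

records = simulate_records()

def risk_score(group):
    """Simplest possible 'model': predicted risk = observed arrest rate for the group."""
    rows = [arrested for g, arrested in records if g == group]
    return sum(rows) / len(rows)

for group in ("group_a", "group_b"):
    print(f"{group}: learned risk score = {risk_score(group):.3f} "
          f"(true offense rate = {TRUE_OFFENSE_RATE} for both groups)")
```

Real tools are far more elaborate, but they are trained on the same skewed label: the arrest, not the underlying behavior.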
This has huge ramifications for people’s lives. This is true for predictive policing. This is true for a lot of the personalized learning that we’re talking about. We are seeing discriminatory processes get baked into every model that we build. And why? It’s not that we are designing to be discriminatory. It’s that we’re not designing not to be. And when we live in a society where we don’t necessarily understand how to process this and we assume that that data is neutral, we have all sorts of unintended consequences.
So what do we do about it? This is a really tricky process, because these are living models. They don’t necessarily get designed in one way. They’re not necessarily bad actors. And it’s not even about data being accessible to us. A lot of folks think that the answer is we just need data to be open or we just need the algorithm to be transparent. That will do no good. That is actually not the way to look at these things, because it’s about the interrelationships as they evolve, especially as we’re talking about data analytics.
What we need are new mechanisms for thinking about auditing, ways of understanding how these systems behave as they grow in complexity and whether or not they have unintended consequences that we don’t want. Now, this is an intriguing moment, right? Auditing has a fascinating history. I recommend going back to the history of how we got to fiduciary accountability through the mechanisms of auditing. What’s important about auditing is not the outside actor. It’s in many ways the inside actor. It’s the organization that really wants to limit its own discriminatory potential, purposely trying to understand the unintended consequences of what it’s doing. And when we build those tools to look in, often because of pressure from outside, we have the possibility of trying to then articulate what it is that we want. And this is where value systems come into play, because we may not all agree on what we want.
I’ve been spending a lot of time with young people here in New York City who are part of NYCHA housing. And the reason I’ve been talking to young people in NYCHA housing is because they’re watching all sorts of new surveillance mechanisms come in and they’re coming up with interesting ways of coping with it. And I love it because they have these whole models of exactly where the surveillance cameras are, what’s working, what’s not working, how to escape the various cameras, and what to do about it.
Now, I asked them, “Why are you trying to do this? Why do you want the cameras to not see you?” And they’re like, “Well, you know, I live with my mom and she’s on the record in NYCHA, but my aunt’s homeless and we want her to stay with us too, but we will get kicked out of NYCHA housing if they know about her. And so we’re trying to find ways to sneak her in every night so that she doesn’t have to sleep on the street.” Very practical realities. And in fact, if you spend time in NYCHA, you know that the vast majority of homes do not match what is on the record. The people who are actually living there far exceed the people who are officially registered. And this is a question of policies. Is this a good thing or a bad thing? More people are actually living within these homes and they’re surviving rather than being on the street, because people are violating policy and because we are not finding ways of tracking it.
The reason I bring this up is we’re talking about Internet of Things today. It is extremely easy, with a lot of Internet of Things-related items as they go out into NYCHA housing, to realize very quickly how many houses have far more than their acceptable population living there. How then do we grapple with it, right? Do we kick those people out? What are the politics of that? How do we think about our policies? And this is going to be true of a whole variety of these systems. We heard earlier about some of the sharing economy-related issues. And one of the biggest challenges that we’re seeing is that huge chunks of people in those industries are undocumented. They’re now joining in on the sharing economy, which is great; Uber is banking more people than any company has before in history. But what does it mean to start tracking and tracing those populations? What does that mean for our immigration policies? How do we start having that complexity of a conversation?
It is pretty amazing. You can do amazing things with all of this data. But those amazing things come down to a question of power. Who has access to it, what are the questions they are choosing to ask, what are the problems they’re willing to solve, and who are they willing to throw under the bus in order to get there? And I think that that’s really important, because as we look to all of the potential, the same possibilities of awesome can also be the same vectors of really huge, socially consequential problems. And it’s really up to us to choose how we go forward. Because it’s not just about what the technology can or can’t do, but about the entire social ecosystem that goes around those technologies. Thank you.
[APPLAUSE]
Kirkpatrick: Thank you so much, danah. That was intense and extremely interesting. And joining us is Gillian Tett, who’s the North American editor of the Financial Times and quite a deep thinker about many of these matters. And when we came up with the idea of putting them together onstage, I didn’t even know, and Simone didn’t either, that Gillian had written an entire chapter of a book that was largely about some ideas that danah and she had shared, a book called City Squares.
First of all, before we get to that, which I hope we’ll have time for, I’m just curious any thoughts you had listening to danah talk about this risk that data-based systems kind of almost inadvertently instantiate the prejudices of a society and what if anything we can do about it.
Tett: Well, I think danah is completely right. I think also that the work that she and her colleagues are doing right now frankly is genius and very symbolic. I’m actually trained as an anthropologist myself. I did a PhD in anthropology, and when I was at college, there was this huge tribal divide between one bunch of people who were called geeks and sat in one corner of universities, and another bunch of people called hippie anthropologists who wandered around in tie-dye and generally didn’t speak to the geeks. And what is electrifying right now, which cuts into this debate about privacy and about the way that tech is used in so many ways, is that people like danah are now bridging that gap and creating a whole new world of digital anthropologists. Because it’s become very clear that if you are a social scientist, you now have access to once unimaginable amounts of data. In the past, the only way that you could observe people closely was to physically watch them with your eyeballs, and that meant you could only see a tiny part of society. Now you can watch huge swaths of people. But it’s also become clear, as danah says, that if you sit there just as a geek and look at the numbers, you will actually end up becoming completely trapped by all these cultural patterns you hadn’t even thought of. And technology gives us the ability to rewrite all kinds of cultural patterns, but only if we employ our brains, because otherwise it just intensifies the existing cultural patterns in ways that are fantastically dangerous.
So frankly, my dream or hope is that for the next few years both sides of that tribal divide actually start working together, universities start investing in this digital anthropology, or what someone like Sandy Pentland of MIT would call social physics, and we really try and actually use both the power of big data and the power of our brains around cultural analysis to try and understand the world better and hopefully make it a bit better, or less bad.
Kirkpatrick: Well, that was very well said, as I’m not surprised, knowing you. And it’s interesting to see your trajectory. You started sort of studying social networks, and in fact I should have said in introducing you, danah was one of the people that essentially defined what a social network is back in an earlier phase in her career. But now she’s actually created an organization called Data and Society, which is focused on exactly trying to address the kinds of issues that she discussed and what Gillian was just urging us to spend more time thinking about. But the most promising thing about it in many ways, in my mind, is that it’s funded primarily by Microsoft—and you also work at Microsoft—which goes to show that at least one of these Internet giants is recognizing that there’s something more fraught going on than maybe the industry has historically wanted to acknowledge. And I happen to believe that most of Microsoft’s competitors of that scale have not at all yet recognized how fraught the world is in which they’re operating. And I’m just curious, since you’re nodding—I’m glad of that, but what are you thinking?
boyd: So I also want to clarify, it’s actually not mostly funded by Microsoft.
Kirkpatrick: Okay, pardon me.
boyd: It was a beautiful gift, and they fund me, which is phenomenal. I think one of the things that—you know, as a part of Microsoft Research, I had a lot of intense conversations with people in the company where for the longest time tech was a thing over on the side. It was its own sector. It was its own separable conversation, and people were geeks and that was all cool and fine. We’re now at a point where technology is no longer just a separate thing. It is infusing and infecting every aspect of society. And we don’t have a sophisticated way of understanding that or thinking that through. And I think that it’s dangerous when the tech sector comes in and wags its finger at other people like, “This is how you should do it.” But at the same time, we’ve built a lot of these systems and we need to be able to critically interrogate what’s going on with these systems that we’ve imagined. And I think this is where—you know, Fred Turner is a professor of communication at Stanford and he wrote this beautiful book, The Democratic Surround, and what he was interested in was how in the 1930s all of the top cultural leaders of the day—Margaret Mead, John Cage, the founders of MoMA, you name them—were brought together by what was at the time the equivalent of the Defense Department to imagine how they could think about democratic media. And they were asked to do it for a very particular political agenda. As you can imagine, it was the 1930s. They were worried about film being this hugely destructive tool that would actually force people to think in one way and one way only, and so what would democratic media look like? And they started really envisioning things that we’d all recognize in this room, because they started to envision the really early days of the Internet, in the late 1930s—or mid-1930s.
But one of the things that Margaret Mead says in this process, which is the closing of this book, is that if we succeed at creating the world we envision, we will not know how to live in the world that we have created. And it’s a really interesting moment because I think that this is true for a lot of the tech sector. We envisioned all sorts of things—and I was a part of it. My background is actually computer science, back in the day, before I retrained as an anthropologist. We imagined the amazing things that would happen. It would be the great equalizer. It would solve all the world’s problems. If we just got people online, it would be the great enlightenment. And, you know, anybody who’s been through this a few times realizes like, no, we bring with us all of our cultural baggage. We bring with us our biases. We bring with us our desires. And so what we saw was not what these folks in the early ’90s envisioned. What we saw was a set of processes that played out radically differently as power got in the way in different ways.
And so one of the things that I’m honored to be able to do, you know, in running Data and Society, is to take a moment and step back and say, wow, this was not what anybody in the tech sector wanted, this is not what the folks who built Microsoft wanted, and what does it mean that it’s had these ramifications? What does it mean that the World Bank is correlating the rise of the Internet with the rise of inequality? What does it mean that we’re seeing all sorts of huge social justice issues emerge because of technology? We thought that this was going to bring everybody together. And if we don’t critically interrogate this, if we don’t understand what’s going on, not only do we have the potential of deepening these problems, but we have the potential of actually seeing our technology, the things that we love and are passionate about, used for the worst things that we could possibly imagine. And my passion in all of this is to make certain that we get ahead of that, that we try to stop and address that. And I’m really grateful for Microsoft’s help in making that happen.
Kirkpatrick: I think it’s cool too.
Tett: Can I just add one thing there? Because I think another way to frame what danah’s saying—because I’m often asked, you know, when I tell people I have a PhD in anthropology, the first thing they say is what the hell are you doing working for the “Financial Times” because don’t you need to know about finance and economics? If they have kids who want to study anthropology, they say, well, will they ever get a job, to which the answer is these days they probably will, because actually, companies are increasingly hiring anthropologists because they realize that culture matters.
But the critical point is this: we’ve all learned—and by the way, is anyone else in the room an anthropologist? Okay, one—half a person. All right, we’re officially outnumbered. In the last ten years, everybody has learned why they need to know a bit of psychology. Because a bit of psychology makes your life better, because it helps you to understand the processes that make your brain work and which will trap you if you don’t think about them.
The next leap that everyone needs to make is to learn why they should all know some anthropology, or at least have an appreciation for its importance. Because actually, we are equally shaped by very powerful cultural habits that we inherit from our environment. We’re all creatures of our environment, and if we don’t think about those habits and consciously try to address them, we will end up equally trapped. And that little example that danah gave about AI and the way that one word can be used in three different ways is incredibly emblematic of what’s going on with technology as a whole. Because all of you, who are mostly tech people, are tossing words around every day and using cultural patterns that will trap you completely and make you much less functional if you don’t step back and actually try to question them and realize that we can either be prisoners of our patterns or we can try to master them.
Kirkpatrick: Well, that’s interesting and beautiful. I’m more prosaic. I have questions written down. But, you know, having read that chapter, there’s a question that you ask at the end that you don’t really answer. Because the book’s about city squares, and then you write this interesting chapter about sort of the virtual square is emerging, and then you invoke some of danah’s ideas, and then you ask, you know, can—
Tett: Yeah, I’m a journalist, so I basically go round picking up people’s ideas.
Kirkpatrick: —can virtual squares be a force for civic good or not, right? But you don’t really answer that question. And I wanted to ask you what your opinion is and I’d be curious to know yours, since you’ve spent so much time looking at social networks. You know, one of the things danah did was write a book about—what was it called again?
boyd: “It’s Complicated.”
Kirkpatrick: “It’s Complicated,” about how teenagers and young people use social media and online and digital communication, a brilliantly influential book.
Tett: In a nutshell, Twitter was co-created by Biz Stone, who had this dream that if you created Twitter, it would cause everyone to flock together, to use that old cliché, and then you’d have a much better world. And the question in my mind about whether virtual squares are good or bad is: are they actually causing people to flock together or fly apart? Because in the early days of Twitter, when it was small enough, everyone collided with everyone else automatically. They probably were flocking. Today, Twitter is so fragmented by virtue of its size that it’s very easy to create polarization instead. And this comes back to my point: if we don’t think about how we’re actually using social media, how we’re creating virtual squares, we’ll simply end up intensifying the tribalism that is already out there and actually making it worse. And one way to do this is actually to take a lesson from Dick Costolo, of all people, the former CEO of Twitter, who told me once that when he was running Twitter he made himself change the people he followed every couple of weeks, just to force himself to get out of that tendency to slide into tribalism. And I’d ask, again, everyone in the room who’s on Twitter: go back tonight and look at the people who you’ve chosen to follow, and I would bet you that 99% are broadly from your own intellectual and social tribe. Okay? I mean do any of you actually follow any anthropologists? [LAUGHTER] Right, two. Okay. Well, start with danah if you don’t.
But, you know, just try and experiment for one week and knock out everybody who you follow on Twitter and put in 20 people from totally different worlds. Put in someone who thinks that Donald Trump is fantastic. Put in somebody who is an activist in Bolivia. Put in someone who, I don’t know, is an Australian CEO. Put in people from a completely different world and then you’ll realize, once again, just how powerfully we’re being trapped by our cultural patterns that we inherit without realizing it.
Kirkpatrick: Okay, but you say if we don’t think about this sort of squares-for-good issue, we’re in trouble, right? And yet, one of the things that I’ve mentioned several times on the stage today is how completely absent any consciousness of even these issues is from our public political dialog. It’s absolutely 100% absent, right? And you’re a senior editor at one of the world’s leading papers covering political and economic developments. Is there any hope—
Tett: Is it my fault? [LAUGHS]
Kirkpatrick: —that we, the collective we that you referred to before—you used that ‘we.’ Is there any hope that we could do that when we at the leadership level are so blindly oblivious, it seems to me?
Tett: Well, in my own way, I’m—you know, I try every day. And I probably succeed about 2% of the time. But I think one way to start is actually by persuading people who claim to be educated to think about the patterns they’ve learned by their education and start challenging that.
boyd: But I think—I mean it’s been an interesting couple of weeks. At Data and Society, we’ve been doing a ton of research for the last nine months about who controls the public sphere. And a lot of it is really trying to imagine what the different kinds of algorithmic models are and how they could actually be used to manipulate. And so, you know, three weeks ago when Gizmodo started releasing information about the possibility that Facebook is biased in their content and in their trending topics, it was sort of an interesting moment for us. And I realized that, actually, the public has an appetite for it but doesn’t know how to process or make sense of it. And as a result, we go to these polarized moments where it’s just like, you know, they’re blocking conservatives. It’s like, no, that’s not actually what’s going on, there’s a lot of complexity to this, and there’s a huge set of data-related issues. Some are about human judgment, some are around technology. But we don’t even know how to critically interrogate that, let alone articulate what we want. And this is actually where there’s a really tricky tension around how we think about civics. The reality is that I don’t want to read another article about Syria. It’s painful. It’s depressing. And yet, I know as a citizen of this world that it’s extremely important for me to do so, and I sort of swallow my breath and read something that just kills me emotionally as I think about what it means to deal with, you know, a war-torn society. Or, you know, Venezuela, or you name a variety of different contexts. And this is this interesting moment in the history of media, where a group of highfalutin, highly privileged folks used to tell the world what they should pay attention to. And now we have a lot of technology that will actually feed us what we want.
And actually, I have to admit, like, you know, watching Kim Kardashian on yet another level of insanity, actually, there’s something really delightful about it, right? And I know it’s a terrible thing to sort of get sucked into, especially at like midnight, but somehow there’s something drawing me to that. And so what does it mean when we start to learn from or lean on these mechanisms? And the place that I like to think about it is Netflix. So Netflix used to exist in a world where they would send you DVDs, some of you might remember this, and you’d build a queue and they would send you whatever the next DVD was. And people built their queues and they’d send out the DVDs, and people would keep them for like a month before they’d send them back. And that was good for the business at some level, but then eventually people started quitting the service, and Netflix started looking into it and trying to understand why, and it happened to be at the same time that they started putting out streaming content. And in the process they learned that what people put on their queue was aspirational. It was a list of things that I think I want to watch, like that really good documentary or, like, you know, “12 Years a Slave.” That’s what I want to watch, except I don’t really want to watch it. It’s what I think I want to watch. And any given night, if I’m given the choice, I’m going to click on something like “House” or something silly and just watch that.
So it’s this interesting moment about what is aspirational versus what is actually desired. And part of the challenge of living in a public is that we actually have to balance those different interests. And this is a really hard thing to then think about technologically. What is the responsibility, the moral responsibility of organizations like Twitter or Google or Facebook when we think about this tension? Their business imperative is very, very clear: give people what they want, they will spend more time, they will click, they will pay attention. It’s really good. But that may not actually be the thing that creates a healthy society. And this has indeed been the debate for the last three weeks, is Facebook a media company? Do they have a set of social responsibilities?
Kirkpatrick: The answer to both of those is definitely yes, absolutely.
boyd: Well, I mean it is, but that’s not necessarily how we think about it.
Kirkpatrick: It’s not how they think of it either, and that’s even a bigger problem.
boyd: But, so this is where I think we get to a business conversation of the double bottom line. But there’s a long history of this. If you look back at the history of Ford Motor Company, they needed to take care of their entire citizenry in order to be a functioning company. And so the social impact was actually built into it because it was located within geography. We no longer have that, and so we end up creating these weird forced dynamics of like what is the social good, who defines it, how is it scripted, and that’s a lot harder to negotiate, and I don’t think we have a good way of going about it.
Tett: And I’d just say, at a practical level, the issue we face at the “Financial Times,” like every single newspaper—and this is me putting on my FT corporate hat, not my anthropology hat—is that, you know, if we ask our readers what they want to read, they will increasingly choose customized news about their sector. And yet, actually, the value of the FT to our readers is not to give them just customized news and not to just take them down intellectual rabbit holes that they choose themselves, but to enable them to collide with the unexpected. And the question is, in a world where we are increasingly creating our own customized universe, how do we ensure that we actually collide with the unexpected? And the vision I have is dominos. If you think about dominos, we all have a tendency to customize our world. You can actually engage with people with a bit of customization at one end of the dominos, but then you have to give them something new to break them out of their own little bubble.
Kirkpatrick: By the way, it’s one of the reasons that I find the print paper far more useful for my intellectual development. Because serendipity is much, much more frequently experienced. And you guys—you’re probably my favorite paper. I like several papers, but—
Tett: Let’s say it a bit louder. Give it to all your friends and family.
Kirkpatrick: I love the FT. And it is amazing, if you just sort of flip through it, the things you’re going to be exposed to about what’s happening in Rwanda or, you know, British politics—a little bit more of that than I can deal with, to be honest. But, you know, it’s really—
Tett: But the question, David, is how do you create that experience of collision online? We grapple with that problem every single day at the FT.
Kirkpatrick: The Times doesn’t—I don’t use the FT online because it’s too expensive. But I do use the Times and I think they do a terrible job of it. I think, you know, the only way—
Tett: Does anyone know how you collide online? Because—
Kirkpatrick: The best is the—you know, the most emailed story list is where you kind of get a little bit of that. And that’s often people’s favorite feature of the Times app. But I want to give the audience a chance to say anything, because you don’t usually get two people like this to ask questions to.
Audience 1: What impact do you think blockchain is going to have, now that it records history for the first time in math, where it’s less destructible than, for example, history in granite?
boyd: I don’t know that it’s less destructible, actually. I mean, yes, the crypto structures are really phenomenal and, like, mathematically I love it. But it’s an environmental disaster. That’s my first major concern. And we can’t actually scale it, and any system is manipulatable, so how are we going to deal with those different dynamics? And I think there are just a lot of open questions, and it’s going to work for certain things and it’s not going to work for others. But more than anything, what scares me is just the environmental cost of what it means that we’re doing literal mining in order to do crypto-mining. I’ll pause here on one point, which is that we don’t think about the environmental impact of the cloud. Why do you need every Twitter notification you’ve ever had to sit live on servers so that you can search for it in your Gmail? That’s insane. That takes up land, power, water, environmental structures all the way down. The more we go to bitcoin, the more we’re going to be costing ourselves into the future in really significant ways, which is one of the reasons that I don’t actually think it is sustainable, or nearly as long-lasting as granite.
Sundararajan: Arun Sundararajan. So I mean both of you alluded to the sort of potential reinforcing nature that digital has of, you know, people wanting to stay in their own sort of business world, filter bubbles or like, you know, sort of the reinforcement of sort of biases that have collectively now arrived and sort of been embedded into Google. And, you know, we’ve been hearing about the filter bubble and sort of the disappointment of the Internet in sort of not giving us a broad range of things. But I’m wondering, is this what you actually—like, you know, let’s say we compared it to sort of 50 years ago. I’m wondering, are we moving forward or are we moving backwards in terms of like, you know, how much we are being reinforced in our sort of biases or like, you know, how narrowly we are getting our news. I mean because I think the idea of the filter bubble is appealing and it’s intuitive, but is it really sort of a step backwards relative to the way things were say 50 years ago or 20 years ago?
boyd: So it all depends on what position you sit in. Which is to say that 50 years ago, in terms of media and news narratives, it was all shaped by white men in a very particular narrative, and we had a whole set of subcultural conversations. Ethnic media was very, very mature and it was very much filter-bubble style. But at the same time there was a dominant narrative, and that dominant narrative was set and shaped by a certain set of interests. What got us to where we are today is a desire and a critique about the fact that this dominant narrative was a narrative of privilege, and at best was diversified across the political spectrum, conservative and progressive. But it was not actually about all of these different dynamics. And so what we saw was a restructuring and reshifting. And I’m talking domestic for a second.
When we go international, we deal with a different set of interests. I’m particularly interested in what’s going on right now in Russia. Russia is engaging in a mass disinformation project. It’s pretty phenomenal to watch, where the goal, the purposeful goal is to make information on the Internet untrustworthy. And this is where I think that we’re reaching a really interesting question, and this is where I think information is shifting. We all know, especially as scientists, we know that most things are probabilistic, right? It’s not you have cancer or you don’t have cancer. There’s a whole set of probabilistic structures of what this means about this growth.
We can talk about this scientifically, but we’ve never been able to talk about this publicly. This is not how people think. They think in binary. So the narrative of news for the longest time has been in binary. What’s at stake right now is that we’ve actually moved to a probabilistic conversation, where you can hear it as dog-whistle politics, you hear it in a whole set of different frames. But it means we will have a series of election debates that will not only not be on topic, but will not talk to one another. They will literally be speaking past one another in all sorts of weird ways. And so it’s this interesting moment where we literally hear different things, and we’ve been trained to hear different things by the things that we’re rooted in. And that’s what’s shifted in significant ways. When we were all hearing the dominant white male narrative of American politics, we heard everything in political discourse in response to that and in response to our own identity in relationship to that. Now we are hearing all of those things in relationship to the different materials that we have, and that complicates our ability to process and our ability to negotiate.
How many of you in here are Trump supporters? So we have one. I’m pointing this out for a reason. One of the things that’s really tricky in our political conversations at this point is that we live in a world where we don’t know people with different political ideologies. I’ve spent so much time driving around the United States over the last 15 years talking to young people about their lives, and one of the things that’s always shocking to me is just how invisible difference of opinion is. And this goes back to Gillian’s point. And so what happens is that we’re at a point where our election is really being defined by the fact that we are not actually even able to make sense of a different political agenda, because we don’t know anybody who shares those views.
You know, there’s one person, I recommend that all of you go and make friends with him and get an understanding of what is going on where he comes from—two.  We’ve got two. Great. You’ve got two—
Audience 3: There was an article in the “New York Times” this week that said that in polling situations, as people exited or were put on the spot to say whether they support Donald Trump, most of them would not admit to it. When called on the phone, the likelihood of people saying yes, they do support Donald Trump, went up considerably. And online, where it was completely anonymous, that number jumped to an even higher level.
Tett: Can I say one thing very quickly on that?
We’ve always had tribalism, to come back to your point. We have a new type of tribalism. The reason it matters today is twofold. One is it doesn’t have to be like that. Technology, if we employed our brains, could actually be an amazing way to connect the world. But secondly, we live in a world that anyway, irrespective of us being in social tribes and us being in mental tribes, is actually more connected as a single system than ever before. We’re more prone to contagion around the world than we’ve ever been at any point in history. And so the key point about tribalism today is that it’s actually more dangerous. Because we don’t understand each other and each other’s tribes, we have the ability for things to happen that are going to shock us profoundly and affect us profoundly because of that contagion issue.
Kirkpatrick: Okay, we’re going to get two more voices on the floor and then we’re going to have to more or less wrap up—okay, we’ll get three.
Barros: João Barros from Veniam. Are you seeing major differences in how people deal with privacy in their digital relationships in different countries, in different nationalities, or is the human race behaving more and more the same everywhere?
Walter: Well, it’s right on point with what we’re doing here. On the algorithms—which we’ve heard a lot about over the course of the day, along with doubts about how accurate they are, either intentionally or unintentionally, but with enormous consequences—who is reviewing the algorithms? Is there a body, hopefully impartial, that says, you know, this is a very important algorithm that we’re now all relying on and it doesn’t work? Or it has this flaw, and I’m going to rate it a three out of five. So who’s doing that?
Kirkpatrick: This is why I called on you, because that’s a $64,000 question.
Hook:  I’m Leslie Hook. And I wanted to ask about Tay. Was the algorithm working as it was supposed to? I mean Microsoft took Tay down and said, okay, we’re going to fix it—
Kirkpatrick: Oh, that was the bot that basically went racist.
Hook: Right, the bot that started tweeting all these pro-Hitler sentiments. And at what point—you know, what are the moral boundaries for a chat bot like Tay and can you put them in an algorithm?
Kirkpatrick: Okay, and then Oz quickly.
Oz: Hi. Interesting background here: I was at the Economist Intelligence Unit for a while and did some work with the PeaceTech Lab. What you guys are talking about is that there’s a missing quadrant of data in the conversation that doesn’t really allow it to happen, and I think talking about it in terms of purviews and politics is one side of it. How do we start bridging that last gap? And that last gap is: how do we actually take what we have algorithmically, which is a bunch of data scientists, and get it into conversational idioms that people can actually do something about?
Kirkpatrick: Yeah, well a lot of really fascinating questions. Anything either of you want to say?
Tett: No, I will give a quick sort of brief headline summary.
boyd: A few things. Mahzarin Banaji, if you don’t know her work, it’s pretty phenomenal. One of the things we know is that people, when they work in more diverse teams, outperform homogenous teams. But they believe themselves to perform worse and they rate themselves as less happy. The reason that this is important is that there’s an interesting question of how you give people information, how you actually allow them to process it, and what discomfort looks like. Discomfort is something we struggle with.
Privacy is—I’ve written a bunch of material about it. I’m happy to send it to you. Privacy is not about the ability to control information. It’s the ability to control a social situation. And when we understand that, we actually start to see how that differs around the globe because of what you understand the social situation and the context to look like. The actual practice is the same, but the context shifts and that makes a magical level of difference.
I do believe that there are ways of actually looking at the models—I think that anybody who tries to look at just the algorithm or just the data without understanding the whole model is going to fail. And I think this is the problem: not only are these systems so complex that even the most technically sophisticated people can’t make sense of them, but they evolve so much depending on the data that it’s tricky to even talk about what it means to review them. Which is one of the reasons that I’m a big proponent of trying to think through auditing mechanisms, because it’s a way of doing a technical intervention to a technical system.
In terms of Tay, Tay is often not what you think it is. And that was one of the interesting things about watching it all play out. One of the biggest challenges of Tay—and this is well documented publicly—is that Tay was designed to respond to folks involved in Gamergate and 4chan by challenging their beliefs and their systems. And one of the things that happened in 8chan and in 4chan is that a whole group of people decided game on, and they purposely went after Tay to try to get it to do a whole variety of things, because, as they described it, somebody programmed this bitch to be really sensitive. That was the frame. And so it was one of those interesting moments where it’s like, can you actually design sophisticatedly for attackers, and what are the different dynamics of trying to do that?
And I will put one final note in all of this. If any of you are journalists out there, I recommend figuring out how you’re going to start beat reporting on 4chan, because that’s going to say a lot about what’s going on with this election. There’s a tremendous amount of election material and coordination happening in 4chan and 8chan right now and nobody is reporting on it.
Kirkpatrick: That’s fascinating. Okay, Gillian.
Tett: Thank you. As journalists, we love tips. Thank you, danah. I’ll just say very briefly that, you know, I take our responsibility as journalists very, very seriously. I strongly believe that one reason why the financial crisis happened was because we were all exposed to the financial system and the contagion, and yet almost none of us understood what those geeks in finance were actually doing in their own little silo with the financial instruments.
I think something very similar is playing out today in the technology world. We made tremendous mistakes as journalists in our reporting on finance. We still haven’t learned many of the right lessons. We’re certainly trying at the FT. On a good day, we probably get about 30% of the truth. I think most of our competitors get 25%. But it’s a very, very big uphill struggle, and in a world where the corporate interests have the money, and where the PR budgets of most of the banks and tech companies are bigger than the budgets of most media organizations, frankly, we need all the help we can get from any of you lot. So thank you.
boyd: One p.s., if you don’t know what 4chan or 8chan is, read the Wikipedia entry. Do not otherwise look it up. Or talk to somebody who knows what it is. Because it’s not pretty.
Kirkpatrick: So I want to just end with a couple of quick comments. One is, I think there’s a little more case for optimism than we have heard on this stage, but I’m not going to take lengthy time that I would have to take to explain that. But the thing that I find most interesting, summarizing this—and I think so many of the things that both of you have said underscore it, and it kind of goes to Walter’s question about algorithms and this role of Facebook with the trending news controversy. And then a question that Gillian asked in that same “City Squares” book is how do we look at the real and the virtual worlds in symbiosis? Because what’s happened, in my opinion, is that we have an institutional leadership structure in government, and still in much of business, academia, journalism, that’s mostly not thinking in terms of algorithms or data systems. They’re still thinking in old models generally. And then we have companies like Facebook, Google and Amazon, in particular those three, but a couple of others as well, that are not nearly enough taking seriously their own social responsibility. And the problem is how do we solve both those problems and bring them closer together? Meanwhile, you know, there’s no sign it’s really happening on either side, in my opinion, but I still think overall there’s a lot of good things happening that we’ve heard about other times during the day.
But I will say you won’t find two smarter women than these two. And people—I mean the fact that they’re both women, I’m proud of. And Simone gets additional credit for that, because we really do work hard to have a diverse crowd here. But I think you two are both doing important work and I thank you both for being here.

Participants

danah boyd

Founder and President, Data & Society
