Jack Dorsey et al: Can Tech Bring Equality and Peace?

Surowiecki: I realized as I was sitting here that I am the only person here who has wine. They have water and tea, which probably tells you why I’m moderating and they are on the panel. But what I’m going to do is introduce you all first, sort of try to introduce the panel and give you all a sense of what we’re trying to talk about when we talk about something like “can tech bring about equality and peace.”
So Nandan Nilekani is the former chairman and co-founder of Infosys, but you probably know him, perhaps, as the chairman of the Unique Identification Authority of India, an extraordinary program that basically has at this point given 700 million Indians 12-digit IDs that basically has allowed them to do a whole host of things in society, avail themselves of government benefits, open bank accounts, and so on, and really transformed the infrastructure of that country, and actually led to this question that David mentioned, which is: What counts as technology?
Jack Dorsey is chairman and co-founder of Twitter and the founder and CEO of Square, which I think you all know.
David Miliband is the former foreign secretary of Britain, also schools minister and secretary of the environment—is that what it’s called—and is now the president and CEO of the International Rescue Committee, which was founded by Albert Einstein in 1933 and is one of the more important rescue and development and refugee assistance NGOs in the world.
And Genevieve Bell, finally, is a vice president of Intel—you’re not anymore?
Bell: No, I am.
Surowiecki: Okay. [LAUGHTER]
Bell: Last I checked.
Surowiecki: But is an anthropologist who has done extraordinary work on user experience and actually trying to work on how people actually use technology in really important ways, has a—a panoply of patents, is that accurate?—and I think really one of the most interesting thinkers about the intersection of technology and culture today.
So the question we are here to discuss—I don’t know if we’re going to be able to answer it—is can tech bring equality and peace? Which I guess is a little different from the question will it bring equality and peace, but I think that also is maybe a question that is worth sort of interrogating. And what I hope we’ll do tonight is—that’s a very utopian topic, but I think in a way it’s actually a topic that is in line with a lot of the early ambitions of what people in Silicon Valley thought the personal computer would bring about. So let me just read you a few quotes, many of which are culled from Walter Isaacson’s book, “The Innovators,” which has a pretty great section about the sort of utopian visions in the Valley around the personal computer in the late 1960s and early 1970s.
And Lee Felsenstein, who was an important figure both in sort of the radical movement in the 1960s and in the PC movement, said, “We were looking for nonviolent weapons, and I suddenly realized that the greatest nonviolent weapon of all was information flow.” The People’s Computer Company, which was this newsletter that they put out, in 1972 the first issue appeared and said, “Computers right now are being mostly used against people instead of for people, used to control people instead of to free them. Time to change all that.” And they said about the network computer, “They work to bring the locus of power closer to the people.” And as Isaacson describes it, what all these people wanted to do, he said, with the early founders of the PC revolution, the people in the Homebrew Computer Club, they all wanted to “throw off the constraints of institutions, be they government, IBM, or their employers.” And, finally, Stewart Brand, who we saw, if you saw the biotech panel, there was a clip of Stewart Brand speaking recently. In 1995, Stewart Brand, reflecting on the digital revolution, said that people set about transforming computers into tools of liberation, and that the digital revolution was born out of the counterculture’s scorn for centralized authority. And in his famous article he wrote in Rolling Stone in 1972, he said that what computers would allow people to do was wrest power from the rich and powerful institutions.
So it’s 40 years later, and while we are a much richer world than we were, and while inequality across countries has obviously lessened, inequality within countries has increased. We are I guess arguably more peaceful, but I’m not sure how much more peaceful we are. And I think a lot of people at least would raise the question of have we really wrested power from the rich and powerful institutions or have we just concentrated power in other rich and powerful institutions?
So I think the question is really twofold. It’s can it bring equality and peace, and will, or maybe has it—maybe it’s threefold, but whatever it is. And I thought maybe we’d start just by—the way I thought about doing it was to say are you an optimist, a pessimist, or neutral? But what I really think is, maybe if you want to say something about that, but really talk a little bit about your own experience and how it maybe connects to this question. So, Nandan, why don’t we start with you?
Nilekani: Jim, I don’t know about the peace stuff, but I can talk about inequality. The problem which I was tackling for the last five years was the inequality caused by the lack of having an identity, or identity document. Because, unlike in the West, where every birth is registered, in countries like India, and it’s true across the world, more than half the births are not registered at birth, so you have millions of people who don’t have an ID. And that ID becomes a form of an identity divide, because for more and more things you need an ID. You need an ID to catch a train or to enter an airport or to get a bank account. So everything needs ID, so ID creates a divide.
Surowiecki: To buy property.
Nilekani: To buy property, everything. So we took up this challenge to give everybody an ID, and since we were doing this in 2009, we said let’s make it a digital ID. So we built the world’s largest digital identity program, which has now issued these IDs to 700 million people. And we built it as a platform. I think one of the things in the Internet is the concept of an hourglass architecture, which means that there is the narrow stem, which is the platform, and you have innovation happening below and above. So TCP/IP is the stem; above that, you have apps, you have all the apps that Jack’s building, and below that you have all the infrastructure and the communication stuff. So we used the same concept and built an ID system on which innovation can happen, and today these people can use these IDs to open bank accounts, get electronic cash transfers into the bank accounts. So I think it’s like a gigantic leap to get a billion people into the digital world and reduce the inequality of identity. So that’s something which I think is a technology-based platform for addressing inequality.
Surowiecki: And one of the things that’s interesting about that is just thinking about this idea of what a technology is, because while you’re doing an amazing information technology project, part of what you’re doing is what people were doing in the 19th century in the Western world, or the 20th century, in terms of the technology of just giving somebody a social security number. That’s a kind of technology, just on a much simpler level.
Nilekani: Sure. Well, you know, IDs have been around—I mean in Europe in the 15th and 16th century, IDs were given. I mean surnames are IDs. James Butcher or John Baker is an ID, right? And those IDs came about because governments wanted to recruit people, to draft an army or to pay taxes or to pay the church fees and so forth. In the 20th century, your IDs were social welfare, so the U.S. did the social security number in 1936 and the U.K. did the national—you know, the health system has a health number in the 1950s, after the Beveridge Report and so forth.
So we were building an ID which was meant to be a universal ID, platform neutral and application neutral. That’s the difference, because it meant the ID wasn’t built for one particular app—it was layered, and that’s where the technology came in. And the idea was that you’d give the ID that just says that John is John, and the healthcare application has his healthcare record, the banking application has his banking record. So this layering was all using the principles of the Internet, so to that extent, it was a modern system of ID using today’s technology.
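A minimal sketch of the layering Nilekani is describing, with hypothetical names and interfaces (this is not the actual Aadhaar API): the identity service sits at the narrow waist of the hourglass and only answers whether a person is who they claim to be, while each application above it keeps its own records keyed to that ID.

```python
# Hypothetical illustration of a layered ID platform: authentication lives at the
# narrow "waist"; domain data is kept separately by each application on top.

class IdentityService:
    """Answers only 'is this really John?'; stores no application data."""

    def __init__(self):
        self._templates = {}  # id_number -> enrolled credential/biometric template

    def enroll(self, id_number, template):
        self._templates[id_number] = template

    def verify(self, id_number, template):
        # Returns only yes/no; never discloses the stored record to the caller.
        return self._templates.get(id_number) == template


class BankingApp:
    """One application layered above the waist; it keeps its own records."""

    def __init__(self, id_service):
        self.id_service = id_service
        self.accounts = {}  # id_number -> balance

    def open_account(self, id_number, template):
        if self.id_service.verify(id_number, template):
            self.accounts[id_number] = 0
            return True
        return False


ids = IdentityService()
ids.enroll("1234-5678-9012", "fingerprint-template")
bank = BankingApp(ids)
print(bank.open_account("1234-5678-9012", "fingerprint-template"))  # True
```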
Surowiecki: So, Jack, both Twitter—I mean obviously there’s an ongoing debate about the role of Twitter in liberation movements and freedom movements around the world. And then Square has an interesting democratizing element in terms of what it does for businesses and small businesses. But when you think about this question, can tech bring equality and peace, what do you think?
Dorsey: Well, I mean it certainly can’t alone. To me, technology fundamentally is just a tool, so it’s up to us to figure out how to use those tools and how to apply those tools. And this work around identity is just so fundamental. Like if we had something similar in the United States, we could move so much faster, with such greater efficiency, which would actually reduce the whole cost of the system and allow more and more people to get in. So ultimately, I think the role of any tool is to do a job more efficiently, to do a job faster, to allow more people to access it. And I think there are uses of tools like Twitter and Facebook and Instagram and Square that enable people to do that, and, you know, I’d love to surface more of that activity. But there’s also negative uses of the tool as well, and there’s always going to be a balance. And I think, you know, I am an optimist and I do believe that these tools can surface the conversations that we need to have that aren’t being seen, that aren’t being heard, that have no venue otherwise. And there’s a potential for equality of voice at least. The potential exists that someone could have an idea, that someone anywhere around the world could have an idea and it could spread instantly, whereas before, those ideas were mediated through various channels, whether they be media channels or government channels or organizational channels. I think there is a potential for more of that sort of freeing of the voice that we can now listen to and, as a society, amplify if we think it’s the right thing.
Surowiecki: Because everyone has access to the printing press, at least in kind of theory.
Dorsey: Yes.
Surowiecki: Okay. So, David—I mean I want to come back also to these ideas, I mean when you talk about governments wanting to issue IDs for taxes and drafting people. So, David, you’ve been on it from the government side, but obviously now, at the IRC, you’ve seen the way governments can use technology to oppress, and then also the potential liberatory impact of it. And you said a couple of things when we talked that I want to steal because they were such good lines, but I’m going to let you maybe talk—see if you can remember them—
Miliband: If you don’t tell me what they are, then—but let me address the war and peace part of this question. Because President Obama said three weeks ago if you turned on the news you’d think the world is going to hell. Now, actually, the data shows this is one of the most peaceful periods of human history. Fewer nations are at war with each other than for at least 300 years. But more people were displaced from their homes around the world by conflict and disaster last year than at any time since the Second World War; 52 million people were displaced from their homes by conflict and disaster last year, mainly conflict. That’s one every four seconds. And so there is an absolute tsunami of conflict and displacement, at a time when interstate warfare is at a very low level. And what’s happening essentially, in simplified terms, is that you’ve got more, longer, and more complex civil wars that are consuming not just nations, but subregions. So if you think about Somalia, if you think about Afghanistan, if you think about Syria at the moment, these are—if you think about West Africa, actually, I recently came back from Sierra Leone and Liberia, and these are post-conflict societies that are now traumatized by Ebola. And so you’ve got this tsunami of violence, where ethnic, political, and religious difference is not being contained within peaceful political boundaries.
And my reflection on the question of the session, therefore, is that, if I think about the 12,000 people who are working for me in 35 countries at the moment, working for the IRC, the terrible truth is that technology is being used far more effectively to wage war than it is to advance humanitarian goals. So the material that we are getting into Syria to help run health centers across Syria, that’s overwhelmed by the way in which people are stopped at checkpoints, their social media is scrutinized, and if they’ve got the wrong friends, then literally the word goes to the sniper on the roof of the nearby building. And the challenge I think for all of us is how to make sure that the potential of the technology, that Jack rightly has evoked, how do we make sure that it becomes a weapon for peace and development and progress, rather than a weapon in the worst hands?
Surowiecki: So are there ways in which you feel like your organization has been able to use technology that has made your job easier?
Miliband: Well, it’s unquestionably the case that when we’re able to use the new technology to spread information to refugees about the services that are available for them, which we’re doing in Lebanon, when we use quite advanced—we’ve had to develop our own system for tracking our own goods into Syria to make sure they’re not being found in the wrong hands. Those are quite positive examples of how, if you like, humanity or humanitarianism and technological advance are being allied together. But if I think about our 167 field offices, in Congo, in Pakistan, etcetera, these are places where the basic problems of connectivity, security, cost, etcetera, become very, very constraining on the kinds of things that are possible. And so I think that we’ve got to look at ourselves as a humanitarian sector. We’re almost so stretched doing the day job that there hasn’t been the vision to think what are the leaps forward?
But I think there’s also a challenge for the tech sector really, which is which of these great problems and great challenges does it want to address? And I was talking to someone beforehand, saying that this is an incredibly competitive, incredibly vibrant, incredibly open sector, but somehow there needs to be some focus brought to some of the social problems that need to be addressed, without the cramping or sort of suffocating sense of a government, quote/unquote, “plan,” which obviously isn’t the way to do it.
Surowiecki: So, Genevieve, you’re the resident anthropologist on the panel. You have a lot of experience in looking at how people are actually using technology. I know from talking to you, you’re maybe a little more skeptical about the broader concept. Maybe just talk a little bit about what you think about the question, but also about what we know about how people actually use technology.
Bell: I think, you know, aside from the daunting prospect of following you talking about global-scale things—I’m going to come back to a much sort of smaller piece here, which is that we can talk about technology as though it were a tool, as though it were a weapon. The reality is it is, I mean it’s both of those things and neither, right? And it has no agency. So one of my problems with the framing of the question is that we somehow imagine that technology is going to deliver peace and equality, and technology can’t do the work that we as a collection of societies don’t want to do ourselves. So if we are not invested in peace, technology will not bring it. If we are not invested in equality, technology cannot bring it. So there’s something about imagining that technology is this wonderful solution that’s fascinating, not only because of its persistence—so if you go back and look, we were having this conversation over dinner about history, but if you go back and look at the hearings in front of the United States Congress in the 1910s about electricity in America and about electrifying homes, it was also freighted with this notion that technology was going to revolutionize the home, it was going to bring women into the workforce, it was going to liberate us from domestic tasks. Do you all feel liberated from domestic tasks? No. Amazing. And it was going to make us all safe, and it has this whole kind of language on top of it about electricity as though it were the panacea to, certainly not world peace, but to equality. And I think there’s something fascinating about the fact that we keep coming back to those stories that we need technology to do this work because it’s easier to imagine it will do it than we will.
So that’s sort of problem statement number one for me. And it’s not that I’m not optimistic about technology. I’m lucky enough to work in a place that is the custodian of the Moore’s Law that everyone has been talking about all night long, and there’s something remarkable about that promise. But it also has to be what we plan to do with it, and there’s something about what we do with it that becomes really interesting. And imagining that it will be varied, right? Not everything that everyone does with technology is going to be lofty. I mean let’s not forget that every single one of us in this room I’m willing to bet loves our cellphone, not just because it gives us the possibility of following the Arab Spring, but because you can look at the gossip pages and take pictures of your shoes. And there’s something about that work that technology does that’s also about pleasure and whimsy and about human connections that might be as important at one level as the problems of peace and equality, and somewhere between those two things is the work that it does, right?
But I’m also acutely aware, not only am I the only social scientist on the panel, I’m the only woman. And so talking about the notion that tech brings equality when we’re down to 20 percent versus 50 percent is sort of an interesting—
Surowiecki: I’m not on the panel. It’s 25 percent.
Bell: You’re still not female, James, are you? Or did I miss something at dinner?
Surowiecki: No, no, no. Okay, 20 percent.
Bell: But there’s something about saying, if we want to drive equality in any of these forums, it actually means ensuring that there are lots of different voices in the room. You’ve done well, because you have the Global South represented, go you. There’s two of us. That’s excellent. But what it means to think about peace for whom and under what circumstances, equality for whom and under what circumstances, and remembering that those are fights that take a really long time—I mean Mary Wollstonecraft demanded equal rights for women in 1792. My female colleagues are still being harassed off the Internet in 2014. So these are longstanding fights, and they’re fights about social justice and visions about what our various societies might look like, and they’re going to be different in different places, right?
Surowiecki: So there’s a host of things here to talk about, but maybe let’s start with what Jack said about this idea of someone somewhere who suddenly has access to a voice that wasn't really possible beforehand. I mean it’s sort of the A.J. Liebling line about basically you want to own a printing press if you don’t want to get into a fight with someone who owns a printing press. Nowadays, well, theoretically, everyone at least has access to that. I mean when you look at the Internet, when you look at what technology has done, I mean do you feel like that’s a reality, like it actually has really changed the nature of power? Is that a place where people have been able to wrest power away from traditional institutions? What’s your experience of that?
Miliband: I’d say two things. First of all, we’re meeting on a very, very significant day. The ninth of November is a very important day. The most recent importance of it is that it’s 25 years since the fall of the Berlin Wall, and the Berlin Wall was brought down before some of you were born, but before the revolution that we’re talking about. So one always has to have a sense that the extraordinary change in the means of congregation, organization, dissemination of information that undoubtedly has been wrought by the world that you all populate, a bit of humility in my own rhetoric is important because the Berlin Wall was brought down before, and people did organize in subterranean ways, and the debate—just so you know, the scholarly and political debate now is that, actually, it wasn’t that the fall of the Berlin Wall brought freedom and protest, it was that protest in the name of freedom in the end brought down the Wall. So that’s the first thing that I think is relevant to this.
Secondly, I do think that there has been an extraordinary leakage of power from institutions of authority. That’s why the democratizing movement that the technology represents or has enabled, I think it’s compatible with a period of greater inequality. The fact that inequality has risen doesn’t mean that there hasn’t been a democratization of some forms of the exercise of power. And for me, the interesting part of this is that what's really been democratized is that the ability to say one thing in public and do another thing in private has broken down. And the example I always give is about the former president of Egypt, President Mubarak, who portrayed himself as a strong leader. He printed in the Egyptian papers the picture of him lecturing President Obama, but in fact, the picture had been doctored and it was actually President Obama lecturing him. And within five minutes of him putting out the doctored picture, the truth was abroad. And so I think that that notion that hypocrisy is more easily exposed, the transparency that’s associated with the world we’re talking about I think is the most significant part of it. And in politics, it is a transformational change. The ability to hold politics to account is greater. It maybe hasn’t hit business yet, but it’s going to hit business very, very hard as well, and I think that notion of a public-private divide breaking down is really very significant.
Nilekani: I think part of this is obviously how the technology plays out. Obviously, if you have a highly centralized approach, then you’ll end up with centralizing power. But I think if we can visualize a world where everybody has a smartphone, everybody has access to the Internet, everybody has—definitely, I think it can be very empowering, provided people have the ability to get onto this platform.
But the other thing is that, in my experience, if it’s done well, technology is very, very pro the marginalized, because it reduces the cost of transactions, it reduces the size of transactions, it makes it available across the grid, anywhere in a remote part of a country where there’s no other way to get there. So those kinds of things are hugely empowering. So it all boils down to how well you architect these things to actually deliver the empowerment that you want. If you don’t do it, then it won’t work. But I agree with Genevieve that you can’t just advocate for peace and then say let the technology handle the peace.
Surowiecki: Should we be concerned about, you know, now it’s all channeled through Facebook or Google or Twitter, that these are powerful institutions that have some sway, or no, are they just neutral applications for people’s voices?
Nilekani: I think it’s actually concentrating in a few institutions, so while the confidence or the trust of the public institution may be going down, these institutions actually are garnering more and more data and information, which I think is not necessarily a healthy thing.
Surowiecki: Not a healthy thing?
Nilekani: Yes.
Surowiecki: Other thoughts? Jack?
Dorsey: I mean I think very simply, independent of technology, to the previous question, I think the more we add connections, to individuals, to organizations, to the world, the faster we can move, and it’s that basic and that fundamental. Twitter and social networks and social media are not all that different from just a conversation, not all that different from just word of mouth. It just happens to have more endpoints and it happens to move a lot faster. And I do believe that with that speed, we have more time back to focus on something that’s more meaningful. So could we have revolutions that need to take place in a faster timeframe instead of over 100 years or decades or five years? I think there is a quickening that occurs when technology is efficient.
To this question, I think there is also an expectation of more and more transparency. So as more and more people are connected and as more and more people are using it, both in terms of speaking in their own voice but also listening, there’s an expectation for similar things around everything they interact with, and that includes companies like Twitter. And I think we’ve done a good job and continue to get better about bringing out transparency reports around requests for information, for instance, and our internal policies, and I think it’s a responsibility that we have that hopefully we can inspire others to do the same. And we have also had a lot of inspiration from other companies, such as Google, for instance, where we have learned a ton in terms of the internal transparency inside that company that has allowed them to move faster to make better decisions.
The other interesting thing that I’m finding in my own companies is that like this generation that’s entering the workforce right now demands transparency, demands more participation, and expects more participation in more decisions, and demands a greater control of their own destiny, and I think that’s really, really healthy, and it’s a question of how do we—as an organization and building companies or building governments, how do we meet that expectation in a healthy way that’s not overwhelming, that is balanced, which is the role of any government, to ultimately balance multiple parties’ desires and needs. So I think as long as we keep fighting for more transparency within these organizations and we report what we’re doing and why, most importantly, why we’re doing it, I think we can continue to move fast.
Surowiecki: So government is obviously a big issue in all this. If you think about a Foucauldian idea of what—the whole idea of identity itself is kind of problematic. Like you want a registered identity because that allows the government to—the government wants that because it allows them to tax you, to draft you, to do all these other things. And obviously, one of the real questions right now is—you know, Larry Lessig has written a lot about it in terms of companies that, at the same time as it’s obviously become much easier for people to copy, and it’s also become much easier for companies to enforce copyright, and obviously, on the government front, at the same time as we have this push for transparency and it is more transparent in some ways, at the same time, we are being surveilled in ways we never could imagine before, etcetera, etcetera. I don’t know if Barton’s here, but I’m sure he’ll have something to say about that.
So I mean how do we make sense of that. I mean it does feel—maybe, Genevieve, we’ll start with you. How do we make sense of this world in which, at the same time—I mean it feels freer in some ways, more transparent, and in other ways, it feels like the power that technology’s giving governments is more—it was actually much harder to open people’s mail than it is to read people’s email, probably.
Bell: That’s an excellent question. Thank you. [LAUGHS] Listen, I think one of the things that’s always fascinating to me watching other human beings is that at the same time—I mean Jack’s right, the kind of conversation about transparency, the idea that the information wants out, you know, that’s been a conversation that has in some way dominated the rhetoric of the Internet for a long time. One of the things that’s also happening on the human side of the Internet is that we are more prone to lie online than we ever were in real life. So the average human being tells somewhere between six and 200 lies a day.
Surowiecki: Six to 200?
Bell: Yes. When you’re on the 200 end, you’re sociopathic. But you can get to 20 pretty easily: that was an excellent breakfast; you look great; God, that was a fascinating panel; love this conference; Half Moon Bay, very pretty. [LAUGHTER]
Surowiecki: It’s a great topic.
Bell: Exactly. God, Jack was a great moderator.
Surowiecki: Great moderator, amazing.
Bell: You can get there really quickly, right? And we’re not telling them necessarily to be dishonest. If you were raised Catholic, it’s more the sins of omission than commission. And it’s not always about the act of deception, right? But it turns out, when you actually track this, whereas in physical interactions we are less likely to lie to each other, for reasons of shame, guilt, upbringing, social ether, the lies we tell online fill us with glee.
Surowiecki: Glee?
Bell: And pleasure. So we have this really interesting tension between a system that is predicated on the notion of transparency and is being designed for transparency. We have consumers who on the one hand appear to like those things and tell a lot about themselves on the Internet. Patch that together with the reality that much of what they are telling themselves and others about themselves isn’t entirely true, and it suggests to me that the idea of transparency is an infinitely more complicated thing than it first appears.
Surowiecki: Do they know they’re telling lies?
Bell: Oh, absolutely. There’s a lovely—well, most of us do. There’s a lovely study done by a colleague of mine at Cornell, and he picked the most obvious site to track this, which was online dating. And it turns out 100 percent of Americans lie in their online dating profiles.
Audience 1: Starting with hi.
Bell: Yes. Men and women lie differently. Men lie about their height, women lie about their weight.
Audience 1: And age.
Bell: Less about that. Everyone lies about that. But what became really clear was that people were doing it quite deliberately. So for me there’s something really interesting about how you reconcile the rhetoric of transparency with the practice that says, actually, as human beings, we are deeply invested in secrets, and a little bit in lies too. And there’s something about how you resolve those things that’s always endlessly fascinating.
Surowiecki: Interesting. David, what do you think about—I mean you’re dealing with this on a daily basis. You’re dealing with oppressive governments. Is it easier for them to oppress because of technology?
Miliband: I mean obviously the first thing I have to say is that was a really great point.
Bell: I know it.
Miliband: And you’ll have to decide whether that was [LAUGHTER]—
Surowiecki: Whether that was true or not. Whether that was your 199th lie.
Miliband: Exactly. Yes, you’ll have to think how sincere I was in saying that. I mean where do I want to take this? I mean I think that what is extraordinary to me—maybe let me shift the conversation a bit. Something relatively straightforward, there are 3 million refugees from Syria, 1.7 million of them are in Lebanon. Probably 35 percent of those 1.7 million in Lebanon are of school age. Educating those kids is not a very difficult thing to organize. Yet, this year there are fewer Syrian refugee kids in Lebanon education than there were last year. International appeals, UN, you name it. And they are the victims of a fragmentation of authority. They are the victims of local political concern about whether or not, if life’s too good, they’re going to end up staying for a long time. They’re a victim of clashing mandates of different international organizations. And what frightens me is that the, if you like, the collective efficacy of the number of people who are concerned with the situation there is low, and it should in fact be higher than it would have been 20 or 30 years ago. And that tells me that sort of the alchemy of technology, one shouldn’t believe it’ll just happen naturally, that the authority structure in which anything happens is absolutely key. Personally, I think that’s a bigger issue than the sort of surveillance/openness set of issues, because fundamentally, what you care about on the surveillance issue is abuse. It’s the abuse of power that should concern you, and the abuse of power happens when there’s an inequality of power and when power isn’t held to account. And so I would take the conversation towards how does the voice of, in this case the refugees in Lebanon, but you can think of many other examples of it, how does the disparate and disaggregated voice come together. But until it does—
Surowiecki: But let me—wait, wait, wait, wait—
Miliband: Until it does, then those in power are not accountable for it.
Surowiecki: But I mean how do we hold—I mean imagine Richard Nixon gets elected in 2016 and has his enemies list that he had in 1972, but now he has every single phone call they made, every email they sent—he has access to all of it, basically. I mean that seems like a readymade—he doesn’t have to break into Daniel Ellsberg’s psychiatrist’s office. He has it all. I mean that seems like tailor made for abuse. And the fact that we don’t know that, right? I mean we don’t know that he’s going to do that. So what’s the distinction—I’m confused about—I mean the distinction may be that the abuse hasn’t happened yet, but that’s why people are concerned about it, right?
Miliband: I guess in my world, there’s a whole—I can see there’s a whole set of issues about how does government abuse the information that’s available to it, whether in democratic societies or in authoritarian societies. And there’ll be technological aspects to that question, to what extent is it possible to push back against a tide of openness. It feels to me like the tide of openness is very, very strong.
I think that what we have to acknowledge, even if—and I put myself on the optimistic end of your original spectrum—is that the inequalities that are being generated are that much more glaring, given the resources to tackle the inequalities. I guess that’s maybe a better way of putting it.
Surowiecki: Got it. Got it. So Nandan, this is obviously a big issue with you. I mean that was a lot of the opposition, both within India and then sort of more generally to your program, was that it was—partly because it was so comprehensive, because it involved biometrics and things like that, that this was somehow an incredibly intrusive program. How did you think about that, and how did you deal with it?
Nilekani: I think the whole idea was to make it as nonintrusive as possible, and part of it was by limiting it to just an ID system, which is only invoked when the person wants his ID to be established. So it was not a data-hoarding kind of system that collected information about you, nor did it share information about you with somebody else without your permission. So a lot of rules were designed precisely for making sure that privacy is ensured, because there was a huge worry about, you know, “the state is building some big brother” and all that. And I think it’s actually much better than the situation here, because in the U.S. today, you have four or five companies—not mentioning Jack’s company—that are collecting every bit of information about you: where you are, what you are—everything, right? And the government can just peek into that anytime, which is what is happening. So this is the worst of both worlds. You have sophisticated private firms that know how to collect information and then the government opening the trapdoor and getting it wherever they want. So that to me is a far more dangerous thing than having a simple ID system that does not collect data. So I think we were able to deal with that. And I think the value of millions of people getting entry to formal society was very, very important. So I think a lot of it goes down to how you design it for empowerment, because if you don’t consciously design for empowerment, then it can go the other way.
Surowiecki: So let’s talk about that. I mean David did get to this issue of equality. Equality has been implicit—I mean, Jack, when you were talking about in a way the kind of idea of anyone who has a voice can now speak and be heard in a way that they weren’t before. But obviously one of the big issues in the West is this sense that technology is exacerbating the problem of inequality. And while I think there are obvious questions about that—I don’t know if you saw Peter Thiel and Reid Hoffman. I do think globalization probably has had more to do with that than technology. It’s clearly a tension that people feel, that actually, instead of improving equality it actually has made things worse. And obviously, San Francisco is kind of the locus of this in a quite concrete way. So how should we think about technology’s relationship to equality generally? It’s a big question, but—whoever wants to go. Let’s hear Jack.
Dorsey: How do we think about technology in—
Surowiecki: Yes, when you think about technology’s relationship to equality, do you feel like it’s actually exacerbating inequality in the United States?
Dorsey: I mean technology is one of those words where it just becomes this abstract thing that becomes really scary very quickly, because we can’t seem to demystify it. And I think the most important thing is just the realization that this is a tool. We choose how to use the tool. And I think ultimately, it goes back to just, it’s really just changing of velocity, and ideally making something that was not as accessible a lot more accessible. But there has to be a desire for that to happen in the first place. And what I’m optimistic about is there’s more people talking about it all over the world and we can see them talking about it all over the world almost instantly now, whereas 100 years ago, 200 years ago, 300 years ago, it would take a long time for that sort of story to travel. So now we can get that feeling of that story, that electricity almost instantaneously, and then we have to come up with good rules and guidelines on how to make use of it, how to make it practical, how to use it to move us forward.
So I think there is potential for that balance, but we have to have the desire in the first place, certainly. And I believe we do, but we just, we need to constantly remind ourselves of it.
Bell: And to David’s point, one of the remarkable things about a whole constellation of technologies—because I take your point. We’re not just talking about the Internet. We’re talking about television, radio, God forbid magazines still turn out to be important—
Surowiecki: They’re still relevant sort of.
Bell: They’re still relevant, exactly. There’s sort of something about—the constellation of those technologies makes things visible in a way that they couldn’t have been visible before, and makes them visible on a much more immediate scale. You know, news comes at us across multiple formats and makes the dimensions of inequality much harder to escape. And then I think, you know, whether it’s refugee camps in Syria, whether it was what happened in Ferguson—I mean there’s a series of those things where the flashpoint just becomes much more visible.
Surowiecki: Right. Ferguson is a great example, I think a fascinating example.
Bell: Absolutely. But I think the harder thing there is twofold. One is are we really, in America, surprised that there is still racism? I mean is that actually a surprise? Because if it is, that says something really fascinating about American society that requires a deeper reflection. And if we aren’t, then what does it mean to say we want to commit, and who is it that is committing to something different? So it’s one thing to say the information will have transparency, becomes clear, things are made more visible. But unless there is also effectively a moral commitment—and I don’t mean moral in the churchy way, but in the kind of social justice moral sense, you know, what is the moral statement you want to make, what is the philosophical bent that you want to wrap around the tool, meaning in some ways the ideology we have wrapped around the Internet for 30-plus years was that it was about freedom and democracy. So then you might have to ask some questions about do we really have velocity, because I’m not sure we’ll get to freedom or democracy anytime soon, certainly not for everyone. And if we didn’t, what does it mean to imagine how you rearticulate again what the values are that you want the Internet to stand for—whatever the Internet becomes, what is it that you choose to stand for. Twitter has a very particular position about what it chooses to stand for in terms of what information it lets go through and what it doesn’t, and they’re to be commended for it. They have a philosophy about how the company is oriented. But what it means to think about where does the tech industry, and the Internet as one of its manifestations, sit on any number of issues about inequality would require a different conversation than the one we have. We talk a lot about innovation. We don’t talk so much about what it would mean to imagine full participation of all sorts of other communities in building it and critiquing it, and if we do have those conversations, we always have them in a way that doesn’t get the conversation very far.
Miliband: Let me just offer a reflection as someone who’s lived in America just for a year now. I moved to New York—I know New York isn’t America, but [LAUGHTER] I moved to the U.S. a year ago. Your question about equality presumes that we know what community we’re talking about.
Surowiecki: Yes.
Miliband: And my reflection is that this is a far more fragmented society—I don’t mean in socioeconomic terms, but fragmented culturally—than I would have guessed having been a schoolboy here in the 1970s at the age of 13. Coming back sort of 35 years later, I’m struck by—if I use the word “balkanization,” I don’t know if people know what I’m referring to, but there’s a sense of micro-communities being created, some of them by technological advance. And I think, if that’s right, there’s an indirect consequence of the onward march of the technological links that allow people to find people of like mind across the country. And it concerns me if the sense of national community is corroding. I mean I was absolutely astonished to be told last week by one of the TV anchors that for the main news program, even with all the opportunities for replay, etcetera, 12 or 15 million people is a lot of people watching one of the news shows. Now, 12 or 15 million people is what a news show in the U.K. would get in a population of 60 million.
Surowiecki: That’s one-fifth of what we have.
Miliband: Yes. And maybe that’s not a good example, but the sense of the fragmentation of community is in a way an unexpected or paradoxical result of these extraordinary technological advances that have made possible closer community building. And it’s evident communities have been built, virtual communities are being built. I mean I don’t know the answer to it, but I’m interested in the question of, if that’s fragmenting a sense of national community, then actually, that will play out politically. It will reduce forces for equality and it will let fly some pretty inegalitarian forces. And I think it would be stupid to say—look, it’s not the case that technological change explains why the top 1 percent have got 95 percent of the income gains over the last 10 years. It’s evidently not true. But there’s a wider conversation or a wider aspect to that, because you can’t talk about equality unless you talk about which community you’re talking about.
Surowiecki: I think that’s right. I mean the other thing about fragmentation that’s interesting is—and Nandan and I were talking about this before. You know, fragmentation could play out in a variety of ways. It could play out that people are not all watching the CBS News as they once did, but as a result, they’re taking in lots of diverse sources of information, right? I mean that would be sort of the ideal.
But the other alternative is that instead it’s balkanized and people are just taking in the sources of information that—and that is, I think, one of the fundamental paradoxes of the Internet: it’s this incredibly powerful force for diversity of knowledge and information, but it can also be an extraordinarily powerful force for creating an echo chamber, and that’s the problem you were sort of talking about.
Nilekani: Yes. You know, one would think that with the diversity of views on the Internet that people would be more open about getting more ideas from different sources and ideologies and then framing a composite. But it seems to be the opposite is happening. And it’s also got a structure where you subscribe to the blogs you want to read, you subscribe to the feeds you want, and therefore, there’s a process of self-selection. So if your ideology is to the right, then you read right blogs; if you’re liberal, you read those things. And therefore, if you are only in your echo chamber, then I think the polarization and fragmentation will be much more. And that actually is a thing that is worrisome, because, you know, in the old days, when somebody curated the news for you, okay, fine they may have—
Surowiecki: They had their ideology—
Nilekani: They had a bias, but there was a notion of serendipity in that. There was a notion that something would suddenly come in your frame which was different. But if you just select what you want to read then there’s nothing new, and I think the fragmentation will become worse.
Surowiecki: So my mom tends to watch CNN because she figures if she watches Fox it’s too far to the right, MSNBC’s too far to the left. She figures CNN has to be somewhere in the middle. But it is this kind of attempt to find that kind of a curated thing.
Nilekani: Yes.
Surowiecki: Why don’t we take some questions, if there are any. Yes? Do we have mics?
Kirkpatrick: Yes, we have mics.
Surowiecki: Okay.
Bonchek: Thanks. Mark Bonchek with SHIFT Academy. My question is to challenge a little bit the notion here of “technology’s just a tool and it’s how we use it.” If we kind of take Marshall McLuhan’s maxim that the medium is the message, you know, if I look at Gutenberg’s press, or the railroad, or the rifle, I mean it seems to me that those are technologies that had a certain impact kind of regardless of what we were going to do with it, and maybe you can control it a little bit, but it’s disruptive in its own way. And so is it kind of the sense that really it’s just a tool, or does technology have its own trajectory, regardless of what we do?
Dorsey: I mean it’s a fair point. I think there’s an intent when a thing is created, and what people do with that intent tends to evolve over time, in some cases negative and in some cases positive, whatever negative and positive means to whatever society or community is interpreting it. But I think we all kind of bring our own intents to these canvases and we use them in very different ways and they’re going to have different outcomes. So I do still feel that like the intent of something doesn’t always necessarily persist throughout its life.
Surowiecki: One other thing you might—can I say one thing about that? I mean there’s this distinction that historians of technology make between sort of a technology like the rifle and then general-purpose technologies like electricity or the Internet. And it may be that the rifle is a very focused technology, where the intent is quite explicit, and it may be that there are these more general-purpose technologies, like electricity or the Internet, where the ends are more open potentially and can be used in different ways. So that may be a way of thinking about it.
Bell: I mean I take your point about general versus specific purpose, but I think it’s also safe to say, to sort of build on something Jack said, that, you know, technologies are also built in a moment in time, right? Their use is imagined; there is work that it’s thought they should do. Sometimes that work is subverted, which is also when technologies become most interesting in some ways—when they don’t end up serving their intended purpose. But they inevitably encode a logic. In the case of railways, about the gauge, about how wide it was going to be, about how many trains were going to go on it. I mean the same with the Internet, right? A series of technical decisions were made about the backbone infrastructure of the Internet that shaped the way we use it today. Is that a tool or not? Hard to say, but a logic is encoded in those objects that requires critical interrogation and asking the question of what was the intent, and then what else can you do with it once you’ve got it—two different things.
Audience 3: Hi, my name is David—I’m over here. And it’s a comment, really. It’s a comment, because I’m kind of annoyed by some of the views that are expressed on the panel, statements like, “Everybody has access,” or statements that—I feel like the general conclusion, or the fundamental questions being asked, it’s because we’re afraid to acknowledge that these technologies actually can solve today’s problems if we want to. So David mentioned about education. The kids in Sierra Leone, Guinea, and Liberia don’t go to school today. Do we have the tools to bring learning to all of them at their homes? Yes, we do. But do we do it? No, we don’t. A bomb suit costs $27,000. A PPE, I don’t know how much it costs. But we still cannot bring technology to people in the field who can use those PPEs to save the healthcare workers, who we need alive. And so to sit here and say, well, technology will solve inequality, it’s slightly annoying, because we know we have the tools today, and if we know we have the tools today and that connectivity is not available to everyone and there’s already this growing inequality, why is the conversation around will we do it and not why aren’t we doing it? So that’s annoying and I felt like I had to say it because you—Genevieve also mentioned about who, and that’s important. Who is creating the technology and those codes that are embedded in those technologies does affect who eventually uses it. The fact that there’s a train in Sierra Leone that goes from the mines to the port that is built by a mineral company that is not in Sierra Leone is ridiculous and very, very frustrating for everybody, because if a train can go from a mine station to the port, we should be able to bring those health tools from the port to people who need them. And so we need to check the fundamental assumptions behind the question, and then maybe we can have a conversation again that’s based on the reality of what’s on the ground and less this utopian assumption about what technology eventually does. [APPLAUSE]
Surowiecki: Well, do you want to talk a little bit about—I mean I totally get what you’re saying. I think obviously Nandan’s example in India is an example of actually using technology to try to change some of that stuff. But I think that—do you want to talk a little bit about the literacy stuff?
Nilekani: Yes. A lot of my time is spent in how we can use technology for empowerment and reducing inequality. And one of the areas which is now a big issue is education, and I think David mentioned that in the context of refugee children. But in general, the quality of learning outcomes in schools in large parts of the world is just not up to scratch and kids in grade five can’t even read grade two paragraphs, they can’t do basic arithmetic. And we believe now we can apply a technology-based approach to that, with smartphones and tablets and game-based learning and so on.
So I think the point is we have to make a conscious effort to say, okay, here’s all this good stuff. How do we build the platforms, how do we build the distribution, how do we build the access? You can’t just leave it and say that it’ll happen, because governments are not innovative enough to solve this, markets have no interest in solving this, and NGOs don’t have scale. And so these problems fall in that Bermuda Triangle of these three things. So I think definitely we have to make a conscious effort to see how we can use this to address inequality.
Surowiecki: I also think what Genevieve was saying about how technology can only solve problems that we want to solve seems very relevant to what you’re talking about in that sense. I mean to the extent that people aren’t investing in this stuff, then it’s not going to solve anything.
Bell: And it’s really easy to sit in Silicon Valley, in San Francisco, and think everyone has access to the same stuff, and we know that’s not true. I mean we can talk about Google and Facebook. If we were in another country, we might be talking about Tencent and Baidu, or the Tata Group. And there’s something about also what it means to think about that it’s easy to say everyone has access, but we know that’s not true even in the United States. And, you know, we were having that conversation earlier with Tony Marx about how many Americans don’t have access to the Internet. And, you know, magnify that problem on a global scale, and while it’s easy to say there’s a lot of cellphones in circulation and there’s a lot of broadband and 3G out there, we know not everyone has it. We know it’s not everywhere it could be, and we know that even if you’ve got it, that doesn’t mean that your access is equitable or fair or reasonably priced. I mean unlike say television or the telephone—landlines—this is not yet a technology of genuine pervasiveness. I mean it’s scaled in remarkable ways in an incredibly short period of time, but there are huge places, swaths of the planet that are still not part of this conversation.
Saitto: I’m Serena Saitto from Bloomberg News, and I have a question for Jack. I want to know, reconnecting to something somebody said before about—I want to ask you, what was your intention when you created Twitter? Because I’m a Twitter user and, as a journalist, I feel I need to be there. I cover technology and I feel if I’m not there I’m not reaching the audience of people that I cover. But yet, I still feel it’s a tool to reach insiders and newsmakers, and this is probably the reason why maybe Twitter is not growing beyond that, because I feel you don’t reach yet that wider audience that is maybe on Facebook and other social media. So my question to you is what was your intention, and how are you going to bring it to the step forward of reaching beyond the insiders—if you think that’s true.
Dorsey: I mean the intention was pretty simple to us, which was just to create more connections between the people in our company, at first, and do it with amazing speed. So we had access to text messaging and we could be anywhere in the city and we could share whatever was happening in front of us, or what we thought, or what we were about to do or what we just did. And it was that simple. And then on the other side was just seeing what people were doing, and it’s the people that we wanted to follow, the people that we cared about, and the aperture of that kept growing and growing and growing.
So the idea and the intent was just to make sure that people have a very, very fast way to share what’s happening and then to see the world around them in a very speedy way. And that’s still the intent, and there are certain parts of society that have really taken to it, and we’ve seen it in kind of waves. One of the original audiences for Twitter in a major way was the press, and they would share links, they would get a sense of what was happening on the ground very quickly and then build a narrative around it. And it’s become a way to source information right now as well. So I think that will continue to grow, but we are bound by the number of people that have access to technology that can actually interact with Twitter, whether it be through a mobile phone and a text message or through a computer or through an app.
So, you know, I think Twitter’s biggest problem has always been, “How is it immediately relevant to me?” and it takes some time to see that, and everyone has a different answer. Some people see the relevance in sports, some people see the relevance in news, some people see the relevance in their social circles, some people see the relevance in breaking news, but everyone has a different story of when it really clicked and then it expands. So our job is to make sure it’s more instantly relevant to more people. But, again, that’s bound by whether people have access to the technology.
Surowiecki: I think there are all these paradoxes we’ve been talking about, you know, freedom versus surveillance. But when you think about Twitter, that’s a sort of familiar rap about Twitter, that it’s like the press, the media, and then it’s people linking to celebrities, basically. But you have like 240 million monthly users or something, and some incredible number of—and whenever I’m on Twitter I just think like there’s an incredible number of things going on that I know nothing about, right? And so it seems clear—I mean how big can the press number be? It’s got to be a tiny fraction of that. I mean 240 million seems like a huge number. I guess in the context of the world it’s not that big. That gets to this Facebook/Google thing. I mean on the one hand, that’s amazing, 240 million people are tweeting. On the other hand, maybe in the context of the world—I mean how should we think about—Facebook has 600 million people or whatever it is. That’s an incredible number of users. It’s only a tenth of the world population. I mean how do we think through this about how influential these things are or are not?
Dorsey: I mean I think they’re only as influential as we make them, right? So it’s all up to the ideas being expressed and what’s being shared on these platforms, and I think they go in waves. And as an example, we put a lot of emphasis on followers and the number of followers one has on Twitter, but in the early days of the service we had this guy who was in a boat, he had 60 followers. He was on the Hudson, and a plane landed in the Hudson and he took a picture of it, and within 10 minutes there was an international conversation with that picture, and he grew his follower base to about 1,000 after that. So the quality of what he was sharing is really what spread the message. And it’s our job to make sure that that content and those ideas are surfaced to the right people as quickly as possible, and that’s not an easy thing to do in real time, and that’s really where the work is.
So I think it’s not the technologies or the companies that are influential; it’s the people using the technologies that use them to influence. And that’s a matter of using the voice to speak up and then making sure that the right people hear it.
Miliband: And it’s also the case that many of the ills of Twitter and other similar technologies were many times worse before they existed. I mean remember, the truth is the chemical weapons attack—attacks, plural—that happened in Syria were exposed because people are able to communicate. The fact that diaspora remittances to Africa are three times the level of aid is because you’ve got greater capacity for cheaper connections. And so I think one’s got to be careful not to forget where we’re coming from—
Surowiecki: Chemical weapons attacks in Iraq in the 1980s, it took forever to find out that they’d happened, right.
Miliband: Exactly. Exactly. So the means of secrecy are always renewed, but they are under far greater pressure.
Persson: Hi, my name is Michael Persson from the Dutch newspaper De Volkskrant. I would like to come back to the idea of technology as a tool. All of you have been saying that it’s a tool that can be put to good or to bad use. My question: whenever there are authorities, whether it be governments or courts, that try to restrict those tools or restrict tech companies, the companies find it very hard to let themselves be controlled. A few examples: Airbnb, Uber, and I think Twitter itself when it was used for Syrian propaganda or Islamist propaganda. It’s hard to control those companies, and the companies hardly let themselves be controlled by governments or courts. There’s big resistance from tech companies whenever there’s an attempt to push this technology in a different direction than the companies themselves think is good. So my question is why do tech companies find it so hard to be—if they think they’re just a tool, why do they let themselves be controlled so?
Surowiecki: Why do they let themselves be, or why do they find it so hard to let themselves be?
Persson: Why do they find it so hard?
Surowiecki: Okay. I mean there’s a paradox here, you know, this idea of the many Internets, right? In some parts of the world—I mean Google backs out of China or whatever. But, so Airbnb, Uber, etcetera, regulation, very uncomfortable with it, have a hard time with it. Why do they find it so hard to allow the government to weigh in?
Dorsey: Well, I mean as one example, when we were starting Square, our job is to make sure that our sellers can make every sale, and what we found was that people were not able to accept credit cards because they couldn’t get a merchant account. And we looked at the process of getting a merchant account and it took, you know, a month, two months, three months, four months. And the majority of folks were denied because the only instrument the financial industry had to vet identity and potential for fraud was the FICO score, and a lot of people who are just getting started, smaller companies, people who are going out on their own, they don’t necessarily have the best credit, or they don’t have a credit history at all, so basing their ability to actually participate in an electronic economy on something that they just don’t have access to stopped them flat. So when we went to the banks and said, “This is what we’re doing. We want to make the credit card rails a lot more accessible to more people,” they said we have to use the FICO score. And it didn’t make sense to us, and it didn’t make sense to our sellers, because here are people who are selling things, people are giving them money. Why do you have to check my credit? Like people are actually giving me money. I’m not spending unsecured lines of credit. But it was the only instrument the industry had. So we spent nine months working on a completely different identity system so that we could let more people in, and whereas the typical merchant acquirer business accepts only 10 percent of those who apply, Square accepts over 98 percent and allows them onto the rails to start swiping credit cards. It would’ve been so much easier if we had a better identity system, if people had the ability to show who they were. We had to create all these systems in order to do that.
So I think these are—there may be cases where the intent is to protect a behavior, and there may be cases where the tools and the policies and the practices just have not been updated, and they need to be questioned and you need to explain why it’s important. But ultimately, I think there will be a way through, as long as you show resonance with people who want to use this in a positive way that can be balanced. And we were able to eventually do that, but it was an extremely, extremely painful process. And I think the companies you mentioned are facing similar challenges.
Nilekani: I think one challenge is the challenge purely digital companies face. But I think Airbnb and Uber are different because they are—I mean they’re using digital platforms, but finally it’s about, you know, spare rooms or getting a car or something, and that impacts a lot of people. I mean if you’re a taxi driver in New York and you spent $1 million buying a medallion, and here comes along this thing which makes it go away in value, obviously there’s going to be agitation. Or if you have apartment buildings in New York which you rent through Airbnb and don’t pay some hotel tax. So these are real world issues where the physical world is meeting the virtual world, and there are incumbents and political interests and political economy people who are affected. So I think when it’s an atoms and bits kind of a thing, I think the regulatory issues are far different than a pure bits kind of a thing.
Surowiecki: Yes. And I mean just think about the institutional weight behind that stuff. I mean the amount of money people spend on medallions—I mean the number of taxis in New York City is a tiny bit larger than it was in the 1930s, maybe a couple thousand more, even though the population is obviously much bigger. And that obviously raises a whole host of lobbying issues and the like. So I do think it is very complicated. And then even at an industry level, that stuff really plays out, I think.
Yes, over here?
Grainger: Hi. Katherine Grainger from Civitas. And it’s nice to hear people talking about technology as a tool instead of the answer. It’s refreshing, actually. But something that I’ve been wrestling with is something you said, Jack, around revolutions. I think what Twitter and other social media have been able to do is expedite the process by which we see revolutions occur.
The flipside of that, and, David, I’d like you to speak to this a little bit, is that once the revolution happens, the infrastructures that would normally have been built up historically, through time, because it takes however many years for a coup—they don’t exist. So you get to a place where there’s a tipping point and then you kind of fall over the edge, and there’s no infrastructure. There are no systems in place, whether those be constitutional or a sense of who the next person to be elected would be, and then it almost re-creates the status quo. And so while I think that what technology’s been able to do in terms of expediting the revolutionary process is extraordinary, what is our responsibility, whether that be through technology or through NGOs or government, to make sure that we’re keeping up with that pace? Because without it, we’re getting to a system where militaries are taking over and we’re not necessarily pushing the change and the revolution at the pace that we could be if we were all keeping up with the technology. So I think it’s very interesting to think about it as a tool, but then what’s next?
Surowiecki: So this would probably be like Egypt and Libya, which have basically ended up the same as or worse than they were before—
Grainger: And then you can say that bleeds into Syria.
Surowiecki: Right, exactly.
Miliband: Great question. I mean a really important question. I think it’s important to keep in mind, though—you know, contrast Germany 1989 and Egypt 2011. There are obviously many differences, but I think what you’d conclude is, first, that the fundamental issue in Egypt in 2011, like in Germany in 1989, was a legitimacy crisis. Power was not being exercised legitimately, after years, in fact decades in both cases, of the corrosion of the legitimacy of power—even though in both cases it had been undemocratically attained. When people talk about a Twitter revolution—I hope you don’t mind me saying this, and you’re not saying this—in a way it demeans the people who are on the streets to say that. In fact, what it can do is create power to organize. It can be a catalyst, but it’s never the fundamental driver. And I think that’s the second thing to say.
Thirdly, though, your point is a really powerful one: that in the end, if the price of security is order and the alternative is anarchy, people will choose order. And what's very, very interesting about the German case, and all of the Eastern European revolutions—remember, it wasn’t just the unification of Germany. You’ve got what happened in Poland, you’ve got a dozen countries effectively liberated. The anchor, the institutional anchor, was European constitutional democracy, and the European Union as well. And the tragedy of the Middle East is there is no anchor, and so you end up back in the strongman theory of history.
Personally—I mean some people take a very, very negative view about the implications of that. They say either the Middle East is never going to be ready for accountable government, which I prefer as a phrase rather than, quote/unquote, “democracy,” or they draw what I think are facile points about the fact that the technology was never really there, which I don’t think is right either. The lesson I learn is about building accountable institutions and legitimate institutions, and that is a lesson for those who are in power in the Middle East, because that demand for accountable government hasn’t gone away. And that’s why when people say we’ve had a reversion to the status quo ante, I find that hard to accept, because history goes forward, not backwards, really.
So I haven’t got a sort of pat answer to your question, other than to say you’re right that institutions matter. But I think that if you buy the tool/weapon metaphor, then people who want to hold government to account have got more tools at their disposal. And of course, the risks for them are huge, but those risks were there before they had the weapons.
And just to pick up your Syria point—and, sorry, I don’t know if this is what you wanted me to talk about, but tens of thousands of people were slaughtered in Hama in 1982 and no one really found out about it. Now, 190,000 to 200,000 people have lost their lives in the last three and a half years of the so-called civil war, one in two Syrians is a displaced person, 3 million refugees are in neighboring countries, and the country doesn’t exist as a country anymore, and so it’s very hard to see where the legitimate authorities or structures of power are going to come from. But in a way, I think that speaks not to a technological problem or question. It’s a far deeper question of identity, in a state that has lost its legitimacy.
Surowiecki: But what about the idea—and I think it’d be hard to demonstrate this, but one of the implications of the question was actually what if in fact—what if you buy the idea that Twitter actually accelerates the process—let’s just stipulate that—but that one consequence of that is that you don’t have the kind of set of foundation building that you have in other circumstances? So in other words, if you have a long process—so the Civil Rights Movement in the U.S. really takes from 1954, realistically to, I don’t know, the mid-1960s, before the Civil Rights Act is enacted. But with Twitter, things happen much faster—let’s just stipulate that; that may be totally implausible, but let’s just stipulate that—and that therefore, that makes it harder for institutions to get built that might actually be able to come in and replace the existing things. Is there anything to that?
Miliband: I actually think the different point is that secular liberal forces were under-organized in Egypt, compared to the Muslim Brotherhood.
Surowiecki: So that’s just a very specific—yes. Right.
Miliband: I mean that’s essentially what happened.
Surowiecki: Right. Okay.
Bell: Though it is interesting—and I know you’re asking a different question, but I am sort of struck by this other different kind of way of thinking that through, which is what does it mean to think about revolution when you don’t have a single legitimate authority against whom you are rebelling? So, you know, what was it that Stonewall was about? What was it that the Civil Rights Movement was about? What is it that Mary Wollstonecraft, Alice Paul, Elizabeth Cady Stanton, and Brianna Wu are all arguing for? I mean there is a thread that runs through those things, right? There’s not necessarily a legitimate social authority—which is a nice phrase, right?—against whom one is having the argument. So it becomes sort of fascinating to think, running in parallel, you have these two very different kinds of calls for change, one that has a clear thing around which you can coalesce and a clear moment, and these other ones that take—I wish they happened quicker. My sense is they will take decades, and in some cases centuries to play out. But then it becomes how do you frame the conversation when there isn’t a clear thing against which the anger is turned. I mean you can say down with the patriarchy, but what does that really mean? And you can say, you know, Stonewall forever, and, again, what are you actually articulating, right? So there’s something there about what does it mean to advocate for equality, and I think also peace in some ways, where it’s not about a particular target, but about a larger call for a world that you want to inhabit. And I’m sort of struck by thinking about how one does that through some of these mediums that want things to happen more quickly. I want them to happen more quickly too. The reality seems to be some of them take a tremendous amount of time and work, and I wonder how you balance those things out.
Kirkpatrick: I want to both ask a question and announce that it’s going to be the last question. But also—oh, actually, somebody has a mic? Okay, then it won’t be the last question.
Surowiecki: Can that be the last question, over here?
Kirkpatrick: Yes, she can be the last question.
Surowiecki: Okay.
Kirkpatrick: As the formulator of the poorly formulated question, I wanted to—you know, it was deliberately loose, this whole theme. But I actually wanted to push back against the notion that has kind of been abroad throughout the conversation, and I think it’s related to what David said, but it’s not the same. It’s really about technology itself. Because Techonomy is really aiming to try to look forward at what’s going to happen, right? And I don’t think you’ve done that enough in this discussion myself, because I actually think there are fundamental ways in which technology is creating equality right now. And the one way that it strikes me—I remember the other day I looked at a photograph of Elon Musk holding an iPhone, right? And I think, okay, Elon Musk can have any damn thing he wants in the world. He has an iPhone. I have an iPhone. You know, my daughter has the same iPhone. A kid on the subway—like 17 people on the subway have that same iPhone. It is the state of the art for a power tool in society and everyone effectively who can spend $300 can have the same thing as Elon Musk.
Okay. I’m asking you to project forward a little bit. Because I believe—and in a way, Genevieve is the expert here—that the ultimate mass manufacturing of consumer products, of technologized, mass-manufactured products, is a truly egalitarianizing phenomenon that I don’t think the world had seen until recent decades, and you guys haven't really touched on that. So I want to see if you agree with that at all. And the same points can be made about software too, but I’m sort of thinking of hardware as an easy way to symbolize this and to ask you what could happen in the future, the more extreme technology that would be very inexpensive, absolutely pervasive, and universally accessible, and that might affect equality.
Surowiecki: David, can I ask a question? So what you just said, I mean that’s the famous Andy Warhol thing about Coca-Cola: everyone can drink a Coke and it’s just as good when the poor person drinks it—
Kirkpatrick: It’s different when it’s such a powerful thing as an iPhone.
Surowiecki: So you’re saying it’s something about the power that the tool, that the iPhone has as a tool—
Kirkpatrick: The same thing happened with semiconductors in PCs. I mean PCs were an example of the same thing. It’s just getting more and more extreme because the nature of the tools is getting more and more sophisticated. And the question is, is there any degree of sophistication in the tools that might actually lead to more equality? Because I think maybe it’s happening already, in my opinion. You all don’t seem to think of it that way, but I’m pushing you on this.
Nilekani: But, David, assuming everyone on the planet has an Apple 9s or whatever, the iPhone 9, which is going to be the future thing, assume everybody has superfast gigabit broadband, assume everyone has access to unlimited information on the cloud, you still are not going to solve the inequality problem. So let me explain—
Kirkpatrick: But you don’t think the world would be more likely to be equal in that scenario, even—
Nilekani: Well, look, if you have 100 million kids on the planet who can’t read or do arithmetic, that iPhone 9 isn’t going to solve the problem. So I think to assume that technology is a sufficient—I think having this gives us the tools, if we do it properly, to solve it. But I don’t think just having an iPhone is going to solve it.
Kirkpatrick: Okay. Since you’re building software to try to allow kids to teach themselves to read, Nandan, and if that iPhone 9 has that software in it, and then that can go to anyone, you don’t find that—
Nilekani: No, no, but I think unless people specifically say, “Here’s this future platform and these are the potential inequalities that we can fix with this platform” and work towards that goal, it’s not going to happen.
Kirkpatrick: Right. So do that. You are doing that.
Surowiecki: But why—if we’re not getting billions of people clean water, why do you think we’re giving them iPhone 9s? I mean how are most of the people on the earth getting iPhone 9s if we can’t even get them clean water at this point?
Kirkpatrick: I’m not saying that it has to—I’m saying technology takes many forms. I’m not saying it’s only smartphones that do this. I get the feeling David understood what I was asking. [LAUGHTER] He was nodding aggressively. You were the one person I wasn’t going to force to answer the question.
Miliband: But what would transform my world, if this translation technology, that you can speak into a—you get your voice instantly translated into someone else’s language—what do you call that in your world?
Dorsey: A translator. [LAUGHTER]
Miliband: Instant translation. If I can get my Lebanese staff speaking in French to my Syrian staff in Arabic, that would—and apparently that could happen within five years or less? If I can get my Swahili-speaking African staff, if I can speak to them directly and it’s in my voice but it’s coming out in Swahili, that is an amazing change, and that is a democratizing and equalizing change, and it’ll probably help me organize the water and sanitation and all the rest of it.
Surowiecki: Yes. That’s great. Last question, over here?
Zajac: Hi, I’m Karolina Zajac with Samasource, and thank you for letting me field the last question. So we’ve been talking a lot about technology in the age of globalization, where we’re connecting people from often very remote areas together, you know, people from across the world can connect with a disaster, with a conflict, with an issue somewhere around the world and it seems like that can be very empowering. But do you feel that there might be something that might be also dangerous about that, and also false, where we think we understand a conflict that’s somewhere halfway around the world, or we think we’re invested in this problem that’s somewhere halfway around the world, but the reality is we’re always going to be more invested in our local connections, our real physical contacts, and that this is kind of a false type of phenomenon? Is there any kind of danger of falseness to it?
Surowiecki: Can I ask just a question? When you talk about danger, do you mean—do you think there’s something literally dangerous, or is it just kind of a moral danger, like I pretend to care more about it than I really do?
Zajac: There’s that, but on a real level, let’s say all these people really kind of add fuel to the fire of a conflict and get emotional about it online and build up the emotion in a conflict that’s in a remote area, but then when it comes down to it, are they really going to be on the ground when the bombs fall? Are they really going to be on the ground when it’s time to take up arms?
Surowiecki: Got it. Okay.
Miliband: I think the far greater danger is that we think—sorry to put it like this—that the problems of taxicabs in New York are the biggest problems that the world faces. I mean it’s a far bigger problem that we spend all of our time looking at the puddles in front of us, not the horizon. And the truth is the big threats to the world are those that come from the neglect of the global public good. At a time when 20,000 people died in boats trying to get from North Africa to Europe, the Pope talked about the globalization of indifference, and that threat of the globalization of indifference, which is an extraordinarily potent phrase, is the antithesis of what we’ve been talking about today. We’ve been talking about universal access, universal services, universal connections, a sort of consciousness that can spread around the globe, and in fact, this is a time when people are thinking more and more locally, and that is I think an even bigger danger. So if the danger you’re warning of is a sort of imperialistic hubris, that we’ll become so convinced that we understand the problems of a faraway country, I can see that, but I think it’s a far less danger than that we think we can secure our own future by just concentrating on our own locality. And the big problems of the world are actually ones which we face in common with people a long way away, and my ambition would be that the expertise and the idealism that exists among people like you is deployed to tackle the neglect of the big global public goods, which include the environment, security, and health. And none of those things can be solved by communities on their own.
Surowiecki: Actually that seems like a good way to end it. All right, thank you very much.
