Gods in Boxes: Almighty Algorithms and Hidden Values
Much of what we see and learn is determined by increasingly complex algorithms. From news to auto emissions, from pricing to scheduling, tech largely indistinguishable from magic is selecting, editing, and organizing more and more of our daily lives. How do we ensure the values and beliefs we hold self-evident don’t get lost in translation?
 
Sherman: So we’re sort of picking up where the last conversation left off. The title is Gods in Boxes. We’re referring here to—we’re calling them algorithms, but really we’re talking about the entire ecosystem in which algorithms operate, the architecture, the ecosystem, and the data. But let’s call them algorithms for short.
Before we get to them, why don’t we introduce who’s here. At the far end, Oren Boiman, who is CEO, cofounder of Magisto. This is an AI-driven technology that some of you have already sampled, I know, and we’ve all got I think a year’s membership in it. It takes photographs and videos and edits them, through some artificial intelligence technologies, into things that are not just edited but emotionally resonant. So Oren has background in computer science and math, PhD in computer vision.
To my right, Vivienne Ming, self-described academic turned unemployable time waster/entrepreneur.
That’s just the best occupation I can think of.
Ming: I wish it paid better.
Sherman: Yes, wouldn’t that be great? So she is a theoretical neuroscientist and technologist and an entrepreneur. Among many, many, many other things, she’s cofounder of Socos Learning, which uses a very deep foundation in research on life outcomes to deliver one message a day to parents of young children.
Ron Brachman, in the middle, chief scientist at Yahoo, head of Yahoo Labs, had a much heralded and much published career in computer science and AI, ranging from Bell Labs to DARPA. And as an aside, he helped develop artificial intelligence at AT&T that then-CIO Hossein Eslambolchi applied to a zero touch vision for automated customer service, which some of you who are AT&T customers may remember as a nightmare. So thank you for that. I know it’s not your fault.
Brachman: I actually had nothing to do with that, sorry, Strat.
Sherman: So, first question, and let me give this to you, Oren. What’s the difference between choices made by computers and choices made by individual people or groups of people?
Boiman: I think one of the things people don’t really understand—or it’s difficult for us to understand, even if we get it intellectually—is how fast computers are, especially when computers are working together. Think about the near-future use case of self-driving cars: somebody comes into the junction at high speed, and for us everything is happening so fast that we don’t have time to react. From a computer’s perspective, this is happening in very, very slow motion. They have all the time in the world to have, you know, years of discussion about “What’s this person going to do?” Everything looks like slow motion.
So computers move so fast compared to us that it’s difficult for us to understand. If you think about algo-trading, for example, in the blink of an eye the equivalent of years of human discussion can happen between computers, and they will make exactly the choice that is right for them. So speed is one thing.
I think another element—which, again, isn’t inevitable, but things are moving fast in that direction—is that computers are becoming more and more of a black box. We’re used to thinking about algorithms making decisions, computers making decisions, like this: there is a programmer who has written a program, somebody designed the logic of what the program is going to do—if, then, else—we all know that. This is changing very fast. Neural networks, which were invented back in the 1970s, are back with a vengeance, and now they work—so well that in many cases they beat the best algorithms designed by people. They are generic algorithms, more like a black-box function from input to output. So what you get, essentially, is a program that can make decisions, and those decisions will be right in most cases—often way better than a human’s. But nobody understands what’s going on inside.
Sherman: Let’s stop there. Vivienne, are computers value neutral?
Ming: Well, what I would say about that is computers are just like people; it depends on how you raise them. Particularly in this case, we’re talking about deep neural networks, where you don’t explicitly design the behavior—you just give it some criteria to train against, and then you give it a lot of training examples. Then it depends on what you show it. I remember the very first real-time face detection algorithm that I saw. It was developed by Paul Viola, and he was so proud, he showed all of these amazing examples and then flashed the crew of the Enterprise up there. And sure enough, it found everyone on the Enterprise except one person, Uhura. When it was pointed out to him, he immediately confessed, “Well, we didn’t really have a lot of black faces to train against, and so the way it learned how to do it, it just didn’t recognize faces with dark skin.” So in cases where we’re really not explicitly designing these things, they inherit whatever bias is implicit in how we train them.
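A minimal sketch of the failure mode Ming describes, on entirely synthetic data (the groups, features, and numbers here are invented for illustration, not taken from the panel): a classifier trained on a dataset in which one group is barely represented ends up far less accurate on that group.

```python
# Illustrative only: a classifier trained on imbalanced data performs
# noticeably worse on the group it rarely saw during training.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Synthetic two-feature samples; label 1 = face, 0 = non-face."""
    faces = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    non_faces = rng.normal(loc=shift + 3.0, scale=1.0, size=(n, 2))
    return np.vstack([faces, non_faces]), np.array([1] * n + [0] * n)

# Group A dominates the training set; group B is barely present and has
# different feature statistics.
Xa, ya = make_group(1000, shift=0.0)
Xb, yb = make_group(10, shift=-4.0)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Held-out accuracy for each group.
Xa_test, ya_test = make_group(500, shift=0.0)
Xb_test, yb_test = make_group(500, shift=-4.0)
print("accuracy on group A:", model.score(Xa_test, ya_test))
print("accuracy on group B:", model.score(Xb_test, yb_test))
```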
Brachman: I was going to interrupt the flow and say one thing, if you don’t mind, somewhat aligned with what Oren said but a little bit different. I think there’s a fundamental difference between the kinds of choices that computers and people make right now, and that is that human choices—except for split-second, instantaneous reactions that have no thought loop—are driven by intentions. Humans have goals they’re trying to satisfy; they have needs, desires, other things of that sort. And while we sometimes impute those things to computers—we just heard in a prior discussion that Facebook notices something, Facebook recommends something—in honesty, it’s not really doing those things. We’re just using those anthropomorphized terms to reassure ourselves about what the algorithms are doing. But humans make decisions based on intentions, desires, goal states, and past history in a way that at the moment is very different from the way computational mechanisms make decisions.
Ming: At the same time—so I was chief scientist of a company, which I won’t pitch here, but they’re great, go look them up—we built models to predict how good people were at jobs. They were never perfect. And frequently when I’d get interviewed by the press, they’d say, “Isn’t that scary? Don’t you ever make mistakes?” And, yeah, we made mistakes all the time. But we looked at 55,000 variables. Your average recruiter looks at three variables: your name, your university, and your last company. So they use a lot of implicit pattern matching and intentionality in making their decisions without reflecting on why. At least in our case, fully acknowledging that we’d inevitably make mistakes in our recommendations, we were explicitly stating a set of criteria.
Sherman: But did more data produce better decisions?
Ming: More data—we had these wonderful things that we ended up calling Jade Stories, based on the person we originally hired using our algorithm. He’s a guy who never went to school—
Sherman: Yes or no? Better decisions—
Ming: Yes.
Sherman: Do you believe that? Just in general, more data, better decisions?
Boiman: Totally. I mean, unless you think the data is independent of the result—unless the outcome isn’t related to the data at all—then, yes, more data gets you better information.
Brachman: Okay, not to get too technical about it, but it depends on the data and the underlying distribution that caused the pattern. So there’s such a phenomenon as overfitting, and if you give something more data, it gets more and more narrowed in on a certain point of view, and if that data isn’t totally representative of the actual population you care about, it will actually make things worse rather than better.
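A minimal sketch of the overfitting point Brachman raises, on synthetic data (the function, sample sizes, and polynomial degrees are made up for illustration): a flexible model fit to a narrow, noisy sample matches that sample almost perfectly, yet does far worse than a simpler model on the wider range it was never shown.

```python
# Illustrative only: more flexibility plus a non-representative sample
# yields lower training error but much higher error on the real population.
import numpy as np

rng = np.random.default_rng(1)

x_train = rng.uniform(0, 3, size=30)                     # a narrow slice of reality
y_train = np.sin(x_train) + rng.normal(0, 0.2, size=30)  # noisy observations
x_test = np.linspace(0, 6, 200)                          # the range we actually care about
y_test = np.sin(x_test)

for degree in (3, 9):
    coeffs = np.polyfit(x_train, y_train, degree)
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree}: train MSE {train_mse:.3f}, test MSE {test_mse:.1f}")
```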
Sherman: And what about this black box problem that Oren got us started on, Ron? When you get an algorithm that’s dealing with tens, hundreds, thousands, tens of thousands, millions of data points and is using machine learning to get better at it, how much control can anyone exercise over the outcome?
Brachman: By anyone, do you mean the users or the designers or both?
Sherman: Human beings.
Brachman: In a nutshell, not much. Just the way Vivienne mentioned, there is a parallel with the way we raise children, right? We believe at some point that we can give them the values we care about, we can train them by example, we can punish them, reward them—do reinforcement learning, if you will. We send them to school, and while they’re in school, the examples they see and the concepts they develop are controlled, in a way. But then they walk out of school and go home, or they go play with their friends, or they get on the Internet or worse, and there’s really no way to control what children learn unless you keep them in frighteningly rigid surroundings. I think it’s the same thing with computational mechanisms that we allow to do learning, in whatever sense that means. We lose control of these things as soon as they get out in the wild, and no matter what values we think we’ve put in them to start, and even what we’ve proven, if you will, about the programs, they start to go awry pretty quickly.
Sherman: And are we okay with that? Is that a neutral fact?
Boiman: Just to get back to the point—it’s not just about the black box. I think there is a superposition of two trends that have been happening over the last years that is making things very, very unpredictable. One of them is obviously the black box thing, but there is also the fact that everything is getting more and more connected. It’s connected but independent—not connected and controlled by one brain, one program. What this usually means is that you have a lot of dependencies, and those dependencies create a chaotic mechanism—chaotic in the mathematical sense, which means that small changes to the input can generate very, very different outputs. Sometimes we call it viral: things go viral, which means tiny stuff that nobody suspected becomes huge in a very short time. And if you take that chaotic system, which happens because everything is connected, along with the black box, it makes everything pretty much unpredictable. Even if you try to trace back why things happened, it’s going to be difficult to understand. At the end of the day, you’re going to get to some tiny machine decision that nobody can predict or even understand.
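A minimal sketch of the sensitivity Boiman describes, using a textbook chaotic system (the logistic map, with parameters chosen purely for illustration): two inputs that differ by one part in a billion stay close for a while and then diverge completely.

```python
# Illustrative only: in a chaotic system, a tiny difference in the input
# eventually produces a completely different output.
def logistic_map(x, steps, r=3.9):
    """Iterate x -> r * x * (1 - x), a standard chaotic toy model."""
    for _ in range(steps):
        x = r * x * (1.0 - x)
    return x

x_a, x_b = 0.500000000, 0.500000001   # inputs differing by one part in a billion
for steps in (10, 30, 60):
    print(f"after {steps:>2} steps: "
          f"{logistic_map(x_a, steps):.6f} vs {logistic_map(x_b, steps):.6f}")
```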
Ming: And just to put a final point on that, this has been researched specifically in the context of social sharing and algorithmic sharing, and those results truly are unpredictable. They throw a lot of uncertainty—and, to quote one paper, “inequality”—into these systems.
Sherman: So let’s pause there and just very, very briefly let’s just survey some of the examples of things that touch our daily lives in ways that we’re kind of, sort of vaguely aware of but don’t really know how they work, that are algorithmically driven. Search. Oren?
Boiman: Yeah, I think if you think about it, there are actually two kinds of algorithms that we’re dealing with all day, every day, in our modern world. One of them is search. When we look for something and we get some results, these are not the best results. These are some results that some algorithm chose because of whatever cues it had, and this is obviously affecting our entire lives. The choices, the ranking algorithms, are actually determining everything that we do. We think that what we see in the first results are the right ones, but they’re not—they’re just what a computer thought. And at the same time there are the things that are pushed to us during the day: the rankings of newsfeeds of various kinds, like Adam talked about before. It can be a Facebook newsfeed; it can be the notification feed we get on our smartphones. This is essentially what determines what we act upon, and if you think you can ignore that, then you don’t really know how your brain works. You actually act upon what you get. So these two things are determining pretty much everything we do, and they are controlled by computers.
Sherman: So, Oren, in preparing for this meeting, I went on to Google and I typed in, “How big is the SEO industry?”—search engine optimization—and the first thing that came up was an ad, marked as an ad, which was great. The next one was a highly graphically ornate thing with 46 points, numbers and percentage points and it told a very nice story about how great the SEO industry was. I went to the third item and it was actually the same as the second item. And then I went to the fourth item and it was the same as the first and the second. So somebody out there is pretty good at search engine optimization. And it raises the question of, if we actually are, all of us, going to Google or whoever for the basic information on which we are going to make the choices of our lives, are we operating in the real world or not? And I think the answer probably is that we cannot know, can we?
Brachman: I don’t think we can know in the sense that we don’t have transparency and visibility into the algorithms that they’re using. It would be kind of interesting if that were shared with us. There was some discussion earlier. David kept probing Adam, saying, “How do you do that? How does that work? How do you do that?” And either it’s not appropriate to say or the fact is anything you say would only be a half truth.
Sherman: Ron, what would most of us do if Google gave us its algorithm? What would we do with that?
Brachman: Right, exactly.
Boiman: Probably abuse it.
Brachman: In any case, I think the issue is a little bit different. Vivienne mentioned something a little earlier about machines being in a way like people, and one of the things we do—because people go out into the world and exhibit unpredictable behavior, as much as we’d like them to share our values and learn from the right examples—is that we have extrinsic mechanisms for measuring quality and trustworthiness, and for control. At one extreme we have laws and courts, so if people learn values of right and wrong and they learn what behaviors they’re supposed to have, but they don’t abide by such things, these extrinsic mechanisms come in and try to deal with them appropriately. If we had third-party or external mechanisms around some of these things that we depend on all the time, it might help us at least have confidence that they’re not running too far off the rails.
Let me give you two tiny examples, very mechanical, not highfalutin, around recommendation and the like, but interesting. I was curious and went online and just tried to understand how antilock brakes work, because my intuition was, with the way cars are going these days—and I think we have some car people in the audience; they can say better—there’s a lot of software in the vehicle. And I’m wondering if the brakes on which my life and my family’s lives depend are run by software. And indeed, there’s an antilock brake control module that has software in it. One of the interesting things is that apparently there’s another module that watches over the performance of the system itself, and if something goes wrong, it can stop it—it can retract the hydraulics for the antilock brake system. So there’s a third party, if you will, that’s neutral, that’s not intertwined with the mechanism itself, that can help it perform well.
Sherman: So that works on antilock brakes, but does that work—let’s take for example, I’m sure all of you have read books by Michael Lewis, “The Big Short,” “Flash Boys,” that sort of thing. You can’t do anything with money other than use cash that’s not involving some kind of an algorithm. And one of the human values that wasn’t mentioned yet in our discussion of values is greed. We have very, very powerful financial incentives driving the creation of algorithms that deal with money, and sometimes these things crash markets and sometimes they do things that are worse than that. At that level of complexity, is there an antilock brake controller that can be applied so that somebody at the Fed can go, [braking sound effect]?
Brachman: Well I am not an expert on this—
Sherman: No one is, I don’t think.
Brachman: But there are certain mechanisms that come into play when the swings at let’s say the New York Stock Exchange go too far and they stop trading. Now, that’s kind of a blunt instrument, but it’s better than just letting it go. So you can imagine these extrinsic mechanisms being designed to try to keep things—again, I’m not an expert, and maybe you guys can speak about the financial world.
Ming: But I have to say, you know, in addition to being an entrepreneur, I’m an academic; I’m at the Redwood Center for Theoretical Neuroscience. During the real estate meltdown—which was actually a financial meltdown; it drove the real estate market, not the other way around—we were all asking, “Are you telling us you didn’t know what was going on?” We work with very similar types of algorithms in our own work, and it seemed entirely implausible to us that it truly took everyone by surprise. This was willful ignorance. So I think a lot of these judgments are being made by humans about how they design and deploy a set of algorithms that aren’t inherently good or evil.
Sherman: So let’s take that a step further. If you’re getting a credit rating, if you’re buying an insurance policy, I believe you’re being subjected to an algorithm. Does the credit agency or the insurance agency know why you got the rating you got?
Boiman: Maybe now, yes. I think in the future they won’t know. Again, when the black-box algorithms are competing and winning, will we use the algorithm that is more moral, or will we use the algorithm that wins? I think it’s very easy to know which one we’re going to use. And again, the more data you have—assuming you have lots of data and you don’t run into the dimensionality problem—the more information you get and the better the results, so you’ll probably abuse it, and you’ll sacrifice some people. Obviously there are going to be mistakes. Every algorithm of this kind makes prediction mistakes, and there will be casualties of this—
Ming: But one of the interesting things is not simply the mistakes—which I think to some extent we accept, if the tradeoff of using the algorithm is better than the tradeoff with the human. That seems very straightforward. But working in HR tech, where I was before, and in education, what we’d find, for example, is that the algorithm, if it mines deeply enough, inevitably finds things that correlate strongly with race and gender. Well, those are not legal to make HR decisions on, but they turn out to be very predictive of at least the past behaviors of companies. So now we have these inextricable deep neural networks that have found things that are highly predictive, but that we’ve decided as a society we don’t want in there. And yet they’re almost impossible to remove, because the whole point is that the algorithm is optimized to discover these trends.
Sherman: So how do you get algorithms to stop reinforcing discrimination and unfair distribution of opportunity?
Ming: Goodness, that’s a very difficult question. Because, like I’m saying, you can take out the explicit input data and it doesn’t necessarily remove the bias from the system.
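A minimal sketch of that point, again on entirely synthetic data (the “proxy” feature, group labels, and historical decisions are all invented for illustration): the protected attribute is never shown to the model, but a correlated proxy lets it reproduce the old bias anyway.

```python
# Illustrative only: removing the protected attribute from the inputs does not
# remove its influence if a correlated proxy feature remains.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 10_000
group = rng.integers(0, 2, size=n)              # protected attribute (never given to the model)
proxy = group + rng.normal(0, 0.3, size=n)      # correlated feature, e.g. zip code or school
skill = rng.normal(0, 1, size=n)                # the genuinely job-relevant signal

# Historical hiring decisions were partly driven by the protected attribute.
hired = (skill + 1.5 * group + rng.normal(0, 0.5, size=n)) > 1.0

# Train only on the "allowed" features: skill and the proxy.
X = np.column_stack([skill, proxy])
model = LogisticRegression().fit(X, hired)
pred = model.predict(X)

print("predicted hire rate, group 0:", round(pred[group == 0].mean(), 3))
print("predicted hire rate, group 1:", round(pred[group == 1].mean(), 3))
```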
Sherman: So isn’t it basically a garbage in, garbage out problem?
Brachman: I don’t know if it’s garbage. But how would you stop a person or an organization from doing that? Maybe the same rules should apply. Which, again, goes back to the point I made earlier: there are other authorities that can come in, look for evidence if necessary, and stop the business from operating until you—
Sherman: But you can’t throw a computer in jail. I mean one of the great things about people is you can force them to take responsibility in the end, and it’s very hard to do that with a machine.
Boiman: Again, if you think about the implications of training a network to do anything—let’s say to predict your risk and then determine how much you need to pay—you might find that that piece of black-box software you got is racist. Actually, if you check it, it’s racist. What do you do then? You force it not to be racist, but then you find it’s racist toward other groups. So how do you force it? Forcing it to provide the same thing for everybody would make it completely non-optimal, so others will try to trick it. So there’s a very deep moral question about what happens when you take the decision and just say, “Let’s make it optimal.” We have no idea how it’s going to get there.
Sherman: So we have microphones and we have people. Would the two come together in the form of comments or questions?
Tas: Jeroen Tas from Philips. You mentioned that during the financial meltdown they actually started building what are called circuit breakers, which I think helps. I’m in the healthcare industry, where every algorithm needs to be validated. But now we’ve started doing machine learning, we’ve started feeding it data, and the algorithms will change. So do you have any suggestions for how regulators should deal with this?
Ming: Well, here, I have one very specific example because my son was diagnosed with type 1 diabetes four years ago.
Tas: So is my daughter, by the way.
Ming: Ah, well, it was a big inspiration to me to develop some of my own algorithmic treatments. I’m a fake kind of doctor, so I can’t actually do medical treatments. But I could build us something that, if it had enough data, could predict whether his blood glucose levels will go high or low in the future. But I had to hack all of his equipment to do that. Now, the idea of hacking medical equipment sounds like a threat to society, but with this I was able to build something that was actually much better. And, you know, I would get updates on my Google Glass. It would say, “Hey, Felix is going to go low in 20 minutes. Give him some crackers.” So I actually want a pluralistic approach to algorithm development. If we can pair the datasets from Facebook and Google with a lot of people who can act on them, I would be pretty happy having an open marketplace, even for medical data, in which algorithms can be tested and applied. I think there’s a drive to be very proprietary and a little bit paternalistic about how algorithms get deployed on data.
Tas: So how do we convince the regulators?
Ming: Oh, they don’t want to hear that. The FDA especially is not a good friend in this particular space. So I simply did this and then went out and talked about it. And eventually—and in fact, not just from my own work—a hashtag developed, #wearenotwaiting, which is a parents’ group trying to push forward the implementation of these new technologies.
Sherman: Other thoughts on regulation before we move?
Brachman: Again, doctors can cause problems as well, right, not just machines. There are things like certification, and doctors presumably keep learning—retraining, continuing education credits, things like that. So again, if we put our machines in that context, we just have to acknowledge that learning makes them unpredictable. We probably should test them under rigorous conditions to make sure that they do what looks like the right thing, the same way a human would.
Sherman: There’s also a human choice element here. We noticed when we were backstage that—we did a quick survey of how many of us were on Facebook and it turned out 50% of us are on Facebook and 50% of us are not. And that is I think one demonstration that people can choose. Who else?
Ming: So, actually, the original PI of my first-ever project was Terry Sejnowski. I worked on lie detection for the CIA off of video, and as potentially controversial as that might be, it led to projects I did reuniting orphaned refugees with extended families at the UN and doing a system for Google Glass that could recognize emotions and train autistic kids.
Audience: And by the way, we have a company called Emotient, which is now building on that technology for facial expression analysis. But I want to make a comment on this issue that you brought up, which is a very important one: how do you create intentions, and specifically, how do you deal with a network that is picking out things to make predictions from that you don’t want it to pick out? In addition to the data, you also have a cost function. You can pick the cost function not just to maximize your profit, but also so that you don’t do something wrong. You penalize things that might go wrong—you can actually put it in terms of percentages or whatever. But it also points out that it’s really the humans’ intentions that create the network, that provide what it is you really want to accomplish with it. And you can put in terms which are good ones, which are for the positive, in addition to the ones that—the greed ones.
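A minimal sketch of the audience member’s point about the cost function, again on synthetic data (the features, the penalty term, and the weighting lambda are all invented for illustration): the training objective is the usual prediction loss plus a term that penalizes a gap in average predicted score between two groups.

```python
# Illustrative only: a penalty term in the cost function trades a little
# accuracy for a smaller gap in predicted scores between groups.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
n = 5_000
group = rng.integers(0, 2, size=n)
proxy = group + rng.normal(0, 0.3, size=n)
skill = rng.normal(0, 1, size=n)
y = ((skill + 1.5 * group + rng.normal(0, 0.5, size=n)) > 1.0).astype(float)
X = np.column_stack([skill, proxy, np.ones(n)])  # two features plus a bias column

def objective(w, lam):
    scores = 1.0 / (1.0 + np.exp(-X @ w))        # logistic predictions
    xent = -np.mean(y * np.log(scores + 1e-9) + (1 - y) * np.log(1 - scores + 1e-9))
    gap = abs(scores[group == 1].mean() - scores[group == 0].mean())
    return xent + lam * gap                      # lam = 0 means "just be accurate"

for lam in (0.0, 5.0):
    w = minimize(objective, np.zeros(3), args=(lam,), method="Nelder-Mead").x
    scores = 1.0 / (1.0 + np.exp(-X @ w))
    gap = scores[group == 1].mean() - scores[group == 0].mean()
    print(f"lambda={lam}: score gap between groups = {gap:.3f}")
```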
Boiman: I think it’s very difficult to make that a working practice, because there are humans involved. What I’m saying is, even without computers influencing it, if you think about the increased connectedness we have right now, and the political uprisings happening on Facebook and Twitter at a speed that was never seen before, then who’s right? What’s good, what’s bad? One group thinks it’s good and that the other one is bad. How do you penalize what? Do you expect the engineers at Facebook to make the decision between two conflicting parties? That’s impossible. The moral questions are difficult. I think it’s inherently chaotic, and it’s incredibly difficult to control.
Ming: I have a very explicit example. For a while I was chief scientist of another company. We were doing relationship predictions—it was supposed to be in a professional context, but the best analogy is a personal relationship. We were finding what we called relationship factors, which were predictive of relationships being formed, but in my judgment they were undeniably negative. In a romantic relationship, the equivalent would be, “Boy, she really likes a guy who a couple of months later is going to punch her.” And if we took that out of the system, we would actually make it less engaging for her early on in the experience. So we’re compromising our short-term product experience by removing people from our recommendations that we know she would connect with, because we’re making a somewhat paternalistic but clearly appropriate judgment not to include these factors.
Sherman: Can I generalize from that and ask you what you think of this? There are certain things that are mechanical and I think best done by algorithms and there are certain things that are based on human behavior which we as humans actually understand only imperfectly. To the degree that our understanding is imperfect, our algorithms are more likely to be imperfect, are they not?
Ming: They’re rarely going to be better than us. I mean to some extent it’s, as you put it, garbage in, garbage out. We design the criteria functions, we decide what data goes into it. If we’re making imperfect judgments there, that’s what we’re stuck with.
Sherman: So, we’re called Gods in Boxes for this session. I wonder if that’s really the right title.
Brachman: I’m not quite sure what was implied by ‘gods.’ Probably a sense of—maybe we worship them, because some people do, but that—
Ming: It could be talking about us.
Brachman: We’re not in a box. But that somehow there’s some all-powerful, all-knowing element to this. And I think, as we’ve discussed here—and I think it’s a common theme—at the very least it’s more like people-like things in boxes, with the flaws and virtues that people have. In some cases, because of the ability to process huge amounts of data, these things can be better than people, but they end up with the same kinds of mistakes and flaws, and probably should be regulated the same way. So the title was great, very provocative. It would have been a lot more boring if it were more realistic, I think.
Sherman: So what we’ve really got though is a lot of invisible, really fast people that we can’t hold responsible. That’s what we’ve got in our boxes.
Boiman: Gods in black boxes.
Sherman: So let me just close on a quote. “If the development is possible, it is out of our powers to prevent it.” Edward Teller, father of the hydrogen bomb.

Participants

Oren Boiman

Co-founder and CEO, Magisto

Ronald J. Brachman

Chief Scientist, Yahoo!

Vivienne Ming

Co-founder, Socos Learning
