
“Human + Machine” Meets “AI Superpowers”


Description: Artificial intelligence will alter how people, companies, products, and, we hope, governments work. Two of the world’s leading analysts explore this megatrend. A country’s approach to AI will help determine even its success. But we’ll need a new idea of how people work with machines.
The transcript below has been lightly edited for readability.
“Human + Machine” Meets “AI Superpowers”
(Transcription by RA Fisher Ink)
Kirkpatrick: Two longtime friends of Techonomy will join me on stage momentarily: Paul Daugherty, CTO of Accenture—he’s on the far side—and Kai-Fu Lee, sitting next to me. Both of them have just written and published extremely influential books about artificial intelligence. Paul’s book is called Human + Machine, and Kai-Fu’s book is called AI Superpowers. Kai-Fu has both his book and Paul’s book in Chinese.
[LAUGHTER]
Kirkpatrick: You’ve received Paul’s book. But before we get started—oh, come on.
Lee: The machine is failing you.
Kirkpatrick: It worked so well. Kai-Fu knows what this is. I just did it this morning and it worked fine. This is a Christmas ornament of the HAL 9000 computer that has recordings on it.
HAL: My responsibilities range over the entire operation of the ship, so I am constantly occupied. I am putting myself to the fullest possible use, which is all, I think, that any conscious entity can ever hope to do. I’m sorry, Dave. I’m afraid I can’t do that.
[LAUGHTER]
HAL: I think you know what the problem is just as well as I do. This mission is too important for me to allow you to jeopardize it. Daisy, Daisy, give me your answer.
Kirkpatrick: Okay. The reason I thought that was a fun way to start, even if it took a minute, is that that movie just had its 50th anniversary. So this fear of AI has been a meme, so to speak, for most of our lives. Now, you two firmly believe not only that there’s nothing to fear, but that we’re entering into what could easily be a golden era. Is that a reasonable summary, Kai-Fu?
Lee: Well, I do think there are a lot of issues that need to be solved, including safety, privacy, and job displacement. But I think the fear of a super-intelligent AI is way overblown. There are no known ways of getting there. People can keep working at it, but it could be decades, centuries, or maybe infinitely far away from us.
Kirkpatrick: So Paul, how would you characterize the AI era that we’re entering into?
Daugherty: Building on what Kai-Fu said, I don’t think this super intelligence, transhumanism, is what we need to be focused on. But there is a real issue of how we as people use artificial intelligence. The technology is neutral; it’s how we use it that matters, and AI does require us to think differently about how we’re applying it to organizations and society, and about some of the issues Kai-Fu just talked about. As we look forward, we see artificial intelligence changing 90 percent of the work that people do, in some fashion. Almost everything is going to change, and we think organizations are behind in understanding how to redesign—we call it reimagining—their organizations to accommodate those changes, the new business models, and the work that’s going to happen, and therefore in preparing people for the change that’s coming, which is the big challenge we’ve got ahead of us.
Kirkpatrick: But also, the reason you’ve titled your book Human + Machine is that you really don’t see a world where people are replaced; you see a world where people are, in effect, “augmented.”
Daugherty: In the aggregate numbers, yes. I mean, we believe on the order of 15 percent of jobs will be completely automated and replaced. But the majority of jobs will be improved—given “superpowers,” to use Kai-Fu’s terminology. And that’s where the real opportunity is: the plus sign in our book, Human + Machine. I always say the summary of the book is the plus. It’s human plus machine. And we think about it as collaborative intelligence.
I think artificial intelligence leads us to ask, “How do we bolt a technology onto an organization?”, which is the wrong approach with AI. The right approach is to think about collaborative intelligence and capability. How do I take the best of a person’s human skills in a particular role and combine that with better technology to allow them to be more productive and effective in living their life, or using services, or working at an organization? That dialog has been absent from a lot of the work that we’ve seen.
Kirkpatrick: So Kai-Fu, when you look forward ten years, eight years—whatever number of years you want to pick—what’s going to be different because of our ability to harness this new set of technologies, positive or negative? We’ve talked about a lot of the negatives; I’m assuming there are quite a few positives.
Lee: Yes, absolutely. So first, the definition of narrow AI, which is what Paul and I are both talking about, is AI within a single domain. With a huge amount of training data, AI can do single tasks better than people can. And what that leads to is the question of what AI cannot do, and that’s where humanity will shine. AI is given a goal; it is a tool, so it cannot be creative. AI is single-domain, so it can’t be strategic, use common sense, or work across domains. And AI is a tool that produces an objective answer, so it can’t be compassionate.
So I think the ways in which it will work with people are these: tools that make creatives more creative; tools that give strategic people a chance to look at more data and make better decisions; and, in the case of compassion, I think a job like a doctor’s will turn into one that requires more EQ, compassion, and human-to-human communication, while AI takes over more and more of the analytical diagnostic work. So those are the three ways in which I think there will be co-work, as Paul stated. I also think that for anyone whose work is purely routine, with not much human-to-human interaction, those jobs are prone to be totally displaced by AI.
Daugherty: And just to add to that, I think one of the challenges with both education and corporate training today is that a lot of it is focused on training people to do machine-like things, and we need to shift the frame to teach people to do more of the human things. The four skills that we see from the research we did for the book are: complex problem-solving, creativity, social/emotional intelligence, and sensory perception in terms of the way we look at problems. Regardless of occupation, those are the four types of things we need to be training people to do better in the way that we use technology.
Kirkpatrick: You know, I assume Paul that you wrote the book largely because there’s challenges in the transition, but you’re excited about what this is going to enable, right? What are the things about AI that you’re most excited about? Assuming we can manage these transitional challenges.
Daugherty: On a large scale, it allows us to solve problems we couldn’t solve before, using the specialized capabilities of artificial intelligence—the narrow AI capabilities. For example, we’re seeing urban farming solutions that use AI in the agricultural production process to radically transform and improve the food supply, often in urban deserts, by measuring progress and adjusting the growing process dynamically inside warehouses with no natural sunlight and all-organic growing processes.
There are solutions to medical problems that seemed intractable before, and access to specialists, leveraging machine learning on a scale you just couldn’t achieve before, serving more of the population. The general thing I’m excited about with AI, though, is that it allows us to interact with technology in a more human way. I think we’re coming out of a dark age in technology where we forced humans to use a lot of our cognitive capability to figure out how to interact with technology. I think we just heard about that a little bit on the last panel. We type with our thumbs on a supercomputer to try to get information rather than having a more natural interaction with more human-like tools.
And then you take that to the extreme, and what kind of job does that create? Well, think about one example with a digital twin model in the energy industry. An oil driller who used to just operate the drill—sending it down a mile underground to do the drilling, or fracking, or whatever—based on gauges and whatever they were seeing, can now have a digital twin model visualizing, from sensors on the drill, what that drill is encountering: the tensile resistance, the torque, and everything else in the drill.
And the technician is making decisions on how to operate the drill—decisions that used to be management decisions of the company, based on a lot of data that used to be fed back. So we’re pushing decision-making for really important decisions to the edge of the organization. That’s something we’re seeing again and again with AI. Rather than eliminating those front-line jobs, in many cases it can add value to those front-line jobs, and restructure them, and reimagine them.
Kirkpatrick: But again, and this is key, that front-line job is becoming a different job that requires a different set of capabilities than sheer brawn, which probably was a key element of it in past decades. So how do we make that transition? I’d like you both to talk about that, but Paul, you start.
Daugherty: Yes. That’s where we believe a lot more focus is needed on re-skilling. We’re donating all the proceeds of Human + Machine to nonprofits and NGOs that are focused on mid-career reskilling of exactly those types of people, because we believe that’s where the biggest challenge is.
I don’t have all the answers. I think we need a lot more focus; we need investment from the government; we need businesses more focused on transitional skills for those people. But from what we’re seeing in companies that are doing it, it starts with investing in basic digital skills for those people. If you’re operating a piece of physical machinery, you’re not going to become a machine-learning expert. But what can you do? How do you use information in doing your job? How do you get access to digital literacy that you didn’t have before? That’s where you need to start with those types of professions.
Lee: So I agree with all of Paul’s points, but I’m going to be a bit more pessimistic and say that for most companies, the reskilling actually won’t be possible, because the type of skills may be too hard to retrain, and the scale of the decimation of jobs may be too large for certain industries, if you think about the jobs at the edge. I mean, we’re a venture capital firm, and when I look at the investments that we make—we’ve made about 45 investments in AI—seven of them basically completely replace the human job. Not immediately, but over time. These are jobs like telemarketing, customer service, loan officers, fruit picking, dishwashing, etc. And you know, a dishwasher—
Kirkpatrick: That’s quite a range, actually. But go on.
Lee: It’s both blue-collar and white-collar. In fact, white-collar will come first. And unlike the Industrial Revolution, where you replaced a craftsperson who made an automobile with an assembly line, creating more jobs, here we’re replacing the jobs on the assembly line completely.
Now, to the extent that people can undertake digital re-training and all those things, every corporation has a responsibility to reskill its employees to the extent possible. But imagine when all the truck drivers are gone—there’s not going to be another job for them. So reskilling is needed and re-training is needed. I think there are areas where re-training can be possible for blue-collar jobs. I think dexterity and new environments will be things AI cannot overcome, so a plumber’s job is quite safe.
For white-collar jobs, to the extent that they are non-routine, strategic, cross-domain, or involve human interaction, those cannot be replaced. And I think a whole new set of job categories that need to be focused on are the compassionate jobs—jobs like nannies, nurses, elderly care and companionship. Those are probably the only category large enough, and without a huge retraining hurdle, that some transition could be possible.
So if you look at Amazon, Jeff Bezos just announced $15,000 a year per employee, for four years, of re-training for jobs in that category. So I can’t help but think that he’s thinking about his warehouse employees who will be replaced by robots, the Whole Foods cashiers, and so on. But he’s committed to training these employees for jobs like aeronautic repair or nursing. Those are the two categories of re-training he is doing so that they’ll be employable—perhaps not with Amazon anymore. So if corporations take the strategic view that their responsibility for employees doesn’t end with finding them a job within the company, but includes re-training them to be ready for a job somewhere else, that’s an important step.
Kirkpatrick: Let me—Go ahead.
Daugherty: I was just going to add to that point. We did a survey as part of the research for the book, and it supported Kai-Fu’s point. Sixty-five percent of corporate executives believe their workforce is not ready for AI—like Jeff in that case.
Kirkpatrick: Sixty-five percent?
Daugherty: Sixty-five percent believe their workforce is not ready. Only 3 percent were investing in increased training for their employees, which shows the gap you have. So you do need a call to action, with business taking on that responsibility. But it can’t just be business, because, as Kai-Fu said, there won’t always be opportunities with the same company. This is a societal issue and challenge, and there needs to be government intervention in different ways to support the re-skilling.
Kirkpatrick: But why is there that gap? Is it because of ignorance? Fear? Poor management skills? What explains that?
Daugherty: If we dig under it, part of it is, “What do I really retrain them in?” We’re getting more precise now—
Kirkpatrick: People don’t know how to do it.
Daugherty: “What do I retrain them for?” and “How do I invest?” are part of the issue. Some believe it’s another person’s problem: “I’ll find the workers I need with the right skills.” But I think that’s a flawed strategy, because if you think about the new jobs being created, there’s not going to be a market for many of them. An AI, visually inspired oil rig operator isn’t a job that you’re going to be able to hire for; it’s going to be a specific one you need to develop. And most of the jobs we see emerging—millions and millions of jobs—are highly specialized combinations of artificial intelligence and human skills. It’s collaborative intelligence that companies need to develop themselves.
Kirkpatrick: I mean, they need to develop it on a per-job basis, even.
Daugherty: Yes, you need the learning platforms within a company to evolve your employees to develop these skills, or develop them into new employees that you acquire, and not enough companies are viewing it that way.
Kirkpatrick: But also, isn’t there a societal issue where we need to change the whole nature of education—as John was suggesting quite strongly, I think, in his presentation earlier—to prepare an entirely new generation, leaving aside the ones who are going to be displaced now, to function in the world we see emerging, right?
Lee: Sure. I think STEM is a great step. Everyone is talking about STEM; that’s wonderful. But we have to recognize that not everyone is going to become an AI engineer or data scientist. I think equally important is emotional training—emotional quotient, the ability to communicate with other people—because the human-to-human touch is one part that’s not replaceable by AI, and it’s something that every human has the innate ability to learn, whereas becoming a great engineer may or may not be possible for everyone, given aptitude.
Kirkpatrick: But in any case, we have to think about it consciously, right?
Daugherty: Well, yes. We outlined six categories of jobs in the book that we believe are the new broad families of jobs being created, and most of them aren’t technical jobs. So we do need more STEM and STEAM, but we need more people in other professions who understand how to use technology in the way they work. I think what MIT announced recently, with a new AI college, is a step in the right direction. It’s a cross-disciplinary AI college: 25 of the professors will be AI professors, and 25 will be from other disciplines, to mash up AI with how those other disciplines work. That’s the way we view AI in our company, and the way we’re doing it is embedding AI in different types of industries, skills, and things that we develop. I think that kind of mashup of skills is what we need. It’s going to, in many cases, be business skills and human skills combined with those AI-enabling capabilities.
Kirkpatrick: Okay. In a recent session with John, we talked a lot about national competitiveness issues. Kai-Fu, you are an American living in Beijing, right? You’ve worked most of your career—or an awful lot of it—in the United States. You’ve worked for Apple, Google, and Microsoft. And you live in Beijing now; you’re a very global person. You just came back from China. The issue of the China-U.S. interface, intersection, and competition is so central to this technology in particular; I’d like you both to lay out how you see that right now, and where you think it can and should go. Why don’t you start, Paul?
Daugherty: Well, Kai-Fu’s got the deep perspective, having a foot in both worlds. I did just get back from a week in China. I spend a lot of time there, we have a lot of business in China, and I watch the environment really carefully. And what I see happening in China is effective implementation toward their goal of AI leadership by 2025 and broad leadership by 2030. I think they’re well on the path. I’m interested in Kai-Fu’s opinion.
But the universities—Tsinghua and others—are producing top-notch engineers and AI researchers. There’s an investment climate in China that’s attracting capital: over half of the VC capital for AI in 2017 went to China—48, 50 percent. So you have the investment climate, you have the talent being developed there, and you have platforms developing at scale like you have here—in particular, Alibaba, Tencent, and Baidu producing at-scale AI technology, and a constellation of startups innovating around that as well.
So that’s what I see happening in China. I’d have to say there are some different societal norms in place in terms of how data is being used in society and some of the capabilities being developed, but they’re on a very strong path. If I look at most other countries, they’re not accelerating at the same pace. The U.S., as one example, I think needs to accelerate its progress toward those types of goals, and that’s what’s teeing up this dynamic I see a lot of: everybody’s starting to talk about China versus Europe versus the U.S., which I don’t think is a constructive way to view it, but I think that’s the natural outcome of the dynamics that are happening now.
Kirkpatrick: But at the moment China is ahead, you would say?
Daugherty: China is accelerating. I wouldn’t say China is ahead right now; I think China is accelerating faster than other countries, given the investments they’re making.
Kirkpatrick: Kai-Fu.
Lee: So if you look at the state of AI, I think most people mistakenly think there are breakthroughs coming out every few weeks, because you read headlines in various newspapers. But actually, all of those are application breakthroughs built on the same set of technologies. AI has a 62-year history, and the biggest breakthrough, deep learning, came about nine years ago, and there has been no other breakthrough like it.
So the phase we’re in is more of an implementation phase. That is, it’s about who can collect more data, find the opportunities to make money, and build the businesses, and less about who can research and invent the next big thing. Obviously that is a possibility, and in the research part, the U.S. is ahead of the rest of the world—the whole world added together is probably not half as powerful as American research prowess. But we’re kind of at the state where electricity has been invented.
Now, will there be electricity 2.0? Who knows? There may be. But right now, there is an opportunity to make anywhere between $13 trillion and $17 trillion in the next ten years, as estimated by the likes of PwC, McKinsey, and so on. This is the time to harvest the discoveries that have been made, especially deep learning. So given that as a background, what Paul said is absolutely true: the Chinese entrepreneurs are tenacious, work harder, have more capital—
Kirkpatrick: More government support.
Lee: Yes, more government support and more data. The government support actually came later. A lot of Americans misunderstand this as having all been a government move; actually, it was all private up until roughly the last year. But when the government does come in, it does things that are very long-term, strategic, and infrastructure-oriented, such as building a new city with autonomous vehicles built in, with two layers of roads.
The top layer of the road is for pedestrians, pets, and bicycles; the bottom layer is for vehicles, thereby avoiding what happened with the Uber autonomous vehicle in Phoenix and allowing autonomous vehicles to launch faster without more safety problems. And new highways with sensors, and new cities with good universities, trying to get them to build AI parks. So these are smart governmental programs, centrally directed but locally implemented, that really will advance the state of AI.
Kirkpatrick: So current trajectory, U.S. versus China?
Lee: So in implementation and monetization, I think China has already caught up. For example, the world’s most valuable speech recognition, machine translation, drone, and computer vision companies are all Chinese. Of course, there are many that are American as well—autonomous vehicles, and so on.
So I think they are roughly even right now in terms of implementation, and China is on the faster trajectory, so it will probably make more money and create more applications using the current set of technologies and their natural extensions. To the extent something new is invented, the U.S. may again have a big lead.
Daugherty: I agree about the implementation phase. I would say it’s another reason for a call to action: look at this very seriously now if you’re a business leader, and understand where to take your business. Because in our view, there are hundreds and thousands of winners with artificial intelligence, not just a few, because the facial recognition winner or winners are going to be very different from the winners in drug discovery and pharmaceuticals, very different from chemical process acceleration, very different from financial fraud detection—you could go on and on. The implementation, and the way those leading companies will be built, is very different. And for every company, now is the time to figure out how you’re going to capitalize on AI embedded in your strategy, and how to really change your business and your workforce to do it. So I don’t think the game’s over by any stretch. I think we’re just starting the game, playing out this competition over who wins all these different applications.
Kirkpatrick: Wow. This is a great conversation. We have time for just a little bit of audience interaction. Do we have anybody? Okay, right here.
Westby: Hi, I’m Jody Westby. I was involved in standing up a new association called ACRO, the Association of Cloud Robot Operators. We have a world with cloud use of robots; we think all the robots in the world are going to need cloud-based intelligence—a brain in the cloud—that will then control billions of robots. It will allow exponential growth, because one robot can learn from the lessons of another robot, and you’ll have safer interaction.
So here we are at the beginning of this, and that obviously is going to require best practices and standards, and some policy. And my question to you, who’ve looked at this field so closely, is: what are the roles of the various stakeholders—and I mean even the users—so that these robots can be developed, this cloud brain can be developed, and there’s a safe and acceptable interaction between the human and the machine? I’d really like to know. There’s the developers and the operators, but then there’s also the users and the governments. What are your thoughts on the stakeholders for building this whole ecosystem?
Kirkpatrick: Interesting question.
Daugherty: It’s a really interesting question. I think it’s important to put in context what the cloud brain is going to do in that case, because I don’t think it’s a cloud brain that’s going to create super-intelligent robots; it’s a cloud brain to help very specialized robots learn from each other—I think that’s what the outcome will be there. We talk about it in four principles that cut across a number of stakeholders. In that type of environment, accountability is really important to understand. At the end of the day, in any organization, a human needs to take accountability for any robotic or AI-enabled process. So for organizations, understanding what the best practice is, and ensuring the organization has accountability and understands what their robots are doing and the implications, is critical.
Transparency and explainability, we do believe, are a big issue. I know that’s a controversial topic, but we believe that for certain things you do in an organization, you have to be able to explain them, and if you can’t, you have to take a different approach to solving the problem. So understanding how you apply, and who decides how you’ll apply, the cloud robots in that case is important.
The third is fairness: how you learn a more inclusive, unbiased way of operating by learning from the other robots, I think, would be important. And there are technologies and ways of approaching that problem.
And the final point we talk about is being human-centric in the approach, because most of those robots are going to be interacting with people in some way, so how do you make sure the human interface is appropriately thought through? You have government involved. We believe there are business and industry organizations, as well as individual business decisions and corporate responsibility, that need to be rethought as you deploy those types of solutions.
Kirkpatrick: Any thoughts Kai-Fu?
Lee: I don’t see a single cloud working with all the robots, helping each other. We don’t have the technology for that. I think it’s one application at a time: robots that work on factory inspection, robots that work on dishwashing, robots that work on specific tasks. On robotics in particular—I think we use the word robot a lot—there is a principle called Moravec’s Paradox, which suggests that the day of robots is farther away than we think. The white-collar AI displacement and value-adding is going to come much faster than the hardware side, because the mechanical robotics problems are still very far from being solved.
Kirkpatrick: Well, we saw a demo yesterday of robotic process automation on screen, so we see there are things happening.
Lee: It’s happening, but the white-collar part is happening much faster. Look at the value that Google, Amazon, and Facebook have generated versus the very early beginnings of robotics. I think it will happen first in the places where there’s the greatest willingness to pay, like manufacturing and maybe commercial applications. Other things will take longer, and they will be single-purpose, not cross-purpose.
Kirkpatrick: Well, I would have to say this was probably one of the meatiest conversations we’ve had on stage at Techonomy. It’s not often that you have two such eminent and informed experts on such a critical topic, so I’m really pleased that we were able to do this. We have both books available for you: Kai-Fu’s book is available outside, and most of you should have Paul’s book already. I want to thank Mary Lou Jepsen for the Christmas ornament, and thank both of you for taking the time to come and help us with this really central set of questions. So thanks.
Daugherty: Thank you.
Lee: Thank you.
[APPLAUSE]
