We are seeing all sorts of technology spring up around us, but where are the big changes promised? Have we made rapid leaps?
Gerd Leonhard gives a great analogy. He says that over the last 50+ years, we've been doubling from .01 to .02, etc. So the pace of change has been pretty easy to follow and keep up with. Gerd says we are at a 4 right now, which is, in his words, "Just enough technology to make a nice meal."
But soon we'll be at 8, 16, 32, and then facing uncontrollable growth. How can we get ready for such change?

What we learned from this episode
* Ever heard of gold-collar work?
* Moravec's paradox and how it messes up what we think we know about AI
* Splitting feelings and facts isn't as easy as it seems
* Deep thoughts about exponential change
* Know your androrithms

What you can do right now
* Understand technology to the best of your ability
* Get better at human tasks

Key Quotes
"Artificial intelligence is a much-hyped topic, but most of what we see today is intelligent assistance. It's basically just fancy software."
"It cannot be creative, for example, saying: okay, I saw a video of this guy, the CEO of this company that I'm investing in, and I really think he's amazing. He tells a great story. I'm going to buy another hundred thousand shares. That's a feeling. That's not a fact. Try to get a computer to have that kind of thing; that will be difficult."
"We can finally cook a good meal, so to speak, with all of the pieces of this technology."
"Computers are stupid. They only provide answers." (Ok, that was Picasso)

Links mentioned

* Technology vs. Humanity: techvshuman.com
* Gerd's YouTube channel: gerdtube.com
* Gerd's website: futuristgerd.com
Today, our guest is Gerd Leonhard. He’s a futurist and author and this is Work Minus Routine. Hi, Gerd. How are you?
Good. How are you doing?
I’m doing excellent. Excited to hear what you have to say to us today. Why don’t you give us a little bit of background about who you are?
I’ve worked as a futurist for the last 15 years. I basically help companies write new rules for what’s coming and take a wider view of the possibilities. My latest book is called “Technology vs Humanity”, which is about the relationship between man and machine, and a big part of that is, of course, the future of work, skills, and where we want to go in the future.
I like that title. Tell us more about this book, “Technology vs Humanity“.
When I started writing this, after doing about 1,000 speaking gigs in the last few years, this became a major topic. Everybody asked me all the time: technology is so great, and it’s going to automate everything, and everything will be different than today, so what are we going to do in the future? It’s going to change work and culture, society, social security, everything. So I started writing this book four years ago, and the publisher wanted me to make it more aggressive. He said it has to be technology versus humanity. But the original title was technology and humanity. My viewpoint is that it’s not really versus; it’s much more that we have to put technology in its place and govern it wisely, rather than not use it, which is not really an option. So, the book goes through all these things about what technology is doing to us, the things that are likely to happen in the next 10 to 15 years, exponential change and the complete automation of society, all of those things that we’re going to see, the good and the bad. And then it gives advice on how to apply what I call digital ethics. That has become a mainstream theme in the three years since the book came out; everybody’s talking about AI ethics and digital ethics. And of course the future of work is a big consequence, with some views more dystopian than others. But my outlook is mostly positive: rather than saying it’s the end of everything, I think this is just the beginning of something new.
So, as a futurist, I’ve always wanted to know, how far ahead are you comfortable speaking about making predictions or things that will happen? I mean, you said 5 to 7 years or 10 to 15 years. What’s the edge of that for you?
I think my work is not so much about predictions. It’s what I call observations and foresights, and that’s basically the things that you already see today, expanded into a future. You can see today, for example, that we’re at the beginning of this renewable energy economy, so we’re going to see the end of oil, more or less, as the main fossil fuel. That’s quite obvious; it’s going to happen, whether that’s in 10 or 15 years, but it’s not going to be 50 years. So these observations and foresights are what I do, and the time frame is 5, 7, 10 years, sometimes a little further. But 20 years from now we’re going to live in a world that will be so dramatically different that most forecasts, unless they’re confined to one arena, would be very much science fiction.
What’s one thing you feel disappointed about in terms of where we are at the present compared to where maybe you thought we were going to be 10 years ago? What are some areas you feel disappointed in our progress?
That’s a good question. In many ways, I’ve been too conservative on the slope, and in other cases, as is quite common, too positive. For example, artificial intelligence is a much-hyped topic, but most of what we see today is intelligent assistance. It’s basically just fancy software. It does things like language recognition, but it does not do the things that humans can do. So, it’s a little disappointing that we haven’t reached that point. But at the same time, it’s probably a good thing, because we really couldn’t deal with it if it was actually working. And the things that are generally difficult to stomach are things like climate change and global warming. We have not done much about dealing with it, and now it’s all about coping with the consequences, not actually changing the impact on the environment very much. That’s still very much the old-fashioned capitalist thinking of whatever works for money is the top priority. And that’s going to change very, very quickly once we get to see the consequences.
Let’s jump into the topic of this episode, Work Minus Routine. What do you mean by that?
I talk about the end of routine. Basically, what’s happening is that computers, machines, software, AI, robots, whatever you want to call them, are going to learn pretty much any routine that does not involve human judgment. My colleague Luciano Floridi, who runs the AI ethics lab in Oxford, talks about how machines can outperform humans at anything that is not about what humans uniquely do: emotions, understanding, natural language processing on a very high level, semantics, all the things that make us human. That’s very hard for machines. So what that means is that anything that becomes donkey work for us, stuff that we just do because we have to, passing information, distributing files, setting up meetings, checking money flow, checking facts and non-disclosure agreements, or what have you, most of that work is automatable. The same goes for bookkeeping and accounting at the lowest level, and of course driving and those kinds of things.
So, machines will learn all of those things, no matter whether it’s blue-collar work, or white-collar, or what’s called gold-collar at the top level, or whatever the color is. Even scientists will be automated, for example, in doing biological tests and diagnostics and what have you, because machines are no longer stupid in the sense of not knowing the patterns. But the thing about routine is that in the next 10 years we can expect quantum computing, the internet of things, basically extremely powerful machines, machine learning, and artificial intelligence. So we’re going to be able to take those routines, and the machine will run something like 100 billion variations, and eventually it will learn how the routine works, as long as it does not involve any creative act, decision making, or compassion. Take call centers: roughly 20 million people work in call centers around the world, and 19 million of those will not have a job under those circumstances, because if it’s just about changing my booking in a database, the work could involve compassion, but most of the time it does not. So, those are jobs that will be replaced.
Define human judgment a little bit more because I feel like that term has become a little more gray in the last several years.
Well, there’s actually quite a bit. The flip side of this coin is that many jobs are automatable but cannot be completely automated. A taxi driver or a bus driver can be automated for some functions, but not for all of them. Driving on a highway with a truck for eight hours, you can certainly automate that, going in a daisy chain or a platoon; that’s no problem. But getting off the highway and actually making decisions about parking and unloading, that’s actually very complicated. So, basically, human judgment, understanding, and what’s called tacit information, that is the information that you have in your mind, as most humans have. I think the saying goes, we know a lot more than we can tell, which I think is Moravec’s paradox, a law of computing, basically. We know all these things, but it’s very hard to tell the computer what they are. And that tacit information, like shop-floor knowledge, you can transfer some of it in the factory, and the computer will do something similar, but it will never quite do the same. So this is also a big chance for us, because clearly we’re going to be doing what the machines cannot do. And that’s basically where we’re going with work.
So, what do you think it looks like to have this golden age where humans and machines are working well together and they’re each doing what they do best?
We can see how that has already happened, for example, in banking. For low-level financial advice, if I’m going to invest $10,000 and I want to monitor it, I’ll use a website like I do today. But in the near future, I can use a chat agent, a chatbot. Or I can call the computer and speak to it, because, finally, the computer is going to understand what I say. That’s almost here now; it’s just two or three years away. And I can have a chat with the computer and say, hey, given what happened in China, should I move my 10,000 euros? And the computer will have all the information that no human agent could have. But it cannot tell me whether this company I’m investing in is ethically interesting, whether it’s doing the right thing or supporting the right cause; human judgment, basically. It cannot be creative, for example, saying: okay, I saw a video of this guy, the CEO of this company that I’m investing in, and I really think he’s amazing. He tells a great story. I’m going to buy another hundred thousand shares. That’s a feeling. That’s not a fact. Try to get a computer to have that kind of thing; that will be difficult.
I want to go into one of the topics you discuss a lot, which is the exponential change that we’re facing now. Give us some context to put some framework around how much has changed recently.
Yes, it’s a typical human problem that it’s very hard for us to perceive what Hemingway calls “gradually, then suddenly,” which is the exponential formula. Basically, nothing happens at first: when you double 0.01, you get 0.02, and so on. It’s still nothing. And that’s what happened in the 90s in the internet business, with the paperless office and cloud computing, and of course AI. It didn’t happen; it just wasn’t working. But now we’re at 4, which means all of a sudden, with technological progress in deep learning, machine learning, quantum computing, the Internet of Things, all the ingredients are there; we can finally cook a good meal, so to speak, with all of the pieces of this technology. So, we’re at 4 and the next step is 8, then 16, then 32, which is completely different from 0.01 and 0.02. In 30 steps of doubling, you end up at one billion. Imagine if you start doubling from one million: you get two million, then four, eight, sixteen million, and that is really materially different from one million. So we have to get used to the idea that we’re not going to change gradually here. Look at the music business. We went from buying records, to buying songs for $2 each on iTunes, and all of a sudden we had 100 million people using Spotify. And that happened like an explosion, all or nothing, basically.
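The doubling arithmetic Gerd describes is easy to check for yourself. A minimal sketch (not from the episode) counting how many doublings it takes to go from 1 to a billion:

```python
# Starting from 1 and doubling each step, count the steps
# needed to pass one billion.
value = 1
steps = 0
while value < 1_000_000_000:
    value *= 2
    steps += 1

print(steps, value)  # 30 doublings: 2**30 = 1,073,741,824
```

The same 30 steps that take you from 1 to a billion take you from one million to a million billion, which is why the later doublings feel so different from the early ones.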
I think part of the hard part is that we’re sitting here in the present and it feels like these things have always been there, even though it’s really only been in the last 10 years that these changes have taken place.
And I think part of that is to realize that we have to face the fact that a lot of jobs are gone or changing, like record stores in music, for example. But now we have thousands of new jobs because of Spotify: people making playlists, people making touring lists, harvesting the data on Spotify. And last but not least, social media: we have 21 million people or so working in social media jobs that didn’t even exist 10 years ago. Nobody would have said, “You know what? My job is going to be social media director.” Nobody would have known. So I’m not so pessimistic. I just think we have to get used to the fact that with exponential progress in technology, quantum computing, and the power of machines, anything that is routine and not really human, they will learn how to do. Things like tax returns and social security dispensing, all of those things that used to take hundreds of thousands of people, are doable by machine.
As a species, I feel like we’ve been able to keep up with these changes so far. But like you said, we’re still in that 1, 2, 4, 8 sequence. At what point in the future do you feel like it’s just going to overwhelm us and we’re not going to be able to keep up anymore?
We cannot compete with machines, robots, and AI in the future. That will be utterly useless. I mean, nobody likes routine. We’re just doing the routine work because it is part of our other work. Our real work as humans is to make sense, to create meaning, to connect with others, and to create experiences, the human part. That’s really what we do. Trust is not a download, so if the customer is going to trust me, I have to take care of that myself. The customer is not going to trust a chip, or even an app or a gadget. There’s a relationship there. That’s what we do. So, ultimately, the challenge for us is to make that shift and also to say to the machines: you do this, but no further, to set limits. To compete with the machine by saying, well, ultimately the doctor knows more than the machine: that’s the case today, but it won’t be the case in 10 years. I would say the doctor knows different things than the machine. I’m not going to compete with the machine on having factual knowledge on melanoma. That is just mission impossible, because we’re talking about information. We’re not talking about understanding or wisdom or intuition or any of those things. We’re talking about, basically, trillions of crude facts.
So, let’s talk to our audience here. We’re talking to people who are managers of teams. What are ways that they can look at their own careers and build up the necessary skills to be able to succeed when these big changes take place?
There are two sides to this. First, you have to understand technology as best as you can. And then you have to ask, what do I really need in this work group, in this team, or this company? You’re going to need what, in my book, I call the androrithms: the human things, not the algorithms, but the androrithms. That’s a keyword in my book. The androrithms are basically anything that machines don’t do: compassion, understanding, imagination, intuition, design, negotiation. And that’s what we do already. So the theory I put forth in the book is that it’s maybe 50/50 today, 50% monkey work in the average job and 50% human work. In the future, it’s going to be 95% human work. And as a manager, I have to enable my people to become more human, not less, because the more they work like a machine, the more likely they are to be replaced.
As we’ve explored the topic of productivity with other guests, we’ve recognized that people sometimes need to do this mundane routine work just to clear their minds, or because it enables them to focus on something else subconsciously, perhaps. Do you feel like humans do have a need for this mundane work? And will there be an opportunity to do it in the future?
Of course. I mean, it’s part of how we exist. It’s part of a process, and part of the process is doing mundane things, or being bored, for that matter. That’s all part of the process, and we shouldn’t cut all of it out. But the race for efficiency is stupid for humans, because humans are not efficient. We will never compete on efficiency and optimization and productivity, because we’re just not. In fact, we are the most productive when we don’t feel like we have to be productive. And that is, of course, human nature. We need sleep, we need to take a break, we come up with stories, we get distracted, we lie, we make mistakes. If you remove all of those things from the equation, then we become like a machine. In fact, I teach some MBA students in different places, and I’ve realized that, for some reason, most of them have already fallen into a robot-like routine: the best-case study, this is how you build a better mousetrap to make more money, and so on. None of that is really very valuable, because anyone could do it. Anybody can apply a rule. But what about the things where there is no rule? And now we’re going to have many, many more things that are constantly being re-questioned and reinvented. I think Picasso once said, a long time ago, that computers are really stupid; they only provide answers. And I think it’s very true. It is the questions that are going to matter, not the answers.
Let’s spend some time talking about the core human skills when it comes to managing other people. You talked about empathy and compassion. And what are some of the other skills you feel are essential to managing people well as a human?
I think if you’re talking to HR people, it’s already quite clear that their primary aim today is to find people who have emotional intelligence and the human factors: the ability to ask questions, critical thinking, questioning things, coming up with out-of-the-box ideas. And those are exactly the things you didn’t want 10 or 15 years ago. You didn’t want your employees to have emotions. You didn’t want them to ask questions. You wanted them to say, “Yes, I’m going to execute,” and be productive. Today, you have people who will sit there and say, “You know what? I think this whole idea is just not going to work.” Or they come and say, “You know what? I found a new way of doing this. It sounds crazy, but we can do it like this.” And that is really the creative part that only humans can do, at least for the foreseeable future. Maybe in 50 years a robot can do that, but for the time being, that’s what sets us apart. And most managers are asking for those kinds of people. So, our children should have those human skills first, before they even look at other skills like process, business, and technology, because it’s going to be about creativity, about understanding others, about finding solutions that nobody else thinks of. And how do you do that? That’s probably not something you can learn at school. That’s something you learn in life.
Let’s explore this further. You mentioned an MBA program. If you could reinvent it to enable people to have the experiences necessary to succeed in a world where robots and automation are doing so much, what do you feel are the core skills and experiences that need to be a part of that program?
I mean, quite a few of those already exist. In Finland, for example, there’s a lot of this kind of conversation. Rather than downloading information and learning what used to work, I would vote for creating experiences where I am enabled to respond and to invent. So, rather than saying, okay, I’m going to make a plan for a startup, figure out how much money I need, and calculate profitability and return and all that stuff, which is probably useful, you get into real life, the entire situation has changed compared to three months ago, and what you really need to do now is pivot. By pivot I don’t mean you have to completely switch; I mean look at what’s right next to you. So it’s the more creative part, the artistic part really, you could say in the end. I think this is what it comes down to; those are the kinds of skills we have to nail and practice. So I would rather say something like: let’s go out into nature and spend two weeks working together to create something there. Or maybe just a week to survive somewhere. Or travel through India on a bus and collaborate so as not to get sick. Or whatever the deal is, something that makes it more human-centric, rather than saying the world is a giant spreadsheet. That’s exactly what computers are doing: turning the world into algorithms. And that’s not a bad thing, because that’s all they know. But that’s not the real world. The real world exists in human relationships.
Yeah, I agree. I mean, I think it’s great to be able to see the world as numbers, and to be able to apply those algorithms and codify things. But if the machines are doing that at some point, then it frees us up to have other experiences, too.
It is a directional change. We’re going from the idea of focusing on STEM, science, technology, engineering, math, and business management, which is basically a programming thing: we download this stuff, then we know it, then we get to work. On the opposite side of this is what I call HECI, not STEM: humanity, ethics, creativity, imagination. And that cannot be downloaded in a classroom. That’s a process. If you look at what happened in Silicon Valley, most of those people either quit school or went through an unconventional education. And of course, 60% of them are not from the U.S.; they’re foreigners. So there’s something to be said for humanists, for humanity, or you could say humanism, in how we approach the future.
Gerd, this has been a lot of fun. Thanks so much for coming on and sharing your ideas. I really hope the things you’re saying come to fruition. It’s a good view of the future. Let people know where they can get in touch with you and see more of your work.
Of course, the book is at techvshuman.com. It’s easy to find, and it’s out in 12 languages. I have a very active YouTube channel; a shortcut to it is gerdtube.com. There are something like 800 hours of video there. And of course, my website: futuristgerd.com. So there’s loads and loads of stuff you can spend a weekend fast-forwarding through.
Well, we will put all of that on the show notes. Gerd, thanks so much. It’s been a great show.
Okay. Thanks very much. It’s been a pleasure.