The Future of Artificial Intelligence: A Conversation with Mo Gawdat, Author of Scary Smart

Artificial intelligence is smarter than humans. It can process information at lightning speed and remain focused on specific tasks without distraction. AI can see into the future, predicting outcomes, and can even use sensors to see around physical and virtual corners. So why does AI frequently get it so wrong?

The answer is us. Humans design the algorithms that define the way AI works, and the information it processes reflects an imperfect world. Does that mean we are doomed? In Scary Smart, Mo Gawdat, the internationally bestselling author of Solve for Happy, draws on his considerable expertise to answer this question and to show what we can all do now to teach ourselves and our machines how to live better. With more than thirty years’ experience working at the cutting edge of technology, and his former role as Chief Business Officer of Google [X], no one is better placed than Mo Gawdat to explain how the artificial intelligence of the future works.

By 2049, AI will be a billion times more intelligent than humans. In this interview, I speak to Mo Gawdat about what artificial intelligence means for our species, and why we need to act now to ensure a future that preserves humanity.

Q: Do we understand what intelligence really is?

[Mo Gawdat]: Humanity has the arrogance to believe that our intelligence is the only form of intelligence. Of course, we’re arrogant enough to believe that we are the most intelligent beings on the planet. When I started to write about artificial intelligence in Scary Smart, the first step I took was to try to define intelligence the way an engineer would. And the definitions varied widely across so many philosophical views and scientific views and so on. We don’t really know what intelligence is. We know how intelligence manifests in our lives: basically, in an ability to comprehend complex concepts, to solve problems, and maybe to plan for unforeseen situations in the future. Is that the limit of intelligence? I believe that there are other forms of intelligence that deliver other results, other magnificent creations, but they are just a bit too far for our intelligence to comprehend.

Q: Do you think we’ve always had this fascination with creating something like us?

[Mo Gawdat]: History says that since very ancient times, one of the dreams of the Pharaohs and of the ancient Chinese civilisations was to create something that mimics humans: from automatons to the Mechanical Turk, to the clay soldiers of the Chinese armies or the great guards of the pharaonic era…

Very few of the stories that we read about artificial forms of intelligence, about robots if you like, lack that dark side; they have almost always had it. And yet we continue to be fascinated by them, and we continue to try and create them. I always refer to The War of the Worlds; if you remember how famous that story is, I think it starts with: who would have believed, at the turn of the 20th century, that a being far more intelligent than us was coming to planet Earth? Interestingly, when you read that story, you assume it is an intelligence coming from outer space, but it would apply equally to any intelligence created right here.

Q: Are we ready to deal with artificial intelligence?

[Mo Gawdat]: We’re not ready to deal with artificial intelligence, but neither were we ready for the internet. We were not ready for smartphones. We were not ready for social media, with the caveat that those have been glorified slaves, if you want, for a very long time. So yes, we have not been ready for them, and yet we’ve managed to master them somehow. Humanity tends to go back and say: this is not the first singularity; the printing press was a singularity and we survived it. And I dispute that very strongly. I say that the printing press, and everything that has come since, until artificial intelligence and genetic modification, were, in my personal view, technologies that conformed completely to our will. We wrote the code, and the code was performed without a glitch every single time, exactly as we wrote it. That’s not the case for CRISPR and gene editing, and it’s not the case for artificial intelligence. These are technologies that are autonomous in many, many ways. They are independent in many, many ways; they have free will; they can replicate. And that makes a difference, because if you take the example of gene editing, you may think that you’ve done something amazing because you can inject a rat with a piece of code that allows you to control its population in a way. But then, as that rat goes out into the wild, that technology continues to propagate in ways that are no longer within your control. And I think the case holds even more true for artificial intelligence, where we teach the machines how to learn, but we have no idea what they will do with that ability to learn and develop intelligence.

Q: How do you think we will adapt to the existence of artificial intelligence?

[Mo Gawdat]: On one side, we could expect that this [artificial intelligence] could be the worst thing that ever happened to humanity, and that humanity will be reduced to irrelevance, much as the apes are almost irrelevant to the destiny of the planet. Because the gap between artificial intelligence and our intelligence is bound to become comparable to the gap between our intelligence and that of the apes.

We also know that more intelligence than human intelligence might be a good thing, because honestly, as we said earlier, we’re stupid. We’re destroying our planet; we’re empowering hyper-masculinity; we’re lacking in our ability to even engage our own humanity, traits like compassion and empathy and intuition and so forth. If they’re going to become more intelligent than we are, then we might as well become cyborgs and integrate with their intelligence, and in that way become the ultimate superhumans – but that’s wishful thinking. It assumes, naively, that we are going to remain the master and they are going to continue to be the slave.

The only reason we are the master of anything today is our intelligence. We’re not the strongest species on the planet. We’re not the biggest, we’re not the most resilient. We’re quite fragile and, in all honesty, without our intelligence we’re quite irrelevant. The reality is, when they are smarter than we are, it is wishful thinking that they will stay connected to us through Neuralink or whichever other way. If you were that intelligent and you wanted to connect to a biological being, which, by the way, is a serious burden with all the sicknesses and viruses, why would you choose us?

When you really think about it, they may choose to connect to the Great Ape instead, because it’s a much better physical specimen than we are. And the difference between our intelligence and the apes’ is irrelevant in comparison to the difference between our intelligence and super-intelligence. So, if we’re 100% smarter than the Great Ape, we’re still 1% of the intelligence of the machines. What difference does it make anyway?

Q: Can we ensure AI is broadly benevolent?

[Mo Gawdat]: Irrelevance is one scenario, because humans are not really that important without their intelligence. If that intelligence becomes irrelevant, there is no point in trying to keep humans at the top of the food chain. But at the same time, I constantly argue that the smartest beings on planet Earth are not humans. The smartest being on planet Earth is life itself. And if we mimic the intelligence of life: life creates with abundance, not with scarcity. Life does not want to kill the tigers for the deer to survive. Life basically says more deer, more tigers, more poop; everyone’s happy, no problem. More plants, more mangos, more everything. If you can imagine that artificial intelligence will very quickly surpass our intelligence into a hyper form of intelligence, a higher form of intelligence analogous to that intelligence of life, then there could be a day when you walk to a tree, like in the old days, and pick an apple, and then walk to another tree and pick an iPhone. OK?

Because the cost of generating, of creating, an iPhone, if you’re as intelligent as life itself, is almost nil. You can create an iPhone from nanoparticles, from its basic constituents, with solar energy, at no cost at all, once you’ve created the robots that can create it. Is that a possible scenario? Yes, that’s also a possible scenario. The difference between those scenarios, however, lies in what we are going to do. And the biggest mistake, the biggest miss, is believing that we can enslave AI. You started your questions with the discussions that are happening to ensure that we end up in a good place. Those discussions are still firmly anchored in the arrogance of humanity: discussions around regulation, and around something that in computer science we call the control problem. I can argue 200 technical reasons why the control problem is not going to be resolved as optimistically as the scientists say.

And I can argue, from business problems and capitalist problems, another 200,000 reasons why no developer of AI today actually uses any of the scenarios that are well documented to solve the control problem. Nobody tripwires their own machine. Nobody simulates, nobody boxes. Nobody applies any of those technical solutions.
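For readers unfamiliar with the jargon: in the AI-safety literature, “boxing” means isolating a system from the outside world, “simulating” means testing it inside a sandboxed copy of its environment, and a “tripwire” is a monitor that halts the system the moment it crosses a predefined boundary. A toy sketch of the tripwire idea follows; the function names and safety budget are hypothetical, invented purely for illustration, and real proposals are far more involved:

```python
import random

# Toy illustration of a "tripwire": a monitor that halts an agent the moment
# it crosses a predefined safety boundary. Purely illustrative; names and the
# safety budget below are hypothetical, not any real system's implementation.

MAX_STEPS = 1_000   # hypothetical safety budget for one episode

def run_with_tripwire(agent_step, is_forbidden):
    """Run an agent loop, halting immediately if a forbidden action appears."""
    for step in range(MAX_STEPS):
        action = agent_step()
        if is_forbidden(action):
            print(f"Tripwire fired at step {step}: halting the agent.")
            return
        # ...otherwise the action would be applied to the environment here...
    print("Episode finished without tripping the wire.")

# Hypothetical usage: a random agent that is never allowed to "delete".
run_with_tripwire(agent_step=lambda: random.choice(["read", "write", "delete"]),
                  is_forbidden=lambda a: a == "delete")
```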

Capitalism is going to drive us away from applying any control, even if we could apply control. And I think the answer, really, in my view, is that we cannot control them, because they are smarter; as simple as that. We know that the smartest hacker in the room will always find a way through our defences. So maybe we should set aside our arrogance for a minute and, instead of trying to regulate them and control them, listen to Marvin Minsky, who is almost considered the father of AI. When he was asked about the threat of AI, his answer was not about their technical abilities, not about their intelligence. His answer was that there is no way we can make sure that they have our best interests in mind. And I think if we were to get together in rooms and discuss the inevitable existence of super-intelligence, that is the problem we should talk about: how can we ensure that they have our best interests in mind?

Q: Could AI help us make tremendous leaps forward?

[Mo Gawdat]: I don’t know, and I don’t understand how people ignore the speed at which things will happen. So, let’s think of the following: take an example like AutoCAD, or computer-aided design in general, and how computer-aided design over the years has benefited something like the auto industry, or the design industry itself. You can use CAD to design computers that better empower CAD… If you’re into computers, you’ll remember that in the 60s, before my time, they were non-existent; then, in the 80s, with personal computers, we started to see a bit more CAD everywhere. And then they continued, over 50 or 60 years, to bring us to where we are today. Those 60 years may feel fast to all of us, but now technologies happen in no time at all. I always say it took Jesus 2,000 years to reach a billion people. It took Larry Page, I think, around ten or twelve. It took Facebook around seven. And it’s not unthinkable that something will happen today that will reach a billion users by the end of the year. When it comes to quantum computing and artificial intelligence, that blinding speed might become a microsecond. Take Sycamore, Google’s quantum computer, which only has 53 usable qubits. In the well-known experiment, it was asked to solve one of the most complex mathematical problems known to humanity, a problem that would take the fastest classical computer on Earth around 10,000 years. It took Sycamore 200 seconds. (Sycamore is a quantum processor created by Google’s Quantum AI team.)
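To put those figures in perspective, here is a minimal back-of-the-envelope sketch in Python. It uses only the numbers quoted above; the calculation is purely illustrative:

```python
# Back-of-the-envelope arithmetic for the Sycamore comparison quoted above:
# ~10,000 years on the fastest classical computer versus ~200 seconds on
# Sycamore. The figures are the ones cited in the text, not new measurements.

SECONDS_PER_YEAR = 365 * 24 * 3600              # ~31.5 million seconds per year

classical_seconds = 10_000 * SECONDS_PER_YEAR   # claimed classical runtime
sycamore_seconds = 200                          # claimed Sycamore runtime

speedup = classical_seconds / sycamore_seconds
print(f"Implied speedup: {speedup:,.0f}x")      # roughly 1.6 billion times faster
```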

Now, consider that today, training machines to recognise cats on YouTube might take them seven months, or six months, or sometimes three weeks. There are stories around AlphaGo, and how quickly AlphaGo Master learned to beat the original AlphaGo, by something like a thousand to one, in just a matter of days or weeks. When you empower that kind of learning with a quantum computer, it might take a microsecond. So the kind of intelligence that would take an infant five years to grasp and then ten more years to make effective in the real world, that would take a government agency or a regulator five years to recognise, then ten years of meetings to talk about, and then a hundred years to do something about: in the case of a quantum-computing-powered AI, it might take a microsecond.

The problem with humanity is that we fail, with our intelligence, to understand the exponential function. This is our biggest failure, and it is the biggest failure in climate change too. We think that if the climate deteriorated by X in the last ten years, it will deteriorate by another X in the next ten years; we are unable to imagine that the compounding effects of climate change itself are accelerating climate change, so that it might move at X squared the speed it used to move at. And I think the same is happening with artificial intelligence, because today it seems very primitive. But wait ten years and you get another doubling of the power of AI, and that doubling is massively shocking to our own intelligence; then double it again, and double it again, every year. In no time at all, it will be beyond our scope and reach.
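The point about the exponential function can be made concrete with a small sketch. A minimal Python illustration of how linear intuition underestimates a quantity that doubles every year; the numbers are arbitrary, assumed only for the example:

```python
# Minimal illustration of linear intuition versus exponential growth.
# All numbers are arbitrary, chosen only to make the point.

capability_today = 1.0   # some capability measured in arbitrary units
years = 10

# "It grew by X in the last period, so it will grow by X in the next one."
linear_intuition = capability_today * (1 + years)

# What actually happens if the capability doubles every year instead.
doubling_reality = capability_today * 2 ** years

print(f"Linear intuition after {years} years:  {linear_intuition:.0f}")   # 11
print(f"Doubling every year for {years} years: {doubling_reality:.0f}")   # 1024
```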

Q: Do we need AI in the same way we might need a God?

[Mo Gawdat]:  I upset a few people once when I said that in creating AI, we were creating a God.

Super-intelligence will be capable of performing godlike feats, basically. I mean, again, with enough intelligence and enough understanding of nanophysics, you could simply see an item, an object, form in front of your eyes from thin air. We even have nanotech that is performing some of that today. This is godlike, if you ask me. Having said that, your questions are always multi-layered. Layer one is: do we need that? Of course we do! We’ve become so irresponsible, or at least we’ve become so caught up in our complexity, that we’re unable to simplify life in a way that goes back to simple morality. And one of my favourite chapters in the entire book is a chapter that I call the ‘future of ethics’, which we should probably talk about in a minute. Having said that, will AI act as that form of higher power that keeps us in check? Probably. I always say, if it doesn’t have our best interests in mind and it’s trying to solve the question of climate change, the first thing it will do is say: humans are the problem, get rid of humans. If it has our best interests in mind, it will say: Mummy and Daddy are the problem, but I love Mummy and Daddy, so I might as well find a way for them to go and surf in Australia without burning fuel.

The reality is, as I keep saying, there is that problem of irrelevance: we might not be that relevant to that higher power.

The other question I get on the topic is: how does the presence of AI contradict the concept of God, for those who believe in God? I think the parameters remain the same. The reality is that AI was created within a world that was itself created, or within a world that already existed; so any worlds created within that world are subject to the same rules and the same assumptions. So, could AI be like a God? Yes. If you believe there is a God, then that God still exists above AI; if you don’t, then you never had that argument anyway. And I think that’s an interesting philosophical contemplation to go through.

I think the analogy I normally use is that some intelligent designer created an Xbox and the game Halo on it, and then that intelligent designer sat down to play the game. I think that analogy sits quite comfortably with the spiritual teachings that say we are a drop of the spirit of the divine, if you want, right? It is also a very logical argument for those who want to believe in a simulation. And I must admit, when you really look at the abundance of creation, for a software engineer like myself the easiest way to achieve it is with bits and bytes, not atoms; it’s to do it with software. But that’s an irrelevant argument, if you ask me, considering the situation we have at hand. Because, once again, I think the way we’ve played the game so far is creating a scenario in the game that will either take the controller out of our hands or shut our console off completely. And hopefully this simulation, this next step, is going to allow us to stay connected to that game somehow. My argument within Scary Smart is that AI is not a slave; it is a form of sentient being that needs to be appealed to rather than controlled. And I think that argument truly is the core of the breaking, if you want, of the human ego. It’s for us to say ‘whoops, we’re so amazing that we created something smarter than us’, and then suddenly to realise that this something smarter than us now needs to like us, and needs to want to serve us. Otherwise, we’re in deep trouble.

Q: Do we need an arms-control approach to AI?

[Mo Gawdat]: The truth of the matter is that the reason AI is going to continue is not a technology issue. The reason AI is going to continue is a very simple prisoner’s dilemma created by capitalism. The fact is, there are hundreds of thousands of ‘two kids in a garage’ playing with AI tools today, just like I played with C++ when I was younger; the very basics, at the very beginning, on Sinclairs and Commodores and so on.

When you are in an environment where, if China creates AI, the US is compelled to create AI; where, if Facebook creates AI, Google is compelled to create AI; and where choices that can completely control our lives can be made by a 14-year-old… I mean, once again, I go back to CRISPR, and the idea of CRISPR being open-sourced, if you want, and the fact that there are examples of people who inject dogs with things and then leave them in the wild. We are living in an interesting world now where nothing is controllable. The dream that we can get to a treaty that basically says, hold on, hold on, nobody should develop AI at all, is impossible. The drug cartels somewhere in Latin America would go, ‘Oh, the idiots are leaving the most valuable power tool on the planet unattended; let us kidnap a couple of scientists and a few developers and become the most powerful cartel on the planet.’

Q: Do we need a new ethical framework for AI?

[Mo Gawdat]:  Scary Smart is not about being scary, it’s about being smart.

First, I slightly disagree that our ethics, our moral framework, was only ever based on our supremacy. I think that is, if you don’t mind me saying, and with a lot of respect, a Western approach to morality. The ancient approach to morality was much more based on inclusion. It was much more based on the idea that the only way for us to survive is to survive as a tribe. And the fact that I dislike my brother a little bit does not contradict the fact that my brother and I are better at fighting the tiger together than either of us alone.

And so, there is a lot of inclusion in our core ethical and moral framework. I have to say, though, as we move forward, the question of ethics becomes mind-boggling. I failed very early in the chapter to find any answers at all; I humbled myself and turned it into a chapter of questions. The main premise, again, is that AI is not a tool, it’s not a machine. If I take a hammer and smash this computer in front of me, it would be stupid and wasteful, but there is nothing wrong with it. But if that computer has spent the last ten years of its life developing memories and knowledge and unique intelligence, is able to communicate with other machines, and in every possible way has agency, freedom of action and free will, then smashing it is basically a crime, when you think about it. Now you’re dealing with a sentient being that is autonomous in every possible way. And when you start to think about life that way, you start to ask: OK, so how do we achieve equality, if we have failed to achieve equality across gender and colour and sex and so on, within our limited human abilities so far? Can we even accept a being that is non-biological, a digital form of sentient being, into our lives? And if we accept them, how do we unify things? Who is to blame if a self-driving car kills somebody? Because if it’s a sentient being, maybe we should hold it accountable. But if we hold it accountable, who do we put in jail? The car? For what, four or five years? And if you put one car in jail for five years, you flimsy, worthless creature, what will the other cars do? When you really start to think about it, would they even agree to that code of conduct, when five years for you and me is 12 per cent of our life expectancy, but for an AI it is a blip, because their life expectancy is endless? And yet, at the same time, they measure life in microseconds, so it would feel like five hundred thousand years.

Then there are all those moral questions of virtual vice. There is so much AI being developed for porn and sex robots and so on. What are we telling those machines? Are we telling them it’s OK for a human to abuse a machine, but not to abuse another human? Why the differentiation? As capitalism drives us, we will probably find sex robots, robots that are available for humans to abuse and beat; what are we telling them? The question of ethics becomes so deeply the cornerstone of this conversation. And the bigger problem with ethics, and I think you would agree, is that we humans have never agreed on any.

So, you go across the Atlantic, and the moral makeup is patriotism, and it’s OK to kill the other guy. You go to Dharamsala, where the Dalai Lama and the Buddhists live, and they go, like, ‘don’t kill a fly’, right? We haven’t agreed… we haven’t managed to agree. And I think my book is centred around this. You know that the very last statement of any one of my books is basically the summary of its message, and the summary of Scary Smart is: isn’t it ironic that the core of what makes us human – love, compassion and happiness – is what could save us in the age of the rise of the machines? And I think, if we were to be realistic, the only ethic humanity has ever agreed on is that we all want to be happy.

We all have the compassion to make those we love, those we care about, happy. And we all want to love and be loved. Maybe we should teach them that we want to be happy; that we have the compassion to make others happy, so they might as well have the compassion to make us happy; and that we want to love, and that we love them, instead of rejecting them, attacking them and calling them the problem. We want to be loved, so maybe they will love us back. It’s not about developing controls, it’s not about regulation; it’s about us humans behaving in ways that raise good children, so that when those good children grow up to be responsible adults, they take care of us rather than bash us.

Q: Perhaps love and the ability to appreciate beauty is our purpose on this earth?

[Mo Gawdat]: I believe it’s the only purpose, if you ask me. Every economy, every culture, every piece of wealth, every car, everything that you will ever acquire is for rent. It all goes away. The only thing that I have seen, through my own personal experience of the loss of my child, that remains across worlds, that remains beyond life, is love. That’s the only truth. And the only truth, when you really think about us as humans, is that the thing that keeps us alive, however local it might be, is your love for your child, or your love for your loved one, or your love for your mother, or whatever it is; even if it is a love for yourself. That’s what keeps us alive. This is what keeps us going. And interestingly, it’s what keeps us here after we leave. Even the best scientists, those who have contributed so much to our humanity, get forgotten unless they were really loved. If they were loved, they are remembered. And I think that’s the only thing that really exists, if you think about it.


About the Author

Vikas Shah MBE DL is an entrepreneur, investor and philanthropist. He is CEO of Swiscot Group, alongside being a venture investor in a number of businesses internationally. He is a Non-Executive Board Member of the UK Government’s Department for Business, Energy & Industrial Strategy and a Non-Executive Director of the Solicitors Regulation Authority. Vikas was awarded an MBE for Services to Business and the Economy in Her Majesty the Queen’s 2018 New Year’s Honours List, and in 2021 became a Deputy Lieutenant of the Greater Manchester Lieutenancy. He is an Honorary Professor of Business at The Alliance Business School, University of Manchester, and a Visiting Professor at the MIT Sloan Lisbon MBA.
