From 600+ conversations with the world’s leading thinkers.
Unlike a machine, which processes myriad data points yet remains detached from meaning, we humans instantly ascribe significance to our sensory perceptions. The colour red transcends mere visual data, morphing into a spectrum of experiences.
The ultimate goal is to be in a state of flow with machines. Think of people working with horses, or herding cattle with a dog: these are examples of interacting with other intelligent creatures in a way that is fluid and allows us to achieve something we couldn't do ourselves.
At some point, if this kind of technological progress continues, it would seem that our descendants will become entirely digital: uploads or artificial intellects implemented on computers.
We cannot think about technology in confrontational terms. There is no race against the machines, there is no fight, no war. We have to end this long, historical confrontational narrative.
The things that are trivially easy for humans turn out to be extraordinarily difficult for machines, and vice versa. I cannot do the square root of a large number in my head, but my pocket calculator can do that instantly. But my pocket calculator still cannot make me a cup of coffee.
Arguably, it could be more comparable to the rise of Homo sapiens itself, or even to the origin of life on Earth.
The brain is definitely not doing computation in the purest sense. We are not crunching numbers in binary ones and zeros in our heads. A more important question is: what are your inputs, what output do you want, and how intelligently can the system get from one to the other?
If you tried to achieve that purely through informal methods—where hallucination is a persistent risk—getting to the same level of consistent correctness would likely require a lot more research effort and resources.
My fear has never been the machines waking up and deciding to do away with us, but rather that we, in our own bone-headed way, deploy systems inappropriately, or without thinking through the unintended consequences.
The third and deepest reason this matters—why it's not just commercially meaningful but potentially world-changing—is the ability to bridge different levels of abstraction.
We therefore see the drone exhibiting, through software, signs of the moral-affective function of 'guilt' when engaging in each mission.
It's the ultimate invention—the last one we'll ever need to make—because once we have AI that is generally intelligent and then superintelligent, it will do the inventing far better than we can. In that sense, it's a handing over of the baton.