The Future of Humanity

Whether you believe that evolution gave rise to our species from a primordial soup, or that we were manifested on this earth by some divine creator, the fact remains that humanity has progressed an immense distance to where we are today, which many regard as a pivotal moment in our journey. The pace of change in all aspects of our lives is accelerating, bringing with it a greater frequency of ‘paradigm shifting’ events (i.e. those events which shape the future direction of our entire species). Renowned futurist Ray Kurzweil illustrated the speed of these changes by looking at the time between paradigm-shifting events in our history (source):

From the time humans emerged to basic art and proto-writing: 69,900 years

From the emergence of this basic art (cave painting) to agriculture: 16,600 years

From agriculture to the wheel: 8,200 years

From the wheel to democracy: 2,470 years

From the industrial revolution to the advent of modern physics: 125 years

It is clear to see that the pace increases dramatically, even over these long time periods.
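The compression is easy to check against the figures above. A minimal sketch (Python, using the quoted intervals as-is) shows each gap shrinking to a fraction of the one before:

```python
# Intervals (in years) between the successive paradigm-shifting events
# listed above, exactly as quoted.
intervals = [69_900, 16_600, 8_200, 2_470, 125]

# How many times shorter each interval is than its predecessor.
for earlier, later in zip(intervals, intervals[1:]):
    print(f"{earlier:>6,} -> {later:>6,} years: gap shrinks {earlier / later:.1f}x")
```

Each successive gap is between two and twenty times shorter than the one before it. To take more recent examples: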

In January 1915, Alexander Graham Bell made the first transcontinental phone call. Now, only a century later, practically the whole of our world, and everyone in it, is interconnected through an amorphous network (the internet), with researchers working on ever more esoteric technologies to speed up data transmission. Similarly, in December 1903 at Kitty Hawk, the Wright brothers flew the first powered airplane a distance of one hundred and twenty feet. Just sixty-six years later, in July 1969, two men stood on the moon, looking back at Earth, the first human beings ever to view our planet from another celestial body. In both cases, had you stood alongside the Wright brothers or Alexander Graham Bell at those moments and told them of the changes to come in such a short time, they would likely have dismissed your views as madness; yet here we are today, witness to those very advances.

For our current iteration of humanity, the most profound changes will emerge from science and technology, fields which consistently deliver advances of such magnitude that we are frequently confronted with research which, while real, appears to belong to the realms of science fiction. At the core of these fields lies the relationship between humanity and technology, and the very nature of humanity itself. Many commentators see it as inevitable that human evolution will proceed at our own hand, through genetic and technological enhancement; but this field also holds the possibility of us, as a civilisation, creating machines which are intelligent, conscious, and smarter than we are. This, combined with developments in biology and other fields, also introduces the threat of ‘existential’ risks (risks which pose a threat to the very existence of humanity).

In this exclusive interview, we talk to Professor Nick Bostrom, Director of the Future of Humanity Institute at the University of Oxford, discussing the profound changes humanity could experience over the coming years, including artificial intelligence, machine consciousness, the direction of human evolution, and risks to humanity itself.

Nick Bostrom (ranked as one of the FP Top 100 Global Thinkers, and winner of the 2009 Eugene R. Gannon Jr. Award based on his “criteria of integrity, ingenuity, professional recognition, and significance to the future of humanity”) is Director of the Future of Humanity Institute at Oxford University. He previously taught at Yale University in the Department of Philosophy and in the Yale Institute for Social and Policy Studies. He has more than 170 publications to his name, including three books: Anthropic Bias (Routledge, 2002), Global Catastrophic Risks (OUP, 2008), and Human Enhancement (OUP, 2009). His writings have been translated into 16 different languages, and reprinted numerous times in anthologies and textbooks. Bostrom has a background in physics, computational neuroscience, and mathematical logic as well as analytic philosophy.

Professor Bostrom is a leading thinker on big picture questions for humanity. His research also covers the foundations of probability theory, scientific methodology, human enhancement, global catastrophic risks, moral philosophy, and consequences of future technology. Bostrom developed the first mathematically explicit theory of observation selection effects. He is also the originator of the Simulation Argument, the Reversal Test, the concept of Existential Risk, and a number of other influential contributions. He serves occasionally as an expert consultant for various governmental agencies in the UK, Europe, and the USA, and he is a frequent commentator in the media.

On Risks:

Q: What are the big risks (including existential risks) facing humanity in the 21st Century?

[Nick Bostrom] In my view, all the big existential risks are anthropogenic, arising out of human activity. More specifically, the biggest existential risks in this century arise out of anticipated future technological advances. Humanity has survived all kinds of natural hazards for over one hundred thousand years; it seems unlikely, then, that any natural hazard would do us in within the next hundred.

Among the future technologies that may pose significant existential risks, I would rank machine superintelligence at or near the top. Advanced molecular nanotechnology could enable the construction of very powerful weapons systems, and could also pose a big risk. Synthetic biology will have some potentially very dangerous applications.

There are also some existential risks that would entail the permanent and drastic destruction of humanity’s future potential without necessarily causing human extinction. Unwise human modification that changes human nature in some undesirable way could be one example. Vastly improved surveillance and mind control technologies could perhaps facilitate the establishment of some pernicious global totalitarian regime.

Furthermore, there is a big black box of as-yet unimagined risk. Most of what now seem like the biggest risks to society were unknown one hundred years ago. It is plausible that there are some big risks that still await discovery.

On an individual level, the most likely cause of death for most of us is ageing.

On Transhumanism:

Q: What do you think will be the direction of human evolution?

[Nick Bostrom] Right now, biological evolution is not the main engine of change in the human condition. Instead, social and technological developments, which occur on shorter timescales, are the predominant change-makers.

In particular, it seems that we are gaining capabilities to directly modify human nature (through genetic selection, gene therapy, cognitive enhancement drugs, life extension treatments, etc.), and that we are gaining these new technological capabilities much faster than evolution changes the human genome. Future breakthroughs such as artificial general intelligence, uploading, and advanced forms of synthetic biology and molecular nanotechnology will also make it possible to alter human biology in much more profound ways.

At some point, if this kind of technological progress continues, it would seem that our descendants will become entirely digital: uploads or artificial intellects implemented on computers. At that point, it is possible that evolutionary selection will again become an important driver of change—but not necessarily of change for the better.

Q: Can you explain the concept of transhumanism, and do you think it will play a part in humanity’s story?

[Nick Bostrom] Transhumanism might come to be regarded as the name of the school of thinking that first took seriously the idea of direct technological intervention to profoundly change human biological nature. In the long run, that development could be one of the milestones in the history of life. However, it will probably also be said that most transhumanists were naïve and misguided in many of their particular beliefs, and it is questionable whether the movement will really have made much of a difference, or whether history would have turned out pretty much the same way without people self-identifying under this rubric.

On Machine Intelligence:

Q: How far are we from creating machine intelligence? Do you think it is feasible that we will see machine consciousness and/or superintelligence?

[Nick Bostrom] It is not known how far away we are from creating human-level machine intelligence. This means that we must distribute our credence over a rather wide range of possible dates of arrival. Mid-century seems about as good a guess as any, but it could happen considerably sooner or much later.

However long it takes to get from here to roughly human-level machine intelligence, the step from there to superintelligence is likely to be much quicker. In one type of scenario, “the singularity hypothesis”, some sufficiently advanced and easily modifiable machine intelligence (a “seed AI”) applies its wits to create a smarter version of itself. This smarter version uses its greater intelligence to improve itself even further. The process is iterative, and each cycle is faster than its predecessor. The result is an intelligence explosion. Within some very short period of time (weeks, hours), radical superintelligence is attained. But even if the process took a few years, it would still be incredibly rapid on an historical timescale.
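To make the shape of that scenario concrete, here is a minimal toy model of recursive self-improvement. It is not Bostrom’s model; the one-year first cycle and the twofold per-cycle speedup are purely illustrative assumptions:

```python
# Toy model of an intelligence explosion: each self-improvement cycle
# runs `speedup` times faster than the one before. All parameters are
# illustrative assumptions, not forecasts.
def explosion_timeline(first_cycle_days=365.0, speedup=2.0, cycles=30):
    """Return cumulative elapsed days after each improvement cycle."""
    elapsed, cycle_time, timeline = 0.0, first_cycle_days, []
    for _ in range(cycles):
        elapsed += cycle_time
        timeline.append(elapsed)
        cycle_time /= speedup  # the improved system improves itself faster
    return timeline

timeline = explosion_timeline()
# The elapsed time is a geometric series bounded above by
# first_cycle_days * speedup / (speedup - 1) = 730 days here,
# no matter how many cycles run.
print(f"after {len(timeline)} cycles: {timeline[-1]:.2f} days elapsed")
```

With any constant speedup greater than one, the total time converges to a finite bound, and the later (most dramatic) capability gains arrive in a vanishingly small slice of it.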

I think that an intelligent machine could be conscious, but there might be some ways to build intelligent machines that would not be conscious. These are philosophical questions. Whether conscious or not, the development of inexpensive human-level artificial minds could have enormous economic consequences.

Q: What are the implications for our (human) society of machine intelligence, machine consciousness and/or superintelligence?

[Nick Bostrom] Intelligence is a big deal. Humanity owes its dominant position on Earth not to any special strength of our muscles, nor any unusual sharpness of our teeth, but to the unique ingenuity of our brains. It is our brains that are responsible for the complex social organization and the accumulation of technical, economic, and scientific advances that, for better and worse, underpin modern civilization. All our technological inventions, philosophical ideas, and scientific theories have gone through the birth canal of the human intellect. Arguably, human brain power is the chief rate‐limiting factor in the development of human civilization.

Whether abrupt and singular, or more gradual and multi-polar, the transition from human-level intelligence to superintelligence would be of pivotal significance. Superintelligence would be the last invention biological man would ever need to make, since, by definition, it would be much better at inventing than we are. All sorts of theoretically possible technologies could be developed quickly by a superintelligence: advanced molecular manufacturing, medical nanotechnology, human enhancement technologies, uploading, weapons of all kinds, lifelike virtual realities, self-replicating space-colonizing robotic probes, and more. It would also be super-effective at creating plans and strategies, working out philosophical problems, persuading and manipulating, and much else besides.

It is an open question whether the consequences would be for the better or the worse. The potential upside is clearly enormous; but the downside includes existential risk. Humanity’s future might one day depend on the initial conditions we create, in particular on whether we successfully design the system (e.g., the seed AI’s goal architecture) in such a way as to make it “human‐friendly”— in the best possible interpretation of that term.

Q: Do you think we will see whole-brain emulation? What are the implications?

[Nick Bostrom] The spectrum of approaches to creating artificial (general) intelligence ranges from completely unnatural techniques, such as those used in good old‐fashioned AI, to architectures modelled more closely on the human brain. The extreme of biological imitation is whole brain emulation, or “uploading”.

This approach would involve creating a very detailed 3D map of an actual brain (showing neurons, synaptic interconnections, and other relevant detail) by scanning slices of it and generating an image using computer software. Using computational models of how the basic elements operate, the whole brain could then be emulated on a sufficiently capacious computer.
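As a deliberately crude caricature of this pipeline (nothing like the real scale or biological fidelity), the sketch below treats a randomly generated sparse matrix as a stand-in for a scanned connectome and steps a simple leaky threshold-neuron model forward in time; every name and parameter here is invented for illustration:

```python
import numpy as np

# Whole-brain emulation in caricature: a "scanned" connectome becomes a
# weight matrix, and a basic computational model of each element (here,
# a leaky threshold neuron) is stepped through time. A real emulation
# would involve ~10^11 neurons and far richer neuron/synapse models.
rng = np.random.default_rng(seed=0)
n = 100                                   # neurons in the toy brain map
weights = rng.normal(0.0, 0.5, (n, n))    # stand-in for scanned synapses
weights[rng.random((n, n)) > 0.1] = 0.0   # real connectomes are sparse

potential = np.zeros(n)                   # membrane potentials
spikes = rng.random(n) < 0.2              # arbitrary initial activity
for step in range(50):
    drive = weights @ spikes.astype(float)   # synaptic input this step
    potential = 0.9 * potential + drive      # decay ("leak") plus input
    spikes = potential > 1.0                 # threshold crossing fires
    potential[spikes] = 0.0                  # reset fired neurons
    if step % 10 == 0:
        print(f"step {step:2d}: {int(spikes.sum())} neurons firing")
```

The point of the sketch is only the division of labour Bostrom describes: the scan supplies the structure (the weight matrix), and a generic model of the basic elements supplies the dynamics.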

The ultimate success of biology‐inspired approaches seems highly likely, since they can progress by piecemeal reverse‐engineering of the one physical system already known to be capable of general intelligence, the brain. However, some unnatural or hybrid approach might well get there sooner.

As with other avenues to machine superintelligence, whole-brain emulation would be associated with both existential risks and enormous opportunities.

It might be that the first-best alternative would be artificial super intelligence implemented with a good “friendliness theory”. However, developing a rigorous and correct friendly AI theory is very difficult, so it is not clear whether such a theory will be available when it first becomes possible to create super intelligence. The second-best alternative might instead be whole-brain emulation with some appropriate safeguards. The least preferred alternative—the one that seems to maximize existential risk—would be artificial super intelligence created without an adequate theory of AI friendliness

—————————————————

We can see from Professor Bostrom that we are at a unique point in our story, where it is conceivable that we, as a species, could choose and engineer the direction of our own evolution. This is a profoundly powerful concept, with three core outcomes: we could become more powerful, become subordinate to a higher intelligence, or become extinct. Any of these would, clearly, be a fundamental shift in what it means to be human.

Looking at the morality of this, in October 2009, we spoke with Professor John Harris, Lord Alliance Professor of Bioethics at The University of Manchester, who stated, “…I believe that it is right to use technology and science and the innovation that it generates, whether the technology is mechanical, chemical or biological, to improve ourselves, to make life better. We talk of “human enhancement”. For me an enhancement is necessarily good because if it wasn’t, it would not be called an ‘enhancement’; it would be a ‘disadvantage’ or ‘injury’, which would be unethical. As long as it’s good for you, it’s not only a reasonable thing to do, but may be morally required. One of the most fundamental moral principles is ‘do good’, or if you can’t do good, ‘don’t do any harm’, or if you can’t avoid harm, ‘do the least harm possible’. If you believe that, and I think we all do, then you should use enhancement technologies, if they’re safe, to improve human individuals and humankind. One of the ways of doing that is to use computers, and if we can interface with computers in a way that enables our brain function to be better, that would obviously be useful. For example, I am getting older now, and my memory isn’t what it was, and I do use my computer and my Blackberry to aid my memory. I don’t remember telephone numbers but they are in my Blackberry, I don’t remember addresses, they are in my computer, I don’t remember lots of facts, and the computer supplies those for me. This seems, to me, to be harmless, and doing this more efficiently, perhaps by having implants that did it automatically for us, seems to me to pose no problem.”

Professor Harris reflects, in the above, that it is how we use the technology, rather than the technology itself, which creates the core moral issue of whether this evolutionary route is right or wrong.

For us, as a species and a community, if we have a hand in our own evolution, the results could bring great benefits. We could become incredibly empowered, intelligent and connected. We may cease to require economies as we know them now, we may be able to extend our lives indefinitely, we may be able to eradicate disease, and we may come to exist in a society where, with genetic and technological engineering, our civilisation enters a new plane of existence which would be difficult for us even to conceptualise now. The difficulties, though, arise where the pace of this change is faster than our ability as a society to adapt to it. We could see a vast gulf of inequality between those who are “enhanced” and those who are not, we could see polarisation between the wealthy (who have access) and the poor (who do not), and we could, in theory, create a new underclass of humans by elevating a few into a new state of evolution.

Looking at the possibility of our civilisation creating machine consciousness and superintelligence, this also raises the issue of how we, as the subordinate species, would interact with this new form of consciousness (if, indeed, it would allow us to). Would we have to acknowledge its rights as an entity? Would “it” acknowledge ours?

Looking at the philosophical argument, we can turn to Peter Singer who, in his book In Defence of Animals, wrote: “…if humans are to be regarded as equal, we need some sense of equality that does not require any actual descriptive equality of talents, capacities or other qualities. If equality is to be related to any actual characteristics of humans, they must be pitched so low that no human lacks them – but this set of criteria – low enough that no human lacks them – will not only be possessed by humans.” In context, he was referring to the rights of animals in society, but the same principles could easily be extended to machine intelligence or to a genetically modified variant of our species.

It is also important to look at the context of how our current civilisation functions. Are we sufficiently at peace, as a civilisation, that we could be sure elements of our society would not use this technology in a provocative way? At the extreme, where Professor Bostrom talks of our descendants becoming ‘entirely digital’, we also begin to question the very nature of life, the soul, and even God.

If these concepts strike you as extreme, bear in mind that even at the current pace of growth in computing power, it will be around the year 2020 when, for USD 1,000, you will be able to buy a processor with the equivalent calculation power of a human brain, and around the year 2050 when, for USD 1,000, you will be able to buy a processor with the equivalent calculation power of all human brains combined (based on projections by Ray Kurzweil).
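Those dates fall out of straightforward exponential arithmetic, but only under particular assumptions. The sketch below makes them explicit; the constants (roughly 10^16 operations per second for one brain, 10^10 brains, USD 1,000 buying about 10^9 operations per second around the year 2000, and a one-year doubling time for price-performance) are assumptions chosen for illustration, and each is contested:

```python
import math

# Back-of-the-envelope version of the Kurzweil-style projection quoted
# above. Every constant is an assumption; brain estimates alone vary by
# several orders of magnitude.
BRAIN_OPS = 1e16                     # ops/sec for one human brain (assumed)
ALL_BRAINS_OPS = BRAIN_OPS * 1e10    # roughly ten billion brains (assumed)

def year_reached(target_ops, base_ops=1e9, base_year=2000, doubling_years=1.0):
    """Year when USD 1,000 buys `target_ops` ops/sec, given `base_ops`
    per USD 1,000 in `base_year` and a fixed doubling time."""
    doublings = math.log2(target_ops / base_ops)
    return base_year + doublings * doubling_years

print(f"one human brain:  ~{year_reached(BRAIN_OPS):.0f}")       # ~2023
print(f"all human brains: ~{year_reached(ALL_BRAINS_OPS):.0f}")  # ~2056
```

Under these assumptions the two thresholds land in the early 2020s and the mid-2050s, close to the quoted dates; stretch the doubling time to eighteen months and both recede by a decade or more, which is why such projections are best read as illustrations of exponential growth rather than forecasts.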

The direction of our evolution, whether at our hand or otherwise, cannot effectively be predicted. All we know is, whether we like it or not, the future will happen, and it will be dramatically different to our present. George Santayana once said, “We must welcome the future, remembering that soon it will be the past; and we must respect the past, remembering that it was once all that was humanly possible.”

And as for those who fear the changes society faces, it was Marcus Aurelius who, almost 2,000 years ago, said: “Never let the future disturb you. You will meet it, if you have to, with the same weapons of reason which today arm you against the present.”


About the Author

Vikas Shah MBE DL is an entrepreneur, investor & philanthropist. He is CEO of Swiscot Group alongside being a venture-investor in a number of businesses internationally. He is a Non-Executive Board Member of the UK Government’s Department for Business, Energy & Industrial Strategy and a Non-Executive Director of the Solicitors Regulation Authority. Vikas was awarded an MBE for Services to Business and the Economy in Her Majesty the Queen’s 2018 New Year’s Honours List and in 2021 became a Deputy Lieutenant of the Greater Manchester Lieutenancy. He is an Honorary Professor of Business at The Alliance Business School, University of Manchester and Visiting Professor at the MIT Sloan Lisbon MBA.

One Reply to “The Future of Humanity”

  • I am afraid I do not agree with the dismal future that Professor Bostrom wants us to envision.
    There is much that is not known and no matter what the computational power at the command of human beings, there will still be a lot that is unknown, unknowable, left to be discovered.

    The Black Swan phenomenon delineated by Nassim Nicholas Taleb also has to be factored in.

    We also forget the 'good' in Man. That will be the force which will fight the evil designs of some misguided robots.

    Professor Bostrom is well informed and can perhaps tell a good story with scientific backing, but all is not told as yet, I am sorry.
