Will Human Intelligence & Digital Technology Coevolve? A Conversation with W. Russell Neuman, One of the World’s Foremost Experts on Technology & Society.

In this interview, I speak to Professor W. Russell Neuman, one of the world’s foremost experts on the relationship between technology and society. He was one of the founding faculty of the MIT Media Laboratory and served as Senior Policy Analyst in the White House Office of Science & Technology Policy. We discuss the danger of projecting human traits onto the machines (and algorithms) we build and the need for a new vision of computational (and human) intelligence. Neuman is currently Professor of Media Technology at NYU Steinhardt, and in his remarkable new book, Evolutionary Intelligence, he brings together thinking across disciplines to demonstrate that our future depends on our ability to computationally compensate for the limitations of a human cognitive system that has only recently graduated from hunting and gathering.

Q: How can evolution help us better understand technological advance?

[W. Russell Neuman]:  In my opinion, the progression of human evolution is both dramatic and romantic. A pivotal moment in our journey was the invention of language. Imagine a time, lost in antiquity, when our ancestors merely grunted, mimicked animal calls, and pointed to communicate. Over hundreds of thousands of years, we slowly developed rudimentary words for tangible objects. It was only much later that we mastered structured, grammatical language, allowing us to express abstract thoughts and discuss the past and future with precision. This linguistic evolution significantly bolstered our ability to coordinate. Given our physical vulnerabilities—our small stature, lack of speed, and limited strength—it was essential for our survival to work together in the face of a hostile environment. This intrinsic need for cooperation is deeply embedded in our nature.

Fast forward to just 10,000 years ago, when another breakthrough occurred. Humans realized they could corner animals, essentially laying the foundation for animal husbandry. This insight extended to cultivating certain plants, marking the dawn of settled life, communities, and eventually cities. The era of hunting and gathering came to an end. Roughly 5,000 years later, the advent of written language revolutionized the way we transmitted knowledge, both spatially and temporally. The industrial revolution, as we know, replaced manual and animal labour with machines, magnifying our influence over our surroundings.

Now, as we stand on the precipice of the AI era, we are witnessing perhaps the most profound invention in human history. AI doesn’t just amplify our physical capabilities; it augments our intellect, allowing us to comprehend and engage with the world on a level previously unimagined. This, I believe, is the pinnacle of our evolutionary journey.

… consider the past 70 years. The term “artificial intelligence” was coined in 1955. Since then, we’ve navigated a labyrinth of modest successes and notable setbacks, endeavouring to develop truly effective AI processes—systems that could aptly assess and react to their surroundings. Funding for AI research has been a roller-coaster. There were phases, known as “AI Winters,” when belief in AI waned, leading institutions and governments to cut off financial support, deeming the direction fruitless.

In response, the research community, in a blend of pragmatism and ambition, rallied behind the Turing Test as a benchmark. The idea was simple: demonstrate that machines can emulate human intelligence, which was seen as the pinnacle of cognitive achievement. This historical and cultural trajectory, while understandable, seemingly dismisses the idea that computers can serve as invaluable complements to human cognition. We ought to look beyond mere abstract potentialities of human capabilities and consider the tangible ways AI can assist when humans interact with their environment.

Q: Why do we have a cultural fear of AI?

[W. Russell Neuman]:  Historically, humans have evolved to be wary of the unfamiliar—a survival instinct that’s served us well. Thus, the age-old “Frankenstein” narrative, wherein we birth powerful entities beyond our understanding or control, resonates deeply with our intrinsic apprehensions.

It’s commendable and indeed vital to scrutinize the potential pitfalls of emerging technologies and to be mindful of unforeseen repercussions. This vigilance ensures we harness innovations responsibly.

To illustrate the gradual acceptance of AI, let’s consider real-world AI applications like the Waze navigation system. Living in Manhattan, I’ve observed that even seasoned cab drivers, well-versed with the city’s labyrinth, rely on Waze. They’ll often remark about congestion on Park Avenue and opt for an alternative route to the airport. Through trial and error, users gauge the reliability of such tech. Sometimes, Waze might suggest a convoluted route, involving numerous turns, to shave off a mere two minutes. Users then question its utility. Over time, the system might prompt, “Do you prefer a complex detour to save 2 minutes or a straightforward route?” As these platforms evolve and people discern their benefits and limitations, acceptance grows.
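The trade-off Neuman describes can be pictured as a simple scoring rule. The sketch below is purely illustrative (the function, route figures, and per-turn penalty are hypothetical, not how Waze actually works): the system weighs driving time against route complexity, and the “winning” route flips depending on how much each extra turn costs the user.

```python
# A toy sketch of the route trade-off described above: the suggested
# route depends on how heavily each extra turn is penalized relative
# to driving time. All numbers here are hypothetical.

def pick_route(routes, minutes_per_turn=0.5):
    """Pick the route minimizing time plus a per-turn complexity cost.

    routes: list of (name, drive_minutes, number_of_turns) tuples.
    minutes_per_turn: how many minutes of driving one extra turn
    "feels like" to this user -- a preference a system could learn.
    """
    def cost(route):
        _, minutes, turns = route
        return minutes + minutes_per_turn * turns
    return min(routes, key=cost)

routes = [("direct", 22, 3), ("detour", 20, 9)]

# A turn-tolerant user is steered to the 2-minutes-faster detour...
print(pick_route(routes, minutes_per_turn=0.1)[0])  # → detour
# ...while a user who dislikes complexity keeps the direct route.
print(pick_route(routes, minutes_per_turn=1.0)[0])  # → direct
```

Learning that one preference parameter per user is, in effect, what the “Do you prefer a complex detour to save 2 minutes?” prompt would accomplish.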

Q: How can we effectively put guardrails around AI?

[W. Russell Neuman]:  Approaching this topic anew, it’s evident that a degree of caution is essential when dealing with potent emerging technologies. We need to diligently examine methods to forestall their misuse. Some voices in the industry have suggested a national, or even global, commission to oversee and potentially license new AI implementations. While the sentiment is valid, the practicality is questionable. AI fundamentally utilizes mathematical algorithms to inform decisions. You wouldn’t require a license for a simple device advising you to carry an umbrella due to rain predictions. Thus, determining which AI complexities would warrant licensing becomes a challenge.

The dialogue surrounding AI licensing and regulations is logical. Companies could benefit from guidelines on enhancing best practices. After all, businesses thrive when their technologies don’t expose them to liabilities or dangerous repercussions. Essentially, the industry is aligned on this front. A faction in the AI community, often referred to as “doomers”, ardently advocates for stringent regulations, suggesting a complete halt on all AI activities for a certain period. Their core concern revolves around what they term “existential risk”.

A metaphor popularized by the Oxford philosopher Nick Bostrom aptly captures this sentiment: if an autonomous AI is instructed to produce paperclips, it might prioritize this task to such an extent that it considers converting all available resources, including humans, into paperclips. This tale, now known as the “paperclip story”, has become shorthand for existential risk within AI circles. Mention “paperclip”, and nods of recognition follow.

While the cautionary tales narrated by these thinkers have merit, one can’t help but feel they occasionally veer into hyperbole. They often highlight the idea that AI systems can self-evolve, enhancing their capabilities exponentially in mere seconds. However, this overlooks the fact that genuine intelligence augmentation necessitates the incorporation of vast new data. Mere reprogramming or tweaking mathematical models doesn’t compensate for the intricate process of training, correction, and refinement. In this light, the fears of unfettered exponential growth and impending existential threats may be somewhat overstated.

Q: What are the dangers of anthropomorphising AI?

[W. Russell Neuman]: Firstly, the ‘Doomers’ often anthropomorphize computers by attributing human characteristics to them. While this is an understandable tendency, it’s essential to recognize that humans evolved their competitive nature and occasional violent impulses through survival in a world marked by resource scarcity and competition, both with other humans and with animals. Computers, on the other hand, have emerged from a vastly different evolutionary path. Thus, to say a computer “wants to eat your lunch” leans heavily into projecting human traits onto machines. We ought to refrain from such assumptions.

Secondly, there’s no denying the potency of AI technologies. When they fall into the hands of those with malicious intent, the potential for harm is significant. However, for every concern about AI’s capacity to generate deceptive information or images, there’s a parallel effort within our community to develop AI tools designed to detect and debunk them. It’s reminiscent of the age-old ‘spy versus spy’ dynamic: as deceptive techniques advance, so too will the sophistication of our detection methods.

Some have taken issue with my definition of intelligence, deeming it overly simplistic. I typically define intelligence as the ability to accurately perceive one’s environment in order to optimize behaviour towards achieving goals. While this might come across as purely rational, omitting the vastness of human creativity and artistry, I believe it serves as a foundational starting point.

My perspective, which I term “evolutionary intelligence,” stems from the observation that humans often misconstrue their surroundings. For instance, we might see a billion-dollar lottery jackpot and impulsively buy a ticket. We have a natural tendency to seek out information that aligns with our pre-existing beliefs and often overvalue our contributions while downplaying others’. The sheer number of cognitive biases we possess is staggering; Wikipedia lists over 200, though there is some overlap.

The point is clear. If we could address even the top 10 of these biases and harness advisory tools like Waze for traffic, it could significantly benefit us. Such tools don’t replace our perception but enhance it by connecting us to a broader network of information. This allows us to foresee potential issues, whether they’re immediately in front of us or further down the line. Over time, I believe we’ll come to see this augmented, collective intelligence as a given, naturally integrating it into our daily lives.
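The lottery example is the kind of misperception an advisory tool could flag with simple arithmetic. The sketch below uses purely hypothetical round numbers (the odds, jackpot, and “realized fraction” are illustrative, not real lottery figures): the naive expected value computed from the headline jackpot can look attractive, while the value after discounting for lump-sum payout, taxes, and jackpot-splitting is negative.

```python
# Illustrative expected-value check for the lottery example above.
# All figures are hypothetical round numbers, not real lottery data.

def ticket_expected_value(headline_jackpot, odds, ticket_price,
                          realized_fraction=0.3):
    """Expected net value of one ticket, in dollars.

    realized_fraction roughly accounts for lump-sum discounting,
    taxes, and the chance of splitting the jackpot.
    """
    expected_win = headline_jackpot * realized_fraction / odds
    return expected_win - ticket_price

# A $1 billion headline jackpot, 1-in-300,000,000 odds, $2 ticket.
# Naively, the headline number makes the ticket look like a bargain:
print(ticket_expected_value(1_000_000_000, 300_000_000, 2,
                            realized_fraction=1.0))   # positive
# After realistic discounting, each ticket is an expected loss:
print(ticket_expected_value(1_000_000_000, 300_000_000, 2))  # → -1.0
```

An advisory layer need not forbid the purchase; surfacing the second number where the buyer sees only the first is exactly the bias-compensation Neuman describes.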

Q: How will ubiquitous computing (computing everywhere) change our relationship with technology (and AI)?

[W. Russell Neuman]:  … let’s delve a bit deeper. Reflect on the 1950s, when computers occupied vast air-conditioned rooms filled with vacuum tubes. This gradually transitioned to the more compact PDP-11, then to desktops, laptops, and eventually to the smartphones we can hold in our palms. The trajectory is clear: technology is getting intimately closer to us, no longer an isolated, bulky entity. I envision its next iteration as smart glasses or even smart contact lenses, offering an augmented layer over our perception. This would harness knowledge beyond our immediate experience, elevating our individual capabilities.

Consider the analogy of a horse and rider; their relationship remains, but they’re becoming more unified. Ray Kurzweil, a colleague of ours, terms this phenomenon “the singularity”, foreseeing technology integrating physically within us. I’m more reserved in this belief, suggesting that much can be achieved through augmenting our senses, primarily sight and sound.

In my research realm, these are dubbed “wearables” — essentially, technology woven into our attire. I often tell my students to recognize the implicit messages our clothing conveys about our mood, social status, and more. Beyond our primary senses, there’s an expanding realm of interaction through radio waves. This additional layer of communication means that soon, our entrance into a room will announce our presence not just visually or audibly, but also electromagnetically. We’re already familiar with this in transactions like Apple Pay. Soon, our interactions with others will encompass sight, sound, and this electromagnetic identity.

Q: Do we have the philosophical constructs to even understand that world?

[W. Russell Neuman]:  When examining philosophy, many scholars pinpoint Heidegger as a monumental shift, ushering in a modern approach to philosophical inquiry, especially in his work “Being and Time”. While my grasp of this tradition might be limited, it seems that rather than addressing broader issues of justice, Heidegger homed in on human existence within the confines of a limited lifespan. He explored life, death, existence, and the profound meaning of human presence in the world – aspects somewhat overlooked in preceding philosophical traditions. So, given your query, will we require a ground-breaking contribution addressing empowerment and identity, both individual and collective? My hope is a resounding yes. I’m optimistic that there are thinkers, akin to Heidegger, currently disrupting philosophical circles worldwide, and we’ll soon be privy to their insights.

Q: What does legacy mean to you?

[W. Russell Neuman]: I’ve walked the conventional academic route, albeit with brief forays into realms like Washington DC. Typically, academia is a dialogue amongst peers; it’s rare for scholars to venture into, let alone excel at, broader communications. Yet, I’m driven by the belief that top-tier academic research holds significance for our surrounding world. Beyond conversing with fellow experts, we all bear the responsibility to harness our findings and address the challenges faced by the wider community.

Thought Economics

About the Author

Vikas Shah MBE DL is an entrepreneur, investor & philanthropist. He is CEO of Swiscot Group alongside being a venture-investor in a number of businesses internationally. He is a Non-Executive Board Member of the UK Government’s Department for Business, Energy & Industrial Strategy and a Non-Executive Director of the Solicitors Regulation Authority. Vikas was awarded an MBE for Services to Business and the Economy in Her Majesty the Queen’s 2018 New Year’s Honours List and in 2021 became a Deputy Lieutenant of the Greater Manchester Lieutenancy. He is an Honorary Professor of Business at The Alliance Business School, University of Manchester and Visiting Professor at the MIT Sloan Lisbon MBA.