Sir Nick Clegg is the former UK Deputy Prime Minister (2010-2015) and former President, Global Affairs at Meta (2018-2025). Prior to being elected to the UK Parliament in 2005, he worked in the European Commission and served for five years as a member of the European Parliament. He became leader of the Liberal Democrats in 2007 and served as Deputy Prime Minister in the UK’s first coalition government since the Second World War. He joined Meta, then called Facebook, in 2018 and became the company’s chief policy decision-maker and its principal interlocutor with world leaders, governments and policymakers around the globe. He has authored numerous publications, including How to Save the Internet: The Threat to Global Connection in the Age of AI and Political Conflict; Politics: Between the Extremes; and How To Stop Brexit (And Make Britain Great Again).
Q: How do we align the openness of the internet (and technology broadly) with the need for political control and boundaries?
[Sir Nick Clegg]: …in a sense, all of that reflects the underlying collision between how these technologies—and generative AI is just the latest example—erase geography and ignore borders, and the countervailing deglobalisation of politics everywhere. Whether it’s Modi, Brexit, Trump, Erdogan—across the world, people are trying to reassert control over something that seems to slip beyond their grasp. So how do you strike the balance? Well, I think sovereign governments have every right to legislate and regulate in this area. The problem is that if they do so in ways that are either unworkable in practice, because they don’t understand the technology, or too divergent from what other jurisdictions are doing, you end up with an erratic, uneven approach to how the internet is governed. And that, I think, will lead to the frictions and balkanisation of the internet I talk about in the book.
In an ideal world—which doesn’t exist, but still—in an ideal world, and this is what I explore in the final third of the book, you’d have a closer alignment, not perfect but closer, among the major techno-democracies: the US, India, and Europe, in that order of descending importance. They’d reach a basic understanding of the guardrails they want to establish—on content moderation, the training and transparency of foundation models, open data flows, and so on. In the long run, I just don’t see any alternative to some form of partnership or multilateralism in this space. We’ve done it before—on trade, on arms reduction, on other cross-border issues. We just need to rediscover that political capacity here too.
Q: How are politicians perceived by technology companies, and vice versa?
[Sir Nick Clegg]: I think it’s worse than that. They just don’t speak the same language. Politicians, by and large, don’t understand technology at all, and technologists don’t understand politicians—and both tend to denigrate each other. The technologists in Silicon Valley see politicians as venal, short-term, and ignorant, while politicians view technologists as rapacious capitalists who will stop at nothing to beat their rivals and lack any ethical compass. One important step forward would be if politicians and technologists simply spent more time trying to understand each other.
And dare I say it, as a refugee from British and European politics into Silicon Valley, one of my motivations was precisely that. When I was Deputy Prime Minister, especially on security and Home Office-related issues, I saw firsthand how government struggled with technology. The British Home Office, as it often does, made yet another attempt to persuade ministers to abolish or restrict encryption—because end-to-end encryption naturally means some data and communications are less visible to intelligence services. I remember thinking at the time, they’re fundamentally misunderstanding the technology. It’s an absurd idea that you can somehow put the cat back in the bag.
So, I think anything—any forum, gathering, or seminar—that brings these two communities together is a good thing, because they can’t keep throwing rocks at each other and clinging to these caricatures. Of course, it doesn’t help that big tech is often portrayed by publishers and newspapers in the worst possible light. They have their own axe to grind, since tech companies have disrupted their business models, which only fuels the stereotypes. Now, none of this is to say that politicians or tech leaders are angels—they’re certainly not. We should be clear-eyed about the fact that big tech companies have their own agendas and interests. But, to your point, they’re still made up of human beings—mothers, fathers, sons, and daughters—not robots, despite the cliché. And most of the people I worked with in Silicon Valley—not all, but most—really did wrestle with these issues. They often didn’t know the right answers, but they grappled with them far more seriously than the stereotype suggests.
Q: How do we handle the seeming paradox of a borderless internet versus a world with borders and sovereignty?
[Sir Nick Clegg]: …of course a degree of digital sovereignty should be exercised to safeguard the agency and interests of the society you live in. Over time, I’ve become more French in my thinking about digital sovereignty, in the sense that I see it as a fundamental issue. We’re not just over-reliant—we’re wholly reliant—on American technology across the entire stack. Our data sits in American cloud infrastructure; our hardware is American-designed; our software and operating systems are overwhelmingly American; most of the AI systems people interact with are American; and so on.
For the Brits in particular, there’s this rather touching belief that the “special relationship” will somehow meet all our needs. That may have made sense for the past 70 or 80 years since the Second World War, but it doesn’t anymore. It’s now clear that America is pursuing a different agenda. The Trump phenomenon isn’t just a blip—there’s a genuine rupture across the Atlantic. So, what does that mean in practice? It means we need to protect some of our own sovereign infrastructure—our own sovereign cloud—especially for utilities, security, and intelligence. And as new AI paradigms emerge beyond the current LLM model, as I think they will, we must ensure the ability to develop our own. We have real expertise in quantum computing within UK academia, and we need to make sure that academic lead translates into industrial leadership if and when quantum becomes a reality, because that will be transformative.
I also think it’s possible to protect your interests while still living in a world that remains open—open to ideas, communication, and the basic flow of data. It’s really about finding the right balance between the two. At the moment, it’s deeply unbalanced: fully globalised but not sovereign at all—unless you’re American or Chinese, in which case it’s entirely sovereign. And I’ve lived this, as you said. One of the great ironies of a company like Meta, where I worked, is that well over 90% of its users are outside the US, yet well over 90% of the bandwidth among decision-makers is focused on what’s happening in America. In the end, that just doesn’t make sense.
Q: Do you think we will achieve multilateral governance of the internet in the true sense, or is it likely we could see the internet become an exercise in digital imperialism?
[Sir Nick Clegg]: The answer, I think, goes something like this: you need to determine which parts of—sorry to use the technocratic jargon—the technology stack are most suitable for multilateral governance, and which are best handled through sovereign action. Let me give you examples of each.
On the multilateral side, I think having a basic, settled understanding that data should be able to flow freely between jurisdictions is a foundational principle for the internet—and one that could be enshrined multilaterally. And that might sound obvious—surely data will always flow freely?—but that’s not the case. As I describe in the book, we’ve already had some close calls. Just a few years ago, the US and EU clashed legally in a way that could have collapsed the legal basis for transatlantic data transfers. In India, where I spent a lot of time in Delhi talking to ministers, the government came close to passing a hard data localisation law that would have required all data gathered by platforms to remain within India’s borders. And of course, once one country does that, everyone starts demanding their own slice of the cake.
On the sovereign side, though, I think content really should be for each country to decide. The platforms hate that because they prefer to treat the world as flat—to apply one universal set of content standards everywhere. I always tell the story of how Scandinavian ministers used to berate me for Facebook’s prudish standards on nudity, which stopped happy-go-lucky Swedes from posting photos of topless sunbathing in the Baltics. In India, there was, quite understandably, deep concern about inter-ethnic and inter-communal violence, and frustration that content standards didn’t reflect that. Americans, meanwhile, tend to have a far higher tolerance for violent content than Europeans. At the end of the day, the platforms will complain, but they exist to serve society—society doesn’t exist to serve them.
So those are just examples, but you can apply that logic to every layer of the stack. Some areas are ripe for multilateral agreement, while others should rightly remain within the purview of sovereign governments.
Q: How do we rebuild trust between civil society and technology?
[Sir Nick Clegg]: I don’t think there’s a single silver-bullet answer. I think part of it is getting the balance right between regulation and free enterprise. The current Trump–Silicon Valley consensus—that all forms of regulation somehow constrain the muscular freedom of companies to innovate—is ludicrous. Of course governments and legislators have a perfectly legitimate right to set guardrails on issues like content, child protection, and so on. So yes, regulation is an important part of the mix.
The second ingredient is far greater transparency from the platforms about how they assemble and architect their technology in the first place. In the age of AI, they should be required to be much more open about how models are trained and how inference works. That might only interest specialists, but it’s still a crucial principle: people should be able to lift the bonnet and see how the technology functions.
The third leg of the stool is maximum user control—offered in the most granular and user-friendly way possible—over their experience. That’s improved a lot in the old social media world. On Instagram or Facebook, for instance, you can now click the three dots and exercise quite a lot of control: override the app, disable the ranking algorithm, choose which ads you see (or pay to remove them), block people, adjust settings, or ask why you’re seeing a particular post. But we need that same kind of control—expanded and improved—for our interactions with agentic AI. These systems know so much about us, and users must be able to wipe their memory or disable them easily. My fear is that these controls will again end up buried in the fine print. They need to be made simple, visible, and genuinely accessible—and that’s something regulators can and should insist on.
Q: Are governments taking the risks of AI seriously enough?
[Sir Nick Clegg]: … everyone’s still grappling with this. The industry is in a kind of frenzy, racing to build ever more powerful models and ever more agentic experiences. Governments and regulators, meanwhile, are scratching their heads, trying to figure out what on earth to do about this new technology. Some welcome it, some fear it, some want to regulate it, others want to unleash it. And consumers and businesses alike still aren’t sure how to respond.
I agree with you. Unfortunately, one of the things we’ll need to do to address the concern you’ve rightly raised is to significantly elevate societal scepticism about what people find and see online. I hate to say it, but we’re going to have to start very deliberately telling people: don’t believe a damn thing you find online unless you can absolutely verify it. There’s already so much of what’s being called “AI slop” online. These tools are immensely powerful—and already being used by fraudsters to deceive and defraud. I think there’s a whole new industry waiting to emerge, one that may not even exist yet, focused on doing the reverse: verifying what’s real, human, and authentic. If I were an investor, that’s where I’d put my money.
In the early stages of this AI hype cycle, some people argued that all AI-generated content should be watermarked or kite-marked. That’s not going to happen—there’s just going to be too much of it. But you could do the reverse: kite-mark or flag content that is authentic or has been verified. So I think what we’ll need is a combination of widespread scepticism and far more sophisticated authentication technologies. Otherwise, I honestly don’t know how we’ll navigate this minefield.
Q: How can we remain optimistic about the role of technology in human civilisation?
[Sir Nick Clegg]: … there are two groups of people—and this is certainly my own reaction—whom I increasingly just don’t listen to when it comes to technology: the arch-proponents and the arch-opponents. There are plenty of both in Silicon Valley, and they share one fundamental mistake, which is why I tend to take a slightly more measured view. They both wildly exaggerate the role of technology in humanity—in who we are, in the anthropology of being human. We just keep doing this. We keep over-ascribing to technology powers it doesn’t really have.
I talk about this in the book. Look at the hysteria that’s accompanied every new technology: bicycles were once seen as dangerous and subversive for women because they sat astride them; when I was a kid in the 1970s, video games were said by the Daily Mail to turn you into an axe murderer if you played for more than ten minutes. Radio was considered subversive because it piped information directly into people’s homes, becoming a tool for fascist propagandists. Television advertising, I remember, caused similar panic—friends of mine were sent out of the room when ads for Crispy Pancakes came on, because their parents thought it might burrow into their neural pathways. And now we have artificial intelligence—which we call “artificial,” but spend all our time anthropomorphising. History, it seems, keeps repeating itself.
This is undeniably powerful technology, and I don’t want to sound glib or dismissive of its potential. But it’s still more flawed and limited than people think. It remains a probabilistic machine, with all the errors that entails. Will it cause upheaval in some jobs and industries? Sure. Could it deliver dramatic benefits in education, health, urban planning, and climate change? Absolutely. But I think, as ever, we’ll remain human—and what we’ll continue to value most is each other.
I saw this repeatedly in Silicon Valley. It’s a remarkable place, but also a strangely ahistorical one—it looks as if it was all built last Tuesday, a culture wonderfully free from the patterns of history. Yet history, like geography, always reasserts itself. We’ll navigate this too, just as we’ve navigated bigger shifts: the printing press was arguably more transformative, as was agricultural mechanisation, which moved most of humanity off the land. Industrialisation brought huge upheavals, yes—but when people say this is “bigger than the Industrial Revolution,” I think that’s nonsense. It’s an accelerant to many trends, a big change, yes—but we’re getting carried away by the breathless language.
And finally, Sam Altman, Mark Zuckerberg, Satya Nadella, Sundar Pichai—these are all brilliant people. But they’re not philosopher kings. Dario Amodei is not a labour market economist. Listen to them on AI and their companies, fine—but don’t take their views on universal basic income as gospel. Just because they’ve built the technology doesn’t mean they understand exactly how it will interact with society. They don’t necessarily have that wisdom.