Neurons, Intelligence, and the Biological Computer: A Conversation with Hon Weng Chong, Founder & CEO of Cortical Labs

What if the most powerful computer of the future is not made of silicon, but of living cells? For decades, computing has followed a single trajectory — faster processors, more data, greater energy consumption — with the brain held up as the aspirational model but treated as too complex, too mysterious, and too biological to actually work with directly. That assumption is now being challenged at its foundation. A new field of biological computing, sometimes called wetware computing, is demonstrating that living neurons grown outside the body can not only survive on a laboratory dish, but learn, adapt, and solve problems in real time — using a fraction of the energy required by conventional chips. In 2022, one company made global headlines when it taught a cluster of human neurons to play Pong. In early 2026, those same neurons learned to play Doom. And this week, that company opened the world’s first biological data centre.

In this interview, I speak to Hon Weng Chong, Founder and CEO of Cortical Labs, the Melbourne-based biotechnology company pioneering the field of biological computing. A medical doctor and software engineer by background, Hon previously co-founded CliniCloud, a medical device startup backed by Tencent and Ping An Ventures. In 2019, inspired by a paper by Demis Hassabis advocating for AI researchers to return to neuroscience, he founded Cortical Labs with the goal of integrating lab-grown neurons with silicon hardware to create a fundamentally new class of computing architecture. The company’s CL1, launched commercially in 2025, is the world’s first biological computer available outside a research lab, and this week Cortical Labs announced the opening of a biological data centre in Melbourne alongside a major partnership with DayOne to build a facility in Singapore accommodating up to 1,000 CL1 units. With backing from investors including Horizons Ventures, Blackbird Ventures, and the CIA’s In-Q-Tel, Cortical Labs sits at one of the most provocative frontiers in science — a place where biology, computing, ethics, and the deepest questions about the nature of intelligence itself all converge.

Q: You’ve coined the term ‘synthetic biological intelligence’ to describe what your neurons are doing. How should we really understand that term? Is what you’re building genuinely different from traditional computational AI, and are the neurons actually learning in a meaningful sense?

[Hon Weng Chong]: We did coin the term synthetic biological intelligence — though I’ll be honest, three-letter acronyms don’t really work well, and this one is no exception. It’s just too long. We still use it internally, but we also use the equivalent term ‘biological computing’, which tends to be easier to grasp. Coming up with the right name for something entirely new is actually one of the hardest problems when you’re starting something from scratch. You need an ontology before you even have the vocabulary.

Going to the root of your question: what is intelligence, exactly? How do we define it? I think intelligence is best understood as an entity that has the ability to improve a metric through repeated exposure over time. By that definition, machine learning algorithms are learning systems — they get better with more data and more exposures. A dog is a learning system. A cat is a learning system — you teach it a trick, reward it a few times, and it just does it from there on. And humans, of course, are the ultimate example of a learning system. So when we ask whether our neurons are learning, the answer is yes, in that fundamental sense.

[Hon Weng Chong]: The harder question is whether this is a computer. There’s a well-known observation that the brain tends to be described in terms of whatever the most complex machine of any given era happens to be. In the Victorian era, people assumed the human brain must be a steam engine — because that was the most complex thing we had. Now, of course, everyone assumes it must be a computer. But the brain is definitely not doing computation in the purest sense. We are not crunching numbers in binary ones and zeros in our heads. When people ask me how our system compares to an NVIDIA GPU in terms of FLOPS, I tell them they’re asking the wrong question. A more important question is: what are your inputs, what output do you want, and how intelligently can the system get from one to the other? That is a much better measure of equivalence than raw computational throughput.

Q: What are the practical applications you see for biological computing? We now have quantum computing for specific tasks and GPU-based systems for others. A tiny pot of neurons that can play Doom would have required quite significant compute to replicate conventionally. So what does biological computing do that neither of those approaches can?

[Hon Weng Chong]: It really comes down to what’s known as Moravec’s Paradox. Hans Moravec was a roboticist and AI researcher who, in the 1980s, noticed something very counterintuitive: the things that are trivially easy for humans turn out to be extraordinarily difficult for machines, and vice versa. I cannot do the square root of a large number in my head, but my pocket calculator can do that instantly. But my pocket calculator still cannot make me a cup of coffee. Robots have enormous difficulty navigating three-dimensional space, particularly where you have joints with three degrees of freedom. Animals and biological organisms, on the other hand, solve that three-dimensional spatial problem with almost no effort. If you think about a large predator like a T-rex, it would almost certainly have had the same navigational capabilities we do — because it was a successful predator, and successful predators require that ability.

Moravec’s conjecture was that this asymmetry exists because the capabilities that come easily to biological systems are precisely the ones that have been critical for survival and have been honed through millions of years of evolution. And that leads directly to two properties we’ve observed in biological computing systems.

The first is energy efficiency. This follows from evolution directly: you cannot expend more energy than you can consume. That’s a fundamental law of physics. If you do, you starve, you die, and you remove yourself from the gene pool. Biological systems have therefore been under enormous selective pressure to develop highly efficient intelligence. Our CL1 uses around 30 watts — far less than a GPU drawing thousands of watts.

The second property is the ability to operate with sparse data in real time. We take data for granted these days — it seems to get churned out effortlessly. But think about what a camera actually does. A camera sampling at 30 frames per second, or a microphone sampling at 44,100 hertz, is always just sampling the world. Even if you had a camera capable of 1,000 frames per second, all you’ve done is increase the fidelity — the real-world event still elapsed in the same one second. Every digital system is fundamentally limited to sampling a continuous world at a fixed rate. Biological systems, by contrast, have evolved to operate in real time, continuously, because their survival depended on it. Think about a rabbit sitting in a field. If that rabbit saw a hawk circling above and decided to wait for the back-propagation step before responding, it would be dead. It would have been eliminated from the gene pool. The better you model the world, and the faster you can act on that model, the more likely your genes are to survive. That is the selective pressure that has shaped biological intelligence over millions of years.

Knowing those two properties — energy efficiency and real-time operation with limited data — points directly to the application domains where biological computing will excel: robotics, drones, cybersecurity. These are precisely the domains where there isn’t much data, things are running in real time, and you’re spending enormous compute resources trying to solve what are ultimately quite fast, simple problems.

Q: How did you solve the interface problem? There’s a fundamental difference between how neurons communicate — through spike trains, analogue voltage signals — and how digital systems work in binary. How did you bridge that gap, and how do you translate what the neurons are ‘saying’ into something a computer can act on?

[Hon Weng Chong]: This is genuinely the hardest problem in our field, and I want to be honest that we have one approach — not the definitive answer. It warrants a great deal more investigation, and it’s still an ongoing process.

My entry into this space came in 2019 when I read a paper by Demis Hassabis of DeepMind advocating for machine learning and AI researchers to go back to their roots in neuroscience. I took that literally — I went back to the neuroscience department at my alma mater, the University of Melbourne, and asked what was exciting them. What gets you out of bed in the morning? They introduced me to a technology called the microelectrode array — essentially a glass Petri dish that uses vapour deposition to embed titanium nitride electrodes into its surface. You can grow neurons on those electrodes, connect a computer, and both sample the electrical activity of the neurons and provide stimulation back. A bidirectional interface between living cells and digital hardware. It had been around for almost two decades, but something about it really intrigued me.

The translation challenge is then: neurons communicate through spike trains — patterns of action potential voltage pulses separated in time — and computers operate in binary ones and zeros. These are fundamentally different languages. Our initial approach drew on two principles from neuroscience. The first is place coding, which is how the hippocampus encodes spatial position — different neurons firing in response to different physical locations. The second is rate coding, found in the cochlea: the louder the sound, the stronger the vibration of the inner hair cells, and the higher the resulting firing rate. The cochlea is also itself place-coded, because different parts of it respond to different frequencies. So when you hear something, you have both frequency and amplitude being picked up simultaneously.

We applied exactly the same approach to the Pong game. To play Pong, you only need to know two parameters: the relative XY position of the ball in relation to the paddle. If you can track that, you can technically win the game. So we built a pseudo-cochlear system where we place-coded the sensory region of the neurons to correspond to the Y-axis position of the ball relative to the paddle — if the ball was far up the axis, we stimulated one region; in the middle, another; at the bottom, another. And we rate-coded the X-axis distance between the ball and the paddle. It was a brute-force approach, but it worked. It went back to fundamentals in neuroscience.
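The scheme described above can be sketched in a few lines of code. This is a hypothetical illustration only — the region counts, frequency range, and function names are my own assumptions, not Cortical Labs’ actual stimulation parameters.

```python
# Illustrative sketch of place coding and rate coding for a Pong-style game.
# All parameters (regions, frequency range) are invented for illustration.

def place_code_y(ball_y: float, court_height: float, n_regions: int = 3) -> int:
    """Place coding: map the ball's Y position to one of n sensory
    electrode regions (top, middle, bottom for n_regions=3)."""
    fraction = min(max(ball_y / court_height, 0.0), 1.0 - 1e-9)
    return int(fraction * n_regions)

def rate_code_x(distance: float, court_width: float,
                min_hz: float = 4.0, max_hz: float = 40.0) -> float:
    """Rate coding: map the ball-to-paddle X distance to a stimulation
    frequency. The closer the ball, the higher the frequency."""
    closeness = 1.0 - min(max(distance / court_width, 0.0), 1.0)
    return min_hz + closeness * (max_hz - min_hz)

# Example: ball near the top of the court, halfway across.
region = place_code_y(ball_y=90.0, court_height=100.0)  # -> region 2 (top third)
freq = rate_code_x(distance=50.0, court_width=100.0)    # -> 22.0 Hz
```

Together, the two channels carry exactly the two parameters the interview says are needed to win: which region is stimulated encodes the ball’s relative Y position, and how fast it is stimulated encodes the X distance.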

But that approach had limitations. It wasn’t particularly good at using the time domain — we couldn’t encode that much information that way. Think about Morse code: you can represent all 26 letters of the alphabet using sequences of just two basic elements, dots and dashes. We were nowhere near that level of efficiency. So we’re now working on better encoding schemes — using variational autoencoders to learn compact representations of high-dimensional state space, which are then converted into spike trains via spiking neural networks that the neurons can respond to.

The Doom experiment used a different approach again: a convolutional neural network (CNN) handled the visual processing and distilled the game’s visual input down to an XY position that was then fed to the neurons. And we have a paper coming out in the next week or two where we got the neurons to perform not just game-playing but also MNIST digit classification, which is a well-established machine learning benchmark. There are multiple groups pursuing different interface approaches right now. This is one of the most open and important questions in the whole field.

Interestingly, when people ask me how we’re going to scale up — now that Doom works with 200,000 neurons, why not just keep adding more? — they’re surprised to hear we’re deliberately going in the opposite direction. Hyperscale computing says: it works, let’s make it bigger. We’re saying: it works, let’s make it smaller. We want to keep ablating, keep reducing, until we find the smallest functional unit that still produces intelligent behaviour. That’s where the real scientific understanding lies. Making it bigger just means duplicating something we don’t fully understand yet.

Q: Your work introduces profound ethical questions. Your chief science officer has described these neurons as a different form of life. You’re working with living human brain cells that appear to learn, adapt, and respond. How do you think through questions of sentience and consciousness, and how do you avoid the trap of people anthropomorphising what they see?

[Hon Weng Chong]: It’s really important that we take this seriously — but also important that we don’t overreact. What we have is actually a relatively simple system. There are other labs pushing the frontier into much more ethically grey territory: growing organoids with eyes on them, and all sorts of other things that raise harder questions than ours. What we have is considerably less sophisticated than that.

The deeper question you’re touching on is where consciousness and sentience come from — are they emergent properties of complex systems, or something more fundamental and inherent? My personal view is that consciousness is an emergent property: it arises from sufficient complexity in a system. Having said that, in our first major paper we used the word ‘sentient’ — drawing on a definition associated with Karl Friston — and it caused significant controversy, because people immediately conflated sentience with consciousness. They kept treating them as equivalent, which they are not.

Sentience, in the sense we used it, simply means the capacity to respond to stimuli. Consider a paramecium or an amoeba: if you poke it, it moves away. Put food nearby, it moves toward it and consumes it. Is it conscious? Almost certainly not — I would be very surprised if anyone argued that a paramecium is a conscious system. Is it sentient by this definition? Yes, because it is responding to its environment. Our neurons sit at a similar level. They respond to their environment — and that responsiveness is the basis of the learning behaviour we observe. But lacking sufficient complexity, they are not conscious.

The field genuinely needs agreed definitions and a shared ontology before these questions can be rigorously answered. That requires ethicists, neuroscientists, and consciousness researchers to work together rather than in separate silos — and to come up with terminology that the broader public can distinguish. Until that happens, every time a biology lab uses the word ‘sentient’, the reaction will be: ‘oh, so it’s conscious’. And that confusion doesn’t serve anyone.

Q: The medical potential seems extraordinary — being able to grow a patient’s own neurons on a chip and test treatments for neurological conditions directly. When you started the company, did you have a particular envelope of use cases in mind, or was it an organic process of discovery?

[Hon Weng Chong]: It was very much organic. The first question was simply: can we get this thing to do anything useful outside the body at all? That was the entire goal at the start. Once we did get it to play Pong, the question became: well, what are we actually going to do with this? And none of it was really planned out. It takes time to think through, to let the implications evolve. The next question everyone asked was: what is this even for? Why do we need to keep looking into this? Which is a deep and important question, and we kept working through it.

What we realised, after publishing our findings and fielding questions from the research community, was something we hadn’t initially anticipated. These systems weren’t learning in the traditional machine learning sense — there was no distinct training phase and then a separate inference phase. They were learning continuously, in real time. That single insight reframed everything. It pointed us toward applications where time is a hard constraint and data is scarce — exactly the domains where conventional machine learning struggles, because it requires large datasets and extended training periods before it can be useful.

Even the Cortical Cloud emerged from a similar organic process. When we published, colleagues from other institutions started asking: where do I buy the machine? How much does it cost? What API do I write to work with it? And then you do a bit of introspection and you think: why are all my colleagues in academia asking me these questions? And then you realise it’s because they’re all planning to go off and build the machine themselves. Every lab wanting to explore this space was independently rebuilding the same infrastructure from scratch.

That is the fundamental reason why this field moves slowly. You raise a £5 million research grant and one to two million of it goes straight into building the hardware before you’ve run a single experiment. It’s an enormous duplication of effort. So we built the infrastructure once and made it accessible to the whole research community through Cortical Cloud. Just listening to what people actually needed, integrating that feedback, and building accordingly — it really has been an organic process of discovery all the way through.

Q: I want to ask about the entrepreneurship dimension. When I’m meeting startups as an investor, most of them fit recognisably into existing categories. What you’re doing doesn’t. It’s not just new — it sits in the realm of what most people would call science fiction, even though it’s actual hard science with real risk attached. How did you navigate fundraising and attracting talent when there’s no established frame for investors to match against?

[Hon Weng Chong]: You’re raising a genuinely important point. It is hard — very hard. Cortical Labs is a seven-year-old company, and this is my thirteenth year doing startups. I’m a second-time founder. And one of the most important things you learn over that time is that investors and entrepreneurs are essentially like partners in a dance — you cannot have one without the other, and they tend to move together to specific themes. Right now the theme is agentic AI. Two years ago it was large language models. Before that, NFTs and crypto. The dance is much easier when you’re moving to the current theme.

When your company doesn’t fit a current theme, investors have no pattern recognition to lean on. They can’t use the shortcut of ‘this is just like X, so I understand it’ — they have to actually process the information you’re giving them from first principles, evaluate it, and make a bet. Most investors aren’t set up to do that efficiently. They’re optimised for pattern-matching, not for processing genuinely novel information. So it makes fundraising extraordinarily difficult. There is no comparable pattern to point to.

Hiring, interestingly, cuts the other way — it’s actually a structural advantage when you’re doing something genuinely new. When you’re in a hot space like AI right now, everyone wants in, and that makes it very hard to tell whether someone is joining because they genuinely care about the vision or because they’re just running toward the gold rush. When you’re doing something that doesn’t have that kind of momentum, the people who join you are self-selecting for genuine curiosity and genuine conviction. They are not here because it was the obvious place to be. The team we’ve built at Cortical Labs are deeply curious people who want to push the boundaries of what’s possible — and that kind of culture is actually very hard to build in a hot sector where everyone is just chasing the moment.

Q: The same week your Doom demo went viral, there was another significant demonstration: a team simulated the Drosophila fly brain in a virtual environment and let it navigate unsupervised. That approach seemed to be a more traditional compute challenge — simulate the neurons, then let the system interact with a virtual world and see what emerges. Does that kind of simulation approach converge with what you’re doing, or are these fundamentally different paths?

[Hon Weng Chong]: They’re fundamentally different. The Drosophila work is computer scientists claiming to have simulated a brain — and when you look closely, it’s a considerably more simplified model than it first appears. Don’t get me wrong: mapping the connectome — charting every synaptic connection in an organism’s nervous system — is a genuine and remarkable scientific achievement. It is by no means a trivial task. But a map is just that: a map.

What the connectome gives you is structure — which neuron connects to which. What it doesn’t give you is the dynamics: the traffic on those routes. What’s the firing rate on a given pathway? Is that connection strong or weak, fast-conducting or slow? Is it a major highway or a dirt road? Trucks or motorcycles? There is an enormous amount of detail missing from a connectome-based simulation. People are starting to look critically at this and raise exactly these questions.

The Drosophila simulation used a leaky integrate-and-fire model — the most basic model we have of neuronal behaviour. It’s a useful approximation, but it strips away enormous amounts of biological reality. Every neuron has slightly different activation parameters, and modelling those accurately at scale is extremely hard. Maybe with 100,000 neurons they were able to achieve something approximating it — but it remains a very simplistic representation of what those neurons are actually doing.
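For readers unfamiliar with the model being criticised here, a leaky integrate-and-fire neuron really is only a few lines of code, which is part of the point: it captures membrane leak, input integration, and threshold spiking, and nothing else. The parameters below are generic textbook values, not those used in the Drosophila simulation.

```python
import numpy as np

def lif_simulate(input_current, dt=1e-3, tau=20e-3, v_rest=-65e-3,
                 v_reset=-65e-3, v_thresh=-50e-3, r_m=10e6):
    """Simulate one leaky integrate-and-fire neuron.

    The membrane voltage leaks back toward rest, integrates the input
    current, and emits a spike (then resets) when it crosses threshold.
    Everything else about real neurons -- ion channel diversity,
    dendritic structure, adaptation -- is stripped away.
    """
    v = v_rest
    voltages, spikes = [], []
    for t, i_in in enumerate(input_current):
        # Discretised membrane equation: leak term plus driven input.
        v += (-(v - v_rest) + r_m * i_in) * dt / tau
        if v >= v_thresh:
            spikes.append(t)   # record spike time (in steps)
            v = v_reset        # reset after firing
        voltages.append(v)
    return np.array(voltages), spikes

# A constant 2 nA input drives the neuron to fire repeatedly.
current = np.full(1000, 2e-9)          # one second at 1 ms resolution
voltages, spike_times = lif_simulate(current)
```

That a usable neuron model fits in twenty lines is exactly why a connectome plus leaky integrate-and-fire dynamics remains, as the interview argues, a very simplified stand-in for biological reality.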

We are going in exactly the opposite direction. Rather than simulating neurons in software — and inevitably baking in simplifying assumptions at every step — we work with real neurons and try to understand from the ground up how they actually operate. The two approaches might eventually inform each other. But right now, the simulation path and the biological computing path are asking fundamentally different questions, and I’m not sure the Drosophila work has much direct application in our space.

Thought Economics

About the Author

Vikas Shah MBE DL is an entrepreneur, investor & philanthropist. He is CEO of Swiscot Group alongside being a venture-investor in a number of businesses internationally. He is a Non-Executive Board Member of the UK Government’s Department for Business, Energy & Industrial Strategy and a Non-Executive Director of the Solicitors Regulation Authority. Vikas was awarded an MBE for Services to Business and the Economy in Her Majesty the Queen’s 2018 New Year’s Honours List and in 2021 became a Deputy Lieutenant of the Greater Manchester Lieutenancy. He is an Honorary Professor of Business at The Alliance Business School, University of Manchester and a Visiting Professor at the MIT Sloan Lisbon MBA.