Our Brains on Music: A Conversation with Daniel J. Levitin – Musician, Neuroscientist & Author.

In this interview, I speak to Daniel J. Levitin – an award-winning neuroscientist, musician, and best-selling author. His research encompasses music, the brain, health, productivity, and creativity.

Levitin has published more than 300 articles, in journals including Science, Nature, and PNAS, as well as The New Yorker, The Atlantic, and The Wall Street Journal. His research has been featured over 2000 times in the popular press, including over 60 appearances on NPR.

He is the author of four New York Times bestselling books: This Is Your Brain On Music, The World in Six Songs, The Organized Mind, and Successful Aging, as well as the international bestsellers A Field Guide to Lies and I Heard There Was A Secret Chord: Music As Medicine. Since 2017, he has been a member of an expert panel on Music As Medicine appointed by the White House Science Office and the National Institutes of Health. His TED talk on productivity and information overload is one of TED’s most highly viewed of all time. As a musician (tenor saxophone, guitar, vocals, and bass), he has performed with Mel Tormé, Renée Fleming, Sting, David Byrne, Rosanne Cash, Bobby McFerrin, and Victor Wooten. Levitin has produced and consulted on albums by artists including Stevie Wonder, Steely Dan, Joni Mitchell and on the films Good Will Hunting and Pulp Fiction and has been awarded 17 gold and platinum records. Records to which he contributed have sold in excess of 30 million copies.

Q: Why did music evolve to be so important for our species?

[Daniel Levitin]: I think the question itself is a little misleading. It suggests that music suddenly appeared out of nowhere, and that it instantly became important — but of course, evolution doesn’t work like that. You hear a similar kind of reasoning from the Christian right when they argue against evolution. The classic example is the eye: what good is half an eye? You can only see with a full eye, so how could it possibly evolve in steps? And the answer, of course, is that it began with tiny organisms that had nothing more than light-sensitive cells, which then became gradually more complex over eons.

I bring that up because I think music has a parallel story. We evolved to respond to sound long before humans ever existed. Reptiles could hear. And sound itself likely evolved as a way to detect danger — say, an intruder in a dark cave where vision was useless. That’s why we have two ears: binaural hearing lets us pinpoint where a threat is coming from. Later, organisms developed sensitivity to pitch, because pitch carried meaning. A low pitch often meant a larger animal, which required a different kind of reaction than a high-pitched call. Vervet monkeys, for example, have specific alarm cries — one for a bird of prey, another for a snake — and those cries are surprisingly musical. If one spots a snake, it gives the “snake call,” and everyone knows how to respond.

Fast-forward through the eons, and in humans the original function of sound was emotional communication. Pitch, timbre, and loudness are powerful signals of emotion — just as apes can soothe their infants with soft calls or signal aggression with harsh ones, and cats can purr or snarl. In that sense, human music probably began as a kind of proto-language, a way of expressing emotion before we had spoken words. It may even have sounded like those Charlie Brown cartoons, where the teacher’s voice comes through as garbled tones — you don’t understand the words, but you know from the sound that she’s angry at you.

Q: Why does music have the ability to trigger such deep emotional responses in us?

[Daniel Levitin]: So there are really two parts to this question: memory and emotion. They’re deeply connected, but it’s helpful to think about them separately for a moment. Music is often better than speech at conveying and understanding emotion, because music has a kind of openness and ambiguity to it. Words, on the other hand, tend to put things into boxes.

For example, if I try to describe the flower outside my window, I might say it’s purple. But that word instantly confines it — and in reality, it’s several shades of purple depending on how the light hits it. Words help a little, but they never quite capture the full thing. The same is true with emotions. If I say, I’m happy, but also a bit sad, nervous, winsome, and tired, those words are still boxes. But if I play you a passage of music — maybe something by Elgar — you might think, Yes, that’s exactly how I feel. Just listen to that, and you’ll understand.

That’s why we sing to each other, play music for each other, and listen in return. Music communicates emotion more directly. Because in real life, emotions rarely arrive in neat, isolated packages. They’re layered, overlapping, constantly shifting. Pure happiness, for instance, is rare; most of the time it’s tinged with something else. And music, more than words, captures that mixture.

[Vikas]: It seems music therefore may also act as a bit of a tagging system in our memory?

[Daniel Levitin]:  Yes! Music is one of several tagging systems for memory. We tend to remember the things in life that deliver the biggest emotional wallop. The death of a family member, the birth of a child, a wedding, an accident, an injury, or a major global event — the Syrians gassing their own people, the Twin Towers collapsing, Brexit, or the assassination of John F. Kennedy. (My friend Chris Matthews once pointed out that the word assassination really wasn’t part of American vocabulary before 1963 — not since Lincoln, a hundred years earlier. This country went a century without talking about assassinations. And then Kennedy’s death shattered that innocence. John Fogerty even wrote a powerful song, I Saw It on TV, about the collective innocence lost when that man was shot. We remember events like these because they’re deeply impactful.)

Music ties into memory in two ways. First, music itself can be tremendously impactful, so we remember it — and we also remember everything happening around us when we heard it. For instance, I still remember what was playing when I first heard about the Twin Towers: Blondie’s Heart of Glass. I’d heard that song hundreds of times before, but after that day, the association became permanent.

The second way music works as memory is through popular culture. Since the 1940s, we’ve had the “hit parade,” and during the Top 40 era — roughly 1960 to 1990 — radio stations literally played just forty songs. That was it. Occasionally they’d dip into the past, but only for a number one or number two hit, never something that had lingered at number forty. So you’d hear those songs intensively for a few weeks or months, and then suddenly not at all. That makes them highly effective memory tags, because they’re anchored to a very specific time and place.

And that’s different from, say, a national anthem like God Save the Queen. You hear it so many times in your life that it blurs together. It’s hard to tie it to any single moment. But a pop song from the Top 40? That’s often forever linked to one particular experience.

Q: Is there a universality to the emotional response music creates; for example, it seems that sad music can trigger sadness, irrespective of the culture where that music was created?

[Daniel Levitin]: Not universal. We’ve studied this, and it’s broader than you might think. You can usually pick up on the emotions in Renaissance or Baroque music, maybe even as far back as Gregorian chant. But you probably wouldn’t grasp the emotions the ancient Greeks were trying to express 2,500 years ago. And most listeners won’t understand the emotional language of Chinese opera.

For a long time there was a kind of cultural chauvinism: the idea that if you just played Mozart to people in the Amazon or to hunter-gatherer groups in the South Pacific, they’d instantly recognize its greatness, maybe even see God. But of course, they don’t experience it that way at all. They don’t necessarily know what’s supposed to be happy or sad. So while there are strong cultural norms shaping how we hear music, there are also universal regularities tied to brain architecture, which is remarkably similar across humans.

The brain can be thought of as a blank slate, yet it comes with certain built-in constraints and proclivities. For example, it automatically extracts pitch. Even if you’re not a musician, you can hear the difference between high and low, even if you can’t label it precisely. How finely you can distinguish pitch — say a semitone versus a tenth of a semitone — is partly innate and partly learned. The same goes for rhythm. And some features are truly universal: every culture recognizes the octave, because it’s grounded in physics, a simple 2:1 frequency ratio. Every culture also uses the perfect fifth, 3:2. And every culture divides the octave into a discrete set of steps for their scale, usually between five and eight, though how they divide it varies. Even within Western music, the system we use now is different from what was used in Bach’s time before the well-tempered clavier.
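To make the arithmetic concrete, here is a minimal sketch in Python (the language is my choice; the interview contains no code) of the ratios Levitin mentions: the 2:1 octave, the 3:2 perfect fifth, and the way twelve-tone equal temperament divides the octave into twelve equal steps. The reference pitch of A4 = 440 Hz is a conventional assumption, not something from the interview.

```python
# Minimal sketch of the frequency ratios discussed above.
# Assumes the conventional reference pitch A4 = 440 Hz.

BASE_HZ = 440.0  # A4

# Ratios grounded in physics, common across cultures
octave = BASE_HZ * 2            # 2:1 ratio -> 880.00 Hz
just_fifth = BASE_HZ * 3 / 2    # 3:2 ratio -> 660.00 Hz

# Twelve-tone equal temperament: the octave is split into 12 equal steps,
# so each semitone multiplies frequency by 2 ** (1 / 12).
semitone = 2 ** (1 / 12)
equal_tempered_fifth = BASE_HZ * semitone ** 7   # 7 semitones above A4

print(f"Octave above A4:       {octave:.2f} Hz")
print(f"Just (3:2) fifth:      {just_fifth:.2f} Hz")
print(f"Equal-tempered fifth:  {equal_tempered_fifth:.2f} Hz")
```

Running it shows the equal-tempered fifth landing at roughly 659.26 Hz, about two cents below the pure 3:2 fifth at 660 Hz: the kind of small, culture-specific choice in how the octave gets divided that Levitin alludes to.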

Within those boundaries, though, culture creates its own musical language. For us, shaped by our traditions, minor keys tend to sound sad or wistful, major keys tend to sound happy, and dominant sevenths push music forward in the way they do in the blues. Slow tempos usually feel calming, fast tempos stimulating. Those associations aren’t hardwired, but they’re built on a shared brain architecture that gives us the raw materials, with culture filling in the rest.

Q: It seems that our mind’s ability to decode music plays a big role in the enjoyment we find in it?

[Daniel Levitin]: It’s a big part of the story. Not the whole story, but a major piece of it is that the brain, if nothing else, is a giant pattern detector. It looks for order in chaos. And those patterns matter: they can mean the difference between life and death, between food and poison, friend or foe, predator or prey. We’re constantly trying to predict what’s coming next in the environment so we can adapt.

Music taps into that same machinery because it’s so highly structured. In our scale, there are only twelve notes, and some notes point more strongly to others. Certain chord progressions or melodic lines set up expectations in your mind, based on a lifetime of listening. The role of the composer and performer is to play with those expectations — to satisfy them just enough that you feel grounded, but to break them often enough to surprise you with something you couldn’t have predicted.

And when that happens — when your brain says, I thought this path would lead here, but it actually took me somewhere new — you experience pleasure. It’s a kind of learning: you’ve just discovered that the road can bend in ways you didn’t expect. That’s why I like the geolocation metaphor. Our ancestors survived because they could find their way to water, to shelter, to a mate. None of us descends from someone who failed at those things. And in order to motivate those survival behaviours, there had to be rewards built in.

That’s what music exploits: the reward circuitry that says, Yes, I learned something new. That was fun. I want to learn more.

Q:  Does this explain why music has potent healing potential?

[Daniel Levitin]: Reward is part of the story. Music can help us heal and achieve therapeutic outcomes by tapping into various neurochemical circuits that influence mood and behaviour. As an example, endogenous opioids are produced in, and mediate activity in, the ventral tegmental area, the nucleus accumbens, and more broadly the ventral striatum, all parts of the reward network. Ours was the first lab to show that listening to music releases the brain’s natural pain relievers—opioids. Relaxing music can modulate prolactin, a soothing, tranquilizing hormone. Music also releases dopamine, which helps us focus and motivates us to stay on task.

But music can also do something else. At different times, with different kinds of music, it can push us into what’s called the default mode network — essentially the daydreaming state. In that mode you’re not consciously directing your thoughts, but you can learn a lot about yourself and solve problems in unexpected ways. It’s a restorative, healthful state of consciousness.

We often resist letting our minds wander because we feel we have to stay locked onto what’s in front of us. And of course, if you’re driving or operating heavy machinery, that’s true. But in an office setting, when your attention starts to flag, that’s really your brain signalling that it needs a break. Too often we just plow ahead, which only depletes neural resources and leads to mistakes.

A short reset — even a 15-minute break to listen to music, meditate, or go for a walk — can be enormously beneficial, not only for cognition but for overall brain health.

Q: Is this the same for musicianship, the act of making music and playing instruments?

[Daniel Levitin]: With musicianship, you’re engaging with the minds of some of the greatest, most creative people who ever lived. If I sit down to play Chopin, I can place my fingers exactly where his once were, and in that moment I’m interacting with his work in a meaningful, active way — something I can’t get from listening alone.

Let me ask you something: if you had to name the single job that requires the most training in the world, the most difficult skill to master, what would you say? The answer is a master musician. More training goes into that than into becoming an astronaut, a rocket scientist, or even a brain surgeon. It requires far greater precision and far more control over incredibly detailed movements. A master musician makes hundreds of micro-adjustments every second — in pressure, in angle of attack, whether they’re singing or playing an instrument.

And what’s remarkable is that music activates every part of the brain we’ve mapped so far. It’s not just the left hemisphere, not just the right — it’s the whole brain. I don’t really like the metaphor that the brain is a muscle, but in some sense the comparison works: the more you use it, the more effective it becomes. Even when you’re using it for music, you’re strengthening neural systems you need for other things too — movement, prediction, eye-hand coordination.

Q: What are your views on AI making music?

[Daniel Levitin]: First, full disclosure: I’m a boomer. I grew up on the Beatles, and I was alive when they were releasing albums. I got to watch their growth, their experimentation, their evolution in real time — which is a very different experience from having all of it delivered to you at once. When the Rutles, Eric Idle’s parody group, came out with their Beatles spoofs, I was fascinated. They really nailed so many of the compositional and production elements that defined the Beatles’ sound. It was clever, it was funny. But as entertaining as it was, I never found it inspiring. I never think, I want to put on a Rutles song. Because at the end of the day, it’s just a copy.

And that gets to the second part. I’ve played around with some of the AI music generators. One of them came up with a pretty convincing Steely Dan–style track. But there was something missing. It didn’t have the imagination, the spark, the ingenuity of the real Steely Dan — because how could it? All the AI has to work with are the things that already exist. It’s essentially making a mash-up of those elements. A collage. And while collages can be interesting — some even hang in galleries — they rarely inspire people in the way the Mona Lisa does, or Van Gogh’s Sunflowers, or Georgia O’Keeffe’s flowers. What set Van Gogh’s sunflowers apart, and O’Keeffe’s flowers too, is that each artist was reaching for something new, not remixing their own past work.

So when I say something is missing, that’s what I mean. It’s the difference between real artistic intent and an imitation. It’s like walking into a cheap hotel: sure, there’s art on the wall, and I’m glad it’s there, but I’m not sitting in that room contemplating the meaning of life. That’s what AI music feels like right now. It’s sonic wallpaper. Nothing more.

Q: It seems the act of it being human created is essential as part of our understanding?

[Daniel Levitin]: … the human quality in art often comes from limitations.

There’s a great American songwriter, John Hartford, who once said about himself, I have a style, and my style’s based on my limitations. That rings true across the arts. Take Coltrane: to me, his style comes from the fact that his musical ideas were too vast to fit through the narrow bore of the saxophone. He was constrained by the instrument, and what you hear in A Love Supreme and his other great works is that struggle — a human being pushing and fighting against limitation.

The Beatles and the Stones are another example. The Stones wanted to be blues musicians in the mold of Howlin’ Wolf and Muddy Waters, but they couldn’t quite pull it off. Instead, out of those limitations, they stumbled onto a completely different, unmistakable sound. The Beatles, the same story. They wanted to sound like Chuck Berry, Buddy Holly, Elvis, Little Richard — and they failed. They never managed to capture that exactly, but in failing, they created something else entirely.

That’s the paradox of creativity: their ideas were larger than their abilities, and that tension gave us something new. And I think we instinctively recognize that struggle in any art form, whether or not we know anything about the medium. We can feel it in painting, sculpture, literature, music.

The opposite case would be tragic — if your ideas were smaller than your abilities. Then all you’d have is a technician.

Q: Do we need to rethink the role of music in our lives?

[Daniel Levitin]: I don’t really like using the word should. I’m more of a live-and-let-live type: people are different, and they ought to be free to choose their religion, their partners, their passions. I can’t say blondes are better than brunettes, or that music is inherently better than other things. But what you said is indisputable — music has taken on a more secondary role in our lives than it had even a hundred years ago, and certainly a far smaller one than it had five hundred years ago.

In hunter-gatherer societies, music is woven into the fabric of daily life. People sing and dance together; everyone participates. No one is singled out as “the singer,” and no one opts out by just mouthing the words. If they’re singing a Congolese song, everybody sings. That’s unthinkable in our current culture.

But the deeper issue you raise goes beyond music. As a society, we’ve lost our appreciation for the benefits of sustained engagement with anything. The Economist recently ran a piece in its Bartleby column about the attention economy. The author noted, somewhat ironically, that most readers wouldn’t even finish the one-page article because their phones keep feeding them an endless stream of shallow but dopamine-rewarding content.

This erosion of focus shows up everywhere. In my day, everyone read War and Peace. Now, no high school or college students are tackling that. Fifteen years ago, my undergraduates were already complaining: You want us to read 25 minutes a night? That’s too much. Social and digital media have fractured our attention spans. But we need to reclaim them — and music is one way to do that.

Long-form music in particular — classical, jazz, or concept albums like Quadrophenia or The Dark Side of the Moon — invites us to listen from start to finish. Unlike a novel, which requires constant effort to keep your eyes moving, music carries its own tempo. It moves forward whether or not your mind wanders, and if you let yourself sink into it, it pulls you along. It sets up patterns of prediction, reward, and surprise that train your mind to sustain focus in a way that even a great book can’t, unless you’re listening to it as an audiobook.

And that matters, because the world is facing problems of unprecedented scale. Economic inequality, intranational and international aggression, rising bigotry, government corruption, hunger, homelessness, climate change, even the renewed threat of nuclear war — these challenges won’t be solved by someone thinking about them for ten seconds before checking their social feed. They require sustained, concentrated effort.

And music can help us train for that.

About the Author

Vikas Shah MBE DL is an entrepreneur, investor & philanthropist. He is CEO of Swiscot Group alongside being a venture-investor in a number of businesses internationally. He is a Non-Executive Board Member of the UK Government’s Department for Business, Energy & Industrial Strategy and a Non-Executive Director of the Solicitors Regulation Authority. Vikas was awarded an MBE for Services to Business and the Economy in Her Majesty the Queen’s 2018 New Year’s Honours List and in 2021 became a Deputy Lieutenant of the Greater Manchester Lieutenancy. He is an Honorary Professor of Business at The Alliance Business School, University of Manchester, and a Visiting Professor at the MIT Sloan Lisbon MBA.