What researchers found: The brains of jazz musicians who are engaged with other musicians in spontaneous improvisation show robust activation in the same brain areas traditionally associated with spoken language and syntax. In other words, improvisational jazz conversations “take root in the brain as a language,” Limb said.

“It makes perfect sense,” said Ken Schaphorst, chair of the Jazz Studies Department at the New England Conservatory in Boston. “I improvise with words all the time—like I am right now—and jazz improvisation is really identical in terms of the way it feels. Though it’s difficult to get to the point where you’re comfortable enough with music as a language where you can speak freely.”

Along with the limitations of musical ability, another key difference between jazz conversation and spoken conversation emerged in Limb’s experiment. During a spoken conversation, the brain is busy processing the structure and syntax of language, as well as the semantics, or meaning, of the words. But Limb and his colleagues found that brain areas linked to meaning shut down during improvisational jazz interactions. In other words, this kind of music is syntactic but not semantic.

“Music communication, we know it means something to the listener, but that meaning can’t really be described,” Limb said. “It doesn’t have propositional elements or specificity of meaning in the same way a word does. So a famous bit of music—Beethoven’s dun dun dun duuuun—we might hear that and think it means something but nobody could agree what it means.”

So if music is a language without set meaning, what does that tell us about the nature of music?

“The answer to that probably lies more in figuring out what the nature of language is than what the nature of music is,” said Mike Pope, a Baltimore-based pianist and bassist who participated in the study. “When you’re talking about something, you’re not thinking about how your mouth is moving and you’re not thinking about how the words are spelled and you’re not thinking about grammar. With music, it’s the same thing.”

Pope says even improvisational jazz is built around a framework that musicians understand. This structure is similar to the way we use certain rules in spoken conversation to help us intuit when it’s time to say “nice to meet you,” or how to read social cues that signal an encounter is drawing to a close.

“In most jazz performances, things are not nearly as random as people would think,” Pope said. “If I want to be a good bass player and I want to fill the role, idiomatically and functionally, that a bass player’s supposed to fulfill, I have to act within the confines of certain acceptable parameters. I have to make sure I’m playing roots on the downbeat every time the chord changes. It’s all got to swing.”

But Limb believes his finding suggests something even bigger, something that gets at the heart of an ongoing debate in his field about what the human auditory system is for in the first place. Many scientists believe that language is what makes us human, but the brain is wired to process acoustic systems that are far more complicated than speech, Limb says.

“If the brain evolved for the purpose of speech, it’s odd that it evolved to a capacity way beyond speech,” Limb said. “So a brain that evolved to handle musical communication—there has to be a relationship between the two. I have reason to suspect that the auditory brain may have been designed to hear music and speech is a happy byproduct.”