Perhaps the most terrifying instrument ever devised, the death whistle produces a sound that has been described as the “scream of 1,000 corpses,” and I can’t help but agree. Listening to the instrument, I was immediately reminded of the screams played in haunted houses crossed with the sound of rushing wind. Records and archaeological evidence indicate the instrument may have been used in ritual sacrifices or funeral ceremonies, though some believe it was a tool for psychological warfare in battle. Engineer Roberto Velázquez Cabrera has made a good deal of information on these singular instruments available to English speakers, and runs an amazing website detailing all sorts of Mexican aerophones. Functionally, the instrument’s behavior is made clear by its cross section: air blown through the tubular mouthpiece generates turbulent noise upon meeting the sharp edge of the primary resonator; the resonator shapes that turbulence into the whistle’s harsh tone, which is then filtered by the secondary resonator opening at the bottom of the whistle. By cupping their hands around the bottom of the instrument, players can adjust the filtering frequencies, creating the wavering death-scream sound. While this truly amazing instrument may be a relic of the past, modern reproductions are available from the Oregon Flute Store for $100. Of course, it’s a bit morbid for a Christmas present, but it seems like an essential tool for an acoustician on Halloween! (News article by Ancient Origins; original research and photo credit: Roberto Velázquez Cabrera)
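That noise-plus-resonator signal chain is easy to caricature in code. The sketch below, under loudly stated assumptions (the ~2 kHz resonance, the ~4 Hz waver, and all bandwidths are illustrative guesses, not measurements of a real whistle), synthesizes edge-tone turbulence as white noise, passes it through a resonant bandpass standing in for the primary resonator, and adds a slow amplitude waver standing in for the player’s cupped hands:

```python
import numpy as np
from scipy.signal import butter, lfilter

fs = 44100                                # sample rate, Hz
t = np.arange(fs) / fs                    # one second of audio
rng = np.random.default_rng(0)
noise = rng.standard_normal(fs)           # turbulence at the sharp edge

# "Primary resonator": a resonant band around ~2 kHz (assumed value)
b, a = butter(2, [1500, 2500], btype="bandpass", fs=fs)
scream = lfilter(b, a, noise)

# "Cupped hands" over the secondary resonator: a slow ~4 Hz waver
waver = 0.6 + 0.4 * np.sin(2 * np.pi * 4 * t)
scream *= waver
scream /= np.abs(scream).max()            # normalize to [-1, 1]
```

Writing `scream` to a WAV file at 44.1 kHz gives a wavering, hissing shriek — a crude impression of the mechanism rather than a model of the instrument.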


Every machine has its own acoustic signature - a precise frequency that indicates whether that machine is operating at peak performance. GE engineers monitor and record these sounds to perform real-time diagnostics on airplane engines, locomotives, power turbines, and medical equipment. Musician Matthew Dear and GE Acoustics Engineer Andrew Gorton teamed up to collect and compose thousands of audio emissions from the world’s most powerful machines. The result is an original track of music titled “Drop Science.” Download the full track on our SoundCloud.


Veritasium’s new video has an awesome demonstration featuring acoustics, standing waves, and combustion. It’s a two-dimensional take on the classic Rubens’ tube concept, in which flammable gas is introduced into a chamber with a series of holes drilled across the top. Igniting the gas produces an array of flames, which is not especially interesting in itself, until a sound is added. When a note is played in the tube, the gas inside vibrates and, with the right geometry and frequency, can resonate, forming standing waves. The motion of the gas and the shape of the acoustic waves are visible in the flames. Extended into two dimensions, this creates some very cool effects. (Video credit: Veritasium; via Ryan A.; submitted by jshoer)
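The flame pattern follows directly from the standing wave: adjacent flame peaks sit half a wavelength apart, so their spacing depends only on the driving frequency and the speed of sound in the gas. A back-of-the-envelope sketch (the propane sound speed below is an approximate room-temperature figure, not a measurement from the video):

```python
C_PROPANE = 258.0  # speed of sound in propane, m/s (approx., ~20 °C)

def flame_peak_spacing(freq_hz, c=C_PROPANE):
    """Distance between adjacent flame peaks: half a wavelength, in m."""
    wavelength = c / freq_hz
    return wavelength / 2.0

spacing = flame_peak_spacing(440.0)  # a 440 Hz tone: peaks ~0.29 m apart
```

Doubling the frequency halves the spacing, which is why higher notes crowd more flame peaks into the same tube.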

Birds aren’t the only creatures that have evolved to mimic sounds. According to a press release from the Acoustical Society of America, some species of butterflies, such as Maculinea, have evolved to mimic the acoustic transmissions of ants. Ant colonies, it turns out, coordinate their efforts through acoustic signals that help dictate how workers in the nest will act. By mimicking these sounds, butterfly larvae can enjoy the safety of the ant nest without being kicked out by the rightful tenants. In fact, the workers will even mistake the larvae for small queens and feed them more than they feed their own offspring! (Article: Nature World News; photo from Wikimedia by PJC&Co; hat tip: @IoAUK)

Owls have a lot going for them: good looks, great eyesight, and the power of nearly silent flight.

That last characteristic got engineers thinking: what if it were possible to impart Mother Nature’s design for silent owl wings to things like wind turbines, allowing them to operate quietly at higher speeds and generate megawatts of additional power?

Researchers at the University of Cambridge say they’ve taken some good steps toward doing just that, announcing this week their design for a new coating material that mimics the complex structure of an owl’s wing.

“No other bird has this sort of intricate wing structure,” said lead researcher Nigel Peake, in a news release. “Much of the noise caused by a wing - whether it’s attached to a bird, a plane or a fan - originates at the trailing edge where the air passing over the wing surface is turbulent. The structure of an owl’s wing serves to reduce noise by smoothing the passage of air as it passes over the wing - scattering the sound so their prey can’t hear them coming.”

To replicate the trailing-edge structure, scientists looked at a variety of designs, including covering wind turbines with a material similar to that used in wedding veils. They also created a 3-D printed plastic material which, in tests, reduced noise generated by turbine blades by 10 decibels.

The findings were presented at the 21st American Institute of Aeronautics and Astronautics (AIAA) Aeroacoustics Conference in Dallas.

(Image Credit: Wikimedia Commons)

Horse Skull Disco


If you’re looking to install a new sound system in your house, consider burying a horse skull in the floor.

According to the Irish Archaeological Consultancy, the widespread discovery of “buried horse skulls within medieval and early modern clay floors” has led to the speculation that they might have been placed there for acoustic reasons—in other words, “skulls were placed under floors to create an echo,” we read.

Ethnographic data from Ireland, Britain and Southern Scandinavia attests to this practice in relation to floors that were in use for dancing. The voids within the skull cavities would have produced a particular sound underfoot. The acoustic skulls were also placed in churches, houses and, in Scandinavia especially, in threshing-barns… It was considered important that the sound of threshing carried far across the land.

They were osteological subwoofers, bringing the bass to medieval villages.

It’s hard to believe, but this was apparently a common practice: “the retrieval of horse skulls from clay floors, beneath flagstones and within niches in house foundations, is a reasonably widespread phenomenon. This practice is well attested on a wider European scale,” as well, even though the ultimate explanation for its occurrence is still open to debate (the Irish Archaeological Consultancy post describes other interpretations, as well).

Either way, it’s interesting to wonder if the acoustic use of horse skulls as resonating gourds in medieval architectural design might have any implications for how natural history museums could reimagine their own internal sound profiles—that is, if the vastly increased reverberation space presented by skulls and animal skeletons could be deliberately cultivated to affect what a museum’s interior sounds like.

Like David Byrne’s well-known project Playing the Building—"a sound installation in which the infrastructure, the physical plant of the building, is converted into a giant musical instrument"—you could subtly instrumentalize the bones on display for the world’s most macabre architectural acoustics.

Carnivorous plant’s sound echoes draw bats in

There is a plant in Borneo that literally has a built-in bat signal. Nepenthes hemsleyana is a Paleotropical carnivorous pitcher plant that provides a safe place for bats to roost; it’s cool and free of parasites and other bats. The bat, in turn, helps the plant by providing extra nitrogen through its feces. But how do the bats find the plant in the first place? According to a new study, published online today in Current Biology, N. hemsleyana’s tuba-like shape features a long, reflective structure that extends back into the cylinder of the plant. As the bats search for a place to roost, the structure acts as an acoustic flag, bouncing back the ultrasonic calls the bats emit to navigate (a process known as echolocation) and waving the bats down to a comfortable home.

Many male frog and toad species sing during warmer months to attract mates. Some, like the American toad in the photo above, can be heard for an impressive distance. Here’s a video of an American toad in action. To sing, these amphibians close their mouth and nostrils, then force air from their lungs past their larynx and into a vocal sac. As with human sound-making, forcing air past the frog’s larynx vibrates its vocal cords and generates noise. That noise resonates in the vocal sac, amplifying the sound and driving the ripples seen in the photo.  (Image credit: D. Kaneski; submitted by romannumeralfive)
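One crude way to see why an inflated sac amplifies the call is to treat it as a Helmholtz resonator. This is a heavy simplification (real vocal sacs are elastic, not rigid cavities), and every dimension below is an illustrative guess, chosen only to show that sac-sized cavities resonate in the low-kHz range where many toad calls sit:

```python
import math

C_AIR = 343.0  # speed of sound in air, m/s (~20 °C)

def helmholtz_hz(neck_area_m2, neck_len_m, volume_m3, c=C_AIR):
    """Helmholtz resonance: f = (c / 2*pi) * sqrt(A / (V * L))."""
    return (c / (2.0 * math.pi)) * math.sqrt(
        neck_area_m2 / (volume_m3 * neck_len_m))

# Guessed sac geometry: 1 cm^2 opening, 5 mm neck, 10 cm^3 cavity
f = helmholtz_hz(neck_area_m2=1e-4, neck_len_m=5e-3, volume_m3=1e-5)
```

With these made-up numbers the resonance lands around a few kHz — an order-of-magnitude check, not a model of any particular species.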

How a Human Scream Uniquely Activates the Fear Response in Your Brain

We know human screams are jarring. They’re loud, occasionally shrill, and tend to make us feel stressed, or even fearful. What’s unclear is why they elicit anxiety. But a new study suggests this response may have something to do with the acoustic quality of human screams, and how they trigger the brain’s fear response.

According to a new study headed by David Poeppel from New York University and his postdoc Luc Arnal, now at the University of Geneva, this has something to do with a unique property of sound, called roughness, that activates the brain’s fear circuitry within the amygdala. The details of their work appear in Current Biology.

Rough Sounds

“Roughness refers to fast sound changes in loudness,” Arnal told io9. “Normal speech for instance only has slow differences in loudness—between 4 and 5 Hz—which is not rough and basically corresponds to the syllabic rate. Screams, on the other hand, modulate very fast—between 30 and 150 Hz—which is rough.”

Arnal adds that the strength (low vs high) of roughness corresponds to the amplitude, or volume, of these fast changes. Low roughness corresponds to weak loudness changes whereas high roughness corresponds to high loudness changes.
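The distinction is easy to hear for yourself with synthesized amplitude modulation. The sketch below (the 500 Hz carrier and the specific modulation rates are my choices for illustration, sitting inside the 4–5 Hz “speech” and 30–150 Hz “rough” ranges the study describes) builds a speech-like and a scream-like envelope over the same carrier:

```python
import numpy as np

fs = 16000                                  # sample rate, Hz
t = np.arange(fs) / fs                      # one second

def am_tone(mod_hz, carrier_hz=500.0):
    """A carrier tone whose loudness is modulated at mod_hz."""
    envelope = 0.5 * (1 + np.sin(2 * np.pi * mod_hz * t))
    return envelope * np.sin(2 * np.pi * carrier_hz * t)

speech_like = am_tone(4.0)    # slow, smooth loudness changes
scream_like = am_tone(70.0)   # fast changes in the "rough" range
```

Played back, the 4 Hz version pulses gently while the 70 Hz version takes on the harsh, buzzing quality the researchers call roughness.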

(Credit: Luc Arnal)

“This kind of sound could be compared to a strobe light in the auditory domain,” says Arnal. “Everyone is familiar with those lights that flash super fast in clubs for instance. Screams could be defined as strobophones, since they are modulating super fast in an analogous way in the auditory domain.”

As their fascinating experiment shows, these rough, strobe-like sounds appear to have a curious, and possibly adaptive, effect on the human psyche.

An Evolved Response

Poeppel and Arnal used recordings taken from YouTube videos, popular films, and volunteer screamers who were recorded in the lab’s sound booth. Then, in a series of experiments involving fMRI scanners, 16 participants listened to sounds of various degrees of roughness. The researchers used three different categories of sounds that were either neutral or unpleasant, namely: human vocalizations (normal voices and screams); artificial sounds (like instruments and alarms); and musical intervals (both consonant and dissonant sounds). The researchers then identified brain regions involved in processing unpleasantness by comparing responses to unpleasant sounds against responses to neutral sounds.

Results showed that unpleasant sounds induced larger hemodynamic responses, i.e. the rate of blood flow, in the bilateral anterior amygdala and primary auditory cortices. The amygdala is a brain structure crucial for regulating emotions.

fMRI measurement of roughness and screams (Credit: Arnal et al., 2015)

“The rougher the sound was, and the more scary it was rated, the more effectively it activated the amygdala,” Poeppel explained to io9.

Fascinatingly, the researchers found that the amygdala, and not the auditory cortex, is sensitive to temporal modulations in the roughness range.

Their results suggest that rough sounds specifically target neural circuits involved in fear and danger processing. This is the first direct evidence in support of the idea that roughness is an acoustic attribute that triggers adapted reactions to danger. The researchers speculate that this confers an evolutionary advantage: rough vocalizations recruit dedicated neural processes “that prioritize fast reaction to danger over detailed contextual evaluation.” In other words, a rough sound can trigger your fear response more directly, and therefore faster, than something you witness with your eyes and process in your mind.

Both researchers were asked how they could be certain that other aspects of the sounds weren’t triggering the fear response, such as spoken words or other factors like context.

“We are very pedantic researchers,” replied Poeppel. “We matched all the other sounds, in fact all sounds for duration, for loudness, for many of the other features we can control. We try our damnedest to make sure that the one remaining factor is in fact roughness.”

To which Arnal added: “There was no word spoken. Only syllables and artificial sounds were used in that study. We also controlled for other aspects (pitch frequency, valence of the sound) when analyzing the data and found that the amygdala specifically responded to roughness.”

Interestingly, the researchers discovered that rough sounds don’t necessarily have to be uttered by humans to elicit the response. The participants exhibited similar responses to alarm signals, such as car alarms and house alarms.


Researchers uncover why there is a mapping between pitch and elevation

Have you ever wondered why most natural languages invariably use the same spatial attributes – high versus low – to describe auditory pitch? Or why, throughout the history of musical notation, high notes have been represented high on the staff? According to a team of neuroscientists from Bielefeld University, the Max Planck Institute for Biological Cybernetics in Tübingen and the Bernstein Center Tübingen, high-pitched sounds feel ‘high’ because, in our daily lives, sounds coming from high elevations are indeed more likely to be higher in pitch. The study has just appeared in the journal PNAS.

Dr. Cesare Parise and colleagues set out to investigate the origins of the mapping between sound frequency and spatial elevation by combining three separate lines of evidence. First of all, they recorded and analyzed a large sample of sounds from the natural environment and found that high frequency sounds are more likely to originate from high positions in space. Next, they analyzed the filtering of the human outer ear and found that, due to the convoluted shape of the outer ear – the pinna – sounds coming from high positions in space are filtered in such a way that more energy remains for higher pitched sounds. Finally, they asked humans in a behavioural experiment to localize sounds with different frequency and found that high frequency sounds were systematically perceived as coming from higher positions in space.

The results from these three lines of evidence were highly convergent, suggesting that phenomena as diverse as the acoustics of the human ear, the universal use of spatial terms for describing pitch, and the convention of writing high notes higher in musical notation ultimately reflect the adaptation of human hearing to the statistics of natural auditory scenes. ‘These results are especially fascinating, because they do not just explain the origin of the mapping between frequency and elevation,’ says Parise. ‘They also suggest that the very shape of the human ear might have evolved to mirror the acoustic properties of the natural environment. What is more, these findings are highly applicable and provide valuable guidelines for using pitch to develop more effective 3D audio technologies, such as sonification-based sensory substitution devices, sensory prostheses, and more immersive virtual auditory environments.’

The mapping between pitch and elevation has often been considered to be metaphorical, and cross-sensory correspondences have been theorized to be the basis for language development. The present findings demonstrate that, at least in the case of the mapping between pitch and elevation, such a metaphorical mapping is indeed embodied and based on the statistics of the environment, hence raising the intriguing hypothesis that language itself might have been influenced by a set of statistical mappings between naturally occurring sensory signals.

Besides the mapping between pitch and elevation, human perception, cognition, and action are laced with seemingly arbitrary correspondences, such as yellow-reddish colors being associated with warm temperatures, or sour foods tasting ‘sharp.’ This study suggests that many of these seemingly arbitrary mappings might in fact reflect statistical regularities found in the natural environment.

Objects that we think of as strong or solid in our everyday experience appear almost liquid while experiencing acoustic deformation. The above clip, taken from an episode of Discovery Channel’s Time Warp, shows how a cymbal being hit by a drumstick bends like a piece of soft rubber in response to the initial impact. The disturbance then travels to the other side of the cymbal, setting up the natural vibrations that give the cymbal its distinctive sound.