acoustics

Perhaps the most terrifying instrument ever devised, the Death Whistle produces a sound that has been described as the “scream of 1000 corpses,” and I can’t help but agree. Listening to the instrument, I was immediately reminded of the screams played in haunted houses crossed with the sound of rushing wind. Records and archaeological evidence indicate the instrument may have been used in ritual sacrifices or funeral ceremonies, though some believe it was a tool for psychological warfare in battle. Engineer Roberto Velázquez Cabrera has made a great deal of information on these singular instruments available to English speakers, and runs an amazing website detailing all sorts of Mexican aerophones. Functionally, the instrument’s behavior is made clear by its cross section. Air blown through the tubular mouthpiece generates turbulent noise upon meeting the sharp edge of the primary resonator; the resonator produces the harsh tone of the whistle, which is then filtered by the secondary resonator that opens on the bottom of the whistle. By cupping their hands around the bottom of the instrument, players can adjust the filtering frequencies, creating the wavering death-scream sound. While this truly amazing instrument may be a relic of the past, modern reproductions are available from the Oregon Flute Store for $100. Of course, it’s a bit morbid for a Christmas present, but it seems like an essential tool for an acoustician on Halloween! (News article: Ancient Origins; original research and photo credit: Roberto Velázquez Cabrera)

[Embedded YouTube video]

Every machine has its own acoustic signature - a precise frequency that indicates whether that machine is operating at peak performance. GE engineers monitor and record these sounds to perform real-time diagnostics on airplane engines, locomotives, power turbines, and medical equipment. Musician Matthew Dear and GE Acoustics Engineer Andrew Gorton teamed up to collect and compose thousands of audio emissions from the world’s most powerful machines. The result is an original track of music titled “Drop Science.” Download the full track on our SoundCloud.

[Embedded YouTube video]

Veritasium’s new video has an awesome demonstration featuring acoustics, standing waves, and combustion. It’s a two-dimensional take on the classic Rubens’ tube concept, in which flammable gas is introduced into a chamber with a series of holes drilled across the top. Igniting the gas produces an array of flames, which is not especially interesting in itself, until a sound is added. When a note is played in the tube, the gas inside vibrates and, with the right geometry and frequency, can resonate, forming standing waves. The motion of the gas and the shape of the acoustic waves are visible in the flames. Extended into two dimensions, this creates some very cool effects. (Video credit: Veritasium; via Ryan A.; submitted by jshoer)
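The resonant frequencies behind those standing waves follow from simple pipe acoustics. Here's a minimal sketch (my own illustration, with assumed values for the tube length and for the speed of sound in the flammable gas):

```python
# Standing-wave resonances of a Rubens' tube, modeled as a pipe
# closed at both ends: f_n = n * c / (2 * L).

def resonant_frequencies(length_m, speed_of_sound, modes=4):
    """Return the first few standing-wave frequencies (Hz)."""
    return [n * speed_of_sound / (2 * length_m) for n in range(1, modes + 1)]

# Assumed values: a 1 m tube and a speed of sound of roughly
# 250 m/s (in the ballpark for propane at room temperature).
for n, f in enumerate(resonant_frequencies(1.0, 250.0), start=1):
    print(f"mode {n}: {f:.0f} Hz")
```

Playing a tone near one of these frequencies is what makes the flame pattern snap into a stable wave shape.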

Rad acoustics, [listen]

Acoustic versions of some of my favorite songs.

Sweater Weather- The Neighbourhood // Happily- One Direction // Lies- Marina And The Diamonds // Demons- Imagine Dragons // The A Team- Ed Sheeran // Chocolate- The 1975 // Boyfriend- Justin Bieber // What You Know- Two Door Cinema Club // Born To Die- Lana Del Rey // She Moves In Her Own Way- The Kooks // 22- Taylor Swift // Creep- Radiohead // Fluorescent Adolescent- Arctic Monkeys // More Than This- One Direction // Jump Into The Fog- The Wombats // Wires- The Neighbourhood // Primadonna- Marina And The Diamonds // Treacherous- Taylor Swift // Fake Plastic Trees- Radiohead // I need Your Love- Ellie Goulding // Bad Blood- Bastille // Changing Of The Seasons- TDCC // Lego House- Ed Sheeran // Chasing Cars- Snow Patrol // 

Birds aren’t the only creatures that have evolved to mimic sounds. According to a press release from the Acoustical Society of America, some species of butterflies, Maculinea for example, have evolved to mimic the acoustic transmissions of ants. Ant colonies, it turns out, coordinate their efforts through acoustic signals that help to dictate how workers in the nest will act. By mimicking these sounds, butterfly larvae can enjoy the safety of the ant nest without being kicked out by the rightful tenants. In fact, the workers will even mistake the larvae for small queens and feed them more than they feed their own offspring! (Article: Nature World News; photo from Wikimedia by PJC&Co; hat tip: @IoAUK)

The acoustic signatures of many animals contain features we humans cannot appreciate, given the limited range of frequencies we can hear. In fluid dynamics and many other fields, scientists and engineers have to find ways to analyze and decompose time-series data—like acoustic pressure signals—into useful quantities. Mark Fischer uses one tool for such analysis, a wavelet transform, to turn the calls of whales, birds, and insects into the colorful snapshots seen here. Wavelet transforms are somewhat similar to Fourier transforms but represent a signal with a series of wavelets rather than sinusoids. They’re also widely used for data compression. (Image credits: M. Fischer/Aguasonic Acoustics; via DailyMail)
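For readers curious about the math, here is a minimal sketch of a continuous wavelet transform in Python. The function names and the test signal are my own illustration, not Fischer's actual pipeline: a complex Morlet wavelet is correlated with the signal at each analysis frequency, producing the kind of time-frequency map these images are built from.

```python
import numpy as np

def morlet(t, freq, width=6.0):
    """Complex Morlet wavelet centered at `freq` Hz."""
    sigma = width / (2 * np.pi * freq)
    return np.exp(2j * np.pi * freq * t) * np.exp(-t**2 / (2 * sigma**2))

def cwt(signal, fs, freqs):
    """Continuous wavelet transform: one row of coefficients per frequency."""
    n = len(signal)
    t = (np.arange(n) - n // 2) / fs  # wavelet time axis, centered at zero
    return np.array([
        # correlate the signal with each wavelet (conjugate-reversed convolve)
        np.convolve(signal, np.conj(morlet(t, f))[::-1], mode="same")
        for f in freqs
    ])

# Test signal: a 100 Hz tone for half a second, then a 300 Hz tone.
fs = 2000
t = np.arange(0, 1.0, 1 / fs)
sig = np.where(t < 0.5, np.sin(2 * np.pi * 100 * t), np.sin(2 * np.pi * 300 * t))
coeffs = cwt(sig, fs, freqs=np.array([100.0, 300.0]))
# |coeffs| is large in the 100 Hz row early on and in the 300 Hz row later,
# which is exactly the time-localization a plain Fourier transform lacks.
```

Plotting the magnitude of `coeffs` as an image gives a scalogram, the raw material for visualizations like Fischer's.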

Unplugged || A mix of acoustic songs

1. Somewhere In Neverland - All Time Low // 2. Fully Alive - Flyleaf // 3. I Miss You - Blink-182 // 4. What A Catch, Donnie - Fall Out Boy // 5. Some Nights - Fun. // 6. Time Is Running Out - Muse // 7. Helena - My Chemical Romance // 8. Lake Of Fire - Nirvana // 9. This Is Gospel - Panic! At The Disco // 10. Hold On ‘Till May - Pierce The Veil // 11. Brick By Boring Brick - Paramore // 12. Stay With Me - You Me At Six // 13. Let It Land - Tonight Alive // 14. Scene Five: With Ears To See and Eyes To Hear - Sleeping With Sirens // 15. Can’t Help Falling in Love - twenty one pilots // 16. America’s Suitehearts - Fall Out Boy // 17. Kiss And Tell - You Me At Six // 18. Still Into You - Paramore // 19. Six Feet Under The Stars - All Time Low

Listen here

Sound waves often interact with many objects before we hear them. Understanding and controlling those interactions is a major part of acoustic engineering. The animations above show shock waves—sound—from a trumpet interacting with different objects. The sound is made visible using the schlieren optical technique, allowing us to observe the reflection, absorption, and transmission of sound as it hits different surfaces. Fiberboard, for example, is highly reflective, redirecting the sound waves along a new path without a lot of damping. In contrast, the metal grid is only weakly reflective and a small portion of the incoming sound wave is transmitted through the grid. To see more examples, check out the full video, and, if you want to learn more about acoustics, check out Listen To This Noise.  (Image credits: C. Echeverria et al., source video)
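The reflection seen in the schlieren footage can be made quantitative: at normal incidence, the fraction of a sound wave that reflects at an interface depends on the mismatch in characteristic acoustic impedance (density times sound speed) between the two media. A small sketch with my own example values, using the standard formula R = (Z2 - Z1)/(Z2 + Z1):

```python
def pressure_reflection(z1, z2):
    """Pressure reflection coefficient at normal incidence between
    media with characteristic impedances z1 (incident side) and z2."""
    return (z2 - z1) / (z2 + z1)

# Characteristic impedances Z = rho * c, in rayls (Pa·s/m):
Z_AIR = 1.2 * 343        # air: ~412 rayls
Z_WATER = 1000 * 1480    # water: ~1.48 million rayls

r = pressure_reflection(Z_AIR, Z_WATER)
energy_reflected = r**2  # energy goes as the square of the coefficient
print(f"{energy_reflected:.1%} of the incident energy is reflected")
# prints "99.9% of the incident energy is reflected"
```

The huge air-water mismatch is why almost no airborne sound makes it into water; the fiberboard versus metal-grid behavior in the video is the same physics with smaller contrasts.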

Researchers uncover why there is a mapping between pitch and elevation

Have you ever wondered why most natural languages invariably use the same spatial attributes – high versus low – to describe auditory pitch? Or why, throughout the history of musical notation, high notes have been represented high on the staff? According to a team of neuroscientists from Bielefeld University, the Max Planck Institute for Biological Cybernetics in Tübingen and the Bernstein Center Tübingen, high-pitched sounds feel ‘high’ because, in our daily lives, sounds coming from high elevations are indeed more likely to be higher in pitch. This study has just appeared in the science journal PNAS.

Dr. Cesare Parise and colleagues set out to investigate the origins of the mapping between sound frequency and spatial elevation by combining three separate lines of evidence. First of all, they recorded and analyzed a large sample of sounds from the natural environment and found that high frequency sounds are more likely to originate from high positions in space. Next, they analyzed the filtering of the human outer ear and found that, due to the convoluted shape of the outer ear – the pinna – sounds coming from high positions in space are filtered in such a way that more energy remains for higher pitched sounds. Finally, they asked humans in a behavioural experiment to localize sounds with different frequency and found that high frequency sounds were systematically perceived as coming from higher positions in space.

The results from these three lines of evidence were highly convergent, suggesting that phenomena as diverse as the acoustics of the human ear, the universal use of spatial terms for describing pitch, and the placement of high notes higher in musical notation all ultimately reflect the adaptation of human hearing to the statistics of natural auditory scenes. ‘These results are especially fascinating, because they do not just explain the origin of the mapping between frequency and elevation,’ says Parise, ‘they also suggest that the very shape of the human ear might have evolved to mirror the acoustic properties of the natural environment. What is more, these findings are highly applicable and provide valuable guidelines for using pitch to develop more effective 3D audio technologies, such as sonification-based sensory substitution devices, sensory prostheses, and more immersive virtual auditory environments.’

The mapping between pitch and elevation has often been considered to be metaphorical, and cross-sensory correspondences have been theorized to be the basis for language development. The present findings demonstrate that, at least in the case of the mapping between pitch and elevation, such a metaphorical mapping is indeed embodied and based on the statistics of the environment, hence raising the intriguing hypothesis that language itself might have been influenced by a set of statistical mappings between naturally occurring sensory signals.

Besides the mapping between pitch and elevation, human perception, cognition, and action are laced with seemingly arbitrary correspondences, such as that yellow–reddish colors are associated with a warm temperature or that sour foods taste sharp. This study suggests that many of these seemingly arbitrary mappings might in fact reflect statistical regularities to be found in the natural environment.

PRIMORDIAL SOUNDS: BIG BANG ACOUSTICS PRESS RELEASE:

Cosmology is in a golden era, with extraordinary advances, both experimental and theoretical, coming every year. The aim of this project is to cast some of the most recent developments in a novel and engaging manner, choosing sound as the primary vehicle. Perhaps, with this experiential access to Nature, the excitement which is almost palpable in the astronomical community, can be felt more widely. 
  
The project has also allowed me, a non-cosmologist astronomer, the opportunity to follow more closely what is, I think, a precious time in the history of science — indeed, in human history. With each passing year, the biography of the Universe is being penned in an ever firmer hand. In fact, since we are of the Universe, it is really an autobiography. Let’s now turn the first pages, and read about our birth and first year of life….

The site is hosted here, and contains a breakdown of the acoustics of the cosmic microwave background (CMB) complete with audio and graphs!

[Embedded YouTube video]

Because of the way the camera captures this image (rolling shutter), you can see the vibration of the guitar strings very clearly.

This isn’t exactly how the strings vibrate (this video makes the waves look narrow), but it still gives you a good idea of what mechanism causes the unique sound of each string. See how the shapes are slightly different for each string, but they also change depending on the note being played? Since each string is a different thickness and wrapped with a different gauge wire, the vibrations affect each one differently. All of these differences add up to different sounds, both in pitch and quality. The difference in quality (not pitch) is the timbre (“tam-ber” [ˈtæm.bɚ]).

Timbre is what makes sounds recognizably different from each other. It’s why people’s voices sound different, and why trumpets sound different from flutes. It’s also why the sound “ah” ([ɑ]) sounds different from “ee” ([i]) and “oo” ([u]).
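The harmonic recipe behind timbre is easy to demonstrate in code. Here's a hypothetical sketch (the amplitude lists are illustrative choices of mine, not measurements of real instruments): two tones share the same fundamental pitch but mix their harmonics differently, so they sound distinct.

```python
import numpy as np

def tone(f0, harmonic_amps, fs=44100, dur=1.0):
    """Sum of harmonics of f0; the amplitude list sets the timbre."""
    t = np.arange(int(fs * dur)) / fs
    return sum(a * np.sin(2 * np.pi * f0 * (k + 1) * t)
               for k, a in enumerate(harmonic_amps))

# Same pitch (220 Hz), different timbres: emphasizing odd harmonics
# gives a hollow, clarinet-like color, while a gently decaying full
# harmonic series sounds more string-like.
clarinet_ish = tone(220, [1.0, 0.0, 0.5, 0.0, 0.3])
string_ish = tone(220, [1.0, 0.6, 0.4, 0.25, 0.15])
```

Write either array to a WAV file and both register as the same note; the ear hears the difference in quality, not pitch.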

How does one manage noise in the ocean? It’s a more difficult problem than you might think. We’ve talked about why noise in the ocean is such a problem for marine life, but the solution may require new technology to be developed. Because of the incredibly long wavelengths of sound in the ocean, not to mention the density of water itself, traditional noise control solutions for air simply aren’t feasible in an ocean environment. Researchers at the University of Texas at Austin have been hard at work, though, building and testing a novel noise barrier that uses air-filled resonators to absorb very low frequencies that are all but impossible to block using traditional means. The principle at work is not all that different from blowing over an empty beer bottle: the individual air chambers resonate at some frequency, and so they preferentially absorb those frequencies from the water. By tuning these air chambers to the frequency of the noise source, you can effectively block noise and reduce its impact on the environment. While these barriers are unlikely to help with the copious amounts of shipping noise we create, they may be used to reduce the impact of offshore wind farms, so our attempts at generating clean energy don’t inadvertently lead to a whole new environmental crisis. (Article from Acoustics.org)
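The beer-bottle principle is the classic Helmholtz resonator, whose frequency can be estimated from the neck and cavity geometry. A rough sketch for a resonator in air (the bottle dimensions are assumed values of mine; the underwater resonators also carry the mass loading of the surrounding water, which complicates the formula):

```python
import math

def helmholtz_frequency(c, neck_area, cavity_volume, neck_length):
    """Resonant frequency (Hz) of a Helmholtz resonator:
    f = (c / 2*pi) * sqrt(A / (V * L))."""
    return (c / (2 * math.pi)) * math.sqrt(
        neck_area / (cavity_volume * neck_length))

# Rough beer-bottle numbers (assumed): 2 cm^2 neck opening, 500 mL
# cavity, 8 cm effective neck length, 343 m/s speed of sound in air.
f = helmholtz_frequency(343, 2e-4, 500e-6, 0.08)
print(f"{f:.0f} Hz")  # prints "122 Hz"
```

Tuning the cavity volume or neck geometry shifts this frequency, which is exactly how the air chambers in the barrier are matched to the noise they need to absorb.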