We conduct our daily lives against a soundtrack of incessant background noise. But the nature of that noise is changing in the digital age. And sound design — the way we engineer recorded audio — is changing with it.
Geoff Martin is a “tonmeister” for Bang & Olufsen, a global designer of audio and video equipment. Just what is a tonmeister anyway? According to Martin, it’s “a recording engineer and a record producer rolled into one.” It’s a fairly rarefied profession: Tonmeisters first emerged in Germany in the 1960s, and the only way to become one is to study under one.
As tonmeister for Bang & Olufsen, Martin has helped the company design cutting-edge loudspeakers and sound systems. But his job these days is to look to the future and project what kinds of audio products customers will need in 10 years and beyond.
To do this, he finds clues in studying how our acoustic environment is changing — how the digital environment is altering our relationship to sound and noise.
Sound and noise are not synonymous; noise is broadly defined as unwanted sound, according to Martin, like an audible hiss in a recording. Sometimes it refers to sounds whose waveforms carry no useful information, like radio static. (Radio waves are used to transmit information — e.g., your favorite song — by altering the pattern of the waves to encode that information. Static is the lack of a pattern; it’s a completely random signal.)
For audio engineers, noise comes in many colors, each defined by its unique spectral fingerprint: white, pink, red, violet, blue, even black, though to the untrained ear they all just sound like static. Every sound has such a fingerprint, showing all the different frequencies that make up that sound. (Most sounds are a complicated mix of mechanical vibrations at lots of different frequencies.)
Noise can be a tonmeister’s best friend. To test a speaker’s acoustic design, Martin uses so-called “pink noise.” It’s similar to white noise — that snowy static when you can’t get any reception on your TV — in that it contains every frequency within the range of human hearing (20 Hz to 20,000 Hz). But unlike white noise, it’s not completely random. It hits a sweet spot between overly rigid order and utter chaos, and its patterns can be seen in heartbeats, DNA, traffic flow, most electronic devices, and of course, musical melodies.
That’s why pink noise is ideal for testing prototypes. “It has the same spectral content as music,” said Martin. “It just gives us a more repeatable behavior [i.e., less variation] for the loudspeaker than if we used Marilyn Manson or Britney Spears.”
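The difference between white and pink noise can be sketched in a few lines of NumPy. This is a minimal illustration, not B&O’s actual test signal: it assumes one simple recipe for pink noise (shaping a white spectrum by 1/sqrt(f) so that power falls off as 1/f), and the sample rate and octave bands are arbitrary choices for the demo.

```python
import numpy as np

rng = np.random.default_rng(42)
fs = 44100          # assumed CD-style sample rate
n = 2 ** 16         # about 1.5 seconds of audio

# White noise: a completely random signal with equal power at every frequency.
white = rng.standard_normal(n)

# Pink noise (one simple construction): scale a white spectrum by 1/sqrt(f)
# so that spectral *power* falls off as 1/f.
spectrum = np.fft.rfft(rng.standard_normal(n))
freqs = np.fft.rfftfreq(n, d=1.0 / fs)
scale = np.ones_like(freqs)
scale[1:] = 1.0 / np.sqrt(freqs[1:])   # skip the DC bin to avoid divide-by-zero
pink = np.fft.irfft(spectrum * scale, n)

def band_power(signal, lo, hi):
    """Total spectral power between lo and hi Hz."""
    power = np.abs(np.fft.rfft(signal)) ** 2
    f = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return power[(f >= lo) & (f < hi)].sum()

# Pink noise carries roughly equal power per octave, the way music tends to:
# the 100-200 Hz octave holds about as much energy as the 1600-3200 Hz octave.
pink_ratio = band_power(pink, 100, 200) / band_power(pink, 1600, 3200)

# White noise, by contrast, piles its power into the wider high octaves.
white_ratio = band_power(white, 100, 200) / band_power(white, 1600, 3200)
```

Equal power per octave is why pink noise exercises a loudspeaker the way real program material does: the low octave in the sketch comes out with roughly the same energy as the high one, while for white noise the high octave dominates simply because it spans more hertz.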
But for most of us, noise is an annoyance. Much as light pollution has made stargazing all but impossible outside a few dark-sky locations, “it’s hard to get to a place where all you can hear are natural sounds anymore,” said Martin. “Even if you go to the middle of the woods, you’re bound to hear a jet flying overhead.”
Walk into a modern building and the first thing you’re likely to hear, if you’re paying attention, is the hum of the lighting or the air conditioning system. The frequency is always the same: 60 Hz in North America, a slightly lower 50 Hz in Europe. That North American hum sits roughly at a B flat, the key of modern life.
In fact, Martin cites a Vancouver composer who only creates pieces in the key of B-flat for this reason. “When the reverb dies down, the [music] blends with the sound of the Coke machine in the lobby,” he said.
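The idea that mains hum has a musical key is just arithmetic: in equal temperament each semitone multiplies frequency by 2^(1/12), with A4 fixed at 440 Hz. The helper below (`nearest_note` is an illustrative name, not anything from the article) makes the conversion explicit. By this reckoning, 60 Hz actually lands almost midway between B flat 1 (about 58.27 Hz) and B1 (about 61.74 Hz), which is why the hum is only loosely said to be “in B-flat.”

```python
import math

NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def nearest_note(freq_hz, a4=440.0):
    """Name the equal-tempered pitch nearest to a frequency (A4 = 440 Hz)."""
    # Distance from A4 in semitones: 12 * log2(f / 440), rounded to a whole step.
    semitones = round(12 * math.log2(freq_hz / a4))
    midi = 69 + semitones              # A4 is MIDI note number 69
    name = NOTE_NAMES[midi % 12]
    octave = midi // 12 - 1            # MIDI 60 = C4 convention
    return f"{name}{octave}"
```

For example, `nearest_note(440.0)` gives `"A4"`, and feeding in 58.27 Hz gives `"A#1"` (the enharmonic spelling of B flat 1).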
Now the nature of that ubiquitous background noise is changing. “We’re getting this blur between what we think is real and what is actually real,” said Martin. “The entire acoustic arena is designed for us; it’s not accidental anymore.”
That acoustic arena includes scads of artificially designed sounds, carefully tailored to match what people expect to hear: everything from the sound of your hair dryer, to the ringtone on your iPhone (and the clicking sound it makes when you unlock it), to the sound of your car’s exhaust system. Think all car doors sound the same when you slam them shut? Think again. Car manufacturers take great pains to shape those sounds into whatever is most pleasing to the ear.
How we listen to music is also changing. As recently as 10-15 years ago, most people still listened to music over loudspeakers most of the time. These days everyone sports earbuds to take calls or listen to music on their smart devices, and noise-canceling headphones are all the rage (especially for frequent flyers). In fact, Martin said that noise-canceling headphones will soon be so good that you could stand in the center of Times Square wearing them, and not hear anything except the tunes you’re playing.
That convenience comes at a cost: Recorded music isn’t optimally designed for headphones. Think about what happens when you watch a big-screen movie on a smaller TV screen: you lose a lot of the original detail. The same thing happens acoustically when you listen to music designed to be played back on a loudspeaker over earbuds or headphones.
The big question for Martin (and for B&O) is whether this shift will eventually influence how recordings are made. Instead of recording with conventional microphones, perhaps the industry will shift to microphones contained within something that mimics the acoustics of the human head. But then what happens if you try to play such a recording back on a loudspeaker, or a high-end surround sound system? Martin said it would be similar to watching a TV show on a large screen: It just looks cheap.
“If you have a recording made for headphones, and you play it over a loudspeaker, you’re going to lose a lot in the translation,” Martin said. “We’re not there yet, but I can easily see this becoming the norm as more and more people live with a pair of headphones on their head.”
20kHz is a new blog exploring how technology, science, and culture influence music.