I have always been an avid music fan. Growing up I was surrounded by my father’s dabbling on a variety of world instruments, my friends’ live jazz shows, and the hundreds of CDs I was constantly swapping in and out of various CD players. I found that while I was not particularly inclined to play music, I was highly attuned to acoustics and sound quality. Towards the end of high school, as I became increasingly involved in studying film and video, I learned the basics of working with a semi-professional live production sound board. It wasn’t until many years later that I had an opportunity to put those skills to use.
Working at a DC restaurant with live music three nights a week and a very low-budget setup, I found myself wanting the sound quality to be just a little bit better. During shifts I would tweak the settings on the sound board, and through a bit of trial and error, along with some helpful requests from bands, I became comfortable adjusting the musicians’ sound into what I felt was its most acoustically pleasing form. But while I was learning to make physical adjustments to analog waveforms, understanding at a basic level the physics of audio waves and how each dial altered them, I’d never really understood what went into recording audio for conversion into a shareable media format.
Given my love for live music, I frequently tried to record shows on my cameraphone, but always found the results significantly lacking compared to what I remembered of the in-person listening experience. Then I received an opportunity to take my passion and basic knowledge to the next level. At one such live show, my skills at live production were noticed and led to an offer to produce future shows destined for full-scale production into music videos. In this new role I worked to get up to speed on the differences between live production alone and recording for remediation, learning a great deal about how audio engineers overcome the challenge of capturing live experiences and translating them into digestible media files – and how media must be tailored to meet the socio-cultural expectations of the viewer/listener.
Live Sound vs Recorded Sound
Concerts are immersive experiences. Listeners are surrounded by sound, coming directly from speakers but also reverberating off of walls and being absorbed by soft surfaces and the bodies around them. The result is a very distinct “concert” sound that leaves the music a bit blurred for each listener. This blurriness stems, at its root, from the physical, analog nature of waveforms and the finite speed of sound: reflections of the same note arrive at each listener’s ears at slightly different times.
As the sound technician, how do I recreate that sound? These physical aspects of the live experience mean that I can’t just take the audio from a singer’s microphone, sync it with camera footage, and expect listeners to be satisfied with what they are hearing. Whereas the concert attendee hears the singer’s voice from many different angles, the listener at home has only the usual two speakers on their device, hopefully in stereo to give at least a hint of depth.
This means that to recreate a concert sound I have to place many more mics around the venue to capture the other versions of the singer’s voice formed by reverberations within the space. These multiple recordings must then be overlaid on top of the primary recording to give the listener the audio sensation of the live concert experience.
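The overlay process above can be sketched in code. What follows is a minimal, simplified illustration, not a description of any professional tool: it assumes each room mic can be reduced to a single delay (how much later the sound reaches it than the close mic) and a single gain (how much quieter it is), and that mixing is just adding the delayed, attenuated signals together. The function name, parameters, and sample values are all hypothetical.

```python
import numpy as np

SAMPLE_RATE = 44100  # samples per second, a common audio rate

def mix_room_mics(primary, room_mics, delays_ms, gains):
    """Overlay delayed, attenuated room-mic recordings onto the
    primary (close) vocal mic to approximate a reverberant
    'concert' sound.

    primary   : 1-D array, the close mic on the singer
    room_mics : list of 1-D arrays, mics placed around the venue
    delays_ms : extra delay in milliseconds for each room mic,
                reflecting its distance from the stage
                (sound travels roughly 343 m/s in air)
    gains     : attenuation factor (0..1) for each room mic
    """
    out = primary.astype(float).copy()
    for mic, delay_ms, gain in zip(room_mics, delays_ms, gains):
        offset = int(SAMPLE_RATE * delay_ms / 1000)
        n = min(len(mic), len(out) - offset)
        if n > 0:
            # Add the quieter, later copy on top of the close mic
            out[offset:offset + n] += gain * mic[:n]
    return out
```

In a real mixing session an engineer would do far more (equalization, panning each mic in the stereo field, compression), but the core idea is the same: the “room” copies of the voice arrive later and quieter, and layering them under the primary track restores some of the blur the venue itself created.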
When tailoring to socio-cultural expectations distorts reality
Of course, while I may place multiple microphones around the venue to capture the variations of a singer’s voice, I’m not recording the loud talkers standing at the bar behind the concertgoer. Listening to a live concert at home leaves out the very real sounds of the in-person experience. But sometimes, real experiential sound is not what we, as a society of consumers of experiences via visual interfaces, want – and nowhere is this more true than in sports audio.
Two classic examples of how the audio accompanying sports in digital formats differs greatly from the analog “live” experience are NASCAR and professional soccer. When watching NASCAR at home on a TV or computer, viewers hear not only the stereo roar of cars whizzing by but also the roar of the crowds cheering. In person, however, the sound of the cars is so loud that it drowns out even the friend sitting next to you, let alone the crowd. In professional soccer coverage, viewers hear a satisfying thump every time a player kicks the ball. This thump neatly syncs what they are seeing with the sound they would expect if they were kicking the ball themselves. But a moment’s thought reveals that there is no way to hear the thump of a kick from the stands of a soccer stadium.
The ability to hear the crowds at a NASCAR race and the thump of a kicked ball are audio illusions demanded by viewers that would not be present for in-person attendees. Some are honestly achieved through well-placed microphones, such as those that track the ball on the field and capture the sound of the kick. But others, like the roar of the crowds at NASCAR, are added in from other sources, since even the best microphones would struggle to separate the sound of the crowds from the sound of the cars. These are examples of the affordances offered to creators of audio/visual media that enable them to render an experience for viewers such that preset socio-cultural expectations of the live experience are met.
Mars, Roman. “The Sound of Sports.” 99% Invisible (podcast), August 11, 2014. https://99percentinvisible.org/episode/the-sound-of-sports/