I enjoy seeking out parallels and connections between different aspects of the world around us in search of clarifying analogies.
One of those connections occurred to me not long ago while leading one of my sound seminars, as I was looking for a way to clearly explain the theories I apply when equalizing live sound systems.
Though mixing a live event can be a complex process with many factors that need to be simultaneously juggled, it’s possible to look at mixing audio in very simple terms as just the process of controlling the volume level and tonal balance of one or more sound sources.
Compressors and gates are just signal dependent volume controls. EQ, high- and low-pass filters, crossovers and microphone choices are all just tools that alter the tonal balance of the mix, the vocals and the individual instruments.
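The idea that a compressor is just a signal-dependent volume control can be sketched in a few lines. This is a minimal, hypothetical static gain computer (the threshold and ratio values are illustrative, and real compressors add attack/release smoothing): the louder the input gets above the threshold, the further the "volume knob" is turned down.

```python
def compressor_gain_db(level_db, threshold_db=-20.0, ratio=4.0):
    """Static gain computer for a downward compressor.

    Illustrative parameters: above threshold_db, the output level
    rises only 1 dB for every `ratio` dB of input increase.
    Returns the gain reduction (in dB) applied to the signal.
    """
    if level_db <= threshold_db:
        return 0.0  # below threshold: unity gain, nothing happens
    # Reduce gain so the output slope above threshold is 1/ratio
    return (threshold_db - level_db) * (1.0 - 1.0 / ratio)

# The hotter the signal, the more gain reduction is applied:
for lvl in (-30, -20, -10, 0):
    print(lvl, round(compressor_gain_db(lvl), 2))
```

A gate is the same idea inverted: below a threshold, the gain computer turns the volume down (or off) instead of leaving it alone.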
Yes, there are other factors like polarity, time delay, coverage, effects, panning and so on, but the fundamentals of mixing are mostly just controlling the volume levels and tonal balance presented to the audience.
To help analyze methods of optimizing our control over a sound system more clearly, let’s divide the decisions we make into two main categories: technical and preferential. Technical-based decisions encompass things like setting the time delay of a delay cluster for minimum offset, setting the polarity of two microphones such that they do not create unwanted cancellations, and adjusting the frequency response of a system to be linear and create a “what goes in, comes out the same” scenario.
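The delay-cluster example above is pure arithmetic: the cluster's signal must be held back by the time sound takes to travel from the main system to the cluster's position. A minimal sketch, assuming the speed of sound in air is roughly 343 m/s (about 20 °C) and an example distance of 30 m:

```python
def delay_ms(distance_m, speed_of_sound_m_s=343.0):
    """Delay time (ms) for a cluster placed distance_m downstream
    of the main system, so both arrivals line up for the listener.
    Assumes ~343 m/s, the speed of sound in air at about 20 degrees C.
    """
    return distance_m / speed_of_sound_m_s * 1000.0

# A delay cluster 30 m from the mains needs roughly:
print(round(delay_ms(30.0), 1))  # about 87.5 ms
```

In practice the result is then fine-tuned by measurement or by ear, but the starting point is this distance-over-speed calculation.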
On the other hand, preference-based decisions can include things like the relative volume differentials between the instruments and whether we’re trying to recreate a realistic sound or alter the sound of an instrument.
One common example of drastically altering the sound of an instrument is the reproduced tonal balance of the kick drum at nearly every rock show. In real life a kick drum just sounds like “pop pop pop” or “thud thud thud,” without the significant low end (and sometimes click) we typically choose to reproduce through reinforcement systems. But hey, that’s just an evolved preference that often falls in line with listener expectations.
Now I’ll ask: what about the overall tonal balance of a mix – is it primarily a technical- or preference-based decision? I’ve been mulling over this concept for years in one way or another. Should a system be EQ’d to flat? How bright or dull should my mix be? How do I determine the technically correct tonal balance in any given situation? And how should it be measured?
I’ve made some significant progress in unraveling these quandaries by establishing headphones as a reference point. First I play music through the headphones and through the sound system simultaneously.
Then, using my ears, I compare the sound of the music in the headphones to the sound of the same music through the system. Using the house equalizers, I then do my best to EQ the system sound to match the headphone sound. Assuming the sound of the headphones is “correct,” this locks me into a “correct” tonal balance.
Next is the use of a real-time analyzer (RTA) to observe the frequency response so that I can also see what “correct” looks like. While mixing the show, I constantly refer back to the analyzer to make sure the tonal balance of my mix has not drifted from the baseline initially established with the headphones.
Finally, during the show I compare the pre-house EQ sound of my mix in the headphones to the post-house EQ sound of the system in the room. This allows me to make sure that what’s being sent to the system sounds like what’s being reproduced in the venue.
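The "check the mix against the baseline" step of this workflow can be sketched as a simple per-band comparison: store a snapshot of the RTA's band levels once the system matches the headphones, then flag any band that later drifts beyond a tolerance. The band labels, levels, and 3 dB tolerance here are hypothetical, purely for illustration:

```python
def drift_from_baseline(baseline_db, current_db, tolerance_db=3.0):
    """Compare current per-band RTA levels (dB) to a stored baseline;
    return the bands that have drifted past the tolerance.
    Band names and the 3 dB tolerance are illustrative choices."""
    drifted = {}
    for band, ref in baseline_db.items():
        delta = current_db[band] - ref
        if abs(delta) > tolerance_db:
            drifted[band] = round(delta, 1)
    return drifted

# Hypothetical snapshot taken after matching the headphones,
# then a reading later in the show:
baseline = {"125 Hz": -6.0, "1 kHz": -3.0, "8 kHz": -9.0}
now      = {"125 Hz": -5.0, "1 kHz": -3.5, "8 kHz": -14.0}
print(drift_from_baseline(baseline, now))  # flags the 8 kHz band
```

The point is not the code itself but the discipline it encodes: "correct" is defined once, by ear, and everything afterward is measured relative to that reference.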