January 14, 2013, by Karl Winkler
For me, good sound and its opposite, Dr. Evil Sound, are very personal issues.
Good sound really enhances the listeners’ experience, while bad sound, which is unfortunately far too common, really takes away from the performance.
I’ve heard from several engineers, guitar techs and monitor guys that they know their artists to be sensitive to sound to the degree that if things aren’t going right, the performers have trouble continuing on.
I can understand this – bad sound has a nearly physical impact on me.
So, what are some of these additional issues and factors?
EQ Or Not To EQ
Equalizing or tone shaping is clearly a very important tool in our audio arsenal. By the way, wouldn’t Audio Arsenal be a good name for a band? Of nerds?
Anyway… I believe that there are two reasons for using EQ. The first, and perhaps most important in the overall scheme, is to properly tune the system for the type of response that is wanted or needed.
Frequencies that excite the room too much can be reduced, the low-frequency response can be tailored to match the volume of the room, and so on.
Of course, don’t forget that there are acoustical problems that cannot be solved with electronics. I know, call me a heretic.
But the other kind of EQ I’m thinking of is the “color” applied to individual channels. A lot of this is already determined by the microphones and where they’re placed. Certainly a snare sounds different when mic’ed with a dynamic vs. a condenser. And really, it does start at the microphone.
But let’s say you’ve chosen the best mic for the source, and put it in the right place, and now you need to do a bit of final tweaking on the channel EQ for the sound to be “perfect.”
I forget who said it, but “people don’t go to the concert to hear the kick drum.” Whoever it was has never been to a Metallica concert. I was at one back in the mid-1990s (the “Black Album” tour). All three of the opening bands had already played, and we were in the middle of the break before the headliner came out. Anticipation was growing… And growing.
And then at one point, the drum tech came out, sat down, and stomped on the kick pedal, sending a thunderous sound through the audience. And everyone went wild! Who would have thought that a single-note kick drum solo would have brought the crowd to its feet!
But I digress.
My point is that each instrument may sound great on its own, but may not sit properly in the mix.
The first question to ask: “what is the musical context?”
In other words, the ideal kick drum sound for Metallica obviously wouldn’t wash for the Count Basie Orchestra. Or vice versa.
But more than that, a lot of mixes I hear just sound too dense.
Each instrument takes up far too much of the soundstage by way of being too loud, having too wide of a frequency range, and perhaps with too many effects.
Really great mixes have “air” around everything, and individual channel EQ is one big reason for this. Distorted guitars are one of the biggest culprits. What sounds good when you are standing in front of your amp ripping your own face off may not be what’s best when mixed with the band.
If there are two guitars, make sure that they each have a different tone so that they can be distinguished in the mix.
And, depending on the music, generally the guitars should not compete with the vocals. It’s probably OK for punk music, but not for much else. Ever notice that hair metal bands that have huge-sounding guitars in the mix also have super-high screaming vocalists?
The Air Up There
Speaking of “air”, one thing I always found to be a good tool for producing an open-sounding mix was an overall high-frequency shelf. I would typically boost 12.5 kHz and up by maybe 2-3 dB. This is above the range of vocal sibilance, harsh cymbals, way above guitar amps, etc.
But the content that is in this range includes the upper harmonics from acoustic guitars, piano, cymbals, and even a touch of the vocal harmonics. It was a subtle thing, but to my ear it really helped.
Of course, I had to be careful not to have boosted any of the individual channel EQs too much in this same region, because then things could stick out. In a way, I thought of it as being similar to the final mastering EQ used in the recording process.
Along with adding a touch at 12.5 kHz and above, I found that I also tended to have just a hair of cut down in the low mids, say 1 or 2 dB centered on 320 Hz with a Q of about two octaves. I would move the center around a bit and also maybe adjust the Q a tad depending on the room.
This could also be accomplished with individual channel EQ, but I found that after tweaking each strip, this overall EQ really helped to tighten things up by taming what is arguably the most obnoxious-sounding area of the audio spectrum.
The problem is that most of the fundamentals are right in there, from voice to guitar to piano to keyboards, and even bass. Horns, too, if you have them.
So it is easy to accumulate too much mud there and using an overall EQ can certainly help.
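As a rough sketch, the shelf-plus-bell curve described above can be modeled with the well-known RBJ “Audio EQ Cookbook” biquad formulas. The 48 kHz sample rate, the exact gains, and the Q of roughly 0.67 (which approximates a two-octave bandwidth) are my illustrative assumptions, not the author’s console settings:

```python
import cmath
import math

def high_shelf(fs, f0, gain_db):
    """RBJ cookbook high-shelf biquad (shelf slope S = 1)."""
    a = 10 ** (gain_db / 40)
    w0 = 2 * math.pi * f0 / fs
    cosw = math.cos(w0)
    alpha = math.sin(w0) / 2 * math.sqrt(2)   # S = 1
    sq = 2 * math.sqrt(a) * alpha
    b0 = a * ((a + 1) + (a - 1) * cosw + sq)
    b1 = -2 * a * ((a - 1) + (a + 1) * cosw)
    b2 = a * ((a + 1) + (a - 1) * cosw - sq)
    a0 = (a + 1) - (a - 1) * cosw + sq
    a1 = 2 * ((a - 1) - (a + 1) * cosw)
    a2 = (a + 1) - (a - 1) * cosw - sq
    return [b0 / a0, b1 / a0, b2 / a0], [1.0, a1 / a0, a2 / a0]

def peaking(fs, f0, gain_db, q):
    """RBJ cookbook peaking (bell) biquad."""
    a = 10 ** (gain_db / 40)
    w0 = 2 * math.pi * f0 / fs
    cosw = math.cos(w0)
    alpha = math.sin(w0) / (2 * q)
    b0, b1, b2 = 1 + alpha * a, -2 * cosw, 1 - alpha * a
    a0, a1, a2 = 1 + alpha / a, -2 * cosw, 1 - alpha / a
    return [b0 / a0, b1 / a0, b2 / a0], [1.0, a1 / a0, a2 / a0]

def gain_at(b, a, freq, fs):
    """Magnitude response of one biquad at one frequency, in dB."""
    z = cmath.exp(-1j * 2 * math.pi * freq / fs)
    h = (b[0] + b[1] * z + b[2] * z * z) / (a[0] + a[1] * z + a[2] * z * z)
    return 20 * math.log10(abs(h))

FS = 48000
# "air" shelf: +2.5 dB from ~12.5 kHz up
air_b, air_a = high_shelf(FS, 12500, 2.5)
# low-mid cut: -1.5 dB centered on 320 Hz; Q ~ 0.67 approximates two octaves
mud_b, mud_a = peaking(FS, 320, -1.5, 0.67)
```

Evaluating `gain_at` across the band shows exactly the shape described: a gentle dip around 320 Hz, flat through the midrange, and a lift above 12.5 kHz.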
I should point out that not just any ol’ EQ is up to the task for managing the air and the mid cut.
Generally, a parametric plus HF shelf will work well, if it is a unit of good quality.
And certainly the drive rack can be used for this – the good DSP units used today are ideal for this slight coloring EQ.
In other words, tweak your system and get the kinks worked out, then add a touch and take away a touch to give the system a little more life for your mix.
If You Can Hear It, It’s Too Much
I’m talking about compression, of course. Like EQ, it should be used sparingly and only when there is good reason.
Compression seems to be the most over-used processing of today (and for the past 10 years at least), particularly in mastering for recordings. But it’s over-used live, too.
Music should have dynamics, although there are types of music, like cookie-monster metal, that don’t have any.
In terms of how compression is used – if you can hear it, it’s too much, whether on individual tracks or the whole mix. One can argue that “if you can’t hear it, then it’s not doing what it’s supposed to do,” but if you can hear the actual effect of compression, i.e. “pinching, pumping, breathing,” or the like, then it’s overblown. If the tone of the instrument or singer is affected, it’s too much.
What should compression actually do? Let’s use vocals as an example. Even when a singer knows how to work the mic, compression can help smooth out the level so that the vocal stays on top of the mix, also helping the softer parts sound full.
On bass, compression can even out what is often a source that wanders around quite a bit in terms of level. Nothing makes a mix sound more solid than a steady, pulsing bass that is locked with the drums.
Compression can help with drum overheads, but is easily overdone in this application.
On the overall mix, compression can be the “glue” that keeps things locked together. But here again, it’s easy to overdo it and lose individual sounds (see above regarding EQ).
So what is the magic setting? There isn’t one, but I’ve found that any ratio over 4:1 is often too much. Soft knee is better.
For bass, two stages is often good: a more aggressive one with a later attack to allow the transient to come through, then a milder one with a faster attack to keep the transient somewhat in check.
This combination has been used on thousands of hit records, by the way.
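One way to picture the two-stage idea is with a minimal feed-forward compressor: an envelope follower with separate attack and release, driving hard-knee gain reduction. The thresholds, ratios, and time constants below are illustrative assumptions of mine, not settings from those records (and a real unit would add the soft knee recommended above):

```python
import math

def compress(samples, fs, threshold_db, ratio, attack_ms, release_ms):
    """Minimal hard-knee feed-forward compressor with a peak envelope follower."""
    atk = math.exp(-1.0 / (fs * attack_ms / 1000.0))
    rel = math.exp(-1.0 / (fs * release_ms / 1000.0))
    env, out = 0.0, []
    for x in samples:
        mag = abs(x)
        coef = atk if mag > env else rel      # attack when rising, release when falling
        env = coef * env + (1.0 - coef) * mag  # smoothed signal level
        level_db = 20.0 * math.log10(max(env, 1e-9))
        over = level_db - threshold_db
        gain_db = -over * (1.0 - 1.0 / ratio) if over > 0.0 else 0.0
        out.append(x * 10.0 ** (gain_db / 20.0))
    return out

def two_stage_bass(samples, fs):
    # stage 1: firmer ratio, slower attack -- lets the transient come through
    stage1 = compress(samples, fs, threshold_db=-18.0, ratio=4.0,
                      attack_ms=30.0, release_ms=200.0)
    # stage 2: gentler ratio, fast attack -- keeps that transient in check
    return compress(stage1, fs, threshold_db=-12.0, ratio=2.0,
                    attack_ms=2.0, release_ms=100.0)
```

With a 0 dBFS signal into the first stage (threshold -18 dB, ratio 4:1), the steady-state reduction is 18 × (1 − 1/4) = 13.5 dB, while anything below threshold passes untouched.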
For the overall mix, be gentle with it!
And please, please, please, use a good-quality compressor, especially for the mix.
Sometimes you can get away with a mediocre unit for bass (unless the bass player finds out) but for the mix bus, it needs to be high-end stuff.
Your ears will thank you for it.
One last word about dynamics: if you expect that the music will have room to breathe, and that dynamics are required for this particular genre, then your system must have enough headroom to accommodate these needs. And remember that 3 dB more means double the power…
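The headroom arithmetic is easy to check, since power ratios are 10^(dB/10):

```python
def extra_power_ratio(headroom_db):
    """Amplifier power multiple needed for headroom_db more output level."""
    return 10 ** (headroom_db / 10.0)

# +3 dB of headroom needs ~2x the amplifier power; +10 dB needs 10x.
```

That 10x figure for +10 dB is why genuinely dynamic material gets expensive fast.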
And For The Last Time…
It appears that issues of Time of Arrival (TOA) are fairly well understood in the macro sense – system designers using zones or distributed loudspeakers understand the need to keep all the loudspeakers aligned from the origin point (usually the band or the stage).
But where I don’t see TOA understood as well is in the micro realm – in terms of the difference in time between instruments on the stage, or between the vocalist and the lip loudspeakers firing into the first couple of rows.
My main concern with all of this is what is known as “precedence.” Our ears tend to interpret the first arrival as the original source.
But what if the closest source to a particular listener is a loudspeaker? Then the listener will perceive that loudspeaker as being the original source.
And frankly, this is a very common occurrence and is often unavoidable. The position of the loudspeakers relative to the audience may necessitate that some people hear some of the loudspeakers first.
Even if you have a great sounding mix and you’re working for a great artist, these things should still be of concern. By simply delaying the lip loudspeakers or front fills so that the singer (or lecture speaker, for that matter) arrives first acoustically, the illusion will be that all the sound is coming from that person.
When I first started working with delays on the mains, I was amazed how much more clarity could be achieved by aligning the system back to the drums.
From then on, I typically delayed the mains by about 10 to 20 ms depending on the size of the stage and how far back the drums were from the front line. It was a subtle thing in some ways, but I came to rely on it to get the quality of sound and the performance experience I wanted for the audience.
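These delay times follow directly from the speed of sound, roughly 343 m/s at room temperature, so each meter of path difference is about 2.9 ms. The distances below are made-up examples, and the small precedence pad in the front-fill helper is my assumption, not a figure from this article:

```python
SPEED_OF_SOUND_M_S = 343.0  # dry air at ~20 degrees C

def path_delay_ms(distance_m):
    """Time for sound to travel distance_m, in milliseconds."""
    return distance_m / SPEED_OF_SOUND_M_S * 1000.0

# Example: drums ~5 m upstage of the front line -> delay the mains ~14.6 ms,
# squarely inside the 10-20 ms range mentioned above.
mains_delay = path_delay_ms(5.0)

def front_fill_delay_ms(singer_to_listener_m, fill_to_listener_m,
                        precedence_ms=2.0):
    """Delay a front fill so the singer's acoustic wavefront arrives first.

    The extra precedence_ms pad keeps the fill just behind the direct
    sound so localization stays on the performer.
    """
    diff = singer_to_listener_m - fill_to_listener_m
    return max(diff, 0.0) / SPEED_OF_SOUND_M_S * 1000.0 + precedence_ms
```

For a listener 6 m from the singer but only 2 m from the fill, that works out to about 13.7 ms of fill delay.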
Just Blowin’ In The Wind
These are just my own personal tips and tricks for good sound. You probably have your own techniques, and you should.
But I hope what I’m imparting gives you some new ideas and some new things to try. Ultimately, good sound is up to all of us.
I continue to maintain hope that overall sound at shows, lectures, concerts, plays, and festivals, will improve, and frankly, there’s a lot of room for improvement.
Let’s make it happen!
Karl Winkler is director of business development for Lectrosonics and has worked in professional audio for more than 15 years.