
In The Studio: Hear Me Out On Equalization

This article is provided by the Pro Audio Files.

 

All recorded sound is equalized…

Before it ever runs through an actual EQ – if you record a voice with a mic into a preamp and play it back over your loudspeakers – the sound has already been EQ’d six, arguably seven, times.

The Variables:

—The way the vocalist shapes their mouth, throat, and diaphragm will affect the frequency curve of the voice.

—The way the room responds will affect how the tones are perceived.

—The microphone will naturally dip and accentuate certain frequencies – as will the preamp.

—The loudspeakers will not be perfectly flat (though some are pretty freakin’ close), and the room where playback occurs will in turn affect the way the sound is heard.

—You, as the engineer, have a unique brain that interprets all of that information – some people are more sensitive to high end, others to low end.

—Compression in the tracking phase? Yep, it’s going to affect the tone, too.

The point of this is that all sound has a tone shape. The only question when reaching for an EQ is: are the tones working?

And that’s what EQ is: Tone Shaping.

Unnecessary Processing

But it seems that even though the sound source (a voice, in this example) has been EQ’d several times over, people automatically reach for another EQ. The temptation is sometimes for the engineer to make the sound his or her own.

There’s a pride in knowing that one has crafted the sound. I have to stress that this mindset is ultimately detrimental – the quest, and the pride, should be in making only the necessary moves, not in EQ’ing for the sake of EQ’ing.

Why EQ Something?


Masking
– This is every engineer’s favorite idea. What is masking? It’s when a tone from one instrument hides the audibility of a tone from another instrument.

As engineers, we commonly mistake masking for a “bad” thing. I’d like to offer the idea that masking is just a thing – whether it’s good or bad is completely context-dependent.

Masking to one engineer is layering to another. Clutter can just as easily be wall-of-sound. In an orchestra, instruments walk all over each other, and this is done purposefully to create a sea of sound (when done harmoniously) or chaos and tension.

Sometimes, a masking tone can provide added energy in the spectral area of the element it would mask, if the level is set properly. For example, cymbals will assuredly mask the high end of a vocal if too loud. But you wouldn’t turn them way up and then EQ out the treble (well, you might, but probably not) – because then you get loud, dark cymbals.

Instead, you mix them low enough that they don’t step on the vocal’s presence – they just provide more energy behind that domain. Masking is only an issue when two elements that share significant content in the same frequency range both need to play an equally important role.

From there, removing as little as possible from one of the elements will preserve a fuller sound, while a bigger cut will lead to a more open sound – which is preferable depends on the mix.
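The article doesn’t prescribe any particular filter, but the “small cut vs. big cut” idea is easy to see in numbers. Here’s a minimal, hypothetical sketch in Python of a standard peaking (“bell”) EQ using the well-known RBJ “Audio EQ Cookbook” coefficient formulas – the 3 kHz center frequency, Q, and gain values are invented for illustration, not taken from the article:

```python
import math

def peaking_eq(f0, q, gain_db, fs):
    """RBJ 'Audio EQ Cookbook' peaking (bell) filter coefficients."""
    a_lin = 10 ** (gain_db / 40)          # cookbook's intermediate gain term
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    b = [1 + alpha * a_lin, -2 * math.cos(w0), 1 - alpha * a_lin]
    a = [1 + alpha / a_lin, -2 * math.cos(w0), 1 - alpha / a_lin]
    return [bi / a[0] for bi in b], [ai / a[0] for ai in a]

def magnitude_at(b, a, f, fs):
    """Evaluate |H(z)| on the unit circle at frequency f: the filter's gain there."""
    z = complex(math.cos(2 * math.pi * f / fs), math.sin(2 * math.pi * f / fs))
    num = b[0] + b[1] / z + b[2] / z ** 2
    den = a[0] + a[1] / z + a[2] / z ** 2
    return abs(num / den)

fs = 48000
# A gentle 2 dB cut vs. a heavy 8 dB cut at 3 kHz (hypothetical values)
for cut in (-2.0, -8.0):
    b, a = peaking_eq(3000, 1.0, cut, fs)
    db = 20 * math.log10(magnitude_at(b, a, 3000, fs))
    print(f"{cut:+.0f} dB requested -> {db:+.2f} dB at 3 kHz")
```

Both cuts land exactly on target at the center frequency; the difference is how much energy you give up in the surrounding octave, which is the “fuller vs. more open” trade-off described above.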

Resonance and build-up removal
– The vocalist may have been too close to the mic. The room may have a strange ring. The snare may be un-damped, and the player might play with too little tip.

All of these things can lead to unwanted resonances or frequency build-up. Again, these are just things – not necessarily bad. A sine-wave synth is basically just a moving resonance, and that sound dominated West Coast hip-hop in the early ’90s.
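A common remedy for a ringing room mode or proximity build-up is a narrow notch (or a tight peaking cut) at the offending frequency. As a hypothetical sketch, assuming a 200 Hz room ring and a Q of 8 (both invented for illustration), here is a minimal biquad notch in Python, again using the RBJ cookbook formulas:

```python
import math

def notch(f0, q, fs):
    """RBJ 'Audio EQ Cookbook' notch filter coefficients for a ring at f0 Hz."""
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    b = [1.0, -2 * math.cos(w0), 1.0]
    a = [1 + alpha, -2 * math.cos(w0), 1 - alpha]
    return [bi / a[0] for bi in b], [ai / a[0] for ai in a]

def biquad(samples, b, a):
    """Direct Form I biquad: y[n] = b0*x[n] + b1*x[n-1] + b2*x[n-2] - a1*y[n-1] - a2*y[n-2]."""
    x1 = x2 = y1 = y2 = 0.0
    out = []
    for x in samples:
        y = b[0] * x + b[1] * x1 + b[2] * x2 - a[1] * y1 - a[2] * y2
        x2, x1 = x1, x
        y2, y1 = y1, y
        out.append(y)
    return out

fs = 48000
# Hypothetical 200 Hz room ring, one second long
ring = [math.sin(2 * math.pi * 200 * n / fs) for n in range(fs)]
cleaned = biquad(ring, *notch(200, 8.0, fs))

# After the filter settles, the tone at 200 Hz is almost entirely gone
tail = cleaned[fs // 2:]
rms = math.sqrt(sum(s * s for s in tail) / len(tail))
print(f"residual RMS at 200 Hz: {rms:.6f}")   # near zero, vs. ~0.707 unfiltered
```

The narrower the notch (higher Q), the less of the surrounding material you take with it, which is exactly the “remove as little as possible” principle from the masking discussion.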

