Study Hall

Make It Stop! There’s No Excuse For Loud, Bad Sound

Are we part of the solution or part of the problem?

Preserving our hearing is critically important, but that’s not what motivated me to write this piece. Really, I just can’t stand bad sound!

It used to be somewhat excusable if a live show was loud and didn’t sound all that great.

Sound reinforcement systems have come a long way in the past decade alone, but “back in the day,” good clean power was harder to come by, and loudspeakers just weren’t designed to produce high-fidelity audio.

In fact, it was generally considered that you either had reliable speakers OR hi-fi speakers, but not both.

The late Albert Lecesse’s oft-quoted list of priorities goes like this: “Make sound. Keep making sound. Make good sound.”

And for legions of engineers, this has been the mantra, i.e., that high quality sound is not the first priority. And I agree with this for the most part.

However, with in-ear monitoring, processor-controlled loudspeakers, line arrays, plenty of good, inexpensive power, good wireless systems, higher quality microphones than ever before, etc., etc., the equipment has ceased to be the problem. In fact, it’s become part of the solution.

But what about us, the operators? Are we part of the solution or part of the problem?

I’ve encountered the resistance to using better microphones in a lot of places: “This warhorse mic has been good enough for the last 30 years so why change now?” Indeed.

But when was “good enough” ever really good enough? Those standard mics became a standard for a reason: they were the best thing available at the time. Are you using the best tools and techniques available to you right now?

Instead of just pointing the finger and expecting a middle finger in response, I thought it might be useful to cover some things that I think are fairly easy to implement and that can make a huge difference in the quality of our product, i.e., the sound of the events we mix.

Previously I’ve talked about some general topics related to listening, thinking and related audio issues. This time, let’s get more specific.

Heard It Before

Gain Structure. This has probably come up more times than any other concept in sound reinforcement, and for good reason. It’s the basis of how the signal goes from one device to another and within devices, and understanding it fully goes a long way towards at least giving us a fighting chance at producing good sound.

Here are the main points: Avoid excess noise and maintain enough headroom to keep signal from distorting.

First, it’s not a good idea to combine consumer, semi-pro (whatever that means) and pro equipment in a single system. But sometimes it’s unavoidable, particularly when you need to pipe in some program material or feed a mix to a video camera.

However, understanding that not all “line level” signals are alike is a good place to start toward integrating these devices. For example, there is roughly a 12 dB difference between the nominal output level of consumer gear (-10 dBV) and that of pro gear (+4 dBu)!
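To see where that level gap comes from, it helps to remember that the two nominal levels use different references: dBu is referenced to 0.775 V RMS, dBV to 1.0 V RMS. A minimal sketch (the function names here are my own, just for illustration) converts both nominal levels to volts and compares them:

```python
import math

def dbu_to_volts(dbu):
    """dBu is referenced to 0.775 V RMS."""
    return 0.775 * 10 ** (dbu / 20)

def dbv_to_volts(dbv):
    """dBV is referenced to 1.0 V RMS."""
    return 10 ** (dbv / 20)

pro = dbu_to_volts(4)         # pro nominal level: +4 dBu ~ 1.23 V
consumer = dbv_to_volts(-10)  # consumer nominal level: -10 dBV ~ 0.316 V

diff_db = 20 * math.log10(pro / consumer)
print(f"Pro gear runs about {diff_db:.1f} dB hotter than consumer gear")
# -> about 11.8 dB
```

That ~12 dB is why a consumer CD player plugged straight into a pro console sounds weak, and why a pro output can slam a camera’s line input into distortion.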

But let’s say that you have gain structure knowledge under control and levels are carefully managed throughout the system, yet the system has a fairly high amount of harmonic distortion.


Wait, let’s back up: how would you know?

I’ve run into a lot of sound engineers who can’t readily identify modest levels of distortion, or, if they can, still can’t identify which type of distortion predominates.
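For reference, harmonic distortion is usually quantified as THD: the RMS sum of the harmonic amplitudes relative to the fundamental. A quick sketch, using hypothetical measured values purely for illustration:

```python
import math

def thd_percent(fundamental, harmonics):
    """THD = RMS sum of harmonic amplitudes, relative to the fundamental, in percent."""
    return 100 * math.sqrt(sum(h * h for h in harmonics)) / fundamental

# Hypothetical measurement: 1.0 V fundamental, 2nd and 3rd harmonics at 10 mV and 5 mV
thd = thd_percent(1.0, [0.010, 0.005])
print(f"THD = {thd:.2f} %")  # -> THD = 1.12 %
```

Which harmonics dominate also tells you something about the character of the distortion: predominantly even harmonics tend to sound “warmer,” while strong odd harmonics (typical of clipping) sound harsher.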

Ear Straining

In music school, students are required to take “ear straining and sight screaming,” otherwise known as ear training and sight reading. Let’s concentrate on the first part.

There are reference sets for sale, such as the Golden Ears library, designed to serve as ear training materials for audio engineers. And these can be very effective.

I also suggest finding ways to set up your own experiments. For instance, set up two stereo channels on the console, fed from recorded tracks – one with really good gain structure and the other with some problems (for instance, bring the faders down to -20 and bring the input gain up so that the output to the mix bus matches that of the “clean” channel).

How far do you have to push the problem channel into distortion before you can hear it? Then adjust the EQ on the two channels – does it make the distortion on the “bad” channel worse or better? And what EQ settings make the problem worse?
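You can also build the “problem” channel in software. A minimal sketch (a crude overload model, not a simulation of any particular console): drive a sine 6 dB past full scale, hard-clip it, and measure the odd harmonics that clipping adds, using a direct correlation at each harmonic frequency.

```python
import math

N = 4800  # samples per analysis window
F = 10    # fundamental: 10 cycles per window (an exact bin, so correlation is clean)

def amplitude(signal, cycles):
    """Amplitude of the component at `cycles` per window, via direct correlation."""
    re = sum(s * math.cos(2 * math.pi * cycles * n / N) for n, s in enumerate(signal))
    im = sum(s * math.sin(2 * math.pi * cycles * n / N) for n, s in enumerate(signal))
    return 2 * math.sqrt(re * re + im * im) / N

def thd_of(signal):
    """THD in percent, counting the 3rd, 5th and 7th harmonics only."""
    h1 = amplitude(signal, F)
    odd = [amplitude(signal, F * k) for k in (3, 5, 7)]
    return 100 * math.sqrt(sum(h * h for h in odd)) / h1

clean = [math.sin(2 * math.pi * F * n / N) for n in range(N)]
# Drive 6 dB (2x) past full scale, then hard-clip at +/-1
clipped = [max(-1.0, min(1.0, 2.0 * s)) for s in clean]

print(f"clean:   THD ~ {thd_of(clean):.1f} %")    # essentially zero
print(f"clipped: THD ~ {thd_of(clipped):.1f} %")  # large, dominated by odd harmonics
```

Render the two versions to files, level-match them, and you have a repeatable A/B pair for exactly the kind of listening drill described above.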
