The feedback beast was born at the beginning of the 20th century as a result of man’s first attempts to harness electricity to capture and amplify sound. One of the earliest recorded instances of feedback happened in 1910 when Edwin Jensen and Peter Pridham were experimenting with sound reproduction in their laboratory in Napa, California. They connected a carbon microphone to their rudimentary sound system and were greeted by a howl.
I wonder if they knew what this strange beast was or how they had brought it into existence. It must have been quite a shock. Little did they know how it would go on to blight the lives of sound engineers forevermore.

Feedback is also known as The Larsen Effect, named after Søren Absalon Larsen, the Danish physicist who first documented the principle, and the science behind it is quite simple.
When a fraction of the output signal of a sound system leaks back into the input it gets amplified and is output where it once more leaks back into the input, creating a building loop which quickly manifests as an audible howl.
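The build-up described above can be sketched numerically: whether the loop stabilizes or howls depends on the "loop gain", the fraction of output that leaks back multiplied by the amplification applied to it. The gain values below are purely illustrative.

```python
# Minimal sketch of the feedback loop: on each trip around the loop
# the signal is scaled by the loop gain. Below 1.0 it dies away;
# above 1.0 it builds without limit into the familiar howl.

def simulate_loop(loop_gain, passes):
    """Return the signal level after each trip around the loop."""
    level = 1.0  # arbitrary starting level
    levels = []
    for _ in range(passes):
        level *= loop_gain
        levels.append(level)
    return levels

stable = simulate_loop(0.8, 10)   # loop gain < 1: each pass is quieter
howling = simulate_loop(1.2, 10)  # loop gain > 1: runaway build-up

print(stable[-1])   # well below the starting level
print(howling[-1])  # several times the starting level
```

This is why everything an engineer does to fight feedback, whether EQ, mic placement, or loudspeaker positioning, amounts to nudging the loop gain at the offending frequency back below 1.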
The frequency of the howl is determined by resonances present in the microphone, the amplifier, the loudspeaker and the room, as well as the physical positioning of the sound system components.
Traditionally the first line of defense is to utilize graphic equalizers on the outputs of the system to “ring out” the system in the way I described earlier. When employing this process we’re using the EQ to attenuate each of the successive resonant frequencies in the system until we’re able to achieve a reasonable signal level before feedback occurs.
We can also utilize feedback suppressors that detect those telltale howls and automatically apply a narrow filter at the appropriate frequency, but they can be tricky to set up precisely. If they’re too sensitive they can attack perfectly valid sounds which briefly resemble feedback and if they’re not sensitive enough they can miss unwanted howls.
In the context of a busy mix it can be quite difficult to tell the difference between the noises we want and those we don’t – spotting feedback can be a highly subjective task and many engineers (myself included) prefer to trust our ears over a machine. An experienced engineer can proactively eliminate problems before they occur, unlike a feedback suppressor, which is reactive.
However, it’s important when employing graphic equalization to be aware that not all EQs are equal. Each slider on the EQ applies an individual filter centered on the nominated frequency. This means that it doesn’t just attenuate or boost the specific frequency but also those adjacent to it (to a lesser degree).
The width of the filter (known as the bandwidth) can differ greatly from unit to unit, and is often expressed as Q (the quality factor), which is inversely related to bandwidth: the higher the Q, the narrower the filter. Some have wide filters, some have narrow ones, and some are variable, starting out wide and getting narrower with the more cut or boost that is applied. Most engineers prefer wide filters for general equalization of a system to make it sound more musical, with narrow filters applied for surgical feedback attenuation.
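The relationship between Q and bandwidth can be sketched in one line: bandwidth is roughly the center frequency divided by Q. The Q values below are illustrative choices, not standards.

```python
# Sketch of the Q/bandwidth relationship: bandwidth_hz = center_hz / q,
# so higher Q means a narrower filter.

def bandwidth_hz(center_hz, q):
    """Approximate -3 dB bandwidth of a filter with the given Q."""
    return center_hz / q

# A wide, "musical" filter vs. a narrow, surgical one at 1 kHz:
wide = bandwidth_hz(1000, 1.4)    # ~714 Hz wide, roughly a full octave
narrow = bandwidth_hz(1000, 30)   # ~33 Hz wide, for notching a resonance

print(wide, narrow)
```

The wide filter gently shapes a broad swath of the spectrum; the narrow one can pull out a single ringing frequency while leaving the material around it largely untouched.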
Then To Now
Like most of the tools in live sound, graphic equalizers evolved in the good old analog days of the 1960s and 70s. Various designs emerged and eventually settled down into the 31-band standard, which has been ubiquitous ever since.
The reason this format triumphed is that each filter represents a third of an octave, covering the full range of human hearing (i.e., from 20 Hz to 20 kHz), which provides a comprehensive way to address the entire frequency spectrum in a musical manner.
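The 31 band centers fall out of that spacing directly: each is a factor of 2^(1/3) above the last, anchored around a 1 kHz reference. Real-world units label the sliders with rounded "nominal" values (25 Hz, 31.5 Hz, 40 Hz and so on) rather than the exact computed frequencies.

```python
# Sketch of how the 31 standard band centers arise: thirds of an
# octave around 1 kHz, spanning roughly 20 Hz to 20 kHz.

centers = [1000 * 2 ** (k / 3) for k in range(-17, 14)]

print(len(centers))        # 31 bands
print(round(centers[0]))   # lowest band, ~20 Hz
print(round(centers[-1]))  # highest band, ~20 kHz
```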
At the time, there was a belief that 1/3-octave filters corresponded well to the width of the ear’s Critical Bands, although current research indicates that 1/6-octave would be a better fit. This format has proven so effective that it persists to this day, even in the software facsimiles present in the virtual racks of digital consoles.
Speaking of digital consoles, the prodigious signal processing power now available means that most manufacturers include a set of parametric equalizers on all the outputs, which has enabled us to develop hybrid methods for ringing out the system.
Nowadays I use up to four parametric EQs with extremely narrow filters (i.e., high Q) to attenuate the first four resonant frequencies and then a graphic EQ to deal with any further resonances present in the system.
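Under the hood, a digital parametric band set to a very high Q behaves as a notch filter. A hedged sketch of one such notch, using the widely cited "Audio EQ Cookbook" (RBJ) biquad formulas, is below; the 48 kHz sample rate, 2 kHz resonance, and Q of 30 are illustrative values I've chosen, not a recommendation.

```python
import cmath
import math

def notch_coeffs(f0, q, fs):
    """RBJ cookbook notch biquad coefficients, normalized so a[0] == 1."""
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    b = [1.0, -2 * math.cos(w0), 1.0]
    a = [1 + alpha, -2 * math.cos(w0), 1 - alpha]
    return [bi / a[0] for bi in b], [ai / a[0] for ai in a]

def magnitude(b, a, f, fs):
    """Evaluate the filter's magnitude response at frequency f."""
    z = cmath.exp(1j * 2 * math.pi * f / fs)
    num = b[0] + b[1] / z + b[2] / z ** 2
    den = a[0] + a[1] / z + a[2] / z ** 2
    return abs(num / den)

# Notch out a hypothetical 2 kHz resonance at a 48 kHz sample rate:
b, a = notch_coeffs(2000, 30, 48000)
print(magnitude(b, a, 2000, 48000))  # ~0: the resonance is removed
print(magnitude(b, a, 500, 48000))   # ~1: nearby content passes untouched
```

The appeal of this hybrid approach is exactly what the magnitude check shows: a high-Q parametric band surgically removes the resonance while leaving the rest of the spectrum for the graphic EQ's broader strokes.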
Being able to deploy equalization in real time to keep the beast at bay is a vital skill to develop, but there are other ways to improve your chances in the battle.
A basic awareness of the directional characteristics of loudspeakers and the reflective qualities of nearby surfaces can greatly aid in loudspeaker placement. There’s a reason why we typically place loudspeakers at the front of the stage pointing towards the audience with all the microphones behind pointing in the opposite direction.
Likewise with mics.
While it’s common to use omnidirectional microphones in the studio environment for their open sound and lack of proximity effect, it’s no coincidence that most of the mics we use in live sound are directional in some way. (The exception is lavalier designs, which are often omni to avoid coloration as the individual turns his or her head.)
Understanding the difference between the various polar patterns of stage microphones can greatly aid in ensuring the most sensitive area of the microphone is pointing towards what you want to capture and the least sensitive area towards the things you don't (such as loudspeakers).
At the end of the day, we can never truly banish the feedback beast. A live show, by its very nature, is a dynamic event. Things change and mics move, but the more we tussle with the beast the easier it gets to anticipate its behavior and swiftly banish it before it can ruin everyone’s evening.