Equalizing The Room—What It Really Means
There are lots of disagreements but all agree on one thing: You can't change...

December 24, 2013, by Bob McCarthy


“I’m going to equalize the room.” We’ve all heard that statement so many times that we scarcely think about what it literally means. We know that in practical terms it means adjusting an equalizer to suit your taste. It may be done with the latest high-technology analysis equipment, voodoo magic or simply tweaking away “until it sounds right.”

In any case, are we really “equalizing the room”? What exactly are we doing? There are lots of disagreements on this topic but all agree on one thing: You cannot change the architecture of the room with an equalizer.

You can, however, equalize the response of the speaker system. Where the room fits into all this is a matter of debate; it is much more than semantics, and it has very real practical consequences for our approach to sound system alignment.

What do equalizers “equalize” anyway?
Let’s assume that we have a speaker system with a flat (or otherwise desirable) free field frequency response. That is to say, it requires no further equalization. There are three categories of interaction that will cause the frequency response to change, to become, for lack of a better word, “unequalized.”

The first of these interactions is between speakers. When a second speaker is added, the combination results in a modified frequency response at virtually all locations. This is true of all speaker models and all array configurations, regardless of any claims to the contrary.

The summation of the two responses varies the frequency response at each position, depending upon the relative arrival time and level of the two speakers. As additional speakers are added, the variations in response increase proportionally.
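As a quick illustration of that summation (a minimal sketch, not from the article, treating each arrival as an ideal point source with made-up level and delay values), the combined response at a single position depends only on the relative level and arrival time of the two sources, and it ripples with frequency:

```python
import numpy as np

def combined_response_db(freqs_hz, rel_level_db, rel_delay_ms):
    """Magnitude of (direct arrival + delayed, attenuated second arrival),
    relative to the direct arrival alone."""
    a = 10 ** (rel_level_db / 20.0)                # linear gain of the second arrival
    phase = 2 * np.pi * np.asarray(freqs_hz) * (rel_delay_ms / 1000.0)
    total = 1.0 + a * np.exp(-1j * phase)          # complex (phasor) summation
    return 20 * np.log10(np.abs(total))

freqs = np.logspace(np.log10(125), np.log10(8000), 7)
print(np.round(combined_response_db(freqs, rel_level_db=-3, rel_delay_ms=1.0), 1))
```

Change the delay by a fraction of a millisecond, or the relative level by a few dB, and the pattern of peaks and dips shifts, which is exactly why the combined response differs from seat to seat.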

The second category is the interaction of the speaker(s) with the room. These interactions are generally termed coupling, reflections or echoes. The mechanism is similar to the speaker interaction above: the response varies from position to position, depending upon the relative arrival time and level of the direct and reflected sound.

Both of the above effects are the result of a summation in the acoustical space of multiple sources, either speaker and speaker, or speaker and reflection. Therefore the solutions for these interactions are very closely related.

The third interaction is caused by the effects of dynamic conditions of temperature, humidity and changing absorption coefficient. However, the effects of these interactions are small by comparison with the other two, so we will not touch on them further here.

Are any of these problems solvable with an equalizer? The answer is a qualified “Yes”. The magnitude of the above problems can be reduced by equalization, and substantial progress can be made toward restoring the original desirable frequency response.

If equalizers were totally ineffective, then why have we been loading these things into our racks for the last 35 years? However, in a practical sense the equalizer can only provide complete success in equalizing the response when applied in conjunction with other techniques such as architectural modification, precise speaker positioning, delay and level setting.

To what extent is the speaker/room interaction equalizable? This has been a matter of debate for more than 15 years. In particular the advocates of various acoustical measurement systems have come down hard on these issues.

What we are doing is equalizing, among other things, the effects of the room on the speaker system. Why is this controversial? It stems from the historical relationship of equalizers and analyzers. Let’s turn on the way-back machine and take a look.

Early analysis
In ancient times (the 1970s), the alignment of sound systems centered around a crude tool known as the Real-Time Analyzer (RTA) and a companion solution device, the graphic equalizer. The analyzer displayed the amplitude response over frequency in 1/3 octave resolution and the equalizer could be adjusted until an inverse response was created, yielding a flat combined response.

It takes a negligible skill level to learn to fiddle with the graphic EQ knobs until all the LEDs line up on the RTA. It is so simple that a monkey could do it, and the result often sounded like it.

Although these tools were the standard of the day, they have severe limitations, and those very limitations can lead to gross misunderstanding of the interaction of the speakers with each other and the room, resulting in poor alignment choices.

One such limitation is the fact that the RTA lacks information regarding the temporal aspects of the system response. There is no phase information nor any indication as to the arrival order of energy at the mic.

The RTA cannot discern direct from reverberant sound, nor does it indicate whether the response variations are due to loudspeaker interaction alone or to loudspeaker/room interaction. Therefore the RTA provides no help in terms of critical speaker positioning, delay setting or architectural acoustics.

Second, the RTA gives no indication as to whether the response at the mic is in any way related to the signal entering the loudspeakers. The RTA gives a status report of the acoustical energy at the microphone, with no frame of reference as to the probable causes of response peaks and dips.

These peaks and dips could be due to early room reflections or speaker interactions, which can respond favorably to equalization. However, the irregularities in response could be from late reflections, noise from a forklift engine or reflections from a steel beam in front of the loudspeaker.

The equalizer will be ineffective as a forklift or steel beam remover, but the RTA will give you no reason to suspect these problems. A system that is completely unintelligible could look the same as one that is clear as a bell.

Third is the fact that 1/3-octave frequency resolution is totally insufficient for critical alignment decisions. In addition, there is the misconception that a matched analyzer/filter set is desirable. It is not. The analyzer should have three times the resolution of the filter set in order to provide the visible data needed to detect the center frequency, bandwidth and magnitude of response aberrations.

A 1/3-octave RTA is only able to reliably determine bandwidths of an octave or more. What appears as a 1/3-octave peak may be much narrower. What appears as a broad 2/3-octave peak may actually be a tall, narrow peak placed between the 1/3-octave center frequencies. What will your graphic equalizer do with this?
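A toy numerical example of that limitation (my own synthetic peak, not a measured response): energy-averaging a narrow +12 dB bump into 1/3-octave bands, roughly what an RTA display does, reports a lower, wider bump centered on the nearest band, so a graphic-EQ cut set from the display will miss in center frequency, bandwidth and depth.

```python
import numpy as np

fs = np.logspace(np.log10(20), np.log10(20_000), 4800)       # fine-grained frequency axis
f0, sigma_oct, boost_db = 1150.0, 1/24, 12.0                 # narrow peak between 1/3-octave centers
peak_db = boost_db * np.exp(-0.5 * (np.log2(fs / f0) / sigma_oct) ** 2)

centers = 1000.0 * 2.0 ** (np.arange(-5, 6) / 3.0)           # 1/3-octave centers around 1 kHz
for fc in centers:
    band = (fs >= fc * 2 ** (-1 / 6)) & (fs < fc * 2 ** (1 / 6))
    band_db = 10 * np.log10(np.mean(10 ** (peak_db[band] / 10)))   # energy average in the band
    print(f"{fc:7.0f} Hz band: {band_db:5.1f} dB")
```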

Unfortunately the absence of this critical information lulled many users into a sense of complacency predicated on the belief that equalization was the only critical parameter for system alignment. In countless cases, equalizers were employed to correct problems they had no possibility of solving, and could only make worse.

Graphic equalizers have no possibility of creating the inverse of the interactive response of the speakers with the room. Simply put: “You can’t get there from here.”

The audible results of all this tended to create a generally negative view of audio analyzers. Many engineers concluded that their ears, coupled with common sense, could provide better results than a blindly followed analyzer.

As a result, though RTAs were often required on riders, they only received cursory attention on show day.

Modern Analysis
Technological progress led to the development and acceptance of two analysis techniques in the early 80s: Time Delay Spectrometry (TDS) and dual-channel FFT analysis. Both of these systems brought to the table whole new capabilities, such as phase response measurement, the ability to identify echoes and high-resolution frequency response.

No longer could an unintelligible pile of junk look the same as the real McCoy on an analyzer. The complexity of these analyzers required a well-trained, highly skilled practitioner in order to realize the true benefits.

Advocates of both systems stressed the need for engineers to utilize all tools in their system, not equalizers alone, to remedy the response anomalies. Delay lines, speaker positioning, crossover optimization and architectural solutions were to be employed whenever possible. And now we had tools capable of identifying the different interactions.

But on the issue of “equalizing the room” a division arose. All parties agreed that speaker/speaker interaction was somewhat equalizable. The critical disagreement was over the extent the loudspeaker/room interaction could be compensated by equalization.

The TDS camp advocated that speaker/room interaction was not at all equalizable and that, therefore, the measurement system should screen out the speaker/room interaction, leaving only the equalizable portion of the loudspeaker system on the analyzer screen. The inverse of that response was then applied via the equalizer, and that was as far as one should go.

The TDS system was designed to screen out the frequency response effects of reflections from its measurements via a sine frequency sweep and delayed tracking filter mechanism, thereby displaying a simulated anechoic response. The measurements are able to clearly show the speaker/speaker interaction of a cluster and provide useful data for optimization.

Such an approach can be effective in the mid and upper frequency ranges, where the frequency resolution can remain high even with fast sweeps, but it is less effective at low frequencies. Low frequencies have such long periods that it is impossible to get high-resolution data without taking long time records, thereby allowing the room into the measurement.

For example, to achieve 1/12-octave resolution, the equivalent of the Western tempered scale, one must have a time record 12 times longer than the period of the frequency in question. For 30 Hz, whose period is about 33 ms, that means a time record of roughly 400 ms (12 x 33 ms). If fast sweeps are made to remove echoes from the measurement, the low-frequency data has insufficient resolution to be of practical use.
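The arithmetic behind that trade-off is simple enough to tabulate (a sketch using the 12-period figure from the text):

```python
# The lower the frequency, the longer the time record needed for 1/12-octave
# resolution, and the more of the room's reflections land inside the window.
freqs_hz = [30, 60, 125, 250, 1000]
for f in freqs_hz:
    period_ms = 1000.0 / f
    record_ms = 12 * period_ms            # 12 periods for ~1/12-octave resolution
    print(f"{f:5d} Hz: period {period_ms:5.1f} ms -> time record {record_ms:6.0f} ms")
```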

Dual-channel FFT analyzers utilize varying time record lengths. In the HF range, where the period is short, the time record is short. As the frequency decreases, the time record length increases, creating an approximately constant frequency resolution.

The measurements reveal a constant proportion of direct sound and early reflections, the most critical area in terms of perceived tonal quality of a speaker system.

The most popular FFT systems utilize 1/24th-octave resolution, which means the measurements are confined to the direct sound and the reflections arriving within a 24-wavelength time window at every frequency. This is a good practical level of resolution, allowing us to accurately equalize at around the 1/8-octave level.
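To put rough numbers on that 24-wavelength window (my own figures, assuming a speed of sound of about 343 m/s; nothing here describes a specific analyzer):

```python
# At high frequencies only very early reflections fall inside the analysis
# window; at low frequencies most of the room does.
SPEED_OF_SOUND_M_S = 343.0                # assumed, roughly 20 degrees C
for f in (8000, 2000, 500, 125, 63):
    window_ms = 24 * 1000.0 / f           # 24 periods at this frequency
    path_m = SPEED_OF_SOUND_M_S * window_ms / 1000.0
    print(f"{f:5d} Hz: window {window_ms:6.1f} ms  (~{path_m:6.1f} m of extra reflection path)")
```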

With the FFT approach, more and more of the room enters the response as frequency decreases. This is appropriate because at low frequencies the room/speaker interaction is still inside the practical equalizability window.

For example, suppose an arena scoreboard reflection arrives 150 ms after the direct signal. At 10 kHz, the peaks and dips from this reflection are spaced about 1/1500 of an octave apart. At 30 Hz, they will be roughly 1/3 octave apart. Thus the scoreboard is in the distant field relative to the tweeters, and applying equalization to counter its effects would be totally impractical.

An architectural solution such as a curtain would be effective. But for the subwoofers, the scoreboard is a near-field boundary and will yield to filters much more practically than the 50 tons of absorptive material required to suppress it acoustically.
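The scoreboard numbers can be checked with a couple of lines of arithmetic (a sketch; the exact fractional-octave figures depend on how the fixed linear spacing is converted to octaves, but the contrast is the point):

```python
import math

delay_s = 0.150                                # the 150 ms scoreboard reflection
spacing_hz = 1.0 / delay_s                     # ~6.7 Hz between adjacent comb peaks
for f in (10_000, 1_000, 100, 30):
    width_oct = math.log2((f + spacing_hz) / f)    # spacing expressed in octaves at f
    print(f"{f:6d} Hz: spacing {spacing_hz:4.1f} Hz  ~{width_oct:7.4f} octave")
```

The same 6.7 Hz spacing is vanishingly narrow next to a tweeter's octaves but a sizeable slice of an octave down at the subwoofers, which is why the filters become practical there.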

Many years ago, the FFT camp boldly stated that the echoes in the room could be suppressed through equalization. Unfortunately, these statements were made in absolute terms without qualifying parameters, leaving the impression that the FFT advocates thought it was desirable or practical to remove all of the effects of reverberation in a space through equalization.

While it can be proven from a theoretical standpoint that the frequency response effects of a single echo can be fully compensated for, that does not mean it is practical or desirable. The suppression can only be accomplished if the relative level of the echo does not equal or exceed that of the direct sound, and if no other special circumstances arise that cause excess delay. (Excess delay causes a “non-minimum phase” aberration and is outside the scope of this article.)

If the direct level and echo level are equal the cancellation dip becomes infinitely deep and the corresponding filter required to equalize it is an infinite peak. As we know from sci-fi movies, bad things happen when positive and negative infinity meet up.
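A short sketch of that runaway (my own arithmetic, for an idealized direct arrival of unity and a single echo of linear gain a): the deepest null is 1 - a relative to the direct sound, so the boost demanded of the inverse filter grows without bound as the echo approaches the direct level.

```python
import math

for echo_db in (-12, -6, -3, -1, -0.1):
    a = 10 ** (echo_db / 20.0)                # echo level as a linear gain
    dip_db = 20 * math.log10(1.0 - a)         # depth of the cancellation null
    print(f"echo at {echo_db:6.1f} dB: null {dip_db:6.1f} dB -> needs {-dip_db:5.1f} dB of boost")
```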

Compensating for the response requires adjustable-bandwidth filters capable of creating an inverse to each comb filter peak and dip in the response. As the echo delay increases, you will need increasing numbers of ever-narrowing filters.

A 1 ms echo corrected out to 20 kHz will require some 40 filters, because there are 20 peaks and 20 dips varying in bandwidth from one octave down to 0.025 octave. A 10 ms echo would need 400 filters, with bandwidths down to 1/400 octave.
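The same arithmetic, generalized (a sketch that reproduces the figures above): an echo delayed by T seconds places a peak or dip every 1/(2T) Hz, so correcting out to 20 kHz takes roughly 2 x T x 20,000 filters, the narrowest about 1/(2 x T x 20,000) of an octave wide.

```python
F_MAX_HZ = 20_000
for delay_ms in (1, 10, 100):
    T = delay_ms / 1000.0
    n_filters = int(2 * T * F_MAX_HZ)               # peaks + dips below 20 kHz
    narrowest_oct = 1.0 / (2 * T * F_MAX_HZ)        # near the top of the band
    print(f"{delay_ms:4d} ms echo: ~{n_filters:5d} filters, narrowest ~1/{int(1 / narrowest_oct)} octave")
```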

Obviously, it would be insane to attempt to remove all of the interaction at even a single point in the hall. In the practical world, we have no intention of attacking every minuscule peak and dip, but instead will go after the biggest repeat offenders. The narrower the filters are, the less practical value they have because the response changes over position.

Practical Implications
It is indeed possible and practical to suppress some of the effects of speaker/room interaction. If this were not possible, it would be standard practice to equalize your rig in the shop, put a steel case around the EQ rack and hit the road. The practical side of this is that we must be realistic about what is attainable and about the best means of getting there.

The variations in frequency response due to both speaker/speaker interaction and loudspeaker/room interaction will always change with position. Once you have seen high-resolution data at multiple positions, you can never go back to thinking that your equalization will solve problems globally.

A system that has the minimal amount of the above interactions will have the greatest uniformity throughout the listening environment and, therefore, stand to gain the most practical benefit from equalization. If it sounds totally different at every seat, let’s just tweak the mix position and head to catering.

Minimizing the speaker/speaker interactions requires directional components, careful placement and precise arraying. In areas where the speakers overlap, time delays and level controls will minimize the damage. To minimize loudspeaker/room interaction, the global solutions lie in architectural modification (it’s curtain time), the selection of directionally controlled elements and precise placement.

Finally you are left with equalization. For each subsystem with an equalizer, map out the response in its coverage area by placing a microphone in as many spots as you can and seeing what the trends are.

In particular, measure around the central coverage area of the speaker. Stay away from areas of high interaction, where the response will vary dramatically every inch.

Examples of this include the seam between two cabinets in an array or very close to a wall. Each position will be unique, but if you place filters on the top four to six repeat offenders you will have effectively neutralized the response in that area.
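One way to turn that survey into filter choices (a rough sketch with synthetic, made-up measurement data; real work would use exported transfer-function traces): average the magnitude responses from the mic positions and rank the largest average deviations as candidates for those four to six filters.

```python
import numpy as np

rng = np.random.default_rng(0)
freqs = np.logspace(np.log10(50), np.log10(16_000), 240)

# Stand-in for measurements: a broad bump shared by all positions plus
# position-dependent ripple that should average away.
shared_db = 6 * np.exp(-0.5 * (np.log2(freqs / 250) / 0.5) ** 2)
positions_db = [shared_db + rng.normal(0, 2.0, freqs.size) for _ in range(6)]

avg_db = np.mean(positions_db, axis=0)            # the trend across positions
worst = np.argsort(np.abs(avg_db))[::-1][:5]      # five largest average deviations
for i in sorted(worst):
    print(f"{freqs[i]:7.0f} Hz: average deviation {avg_db[i]:+5.1f} dB")
```

Deviations that survive the averaging are the position-independent trends worth filtering; ripple that changes sign from mic to mic averages toward zero and is best left alone.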

Conclusion
Modern analyzers are capable of displaying a dizzying array of spectral data. But little practical benefit will come to us if we continue with the antiquated approach of the RTA era. To fully take advantage of the benefits of equalization, we must fully comprehend how to identify the mechanisms that “unequalize” the system.

With modern tools, it becomes possible to analyze the response such that the interactive factors of speaker systems can be distilled and viewed separately. This allows the alignment engineer to prepare the way for successful equalization by using other techniques that reduce interaction and maximize uniformity in the system.

“Equalizing the room” will remain in the domain of architectural acousticians, but with advanced tools and techniques, we can proceed forward to better equalize the speaker system in the room.

Bob McCarthy serves as director of system optimization with Meyer Sound.


