Technological progress led to the development and acceptance of two analysis techniques in the early 80s: Time Delay Spectrometry (TDS) and dual-channel FFT analysis.
Both of these systems brought whole new capabilities to the table, such as phase response measurement, echo identification and high-resolution frequency response.
No longer could an unintelligible pile of junk look the same as the real McCoy on an analyzer. The complexity of these analyzers required a well-trained, highly skilled practitioner in order to realize the true benefits.
Advocates of both systems stressed the need for engineers to utilize all tools in their system, not equalizers alone, to remedy the response anomalies. Delay lines, loudspeaker positioning, crossover optimization and architectural solutions were to be employed whenever possible.
And now we had tools capable of identifying the different interactions.
But on the issue of “equalizing the room,” a division arose. All parties agreed that speaker/speaker interaction was somewhat equalizable. The critical disagreement was over the extent the loudspeaker/room interaction could be compensated by equalization.
The TDS camp held that speaker/room interaction was not at all equalizable and that, therefore, the measurement system should screen out the speaker/room interaction, leaving only the equalizable portion of the loudspeaker system response on the analyzer screen. The inverse of that response would then be applied via the equalizer, and that was as far as one should go.
The TDS system was designed to screen out the frequency response effects of reflections from its measurements via a sine frequency sweep and delayed tracking filter mechanism, thereby displaying a simulated anechoic response. The measurements are able to clearly show the speaker/speaker interaction of a cluster and provide useful data for optimization.
Such an approach can be effective in the mid and upper frequency ranges, where the frequency resolution can remain high even with fast sweeps, but it is less effective at low frequencies. Low frequencies have such long periods that it is impossible to get high-resolution data without taking long time records, thereby allowing the room into the measurement.
For example, to achieve 1/12th octave resolution, the equivalent of the Western Tempered Scale, one must have a time record 12x longer than the period of the frequency in question. For 30 Hz, whose period is 33 ms, you will need a 400 ms time record (12 x 33 ms). If fast sweeps are made to remove echoes from the measurement, the low-frequency data has insufficient resolution to be of practical use.
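The arithmetic above can be sketched in a few lines of Python (the rule of thumb and the function name are my own illustration, not from any analyzer's documentation):

```python
# Sketch: time record needed for a given fractional-octave resolution.
# Rule of thumb from the text: 1/N octave resolution at frequency f
# requires a time record of N periods, i.e. N / f seconds.

def time_record_ms(freq_hz, periods):
    """Time record (ms) spanning `periods` cycles of `freq_hz`."""
    return periods / freq_hz * 1000.0

# 1/12th-octave resolution at 30 Hz needs 12 periods of a 33.3 ms cycle:
print(round(time_record_ms(30, 12)))    # 400 ms -- far too long to keep the room out
print(round(time_record_ms(1000, 12)))  # 12 ms at 1 kHz -- easily kept echo-free
```

The contrast between the two results is the whole problem: a window short enough to exclude reflections at 1 kHz is hopelessly coarse at 30 Hz.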
Dual-channel FFT analyzers utilize varying time record lengths. In the HF range, where the period is short, the time record is short. As the frequency decreases, the time record length increases, creating an approximately constant frequency resolution.
The measurements reveal a constant proportion of direct sound and early reflections, the most critical area in terms of perceived tonal quality of a speaker system.
The most popular FFT systems utilize 1/24th octave resolution, which means that the measurements are confined to the direct sound and the reflections arriving within a window of 24 periods of the frequency in question, across the board.
This is a good practical level of resolution, allowing us to accurately equalize at around the 1/8 octave level.
With the FFT approach, more and more of the room enters the response as frequency decreases. This is appropriate because at low frequencies the room/speaker interaction is still inside the practical equalizability window.
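A small sketch makes the frequency-dependent window concrete (a rough Python illustration; the function name is mine, and the 24-period window follows the 1/24th octave figure above):

```python
# Sketch: the time span admitted into a constant-Q (1/24th octave)
# dual-channel FFT measurement at each frequency, assuming a window
# of 24 periods of the frequency in question.

def window_ms(freq_hz, periods=24):
    """Length (ms) of a window spanning `periods` cycles of `freq_hz`."""
    return periods / freq_hz * 1000.0

for f in (8000, 1000, 125, 30):
    print(f"{f:>5} Hz: {window_ms(f):7.1f} ms of direct sound + reflections")
# 8 kHz admits only the first 3 ms (essentially direct field), while
# 30 Hz admits 800 ms -- a large helping of the room.
```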
For example, suppose an arena scoreboard reflection arrives 150 ms after the direct signal. At 10 kHz, the peaks and dips from this reflection are spaced about 1/1500 of an octave apart. At 30 Hz, they will be roughly 1/3 octave apart. Thus the scoreboard is in the distant field relative to the tweeters, and applying equalization to counter its effects would be totally impractical.
An architectural solution such as a curtain would be effective. But for the subwoofers, the scoreboard is a near-field boundary and will yield to filters much more practically than the 50 tons of absorptive material required to suppress it acoustically.
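The spacing figures above can be checked with a short sketch (my own Python illustration; the article's 1/1500 figure comes from the simpler delay-in-periods approximation, so the exact-octave numbers below differ slightly at the top end):

```python
import math

# Sketch: comb-filter spacing from a reflection, expressed in octaves.
# A reflection delayed by t seconds produces peaks/dips every 1/t Hz;
# near frequency f that spacing is log2((f + 1/t) / f) octaves.

def comb_spacing_octaves(freq_hz, delay_s):
    spacing_hz = 1.0 / delay_s
    return math.log2((freq_hz + spacing_hz) / freq_hz)

print(comb_spacing_octaves(10_000, 0.150))  # ~0.001 octave -- hopeless for EQ
print(comb_spacing_octaves(30, 0.150))      # ~0.29 octave -- within filter reach
```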
Many years ago, the FFT camp boldly stated that the echoes in the room could be suppressed through equalization. Unfortunately, these statements were made in absolute terms without qualifying parameters, leaving the impression that the FFT advocates thought it was desirable or practical to remove all of the effects of reverberation in a space through equalization.
While it can be proven from a theoretical standpoint that the frequency response effects of a single echo can be fully compensated for, that does not mean it is practical or desirable. The suppression can only be accomplished if the relative level of the echo does not equal or exceed that of the direct sound, and if no special circumstances arise that cause excess delay. (Excess delay causes a "non-minimum phase" aberration and is outside the scope of this article.)
If the direct level and echo level are equal the cancellation dip becomes infinitely deep and the corresponding filter required to equalize it is an infinite peak. As we know from sci-fi movies, bad things happen when positive and negative infinity meet up.
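A sketch shows how quickly the dip runs away as the echo approaches the direct level (my own Python illustration, using the worst-case cancellation depth for a direct level of 1.0 and an echo at relative level r):

```python
import math

# Sketch: depth of a comb-filter cancellation dip as the echo level
# approaches the direct level. With the direct at 1.0 and the echo at
# relative level r, the worst-case dip is 20*log10(1 - r) dB,
# heading toward minus infinity as r -> 1.

def dip_depth_db(r):
    """Worst-case cancellation depth (dB) for echo at relative level r."""
    return 20.0 * math.log10(1.0 - r)

for r in (0.5, 0.9, 0.99, 0.999):
    print(f"echo at {r:.1%} of direct: dip of {dip_depth_db(r):6.1f} dB")
# 50% -> -6 dB, 90% -> -20 dB, 99.9% -> -60 dB; at 100% the dip
# (and the inverse boost required to fill it) would be infinite.
```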
Compensating for the response requires adjustable bandwidth filters capable of creating an inverse to each comb filter peak and dip in the response. As the echo delay increases, you will need increasing numbers of ever-narrowing filters.
A 1 ms echo corrected to 20 kHz will require some 40 filters, because there are 20 peaks and 20 dips varying in bandwidth from 1 to .025 octave. A 10 ms echo would need 400 filters, with bandwidths down to a 1/400 octave.
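Those filter counts follow directly from the comb spacing (a Python sketch of my own, using the article's counting of one filter per peak and per dip, and its small-bandwidth approximation of half a comb spacing divided by the top frequency):

```python
# Sketch: filters needed to invert a single echo up to an upper
# frequency limit, one filter per comb peak and one per dip.

def filters_needed(delay_s, f_max_hz):
    spacing_hz = 1.0 / delay_s          # comb repeats every 1/t Hz
    peaks = round(f_max_hz * delay_s)   # peaks below f_max; dips sit midway
    dips = peaks
    # Narrowest feature is half a spacing wide; express it as a fraction
    # of the top frequency (the approximation used in the article):
    narrowest_oct = (spacing_hz / 2.0) / f_max_hz
    return peaks + dips, narrowest_oct

print(filters_needed(0.001, 20_000))  # (40, 0.025)   -- 1 ms echo, 1/40 octave
print(filters_needed(0.010, 20_000))  # (400, 0.0025) -- 10 ms echo, 1/400 octave
```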
Obviously, it would be insane to attempt to remove all of the interaction at even a single point in the hall. In the practical world, we have no intention of attacking every minuscule peak and dip, but instead will go after the biggest repeat offenders. The narrower the filters are, the less practical value they have, because the response changes with position.