The sound reinforcement system tuning procedure on my first arena tour in 1978 had two steps: 1) Listen to a track on a cassette and adjust the octave graphic EQ; 2) Sit at front of house, say “test 1-2” into a Shure SM545 microphone and adjust the octave graphic EQ. The only variables to set were crossover levels and house EQ.
By the early 1980s the procedure had expanded to three steps: 1) EQ the system by eye, looking at the real-time analyzer (RTA) at the mix position; 2) EQ the system by ear, listening to a cassette at the mix position; 3) Retune the system by speaking into a Shure SM58 mic.
We’ve come a long way since then, at least in some places. Let’s take a moment to review what comprises system optimization in the current era.
The process is much more than just turning some knobs on an equalizer until we like it. There are a large number of operations that must be performed before this final “taste test.”
The system must be verified to ensure all wiring is correct. Polarity must be checked and phase incompatibility between different models must be compensated. Acoustical crossovers between loudspeakers covering different ranges must be optimized.
Latency must be monitored in the signal path, especially in the modern world of digital audio and networked systems. The maximum level capability and noise floor must become known to obtain the optimal gain scaling to maximize the dynamic range. Any sources of compression or distortion in the signal path must be found.
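Two of these verification steps, polarity and latency, can be sketched with a simple cross-correlation between the reference signal and what comes back from the device under test. This is a minimal illustration with a simulated signal chain; the function name and test numbers are my own, not taken from any particular analyzer.

```python
import numpy as np

def latency_and_polarity(reference, measured, sample_rate):
    """Cross-correlate the measured signal against the reference to estimate
    the delay through the device and whether it inverts polarity."""
    corr = np.correlate(measured, reference, mode="full")
    peak = np.argmax(np.abs(corr))               # strongest alignment, either sign
    delay_samples = peak - (len(reference) - 1)  # shift relative to zero lag
    polarity = 1 if corr[peak] >= 0 else -1      # negative peak = inverted
    return delay_samples / sample_rate * 1000.0, polarity

# Simulated device under test: 480 samples of delay (10 ms at 48 kHz)
# and an accidental polarity flip somewhere in the chain.
rng = np.random.default_rng(0)
fs = 48000
ref = rng.standard_normal(4096)
dut = -np.concatenate([np.zeros(480), ref])

latency_ms, polarity = latency_and_polarity(ref, dut, fs)
print(f"latency: {latency_ms:.1f} ms, "
      f"polarity: {'inverted' if polarity < 0 else 'normal'}")
```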
Once verification is complete, we need to make calibration decisions: loudspeaker aim, splay angle, spacing, delay times, relative levels and equalization. It should be clear by now that the inherent limitations of the RTA render it unable to perform the full set of optimization operations.
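One of those calibration decisions, delay time, reduces to simple arithmetic: delay the nearer loudspeaker so its arrival lines up with the farther one. A minimal sketch with hypothetical distances (the function and numbers are illustrative, not from any specific venue):

```python
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C

def fill_delay_ms(main_distance_m, fill_distance_m):
    """Delay needed on a fill loudspeaker so its arrival at the listener
    matches the (farther) main loudspeaker."""
    return (main_distance_m - fill_distance_m) / SPEED_OF_SOUND * 1000.0

# Mains 30 m from the listener, under-balcony fill 6 m away:
print(f"{fill_delay_ms(30.0, 6.0):.1f} ms")
```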
However, these limitations did not stop the RTA from being the dominant tool back in the day. As previously stated, it was cheap, small and easy to use. It was a simple tool, but in many ways those were simpler times. The process was simply termed “equalization” at the time, which was the only thing the RTA was useful for anyway.
As the sound systems themselves evolved, the analyzers evolved and the process evolved beyond equalization into optimization. As we will see, analyzers and sound systems grew up together, and the RTA faded away into extinction.
Evolving To Optimization
In the early days of this work there was typically only a single action considered to be in the scope of optimization: equalization. This is extremely important because system EQ is a “season to taste” parameter. When we introduced high-resolution measurement to the FOH position it became yet another opinion on how to set the house EQ. We already had the RTA, the playback tracks, the SM58, and the sound of the band playing.
The modern fast Fourier transform (FFT) analyzer can add super-detailed frequency and phase response to the discussion, but in the end we’re all weighing in on the house EQ settings. I called it the “EQ tug-of-war”: an endless debate of ears versus analyzers that can never be resolved. The modern analyzer becomes truly useful on its own when we move from simple equalization to optimization: making it sound the same everywhere.
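That detailed frequency and phase response comes from a transfer function: dividing the spectrum of what came out of the system by the spectrum of what went in. Here is a stripped-down sketch of the dual-FFT idea; real analyzers add averaging, windowing and coherence weighting, all omitted here, and the simulated “system” is just a gain of 0.5 (about -6 dB) plus a one-sample circular delay.

```python
import numpy as np

fs, n = 48000, 4096
rng = np.random.default_rng(1)
reference = rng.standard_normal(n)      # the signal we sent into the system

# Hypothetical system: attenuate by half and delay by one sample (circular)
measured = 0.5 * np.roll(reference, 1)

# Dual-FFT transfer function: output spectrum over input spectrum
transfer = np.fft.rfft(measured) / np.fft.rfft(reference)

freqs = np.fft.rfftfreq(n, 1 / fs)
magnitude_db = 20 * np.log10(np.abs(transfer))  # frequency response
phase_deg = np.degrees(np.angle(transfer))      # phase response
```

Because the simulated system is linear, the magnitude trace sits flat at about -6 dB and the phase falls linearly with frequency, exactly what a flat-but-delayed loudspeaker path would show.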
Into The Great Unknown
There’s so much we know now that we didn’t know in the 1980s. No one had ever measured the response of a sound system in a full arena with the band on stage and been able to compare that to the response with the room empty.
Was the response of the loudspeakers different for songs in different keys? (No, loudspeakers don’t even care if you sing out of tune.) Was the response different for quiet songs versus loud songs? (No, unless the system was non-linear or run into limiting.) Did the sound system produce beat frequencies during a concert? (No, beat frequencies are in our heads – we don’t see them on the analyzer.) Was the graphic equalizer really the best tool for tuning sound systems? (No, this was proven the very first day of SIM.) Does the system response change between room empty and full? (Yes, but there is no simple, consistent trend.)
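The beat-frequency answer is easy to demonstrate. Sum two tones 4 Hz apart: the ear hears the level pulsing four times a second, but the spectrum contains only the two original tones, with nothing at the 4 Hz difference frequency. A quick sketch (the 440/444 Hz pair is my own illustrative choice):

```python
import numpy as np

fs = 48000
t = np.arange(fs) / fs                      # one second of audio
signal = np.sin(2 * np.pi * 440 * t) + np.sin(2 * np.pi * 444 * t)

# With a 1 s window, FFT bins land exactly on integer frequencies in Hz
spectrum = np.abs(np.fft.rfft(signal)) / (len(signal) / 2)

print(f"440 Hz: {spectrum[440]:.3f}  "
      f"444 Hz: {spectrum[444]:.3f}  "
      f"4 Hz: {spectrum[4]:.6f}")
```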
We all knew from our own experiences that sound changed when the audience arrived. But did it change in a global sense (e.g., more 250 Hz) or did it change locally (e.g., more 250 Hz here and less 250 Hz there)? The answer had to wait until 1987, when we went fully multichannel and could measure in eight locations during the concert while comparing empty and full. (And by the way, the answer is locally – each area changes in its own way.)
All of this may seem quaint now, but before we made these measurements it was simply not provable and therefore not known.
There are often events in our lives whose significance we only realize in retrospect years later. This was not one of them. John Meyer and I knew in those first concerts that this groundbreaking form of measurement would become mainstream. It just took 20 years longer than we expected.