Spatial Pursuits: The Evolution Of Large-Scale Sound System Optimization

Early Concerts

In the early 1980s, the first systems we measured with SIM (short for “source independent measurement”) were supporting arena rock concerts. They had minimal signal complexity: simple left/right systems without signal subdivision.

Therefore we didn’t have separate EQs for side fills, front fills, etc. There was left and there was right, period. There was nothing to do but EQ and nowhere to measure but the mix position. Why measure the side areas if you’re not going to do anything about them? We weren’t even allowed to turn parts of the system down at the amplifiers or at any subdivision point.

During this early period we learned how to equalize systems, but that was as far as we could go. Optimization had limited appeal with mainstream rock ‘n’ roll because it didn’t do the most important thing: make it louder. The only thing it could promise was to make the response more uniform over the room, which might have even made it less loud at the mix position. Who needs that?

A short time later (1984) we began to do arena shows with Italian operatic tenor Luciano Pavarotti. The promoter sold seating all around the arena, including the sides and rear, which led to a complex system design and the first real application for system optimization. The system had four main clusters (center, side, side, and rear) and each of these had multiple sections in the vertical plane.

There were 10 subsystems in total, each with its own equalizer and level control, so there was no more EQ tug of war. No digital delay was quiet and/or good enough yet, so we had to physically line up the rigging points to get a respectable time alignment.
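
For a sense of what physically lining up the rigging points buys you: sound travels at roughly 343 m/s at room temperature, so every meter of path-length mismatch between two subsystems adds about 2.9 ms of arrival-time offset at a given seat. A back-of-envelope sketch in Python, with invented distances:

    SPEED_OF_SOUND_M_S = 343.0              # roughly, at 20 degrees C

    def offset_ms(path_a_m, path_b_m):
        # Arrival-time offset between two sources at one listening position.
        return abs(path_a_m - path_b_m) / SPEED_OF_SOUND_M_S * 1000.0

    # Invented example: main cluster 30.0 m from a seat, side cluster 32.5 m away.
    print(round(offset_ms(30.0, 32.5), 2))  # about 7.29 ms of misalignment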

The mission: make it sound the same everywhere, and develop the tools to do it. We built a custom multichannel switcher that provided access points to all of the system equalizers. We could measure every EQ in the system while observing the response at the mic (yes, mic, singular – there was still just one mic, moved all over the hall).

A procedure was developed, a step-by-step process that included the individual measurement of each subsystem and their addition into the composite whole. A mic placement strategy was also devised that drove the setting of each subsystem’s EQ and relative level. The second stage was the equalization of the combined systems. The final stage was the ongoing optimization during the concert. Since there was only a single mic, you can guess where it was placed.

Additional Concerns

The process for setting EQ was more complex than one might imagine. Recall that we view the frequency response with three transfer functions (Room/EQ/Result) in seven linear segments. The EQ sequence follows the pattern of A) Find the problem; B) Fix the problem; C) Verify the result. To find the raw response, we view the Room transfer function, where we see a peak, store the trace, and recall it for comparison.
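
For readers who think in code, here is a minimal sketch of those three views, assuming access points at the EQ input, the EQ output, and the measurement mic (the arrangement the custom switcher provided). The helper name tf_db and the stand-in signals are illustrative only; a real analyzer averages many frames and checks coherence before trusting a trace.

    import numpy as np

    def tf_db(x, y, fs, n_fft=4096):
        # Magnitude of the transfer function y/x in dB for a single frame.
        # Illustrative helper only, not SIM's actual math.
        X = np.fft.rfft(x[:n_fft])
        Y = np.fft.rfft(y[:n_fft])
        freqs = np.fft.rfftfreq(n_fft, d=1.0 / fs)
        mag_db = 20.0 * np.log10((np.abs(Y) + 1e-12) / (np.abs(X) + 1e-12))
        return freqs, mag_db

    fs = 48000
    rng = np.random.default_rng(0)
    eq_in = rng.standard_normal(fs)        # program material feeding the EQ
    eq_out = eq_in.copy()                  # a flat (bypassed) EQ for the demo
    mic = 0.5 * np.roll(eq_out, 48)        # stand-in "room": 1 ms late, 6 dB down

    f, room_db = tf_db(eq_out, mic, fs)    # Room:   mic vs. EQ output
    f, eq_db = tf_db(eq_in, eq_out, fs)    # EQ:     EQ output vs. EQ input
    f, result_db = tf_db(eq_in, mic, fs)   # Result: mic vs. EQ input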

[Photo caption: The engineer must adapt to the field situation with methodical solutions that often require much more than just turning knobs on an equalizer. In this case, a physical solution was needed to tune the system while water leaked through the backstage ceiling. The author is pictured at Carnegie Hall, 2013.]

Next we switch to the EQ response and compare it to the Room response in memory. We turn the EQ knobs until we make a dip that complements the peak. (In practice we actually viewed the EQ inverted so that making the complement was as easy as matching the two traces. This was much easier and more accurate than trying to see that they were opposites.) Then we switch to the Result response, where we see that the peak has been flattened by the dip in the EQ.
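
The arithmetic behind that complementary dip is simple once you remember that transfer functions multiply, so their dB magnitudes add. A toy example, with an invented 6 dB peak standing in for a real room measurement:

    import numpy as np

    # One linear segment of invented magnitude traces, in dB.
    freqs = np.linspace(31.5, 250.0, 64)
    room_db = 6.0 * np.exp(-((freqs - 125.0) / 40.0) ** 2)  # 6 dB peak near 125 Hz
    eq_db = -room_db                                        # the complementary dip

    # Transfer functions multiply, so their dB magnitudes add:
    # Result (dB) = Room (dB) + EQ (dB).
    result_db = room_db + eq_db
    print(np.allclose(result_db, 0.0))      # True: the peak is flattened

    # Viewing the EQ inverted turns "make the opposite shape" into
    # "match the stored Room trace": -eq_db is identical to room_db.
    print(np.allclose(-eq_db, room_db))     # True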

O.K., we’ve now completed the EQ process up to 250 Hz. There are six more rounds to go to get up to the top of the spectrum. That’s 21 measurements total, which would take around four minutes just to get the 63 traces. We also needed to make sure we remembered (in our head) where the trace was in the low end as we moved up – wouldn’t want to lose our reference point, eh?

The other fun part was making sure to pull the correct trace from memory. Don’t compare the 250 Hz Room response to the 500 Hz EQ response. The analyzer didn’t care. Once it was stored it was just a green line. You had to keep it honest.

We soon added a primitive computer to store the data, label it, and send it back to the analyzer (a Hewlett-Packard HP 3582). The computer was all about managing the library of 63 traces and keeping them sorted out. Without it, we erased our work every 15 seconds and started from scratch. Following all of this, it was time to move the mic and repeat the process for the next of the 10 subsystems.
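
In modern terms, here is a rough sketch of the bookkeeping that computer took over: every stored trace carries its own label, so a 250 Hz Room trace can never be recalled against a 500 Hz EQ trace by mistake. The class and field names are hypothetical, not the original software.

    from dataclasses import dataclass

    VIEWS = ("Room", "EQ", "Result")

    @dataclass(frozen=True)
    class TraceKey:
        subsystem: str    # e.g. "center cluster, upper"
        segment_hz: int   # top of the linear segment, e.g. 250
        view: str         # "Room", "EQ" or "Result"

    class TraceLibrary:
        def __init__(self):
            self._store = {}

        def store(self, key, trace):
            if key.view not in VIEWS:
                raise ValueError(f"unknown view: {key.view!r}")
            self._store[key] = list(trace)

        def recall(self, key):
            return self._store[key]  # KeyError if that exact trace was never stored

    lib = TraceLibrary()
    lib.store(TraceKey("center cluster", 250, "Room"), [0.0] * 64)
    room_250 = lib.recall(TraceKey("center cluster", 250, "Room"))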