Spatial Pursuits: The Evolution Of Large-Scale Sound System Optimization

Musical Theater

Our next big break was in musical theater, working with sound designers Abe Jacob and Andrew Bruce. In this world there were already extreme levels of system subdivision: left, right, inner, outer, upper, lower, music, vocal, surround, FX, and more. Every one of these subsystems needed individual attention, and they also required seamless combination with each other into an integrated whole.

The scope of work now included pan and tilt, individual EQ, level setting, delay setting, angular splay, spacing, and combined EQ. We now had all the pieces of modern optimization on the table, but there wasn't an analysis system that could handle it: we still had just a single mic and extremely limited ability to manage the multichannel data.

The 1986 London production of Chess was a turning point for me. I spent night after night moving a single mic up and down, in and out, and all over the three levels of the Prince Edward Theatre. I would fix it upstairs only to find I had broken it downstairs, and then do the same thing in reverse. Every time we changed something somewhere, it changed things somewhere else.

Four critical ingredients were required to successfully optimize musical theater’s complex interactive multichannel systems: access to multiple equalizers, multiple mics, computer memory to manage the data, and a systematic methodology.

It’s impossible to overstate the importance of memory for this work. There’s only a limited amount we can learn from any single set of data on an analyzer. The screen shows what we have here and now, but lacks context as to how this relates to where and when. The prettiest trace in the world means nothing until we relate it to other locations in the room.

Optimization is a spatial pursuit. How can we tell it sounds the same around the room if we don’t have data from all around the room to which we can compare?
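
To make that idea concrete, here is a minimal sketch in Python of what a stored multi-position comparison looks like. The data, position names, and the deviation score are entirely hypothetical illustrations, not the SIM workflow itself; the point is that once traces from several locations live in memory, "sounds the same around the room" becomes a number we can compare.

```python
import numpy as np

def deviation_db(trace_db, reference_db):
    """Average absolute difference (dB) between two magnitude traces
    measured on the same frequency grid."""
    return float(np.mean(np.abs(trace_db - reference_db)))

# Hypothetical stored library: magnitude responses (dB) at three mic
# positions, all on a shared log-spaced frequency grid.
freqs = np.logspace(np.log10(31.5), np.log10(16000), 200)  # Hz
traces = {
    "front":  np.zeros_like(freqs),                # reference: flat
    "middle": 1.5 * np.sin(np.log10(freqs)),       # mild ripple
    "upper":  -3.0 + 2.0 * (freqs > 4000),         # lower level, HF shift
}

# Score every position against the reference: a single trace tells us
# nothing; the differences between locations tell us where to work.
reference = traces["front"]
for position, trace in traces.items():
    print(f"{position:>6}: {deviation_db(trace, reference):.1f} dB average deviation")
```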

Multiple Microphones

Figure: A modern-era optimization with eight mics running in a line from front to back at Ruth Eckerd Hall in Clearwater, FL. The matched mics allow us to see the differences in response from front to back and make adjustments as needed. In this case the upper mics were down in level relative to the middle and front until an upward angular adjustment was made to the arrays, followed by EQ adjustments and a minor level reduction for the nearest rows.

As previously stated, SIM 1 gave us multiple mics and access to all EQs without re-patching. We could now freely roam the system and greatly expand our knowledge and capabilities.

Recall my experience of chasing my tail up and down the room in London with a single mic. That would never have to happen again. I could keep mics at all levels of the theater and see what happened everywhere when I changed something. The cycle of trial and error shrank tremendously.

The engine was still the HP 3582, which meant we still had to view the frequency response in seven sections. We were no faster than before, but now we were measuring in multiple locations and had an organized data library.

One of the first users of SIM 1 was Soundcraft for the Yuming Matsutoya arena tour in 1987. It was so new that I was soldering parts onto the PC board the night before it shipped to Japan ahead of me.

This was an extremely important application. We now had the chance to measure all over the room with the audience in place during the show. With the room empty, we could store the response at each location; then we could compare those responses once the room filled up. For three years we had measured only at the mix position during shows, so we had no proof of what changes occurred at other locations.

Now we were ready to find out. At the Yuming concerts we were able to prove clearly that the response changes locally and uniquely in each part of the room, rather than globally (such as the whole room gaining a peak at 250 Hz). We were hoping for an easy answer, but now we knew what we were up against.
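
The empty-versus-occupied comparison described above can be sketched the same way. This is an illustrative Python example with made-up data, not the actual Yuming measurements: for each mic location we subtract the stored empty-room trace from the occupied-room trace, then check whether those deltas agree across locations. If the change were global, every location's delta would be nearly identical; a large spread between them means the audience changes the response locally.

```python
import numpy as np

def occupancy_delta(occupied_db, empty_db):
    """Change in magnitude response (dB) when the audience arrives."""
    return occupied_db - empty_db

freqs = np.logspace(np.log10(63), np.log10(8000), 100)  # Hz

# Hypothetical stored traces (dB) per location, empty vs. occupied.
empty = {loc: np.zeros_like(freqs) for loc in ("floor", "mezzanine", "balcony")}
occupied = {
    "floor":     -2.0 * np.exp(-((freqs - 250) / 100) ** 2),  # dip near 250 Hz
    "mezzanine": -1.0 - 0.5 * np.log10(freqs / 63),           # broadband tilt
    "balcony":   -3.0 * (freqs > 2000).astype(float),         # HF loss only
}

deltas = {loc: occupancy_delta(occupied[loc], empty[loc]) for loc in empty}

# Compare every location's delta to the floor's: near-zero spreads would
# indicate a global change; large spreads indicate local, unique changes.
ref = deltas["floor"]
for loc, d in deltas.items():
    spread = np.max(np.abs(d - ref))
    print(f"{loc:>9}: max difference from floor delta = {spread:.1f} dB")
```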