
Tech Topic: Modern Analyzer Development

The evolution of large-scale sound system optimization, part 4.


The analyzer is at the core of a measurement system. The manufacturers of laboratory test equipment design their products to be adaptable to a wide variety of scientific applications. Their working assumption is that individual users will add the peripheral devices required for their application.

The laboratory analyzer needed a lot of help to be ready for the down-and-dirty realities of touring with a sound reinforcement system. The process began slowly but eventually became standardized, since the basic structure of sound systems is the same throughout the industry.

A key challenge was to get the maximum practical use from the scientific power of the fast Fourier transform (FFT). It had to easily patch into the sound system and get answers quickly in a way that engineers could understand. It had to be small, portable, durable and quiet enough to sit at front of house. Most importantly, it could never stop the show.

There were quite a few customizations required to begin this transition in 1984. First, we needed an external microphone preamp, since the Hewlett-Packard 3582A spectrum analyzer had unbalanced line-level inputs (banana plugs!). We also needed an external delay line so we could synchronize the measured mix console output to the sound arriving at the mic.

Digital delays were barely extant at that time, and our initial work was done with an analog delay limited to 100 milliseconds (ms). One of the early jobs had a mix position at 105 ms, so we had to set up a fence around the mic a few feet ahead of the mixer.
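As a rough illustration of the arithmetic (a modern sketch, not the 1984 hardware), the reference delay simply has to match the loudspeaker-to-mic propagation time, assuming a speed of sound of roughly 343 meters per second:

```python
# A rough sketch of the arithmetic (not the original hardware): the reference
# delay must equal the loudspeaker-to-mic propagation time.
SPEED_OF_SOUND_M_S = 343.0  # approximate, at room temperature

def propagation_delay_ms(distance_m: float) -> float:
    """Time for sound to travel a given distance, in milliseconds."""
    return distance_m / SPEED_OF_SOUND_M_S * 1000.0

def distance_for_delay_m(delay_ms: float) -> float:
    """Distance covered by sound in a given number of milliseconds."""
    return delay_ms / 1000.0 * SPEED_OF_SOUND_M_S

print(distance_for_delay_m(105.0))   # ~36 m: the 105 ms mix position
print(propagation_delay_ms(34.0))    # ~99 ms: mic a couple of meters closer
```

At that speed, 100 ms covers about 34 meters, so pulling the mic a few feet in front of the 105 ms mix position brought it back within the analog delay's range.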

A Meyer Sound SIM 1 system circa 1987. From top to bottom: the Hewlett-Packard Integral Personal Computer, the HP 3582A dual-channel FFT analyzer, the delay controller with a customized Audio-Digital delay line, the 8-channel mic switcher, and two 8-channel line switchers (enough for 8 EQ inputs and 8 EQ outputs). All of the devices were under the remote control of the computer, which could switch between any mic or EQ in the system and manage the data library.

At the earliest gigs we used a “Y” cord with bare wires and gator clips to split the mix console output feed, then wired the other end to run unbalanced into the HP analyzer. (Kids, don’t try this at home!)

We wouldn’t be allowed to measure anybody’s system if we caused it to hum, so one of the first steps was to create transformer-isolated access points that tap the signal at the input and output of the system EQ. We could now see the signal flowing through the system without risking an interruption or signal degradation.

Memories

It’s easy to forget how recently computers joined our workplace. Computers were an unlikely sight at a gig in the early 1980s, when the real-time analyzer (RTA) was the primary measurement tool. If we did see one, it was certainly not being used for system measurement. Even the best measurement tools of that era had very little memory, which greatly limited what we could learn from them.

A handheld RTA like the Ivie IE30 could hold one or two stored traces. You could toggle between the live trace and a stored one to make a “before vs. after” or “here vs. there” kind of comparison. To store another trace, we had to overwrite the previous one. Once the device was powered down, the data was lost for good. This lack of memory made it very difficult to learn about the behavior of sound systems and how to optimize them. Essentially we had two responses to work with: the one in front of us right now and one from memory. We could compare two locations out of 10,000 seats in an arena. There was very little we could act on with such a minimal amount of data.

We made a huge step forward when we moved from the single-channel RTA (1/3-octave, amplitude only) to the dual-channel FFT analyzer (1/24th-octave amplitude, phase and coherence). But without computer memory, we had an extremely clear picture of only one location.
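To make the difference concrete, here is a minimal dual-channel transfer-function sketch in the spirit of that analyzer (a modern Python/SciPy illustration with invented signal names and a toy “system,” not SIM’s actual code). The reference (console output) and the measured mic signal yield magnitude, phase and coherence per frequency bin:

```python
import numpy as np
from scipy import signal

fs = 48_000           # sample rate in Hz (assumed)
nperseg = 4096        # FFT length per averaged segment

rng = np.random.default_rng(0)
reference = rng.standard_normal(10 * fs)              # stand-in for the console feed
# Toy "system under test": 1 ms of propagation delay, a gentle low-pass,
# and uncorrelated noise arriving at the mic.
delayed = np.roll(reference, 48)                      # 48 samples = 1 ms at 48 kHz
b, a = signal.butter(2, 8000, fs=fs)                  # simple loudspeaker/room stand-in
measured = signal.lfilter(b, a, delayed)
measured += 0.1 * rng.standard_normal(measured.size)  # room noise

f, Pxx = signal.welch(reference, fs=fs, nperseg=nperseg)
_, Pxy = signal.csd(reference, measured, fs=fs, nperseg=nperseg)
_, coherence = signal.coherence(reference, measured, fs=fs, nperseg=nperseg)

H = Pxy / Pxx                                         # H1 transfer-function estimate
magnitude_db = 20 * np.log10(np.abs(H))               # amplitude response
phase_deg = np.degrees(np.angle(H))                   # phase response
# Low coherence marks frequency bins where noise or reverberation makes
# the magnitude and phase data untrustworthy.
```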

The author and a SIM system at the 30th Grammy Awards at Radio City Music Hall in NYC. Multiple mics monitored the response in the hall during the broadcast.

Moving from low-resolution, one-dimensional analysis at the mix position to high-resolution, complex signal analysis at the mix position was the extent of our progress. We don’t need memory to turn the EQ knobs at the mix position. In fact, we don’t need a high-resolution analyzer to do this. We don’t need an analyzer at all because there is no objectively right or wrong EQ setting for the mix position. Whatever the mixer says is “right” is right. End of discussion.

But there are objective rights and wrongs for optimization, the process of creating uniform response over the space. Optimization requires us to measure at locations other than the mix position.

Memory is the mobilizing ingredient that moves optimization beyond the mix position. Multiple trace storage allows us to compare responses all over the room. We can make a change in aim, splay, level, EQ or delay and see how it affects the response in all of the relevant areas. Are the responses down on the floor and in the balcony more closely matched than before? If so, we’re making forward progress toward optimization. If not, we’ve eliminated one wrong option and can try another until we get it right.
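A toy version of that comparison logic (illustrative numbers and function names, nothing from SIM itself) is to store a magnitude trace per position and check whether the spread between positions shrank after a change:

```python
import numpy as np

def spread_db(traces):
    """Average band-by-band spread (max minus min, in dB) across positions."""
    stacked = np.vstack(list(traces.values()))
    return float(np.mean(stacked.max(axis=0) - stacked.min(axis=0)))

# Magnitude traces in dB, one value per frequency band, stored per position.
before = {"floor": np.array([0.0, -2.0, -6.0]), "balcony": np.array([0.0, 3.0, 2.0])}
after = {"floor": np.array([0.0, -1.0, -2.0]), "balcony": np.array([0.0, 1.0, 0.0])}

if spread_db(after) < spread_db(before):
    print("Forward progress: the positions match more closely.")
else:
    print("That change didn't help; eliminate it and try another.")
```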

We also need a good librarian. It won’t do us any good to store data if we can’t find it later. For this we need an orderly system of organization: a data library. We’re measuring multiple loudspeaker systems with multiple mics in multiple locations. Our data library has to allow comparison of the mains upstairs with the downfill below, each position with the hall empty versus full, and so on.
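In code terms, the librarian can be as simple as a lookup keyed on system, mic position and room condition (a sketch with assumed field names, not SIM’s actual data format):

```python
from dataclasses import dataclass
import numpy as np

@dataclass(frozen=True)
class TraceKey:
    system: str      # e.g. "mains", "downfill"
    position: str    # e.g. "mix", "balcony row A"
    condition: str   # e.g. "hall empty", "full house"

library: dict[TraceKey, np.ndarray] = {}

def store(key: TraceKey, trace: np.ndarray) -> None:
    library[key] = trace

def recall(key: TraceKey) -> np.ndarray:
    return library[key]

# Compare the mains upstairs with the downfill below, hall empty.
store(TraceKey("mains", "balcony row A", "hall empty"), np.zeros(24))
store(TraceKey("downfill", "under balcony", "hall empty"), np.zeros(24))
difference_db = (recall(TraceKey("mains", "balcony row A", "hall empty"))
                 - recall(TraceKey("downfill", "under balcony", "hall empty")))
```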
