January 31, 2013, by Ken DeLoria
This is the first installment in a series that will explain how various types of audio analyzers function, how they differ in capability and usage, and the way that the test and measurement industry has evolved over the past five decades.
Way, way back in time, graphical acoustic measurements were acquired by means of chart recorders. A chart recorder looks a lot like the lie detector seen in old-time movies and is analogous to a mechanical version of a CRT (cathode ray tube). Chart recorders were widely used before CRTs (and LEDs for that matter) had been invented.
But unlike a CRT that displays pixels on a screen in real time, a chart recorder uses an ink pen attached to, and put in motion by, a coil of wire that works against an opposing magnetic field. To acquire an acoustical measurement, the coil is driven by the output of a pre-amplified measurement microphone. The result is a visual response trace displayed on a roll of paper that continually unwinds as the pen forever inscribes its mark on the paper media.
When using a chart recorder, the stimulus to the DUT (device under test) can be any signal that the user wishes, but useful results only occur when the stimulus is a swept sine wave (for frequency response measurements) or a fixed sine wave (for distortion measurements).
The mechanical design of a chart recorder has much in common with a loudspeaker driver. But instead of producing sonic energy from the oscillation of an LF cone or an HF diaphragm, the result is a visual response plot inscribed by a moving pen.
Chart recorders were manufactured by numerous companies, but those most often used for audio measurement work were the product of Bruel & Kjaer (B&K), based in Denmark. Some notable loudspeaker designers still rely on them today.
As long as no time-domain information is of interest, and the measurement environment is anechoic (that is, free of reflections), a chart recorder will provide meaningful, high-resolution data…if the pen can accelerate and decelerate rapidly enough that it doesn’t skew the results.
A chart recorder (also called a pen-plotter) is a mechanical beast that behaves a lot like a loudspeaker; some work well, others do not. They possess no intelligence. Rather, they simply create a graph on paper that reflects the drive voltage, at any given instant, that’s applied to the mechanism which moves the pen.
As a group, chart recorders are categorically unsuitable for measuring sound systems in rooms with the intent of using the resultant chart to apply corrective EQ. Because of their slow speed, one would still be measuring and adjusting the EQ at a gig in Las Vegas while the tour bus has moved on to the next show in Los Angeles.
The next big development in measurement technology was quickly termed a “real time analyzer,” or “RTA.” The name came from the fact that it could display results in real time, without requiring a paper printout.
An RTA is akin to a collection of separate chart recorders, each filtered to respond only to one part of the audible sound spectrum. But instead of needing a roll of paper, ink pens, and a lot of time for transport and setup, the RTA seemed like an amazing panacea in the world of sound measurement.
Numerous RTA products were built, many units were sold, and to this day there exists an entire culture of sound techs that still use RTAs regularly, but perhaps do not understand precisely what the display – now sometimes even in smartphone form – is actually telling them. It’s also worth noting that without the invention of LEDs as a display medium, RTAs would scarcely exist.
The proper engineering term for an RTA is a “parallel filter analyzer.” A typical analog RTA is made up of 10 or 11 LED ladder displays (in the smaller one-octave-band models), and 27 to 31 LED ladders in the larger third-octave models.
The signal from a measurement mic is routed to a bank of simple analog filters, each one tuned to an ISO (International Organization for Standardization) band center. Hence the term “parallel filter analyzer.” Each filter is independent of the other filters, feeding only its own LED display driver chip, which in turn illuminates the LED ladder that’s associated with that filter (i.e., 63 Hz, 125 Hz, and so on). See Figure 1.
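To make the parallel-filter idea concrete, here’s a minimal software sketch of the same architecture: one bandpass filter per ISO band center, each feeding its own level readout. The sample rate, the octave-band list, and the RBJ-cookbook-style biquad are my own assumptions for illustration, not a description of any particular RTA’s circuitry.

```python
import math

FS = 48000  # assumed sample rate (Hz) for this sketch
# ISO one-octave band centers, as on a 10-band RTA
CENTERS = [31.5, 63, 125, 250, 500, 1000, 2000, 4000, 8000, 16000]

def bandpass(samples, fc, fs=FS, q=1.414):
    """RBJ-cookbook biquad bandpass tuned to one band center.
    Q of ~1.414 gives roughly one-octave bandwidth."""
    w = 2 * math.pi * fc / fs
    alpha = math.sin(w) / (2 * q)
    a0 = 1 + alpha
    b0, b2 = alpha / a0, -alpha / a0
    a1, a2 = -2 * math.cos(w) / a0, (1 - alpha) / a0
    x1 = x2 = y1 = y2 = 0.0
    out = []
    for x in samples:
        y = b0 * x + b2 * x2 - a1 * y1 - a2 * y2
        x2, x1, y2, y1 = x1, x, y1, y
        out.append(y)
    return out

def rta_levels(samples):
    """One 'LED ladder' reading per band: RMS level in dB."""
    levels = []
    for fc in CENTERS:
        y = bandpass(samples, fc)
        rms = math.sqrt(sum(v * v for v in y) / len(y))
        levels.append(20 * math.log10(rms + 1e-12))
    return levels

# Feed a 1 kHz test tone: only the 1 kHz ladder should light strongly
tone = [math.sin(2 * math.pi * 1000 * n / FS) for n in range(FS // 4)]
levels = rta_levels(tone)
loudest = CENTERS[levels.index(max(levels))]
print(loudest)  # the 1 kHz band reads highest
```

Each filter runs independently on the same input, just like the hardware version, and the set of per-band levels is the entire “display.”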
When working with an RTA, the stimulus used to excite the DUT (again, Device Under Test) is almost always white or pink noise. The DUT could be an entire sound system in a large venue, or merely a component part of a system.
In any case, because of the nature of random noise (of any color), a pronounced settling time is needed to keep the LED display from constantly flickering across the amplitude range set by the crest factor of the white or pink noise source. This makes the real time analyzer almost anything but real time.
Figure 1: How a typical RTA or “parallel filter analyzer” is constructed. A one-third octave RTA will normally have 27 to 31 filters and LED ladder displays. (click to enlarge)
When using an analog RTA, it takes quite a while for a stable response curve to appear, especially in the low frequency region, and this can lead to substantial errors. The measurement technician walks the room, comparing the response in one region to that of another, and tries to get the tuning work finished as quickly as possible. Just another day in paradise.
But while a few seconds spent waiting for the display to settle down may not seem like much, it drastically slows the process of capturing a response trace and later adjusting EQ to compensate for room-related acoustical problems.
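The settling problem is easy to demonstrate in a few lines of code. The sketch below takes repeated level readings of Gaussian noise, with arbitrary block lengths standing in for meter ballistics: short averaging windows flicker over a wide range, while long windows settle down but take correspondingly longer.

```python
import math
import random

random.seed(1)  # deterministic noise for the demo

def level_db(samples):
    """RMS level of one block, in dB."""
    rms = math.sqrt(sum(x * x for x in samples) / len(samples))
    return 20 * math.log10(rms)

def spread(block_len, n_blocks=100):
    """Spread (max minus min, in dB) of level readings taken
    from successive noise blocks of a given length."""
    readings = [level_db([random.gauss(0.0, 1.0) for _ in range(block_len)])
                for _ in range(n_blocks)]
    return max(readings) - min(readings)

short = spread(64)    # fast meter: readings flicker widely
long_ = spread(2048)  # slow meter: readings settle, but you wait
print(short, long_)
```

The longer window trades response speed for a stable reading, which is exactly the compromise an RTA’s damping makes.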
Margin Of Error
Additionally, the resolution of a third-octave RTA is quite coarse in the LF region, and that’s precisely where high-resolution measurement and correction is most beneficial. Low-frequency room resonant modes will rarely fall smack-dab on an ISO band center, and if they do, it’s purely a random event.
When they don’t – and that’s most of the time – a considerable margin of error exists because an RTA can only resolve the data to fixed third-octave intervals. It’s all too easy to fall into the trap of flattening the response as the RTA “sees” it, but in reality actually worsening the response due to the RTA’s limited LF resolution.
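A quick sketch shows why. Using the standard third-octave band edges at fc·2^(±1/6), a hypothetical room mode at 55 Hz (my example, not on any ISO center) lands inside the 50 Hz band, so a “corrective” cut in that band drags down everything from roughly 44.5 to 56.1 Hz along with the mode.

```python
# A third-octave band spans fc / 2^(1/6) up to fc * 2^(1/6).
THIRD_OCT = [25, 31.5, 40, 50, 63, 80, 100, 125, 160, 200]

def band_for(freq):
    """Return the ISO band center whose edges contain freq."""
    for fc in THIRD_OCT:
        if fc / 2 ** (1 / 6) <= freq < fc * 2 ** (1 / 6):
            return fc
    return None

mode = 55.0  # hypothetical room mode, between ISO centers
fc = band_for(mode)
lo, hi = fc / 2 ** (1 / 6), fc * 2 ** (1 / 6)
print(fc, round(lo, 1), round(hi, 1))  # 50 44.5 56.1
```

The RTA reports the mode as generic energy in the “50 Hz” band; a narrow parametric cut at 55 Hz would do the job without collateral damage, but the display offers no way to see that.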
Some RTAs have selectable time-weighting curves that will speed up their response time when only viewing higher frequencies.
While that can be a useful feature, all brands and models share the same inherent limitation: they must account for the crest factor of the applied noise by slowing down their speed of response whenever low frequency measurements are made.
The dynamic weighting of the filters in the LF range requires quite a few seconds before the LF damping can settle in, eventually producing a stable and readable response curve.
The basic design concept of early analog RTAs was intended to dovetail with the basic design concept of the first generation of graphic equalizers. A one-third-octave “graphic,” along with its less expensive sibling, the one-octave “graphic,” were the only types of room equalizer available back in the day.
Graphic equalizers have persisted for many years and are still commonly in use because they’re readily understood and provide an easy means of shaping the sonic signature of a sound system by ear (or by using an RTA). Once a response trace is acquired with an RTA, the technician then adjusts the various bands on the graphic equalizer until the display on the RTA reads flat, or whatever preference-curve the technician is seeking.
The basic theory seems to be sound (pardon the pun), but is actually laden with significant limitations. However, before explaining that, it should first be noted that the concept of precisely tuning a sound system by means of parametric equalization emerged much later, and in large part due to the introduction of higher resolution measurement systems.
An RTA is an “amplitude versus frequency” device confined to only the frequency domain; it has no way of measuring the time domain or “phase versus frequency.” It therefore depicts only a small portion, almost a shadow if you will, of the true system response.
I once replaced a large sound system in a very large and highly reverberant modern cathedral. The previous system designer had spent many weeks “walking the room” with a handheld RTA, logging the frequency response at almost every individual seat. Wherever he saw a deficiency in the response curve he would then install a loudspeaker, or sometimes only an HF horn and driver, in the ceiling above wherever the deficiency was logged.
He would then apply EQ until the RTA response would read perfectly flat. By the time the installation was complete there were about 100 loudspeakers in the ceiling, plus about 3,000 pew-back loudspeakers, plus three main clusters flown over the stage in an LCR configuration. A contractor’s dream come true! The cost was more than 1 million dollars.
Unfortunately, the system produced an unintelligible mess of sound because nothing in the time domain had been accounted for – only the measured response in the frequency domain. It was eventually replaced with 11 small, 2-way trapezoidal loudspeakers and two smaller fill loudspeakers. This approach provided the intelligibility that was utterly absent in the first system design, and remained in service for 15 years before being upgraded because of a desire for higher power.
So how did I measure the new, simplified system to adjust the all-important EQ?
For many decades, Hewlett-Packard was the world leader in designing and manufacturing test and measurement instrumentation for audio, vibration, and medical usage. (HP spun off its test and measurement division as Agilent Technologies in 1999.)
In the late 1970s I was lucky (and delighted) to have access to an HP 3580A, a heterodyne signal analyzer, as well as a B&K measurement microphone and precision mic preamp. See Figure 2. In those days, this array of equipment was worth at least $30,000 – a lot of money then and now.
The 3580A looks like a portable oscilloscope, but is actually a spectrum analyzer possessing a phenomenal 1 Hz resolution. To put this into perspective, that’s roughly two orders of magnitude finer than the resolution of a third-octave RTA.
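The arithmetic is simple to check: a third-octave band centered at fc is fc·(2^(1/6) − 2^(−1/6)), or about 0.23·fc, wide.

```python
# Width of a third-octave band at a few representative centers
for fc in (100, 1000, 10000):
    bw = fc * (2 ** (1 / 6) - 2 ** (-1 / 6))
    print(fc, round(bw, 1))  # at 1 kHz the band is ~232 Hz wide
```

So against the 3580A’s fixed 1 Hz bins, a third-octave analyzer is about 23 times coarser at 100 Hz and over 200 times coarser at 1 kHz.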
This complex device provides a swept sine wave as the stimulus signal while internally, a narrow high-Q tracking filter follows the frequency of the swept sine wave as it runs through its sweep. The high-Q tracking filter rejects room reflections and other side-band noise, thereby producing a response curve that closely resembles a near-field measurement.
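In software terms, a tracking filter behaves like lock-in detection: correlate the DUT output against sine and cosine at the current sweep frequency, and off-frequency energy averages away. Below is a minimal sketch of that idea, with an assumed sample rate and a made-up 700 Hz interferer standing in for room noise; it is an illustration of the principle, not the 3580A’s actual circuitry.

```python
import math

FS = 48000  # assumed sample rate
N = 480     # analysis window: exactly 10 cycles at the 1 kHz sweep point

def lockin_amplitude(signal, freq, fs=FS):
    """Estimate the amplitude of the component at `freq` by
    correlating against sine and cosine -- the software analogue
    of a narrow filter tracking the sweep frequency."""
    n = len(signal)
    s = sum(x * math.sin(2 * math.pi * freq * i / fs)
            for i, x in enumerate(signal))
    c = sum(x * math.cos(2 * math.pi * freq * i / fs)
            for i, x in enumerate(signal))
    return 2 * math.sqrt(s * s + c * c) / n

# DUT output at one sweep point: 0.8 at 1 kHz plus an unrelated
# 0.5-amplitude component at 700 Hz
sig = [0.8 * math.sin(2 * math.pi * 1000 * i / FS)
       + 0.5 * math.sin(2 * math.pi * 700 * i / FS) for i in range(N)]
amp = lockin_amplitude(sig, 1000)
print(round(amp, 6))  # ~0.8: the off-frequency energy is rejected
```

Repeating this at each frequency of the sweep traces out a response curve that ignores whatever else is happening in the room at other frequencies.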
The 3580A worked beautifully when it was used to precisely flatten an individual loudspeaker or cluster of loudspeakers – but only when used in close range. The 1 Hz resolution allowed, for the first time, a truly accurate view of the response of a loudspeaker design. Unfortunately HP didn’t intend the 3580A to be used for corrective room equalization. At that time, precision room EQ was not yet something that was discussed much.
Figure 2: Hewlett-Packard 3580A heterodyne type spectrum analyzer. (click to enlarge)
When measuring the response of a loudspeaker in a room, especially a highly reverberant one, the measurement mic must be placed at varying distances from the loudspeaker(s) so that a composite response of the loudspeakers and the room resonant modes will be acquired.
The 3580A could not measure accurately at meaningful distances from the loudspeakers because the tracking filter could not be delayed to “wait” for the sonic energy to arrive at the measurement mic.
The result was a skewed response trace that provided almost nothing of value about the room response in the critical low frequencies. I was taken aback to learn this (having previously only used the 3580A for nearfield loudspeaker development), and was deeply disappointed. But as they say, necessity is the mother of invention.
Precise & Meaningful
After the RTA came the introduction of FFT-based measurement equipment. FFT stands for Fast Fourier Transform and is a mathematical function, not a product name. However, the term has become a moniker that many in our industry associate with modern-day measurement processes.
And well they should, because an FFT will provide a “look” into how sonic energy behaves, in a precise and meaningful manner that would be impossible to obtain using an RTA or a heterodyne analyzer. Arguably, an FFT-based product will eclipse any other form of test and measurement technology available for in-situ acoustic response measurements.
Once again HP took the leadership role in what was a rapidly-emerging field, providing a groundbreaking dual-channel FFT called the 3582A…at a mere $15,000. (And worth every penny.) See Figure 3.
I acquired a 3582A and eventually bought nearly 100 more of them to distribute to the dealers of my former loudspeaker company. It was a game changer, and I owe much of what I know about acoustics to having used this excellent machine for many years.
Though the 3582A never became as well known as handheld RTAs (such as those manufactured by Ivie), it had a profound impact on professional audio. It not only operated nearly two orders of magnitude faster than the 3580A swept-sine analyzer, but far more importantly, it provided time-domain information. We could now view, and learn to understand, the value of phase and time-domain data instead of relating only to frequency response.
Around the same timeframe, B&K also manufactured a dual-channel FFT analyzer. Never short on research efforts in both pure and applied technology, the company published a pivotal white paper that led to, or at least mirrored, the ideas behind measurement products such as Meyer Sound’s first SIM system and Apogee Sound’s CORREQT measurement system.
In both cases, it was the power of the dual port FFT analyzer that made it possible to realize the potential of the underlying concept: using any signal, including the program content itself, to obtain meaningful measurement results.
Figure 3: The Hewlett-Packard 3582A dual-channel FFT analyzer. (click to enlarge)
The 3582A made it possible to properly equalize the system in the cathedral noted earlier. Due to the enormous interior volume and the construction materials, which were exclusively glass, concrete, and steel, the room exhibited a 7-second reverb time in the lower frequencies. Without the aid of FFT, I’d probably still be there today, trying to justify my system design.
FFT analyzers come in two primary types: single port and dual port. (There are also some offering four or more ports.) A single port analyzer is like a handheld RTA. It provides a visual display that reflects only the signal it’s receiving. Simple enough, right? Seems so, but the idea of acquiring a measurement with more than one input port is huge, really huge.
A dual port analyzer introduces a form of mathematical intelligence to the measurement that’s known as “transfer function.” Instead of receiving only the signal content arriving at the lonely single port, the dual port analyzer compares two signals, one being a reference. The reference signal is normally sent to port 1 and is the same signal sent to the DUT, while port 2 receives the response of the DUT.
While this might seem merely an exercise in semantics, the results are anything but, because now the analyzer can tell us about when signal 1 arrived in relationship to signal 2. It can plot how the two signals differ in phase and frequency response. Suddenly we see things that were never seen or understood before.
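A toy example makes the transfer-function idea tangible. Below, the “DUT” is simply a gain of 0.5 and a five-sample delay; dividing the spectrum at port 2 by the spectrum at port 1, bin by bin, recovers both numbers. The brute-force DFT and the circular delay are simplifications for illustration; a real analyzer uses windowed FFTs and averaging.

```python
import cmath
import random

random.seed(7)
N = 128

def dft(x):
    """Brute-force discrete Fourier transform (fine for a demo)."""
    n_pts = len(x)
    return [sum(v * cmath.exp(-2j * cmath.pi * k * n / n_pts)
                for n, v in enumerate(x)) for k in range(n_pts)]

def idft(X):
    """Inverse DFT."""
    n_pts = len(X)
    return [sum(v * cmath.exp(2j * cmath.pi * k * n / n_pts)
                for k, v in enumerate(X)) / n_pts for n in range(n_pts)]

# Port 1: the reference signal sent to the DUT (any signal works)
ref = [random.gauss(0.0, 1.0) for _ in range(N)]
# Port 2: a toy DUT -- gain of 0.5 and a 5-sample (circular) delay
GAIN, DELAY = 0.5, 5
out = [GAIN * ref[(n - DELAY) % N] for n in range(N)]

# Transfer function H = Y / X, bin by bin
X, Y = dft(ref), dft(out)
H = [y / x for x, y in zip(X, Y)]

# The impulse response (inverse DFT of H) reveals gain and arrival time
h = [abs(v) for v in idft(H)]
arrival = h.index(max(h))
print(arrival, round(h[arrival], 6))  # 5 0.5
```

Note that the stimulus here is plain noise: because port 2 is always compared against port 1, any signal, including program material, can serve as the test signal.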
So now that the vitally important phase and time-domain information is readily revealed, it becomes possible to use that information to make corrective response alterations to a sound system, and this time the alterations are based on solid data. The result? A new level of precision.
The use of dual channel FFTs initially re-shaped loudspeaker product design, but it soon followed that they would also play an instrumental role in developing the art and craft of field-tuning a sound system, an inherently more complex pursuit due to the involved nature of room acoustics. More about that in the next installment.
Ken DeLoria has had a diverse career in pro audio over more than 30 years, including being the founder and owner of Apogee Sound.