One of the most difficult aspects to comprehend in the field of sound is the relationship of phase response, frequency response, and how each relates to time. Understanding these relationships can help a lot in making optimal choices when you’re deploying and utilizing sound systems.
What the deuce is phase? (Sorry, I was channeling my inner Stewie Griffin.) The term phase response is actually short for “phase versus frequency,” in the way that we typically use the term in our industry. Likewise, when we speak of frequency response, we usually mean “amplitude versus frequency.”
In both cases, whenever the measured data is graphed, the “X” axis of the response curve – which represents frequency – will run horizontally across the graph.
The vertical “Y” axis represents intensity in an amplitude-versus-frequency response measurement (usually scaled in dB), and it likewise represents phase in a phase-versus-frequency response measurement (typically scaled in degrees).
Do time and phase representations differ? It’s a question I get asked quite a lot. The answer seems a bit elusive, but makes perfect sense once we dissect the characteristics. A phase “lead” or phase “lag” is a way of expressing a time differential in relation to a reference point – which usually is an ideal flat-line phase response.
For example, an equalizer that’s set to “flat” will present a flat phase and frequency response curve. But when a boost or cut is introduced, the frequency response will reflect the newly introduced response curve, as will the phase response (Figure 1).
The zero-crossing point, at which the phase response passes through 0 degrees, will always fall at the center of the peak or dip of a graphic or parametric EQ (PEQ) filter. But before and after the center frequency, there will be a phase lead (or lag), depending on whether the filter has been set to boost or cut.
Note that this assumes the use of traditional IIR (Infinite Impulse Response) filters rather than newer FIR (Finite Impulse Response) filter types.
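To make that IIR behavior concrete, here's a minimal sketch in Python, assuming a standard "cookbook" peaking-EQ biquad (the article doesn't name a particular filter design, so the topology, sample rate, and settings here are illustrative). For a boost, the phase leads below the center frequency, passes through exactly 0 degrees at the center, and lags above it:

```python
import cmath
import math

def peaking_biquad(f0, fs, gain_db, q):
    """Coefficients for a common 'cookbook' peaking-EQ biquad (an
    assumed design -- the article doesn't specify one)."""
    a = 10 ** (gain_db / 40)
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    b = [1 + alpha * a, -2 * math.cos(w0), 1 - alpha * a]
    den = [1 + alpha / a, -2 * math.cos(w0), 1 - alpha / a]
    return b, den

def response(f, b, den, fs):
    """Complex frequency response of the biquad at frequency f."""
    z1 = cmath.exp(-2j * math.pi * f / fs)
    num = b[0] + b[1] * z1 + b[2] * z1 * z1
    dem = den[0] + den[1] * z1 + den[2] * z1 * z1
    return num / dem

# Illustrative settings: +6 dB boost at 1 kHz, Q = 1, 48 kHz sample rate
b, den = peaking_biquad(1000, 48000, 6.0, 1.0)
for f in (250, 1000, 4000):
    h = response(f, b, den, 48000)
    print(f"{f:5d} Hz  {20 * math.log10(abs(h)):+6.2f} dB  "
          f"{math.degrees(cmath.phase(h)):+7.1f} deg")
```

The 1 kHz row lands at +6.00 dB with 0 degrees of phase shift, while the rows below and above the center show a phase lead and lag, respectively. Flip the gain to -6 dB and the lead/lag pattern inverts, mirroring the boost/cut behavior described above.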
Signal – Not Time – Delay
Unlike a digital delay, which delays all frequencies in the audible spectrum equally and is rightly referred to as a signal delay device, a phase lead or lag is a more subtle concept. It can be difficult to grasp, which is why we're going to explain it carefully.
But note that, strictly speaking, there is no such thing as "time delay." Time itself marches on regardless; only a signal can be delayed in relation to that universal clock that's always ticking.
But before we move forward, ponder this: When we speak of phase, as it relates to the usage of pro audio systems, we're almost always interested in phase "offset." We don't really care about the phase lag from a loudspeaker to a listener (correctly referred to as propagation delay), but we care intently about any offset that might occur when one sound source is referenced to one or more other sources.
If multiple sources are not precisely in phase with each other, some of the sonic energy will cancel and some will add – and this occurs purely as a function of the wavelength of any given frequency with respect to the physical offset of the radiating devices.
The resultant effect of loudspeakers that are offset from one another is the often-mentioned "comb filter" response pattern.
The term comes from the hundreds of alternating additions and cancellations that corrupt the frequency and phase response of the system and, when graphed, resemble the teeth of a comb.
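Those "teeth" are easy to sketch. Summing two equal sources, with one delayed, gives a response proportional to |1 + e^(-j2πfτ)|; the snippet below uses an illustrative 1 ms offset (my assumption, not a figure from the article) to show the alternating peaks and nulls:

```python
import cmath
import math

def comb_level_db(f_hz, delay_s):
    """Level of two equal sources summed, one delayed by delay_s,
    relative to a single source alone (0 dB)."""
    mag = abs(1 + cmath.exp(-2j * math.pi * f_hz * delay_s))
    return 20 * math.log10(max(mag, 1e-6))  # clamp the perfect nulls

# With a 1 ms offset (illustrative), nulls land at 500 Hz, 1.5 kHz, 2.5 kHz...
# and +6 dB addition peaks land at 1 kHz, 2 kHz, and so on.
for f in (250, 500, 1000, 1500, 2000):
    print(f"{f:5d} Hz  {comb_level_db(f, 0.001):+8.2f} dB")
```

Plot that over the full audible band on a log frequency axis and the comb shape appears: the nulls and peaks repeat at regular intervals set entirely by the delay between the two sources.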
Here's an example of how phase offset manifests in the real world: with sound traveling at roughly 1,130 feet per second at room temperature, a 600 Hz wavelength is about 1.88 feet long (22.6 inches), whereas a 6 kHz wavelength is just 0.188 feet in length (2.26 inches).

Therefore, if one loudspeaker is offset in relation to another loudspeaker by half of that 6 kHz wavelength – about 0.094 feet, or just over an inch – the two sources will arrive 180 degrees apart at 6 kHz and totally cancel each other – at least in theory.

In reality, however, total cancellation is actually unlikely, due to the imprecision of most loudspeakers. Nonetheless, a large percentage of cancellation – possibly as much as 90 percent – will inevitably occur.

But move upward, let's say to 6.5 kHz, or downward to 5.8 kHz, and the whole picture changes. That's because acoustical addition and acoustical cancellation will always be a function of the wavelength of the source material in relation to the physical and/or the electrical offset that's affecting the relevant signals.

Conversely, the same offset of 0.094 feet represents only 1/20th of the 600 Hz wavelength – a phase offset of just 18 degrees. By no means is this desirable, but it won't cause much more than a few percent cancellation of forward radiated energy. (It will also change the polar response of the system, but that's another topic for another time.)
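The arithmetic behind this kind of example can be sketched in a few lines, assuming a speed of sound of about 1,130 feet per second (a round room-temperature figure): the same physical offset translates to a very different phase offset, and a very different amount of cancellation, at different frequencies.

```python
import math

C = 1130.0  # ft/s, approximate speed of sound at room temperature

def wavelength_ft(f_hz):
    return C / f_hz

def phase_offset_deg(offset_ft, f_hz):
    # A physical path offset expressed in degrees at frequency f_hz.
    return 360.0 * offset_ft / wavelength_ft(f_hz)

def power_loss(offset_ft, f_hz):
    # Fraction of summed power lost versus two perfectly in-phase sources.
    phi = math.radians(phase_offset_deg(offset_ft, f_hz))
    return 1.0 - math.cos(phi / 2) ** 2

offset = wavelength_ft(6000) / 2  # half a 6 kHz wavelength, ~0.094 ft
print(phase_offset_deg(offset, 6000))  # 180 degrees: total cancellation
print(phase_offset_deg(offset, 600))   # 18 degrees: only slight cancellation
print(power_loss(offset, 600))         # a few percent of the summed power
```

Swap in any other physical offset and frequency pair to see why the same misalignment can be devastating in the high end yet nearly harmless an octave or three below.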
This is precisely why we speak of – and why we measure – the phase relationship of multiple sound sources, instead of thinking only about the pure time differential. A phase-versus-frequency response measurement will characterize the arrival time of a sonic wavefront at a given point in space, in relation to the wavelength of each relevant frequency.
That may sound difficult, so here's a simplified analogy: the five members of a firing squad shoot at the same target at exactly the same time with identical rifles, but each is standing a few feet behind the next. Therefore, the projectiles do not reach the target at precisely the same time.
While that might not matter much insofar as the ultimate intent of the firing squad, it means absolutely everything when a "sound squad" wants all of the sonic energy from each loudspeaker in the sound system to arrive at the listener's position in perfect phase with all the other loudspeakers.
Unlike rifle fire, sonic energy that doesn't arrive at exactly the same time from all sources will produce cancellations and additions, causing an imperfect and compromised frequency and phase response.