“A matrix section on a console can help manage a complex PA system,” Craig Leerman wrote recently. “I’ll route the main console left and right outputs to the matrix and use the various matrix outputs to feed the different zones and delays. With EQ and delay available on the matrix outputs, it’s easy to tune and time align a loudspeaker zone to the rest of the PA.
“Once all of the various zones have been dialed in, any further overall level adjustments are simply handled via the left and right masters on the console. Using the matrix for this can give both stereo and mono feeds of the program, as well as a reduced stereo image feed.”
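The delay settings Leerman mentions for time-aligning a zone follow directly from the distance sound must travel from the main loudspeakers. As a minimal sketch (the function name, the example distance, and the nominal 343 m/s speed of sound are my own illustrative assumptions, not from the article):

```python
# Sketch: estimating a matrix-output delay for a distributed zone.
# Assumes a nominal speed of sound of 343 m/s (roughly 20 °C air).

SPEED_OF_SOUND = 343.0  # m/s

def zone_delay_ms(distance_from_mains_m: float) -> float:
    """Electronic delay so a zone loudspeaker's output arrives in time
    with the acoustic arrival from the main loudspeakers."""
    return distance_from_mains_m / SPEED_OF_SOUND * 1000.0

# A delay zone 17.15 m behind the mains needs roughly 50 ms:
print(round(zone_delay_ms(17.15), 1))  # -> 50.0
```

In practice the operator would fine-tune this starting value by ear or by measurement, since loudspeaker and processing latency add to the purely acoustic figure.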
For recording or broadcast requirements with a limited channel count, a stereo or mono mix will usually fit the bill, but for the house, perhaps we can employ the matrix to go even further.
As a case in point, consider a talker at a lectern in a large meeting room. Conventional practice would dictate routing the talker’s microphone to two loudspeakers at the front of the room via the left and right masters, and then feeding the signal with appropriate delays to additional loudspeakers throughout the audience area. A mono mix with the lectern midway between the loudspeakers will allow people sitting on or near the center line of the room to localize the talker more or less correctly by creating a phantom center image, but for everyone else, the talker will be localized incorrectly toward the front-of-house loudspeaker nearest them.
In contrast to a left-right loudspeaker system, natural sound in space does not take two paths to each of our ears. Discounting early reflections, which are not perceived as discrete sound sources, direct sound naturally takes only a single path to each ear. A bird singing in a tree, a speaking voice, a car driving past – all of these sounds emanate from single sources. It is the localization of these single sources amid innumerable other individually localized sounds, each taking a single path to each of our two ears, that makes up the three-dimensional sound field in which we live. All the sounds we hear naturally, a complex series of pressure waves, are essentially “mixed” in the air acoustically with their individual localization cues intact.
Our binaural hearing mechanism employs inter-aural differences in the time-of-arrival and intensity of different sounds to localize them in three-dimensional space – left-right, front-back, up-down. This is something we’ve been doing automatically since birth, and it leaves no confusion about who is speaking or singing; the eyes easily follow the ears. By presenting us with direct sound from two points in space via two paths to each ear, however, conventional L-R sound reinforcement techniques subvert these differential inter-aural localization cues.
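The time-of-arrival cue described above can be put in rough numbers with the classic Woodworth spherical-head approximation, ITD = (r/c)(θ + sin θ). This is a textbook model, not something from the article, and the head radius used is a common nominal value:

```python
import math

# Sketch: interaural time difference (ITD) from the Woodworth
# spherical-head approximation.  Head radius of 0.0875 m and speed of
# sound of 343 m/s are typical textbook assumptions.

HEAD_RADIUS = 0.0875    # m
SPEED_OF_SOUND = 343.0  # m/s

def itd_us(azimuth_deg: float) -> float:
    """ITD in microseconds for a source at the given azimuth
    (0 degrees = straight ahead, 90 degrees = directly to one side)."""
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (theta + math.sin(theta)) * 1e6

print(round(itd_us(90)))  # -> 656, near the commonly cited ~660 µs maximum
```

Differences on the order of mere hundreds of microseconds are enough for the hearing mechanism to place a source left or right, which is why a two-loudspeaker system feeding both ears from two points so easily scrambles the cue.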
On this basis, we could take an alternative approach in our meeting room and feed the talker’s mic signal to a single nearby loudspeaker, perhaps one built into the front of the lectern, thus permitting pinpoint localization of the source. Overhead, a number of loudspeakers with fairly narrow horizontal dispersion, each hung in line with the direct sound and covering a fairly small portion of the audience, can then subtly reinforce that direct sound. The key is that each loudspeaker is individually delayed so that its output is indistinguishable from early reflections in the target seats.
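The per-loudspeaker delay described here can be sketched as follows. The geometry and the extra offset that keeps each fill arriving just after the direct sound are illustrative assumptions on my part:

```python
# Sketch: delaying an overhead fill loudspeaker so its output arrives
# just AFTER the talker's direct sound at the target seats, inside the
# precedence (Haas) window.  The 10 ms offset and the distances are
# illustrative, not prescriptive.

SPEED_OF_SOUND = 343.0  # m/s

def fill_delay_ms(talker_to_seat_m: float, speaker_to_seat_m: float,
                  haas_offset_ms: float = 10.0) -> float:
    """Electronic delay for the fill so its arrival trails the direct
    sound by haas_offset_ms, preserving localization to the talker."""
    direct_ms = talker_to_seat_m / SPEED_OF_SOUND * 1000.0
    speaker_ms = speaker_to_seat_m / SPEED_OF_SOUND * 1000.0
    return max(0.0, direct_ms - speaker_ms + haas_offset_ms)

# A seat 20 m from the lectern, 4 m below its overhead fill:
print(round(fill_delay_ms(20.0, 4.0), 1))  # -> 56.6
```

Each fill covers only a small seating area, so a single delay value per loudspeaker can land within the precedence window for all of its target seats.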
Thanks to the well-known Haas, or precedence, effect, such a system can achieve up to 8 dB of gain throughout the audience without the delay loudspeakers being perceived as discrete sound sources. A talker or singer with strong vocal projection may not even need an “anchor” loudspeaker at the front at all.
Beyond achieving intelligibility at a more natural level, this approach leaves the audience largely unaware that a sound system is in operation at all, an important step toward the elusive system design goal of transparency – people simply hear the talker clearly and intelligibly at a more or less normal level. Dubbed “source-oriented reinforcement,” the approach also keeps the sound system from acting as a barrier separating the performer from the audience, because it merely replicates what happens naturally and does not disembody the voice by stripping away its localization cues.
Traditional amplitude-based panning, which, as noted above, works only for those seated in the “sweet spot” along the center axis of the venue, is replaced in this approach by time-based localization, which has been shown to work for better than 90 percent of the audience, no matter where they are seated. Freed from the phasing and comb-filtering constraints imposed by a requirement for mono compatibility or potential down-mixing – constraints largely irrelevant to live sound reinforcement – operators can manipulate delays to achieve pinpoint localization of each performer for virtually every seat in the house.