Editor’s Note: This is the second part of an article that originally appeared on the original Live Sound International website in January of 1993, and it provides a lot of information that’s still relevant today. Part 1 is available here.
“Well of course it sounded good, it was in surround sound!”
It would be understandable if you asked what “surround sound” means.
To me, using the term is akin to observing that light is bright! I have yet to discover unidirectional sound.
The comment above was made by a musician leaving a show I had mixed using the type of delayed zonal coverage that, in the previous article, I began to describe as part of a method for controlling the SPL and improving the sound quality of live concerts.
His comment, I think, illustrates the problem: he was familiar only with hearing amplified music from the infamous giant stereo system, yet his reaction hints at the creative potential of dispersed sound systems.
Previously, I concluded by describing the procedure I use to average the variations in arrival time between a pulse simultaneously emitted from the main loudspeaker arrays and the various delay arrays being adjusted. I would like to note here that some of the adjustments I was speaking of often come down to variations of as little as 1 or 2 milliseconds!
I mention this because I’m often told that differences of 35-40 ms. are not noticeable to the human brain and that these minute adjustments are a waste of time; usually I am referred to the Haas Effect in support of this criticism. Indeed!
Ashamedly, I confess to being no student of Mr. Haas’ observations, but I would be surprised if so eminent a gentleman were to make such an arbitrary suggestion. In my experience (and this article is based wholly on that), below 50 milliseconds or so, the shorter the delay, the more easily the two sounds (original and delayed) are assimilated.
The key word is assimilate. It represents a difference between the voluntary and the involuntary. If we can assimilate then we can separate and it is the ability to perceive fine variations in sound that is the very essence of good sound engineering. The ability to listen critically should distinguish the sound engineer from the general audience.
The human brain has the ability to ignore, within certain limits, the time delay aberrations which are inevitable in any multiple zone PA system. This allows the listener to focus upon the subject being listened to, without the distraction of time delayed information (whether that information is indeed the natural reverberation we are attempting to assuage or an unavoidable side effect of our solution).
It is the engineer who, by ensuring that these limits are not exceeded, keeps the audience focused on its subject, and it is only by honing the skill of critical listening that the ability to raise these limits is achieved. Put more simply, the engineer becomes trained to identify incremental differences in delay time which the untrained ear fails to notice, and so can prevent those differences from ever becoming large enough to distract the audience. However, I digress.
At some point below 20 ms. the two sounds become indistinguishable in the time domain but that doesn’t mean that further changes of the delay time are inaudible, far from it. It is important to remember that we are going to be listening to music, not a series of pulses, through our delayed system, and that each of the component sounds of that music will, in most cases, be of longer duration than the delay time we are setting. This means that, in spite of the delay time, we will hear sounds of like or nearly like frequency content emanating from different sources at the same time.
The interaction of these sounds results in the reinforcement and/or cancellation of differing frequency components as the delay time is adjusted up and down. You can illustrate this effect by simply running pink noise through any two channel sound system and listening to the variations in frequency response as you move about the room.
Such variations make it possible to continue to fine tune the delay time when you can no longer detect differences in the time itself. It is for this reason that I used the snare drum beat to set my delays; it has a very wide frequency content, which makes it easy to notice any changes of tone due to incorrect time settings. Turning the delay zones on and off is the obvious A/B test to use.
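The spacing of these reinforcements and cancellations follows directly from the timing offset between the two sources. As a purely illustrative sketch (the function and figures below are mine, not part of the setup procedure, which as described is done entirely by ear), here is the arithmetic in Python:

```python
# Illustrative sketch: where comb-filter nulls (cancellations) and
# peaks (reinforcements) fall when two equal-level sources arrive
# with a small relative delay. Function name and figures are
# hypothetical, for illustration only.

def comb_frequencies(delay_ms, count=5):
    """First `count` null and peak frequencies, in Hz, for two
    equal-level sources offset by `delay_ms` milliseconds."""
    dt = delay_ms / 1000.0                       # offset in seconds
    nulls = [(2 * k + 1) / (2 * dt) for k in range(count)]
    peaks = [(k + 1) / dt for k in range(count)]
    return nulls, peaks

nulls, peaks = comb_frequencies(1.0)             # a 1 ms offset
print([round(f) for f in nulls])                 # 500, 1500, 2500 Hz ...
print([round(f) for f in peaks])                 # 1000, 2000, 3000 Hz ...
```

Even a 1 ms offset places its first cancellation at 500 Hz, squarely in the musical range, which is why those last one or two milliseconds of adjustment are audible as changes of tone rather than of time.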
A Trained Ear
There is no paradox between my seemingly cavalier dismissal of any scientific approach to this procedure and the painstaking methodology I am describing. My intention is to emphasize the importance of critical listening and of developing that ability, notwithstanding the physical laws underlying the phenomena we are studying.
To give an example: if the purpose of a sound system is to transmit information, musical or otherwise, from a performer to an audience, then it should do so without drawing attention to itself. It follows then that any sound system, especially one with multiple zones of coverage, should provide the listener with the illusion of hearing the sound from the visual source (i.e. the stage) rather than from the loudspeaker itself. In a delay zone this illusion follows, theoretically, from the use of the correct time delay.
Practically, however, the delay loudspeakers have to be positioned other than in a direct line between listener and stage; obviously one can’t see through a loudspeaker. Ordinarily delay loudspeakers are flown above the audience at a height dictated by the sightlines of a particular venue. To compensate for this we again adjust our delay time and, in doing so, can move the audio image around to preserve the illusion of a single source, rather than having sound come from above us.
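For a starting value, before any of this fine tuning by ear, the delay time is simply the extra distance the main array’s sound must travel, divided by the speed of sound. A minimal sketch, assuming the usual figure of 343 meters per second for air at about 20 degrees C (the function name and the image-steering offset parameter are my own illustration):

```python
# Illustrative sketch: a first-guess delay time from geometry alone.
# Assumes sound travels at 343 m/s (air at roughly 20 degrees C);
# the final setting is always refined by ear.

SPEED_OF_SOUND_M_S = 343.0

def starting_delay_ms(extra_distance_m, image_offset_ms=0.0):
    """Delay in ms for an array `extra_distance_m` metres farther
    from the listener than the main array, plus any deliberate
    offset used to steer the apparent image back toward the stage."""
    return extra_distance_m / SPEED_OF_SOUND_M_S * 1000.0 + image_offset_ms

print(round(starting_delay_ms(30.0), 1))    # about 87.5 ms for 30 m
```

That works out to roughly 2.9 ms per meter, or close to 1 ms per foot, a handy rule of thumb on a walk through the venue.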
Tracking The Image
Unfortunately this “movement” doesn’t happen in a sonically whole, if I may use the phrase, fashion. Rather these “movements” in apparent position are frequency dependent.
Effectively this means that vocals, for instance, will “move” at a different rate to, say, a hi-hat. Nine times out of 10 “anchoring” the vocal firmly at the stage will give the most believable result.
It is critical listening that reveals this phenomenon, and it is critical listening that preserves the illusion; the illusion which, in fact, becomes reality. Surely it is ‘unreal’ to watch someone singing while hearing a disembodied voice from somewhere above your head or from the main loudspeaker system.
Also, an overly loud delay zone will draw attention to itself regardless of any delay time adjustments.
A couple of notes before leaving this question of the delay time setup:
1. In situations where an in-house system is being used, it may not be possible, due to the physical placement of its loudspeakers, to effectively use time delay in the conventional way to achieve a single apparent source. However, this doesn’t prevent us from otherwise using these systems in helping to reduce excessive near field SPL and in enhancing the overall sound.
2. In using delayed systems it’s good practice to delay each zone to the source rather than to other delay zones; this helps to avoid any cumulative errors. Even so, the more delay zones that are used, the more aberrations will occur and the more complex the averaging process becomes.
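The point of note 2 can be made concrete with a little arithmetic. A minimal sketch (the 1 ms per-measurement error is a hypothetical figure of my own, chosen only to illustrate the principle):

```python
# Illustrative sketch: cumulative timing error when delay zones are
# chained one to the next, versus referencing every zone directly
# to the source. The per-measurement error figure is hypothetical.

def worst_case_error_ms(zone_number, per_measurement_error_ms=1.0,
                        chained=True):
    """Worst-case timing error, in ms, at the given zone. Chained
    zones inherit every upstream measurement's error; zones timed
    directly to the source carry only their own."""
    if chained:
        return zone_number * per_measurement_error_ms
    return per_measurement_error_ms

print(worst_case_error_ms(4, chained=True))     # 4.0 ms by zone 4
print(worst_case_error_ms(4, chained=False))    # 1.0 ms at any zone
```

With adjustments that matter down to 1 or 2 ms, letting four chained zones pile up a possible 4 ms of error defeats the whole exercise.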
We can alleviate these problems somewhat in the actual mixing process, by using the matrix, and we’ll examine that a little later on.
So, having dealt in some detail with the setting up of the delay time, we need to turn our attention to the question of equalization. In the first part of this article we used the terms “boomy” and “muddy” to describe the character of an overly reverberant environment, i.e., the average concert venue. In order not to exacerbate this problem, even as we attempt a cure, it is important to look at the frequency content of the loudspeaker arrays in the delay zones.
We have seen that the acoustic signature of the building attenuates, in one way or another, the higher frequencies, but it should be noted that the remaining sound, whether direct or reflected, contains the majority of the fundamental tonal components of the music we are hearing.
Therefore, if we look to the delayed arrays primarily to restore the higher frequencies (that is, if we high-pass filter the signal feeding them), we have also restored the harmonics and sibilants that give identity to voices, guitars, drums, or whatever other instruments we are dealing with in our delay zone, and have provided the listener in that zone with the complete spectrum of sound they expect to hear.
The usefulness of such high-pass filtering, perhaps, does not assume its full importance until we examine its effect outside the delay zone. If we consider the directional properties of sound wave transmission it becomes obvious that an unfiltered delay array will transmit a considerable amount of low frequency energy back into the direct field of the main loudspeaker array or of any preceding delayed array. In so doing, of course, it is now contributing, in a significant way, to the very problem it was meant to deal with.
I don’t believe it would put too fine a point on it to suggest that to use a delayed loudspeaker system without the appropriate high-pass filtering is to not know what one is doing. By one, incidentally, I am not referring to the loudspeaker system! An understanding of this idea also illustrates how distributed systems can reduce bulk: if the delayed arrays are not asked to reproduce low frequencies, they don’t need large, heavy enclosures.
Selecting a value for the high-pass filter is best done, as are most other things in live sound reinforcement, by experimentation. I would, however, recommend starting, especially if you are new to this, with a value much higher than might seem appropriate (remember that a filter with a typical slope of 12 dB per octave set at 2.5 kHz doesn’t render a signal effectively “quiet” until that signal reaches as low as 63 Hz).
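That parenthetical figure checks out with simple arithmetic: 63 Hz sits a little over five octaves below 2.5 kHz, and at 12 dB per octave that amounts to more than 60 dB of attenuation. A minimal sketch, assuming an idealized straight-line slope (a real filter’s knee behaves more gently near the corner frequency):

```python
import math

# Illustrative sketch: attenuation of an idealized high-pass filter
# below its corner frequency, assuming a perfectly straight slope.

def hpf_attenuation_db(freq_hz, corner_hz=2500.0, slope_db_per_oct=12.0):
    """Approximate attenuation in dB at `freq_hz` for an ideal
    high-pass filter; zero at or above the corner frequency."""
    if freq_hz >= corner_hz:
        return 0.0
    octaves_below = math.log2(corner_hz / freq_hz)
    return octaves_below * slope_db_per_oct

print(round(hpf_attenuation_db(63.0)))      # about 64 dB down at 63 Hz
```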
A 2- or 3-way adjustable crossover with steep (24 dB per octave) slopes is often a good tool for the job. Having selected an initial high-pass frequency of around 5 kHz, there are two or three equalization-oriented (as opposed to time-oriented) listening tests you should perform.
First, listen in the delay zone to determine whether the high-pass setting selected is restoring all, or enough, of the high frequency content to the program material (you should be listening to material similar to that which you will be mixing), and whether or not the delay arrays are adding any undesirable elements to the overall sound.
Second, listen in the direct field of the main loudspeaker array to determine the effect the delay arrays are having outside of the delay zone or zones. Third, listen to the delay arrays, both in and outside of their respective zones, with the main arrays turned off, to more easily judge the degree of adjustment that might prove necessary as a result of the first two tests.
Observe the effect of any adjustments throughout the listening area. Other than the correct high-pass filtering, any other equalization of the delay zone arrays should be used (cautiously) to ensure a similar frequency response, albeit band-limited, between the delayed and main loudspeaker arrays. Bear in mind that when you are mixing, you need to know that any changes you make to the board EQ have the desired effect throughout your zones of coverage, not just in the direct field your mixer is in.
It is easier to ensure this similarity of response if you are carrying delay loudspeakers from the same company that provided the main PA; specify components with like characteristics for the main and the delayed loudspeaker arrays. When using locally supplied or in-house systems, of course, it is more difficult. For these situations, always carry a few extra channels of EQ, DDL and compression; three is a reasonable amount.
Most in-house (not arena) systems include a flown center cluster and a balcony system so a third feed is useful for contingencies such as near fill or lawn coverage. Don’t forget that 30 seconds before the show starts the tour manager will arrive with Channel 6 News and they’ll need a feed too.
So at last we’ve flown our system, we’ve set our delay times and equalized the whole shebang; now let’s use it. Don’t limit yourself. This is where you allow the full force of your creative brilliance to shine. Assume your rightful place in the hierarchy of show-biz artisans. Show them what a competent, professional sound reinforcement engineer can do. And do it without killing anybody.
“Hey man, it’s good but it don’t sound like the record.” Oh, well spotted! Good grief, it reminds me of the sign in the grocery store window, “Smart Lad Wanted”. No mate, it doesn’t sound like the record; this is what we call “live” and you can’t put it on a record. If I’m getting carried away here, it’s because live music excites me, and I make no excuses for that fact or for demanding the respect and autonomy within the industry that our profession deserves.
The Magic of Matrix Control
However, back to business. I said don’t limit yourself, and amongst other things, I meant use the matrix. If you don’t have one, change your mixing console. After the wheels on the road case it came in, I can’t think of a more useful facility for a mixing console to possess. It has both important practical and creative possibilities.
Earlier in this article we discussed the inevitable aberrations in arrival time that occur between the different loudspeaker arrays, and what we can do to minimize them. The matrix is a useful ally in the minimization as, combined with a proper sub-grouping of instruments, it enables us to re-mix our program to suit the limitations of our delay zones. By using the matrix we can cause the program material to vary between one delay zone and the next and, in doing so we reduce the apparent interaction between them.
For instance, in a situation where only a limited number of delay clusters can be used, it may be that a noticeably distracting doubling will occur with any sound that has a sharp attack.
The drum subgroup (again, for instance) can be pulled back on the matrix to prevent this distraction. The degree to which this balancing can be done varies with ability and taste and also with any constraints which may be imposed by the artist involved, but let’s remember we are degrading nothing here, we are improving whatever can be improved.
From a creative standpoint, many sounds, vocals, keyboards, horns, etc., can be wonderfully enhanced by just these time-based anomalies. The majority of the effects achieved with modern DSP devices are produced through time based manipulation of the signal being processed.
The Whole Package
With experience it becomes possible to achieve our dual goals of reducing near field SPL and increasing fidelity throughout the venue, while at the same time realizing a degree of creative control which is not available through any electronic device, because it exists in time and also in space.
Let me explain. I mentioned earlier that the placement of in-house loudspeakers may prevent their being used in a conventional way as delayed loudspeakers in a zone coverage system, but consider this scenario: You walk into this day’s venue, which is a “you can only fly 500 pounds (per side, single point), we always stack in here” theater. However, the house has an old flown center cluster, some cabinets installed high on the proscenium wall, to the left and right of the opening, and an underbalcony system.
Rather than blowing off the house system as being too old, in the wrong place or unable to handle the power you need, fly the 500 pounds per side and stack the sub bass where you can, perhaps in vertical columns against the left and right edges of the proscenium opening. Use front fill cabinets to cover the front rows and think about how to use the matrix to divide your program around the various loudspeaker arrays.
For instance, anchor the rhythm section to the stage by using your own system, flown just high enough to get it above the audience, for bass and drums etc. Now spread the rest of your instrumentation around; keyboards could be in your left and right system but also in the elevated house left.
A Model For Success
Use the house left for the main keyboard level, with enough keyboards present in your own (in effect, near field) system to ensure that your mix is complete and realistic at the front.
Guitar or horns or whatever could be treated in a similar way but using the house right loudspeakers. Vocals would be primarily in the center cluster but also prominent in the other mixes. The underbalcony would be treated as any other delay zone to restore whatever parts of the program material the building had contrived to remove.
This is only an example, and how you utilize the equipment available will depend on its position and its characteristics of dispersion and power handling, but I think you will see that what has been achieved in dispersing your material through the various sources available can add a spaciousness, a whole new dimension to the sound, that simply would not happen if you were to go with the conventional “stack it, whack it” alternative.
If you then imagine a simple repeating delay effect being distributed around these same sources you will also see that the possibilities for creative expression need be limited only by your own or your artist’s imagination.
Reason and Responsibility
The purpose of this article has been to suggest ways in which we can address the problem of live concerts being too loud. In doing so I have tried to show that far from compromising the spirit of a live performance we can greatly enhance it.
The experience of listening to live music should be a visceral one without any conscious awareness of volume or lack of fidelity. If we can achieve this we have gotten to the very root of the desire to perform and experience art. If we can appreciate this fact we can begin to see the size of the responsibility we have to the audience as well as to the artist.
It is only by accepting and responding to that responsibility that our profession, as well as we as audio engineers, will grow.
John Godenzi is a long-time front of house engineer. See Part 1 of this article here.