Mono versus stereo is a longstanding debate among live audio engineers. As church sound techs, we’re responsible for how the audience hears the music, and two of our primary goals are intelligibility and consistency. We want people to understand the words spoken and sung from the platform so they can agree in faith. We’re also aiming to create an atmosphere of beauty; beyond just eliminating distractions, we’re setting the stage for normal people to turn their attention to Jesus on the Throne.
Just so we’re on the same page, what do we mean by mono and stereo? Mono sends the same signal to every loudspeaker, while stereo separates some signals to send them differently to the left and right loudspeakers. Even with a stereo system, mix elements that are panned to the center are sent at equal level to both loudspeakers, essentially making those inputs mono. So all the problems associated with mono systems also apply to stereo loudspeaker systems when everything is panned to the center.
With this in mind, why would we recommend a mono system? Most rooms where we gather are difficult to cover with a single loudspeaker array. For this reason, we’ll put a loudspeaker cabinet on both left and right sides of the room so each one can run a little quieter, and we can have less variance between the quiet and loud parts of the room.
In mono, every loudspeaker has the same signal feeding it, so every person listening, whether they’re on the left or right side of the room, is getting basically the same signal. Of course, most loudspeakers aren’t perfect for the space they’re in, so there are still variations in tonal balance between seating areas to one degree or another.
The case for mono is basically an argument against hard-panning mono sources in a stereo environment. You wouldn’t want a guitar panned all the way to the left so no one on the right side of the room can hear it. It would go against our values of consistency and intelligibility for the listener.
Before we talk about stereo, let’s dive into how our ears perceive direction and distance. When a sound source is on the left side of our head, we hear it louder in our left ear than in our right ear.
Taking it a step further, a sound source coming from the left side of our head is slightly farther away from our right ear. That extra distance creates a timing difference between our ears, which our brain uses as a clue to perceive direction. If we want to nerd out even more, higher frequencies can’t bend around our head the way lower frequencies can, so the balance of high and low frequencies is an additional clue our brains use to tell us what direction a sound is coming from.
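As a rough illustration, here’s a minimal Python sketch of that timing difference. The ear spacing (0.18 meters) is an assumed average, and the straight-line path is a simplification; real heads diffract sound, so treat this as an approximation rather than a precise model:

```python
import math

SPEED_OF_SOUND = 343.0  # meters per second, at roughly room temperature
EAR_SPACING = 0.18      # meters; assumed average distance between ears

def interaural_time_difference(angle_deg):
    """Approximate arrival-time difference (in milliseconds) between
    the two ears for a source at angle_deg off center (0 = straight
    ahead, 90 = directly to one side). Straight-line model that
    ignores diffraction around the head."""
    extra_path = EAR_SPACING * math.sin(math.radians(angle_deg))
    return extra_path / SPEED_OF_SOUND * 1000.0  # milliseconds

# A source directly to one side arrives about half a millisecond
# sooner at the near ear; a centered source arrives simultaneously.
print(round(interaural_time_difference(90), 3))
print(round(interaural_time_difference(0), 3))
```

Even that half-millisecond difference is enough for the brain to localize a sound, which is why arrival timing matters so much in the discussion below.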
In a studio or broadcast audio context, stereo is very helpful for shaping mixes. To make space for the vocals that are in the center, the pan knob can be used to move mix elements to the left and right that might otherwise cover up the vocals.
Instead of just turning the other instruments down to avoid masking the vocals, we can put them in a different place horizontally. Since most listeners in these contexts hear both the left and right loudspeakers about equally, we can pan things all the way to the sides without them losing much, if any, sonic information. If we tried the same hard-panning techniques in live sound, roughly one third of the audience would be missing critical elements in the mix: those in the center could hear both loudspeakers, while those on the far left and right would miss the sounds coming from the opposite side.
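Under the hood, a pan knob is just a pair of gains. As a hedged sketch (not any particular console’s pan law), here’s a constant-power pan in Python, which keeps perceived loudness roughly even as a source moves across the image:

```python
import math

def constant_power_pan(pan):
    """Return (left_gain, right_gain) for a pan position in [-1, 1],
    where -1 is hard left, 0 is center, +1 is hard right.
    Constant-power law: left^2 + right^2 = 1 at every position."""
    angle = (pan + 1) * math.pi / 4  # map [-1, 1] onto [0, pi/2]
    return math.cos(angle), math.sin(angle)

left, right = constant_power_pan(0.0)
# A centered input gets the same gain on both sides (about -3 dB each),
# which is exactly why a center-panned input behaves like a mono feed.
print(round(left, 3), round(right, 3))
```

Hard-panning (pan = -1 or +1) drives one of those gains to zero, which is precisely why listeners on the opposite side of the room lose that element entirely.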
So what benefit is there for stereo in live sound? At this point we’ve only been talking about mono inputs and using the pan knob to change the level balance between the two loudspeakers. However, true stereo inputs have variations between the left and right signals that incorporate the psychoacoustic cues that our brain uses to determine direction, giving us a left-to-right picture of that sound source.
For instance, if there’s a grand piano on stage and you put a stereo pair of microphones inside it, certain notes are going to be louder and have a different tone in one microphone compared to the other. Played back through a stereo PA, we perceive that as having a larger, wider sound than a single mic panned to the center. This also gives us many of the benefits of stereo that de-clutter the center of our mix, making more room for the vocals. Since intelligibility is one of our core values, that’s a win!
But what about the value of consistency? Well-designed stereo inputs are typically consistent enough from left to right that they’re recognizable on the opposite side. If we listen only to the left side of a stereo keyboard output, and then only to the right side, we would still recognize it as a piano, and none of the notes would be missing.
A Matter Of Timing
Now that we’ve established that stereo can work well in live systems, let’s identify another enemy of consistency: “comb filtering” that comes from variation in the arrival times of identical signals. If you’re not a nerd, that probably sounds like gibberish, so I’ll explain.
Sound waves are made up of variations in air pressure that occur at various frequencies and travel through space at a speed of about 1,130 feet per second (roughly 345 meters per second). When two waves of the same frequency combine at the right timing, they increase in amplitude; this is called “constructive interference.” But if their timing is off by a certain amount relative to the wavelength, they combine to reduce their energy; this is called “destructive interference.”
That’s what happens with a single frequency, but when multiple frequencies arrive with a difference in timing, some frequencies increase in intensity while others cancel each other out. This is called comb filtering, since the frequency response graph of the combined signals looks like a comb, with very narrow cuts occurring in a predictable pattern based on the delay time.
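To see where those narrow cuts land, here’s a small Python sketch (a hypothetical helper, not from any audio library) that lists the cancellation frequencies for a given delay. For two otherwise identical signals, the nulls fall at odd multiples of half the inverse of the delay:

```python
def comb_null_frequencies(delay_ms, max_hz=20000):
    """List the frequencies (Hz) at which two otherwise identical
    signals, offset in time by delay_ms milliseconds, cancel.
    Cancellations occur at f = (2k + 1) / (2 * delay) for k = 0, 1, 2..."""
    delay_s = delay_ms / 1000.0
    nulls = []
    k = 0
    while True:
        f = (2 * k + 1) / (2 * delay_s)
        if f > max_hz:
            break
        nulls.append(f)
        k += 1
    return nulls

# A path-length difference of about 1.13 feet is a 1 ms delay:
# the nulls land at 500 Hz, 1500 Hz, 2500 Hz, and so on,
# evenly spaced like the teeth of a comb.
print(comb_null_frequencies(1.0)[:3])
```

Notice that a shorter delay pushes the first null higher and spreads the teeth farther apart, which is why small timing offsets between loudspeakers chew up the midrange and highs.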
With this information, we might assume that the best solution would be to have a mono system that’s very directional so there’s no overlap between loudspeaker coverage areas. This is great in theory, but in reality, a loudspeaker’s ability to steer the direction of sound waves is limited by its size.
Lower frequencies bend around the cabinet, which is why, if you’ve ever been on stage without stage monitors, the sound coming from the loudspeakers is muddy and muffled: only the lower frequencies are making it back to the stage. What this means is that the loudspeakers are going to overlap in the lower frequencies unless we have a very tall line array. And we’ll still have to deal with comb filtering on inputs that are panned to the center.
While all this is nice to know in theory, what can we actually do to overcome the problem? If we incorporate more stereo sources into our mixes, we’ll hear fewer comb-filtering effects on those inputs.
It’s not enough to have two channels coming from an input; it’s important that these stereo sources are de-correlated. What does that mean, and how do we test it? Simply put, correlation describes how much of the information in the two sides is identical: the more correlated they are, the more they will cancel when we flip the polarity of one side and sum them.
To test it, sum the left and right channels to mono (either by panning both to the center or both to one side) and flip the polarity of one of them. If the output level drops, or the low frequencies get quieter, the source isn’t very de-correlated. If the level stays about the same, it’s more de-correlated, and panning the two sides to the outside will make our mix sound “bigger” with fewer comb-filtering artifacts.
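The polarity-flip test above can be sketched numerically. This Python example uses random noise purely for illustration (real program material behaves less cleanly), comparing a fully correlated pair against two independent, de-correlated channels:

```python
import math
import random

def rms(signal):
    """Root-mean-square level of a list of samples."""
    return math.sqrt(sum(s * s for s in signal) / len(signal))

def polarity_flip_test(left, right):
    """Sum the two sides with the right channel's polarity flipped,
    then compare against the normal sum. A ratio near 0 means the
    sides are highly correlated (effectively mono); a ratio near 1
    means they are well de-correlated."""
    normal = rms([l + r for l, r in zip(left, right)])
    flipped = rms([l - r for l, r in zip(left, right)])
    return flipped / normal

random.seed(1)
mono = [random.uniform(-1, 1) for _ in range(10000)]
noise_left = [random.uniform(-1, 1) for _ in range(10000)]
noise_right = [random.uniform(-1, 1) for _ in range(10000)]

# Identical sides cancel completely when one polarity is flipped: 0.0
print(round(polarity_flip_test(mono, mono), 2))
# Independent sides barely cancel at all: a ratio close to 1
print(round(polarity_flip_test(noise_left, noise_right), 2))
```

The same idea applies at the console: the closer that ratio is to zero, the less the “stereo” source is actually buying us over a mono input.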
Will many people notice if we take these steps? Probably not. But is it worth the effort to make the worship mix sound better for everyone? Absolutely.
For more on this topic, be sure to check out this episode of the Church Sound Podcast featuring the author and co-host Samantha Potter.