With HDTV now in most living rooms, and with cinema exhibitors moving up to 4K digital projection, high-resolution visuals have become the order of the day. Even consumer music playback, long the domain of the lowly MP3, is beginning to dip into “high res” with downloads of 24-bit/192 kHz lossless FLAC files. How does the concept of resolution apply to live sound? Why is it so much more complex to predict and assess a listening experience? What can we do about it? John Meyer, co-founder and CEO of Meyer Sound, shares some food for thought.
Bruce Borgerson: What is high resolution?
John Meyer: To discuss resolution, it’s much easier to start by looking at the video world. In projectors, we need a certain amount of light, measured in lumens, to hit the screen and cover it evenly. In audio, the equivalent is a system’s SPL and its even sonic coverage of the room.
But with projection, we also understand video resolution, which could be anything from VGA through 4K and beyond. There are also other ways of defining the image resolution, like bit depth and contrast ratio. Taking all of the specs together gives you a pretty good idea how well an image will be displayed for the viewer.
BB: What is the equivalent of video resolution in audio?
JM: Tools for audio system design and analysis have come a long way. We can now measure SPL and uniformity of coverage at different frequencies. However, unlike video, there is no commonly accepted methodology to holistically and neutrally report the quality of audio signals as they are reproduced for the listener, because in audio, you have to take into account the acoustic space, the audio source, and the behavior of the equipment.
Adding to the complexity is that resolution is generally harder to define for sound than for visuals. If you thought you had a 4K projector and you couldn’t read the credits at the end of the movie, you’d know you had a problem.
BB: Are there any negative consequences of not having a common method to look at sound as a cohesive whole?
JM: The problem is that all three elements, audio source, acoustic space, and equipment behavior, greatly impact the experience for the listener, and sometimes this gets lost when the only specs people look at are SPL and coverage.
Sound systems that have plenty of SPL and are implemented to spec could still end up being completely unintelligible. That’s not necessarily the fault of the sound system designer or the speaker manufacturer. When you have loudspeakers that lack clarity or a reverberant room that lacks absorption, you cannot achieve the desired resolution regardless of how you tweak the sound system.
Resolution is much more clearly defined in video than audio. (Credit: Wikipedia)
And because there is no standard, there is no accountability for when people are disappointed by the result.
When the Montreux Jazz Festival was opening a new venue many years ago, we told them that they really needed to fly some acoustical material to bring the reverberation down from three seconds to around one second. The other option being circulated was to use directional loudspeakers without acoustical treatment. We disagreed with that option, as there were more issues involved than simply aiming the PA.
We went back and forth on it, and finally I said, “Look, what about the band on the stage? They are not going to be directional, and they will excite the room so much that it won’t matter what speakers you have because it will be so loud in that space.” Well, they said they hadn’t thought of that.
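The scale of the treatment Meyer describes can be sketched with the classic Sabine reverberation-time formula, RT60 = 0.161 · V / A (V in cubic meters, A in square-meter sabins of total absorption). The hall volume below is a purely hypothetical figure for illustration, not a measurement of the Montreux venue:

```python
# Sabine's estimate: RT60 = 0.161 * V / A, where V is room volume (m^3)
# and A is total absorption (m^2 sabins). Hypothetical numbers only.

def rt60_sabine(volume_m3: float, absorption_sabins: float) -> float:
    """Estimate reverberation time in seconds from volume and absorption."""
    return 0.161 * volume_m3 / absorption_sabins

def absorption_for_target(volume_m3: float, target_rt60_s: float) -> float:
    """Total absorption (m^2 sabins) needed to reach a target RT60."""
    return 0.161 * volume_m3 / target_rt60_s

volume = 10_000.0  # hypothetical hall volume in cubic meters
a_now = absorption_for_target(volume, 3.0)   # absorption implied by RT60 = 3 s
a_goal = absorption_for_target(volume, 1.0)  # absorption needed for RT60 = 1 s
print(f"Absorption must rise from {a_now:.0f} to {a_goal:.0f} sabins")
```

Whatever the actual volume, cutting RT60 from three seconds to one second requires roughly tripling the room’s total absorption, which is why flying acoustical material, rather than only aiming the PA, was the decisive factor.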
BB: So resolution in this case is not just the resolution of the input signal source?
JM: High-resolution audio is a holistic way of looking at all the elements that impact the listener experience, but it still starts with the input signal: bit depth for dynamic range, and sampling frequency for bandwidth. Most new consoles now output 24-bit/96 kHz signals. We have the transmission capability and the storage capability to handle high resolution. But we need to look at sound reinforcement as a whole, from signal to loudspeakers to room acoustics.
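The link Meyer draws between format specs and resolution can be made concrete with two standard rules of thumb: the theoretical quantization dynamic range of N-bit PCM is about 6.02·N + 1.76 dB, and the usable bandwidth of a sampled signal is half the sample rate (the Nyquist limit). A minimal sketch:

```python
# Rule-of-thumb mapping from digital format specs to audio "resolution":
# dynamic range ≈ 6.02 * bits + 1.76 dB for ideal N-bit PCM quantization,
# and usable bandwidth = sample_rate / 2 (the Nyquist frequency).

def dynamic_range_db(bits: int) -> float:
    """Theoretical quantization dynamic range of N-bit PCM, in dB."""
    return 6.02 * bits + 1.76

def nyquist_bandwidth_hz(sample_rate_hz: float) -> float:
    """Maximum representable audio bandwidth for a given sample rate."""
    return sample_rate_hz / 2.0

for bits, fs in [(16, 44_100), (24, 96_000), (24, 192_000)]:
    print(f"{bits}-bit/{fs / 1000:g} kHz -> "
          f"{dynamic_range_db(bits):.1f} dB dynamic range, "
          f"{nyquist_bandwidth_hz(fs) / 1000:g} kHz bandwidth")
```

By these figures, the 24-bit/96 kHz console output Meyer mentions offers roughly 146 dB of theoretical dynamic range and 48 kHz of bandwidth, far beyond what a reverberant, untreated room will let an audience actually resolve.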