Subjective Versus Objective: If It Sounds Good, Is It?

We make decisions based on whatever we have at our disposal, including theory, equipment, testing and measurement, intuition, and finally, critical listening – the key is attaining the right balance.

As with the ever-ongoing debates about “tubes versus transistors,” “analog versus digital” and “Mac versus PC,” there’s not likely to be agreement any time soon about “objective versus subjective” when it comes to sound quality.

Extremists in the “Objectivist” camp argue that “if it can’t be measured, it doesn’t exist” while on the other hand, the “Subjectivist” side firmly backs the idea that “human beings can hear things that can’t be measured.”

How often has it been suggested, “use your ears as the final determinant” in making a decision about sound? At the same time, most would agree that a fundamental understanding of audio systems, including the basics of how each component works, how to set gain structure, and so on, logically can lead to “better” sound quality.

Does science (objective) or art (subjective) play the more important role?

ABX Or Death

Since its development as a scientific testing method, ABX has gained ground as a clear way to determine the threshold of perceptibility in a group of test subjects.

The basics of ABX: two different sources are compared – source “A” and source “B” – and the subject must decide whether choice “X” is A or B. If the subject can reliably (i.e., in a statistically significant manner) identify the sources, then it is concluded that there is a perceptible difference between them. Otherwise, the differences are deemed insignificant.
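To make “statistically significant” concrete, here’s a minimal Python sketch of the arithmetic behind an ABX score. The 12-out-of-16 trial count and the 0.05 threshold are illustrative assumptions, not fixed parts of the method; under the null hypothesis that the listener is only guessing, every answer is a fair coin flip.

# Minimal sketch of the statistics behind an ABX score (illustrative only).
# Under the null hypothesis that the listener can't tell A from B, every
# answer is a coin flip, so we ask: how likely is a score this good by chance?
from math import comb

def abx_p_value(correct: int, trials: int) -> float:
    """One-sided probability of getting at least `correct` right by guessing."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

print(abx_p_value(12, 16))  # ~0.038 -- below a typical 0.05 threshold, so the
                            # difference would be judged perceptible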

There are some good things to be learned from ABX, and it has confounded many a “golden ear” in tests involving things like 44.1 kHz versus 96 kHz sampling rates, 16-bit versus 24-bit quantization, and others. It turns out that it’s not common for subjects to be able to reliably tell these sources apart.

However, I contend that there’s a vast difference between a short-term test like ABX and the longer-term relationship a subject builds with a product or system. Humans have demonstrated a truly amazing ability to learn just about anything.

Take a person who’s never spoken anything but the English language, and stick him/her in Japan for a couple of years. This person will most likely learn to speak Japanese, engaging a new part of the brain.

Or take a person who’s only tasted wine costing less than $10 a bottle. A few months after being introduced to $150 bottles of wine (let alone $3,500 bottles!) and learning about the different varietals, harvest timing, and other specifics, he/she will balk at the cheap stuff.

Even more importantly, this fledgling student of wine will have picked up the ability to discern much finer differences between all types of wines.

In both cases, what changed these people? Exposure, mostly. We all have what some call “paradigms,” meaning that we each filter outside stimuli through our own various levels of experiences and beliefs.

Fixed Level Of Bandwidth

I call these changes through exposure “successive thresholds of awareness,” and contend that they’re possible because human perception is scalable in terms of resolution. With computers and test equipment, there is a fixed level of bandwidth and resolution available.

Not so with people – the longer someone spends being exposed to an experience, the more resolution that person is able to bring to it. An analogy closer to home for us audio geeks: the person who has only used a cheap dynamic microphone for years will likely find that even the lowest-grade condenser mic sounds amazing. He will hear tons more resolution, less distortion, and better transient response.

This same person will also wonder why a Neumann mic costs so much more, and whether it could possibly sound that much better. And in fact, upon hearing the Neumann in comparison to the cheap condenser, he will conclude that, indeed, there is not really that much difference between the two.

Now take that same person five years later, after he’s made several records and used a plethora of top mics of various makes. Now he should clearly be able to identify the differences between the cheap imitation and the real thing, having reached a much higher threshold of awareness between the different mics.

Only One Problem

A few years ago, I read an interesting article about how Dunkin’ Donuts intended to update its marketing plan to target Starbucks customers, based on a very simple idea: offer the same quality of coffee, but more quickly and at a lower price. There was only one problem. These weren’t the reasons that Starbucks customers were buying coffee from Starbucks. They didn’t want it cheaper or more quickly.

What they did want was the Starbucks experience—the club chairs, the subdued lighting, the fancy woodwork, the ridiculously overpriced accessory products, and whatever else they’re seeking. For this, they’re willing to wait (part of the experience) and pay more (another part of the experience).

Although it could be argued that they would appreciate the coffee being less expensive, it’s been proven over and over that there is usually a “right price” associated with a brand experience, and if the price is either too high or too low, the brand will lose credibility.

So what does all of this mean in terms of audio and the Subjectivists versus Objectivists? For one thing, different people perceive things differently, period. What’s important to some is not important to others, and vice versa.

For some, a slightly lower noise floor in a mic is not worth either the extra cost or the resulting lack of perceived resolution, while for others, it might be just the ticket for their application. Thus there can be no consensus on whether or not a lower noise floor is always “better.”

One thing I firmly believe is that both approaches are important for the improvement of audio (or anything else that is part of someone’s experience).

The Accidental Designer

Sure, there are stories where accidental discoveries led to improvements in design. For instance, there’s the story of the German broadcast engineer in the late 1930s who inadvertently left a high-frequency oscillator “on” while recording an orchestra.

The result? For the first time, there was playback fidelity beyond 10 kHz. This accidental discovery led to the implementation of AC bias for analog tape recorders, and it also pushed the envelope of what was possible with this type of system.

However, despite the muddled beginnings of AC bias, a scientific approach was required to produce repeatable, reliable and predictable results. The required circuitry had to be thoroughly understood by analog design engineers, and the right frequency and right amplitude had to be identified.

Then the right combination of these factors for each different tape formulation had to be developed in order to realize the full potential of the bias signal. It took until the 1950s before this was well understood, resulting in improvement of both subjective and objective experiences for the listeners of tape recordings.

One real problem with measuring various changes in audio quality – and then attributing them to specific causes and predicting how they will be perceived – is that we often don’t know exactly what to measure in the first place. Of course, we know the basics, such as amplitude response versus frequency, phase response, distortion in its various forms, and the like.

But it’s exceedingly difficult to get detailed measurements with real source material in place of standard test signals. Developments such as Rational Acoustics Smaart are steps in this direction, and they’ve greatly benefited sound reinforcement.
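As a rough illustration of the dual-channel idea behind such tools – compare what went into the system with what came out, using ordinary program material rather than a dedicated test signal – here’s a Python sketch using a textbook H1 transfer-function estimator. The signal names, segment length, and the synthetic delay-plus-attenuation “system” are assumptions for the example, not a description of how any particular product works internally.

import numpy as np
from scipy.signal import csd, welch

def transfer_function(reference, measured, fs, nperseg=4096):
    """Estimate H(f) = output/input from two time-aligned signals (H1 estimator)."""
    f, Sxx = welch(reference, fs=fs, nperseg=nperseg)          # input auto-spectrum
    _, Sxy = csd(reference, measured, fs=fs, nperseg=nperseg)  # cross-spectrum
    H = Sxy / Sxx
    return f, 20 * np.log10(np.abs(H)), np.unwrap(np.angle(H))

# Synthetic "system": 1 ms of delay and 6 dB of attenuation applied to noise.
fs = 48_000
x = np.random.randn(fs * 5)          # five seconds of noise standing in for program
y = 0.5 * np.roll(x, 48)             # -6 dB, and 48 samples = 1 ms at 48 kHz
f, mag_db, phase = transfer_function(x, y, fs)
print(mag_db[100], phase[100])       # magnitude near -6 dB; phase slopes with frequency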

At the same time, there is no solid standard for measuring transient response and its perceived effects. Several manufacturers claim that by extending a system’s frequency response well past the “audible” limit (say, to 50 kHz) and maintaining phase accuracy through that range, transient response and distortion will be improved in the audible band.

But even so, is this necessarily the way to predict that the system will sound good? Perhaps it could be argued that all other things being equal between two systems, the one with the lower distortion will “sound better.”

But then again, an interesting experiment done long ago by Bell Labs resulted in the conclusion that when bandwidth was limited, the system with more distortion was perceived as sounding “better.”

Perhaps this is one way to explain why low-power, all-tube, all-Class-A amplifiers are often perceived to sound more “musical” than huge, solid-state, “mega-kilowatt,” machined-aluminum monsters that are competing for the same piles of money.

Or maybe it’s other, psychological factors, such as the idea that tube amplifiers replaced the hearth in the home as a centerpiece around which to congregate…

Or perhaps it’s a result of something that is more easily quantified.

Class-A amplifiers distort differently from other designs. Not only that, but because they run “wide open,” in some cases there’s more power available for short-term, small-scale dynamic changes such as transient information.

It can be easily shown that although two systems may have the same signal-to-noise ratio and the same distortion figures on an analyzer, they may sound radically different. The spectra of the noise, and the character of the distortion, play huge roles in perceived sound quality.
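As a small, non-listening illustration of that point, the Python sketch below builds two signals that an analyzer would report as having the same 1% THD, yet one puts its distortion energy into low-order harmonics and the other into high-order harmonics. The harmonic orders and amplitudes are arbitrary choices for the example.

import numpy as np

fs, f0 = 48_000, 1_000
t = np.arange(fs) / fs                 # exactly one second, so FFT bins land on 1 Hz
fund = np.sin(2 * np.pi * f0 * t)

def distort(levels):
    """Fundamental plus harmonics, given as {order: relative amplitude}."""
    return fund + sum(a * np.sin(2 * np.pi * f0 * n * t) for n, a in levels.items())

def thd(signal, max_order=10):
    """THD read off the spectrum, the way an analyzer would report it."""
    spectrum = np.abs(np.fft.rfft(signal))
    harmonics = np.sqrt(sum(spectrum[f0 * n] ** 2 for n in range(2, max_order + 1)))
    return harmonics / spectrum[f0]

soft = distort({2: 0.008, 3: 0.006})    # "warm" low-order distortion
harsh = distort({7: 0.008, 9: 0.006})   # "gritty" high-order distortion
print(f"{100 * thd(soft):.2f}%  {100 * thd(harsh):.2f}%")  # 1.00%  1.00% -- same
# figure on the meter, very different spectrum (and, likely, very different sound)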

So again, the challenging question about quantifying performance in audio systems is what to measure in the first place, and how to measure it.

Getting Along

The bottom line is that both camps have something very important to offer. Without a scientific approach, we’d be stabbing in the dark trying to find solutions to problems about which we know very little.

But without a reliance on the subjective experience, even our most clever inventions would perhaps never reach the level of “art.”  What good can come of setting fire to a silk-screened portrait of Andy Warhol in the middle of the woods if there’s no one present to snicker?

Designers and sound system users make decisions every day based on whatever they have at their disposal, including theory, available equipment, testing and measurement, intuition, and finally, critical listening. If there is not a balance among these resources, the results are likely to be unbalanced.

How would you like some power amps with “DC to light” response but producing lousy sound? Care for some loudspeakers that sound amazing but look like a “Dogs Playing Poker” on black velvet? How about mics that can pick up a gnat burping but make a Stradivarius sound like a banjo bowed with rosined fishing line?

Let’s leave it to the great Duke Ellington: “If it sounds good, it is good.”
