In The Studio: Audio Perception And ABX Testing

Application Example (a real one)

Different software companies create and sell competing codecs for compressed audio formats like MP3 and AAC.

There are a lot of reasons to prefer one over another, including user interface, cost, and brand association.

To keep myself honest, I’ll typically download a demo if a new codec comes out, and ABX it against my current preference.

I’ll bounce the same audio source twice – once with each codec product set to identical digital audio precisions. Absolutely nothing else about the two bounces can be different, or the test is pointless.
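As a sanity check before any listening happens, it’s worth confirming that the two bounces really do match in every container parameter. Here’s a minimal Python sketch of that check, assuming both bounces have been decoded back to WAV files for the tester app (the file names are placeholders):

    import wave

    def same_format(path_a, path_b):
        """Confirm both bounces match in channel count, bit depth,
        sample rate, and length before ABX testing them."""
        with wave.open(path_a, "rb") as a, wave.open(path_b, "rb") as b:
            # getparams() -> (nchannels, sampwidth, framerate, nframes, ...)
            return a.getparams()[:4] == b.getparams()[:4]

    print(same_format("bounce_codec1.wav", "bounce_codec2.wav"))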

If I’m really being honest, I get someone else to load up the examples into the tester app so I don’t know which is which.
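The randomization a tester app (or a helpful colleague) performs is simple. A rough sketch of the idea in Python, with hypothetical file names:

    import random

    def abx_trial(file_1, file_2):
        """One blind trial: randomly map the two files to the labels
        A and B, then secretly choose one of them to play back as X."""
        files = [file_1, file_2]
        random.shuffle(files)                  # hide which codec is which
        assignment = {"A": files[0], "B": files[1]}
        x = random.choice(["A", "B"])          # the hidden identity of X
        return assignment, x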

After one round of ABX testing (repeatedly identifying X as either A or B based on what I’m hearing), I observe my success rate at correctly identifying X. I’ll usually repeat the test 3 to 5 times, maybe using different monitors (e.g. limited-bandwidth speakers versus fancy studio monitors).
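A success rate only means something relative to chance, since a coin flip gets 50% of ABX trials right. One way to gauge a round is the binomial probability of guessing at least that well by luck; here’s a minimal sketch (the 14-of-16 figures are just an illustration, not results from my testing):

    from math import comb

    def guess_probability(correct, trials):
        """Probability of getting at least `correct` answers right
        out of `trials` ABX trials by pure 50/50 guessing."""
        return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

    print(f"{guess_probability(14, 16):.4f}")  # ~0.0021: very unlikely to be luck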

If the results suggest any ability to hear the difference (especially if my preference isn’t my trusted codec), I’ll usually repeat all of the above with a wide variety of playback samples from different musical genres.

This process isn’t objective or blind enough to qualify as a truly scientific test, but it goes a long way toward eliminating self-deception and marketing fog.

Test Cautions

The most important step in the ABX testing process is defining a test that has a single variable. If there is more than one thing changing between A and B, you’re not really going to learn anything useful.

For example, a question like, “does mic pre A sound different from mic pre B?” is complicated.

First, you have to consider the wide range of variables between two successive performances. Even after eliminating those with a mic splitter (or a playback example), you would still need to consider the gain staging of the two mic pres, and devise a standard for establishing “equal gain” between the two signal chains (e.g. acoustic test noise metered at a reference level at the mic pre outputs).
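As a sketch of what an “equal gain” standard could look like in practice, here’s one way to measure the level offset between two captures of the same test noise. The RMS criterion and file names are my assumptions (not a universal standard), and the code assumes mono 16-bit WAV captures:

    import math, struct, wave

    def rms_dbfs(path):
        """RMS level of a mono 16-bit WAV capture, in dBFS."""
        with wave.open(path, "rb") as w:
            raw = w.readframes(w.getnframes())
        samples = struct.unpack(f"<{len(raw) // 2}h", raw)
        rms = math.sqrt(sum(s * s for s in samples) / len(samples))
        return 20 * math.log10(rms / 32768)

    # Trim one preamp's gain until the offset reads ~0 dB.
    offset = rms_dbfs("mic_pre_A.wav") - rms_dbfs("mic_pre_B.wav")
    print(f"Level offset: {offset:+.2f} dB")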

A question like, “can I hear the difference between a 96 kHz digital recording and one sampled at 44.1 kHz?” would minimally require you to:

—Have an acoustic or analog test signal source (a digitally derived source would already be fixed at one sample rate, which would invalidate the comparison)

—With an acoustic source, you would need two identical converters feeding two different DAW setups, with nothing but sample rate differing between them

—Bounce both examples at the same digital audio precision in order to conduct the ABX test, re-defining the question as, “can I hear the difference between a 96 kHz digital recording and one sampled at 44.1 kHz once they’re both bounced at 44.1 kHz?”
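A minimal sketch of that final bounce step, assuming both captures are available as float arrays and using SciPy for the rate conversion (96 kHz to 44.1 kHz is an exact 147/320 ratio, so a polyphase resampler handles it cleanly):

    from math import gcd

    import numpy as np
    from scipy.signal import resample_poly

    def bounce_to_44k1(signal, source_rate):
        """Resample a 1-D float signal to 44.1 kHz so both versions
        can be ABX tested at the same precision."""
        if source_rate == 44100:
            return signal
        g = gcd(44100, source_rate)            # 96000 -> up 147 / down 320
        return resample_poly(signal, 44100 // g, source_rate // g)

    x96 = np.random.randn(96000)               # stand-in for the 96 kHz capture
    x44 = bounce_to_44k1(x96, 96000)           # ~44,100 samples, one second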

Obviously the simpler the question, the simpler the test. This should begin to highlight some of the most positive results of doing ABX testing on your own.

Once you get used to thinking through the variables that affect our perception, marketing claims will begin to inspire the question, “how would you test that?” The answer will either inspire some new exploration of your own, or instantly expose the silliness that often lies just under the surface of pro audio marketing.

The Challenge

ABX testing is just one way of attempting to determine unbiased answers to questions of audio perception. Other methods like null testing might be better for particular scenarios – as long as the test is well-conceived.

There are some popular examples of tremendously silly null testing on YouTube, but you’ll be smart enough to consider a single variable at a time.
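For reference, the core of a well-conceived null test is trivial: subtract two sample-aligned, level-matched signals and look at what’s left. A sketch assuming that alignment and matching have already been done (which is exactly where the silly versions usually fail):

    import numpy as np

    def null_residual_db(a, b):
        """Subtract two sample-aligned signals and report the residual
        level relative to the reference signal, in dB. A very deep null
        (strongly negative) means the two are effectively identical."""
        n = min(len(a), len(b))
        residual = a[:n] - b[:n]
        ref_rms = np.sqrt(np.mean(a[:n] ** 2))
        res_rms = np.sqrt(np.mean(residual ** 2))
        return float("-inf") if res_rms == 0 else 20 * np.log10(res_rms / ref_rms)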

Rob Schlette is chief mastering engineer and owner of Anthem Mastering (anthemmastering.com) in St. Louis, MO, which provides trusted specialized mastering services to music clients across North America.
