The topic of audio perception has been pretty hot lately.
From the popular news media coverage of Mastered for iTunes to the pages of TapeOp magazine, it’s not uncommon for people to be asking the question, “can you really hear the difference?”
This is very good news for music and music lovers.
That might not seem like an extraordinary question for people to be asking, but the elastic reach of hardware and software marketing nonsense has devalued sensory feedback.
We are routinely exposed to the most outrageous qualitative claims that have never been proven (or even suggested) with a marginally systematic listening test.
In the interest of encouraging this recent flash of sensory curiosity, let’s take a look at how anyone with a basic DAW setup might be able to go about conducting a listening test of their own.
Ground Rules
1) If a claim or question includes phrases like, “sounds better” or “can hear the difference,” the most direct way to prove it (or put it to rest) is a listening test.
2) “Is better” is an irrelevant claim about audio and music if it can’t be heard.
3) A listening test is useless if the listener can visually verify what he or she is listening to (e.g. selections labeled MP3 and CD, or any visible waveforms). Our eyes will betray our ears.
4) A listening test is useless if the listener is allowed to switch wildly back and forth between two or more examples (unless you’re testing a tool for switching wildly back and forth between two or more examples).
5) The results of a test aren’t results if they can’t be repeated.
Premises like these are frequently debated in online communities, but we have to draw some boundaries. This article is not concerned with proving some existential benefit of technology A or B, only with hearing the difference between two things.
ABX Testing
An ABX listening test takes two audio samples (A and B) and provides a method for determining whether a listener can tell them apart. During the test, the listener is played a series of examples (X) and asked to answer whether each X is sample A or sample B. Most test cycles will run between 5 and 10 X's. The listener's score is typically quantified as the percentage of correctly identified X's.
Presumably if you identify X correctly close to 100 percent of the time, you can hear a difference.
If your scores keep landing in the 50 percent range, or vary widely across multiple tests, the suggestion is that you're not hearing a reliable difference between the two samples.
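That 50 percent baseline isn't arbitrary: a listener who hears no difference is effectively flipping a coin on each X, so the odds of any given score follow a binomial distribution. A minimal sketch in Python (the function name is my own, not from any ABX app) shows how unlikely a high score is by luck alone:

```python
from math import comb

def p_by_chance(correct: int, trials: int) -> float:
    """Probability of getting at least `correct` right out of `trials`
    by pure guessing (a fair coin flip on every X)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# Over 10 trials, scoring 9 or better by luck happens only ~1% of the time,
# while 5-or-better is exactly what guessing tends to produce.
print(round(p_by_chance(9, 10), 4))   # -> 0.0107
print(round(p_by_chance(5, 10), 4))   # -> 0.623
```

This is also why a single short round proves little: run enough 5-trial cycles and you'll eventually guess your way to a perfect one, which is what ground rule 5 is guarding against.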
ABX Tester
Several software ABX apps are available. I use Takashi Jogataki's (free) ABXTester all the time and highly recommend it for Mac users. QSC made a fairly famous hardware ABX Comparator until 2004.
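If you'd rather roll your own, the core of an ABX test is just blind randomization and honest scoring; playback can be handled by your DAW or media player. A minimal Python sketch (function names and structure are my own, not taken from any of the apps above):

```python
import random

def make_trials(n=10, seed=None):
    """Randomly assign each of n X playbacks to sample 'A' or 'B'.
    The assignment must stay hidden from the listener until scoring."""
    rng = random.Random(seed)
    return [rng.choice("AB") for _ in range(n)]

def score(answers, truth):
    """Fraction of X's the listener identified correctly."""
    correct = sum(a == t for a, t in zip(answers, truth))
    return correct / len(truth)

# A second person (or the script) queues each X per this hidden list,
# collects the listener's 'A'/'B' answers, then reveals the score.
truth = make_trials(10)
```

The crucial design point is the one from the ground rules: the randomized list is generated and kept out of the listener's sight, so neither labels nor waveforms can betray the ears.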