Understanding Analog & Digital In Terms Of Audio
Neither is "better" or "best" -- an uncolored look at the underlying simple truths of both formats...

June 09, 2014, by Bruce Jackson & Steve Harvey


Analog? Digital? Both? In professional audio, many choices exist, but there’s not enough time to make the wrong ones. We regularly hear claims floating about, often skewed by particular opinions and interests that tend to color underlying simple truths.

The Merriam-Webster Dictionary defines the noun “analog” as being something that is analogous (similar or related) to something else. For example, an analog can be a food product that represents another, such as inexpensive whitefish “krab” intended to replicate more expensive (real) crab meat, or, for you vegetarians, soybeans processed to look and taste like beef.

If you’ve experienced either of these examples, you know that some products are more successful than others in recreating the essence of the original. The audio world is really no different.

But a fundamental difference between processed food and audio, of course, is that a foodstuff analog can only ever be exactly that: an analog. In contrast, audio can be reproduced as an analog OR as a digital representation of the original.

When something makes a sound, such as a musical instrument or a human voice, the vibrations produced travel through the air as an analog of that sound. So, we start with an analog of the original, a close representation of the sound source.

If we’re able to accurately preserve the subtleties and nuances of the original movement of the air throughout the audio system, then we have done our job. But how best to do that?

It isn’t just a question of whether it’s better to put together an entirely analog or entirely digital signal path to capture and convey the original sound source.

Both methods offer advantages and disadvantages, and a variety of factors, including circuit design and component choice, can affect how accurately the equipment reproduces the source.

But for all the advances that have brought high-resolution digital audio products to the marketplace, the debate still rages over which sounds “better.” The crux of the matter is how close current digital audio technology, even at high sampling rates and bit depths, can come in replicating full-bandwidth analog audio gear.

Audio, meaning sounds within the average human range of hearing (generally accepted to cover the frequency range of 20 Hz to 20 kHz), moves through the natural world as an analog signal that is continuous in time and amplitude.

It always starts with analog, such as the lovely voices of Norah Jones and Dolly Parton.

In a digital audio system, a natural sound traveling through the air must be converted after being captured by a microphone.

An analog microphone translates the movements of air on its diaphragm into an electrical signal. That electrical signal must then be converted into a digital signal, a string of zeroes and ones, in order to be transported by, operated on, or stored by the digital audio system that follows.

This is achieved through an analog-to-digital converter, utilizing sampling and quantization.

How You Slice It
Sampling and quantization is like looking at the speedometer of your car. If you don’t keep a regular eye on your speed, your car might be going faster or slower than you realize.

Audio sampling is simply taking regular measurements of a varying analog voltage or current. Because the audio voltage or current is constantly changing, we have to pick moments in time to freeze the audio as a non-varying number.

We must make measurements in a quick enough succession that we don’t miss important changes between measurements. And we must measure with enough resolution that we capture as much detail as we desire.

Theory tells us that the rate at which the signal is sampled must be at least twice the highest frequency that we wish to reproduce. The Nyquist theorem, therefore, means that to faithfully capture an analog audio signal extending to the accepted upper threshold of 20 kHz, we must sample at no less than 40 kHz, or 40,000 samples per second.

As an aside, the reason that the compact disc Red Book standard dictates a sampling frequency of 44.1 kHz is based on the early developers, Philips and Sony, wishing to cover the generally accepted audio spectrum of human hearing while also fitting the resulting digital information onto videotape.

By fitting three samples into each active line in the video field, at 50 Hz or 60 Hz, the developers were able to sample 44,100 times per second and save the data onto videotape, which was the digital audio storage and mastering precursor to the compact disc.
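The arithmetic can be checked directly. The line counts below (245 usable lines per NTSC field, 294 per PAL field) are the commonly cited figures for the early PCM video adaptors; a quick sketch:

```python
# Samples per second stored by the early PCM video adaptors:
# 3 samples per active video line, in both television standards.
ntsc = 60 * 245 * 3   # 60 fields/s x 245 usable lines/field x 3 samples/line
pal  = 50 * 294 * 3   # 50 fields/s x 294 usable lines/field x 3 samples/line
print(ntsc, pal)      # 44100 44100
```

Both standards land on exactly 44,100 samples per second, which is why the same rate worked on either side of the Atlantic.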

These days, we understand that the higher the sample rate, the better. Extending the sampling frequency well beyond the minimum 40 kHz allows digital processing tools to operate on the signal without compromise and to reduce alias signals.

Alias signals are components of the source above half the sampling frequency (the Nyquist frequency) that get folded back into the audible band, creating an unpleasant distortion.

Someone once gave a good example of aliasing. A guy living in a cave was waiting for daylight. He stuck his head outside about every 25 hours. Starting at 8 o’clock at night (8 pm), he next looked outside 25 hours later when, unbeknownst to him, it was 9 pm and still dark.


Looking outside his pitch-black cave every 25 hours, he encountered night 10 times in a row, leading him to believe that the night was 10 times longer than it really was.

That is what aliasing is all about. It’s a false reality, created by not sampling the signal of interest frequently enough. If the knucklehead had looked every hour he would have seen the true length of night.

The same goes for audio sampling. If we don’t make enough measurements within a period of time, we miss important audio information and end up with incorrect sounds, harmonically unrelated to what we really wanted to capture.
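The fold-back can be made concrete with a small stdlib-only sketch: a tone above half the sample rate produces exactly the same samples as a tone below it, so once digitized the two are indistinguishable.

```python
import math

fs = 44_100                  # sample rate (Hz)
f_high = 25_000              # a tone above the Nyquist limit of fs/2 = 22,050 Hz
f_alias = fs - f_high        # 19,100 Hz: where the energy folds back to

# Sample both tones at the same instants t = n/fs.
high  = [math.cos(2 * math.pi * f_high  * n / fs) for n in range(64)]
alias = [math.cos(2 * math.pi * f_alias * n / fs) for n in range(64)]

# The two sequences are identical: once digitized, the 25 kHz tone is
# indistinguishable from a 19.1 kHz tone sitting right in the audible band.
assert max(abs(a - b) for a, b in zip(high, alias)) < 1e-9
```

This is why converters place an anti-alias filter ahead of the sampler: the out-of-band energy must be removed before sampling, because afterwards it is baked in.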

Zeroes & Ones
At its simplest, the numerical representation of a slice of audio frozen in time could be either above or below a threshold: just two choices, on or off. The resulting one-bit audio would sound like a nasty guitar fuzz box.

Greater precision in measurement leads to more faithful reproduction of the sound we wish to preserve. We could choose to record sound as a stream of numbers using our familiar decimal system, based on our 10 fingers.

But digital electronic circuits are much more comfortable with the binary system of counting, where instead of 10 different levels there are only two, represented by zero and one.

The string of numbers flips by like the frames of a cartoon to create the illusion of a continuously variable analog of the original sound. The faster the pictures flip by each second, and the better the picture quality, the more realistic the illusion of movement in the cartoon. So it is in audio, where a higher sampling rate and a longer word length result in better-quality digital audio.
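The effect of word length can be sketched with a toy uniform quantizer. The sample value is arbitrary, and real converters are far more sophisticated than this simple rounding, but the trend holds: each added bit roughly halves the worst-case error.

```python
def quantize(x, bits):
    """Round x in [-1.0, 1.0) to the nearest of 2**bits uniform levels."""
    steps = 2 ** (bits - 1)       # quantization steps on each side of zero
    return round(x * steps) / steps

x = 0.12345678                    # an arbitrary instantaneous sample value
for bits in (8, 16, 24):
    # The error shrinks by roughly half for every bit added to the word.
    print(bits, abs(quantize(x, bits) - x))
```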

When we put up a microphone and create an audio signal chain between the natural sound and what comes out the other end, we are putting our faith in the equipment manufacturers.

Just as we have a palette of choices in creating that signal chain, manufacturers have a broad palette of components to choose from when creating the piece of equipment that we choose to run our sound through.

While the equipment designer is picking just the right resistor, capacitor, or digital signal processor, he has to keep in mind a slew of constraints placed on him, such as reliability, end cost to the customer, support, appearance, and so on.

Regardless of using resistors (analog) or chips (digital), the designer still has a slew of constraints of which to be aware.

So a lot of the talent a designer brings to a product is the ability to make the right balance of compromises to deliver a cost-effective solution for the customer. And this is what makes one product or technology a better choice than another. So where do they go right and what are the traps?

Playing The Numbers
More is better, right? In our society, we tend to be impressed by bigger numbers. If 16-bit audio sounds better than 8-bit audio, then 24-bit audio must sound even better.

Manufacturers wow us with science. They tell us that their analog to digital converters have 24-bit performance.

A simple engineering rule of thumb says that we get about 6 dB of signal-to-noise improvement with every bit we add. This means that 24-bit audio has a dynamic range of six times 24, or 144 dB.

Yet when we read the specs, we're lucky to see 118 dB out of a so-called 24-bit converter. So the difference between the claim of 24 and the reality of not even 20 is called "marketing bits." They are in there to make you and me believe we are getting something more.
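Both the rule of thumb and the "marketing bits" gap are easy to verify, since each bit is worth 20·log10(2) ≈ 6.02 dB:

```python
import math

def dynamic_range_db(bits):
    """Ideal converter dynamic range: 20*log10(2**bits), about 6.02 dB per bit."""
    return 20 * math.log10(2 ** bits)

print(round(dynamic_range_db(16), 1))   # 96.3 dB for CD-quality 16-bit
print(round(dynamic_range_db(24), 1))   # 144.5 dB claimed for 24-bit

# Working backwards from a measured 118 dB: fewer than 20 honest bits.
honest_bits = 118 / (20 * math.log10(2))
print(round(honest_bits, 1))            # 19.6
```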

Dynamic range is just one measure of performance. Other measurements, such as distortion, similarly fall short of the claimed number of bits.

So our 24-bit converters are really 20-bit converters. But 20 bits is still impressively good when you consider dynamic range—the difference between the loudest sound and the noise floor.

But these are just measurements. Unfortunately, measurements aren’t always a true indicator of how something is going to sound.

Often we find that a piece of gear sounds great even though its specs leave something to be desired. The key is clean conversion in and out, and numeric precision once inside the box. Once digital equipment designers have transferred the analog audio into their digital world, they use math to manipulate the sound.

Functions such as EQ involve repetitive processing of numbers. If the digital circuits don’t maintain enough precision, noise and distortion can creep into the pristine digital audio.

It’s like balancing your checkbook. If you only paid attention to the dollars and ignored the cents, your balance would be off by more and more each month. The rounding-off to the nearest dollar causes a bigger and bigger problem the more you do it.

Whether it’s the digital signal processor in your digital console or the Pentium in your laptop, care has to be taken not to allow the small rounding errors to creep up on you and cause noise and nasty distortion.
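The checkbook analogy can be sketched directly. The deposit amount here is arbitrary; real DSP designs avoid this drift by carrying extra precision (for example, double-precision accumulators) between rounding steps.

```python
# Deposit $10.40 every month, but keep the running balance in whole dollars.
exact = 0.0
whole_dollars = 0
for month in range(12):
    exact += 10.40
    whole_dollars += round(10.40)   # the 40 cents is thrown away every time

print(exact - whole_dollars)        # roughly $4.80 astray after one year
```

A tiny per-step rounding error is harmless once; repeated thousands of times per second inside an EQ or mixer, it becomes audible noise and distortion.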

And here’s where we encounter a fundamental difference between analog and digital audio: There’s good distortion and there’s bad distortion.

Good & Bad
What sounds “good” depends upon whom you ask. That is, it’s a subjective matter to a great extent. Those firmly in the analog audio camp point to an indefinable emotional impact inherent in non-digital sound.

The common observation: vinyl is “warm” and CD is “cold.” But is this accurate?

Audiophiles and purists still prefer the open and natural sound of recordings on vinyl, for example, when compared to what they consider the harsh, cold, or unemotional sound on compact discs.

But what is the accepted norm also plays into the equation.

The audio world is transitioning to digital at a rapid pace. In post-production and music recording, two areas that were among the first to make the digital transition, the entire process, from ingestion of the source material to outputting the finished product, can take place entirely within the digital domain.

The television and radio broadcast industries have also been busy making the transition to fully digital operation over the last few years.

The last bastion of analog audio is live sound. But there, too, over the last few years, the tools have begun to emerge that have allowed engineers to set up systems that, except for the transducers at either end of the signal path, are wholly digital.

Arguably, beginning with the widespread availability of CDs during the 1980s, there has been broad acceptance of digitally delivered music as perfectly satisfactory, even preferable to analog methods.

Yet analog in the 1960s, ‘70s, and ‘80s seemed somehow more pure, within the limits of the available technology.

But in the recording process today, you have almost no choice regarding staying in the analog domain if you plan to release the project commercially. At some point in the process you must convert into the digital domain, if only for the release format, if you hope to have any kind of financial success.

But even the benchmark set by CD has been lowered with the current downloading revolution. The very nature of an MP3 file dictates that the listener is missing something. MPEG compression is a ‘lossy’ method; that is, it throws away what it considers to be insignificant content when compressing the file size.

But perhaps it’s throwing away that indefinable something that analog aficionados cherish.

Perhaps more significantly, the MP3 revolution may be leading to a generation that has no concept of distortion. A few years ago, an engineering acquaintance related how he had to sit his son down and explain distortion to him.

At the risk of sounding like an old fuddy-duddy, kids today are used to hearing “crunchy” audio. Younger people who have only lived during this digital era are largely unaware of ‘good’ distortion, the acceptable harmonic distortion of analog audio.

To them, what we consider bad digital distortion is simply a part of the music that they listen to daily. Trading poorly “ripped” MP3s (regardless of legality), they have become so used to the crunchy sound quality of the format that, as some readers who have discussed the matter with their own children may attest, they may find it not only acceptable but even preferable to CDs.

The crunch of digital distortion, which is not limited to MP3s, of course, but can just as easily find its way into the digital recording process and onto disc or into the live sound arena, is unpleasant.

The pops, clicks, and surface noises of the vinyl beloved of audiophiles pale by comparison to the harsh distortion of digital audio, compressed or uncompressed.

Right & Wrong?
With digital audio, noise and distortion tend to be inharmonic and unrelated to the original sound, whereas analog distortion is harmonically related and may be pleasant or, in extreme cases, at least tolerable. Analog distortion can add a hard-to-define "warmth" to a sound.

Is wrong always wrong? Analog distortion can even be a major component of a sound, such as an electric guitar, and therefore desirable.

If you run audio through a tube device, you are doing it to add flavor, not to reproduce something exactly or realistically. In such cases, the audio process may not be accurately recreating the original source, but it is at least adding partials that are music-related, and therefore relatively harmonious.

There’s a reason why old mixing consoles, such as those designed and built by Rupert Neve, for example, are so sought after. The “warmth” controls on Mr. Neve’s contemporary updates of his old designs actually introduce a modicum of distortion.

This was something that was inherent in many of his original designs, albeit at low levels, yet which could impart an emotional response to the music produced through the equipment that made it attractive to the listener.

In the analog world, distortion is usually related to the original sound. The transistors, op amps, resistors, capacitors, and inductors can add even or odd harmonics. These unwanted extra harmonics fall at whole-number multiples of the original frequencies, so they remain musically related to the source.

Take the example of a bass note at 100 Hz. If the distortion is in even multiples, adding components at two times, four times the original, and so on, the harmonics at 200 Hz, 400 Hz, and up tend to sound warm and on the organic side.

Odd harmonics, such as those at three, five, and seven times the original, tend to sound hard and edgy. But whether even or odd, they are generally somewhat ‘musical.’
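The even/odd split falls straight out of trigonometry: squaring a cosine produces its second harmonic, cubing it produces its third. A stdlib-only sketch, measuring each harmonic with a one-bin DFT correlation (the 0.1 distortion amounts are arbitrary illustration values):

```python
import math

N = 1024
theta = [2 * math.pi * k / N for k in range(N)]
fundamental = [math.cos(t) for t in theta]

def harmonic_level(signal, k):
    """Amplitude of the k-th harmonic, via a one-bin DFT correlation."""
    return abs(sum(s * math.cos(k * t) for s, t in zip(signal, theta))) * 2 / N

# Asymmetric (squared-term) distortion adds EVEN harmonics;
# symmetric (cubed-term) distortion adds ODD harmonics.
even_dist = [x + 0.1 * x ** 2 for x in fundamental]
odd_dist  = [x + 0.1 * x ** 3 for x in fundamental]

print(round(harmonic_level(even_dist, 2), 3))   # 0.05  (2nd harmonic present)
print(round(harmonic_level(even_dist, 3), 3))   # 0.0   (no 3rd)
print(round(harmonic_level(odd_dist, 3), 3))    # 0.025 (3rd harmonic present)
print(round(harmonic_level(odd_dist, 2), 3))    # 0.0   (no 2nd)
```

Either way, every added component sits at an exact multiple of the fundamental, which is what keeps analog distortion sounding musical.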

The types of distortion picked up in a digital signal path are often unrelated harmonically to our pristine audio. Those unwanted aliases in the analog-to-digital conversion are inharmonic and sound just plain awful. And once they are in there, there is no getting rid of them.

But there are many different types of distortion (harmonic, intermodulation, frequency, phase, time, and so on) that are frequently present simultaneously and therefore interrelated. And not all are pleasant.

Ultimately, the goal in any audio system, analog or digital, is to maintain the lowest overall distortion.

Modeling In Digital
Byproducts of a compressor moving too fast in the digital domain can splatter grunge all across our good sound. Done with care, the same process can sound pleasant, like the analog equivalents.

Using a compressor as an example, it's often hard to model in the digital world what goes on in an analog compressor.

Analog designers often choose a simple gain control element: a module with a light shining on a light-dependent resistor. As the light comes on, it reduces the level of the audio flowing through the resistor.

But the resistor is slow to respond to the light. If a loud passage comes along and the light brightens to reduce the level, some of the loud stuff sneaks through while the sluggish control element changes.

It is also slow to come back to where it started after the loud passage has passed. Additionally, it adds subtle even-order harmonics.
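That sluggish behavior is easy to caricature in code. The sketch below is not any particular product's algorithm: a one-pole smoother stands in for the slow optical element, and the threshold, ratio, and lag values are made up for illustration.

```python
def opto_style_compress(samples, threshold=0.5, ratio=4.0, lag=0.995):
    """Feed-forward compressor with a deliberately sluggish gain element.

    The one-pole smoother (`lag`) stands in for the slow light-dependent
    resistor: the gain can only creep toward its target, so the front of
    a loud passage sneaks through, and recovery afterwards is just as slow.
    """
    gain = 1.0
    out = []
    for x in samples:
        level = abs(x)
        if level > threshold:
            # Static curve: output above threshold is reduced by the ratio.
            target = (threshold + (level - threshold) / ratio) / level
        else:
            target = 1.0
        gain = lag * gain + (1.0 - lag) * target   # slow attack AND release
        out.append(x * gain)
    return out

# A sudden full-scale burst: the first samples pass almost unattenuated,
# then the gain slowly settles toward the compressed value of 0.625.
y = opto_style_compress([1.0] * 2000)
print(round(y[0], 3), round(y[-1], 3))   # 0.998 0.625
```

The overshoot at the start of the burst and the lazy recovery are exactly the "character" that digital models of optical compressors try to reproduce.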

Going back to our food analogy, modeling the nice sounding analog compressor digitally is like trying to capture all the subtlety of the vanilla bean. Artificial vanilla has the same basic chemical composition as the real thing, but tends to lack the many subtle additional flavors and aromas that make real vanilla what it is.

Manufacturers sell compressors built around these inexpensive light-dependent resistors for many thousands of dollars. When we run audio through a digital equivalent, we want the designer to have captured all the subtle nuances of our funky analog unit.

Digital designers claim to accurately model the analog world, but just like our lowly vanilla bean, it’s not that easy. What digital audio equipment most certainly offers is precise control and repeatability. Extensive recall of presets is technically much easier in digital.

Yet some users will still pick analog equipment because quality of sound outweighs the easy life of presets.

In equipment like digital mixing consoles, dynamic and snapshot automation allows the near instantaneous reset of console-wide, complex setups. That can be a particularly useful feature when the front-of-house console is handling multiple acts in quick succession, to offer just one example.

Similarly, on a major tour that is using in-ear monitor systems, which are not greatly affected by the nightly change of venue acoustics, setup times can be drastically reduced. Digital consoles such as those made by DiGiCo, Yamaha and Digidesign offer varying levels of features and price.

Digital processors let you do things that are otherwise impossible in the analog domain, such as impossibly steep-sided filters and multiple EQ curves stacked one on top of the other. Alternatively, there is the hybrid approach, digital control of analog circuitry, which several console manufacturers experimented with in the early ‘80s.

The late Bruce Jackson was involved with several leading audio companies and mixed Elvis Presley, Barbra Streisand and Bruce Springsteen, among many others, over the course of his illustrious career. Steve Harvey is a widely published professional audio journalist.
