Study Hall

In The Studio: Techniques For Dealing With Phase

What it is, how it happens, what it can sound like and some techniques to handle it, accompanied by audio examples...

By Jon Tidey August 18, 2016

This article is provided by Audio Geek Zine.

Phase is a constant concern for recording and mixing engineers. Problems with phase can ruin your music; they can be easily avoided or corrected, but first you need to understand how the problem occurs.

This guide will attempt to explain almost everything there is to know about phase, what it is, how it happens, what it can sound like and some techniques to deal with it.

What Is Phase?
I’m going to consult my engineering school textbook Audio In Media for this.

It says:

The time relationship between two or more sounds reaching a microphone or signals in a circuit. When this time relationship is coincident, the sounds or signals are in phase and their amplitudes are additive. When this time relationship is not coincident, the sounds or signals are out of phase and their amplitudes are subtractive.

No wonder people are confused about phase. Even I got confused by that, and looking up other entries on phase in the book made it even worse. (I guess I shouldn’t read books.) I’ll try to break it down more simply.

Phase Vs Polarity
Let’s define things a bit more, starting with phase and polarity. These two terms are often used interchangeably, but they ARE different.

Phase is an acoustic concept that affects your microphone placement. Acoustical phase is the time relationship between two or more sound waves at a given point in their cycle. It is measured in degrees. When two identical sounds that are 180 degrees out of phase are combined, the result is silence; any degree in between results in comb filtering.

Polarity is an electrical concept relating to the value of a voltage, whether it is positive or negative. Part of the confusion between these concepts, besides equipment manufacturers mislabeling their products, is that inverting the polarity of a signal, changing it from plus to minus, is basically the same as making the sound 180 degrees out of phase.

In case you missed that: Phase is the difference in waveform cycles between two or more sounds. Polarity is either positive or negative.
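The distinction can be sketched in code. Here is a minimal Python example (the 48 kHz sample rate and 250 Hz test tone are my choices for illustration, not from the article) showing that flipping polarity and delaying by half a cycle both cancel the original when summed:

```python
import math

SR = 48000        # sample rate in Hz (assumed for illustration)
FREQ = 250        # test tone frequency in Hz
N = SR // 10      # 100 ms of samples

def sine(freq, n, sr, delay_samples=0):
    """A sine tone that starts after delay_samples of silence."""
    return [math.sin(2 * math.pi * freq * (i - delay_samples) / sr)
            if i >= delay_samples else 0.0
            for i in range(n)]

tone = sine(FREQ, N, SR)

# Polarity inversion: flip the sign of every sample (electrical concept).
inverted = [-s for s in tone]

# 180-degree phase shift: delay the copy by half a cycle (time concept).
half_cycle = SR // FREQ // 2            # 96 samples at 48 kHz / 250 Hz
shifted = sine(FREQ, N, SR, delay_samples=half_cycle)

# Summed with the original, both cancel (once the delayed copy has started).
sum_inverted = [a + b for a, b in zip(tone, inverted)]
sum_shifted = [a + b for a, b in zip(tone, shifted)]

print(max(abs(s) for s in sum_inverted))              # 0.0
print(max(abs(s) for s in sum_shifted[half_cycle:]))  # ~0.0 (rounding error)
```

For a pure sine wave the two operations sound identical, which is exactly why they get confused. For a complex signal like a drum hit they do not, and that is why the distinction matters.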

In And Out Of Phase
When two sounds are exactly in phase (a 0-degree phase difference) and have the same frequency, shape, and peak amplitude, the resulting combined waveform will be twice the original peak amplitude. In other words, two sounds exactly the same and perfectly in phase will be louder when combined.

Two waves combined that are exactly the same but have a 180-degree phase difference will cancel out completely. Silent output. These conditions rarely happen in real-world recording; more likely the two signals will either be slightly different, like two different mics on the same source, or the phase difference will be something other than 180 degrees.

In cases where the phase difference is not 0 or 180 degrees, or the signals are somehow different, you get constructive and destructive interference, or comb filtering. The peaks and nulls of the waveforms don’t all line up perfectly, so some frequencies will be louder and some will be quieter. This is the key to combining mics on a single source.

Here are some examples using sine waves.

This is a 250 Hz sine wave with a peak amplitude of -20 dB: LISTEN

If I add another track with the exact same wave and combine them (in mono), the output is the same but louder, with a combined peak amplitude of -14 dB. They have a 0-degree phase difference, so amplitudes are additive. LISTEN
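The arithmetic behind that: doubling the amplitude adds 20·log10(2), about 6 dB. A quick check (a sketch, not part of the original article):

```python
import math

peak_db = -20.0
# Two identical, in-phase signals sum to double the amplitude,
# which is a gain of 20 * log10(2) ~= 6.02 dB.
combined_db = peak_db + 20 * math.log10(2)
print(round(combined_db, 1))  # -14.0
```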


Now if I change the phase, the time relationship between these waveforms, by having the second track start 2 milliseconds later, the result is silence: LISTEN. A 250 Hz wave takes 4 ms to complete one cycle, so a 2 ms delay is exactly half a cycle, putting the copies 180 degrees out of phase.
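The delay-to-degrees conversion is simple enough to do in a couple of lines (a worked check of the 250 Hz / 2 ms case above):

```python
freq_hz = 250.0
delay_ms = 2.0

period_ms = 1000.0 / freq_hz              # one cycle of 250 Hz lasts 4 ms
phase_deg = (delay_ms / period_ms) * 360  # fraction of a cycle, in degrees
print(period_ms, phase_deg)  # 4.0 180.0
```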


Here is the same kind of thing with some white noise at -20 dB; the same audio file is copied to another track and combined. Louder, same as before: -14 dB combined.


Now I’ll use the invert function on the second track, and since these sounds are exactly the same they completely cancel out.


I think we understand it now, so here’s something slightly more interesting. I’ve taken the first second of the white noise clip and repeated it nine more times. The second track is the same, but on each repeat I’ve delayed it by an additional 1 ms.

This gives an idea of constructive and destructive interference and comb filtering. If you look at the frequency spectrum in an analyzer you will actually see notches cut out like the teeth of a comb. LISTEN
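The positions of those teeth are predictable: summing a signal with a copy delayed by t produces cancellation at odd multiples of 1/(2t) and reinforcement at multiples of 1/t. A sketch for the 1 ms case above:

```python
def comb_notches_hz(delay_ms, count=5):
    """Frequencies fully cancelled when a signal is summed with a copy
    delayed by delay_ms milliseconds (the 'teeth' of the comb)."""
    return [1000.0 * (2 * k + 1) / (2 * delay_ms) for k in range(count)]

def comb_peaks_hz(delay_ms, count=5):
    """Frequencies reinforced (up to +6 dB) by the same delay."""
    return [1000.0 * k / delay_ms for k in range(1, count + 1)]

print(comb_notches_hz(1.0))  # [500.0, 1500.0, 2500.0, 3500.0, 4500.0]
print(comb_peaks_hz(1.0))    # [1000.0, 2000.0, 3000.0, 4000.0, 5000.0]
```

Notice that doubling the delay halves the spacing between notches, which is why each 1 ms step in the example above sounds progressively more filtered.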

Real World Examples
Here is a bass guitar going into a DI box, the signal splits and goes to an amp and to the audio interface. The amp is miked and the mic is going into the interface too.

This is a very common way to record bass, but you may run into phase problems when the nearly instantaneous electrical signal from the DI box is combined with sound that has to travel through the air from the speaker to a mic some distance away.
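The delay at work here is easy to estimate. Sound travels at roughly 343 m/s in air, so a mic placed, say, 30 cm from the speaker (a made-up distance for illustration) hears the amp almost a millisecond after the DI signal arrives:

```python
SPEED_OF_SOUND = 343.0   # m/s in air at about 20 degrees C
SAMPLE_RATE = 44100      # assumed session sample rate

mic_distance_m = 0.30    # hypothetical mic-to-speaker distance

delay_s = mic_distance_m / SPEED_OF_SOUND
delay_ms = delay_s * 1000
delay_samples = delay_s * SAMPLE_RATE

print(round(delay_ms, 2))    # mic arrives ~0.87 ms behind the DI track
print(round(delay_samples))  # ~39 samples at 44.1 kHz
```

Nudging the DI track later by that amount (or using a sample-accurate alignment tool) brings the two signals back into phase.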


Here is the bass DI signal: LISTEN

Here is the microphone signal: LISTEN

Combined they sound a bit funny, definitely some hollowness going on: LISTEN



About Jon

Jon Tidey

Producer/Engineer, EPIC Sounds
Jon Tidey is a Producer/Engineer who runs his own studio, EPIC Sounds, and enjoys writing about audio on his blog.

