Editor’s note: This is part one of a three-part series that was featured in LSI October 2018 and can also be found on ProSoundWeb. Read part two here.
Monitor engineers know that their job requires many soft skills. For example, building trusting relationships with bands and artists is vital: performers who feel supported can forget about monitoring and concentrate on their real job of giving a great performance.
But what do we know about how the brain and ears work together to create the auditory response, and how can we make use of it in our mixes?
Hearing is not simply a mechanical phenomenon of sound waves traveling to the ear canal and being converted into electrical impulses by the nerve cells of the inner ear; it’s also a perceptual experience. The ears and brain join forces to translate pressure waves into an informative event which tells us where a sound is coming from, how close it is, whether it’s stationary or moving, how much attention to give to it and whether to be alarmed or relaxed in response.
While additional elements of cognitive psychology are also at play – an individual’s personal expectations, prejudices and predispositions that we can’t compensate for – monitor engineers can certainly make use of psychoacoustics to enhance their mixing chops. Over the space of my next three articles, we’ll look at the different phenomena that are relevant to what we do and how to make use of them for better monitor mixes.
What A Feeling
Music is unusual in that it activates many areas of the brain. Our motor responses are stimulated when we hear a compelling rhythm and we feel the urge to tap our feet or dance; the emotional reactions of the limbic system are triggered by a melody and we feel our mood shift to one of joy or melancholy; and we’re instantly transported back in time upon hearing the opening bars of a familiar song as the memory centers are activated. Studies have shown that memories can be unlocked in people with severe brain damage and dementia patients by playing music they’ve loved throughout their lives.
The brain’s reward circuitry releases the chemical dopamine in response to music – the same potentially addictive chemical that is also released in response to sex, Facebook “likes,” chocolate and even cocaine… making music one of the healthier ways of getting high.
DJs and producers use this release to great effect when creating a build-up to a chorus or the drop in a dance track; in a phenomenon called the anticipatory listening phase, our brains actually get hyped up waiting for that dopamine release when the music “resolves,” and it’s manipulating this pattern of tension and release that creates that “Friday night feeling” in our heads.
Our brains are good at anticipating what’s coming next and filling in the gaps, and a phenomenon known as the “missing fundamental” demonstrates a trick that our brains play on our audio perception. Sounds that are not a pure tone (i.e., a single-frequency sine wave) have harmonics. These harmonics fall at whole-number multiples of the fundamental: a sound with a root note of 100 Hz will have harmonics at 200, 300, 400, 500 Hz and so on.
However, our ears don’t actually need to receive all of these frequencies in order to correctly perceive the pitch. Play those harmonic frequencies and then remove the root frequency (in this case 100 Hz), and our brains fill in the gap – we still hear a 100 Hz pitch even though it’s no longer there.
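You can demonstrate the effect yourself. The short sketch below (a hypothetical illustration, not from the article) uses NumPy to synthesize one second of a tone built only from the 200–500 Hz harmonics of a 100 Hz fundamental; write it to a WAV file and most listeners will still report hearing a 100 Hz pitch, even though the spectrum contains no energy there.

```python
import numpy as np

SAMPLE_RATE = 44100  # Hz
DURATION = 1.0       # seconds
FUNDAMENTAL = 100    # Hz – the "missing" root note

t = np.linspace(0, DURATION, int(SAMPLE_RATE * DURATION), endpoint=False)

# Sum harmonics 2 through 5 (200, 300, 400, 500 Hz),
# deliberately omitting the 100 Hz fundamental itself.
harmonics = [2, 3, 4, 5]
tone = sum(np.sin(2 * np.pi * FUNDAMENTAL * n * t) for n in harmonics)
tone /= len(harmonics)  # normalize so the signal stays within ±1

# Verify the spectrum: strong peaks at the harmonics,
# essentially zero energy at 100 Hz – yet the ear "hears" 100 Hz.
spectrum = np.abs(np.fft.rfft(tone))
freqs = np.fft.rfftfreq(len(tone), 1 / SAMPLE_RATE)
```

With a one-second signal each FFT bin is exactly 1 Hz wide, so `spectrum[100]` reads the energy at 100 Hz directly; the perceived pitch comes entirely from the 100 Hz spacing between the harmonics, not from any energy at 100 Hz.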
We experience this every time we speak with a man on the phone – the fundamental of the average male voice is around 150 Hz, but most phones cannot reproduce anything below 300 Hz. No matter – our brain fills in the gap and tells us that we’re hearing exactly what we expect to hear. So while the tiny drivers of an in-ear monitor mould may not be effective at producing the very low fundamental notes of some bass guitars or kick drums, we still hear them as long as the harmonics are in place.