Study Hall

ProSoundWeb

Sending The Message: Understanding Prosody & Its Role In Effective Sound Design And Mixing

A deeper understanding of a song's mood and message is critical for every aspect of production, and audio certainly isn't exempt from that.

In “Charting The Mix,” we saw how a song’s production incorporates various sounds to control the song’s momentum and differentiate its sections. In truth, the way a song sounds isn’t an isolated thing. Although we tend to think of flangers and ping-pong delays as “special effects,” a successful producer knows that every sonic decision should support the song’s message or mood.

Here’s an analogy: in cinema, it’s not just the dialog and action that tell the story, it’s also the director’s visual decisions that affect the way the story is told. Colorful, bright, well-framed shots feel good, safe and stable, whereas darkness, shadows, and off-kilter camera angles build tension, suspense, or unease. (Compare and contrast the early and late Harry Potter films – as the story grows darker and more ominous, so do each film’s visuals.)

Music is no different. A song tells a story using more than the words. A good songwriter uses melody, harmony, and rhythm as lighting, set design, and camera angles to influence the listener’s mood and support the song’s message. Songwriters use the term “prosody” to describe this relationship between how a song sounds and the story it tells.

If your eyes are glazing over at this point, hear me out – this is important, because a song is the result of many deliberate decisions by the songwriter, and the artist is trusting us to faithfully deliver this product to the audience. This is just as critical as the guitar tone or the system EQ.

We want to assure our artists that we’re doing everything we can to help their songs translate effectively to the listeners, but how can we make that claim if we don’t understand even the basics of what they’re doing musically?

I can say first-hand that an understanding of the musical side of things improves my working relationships with artists a million times over. (That’s 120 dB!) We have a technical language to describe sound, and artists have an artistic one. Since they’re the ones everyone pays to see, let’s try to understand it. (No one buys a ticket to hear your vintage reverb unit.)
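(That parenthetical actually checks out: if we read "a million times" as an amplitude ratio, the standard conversion 20·log10(ratio) gives exactly 120 dB. A quick sketch – the function name here is mine, purely illustrative:)

```python
import math

def ratio_to_db(amplitude_ratio):
    # Convert an amplitude (voltage/pressure) ratio to decibels.
    return 20 * math.log10(amplitude_ratio)

print(ratio_to_db(1_000_000))  # 120.0 -- "a million times over"
```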

Professional motives aside, once you gain an awareness of prosody, you’ll likely never listen to music the same way again. Think of it as the musical equivalent of upgrading to HD.

The three main elements of a song are melody (the primary notes that are played/sung), harmony (the chords played/sung under those primary notes), and rhythm (the timing of the notes and chords). Note: I admit I’m simplifying this because many of us aren’t trained musicians, so if you happen to hold a music degree, please cut me some slack here.

Melodic Prosody

This is probably the most straightforward, so let’s look at it first. Most of us are familiar with Garth Brooks’ pub anthem “Friends In Low Places.” (If not, go look it up. I’ll wait.) In the chorus, the melody note on the word “low” is really, well, low. Ta-da! Prosody!

The song is fun to sing along with, far more so because the low note makes sense with the meaning of the lyric. If he were singing “I got friends in pizza places,” it would be significantly less appropriate, and as a result, less fun to listen to. This “up and down” being literally represented is prosody at its most basic.

For the theatre geeks, how about “Bring Him Home” from Les Miserables? The song opens with the lyric “God on high.” Guess where the high note is.

In the chorus of Sara Bareilles’ beautiful ballad “Gravity,” every single phrase has a downward-sloping melodic contour. (The lyric “I don’t wanna fall another moment into your gravity” rolls gently down to land on the word “gravity.”) The exception is the last line, “all over me,” where the melody goes up on the word “over.” The technique is so powerful because even listeners with no musical background are emotionally responsive to the way the melody supports the song’s message.

Here’s a textbook example of melodic prosody – in Leonard Bernstein’s West Side Story, he opens the song “Maria” with a musical interval called a tritone. Tritones sound unpleasant. (On a piano, play C and F# together. Not so pretty, is it?) In fact, the tritone used to be nicknamed the “Devil’s Interval” for its extremely discordant, dissonant sound. Composers usually take great pains to avoid tritones in their melodies.
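(There’s a numerical reason the tritone grates. In twelve-tone equal temperament, each semitone multiplies frequency by 2^(1/12), so the tritone – six semitones – lands on a ratio of √2, nowhere near the simple fractions the ear hears as consonant. A rough sketch, using an approximate middle-C frequency for illustration:)

```python
C4 = 261.63  # approximate frequency of middle C, in Hz

def interval_hz(root_hz, semitones):
    # Equal temperament: each semitone multiplies frequency by 2^(1/12).
    return root_hz * 2 ** (semitones / 12)

fsharp4 = interval_hz(C4, 6)      # tritone above middle C, ~370 Hz
print(round(fsharp4 / C4, 3))     # 1.414 -- the square root of 2, no simple ratio
```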

Why does Bernstein intentionally set Maria’s name over a tritone? In the story, her character represents something forbidden, risky and unsafe. This won’t end well (the story is a modernization of Romeo and Juliet) and Bernstein illustrates the unease of the situation by using a small musical idea (a “motif”) that’s literally uncomfortable to listen to.

These psychological associations run deep: Although modern trumpets have valves, their ancestors didn’t, and so could only play certain notes – namely the root, third, and fifth of the musical scale. Military bugle tunes “Taps” and “Reveille” use these tones exclusively.

Since the days of yore, horns have been used to herald the arrival of royals, so these intervals still carry a connotation of importance. Play B♭, B♭, F on the piano – this leap from the root to the fifth is heard at the beginning of the triumphant “Prince Ali” in Disney’s Aladdin and opens the iconic Superman theme composed by John Williams.
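(The opposite of the tritone’s awkward √2 is at work here: the equal-tempered perfect fifth – seven semitones, the root-to-fifth leap – sits within about a tenth of a percent of the “pure” 3:2 ratio between a tube’s second and third harmonics, which is why valveless horns produce it so naturally. A small sketch:)

```python
def interval_ratio(semitones):
    # Frequency ratio of an equal-tempered interval.
    return 2 ** (semitones / 12)

fifth = interval_ratio(7)   # perfect fifth, e.g. Bb up to F
print(round(fifth, 4))      # 1.4983 -- within ~0.1% of the pure 3:2
```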

Rhythmic Prosody

This works by establishing rhythmic patterns and using variations to manipulate the listener’s expectation, a musical principle called “tension and release.” An obvious example: “How Sweet It Is (To Be Loved By You)” comes to a momentary stop on the word “stop.” So does “Stop! In The Name Of Love.”

“Memory,” from the musical Cats, lulls the listener into a serene, comfortable, reflective state by sticking to the routine and repeating the same rhythmic motif (two long notes, five short notes) three times in a row. So does the Beatles’ “Let It Be,” which sets the title phrase to the same rhythm over and over again, a constant assurance. Stravinsky’s “Rite of Spring” does the opposite, using unpredictable and irregular rhythmic jolts to create tension and stress. (It worked – at the premiere, the audience rioted. This was in 1913.)

Harmonic Prosody

Chords create different textures and establish varying amounts of “stability” – sonically, can we comfortably hang out here for a while (I chord)? Do we feel a slight forward push (IV chord)? Or do we really want to get back home (V chord)? Tom Petty’s “Free Fallin'” spends a lot of time on the IV and V chords, creating an unmoored, falling feeling that lacks stability or permanence – perfect, considering the subject matter.

It’s hard to give isolated examples, as songwriters usually use multiple elements to establish prosody. Tim McGraw’s hit “Live Like You Were Dying” changes to a higher key for the last chorus, creating an energizing lift and pushing the already-high melody notes on the lyrics “sky diving” and “mountain climbing” (both high things) even higher.

Michael Jackson’s “Man In The Mirror” uses prosody during the ending modulation (key change), a strong rhythmic and harmonic change that emphasizes the lyric “change.” John Williams’ flying theme from E.T. sonically soars by repeating a powerful ascending fifth interval higher and higher, with chords rising beneath.

There’s a very powerful relationship between sounds and emotions, and a good composer or songwriter can set up these associations and “cash them in” later, to great effect.

The idea of using a repeated musical idea to identify a character dates back to the operatic tradition. It’s called leitmotif. (Good crossword answer!)

Puccini used it in La bohème, and many people are surprised to learn that when Jonathan Larson modernized the opera into the musical Rent, he kept the idea intact – pay attention to how the character Mimi introduces herself, for example.

John Williams’ score for Jaws trains us to associate a particularly uncomfortable interval, the minor second (play a low G to A♭), with the killer shark. We hear this two-note motif, repeating and accelerating, and we already feel the tension of the shark’s approach. We don’t need to see it on the screen – we can hear it.

Making Connections

OK, hold on. This isn’t a music theory course. How does it translate into actionable information we can use to make mixing decisions? In other words, can we extend the concept of prosody such that the production design also supports the message of the music?

I’m going to relate it to lighting design (stay cool, it’s just an example) in that it’s a broad-strokes approach. There’s not a single light cue that establishes the mood. It’s an atmosphere established by the big picture – is it a “big” or a “small” look? Cool or warm colors? How is motion used? I’ve tried my hand at lighting design in the past, and since there’s no way I could go toe-to-toe with a big-time LD, these basic prosodic elements become critical.

In Oliver!, the orphan pines for his mother during the ballad “Where Is Love.” It’s profound loneliness, so let’s wash a large area to emphasize his insignificance and solitude, a tiny person alone on a huge stage. Later in the show, when a female character sings of being trapped in an abusive relationship (“As Long As He Needs Me”), let’s make her “special” a small shaft of light, squared off with sharp edges on the beam, literally trapping her in place.

Both songs are ballads, but to treat them both completely the same way ignores the opportunity to create some “production prosody” by using the show’s design to emphasize the message.

Back to audio. What are our tools, as mix engineers, to establish or emphasize a song’s mood? Two of the biggest elements we can control are dynamic range and reverb. Dynamically, is the mix ebbing and flowing, free and breathing? Or is it compressed and tightly controlled? There’s a proper time and place for each. Reverb puts the performers – and the listeners – in a space of a certain size. There’s a lot of potential here to set a mood and influence perceived intimacy.
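(As a concrete, deliberately simplified illustration of the dynamic-range lever: a downward compressor’s static gain curve pulls everything above a threshold toward it, shrinking the ebb and flow. The threshold and ratio values below are arbitrary examples, not a recommendation:)

```python
def compressor_gain_db(level_db, threshold_db=-20.0, ratio=4.0):
    # Static gain curve of a simple downward compressor:
    # levels above the threshold are reduced by the ratio;
    # levels at or below it pass unchanged.
    if level_db <= threshold_db:
        return 0.0
    compressed = threshold_db + (level_db - threshold_db) / ratio
    return compressed - level_db  # gain reduction, in dB (negative)

# A -8 dBFS peak through a 4:1 compressor with a -20 dB threshold
# is pulled down by 9 dB -- a tighter, more controlled mix.
print(compressor_gain_db(-8.0))  # -9.0
```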

In “Mr. Cellophane” from Chicago, the character sings about being ignored and invisible. Is it really appropriate to mix a big, booming vocal that fills the venue? At the end of Les Miserables, all of the characters who have been killed during the show come back and sing the closing anthem. It’s an emotional moment, about conflict, poverty, and the loss of human life.

When I approached the sound design for a local production of Les Mis, I really wanted this moment to have impact and emphasize the humanity, so I ended up making a pretty unconventional choice – I completely muted the entire system. After two hours of modern, amplified sound, the transition to natural, unamplified propagation created an (intentionally) jarring moment that forced the audience to refocus.

These are, perhaps, extreme examples, but the point is that having a generic approach (i.e., “OK, ballad. OK, production number. OK, …”) is adequate, but certainly not compelling. To maximize impact, every element of the production should support the artist’s message. Since songs don’t come with sticky notes attached – “sad. happy. angry.” – an understanding of prosody is essential to grasp what the artist seeks to communicate and to further it through our decisions.

A deep understanding of the mood and message is critical for set design, costume design, lighting design, and virtually every other aspect of production. I’m simply suggesting that audio isn’t exempt from that.
