February 15, 2013, by Bruce A. Miller
Many people say that older records “feel” better. They also complain that much of today’s music seems “sterile.”
I believe a big part of this is that so much music today is made in sequencers, or by bands recording their parts individually rather than playing together.
As a result, you lose the dynamics that I feel are important in music.
Live music played by a group of musicians (even if the drummer is playing to a click track) will rush and lay back. Good musicians will all move together if they are able to hear each other clearly enough and are sensitive enough to actually listen to each other.
Although it’s crucial for drummers and bass players to move together, it’s also important for the entire band to follow along and speed up/slow down as well.
I recently recorded a jazz band live and was editing together different takes. The takes all felt great on their own, but they didn’t always work well when cut together. The reason was that one take might have been slightly rushed and another slightly laid back.
But since the musicians were all good and listening to each other, each take felt tight. Even within a single part of the song, if the drummer was laying back, the rest of the band was as well. The result was that all of the parts hit together. The downbeats landed together, and the melodic elements were properly supported by the rhythms being played.
When the producer asked me to move some sections from take to take, I felt differences in how the tracks felt. One edit went from a rushed solo section to a laid back head section (repeat of the main melody). I thought the contrast may have been too much, but the producer actually liked how the band seemed to relax back and fall into the head together.
He also asked me to move some individual solo parts from one take to another. This did not work well. Since the musicians were leaning forward or back together, moving individual parts from one take to another meant a laid back solo was now being played over a pushing rhythm track. It did not sound tight and professional.
Even when I moved some vocals a mere 8 measures within a single take, the feeling was totally different and it did not work. I had to make sure that each solo part (and even vocal) was heard with the music that it was performed with.
Although some people may prefer each part to be played tight to a click track (especially because it makes it easier to move parts around), there is something human in the way musicians move together. I believe that music played by a group has an element of communication between the musicians that listeners can pick up on and even ride along with.
Sequenced music is sometimes created with that push and pull intentionally left in. On the Jamaica Boys albums, the great Lenny White played the drum machine buttons live and never quantized the parts because he wanted the machine sounds to keep feeling “live” (yay Lenny).
It is possible to create sequenced music that feels human, but only by allowing the imperfections to happen and grooving along with them.
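To make the quantizing idea concrete, here is a minimal sketch (in Python, with hypothetical note timings in beats) of the difference between snapping every note exactly onto the grid and pulling notes only partway toward it, so some of the human push and pull survives. The note data and the `strength` parameter are illustrative assumptions, not any particular sequencer’s behavior.

```python
# Sketch: full quantizing vs. keeping human timing in a sequenced part.
# Times are in beats; a grid of 0.25 means sixteenth notes.

GRID = 0.25

# A hypothetical played hi-hat part: slightly ahead of or behind the grid.
played = [0.02, 0.27, 0.49, 0.76, 1.01, 1.24, 1.53]

def quantize(times, grid=GRID):
    """Snap every note exactly onto the grid -- the 'machine' feel."""
    return [round(t / grid) * grid for t in times]

def humanize(times, grid=GRID, strength=0.5):
    """Pull each note only partway toward the grid, keeping part of its
    original push or pull (strength=1.0 would be full quantizing)."""
    out = []
    for t in times:
        target = round(t / grid) * grid
        out.append(t + strength * (target - t))
    return out

print(quantize(played))   # every offset removed
print(humanize(played))   # half of each offset kept
```

With `strength=0.5`, a note played at 0.27 beats lands at about 0.26 rather than exactly 0.25: it still leans the way the player leaned, just a little less.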
So now that we have the band moving together, what about their volume? Most of the songs I like have parts that are soft and parts that are loud. These changes in volume are most effective when the entire band is performing with the same sensitivity. The band should “whisper” together, and should “shout” together.
Live musicians will also change their volumes during the course of a song based on their roles at each part of the arrangement. When a guitarist stops playing rhythm and begins to solo, he should play louder. Likewise, when he resumes playing rhythm, he should drop his volume so he does not overpower whatever instrument has taken over the lead position.
Dynamics are not limited to a band’s rhythmic dynamics (how they push and pull) or volume dynamics (how loud they all get at certain sections either together or individually).
Individual instruments have their own dynamics that are as expressive and important as group dynamics. When an instrument is played softly it will have a different tone than when it is played loudly.
Also, individual notes played at different parts of an instrument (such as the exact same “A” note played on different strings and fret positions on a guitar) will have a different tone. Good musicians know their instrument well, and will play different variations of the same note to elicit more emotion and expression.
Volume is another dynamic that good musicians will take advantage of. A soft instrument will elicit the same feeling of intimacy that a whispered voice will, and make a listener feel that they need to “lean in” and listen more carefully. This forced involvement can make the difference between someone “getting into” a song or not caring about it.
Opening Up The Mix
It’s been said that black and white photos can be more compelling than color because the viewer must participate more. In the same way, when you “lean in” to listen to a musical part more carefully, you participate more and the part becomes more compelling.
Mixes can be crowded in many ways. The song arrangement can be crowded if too many elements are playing parts that are similar, yet just different enough to cause musical clashes or smudges.
Spatial crowding can occur if there are too many parts clashing within the same areas of the stereo image. Frequency crowding can occur if there are clashing sounds with similar tones (such as a screeching sax and vocal).
It is important to create differentiation between different parts, especially in songs with crowded arrangements. Spatial differentiation is achieved by placing instruments into different positions within the stereo image, or by having some parts move rather than stay in one place.
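As a small illustration of placing a part in the stereo image, here is a sketch of a constant-power pan law, one common way mixers map a pan position to left/right levels. The function name and the choice of pan law are assumptions for illustration; consoles and DAWs differ in the exact law they use.

```python
import math

def constant_power_pan(sample, pan):
    """Place a mono sample in the stereo image.

    pan runs from -1.0 (hard left) through 0.0 (center) to 1.0 (hard right).
    The constant-power law keeps perceived loudness roughly steady as a
    part moves across the image: left^2 + right^2 stays constant.
    """
    angle = (pan + 1.0) * math.pi / 4.0  # map pan to [0, pi/2]
    return sample * math.cos(angle), sample * math.sin(angle)

# Center: the part sits equally in both speakers (about -3 dB per side).
left, right = constant_power_pan(1.0, 0.0)

# Hard left: all of the signal in the left channel.
hard_l, hard_r = constant_power_pan(1.0, -1.0)
```

Sweeping `pan` over time (rather than leaving it fixed) is one way to get the moving-parts differentiation described above.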
Frequency differentiation can be achieved by EQing sounds to emphasize more of their differences rather than similarities. For example, both a kick drum and a bass will have very low sounds, but the kick will also have a sharp attack that will cut through the sound of the bass, and the bass will have a sustained roundness that will continue between kick hits.
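The kick/bass carving above can be sketched with the simplest possible filters. This is a toy one-pole filter pair in Python (sample values and cutoff are hypothetical); a real mix would use proper EQs, but the principle is the same: keep the kick’s sharp attack by removing its lows, and keep the bass’s sustained low end by smoothing its highs.

```python
import math

def one_pole_lowpass(signal, cutoff_hz, sample_rate=44100):
    """Smooth out the highs, keeping the sustained low end (for the bass)."""
    a = math.exp(-2.0 * math.pi * cutoff_hz / sample_rate)
    out, y = [], 0.0
    for x in signal:
        y = (1.0 - a) * x + a * y
        out.append(y)
    return out

def one_pole_highpass(signal, cutoff_hz, sample_rate=44100):
    """Keep sharp attack transients (for the kick) by subtracting the lows."""
    low = one_pole_lowpass(signal, cutoff_hz, sample_rate)
    return [x - l for x, l in zip(signal, low)]

# A hypothetical click-like kick transient: most of it survives the
# high-pass, while a sustained bass-like signal would be removed by it.
kick_attack = [1.0, 0.0, 0.0, 0.0, 0.0]
print(one_pole_highpass(kick_attack, 2000.0))
```

Run the high-pass over a long, steady signal and the output decays toward zero; run it over the sharp attack and the transient comes through, which is exactly the “cut through” behavior described above.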
Adjusting instrument volumes is an important part of creating mixes with clarity, in which all of the instruments can be clearly heard and felt according to their functions.
Some mix engineers try to create differentiation by equalizing each sound to fit into a narrow frequency range. This works, but results in mixes that are significantly less expressive than mixes that contain full sounding instruments that move forward or back in the stereo image, either taking over the image or creating space for other instruments to take over.
Static changes to sound are changes that remain the same throughout an entire mix (“set it and forget it”). Dynamic changes are changes that are adjusted to different settings through the course of the mix. Anything can be dynamic, and almost every mix involves increasing and decreasing channel output volumes, panning, or even turning a channel’s output on and off (“muting”).
The most common dynamic element is channel output volume. Channel output volume is usually accessed by a sliding fader rather than by a rotary knob. During a song, as the volume of a channel is increased and decreased, the channel fader moves up and down. As a channel is turned on or off, the mute switch opens and closes.
These changes can be done by hand, but then they need to be performed every time that mix is played. As a result, in the days when changes had to be performed by hand it was not unusual to see many people crowded over a console, each with a specific job to do at a certain part of the song.
For example, when the singer says the word “baby” perhaps one person would turn a knob to make the voice sound different than it sounded for every other word. Perhaps when the song ended there was one instrument that kept playing that had to have its channel turned off (muted) every time the song reached the end.
Many consoles allow you to record the changes you make to knobs or faders and play them back. This automation lets you make changes to a channel along with the music only once and have them play back, leaving you free to work on another track. For example, you can mute a vocal track where the singer coughed during recording, and have the automation perform that mute for you every time the song plays.
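Under the hood, recorded automation amounts to a gain envelope applied to the channel over time. Here is a minimal sketch of that idea (the breakpoint times, the cough position, and the function names are hypothetical): fader positions are stored as (time, gain) breakpoints, interpolated between, and multiplied into the audio on playback.

```python
# Sketch: playing back recorded automation as a gain envelope.
# Breakpoints are hypothetical (time in seconds, gain 0.0-1.0); real
# consoles/DAWs interpolate between recorded fader positions similarly.

def envelope_gain(t, breakpoints):
    """Linear interpolation between recorded (time, gain) fader positions."""
    if t <= breakpoints[0][0]:
        return breakpoints[0][1]
    for (t0, g0), (t1, g1) in zip(breakpoints, breakpoints[1:]):
        if t0 <= t <= t1:
            return g0 + (g1 - g0) * (t - t0) / (t1 - t0)
    return breakpoints[-1][1]  # hold the last fader position

# Fader ride: full volume, quick dip to mute a cough at 2.0-2.5 s, back up.
fader = [(0.0, 1.0), (2.0, 1.0), (2.01, 0.0), (2.5, 0.0), (2.51, 1.0)]

def apply_automation(samples, sample_rate, breakpoints):
    """Multiply each sample by the envelope gain at its moment in time."""
    return [x * envelope_gain(i / sample_rate, breakpoints)
            for i, x in enumerate(samples)]
```

Every playback reads the same breakpoints, so the mute happens identically each time: the one-person-per-fader problem the paragraph above describes, solved by storing the moves.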
Automation is usually used for volume faders or send knobs, but many effects allow you to automate settings as well.
Bruce A. Miller is an acclaimed recording engineer who operates an independent recording studio and the BAM Audio School website.