Another photography analogy: Ansel Adams espoused the idea of visualizing the result you wished to have when viewing a scene, to imagine how you would want it to appear in a photographic print.
Then, using the technology and technique at hand, he would work to achieve that result. One of the challenges is that a natural scene has levels of light and dark, i.e., dynamic range, that cannot be captured or reproduced with photographic equipment.
First, Adams suggested exposing the film in order to ensure that there were details in the shadows (above the noise floor). Then he gave pointers as to how the film should be developed in order to prevent the highlights from blowing out (headroom).
Finally, he formulated a precise method of printing so that—although the real-world levels of light and dark could of course not be reproduced—the relative levels could be kept intact, providing the viewer with the impression desired by the photographer in the original vision. With the tools of the day, this was a very involved process, with lots of smelly chemicals and expensive equipment, and it required a whole lot of patience and discipline while stumbling around in the darkroom.
Sound is not that different. For one thing, the real dynamic range of many instruments or ensembles is greater than what can be reproduced through loudspeaker systems. And yet the listener generally wants a bit less dynamic range than reality for the sake of comfort, especially when it comes to things like vocals. Thus, dynamic compression is routinely used for this purpose.
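To make the idea concrete, here is a minimal Python sketch of downward compression: any sample whose level exceeds a threshold is attenuated by a ratio, shrinking the gap between loud and quiet passages. The names (`threshold_db`, `ratio`) and the static, sample-by-sample design are illustrative assumptions; real compressors add attack and release smoothing.

```python
import math

def compress(samples, threshold_db=-20.0, ratio=4.0):
    """Simple static downward compression (no attack/release smoothing).

    Samples above threshold_db have their excess level divided by ratio;
    samples at or below the threshold pass through unchanged.
    """
    out = []
    for s in samples:
        # Convert the sample's magnitude to decibels (floor for silence).
        level_db = 20 * math.log10(abs(s)) if s != 0 else -120.0
        if level_db > threshold_db:
            # Keep the threshold, reduce only the excess above it.
            target_db = threshold_db + (level_db - threshold_db) / ratio
            s *= 10 ** ((target_db - level_db) / 20)
        out.append(s)
    return out
```

With the defaults, a full-scale sample (0 dB) is pulled down 15 dB, while a quiet sample below the -20 dB threshold is left alone.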
However, let’s get back to the main point: cultivating a vision about the desired end result. What kind of music is it? Do the performers have an idea of how they want to be presented? Is there a recording we’re trying to match or to which the audience is comparing our efforts? All these things affect our choices in technology and technique. That is, if we’re paying attention.
Wherefore Art Thou, Reverb?
What are some other examples of using technology to achieve a “vision” in the mix? Application of reverb to create space, for sure. Applying delay to enhance the rhythmic elements of the music or to create “size” by panning a delayed copy of a source. Drawing on distortion to supply “color.” And certainly, using EQ to carve out space for each instrument or voice, draw attention to or away from an element in the mix, or to create vertical “size.” All these approaches are certainly valid, and there are dozens (if not hundreds) more.
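The "size by panning a delayed copy" trick mentioned above can be sketched in a few lines of Python: the dry signal goes to one channel and a short-delayed copy to the other, which the ear fuses into a single, wider image. The function name and the delay value are illustrative assumptions, not a specific product's method.

```python
def widen(mono, delay_samples=441):
    """Widen a mono signal: dry copy left, delayed copy right.

    delay_samples=441 is about 10 ms at a 44.1 kHz sample rate, short
    enough that the two channels fuse into one wide image rather than
    being heard as a distinct echo.
    """
    left = list(mono) + [0.0] * delay_samples   # dry, padded to match
    right = [0.0] * delay_samples + list(mono)  # delayed copy
    return left, right
```

In practice the delayed side is often attenuated slightly, and delays much beyond roughly 30 ms start to read as a discrete echo instead of width.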
One way to learn these and other creative uses of technology is to carefully analyze recordings and performances with disciplined listening. One of my best audio teachers in college would start every class with an analytical listening exercise, where we would make a chart with the relative levels of each instrument or voice, what effects were used, panning and space, etc.
After months of doing this with dozens of songs, it was very eye-opening because we realized how each different producer and engineer had exploited the available technology to achieve certain results, thereby enhancing the musical experience. Once in a while we’d also notice the bad examples where some aspects of the recording or mixing techniques got in the way of the results, and even ruined the recording.
One final thought: it’s easy to get caught up in the technology itself. But really, our job is to get past that, figure out what works, get really good at it, and make music. After all, that’s what it’s all about.
Karl Winkler is director of business development at Lectrosonics and has worked in professional audio for more than 20 years.