Feature

Thursday, May 09, 2013

Microfiles: The American D-22, Form & Function In One Package

A slim-profile design with several innovations

When I got serious about collecting microphones I started a list of the models I wanted, and one of the first names on the list was the D-22 from the American Microphone Company. It’s a beautiful omnidirectional microphone with two-tone coloring and a very modern, unique look.

American was a popular manufacturer from the 1930s and into the mid-1960s. The company was founded by Fern A. Yarbrough in Los Angeles, later relocating to Pasadena, CA, and it built mics primarily for broadcast and recording, with a few models designed for live sound use, including the D-22. During World War II, the company produced several communications mics for the military.

The Elgin Watch Company bought American in 1955, with a newspaper article of the day noting that the purchase was part of Elgin’s effort to expand its interests outside of watchmaking.

Specifically, the company stated that it would commence developing miniaturized components for mics because the current offerings were considered “too bulky for correct use on television.” By the early 1960s, that plan was abandoned, with Elgin selling American to Electro-Voice, which soon retired the brand.

Trim Lines
The D-22 was introduced in the early 1950s and was part of the Full-Vision (FV) Series, a name referring to the slim profile that offered a better view (or full vision) of a presenter on TV compared with other, bulkier contemporary models.

The FV Series included a pair of identical looking mics, the D-22 and D-33. However, they differed in their intended usage and offered different frequency responses.

The impedance switching linkage bar under the nameplate. Note that the high impedance screw is missing. (click to enlarge)

Marketing material states, “The D-33 is designed for television, AM or FM broadcasting and recording, while the D-22 is suited for less critical applications.” The spec sheet of the D-33 shows the frequency response is 40 Hz to 15 kHz (+/- 2.5 dB), while the D-22’s response is listed as 100 Hz to 8 kHz (+/- 5 dB). All other specs were identical.

Both models have precision machined “Duraluminum” cases as well as micro-metal alloy diaphragms that are unaffected by temperature changes and treated for protection against corrosion. Impedance on both models may be easily changed by removing the name plate and loosening the screws that hold “linkage bars.”

The bars then slide to the impedance position desired, with the screws tightened and the name plate re-installed. Many American models use linkage bars for setting the impedance, and I think it’s a better idea than exposed switches that could easily get changed.

All of the mics in the FV Series are simply gorgeous, with a color combination of black and gold. Over the years, the black on many units has faded into a dark purple color, only adding to its beauty in my opinion.  My model seems to have lived its life mostly in the case, so the black color has not faded.

The removable “slide-lock” stand adapter and XLR connector. (click to enlarge)

For an additional $5, FV mics were also available in a non-reflecting “Antihalation” finish (all black) for unobtrusive use on television.

Then & Now
The D-22 also has a unique stand mount called the “slide-lock,” which allows a performer to easily detach the mic from a stand for handheld use. A knurled knob end screw on the stand mount is used to securely lock the unit into a recess in the mic base. At the rear is a standard 3-pin XLR connector. (American was one of the early adopters of this now-standard connector.)

The broadcast and higher quality models from American shipped with fitted velvet-lined cases as fancy as the mics themselves. In addition to the case and stand adapter, 25 feet of shielded rubber-covered cable was included with each mic.

The modern, sleek lines and quality sound made the D-22 (and D-33) a popular choice for broadcast, recording and live events in the 1950s and 60s. And their timeless beauty makes them a popular choice with collectors today.

The D-22 inside the factory supplied velvet lined case. (click to enlarge)

American Model D-22 Specs
Transducer Type: Micro-metal alloy diaphragm dynamic
Polar Pattern: Omnidirectional
Frequency Response: 100 Hz-8 kHz (+/- 5 dB)
Sensitivity: -86 dB at low impedance, -52 dB at high impedance
Nominal Impedance: Switchable 50 ohms or 40K ohms
Size: 8.2-in x 1-in
Net Weight: 7 ounces, including slide-lock stand mount
List price for a new one in 1961: $99.50

Craig Leerman is senior contributing editor for Live Sound International and is an avid collector of vintage microphones. Click here to check out more Microfiles from Craig.

Posted by Keith Clark on 05/09 at 11:28 AM

Wednesday, May 08, 2013

Cable Anatomy Part 3: Everything You Wanted To Know About Instrument Cable

Construction, materials, shielding, specifications and much more

See Part 1, Microphone Cable here and Part 2, Loudspeaker Cable here.

Are instrument cables used for high-impedance or low-impedance lines?

Generally, the source impedance is the determining factor in cable selection. Instrument cables are used for a wide range of sources. Many keyboard instruments, mixers, and signal processors have very low (50 to 600 ohm) source impedances.

On the other hand, typical electric guitar or bass pickups are very inductive, very high impedance (20,000 ohms and above) sources. Typical load impedances are greater than 10,000 ohms, which limits the electrical current flow to a very small amount on the order of a few thousandths of an ampere (milliamps).

How much power does an instrument cable have to carry?

The voltages encountered range from a few millivolts, in the case of the electric guitar, to levels over ten volts delivered by line-level sources such as mixers. By Ohm’s Law this represents power levels of less than a thousandth of a watt.
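To put rough numbers on this, here is a minimal sketch using Ohm’s law; the source voltages and load impedances below are illustrative assumptions, not measured specs.

```python
# Rough check of the current and power levels discussed above, using Ohm's law.
# The source voltages and load impedances are illustrative assumptions.

sources = [
    ("electric guitar", 0.1, 1_000_000.0),   # ~100 mV into a 1 megohm amplifier input
    ("line-level mixer", 1.0, 10_000.0),     # ~1 V into a 10 kilohm input
]

for name, volts, load_ohms in sources:
    current = volts / load_ohms        # I = V / R
    power = volts ** 2 / load_ohms     # P = V^2 / R
    print(f"{name}: {current * 1e6:.1f} microamps, {power * 1e6:.2f} microwatts")

# Both cases stay well below a milliamp of current and well below a thousandth of a watt.
```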

What kind of frequency response does an instrument cable need? What are the lowest and highest frequencies produced by the source?

The bandwidth spans the entire audible range of frequencies, from the 41 Hz (and below) of bass guitar and synthesizer to the 20 kHz harmonics of keyboards and cymbals. Recording applications demand wide bandwidth to preserve the “sizzle” of a hot performance. Even an electric guitar has a bandwidth of about 82 Hz to above 5 kHz.

How big does an instrument cable need to be? Will a bigger cable sound better? Will a bigger cable last longer?

In order to be compatible with standard 1/4-inch phone plugs, the cable is effectively limited to a maximum diameter of about .265 of an inch. Larger cable diameters demand larger plug barrels, which sometimes won’t fit jacks that are located close together or in tight places. In terms of both sound and durability, “it’s not how big you make it, but how you make it big.”




What are the basic parts of an instrument cable and what does each one do?

The coaxial configuration is generally used for unbalanced instrument cables. At its simplest it consists of a center conductor, which carries current from the source, separated by insulation from a surrounding shield, which is also the current return conductor necessary to complete the circuit. These three components are augmented by an electrostatic shield to reduce handling noise and an outer jacket for protection and appearance.

What is a stranded center conductor? Why is it important?

A stranded conductor is composed of a number of strands of copper wire bunched together to form a larger wire. Solid conductors having only one strand are the cheapest and easiest to work with when assembling cables, because they do not require the twisting and tinning that stranded types need to prepare them for soldering.

The problem with a solid conductor is that it quickly fatigues and breaks when it is bent or flexed. This makes stranded conductors a must for cables that are frequently moved around, especially when they are attached to human beings playing music.

Finely stranded conductors increase the cost of the cable because of the increased production time and the expensive and sophisticated machinery required to assemble very small and fragile strands into a single conductor. The stranding of the center conductor is only one of a number of factors that influence the overall flexibility of a given cable, but it is generally true that finer stranding increases the flexibility and the flex life of the cable.

What is wire gauge? What gauge wire is used in instrument cables?

The diameter of copper wire is typically given in AWG (American Wire Gauge), with the larger numbers signifying smaller size. For instance, a 20 AWG (or “20 gauge”) wire is smaller than an 18 AWG wire. Generally, instrument cable center conductors are in the range of 18 to 24 AWG, with strands of 32 to 36 AWG.
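For reference, the relationship between gauge number and diameter follows the standard AWG formula; the short sketch below simply evaluates that published definition and is not tied to any particular cable.

```python
# Nominal solid-wire diameter from the standard AWG definition:
# diameter (inches) = 0.005 * 92 ** ((36 - gauge) / 39)

def awg_diameter_inches(gauge: int) -> float:
    """Nominal diameter of a solid wire of the given AWG size."""
    return 0.005 * 92 ** ((36 - gauge) / 39)

for g in (18, 20, 24, 34, 36):
    print(f"{g} AWG is roughly {awg_diameter_inches(g):.4f} in")
# 18 AWG ~ 0.0403 in, 20 AWG ~ 0.0320 in, 24 AWG ~ 0.0201 in,
# 34 AWG ~ 0.0063 in, 36 AWG ~ 0.0050 in
```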




What gauge should the center conductor of an instrument cable be?

Since the current involved in instrument applications is negligible, the amount of copper in the center conductor has only a very slight effect on the strength of the signal reaching the amplifier. In practice, the center conductor’s size is determined primarily by (1) the necessity of obtaining a maximum diameter of .265 of an inch or less while (2) providing sufficient tensile strength to withstand the rigors of performance without breaking.

The 20 AWG center conductor has become quite standard, normally in the form of 26 strands of 34 AWG (26/34) or 41 strands of 36 AWG (41/36).

A 20 AWG conductor has a breaking point of approximately 31 lbs. Reducing conductor size to 22 AWG reduces breaking point to about 19 pounds (a reduction of 39 percent); increasing it to 18 AWG increases the strength to over 49 pounds (an increase of 58 percent).

The most common cause of failure for instrument cables is broken center conductors.

What are the differences between tinned copper and bare copper stranded conductors?

Sometimes the individual strands of the center conductor are run through a bath of molten tin before assembling them into a wire. Tinned copper wire is often easier to solder, especially if a lengthy (months to years) shelf life is required, because the tin coat prevents copper oxides from forming. If the cable is to be used immediately upon manufacture pre-tinned strands are not required and add unnecessary expense.

Furthermore, an electrical phenomenon known as skin effect makes the use of tinned conductors a potential threat to the high-frequency signal-carrying properties of the cable. However, the aging effects of the formation of copper oxides on untinned conductors may also cause a gradual deterioration of performance.

What is skin effect and how does it affect tinned copper?

Briefly, skin effect is caused by the magnetic field generated by the current flow in the cable causing electron flow to be concentrated more and more on the outer surface of the conductor as frequency increases. If this outer surface is coated with tin, which has higher resistance than copper, the cable will have a falling high-frequency response and act as an attenuator.
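To illustrate how strongly frequency drives this effect, the sketch below computes the classical skin depth for copper; the material constants are standard published values, and the frequencies are arbitrary audio-band examples.

```python
# Classical skin depth for copper: depth at which current density falls to 1/e
# of its surface value. Material constants are standard published values.
import math

RESISTIVITY_CU = 1.68e-8   # ohm-metres
MU_0 = 4 * math.pi * 1e-7  # permeability of free space, H/m

def skin_depth_mm(freq_hz: float) -> float:
    return math.sqrt(RESISTIVITY_CU / (math.pi * freq_hz * MU_0)) * 1000

for f in (100, 1_000, 10_000, 20_000):
    print(f"{f} Hz: skin depth of roughly {skin_depth_mm(f):.2f} mm")
# Roughly 6.5 mm at 100 Hz, shrinking to about 0.46 mm at 20 kHz,
# so current crowds toward the (possibly tinned) surface as frequency rises.
```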

What is oxygen-free and linear-crystal copper? How do they affect sound in cables?

There is a continuing debate concerning the use of oxygen-free and linear-crystal copper wire. These types of wire contain lower levels of oxide impurities and fewer crystal boundaries than standard copper. Since these impurities form tiny semiconductors within the cable, the theory is that the cable itself introduces signal distortion, especially of low-level “detail” information. These claims have been very difficult to document with scientific test equipment, but numerous listening tests suggest there is something to them.

What materials are used for insulation of the center conductor?

The insulation that surrounds the center conductor can be made from thermoset (rubber, E.P.D.M., neoprene, Hypalon) or thermoplastic (polyethylene, polypropylene, PVC, FPE) materials. The thermoset materials are extruded over the conductor and then heat-cured to vulcanize them. This process yields a very high melting point which makes soldering very easy, but the vulcanizing stage adds to the cost and introduces unpredictable shrinkage which can make it very difficult to maintain the desired wall thickness.

Thermoplastic insulations are cheaper to process but will return to a liquid state when overheated, requiring great care during soldering when used to insulate large conductors. The insulation of choice for instrument cable has largely shifted from rubber or E.P.D.M. to high-density polyethylene, with cost being a major factor.

How does the insulation affect flexibility?

The insulation material and its thickness can be very dominant in determining the flexibility of the cable. A finely-stranded conductor insulated with a stiff compound will behave much like a solid conductor, as will a conductor insulated with a very thick layer of a more flexible compound. The thinner the insulation is, the more flexibility it allows in the overall cable.

How thick does the insulation need to be?

The basic electrical requirement for insulation thickness is called dielectric strength and is determined by the cable’s working voltage. The voltages involved in instrument cable applications are very low and very little dielectric strength is necessary to prevent the insulation from breaking down. However, a very important consideration when the cable is to be used for instruments like electric guitars is the amount of capacitance between the center conductor and shield.

What is capacitance and what does it do?

Capacitance is the ability to store an electrical charge. In cables, capacitance between the center conductor and shield is expressed in picofarads per foot (pF/ft.), with lower values indicating less capacitance. Combined with the source impedance, cable capacitance forms a low-pass filter between the instrument and amplifier; that is, it cuts high frequencies, much as the instrument’s tone control does.




Why is low-capacitance cable an advantage? How can cable capacitance be eliminated? How long of a cable can I run before I lose high frequencies?

Lower cable capacitance allows more of the natural “brightness,” “presence,” or “bite” of an instrument to reach the amp, which in turn allows the treble controls to be run lower, reducing “hiss” and other unwanted noise. High-frequency loss from the cable becomes audible and objectionable depending on the source, the amplification and other circumstances. Raising the source impedance or increasing the length of the cable increases the loss; there is no point at which high-frequency loss suddenly appears or disappears.

Guitars typically have much higher source impedances at higher frequencies because of the inductive nature of their pickups, which aggravates the effect of cable capacitance. A guitar will often sound noticeably “muddier” when run through a 40-foot cable, whereas keyboard instruments, samplers, mixers and other line-level devices with low source impedances can usually drive cable runs of hundreds of feet without problems.
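A simple way to picture this is to treat the source impedance and the total cable capacitance as a first-order low-pass filter with a -3 dB point of f_c = 1/(2πRC). The sketch below assumes an illustrative 30 pF/ft cable and round-number source impedances; actual figures vary by cable and instrument.

```python
# First-order low-pass model of cable capacitance: f_c = 1 / (2 * pi * R * C).
# The capacitance per foot and the source impedances are illustrative assumptions.
import math

CAP_PER_FOOT_PF = 30.0  # a plausible figure for instrument cable

def cutoff_hz(source_impedance_ohms: float, cable_feet: float) -> float:
    cable_capacitance = CAP_PER_FOOT_PF * 1e-12 * cable_feet  # total capacitance in farads
    return 1.0 / (2 * math.pi * source_impedance_ohms * cable_capacitance)

print(f"Guitar pickup (20 kilohms) into 40 ft: {cutoff_hz(20_000, 40):,.0f} Hz")   # ~6,600 Hz
print(f"Mixer output (600 ohms) into 100 ft:   {cutoff_hz(600, 100):,.0f} Hz")     # ~88,000 Hz
```

On these assumptions the guitar’s top end is already being rolled off within the audible band, while the low-impedance source is essentially unaffected.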

How is low-capacitance cable made?

Given that the overall outside diameter of the cable is limited by the plugs that must be used, cable capacitance is largely the result of trade-offs between conductor size (and hence strength), insulation material (cost) and insulation thickness (size and flexibility).

The term dielectric constant is used to rank the insulation quality of a material.

Some materials are great insulators but impractical for use as wire insulation—glass, for instance! As far as practical materials are concerned, the thermoplastics are generally far superior to the thermoset family.

For instance, polyethylene has a dielectric constant of 2.3, while that of rubber is 6.5. This allows a cable with polyethylene insulation to have perhaps one-third of the capacitance of a cable insulated with the same thickness of rubber, which can make an audible improvement in the clarity of the sound.
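The ideal coaxial-capacitance formula, C = 2πε0εr / ln(D/d) per unit length, shows why: capacitance scales directly with the dielectric constant. The conductor and insulation diameters in the sketch below are assumptions chosen only for illustration.

```python
# Capacitance of an ideal coaxial geometry, C = 2*pi*e0*er / ln(D/d) per metre,
# converted to picofarads per foot. Dimensions are illustrative assumptions.
import math

E0 = 8.854e-12        # permittivity of free space, F/m
D_CONDUCTOR = 0.032   # inches, roughly a 20 AWG center conductor
D_INSULATION = 0.150  # inches, assumed outer diameter of the insulation

def pf_per_foot(dielectric_constant: float) -> float:
    c_per_metre = 2 * math.pi * E0 * dielectric_constant / math.log(D_INSULATION / D_CONDUCTOR)
    return c_per_metre * 0.3048 * 1e12  # metres-to-feet, farads-to-picofarads

print(f"Polyethylene (2.3): about {pf_per_foot(2.3):.0f} pF/ft")
print(f"Rubber (6.5):       about {pf_per_foot(6.5):.0f} pF/ft")
# Roughly 25 pF/ft versus 71 pF/ft on these assumptions, close to the one-third figure above.
```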




What is the best all-around insulation material for instrument cables?

Polyethylene is very economical and dielectrically hard to improve upon (teflon is slightly better, but its cost is far greater, and its flexibility is far from ideal). Its only drawback is a low melting point which requires a skilled touch with the soldering iron to avoid problems in production.

What does the electrostatic shield do?

As the cable is flexed and bent, the copper shield rubs against the insulation, generating static electricity. The electrostatic shield acts as a semi-conducting barrier between the copper shield and the center insulation which discharges these static electrical charges. Without it any movement of the cable would result in obnoxious “crackling” noises being generated.

What are electrostatic shields made of?
Electrostatic shields first appeared in cable as a layer of rayon braid. Nowadays, carbon-impregnated Dacron “noise-reducing tape” is a common element in any good high-impedance cable. Conductive-plastic (carbon-loaded PVC) electrostatic shields have also become common. Conductive PVC is extrudable just like an insulation, which guarantees 100 percent coverage of the insulation with a very consistent thickness and a very low coefficient of friction.

The superior conductivity of C-PVC makes it much more effective than the semiconductive tape in bleeding off the small electrical charges that cause “the crackles.” Extruded C-PVC is also thinner and more flexible than dacron tape, which is applied longitudinally and restricts the “bendability” of the cable. Although conductive plastic (with a copper drain wire) has been used to completely replace copper braid or serve shields, its effectiveness falls off above 10 kHz.

Why are some cables microphonic?

As noted previously, the center conductor, insulation and shield of a coaxial cable form a capacitor; and, as many a microphone manufacturer will tell you, when the plates of a capacitor are deflected, a voltage is generated. (This is the basis of the condenser microphone!) Similarly, when the plates (conductor and shield) of our “cable capacitor” are deflected (for instance, by stepping on it or allowing it to strike a hard floor), a voltage is also generated.

Unfortunately, this voltage generally pops out of the amplifier as a distinct “whap,” and can be very hard on ears and loudspeakers alike. Effects of this type are called triboelectric noise.

How can cable noise be reduced?

The electrostatic shield’s charge-draining properties help greatly to diminish triboelectric effects. Triboelectric impact noise is also reduced by decreasing the capacitance of the cable with thicker and softer insulation because the deflection of the conductor is proportionally reduced. This is the main reason that the single-conductor coaxial configuration remains superior to the “twisted pair” for high-impedance uses—it allows thicker insulation for a given overall diameter.

Triboelectric effects are accentuated by high source impedances, and are at their worst when the source is an open circuit—for instance, a cable plugged into an amplifier with no instrument at the sending end. Testing for this type of noise requires termination of the cable with a shielded resistance to simulate the source impedance of a real instrument.

What does the shield do?

The copper shield of a coaxial cable acts as the return conductor for the signal current and as a barrier to prevent interference from reaching the “hot” center conductor.

Unwanted types of interference encountered and blocked with varying degrees of success by cable shielding include radio frequency (RFI) (CB and AM radio), electromagnetic (EMI) (power transformers) and electrostatic (ESI) (SCR dimmers, relays, fluorescent lights).

What makes one shield better than another?

To be most effective the cable shield is tied to a ground—usually a metal amplifier or mixer chassis that is in turn grounded to the AC power line. Cable shielding effectiveness against high-frequency interference fields is accomplished by minimizing the transfer impedance of the shield.

At frequencies below 100 kHz, the transfer impedance is equal to the DC resistance—hence, more copper equals better shielding. Above 100 kHz the skin effect previously referred to comes into play and increases the transfer impedance, reducing the shielding effectiveness.

Another important parameter to consider is the optical coverage of the shield, which is simply a percentage expressing how complete the coverage of the center conductor by the shield is.

What are the characteristics of the three basic types of cable shields? Which is best?

A braided shield is applied by braiding bunches of copper strands called picks around the insulated, electrostatically shielded center conductor. The braided shield offers a number of advantages.

Its coverage can be varied from less than 50 percent to nearly 97 percent by changing the angle, the number of picks and the rate at which they are applied. It is very consistent in its coverage, and remains so as the cable is flexed and bent. This can be crucial in shielding the signal from interference caused by radio-frequency sources, which have very short wavelengths that can enter very small “holes” in the shield.

This RF-shielding superiority is further enhanced by very low inductance, causing the braid to present a very low transfer impedance to high frequencies. This is very important when the shield is supposed to be conducting interference harmlessly to ground. Drawbacks of the braid shield include restricted flexibility, high manufacturing costs because of the relatively slow speed at which the shield-braiding machinery works, and the laborious “picking and pigtailing” operations required during termination.

A serve shield, also known as a spiral-wrapped shield, is applied by wrapping a flat layer of copper strands around the center in a single direction (either clockwise or counter-clockwise). The serve shield is very flexible, providing very little restriction to the “bendability” of the cable. Although its tensile strength is much less than that of braid, the serve’s superior flexibility often makes it more reliable in “real-world” instrument applications.

Tightly braided shields can be literally shredded by being kinked and pulled, as often happens in performance situations, while a spiral-wrapped serve shield will simply stretch without breaking down. Of course, such treatment opens up gaps in the shield which can allow interference to enter. The inductance of the serve shield is also a liability when RFI is a problem; because it literally is a coil of wire, it has a transfer impedance that rises with frequency and is not as effective in shunting interference to ground as a braid.

The serve shield is most effective at frequencies below 100 kHz. From a cost viewpoint, the serve requires less copper, is much faster and hence cheaper to manufacture, and is quicker and easier to terminate than a braided shield. It also allows a smaller overall cable diameter, as it is only composed of a single layer of very small (typically 36 AWG) strands. These characteristics make copper serve a very common choice for audio cables.

The foil shield is composed of a thin layer of mylar-backed aluminum foil in contact with a copper drain wire used to terminate it. The foil shield/drain wire combination is very cheap, but it severely limits flexibility and indeed breaks down under repeated flexing. The advantage of the 100% coverage offered by foil is largely compromised by its high transfer impedance (aluminum being a poorer conductor of electricity than copper), especially at low frequencies.

What type of shield works best against 60-cycle hum from power transformers and AC cables?

The sad truth is that the most offensive “hum-producing” frequencies (60 and 120 Hz) generally emitted by transformers and heavy power cables are too low in frequency to be stopped by anything but a solid tube of ferrous (magnetic) metal—iron, steel, nickel, etc.—none of which contribute to the flexibility of a cable! For magnetically coupled interference, the only solution is to present as small a loop area as possible. This is one of the reasons that the twisted-pair configuration generally used in balanced-line applications became popular.

Fortunately, the high input impedances generally found in unbalanced circuits minimize the effects of such interference. Don’t run instrument cables parallel to extension cords. Don’t coil up the excess length of a “too long” cable and stuff it through the carrying handle of an amp—this makes a great inductive pickup loop for 60 Hz hum!

What does the outer jacket do? What is it made of?

The jacket is both armor and advertisement; it protects the cable from damage and enhances the marketability of the assembly. As armor, the jacket must resist abrasion, impact, moisture and sometimes hostile chemicals (Bud Light, for instance).

As advertisement, it may be distinctively colored or printed with the name of the manufacturer or dealer for product identification. The materials used for jacketing are the same type as those used for the inner insulation (thermoset or thermoplastic), but the choice is dictated less by electrical criteria and more by physical durability and cosmetic acceptability.

What is the best cable jacketing material?

For years rubber or neoprene were preferred for their superior abrasion resistance and flexibility, but modern thermoplastic technology has produced a number of PVC compounds that are soft and flexible but also very tough. As previously noted, thermoplastic processing is cheaper, faster and more predictable than that for thermoset materials. Only very specialized situations requiring oil or ozone resistance or extremes of temperature and climate demand neoprene or Hypalon jacketing.

The use of PVC has two other major advantages. PVC is not as elastic as rubber or neoprene, and this lack of “stretch” lends additional tensile strength to the resulting assembly by taking some of the strain that would otherwise be borne solely by the center conductor. This has made a dramatic improvement in the reliability of currently manufactured instrument cables.

The other important property of PVC is its almost limitless colorability. Once found only in gray or “chrome vinyl,” PVC-jacketed cable now ranges from basic black through brilliant primary colors to outrageous “neon” shades of pink and green.


Thanks to Pro Co Sound for this article.

Posted by Keith Clark on 05/08 at 04:41 PM

In The Studio: The Artistic Elements of Recording Production

Essential musical relationships and sound properties in recordings

The audio recording process has provided the creative artist with a very precise control over the perceived parameters of sound (through a direct control of the physical dimensions of sound). This control of sound is well beyond that which was available to composers and performers before the advances of recording technology.

The potential of controlling sound in new ways has led to new artistic elements in music, and has led to a re-definition of the musician and of music. While our discussion is focused on musical applications, it must be remembered that all aspects of these artistic elements of music also function as artistic elements in other areas of audio.

Through its detailed control of sound, the audio recording medium has resources for creative expression that are not possible acoustically. Sounds are created, altered and combined in ways that are beyond the reality of live, acoustic performance. New creative ideas, and new additions to our musical language have emerged as a result of the audio recording process.

The artistic elements of sound are the mind/brain’s interpretation of the perceived parameters of sound. Sound, as it is perceived and understood by the human mind, becomes the resource for creative and artistic expression in sound. The perceived parameters of sound are utilized as the “artistic elements of sound” to create and ensure the communication of meaningful (musical) messages.

The art of recording lies within the utilization of the perceived parameters of sound as a resource for artistic expression. The materials that allow for artistic expression are understood through a study of their component parts: the artistic elements of sound.

By applying the perceived parameters of sound to the creation of a recording, the recorded material is comprised of sound elements that are interpreted by the mind/brain, and thus communicate artistic ideas. The artistic elements are directly related to specific perceived parameters of sound, as the perceived parameters of sound were directly related to specific physical dimensions of sound.

Sound in audio recording exists in three states: physical dimensions, perceived parameters, and artistic elements.

The artistic elements are the resources of the recordist for artistic expression. The perceived parameters translate into the artistic elements:

(1) pitch becomes pitch levels and relationships;
(2) loudness becomes dynamic levels and relationships;
(3) duration becomes rhythmic patterns and rate of activity;
(4) timbre becomes sound sources and sound quality;
(5) space becomes spatial properties.

The audio production process provides the resources for considerable variation, and the very refined control of ALL of the artistic elements of sound. This allows all of the artistic elements of sound to be accurately and precisely controlled through many states of variation, in ways that were possible with ONLY pitch on traditional musical instruments.


Pitch Levels and Relationships
Relationships of pitch levels contain most of the pertinent information in a piece of music. The artistic message of most of the music of Western heritage is communicated (to a large extent) by pitch relationships. The listener has been trained, by the music heard throughout their life, to focus on this element to obtain the most significant musical information. The other artistic elements often support pitch patterns and relationships.

Pitch is the most precisely controlled artistic element in traditional music. The use of pitch relationships and pitch levels in music is more sophisticated than the use of the other artistic elements. Complex relationships of pitch patterns and levels are quite common in music.

Information about the artistic element of pitch levels and relationships will be related to

(1) the relative dominance of certain pitch levels,
(2) the relative registral placement of pitch levels and patterns, or
(3) pitch relationships: patterns of successive intervals, relationships of those patterns, and relationships of simultaneous intervals.

The artistic element of pitch levels and relationships is broken into the component parts: melodic lines, chords, tonal organization, register, range, textural density, pitch areas, and tonal speech inflection.

A series of successive, related pitches creates melodic lines. Melodic lines are perceived as a sequence of intervals that appear in a specific ordering, and that have rhythmic characteristics. The melodic line is often the primary carrier of the artistic message of a piece of music.

The ordering of intervals, coupled with or independent from rhythm, creates patterns. Pattern perception is central to how humans perceive objects and events. These basic principles relate to all of the components of the artistic elements. Melodic lines are organized by patterns of intervals (short melodic ideas, or motives), supported by corresponding rhythmic patterns. The complexity of the patterns, the ways in which the patterns are repeated, and the ways in which the patterns are modified provide the melodic line with its unique character.

Two or more simultaneously sounding pitches create chords. In much of our music, these chords are based on superimposing, or stacking, the intervals of a third (intervals containing three and four semitones, most commonly). Chords comprised of three pitches, combining two intervals of a third, are called triads. The continued stacking of thirds results in seventh, ninth, eleventh, and thirteenth chords.

The movement from one chord to another, or harmonic progression, is the most stylized of all the components of the artistic elements. Harmonic progression is the pattern created by successive chords, as based on the lowest note (the root) of the triads (or more complex chords). These patterns of chord progressions have become established as having general principles that occur consistently in certain types of music.

Certain types of music will have stylized chord progressions (progressions that occur most frequently), other types of music will have quite different movement between chords, and perhaps emphasize more complex chord types. The patterns of the harmonic progression create harmony.

Harmony is one of the primary components that supports the melodic line. Pitches of the melody are reinforced by the chords in the harmonic progression. The speed and direction of the melodic line is often supported by the speed at which chords are changed, and the patterns created by the changing chords: harmonic rhythm.

The expectations of harmonic progression create a sequence of chords which will present areas of tension and areas of repose within the musical composition. The tendencies of harmonic motion do much to shape the momentum of a piece of music, and can greatly enhance the character of the melodic line and musical message. Performers utilize the psychological tendencies of harmonic progression, exploiting its directional and dramatic tendencies. The expectations of harmonic movement and the psychological characteristics of harmonic progression have become important aspects of musical expression and musical performance.

The melodic and harmonic pitch materials are related through tonal organization. Certain pitch materials are emphasized over others, in varying degrees, in nearly all music. This emphasis creates systems of tonal organization in which a hierarchy of pitch levels exist. A hierarchy will most often place one pitch in a predominant role, with all other pitches having functions of varying importance, in supporting the primary pitch. The primary pitch, or tonal center, becomes a reference level, to which all pitch material is related, and around which pitch patterns are organized.

Many tonal organization systems exist. These systems tend to vary significantly by cultures, with most cultures using several different, but related systems. The “major” and “minor” tonal organization systems of Western music are examples of different, but related systems; as are the “whole-tone” and “pentatonic” systems of Eastern Asia. The reader should consult appropriate music theory texts for more detailed information on tonal organization, as necessary.

Certain components of pitch levels and relationships have become more prominent in musical contexts (and other areas of audio) because of the artistic treatment pitch relationships have received in music recordings.

The components of range, register, textural density, and pitch area can be more closely controlled in recorded music than in live (unamplified) performance. These components are more important in recorded music because they are precisely controllable by the technology, and because they have been controlled to support and enhance the musical material.

Range is the span of pitches of a sound source (or of any instrument or voice). It extends from the highest note possible (or present in a certain musical example or context) to the lowest note possible (or present) in a particular sound source.

A register is a portion of a sound source’s range. A register will have a unique character (such as a unique timbre, or some other determining factor) that will differentiate it from all other areas of the same range. It is a small area within the source’s range that is unique in some way. Ranges are often divided into many registers; registers may encompass a very small group of successive pitches, up to a considerable portion of the source’s range.

A pitch area is a portion of any range (or of a register) that may or may not exhibit characteristics that are unique from other areas. Instead, it is a defined area between an upper and a lower pitch-level, in which a specific activity or sound exists.

Textural density is the relative amount and registral placement of simultaneously sounding pitch material, throughout the hearing range or within a specific pitch area. It is the amount and placement of pitch material in the composite musical texture (the overall sound of the piece of music) in relation to defined boundaries.

With textural density, sound sources are assigned (or perceived as being within) a certain pitch area, within the entire audible range (or range used within a certain piece of music). Thus, certain pitch areas will have more activity than other pitch areas; certain sound sources will be present only in certain pitch areas, and other sources present only in other pitch areas; some sources may share pitch areas, and cause more activity to be present in those portions of the range; some pitch areas may be void of activity. Many possible variations exist.

Textural density is a component of pitch-level relationships that is directly related to traditional concerns of orchestration. Textural density is a much more specific concern in recorded music because it is controllable in very fine increments. Traditional orchestration was concerned, basically, with the selection of instruments, and with the placement of the musical parts (performed by the assigned instruments) against one another.

With the controls of signal processing (especially equalization), sound synthesis and multi-track recording, the registral placement of sound sources and their interaction with the other sound sources take on many more dimensions. Each sound source occupies a pitch area; the acoustic energy within the pitch area of a timbre’s spectrum is distributed in ways that are unique to each sound source. The spectrum of each sound source is an individual textural density, and the textural density of the overall program (or musical texture) is the composite of all of the simultaneous pitch information from all sound sources.

Sound sources, and musical ideas, are often delineated by the pitch area they occupy within the composite textural density. Sound sources are more easily perceived as being separate entities and individual ideas, when they occupy their own pitch area in the composite, textural density of the musical texture. This area can be large or quite small, and still be effective.

Sounds that do not have well-defined pitch quality occupy a pitch area. These types of sound are noise-like, in that they cannot be perceived as being at a specific pitch. Such sounds may, however, have unique pitch characteristics.

Many sounds cannot be assigned a specific pitch, yet have a number of frequencies that dominate their spectrum. Cymbals and drums easily fall into this category. Cymbals are easily perceived as sounding higher- or lower-than one another. Yet a specific pitch cannot be assigned to the sound source.

We perceive these sounds as occupying a pitch area. We perceive a pitch-type quality in relation to the registral placement of the area in which the highest concentration of pitch information (at the highest amplitude level) is present in the sound, and in relation to the relative density (closeness of the spacing of pitch levels) of the pitch information (spectral components). We are able to identify the approximate area of pitches in which the concentration of spectral energy occurs, and are thus able to relate that area to other sounds.

Pitch areas are defined as the range spanned by the lowest and highest dominant frequencies around the area of the spectral activity. This range is called the bandwidth of the pitch area. Many sounds will have several pitch areas where concentrated amounts of spectral energy occur, with one range dominating and others less prominent. The size of the bandwidth and the density of spectral information (the number of frequencies within the bandwidth and the spacing of those frequencies) define the sound quality of the pitch area.

Dynamic Levels and Relationships
Dynamic levels and relationships have traditionally been used in musical contexts for expressive or dramatic purposes. Expressive changes in dynamic levels and the relationships of those changes have most often been used to support the motion of melodic lines, to enhance the sense of direction in harmonic motion, or to emphasize a particular musical idea.

A change of dynamic level, in and of itself, can produce a dramatic musical event, and is a common musical occurrence. Changes in dynamic level can be gradual or sudden; subtle or extreme.

Dynamics have traditionally been described by analogy: louder than, softer than; very loud (fortissimo), soft (piano), medium loud (mezzo-forte), etc. The artistic element of dynamics in a piece of music is judged in relation to context. Dynamic levels are gauged in relation to

(1) the overall, conceptual dynamic level of the piece of music,
(2) the sounds occurring simultaneously with a sound source in question, and
(3) the sounds that immediately follow and precede a particular sound source.

The components of dynamic levels and relationships in audio recording are dynamic contour (with gradual and abrupt changes in dynamic level), emphasis/deemphasis accents (abrupt changes in dynamic level), musical balance (gradual and abrupt changes in dynamic levels), and dynamic speech inflections.

Rapid, slight alterations or changes in dynamic level for expressive purposes are often present in live performances. This is called tremolo, and is used primarily to add interest and substance to a sustained sound. Tremolo and vibrato are often confused. Vibrato is a rapid, slight variation of the pitch of a sound; it, also, is used to enhance the sound quality of the sound source. At times, performers may not be able to control their sound well enough to control tremolo and vibrato alterations; in these instances, tremolo and vibrato may detract from the source’s sound quality, rather than contribute to it.

Changes in dynamic levels over time comprise dynamic contour. Dynamic contours can be perceived for individual sounds, individual sound sources, individual musical ideas comprised of a number of sound sources, and the overall piece of music. Dynamic contour can be perceived from many different perspectives: from the smallest changes within the spectral envelope through great changes in the overall dynamic level of a recording.

The composite of all of the dynamic contours creates musical balance. Musical balance is the interrelationships of the dynamic levels of each sound source, to one another and to the entire musical texture. The relative dynamic level of a particular sound source in relation to another sound source is a comparison of two parts of the musical balance.

Dynamic contours and musical balance have been used in supportive roles in most traditional music. At times dynamic level changes have been used for their own dramatic impact on the music, but most often they are used to assist the effectiveness of another artistic element.

To support a musical idea or to create a sense of drama, musical ideas are often brought to the listener’s attention by dynamic emphasis or attenuation accents. A shift in dynamic level that brings the listener’s attention to a musical idea is an accent. Accents are most often emphasis accents, achieved by increasing the dynamic level of the sound. Much more difficult to achieve successfully, deemphasis (or attenuation) accents draw the listener’s attention to a musical idea, or a sound source, by a decrease in the dynamic level of the sound.

Attenuation accents are often unsuccessful because the listener has a natural tendency to move attention away from softer sounds; these accents are most easily accomplished in sparse musical textures, where little else is going on to draw the listener’s attention away from the material being accented.

Dynamic levels and relationships may be significantly different in the final recording than they were originally performed. The recording process has very precise control over the dynamic levels of a sound source in the musical balance of the final recording. An instrument may have an audible dynamic level in the musical balance of a recording that is very different from the dynamic level at which the instrument was originally performed. The timbre of the instrument will exhibit the dynamic levels at which it was performed (perceived performance intensity), but its relative dynamic level in relation to the other musical parts might be significantly altered by the mix.

For example, an instrument may be recorded playing a passage at ff, with the passage ending up in the final musical balance at a very soft dynamic level; the timbre of the instrument will send the cue that the passage was performed very loudly, yet the actual dynamic level will be quite soft in relation to the overall musical texture, and to the other instruments of the texture.

The dynamic level of a sound source in relation to other sound sources, musical balance, is quite different and distinct from the perceived distance of one sound source to another. Yet these two occurrences are often confused, and this confusion is the source of much common, misleading terminology used by recordists. Significant differences are present between a softly generated sound that is close to the listener, and a loudly performed sound that is at a great distance from the listener, even when the two sounds have precisely the same perceived loudness level.

Loudness levels within the recording process are independently controllable from the loudness level at which the sound was performed, and are independently controllable from the distance of the sound source from the original receptor and from the person listening to the final recording.

Rhythmic Patterns & Rates of Activities
Durations of sounds (the length of time in which the sound exists) combine to create musical rhythm. Rhythm is based on the perception of a steadily recurring, underlying pulse. The pulse does not need to be strongly audible to be perceived. The underlying pulse (or metric grid) is easily recognized by humans as the strongest, common proportion of duration (note value) heard in the music.

The rate of the pulses of the metric grid is the tempo of a piece of music. Tempo is measured in metronomic markings (pulses per minute, abbreviated “M.M.”), with the pulse in many contexts assigned to the quarter note. Tempo, in a larger sense, can be the rate of activity of any large or small aspect of the piece of music (or of some other aspect of audio, for example the “tempo of a dialogue”).
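As a small worked example (the tempo values are arbitrary), the duration of the underlying pulse falls directly out of the metronomic marking:

```python
# Pulse duration implied by a metronomic marking (pulses per minute).
def pulse_seconds(mm: float) -> float:
    return 60.0 / mm

for tempo in (60, 90, 120):
    print(f"M.M. = {tempo}: one pulse lasts {pulse_seconds(tempo):.3f} seconds")
# M.M. = 60 gives a 1.000 s pulse; M.M. = 120 gives a 0.500 s pulse.
```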

Durations of sound are perceived proportionally in relation to the pulse of the metric grid. The human mind will organize durations into groups of durations, or rhythmic patterns. In the same ways that we perceive patterns of pitches, we perceive patterns of durations. Pattern perception is transferable to all of the components of all of the artistic elements, and is the traditional way in which we perceive pitch and rhythmic relationships.

Rhythmic patterns are the durations of or between occurrences of an artistic element. Rhythmic patterns might be created by the pulsing of a single percussion sound; in this way rhythmic patterns would be created by the durations between the occurrences of the starts of the same sound source.

Rhythmic patterns comprised of the durations of successive, single pitches (perhaps including some silences) create melody. Rhythmic patterns of the durations of successive chords (groups of pitches) create harmonic rhythm. In this way, rhythm can be transferred to ALL artistic elements.

For example, it is possible to have rhythms of sound location (as is becoming a very common mixing technique for percussion sounds); it is likewise possible to have timbre melodies, or rhythms applied to patterns of identifiable timbres.

Sound Sources and Sound Quality
The selection, modification, or creation of sound sources is an important artistic element of audio recording. The sound quality of the sound sources (the timbre of the source) plays a central role in the presentation of musical ideas, and has become an increasingly significant resource for their expression.

The sound quality of a sound source may cause a musical part to stand out from others, or to blend into an ensemble; in and of itself, it can convey tension or repose, or lend direction to a musical idea; it can add dramatic or extra-musical meaning or significance to a musical idea; finally, the timbral quality of a sound source can, itself, be a primary musical idea, capable of conveying a meaningful musical message.

Until the Twentieth Century, composers of Western music used the sound quality of a sound source

(1) to assist in delineating and differentiating musical ideas,
(2) to enhance the expression of a musical idea by the careful selection of the appropriate musical instrument to perform a particular musical idea, or
(3) to create a composite timbre (or texture) of the ensemble, thereby forming a characteristic, overall sound quality.

Performers have always utilized the characteristic timbres of their instruments or voices to enhance musical interpretation. This activity has been greatly refined by the resources of recording technology. The recording process allows the performers greater flexibility in shaping the timbre of their instruments for creative expression. Of equally great importance, after the performance has been captured, the recording process allows for the opportunity to return to the performance for further (perhaps extensive) modifications of sound quality.

The selection of a sound source to represent (present) a particular musical idea is vital to the successful communication of the idea. The act of selecting a sound source is among the most important decisions composers (and producers) make. The options for selecting sound sources are:

(1) to choose a particular instrumentation,
(2) to modify the sound quality of an existing instrument or performance, or
(3) to create, or synthesize, a sound source to meet the specific need of the musical idea.

The selection of instrumentation was once merely a matter of deciding which generic instrument of those available would perform a certain musical line. The selection of instrumentation has become very specific, since the performance of a recording may virtually live forever, whereas previously a performance existed for only a passing moment.

Today, the selection of instrumentation is often so specific as to be a selection of a particular performer playing a particular model of an instrument. Generally, composers and producers are very much aware of the sound quality they want for a particular musical idea.

The performer, the way the performer can develop a musical idea through their own personal performance techniques, and their ability to use sound quality for musical expression are all considerations in the selection of instrumentation.

Vocalists are commonly sought for the sound quality of their voice, and their abilities to perform in particular singing styles. The vocal line of most songs is the focal point that carries the weight of musical expression. Vocalists make great use of performance techniques to enhance and develop their sound quality, as well as to support the drama and meaning of the text.

Performance techniques vary greatly between instruments, musical styles, performers, and functions of a musical idea. The most suitable performance techniques will be those which achieve the desired musical results, when the sound sources are finally combined. One performance technique consideration must be singled out for special attention: the intensity level of a performance.

A performance on a musical instrument will take place at a particular intensity level. This perceived performance intensity is comprised of loudness, performance technique and the expressive qualities of the performance. Each performance at a different intensity level results in a characteristic timbre of that instrument, at that loudness level. The same sound source will have different timbres, at different loudness levels (and at different pitch-levels), etc., through performance intensity.

Along with the timbre (sound quality) and the loudness level comes a sense of drama and an artistically sensitive presentation of the music that is communicated to the listener. Through performance intensity, louder sounds might be more urgent, more intense; softer sounds might be cause for relaxation of musical motion. Much dramatic impact can be created by sending conflicting loudness level and sound quality information: a loud whisper, or a trumpet blast heard at pianissimo.

Modifying an existing sound source is a common way of creating a desired sound quality. Instruments, voices, or any other sound may be modified (while being recorded, or afterwards) to achieve a desired sound quality. Most often, this option for selecting a sound source is in the form of making detailed modifications to a recorded performance of a musical idea by a particular instrument. The final sound quality will still have the characteristic qualities of the original sound.

The extensive modification of an existing sound source, to the point where the characteristic qualities of the original sound are lost, is actually the creation of a sound source. The creation of new sound qualities (or inventing timbres) has become an important feature in many types of music. The recording process easily allows for the creation of new sound sources, with new sound qualities.

Sound qualities are created by either extensively modifying an existing sound (through sound sampling technologies) or by synthesizing a waveform. Sound synthesis techniques allow precise control over these two processes, and are having a widespread impact on recording practice and musical styles. Many specific techniques exist for synthesizing and sampling sounds; all with their own unique sound qualities and own unique ways of allowing the user to modify the sound source.

With the control of timbre by the recording process has come a new sense of the importance of sound quality to communicate, as well as to enhance, the musical message. Sound quality has become a central element in a number of the primary decisions of recording music, and in the creation of music through the recording process. In making these primary decisions, sound quality is conceptualized as an object. The sound is thought of as a complete and individual entity.

In this way sound quality is considered as a sound object. While the sound object is comprised of component parts (as we have discussed above), it is perceived as a large unit, for its overall sound qualities.

Sound quality is perceived as a sound object:

(1) when the sound quality of the sound source itself is at the center of the listener’s attention, or
(2) when the sound itself is the most important element of the musical texture.

For the sound object, the individual character of the sound source is significant. This is in contrast to the normal, primary significance of how the sound quality enhances the musical material, or how the sound sources relate to one another.

The entire sound of the music may also be conceptualized as a single entity, or overall quality. In this way, the overall musical sound is perceived as a large sound object, being comprised of any number of small, individualized sound objects.

This sound quality of the overall sound, or entire program, is texture. As it is the composite of all sound objects present at any one time, over a span of time, texture has also been called sound structure or sound event.

Texture is perceived by the characteristics of its global sound quality. This concept of sound quality can be applied to groups of sound sources, in the same way as to individual sound objects or the entire program.

Texture will nearly always be comprised of any number or types of individual sound objects or groups of sound sources. Texture is perceived as an overall character that is comprised of the states and activities of its component parts. Pitch-register placements, rate of activities, dynamic contours, and spatial properties are primary factors in defining a texture by the states or values of its component parts.

The reproduced recording presents an illusion of a live performance. This performance will be perceived as having existed in reality, in a real physical space, since the human mind conceives of any human activity in relation to its own physical experiences. The recording will appear to be contained in a single, perceived physical environment. Within this perceived space is an area that comprises the sound stage.

The sound stage is the location within the perceived performance environment, where the sound sources appear to be sounding. The sound stage will appear to be contained within a single, global environment. The sound sources of the recording will be grouped by the mind, and will appear to occupy a more or less specific area of that global environment; this area is the sound stage. It is possible for different sound sources to occupy significantly different locations within the sound stage.

Imaging is the perceived lateral location and distance placement of the individual sound sources within the sound stage. Imaging is defined by the perceived physical relationships of the sound sources. As such, it is the perceived locations of the sound sources within the stereo array and with respect to perceived distance.

The stereo (lateral) location of a sound source is the perceived placement of the sound source in relation to the stereo array. Sound sources may be perceived at any lateral location within, or slightly beyond, the stereo array.

Phantom images are sound sources which are perceived to be sounding at locations where a physical sound source does not exist. Imaging relies on phantom imaging to create lateral localization cues for sound sources. Through the use of phantom images, sound sources may be perceived at any physical location within the stereo loudspeaker array, and up to 15 degrees beyond the loudspeaker array. Stage width (sometimes called stereo spread) is the area that spans the boundaries established by extreme left and right images of the sound sources.
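
Moylan is describing a perceptual result, but the usual mechanism behind it is simply an inter-channel level difference. Purely as an illustration (a constant-power pan law is one common convention, and the function name below is made up for the example), here is how a mono source can be placed as a phantom image anywhere between the two loudspeakers:

```python
import numpy as np

def pan_stereo(mono, position):
    """Place a mono source in the stereo array with a constant-power pan.

    position: -1.0 = hard left, 0.0 = center (a phantom image midway
    between the loudspeakers), +1.0 = hard right.
    """
    theta = (position + 1.0) * np.pi / 4.0   # map [-1, 1] onto [0, pi/2]
    left = np.cos(theta) * mono              # level sent to the left channel
    right = np.sin(theta) * mono             # level sent to the right channel
    return np.column_stack((left, right))

# A 1 kHz tone placed slightly left of center as a point-source phantom image.
sr = 48000
t = np.arange(sr) / sr
tone = 0.5 * np.sin(2 * np.pi * 1000 * t)
stereo = pan_stereo(tone, position=-0.3)
```

A spread image, by contrast, would be built from several such placements (or from decorrelated copies of the source) so that it occupies an area rather than a point.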

Phantom images not only provide the illusion of the location of a sound source, they also create the illusion of the physical size of the source. Two types of phantom images exist: the spread image and the point source.

The point source phantom image is a focused, precise point in the sound stage where a sound source appears to be located. It is an exact location that the sound source is perceived as occupying. It appears to have a physical size that is quite narrow, and it is precisely located in the sound stage.

The spread image appears to occupy an area. It is a phantom image that has a size that extends between two audible boundaries. The size of the spread image can be considerable; it might be slightly wider than a point source, or it may occupy the entire stereo array. The spread image is defined by its boundaries; it appears to occupy an area between two points. At times, a spread image may appear to have a “hole in the middle,” where it might occupy two equal areas, one on either side of the stereo array.

The perceived lateral location of sound sources can be altered to provide the illusion of moving sources. Moving sound sources may be either point sources or spread images. Point sources that change location most closely resemble our experiences of moving sound sources.

The listener will perceive two types of distance cues from the recorded music:

(1) the distance of the listener to the sound stage, and
(2) the distance of each sound source from the listener.

Both of these distances rely on a perception that the entire recording occupies a single, global environment. This perceived performance environment establishes a reference location of the listener, from which all judgments of distance can be calculated.

Spatial Properties
The spatial properties of sound have traditionally seen little use in musical contexts. The only exceptions are the location effects of antiphonal ensembles in certain Renaissance and early Twentieth-Century musics, and the effects of sound-source movement found in certain drama-related works of the Nineteenth Century.

The spatial properties of sound play an important role in communicating the artistic message of recorded music. The roles of spatial properties of sound are many: it may be to enhance the effectiveness of a large or small musical idea; it may help to differentiate one sound source from another; it may be used for dramatic impact; it may be used to alter reality or to reinforce reality.

The number and types of roles that spatial location may play in the communication of a musical idea have yet to be exhausted or defined.

All of the components of the spatial properties are under very precise and independent control. All of the spatial properties have the capacity to be in, and to gradually change between, many dramatically different and fully audible states.

The spatial properties of sound that are of primary concern to recorded music (sound) are:

(1) the perceived stereo location of the sound source on the horizontal plane of the stereo array,
(2) the conceptualized distance of the sound source from the listener, and
(3) the perceived characteristics of the sound source’s physical environment.

The perceived elevation of a sound source is not consistently reproducible in widely used playback systems, and has not yet become a resource for artistic expression.

The three spatial properties are realized through stereophonic sound reproduction. The spatial attributes are related by the perceived relationships of location and distance cues of the sound sources in relation to the sound stage, and the relationships of the sound stage to the perceived performance environment of the recording.

Two-channel sound reproduction has become the standard for the recording industry, with monophonic compatibility still a consideration for AM broadcast and television sound. The two-channel array of stereo sound attempts to reproduce all spatial cues through two separate sound locations (loudspeakers), each with more-or-less independent content (channel).

With the two channels, it is possible to create the illusion of sound location at a loudspeaker, in between the two loudspeakers, or slightly outside the boundaries of the loudspeaker array; location is limited to the area slightly beyond that covered by the stereo array, and to the horizontal plane. The characteristics of the sound source’s environment and distance from the listener are affected in much more subtle ways by the stereo reproduction format.

A setting is created by the two-channel playback format for the recreation of a recorded or created performance (complete with spatial cues). The setting of the two-channel playback format is a conceptual (and physical) environment within which the recording will be reproduced more-or-less accurately.

The stage-to-listener distance establishes the location of the sound stage with respect to the listener. It is the distance between the grouped sources that comprise the sound stage and the audience. This stage-to-listener distance is the placement of the sound stage within the overall environment of the recording, in relation to the perceived location of the listener.

The depth of sound stage is an area occupied by the distance of each sound source relative to one another. The boundaries of the depth of the sound stage are the perceived nearest and the perceived furthest sound sources. The perceived distances of sound sources within the sound stage may be extreme.

The two factors of distance cues interact. The depth of the sound stage is perceived in the context of the stage-to-listener distance; the listener is prone to placing the nearest sound source at the nearest location of the stage-to-listener distance. Conversely, the perceived distance of each sound source relative to the listener can cause a shift in the perceived stage-to-listener distance; especially in multi-track recordings that incorporate dramatic reverberation techniques.

These two factors of distance cues have different levels of importance in different contexts. Depth of sound stage cues tend to be emphasized over stage-to-listener distance cues in many recordings; the cues of the distance of the source from the listener are often exploited to support dramatic and musical ideas.

As another example, stage-to-listener distance cues are carefully calculated in many art music recordings (especially those utilizing standardized stereo microphone techniques); while the distance might not change within the recording, the stage-to-listener distance is carefully selected to represent the most appropriate vantage point (the ideal seat) from which the music is to be heard.

The matching of a sound source to the sound characteristics of an environment in which it will sound, and the selection of the environment of the sound stage (the perceived performance environment) have become important parts of music recording. This coupling of source to environmental characteristics has the potential to have a significant impact on the meaning of the music, of the text, or of the sound source; to supply dramatic effect; to segregate sound sources, musical ideas, or groups of instruments; to enhance the function and effectiveness of a musical idea.

The sound characteristics of the host environments of sound sources and the complete sound stage are precisely controllable. Each sound source has the potential to be assigned environmental characteristics that are different from the other sound sources. The recording process allows for the assigning of different environments to different sound sources, and for widely varying those characteristics as desired. Further, each source may occupy any distance from the listener within the applied host environment.

The environment of the sound stage and individual environments for each sound source (or groups of sound sources) often co-exist in the same music recording. This musical context places the individual sound sources with their individual environments “within” the overall environment of the recording. The result is a perception:

(1) that physical spaces may exist side-by-side,
(2) that one physical space may exist within another physical space (to the point where a larger physical space may be perceived to exist within a smaller physical space), and
(3) that sounds may exist at various distances within the same or different host environments, within other environments (causing conflicting distance cues between sources and environments).

The result is the illusion of space within space.

Any number of environments and associated stage-depth distance cues may occur simultaneously, and coexist within the same sound stage. The environments and associated distances are conceptually bound by the spatial impression of the perceived performance environment. These “outer walls” of the overall program establish a reference (subconsciously, if not aurally) for the comparison of the sound sources.

Oddly, the overall space that serves as a reference, and that is perceived by the listener as being the space within which the other activities occur, might have the sound characteristics of a physical environment significantly smaller than the spaces of the sound sources it appears to contain. Such cues that send conflicting messages between our life experiences and the perceived musical occurrence can be used to great artistic advantage. This is a very common space within space relationship.

Space within space will at times be coupled with distance cues to accentuate the different environments (spaces) of the sound sources. Often, this illusion will be created solely by the environmental characteristics of the different spaces of each sound source.

Space within space has become an important element in shaping the imaging of a recording. Often, imaging will work in a complementary and contrasting fashion with musical balance. Recordings are often quite sophisticated in the interaction of these two artistic elements.

With the recording process, it is possible for any of the artistic elements of sound to be varied in considerable detail. In so doing, all artistic elements can be shaped for artistic purposes, or used as resources for artistic expression. As all elements of sound are capable of an equal amount of variation, it is possible for each element of sound to function in any role in communicating the message of a piece of music.

The artistic elements are used in very traditional roles in certain musical works and types of recording productions, and in very new ways in other works. The new ways the artistic elements are used tend to place more emphasis on the elements of sound that cannot be precisely controlled acoustically. Many new musical compositions use the artistic elements unique to audio recording (especially sound quality and spatial properties) to support the musical material. Different musical relationships and sound properties exist in audio recordings than can be found in music conceived before the artistic resources of recording technology existed.

William Moylan is the author of “Understanding and Crafting the Mix, The Art of Recording, 2nd Edition” available here from Focal Press.

Posted by Keith Clark on 05/08 at 02:57 PM
Recording | Feature | Blog | Study Hall | Digital Audio Workstations | Engineer | Monitoring | Signal | Studio | Technician

12 Ways To Use Your iDevice In The Studio

Several useful applications for music-making
This article is provided by the Pro Audio Files.

 
In recording sessions it’s usually a good idea to keep the phone on silent and out of sight until it’s actually needed.

Avoid the distraction. This goes for engineers, musicians and especially interns.

Give the client your full attention. If your phone goes off in the session, guess who’s buying the first round.

Aside from being a tremendous distraction in the studio, an iPhone (or iPad) actually does have several useful applications for music-making.

Here are 12 ways an iOS device can be useful in your recording studio:

1. Sound sources - There are an enormous number of synths, drum machines, sequencers, samplers, control surfaces, and more available very inexpensively that can be used for songwriting or integrated into your DAW.

2. Handy recorder - Want to capture a sound or musical idea anywhere? The FiRe 2 – Field Recorder app turns the iPhone into a great portable recording device. Pair it with an iRig Pre (shipping next month) and you can use your studio condensers for much improved audio quality.

3. Transport controls - Play, Stop, Record controls away from your computer. Great for the drummer or guitarist who records himself.

4. Touch automation controller - Using an app like Touch OSC or AC-7 you can also have mixer and plugin control over WiFi.

5. Lyrics and notation pad - It’s not uncommon for singers and rappers to come into the studio with all their lyrics on their phone. The iPad in particular works great for reading sheet music. There are also some full-featured songwriting apps.

6. Photos, video and documentation - Now there is no excuse not to take a photo to document amp or pedal settings, make notes, or just capture the moment. Fans and audio guys also love seeing behind-the-scenes studio photos and videos.

7. Musical calculator - There’s a great free app from Audiofile Engineering called Backline Calc which can handle 23 common conversion and calculation tasks that come up in the studio: timecode, delay times, pitch to MIDI note number, etc. (See the sketch after this list.)

8. Virtual guitar amp - Use the IK Multimedia original iRig or new iRig Stomp to get studio quality guitar tones anywhere with the free Amplitube app.

9. Vocal effects - The iRig Mic is a handheld condenser mic that is great for podcasts and interviews. Combined with the VocaLive Free app, you can have studio style vocal effects anytime.

10. Metronome - Every musician needs a click to practice with; a quick search in the App Store comes up with 50 different apps, including many highly rated free ones. No excuses.

11. Ear training - Quiztones is a frequency ear training app to help improve your listening and mixing skills.

12. Order takeout - There is nothing better at the end of a long workday in the studio than having pizza and wings or Thai food delivered as a reward for your hard work.
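
As promised in item 7, here’s a minimal sketch of two of those calculator tasks. This isn’t Backline Calc’s code, just the standard formulas: note-length delay times from a tempo, and the nearest MIDI note number for a frequency.

```python
import math

def delay_ms(bpm, note_value=4):
    """Delay time in milliseconds for one note of the given value at a tempo.

    note_value=4 is a quarter note, 8 an eighth note, and so on.
    """
    quarter_ms = 60000.0 / bpm
    return quarter_ms * 4.0 / note_value

def freq_to_midi(freq_hz):
    """Nearest MIDI note number for a frequency (A4 = 440 Hz = note 69)."""
    return round(69 + 12 * math.log2(freq_hz / 440.0))

print(delay_ms(120))          # 500.0 ms quarter-note delay at 120 bpm
print(delay_ms(120, 8))       # 250.0 ms eighth-note delay
print(freq_to_midi(261.63))   # 60 (middle C)
```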

Bonus tip: If you’re using your iDevice in a live situation, which is becoming very common these days, set it to ‘airplane mode’ to prevent notifications or phone calls from interrupting your set.

Jon Tidey is a Producer/Engineer who runs his own studio, EPIC Sounds, and enjoys writing about audio on his blog AudioGeekZine.com. This article is a guest post by Jon for the Pro Audio Files. To comment or ask questions about the article go here.

Posted by Keith Clark on 05/08 at 12:45 PM
Recording | Feature | Blog | Processor | Software

Tuesday, May 07, 2013

A Great Mix? Sometimes It Depends On Who You Ask…

“That was the most amazing show I've ever heard!” When someone walks out of a concert saying this, is it accurate?

Mixing sound in the live realm is not rocket science. In fact, it’s probably closer to voodoo. A studio engineer creates a masterpiece that will (hopefully) live forever in a permanent hard copy. But the very nature of live mixing dictates that every show will be unique - and that none will be perfect.

A front of house engineer is in the business of creating a memory. Impact, excitement and anticipation form the landscape of the journey you’re guiding the audience through. Perception is everything.

“That was the most amazing show I’ve ever heard!” When someone walks out of a concert saying this, is it accurate?

Are they referring to fidelity, tonal balance, and mix perfection? Or is it possibly the impact, anticipation, and excitement that affected them in an emotional way?

We can’t force the audience to have fun, but we can make sure the audience hears the most important aspects of the music while doing our best to mask and acoustically downplay any negative issues that arise.

Imagine mixing a show with the utmost finesse, articulating a series of precise and complex cues, and then an irritating knucklehead from the audience leans over the console and says, “Hey man, can’t hear the keyboard.”

My first thought is to strangle the annoying punter. He obviously knows nothing about the intricacies of mixing or he’d be behind the console, right? Well, maybe not. Sometimes as engineers we get so wrapped up in displaying the depth of our skills that we forget exactly what is most fundamental and important. 

Have you ever heard an engineer fumbling with effects while the mix sounds tragic? Don’t kid yourself - 95 percent or more of the audience has no idea and really does not care whether you used a macro-pristine-ultra-chamber or a $20,000 tube comp on each of the 12 vocals.

What they do care about:

—Can they hear the vocals?

—Can they also hear the vocals?

—Can they hear everything else? 

—Does it capture their attention, take them to a state of bliss, happiness, rage, or whatever direction that particular music is supposed to take them, so they can stop worrying about whether they can hear the vocals?

No matter what goes wrong sound-wise during a live performance, if it’s noticed from the audience perspective, then the problem belongs to the house engineer. There are no excuses.

Here’s the important point for engineers: “NOTICE.”

Example:
The show starts and all seems good, but then I realize the guitar mic isn’t coming through the left side of the PA. I can immediately turn it on and “fix” the problem, thereby also instantly letting 10,000 people know about the goof.

Or, I can slowly pan the guitar mic to center, then left, and back to center. If I dialed it up correctly, then for the next song the odds are that the problem has now actually become a cool guitar effect. It’s not about hiding mistakes; it’s about giving the audience the best show possible.

“That snare sound is my sonic signature!” Yes, someone did tell me this once, and yes, it’s got to be one of the most irritating things I’ve ever heard. 

If the audience is focused on the way we mix, we’re fighting an uphill battle. I realize that there are many situations where the sound engineer is an integral part of the creative process of the show. But the point remains - don’t muck with the frills until the basics are dialed in.

It all comes down to this: drawing attention to the mix, rather than the performers on stage, is often good for the ego. But it can be bad for the career.

Dave Rat heads up Rat Sound, based in Southern California, and has also been a mix engineer for more than 25 years.

Posted by admin on 05/07 at 04:31 PM
Live Sound | Feature | Blog | Study Hall | Concert | Engineer | Mixer | Signal | Sound Reinforcement | System

In The Studio: The Two Principles Of Acoustic Isolation

A discussion of mass and filling the gaps
This article is provided by Bobby Owsinski.

 
Whenever I do clinics at universities I always get a number of questions about home studio construction, since most every musician wants and needs one.

One of the most frequent questions is about ways to increase isolation so the neighbors won’t hear what you’re doing. Here’s an excerpt from The Studio Builder’s Handbook that explains the two principles involved in acoustic isolation.

“When building almost any kind of space (especially a home studio), the first question that both musicians and engineers ask regarding acoustics is, “How can I make sure that my neighbors don’t hear us?”

There’s really no secret to this one, although everyone thinks there is. It comes down to adhering to the following principles during construction:

Isolation Principle #1 - All It Takes Is Mass
If you want to increase your isolation, you’ve got to increase the mass of the walls and ceiling of the structure that you’re in.

The more mass your walls have between you and your neighbors (that includes walls made from cinder block, brick, wood, drywall, etc), the more you’ll keep the outside sound from getting in, and the inside sound from getting out.

One of the ways that most pro studios accomplish soundproofing is by building a room within a room, which is done by putting the floor on springs or rubber, and building double or triple walls with air spaces in between on top.

Needless to say, this gets expensive, and it’s impossible to do if you start out with a small space that’s only 10 feet x 10 feet, since you’ll be left with no room to work. But there are other ways to improve your isolation that can be really effective and completely acceptable (though never completely soundproof), and they’re quite a bit cheaper.

We’ll go over this a lot more in the following chapters, but here’s basically what needs to be done. All it takes are some construction tools and a little time.

Add some mass to the walls and ceiling to increase your isolation. This could be as simple as adding another sheet of drywall to your existing wall, all the way up to building double studded walls with an air gap in between (see the figure on the left). Before you go nailing up another sheet of drywall and expecting total isolation, you must be aware of some acoustic realities.

Walls are subject to what’s known as the “mass law,” which states that every time you double the mass of the entire wall, you get an extra 6 dB of isolation. That’s audible, but it’s not a huge difference (a 3 dB change is barely audible).

The problem is, you need about 10 dB of extra isolation for the sound to subjectively decrease by half. This means that you need about four times the mass to get only half as loud, which is not nearly enough to isolate something like a rock band.

To put it another way, if you add another sheet of 5/8-inch drywall to your single-stud wall, have a listen, and decide it’s not nearly enough isolation, you would have to add six more layers (for a total of eight) in order to cut the sound in half.

And that assumes that there are no leaks in the wall and the sound isn’t going through another path such as the ceiling, floor, side wall, or window, in which case NO amount of extra drywall will help.

You can see the limitations of just adding more and more drywall. First of all, there comes a point in time when the wall just gets too heavy for the underlying frame, and as you’ll see, there are much more efficient methods to increase the isolation.
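
The arithmetic behind those statements is easy to check. Here’s a back-of-the-envelope sketch of the simple mass-law estimate quoted above (about 6 dB per doubling of surface mass); real walls deviate from this ideal, so treat the numbers as ballpark figures only.

```python
import math

def mass_law_gain_db(mass_ratio):
    """Approximate extra transmission loss from increasing a wall's surface mass.

    The simple mass law gives roughly 6 dB per doubling, i.e. 20*log10(ratio).
    """
    return 20.0 * math.log10(mass_ratio)

# Doubling the mass (e.g. one extra sheet of drywall on a single sheet):
print(round(mass_law_gain_db(2), 1))   # about 6 dB, audible but not dramatic

# Roughly four times the mass is needed for ~10 dB, i.e. "half as loud":
print(round(mass_law_gain_db(4), 1))   # about 12 dB
```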

Unfortunately, there is just no easy or cheap way to isolate a room. The easiest way is to use what’s known as “mass-spring-mass” or MSM walls, which means you have a wall (the mass), then an air space (the spring), then another wall (the mass). That gives you a double stud wall, which is essentially a room within a room (see the figure on the left again).

Isolation can be increased slightly by sound damping products like Green Glue and resilient channel (see Chapter 5 for more details), but mass and MSM walls are still the best way to get some major isolation. This brings us to Principle #2.

Isolation Principle #2 - Leave No Air Gaps
Isolation can be easily defeated by air gaps anywhere in the room.

Think of air like water. If you fill the room with water, any space between any construction joint would let the water (or air) leak out, so the idea is to make sure that there are no air leaks.

Leakage that allows sound to bypass the isolation is called “flanking transmission,” and it is a major cause of poor isolation. You can have four-foot-thick concrete and MSM walls, but those benefits are negated if there are air gaps anywhere in the room.

This is especially true for doors, which are the greatest culprits for acoustic leakage, but can also be true of windows and seams between drywall.


There can be no air gaps if you want maximum isolation; it’s as simple as that. It’s best to use an acoustic sealant on these spaces because it doesn’t break down with age, but any kind of caulking will work in a pinch.

It’s also important to have a tight seal around any light fixtures and on-off switches, AC outlets, mic panels, wall junctures, and HVAC vents.

Sometimes, just eliminating the flanking transmission can increase the isolation by more than you think, so make it a priority.”

The big problem with isolation is that the principles are easy, but they’re labor and materials intensive so they cost a lot. Luckily, acoustically treating the room is much easier.

Bobby Owsinski is an author, producer, music industry veteran and technical consultant who has written numerous books covering all aspects of audio recording. For more information be sure to check out his website and blog.

Posted by Keith Clark on 05/07 at 02:17 PM
Recording | Feature | Blog | Study Hall | Signal | Studio

Monday, May 06, 2013

Frame Of Reference: Choosing The Right Material For Critical Listening

Do we really need 10,000 songs in our pocket?

When I was a budding sound engineer with the U.S. Air Force Band during the 1990s, one of my mentors was a consultant in the Washington, D.C. area named George Weber.

George was a somewhat eccentric guy and definitely a dyed-in-the-wool audiophile. He had done some modifications to the band’s K&H studio loudspeakers, and those things did sound amazingly good.

George once invited me to his home and played some record albums on his super-high-end stereo system. The turntable alone was amazing – it had an air bearing for the spindle and a linear-tracking arm also with an air bearing.

To keep the air noise to a minimum, the air pump was in a separate room. I don’t believe I’ve ever heard LPs sound that amazing. And if I remember correctly, he had a personal pair of K&H loudspeakers in ceramic housings.

For the listening session, George suggested I bring some of my favorite (and familiar) albums, which leads me to my point: the material we choose for critical listening and sound system tuning should satisfy several criteria.

First, we should be very familiar with it. Second, it should be well recorded. And third, it helps if we like it.

George stressed something else about critical listening: limit the sections of music to specific passages, and repeat those passages over and over again during evaluations.

KNOWING IT

What’s so important about being intimately familiar with the playback material? The most obvious answer is that when we are familiar with something, we notice differences very readily.

In a general sense, we best know the sound of the human voice, particularly the voices of friends and family. Thus at least one of the recordings to use for evaluation of a system is something containing a familiar voice, because right away, it should clearly show problems, particularly in the midrange. (As a side note, this is one of the reasons loudspeaker designers work so hard to get things right in the midrange.)

Voice alone usually isn’t enough, since it covers a limited frequency and dynamic range, while we’re usually dealing with musical sources. But the same rule of familiarity applies when choosing material that covers the whole range, because problems with the bass, drums, guitars, etc. should be obvious if we really know the tracks.

One of my favorite tracks was “Famous Blue Raincoat” performed by Jennifer Warnes and engineered by George Massenburg. The voice is so clearly represented, and so are the strings and upright bass – no drums though.

Robert Scovill shared a different but related approach with me back in those days. For a Tom Petty tour during the mid-1990s, he recorded each raw microphone input to ADAT (remember those?). Then for the next show’s sound check, he would play the tracks from the previous show to help suss out the system and acoustics of the room.

I thought that was a unique and cool idea, and it certainly satisfies the familiarity issue, with the added bonus of being recorded from live instruments.

NOT CREATED EQUAL

Recordings should be the best quality possible, particularly in this era of iPods, iPhones and other MP3 players. If you’re ripping the songs yourself, use first-rate CDs or other original source material, as opposed to, say, the first-edition CD from 1985 that was cut from the LP masters and sounds terrible.

Instead, use a more recently re-mastered version, and set the conversion process to use the highest bit rate available.

Today we can readily choose AIFF, .WAV, and other lossless formats. So what if they’re larger files? Do we really need 10,000 songs in our pocket? How about a couple dozen really good ones that are well recorded and properly ripped?

One of the reasons I loved that Jennifer Warnes recording was that it sounded flat-out amazing for its day. Another cut on that album, “Bird on a Wire,” has great drums, too. Between those two songs, I often got a pretty good idea of the condition of the system and the state of the room.

Another superb recording is Patricia Barber’s “Modern Cool,” recorded by Jim Anderson. Check this one out for sure – super clear voice, sublime piano, upright bass to please upright bass players, and incredibly realistic drums. It’s not a pop/rock record, though, and you probably need some of that genre in your collection.

However, be picky – not all pop/rock records are created equal. You may not like Lady Gaga as an artist, but the production values of many of her recordings are pretty amazing. Check out “Bloody Mary” from the Born This Way album, if you haven’t already.

The bottom line is that most of what we hear today on the radio, at shows, on our personal music players and in our cars is far from ideal. Low bit rates, second-rate recordings, and shows that are too loud and too distorted all work to undermine our frame of reference. I’ve said it before but it bears repeating: listen to live acoustic music and then find some recordings that sound like that.

At the same time, I’m not at all opposed to creating new sounds in the studio or on the stage that never existed before. Just do so with fidelity and integrity.

CERTAIN ASPECTS

Finally, try to avoid tracks that you don’t care for, no matter how amazing they might sound and/or how many others like them. I have my personal collection of favorites for this purpose, and you should have yours.

This gets back to the thing about short passages that get repeated over and over during the course of critical listening and system tuning. By doing so, we can really start to pick out how certain aspects of the sound change when the parameters of the system change. And if you like the tracks, this process won’t drive you quite as crazy.

On one night, in a hall, the acoustics may aggravate a certain harshness in the cymbals on the recording, while on the next night, outside, the addition of a high shelf might be needed to bring those same highs back in.

As we get increasingly familiar with a select group of high-quality passages, we get increasingly better at hearing these changes and knowing what kinds of subtle adjustments should be made to provide a consistent sound to the audience.

Karl Winkler is director of business development at Lectrosonics and has worked in professional audio for more than 15 years.

Posted by Keith Clark on 05/06 at 05:11 PM
Live Sound | Feature | Blog | Opinion | Study Hall | Production | Audio | Education | Loudspeaker | Monitoring | Signal | Sound Reinforcement | Stage

Frickin’ Lasers: Exploring Better Drum Sound

Using laser technology to sharpen sound and eliminate bleed

The Sennheiser Technology & Innovation lab in California has created a new concept microphone designed to explore new tools for drum capture.

At its core, the Element system offers the ability to detect when an individual drum has been physically hit. This allows the engineer to carefully craft the sound of the drum set with less bleeding and tighter control over tone and dynamics.

As with previous concept mics, this prototype is aimed at sharing new ideas with our users to collect feedback and generate enthusiasm. It’s not intended to be released as a product as is.

Since there are many drums in a kit, common practice is to use many mics to capture the sound. Individual drums are given individual mics (sometimes more than one) and stereo overheads are generally used to pick up cymbals along with the overall “kit image.”

However, all of those mics end up capturing multiple copies of a single drum hit with a different delay to each channel. When the different channels are then mixed together, the delayed clones mix with the primary channel and smear out the sound of the drum in time, which reduces the sharpness of the attack and can create comb filtering.
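
To put rough numbers on that smearing, here’s a small sketch (my own illustration, not part of the Element system) that estimates the arrival-time difference between a close mic and an overhead on the same drum, and the first comb-filter notch that appears when the two channels are summed at similar levels:

```python
SPEED_OF_SOUND = 343.0  # m/s at room temperature

def arrival_delay_s(close_mic_m, far_mic_m):
    """Time difference between two mics at different distances from one drum."""
    return (far_mic_m - close_mic_m) / SPEED_OF_SOUND

def first_comb_notch_hz(delay_s):
    """With the delayed copy mixed at equal level, the first cancellation
    falls at half the inverse of the delay: f = 1 / (2 * delay)."""
    return 1.0 / (2.0 * delay_s)

# Snare mic at 5 cm, overhead roughly 1 m from the same snare:
delay = arrival_delay_s(0.05, 1.0)
print(round(delay * 1000, 2), "ms")             # about 2.77 ms
print(round(first_comb_notch_hz(delay)), "Hz")  # first notch around 180 Hz
```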

The phenomenon is accentuated in a live setting where the other instruments on the stage are also being cloned multiple times by the drum mics—the dreaded curse of stage bleed.

Stage bleed makes mixing complicated. Effects applied to one channel also affect the bled signals of other instruments: snare hits get into tom mics, cymbals get into vocal mics, and the bass guitar seems to get into everything. This is such a problem that professional bands can often be seen with a full plexiglass cage surrounding the drum set.

One way to combat bleed is to use a gate or an expander. Ideally, this is akin to having your hand on the fader and turning the volume down in between each drum hit. Realistically, a hard snare hit is often louder at the tom mic than is a quiet tom hit. Therefore, the gates spend most of their time open and the dynamic range of the drum set suffers.

A band-passed side-chain on the gate can help separate one drum from another by isolating frequency regions specific to only that drum. However, all the engineer really wants to know is “was this drum hit or not.” So, we thought that we could approach this problem differently and find a better way to manage the bleeding.

Sennheiser evolution e904 and e604 mics are already made with a clip to secure them directly to the rim of a snare or tom drum. We added a simple laser vibrometer onto this clip to directly measure the vibration of the drum head. In this way, the vibrometer picks up the internal vibrations of the drum head before they radiate into the air. This detection method also has the benefit that it does not make any physical contact with the drum head. Unlike a piezo pickup, the laser system has no influence on the natural acoustics of the drum.

The vibrometer consists of a laser and a directional photodiode (light detector). The coherent laser beam reflects off the drum head and back toward the photodiode.

When the drum head vibrates, the angle of reflection changes and the laser moves back and forth across the detector. This analog signal is driven to line level and is run into the sidechain of a standard gate. In this way, the gate will only open when the drum is physically vibrating and not just when there is a lot of noise up at the mic.
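
The article doesn’t detail the Element’s processing, but the principle of a gate keyed from an external sidechain is simple to sketch. In this hypothetical example the vibrometer signal only decides whether the gate is open; the mic signal is what actually passes through. The threshold and time constants are arbitrary.

```python
import numpy as np

def sidechain_gate(mic, sidechain, threshold=0.05, attack=0.001,
                   release=0.1, sr=48000):
    """Gate the mic signal using the level of a separate key (sidechain) signal.

    A one-pole envelope follower tracks the sidechain; the gate passes the
    mic signal only while that envelope is above the threshold.
    """
    atk = np.exp(-1.0 / (attack * sr))   # fast coefficient for rising levels
    rel = np.exp(-1.0 / (release * sr))  # slow coefficient for falling levels
    env = 0.0
    out = np.zeros_like(mic)
    for i, key in enumerate(np.abs(sidechain)):
        coeff = atk if key > env else rel
        env = coeff * env + (1.0 - coeff) * key
        out[i] = mic[i] if env > threshold else 0.0
    return out

# mic and laser signals would be NumPy arrays at the same sample rate, e.g.:
# gated_tom = sidechain_gate(tom_mic, laser_signal, threshold=0.1)
```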

Drumming is often a physically active style of performance. Luckily, the sensitivity of the vibrometer is matched by the mechanical robustness of the mic clip.

When a drummer is emphatically playing a kit, often times the entire kit will move around on the riser. Because the clip attaches to the drum rim so securely, the laser moves along with the drum set and false triggering is still avoided.

The laser vibrometer is very sensitive. When the snare drum is hit, for example, there is enough energy to acoustically couple to the other drums and create a small movement in the heads of those drums. You can hear this in faint tom ringing after loud snare hits. The vibrometer is sensitive enough to measure these sympathetic vibrations.

However, the magnitude of sympathetic vibration is still less than the vibration from a direct hit. Therefore, the threshold of the gate can easily be used to distinguish between an intended hit and vibration from a neighboring hit.

Sennheiser Element - Laser Drum Microphone System from Andy Greenwood on Vimeo.


Andrew Greenwood is an audio software engineer for Sennheiser Electronic Technology & Innovation.

 

Posted by Keith Clark on 05/06 at 03:46 PM
Live Sound | Feature | Blog | Video | Education | Microphone | Sound Reinforcement

In The Studio: How To Maximize The First Hour Of Mixing

Making initial decisions that influence how the rest of the mix will go
This article is provided by the Pro Audio Files.

 

I’d argue that the most important stage of a mix is the first hour. Not only is it when your ears are freshest, but it’s also when you get your first impression of a song.

You’re making initial decisions that influence how the rest of the mix will go. Once you set down a path, you’re committing yourself to a certain direction.

Here’s how I go about my first hour to make the very most of this crucial stage.

1. Listen To The Rough Mix
As I’m re-labeling individual tracks, grouping them, color coordinating them, assigning them to buses etc., I’m listening to the rough mix.

The rough is what the producer or artist thought was a good idea. The rough doesn’t tell you, “I want to sound like this.” The rough is telling you, “this is what the producer thinks is important.” Oh, the chug guitar is really loud — well — that means the producer feels it’s driving the track. The kick drum seems oddly low — maybe a big fat kick wasn’t as important as the movement of the guitars.

Truthfully, raw tracks can sound like almost anything. The rough is essentially giving you advice on the direction.

You might take all of that advice, some of it, or none of it — but at least you have a starting point. Side note: it usually helps to embrace the rough mix rather than fight against it. Your client will thank you.

2. Organize
Again: organize immediately! This gives you time to breathe in the record and also makes the whole mixing process easier.

For me, the heart of organization is labels, groups, and bus routing.

I use shorthand for labeling: “Country_Tune-01_20- Cutaway Kick” can really just be “Kik.” If you want to get specific, there’s generally a notepad or comment section in your DAW where you can put additional thoughts.

In terms of grouping, I use color coordination and sequential ordering. It doesn’t help if “Kik” is colored red at the top, and “Snr” is colored green at the bottom. The color choices are not vital, but I do what I suspect a lot of people do and tend to color the bass elements dark and treble elements bright.

Assigning buses is important because oftentimes people will request stems. If you know how to divide the stems ahead of time, printing them will be that much easier. This will change from genre to genre and song to song, but think “what would a music editor for a TV show need to have individual access to?”

A general rock session might be:
• Vox Leads
• Vox Back
• Guitars Elec
• Guitars Ac
• Drums
• Piano
• Bass
• FX Returns

This can also be helpful when you’re doing fader rides. Most big moves can be done on group buses.

3. Relationships & Groove
The most important aspect of mixing is levels. That is the gig.

Getting the levels right is essential — more important than EQ, compression, reverb, and all of that.

But how do you know where to set them? Well, the rough mix already gave us a clue.

The next step is hearing how the elements relate.

What is really creating the movement in the record? Here are some examples.

Listen to almost any Red Hot Chili Peppers song and you’ll notice that the bass is pretty freakin loud, and the kick and snare are very forward. The bass and the backbeat are really driving the song, so that’s what sticks out.

 
Now let’s look at Green Day. Here the overheads and guitars are very loud because the drive is really coming from those elements.

 
Listen to any trap style hip-hop record and you’ll hear the 808 type bass and tight closed hi-hats way up.

 
Also consider how everything is connected. Turning up the buzz bass can make the sub bass seem louder. Turning the bass down can make the hi-hats come forward.

The conga rhythm may be working off the rhythm guitar. Placing them at similar levels and panned in the same place may make the track cohesive. Listen to the interaction, see how it feels to you, and make your level and pan decisions based on that feeling.

4. Preliminary Cleanup
Solo mode is a no-no when mixing. It’s the devil’s temptress. You can hear an individual element nicely in solo mode, and it’s easier to work with. But it’s all lies. When you pop it back into the mix, that element really changes. Solo mode is only allowed in one place: preliminary cleanup.

As you’re bringing up levels in the first stage of a mix, you may notice some stuff that just doesn’t sound right at all. Clipping, explosive plosives (say that ten times fast), weird resonances. A quick clean up can make things easier down the road.

At the same time, do not delve deep into EQ, compression, or other effects. Only get rid of the nagging problems. It’s important to have a picture of the entire mix sitting together before making any serious processing decisions.

5. Save As
Once everything has a basic level and pan, and is cleaned up and organized: Save As. “Super Mixer Song (Start)” and then Save As again: “Super Mixer Song (mix 1)”.

This way if you screw up or go down a bad path in ‘(mix 1)’, you can just pop open ‘(Start)’ and start fresh.

Matthew Weiss engineers from his private facility in Philadelphia, PA. A list of clients and credits are available at Weiss-Sound.com.

Be sure to visit the Pro Audio Files for more great recording content. To comment or ask questions about this article go here.

Posted by Keith Clark on 05/06 at 01:48 PM
Recording | Feature | Blog | Study Hall | Consoles | Digital Audio Workstations | Engineer | Mixer | Processor | Studio | Technician

Church Sound: The Importance Of Audio Team Leadership & Organization

Working to be a good delegator, administrator and teacher

Over the years that I’ve worked with churches, the problems I find often have foundation in basic communication, organization and administrative skills - or lack thereof.

Quite often I will visit with a church that is complaining of a lack of consistency in the technical area, and the explanation goes something like: “When Jim is here everything works, but when he’s not, it’s a disaster.”

I know at that point that while Jim may be a great tech operator and may understand the system very well, he is most likely not a good delegator, administrator or teacher.

When I’m at a church that is suffering from the “Jim’s the man” syndrome I can almost guarantee that the mixing board/patching is either not labeled, labeled incorrectly or just poorly labeled. The poor guys who are working tech on the weeks Jim is not there end up scrambling just to get things properly connected and working.

Also, because they are volunteers and “Jim the man” is the golden child in the eyes of the worship leader, people are afraid to step in and to try to organize and logically lay out the board. Other things that end up happening usually relate back to clear organization, such as:

• Batteries failing in the middle of the service because everybody thought someone else had changed them.

• Trying four microphone cables until you find one that works, because nobody throws out or labels the bad cables.

• The last minute scramble to find a mic (or stand, or direct box) that is missing because somebody used it during the week in another room at the church.

• Nobody shows up to mix on a Sunday morning. Bob traded with Steve who traded with George, and now nobody really knows who’s on for the next month.

• The sound operator who is “on” for a given week shows up late because “Jim the man” never told him the worship leader was bringing in a mini-orchestra of 10 players and utilizing six vocalists. The poor guy was actually on time for a typical Sunday, not knowing he had an hour of setup to do.

I’m sure you can add your own list of frustrations, but rather than moan over them, let’s look at how to prevent them.

1) Get together as a group and agree to a consistent layout of the mixing board and create a channel/patch list that sits next to the board. Also, commit to each other that if for some reason you need to deviate from the standard layout, immediately following the service you will reset the board to the standard layout.

2) Make a rule that first thing every Sunday, new batteries go in the wireless mics. This takes the guesswork out of the equation and also lets you use the mics during the week without wondering when the batteries will die.

Wireless mics usually last 6-10 hours on a fresh set of batteries. To be precise, check the specs of your system. So you can simply do the math (rehearsal/first service/second service = 4 hours, plus evening service = 1 hour) and then decide if you need to put in fresh batteries for a midweek event.

3) Throw away bad cables. I know that this is not eco-friendly and everyone likes to occasionally get out the soldering iron. However, in my experience, the repair never happens, or the cable accidentally gets placed back with the good ones, or a repair ends up being rather poorly done.

4) Organize mics, cables and all accessories, and put out a sign-out sheet that details who took the item and what room they took it to. This way everyone knows where that missing equipment should be located.

5) Hand out or post online a schedule for 6 months of who is on for a given Sunday. In the sound booth (or online) keep a master schedule, with the rule being that if your name is on for that day, you had better be there. This doesn’t mean that you can’t trade dates; it means that if you do trade, the master schedule has to be updated immediately.

6) Put the burden on the worship leader to communicate with the actual person that is on for that Sunday ahead of time.

A simple email with a stage layout and instrument list will give the soundman for that week the information he needs to plan what time he should arrive to set up.

These may sound like simple suggestions, but many churches are simply negligent about these fairly basic tasks.

If there is no clear leader of the crew, volunteer to be the coordinator who will facilitate the items above. If there is a clear leader, offer to help with organization of the ministry. And if you’re a leader and don’t think you need the help that is being offered, at minimum it’s time to step aside for a while, as you’re likely doing your ministry more harm than good.

Gary Zandstra is a professional AV systems integrator with Parkway Electric and has been involved with church tech for more than 30 years.

Posted by Keith Clark on 05/06 at 12:50 PM
Church Sound | Feature | Blog | Study Hall | Education | Engineer | Interconnect | Sound Reinforcement | System | Technician

Saturday, May 04, 2013

In The Studio: DIY Subkick Microphone

An older but effective trick for kick drums
This article is provided by Audio Geek Zine.

 
This is an old but very effective trick for mic’ing kick drums.

Take a Yamaha NS10 speaker cone and use that to capture the extra low frequencies of the drum.

Without going into too much theory about this, a dynamic microphone and a speaker are essentially the same thing: they’re both transducers. They take acoustical energy and convert it into electrical energy or vice versa.

So what you do is take the speaker out of the box and solder a male XLR plug on a short cable to the speaker terminals. Pin 2 goes to (+) and pin 1 goes to (-); pin 3 is not used.

Mounting this speaker to a stand is a different matter, and it’s the main reason to go buy the Yamaha Subkick (pictured below), with its great, easy-to-use mounting system.

That, and it’s also likely more durable than the homemade version.

One way to do it is to take a standard mic clip apart and fit the slotted part securely to the corner mounting holes of the speaker; that is, if the speaker you’re using has a square frame with corner mounting holes rather than just holes drilled around the cone. Or you can attach it to a microphone boom or gooseneck permanently.

The output of the subkick is very hot, meaning you’re going to have to attenuate the signal for it to be of any use to you. An inline -20 dB pad, a pad at the mic pre, or one built into the mic will need to be used.

One builder used a 10k ohm resistor in series with pin 2 and a 1k ohm resistor across pins 1 and 2 to drop the output by about 20 dB.
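
If you want to sanity-check that pad, the attenuation is just a voltage-divider calculation. Here’s a rough sketch (ignoring the speaker’s source impedance and the preamp’s input impedance, so the real-world figure will differ a bit):

```python
import math

def pad_attenuation_db(series_ohms, shunt_ohms):
    """Voltage-divider loss of a simple series/shunt resistor pad.

    Source and load impedances are ignored, so this is a ballpark figure.
    """
    ratio = shunt_ohms / (series_ohms + shunt_ohms)
    return 20.0 * math.log10(ratio)

print(round(pad_attenuation_db(10_000, 1_000), 1))  # about -20.8 dB
```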

Mic placement: These work really well at the edge of the drum parallel to the skin. Try it under a floor tom too.

Why the NS10? Most of the time you see these in a studio it will be with an NS10 cone, but why? From what I’ve been told, it’s because there are usually extra NS10s lying around a studio (all studios had NS10s), you could predict how it would sound, and they have a frequency response that works well. I don’t know how much truth there is to that.

You can use any speaker you want; it will obviously make a difference in the sound.

Finally, here is a picture I took of one of the two DIY subkicks at Metalworks Studios. Note mounting, placement, and inline pad.

Jon Tidey is a Producer/Engineer who runs his own studio, EPIC Sounds, and enjoys writing about audio on his blog AudioGeekZine.com. To comment or ask questions about this article go here.

 

Posted by Keith Clark on 05/04 at 10:46 AM
Recording | Feature | Blog | Poll | Product | Microphone | Studio

Friday, May 03, 2013

Making It Sing: Microphones For Lead And Background Vocals

For vocalists, the microphone (and the processing that follows) is their instrument

The crowd waits impatiently for the lights to come up, to hear a single voice rise in song, riding above the instruments. The vocal microphone in the singer’s hand, reinforced by the rest of the signal chain, carries a voice to thousands of eager listeners that would scarcely reach the first rows unaided. 

Of course, mics have reinforced vocals for many decades, and by allowing singers to be heard, they’ve played an integral role in the rise of the popular singer. The basic design uses a relatively small, flexible diaphragm that is excited by the voice, followed by one of several methods to turn that motion into an analogous electrical current. 

Incremental improvements have led to a vast array of useful vocal mics, including some current offerings that bring studio quality to live sound. So what characteristics make for an excellent vocal microphone?

The Basics
A vocal mic is most often in a format that can be easily held in the singer’s hand, balanced and comfortable to give freedom of movement without fatigue. Even when stand-mounted, the low profile doesn’t block the singer’s connection with the audience. 

Internal shock-mounting of the mic element is critical, since handhelds are continually gripped and moved during use. Many mics use sound-deadening materials and finishes on the handle to dampen the sound of bumps, ring clicks, and finger movements. 

Durability is key, since mics on the road don’t always receive the gentlest of treatment during setup and teardown – not to mention when they’re in the hands of certain performers. They need to be able to withstand being dropped repeatedly without causing any detrimental change to the performance of the element or internal electronic components. 

Live vocal mics are most often used at close range, from lips touching the grille to a couple of inches away. Just inside the durable mesh screen, layers of foam windscreens – as acoustically transparent as possible – are added to minimize p-pops, excessive sibilance, and breath noise.

Close use brings with it the accentuated low end known as proximity effect, and mic designers work to control this with the positioning of the element and porting of the mic head. Proximity effect can also benefit vocalists with less powerful or thinner voices.

With stage levels typically high, monitor wedges still common, and drum kits and guitar amps nearby, polar patterns must be consistently controlled across the entire frequency spectrum so that the mic captures the singer’s voice without also picking up other instruments or requiring excessive EQ to try to control feedback. Often a “presence peak” is engineered into the frequency response to increase the intelligibility and “cut” of the voice. 

Dynamics & Condensers
Virtually every microphone manufacturer offers handheld vocal microphones, often with multiple models for different styles and budgets – both condenser and dynamic, and the occasional ribbon. The ubiquitous Shure SM58 cardioid dynamic remains in wide use more than 45 years after its introduction, either through familiarity and availability or by request of the vocalist.

On the other end of the scale, manufacturers better known for specialty and measurement microphones are now offering high-performance touring vocal condensers, exemplified by the DPA d:facto and Earthworks SR40V.

Dynamic designs are typically rugged, somewhat less responsive in the very high frequencies, moderate in sensitivity (though neodymium magnetic structures introduced in the mid-1980s added several dB to the output and often increased high-end frequency response), and priced in the lower to mid range.

Handheld options abound, including numerous dynamic (AKG D5, Audio-Technica PRO61) and condenser (DPA d:facto and MIPRO MM-90) choices. (click to enlarge)

Some examples include the AKG D Series, Audix OM Series, Audio-Technica Pro Series, beyerdynamic TG-X Series, Electro-Voice N/DYM Series, and Sennheiser e900 Series. Most manufacturers offer patterns ranging from cardioid to supercardioid to hypercardioid, accommodating a wide variety of vocal applications. 

Roadworthy condensers have proliferated in the past several years, and in conjunction with advances in mixing consoles and loudspeaker systems, they mean that the quality of live vocals can sometimes approach that of a recording.

These mics often borrow transducer technology from their studio counterparts, adding shock-mounting, pattern control, internal windscreens, and modified electronic circuitry into a durable handheld format. They’re also becoming increasingly accessible in terms of price and feature set, exemplified by developments like the MIPRO MM-89 and MM-90 (hypercardioid).

Several engineers I’ve spoken with recently cite certain newer condensers as favorite vocal mics, pointing to natural sound with almost no additional EQ, an “airy” high end, consistent off-axis frequency response, and good transient response. 

In The Lead
The lead vocalist might concentrate solely on singing or play an instrument as well, and the style might be rock, pop, rap, country, gospel, or jazz. 

Whatever the genre, those vocals must be the focal point of the music, and be given a central place in the mix – with sufficient level, intelligibility, and presence to equal the band.

Nick Malgieri of McCune Audio works with a wide range of vocalists from jazz and pop to hard rock. His favorite current mics for vocals are the Neumann KMS-104 cardioid and the Shure KSM9, a dual-pattern design which he uses in its supercardioid pattern unless the singer frequently works the mic off-axis.

He notes that he almost always high-passes a vocal mic, using the steepest filter he has available.  The corner frequency is usually no lower than 120 Hz, and depending on the vocalist and their range could be as high as 160 Hz. To remove the initial peaks of loud syllables, he will use a compressor with a very quick attack and release setting; he describes this as “radio-style compression, but a lot less.”
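As a rough illustration of the kind of high-pass filtering he describes, here’s a minimal sketch in Python using SciPy. The 8th-order Butterworth slope and 48 kHz sample rate are my own assumptions for the example, not Malgieri’s settings; in practice this would be done on the console rather than in code.

```python
# Minimal sketch: a steep high-pass filter for a vocal channel.
# Assumptions (not from the article): 8th-order Butterworth slope,
# 48 kHz sample rate, corner frequency of 120 Hz.
import numpy as np
from scipy.signal import butter, sosfilt

FS = 48_000        # sample rate in Hz (assumed)
CORNER_HZ = 120    # corner frequency, per the "no lower than 120 Hz" guideline

# An 8th-order Butterworth gives roughly a 48 dB/octave slope --
# about as steep as many console high-pass filters go.
sos = butter(8, CORNER_HZ, btype="highpass", fs=FS, output="sos")

def high_pass_vocal(block: np.ndarray) -> np.ndarray:
    """Apply the high-pass filter to one block of vocal samples."""
    return sosfilt(sos, block)

if __name__ == "__main__":
    # Quick check: a 60 Hz hum component should be strongly attenuated,
    # while a 1 kHz component passes essentially unchanged.
    t = np.arange(FS) / FS
    hum = np.sin(2 * np.pi * 60 * t)
    voice = np.sin(2 * np.pi * 1000 * t)
    out = high_pass_vocal(hum + voice)
    print("input RMS: ", np.sqrt(np.mean((hum + voice) ** 2)))
    print("output RMS:", np.sqrt(np.mean(out ** 2)))
```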

For excessive sibilance, Malgieri employs a de-esser or the dynamic EQ that is available on Yamaha CL Series consoles, setting the filter width and threshold level so that he can keep the breathiness of the vocal while still pulling back the harshness of the “s” sound. 
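For readers curious about what a de-esser or dynamic EQ is doing under the hood, here’s a deliberately simplified sketch: it measures level in an assumed sibilance band and pulls the signal down only when that band exceeds a threshold. The band edges, threshold, and reduction amount are illustrative assumptions, not settings from Malgieri or the Yamaha CL Series.

```python
# Simplified de-esser sketch: attenuate the signal only when energy in a
# sibilance band exceeds a threshold. Band edges, threshold, and reduction
# amount are illustrative assumptions, not settings quoted in the article.
import numpy as np
from scipy.signal import butter, sosfilt

FS = 48_000                 # sample rate (assumed)
SIB_BAND = (5000, 9000)     # sibilance detection band in Hz (assumed)
THRESHOLD = 0.05            # RMS level in the band that triggers reduction
MAX_REDUCTION_DB = 6.0      # how far to pull the "s" sounds down

band_sos = butter(4, SIB_BAND, btype="bandpass", fs=FS, output="sos")

def de_ess(x: np.ndarray, frame: int = 256) -> np.ndarray:
    """Frame-by-frame gain reduction keyed from the sibilance band."""
    out = np.copy(x)
    sib = sosfilt(band_sos, x)          # detection path only
    for start in range(0, len(x), frame):
        seg = slice(start, start + frame)
        level = np.sqrt(np.mean(sib[seg] ** 2))
        if level > THRESHOLD:
            # Reduce more as the band level rises further above threshold.
            over = min(level / THRESHOLD, 4.0)
            gain_db = -MAX_REDUCTION_DB * (over - 1.0) / 3.0
            out[seg] *= 10 ** (gain_db / 20)
    return out
```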

Nick Malgieri with a Neumann KMS 104. (click to enlarge)

He is practiced enough at this technique to be able to select the frequency range and level while the singer is sound-checking, and have the filter in place within a few seconds.  Otherwise, he stays away from equalizing the high end of vocals, except for an occasional notch filter to correct a problem. 

Sound engineer Mick Conley has been working with singer/guitarist Marty Stuart for several years, running sound for live, television, radio, and recording dates. Aided by the musicianship of the band, he has been able to maintain fairly consistent audio results in those varied formats.

Mick Conley and Marty Stuart on the road with a Shure KSM9. (click to enlarge)

Conley too uses a KSM9 with Stuart, and also places them on the singing bass player and drummer. He says that he doesn’t need to do much equalization with the mic, other than a high pass; he chooses the corner frequency depending on the vocal part that the particular person sings. For effects, he combines reverb and delay, saying that this gives the voice a different texture and placement. 

Because the musical style is more in the vein of traditional country, Conley approximates old plate reverbs, brings up the delay until it can barely be heard, then backs it off slightly – resulting in a sense of depth without hearing the effect.  Especially when mixing for a TV broadcast, he will often place a de-esser in the signal chain before the reverb to control the high-end “sizzle.”

He states that the KSM9’s vocal quality is very natural when a singer is off-axis. At one point in the show, all three singers will use the same mic for harmonies, with Conley applying a bit more compression to make sure that all voices pop out, and he also rides the gain a bit as the singers move. 

Mikail Graham, house sound engineer at The Center for the Arts in Grass Valley, CA, may mix two or three acts a week. He says that many performers who do not carry their own vocal mics still insist on an SM58, even when he offers a top-notch condenser. His take is that they’re just not comfortable trying something new in front of a live audience. 

Backing Vocals
Depending on the band and the tune, backing vocals can be occasional harmony behind the lead, continual harmony as the signature of the band’s music, or virtually co-lead vocals.

In the early 1990s while working a Kathy Mattea tour, Conley told me how he used a slightly greater amount of reverb and delay and pulled out a bit of the vocal fundamentals to place the backing vocals more in the background, effectively creating a vocal “wash” of harmony behind her voice. 

For other tours, however, she would have five or six people singing at the same time, with very different voice characteristics and ranges – some more resonant, and some thinner.

In those cases, he would try to match the mic choice with the particular voice quality, and have the backing vocals mixed more present.

When Malgieri works with acts that include backing vocals, he prefers to have all of them “use the same capsule” so that the audio input to the console is consistent. He will typically process all of the mics the same, including high-pass, de-esser, and similar EQ.

Mikail Graham’s got SM58 fever.  (click to enlarge)

Often in his experience, backing vocalists will at random sing a lead line or do a call and response with the lead vocalist, so their voices must be consistent with the rest of the mix. 

To leave space for the lead vocal, as well as to spread out backing vocalists singing unison notes that may not be precisely in tune with each other, he slightly pans their mics alternately left and right. When a line of vocalists is standing next to each other and a mic can possibly pick up more than one person, he will make a harder pan. 

When a singer is comfortable with the sound and “feel” of the mic, their focus can be on the performance. I recall veteran monitor engineer Rocky Holman telling me that Steve Perry of Journey knew his particular microphone, and insisted that nothing else took its place.

For many years, Venezuelan jazz vocalist Maria Rivas has used her NDYM-757B mic whenever she tours. Though he could choose anything, Mick Jagger still appears to use an SM58 – albeit on a touring-quality wireless transmitter. 

For a vocalist, the microphone (and the processing that follows) is their instrument. The role of the engineer is to help each particular voice and style sound its best, as well as to partner and guide artists on mic choices that enhance their performance while making them as comfortable as possible.

Gary Parks has worked in pro audio for more than 25 years, including serving as marketing manager and wireless product manager for Clear-Com, handling RF planning software sales with EDX Wireless, and managing loudspeaker and wireless product management at Electro-Voice.

 

Posted by Keith Clark on 05/03 at 04:31 PM

Church Sound: Top Eight Common Acoustic Guitar Mixing Mistakes

How to get this staple instrument of praise bands sounding right
This article is provided by Behind The Mixer.

 

The most common mixing mistakes for church band instruments occur with the acoustic guitar. It’s a staple instrument of praise bands, and for this reason, it’s an instrument that needs to be correctly mixed. 

Let’s look at the eight most common mixing mistakes with the guitar as well as how to avoid them.

1. No EQ. I’m not usually one to rank which of these is worst, but in my book this one is at the top because it’s indicative of the sound tech’s whole mixing process.

Using an analogy I’ve used before, it’s like making a batch of chili. Each instrument or vocal channel is an ingredient. Let’s take the beans, for example.  Do you like kidney beans, chili beans, mild chili beans, or spicy beans in your chili? 

The acoustic guitar is the can of beans. You need to make it the right type for your musical chili. All too often, I’ve seen churches use a mixing board as an expensive volume controller. For specific ideas on how to EQ an acoustic guitar, check out this article.

2. Too bright. Acoustic guitars produce a variety of natural tones depending on the wood used in their construction. Ovation guitars use a “plastic” rounded back that gives a unique sound. 

Guitar strings can also affect the sound. Some guitars give warm tones while others are bright. Some have a more bassy sound. The mistake of “too bright” comes in two ways:

Giving the guitar an overly bright sound. This might be done so that it stands apart from the other instruments. However, if it’s too bright, you’re then working in the same frequency range as some of the drum cymbals, and the two aren’t easily distinguished.

Giving it a bright sound without regard to the guitar’s natural tone. Mixing the guitar should take into account the guitar’s natural tone and the other instruments on the stage. You can’t make the guitar sound like something it isn’t.

3. Volume too high. The acoustic guitar is front and center to many a worship team, and therefore it’s easy to want to drive it in the mix. However, it has to sit in the mix in the right way depending on the desires of the band and the way it’s used in a song. 

A song that is piano-driven needs the piano out in front. The piano is the instrument the congregation will be following. The guitar will have times when it needs to lead but it should also be part of a mix, not something that overpowers all the other instruments.

4. Volume too low. This happens as a result of a sound check in which the monitor levels of the guitar are set too high and thus the house mix set too low. 

When it comes time for the service, the monitor mix is fine on stage, but the house mix isn’t loud enough to make up for the new dynamics in the room created by the congregation and their singing along. The monitor mix should not be heard past the second or third row of seats in the sanctuary. 

Stick with this rule and it will be obvious which volume you’re hearing.

5. Ye ol’ geetar strangs. Guitar strings are like the oil in a car. When the oil in the car gets old, your engine’s performance suffers. Old guitar strings are detrimental to a guitar sound. 

This happens in two ways. The first is that old strings go out of tune easily. Second, even when they are in tune, they sound dull. Think of it like old motor oil: it’s dark and dirty, and while you might say it’s still working, it’s clearly not doing the job as effectively as it could. 

For this reason, if you hear guitarists with bad strings, talk with them about it and explain how it’s detrimental to the music.

6. Mixed alone. All instruments should be mixed together. And to explain that, let’s talk omelets. Crack a few eggs, add some sausage, add some onions, add some peppers, and now add salt and pepper.

The addition of salt and pepper might have just destroyed your perfect omelet. The reason is that all sausage is not the same. Some sausage is unseasoned. Other sausage has salt and seasoning already added. By adding salt to a recipe with an already salty sausage…well, you can just toss that omelet in the trash.

You can have instruments (ingredients) that individually sound great, but when you put them together, the mix sounds terrible. Without going into a 20-page discussion on mixing instruments together, keep this key idea in mind: before you boost a frequency range on one instrument to make it stand out, try cutting those frequencies from another instrument.
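Here’s that “cut rather than boost” idea as a deliberately simplified sketch: instead of boosting the guitar, a narrow cut is carved into a competing channel around an assumed 2 kHz region. The center frequency, Q, and the use of SciPy’s notch filter are illustrative choices for the example, not prescriptions from the article.

```python
# Sketch: carve space for the acoustic guitar by cutting a competing channel
# with a narrow cut, rather than boosting the guitar itself.
# Center frequency, Q, and sample rate are illustrative assumptions.
import numpy as np
from scipy.signal import iirnotch, lfilter

FS = 48_000          # sample rate (assumed)
CENTER_HZ = 2000     # region where the guitar needs to stand out (assumed)
Q = 2.0              # fairly narrow cut

# iirnotch designs a deep, narrow cut at CENTER_HZ; a gentler peaking cut
# would be more typical on a console, but the principle is the same.
b, a = iirnotch(CENTER_HZ, Q, fs=FS)

def carve_space(other_instrument: np.ndarray) -> np.ndarray:
    """Cut the competing instrument around the guitar's presence region."""
    return lfilter(b, a, other_instrument)
```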

7. Poor presence. This might be the hardest area of mixing an acoustic guitar. A guitar mix can have a good sound but when compared to a mix where the sound guy nailed the presence, your mix sounds flat. 

The best place to start is in the 1 kHz and 2 kHz areas. Boosting in these areas can give your guitar the presence you’re after. Mixing is an art, so expect to spend some time working with these frequencies to find out what works for a particular guitar.

8. Mixing multiple guitars incorrectly. You don’t want the instruments to lose their individuality, but at the same time, you want to give a unique yet blended sound. This can be done through the use of EQ separation and pairing, panning, and through talking with the guitarists about how they can create a well-blended, yet distinct, sound using methods like barre chords and capos. I’ve covered this topic in depth here.

Mixing takes practice, and remember that the greatest live audio engineers started out knowing nothing about mixing.

Ready to learn and laugh? Chris Huff writes about the world of church audio at Behind The Mixer. He covers everything from audio fundamentals to dealing with musicians, and can even tell you the signs the sound guy is having a mental breakdown. To view the original article and to make comments, go here.

Posted by Keith Clark on 05/03 at 01:17 PM

In The Studio: 30-Plus OS X Power User Shortcuts

Adding capabilities, organization, and efficiencies of workflow
This article is provided by Audio Geek Zine.

 

Inspector
—Use “Inspector” instead of “Get Info.”
—Right-click a Finder item, then hold Alt, and “Show Inspector” will appear in the menu. Or use the shortcut CMD+Option+I.

This is like the “Get Info” window but it will not clutter up the screen and will show combined data for selections.

Resizing Windows
—Shift-drag will scale height and width of the window
—Alt-drag will re-size opposite sides at once with same center point

Cycle Options In Dialog Boxes
It’s a pain in the arse to move the mouse to click Cancel or other buttons in those pop up option windows. Hitting Return/Enter will do the default action. Use the Tab key to cycle the options, then hit the space bar.

Hiding Apps
The Hide function works how minimize should.
—CMD+H will hide the active window.
—Alt+click the desktop (or another app’s window) will also hide the current app.
—Alt+CMD+click the desktop will hide all windows except Finder.
—Alt+CMD+click dock icon will hide all apps except the one clicked (and open it if it’s not already open).

Close All Windows
To quickly close all windows for an app, hold Option and click the X in the top left of the window.

New Folder With Selected Items
I love this feature, added in OS X Lion. Instead of making a new folder, giving it a name, grabbing items, and dragging them to the new folder, just select the items, right-click, and choose New Folder with Selected Items right at the top of the menu. CMD+Shift+N still makes a plain new folder.

Hidden Audio Controls
There are several “hidden” functions with a Mac’s audio options:
—Hold shift to avoid the annoying click when changing the volume (or disable it completely in Sound>Sound Effects prefs).
—Alt + any volume button on the keyboard will open the sound preferences.
—Alt + shift provides finer resolution on the volume control.
—Alt + click the volume control in the menu bar to bring up a menu for quick input and output changes.

Moving Files
One thing that confuses many new OS X users is the lack of a Cut function for files, to cut and paste them to a new location. On the system drive, dragging a file to a new folder will move it, but dragging to a second drive will duplicate it. Sometimes you don’t want a second copy. In OS X, you copy the selected file (CMD+C), then use the Move command, CMD+Option+V, at the destination.
—If you prefer drag + drop, just hold CMD.
—In case you didn’t know the other modifiers for dragging files: Option+drag will duplicate (and append a number to the file name starting at 2).

Color Labels
Files can be color-coded with the Labels function, either through the right-click menu or by adding the Labels button to the toolbar. I use red for current projects, green for recently finished projects that have been paid for, purple for personal, and gray (or none) for misc.

Arrange & Sort
Since the arrival of OS X Lion, the Arrange function can be found in Finder. It’s great if you work in Icon view a lot, but not great if you work in list or column view. In fact, there is a bug with color labels in column view—it doesn’t refresh correctly. If you set Arrange to “None,” you can sort files by name, date, label, etc., by clicking the column headers.

Navigating Files With The Keyboard
I use Finder in List View almost exclusively because it’s very easy to navigate through files and folders within the same window using just the keyboard arrows. Instead of double-clicking to open, use CMD+O or CMD+DownArrow.

With List and Column views, use the left and right arrows to expand folders. If you push any letter key, it will select the closest file starting with that letter. Pushing the same letter does NOT go to the next one though. Try typing the first 2-3 letters of the file name. Use space bar to preview files.

OS X Lion Mission Control
Navigate dashboard and workspaces. The Control key plus Arrow keys allows you to jump to the Dashboard, Mission Control, and Spaces/Desktops. The Dashboard is where helpful widgets live. Primarily I use this for a basic calculator, a couple of timers, and a unit converter. Get to the Dashboard with Control+Left Arrow.

You can have multiple desktops/spaces to work in. It’s a bit like having multiple monitors. You can assign apps to any desktop from the dock icon or just click and drag them around. Control+Right Arrow moves over to the next desktop.

Mission Control zooms out and shows you all open windows, the Dashboard, and desktops at once. Great for finding that dialog box hidden under other windows or for moving stuff to other desktops. Control+Up Arrow shows Mission Control.

Jon Tidey is a Producer/Engineer who runs his own studio, EPIC Sounds, and enjoys writing about audio on his blog AudioGeekZine.com.

Posted by Keith Clark on 05/03 at 10:28 AM

Thursday, May 02, 2013

Slick & Seamless: Developing A Live Digital Recording System

Innovation in forming a marriage between live audio and live recording

Choosing an audio console can be likened to a guitarist choosing an axe – it’s the palette with which one creates. Some engineers have built their mixing techniques around plug-ins while others rely on complex channel grouping or outboard gear.

It’s difficult to classify one platform as “better” or “worse” than another, simply because it boils down to one question: does it help produce the best-sounding show possible in the shortest amount of time?

In addition, due to the enormous influx of digital consoles in recent years, the added criterion of multitracking capability has quickly become a deciding factor in many console choices – whether on tours, in theaters, or in houses of worship. With many reliable platforms on the market such as Dante, MADI, CobraNet, AES50, EtherSound and so on, the ability to multitrack one’s show has almost become an expectation.

In my previous article (here), the goal was to offer an inside look at the selection and design process for a road-ready audio system for The Austin Stone Community Church in Austin, TX. Here I’m going to expand specifically on the console and live recording portions of the system, and share some of the results that have come out of the learning process.

Upgraded Capture
In 2011, The Austin Stone released its debut live album, which spent its first week at the top of the iTunes Christian & Gospel charts, and subsequently formed a marriage between live audio and live recording. For that album, I captured the tracks by taking direct outputs from an Avid VENUE Profile console at front of house, via MADI, over fiber optic cable into an RME MADIface.

The data was then piped over FireWire into a MacBook Pro through a PCI Express card. The solution was solid, reliable, and error-free during the six weeks of live tracking.

Upon completion of the album, our producer mentioned that some re-amping was required in post-production to “warm up” the tracks, and asked if we would consider using external microphone preamps for future recording dates for some of the more essential inputs. This began my quest to find a better way to capture the highest quality tracks possible, without lugging around racks of external mic preamps (and spending a fortune in the process).

At that same time, the church was in the process of moving away from its long-time rental system provider. This fortuitous timing afforded me the opportunity to construct a control package from the ground up that would not only handle week-to-week live mixing operations, but seamlessly carry the recording load as well.

I spent the next few months doing as much research as possible on the industry’s leading digital consoles – specifically focusing on the following criteria. First, the headamps and A/D converters had to sound exceptional. In essence, they would be the first and most critical components in capturing the raw sounds, and simply could not be an area of compromise.

Second, the navigation needed to be fast. Since I’m also mixing the band’s monitors from front of house (five stereo in-ear mixes, two mono), the ability to quickly send and pan an input without flipping through a bunch of layers was a must.

I had long enjoyed the 24 mix send encoders on the Yamaha PM5D that we used, and had developed a quick and efficient workflow for monitors-from-FOH applications. I wanted to find something equally quick without the worry of possibly forgetting which layer I was on.

Finally, because of the monitors from FOH needs, it also meant that the console must have a minimum of 24 auxiliary mix buses in addition to the Left-Right/Mono buses.

Finding Direction
My attention turned to the Midas PRO Series. After performing several blind listening tests with several competitors, I was blown away by the sonic quality of the Midas. In my view, the XL8 headamp has the girth and richness of an XL4 pre (a long-standing personal favorite), but with a clarity and “openness” in the high-end that I’ve not heard on any other digital desk.

My only concern was the lack of affordable recording solutions offered by Midas and its sister brand Klark Teknik. There wasn’t a way to record more than 32 channels at 96 kHz for under $14,000.

The RPM Dynamics RPM-TB48 I/O and the Lynx AES50 card it contains. (click to enlarge)

The widely used K-T DN9650 network bridge offers the ability to convert the Midas AES50 format to just about any third-party platform up to 32 channels, in which case a down-conversion to 48 kHz is needed. However, my hope was to keep all of the tracking at 96 kHz to preserve as much of the natural “Midas sound” as possible.

Around this time, I met Jim Roese of RPM Dynamics while mixing the mtvU Woodie Awards last year. Over the week, we talked at length about a 48-channel, 96-kHz, 24-bit recording/playback solution for Midas that he was working on, utilizing a pair of Lynx Studio Technology AES50 to PCI cards, all mounted in a Sonnet Thunderbolt chassis.

The RPM-TB48 I/O is a stand-alone, elegant solution with no external interfaces required, as the Lynx cards perform one single AES50 to Core Audio conversion (at approximately 1 millisecond of added latency). Because the processor load of the conversion is being handled by the interface, the load on the CPU of the recording computer is shockingly low.
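For a rough sense of what a 48-channel, 96 kHz, 24-bit capture involves, the back-of-the-envelope figures below (my own arithmetic, not published RPM Dynamics or Lynx numbers) show the data rate and per-hour storage the recording computer has to sustain.

```python
# Back-of-the-envelope arithmetic for a 48 x 96 kHz x 24-bit capture.
# These are my own calculations, not published RPM Dynamics or Lynx figures.
CHANNELS = 48
SAMPLE_RATE = 96_000      # Hz
BIT_DEPTH = 24            # bits per sample

bits_per_second = CHANNELS * SAMPLE_RATE * BIT_DEPTH
mbytes_per_second = bits_per_second / 8 / 1e6
gbytes_per_hour = mbytes_per_second * 3600 / 1000

# Roughly 1 ms of added latency at 96 kHz corresponds to about 96 samples.
latency_samples = round(0.001 * SAMPLE_RATE)

print(f"data rate: {bits_per_second / 1e6:.1f} Mbit/s "
      f"({mbytes_per_second:.1f} MB/s)")
print(f"storage:   {gbytes_per_hour:.1f} GB per hour of recording")
print(f"~1 ms latency at 96 kHz is about {latency_samples} samples")
```

At roughly 110 Mbit/s – about 14 MB per second, or around 50 GB per hour – the stream is comfortably within what a Thunderbolt connection and a single modern drive can sustain, which is consistent with the low CPU and I/O load described above.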

A single Thunderbolt cable connects the RPM-TB48 I/O to a computer (a rack mounted Mac Mini running Reaper, in our case), providing a quick, easy, turnkey solution.

At roughly $2,600, the solution is about the same price as purchasing all of the components yourself, but with added benefits such as heavy-duty Ethercon jacks mounted to the modified chassis, a full warranty and, most importantly, the fact that it works as soon as you plug it in.

The resulting “recording rack” consists of the RPM-TB48 shock-mounted in a foam-lined rack shelf, a Sonnet Rack-Mac Mini enclosure, and a hard drive running off a UPS. A pair of 6-foot Neutrik Ethercon cables quickly connects the recording interface to the PRO2 via two of the AES50 ports on the console surface.

An added benefit is the ability to quickly and easily take RTA traces of individual channels, simply by picking an input or output on the AES50 network. It accomplishes the same thing as routing a solo through an A/D to a PC running Smaart tuning and analysis software, but only uses one conversion and is much cleaner.

The added functionality allows me to look at the frequency response of each channel, which can be an invaluable tool for both FOH and monitor applications.
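To give a flavor of that kind of per-channel trace, here’s a minimal sketch that records a short block from a Core Audio input and plots an averaged spectrum. The python-sounddevice/SciPy approach, capture length, and channel selection are my assumptions for illustration; the author’s actual workflow uses the console’s AES50 routing and Smaart.

```python
# Minimal sketch of an RTA-style trace for a single channel exposed to
# Core Audio. Library choice and device/channel selection are assumptions
# for illustration, not the author's actual tools.
import numpy as np
import sounddevice as sd
from scipy.signal import welch
import matplotlib.pyplot as plt

FS = 96_000          # match the console/recording sample rate
SECONDS = 2          # short capture for the trace
CHANNEL = 0          # which input channel of the interface to look at

# Record a short block from the default Core Audio input device.
block = sd.rec(int(SECONDS * FS), samplerate=FS, channels=CHANNEL + 1)
sd.wait()
mono = block[:, CHANNEL]

# Averaged spectrum (Welch) gives a smoother, RTA-like trace than a raw FFT.
freqs, psd = welch(mono, fs=FS, nperseg=8192)

plt.semilogx(freqs[1:], 10 * np.log10(psd[1:] + 1e-12))
plt.xlabel("Frequency (Hz)")
plt.ylabel("Level (dB, relative)")
plt.title("RTA-style trace of one input channel")
plt.show()
```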

Quick Workflow
Once the recording platform was established, the only task remaining was to select the components for the control package.

Front and back views of the recording rack built around the RPM-TB48 I/O. (click to enlarge)

We chose the PRO2 console, specifically, because of the advanced surface navigation and high input fader count. It offers 56 primary inputs, 8 auxiliary returns, 27 phase-coherent mix buses, and 27 faders on the surface. With the “advanced navigation” mode engaged, the ability to access a mix, effects engine, or graphic EQ is merely one button-push away, making the workflow exceptionally quick.

Another benefit is the ability to group and spill inputs via the 8 VCAs and 6 Population groups, which, in my experience, has proven to be much easier than remembering what layer a particular input is on. I use the VCAs to group the band, vocals, and effects returns, while the Population groups are used to quickly access anything that needs to be cross-faded, such as line-in playback sources or a pastor’s wireless lavalier microphone.

The stage rack houses two DL251 I/O units, each with 48 analog inputs and 16 analog outputs. The primary DL251 handles all of the inputs and IEM outputs, while the secondary unit serves as the system outputs, as well as an extra 48 inputs with which to easily patch a supporting act. Finally, AES50 audio is sent to the PRO2 surface via a 4-channel Cat-5e snake.

The author with The Austin Stone’s Midas PRO2 console. (click to enlarge)

After six-plus months, this solution has proven to meet our expectations, and then some. The band was sold on day one as soon as they heard their in-ear mixes – they couldn’t believe the separation and clarity they were experiencing.

The results in the FOH mix have proven to be almost identical. Unprecedented separation in the mix, paired with a thick, tight low-end and rich, warm midrange has made Midas digital the new standard across The Austin Stone’s campuses.

As we near completion of tracking our third live album, the recordings have been nothing short of stellar, and our studio engineers have fallen in love with the sound quality of the headamps. There is no doubt in my mind that solutions of this type will become increasingly common in the live recording world in the years to come.

Todd Hartmann is the audio engineering coordinator for The Austin Stone Community Church as well as a freelance audio engineer.

Posted by Keith Clark on 05/02 at 02:51 PM