Feature

Thursday, November 07, 2013

Working On The Stage Sound—Moving From Mixing House To Monitors

A voice of experience provides a run-through on success at the monitor position

A recent assignment placed me behind a monitor console once again. It had been a while since I stage-mixed on a regular basis, so I enjoyed the change of scenery.

But this end of the snake presents a very different challenge from a front of house mix or a system engineering position.

Here, the fruits of my labors were not intended for the masses, but rather, were tailored to specific individuals and their needs, wants, desires… and idiosyncrasies. And yes, IEM has fully come of age, but not everyone will go there.

Here are some of my rules for setting up successful stage mixes.

Objective
To me, the first and most important stage-mixing rule is to understand exactly what you are trying to accomplish. (As with most things in life!)

The objective is for the player or artist to hear what they need or want to hear, in a way that makes sense to them. Do not confuse this with the idea that you are there to make it sound good to you! The two do not necessarily coincide. Wedge mixes do not generally sound like front of house mixes.

Be Realistic
Face it; on a one-off with an unfamiliar band all you can do is give it your best shot. If it’s a couple of folks with acoustic guitars, you’re probably “in there”. If it’s Godzilla meets Metalhead, well… set up accordingly.

If you’re going on tour with a band, try to find out as much as possible about them. Perhaps the guy who was sitting in the seat before you got there would be a good place to start.

Make a plan, but don’t try to reinvent the wheel on the first day. Many musicians get used to their mixes sounding a certain way, and, right or wrong, be prepared to leave it that way.

But if you’re lucky enough to tour with some receptive players, you’ll have plenty of time to try different things and fine-tune your “stage sound” as you go!

First Things First
Assuming this is a tour, you’ll probably receive information about what goes into the mixes, but it’s best to speak directly to the band members if possible. This is your starting point.

Following that initial information, set up for your first sound check. Once the band begins playing and I’m comfortable with my initial mixes, the next thing I like to do is walk around to the various positions and listen.

I mean really LISTEN carefully to what everyone is hearing. It will change as you move around depending on your proximity to various instruments, amplifiers and wedges.

It may change from song to song depending on the volume of the instruments. Make mental notes of what you hear. This will be the foundation for building a successful “stage sound” later.

Psycho
You must also play psychiatrist a bit and try to get inside the players’ heads.

It’s important to understand the difference between a guy who will ask for his guitar in the wedge in front of him while standing in front of a Marshall stack turned up to eleven, and the guy who wants a taste of the keyboards because they are on the opposite side of the stage. If it’s all about volume and ego… (fill in the blank).

Loudspeaker Placement
I’m always amazed at how many guys don’t take the time to really place the loudspeakers properly.

Aim them at the players’ faces, and away from troublesome acoustic instruments (like a grand piano). Try to keep from firing into open microphones, thank you.

Drum fills are particularly troublesome. I like to get them as far down-stage as possible alongside the riser, and aim them just up-stage of the drummer.

Orient the box so that the narrowest horn dispersion is in the horizontal plane (usually on its side). This will help to keep the foldback out of the tom and overhead microphones.

Be careful when you are using more than one enclosure on a mix. Play with the placement of your wedges and find out what works. You’ll be amazed at what a difference a few inches can make when it comes to hot spots and nulls.

Usually I try to find a place where they are close enough together and down-stage to still be in front of the musician, but far enough apart to aim the high frequency axis past the microphone at his ears.

When they’re too far apart, you lose that “in your face” feel. Avoid crossing the HF axis from both boxes at the microphone itself, and also be prepared for reflections from hats or costumes.

For fill loudspeaker positions, if you have multiple enclosures, try to stack them rather than using a side-by-side configuration.

Horns that are not splayed properly will have several well-defined nulls and peaks in their response when acoustically added together. This is a classic case of non-coincident arrivals at the listener’s position and cannot be fixed with an equalizer!

You would have to splay the boxes for a very wide coverage pattern in order to add the horns together properly (depending on the horns, of course). There are many more enclosures with 60-degree horns than with 30-degree horns.
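To get a feel for why an equalizer can’t fix this, consider the arithmetic of two arrivals. Here’s a rough sketch (with illustrative numbers of my own, not from any particular rig) of where the cancellations land for a given path-length difference:

```python
# Illustrative sketch: null frequencies produced when the same signal
# arrives from two sources with a path-length difference between them.
SPEED_OF_SOUND_FT_S = 1130.0  # approximate speed of sound in feet per second

def comb_nulls(path_difference_ft, max_freq_hz):
    """Cancellation frequencies: odd multiples of half the delay cycle."""
    delay_s = path_difference_ft / SPEED_OF_SOUND_FT_S
    nulls, n = [], 0
    while (freq := (2 * n + 1) / (2 * delay_s)) <= max_freq_hz:
        nulls.append(round(freq))
        n += 1
    return nulls

# A 1-foot difference between arrivals puts the first null near 565 Hz,
# with another every 1,130 Hz above it.
print(comb_nulls(1.0, 6000))  # [565, 1695, 2825, 3955, 5085]
```

Move the boxes or change the splay and the nulls move; boost an equalizer and the cancellation simply tracks the boost, which is why this has to be solved physically.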

Low Frequency Reality Check
Look around you. A reality check will tell you that if you have a relatively large house system with low-frequency and sub-bass enclosures, your monitors will not be able to compete with the LF information on stage when everything is up to show speed.

Unless, of course, you want to turn everything up to “warp nine,” or add lots of sub-bass enclosures to your monitor rig. But this generally results in escalating levels, with the backline amps and then the house system turned up to overpower all of the information coming off the stage. I think we all know what this leads to!

If you have to overpower the band with your stage rig, the house mixer will hate you and the show will suffer for it! (Just as it does if the band plays too loud.)

Use the low frequency information from the house system to fill out the bottom end in your “stage sound.”

If you’re carrying a smaller house system or playing on well-damped theater stages, this effect is not so prevalent and you can maintain a full bandwidth from your monitor system.

Pulling It Together
The best approach is to try to meld the backline amps, wedges and house loudspeakers into a system that all works together to attain the overall stage sound you are looking for.

To develop this environment, the spectral response of the mixes should be tailored to fill in what is not heard on stage from the backline amps and the house system.

This usually means keeping nearby instruments and very low frequency content out of the wedges. (A bonus for you!)

This is where the receptive players come in. You may have to point out the low frequency phenomena during a sound check, but it will be obvious to them if they listen.

Also point out the nearby instruments and how they may be heard without being very loud in their mix. Maybe even re-aim a stage amplifier to be more effective.

How many times have you seen guitar players wailing away with their speakers aimed at their rear-ends? Tilt them back and aim them at their heads. I promise they have no idea what kind of havoc they cause the house mixer about 75 to 100 feet away.

Of course this doesn’t work in every situation. It depends on the music, the venue and the players among other things.

But if you can make these principles work you can achieve the most clarity with the least volume in your wedges.

Use localization to help keep things clear on stage. It is easier to hear different instruments if they are coming from different directions. The fewer sources in any mix, the easier it is to hear them in a noisy environment.

Also consider the individual instruments and a mix containing all of them. You have a certain bandwidth in which to fit them.

It’s pretty easy if it’s just a violin and a tuba, but not so straightforward with several guitars and keyboards and drums. Work at making all of the instruments sound different, so that each occupies its own distinct part of the available spectrum.

If a player insists on a particular tone in his monitor, but it doesn’t work for the rest of your mixes, split the input into multiple channels on your desk so that you can tailor the sound for everyone.

Dan Laveglia is a long-time system engineer who has worked with Showco and Clair Brothers, serving top concert artists.

 

Posted by Keith Clark on 11/07 at 11:46 AM

Properly Distributed Audio & Amplification For High-Quality Intelligibility

Getting a handle on the key pieces in the chain

I felt the need to write this article after a recent cross-country trip, and I’m pretty sure many of you can relate to my experience. At a newer, modern airport, I was waiting to board a connecting flight, and everything appeared to be going as planned until the departure time suddenly changed on the monitor - a one-hour delay.

However, I never learned the reason for the delay, because the gate attendant making the announcement either had the mic down her throat - or, more likely - the paging system was defective. Speech intelligibility was poor to non-existent.

This can’t be, I thought. A modern airport certainly must have a paging system utilizing the latest technology, properly designed and installed to provide maximum intelligibility. We all need to “hear and understand” the announcements, right? After all, this is kind of the point, and unfortunately, it was a point completely missed by this system.

There can be several reasons for poor intelligibility with a paging system. In this case, the system was mostly plagued by bad loudspeakers combined with poor placement, and I’m sure “Mr. Budget” had a negative impact as well, as it often does in systems of this nature. I wonder how many people miss their flights daily due to a poor paging system? One would think the amount of money wasted on compensating for missed flights would justify the cost of ensuring that every airport paging system does what it’s supposed to do: deliver intelligible audio to everyone in the facility.

Many audio professionals believe distributed audio cannot sound good. And in fact, it’s never going to sound as good as a finely tuned PA, but it also doesn’t need to sound like a badly tuned AM radio. If a distributed system is designed and budgeted properly, there is absolutely no reason for poor intelligibility or background music quality.

Distributed audio products have improved much over the past 15 or so years, because many manufacturers realized that distributed audio is big business. For example, there used to be only a few manufacturers offering ceiling loudspeakers, and now, just about everyone in the loudspeaker business has jumped on the bandwagon. Keep in mind that a major brand name on a loudspeaker doesn’t at all mean it provides the appropriate and necessary performance. Quality ceiling products take time to perfect.

THE WEAKEST LINK
While choosing proper loudspeakers for a distributed application is essential for good results, if you’re feeding these drivers limited audio signal and power, they still will not perform properly. The system will only be as good as its weakest link; therefore, every component in the chain - microphone(s), mixer, power amplifiers, step-up power transformers, wire gauge, step-down transformers and loudspeakers - must deliver good frequency bandwidth.

Another significant problem with distributed systems is the power amplifier side of the equation. Unfortunately, some installers lack understanding as to how to properly utilize power amplifiers for distributed applications. As with loudspeakers, most power amp manufacturers now offer products for distributed applications. I’m often asked about the differences between a standard two-channel power amp and a 70.7-volt or 100-volt power amp. Besides connectors and security features, there’s not much difference.

The biggest difference is how high-output voltage is created. Some power amps require a step-up transformer to achieve the proper output voltage. Any professional power amp can be outfitted with a step-up transformer to make it a distributed amp. But first there must be proper understanding of why and how distributed audio is created.

Many distributed systems require only 100 watts at a 70.7-volt system operating voltage. (A 100-watt amplifier only has a voltage swing of 28 volts RMS at 8 ohms.) Thus a step-up transformer needs to be attached to achieve the required distributed voltage. There are two types of step-up transformers, isolation and autoformer, and both have pros and cons.

The autoformer is the easier to engineer, and the cheaper option. As a result, it’s the step-up transformer found in most power amps. Autoformers also supply good frequency response, but the downside is a lack of protection between the loudspeaker line and the power amp. On a line of 10 loudspeakers running over 300 feet of wiring, the odds are pretty good that a short circuit will eventually occur somewhere, and if one does, the power amp will shut down or even fail completely.

A step-up isolation transformer is a more solid approach in terms of power amp protection. With this approach, the power amp always sees a constant load impedance, and if a short occurs anywhere in the chain, most likely the power amp will continue to be able to drive the system. However, frequency response can suffer; lesser units are limited to about 50 Hz to 8 kHz. A good bandwidth for an isolation step-up transformer would be 40 Hz to 18 kHz (+/- 1 dB).

So prior to purchasing a power amp with a step-up transformer, find out what type of transformer it is as well as its bandwidth. Be sure to check (or ask for) frequency response specifications, including the dB tolerance.

THE REQUIRED “SWING”
Increasingly, power amps don’t require a step-up transformer to deliver high output voltage. A decade ago, Crown, for example, began offering a line without output transformers, instead outfitted with internal DC rails high enough to produce the required output “swing” for the load. These power amps also provide load protection against lower distributed impedances. It’s a very good solution for eliminating bandwidth limitations.

And in fact, many of the latest high-power amps now on the market deliver enough voltage to drive a 70.7-volt system without need of step-up transformers. With higher-power amps comes a larger voltage swing. For a 600-watt power amp running at eight ohms, voltage output is 69 volts. The impedance of a 600-watt 70.7-volt distributed system is 8.33 ohms. (Both can be calculated using Ohm’s law.) What this means is that you don’t necessarily need to use all of the power provided by the amp. If you only need 300 watts, the 600-watt power amp used in my example will deliver the necessary power along with the required voltage swing.
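If it helps to see the arithmetic, here’s a minimal sketch of the two formulas behind those figures (the helper names are mine, purely for illustration):

```python
import math

def voltage_swing(watts, ohms):
    """RMS voltage an amp delivers at rated power into a load: V = sqrt(P x R)."""
    return math.sqrt(watts * ohms)

def distributed_load_ohms(line_volts, watts):
    """Impedance a fully loaded constant-voltage line presents: Z = V^2 / P."""
    return line_volts ** 2 / watts

print(round(voltage_swing(100, 8), 1))             # 28.3 V - hence the step-up transformer
print(round(voltage_swing(600, 8), 1))             # 69.3 V - nearly 70.7 V on its own
print(round(distributed_load_ohms(70.7, 600), 2))  # 8.33 ohms for a 600 W, 70.7 V line
```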

Another question I’m frequently asked: can a power amp be bridged to get the voltage swing for a 70.7-volt system? The answer is yes, and it’s a useful option when you don’t want to devote a 600-watt power amp to a system requiring only 150 watts, and you also don’t want to use a step-up transformer.

On the other hand, a standard 150-watt power amp at eight ohms will not deliver the required 70.7 volts unless it is bridged or outfitted with a step-up transformer.

Here is an example of how to bridge a power amp for 70.7-volt operation. Let’s say we have a distributed system requirement of 150 watts at 70.7 volts. This will require a power amp that delivers 150 watts per channel at eight ohms, resulting in an output voltage of 34.6 volts per channel. For a 70.7-volt system requiring 150 watts, load impedance works out to about 33 ohms (70.7 squared divided by 150); call it 32 ohms to keep the numbers round.

When this power amp is bridged, each channel sees only half the impedance load applied to it. For a 32-ohm load, each channel sees 16 ohms. Most manufacturers do not publish 16-ohm specifications, so to determine this, divide the 8-ohm rating in half: in this example, 150 watts at 8 ohms is 75 watts at 16 ohms. The voltage swing for 75 watts at 16 ohms is 34.6 volts, and with the power amp bridged, power and voltage outputs double. Thus after bridging this power amp, an output of just over 69 volts and 150 watts is attained - close enough to 70.7 volts to adequately drive the system.
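Here’s the same bridging arithmetic as a quick sketch, so you can plug in your own amp ratings (again, the function name is hypothetical):

```python
import math

def bridged_output(load_ohms, rated_watts_at_8_ohms):
    """Bridged-mode voltage and power: each channel sees half the load."""
    per_channel_ohms = load_ohms / 2  # a 32-ohm line looks like 16 ohms per side
    per_channel_watts = rated_watts_at_8_ohms * 8 / per_channel_ohms  # half power at 16 ohms
    per_channel_volts = math.sqrt(per_channel_watts * per_channel_ohms)
    return 2 * per_channel_volts, 2 * per_channel_watts  # bridging doubles both

volts, watts = bridged_output(32, 150)
print(f"{volts:.1f} V, {watts:.0f} W")  # 69.3 V, 150 W - close enough to 70.7 V
```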

CHECK THAT SATURATION
Overdriving is a common mistake with distributed systems. If more than 70.7 volts (or 100 volts) is applied, step-down transformers may saturate, causing them to go from high impedance to an almost dead short. Not only will sound quality suffer immensely, but the power amp may fail. And be extra careful that this does not happen if you are bridging a power amp. The channel outputs will be shorted together if saturation occurs, and this is not good!

Step-up transformers can also be saturated if too much power is applied to the primary. When selecting a step-up transformer, be sure to match the power output of the power amp to the primary impedance of the transformer. (Most primaries are either 4 ohms or 8 ohms.)

Another issue of which to be aware is the combined frequency response of a loudspeaker and step-down transformer. Most only go flat to 120 Hz. To conserve power from the power amp while also avoiding low-frequency saturation, place a high-pass filter on the input of the power amp, which will limit its low-frequency response. A filter set between 80 Hz and 120 Hz, with a slope of 12 dB to 24 dB per octave, is recommended.
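As one example of what that might look like when the drive processing is digital (an assumption on my part - the 100 Hz corner and 48 kHz sample rate below are placeholder values), a fourth-order Butterworth high-pass provides the 24 dB-per-octave slope mentioned above:

```python
# Sketch only: a 24 dB/octave high-pass ahead of the power amp, implemented
# in a digital drive processor. Corner frequency and sample rate are assumed.
import numpy as np
from scipy.signal import butter, sosfilt

sos = butter(4, 100, btype="highpass", fs=48000, output="sos")  # 4th order = 24 dB/oct

audio = np.random.randn(48000)   # placeholder for one second of program audio
protected = sosfilt(sos, audio)  # what the amp (and transformers) would see
```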

Always remember that 70.7 volts or 100 volts is the maximum voltage to be applied to the step-down transformer. Further, a distributed system design should operate between 50 and 75 percent of its total capability, allowing for additional headroom when needed and preventing potential transformer saturation.

To attain a distributed system offering excellent intelligibility and quality background music reproduction, it’s essential to have a good understanding of all the pieces in the chain as well as the potential problems that may occur.

Jeff Kuells is an audio engineer and audio manufacturing consultant and was previously director of engineering for a major amplifier manufacturer.

Posted by Keith Clark on 11/07 at 11:24 AM

Church Sound: Why In-Ear Monitors Make A Lot Of Sense For Smaller Churches

Reduced stage volume, improved overall monitoring -- IEM is not just for larger venues

A while ago I wrote about in-ear monitors (IEM) and some potential issues with them.

While recently discussing IEM with a colleague, he made a statement that struck me: “In reality, it is the smaller church that needs in ears much more than the larger ones.”
 
A couple of things came to mind. Larger churches/ministries have the funding to get IEM, and they (often, at least) have paid technical staff that can properly set up and use in-ears. And larger churches also have large stages in large rooms, where stage volume is frequently not as much of an issue as it is in smaller churches.

Rather obvious reasons a small church might invest in IEM:

1) Stage volume is a huge issue. In some cases, 70-plus percent of the congregation probably hears more stage volume than sound coming out of the main loudspeakers

2) Because of the stage volume, there are continual complaints about overall loudness

3) Further complaints center on the vocals not being heard over the instruments

4) Many small churches have singers and instrumentalists who have never played on a big stage and are perhaps self-conscious about their abilities; IEM allows them to better hear themselves and the other musicians

5) Feedback can be a constant issue because the vocal monitors always need to be turned up too hot so the singers can hear themselves

So how can smaller churches move into IEM?

Fortunately, system costs have come down in recent years. Further, an investment can be made in just a few systems, with more receivers added over time as funds become available. 

I suggest starting with the singers - although they’re not the loudest thing on stage, their monitors in general tend to be the loudest. So, a good starting point might be purchasing one transmitter and enough receivers for all of the vocalists. Then I would purchase a second transmitter and put the band on that mix.

Things to keep in mind about IEM:

1) They do take a bit of getting used to, so use them in multiple rehearsals first

2) Mixing for “ears” is different from what is needed with standard floor monitors

3) Without ambient/audience mics feeding IEM, musicians will initially feel isolated and perhaps frustrated

4) Just like conventional monitors, when multiple musicians share a mix, there will be issues!

To help make the use of IEM successful, the person providing the IEM mix (usually the house sound operator) should set up the monitor mix on an aux send on the console while listening through a set of headphones (or earpieces) that match what the musicians are using.

Also, the operator should try to make sure there is an ambient mic (or two) to feed that aux channel (and NOT the main mix).

I’ve found that placing a mic to capture some onstage sound as well as a mic to capture the audience/house sound usually works the best. Dialing the amount of each mic into the mix is a matter of personal taste - work with the musicians on this one.

In-ear monitoring is not a total solution to all stage monitor problems, but it’s a valuable tool that can help when deployed carefully and correctly.

Gary Zandstra is a professional AV systems integrator with Parkway Electric and has been involved with sound at his church for more than 30 years.

Posted by Keith Clark on 11/07 at 10:22 AM

In The Studio: A Handy (And Inexpensive) Wireless Remote Control Solution

It's no fun to stop, take off your headphones, and walk back over to the computer
Article provided by Home Studio Corner.

 
If you’re like me, then a lot of your recording sessions in your studio involve you wearing several different hats.

For me, I’m a musician, so I’m always recording myself.

The problem is studios tend to be noisy. I like to get as far away from the computer and hard drive as I can. That means moving across the room.

Then the problem, of course, is that now I’m very far away from the computer. I have to do what I call the “recording dance,” where I scurry back and forth between the microphone and the computer.

This gets old really quick.

When you’re in the zone to record, and you’re feeling very creative and musical, it’s no fun to stop, take off your headphones, and walk back over to the computer to stop recording and set up a new take.

This is especially frustrating if you make a mistake two bars into the first song, and you have to stop everything and start over. You’ll find pretty quickly that you’ll lose that “zone” that you were in, and playing the music then becomes a chore.

There are a few possible solutions to this. Over the last few years, there have been at least a handful of wireless transport control products on the market.

Frontier Design Group made one called the TranzPort. I don’t believe it’s for sale anymore, but it was essentially a wireless transport control that allowed you to start and stop playback and perform a few other functions from across the room.

Another solution is a very cool product from PreSonus called FaderPort. It’s great because it allows you to have volume control with the fader and also all the transport controls you need for recording. The one problem is that it’s not wireless. That’s not a huge problem. All you have to do is get a very long USB cable and place the transport next to you at the recording position.

Now you have the transport controls right there, within arm’s reach, to start recording, stop recording, or do whatever else you need to do without having to get up.

The only problem with that solution is that you have to run a cable all the way across the room to the recording position while recording. Then you have to run it back to your mix position when you want to use the FaderPort for mixing and other things.

My Solution
Here’s what I do. When I bought my iMac, it came with a wireless keyboard. It’s not a full-size keyboard. It doesn’t have the number pad to the right-hand side, so it’s fairly small.

I used this for a while, but, if you do a lot of work in Pro Tools or any DAW, you know that there are shortcuts that you can use with the number pad on the right-hand side of the keyboard.

Since this wireless keyboard didn’t have that, I eventually broke down and bought a full-size USB keyboard for the iMac. That left me with a very handy wireless tool. Now, whenever I record across the room, I simply turn on my wireless keyboard and carry it with me to the recording position.

Since I know all the shortcuts, I can quickly and easily start and stop recording. If I mess up an intro in Pro Tools, I simply hit Ctrl-period, which stops recording, then press Cmd-Spacebar to start recording again. It’s very handy and saves me a lot of time, since I’m not bouncing back and forth between the chair and the computer.

Joe Gilder is a Nashville-based engineer, musician, and producer who also provides training and advice at the Home Studio Corner. Note that Joe also offers highly effective training courses, including Understanding Compression and Understanding EQ.

 

Posted by Keith Clark on 11/07 at 08:25 AM

Wednesday, November 06, 2013

Problem Solvers: Mics And Techniques For Challenging Situations

Creative solutions to the daily challenges are readily available

Whether it’s a band using amplified and acoustic instruments, a bassist switching between electric and double bass, a drum kit right next to an acoustic piano, a particular sound that needs to be isolated in the face of loud ambient noise – or, well, you name it – selecting the right microphone (and deploying it in a certain way) can make a big difference in attaining the desired level and audio quality.

With that in mind, let’s survey several live sound professionals about some of their “go to” mics and techniques for acoustically difficult environments and/or challenging combinations of instruments.

Acoustic Piano
Preparing for a show with pianist/songwriter Marc Cohn, veteran FOH engineer Tom Dube announced that he could set up the piano mics in 60 seconds. With that, he reached inside the grand piano, positioned two DPA 4061 miniature omnidirectional condensers on the metal ribs of the piano’s harp with magnetic mounts, guided the cables, and plugged them into the stage snake.

Dube notes that he usually positions the mics in specific locations to maintain the proper phase relationship with the piano’s hammers. He then quickly adds a Barcus Berry XL4000 pickup to the soundboard under the lowest strings, running to the preamp.  (“Maybe the process takes 90 seconds,” he concedes.) 

At the console, he generally adds some gentle limiting and applies a high-pass filter to the 4061s at about 180 Hz to reduce stage rumble. Additional equalization depends on the particular piano, consisting of a bit of midrange attenuation centered somewhere between 300 and 700 Hz, and perhaps a slight boost between 3 and 4 kHz as well as 8 and 10 kHz. The DPA mics are “remarkably consistent and useable,” he notes.

“X” marks the spots – FOH engineer Tom Dube’s positioning for piano mics. (Credit: Tom Dube)


The XL4000 pickup serves to reinforce low frequencies that are sacrificed for feedback suppression with the high-pass filtering. “I’ll dump a whole lot of 1 kHz and 2.5 kHz, as well as bump up a bit of 125 Hz to add the missing bottom,” Dube adds. Finally, he has the player hold some lower tones during sound check to determine the best phase relationship between the mics and pickup.

Talking at the main stage of the 2013 Monterey Jazz Festival, FOH engineer Nick Malgieri cited the Schoeps MK4 and its “magical gain-before-feedback ratio,” even before any EQ, when used for piano overheads.

“I’m not sure if any other mic overheads would work as well on an acoustic piano on a live stage with a band,” he says. Because of higher stage levels and instrumentation, the MK4 output is typically blended with some combination of internally mounted DPA 4060s, AKG C414s, or an Applied Microphone Technology (AMT) M40 boundary mic attached to the sound board. 

A DPA 4061 with convenient magnetic clip.

Acoustic Guitar
Getting an acoustic guitar to sound like it does unamplified, only louder, is a perennial challenge, especially when the guitar is doing more complex tasks than strumming chords in first position.

Solutions over the years have included a sound hole cover to seal the instrument’s acoustic chamber and lower the odds of feedback, as well as magnetic pickups under the strings, contact and under-saddle vibration-sensing pickups with outboard processing, and various small mics attached to the guitar either by themselves or in combination with under-saddle sensors.

At small venues where the audience is silent and the performer is still, simply placing a directional mic close to the instrument – avoiding close proximity to monitors and mains and adding judicious equalization – may do the trick. 

A recent solution is the L.R. Baggs Lyric system. Designed to be mounted in the guitar, the system includes a full-range directional mic in a specially designed noise-cancelling mounting that attaches inside the instrument on the guitar’s bridge plate, a small rotary volume control and presence setting on the underside of the sound hole, and an end-pin jack containing sophisticated tonal circuitry – including compression, limiting, and EQ to output a balanced acoustic sound. 

The system allows freedom of movement for the performer, since the mic always stays in the same position. I tested the Lyric by installing it into a custom handmade acoustic by luthier Mike Kelly (of Goodyears Bar, CA), with good results.

Keith Sewell, a touring guitarist with Lyle Lovett and the Dixie Chicks (among others), has been involved with the system since its prototype stage in 2012, seeking an internal mic-only solution for performing in amphitheaters and other large venues that would sound as close as possible to an unamplified acoustic.

Because every venue and stage differs, Sewell says the Lyric needs a bit of tweaking each night, but once dialed in, he states, “it’s the best sound I’ve ever had as far as a guitar pickup,” adding that there is no way he could get even the best condenser mic as clear and loud in a live setting.

Lovett’s FOH engineer, John Richards, finds the mic very stable and musical in the mix. In a full band setting, he uses a high-pass filter at around 100 Hz on the guitar.

Guitarist Keith Sewell in concert with the L.R. Baggs Lyric system. (Credit: Keith Sewell)

Other effective solutions I’ve run across include the AMT S15G, which has a gooseneck-mounted external cardioid condenser element that positions from 3 to almost 6 inches above the surface of the guitar, using a specialized clamp for the body and a beltpack preamp.

The DPA d:vote 4099G acoustic guitar mic provides a supercardioid element on a gooseneck with a body clip, and is used externally, pointing toward the guitar. DPA offers a variety of exchangeable clips for this mic so that it can be used on violins and other strings, brass instruments, piano, drums, and so on.

And, the Fishman Ellipse Matrix Blend combines an under-saddle pickup with a miniature cardioid gooseneck condenser mic that goes inside the instrument. 

Guitar Amp
With decades of touring and studio experience, Mick Conley has miked his fair share of guitar amps. His choice is often a Shure SM7,  more typically used in radio broadcast as an announce mic. Though by the specs the SM7 is cardioid, Conley cites its “really tight pattern that isolates so well” as a key reason it works as desired in this application. It also includes bass roll-off and midrange presence boost controls for further tonal shaping.

When he has the time at a given show, he moves the mic position a bit to find the “sweet spot,”  listening through the house system. When using the same guitar amps at every stop on a tour, he may also mark the best spot with a piece of tape.

Dube has a few favored mics for guitar amps, including the ubiquitous Shure SM57. He also sometimes selects a ribbon mic such as a figure-8 Royer R-121 or a beyerdynamic M160 hypercardioid – or if available, a Sennheiser MD409 dynamic supercardioid (currently updated to the e609 Silver).

He emphasizes that mic positioning is critical to maintain a correct phase relationship, adding “use your ears.” Depending on the guitar, playing style, and amp, finding the best place to point the mic between the edge and center of the speaker cone is also a matter of listening. 

Kick Drum
Also while at Monterey, I ran into FOH engineer Dunning Butler as he was returning from the on-site equipment area with a mic to solve an audio problem for an upcoming performance at a festival venue dubbed “Dizzy’s Den.”

It’s a long, rectangular room, designed for county fair displays rather than musical performances. As an acoustic space, it could be accurately described as “tubby” since its dimensions tend to accentuate low-frequency energy coming from the stage and the loudspeakers – leading to a lack of definition for bass instruments and kick drum. 

Dunning placed the beyerdynamic M88 cardioid that he’d retrieved close to the front head of the 18-inch kick drum (without a hole cut in the head), since in his experience the tight pattern and “bright sound” of the mic allow him to capture the sound without additional “boominess.”

Conley, too, mentioned the M88, as well as the more recent TG-series equivalent, as “one of his favorite kick drum mics,” and he also finds it useful on toms and guitar cabinets.

Further Solutions
Flutes don’t get all that much attention, but they’re actually quite common in musical ensembles of a variety of styles, and they present unique challenges.

Noted flutist Michael Mason has found a solution with a Countryman ISOMAX 2, which is available in omni, cardioid, and hypercardioid patterns. Specifically, he mounts the mic to the instrument with its specialized flute clip, and sends signal to FOH via a Shure wireless system.

“The ISOMAX 2 provides excellent response and the clip mount offers all the flexibility I require in order to position my embouchure, with the ability to adjust the clip position for many of the extended techniques I perform,” he notes. “I position the clip and mic onto various areas of the headjoint, but never too close to the lip plate. I use the mic without the windscreen because it enables me to capture a wide range of articulations and wind sounds.” He adds that he knows of several other prominent flutists who regularly use the ISOMAX 2.

On the recent Justin Timberlake world tour, FOH engineer Andy Meyer turned to a unique solution he’s developed in applying an Audio-Technica AE5400 cardioid condenser mic – normally for live vocals – on the top and bottom of the snare drum. “I’ve been doing that since I was like seven years old,” he jokes. “Seriously, you cannot beat the 5400 in that application, and I keep trying.”

For unobtrusive close-miking of acoustic instruments and other audio sources, engineer Nick Malgieri chooses the DPA 4060, also a miniature omnidirectional model. He finds that this lavalier-style mic retains the clarity and high-end response along with faithfully reproducing “the sound of the body of the instrument.”

At times, he uses the 4060 as a “contact mic” by taping it to the instrument, and since it’s an omni, it doesn’t exhibit proximity effect that can color the sound. In more esoteric situations, he’s even taped it to a target to reinforce the impact of an arrow hitting home – with the audio relayed to the console via a Sennheiser G3 or Shure ULX-D wireless system. 

In addition to piano overhead applications, Malgieri sometimes uses a Schoeps MK4 as a lectern mic and to capture acoustic guitars and other acoustic instruments. For picking up more distant audio sources, he finds that the output of the MK4 is much more transparent than typical shotgun mics while still achieving the necessary gain.

Flutist Michael Mason showing the Countryman ISOMAX 2 placement on his instrument.

Stage Setup & Isolation
Audio and piano technician Brian Alexander has spent many years behind the scenes, touring with Chick Corea and others. He focuses on the stage setup and the interaction between stage levels, monitors, and mics. 

When stage levels are higher, emphasizing isolation between instruments based on where they’re positioned relative to each other, or even using sonic barriers (“if the musicians will put up with it,” he adds) can lead to more control over the audio to the front of house and the audience. Careful positioning of the null zones of the mics is also important. 

Further, in working at Monterey with vocalist Bobby McFerrin and his multi-piece band, including instruments ranging from drums, electric guitar, and keyboards to acoustic guitar, dobro, and bass ukulele, Alexander was greatly aided by both the careful stage arrangement and the use of in-ear monitors to keep the stage level low. Mixed by Dan Vicari, the results for the audience were dynamic, with every instrument able to be heard distinctly whether the band was rocking or playing intimately.

With so many mics to choose from, each with a unique set of characteristics, creative solutions to the daily acoustic challenges are readily available. The best approach is to set aside time to try some of the ideas presented here, as well as to come up with novel solutions of your own.

To the extent that the musicians will work with you, experiment with the positioning of instruments, amps, and monitors so that better isolation can be maintained – giving you more control at front of house.  The audience will appreciate your efforts.

Gary Parks is special projects writer for PSW/LSI, and has worked in the industry for more than 25 years, including serving as marketing manager and wireless product manager for Clear-Com, handling RF planning software sales with EDX Wireless, and managing loudspeaker and wireless product management at Electro-Voice.

Posted by Keith Clark on 11/06 at 11:28 AM

Fun With A Purpose: The System For A Fast-Rising Band’s Latest Tour

Success has made a big difference in the band's concert sound approach

New York City-based indie pop band Fun (stylized as fun.) has enjoyed a remarkable couple of years, particularly since the release in 2012 of Some Nights, which saw the single “We Are Young” topping the Billboard charts in 2012 and winning Song Of The Year honors at the 2013 Grammy Awards.

That success has made a big difference in the band’s concert touring approach, with larger venues demanding a much-expanded sound reinforcement effort.

“When I came onboard in 2011, the rocket had ignited, but the band was still touring with one bus and a trailer,” front of house engineer Gord Reddy told me when we spoke a few weeks ago while he was on a break from the extensive Some Nights tour of North American sheds and arenas.

Initially after “We Are Young” hit, the band was still appearing in 1,200- to 2,000-seat venues, but soon, the tour was carrying everything but stacks and racks, and now, the production has grown to require five trucks and three buses, with a production crew of 24 to manage it all at each stop.

Audio is pretty much the only job the northwest Washington-based engineer’s ever done. “I was on tour at the age of 16, though I don’t know if I should be bragging about it,” Reddy says, laughing. He’s been mixing FOH almost exclusively since the late 1990s following a stint as a system tech for Jason Sound, and has also done tech and FOH mix work with artists such as Barenaked Ladies, Sarah McLachlan and numerous others.

A perspective of the scene for Fun performing live at the Greek Theater in Berkeley, CA.


Canada-based Solotech is the sound company for the tour, providing a Meyer Sound-led rig that includes LEO linear large-scale arrays, MICA array modules, 1100-LFC low-frequency units, and UPQ loudspeakers under Constellation and Galileo digital control.

“In the 90s, I wouldn’t leave home without (Meyer) MSL-4s,” he states. “I got on to LEO in spring 2011, and it has a frightening amount of headroom and linearity through that gain so you can push it up without restructuring the mix or EQ-ing.”

A look at the Meyer loudspeaker set, including LEO and MICA arrays, 1100-LFC low-frequency boxes and UPQ fills.

More Direct
Typically at each stop, the sound team deploys 28 LEO and four MICA modules in main arrays of 16 boxes each – 14 LEO with two MICA (100-degree horizontal dispersion) underneath for near-field reinforcement. Coverage to each side is extended with MICA arrays, usually 10 deep, splayed outward, while low end is supplied by up to 12 1100-LFCs per side, ground stacked (typically three boxes per stack).

“Twelve subs per side is just a scad more than I need, but gives me a lot more control and it’s fun to have that headroom,” he says. “Everybody’s crazy about bass steering right now – using propagation delay and cardioid arrays – but you need a lot of subs to execute effective pattern control. The best way to steer bass comes down to the dimensions of the baffle you build.”

He adds that it can get thunderous with all of that low-frequency energy up front, so the stage lip is “coated” with UPQ-2P and UPQ-1P compact loudspeakers to make sure the audience in the extreme near field is getting something to go with that big serving of sub bass.

While the size of the rig varies venue to venue, Reddy prefers to use as many loudspeakers as possible every gig.

“Not because I want to melt everybody, it’s just more consistent – more direct and less reflected energy,” he explains. “The more hanging there in terms of width and length allows you to be really specific about how you deliver that power, to make it pleasant up close while being convincing farther back. More like climbing into a set of headphones rather than listening to it off the barn wall.”

The drive system for the loudspeakers, which are all self-powered, incorporates dual Meyer Sound Galileo 616 loudspeaker management processors for alignment of multiple zones, feeding (via AES outputs) three Meyer Sound Callisto 616 array processors that provide delay integration for aligning the arrays, shaping filters, and simultaneous low- and high-pass filters for subwoofer control.

Front of house engineer Gord Reddy during setup at the Greek.

Accompanying Compass control software provides comprehensive control of all parameters from a Mac or Windows-based computer. “I use Galileo to make ‘broad strokes’ for the whole system, and then for zone-specific treatment, which I keep to an absolute minimum, I go to Callisto,” Reddy explains. 

“With this loudspeaker rig – or without it – Galileo is my drive device for system EQ,” he continues. “If we’re playing a festival and they’ve got subs on an auxiliary – which is very common – when they hand me the separate wire, I drive it through my Galileo and stitch the subwoofers back into the rest of that 10-octave composite of musical information the way it should be. Galileo allows me to do that outside the mix environment.”

Good Is Good
I followed up by asking Reddy if he applies any specific treatment for Fun and received a spirited response. “You’re opening up a big can of worms for me,” he states. “Sound systems aren’t super-conceptual. I don’t care what style of music you’re spraying out of them – good power tuning is good power tuning, good directivity is good directivity, good direct-to-reflected ratios are good direct-to-reflected ratios. These do not and should not have anything to do with the artist.

Monitor engineer Dave Rupsch at his DiGiCo SD8 console.


“We owe it to ourselves, the industry, and the audience to understand how the manufacturers intended this stuff to be used. The popularized mythology says that editing crossover settings is good, but it’s the equivalent of taking the front wheels of your car out of alignment and saying, ‘When I drive on this kind of surface I like my camber angle to be out a bit.’ It never made sense and I’ll go to my grave fighting anybody who says otherwise. I’ve got two pages in my rider covering no ‘home-proved’ drive settings, please. Keep your settings away from me. Give me the ones the manufacturer developed.”

The stage is relatively quiet, devoid of monitor wedges and side fills. Band principals Nate Ruess (lead vocals), Andrew Dost (piano) and Jack Antonoff (lead guitar) – along with their touring mates Nate Harold (bass), Emily Moore (keyboards) and Will Noon (drums) – wear either Ultimate Ears or JH Audio custom in-ear monitors, fed by Shure PSM 1000 personal monitoring systems, notes Phoenix-based monitor engineer Dave Rupsch, who joined the Fun crew in January of this year.

Vocals are captured with Shure SM58 capsules on UHF-R wireless systems, although lead singer Ruess occasionally switches to a KSM9 capsule for his vocal, primarily to add a bit of variety.

“I haven’t been looking for more than what I get with the 58,” Reddy says, “but you get ‘sound drunk’ listening to the same stuff all the time, so we’ll go to the KSM9, then go back the other way.”

Bass, keys, sampler and acoustic guitars are taken direct with Radial DI boxes, while guitarist Antonoff’s VOX electric guitar cabinets are handled with a combination of Shure KSM32 side-address cardioid condensers and Beta 56A dynamics.

“One of the cabinets is in an ISO box to maximize the distinct and very lovely tone a VOX produces when spun-up all the way, while minimizing the issues it would create with five open vocal mics,” Rupsch explains.

The drum kit is captured with Shure Beta 91 and 52 on kick, Beta 181s for cymbal underheads, an SM57 on snare bottom, and KSM137s on snare top, toms and hi-hat. “For the kick I get some information from the 91 to provide the noise gate for the 52,” Reddy notes, “but it’s mostly the 52 for me.” Two more KSM137s placed stage left and right collect stage ambience.

Pleasing & Lush
Rupsch and Reddy both do their mixing on DiGiCo SD8 consoles. “The SD8 was a happy accident,” says Rupsch. “I’d never used DiGiCo before, and in fact was just finishing up training on the Midas PRO Series when I was approached for this job. Rather than push for a PRO Series console, I decided to give the SD8 a shot. The snapshot editing is great.”

The monitor workspace outfitted with a DiGiCo SD8, Shure wireless and more.


It’s a challenging show to mix from a monitoring standpoint, he adds. “With six people on stage – five of them singing constantly – framing the overall mix changes drastically between songs, but they do a fine job of mixing themselves in many respects.”

The twin SD8s cut down on infrastructure needs in general, Reddy notes. “Previously, monitors and FOH shared the data stream out of one stage rack, so we were down to a splitter, a stage rack and two local racks. Now, the budget and truck space are there and we’re splitting copper to two stage racks so we can have independent control, and because we have a diverging input list now – a couple of click lines, audience and shout mics that I don’t use and some he (Rupsch) doesn’t use.”

Given the band’s very busy press schedule, time for sound check with them can be hard to come by, but it’s not a huge concern given the familiarity that’s developed due to the length of the tour.

Plenty of boxes on stage deliver a “coating” of mid/high energy to the extreme near field.

Whenever possible, Reddy defaults to a virtual sound check using his PC and Reaper recording software. Prior to each show, he evaluates the system in the Meyer MAPP Online Pro predictive application, with Rational Acoustics Smaart deployed to assist with tuning.

“I keep the PA quite flat, probably brighter and with less low end than many people might prefer. Then I force the composition of the mix to give that bass back,” he concludes. “The mix coming out of the console is pleasing, thick, and lush down low, but being delivered to a pretty flat system, which gives me a predictable and structured target to shoot for every day, just like the guys in the studio making the record.”

Based in Toronto, Kevin Young is a freelance music and tech writer, professional musician and composer.

Posted by Keith Clark on 11/06 at 11:24 AM

In The Studio: Fixing Small Room Acoustical Problems

Overcoming the challenges of a dedicated studio in a small room
This article is provided by Audio Geek Zine.

 
Recently I moved my home studio from one room to another, from a nearly 200-square-foot living room to a 100-square-foot bedroom.

It’s been a long time since I’ve thought about room acoustics and because this is a common situation for home studios, I thought I’d share my experience.

This article will help you understand and overcome the challenges of a dedicated studio in a small room. It will be most helpful to those with symmetrical rooms (no weird angles) and those who don’t need all the usual bedroom stuff; at the very least, it will be a starting point for making the best of the situation.

The Problems
Small rooms are more likely to have acoustic problems than larger ones, primarily flutter echo, room modes, and early reflections that are too short. In my room, I knew there was a very bad flutter echo problem, and I expected room modes to be an issue, but at least a predictable one.

The room is symmetrical, which is an advantage the old room didn’t have. The measurements are approximately 11 feet long x 9 feet wide x 8 feet tall. There is a door and a closet on the back wall and a 6-foot x 4-foot window on the front wall.

Flutter echo happens whenever there are parallel reflective surfaces. The sound repeatedly bounces off each wall and creates a series of bright echoes. In this room it was almost like a spring reverb. I was getting it off the side walls, floor and ceiling, and from the window and back wall. When this was used as a bedroom the flutter echo was unbearable; I actually had treatment in here just to be able to sleep, but that might just be me being weird. Luckily this is easy to fix.

Room modes, also known as standing waves, again occur when sound bounces between parallel surfaces. When a room dimension is a multiple of half the wavelength, you will have a standing wave. It’s an acoustic phase problem. This frequency will be amplified, twice as loud close to the walls, and cancel out completely in the center.

Corner bass trap and broadband absorbers plus foam above.

For example, the wavelength of 60 Hz is 18.83 feet. If the width of a room is exactly 18.83 feet, the exact center of the room will have complete cancellation of 60 Hz. If we multiply the frequency by 2, then there are two dead spots at 120 Hz, and four dead spots at 240 Hz. The dead spots are called nodes.

There are three types of room modes, each with more complex calculations, but the worst kind, and the easiest to calculate, is the axial mode. Axial modes are calculated as half the speed of sound (1130/2 = 565) divided by the room dimension (length, width, or height in feet).

Calculate all three dimensions and multiply each result by 2x, 3x, 4x until above 300 Hz. Room modes are only a problem in the low frequencies below 300 Hz.
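As a quick sketch of that calculation applied to this room’s approximate dimensions (11 x 9 x 8 feet):

```python
SPEED_OF_SOUND_FT_S = 1130.0  # approximate speed of sound in feet per second

def axial_modes(dimension_ft, limit_hz=300):
    """Fundamental axial mode (565 / dimension) and its multiples up to limit_hz."""
    fundamental = (SPEED_OF_SOUND_FT_S / 2) / dimension_ft
    modes, n = [], 1
    while fundamental * n <= limit_hz:
        modes.append(round(fundamental * n, 1))
        n += 1
    return modes

for name, feet in (("length", 11), ("width", 9), ("height", 8)):
    print(name, axial_modes(feet))
# length [51.4, 102.7, 154.1, 205.5, 256.8]
# width [62.8, 125.6, 188.3, 251.1]
# height [70.6, 141.2, 211.9, 282.5]
```

Notice that none of the three dimensions stack a mode on the same frequency, which is one small advantage of these proportions.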

Small rooms tend to have the worst standing wave problems, and not enough room to treat them effectively. Axial modes happen across the entire surface of the wall, and bass tends to accumulate in corners.

Early reflections are the first bounces off a wall to reach your ear: the sound comes out of your speaker, goes past you, reflects off the wall, and arrives back at your ear. There is an acceptable time range for early reflections at a mixing position, called the Haas zone: 5-30 ms. Longer early reflections are OK because our ears and brain can separate them from the original sound, but if they are too short, as in an untreated small room, the sound is blurred by the echo. By the way, leather chairs with headrests can also be a problem.
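A reflection’s delay is just the extra distance it travels divided by the speed of sound, so it’s easy to estimate whether a given bounce lands inside the Haas zone (the 4-foot and 7-foot paths below are made-up example values):

```python
SPEED_OF_SOUND_FT_S = 1130.0  # approximate speed of sound in feet per second

def reflection_delay_ms(direct_ft, reflected_ft):
    """Arrival gap between the direct sound and a single wall bounce."""
    return (reflected_ft - direct_ft) / SPEED_OF_SOUND_FT_S * 1000

# Direct path of 4 ft, sidewall bounce totalling 7 ft: about a 2.7 ms gap,
# well under the 5 ms floor of the Haas zone - the blurring kind.
print(round(reflection_delay_ms(4, 7), 1))  # 2.7
```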

Symmetry
You might think that because there are all these problems with parallel surfaces bouncing sound around, we should avoid rooms with them. Not necessarily. Perfectly square rooms are the worst, two dimensions the same is bad, and anything else can be treated. The benefit of parallel surfaces is that we can easily predict what the problems will be and more easily set up a balanced room. A room with one wall that angles out will prevent flutter echo in one dimension, but will make perfect stereo imaging from the speakers much more difficult. You can still get standing wave issues with non-parallel walls; they’re just way harder to predict and measure.

The Solutions
In the middle of the front wall of this new room I had a roughly 6-foot x 4-foot single-paned window.

Two problems with this: it reflects sound within the room, and it leaks sound in and out.

A while ago I scored some free acoustic ceiling tiles from a store that was being gutted. When cut to size these are very effective at reducing sound transmission through the window. They are non-reflective so sound doesn’t bounce off and they also work well for blocking sun and therefore heat. The downside of these is that they’re fragile and crumble easily.

I cut six of them to size and stuck them in the window, two layers thick. I left room on one side to open the window for a fan. This is then covered with a curtain.

If you don’t want to go through all that, you should still cover all windows with thick curtains, which will not help much with isolation but will stop the reflection from the glass.

The RFZ
What tends to be a good, affordable plan of attack for most rooms, large or small, is to create a “Reflection Free Zone” (RFZ) with a “Live End, Dead End” treatment concept, meaning the majority of acoustic treatment is positioned on the sides, ceiling and front wall of the room.

By installing the sound absorbing panels in this way, the flutter echo and early reflection problems are eliminated and the standing wave issues are greatly reduced, at least when in the mixing position. You only need to absorb on one wall to stop flutter echo. You can stagger the treatment on opposite walls if you’re short on materials.

I have 64 square feet of 2-inch rigid fiberglass panels on the sidewalls and front corners and another 56 square feet of 1.5-inch foam on the ceiling. Likely this is more than most people will have available and it is probably slightly more than I need, but it makes such a huge improvement.

Rigid Fiberglass
Rigid fiberglass is the best bang-for-buck acoustic absorption material; it absorbs well down to the lower mids. A few years ago I built several 2-foot by 4-foot broadband panels with fabric-covered wood frames that mount to walls easily, just hanging off a drywall screw.

In my new room I have two panels across the front wall corners, then three panels across each side wall covering the first 8 feet of the room. These are across the middle of the wall providing coverage for both sitting and standing listening. Mounting the corner traps is easy with a few hooks and a bungee cord.

You can increase the effectiveness of fiberglass absorbers by creating an air gap between the panels and wall of an inch or two. I have not tried this yet.

Foam
Foam is less effective than rigid fiberglass, is often not any cheaper, and can be harder to work with. If you don’t have tools to build fiberglass broadband absorbers, foam will still make a big improvement. I found a box of it on Craigslist for a good deal a year ago, so I use it in the less critical spots in the room, where I only need to kill early reflections or flutter.

I have three panels hanging above the desk and mix position; this is called a cloud, and it helps focus the sound from the speakers. I have two more foam panels above the fiberglass sidewall panels, squeezed into the corner between wall and ceiling. I’m not sure of the effectiveness of these in this position, but it looks pretty cool at least.

Monitor Positioning
Monitor positioning can be tricky in a small room, especially if it is not symmetrical. When setting up a studio, face one of the short walls so the speakers fire down the length of the room. Don't put the speakers directly in corners, as that will exaggerate the low frequencies coming from them.

Instead, set them up with enough distance to walk beside and behind them if you can. Use the same measurements for each side. My speaker stands are exactly 27 inches from the closest sidewall, 21 inches from the front wall and 60 inches apart (measuring from the pole). I started mine a little narrower than that, but it didn't sound as good. Aim for a spacing of 50 to 70 inches, and no narrower than 4 feet.

Speakers should be “toed in” at a 30-degree angle towards you.

If you think of your pair of monitors and your head as the points of an equilateral triangle, then your head should be within 18 inches of that third point.

The tweeters should be at ear level. Your speakers can be vertical or horizontal (I prefer horizontal) with the tweeters on the outside.
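If you want to sanity-check the geometry, the third point of an equilateral triangle sits spacing times sqrt(3)/2 in front of the line between the speakers. A quick sketch, using my 60-inch spacing as the example:

import math

# The apex of an equilateral triangle sits spacing * sqrt(3) / 2
# in front of the baseline between the two speakers.

spacing_in = 60.0
listen_depth_in = spacing_in * math.sqrt(3) / 2
print(f"Listening position: {listen_depth_in:.1f} inches from the speaker line")

That works out to about 52 inches; with the 18-inch tolerance above, you have a usable pocket rather than a single magic spot.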

The Results
Having a dedicated space is great. At first I was worried that the small room wouldn’t sound as good but it turned out to actually sound better. I have better noise isolation, clearer sound from my monitors and still enough room to record acoustic and electric guitars and vocals here. Without all the acoustic treatment, this room would be a disaster.

With a more limited budget I could get by with two corner traps, four side panels and two foam cloud panels. The more absorption the better.

I really need that curtain…and a pro photographer!

Acoustics Resources
Acoustics Crash Course 1 – Modes | Mark Wieczorek
Room Mode/Standing Wave Calculator | Mark Wieczorek
Acoustic Treatment and Design for Recording Studios and Listening Rooms | Ethan Winer
Haas Effect | Wikipedia

Jon Tidey is a Producer/Engineer who runs his own studio, EPIC Sounds, and enjoys writing about audio on his blog AudioGeekZine.com. To comment or ask questions about this article, go here.


Church Sound: Alternatives To Using Y-Cables With Source Devices (iPod, Laptop, CD Player)

Avoid burning out the outputs of your components

 
Question: Will anything bad happen if I use a Y-cable on my CD player or laptop to hook its two outputs into one input on my mixer? I’m running out of channels.

Answer: Yes, probably. If not immediately, then some time in the future. Most modern audio gear has a very low output impedance, typically under 100 ohms. This is great in that it can drive audio over very long cable runs while ignoring interference from light dimmer buzz and cell phones, but bad in that a short circuit will cause its output transistors to put out too much current and overheat, eventually killing them.

But here’s the crazy thing…if you’re running the exact same signal out both the left and right outputs of your CD player, laptop, or iPod, say from a mono sound track, then there will be no current flow between the left and right output stages and all will be well.

However, if you then play a music track with a lot of dissimilar info on the left and right channels, say from a split-track song with music on the left channel and guide vocals on the right channel, then there will be essentially a short circuit current between the left and right output stages. This is very hard on the CD player’s and iPod’s electronics, and they will begin to overheat internally.

So if you only play these backing tracks once in a while or for only a few minutes at a time, then the output stages may never overheat enough to burn out. However, play these same backing tracks for an extended period of time (perhaps 30 minutes) and you’ll probably find that one of the outputs of your CD player, laptop, or iPod has been burned out. Not a good day for your gear. 

The best way to combine two outputs into a common input is by using a box with special build-out resistors that limit this current. Whirlwind makes a box called the podDI that not only safely combines the two output signals from the sound source into a common input on the console/mixer, it also provides separate volume knobs so you can turn the left and right channels up and down independently.

In addition, it provides a transformer-balanced XLR output that's perfect for isolating the ground of your gear from the PA system and stopping that nasty power supply buzz that often occurs when using a laptop as a sound source powered from its own 120-volt supply.

You can buy one for about $75 from Full Compass Systems here.
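For a rough sense of the numbers, here's a back-of-the-envelope comparison in Python. The voltage difference and build-out resistor values are illustrative assumptions, not podDI specifications:

# Current flowing between two directly tied line outputs versus the
# same outputs combined through build-out resistors. All values are
# illustrative assumptions.

V_DIFF = 2.0          # instantaneous L/R difference, volts (assumed)
R_OUT = 100.0         # output impedance per channel, ohms (typical)
R_BUILD_OUT = 1000.0  # assumed build-out resistor value per leg

i_ycable = V_DIFF / (2 * R_OUT)                    # only the outputs limit current
i_combined = V_DIFF / (2 * (R_OUT + R_BUILD_OUT))  # resistors do the limiting

print(f"Y-cable:  {i_ycable * 1000:.1f} mA between output stages")
print(f"Combiner: {i_combined * 1000:.2f} mA between output stages")

An order-of-magnitude drop in the current fighting between the two output stages is the whole point of the build-out resistors.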

Mike Sokol is the chief instructor of the HOW-TO Church Sound Workshops. He has 40 years of experience as a sound engineer, musician and author. Mike works with HOW-TO Sound Workshop Managing Partner Hector La Torre on the national, annual HOW-TO Church Sound Workshop tour. Find out more here.


Monday, November 04, 2013

In The Studio: “Music 101” For Recording Engineers

Arriving in the musical moment along with the musicians
This article is provided by BAMaudioschool.com.

 
If you’re a doctor, you can’t operate if you do not know what you should and should not cut. If you’re a mechanic, you can’t repair a car unless you know how the engine parts work together to move the car.

As an engineer, you are a technician, but one that works with creative material. Yes, you can approach it purely like a technician, but you won’t be able to perform as well as if you know a bit about music. Notice that I used the word “perform” rather than work.

We work with sound. We work with music. We work with feelings. If you don’t know anything about any of these things, you have no business calling yourself an engineer.

If you only know about sound and not music (and more importantly, the feelings that music can express), then you may be able to spit out work that looks good on a meter and covers all the requirements, but has no musicality and feeling. You'll also need someone to translate what the musicians say so you understand what's happening.

The best engineers are IN THE MUSICAL MOMENT ALONG WITH THE MUSICIANS and can discuss not only things like sound volume but also things like sound dynamics, harmonic or rhythmic support, musical timing, and instrument functions. The best engineers recognize, encourage, and capture musical creativity.

Dynamics

“The main job of the recording engineer is to capture as much musical dynamics as possible. The mixing engineer should utilize those dynamics to enhance the expression of the song.”

Dynamics refers to the interplay and “give and take” between different instruments based on their changes in volume or other characteristics.

Dynamics means change, which can occur on many different levels. Even a single instrument can have dynamics that change over time. 

There is emotion in dynamics. When someone speaks loudly, it impacts you one way, but if they speak softly, you find yourself listening harder and perhaps even leaning in to hear better…this greatly changes how you will perceive what you are listening to. This is an example of dynamics as applied to volume.

Dynamics not only applies to volume but also to any other kind of change or movement such as tonal change, intensity (how hard one plays), rhythmic feel, etc. Sounds can have different dynamics at different frequencies. 

Dynamics can be felt in single instruments, in relationships between instruments, and even in the combined sound of a finished mix. Although these days everyone seems to want their music as loud as possible with no break, music often has important dynamics between instruments that help convey the emotions of the song, and these can be lost when mixes are squashed and pumped for the sake of volume.

You do not have to know how to play an instrument or read music in order to push a fader, but it really does help to know what the musicians on the other side of the glass are going through.

Arrangement

A song is based on a melody (and often lyrics) and occurs through time. Songs have musical chords that support the melody (but may not necessarily be played in full).

Songs also have other parts that can support the melody and chords (such as drums for rhythm, bass to both support the low end and also to provide a low counter melody, guitars to play chords in rhythmic ways, etc).

It’s possible for a single musical element to take the role of others; for example, a song can be sung in a way that gives a strong rhythmic feeling without having drums. Arrangements are maps that indicate not only the song’s sections and their order but also which instruments will play particular parts. Although many people use the term to only mean the sections of the song, it also relates to how the different musical parts interact with each other as they support the main melody.

Typical arrangement sections include:

Intro: Song beginning
Verse: The “story”
Chorus: The repeating part of the “story”
Bridge: The part when everything changes for a short while before returning to the “story”
Tag (Outro): Song ending

In order for recording and mixing engineers to be able to effectively capture, edit and then mix music they must have a basic understanding of music, arrangements and instruments.

Instrumentation

Instrumentation refers to the actual instruments that are used in a song.  Musical elements / instruments are both rhythmic and harmonic, as even drums have musical pitch and a violin note has rhythm.

Commonly used instruments include:

Drums (Kick, Snare, Hat, Toms, Cymbals, and also Room Tracks)
Percussion (Conga, Bongo, Timbale, Clave, Maraca, Shaker, Clap, Go-Go, Cowbell, etc)
Bass (Upright, Electric, Synthesizer)
Guitar (Acoustic, 12-String, Electric, Distorted, Wah-Wah, etc)
Piano
Organ
Strings (Ensembles/Orchestras, Violin, Viola, Cello, Bass)
Horns (Trumpet, Trombone, French Horn, Tuba)
Woodwinds (Flute, Clarinet, Oboe, Bassoon, Saxophone)
Synthesizers & Drum Machines
Background Vocals
Lead Vocals
Lead Instruments (any of the above)

Certain instruments have particular sounds that make them optimal for specific song functions, such as a percussion instrument to make a beat. However, most instruments can perform the functions of others.

Rhythmic Elements

Rhythmic Elements are accentuated points along a repeating pulse. The pulse itself is a rhythmic element called the BEAT.

A BEAT is a repeated heavy point in time that you can feel with your body. A song’s TEMPO is how fast the beat is going. Tempo is measured in BPM (beats per minute). 

When the rhythm repeats, it is called a MEASURE or BAR. The DOWNBEAT is the first beat when the rhythm repeats (i.e., the "one" of "one – two – three – four – one – two – three – four").

Much music is made of repeating groups of four beats. When a note lasts for a whole measure it is called a WHOLE NOTE. Notes that last for half a measure (two beats of a four-beat measure) are called HALF NOTES. Notes that last only a quarter of a measure (a single beat of a four-beat measure) are called QUARTER NOTES. The counts "one – two – three – four" are each quarter-note beats.

An EIGHTH NOTE is half of a quarter-note beat, while a SIXTEENTH NOTE is a quarter of a quarter-note beat (there are 16 sixteenth notes in a measure). And so on…

A TRIPLET divides a note value into three equal parts; dividing a full four-beat measure into three equal notes like this is actually a half-note triplet.

TIME SIGNATURES show how many beats are in each measure and which note value gets one beat. If the beats feel as if they repeat after every fourth beat, the song most likely has a time signature of 4/4 (four quarter notes per measure). Waltzes are written with a time signature of 3/4. Some songs are in 5/4, 6/8, etc.
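Because tempo is just beats per minute, note durations fall out of simple arithmetic. A small sketch, assuming 4/4 time with the quarter note getting the beat:

# Note durations at a given tempo, assuming 4/4 time where the quarter
# note gets the beat. Purely illustrative.

def note_ms(tempo_bpm, fraction_of_whole):
    """Duration in milliseconds; fraction_of_whole = 1.0 for a whole note."""
    quarter_ms = 60_000 / tempo_bpm          # one beat
    return quarter_ms * 4 * fraction_of_whole

for name, frac in [("whole", 1), ("half", 1 / 2), ("quarter", 1 / 4),
                   ("eighth", 1 / 8), ("sixteenth", 1 / 16)]:
    print(f"{name:>9} note at 120 BPM: {note_ms(120, frac):7.1f} ms")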

Rhythmic Dynamics

Remember that DYNAMICS can occur in rhythms. People will naturally "lay back" or "push" at certain places in a song or a repeating rhythmic groove. Most rhythmic dynamics happen naturally (unnoticed rather than intentional) and often occur because a drummer is "leaning" a certain way (or even because they are not yet experienced enough to play with consistency).

For example, a punk rock drummer will tend to play with more energy than calculated thought, and as a result some of the drum hits may land ahead of the exact place they are intended for. This is why punk rock snare drums push the beat more often than big rock ballad snare drums.

If a certain rhythmic note is important in a style of music, a drummer may unconsciously emphasize that beat by playing it harder. Unless they begin the stroke earlier than the other notes, the extra effort required to play harder may actually cause the note to land slightly late, giving it a laid-back feeling that can actually make a beat feel heavier. Then again, the drummer may rush to hit that all-important note.

The end result is that naturally performed drum parts WILL contain certain internal dynamics rather than be precise and exact.

Further, there is important emotion expressed in rhythmic dynamics, which is why music made with drum machines that play each element exactly on the beat is often considered mechanical and “unnatural.” To compensate, composers often will add repeating loops of live drumming to their machine drums in order to add the missing rhythmic dynamics.
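As a toy illustration of that idea, here's a sketch that nudges grid-perfect hits by a few milliseconds to imitate a live player's push and lay-back. The offset ranges are assumptions, not measured values:

import random

# Toy "humanizer": move programmed drum hits slightly off the grid to
# mimic natural push and lay-back. Offset ranges are assumed values.

random.seed(1)  # repeatable output for the example

def humanize(hit_times_ms, push_ms=-4.0, layback_ms=6.0):
    return [t + random.uniform(push_ms, layback_ms) for t in hit_times_ms]

grid = [i * 500.0 for i in range(8)]   # quarter notes at 120 BPM
print([round(t, 1) for t in humanize(grid)])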

Drum loops can be tricky. A drum loop is a repeated drum phrase (usually one or two bars in length). Many drum loops contain rhythmic dynamics, and certain drum hits will be slightly off time. 

When using several drum loops, it is possible to create moments when drum parts are slightly off time in different directions. This can go beyond a rhythmic “smudge” and sound (or feel) like a mistake.

People who use multiple drum loops often shift their relative positions to minimize blatant problems. Then again, many people just don't care, and their compositional process is simply throwing things together until it kind of sounds cool.

Harmonic Elements

Harmonic Elements are tonal and have pitch.

Sound waves create PITCH (TONE). The higher the frequency of the sound wave, the higher the pitch. When a pitch is doubled in frequency, the resulting note has a similar but higher sound. This is called an OCTAVE. The pitch differences between octave points are mapped out into different SCALES.

Most cultures around the world use scales that have been made up from specific subdivisions of an octave. Some scales have developed along with regional musical instruments. There are “primitive” tribes that do not use a standard scale system at all (each tribe member tunes their instrument so it plays a note that sounds good compared to the chief’s note even if it clashes with another tribe member).

Western music is based on scales and chords made up from notes along those scales. There is a great amount of musical emotion in different combinations and sequences of musical notes, and in the way instruments approach and trail away from notes. Consider the emotions expressed in a human voice, saxophone, blues harp, violin, guitar or other instruments that play between notes or approach notes from above/below. 
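Western equal temperament divides each octave (a doubling of frequency) into 12 equal steps. A quick sketch, assuming the common A4 = 440 Hz reference:

# Equal-temperament note frequencies, assuming A4 = 440 Hz.
# Each semitone multiplies frequency by the twelfth root of two.

A4_HZ = 440.0

def note_freq(semitones_from_a4):
    return A4_HZ * 2 ** (semitones_from_a4 / 12)

print(f"A4: {note_freq(0):.2f} Hz")    # 440.00
print(f"A5: {note_freq(12):.2f} Hz")   # 880.00, one octave up
print(f"C5: {note_freq(3):.2f} Hz")    # 523.25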

Instruments such as piano get additional expression from dynamics, because the sound changes along with the volume as you play harder.

In Bulgaria there is a choral group that specializes in singing MICROTONES (tones between standard notes). The chords made with microtonal notes have a more varied expressiveness than chords made with only Western-scale notes.

Instruments

You will encounter various instruments that you will need to record, and record well. Some will be very easy, such as plugging in a bass or a synth. Some will be difficult, such as recording a quiet singer standing between a loud drummer and a Marshall Stack.

It helps to have an idea what the instrument should sound like in the end (which you learn by listening to “model” songs with specific sounds you want to emulate), but also to have an idea of how the instrument actually makes noise.  You need to know that a flute projects important sound from the top, and that shoving a mic into an instrument’s hole or flared end is not necessarily the right thing to do.

Research any instrument before recording it for the first time. Where does the sound come out? What part of the overall sound will it be expected to fill? Is the instrument a solo sound or part of an ensemble?

These things will influence any decisions you make. Remember to use any pictures or descriptions of mic techniques you see as something to try, not something to automatically do (even whatever you read here). 

Talk to the musicians and ask what they usually do to capture “their” sound. Many engineers do not do this, but rather just grunt at the musician while setting up the mic in the same old way. You might be surprised at what you hear, and just the act of asking makes the musician trust you a little bit more. 

Take what they say into consideration, and even set up what they usually do as an alternative to compare to if you have the extra mic and fader. Do not forget you’re capturing their sound, which they sometimes know well. Of course, expect the occasional person who sounds one way in their head and another way out their horn.

Walk around and move your head up and down around the instrument until you find a “sweet spot” (please use caution with drums and Marshall stacks).

Choose a microphone that will optimally capture the tonal characteristics you noticed are important when in the sweet spot, such as a bright sounding mic for cymbals rather than something boomy.

Place the mic where you thought sounded good, and move it if needed or if just curious. You can always go back to where you were, especially considering how easy it is to document with cell phone pics these days.

If you’re dealing with a direct plug such as with a bass, synth, computer (etc), you’ll need to make sure you are getting into your system the right way (often through a direct box). That’s it.

Once you have the instrument (from either mic or direct) in your input channel, you can now process with compression and EQ (if needed), and record the sound.

Bruce A. Miller is an acclaimed recording engineer who operates an independent recording studio and the BAM Audio School website.


An In-Depth Look At Microphone Cable Anatomy & Properties

Answers to key FAQs about mic cables and related issues

What is impedance?
Impedance is the AC (alternating current) version of the DC (direct current) term resistance, which is the opposition to electron current flow in a circuit and is expressed in ohms.

Impedance (often abbreviated as "Z") includes capacitive reactance and inductive reactance in addition to simple DC resistance.

Reactance depends upon the frequency of the signal flowing in the circuit.

Capacitive reactance increases as frequency decreases; inductive reactance increases as frequency increases. Because of this frequency dependence, impedance is not directly measurable with a multimeter the way DC resistance is.
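As a rough sketch of those relationships, here are the standard reactance formulas applied to a simple series model. The component values are arbitrary examples:

import math

# Standard reactance formulas and the impedance magnitude of a simple
# series R-L-C model. Component values are arbitrary examples.

def x_c(freq_hz, cap_farads):
    return 1 / (2 * math.pi * freq_hz * cap_farads)  # falls as frequency rises

def x_l(freq_hz, ind_henries):
    return 2 * math.pi * freq_hz * ind_henries       # rises with frequency

def z_series(r_ohms, freq_hz, cap_farads, ind_henries):
    return math.hypot(r_ohms, x_l(freq_hz, ind_henries) - x_c(freq_hz, cap_farads))

for f in (100, 1_000, 10_000):
    print(f"{f:>6} Hz: |Z| = {z_series(150, f, 1e-6, 1e-3):8.1f} ohms")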

What are the differences between high- and low-impedance microphones?
To answer this requires a little historical background. High-impedance microphones are capable of producing higher output voltages than low-impedance types. Until recently, “consumer” audio gear (small PA systems, home and semi-pro recording equipment, etc.) was always designed for high-Z mics because their relatively high output level required less amplification or gain.

The lower output of low-Z mics required the equipment manufacturer to use input transformers in front of the mic preamplifiers to step up the strength of the signal, which substantially increased the cost of the circuitry.

Hence, low-Z mics were rare outside of professional recording and broadcast studios. In these "big-budget" facilities, low impedance lines offered several big advantages. A high-Z mic's high source impedance (approximately 10,000 ohms) combines with the capacitive shunt reactance of the mic cable to form a low-pass filter which progressively cuts high frequencies. The severity of the loss is determined primarily by the length and construction of the cable.

The low source impedance (less than 200 ohms) of low-Z microphones proportionally reduces this high-frequency loss. Equally important, high-Z lines, with the high load impedances they demand, are much more susceptible to various forms of interference than low-Z lines, especially high-frequency noise and radio pickup. Both of these liabilities made cable runs longer than 15-20 feet a problem.
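To put numbers on that, the -3 dB corner of the filter is 1/(2πRC). A quick sketch, assuming a typical cable capacitance of about 30 pF per foot:

import math

# -3 dB corner of the low-pass filter formed by a mic's source impedance
# and the cable's shunt capacitance. 30 pF/ft is an assumed typical value.

def corner_hz(source_ohms, pf_per_ft, feet):
    c_total_farads = pf_per_ft * feet * 1e-12
    return 1 / (2 * math.pi * source_ohms * c_total_farads)

print(f"high-Z mic, 20 ft:  {corner_hz(10_000, 30, 20):>12,.0f} Hz")   # ~27 kHz
print(f"high-Z mic, 100 ft: {corner_hz(10_000, 30, 100):>12,.0f} Hz")  # ~5.3 kHz
print(f"low-Z mic, 100 ft:  {corner_hz(200, 30, 100):>12,.0f} Hz")     # ~265 kHz

The high-Z run goes from merely shaving the extreme top at 20 feet to audibly dull at 100 feet, while the low-Z corner stays far above the audio band.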

Isn’t the use of balanced lines the biggest advantage of low-impedance microphones? What is a balanced line?

Balanced lines are wonderful, but they are sometimes given credit for benefits that they are not actually responsible for. Balanced, unbalanced, low-impedance and high-impedance are all individual properties.

Many people erroneously refer to anything with a 3-pin XLR-type connector as “low impedance” and assume it to be “balanced.” Others call any line connecting two pieces of equipment with 1/4-inch phone jacks “high-Z.” In reality, a lot of equipment has unbalanced inputs and outputs that are carried on XLR connectors, and there are even more low-Z lines on phone jacks.

Medical instrumentation uses a lot of high-impedance balanced lines for sensors, and most line-level unbalanced outputs are very low-impedance.

Electrical systems need a reference point for their voltages. Generally referred to as common or ground, although it may not be actually connected with the earth, this reference remains at “zero volts” while the “hot” signal voltage “swings” positive (above) and negative (below) it. This is referred to as an unbalanced configuration.

Physically, the common may be a wire, a trace on a printed-circuit board, a metal chassis - virtually anything that conducts electricity. Ideally it is a perfect conductor - that is, it must have no resistance or impedance. In a cable connecting two pieces of equipment, the shield is used as signal common.

As the complexity and size of the system is increased, the imperfect conductivity of the common (ground) conductor inevitably causes problems. Since it is made of a real material, it must have some resistance, which must (Ohm’s Law says) cause voltage drop when current flows through it, which means it cannot be at a perfect “zero volts” at both ends. The larger the system and the greater the distances between the source and load, the less effective this unbalanced configuration becomes.

The voltages of a balanced line are not referenced to the ground or common. Instead, the signal is carried on a pair of conductors with the signal applied to this pair differentially. The signals are electrical “mirror images” of each other - their levels are the same, but their polarities are opposite.

In other words, as the applied signal “swings,” one conductor will be negative with respect to the common, the other will be positive. These polarities alternate with the frequency of the signal, and the total signal level is the difference between the two individual voltages.

For example, if one conductor is at +5 volts, the other will be at -5 volts, and the signal level is +5 volts minus -5 volts, or 10 volts. If, for some reason, the two conductors were both at +5 volts simultaneously, the level would be +5 volts minus +5 volts, which is zero volts. Very tricky!


Because of this differential signal transmission, two very valuable things happen when using balanced lines. First of all, each piece of equipment can have its circuitry referenced to its own common, because the interconnection of the equipment does not require that the commons are connected in order to move the signal around. This eliminates the major cause of a lot of noisy audio gremlins, ground loops.

Secondly, because the signal is differentially transmitted and received, any common-mode interference signal superimposed on the signal in the line will be carried by both sides at identical level and polarity. In other words, if the line has +5 volts of external noise induced, both conductors will have +5 volts of noise on them. This equals a total interference level of +5 volts minus +5 volts or zero volts. The interference cancels itself. This is called common-mode rejection.
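Here's a minimal numeric demonstration of that cancellation, using NumPy and made-up signal and hum levels:

import numpy as np

# Common-mode rejection in miniature: hum induced equally on both
# conductors vanishes when the receiver takes the difference.

t = np.linspace(0, 1e-3, 1000, endpoint=False)
signal = np.sin(2 * np.pi * 1000 * t)      # 1 kHz program material
hum = 0.5 * np.sin(2 * np.pi * 60 * t)     # interference on both legs

hot = signal + hum       # conductor carrying the signal
cold = -signal + hum     # conductor carrying the inverted signal

received = hot - cold    # differential receiver
print(np.allclose(received, 2 * signal))   # True: hum gone, level doubled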

There are several ways to balance lines.

Actually, the term “balanced” is very often used incorrectly to refer to lines that are actually floating. Properly speaking, a balanced line is one which has equal impedance from each side to ground.

An unbalanced signal may be derived from it by using one side of the pair as "hot" and ground as common. A floating line has no reference to ground, and must have one side of the line tied to common to "unfloat" it.

The input transformers once required by low-Z mic preamps also provided a floating input as long as neither side of the transformer’s primary winding was tied to common. This is where the “low-impedance-is-balanced” misconception began.

The use of balanced lines was actually just a by-product of the requirement for a transformer to step up the low signal level. Using modern low-noise integrated-circuit design, a low-Z mic preamp can be clean, quiet, balanced and a lot cheaper to build - without a transformer.

What are the basic parts of a high-Z microphone cable and what does each one do?
A high impedance mic has many of the traits of an electric guitar, so the cable used for it is generally a coaxial instrument cable. The “hot” center conductor is insulated with a high-quality dielectric; shielded electrostatically to reduce handling noise and triboelectric effects; shielded with a braid, serve, or foil which is also used as the current return path for the signal; and jacketed for protection.


What are the basic parts of a low-impedance microphone cable and what does each one do?
The basic cable construction for low-Z mic or balanced line applications is the shielded twisted pair. It consists of two copper conductors which are insulated, twisted together (often with fillers), shielded with copper, and jacketed.


What gauge and stranding should the two conductors be?
The amount of copper in any electrical cable is usually dictated by the amount of current it has to carry, or by the tensile strength it requires to perform without breaking. If we take the worst-case situation, where the cable is used for a line-level (+20 dBm) 600-ohm circuit, the current is a negligible 13 milliamperes (that's 13 thousandths of an ampere). The power in such a circuit is 100 milliwatts, or one-tenth of a watt. The current produced by a typical 150-ohm microphone connected to a 1,000-ohm preamp input is less than 10 microamperes (that's 10 millionths of an ampere), with power of less than a microwatt.
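Those figures are easy to check, since 0 dBm is one milliwatt:

import math

# Quick check of the figures above. 0 dBm = 1 mW, so a +20 dBm level
# into 600 ohms is 100 mW, and I = sqrt(P / R).

def dbm_to_ma(dbm, load_ohms):
    watts = 1e-3 * 10 ** (dbm / 10)
    return math.sqrt(watts / load_ohms) * 1e3

print(f"{dbm_to_ma(20, 600):.1f} mA")   # ~12.9 mA for a +20 dBm 600-ohm line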

By these figures it is apparent that not much copper is required to actually move signals around, except in applications demanding extremely long cable runs. Many low-impedance mic cables use 24 AWG conductors with excellent performance, and most multipair “snake” cables have 24 AWG (7 strands of 32 AWG) conductors.

Other things being equal, more individual strands in each conductor mean better longevity and flex life. Since singers using hand-held microphones can put a cable through several hours of tugging, twisting, straining and other abuse, these situations call for finer stranding and often larger conductors, sometimes as large as 18 or 20 AWG. However, the sonic properties of the cable may be compromised by using large conductors.


Why are the two conductors twisted together?
As previously explained, the interference-canceling common-mode rejection of the balanced line is based on the premise that the unwanted external noise is induced into both signal conductors equally.

Minimizing the distance between the two conductors by twisting them together helps to equalize their reception of external interference and improve the common-mode rejection ratio (CMRR) of the line.

The two conductors also form a sort of “loop antenna” for stray magnetic fields. The farther apart the two conductors are the larger the “antenna” becomes, and the more interference it picks up from sources like transformers, fluorescent lighting ballasts, SCR-chopped AC lines to stage lighting, etc.

Minimizing the loop area of the cable helps to reduce the unwanted hum and buzz from this type of interference, which the cable’s shield is almost totally ineffective against.

The distance between the twists is called the lay of the pair. Shortening the lay (increasing the number of twists) improves its common-mode rejection, and also improves its flexibility. The typical pair lay in microphone cables is about 3/4-inch to 1-1/2 inches. Shortening the pair lay uses more wire and more machine time to produce the same overall finished length, so of course it increases the cost of the cable.

What is “star-quad” cable?
This four-conductor-shielded configuration can best be thought of as two twisted pairs twisted together. Using four small conductors in place of two large ones allows the loop area of the cable to be further reduced and its rejection of electromagnetic interference (EMI) is improved by a factor of ten (20 dB). This makes star-quad cable very popular for microphones and balanced lines used in applications such as television production, where huge amounts of power cable for lighting and camera equipment surround the performers.


Does star-quad actually sound better?
When used for low-impedance microphones, star-quad construction substantially reduces the inductive reactance of the cable. Inductance was previously mentioned in discussing impedance. An inductor can be thought of as a resistor whose resistance increases as frequency increases.

Thus, series inductance has a low-pass filter characteristic, progressively attenuating high frequencies. While parallel capacitance, the enemy of high-frequency response in high-impedance instrument cable, is largely insignificant in low-impedance applications, series inductance (expressed in microHenries, or uH) is not. The inductance of a round conductor is largely independent of its diameter or gauge, and is not directly proportional to its length, either. Parallel inductors behave like parallel resistors: paralleling two inductors of equal value doesn’t double the inductance, it halves it.

In cable construction, using two 25 AWG conductors connected in parallel to replace each of the conductors of a 22 AWG twisted pair will result in the same DC resistance, but approximately half the series inductance. This will result in improved high-frequency performance: better clarity without the need for equalization to boost the high end.
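A tiny sketch of the arithmetic behind that claim; the resistance figures are approximate ohms per 1,000 feet for copper:

# Two equal conductors in parallel: resistance halves, and the series
# inductance is also roughly halved. Resistance figures are approximate
# ohms per 1,000 ft of copper wire.

def parallel(a, b):
    return a * b / (a + b)

r_25_awg = 32.4
r_22_awg = 16.1

print(parallel(r_25_awg, r_25_awg))  # ~16.2, matching a single 22 AWG conductor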

Also of significance is skin effect, a phenomenon that causes current flow in a round conductor to be concentrated more to the surface of the conductor at higher frequencies, almost as if it were a hollow tube. This increases the apparent resistance of the conductor at high frequencies, and also brings significant phase shift.


What is phase shift?
Phase shift is a term describing the displacement of two signals in time.

When we described the two sides of a balanced line as being of opposite polarity, we could have said that they are 180 degrees out of phase with each other.

Each time an AC waveform completes a cycle from zero to positive peak to zero to negative peak and back to zero, it travels through 360 degrees (just like a circle).

A simple 1 kHz (1,000 cycles per second) sine wave travels through this 360-degree rotation in one millisecond.

If we consider its starting point to be zero, it will reach its positive peak one-quarter of a millisecond later, cross zero in another one-quarter of a millisecond, reach its negative peak a quarter-millisecond after that, and return to zero after a fourth quarter of a millisecond has elapsed.

Thus, each quarter of a millisecond equals 90 degrees of phase difference. When two identical signals are in phase with one another, their zero crossings and peaks are the same, and summing (combining) the two will double the amplitude of the signal. When they are 180 degrees out of phase, summing them will result in cancellation of both signals.
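To see this numerically, here's a short NumPy sketch summing two 1 kHz sine waves at several phase offsets:

import numpy as np

# Summing two identical 1 kHz sine waves at different phase offsets:
# in phase the peak doubles; at 180 degrees they cancel.

t = np.linspace(0, 1e-3, 1000, endpoint=False)
ref = np.sin(2 * np.pi * 1000 * t)

for deg in (0, 90, 180):
    shifted = np.sin(2 * np.pi * 1000 * t + np.radians(deg))
    print(f"{deg:3d} degrees: summed peak = {np.max(np.abs(ref + shifted)):.2f}")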

This property is very straightforward when considering simple sine waves. Sine waves consist only of a single fundamental frequency and have no harmonics. Harmonics are multiples of the fundamental, and are the elements of which complex waveforms are composed.

An excellent example of complex waveforms is called music. The reason a middle C note on a piano sounds different from the same note played on a flute is because the two instruments generate different waveforms - the harmonics of the piano are present in different amounts and have different attack and decay characteristics than the harmonics of the flute.

When complex waveforms are traveling in a cable, it would be ideal if the amplitude and phase relationships they enter the cable with are the same as those they exit the cable with. When the effects of phase shift alter those relationships - when the upper harmonics that define the initial "pluck" of a string, for instance, are delayed with respect to the fundamental that forms the "body" of the note - a sort of subtle "smearing" begins to occur, and the sense of immediacy and realism of the music is diminished.

How can phase shift be minimized?
The phase lag caused by skin effect is one radian (about 57.3 degrees) per skin depth, and the effective skin depth of a conductor at a particular frequency is the same whether the conductor is very large or very small in diameter. For instance, the skin depth of a copper wire at 20 kHz is about .020 inches, while an 18 AWG conductor has a diameter of about .040 inches. This means that at frequencies from DC to 20 kHz, the full cross-sectional area of the conductor is utilized.

Because the skin depth (.020 inch) is never less than half the diameter of the conductor (.040 inch), there is never more than one radian of phase shift present. In short, star-quad cables seem to offer lower inductance and lower phase shift, both of which are parameters that directly affect the clarity and coherence of high-frequency complex waveforms.

Their inherently superior noise-rejection also reduces intermodulation distortion, a type which is particularly offensive because it produces “side-tones” not harmonically related to the fundamental. While the improvement may not be as dramatic as changing the microphone, an increasing number of audio professionals seem to be embracing the sonic benefits of star-quad construction.


What about the insulation? Does it affect the sound?
Even though the effects of cable capacitance are much less than those encountered in high-impedance applications, the use of low-loss, high-quality (low dielectric constant) insulation materials such as polyethylene and polypropylene is still preferred, especially when long cable runs are necessary.

Because of the desire to keep cable diameter to approximately 1/4 inch, the insulation thickness of a typical two-conductor microphone cable is generally about .020 inches, half that of a coaxial-type instrument cable. This relatively thin wall means that soldering requires good heat control to prevent melting.

For very thin (.010 inch) applications, cross-linked polyethylene insulation is sometimes used. The cross-linking process (similar to that used in manufacturing heat-shrinkable tubing) greatly reduces the problems of insulation meltdown and shrinkage during soldering.

Why does some cable have string-like fillers twisted with the conductors?
The primary use for fillers is to make the core of the cable round to eliminate convolution in the finished cable.

A twisted-pair is not round, and without fillers the finished cable will have an undulating, “wavy” appearance unless a very thick jacket is applied, which will greatly affect its flexibility and make it very difficult to strip.

A good example of convolution is found in the various thinly-jacketed twisted-pair cables used for pulling in conduit in permanent installations. Such cable is designed for economy and easy termination and so is not required to be round, only flexible and cheap.

Fillers also help to stabilize the cable's shape and strengthen it, allowing some of the tugging, twisting and other stresses encountered to be absorbed by the fillers rather than the conductors or shield.

Some special miniature cables used for "tie-clip" lavalier microphones use conductors that are literally copper strands wound around cores of synthetic Kevlar fiber. This cable is less than 1/8-inch in diameter, yet is enormously strong. (Unfortunately, it is also very difficult to terminate because of the necessity of sorting out the unsolderable Kevlar from the solderable copper strands.)

Why don’t low-impedance cables require electrostatic shielding like high-impedance cables?
The "noise-reducing" semiconductive tape wrap or conductive PVC layers used on coaxial cable are used to "drain off" static electricity generated by the shield rubbing against the inner conductor insulation. When the source impedance is very high, these static charges will be heard as "crackling" noises as the cable is flexed and handled. A low source impedance has a damping effect on this type of static generation which minimizes its effect.

There are cables available which use conductive textile or plastic shields for 100 percent coverage, with copper drain wires or very low-coverage copper braid added for ease of termination and low DC resistance. While this type of construction is very flexible, its shielding effectiveness suffers greatly as frequency increases, offering very little effect above 10 kHz because of its low conductivity.

What about handling noise?
The triboelectric effect that causes impact-related “slapping” noise as the cable hits the stage or is stepped upon during use is related to capacitance, specifically the change in capacitance that takes place as the insulation or dielectric is deformed. This causes it to behave as a crude piezoelectric transducer, a relative of an electret condenser microphone.

Because such transducers are extremely high-impedance sources, the drastic impedance mismatch presented by a low-impedance microphone and its preamp or input transformer makes the extraneous noise generated by triboelectric effects negligible except in cases involving very low-level signals.

In low-impedance applications, handling noise is best addressed by using soft, impact-absorbing insulation and jacket materials in a very solid construction with ample fillers to ensure that the cable retains its shape. Note that it is totally invalid to evaluate the handling noise of a low-impedance mic cable without using a resistive termination to simulate the microphone element. A cable with no termination essentially presents an infinitely high source impedance, a situation that is beyond worst-case!

What special considerations should be given to shielding low-impedance cables?
Low-impedance microphone cables are shielded using the same basic methods as coaxial-type instrument cables.

Woven copper braid generally offers the best high-frequency shielding performance and protection from radio-frequency interference (RFI).

This is due to the very high electrical conductivity of the braid, and to its low-inductance, self-shorting configuration. Its disadvantages are primarily economic; it is the most expensive to manufacture and the hardest to terminate.

Spiral-wrapped copper serve shields are very inductive in nature, as they resemble a long coil of wire when extended. This can compromise high-frequency shielding and is not recommended when effective shielding above 100 kHz is required. Serve shields are relatively inexpensive and easy to terminate, making them a popular choice for medium-quality cables.

Foil-shielded cable is very heavily used for permanent installation work and for portable multipair "snake" cables. The extremely low cost, light weight and slim profile make foil very advantageous in applications involving pulling cable into conduit.

In these cases the conduit (if metallic and properly grounded) can greatly enhance the RFI and EMI shielding properties of the thin mylar/aluminum foil generally used. The 100 percent coverage of the foil shield, which should be of great benefit at radio frequencies, is somewhat compromised by the inductive nature of the copper drain wire typically used for terminating it.

At low frequencies, performance is hampered by the relatively low conductivity of the foil/drain configuration. In applications involving repeated flexing and coiling, the metallized mylar tape will begin to lose its aluminum particles, opening up gaps in the shielding. This can be a particular problem with multipair cable used for touring systems, where the shield breakdown may lead to increased crosstalk between channels and to annoying radio pickup problems.

Does the use of 48-volt phantom power affect the performance of the shield?

The current typically drawn by a phantom-powered condenser microphone is generally limited by 6.81 kohm resistors, resulting in a current of less than 15 mA total.

This is not a significant factor unless the shield begins to break down mechanically with use: tearing or fraying are possible, which could create intermittent changes in shield resistance. This has led a few professionals to prefer three-conductor microphone cables, with the common carried by a drain wire in addition to the shield.
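The worst-case figure is easy to derive: with the microphone acting as a dead short, 48 volts drives current through the two 6.81 kohm feed resistors in parallel. A quick check:

# Worst-case phantom power current: 48 V through the two 6.81 kohm
# feed resistors in parallel (about 3.4 kohm effective).

V_PHANTOM = 48.0
R_FEED = 6810.0

i_max_ma = V_PHANTOM / (R_FEED / 2) * 1000
print(f"{i_max_ma:.1f} mA maximum")   # ~14.1 mA, under the 15 mA quoted above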

BIBLIOGRAPHY
• Ballou, Glen, ed., Handbook for Sound Engineers: The New Audio Cyclopedia, Howard W. Sams and Co., Indianapolis, 1987.
• Cable Shield Performance and Selection Guide, Belden Electronic Wire and Cable, 1983.
• Colloms, Martin, “Crystals: Linear and Large,” Hi-Fi News and Record Review, November 1984.
• Cooke, Nelson M. and Herbert F. R. Adams, Basic Mathematics for Electronics, McGraw-Hill, Inc., New York, 1970.
• Davis, Gary and Ralph Jones, Sound Reinforcement Handbook, Hal Leonard Publishing Corp., Milwaukee, 1987.
• Electronic Wire and Cable Catalog E-100, American Insulated Wire Corp., 1984.
• Fause, Ken, “Shielding, Grounding and Safety,” Recording Engineer/Producer, circa 1980.
• Ford, Hugh, "Audio Cables," Studio Sound, November 1980.
• Guide to Wire and Cable Construction, American Insulated Wire Corp., 1981.
• Grundy, Albert, “Grounding and Shielding Revisited,” dB, October 1980.
• Jung, Walt and Dick Marsh, “Pooge-2: A Mod Symphony for Your Hafler DH200 or Other Power Amplifiers,” The Audio Amateur, 4/1981.
• Maynard, Harry, "Speaker Cables," Radio-Electronics, December 1978.
• Miller, Paul, “Audio Cable: The Neglected Component,” dB, December 1978.
• Morgen, Bruce, "Shield The Cable!," Electronic Products, August 15, 1983.
• Morrison, Ralph, Grounding and Shielding Techniques in Instrumentation, John Wiley and Sons, New York, 1977.
• Ott, Henry W., Noise Reduction in Electronic Systems, John Wiley and Sons, New York, 1976.
• Ruck, Bill, “Current Thoughts on Wire,” The Audio Amateur, 4/82.

This article contributed by Pro Co Sound.


Church Sound: How To Create A Song Mix Blueprint in Five Steps

Your mixes will come together a lot faster and ultimately sound better
This article is provided by Behind The Mixer.

 


Have you looked at the set list for next weekend? Do you have any idea what songs you’ll be mixing? The standards, right? 

A worship team worth its weight in salt (that’s a lot of salt) will be rotating in new songs now and then. The musicians will practice their respective parts, the worship leader will have an arrangement selected, and as a team they will practice the song until it’s good enough for playing for the congregation. 

You are the final musician on that team, mixing all of their sounds together into a song lifted up in worship. What have you done to learn that song?

Following are the steps I take whenever I see a new song on the set list. I've mentioned before the importance of getting a copy of the recording the band will be using as their blueprint.

This list goes way beyond that. It’s a way of creating your own mix blueprint. It’s a way of ensuring you are just as prepared as the musicians when you mix the song for the first time.

1. Listen To The Song

Get a copy of the version whose style and arrangement the band is modeling. The worship leader will likely tell you something like "we're doing the song '10,000 Reasons' by Matt Redman in the same way he has it on the 10k Reasons album." You can jump onto Spotify or YouTube and look up the same version if you don't already happen to have it in your personal music collection.

Listen a few times to get the general overall song feel. Is it slow or fast? Simple or complex? Does it have a big sound or a “small set” feel? Get the big picture.

2. Create A Song Breakout Order

From the musical side of things, a song is arranged into several common areas. You might think of this as the verses and the chorus.  For your blueprint, start with the following six areas. This list can be expanded as I’ll soon discuss, but for now this is the best place to start.

Intro: Song intros can start in many different ways. It can be full on instruments, a slow drum beat, a rhythm guitar, or even a scripture reading over the instruments.

Verse: The verses of a song tend to have the same arrangement but can have a different number of instruments as a means of providing song movement.

Chorus: Choruses, like verses, can have slightly altered arrangements. A common arrangement change is the last chorus being sung without any instruments.

Bridge: Not all songs have a bridge. The bridge is often used to contrast with the verse/chorus and prepare for the return of the verse/chorus. It can have a time change and even a key change.

Instrumental: Instrumental sections of a song can be a few measures or it can be a long passage, depending on the arrangement.

Outro: The outro can have the same variety as the intro, or there might be no outro at all; for example, the song ends immediately after the last chorus.

3. Listen And Fill Out The Breakout Basics

You know the general feel and flow of the song; now you need to sketch out the basic outline. You will need to adjust your breakout order if you have verse and chorus differences.

For example, the second verse might have a different arrangement than the first verse. If this is the case, modify your notes such as:

Verse 1: Drums come in with only the snare and hi-hat

Verse 2: Full on instruments

Consider this example of breakout notes:

Intro: solo piano with singer reading a passage of scripture

Verse 1: Drums and bass added

Verse 2+: All instruments with only lead singer

Chorus: Backing vocalists used only in the chorus

Bridge: N/A

Instrumental: Piano over other instruments

Outro: Ends with acoustic guitar and piano
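If you'd rather keep these notes somewhere more durable than a scrap of paper, a simple structure works well. This is just one illustrative way to organize the same notes; the layout and field names are my own, not part of any standard:

# One possible way to keep breakout notes structured; the layout and
# field names here are illustrative assumptions, not a standard.

song_blueprint = {
    "title": "10,000 Reasons",
    "reference": "Matt Redman album version",
    "sections": [
        ("Intro", "Solo piano with singer reading scripture"),
        ("Verse 1", "Drums and bass added"),
        ("Verse 2+", "All instruments with only lead singer"),
        ("Chorus", "Backing vocalists used only in the chorus"),
        ("Instrumental", "Piano over other instruments"),
        ("Outro", "Ends with acoustic guitar and piano"),
    ],
}

for section, notes in song_blueprint["sections"]:
    print(f"{section:>12}: {notes}")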

4. Listen For The Mix Details

It’s time to focus in on the mix details. Consider this sample of a breakout:

Intro: Piano leads/sits on top of rhythm acoustic guitar w/very heavy overall acoustic feel.

Verse: Drums and bass used in a gentle supportive way. Both instruments sitting far back in the mix. No backing singers. Snare distant in mix.

Chorus: Backing vocalists singing at same volume with lead singer (singing in harmony)

Bridge: N/A

Instrumental: Piano dominates the instrumental, push volume. Piano sounds bright.

Outro: Piano and acoustic guitar with piano ending first and then acoustic guitar finishes the last few bars of the song.

Note: Studio engineer/producer Bobby Owsinski has a short article here on the questions he asks himself on how he wants to create a song arrangement. While he’s focusing on creating the new song for the FIRST time, they are good questions that can be applied to listening to a song as part of your mix prep.

In this step, you are noting where the instruments and vocals sit in the mix. You should have also noted any mix points, like “piano sounds bright.” 

You don’t need to write down, “expect a 560 Hz cut on the electric guitar” but you should write enough that describes what you’d expect to mix, if it’s a bit out of the ordinary or worth noting. 

For example, in the song "10,000 Reasons," there is a distinct tom hit three times in a row. I heard the tom sound described as having a "tribal drum" sound. That tells me it needs to be upfront in the mix, and it tells me how to mix it.

5. Pick Out The Effects

This is the last step in creating your mix blueprint. Listen to the song and look for the ways in which effects are used. Then make a list of the instruments/vocals which have those effects used and describe how they sounded.

For many worship bands, the effects will stay the same throughout the song but if you want to copy an arrangement with effects changes, then go for it.

The Take Away

The musicians put in a lot of time preparing for the church service…and if they don’t, they should. You need to put in time preparing your mixing plans when a new song comes along. 

Listen to a copy of the song for the general feel. Create your breakout list with your song basics. Then go back and add in your mix notes. 

It’s really nice to stand behind the mixer during practice and look down at my mix notes for a new song. Your mixes will come together a lot faster and ultimately sound better because of your extra planning.

Ready to learn and laugh? Chris Huff writes about the world of church audio at Behind The Mixer. He covers everything from audio fundamentals to dealing with musicians.  He can even tell you the signs the sound guy is having a mental breakdown.


World’s Biggest Studio Monitors? Thoughts On Modern Loudspeakers

Philosophical changes in the way we put such technology to work?

In many ways, loudspeaker technology is the primary limiting factor in sound reinforcement system design and performance.

Transducers are always problematic, having to comply with the laws of thermodynamics and all. Not only are they inefficient at turning electric energy into acoustic energy, they also add a significant amount of distortion to the signal during the process.

Some years ago I was in Washington, D.C., doing research for a potential patent for a vibration reducing device, and I came across some of the early patents for dynamic loudspeaker drivers from about 100 years ago. Guess what? They looked basically the same as what we have now, and were certainly based on the same principles.

It made me realize that, like the internal combustion engine, loudspeakers as we know them have been around for a long time, and are in a constant state of refinement because we simply haven’t developed anything better yet.

But what about all of that refinement? Indeed, as stable as the technology is in many ways, loudspeakers have been radically transformed, particularly over the past two decades.

Inside the Box
In my mind, the first big evolution came several decades ago in the form of cabinet design. Once designers had a handle on the acoustic properties of the transducers themselves, we began to see very good loudspeaker designs that took advantage of the strengths and minimized the weaknesses of the various drivers. Crossovers followed suit, becoming ever more sophisticated.

Next came the move to active designs, led by Meyer Sound and others during the late 1970s and early ‘80s. This was the first real step towards developing a complete system between amplifiers, crossovers, cabinets and drivers. I was fortunate enough to tour in the 1990s with Meyer Sound MSL3s, UPA1s, USWs and USMs—great loudspeakers (particularly for the day) that made my job much easier.

Next was the line array revolution, led by L-Acoustics, further changing the game. Certainly, line arrays aren’t the ideal solution for all applications, but they’ve lightened the load of countless tours while delivering quality sound to millions around the world since their introduction.

By the late 1990s, I noticed two things that were primed to have an effect on stage microphones. First, the introduction of in-ear monitoring systems dramatically lowered stage volume levels, improving signal-to-noise ratio while also accommodating the deployment of better microphones—even condensers—on live stages. Second, improved loudspeaker designs meant that those better microphones, no longer confined to the studio, could now be clearly heard by a live audience. 

As a result, I worked with the team at Neumann (my employer at the time) to come up with the KMS105 vocal microphone. There were similar models developed by Shure, Audix, and others, and we were all responding to the newfound fidelity of the loudspeaker systems. Artists like Norah Jones, Diana Krall, Tori Amos, Sarah McLachlan and many others were able to deliver a “studio-quality” experience to their audiences. I know—I was there! At the time, it really seemed like a breakthrough, probably because, well, it was.

DSP Cometh
Now we’re seeing a new generation of loudspeaker technology that more than ever takes advantage of the wonders of digital signal processing. However, this work has been ongoing for some time now.

Ever heard of the KF900 designed by Dave Gunness for EAW in the mid-1990s? It was arguably the first attempt at controlling every aspect of large-scale loudspeaker system performance via DSP.

Fast forward to 2011 when the Martin Audio MLA (Multi-cellular Line Array) system hit the market. It was the first time I’d heard such an elaborate system (each driver matched with its own signal processing line).

My thought? Wow! Not only does it sound great, but it provides an amazing degree of coverage control. Also, from what I'm told, the new Anya system from EAW, with its implementation of DSP, produces impressive results as well, and these aren't the only examples.

We’ve now arrived at a place where most modern loudspeaker systems for sound reinforcement, in the words of many, “sound like big studio monitors.”

I agree—they’re more neutral, with less distortion and better pattern control, than ever. And this brings about some interesting philosophical changes in the way we might put such technology to work.

Signature Sound?
For decades, there’s been a relationship between certain mixers, the sound companies they choose, and the loudspeaker systems they run.

There are many reasons for this, not the least of which is familiarity with a certain approach to the way the loudspeakers sound. Audiences not only receive the sound of the band on stage, but the signature sound of the mixer, with the loudspeakers an integral part of that sound.

Like it or not, there have always been anomalies specific to those loudspeakers, reflecting the preferences of the designer. Some emphasized "even coverage in the midrange" while others pushed "linear phase" and still others went for efficiency. And on and on… Each of these strengths and tradeoffs was part of why we chose those products.

Now, however, due to these newer design philosophies, the sonic signature of a box may no longer be such a key part of the selection criteria that also includes rigging, truck pack, weight issues, powering considerations and the like.

The sound of today’s latest models might be neutral enough that it is no longer a major tradeoff. We can expect most of these loudspeakers to sound A) amazing, and B) neutral, so we can put our own personal stamp on the mix without having to fight an anomaly or take some kind of inherent sonic signature as given.

To me, this is an amazing development and stands to make it possible for all of us to develop the best mixes of our careers. Even better, I think we can expect it to continue to get better, with even lower distortion, more coverage control, and probably smaller, lighter, and more efficient boxes. In short: what a great time to be in this business.

Karl Winkler is director of business development at Lectrosonics and has worked in professional audio for more than 20 years.

{extended}
Posted by Keith Clark on 11/04 at 11:26 AM

Friday, November 01, 2013

RE/P Files: Quincy Jones & Bruce Swedien—The Consummate Production Team

Talking with the "dynamic duo" in October 1982

From the archives of the late, great Recording Engineer/Producer (RE/P) magazine, this feature offers an interesting discussion with a true “dynamic duo” of the recording world. Quincy Jones is interviewed first, followed by Bruce Swedien. This article dates back to October 1982. The text is presented along with the original graphics.

If the word “professionalism” can be epitomized by one of the most successful producers currently working in the recording industry, that man must surely be Quincy Jones.

The reports of his humanity, care and response to the needs of his recording “family,” and an almost telepathic rapport with his favorite engineer, Bruce Swedien, truly make Quincy Jones a consummate producer.

During the many session hours that R-e/p spent with Quincy and Bruce in the studio, it became readily apparent that their complementary skills — Quincy’s proven track record as a musician, composer, arranger, and record producer, married with Bruce’s mastery of the recording process — have resulted in a production team whose numerous talents overlap to a remarkable degree.

Having worked with Bruce Swedien on so many innovative album sessions, including Michael Jackson’s Off The Wall, George Benson’s Give Me The Night, The Dude, and Donna Summer’s Summer of ‘82, it came as no surprise to anyone in the industry that Quincy Jones should make such a clean sweep of this year’s Grammys, collecting a total of seven awards, including five for The Dude alone.

The following conversations with this illustrious production team were conducted during tracking dates for Michael Jackson’s upcoming album Thriller, at Westlake Studios, Los Angeles.

This is the first in a two-part series of conversations between R-e/p, Bruce Swedien, and Quincy Jones. Stay tuned for the next installment, in which R-e/p’s Jimmy Stewart speaks with Bruce Swedien.

R-e/p (Jimmy Stewart): How do you first get involved with a particular recording project? For example . . . Michael Jackson.

Quincy Jones: We were working on The Wiz together, and Michael started to talk about me producing his album. I started to see Michael’s way of working as a human being, and how he deals with creative things; his discipline in a medium he had never worked in before.

I think that’s really the bottom line of all of this. How you relate to other human beings and build a rapport is also important to me; there’s an energy, a great feeling, when it happens between creative people.

I’ve been in some instances where I have admired an artist’s ability, but couldn’t get it together with them as a human being. To truly do a great job of producing an artist, you must be on the same frequency level. It has to happen before you start to talk about songs.

R-e/p (Jimmy Stewart): Then the important aspect, to your mind, is fostering a family feel during a project?

Quincy Jones: Yes. It’s a very personal relationship that lets the love come through. Being on the other side of the glass is a very funny position — you’re the traffic director of another person’s soul. If it’s blind faith, there’s no end to how high you can reach musically.

R-e/p (Jimmy Stewart): Is the special rapport you establish with an artist based on them saying something unique that triggers off an area in your creative mind?

Quincy Jones: That’s the abstract part which is so exciting. I consider that there are two schools of producing. The first necessitates that you totally reinforce the artist’s musical aspirations.

The other school is akin to being a film director who would like the right to pick the material.

As to what choice of production style I would adopt, your observations and perceptions have to be very keen. You have to be able to crawl into that artist, and feel every side of his personality — to see how many degrees they have to it, and what their limitations are.

R-e/p: Once that working rapport has been established, how do you plan the actual recording project?

QJ: I think you have to dig down really to where you think the holes are in that artist’s past career. I’ll say to myself, “I’ve never heard him sing this kind of song, or express that kind of emotion.”

Once you obtain an abstract concept or direction, it’s good to talk about it with the artist to see what his feelings are and if you’re on the right track. In essence, I help the artist discover more of himself.

R-e/p: Do you become involved with the selection of songs for the album?

QJ: On average, I listen to maybe 800 to 900 tapes per album. It takes a lot of energy! I hear songs at the demo stage, and would like to think that the songwriter is open to suggestions.

If I say, “We need a C section,” or “Why don’t we double up this section,” the writers with whom I’ve had the most success must be mature enough and professional enough to say, “Okay, I’m not going to be defensive about any suggestion you make.”

R-e/p: So what do you listen for in a song? The lyrics, melody, arrangement, instrumentation ...

QJ: I listen for something that will make the hair on my right arm rise. That’s when you get into the mystery of music. It’s something that makes both musical and emotional sense at the same time: where melodically it has something that resembles a good melody.

Again, that’s intangible too, because it’s in the ears of the listener. So basically I’m saying… it transcends analysis. A good tune just does something to me.

R-e/p: Once the songs have been sorted, what runs through your mind prior to the studio session dates?

QJ: I try to get the feeling that I’m going into the studio for the first time, every time. You have to do that, because if we got to a stage where Bruce Swedien and I had a specific way of recording, it wouldn’t work for us.

I’m sure some things overlap, because that’s part of our personality, but we try to approach it like every time is the first time: we’re going to try something so that we don’t get into routine procedures.

With Rufus and Chaka Khan I’ll do one kind of thing, where we will have rehearsals at their home, and talk about things. Maybe even come into the session with everybody and do it like “Polaroids.”

That way you can hear what everything sounds like rough, and feel what the density, structure and contour of the song is all about.

Other times, like with The Brothers Johnson, we used to go in with just a rhythm machine, guitar and bass, and do it that way. We did the Donna Summer album with a drum machine and synthesizer, so that I could really focus on just the material. But with Bruce Springsteen everyone played live, as in a concert.

For George Benson’s album, Lee Ritenour came over and helped us with different guitar equipment to get some new sounds.

At the same time that Lee was there dealing with the equipment, and George was trying it out, Bruce Swedien came over for a whole week to just listen to George with his instrumentals and vocals, like a screen test.

R-e/p: You have obviously established a close affinity in the studio with Bruce Swedien.

How do the pair of you interact with one another, and how does he make the moves with you?

QJ: The thing is, what’s great about working with Bruce is that I like him as a human being. In a funny way, we have the same kind of background.

The first record we did together was probably Dinah Washington. During that period of time we recorded every big band in the business. We did a lot of R&B in Chicago in those days ... a lot of big records.

Bruce’s first Grammy nomination was in 1962 for Big Girls Don’t Cry. He studied piano for eight years, and did electrical engineering in school.

Along the way he recorded Fritz Reiner and the Chicago Symphony Orchestra. And a lot of his time was also spent recording commercials.

So, from the sound aspect, and the musical aspect, the two of us kind of cover 360 degrees . . . well at least 40. We feel comfortable in any musical environment.

Bruce handled the pre-recording and shoot. He also designed some of the equipment for the location sound, did the post scoring, the dubbing and the soundtrack album.

To do all of it, that’s unheard of! Usually there are three to four different people to handle all those facets.

R-e/p: Is there a standard procedure you use for recording the various parts of a song?

QJ: Each tune is different. “State of Independence” from the Donna Summer album is a good example of a particular process I might use.

We started with a Linn Drum Machine, and created the patterns for different sections. Then we created the blueprint, with all the fills and percussion throughout the whole song.

From the Linn, we went through a Roland MicroComposer, and then to a pair of Roland Jupiter 8 synthesizers locked to it. The patterns were pads and sequencer-type elements. Then we programmed the Minimoog to play the bass line.

The programs were all linked together and driven by the Roland MicroComposer using sync codes. The program information is stored in the Linn’s memory, and on the MicroComposer’s cassette.

At this point all we had to do was push the button, and the song would play.

Once it sounds right we record the structured tune on tape, which saves time since you don’t have to record these elements singly on tape with cutting and editing. This blueprinting method works great when you’re not sure of the final arrangement of the tune.

We can deal with between three and five types of codes, including SMPTE on the multitrack.

With all these codes, we have to watch the record level to make sure it triggers the instrument properly. Sometimes we had to change EQ and level differences to make sure we got it right.

R-e/p: Do you try and work in the room with the musicians, or stay in the control room with Bruce?

QJ: I like to work out in the room with each player, running the chart down, and guiding the feel of the tune.

We will usually run it down once, then I’ll get behind the glass to hear the balance and what is coming through the monitor speakers, which is the way it will be recorded and played back.

Once I get the foundation of the tune on tape, and know it’s solid and right, it is easy for me to lay those other elements to the song. It’s the song itself that’s the most important element we are dealing with.

R-e/p: Any particular “tricks of the trade” that you’ve developed over the years for capturing the sounds on tape?

QJ: Bruce is very careful with the bass and vocals, and we try to put the signal through with as little electronics as possible.

In some cases, we may bypass the console altogether and go direct to the tape machine.

Any processing, in effect, is some form of signal degradation, but you are making up for it by adding some other quality you feel is necessary — we always think of these considerations.

Bruce has some special direct boxes for feeding a signal direct to the multi-track, and which minimally affect the signal.

With a synthesizer we very often can go line-level directly to the machine, while with the bass you need a pre-amp to bring it up to a hot level.

Lots of times we will avoid using voltage-controlled amplifiers, because there will be less signal coloration. Also, if possible we avoid using equalization. Our rules are to be careful, and pay close attention to the signal quality.

R-e/p: The rhythm section is often considered to be the “glue” that holds a track together. What do you listen for when tracking the rhythm section?

QJ: I listen to the feel of the music, and the way the players are relating to that feel. My energy is directed to telling the players what I want from them to give the music its emotional content, and Bruce will interpret technically the best way in which to capture the sound on tape.

And we may try something new or different to highlight that musical character. Because Bruce has a good musical background, he is an “interpreter” that is part of the musical flow.

I like players who have a jazzman’s approach to playing. They have learned to play by jamming with lots of different people, and you can push them to their limits.

I don’t like to get stuck in patterns, so I need players who can quickly adjust to changes in feel. They must also be able to tell a story through their instrument. I look for players who can do it all! [Laughter]

R-e/p: You obviously have a keenly evolved sense of preparation for a recording project. How do you go about planning a typical day in the studio?

QJ: We do our homework after we leave the studio. Bruce will always have a tracking date planned out, with track assignments for the instrumentation, and so on.

For overdubbing, he will work out how the work-tape system will be structured, and Matt [Forger] our assistant will be responsible for carrying out that task.

I zero in on what my day’s work is going to be by listening to the musical elements; how they interact and work in the song in my listening room at home.

Bruce does the same by working out in his mind the best method of capturing the music, and structuring these elements so they can be used in future overdubbing and mixing.

I keep a folder for each tune, and make notes as the tune progresses. It may be that changing a stereo image to mono is one way to strengthen an element: stereo for space; mono for impact.

If it’s a wrong instrument or color it will be redone. Bruce understands the music and the musical balance, and never loses his perspective.

Our communication after all these years working together is very spontaneous. This is one of the reasons for our success!

R-e/p: It’s obviously important to you that Bruce is able to read music. How does this help you in the studio?

QJ: The way we work with music charts, I can get to any part of the tune. It’s fast for drop-ins, and you never end up making a mistake. Bruce will make notes on his music chart to be used later in the mix.

R-e/p: How often do you listen to a work cassette during an album session?

QJ: I’ll listen over and over again to a song until it’s in my bones. Some songs have just a chord progression and no tune.

Others may be a hook phrase and a groove, and sometimes the song may call for a lot of colors. Each song is different . . . when it’s played on the radio and jumps, I’m happy.

To keep the session vibe up, I use nick names for the guys I work with: “Lilly” for Michael Boddicker; “Mouse” for Greg Phillinganes; “Boot” for Louis Johnson; “Worms” for Rod Temperton.

And Bruce has many nicknames; it depends upon the intensity in the control room.

If things are going a little rough and I need a hired gun, I call Bruce “Slim”! [Smiles across room at Bruce Swedien]

And the way I keep in touch with the tracking musicians is to use slang: “Anchovy” is a mess up; “Welfare Sound” is when you haven’t warmed up to the track or the tune; “Land Mines” are tough phrases in an arrangement.

R-e/p: How do you gauge that a track is happening in the control room?

QJ: I listen on Auratones for energy and performance at about 90 dB SPL. I’m coming from a radio listener concept.

I have two speakers set up in front of my producer’s desk. I don’t have to ask Bruce to move so I can listen to his set of speakers, and we never play the two pairs of speakers at the same time. When it’s a great take you can see through it!

R-e/p: With such wide experience, how do you feel about digital recording?

QJ: There’s a quality that I think is necessary to have in a record, and digital sometimes gets a little too squeaky clean for me. But I know it’s going to improve, because it’s a wonderful direction.

R-e/p: With album sessions becoming more and more complicated, both technically as well as artistically, do you think a producer has to be a good arranger too?

QJ: I don’t know, because everybody produces with his strength.

That ability can come from the strength of an engineer, player, singer, instrumentalist, arranger, or a combination of these things.

R-e/p: As founder and president of your own record company, Qwest, do you find it hard sometimes to combine the creative ability of a producer, with the business side of running a label?

QJ: Let me give you some background. In 1960 I got in trouble with a jazz band I had on tour, and when I came home from Europe with my tail between my legs, I took a job with Mercury Records for about seven years, in A&R, and eventually as vice president.

During the course of that time I had to understand a whole different area of the record business that I wasn’t even aware of before.

It was a big company because Mercury merged with Philips, which is now Polygram, and we started Philips Records in this country.

It was an incredible education, because I used to think that all these companies get together once a week to plan how to get new artists on the label.

You should be so lucky that you get past being an IBM number on a computer with a profit and loss under your name or code number. That gave me an insight into understanding what corporate anatomy was all about.

Understanding the rules of the game is important for a producer with a huge company like Philips, which is dealing with raw products, television sets, vacuum cleaners, and all the rest. At that time we were doing $82 million a year worldwide, and music was only about 2% of the total.

R-e/p: So how do you communicate with the business person?

QJ: Somewhere along the way it’s got to make sense if it’s going to cost money. If you want to go to Africa and make a drum record, for example, you’re going to have to figure out how to get it done for the people who put up the money.

Somewhere along with your creative process you have to ‘scope out what the situation is, get your priorities straight, and don’t let that interfere with your creativity.

If they put a pile of money right in front of you, there’s no way to correlate the essence of what that means, and yet still tie it into the creation of music.

Being a record company president is a lot of responsibility, but it’s going to be okay. To become a successful record company president, you have to apply and reinforce your creative side with a business side, but you can’t lead with the business side.

Jimmy Stewart’s interview with Bruce Swedien follows.

R-e/p: How do you see your role as engineer: working behind the console and handling the technical side of the recording process?

Bruce Swedien: Well, I guess I have to go back just a little bit in my background to really answer that question.

Number one, working in Chicago in the jingle field was tremendous training for getting a very fast mix, and being ready to roll because, quite frequently, jingle sessions only last an hour.

I recorded all the Marlboro spots, where it wasn’t unusual to have a 40-piece orchestra scheduled for nine o’clock downbeat, and literally be ready to roll at 9:05.

And when the band is rehearsing, I’m getting little balances within the sections: when the rhythm section is running a certain thing down, I’ll use that time to get the rhythm balance ready. It happens very quickly.

I guess I learned a lot about not wasting any movement or motion from the musicians in Chicago. In the early days of the jingle business - about 1958 through ’62 - I worked with probably some of the finest musicians in Chicago at that time.

They were masters at making the most out of one little phrase. As they were putting the balance together and rehearsing parts, I would be getting it together very quickly behind the console. That was really great training for me.

R-e/p: Too much of an emphasis on the technical side of recording is often said to intimidate an artist in a studio.

How do you try and get into the musical groove with the musicians and the producer?

Bruce Swedien: You should be prepared down to the last detail, and get to the studio early. Start setting up early; there will always be things to do that you’re not apt to think of unless you have enough time.

If your session starts at 9:00 AM, be there at 8:00 AM - give yourself at least an hour to set up and prepare the average sized session.

Reduce as much of the routine of your work to regular habit, and always do each job associated with the session in the same order. By committing all these mundane mechanical aspects of recording to habit, your mind will be free to think about the creative facets of your work.

R-e/p: What type of musician do you like to record?

Bruce: A musician who gives it up… doesn’t hold back. Sometimes that’s a rare quality. So many musicians go into studios and they kind of tippy toe around, or they just don’t want to commit themselves.

I listen for the real sound of the instrument and player, not the interpretation.

I like to get to know the player and learn his sound. Ernie Watts, he’s my kind of player… disciplined! His energy is instant!

He never holds back; he’ll get it on the first or second take, because he’s so used to giving it up. Most of the solos on his album Chariots of Fire are first takes with the band. And that’s unusual for today . . . really unusual!

R-e/p: Obviously the cue/headphone mix is important to musicians in the studio. How do you help them get into the track?

Bruce: If the instrumentation is small enough, I’ll split the Harrison console [at Westlake Studio] and send to the multitrack with half of the faders, and use the rest for returns.

In that way you also get the cue mix on the multitrack return faders. It’s easier to see what you’re doing with the sound using the faders for the cueing mix, as opposed to monitor pots.

R-e/p: Quincy commented that it’s important to him that you are able to read music. What do you consider that a young engineer, in particular, should know about the musical side of recording to be a master engineer?

Bruce: I would say the best training is to hear acoustic music in a natural environment. Too many of today’s young engineers only listen to records. When a natural sound or orchestral balance is required, they don’t know what to do.

My folks took me to hear the Minneapolis Symphony every week all through my childhood, and those orchestral sounds have been so deeply imprinted that it’s very easy for me to go for an orchestral balance when that’s necessary.

And I’m talking about the whole range — even a synthesizer that is a representation of the orchestra. But, to put that sound in its correct placement in a mix is not easy.

My first advice would be to study the technical end first, so that you know the equipment and what it will do, and what it won’t do. Then hear acoustic music in a natural environment to get that benchmark in your mental picture.

I think that it is very important for an engineer to understand a rhythm chart or lead sheet. I always make up my own chart with bar numbers on music paper and, as the song develops, I’ll add notes and sometimes musical phrases that will be needed for the mix.

R-e/p: Is it important to have a relative sense of pitch?

Bruce: No question about it… an absolute must. And I think a knowledge of dynamics is important too.

It’s not unusual in classical music to have a 100 dB dynamic range from the triple pianissimo to triple forte, and we cannot record that wide a range with equipment. In addition, it’s virtually impossible with most home playback systems to reproduce that dynamic range.

So, in recording we frequently have to develop a sense of dynamics that does not necessarily hold true with the actual dynamics of the music.

It’s possible to do that with little changes of level — what I would call “milking” the triple pianissimo by maybe moving it up the scale a little bit. And when you get to the triple forte maybe adding a little more reverb or something, to give the feel of more force or energy.

You see so many guys in studios with their eyes glued to the meters. I’ve never understood that. Take the clarinet, for instance, which can play softly in the sub-tone range; just an “understanding.”

An engineer has to know how to deal with a player through the interpretation of music that soft. On the VU meter, which only has about 20 dB of range, you don’t even see it. In those extremes, your ear is really on its own.

You can’t be the type of guy who has his eyes focused on the VU meter. It’s meaningless, absolutely meaningless.

The ear has to have a benchmark so you know where that dynamic should fall in the overall dynamic range. Quincy is always very aware of that, which is a real treat for me.

R-e/p: Having sat in on several of your sessions, I couldn’t help but notice that you and Quincy have your own jargon in the studio.

Bruce: You know, Quincy and I don’t talk much when we work. We spend a lot of time listening: “More Spit” — EQ and reverb; “More Grease” — reverb; “More Depth” — enhance the frequency range, give it more air in the reverb; “Make it Bigger” — beef up the stereo spread; “More Explosive” — bring the level down and add some reverb, with a trail after it. Quincy picks the sound or effect; I put the thought into application by choosing the “color,” if you like.

R-e/p: While there may be no hard and fast rules in the studio, have you picked up any tips about how to work creatively with a producer?

Bruce: An engineer’s important responsibility is to establish a good rapport with the producer. Nothing is a bigger turn off in a studio than a salty, arrogant personality. I have seen this attitude frequently in an engineer, and heard him describe himself as “Honest.”

It is very important for the engineer to know what a particular producer favors in sound. Producers vary somewhat in interpretation of a style, or musical character.

R-e/p: How does the engineer set a good vibe with the producer and the musicians?

Bruce: It’s a two-way proposition. I’ve been in situations early in my career — fortunately I don’t have to deal with that any more — where producers were not inclined to allow the engineer to be involved in a recording project.

I don’t think that’s the case anymore, at least in the upper level of the business, because it’s a fact that engineers do contribute a lot of useful input.

Yes, it’s absolutely true that an engineer can help an incredible amount in the production of music.

R-e/p: After the tracking and overdubs, how do you set yourself up for the mix?

Bruce: I’ll have many multitrack work tapes. For example, on Donna Summer’s tune “State of Independence” I had eleven 24-track tapes — each tape has a separate element.

Then I combine these tapes into stereo pairs. In the case of synthesizers, horns, background vocals, and strings, sometimes I will use a fresh tape, or there are open tracks on the master tape.

The original rhythm track is always retained in its pure form. I never want to take it down a generation, because the basic rhythm track carries the most important elements, and I don’t like to lose any transients.

With synthesizer or background vocals you could go down a generation without losing quality. I call this process pre-mixing, and we use whatever technical tricks it takes to retain sound quality.

We pre-mix the information on two tapes, and bring them up through the console. Having established the balance all the way through the recording process, we then listen to all the elements, and Quincy will make the decision based on what the music is saying.

We usually have more than we need. This stage is editing before we master — listening to everything once saves time, and we don’t have to search for anything.

Sometimes though, we may have to go back and re-do a pre-mix if the values are not right. For example, a background vocal part may have the parts stacked, and one of these parts might be too dominant.

Or sometimes everything sounded fine when we were recording the element, but with everything happening on the track the part gets lost.

Then we go back and re-establish a new balance by pre-mixing that particular element again. We also pay close attention to psychoacoustics — in other words, what sounds excite the listener’s ears. These are the critical things in the mix that will make the difference between a great mix and a so-so mix.

Also, we are sensitive to the reverb content. Quincy may ask me to bring more level up on a given element. I may suggest adding more reverb, which will create more apparent level.

I establish what the mix will be, and Quincy will comment on the little changes and balances; these are the subtle differences that make for a great mix. We overlap our skills. Quincy becomes the navigator, and I fly the ship!

R-e/p: Does it take very long to get a mix that you both like?

Bruce: Quincy will work with me for the first few days until all the production values are made. Then we close down the studio and I will polish the mix until I like it.

After I get it right, Quincy receives a tape copy for the final okay. Because of Quincy’s business phone calls we have found that to be the best way to finish a mix. We know the mix is right when we’ve made the musical statement that we set out to make.

R-e/p: Do you use automation during the final mixing?

Bruce: Yes, because it gives you more time to listen by playing the mix away from the basic moves. Automation is a tool I use for re-positioning my levels. Then I can make my subtle nuances in level changes to get the right balance.

For monitoring the final mixes I am a firm believer in “Near-field” or low-volume monitoring. Basically, all this requires is a pair of good-quality bookshelf speakers. These are placed on top of the desk’s meter panel, and played at a volume of about 90 dB SPL or less. My reason for using Near-field monitoring is twofold.

The most important reason is that by placing the speakers close to the mix engineer, and using an SPL of no more than 90 dB, the acoustical environment of the mixdown room is not excited a great deal, and therefore does not color the mix excessively.

Secondly, a smaller home-type bookshelf speaker can be used that will give a good consumer viewpoint. My personal preference for Near-field monitors is the JBL 4310; I have three sets.

Each musical style has its own set of values. When mixing popular dance music, for example, we must keep in mind the fact that the real emotional dance values are in the drums, percussion and bass, and these sounds must be well focused in the mix.

Making a forceful, tight, energetic rhythm mix is like building a house and making sure the foundation is strong and secure. Once the rhythm section is set in the mix, I usually add the lead vocal and any melody instruments. Then, usually the additional elements will fall in place.

For mixing classical music or jazz, however, an entirely different approach is required.

This is where the mixing engineer needs a clear knowledge of what the music to be mixed sounds like in a natural acoustical environment. In my opinion, this is one area where beginning engineers could improve their technique a great deal.

It is absolutely essential to know what a balanced orchestra sounds like in a good natural acoustical environment. Often, the synthesizer is used to represent the orchestra in modern music. A knowledge of natural orchestral balance is necessary to put these sound sources in balance, even though traditional instruments are not necessarily used.

R-e/p: You have provided us with a studio setup plan of the recent Donna Summer sessions at Westlake. How do you plan the tracking and overdubs?

Bruce: I generally record the electric bass direct. I have a favorite direct box of my own, which utilizes a specially custom-made transformer. It’s very large and heavy and, to my ear, lends the least amount of coloration to the bass sound, and transfers the most energy of the electric bass onto the tape.

From my own personal experience, though, active direct boxes are very subject to outside interference, such as RF fields — you can end up with a bass sound that has a lot of buzz or noise on it.

Miking the electric bass alone usually does not work very well, primarily because there are very few bass amplifiers that will reproduce fundamental frequencies with any purity down to the low electric bass range. In jazz recording the string bass is always separately miked.

My favorite mike is an Altec 21-B condenser, wrapped in foam and put in the bridge of the instrument; I own four of these vintage mikes that I keep just for this purpose. You can also get a good string bass pickup with an AKG 451, placed about 10 inches away from the fingerboard, and not too far above the bridge.

Bruce: Quincy came up with the term to describe the way I work — my “philosophy for recording music” if you like.

To be more specific, it’s really my use of two multitrack machines with SMPTE codes — “Multichannel Multiplexing.”

Essentially, by using SMPTE timecode I can run two 24 track tape machines in synchronous operation, which greatly expands the number of tracks available to me.

Working with Quincy has given me the opportunity to record all styles of music. With such a variety of sounds to work with, I could see that a single multitrack machine was not enough to capture Quincy’s rich sounds.

I began experimenting with Maglink timecode to run two 16-track machines together in sync. This offered some real advantages, but since then I have expanded my system to use SMPTE timecode and two 24-track tape machines.

The first obvious advantages that come to mind are lots of tracks, and space for more overdubbing. With a little experience I soon found that the real advantage of having multiple machines with Quincy’s work is that I can retain a lot more true stereophonic recording right through to the final mix.

An additional major advantage is that once the rhythm tracks are recorded, I make a SMPTE work tape with a cue rhythm mix on it, and then put the master tape away until the mix. In this way we can preserve the transient response that would be diminished by repeated playing during overdubbing.

Quincy usually has the scheme for the instrumentation worked out for the song, so we can progressively record the elements on work tapes. For example: Work tape A may have background vocals; Work tape B, lead vocals; Work tape C, horns and strings; and Work tape D may have 10 tracks of synthesizer sounds to get the desired color.

All of these work tapes contain a pitch tone, SMPTE timecode, bar number cues, sometimes a click track, and a cue rhythm mix.

R-e/p: What kind of interlock device do you use to sync the multitracks?

Bruce: We use two BTX timecode synchronizers. A BTX Model 4500 is used to synchronize the two multitrack machines, and I keep the 4500 reader on top of the console in front of me to provide a SMPTE code readout. We work with real time from the reader, and don’t depend on the auto-locator during the work tape stages.

There are times when I’ll use the “Iso” mode on the 4500 to move one element on a tape to a different place in the tune.

Say, for instance, you have a rhythm guitar part that isn’t tight in a section; I’ll find it on the slave work tape and move it to the new location on the master work tape.

R-e/p: How long does it take to make a work tape?

Bruce: We start by adding the SMPTE code, and I’ll make a few passes with a mix until I like it. Then we record the pre-mix onto the new work tape.

I’m very fussy about the sound, and we’ll listen back and forth between the master and the slave tape to make sure the sonics match before we move on to the next work tape.

I always want Quincy and the dubbing musicians to hear my best. It takes about three hours per work tape to finish the job. To keep all the tape tracks in tune, we also calibrate the speed of tapes by going through a digital readout.


Editor’s Note: This is a series of articles from Recording Engineer/Producer (RE/P) magazine, which began publishing in 1970 under the direction of Publisher/Editor Martin Gallay. After a great run, RE/P ceased publishing in the early 1990s, yet its content is still much revered in the professional audio community. RE/P also published the first issues of Live Sound International magazine as a quarterly supplement, beginning in the late 1980s, and LSI has grown to a monthly publication that continues to thrive to this day. Our sincere thanks to Mark Gander of JBL Professional for his considerable support on this archive project.

Posted by Keith Clark on 11/01 at 06:04 PM

Tech Talk: Building Directional Subwoofer Arrays

Working toward consistency throughout the listening area

Directional subwoofers are one more tool that can be used by sound system designers in their quest to achieve consistent sound throughout the intended listening area.

When using traditional, more or less omni-directional bass reflex (a.k.a., “vented,” “ported,” or “front-loaded”) subs arranged left and right of a stage, there is a build-up or “power alley” created in the center, where the energy from each source location shows up at the same time, with no phase difference, and sums quite nicely.

Moving left and right off of the center line, this area of addition is followed by alleys of cancellation.

Wavelengths of 40 to 100 Hz are roughly 11 to 28 feet long. At any frequency in this range, as you move away from the center line and change the path length difference between the two sources by half a wavelength (about 5.5 to 14 feet), there will be a cancellation, with higher-frequency “nulls” encountered first.
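
To put rough numbers on that, here is a minimal sketch (mine, not from the article) that walks off the center line and reports where the left/right path difference first reaches half a wavelength. The 40-foot sub spacing, 50-foot listener row depth, and 1,130 ft/s speed of sound are illustrative assumptions.

```python
# Sketch: find the first cancellation alley for left/right subs.
# Assumed geometry: subs 40 ft apart, listener row 50 ft downstage,
# speed of sound ~1130 ft/s. Flat 2D approximation.
import math

C = 1130.0  # speed of sound, ft/s

def first_null_offset(freq_hz, spacing_ft=40.0, depth_ft=50.0):
    """Walk off the center line until the L/R path-length difference
    reaches half a wavelength -- the first cancellation alley."""
    half_wave = (C / freq_hz) / 2.0
    x = 0.0
    while x < 300.0:  # search out to 300 ft off center
        d_left = math.hypot(x + spacing_ft / 2.0, depth_ft)
        d_right = math.hypot(x - spacing_ft / 2.0, depth_ft)
        if abs(d_left - d_right) >= half_wave:
            return round(x, 1)
        x += 0.1
    return None  # no null within the search range

for f in (40, 63, 100):
    print(f"{f} Hz: first null ~{first_null_offset(f)} ft off center")
```

Consistent with the paragraph above, the higher frequencies null first, closer to the center line.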

To alleviate this there are three methods that have been employed: line arrays of subs, end-fired sub arrays, and cardioid subs, which are sometimes combined.

Line Arrays
Lines of subwoofers are one application of what Harry F. Olson discussed in his 1957 text Acoustical Engineering, where he described the straight line source: using omni-directional elements in a line, all reproducing the same signal, with relatively close spacing compared to the wavelength, pattern control can be achieved.

Imagine a row of subs is assembled across the front of a stage. If it’s longer than the wavelength of the lowest frequency for which pattern control is desired (25 Hz is about 45 feet) and if the elements are close enough to one another, within two-thirds of a wavelength of the highest frequency produced (100 Hz is about 11 feet, so 2/3 is about 7 feet), cancellation at the ends of the line and addition in front of the array (and behind the array!) will be achieved.
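
As a quick design check, both rules of thumb from the paragraph above can be expressed in a few lines of code; the dimensions in the example calls are assumptions for illustration, not recommendations.

```python
# Sketch: check a sub line array against the two rules of thumb above:
# 1) array length >= wavelength of the lowest controlled frequency;
# 2) element spacing <= 2/3 wavelength of the highest frequency produced.
C = 1130.0  # speed of sound, ft/s

def check_sub_line(length_ft, spacing_ft, f_low=25.0, f_high=100.0):
    longest = C / f_low    # ~45 ft at 25 Hz
    shortest = C / f_high  # ~11 ft at 100 Hz
    return {
        "length_ok": length_ft >= longest,
        "spacing_ok": spacing_ft <= (2.0 / 3.0) * shortest,  # ~7.5 ft
    }

print(check_sub_line(length_ft=48, spacing_ft=6))  # both True
print(check_sub_line(length_ft=32, spacing_ft=9))  # both False
```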

Observed from the audience area, from one end of the line to the other, enough of the energy from each of the elements of the array arrives within +/- 120 degrees of phase, at about the same level, and sums.

Observed from off the end of the array, the energy from the elements arrives sufficiently out of time, at similar enough levels, to cause destructive interference and a loss of level.

The use of a line array (yep, that’s what it is) of subwoofers can avoid horizontal differences in frequency response and deliver more energy to the audience area, while avoiding those nasty side wall reflections at lower frequencies.

Further, maximizing spacing can reduce the level differences from the front to back. In the interest of making sound where the audience is and not making noise where they are not, this is one option.

Remember, though, that the energy is the same in front and behind the array.

These arrays can also be assembled vertically, though space between the elements is not easily achieved with most rigging systems, so they are generally closely spaced arrays.

In amphitheater and arena situations where coverage to the sides is desirable, incrementally delay-tapering the horizontal array - so that moving away from center, each sub is slightly later than the one before it - can spread the coverage out towards the sides.
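
A minimal sketch of that taper follows; the 0.2 ms per-step increment is an assumed placeholder, since in practice the values would come from prediction software or measurement.

```python
# Sketch: incremental delay taper for a horizontal sub line.
# Center sub is time zero; each sub moving outward (on either side)
# is slightly later, spreading coverage toward the sides.
# The 0.2 ms step is an illustrative assumption only.
def tapered_delays(subs_per_side, step_ms=0.2):
    """Delays in ms for one side of the array, center outward;
    mirror the same values for the other side."""
    return [round(i * step_ms, 2) for i in range(subs_per_side + 1)]

print(tapered_delays(4))  # [0.0, 0.2, 0.4, 0.6, 0.8]
```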

End-Fired Arrays
The end-fired array can be made up of two or more subs, spaced closely together, one facing the rear of the next, in a row along the “z-axis,” facing the audience and the direction of coverage.

Yes folks; it looks like it won’t sound “right.” Each cabinet needs its own drive line because we are going to incrementally delay all but the rear-most.

The rear, upstage, sub is delay time zero.

Moving towards the audience, each sub needs delay added corresponding to its distance from “sub zero.”

Let’s say the spacing is 3.5 feet: the delay time for the next element would be 3.1 ms (speed of sound is about 0.9 ms per foot).
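
Here is that arithmetic as a short sketch, generating the delay for every element; the spacing and element count are example values.

```python
# Sketch: end-fired array delays from the figures above.
# Rear ("sub zero") is delay zero; each sub toward the audience is
# delayed by its distance from the rear times ~0.9 ms per foot.
MS_PER_FT = 0.9

def end_fired_delays(num_subs, spacing_ft=3.5):
    return [round(i * spacing_ft * MS_PER_FT, 2) for i in range(num_subs)]

print(end_fired_delays(4))  # [0.0, 3.15, 6.3, 9.45] ms, rear to front
```

The first increment, 3.15 ms, matches the ~3.1 ms figure quoted above for 3.5-foot spacing.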

The end-fired array produces gain in front of the array because the energy from each of the elements arrives in time at all frequencies being reproduced.

Cancellation behind the array occurs because the energy produced by each source arrives out of time but at almost the same level, summing destructively.

There are a number of dips in frequency response based on the number of signals that have 180 degrees of phase difference. The level difference between front and rear is about 18 dB with a four-element array.

Cardioid Arrays
A few manufacturers make multi-driver, single-cabinet cardioid enclosures, but they can also be created with simple arrays of two or more cabinets.

The physical arrangement can be one of two options: both subs facing the audience, one upstage of the other, lined up on the “z-axis”; or one sub facing backwards next to one or more facing forward. Again, people will question the appearance.

When both subs are facing the audience, one upstage of the other, delay and a polarity flip are applied to the signal going to the rear speaker.

In the rear, the energy from both loudspeakers arrives in time, at almost the same level, but with reversed polarity, resulting in broadband destructive interference and reduced level. In front of the array, the two signals arrive out of time and with opposite polarity.

This is a little tricky, but the first dip in the comb filter in this example is going to be at 160 Hz, out of the sub’s operating band. If the spacing between the subs is 3.5 feet and the delay time is 3.1 ms, the two signals arrive the equivalent of 7 feet apart in front.

The wavelength of 160 Hz is 7 feet. With the polarity flip, the first dip of the comb filter will be at 160 Hz, not 80 Hz. The two signals in front are also at about the same level, so the dip will be significant.
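
That arithmetic can be verified in a few lines. The sketch below assumes the total front-side offset is the electronic delay plus the travel time across the spacing, and that with the polarity flip the nulls land at whole multiples of 1/offset rather than at odd multiples of 1/(2 x offset).

```python
# Sketch: first comb-filter dip in front of a two-box cardioid pair.
# With the rear box polarity-flipped, nulls fall at k/offset (k = 1, 2, ...)
# instead of the (2k-1)/(2*offset) dips of an un-flipped pair.
MS_PER_FT = 0.9  # approximate travel time per foot, as above

def first_front_dip_hz(spacing_ft=3.5, delay_ms=3.1):
    offset_ms = delay_ms + spacing_ft * MS_PER_FT  # ~6.25 ms, about 7 ft
    return 1000.0 / offset_ms

print(round(first_front_dip_hz()))  # 160 Hz, matching the example above
```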

The cardioid arrangement using forward and rearward facing subs can be assembled vertically or horizontally, subs stacked one on top of another or laid side by side, in a line, some facing the audience and one or more facing backwards.

Talk about looking like it won’t sound good. Behind the array, the output of the front- and rear-facing elements needs to match in time and be very close in level, but with reversed polarity, to create cancellation behind the array.

A polarity flip and delay of the rear facing loudspeakers achieves this.

The number of forward- and rearward-facing elements depends on the model and on how much energy needs to be created behind the array to cancel the energy from the forward-facing subs.

The delay time will vary too, depending on model, and dimensions of the array, both vertically and horizontally.

Measurement is needed to determine level and time relationships between the front and rear subs.

An FFT transfer function can quantify this accurately. In front of the array, the summation of the rear-facing loudspeakers arrives out of time and with opposite polarity relative to the energy produced by the forward-facing subs.

The problem in frequency response, that first dip in the comb filter, must be kept out-of-band, higher in frequency than the operational range of the subs.
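
For those using software-based analyzers, here is a rough sketch of the transfer function idea, not any specific product's workflow: estimate H(f) between the drive (reference) signal and a measurement via cross- and auto-spectra, then read relative level from its magnitude and the time offset from its phase slope. The signals here are synthetic stand-ins for real captures.

```python
# Sketch: FFT transfer function H(f) = Pxy / Pxx between a reference
# (drive) signal and a measurement, as a dual-channel analyzer does.
# Synthetic example: the "measurement" is the reference delayed ~3.1 ms
# and slightly attenuated, standing in for a real mic capture.
import numpy as np
from scipy.signal import csd, welch

fs = 48_000
rng = np.random.default_rng(0)
ref = rng.standard_normal(fs * 4)
delay = int(0.0031 * fs)              # ~3.1 ms in samples
meas = np.zeros_like(ref)
meas[delay:] = 0.9 * ref[:-delay]     # delayed, ~0.9 dB down

f, Pxy = csd(ref, meas, fs=fs, nperseg=8192)
_, Pxx = welch(ref, fs=fs, nperseg=8192)
H = Pxy / Pxx

mag_db = 20 * np.log10(np.abs(H))     # relative level vs. frequency
sub_band = (f > 30) & (f < 125)
print("level in sub band:", round(float(mag_db[sub_band].mean()), 1), "dB")
# The slope of np.unwrap(np.angle(H)) over frequency gives the time offset.
```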

Alternative Methods
A hybrid approach, combining cardioid pairs, arranged in a line across the front of the stage, results in cancellation left, right, and to the rear. Alternatively, combining end-fired arrays and line-arrays also achieves additional directional control.

Using a directional array left and right affords the opportunity to join the -6 dB down points in the middle of the audience and to minimize interaction between the arrays by minimizing the area where their levels are similar, moving quickly into isolation of one array or the other. This lends itself to very wide audience areas, such as amphitheaters and festival sites.

Directional arrays are often misconceived, mis-assembled, or faulty in their operation. They require a knowledgeable operator, good equipment, and proper implementation.

The benefits can be substantial and are sometimes worth the risks. Avoiding some reflections in rooms, decreasing the amount of low-frequency energy on the stage (turn the floor monitors down, folks), and making the coverage smoother in amplitude and frequency response in the audience area are the substantial benefits when considering the use of directional low-frequency arrays.

There are several critical factors of performance that must be considered when assembling these types of arrays. Control of low frequency directivity is only possible when using exceptionally linear systems, precision-manufactured to perform identically.

The relationship between individual components must be consistent. What is sent electrically to the array elements needs to be turned into acoustic energy, without distortion or changes in frequency response as signal level changes.

New Tools
Directional low-frequency loudspeakers have existed for some time.

Meyer Sound developed the first commercially available design, the PSW-6, a dozen years ago.

The PSW-6 uses a four-channel amplifier and signal processing built into an enclosure that houses dual 18-inch and 15-inch drivers facing the audience, plus two more 15-inch drivers mounted in its rear.

This self-powered subwoofer provided cardioid vertical and horizontal polar response, serving as a new tool in the challenge of designing sound systems.

It eliminated 15 to 20 dB of the energy from the rear that would have bounced around and arrived in the audience area late.

Another advantage was the ability to place these loudspeakers in front of large walls without having to consider boundary reflections.

These and others continue to be advantages over omni-directional designs.

The PSW-6 design was a result of field experiments using the SIM (Source Independent Measurement) FFT measurement platform, along with prediction results from Meyer Sound’s then new MAPP Online (Multipurpose Acoustical Prediction Program).

MAPP, among its many uses, has become a tool that many practitioners use to design low-frequency directional arrays. Users are able to apply signal processing, arrange elements, and observe the results graphically as narrow-band pressure plots or as broad-band Virtual SIM transfer functions, all predicted from the interaction of measured data sets of real loudspeakers.

“Measure twice and pile it up once.” Let’s face it, moving subs around in a parking lot is a lot of work and requires a substantial investment of time and effort, plus there’s tinkering with signal processing and measurement, as well as additional DSP and multiple drive lines.

On the other hand, moving subwoofers around on a computer screen is a two-finger event, and without the need for real subs, signal processing, and measurement platforms, a real time and money saver.

Not having to build and measure subwoofer arrays in the physical world as a first step has allowed users to design arrays that they might not have spent the time to experiment with in real life.

Steve Bush is a technical support representative for Meyer Sound.

Posted by Keith Clark on 11/01 at 05:54 PM

Sound Operators & Musicians, Working In Harmony

Avoiding the "deadly sins" that separate tech and creative sides

Over the past several years, I’ve had the privilege of being a musical performer and worship leader, as well as a church sound engineer and technician.

This has provided unique perspective from both sides of the platform; what I’ve learned on one side has helped me do better on the other side, and vice versa.

Through this process, I’ve noted several problems and solutions that apply to the technical side, the creative side, and both. I’ve refined these observations and practices into what I call the “Seven Deadly Sins.” Let’s get started.

Deadly Sin #1: Messing with the stage mix. Few things are more frustrating for a musician than a bad mix on stage. We’re a picky lot, and further, when an acceptable stage mix is achieved, we don’t want it to change.

Therefore, the first rule for the sound mixer is avoid adjusting input gain once a service has started. Even a slight adjustment can be a HUGE detriment.

Also, please don’t mess with monitor sends during a service. Certainly there have been times when the stage is too loud - often, we musicians tend to play louder when the adrenaline starts flowing. (Of course, others actually get timid and play/sing softer.)

Resist the temptation of making major changes mid-stream; not only will this distract the musicians, but also in all likelihood, changes will serve to make things even worse from a sonic perspective.

Instead, work on preparation that will eliminate these problems before they start. Pay close attention to how things sound during rehearsal, how sound is reacting with the room, and project what will happen when the room is full for services.

And, pay even closer attention during services, making observations and notes about what’s happening at “crunch time,” when true performance characteristics are being exhibited and an audience is on hand.

Of course, this is easiest to do when you’re using the same system in the same room with the same musicians. In most cases, the first two variables don’t change, and with respect to the third, note the techniques and mix approaches that result in the most consistency, regardless of who’s playing or a particular style.

Observe, experiment, formulate and then act - in advance.

Deadly Sin #2: Trusting untrained “critics.” While serving as director of technical ministries at a large church, I had the privilege of working with a talented director of worship. However, he had an annoying trait of trusting an elderly lady of the congregation to provide critiques of my house mix and overall sound quality.

She would wander through the sanctuary during rehearsals, listen and then report back to him. My goodness - this is an individual who had no experience with sound or music and who couldn’t even make the cut during choir tryouts! Talk about demoralizing…

The bottom line is that this person’s opinion mattered just like any other member of the congregation’s, but in no way was she qualified to serve as a reference. Her suggestions were useless, and actually would have been detrimental had I chosen to follow them.

The lesson? Sometimes musicians and worship leaders find it difficult to trust the sound people. But please, let logic prevail. In most cases, leaders of a church technical staff have the necessary experience to do their jobs correctly.

If sound people seem to be lacking in ability and knowledge, they must pursue proper training. If it seems that they lack the “ear” to provide a properly musical mix, then they need to fill another role while others who do have this particular talent should be encouraged to put it to use.

And church sound staff members must always be honest with themselves and constantly seek to improve their skills any way possible.

Deadly Sin #3: The word “no.” Musicians often possess a certain confidence that sometimes can border on arrogance. We get an idea or vision and we’re quite sure it can come to life, and with excellent results. This is simply a part of the creative process.

It’s up to the sound team to foster this creative spirit, not squash it. Therefore, the word “no” should fall toward the bottom of the response list.

For example, if a musician asks for an additional drum microphone, the answer should not automatically be “no.” This suggests that the sound person has no care about the creative vision, no care about striving for improvement.

Instead, how about a response along the lines of, “I’ll see what I can do. And, if you don’t mind my asking, what do we want to achieve with this extra mic?” This is a positive, can-do attitude that’s supportive and can be infectious.

Also, by inquiring further, the sound person may be able to help deliver a solution better suited to achieve the new creative vision. Maybe it’s not an extra drum mic that’s needed but another approach, like additional drum isolation.

The point is to ask, which begets learning, which begets support and collaboration, which begets a better performance.

Deadly Sin #4: Unqualified knob “twiddlers.” Musicians like knobs and blinking lights, so naturally, they want to fiddle with the sound system. The confidence/arrogance mentioned previously plays into this as well - we believe there’s no task we can’t be great at, regardless of lack of training and experience.

But the reality is that musicians usually know just enough to be dangerous when it comes to operating a sound system. The same goes for house and monitor mixing.

The irony is that musicians indeed can be among the best “sound” people in the congregation, perhaps better than many sound technicians, due to their musical ear.

However, too many cooks spoil the broth. The solution is fairly simple and straightforward: someone is either a musician or a sound tech/mixer for a given service.

If you’re a musician, this means hands off the sound gear. If you’re the mixer, do the best job possible, and support the musician. One individual does one thing, the other does the other thing, and you meet in the middle with mutual respect and collaboration, striving together to make everything better.

Deadly Sin #5: Not holding one’s tongue (or, how I offered a suggestion and made things worse…).

When I’m mixing, I want everything to sound as good as possible.

Sometimes, however, things are happening on stage that just seem to get in the way of the sonic nirvana that’s etched in my brain.

Perhaps it’s a guitar that’s too loud, perhaps it’s an off-key singer, or perhaps “everything” just isn’t working. (Mama told me there’d be days like this, and mama was right!)

Should we feel some obligation to offer some advice? Of course. Should we act on this feeling? Well…

Telling a musician he or she isn’t sounding too good is kind of like telling an artist you don’t like his/her painting.

How many times have you looked at a painting and asked, maybe sarcastically, “They want how much for this?” I may not like someone’s “art,” but in the minds of many, including the creator of that art, it’s serious, meaningful and perhaps brilliant.

The moral of the story is to hold one’s tongue and consider the big picture. Ask the question: will our ultimate goal be furthered if I suggest a change? (No matter my intentions – how will this input be received?)

The bottom line is that there are facts, and there are opinions, and the truth often lies between. Often you can lose more than could ever be gained by pushing your own agenda, no matter how “right” you may be.

Tossing out opinions can also ruin the team spirit so vital to the mission, and yes, also the joy of praise and worship. And showing distrust and/or lack of respect for others may lead the worship leader to question your own goals, agendas and visions.

Obviously there are exceptions. If a guitar is just so loud that you can’t create a good mix below 110 dB, best to gently encourage a change.

If a singer is off-key to a noticeable degree, maybe mention it to the worship leader, subtly and behind closed doors. If the leader agrees, change becomes his/her responsibility.

I’ve learned a lot from talented production people. They’re always positive, always put full effort into their work, and always have an attitude of appreciation toward everyone else they work with.

This attitude transcends minor problems, leading everyone to follow the example, resulting in a better production. It’s a self-fulfilling prophecy, one attained through the power of encouragement and positive thinking.

Deadly Sin #6: Being negative during a service. Sometimes things just don’t go right in a given service. But in virtually all cases, it’s not because every single individual isn’t trying their best, applying their heart fully.

The worst thing that can happen on these days is to draw attention to the problems. This is especially important for worship leaders to keep in mind.

Never apologize for bad sound during a service. If it’s that bad, people will notice without anything being said.

Rather, concentrate on making it through that service, and address problems afterward. Often, the vast majority of the audience doesn’t even notice problems until they’re pointed out.

Now, how best to address significant sound problems? The fact is that today’s cars often have better sound systems than most churches. It’s time to change that. Get the sound people training, and get them the equipment needed to make things work.

You can spend days (weeks, months and years!) talking about how to fix sound problems. In fact, as a sound contractor, that’s how I occupy most days.

The best (and only) way of solving serious sound problems is to work with a qualified consultant and contractor. Select these individuals carefully, and bring them in as part of your team.

And don’t criticize others on your team for things that, in all likelihood, aren’t even their fault!

Deadly Sin #7: Assuming the other person is capable of understanding your thought process.

In 99 percent of churches, technical people and music people are like fire and ice. The logical mind and the creative mind. (Thank God for the fact that we are all doing this for a higher purpose or we would have killed each other years ago!)

We all need to learn how to communicate better. This is especially important because the way worship services are presented is changing, in many cases quite radically, with far greater use of production. This requires that more people be involved, both as performers/contributors and in technical/creative support.

If we don’t communicate, we won’t enjoy what we’re doing and therefore we won’t participate. The church has a lot of work to do, and we can ill afford to lose people who desire to help out.

How do we start to understand each other’s thought processes? Drum roll, please…

I know you’re probably looking for a magic approach or series of steps to achieve better understanding, but in my experience, it all comes down to spending time together.

Hang out, fellowship, pray, study, talk, and practice together. Technicians, learn to play an instrument. Musicians, develop an understanding of sound.

One final piece of advice. I worked with a church here in Michigan (eventually my wife and I started attending there), and I became involved as a musician and technical advisor. This church had constantly battled technical difficulties and had learned to accept mediocre (at best) sound.

They moved into a new facility and purchased some pretty nice equipment expecting great things. Indeed there were improvements, but sound still wasn’t where we wanted it to be.

I suggested that the sound staff attend rehearsals, and after three months, the difference was astonishing.

And not only did sound improve dramatically through better understanding and coordination, but we also had great fun!

Rehearsals didn’t consist of just musical practice; they were “practice time” and “small group time” all in one. Everyone became friends and developed a shared goal of excellence through cooperation and understanding.

We were all truly part of the worship team, and that sense of unity continues to grow to this day. The simple act of inviting the sound people to rehearsals turned out to be the biggest improvement the music department has ever experienced.

Most importantly, more than altering anything significantly on the technical side, it changed attitudes and opened minds.
