Feature

Thursday, June 06, 2013

What’s The Delay? The Effects Of Weather Conditions On Sound

Temperature changes, as well as wind and humidity can wreak havoc on our carefully aimed and tuned rigs.

Summer is here, and a good many of us are out working those “mud and dust” shows, fairs and festivals.

We get to clean amplifier filters on a daily basis, put wedges into garbage bags to keep them dry, mix with visqueen draped over the console (also to keep it dry). And, we’ll have a shop vacuum at front of house as well as the monitor “beach.”

That said, the toughest aspect of doing shows in the great outdoors is dealing with the effects of atmospheric conditions on a system’s behavior. Temperature changes, as well as wind and humidity, can wreak havoc on our carefully aimed and tuned rigs.

And the larger the venue, the greater the effect these conditions have on sound propagation. The effects aren’t preventable, but they are at least partly predictable.

Morning, Noon & Night
Any time you’re doing sound outside, temperature gradients are an issue. In the morning, the ground retains the nighttime coolness longer than the surrounding air, resulting in a cool air layer near the ground with a warmer layer above it.

The velocity of sound increases slightly with higher temperatures. For example, at an elevation of 0 feet above sea level, at a temperature of 50 degrees (Fahrenheit), sound will travel 110.7 feet in 100 milliseconds (ms).

At 90 degrees F, it will travel 115.14 feet in the same 100 ms. This will force the wavefront angle of sound from loudspeakers to track slightly downward, bending toward the cold air layer.
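For readers who like to check the numbers, here’s a minimal sketch that reproduces those figures, assuming the common dry-air, sea-level approximation for the speed of sound (not necessarily the exact formula the figures above were derived from):

```python
# A minimal sketch of the temperature figures quoted above, using the standard
# approximation c = 49.03 * sqrt(T_F + 459.67) ft/s for dry air at sea level.
import math

def speed_of_sound_ftps(temp_f):
    """Approximate speed of sound in ft/s for a temperature in degrees F."""
    return 49.03 * math.sqrt(temp_f + 459.67)

for temp_f in (50, 70, 90):
    c = speed_of_sound_ftps(temp_f)
    # Distance covered in 100 ms = c * 0.1 s
    print(f"{temp_f} F: {c:6.1f} ft/s -> {c * 0.1:5.1f} ft in 100 ms")

# Prints roughly 110.7 ft at 50 degrees F and about 115 ft at 90 degrees F,
# in line with the numbers quoted in the text.
```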

In more extreme conditions, sound waves can actually bounce off the ground and skip over part of the audience before refracting downward again further on, causing dead spots in the system coverage.

In the evening the opposite happens. Because the ground is still warm while the air is cooling off, a layer of heat is trapped near the surface. Thus the wavefront angles upward, and can be refracted right over the top of the crowd. (Note also that the warm air layer generated by the crowd itself increases this tendency.)

Excess loss with distance due to relative humidity in the air. (Graphic courtesy of JBL Audio Engineering for Sound Reinforcement by John Eargle and Chris Foreman)

Wind can produce similar effects. The speed of sound traveling with the wind will equal the speed of sound plus the wind speed; thus, when sound is firing into the wind, you must subtract the wind speed.

And since the wind speed in a boundary region like the ground is at or near zero, a wavefront heading into the wind will refract upward as the top part of the wave is slowed slightly by the headwind.

With the wind behind sound - pushing it - the wave will bend downward. It’s not the wind itself that causes problems, but the velocity variations with altitude. The effects of a crosswind can be analyzed with a little simple trigonometry. (Is there really any such thing?)

Let’s look at an example. Start with the fact that the nominal speed of sound is 770 miles per hour (mph). Then, let’s say that a crosswind is blowing at 90 degrees to the direction of the sound system propagation at a rate of 40 mph.

We can use those speeds as the legs of a right triangle and determine the angle of deflection. In this example it’s about three degrees.
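Here’s a minimal sketch of that right-triangle estimate, using the example numbers above:

```python
# Rough sketch of the right-triangle estimate described above: treat the speed of
# sound and the crosswind speed as the two legs and take the arctangent.
import math

speed_of_sound_mph = 770   # nominal figure used in the text
crosswind_mph = 40         # 90-degree crosswind from the example

deflection_deg = math.degrees(math.atan(crosswind_mph / speed_of_sound_mph))
print(f"Deflection angle: {deflection_deg:.1f} degrees")  # about 3 degrees
```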

However, this can be a little deceiving. Because the typical cluster may have a horizontal dispersion of 120 degrees or more, part of the wavefront is moving perpendicular to the wind, but other parts are quartering into the wind or away from the wind.

This causes their behavior to be affected as though the wind were pushing the sound as noted above. Very complex!

More & Less
Humidity is another factor that can produce large changes in sound system propagation, but this time in the frequency domain.

Although it can seem counter-intuitive, lower humidity equals more attenuation and higher humidity equals less.

Humidity effects on frequency response start at about 2 kHz and become progressively more pronounced at higher frequencies.

At a distance of 100 feet and 20 percent humidity, 2 kHz will be attenuated by only 1 dB, while 10 kHz will suffer a whopping 8.5 dB loss.

And these losses are cumulative for longer distances. At 200 feet, that 10 kHz loss doubles to 17 dB! These losses are also in addition to inverse square law losses - they’re not linear with frequency, so amplitude response can vary greatly over the coverage area.
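As a purely illustrative sketch, using only the per-100-foot figures quoted above and assuming the excess loss accumulates linearly with distance, here’s how those losses stack on top of inverse square law loss:

```python
# Illustrative only: scale the per-100-ft excess-loss figures quoted in the text
# (roughly 1 dB at 2 kHz and 8.5 dB at 10 kHz at 20 percent relative humidity)
# out to longer throws, and add inverse-square loss referenced to 100 ft.
import math

excess_db_per_100ft = {2_000: 1.0, 10_000: 8.5}   # values quoted above for 20% RH

def total_loss_db(freq_hz, distance_ft, ref_ft=100.0):
    """Excess air absorption plus inverse-square loss relative to the reference distance."""
    absorption = excess_db_per_100ft[freq_hz] * distance_ft / 100.0
    inverse_square = 20.0 * math.log10(distance_ft / ref_ft)
    return absorption + inverse_square

for d in (100, 200, 500):
    print(f"{d} ft: 2 kHz {total_loss_db(2_000, d):5.1f} dB, "
          f"10 kHz {total_loss_db(10_000, d):5.1f} dB")
```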

The inconsistencies are worst between 10 percent and 40 percent humidity. At higher humidity, the losses become smaller and also more linear across the frequency range.

These variations can be an issue with arrayed point sources that have a total vertical dispersion of 50 to 80 degrees. But when the forces of nature are applied to a line array with a wave front that is very narrow in the vertical axis, there is not much room for errors in directivity.

The fact that line arrays maintain their vaunted 3 dB loss per doubling of distance for a far greater distance at high frequencies than at low frequencies is somewhat offset by the higher atmospheric losses at those high frequencies. But because humidity losses are not linear this isn’t as helpful as it might seem.

Line arrays are also typically used to cover larger venues. The phenomena we’re examining here become more pronounced with distance.

The more air that the sound waves have to travel through, the more opportunity there is for mischief. At 100 feet, the effects are noticeable. At 500 feet, they can be dramatic.

Prime Weapon
So how do we overcome all this atmospheric mayhem? One way is to use delayed stacks. But that’s so 20th century, you say - haven’t line arrays made them obsolete? Not necessarily.

Getting people closer to the loudspeakers is a prime weapon in the temperature and humidity wars. Not only do we preserve a reasonable facsimile of the desired frequency response, we keep a much more even volume level over a large area.

Admittedly, the physical aspects of using delayed systems are a pain.

Obscured sightlines, audio feeds, power availability and extra setup and teardown time add expense and complexity to the production.

But we can minimize the inconvenience. Because air absorption doesn’t affect low frequencies as much as higher ones, we can skip the subwoofers, and in some cases, even the low-frequency cabinets in the delayed system.

This cuts the size and power requirements way down. And co-locating a delayed source with the mix position cuts down on audio and power feed issues.

Delay is an ideal application for some of the new smaller-format line array systems. They provide plenty of horsepower in a small footprint while preserving sight lines. Alternatively, smaller full-range cabinets can be deployed from the “B” system.

How far from the main clusters should delays be positioned? Sometimes this is governed by physical considerations, and sometimes sound pressure level (SPL) limits are set by the venue in consideration of the surrounding communities.

If SPL is being measured at FOH, the main system may be operating at a fairly low level, keeping the delayed systems from being very far from the stage. A modeling program, or simple math and the inverse square law (or just the inverse law in the case of line arrays) can be used to determine what the acceptable level decrease is before the signal needs to be re-amplified.
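Here’s a hedged sketch of that simple math, assuming an idealized 6 dB per doubling for a point source and 3 dB per doubling for a line array, with a made-up 10 dB threshold for when a delay fill might be warranted:

```python
# Sketch of the "simple math" mentioned above: level drop with distance for a point
# source (inverse square law) versus an idealized line source (inverse law), used to
# judge roughly where a delay fill might be needed. The threshold is an example only.
import math

def drop_db(distance_ft, ref_ft=50.0, per_doubling_db=6.0):
    """Level drop relative to a reference distance.
    6 dB/doubling approximates a point source, 3 dB/doubling an idealized line array."""
    return per_doubling_db * math.log2(distance_ft / ref_ft)

acceptable_drop_db = 10.0  # hypothetical limit before re-amplifying
for d in (100, 200, 400):
    point = drop_db(d, per_doubling_db=6.0)
    line = drop_db(d, per_doubling_db=3.0)
    flag = "  <- consider a delay here" if point > acceptable_drop_db else ""
    print(f"{d} ft: point source -{point:4.1f} dB, line array -{line:4.1f} dB{flag}")
```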

Always keep in mind that the extra losses described are over and above the theoretical losses. If a show is being staged in a calm, high-humidity area, you may not have to allow for much environmental loss. But if a show is held in a windy desert, watch out!

Show The Arrivals
How do we determine the correct amount of signal delay to apply? In my view, measuring the actual time difference is the best way to go.

Haven’t line arrays rendered delay stacks irrelevant? Well, not exactly.

Use (Rational Acoustics) Smaart or (Gold-Line) TEF to produce an impulse response or an Energy Time Curve (ETC). This should clearly show the arrivals from the main system and the delayed stacks, and enable you to use the cursors to give you a delay number.

If you don’t have one of these tools at your disposal, just do the math. At 70 degrees F, at sea level, the speed of sound is 1,130 feet per second, or about .88 ms per foot. If you know the distance, you can determine the time delay.
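A minimal sketch of that math, with a hypothetical 250-foot throw, also shows how much the setting drifts with temperature (see the note below about resetting delays near show time):

```python
# A minimal sketch of the distance-to-delay math described above, using the same
# standard speed-of-sound approximation as the earlier sketch. The 250 ft distance
# is a hypothetical main-to-delay spacing, not from the article.
import math

def speed_of_sound_ftps(temp_f):
    return 49.03 * math.sqrt(temp_f + 459.67)

def delay_ms(distance_ft, temp_f=70.0):
    return distance_ft / speed_of_sound_ftps(temp_f) * 1000.0

distance_ft = 250.0
for temp_f in (50, 70, 90):
    print(f"{temp_f} F: {delay_ms(distance_ft, temp_f):6.1f} ms for {distance_ft:.0f} ft")

# At 70 F this works out to roughly 0.886 ms per foot (about 221 ms for 250 ft),
# consistent with the ~1,130 ft/s figure quoted above.
```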

Many audio engineers like to take advantage of the Haas (or precedence) effect. The human ear localizes sound based on both time of arrival and frequency content. The earliest sound, and/or the sound with the most high-frequency content, establishes the perceived direction that the sound is coming from.

The human ear also integrates sounds that arrive within about a 20 ms window, and this is called the Haas zone. In other words, within this time frame, the ear does not perceive separate arrivals.

Thus the audience can be “fooled” into believing that all sound is coming from the stage system by delaying the signal slightly beyond the acoustically correct setting, and by slightly rolling off higher frequencies. This is called localization. You know you’ve done it right if people are saying that the delayed loudspeakers aren’t working when you know they are.

And don’t forget, the speed of sound changes with temperature. If the environment has large temperature swings, reset your delays as close to show time as possible.

Now if we could just get it to stop raining.

Bruce Main has been a systems engineer and FOH mixer on and off for more than 30 years. He has also built, owned and operated recording studios and designed and installed sound systems.

Posted by Keith Clark on 06/06 at 11:11 AM

Church Sound: Secrets Of EQ In The Mix

The most important question: Does it sound natural?

When I first began doing sound, I bought a great set of headphones. I thought to myself - if I’m going to be expected to make something sound good, I should probably know what I’m shooting for.

So I started listening (like crazy) to CDs. Not just bands or styles I liked, but anything and everything I could get my hands on. I listened to the lyrics, chords, melodies and harmonies, but also to how it all fit together. I concentrated on the space that each instrument was taking up.

I noticed that certain instruments always seemed to be sitting in a certain spot — not where they were panned, but in the frequencies they occupied.

How To Get There
When building a mix, we need to think of the song as a line. Each instrument makes up part of that line. If we have too many instruments or frequencies trying to take up the same space, our line gets bumpy and the mix gets muddy.

Listen to each instrument and think of a space for it on the line. Keep other instruments away from it (EQ wise) and you will have an easier time hearing that instrument. You wouldn’t want to have a really bassy, heavy electric guitar because it would be taking up a lot of the space the bass guitar really needs. Try to keep each instrument in its place.


“Listen to each instrument and think of a space for it on the line.”

Think about what the fundamental element of each instrument is. For instance, the fundamental of a kick drum is in the low frequencies.

That’s not to say you don’t need highs to make it cut, but there really isn’t much midrange going on with it. Try to carve out some of the midrange of the kick to make room for the low midrange of the bass guitar.

Another example is electric guitar. Many engineers mistakenly try to make the electric guitar huge to get a “larger than life” sound, but if you really listen to a guitar on a CD and focus on what frequencies are really taking up space in the mix, you’ll be surprised at how small the range actually is.

“Be attentive to the mix and what’s going on inside it. It doesn’t mean you have to constantly turn knobs.”

I always tell new engineers never to be “done” with the mix. Listen for changes, and more importantly, listen to make sure that everything is in the mix and working together.

Be attentive to the mix and what’s going on inside it. It doesn’t mean you have to constantly turn knobs. Focus less on the actual sound of the individual instrument and more on how it interacts with other instruments in that same range.

There are no “magic” numbers that work every time because all instruments are a little different. The equation gets more complicated when we use different mics or the instrumentalist changes patches on their keyboard, but trust me… none of that is really important. What is important is that you focus on getting a natural sound that blends nicely with the competitors for the same space.

Bridging The Gap
Here are some general guidelines to consider when you are trying to find your space.

General Frequency Tips

20 Hz to 80 Hz: This is your sense of power in an instrument or mix. It’s the stuff you feel more than hear. The kick drum and bass guitar live down in this range.

80 Hz to 250 Hz: The area where everything comes together. This is where a lot of things can go wrong and too much in here will make a mix sound sloppy.

250 Hz to 2 kHz: Most of your fundamentals and lower harmonics are in this range. These are some of the most critical frequencies for building a solid mix. Learn which instruments are most dominant in these frequencies and clean up around them.

2 kHz to 5 kHz: Here you will find the clarity of almost everything. But be careful, too much of a good thing can start to sound harsh. This is an area where subtlety is the key.

5 kHz to 8 kHz: Mostly sibilance and “s” sounds. Much of the vocal consonants are defined in this range.

8 kHz to 20 kHz: Brilliance is the word here, the top end of cymbals.

Instrument Frequency Tips

Kick Drum and Toms: Cut 500 Hz to get rid of the cardboard box sound. Add 5 kHz to make them cut through the mix. Add a little 60 Hz to 80 Hz to make them really thump.

Hi Hat: I generally cut all the lows and a good chunk of low mids. There isn’t anything down there anyway.

Snare: Generally I take out a little around 600 Hz and add a little around 4 kHz, and maybe even boost some 200 Hz to make it move a little air, but that really depends on the drum and how it is tuned.

Bass Guitar: So many players and basses are so very different. Usually if it’s muddy I cut 160-200 Hz and possibly add a little 700 Hz to 1 kHz if I can’t really hear their notes, but be careful because there are a lot of other instruments fighting for that space.

Piano: This is a beast we could probably use a whole article to discuss. It depends mostly on how it’s miked. If it’s boomy then cut 200 Hz to 315 Hz. If it’s kind of barking then cut more up near 400 Hz to 500 Hz. Judiciously add a little 2 kHz to 4 kHz to make it cut a little more.

Voice: Boomy? High pass at 150 Hz. Is it too thick? Try cutting 240 Hz. Need them to poke out a little more? Add a little 2.5 kHz. Having trouble hearing their syllables? Try adding a little between 4 kHz and 10 kHz.

Trust Your Ears
The most important question is “Does it sound natural?” Does it sound like the CDs you’ve been listening to?

More specifically, does it sound like you were sitting in front of the real instrument? I keep this in mind throughout the performance.

“The most important question: ‘Does it sound natural?’”

I constantly glance down all the channels and think about each input. Kick, does the kick sound right? Bass, does the bass sound right? Guitar, does the guitar sound right? Piano, does the piano sound right? Vocals, do the vocals sound right?

Then I think about it all again and ask if the guitar and vocal are walking over each other. Can I hear the piano? Is it because the guitar has too much midrange near the piano part’s midrange? Try taking a little low mids out of the guitar instead of turning up the piano. I think you get the picture.

“Learning to EQ confidently means you know where you are heading.”

It’s almost impossible to make the initial adjustments to instruments or vocals in the mix with the whole band playing. Instead I try to have a snapshot of what I think the instrument should sound like.

Learning to EQ confidently means you know where you are heading. That’s why I recommend listening to CDs with a good set of full range headphones.

No cheap earbuds here… you need a pair that will allow you to hear the whole frequency spectrum, and preferably a sealed set, like good earphones or sealed headphones. You’ll be able to form a mental soundscape that you can use when you are back behind the console.

Turn, Turn, Turn!
Here’s a bonafide “trick of the trade”: turn some knobs. I mean actually get in there and turn the heck out of the EQ knobs and listen to what they do.

“Becoming a master of EQ is like becoming a master painter. Sometimes you just have to throw some paint on a canvas and see how it works.”


Here is a simple technique to use in sound check.

Grab the gain (Figure 1) on the mid EQ of an instrument and crank it up a bunch…

... now grab the frequency (Figure 2) of the mid and sweep it up and down.

You will hear a spot where it makes that instrument or voice sound horrible. Once you find it, take the gain back to zero, listen for a second again, and then cut out about 6 dB of it.

You will be amazed how much better that instrument sounds when you “get the junk out” as I call it. This is an amazing way to learn what frequencies sound like and the technique will eventually train your ear to hear the junk without boosting it first.
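If it helps to see what that mid band is doing under the hood, here’s a hypothetical sketch of the move in code. It uses the widely published RBJ peaking-filter coefficients (an assumption, not tied to any particular console), and the 500 Hz “junk” frequency is just an example:

```python
# Hypothetical illustration of the sweep-and-cut move described above, modeled as a
# peaking ("mid") EQ: hunt for the ugly spot with a boost, then cut about 6 dB there.
# Coefficients follow the well-known RBJ audio-EQ-cookbook peaking form.
import math

def peaking_eq(samples, fs, freq_hz, gain_db, q=1.4):
    """Apply a peaking EQ (boost or cut) to a list of samples."""
    a_lin = 10 ** (gain_db / 40.0)
    w0 = 2 * math.pi * freq_hz / fs
    alpha = math.sin(w0) / (2 * q)
    b0, b1, b2 = 1 + alpha * a_lin, -2 * math.cos(w0), 1 - alpha * a_lin
    a0, a1, a2 = 1 + alpha / a_lin, -2 * math.cos(w0), 1 - alpha / a_lin
    b0, b1, b2, a1, a2 = b0 / a0, b1 / a0, b2 / a0, a1 / a0, a2 / a0

    out = []
    x1 = x2 = y1 = y2 = 0.0
    for x in samples:
        y = b0 * x + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2
        out.append(y)
        x2, x1, y2, y1 = x1, x, y1, y
    return out

# Example: after sweeping a big boost and finding the "junk" near 500 Hz,
# return the gain to zero and cut about 6 dB instead.
fs = 48_000
junk_freq = 500.0            # found by ear while sweeping
test = [0.0] * fs            # stand-in for the instrument signal
cleaned = peaking_eq(test, fs, junk_freq, gain_db=-6.0)
```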

Becoming a master of EQ is like becoming a master painter. Sometimes you just have to throw some paint on a canvas and see how it works.

A 20-year veteran of working sound on the road, John Mills is a frequent contributor to Shure Notes. He was a frustrated electrical engineer who hated college and left to pursue a career as a drummer, ending up as a sound engineer. After working with many of the top Christian worship leaders, artists and tours, he landed a job as an audio engineer for a design firm.

Posted by Keith Clark on 06/06 at 10:04 AM

In The Studio: Vocal Processing And Mixing Tips For A Well-Rounded Final Product

Some techniques to try during your next session
This article is provided by Audio Geek Zine.

 

Here are some simple steps that can help ensure that your vocal track works together with the rest of your mix.

Cleanup
Before anything, clean up the vocal tracks. Go through and trim the silence around each phrase. Remove any noise, thumps, clicks, pops and gasps. Fade in or out on each edit.

Pitch Correction
Gentle pitch correction with software like Melodyne is a big part of getting that professional-sounding vocal we all strive for in our studio productions. Be careful with this! Doing it right takes time and practice. Over-tuned vocals are about as bad as out-of-tune vocals. Let’s be honest here, we’ve heard both on American Idol.

EQ
There are no rules here, and every voice is different, but here are some starting points. A high-pass filter (low cut) can be used to quickly and easily clean up the very lows (below 100 Hz). Right above that, 200-600 Hz can be gently boosted or cut, depending on the voice, to add thickness or compensate for proximity effect.

Another area worthy of focus is 1 kHz to 3 kHz, which is usually where the clarity of the vocals can be brought out. Above that is brightness, bite, and air, but look out for sibilance! How much you can boost here depends entirely upon the song and the singer.

Compression
When it comes to controlling vocal dynamics, using two compressors that each do less often yields the most transparent result. Use the first compressor with a fast attack and high ratio (10:1), working just on the peaks and ignoring everything else. The second compressor is best set with a 4:1 ratio, a slower attack and release, and the threshold set so it’s compressing about 2-4 dB.
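As a rough sketch of that two-stage idea (the thresholds here are placeholders, chosen only so the second stage sits around a few dB of reduction), the static gain math looks something like this:

```python
# Rough sketch (assumed settings, not prescriptive) of the two-stage approach above,
# expressed as static gain computers: a fast 10:1 peak compressor followed by a
# gentler 4:1 compressor doing a few dB of reduction.
def gain_reduction_db(level_db, threshold_db, ratio):
    """Hard-knee gain reduction, in dB, for a signal level above threshold."""
    over = max(0.0, level_db - threshold_db)
    return over - over / ratio

def two_stage(level_db, peak_thresh=-6.0, peak_ratio=10.0,
              main_thresh=-16.0, main_ratio=4.0):
    # Placeholder thresholds: in practice, set the second stage so it shows
    # roughly 2-4 dB of gain reduction on a typical vocal level.
    after_peak = level_db - gain_reduction_db(level_db, peak_thresh, peak_ratio)
    return after_peak - gain_reduction_db(after_peak, main_thresh, main_ratio)

for level in (-24, -18, -12, -6, 0):
    print(f"in {level:4d} dB -> out {two_stage(level):6.1f} dB")
```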

Reverb
This is an effect you’ll want to place on an auxiliary track rather than having it right on the vocal track. Use a send to add the reverb to the vocals. The reverb time should be related to the tempo of the song; it can really clutter the mix if it’s not. “Hall” and “plate” are the most common types of reverb for vocals. Shaping the reverb sound with EQ is recommended.
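One common rule of thumb for relating reverb time to tempo (an assumption here, not a hard rule from the article) is to let the tail die away over roughly a bar:

```python
# Illustrative rule of thumb: size the reverb decay to the song tempo,
# e.g. let the tail fade out over about one bar, minus a short pre-delay.
def reverb_time_s(bpm, beats=4, pre_delay_s=0.02):
    beat_s = 60.0 / bpm
    return beats * beat_s - pre_delay_s

print(f"{reverb_time_s(120):.2f} s decay at 120 bpm")  # about 1.98 s
```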

Automation
Automating the volume level of the vocals is absolutely essential to getting the vocal to sit right where you want it throughout the song. Automating the reverb send for the vocal will allow you to have just the right amount at any time. You can also automate effects like chorus and delays for the vocals to keep things interesting through the song.

Vocal Sweetening
A great all-in-one tool for vocal processing is something like iZotope Nectar, which packs eleven effects into one plugin, including EQ, compression, auto and manual pitch correction, reverb, delay and more.

Be sure to try out some of these tricks the next time you mix vocals.

Jon Tidey is a producer/engineer who runs his own studio, EPIC Sounds, and enjoys writing about audio on his blog AudioGeekZine.com.

 

Posted by admin on 06/06 at 09:46 AM

Wednesday, June 05, 2013

Dialing It In: Delivering Consistent Concert Sound For Paramore

A band with a highly defined sense of what they like to hear

Since Paramore’s 2005 debut recording All We Know Is Falling, the Nashville-based rock outfit’s popularity has grown substantially, garnering multiple Grammy nominations, plum song placements in soundtracks for motion picture blockbusters like Twilight, platinum sales, and a consistently growing global audience.

They’ve also developed a reputation for high-energy live shows, fueled as much by their musical chops as by a finely honed sense of what they and their audience need to hear in order for both artists and crowd to leave it all on the stage and on the arena floor, respectively, every night.

The band is currently touring in support of its fourth release – simply entitled Paramore – representing the first full-length album and tour they’ve undertaken since the departure of guitarist Josh Farro and his brother, drummer Zac Farro.

“It hasn’t been an easy break,” says Travis Bing, Paramore’s monitor engineer since 2009. That said, both he and front of house engineer Eddie Mapp had nothing but good things to say about the atmosphere on the road when talking with me from Denver, roughly two weeks into the North American leg of the tour. “The feeling is onwards and upwards. Everyone who’s here wants to be here, band and crew, and that makes it fun for everyone, even old, jaded road dogs.”

Personnel changes aside, the stage setup is similar to previous tours, although principal members, lead singer Hayley Williams, guitarist Taylor York and bassist Jeremy Davis are highlighted more obviously than the players backing them up, including guitarist Justin York (Taylor’s brother), keyboardist/rhythm guitarist Jon Howard and drummer Miles McPherson.

Paramore performing at a California stop on the current tour.

And the band’s attention to detail when it comes to their sound is also as keenly focused as ever, a fact that prompts Bing to refer to them occasionally as “tone freaks.”

Feeling It
All six musicians have a highly defined sense of what they like to hear, and feel that their on stage mix is integral to their ability to put on the type of performance the audience deserves.

“Jeremy, the bass player, is an interesting case,” Bing says. “The band is on IEMs, but he wants to feel the sound, so we put a pair of d&b M2 wedges in front of him. They’re mainly pushing kick, snare and bass, but the challenge is maintaining decent stage volume and still relying on IEMs over wedges and side fills. Sometimes his mix gets a little cluttered, but when that happens we just dial it in again.”

In fact, Bing continues, the whole band wants to feel the sound, explaining that he also deploys a d&b Qsub for McPherson and a stack of two d&b Vsubs and one C7 loudspeaker, all driven by four d&b D12 amplifiers, per side as fills.

Monitor engineer Travis Bing at his Avid VENUE Profile console.

The Vsubs were a last minute addition, based on a demo at the band’s audio supplier, Nashville-based Spectrum Sound, prior to the tour. “I’ve used d&b loudspeakers for about six years, because I feel they get that we need gear to be lighter and more compact without sacrificing audio quality,” Bing adds.

While both backup guitarists’ amplifiers are rear-firing, Taylor York’s and Davis’ amps are pointed downstage; a compromise, Bing admits, but unlike the iso cabs and other options they’ve tried, it’s an approach that allows the two to hear what they want without compromising their performances.

“Mixing monitors is a psychological game,” he says. “You’re dealing with six different minds, personalities and sets of emotions that affect the show. It’s more than just providing what they need to hear, it’s giving them confidence and building mutual trust.”

Lead singer Hayley Williams with her Sennheiser SKM 2000 wireless mic painted her signature orange.

Bing’s approach to doing so informs his choice of console. “I mix monitors like a FOH engineer, polishing stuff to make it sound as much like the record as possible and I’m using an Avid VENUE Profile, partly because of the layout, and partly because we’ve grown inputs exponentially. When I started we were at 32 inputs, and now we’re at 64,” he says, citing the recent addition of glockenspiel and a pair of toms and a snare for Taylor York that’s located downstage.

“All of my mixes are in stereo, post fader,” Bing adds. “This isn’t an overly complicated band, so I operate the desk like an analog console and make changes on the fly. With the Profile, a button push here and there and I’m where I need to be. It’s intuitive and I love the plug-ins, especially the Cranesong Phoenix. Essentially it takes something digital, warms it up and makes it sound more natural.”

He references Waves PuigChild 660, CLA-2, and SSL Channel as other key plug-ins, and also cites a Waves C6 multiband compressor, specifically, as integral to sculpting the overall tone of the band’s IEM mix.

“I’m also using the C6 on Hayley’s vocal over EQ now, because it allows her to hear the frequency spectrum she wants, but doesn’t knock any frequencies out completely,” he says.

Wireless IEM systems are all Sennheiser, a combination of ew 300 IEM G2 and G3 systems, joined by an AC 3200-II active 8-channel transmitter combiner and A 5000-CP passive antenna. Band members sport Ultimate Ears UE11 earpieces. “This is a very transparent, natural and warm-sounding IEM rig,” Bing notes.

Rock Solid
The primary goal of New Orleans-based FOH engineer Eddie Mapp, who took the reins just this past February, is maintaining the same consistency that Bing provides on stage despite not traveling with a house system.

“We’re doing theatres now, and then we go to Europe, mostly for festivals,” Mapp says. “Obviously, with the PA changing nightly, that’s a challenge, but I’m traveling with a Meyer Sound Galileo 616 loudspeaker processor and a Mac mini running (Rational Acoustics) Smaart 7, which helps me maintain consistency.”

Eddie Mapp at the Midas PRO2c he bought and is using on the tour.

He landed the Paramore gig on the recommendation of tour manager Andrew Weiss, who’d worked with him previously with Evanescence. Based on previous experiences with Midas PRO Series consoles, and seeking a smaller footprint, he chose a PRO2c for this run. In fact, he actually bought the PRO2c he’s carrying.

“I use the console for mixing, but not for system EQ or delays. I prefer to do that externally,” he explains. “With Galileo and Smaart, I can walk in anywhere and say, ‘give me a desk.’ Now, with my own console, no matter where we go, when I load my show file I know everything is rock solid.”

Mapp also works with the band on microphone selection and placement, and has largely continued the previous approach. However, he recently re-did the drum kit, with a Shure Beta 91A on kick in, Audix D6s on kick out and toms, a DPA 2011C for snare top, and DPA 4099s on snare bottom, hat and cymbals.

Dual d&b M2 wedges for bassist Jeremy Davis.

“I don’t use the kick in, just the kick out, to eliminate any potential phasing and anomalies,” he says. “The D6s are about two fingers off the tom heads. The proximity effect is pretty incredible so I still end up pulling out a bit of 200 (Hz), but in that position, it makes the toms sound huge.

“As for the DPA mics, Big Mick (Hughes) from Metallica turned me on to them. The drummer, Miles, has a minimal kit, with two crashes and a ride. I’m under-miking the cymbals to get a little more isolation, and then EQ each according to its individual tone. These mics also eliminate stands, so there’s less chance of anything falling over.”

Mapp has also implemented Pintech RS-5 acoustic drum triggers on kick, snare and toms to open up the side chain of the gates, another practice he credits to learning from Big Mick. “It helps isolate everything and allows you to bring the threshold back on the gate to pick up subtle nuances, even if the drummer’s just tapping the rim of a tom. I never have to look at my gates during the show. I know they’re opening, so I can pay attention to something else.”

The approach to miking Miles McPherson’s kit.

Taylor York’s partial drum kit is captured with Sennheiser e 904s, with a Shure SM57 for glockenspiel. “That’s what they had on them before and it’s nice to have a bit of variety, a different sound, as compared to Miles’ drums.”

On the bass, where Mapp is seeking both attack and definition, he takes a pre-cabinet feed via a Klark Teknik DN100 direct box (DI), and then post-cabinet feeds from both a SansAmp DI and a Shure Beta 52. Radial J48 DIs take acoustic guitars direct, with Radial JDIs for Howard’s keyboards and drum pad.

Making Room
Each of the three electric guitars plays to two cabinets. An sE Electronics Voodoo VR1 ribbon mic is mounted in the center of the cone of the “distorted” cab in each set, with a dynamic mic positioned slightly off axis to the cone of the “clean” cabs.

Shure SM7Bs are applied to the clean cabs of Taylor York and Jon Howard, while a Heil Sound PR-40 is the choice for Justin York.

In the band’s IEM mix, Bing emphasizes the feed from Taylor York’s SM7 on Justin York’s PR-40 – choices the band, being the “tone freaks” they are, prefer.

“They like that cut in their ears,” Mapp adds, “but I use the VR1s for the house. They sound huge and take EQ well. My thing is getting the guitars as big and in your face as possible, and a lot of that is EQing out a lot of the 2 kHz to 4 kHz range, then clearing up the low mid to make room for all three guitars.”

Mapp also deploys sE Electronics IRF Reflexion filters on the rear-firing cabs to help eliminate initial reflections when the stage is shallow and backed by a hard wall.

In speaking to both engineers, it’s clear that Paramore wants to hear everything their audiences do – from the crowd noise Bing mixes into their IEMs via a pair of Shure KSM 27 cardioid condensers positioned downstage left and right, to the often less than pristine vocals provided by the fans that Williams pulls up on stage.

A closer look at Hayley Williams’ custom-painted Sennheiser transmitter.


“We use one of her backup mics for that,” Bing says, “and the band wants to hear it even if they’re screaming and off-pitch. I run that post fader so I can mix it in for everyone at the same time.”

Mapp also uses the feed from the KSM27s when recording the show to his JoeCo BlackBox BBR 64-MADI recorder, a recording that’s as useful a reference for him as it is for the band.

Sennheiser wireless microphone and IEM receivers that live close to Bing’s monitor mix location.

“I always love working with a band that cares about their tone and how that’s being delivered to the audience,” he says. “I’ve been fortunate that a lot of bands I’ve worked with get that. And this band really listens to their sound and to each other. Even with three electric guitars going at once they don’t step on each other.”

Sennheiser microphones are the universal choice for vocals, headlined by Williams’ SKM 2000 wireless mic custom painted with her orange signature color. Hardwired e935 microphones are posted for the background vocals of Justin York and Jon Howard.

The Paramore production team. Front row, L-R: Eddie Mapp, Mehdi Rabii (security), Erik Leighty (production manager), Nathan Warshowsky (drum tech), Joseph Howard and David Bernson (guitar techs), and Ryan Stowe (video tech). Back row, L-R: Chad Peters (lighting director), Travis Bing, Charles Martin (lighting tech), Riley Emminger (guitar tech), Sean Henry (audio tech), Andrew Weiss (tour manager), and Aaron Holmes (merchandise).

In the overall sonic big picture, the key for Bing and Mapp is reflecting the level of attention the band pays to the sound they’re creating at the source by keeping their mixes well defined and clean on stage and in the house – a characteristic they also strive for when it comes to the actual stage setup, particularly downstage near Williams, York and Davis.

“It has to be a very clean stage because the three of them just go crazy,” Bing says. “We try to make it as safe as possible, because when Taylor’s in the right mood he’ll swing his guitar around, drag it on the floor and knock over drum kits.

“That makes for a good show, but when it comes to audio gear, he understands that if he breaks it, he’s bought it.”

Based in Toronto, Kevin Young is a freelance music and tech writer, professional musician and composer.

Posted by Keith Clark on 06/05 at 03:29 PM

In The Studio: 10 Things About Sound You May Not Know…

Does compressed music generate any physical effects? And much more...
This article is provided by Bobby Owsinski.

 

This is something that I posted a couple of years ago that’s worth posting again.

It’s the 10 things about sound, some of them which you probably didn’t know.

They come from Julian Treasure, the author of “Sound Business” and chairman of the UK audio branding company The Sound Agency.

He speaks internationally about the effect of sound on people, business and society.

The following comes from a CNN article outlining Julian’s TED Conference presentation.

Especially be aware of number 7!

1) You are a chord. This is obvious from physics, though it’s admittedly somewhat metaphorical to call the combined rhythms and vibrations within a human being a chord, which we usually understand to be an aesthetically pleasant audible collection of tones.

But “the fundamental characteristic of nature is periodic functioning in frequency, or musical pitch,” according to C.T. Eagle. Matter is vibrating energy; therefore, we are a collection of vibrations of many kinds, which can be considered a chord.

2) One definition of health may be that that chord is in complete harmony. The World Health Organization defines health as “a state of complete physical, mental, and social well-being and not merely the absence of disease or infirmity” which opens at least three dimensions to the concept. On a philosophical level, Plato, Socrates, Pythagoras and Confucius all wrote at length about the relationship between harmony, music and health (both social and physical).

Here’s Socrates: “Rhythm and harmony find their way into the inward places of the soul, on which they mightily fasten, imparting grace, and making the soul of him who is rightly educated graceful, or of him who is ill-educated ungraceful.”

3) We see one octave; we hear ten. An octave is a doubling in frequency. The visual spectrum in frequency terms is 400-790 THz, so it’s just under one octave. Humans with great hearing can hear from 20 Hz to 20 kHz, which is ten octaves.

4) We adopt listening positions. Listening positions are a useful set of perspectives that can help people to be more conscious and effective in communication—because expert listening can be just as powerful as speaking. For example, men typically adopt a reductive listening position, listening for something, often a point or solution.

Women, by contrast, typically adopt an expansive listening position, enjoying the journey, going with the flow. When unconscious, this mismatch causes a lot of arguments.

Other listening positions include judgmental (or critical), active (or reflective), passive (or meditative) and so on. Some are well known and widely used; for example, active listening is trained into many therapists, counselors and educators.

5) Noise harms and even kills. There is now a wealth of evidence about the harmful effects of noise, and yet most people still consider noise a local matter, not the major global issue it has become.

According to a 1999 U.S. Census report, Americans named noise as the number one problem in neighborhoods. Of the households surveyed, 11.3 percent stated that street or traffic noise was bothersome, and 4.4 percent said it was so bad that they wanted to move. More Americans are bothered by noise than by crime, odors and other problems listed under “other bothersome conditions.”

The European Union says: “Around 20% of the Union’s population or close on 80 million people suffer from noise levels that scientists and health experts consider to be unacceptable, where most people become annoyed, where sleep is disturbed and where adverse health effects are to be feared. An additional 170 million citizens are living in so-called ‘grey areas’ where the noise levels are such to cause serious annoyance during the daytime.”

The World Health Organization says: “Traffic noise alone is harming the health of almost every third person in the WHO European Region. One in five Europeans is regularly exposed to sound levels at night that could significantly damage health.”

The WHO is also the source for the startling statistic about noise killing 200,000 people a year. Its findings (LARES report) estimate that 3 percent of deaths from ischemic heart disease result from long-term exposure to noise. With 7 million deaths a year globally, that means 210,000 people are dying of noise every year.

The cost of noise to society is astronomical. The EU again: “Present economic estimates of the annual damage in the EU due to environmental noise range from EUR 13 billion to 38 billion. Elements that contribute are a reduction of housing prices, medical costs, reduced possibilities of land use and cost of lost labour days.” (Future Noise Policy European Commission Green Paper 1996).

Then there is the effect of noise on social behavior. The U.S. report “Noise and its effects” (Administrative Conference of the United States, Alice Suter, 1991) says: “Even moderate noise levels can increase anxiety, decrease the incidence of helping behavior, and increase the risk of hostile behavior in experimental subjects. These effects may, to some extent, help explain the “dehumanization” of today’s urban environment.”

Perhaps Confucius and Socrates have a point.

6) Schizophonia is unhealthy. “Schizophonia” describes a state where what you hear and what you see are unrelated. The word was coined by the great Canadian audiologist Murray Schafer and was intended to communicate unhealthiness. Schafer explains: “I coined the term schizophonia intending it to be a nervous word. Related to schizophrenia, I wanted it to convey the same sense of aberration and drama.”

My assertion that continual schizophonia is unhealthy is a hypothesis that science could and should test, both at personal and also a social level. You have only to consider the bizarre jollity of train carriages now—full of lively conversation but none of it with anyone else in the carriage—to entertain the possibility that this is somehow unnatural.

Old-style silence at least had the virtue of being an honest lack of connection with those around us. Now we ignore our neighbors, merrily discussing intimate details of our lives as if the people around us simply don’t exist. Surely this is not a positive social phenomenon.

7) Compressed music makes you tired. However clever the technology and the psychoacoustic algorithms applied, there are many issues with data compression of music, as discussed in this excellent article by Robert Harley back in 1991.

My assertion that listening to highly compressed music makes people tired and irritable is based on personal and anecdotal experience - again it’s one that I hope will be tested by researchers.

8) Headphone abuse is creating deaf kids. Over 19 percent of Americans aged 12 to 19 exhibited some hearing loss in 2005-2006, an increase of almost 5 percent since 1988-94 (according to a study in the Journal of the American Medical Association by Josef Shargorodsky et al). One university study found that 61 percent of freshmen showed hearing loss (Leeds 2001).

Many audiologists use the rule of thumb that your headphones are too loud if you can’t hear someone talking loudly to you.

For example, Robert Fifer, an associate professor of audiology and speech pathology at the University of Miami Leonard M. Miller School of Medicine, says: “If you can still hear what people are saying around you, you are at a safe level. If the volume is turned so loudly that you can no longer hear conversation around you, or if someone has to shout at you at a distance of about 2 or 3 feet to get your attention, then you are up in the hazardous noise range.”

9) Natural sound and silence are good for you. These assertions seem to be uncontroversial. Perhaps they resonate with everyone’s experience or instinct.

10) Sound can heal. Both music therapy and sound therapy can be categorized as “sound healing.” Music therapy (the use of music to improve health) is a well-established form of treatment in the context of mainstream medicine for many conditions, including dementia and autism.

Less mainstream, though intellectually no more difficult to accept, is sound therapy: the use of tones or sounds to improve health through entrainment (affecting one oscillator with a stronger one).

This is long-established: shamanic and community chant and the use of various resonators like bells and gongs, date back thousands of years and are still in use in many cultures around the world.

Just because something is pre-Enlightenment and not done in hospitals doesn’t mean that it’s new-age BS. Doubtless there are charlatans offering snake oil (as in many fields), but I suspect there is also much to learn, and just as herbal medicine gave rise to many of the drugs we use today, I suspect there are rich resources and fascinating insights to be gleaned when science starts to unpack the traditions of sound healing.

It’s worth it to check out the original article on CNN.com since it also contains the original TED video that the above came from.

Bobby Owsinski is an author, producer, music industry veteran and technical consultant who has written numerous books covering all aspects of audio recording. For more information be sure to check out his website and blog.

Posted by Keith Clark on 06/05 at 03:26 PM

Blazing New Trails: Starin Develops A New Way To Serve The AV Marketplace

Moving beyond the classic "distributor" model

“We have a little bit of a different business model than everyone else,” says Jim Starin, talking about the company he founded in 1988 that has grown from a fledgling Midwest-based regional sales rep firm to playing a significant role in the success of a “who’s who” of manufacturers, as well as the thousands of professional designers, installers and dealers who utilize their products across North America.

The original rep firm was called “Starin Marketing,” a moniker now simply adapted to “Starin.” But the subtraction of that second word actually reflects the steady addition of a huge array of services – marketing being just one of many – that the company provides. A short-hand term to describe the Starin model is “distributor,” but really, it’s only a starting point.

Based where it started 25 years ago in Chesterton, IN, about 40 miles east of Chicago, the company defines itself as a “channel manager,” a hybrid of a product distributor, customer service and fulfillment center, and sales and marketing team representing more than 55 brands at current count. And, by the way, the rep firm aspect remains, although it’s now less than 10 percent of the overall picture.

“Our mission is to make the job of the AV professional easier and more profitable,” Jim Starin explains. “We can exactly mirror what each manufacturer we work with wants in terms of programs, pricing, and so on, for its dealer/re-seller customer base. Each of these arrangements is custom and has to account for every single detail for every single manufacturer, so there’s nothing cookie-cutter about how it’s set up.

“In the past, customers felt like they were being put on the ‘B team’ if they dealt with a distributor, but that’s definitely not the case with the way we approach it,” he continues. “Our customers also understand the advantages of the consolidation of brands and services we provide. If something goes wrong with a component or system, there’s no finger-pointing – they know that they have a single source to go to in order to directly address and fix the problem.”

A headquarters facility in Chesterton serves as the heart and hub of the operation, custom-built in 2006 to provide 10,000 square feet of warehouse space for product staging and distribution, and another 6,000 square feet for dozens of staff specialists and infrastructure to support it, including a large technical training and education facility outfitted with plenty of A/V capability.

A Starin warehouse stocked and shipping AV product throughout North America.

Owing to its substantial growth, a few years ago Starin began leasing another 50,000 square feet of warehouse space in nearby Portage, IN, and has also revamped a previous facility to accommodate the company’s IT operation.

Numerous other Starin team members work remotely, spread among 10 offices, joined by more than 70 contracted independent rep firms, in order to provide face-to-face, hands-on support of the continent-wide customer base. It’s an arrangement that enhances both convenience and communication between the company and the customer. “We have ‘feet on the street’ for every re-seller in North America,” notes Starin.

How It All Works
Originally from the Chicago area, he hadn’t set out to work in the professional A/V industry when hired by a hi-fi store as a college student in the early 1970s in DeKalb, IL, but this chance employment opportunity proved to be the catalyst.

“I found the business aspect of the store to be absolutely fascinating,” he explains. “Things like inventory management, the process of opening a new store location, and so on, really intrigued me. They’re the key to how it all works, the difference between success and failure, no matter what you happen to be selling.”

After college, Starin took a position with the Advent loudspeaker company in Cambridge, MA, and was soon charged with traveling around the country to provide customer support and business-building consultation to stores carrying the Advent line. It was helpful to the stores, to be sure, which in turn meant that they were likely to sell more of his employer’s loudspeakers. He notes that this is still the MO (modus operandi) for his company today.

Wanting to return closer to home and also transition to the professional market, Starin eventually joined Audio Resources, a Chicago-based pro audio sales rep firm founded by brothers Wayne and Terry Hrabak. “I prefer the objectivity of professional AV as opposed to the consumer market, both in terms of business and technology,” he says. “It’s much less ‘ethereal,’ far more direct and reality based.”

Plenty of meeting and conference facilities for the company and clients alike.

He learned the ropes of the pro market as well as the unique facets of the sales territory, representing several leading brands while building relationships and working with a talented team that included Bill Mullin, who now serves as “second in command” to Starin.

When Audio Resources decided to transition out of the rep business, Starin formed his own firm based in Chesterton, representing several notable companies, including Electro-Voice, an association that remains to this day. The fledgling business thrived, on the path to garnering more than 100 “rep of the year” awards (and counting) over the years.

The defining moment came in the mid-1990s when a major manufacturer represented by Starin sought to eliminate smaller dealer accounts as a downsizing measure, deeming them not worthy of investment.

“I understood their thinking but didn’t agree with it. Instead, I saw a lot of potential,” Starin says. “The lion’s share of our territory was rural, some of it urban, but a lot of smaller customers, and when you look at a composite of all of them, it’s a significant amount of profitable business. So I talked the manufacturer into letting us buy product direct and serve those dealers backed by commensurate support. That really got the ball rolling for us.”

Jim Starin and his team celebrating joining the local Chamber of Commerce just after the company was founded 25 years ago. The enterprise was based in Jim’s basement, and five of these six people are still employed by Starin.

World Of Difference
And it’s gone and grown from there. The 55-plus brands now in the Starin fold span four primary market channels, including pro A/V, MI, conference/presentation and broadcast/video products, with each channel headed by an experienced manager in that market and backed by a dedicated staff.

Just a sampling of audio brands in the A/V channel includes Midas/Klark-Teknik, Turbosound, RCF, Sony Pro, CAD Audio, and dozens more, joined by leading names from the other channels such as Gentner, Bosch, ETA, Barco, Grass Valley, and on and on.

Each brand is served with its own custom approach, defined between Starin and the individual entity, and it often includes design and application support that can be dispatched to a job site within 24 hours.

The practice extends to the online world, where Starin has stepped up with a customized web portal program for each brand that allows business to be conducted online at any time, day or night. The portals are standardized in terms of form, function and operation, so it’s the same experience for customers across brands.

The program demands a tremendous amount of IT support, with each portal providing the unique information of the individual brand, their various product and training services, differing password protections, and more. There’s also the matter of making it all work with every manufacturer’s own IT system and preferences. Just keeping product pricing and ever-changing terms current is mind-boggling, yet Starin ensures this happens on a weekly basis on every portal.

“The portals make all of the difference in the world, really set us apart in terms of serving our customers,” explains Starin. “Anything and everything you need to do business with a brand and company is included, and it’s all there and up to date, any time you want it.”

A recent addition provided to the portals is Learning Curve, an effective way to provide training to individuals and businesses on their own schedule, available when it works best for them. Learning Curve helps fill the educational gap created by the lack of time to devote to in-person product introduction and application sessions.

Extending its online capability even further, Starin also became the first participating distributor with brands listed within the InfoComm AV-iQ master product directory.

The launch point to individual web portals tailored to every brand – more than 55 total.

“At the same time, for those who don’t want to be in front of a computer, we still do real, live, in-person hands-on training and product information sessions,” Starin states, bringing focus back to what he sees as the biggest key to business success: people. A look at his own company’s approach provides the proof. Starin’s first employee, Susan Kelley, still serves after more than two decades on staff, joined by more than a dozen 10-plus-year veterans.

The company has also long been an ESOP (Employee Stock Ownership Plan), with almost 45 percent now owned by its employees. “It’s pretty straightforward,” Starin states. “If you want your people to behave like an owner, make them one. That’s really paid off.”

After a long climb to building the company to its current status, Starin is looking to cut back on his day-to-day involvement, gradually turning over the reins to Mullin (who was named president in May) and the well-prepared and motivated team. Still, it’s not hard to envision him continuing to pursue his passion for figuring out what makes businesses tick and helping people achieve their goals.

“This is what we live by: nobody ever dies wishing they had spent more time in the office, so we work to shorten their workdays,” he concludes. “That’s really our whole concept, the primary mission.”

Keith Clark is editor in chief of Live Sound International and ProSoundWeb.

Posted by Keith Clark on 06/05 at 01:43 PM

Church Sound: Getting Our Soundcheck Priorities Straight

The nature of soundcheck, building a mix and working together.
This article is provided by Church Soundcheck

 

Who is soundcheck for, anyway? Sounds like a silly question, at least until you start talking with the worship teams and tech crews at a lot of churches.

Then the realization begins that in fact there are some different perspectives floating around out there.

From my perspective as an audio engineer, soundcheck is not the process of setting up the microphones or making sure that everything is working properly, and it’s not a time for the worship team to rehearse its music for the day.

Rather, soundcheck is what happens in between those two processes, a time of mutual benefit to the tech crew, musicians and singers. You can actually divide it into three parts: the technical phase, the vocalist/player phase, and the worship team phase.

In the first part, the sound team refines the gain structure and the sound character of each individual instrument and vocal. This process allows the mixer to work out any last moment problems that might occur with the gear, like a mic cable that starts to fail, or a dirty connection in a patchbay that finally shows up. (And you thought it only happened to you!)

Once the console settings are at a good starting point from which to build the front-of-house mix, the musicians can begin to ask for refinements in their monitor mixes. With the players content with their monitor mixes, the vocalists can start to refine what they’re hearing as well.

What I call the worship team phase involves everyone. It may help to understand that I consider the tech crew, the musicians and the vocalists all to be equal members of the worship team. Each member is offering up their gifts to God, and it only comes together when everyone in the team gives it their all.

This final part of soundcheck allows the musicians and vocalists to start rehearsing their songs while the tech crew begins to rehearse their mix. In every rehearsal, the vocalists make subtle adjustments to how they’re going to sing their parts, even changing who takes which harmony part.

“The sound team refines the gain structure and the sound character of each individual instrument and vocal.”

The keyboard player locks in on which chord inversions he or she plans to use, the guitarist works out chords and begins to add stylistic nuances to the sound, and so on. The musicians have a right to know that all of the work they’re doing so carefully will be heard in a balanced musical mix for the congregation.

Getting The Whole
What many musicians and vocalists don’t understand is that there are a great many subtle adjustments the tech crew can make to the mix as well, adjustments that accent and enhance what the musicians and vocalists are doing on stage.

That cool skank guitar line should be heard in proper perspective in the mix. That bass guitar riff that only happens once, coming out of the bridge, needs to be heard. Adding a slight flanging effect to the backing vocals only during the chorus can really make the parts jump out.

Fitting a single-repeat echo on the worship leader’s part in just a couple of spots in the song, or even on just a couple of words, works incredibly well if the repeat is timed to the tempo of the song.

The only way that the sound mixer is going to know that those parts exist, understand how those refinements can and should fit in his mix, and be prepared to pull off whatever is necessary to make it work each time the song is played, is through careful listening and time—lots of it.

There have been many times when the players and singers decide they’re confident that they know the song and move on to the next, or decide to take a break, and I’m left there hanging, only partly done creating the sound that I was going for.

Understand that this is a two-way street. I can’t expect someone playing an electronic keyboard, for example, to play for me so I can work with the FOH sound without them being able to hear some of that keyboard sound in the monitors.

Yet I can’t properly set the gain structure for that instrument without having them play the instrument.

A question I often get asked is which do I set first: the FOH mix or the monitor mix? And the answer, of course, is both.

The problem is when the player expects the sound of the keyboard to be perfect in level and sound character from the first note. Perhaps shock therapy would work in these cases!

Building The Mix
If the worship music style includes a rhythm section or a number of instruments, then the soundcheck allows the mixer to start building the mix. Each sound tech may use a different approach to building the mix.

Personally, I like to build in blocks, working with one input at a time. In other words, I’ll start with the kick drum, then the snare, check to see how they work together, then add the rack toms and floor tom, then finally include the hi-hat and cymbals.

Then I’ll work with the bass guitar sound, check it with the kick and snare, and then check how the bass guitar and the full drum kit fit together.

Once I’m confident that I have a solid foundation upon which to build the mix, then I’ll move to the keys, guitars, other instruments, then backing vocals and finally the worship leader. My process can frustrate a worship team that wants to hear something consistent on stage during the soundcheck because I’m frequently pulling things up and down in the FOH mix as I work my way through it.

While this doesn’t affect their monitor mix, they can certainly hear the FOH changes from their location on stage, and those changes can be disconcerting.

“Then the realization begins that in fact there are some different perspectives floating around.”

Necessary Components
But from my perspective, the soundcheck is mine. It’s my time to use as needed. It’s the period when my needs to get the FOH mix together have to come first, and I have to be in control of how we use that time.

That may sound selfish, but it’s a necessary component if we’re going to achieve technical excellence together. Once we enter the rehearsal time, then the playing field levels out and the needs of the tech crew and the players/vocalists are equal and should be worked out together.

Another approach voiced by several seasoned sound mixers is philosophically the opposite of my approach. They prefer to keep all of the instruments and vocals up in the mix at all times so that any EQ choices or level changes can constantly be evaluated as a whole rather than individually.

That’s a tremendously valid point, and I would simply say go with what works for you. I can easily see the players and singers jumping to the defense of this second approach, since it supports their desire for more consistency during the soundcheck.

However, the reality is that neither approach will necessarily deliver a better end result than the other. On top of that, those on stage aren’t the ones mixing, and they aren’t the ones who will catch the heat if things don’t sound great, so each individual sound mixer has to use the approach of building a mix that works best for them.

I will say that I’ve found myself in recent days virtually forced by exceedingly short soundcheck times to keep the full mix up and learn how to achieve the results I’m looking for quickly and accurately without having the luxury of isolating individual parts, at least not to the degree that I’m used to doing.

Enjoy The Process
The soundcheck is a handshake, if you will, between the tech crew and the worship team. The end result should be one of joyful abandon during a worship service.

Yes, it’s O.K. for the tech crew to enjoy the worship service as much as the worship team. I can’t exactly throw my head back, close my eyes and worship God like the vocalists might be able to do. If I do, I’ll likely miss a cue or create a problem. But that doesn’t mean that I can’t have fun and enjoy the process.

The bottom line: the goal here is to spark a conversation between the worship team and the tech team that helps everyone come to an understanding and agreement about the objectives of a soundcheck.

I’ve witnessed worship teams say things to tech crew volunteers and production staff that, shall we say, they wouldn’t have said if Jesus were standing there. I’ve also been around worship teams and tech teams alike saying stuff about the other “side” that they shouldn’t have been saying.

When you get down to the bottom of all that strife, it generally turns out to be a lack of understanding. Some music pastors and many musicians and vocalists think of the tech crew as subservient to them. Unfortunately, some tech crew leaders think more highly of their efforts than they should, as well.

The reality is that every one of them needs to come to the understanding that they’re all in this together, that each musician, singer, worship leader, sound tech, lighting tech, video graphics tech, and so on are all equal members of the same team, striving together toward a common goal.

Why am I so hot on this topic? Because through Church Soundcheck and my work as a consultant to churches, we hear about this kind of strife happening every week.

“The soundcheck is mine. It’s my time to use as needed.”

We often find ourselves counseling or at least consoling some embattled tech guy, music pastor or player. We even hear from musicians who are tired of tech guys beating up on the players and singers.

It doesn’t have to be that way. Our time on this planet may be short. Jesus may be back sooner than we think. I would suggest that it’s way past time that we lay down our petty personal goals. Let’s learn to enjoy our time of worshipping together.

It’s an honor to serve the needs of technical excellence for great players, singers and music pastors. It’s no fun when those individuals are full of themselves and acting like idiots. I’ve worked with both.

I’ve looked, and can’t find anywhere in the Word where it says that we’re supposed to be at odds with one another during a worship service. So let’s choose to worship God together and get on with the task at hand.

Curt Taipale heads up Church Soundcheck, a thriving community dedicated to helping technical worship personnel, and he also provides systems design and consulting services.

Posted by admin on 06/05 at 01:03 PM

In The Studio: Compressing Drums—Setting Attack & Release

A wonderful tool, but if not used well, it can really "jack things up"
These videos are provided by Home Studio Corner.

In this two-part presentation, Joe focuses on compression, using a relevant example of applying compression on a drum bus.

As he notes, compression can be a wonderful tool, but if not used well, it can really “jack things up.”

He starts with a focus on what he calls the “secret ingredient” of compression—the attack knob. It can enhance “punch and depth,” but it can also make a mushy mess of things when not applied correctly. The goal is usually to keep sound open and present, while fitting appropriately within the mix.

Next, Joe moves on to the release portion of compression, which he describes as a kind of “tone knob.” Here, the balance between transients and sustain can be fine-tuned to attain the desired tone.
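To make attack and release a bit more concrete, here is a minimal sketch of a feed-forward compressor gain computer in Python. The threshold, ratio and time constants are arbitrary illustration values, not settings from the videos.

import numpy as np

def compress(x, sr, threshold_db=-18.0, ratio=4.0, attack_ms=10.0, release_ms=120.0):
    """Apply simple feed-forward compression to a mono float signal x (-1..1)."""
    level_db = 20.0 * np.log10(np.abs(x) + 1e-9)        # instantaneous level
    over_db = np.maximum(level_db - threshold_db, 0.0)   # amount above threshold
    target_gr = over_db * (1.0 - 1.0 / ratio)            # desired gain reduction (dB)

    # One-pole smoothing: the attack coefficient is used while gain reduction is
    # rising (clamping the transient), the release coefficient while it decays
    # (shaping how long the "grab" hangs on, which reads as sustain or tone).
    a_att = np.exp(-1.0 / (sr * attack_ms / 1000.0))
    a_rel = np.exp(-1.0 / (sr * release_ms / 1000.0))
    gr = np.zeros_like(x)
    g = 0.0
    for n, tgt in enumerate(target_gr):
        coeff = a_att if tgt > g else a_rel
        g = coeff * g + (1.0 - coeff) * tgt
        gr[n] = g
    return x * 10.0 ** (-gr / 20.0)

Lengthening attack_ms lets more of each drum transient through before the gain clamps down (more punch); lengthening release_ms holds the gain reduction longer, which changes how the sustain and cymbal wash sit on the bus.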

 

 

 

 

Joe Gilder is a Nashville-based engineer, musician, and producer who also provides training and advice at the Home Studio Corner. Note that Joe also offers highly effective training courses, including Understanding Compression and Understanding EQ.

Posted by Keith Clark on 06/05 at 11:39 AM

Tuesday, June 04, 2013

Not So Obvious: It’s About More Than Sound

There are plenty of other things to be aware of...

With festival season upon us, I got to thinking that there are plenty of articles that tell us which knob to turn, which button to push, and other mechanical methods of our trade. So I’ve set out here to provide information on some of the other “not so obvious” aspects to keep in mind when working gigs this summer and fall. 

These are things that I’ve encountered in touring as well as here in “home territory” at local festival settings. Some of this may not apply to your methods of working just yet, but may prove to be useful one day in your career. And, not all of them are directly show related, but have proven to serve me well nonetheless.

Fly in the day before the gig. Airplane cabins are pressurized to the equivalent of roughly 8,000 feet of altitude at cruise, meaning your ears are subjected to several changes in “atmospheric pressure” every time you fly. We’ve all been there, even when driving—just a few hundred feet worth of elevation can result in a notable change in the pressure in our eustachian tubes. It can take a good while for the pressure to dissipate so that our hearing is stable enough to trust in a mixing environment. Plus the extra day may be the only chance you get to explore a new city…

Get plenty of sleep. Long gone are the days of “rock ‘n’ roll all night.” Remember that you have a job to do. Just as a guitar player has to take good care of his instrument, we have to take care of ours. In this case, our instruments are our awareness and our ears. We need to be sharp and hear well. Fortunately, we carry these instruments everywhere we go, so it’s convenient to use them as an excuse to stay away from some of the places we’re often asked to go after gigs.

Don’t be “that guy.”
Avoid being the person complaining about every little detail of every little thing at the gig. Unless it’s something truly detrimental to the show, shut up and deal. And realize that some things may have been done on purpose, with your input neither wanted nor needed. No one likes a prima donna, and besides, that’s the band’s job. On the other hand, do be the person with the extra Sharpie or gaff/board tape or multitool at the ready. Event organizers are always watching, and your preparedness (or lack thereof) will be noticed.

No one likes a perfectionist. Don’t spend an hour hollering into the talkback mic about “dialing in” the perfect verb. Presumably you brought your “cans” to the gig, so use them.

Once the show starts, you’re going to change it anyway, and there’s probably someone close to the PA trying to hang a banner or make some other contribution to the event—and you’re ticking them off. So just stop it.

Besides, you can have more fun tomorrow morning by “tuning the system” with some Whitesnake or Ozzy, waking the hippies out of their drug of choice-induced comas.

Every show is your dream gig. It doesn’t matter that maybe this particular band is lousy, or the humidity is stifling, or…whatever. Every show is the most important thing in your world every day.

And what if the management of your actual “dream gig” is standing right behind you, and you’re not doing your best? What are the odds of you actually landing that gig? 

Mix some bluegrass. Inside. In a gym. Doing monitors from FOH. Real bluegrass, not the “sellouts” that went all electrified and stuff. You know, big tube condenser for the band to crowd around, and a half-dozen more mics for instruments, one of which is on a fiddle pointed directly at a wedge due to artist taste. Go ahead. Try it. Rock and country mixers have nothing on the folks that mix this every day. Hats off to them.

Keep the horses in the barn. Just because a system has serious horsepower doesn’t mean you have to use it all…all the time. Be appropriate to the genre. I recently finished a tour with a ’60s band whose set ran all the way from “The Sound of Silence” to “Good Times, Bad Times.” SPL ran from 68 dB to 104 dB. Dynamics anyone?

If you’re mixing bluegrass and you’re good, you can get as loud as a rock band. Don’t. Other engineers know you’re just showing off, and we’re talking smack about you behind your back, and you deserve it. 

There are plenty of other things to be aware of, but I hope to have at least helped spark an internal dialogue on making your shows better for everyone, you included.

Here’s hoping that we all have a great season, and maybe we can catch each other at an event…the day before.

Todd Lewis is production manager at Stewart Sound in Leicester, NC.

Posted by Keith Clark on 06/04 at 05:31 PM

Why Power Matters: Beyond Amplifiers To The Big Picture

Audio people need not be electricians, but we must know how power distribution works

For professional audio people, the word “power” usually conjures up visions of racks of amplifiers used to drive the loudspeakers in a sound system. But the amplifier and other system components must have a stable power source from which to operate.

Thus the issue of power distribution, all the way from Hoover Dam to your sound system, is vital. Some of the principles of audio signal distribution in sound systems are borrowed directly from utility companies, and so much can be gained by taking a look at how they do it.

Most of the useful stuff in the modern world requires an energy source to operate. Several have been widely exploited, including petroleum and its derivatives, natural gas, and even atomic energy. All of these energy sources can be used to generate another form of energy that exists naturally in the environment—electricity.

Our world is teeming with energy just waiting to be harnessed. The atomic age began when scientists discovered that matter is a vast energy source. Its mass is the “m” in E = mc². Even a small amount of matter multiplied by the speed of light squared equals a very large “E”—which stands for energy. We don’t create energy; we transform and modify it for our own use.

An energy source has the potential for doing something that scientists call work. From our friends at dictionary.com, work is defined as, “The transfer of energy from one physical system to another, especially the transfer of energy to a body by the application of a force that moves the body in the direction of the force. It is calculated as the product of the force and the distance through which the body moves and is expressed in joules, ergs, and foot-pounds.”

Once we understand the nature of work, power is easy. Power is the rate of doing work. When electricity was first being harnessed, a common power source was the horse. A typical horse can do a certain amount of work over a certain span of time.

James Watt determined one “horsepower” to be 33,000 foot-pounds per minute. Horses can be ganged together to multiply their power, so a team of horses can out-pull a single animal.

While horses are no longer a common power source in developed countries, horsepower lives on as a way of rating other sources. Any electrical power rating, such as watts, can also be stated in horsepower.
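For a feel for how the two ratings relate, here is the arithmetic in Python; the 2,000-watt figure is just an arbitrary example.

# 1 horsepower = 33,000 foot-pounds per minute = 550 foot-pounds per second,
# which works out to roughly 745.7 watts.
WATTS_PER_HP = 745.7

def watts_to_hp(watts):
    return watts / WATTS_PER_HP

def hp_to_watts(hp):
    return hp * WATTS_PER_HP

print(round(hp_to_watts(1)))        # 746 -- one horse, in watts
print(round(watts_to_hp(2000), 1))  # 2.7 -- a 2,000 W load is under 3 hp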

An important concept to grasp: power is a rate, and when properly specified it must be accompanied by words like “average” and “continuous” to be meaningful.

Most modern power sources can be used in multiples to create bigger ones, like the engines on an airplane or the amplifiers in an equipment rack. The concept is used nearly everywhere that power is generated or consumed - small sources can be used in multiples to create larger sources.

Not An Invention
Electricity is the power source of interest for producing and maintaining a modern lifestyle. It’s what makes life as we know it today possible.

While people existed long before electricity was harnessed, life became a lot easier once humans had a readily available electrical power source at their disposal. Its use is so widespread that we take it for granted.

Few would question the integrity of an electrical outlet found anywhere in a modern building. We just “plug in” without thought and expect it to work—and it usually does.

It’s a great thing that this source can be so reliable, but the bad part is that high reliability causes us to take it for granted, and discourages us from seeking an understanding of how it works.

Contrary to popular belief, electricity is not an invention. It has existed for as long as there has been matter in the universe. We know that all things are composed of atoms, and that electricity is the flow of atomic parts (electrons) from one place to another.

The rate of electron flow is called a current. The pressure under which it flows is called a voltage. Both of these quantities (and most units in general) were named to honor electrical pioneers. All electrical power sources can be characterized by their available voltage and current.

In fact, the simplest formula for electrical power is:

W = IE
where W is power in watts,
I is current in amperes, and
E is electromotive force in volts.
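As a quick worked example of the formula (the amplifier rack and its draw are invented for illustration):

E = 110.0            # nominal U.S. outlet voltage, per the discussion below
W = 1200.0           # continuous draw of a hypothetical amplifier rack, in watts
I = W / E            # rearranging W = IE to solve for current
print(f"{I:.1f} A")  # about 10.9 A -- most of a standard 15 A branch circuit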

Alternating And Direct
If current flows in one direction only along a wire, it is called direct current or DC. If the current flows in two directions, such as back and forth along a wire, it is called alternating current or AC.

It’s possible to convert AC to DC, and DC to AC, and in fact, this is necessary to get a sound system component to work.

The utility company generates AC by using some other form of energy in the environment, such as flowing water turning a turbine (hydraulic power), the wind turning a propeller (pneumatic power), the burning of coal, or even nuclear reactions. The discovery and utilization of new power sources is of prime importance to modern humans. Our continued existence depends on it.

The electrical power generation process was invented and refined with a great number of influences, including the physical limitations of electrical components and wire, and even the political and economic forces that existed at the time.

Because it’s highly impractical to change a system once it’s in place, many possible approaches were considered. I doubt that Edison or Westinghouse (or perhaps even Nikola Tesla, the genius inventor of AC power and numerous other milestone technologies) could have envisioned the far-reaching implications of their choices regarding electrical power distribution.

Thomas Edison and Nikola Tesla, pioneers of electrical distribution.

After much experimentation, debate, and lobbying, it was decided that the method of electrical power distribution in the U.S. would be AC, primarily due to its inherent advantages with regard to generation and transportation over long distances. (The true turning point of the debate proved to be the first successful use of widespread AC distribution at the 1893 World’s Fair in Chicago.)

AC has at least three advantages over DC in a power distribution grid:

1) Large electrical generators generate AC naturally. Conversion to DC involves an extra step.

2) Transformers must have alternating current to operate, and we will see that the power distribution grid depends on transformers.

3) It’s easy to convert AC to DC, but expensive to convert DC to AC.

The sine wave is the natural waveform produced by things that rotate, so it is the logical waveform for alternating current. The frequency of the waveform describes how often it changes direction. The optimum frequency for power distribution in the U.S. was determined to be 60 Hz (much of the rest of the world settled on 50 Hz).

Any devices that required DC could simply convert the 60 Hz AC provided by the utility company, and this is exactly what sound system components do, even today.

Source Be With You
The core electrical power sources that are created to supply electricity to consumers are truly massive.

For example, Hoover Dam has a power generation rating of about 3 million horsepower (imagine feeding them!), and an electrical generation capacity of about 2000 megawatts.

This AC electrical power must be transferred from the dam to the consumer. The problem is that most people live at remote distances from the really big power sources. A power transmission system must be used to get electricity from point “A” to point “B” with a tolerable amount of loss.

There are electrical advantages to doing this at a high voltage, since it minimizes the losses in the electrical conductor used. Voltage and current can be “traded off” in a power distribution system. This is the job of the transformer.

Remember that W = IE, so we can make an equivalent power source with a lot of E and a little I, or a lot of I and a little E. Transformers are the devices used to make the trade-off.

With the power remaining the same (at least in theory) a step-up transformer produces a larger voltage at a smaller current, and a step-down transformer does the opposite. Both types are found in the power distribution grid (and also in the signal chain of a sound system!).
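A small sketch shows why the grid trades current for voltage; the delivered power and line resistance below are made-up numbers, but the I-squared-R arithmetic is the real reason for stepping up.

# Same delivered power over the same line, at two different voltages.
# Conductor loss is I^2 * R, so cutting the current pays off dramatically.
P = 10_000_000.0                  # 10 MW delivered to the load
R = 5.0                           # ohms of line resistance (illustrative)

for E in (14_000.0, 345_000.0):   # local distribution vs. long-haul transmission
    I = P / E                     # from W = IE
    loss_kw = (I ** 2) * R / 1000.0
    print(f"{E/1000:>5.0f} kV: {I:7.1f} A, about {loss_kw:8.1f} kW lost in the line")

At 14 kV the line carries about 714 amperes and burns roughly 2,500 kW as heat; at 345 kV the same power rides on about 29 amperes and the loss drops to a few kilowatts.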

Typical voltages for long-distance transmission are in the range of 155,000 volts to 765,000 volts in order to reduce line losses. At a local substation, this is transformed down to 7,000 to 14,000 volts for distribution around a city.

There are normally three legs or “phases.” Three-phase power is the bedrock of the power grid. A large building might be served by all three phases, while a house would normally be served by only a single phase. Before entering a building, this distribution voltage is converted to 220 to 240 volts using a step-down transformer (there is a tolerance - your mileage may vary).

For our discussion here, we will use 220-volt/110-volt nominal values. The place where the power comes into the building is called the service entrance. This is of prime importance, because, in effect, it’s the “power source” of interest with regard to electrical components in the building. Any discussion of power distribution within a building is centered upon the service entrance and how its electricity is made available throughout the structure.

A food chain is now apparent. Electricity is harnessed from the environment, converted into a standardized form, distributed to various locations, and converted again into the form expected by electrical devices and products.

Most of the local wiring from the power substation is run above ground on utility poles, making it a likely target for lightning strikes, falling limbs, high winds and ice. Many locales choose to bury all or part of the wiring to reduce the risk of interruption and to remove the eyesore of wires strung from poles.

On To The Outlet
In most parts of the world, 220 volts (or close to it) is delivered to the electrical outlets for use by appliances.

In the U.S., the step-down transformer on the utility pole is center-tapped to provide two 110-volt legs. This is called “split phase,” and it requires a third wire (the neutral) from the transformer’s center tap into the dwelling, along with a neutral conductor from each outlet back to the service entrance of the building.

On the other hand, 220-volt circuits and appliances do not require a neutral, although it is often included if some sub-component of the appliance uses 110 volts.

In the U.S., most household appliances are designed for 110-volt power sources. Using a lower distribution voltage has some pros and cons. The advantage is that a lower voltage poses a lower risk of electrocution, while still providing sufficient power at an outlet for most appliances. The downside is that the lower the distribution voltage, the larger the wire diameter that must be used to minimize wire losses.

So a 110-volt circuit requires more copper (lower resistance) to serve its outlets with the same line loss as a 220-volt circuit. Most households have several appliances that are designed to take advantage of the full 220 volts delivered by the utility company. These appliances have high current demands, and the higher distribution voltage allows them to be served with a smaller wire gauge.

It’s unusual (but not unheard of) for sound reinforcement products to require a 220-volt outlet, at least in the U.S.

Many motorized appliances utilize AC in the form it’s delivered by the utility company. But other products (including sound reinforcement components) internally convert the delivered 110 volts AC to DC.

This is the very first step taken inside a product when power is delivered to it from an electrical outlet. The power supply provides the DC “rails” necessary to power the internal circuitry. These rail voltages can range from a few volts to hundreds of volts, depending upon the purpose of the product. Some power supplies are external to the product.

“Line lumps” and “wall warts” are commonplace in sound systems. These offer the advantages of mass production and more efficient certification, as well as keeping AC (and its potential audible side effects) out of a product. But external supplies can be inconvenient and proprietary, so consumers are split in their acceptance.

Battery-powered devices bypass the whole system and place a DC power source created by chemical reactions right where it’s needed - inside of the product. The disadvantage is that there is no way to replenish this source other than replacing it or running a wire to an external power source.

Why This Matters
Audio people need not be electricians, but we must know how power distribution works. The AC power distribution scheme that is thoroughly entrenched in the U.S. infrastructure can be intermittent, noisy and even lethal.

The common practice of plugging different parts of a sound system into different electrical outlets can have very negative audible side effects, such as “hum” and “buzz” over a loudspeaker.

Far more serious, an improperly grounded system can prove deadly. Perhaps you’ve heard tales of unsuspecting musicians who lay their lips on microphones while touching their guitar strings. This is no urban myth - it can happen without proper power practice.

Pat Brown teaches SynAudCon seminars and workshops worldwide, and also online. Synergetic Audio Concepts has been the leading independent provider of audio education since 1973, with more than 15,000 “graduates” worldwide. For more information go to www.synaudcon.com.

Posted by Keith Clark on 06/04 at 04:37 PM

Church Sound: The Recording And Mixdown Of Energetic Gospel Music

Breaking it down to a logical process

One of the most memorable events in The Blues Brothers is the scene where a church congregation dances to the gospel band. The dancers do insanely high flips and cartwheels to this exuberant, joyful music.

I was honored to make a studio recording of similar music played by a top local gospel band, the Mighty Messengers. Here’s a look at my recording process, along with several audio examples in mp3 files.

The Recording
On the day of the session, we set up a drum kit in the middle of the studio. Surrounding the drummer were two electric guitarists, a bass player, a keyboardist and a singer who sang scratch vocals.

Because the bass, guitars and keys were recorded direct, there was no leakage from the drum kit, so we got a nice, tight drum sound. I mic’d each part of the kit. Kick was damped with a blanket and mic’d inside near the hard beater.

We recorded the guitars off their effects boxes, so we captured the effects that the musicians were playing through.

I set up a cue mix so the band members could hear each other over headphones. We recorded on an Alesis HD24XL 24-track hard-drive recorder, which is very reliable and sounds great.

Most of the songs required only one or two takes—a testament to the professionalism of this well-rehearsed band.

A few days later, after mixing the instrument tracks, we overdubbed three background harmony singers (each on a separate mic). Finally, we added the lead vocalist.

The Mixdown
I copied all the tracks to my computer for mixing with Cakewalk SONAR Producer, a DAW which I like for its smooth workflow, top-quality plugins and 64-bit processing.

Figure 1: Project Tracks

Figure 1 shows a typical multitrack screen of the project.

Here’s a short sample (Listen) of the mix of “My Heavenly Father” (written and copyrighted 2010 by Dr. William Jones).

Let’s break it down. Click on each linked file throughout to hear the soloed tracks without any effects, then with effects.

First, the bass track: Listen

Figure 2: Bass EQ.

To reduce muddiness and enhance definition in the bass track, I cut 5 dB at 250 Hz and boosted 6 dB at 800 Hz. Then the bass track sounded like this: Listen

Note that in Figure 2 this EQ was done with all the instruments playing in the mix. EQ can sound extreme when you hear a track soloed, but just right when you hear everything at once.
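For readers who want to experiment outside a DAW, here is a rough Python sketch of the two bass-track moves above (a 5 dB cut at 250 Hz and a 6 dB boost at 800 Hz) using standard “cookbook” peaking filters. The Q values are assumptions, and SONAR’s EQ will not match these curves exactly.

import numpy as np
from scipy.signal import lfilter

def peaking(fs, f0, gain_db, q=1.0):
    """Biquad coefficients for a peaking EQ (RBJ cookbook style)."""
    A = 10.0 ** (gain_db / 40.0)
    w0 = 2.0 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2.0 * q)
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    return b / a[0], a / a[0]

def eq_bass(track, fs=44100):
    """Cut 5 dB at 250 Hz, then boost 6 dB at 800 Hz, as described above."""
    for f0, gain in ((250.0, -5.0), (800.0, +6.0)):
        b, a = peaking(fs, f0, gain)
        track = lfilter(b, a, track)
    return track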

Next, the kick drum track: Listen

To get a sharp kick attack that punched through the mix, I applied -3 dB at 60 Hz, -6 dB at 400 Hz, and +4 dB at 3 kHz. Here’s the result: Listen

Thinning out the lows in the kick ensured that the kick did not compete with the bass guitar for sonic space.

Now the snare track: Listen

I boosted 10 dB at 10 kHz to enhance the hi-hat leakage into the snare mic. This is extreme, but it was necessary in this case. I also added reverb with a 27 msec predelay and 1.18 sec reverb time. Finally, I compressed the snare with a 7:1 ratio and 1 msec attack to keep the loudest hits under control. Here’s the result: Listen

The drummer did not play his toms on this song. But on other songs, to reduce leakage, I deleted everything in the tom tracks except the tom hits (see Figure 1). Cutting a few dB at 600 Hz helped to clarify the toms.

Next, here’s the cymbal track, unprocessed: Listen. Notice how loud the snare leakage is relative to the cymbal crash.

To reduce the snare leakage into the cymbal mics, I limited the cymbal track severely. The limiting reduced the snare level without much affecting the cymbal hits. This is very unusual processing but it worked. Note the cymbal-to-snare ratio with limiting applied: Listen

The guitars sounded great as they were… a little reverb was all that was needed. These two players should do a solo album! Check it out: Listen

The keyboards also needed only slight reverb: Listen

Moving on to the background vocals, we recorded three singers with three large-diaphragm condenser mics. Listen to the raw tracks with the vocals panned half-left, half-right and center

We applied 3:1 compression and added a low-frequency rolloff to compensate for the mics’ proximity effect. I “stacked” the vocal tracks, not by overdubbing more vocals, but by running the vocal tracks through a chorus plug-in.

This effect doubled the vocals, making them sound like a choir—especially with a little reverb added. Here are the background vocals, with compression, EQ, chorus and reverb: Listen
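As a rough idea of what that “stacking” chorus is doing under the hood, here is a minimal modulated-delay chorus in Python; the delay, depth, rate and mix values are guesses, not the plug-in’s actual settings.

import numpy as np

def chorus(x, sr, base_ms=20.0, depth_ms=5.0, rate_hz=0.8, mix=0.5):
    """Mix the dry vocal with a copy whose short delay wobbles slowly."""
    n = np.arange(len(x))
    delay = (base_ms + depth_ms * np.sin(2.0 * np.pi * rate_hz * n / sr)) * sr / 1000.0
    idx = n - delay                                   # fractional read position
    lo = np.clip(np.floor(idx).astype(int), 0, len(x) - 1)
    hi = np.clip(lo + 1, 0, len(x) - 1)
    frac = idx - np.floor(idx)
    wet = (1.0 - frac) * x[lo] + frac * x[hi]         # linear interpolation
    return (1.0 - mix) * x + mix * wet

The slowly drifting delay smears the timing and pitch of the copy just enough to read as additional voices, which is why a little reverb on top makes three singers sound like a small choir.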

Finally, here’s the unprocessed lead-vocal track. Listen to how the word “never” is very loud because it is not compressed.

To tame the loud notes I added 4:1 compression with a 40 msec attack. Figure 3 shows the settings.

Figure 3: Vocal Compressor

The track also needed a de-esser to reduce excessive “s” and “sh” sounds.

To create a de-esser, I used a multi-band compressor plug-in, which was set to limit the 4 kHz-to-20 kHz band with a 2 msec attack time. This knocks down the sibilants only when they occur.

De-essing does not dull the sound as a high-frequency rolloff would do. Listen to hear the lead vocal with compression, de-essing and reverb. The word “never” is not too loud now, thanks to the compressor.
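Here is a simplified Python sketch of that style of de-essing: split the vocal around 4 kHz, limit only the high band, then sum the bands back together. The first-order crossover, threshold and release are assumptions; the actual multi-band plug-in is more sophisticated.

import numpy as np
from scipy.signal import butter, lfilter

def deess(x, sr, split_hz=4000.0, threshold_db=-30.0, attack_ms=2.0, release_ms=50.0):
    # First-order complementary split, so lows + highs sum back to the original.
    b_hp, a_hp = butter(1, split_hz / (sr / 2.0), btype="high")
    b_lp, a_lp = butter(1, split_hz / (sr / 2.0), btype="low")
    highs = lfilter(b_hp, a_hp, x)
    lows = lfilter(b_lp, a_lp, x)

    # Fast envelope follower on the high band; clamp anything over threshold.
    a_att = np.exp(-1.0 / (sr * attack_ms / 1000.0))
    a_rel = np.exp(-1.0 / (sr * release_ms / 1000.0))
    thr = 10.0 ** (threshold_db / 20.0)
    env, out = 0.0, np.empty_like(highs)
    for n, s in enumerate(np.abs(highs)):
        coeff = a_att if s > env else a_rel
        env = coeff * env + (1.0 - coeff) * s
        gain = min(1.0, thr / env) if env > thr else 1.0
        out[n] = highs[n] * gain
    return lows + out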

The Completed Mix
We’re done. Listen to the entire mix without any processing.

And listen to the same mix again with all of the processing as described.

As we said earlier, we recorded the electric guitars playing through their effects stomp boxes, so I didn’t need to add any effects to them. Those players knew exactly what was needed.

For example, here’s another song mix that showcases the slow flanging on the right-panned guitar: Listen. By the way, this song is in 5/4 and 7/4 time.

Of course, every recording requires its own special mix, so the mix settings given here will not necessarily apply to your recordings. But I hope you enjoyed hearing how a recording of this genre might be recorded and mixed.

Bruce Bartlett is a microphone engineer (http://www.bartlettmics.com), recording engineer, live sound engineer, and audio journalist. His latest books are Practical Recording Techniques 6th Edition and Recording Music On Location.

Posted by Keith Clark on 06/04 at 12:43 PM

In The Studio: A Trip Through Bruce Swedien’s Mic Closet

Some of the mics deployed by a recording legend on all-time classic tracks
This article is provided by Bobby Owsinski.

 

Bruce Swedien is truly the godfather of recording engineers, having recorded and mixed hits for everyone from Count Basie, Duke Ellington and Dizzy Gillespie to Barbra Streisand, Donna Summer and Michael Jackson.

He’s a mentor of mentors, with so many of his teachings now handed down to a generation just learning the craft (his interview in The Mixing Engineer’s Handbook is a standout).

Bruce is also a collector of microphones and will not use one he doesn’t personally own, so he knows the exact condition of each.

Recently he posted a bit about the mics he uses on his Facebook page, and I thought it worth a reprint.

A few things stick out to me:

1. His use of a Sennheiser 421 on kick. I know it’s a studio standard for some reason (especially on toms), but I never could get it to work on anything in a way that I liked.

2. His use of the relatively new Neumann M149, because it’s, eh… new.

3. His synthesizer advice (under the M49) is a real gem.

Here’s Bruce.
——————————————————————

“Constantly being asked about my mics, so here goes:

“My microphone collection, spanning many of the best-known models in studio history, are my pride and joy. My microphones are prized possessions. To me, they are irreplaceable. Having my own mics that no-one else handles or uses assures a consistency in the sonics of my work that would otherwise be impossible.”

AKG C414 EB
“My first application would be for first and second violins. It’s really great mic for the classical approach for a string section.”

Hear it on… the first and second violins in Michael Jackson’s ‘Billie Jean.’

Altec 21B
“This is a fantastic mic, and I have four of them. It’s an omni condenser, and [for jazz recording] what you do is wrap the base of the mic connector in foam and put it in the bridge of the bass so that it sticks up and sits right under the fingerboard. It wouldn’t be my choice for orchestral sessions, though.”

Hear it on… Numerous recordings for Oscar Peterson between 1959 and 1965.

RCA 44BX & 77BX; AEA R44C
“[The 44BX] is a large, heavy mellow-sounding old mic with a great deal of proximity effect. This is very useful in reinforcing the low register of a vocalist’s range if that is desired. If I am asked to do a big band recording of mainly soft, lush songs, I almost always opt for ribbon mics for the brass. I suggest AEA R44C or RCA44BX on trumpets, and RCA 77DX on trombones. Ribbon mics are great for percussion too.”

Hear it on… trumpets and flugelhorns in Michael Jackson’s ‘Rock With You’ (at 0:54); percussion in Michael Jackson’s ‘Don’t Stop Till You Get Enough.’

Sennheiser MD421
“The kick is about the only place I use that mic, and I mike very closely. I frequently remove the bass drum’s front head, and the microphone is placed inside along with some padding to minimise resonances, vibrations, and rattles.”

Hear it on… the kick drum in Michael Jackson’s ‘Billie Jean.’

Shure SM57
“For the snare I love the Shure SM57. In the old days it wasn’t as consistent in manufacture as it is now. I must have eight of those mics, and they’re all just a teeny bit different, so I have one marked ‘snare drum’.

“But the ones I’ve bought recently are all almost identical. On the snare drum, I usually go for a single microphone. I’ve tried miking both top and bottom of the snare, but this can cause phasing problems.”

Hear it on… snare drum in Michael Jackson’s ‘Billie Jean.’

Telefunken 251
“These mics have a beautiful mellow quality, but possess an amazing degree of clarity in vocal recording. The 251 is not overly sibilant and is often my number one choice for solo vocals.”

Hear it on… Patti Austin in ‘Baby, Come To Me,’ her duet with James Ingram.

Telefunken U47
“I still have one of the two U47s that I bought new in 1953, and will still frequently be first choice on lead vocal. This is a mic that can be used on a ballad or on a very aggressive rock track. It has a slight peak in its frequency response at around 7 kHz, which gives it a feeling of natural presence. It also has a slight peak in the low end around 100 Hz. This gives it a warm, rich sound.

“For Joe Williams, another mic would never have worked as well. I figured out that it was the mic for him when I heard him speak. After you’ve been doing this for as long as I have, you begin to have instinctive sonic reactions, and it saves a lot of time!”

Hear it on… Joe Williams in the Count Basie Band’s ‘Just The Blues.’

Neumann M149
“I have a pair of these that Neumann made just for me, with consecutive serial numbers, and they sound so great. That’s what I use now in XY stereo on piano.”

Neumann M49
“This is very close sonically to the M149, but not quite the same. It’s a three-pattern mic and the first that Neumann came up with which had the pattern control on the power supply… you could have the mic in the air and still adjust the pattern. I use these for choir recording in a Blumlein pair, which is one of my favourite techniques because it’s very natural in a good room.

“When I was recording with Michael and Quincy I was given carte blanche to make the greatest soundfields I could, so what I also did was pick a really good room and record the synths through amps and speakers with a Blumlein pair to get the early reflections as part of the sonic field. The direct sound output of a synthesizer is very uninteresting, but this can make the sonic image fascinating.

“You have to be really careful, though, to open up the pre-delay of any reverb wide enough to not cover those early reflections. They mostly occur below 120ms, so with 120ms pre-delay those sounds remain intact and very lovely.”

Hear it on… Andre Crouch choir in Michael Jackson’s ‘Man In The Mirror,’ ‘Keep The Faith.’

Neumann U67
“The predecessor to the U87, and an excellent microphone, but it’s not one of my real favourites, as a purely instinctive reaction. It’s just a little bit too technical perhaps, and it doesn’t have sufficient sonic character for me to use it on a lead vocal, for instance. It’s a good choice of microphone for violas and cellos, however, and the U87 can also work well in this application.”

Hear it on… violas and cellos in Michael Jackson’s ‘Billie Jean.’

I’ll post a bit of Bruce’s interview from The Mixing Engineer’s Handbook in a future post.

Bobby Owsinski is an author, producer, music industry veteran and technical consultant who has written numerous books covering all aspects of audio recording. For more information be sure to check out his website and blog.

Posted by Keith Clark on 06/04 at 09:54 AM

Monday, June 03, 2013

Church Sound: Maximize Your Mix With A Step-By-Step Guide Through A Console

An operator must have a firm understanding of the concepts behind the buttons.

Good church sound often crescendos or crashes at the mixing board. A new whiz-bang mixing console will not improve the quality of sound one bit if your sound system is flawed in design, doesn’t have enough amplification, delivers uneven coverage, or has poor system processing.

But even if all of that is in sync, the board can still fail to orchestrate good sound if a sound engineer isn’t operating it properly.

A Complex Board
I’ve trained sound technicians in church ministries on systems ranging from a 12-channel mixer to a 56-channel mixer, and on systems that have a single loudspeaker to systems with multiple clusters cross-matrixed to deliver left-center-right information into a room.

All of this taught me that the mixing board is one of the most complex components of a sound system. To obtain good sound, an engineer must have a good understanding of not only what all the buttons do on a soundboard, but also the concepts behind pushing those buttons.

Employing some simple principles can go a long way to helping raise the performance level of your church console.

Step 1: Go For The Gain
One of the most important components of a mixing board is gain structure. If the gain structure is off, you will have distortion or noise (hiss) in your system.

A graphic showing various functions discussed in this article.

If gain structure isn’t right, the board also will be awkward and unpredictable in its response. For example, sliding the lever from the bottom to part of the way up could result in parts of your loudspeaker system flying by your head. Well, it might not be quite that bad, but the effect could clean out your ears and the ears of everyone around you.

You may also find yourself riding the fader on the board all the way to the top without getting the right level of sound. In the process, you’ll generate all kinds of hiss.

On most mixing boards, the first knob on an input channel is the gain (or trim) knob. In layperson’s terms, this is the master volume-control button. If we liken the mixing board to a plumbing system, this knob is the master valve. If the valve is barely open, the water pressure (or sound) is low; if open too far, the pressure will be so high that it will produce incredible distortion.

Mackie has a very good paper outlining the way to set up channel gain. I’ll borrow the salient points here:

—Turn the input trim control of the desired channel all the way down.

—Connect the mic or line input to that channel (turn down or mute all other channels). Press the channel’s solo button, making sure it’s set to PFL mode if you have a PFL/AFL option. As a musician begins to sing or play, turn up the channel’s input trim. You should see the input level on the mixer’s meters. Adjust the input trim until the meter level is around zero (0) dB.

—Adjust the channel’s volume the way you want it with either fader or gain control. Don’t forget to turn up the master volume or you may not be able to hear the sermon.

Step 2: “Aux” The Signal
Once you’ve set up the channel gain on your mixer, you can proceed to the auxiliary sends, sometimes referred to as “mon sends,” or the monitors. Each of these knobs (two to eight) operates as a kind of valve that allows you to send sound to another output.

One aux send could flow sound to monitors or loudspeakers on stage so that musicians and other people on the platform can hear each other. Another could flow sound to a digital or analog recorder for making recordings, another to the nursery loudspeakers, and still another to an overflow seating area.

The bottom line is that aux sends offer the sound engineer a way to send sound to various places without affecting the main speaker system.

What’s important to understand about an aux send is whether it is pre-fade or post-fade. If it is pre-fade, or pre-fader, then sound will be sent at a certain level regardless of the position of the fader at the bottom of the channel.

The gain (or trim) is the only valve that will affect the amount of sound that goes through the aux send. If the aux send is post-fade, or post-fader, sound will be sent in proportion to the fader at the bottom of the channel and in proportion to the gain. So if the gain is set properly but the channel fader is down, no sound will be sent through the aux send.

I like to run my stage monitors pre-fade. This allows me to make changes to the house sound mix without affecting the monitors.

Conversely, I like to run “recording send,” “effects send,” and “sends” post-fade to other sound systems. This allows those levels to follow what I am doing on the channel faders.
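A toy model in Python may help make the plumbing concrete. The gain numbers are arbitrary, and real consoles work in dB rather than simple multipliers.

def aux_send_level(source, trim, fader, aux_knob, pre_fade=True):
    """Signal level arriving at an aux bus from one channel (linear gains)."""
    channel = source * trim                      # gain/trim is always in the path
    if pre_fade:
        return channel * aux_knob                # ignores the channel fader
    return channel * fader * aux_knob            # follows the channel fader

# With the fader all the way down, a pre-fade monitor send still gets signal:
print(aux_send_level(1.0, trim=0.5, fader=0.0, aux_knob=0.8, pre_fade=True))   # 0.4
print(aux_send_level(1.0, trim=0.5, fader=0.0, aux_knob=0.8, pre_fade=False))  # 0.0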

Step 3: Equalize The Mix
We could have many good discussions (OK, disagreements) about this point. But equalization offers sound mixers the opportunity to be creative, smart, and innovative (or on the other hand, inept). To begin, I recommend that sound engineers start with the equalizer section set flat. That means all level controls should be set at zero, or straight up.

Next, we need to understand how we hear sound. We hear sound from a low of about 20 Hz (Hz = hertz or cycles per second), which is a very low frequency. A kick drum is usually tuned between 80 Hz and 100 Hz.

At the other end, we hear sound up to about 20,000 Hz or 20 kHz (k = 1,000), which is a very high frequency. A dog whistle at around 22 kHz is out of the range of the human ear.

The equalizer on a mixing console allows you to select a frequency or frequency range and to increase or decrease the level of that range. For example, if I am hearing ringing or feedback, I try to equate it to a number. If the ringing I hear is about the pitch of an “A” on the music scale, it equates to about 440 Hz (a piano is tuned to A440).

I would then either turn the midrange section down on the mixing board, or I would select 440 Hz on the frequency selection knob, then turn down the level control for that frequency. The key to successfully using the equalization section is learning to translate what you hear into numbers representing Hz (cycles per second).
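If it helps to translate what you hear into numbers, here is a small equal-temperament helper in Python based on A4 = 440 Hz; which note you name by ear is, of course, an estimate.

NOTE_OFFSETS = {"C": -9, "C#": -8, "D": -7, "D#": -6, "E": -5, "F": -4,
                "F#": -3, "G": -2, "G#": -1, "A": 0, "A#": 1, "B": 2}

def note_to_hz(name, octave=4):
    """Frequency of a note in equal temperament, referenced to A4 = 440 Hz."""
    semitones = NOTE_OFFSETS[name] + 12 * (octave - 4)
    return 440.0 * 2.0 ** (semitones / 12.0)

print(round(note_to_hz("A", 4)))   # 440 -- the ring in the example above
print(round(note_to_hz("E", 5)))   # 659 -- a ring that sounds a fifth higher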

Step 4: Route The Signal
The bottom of the channel strip offers another option for routing signals. On most mixing boards, you can choose left or right signals as well as subgroups. By selecting the right buttons, you can assign sound to travel directly to the main output of the mixer or through a subgroup.

Subgroups are good for controlling the volume of multiple inputs. For example, you could assign the worship leader’s microphone to the main mix and the background vocalists’ mics to a subgroup. This will allow you to bring the total level of all the background mics up or down with one fader.

The mute button is the channel’s on/off button. Caution: if you turn it off, it might also turn off the auxiliary sends. Check your console’s operating manual to see whether this is how your mute button works.

Regardless, the mute button will affect a channel in the subgroup as well as the main mix. When the pre-fade listen (PFL) button is pressed down on most soundboards, a channel can be assigned to headphones regardless of the channel fader position.

This is very handy for cueing up recordings (from hard disc, CD, tape or whatever) to be played through the system. You can listen via headphones to a recording without letting the signal go to the main mix.

The master section of a Mackie analog console.

However, proceed with caution. If you have any prefade aux sends turned on, the sound on the recording will be sent there. There’s nothing worse than checking to see if a recording is cued properly during prayer in a worship service and forgetting to turn off the prefade aux send monitors. Been there, done that, will never do that again.

The channel fader is the master volume control for that input. Most mixing takes place in adjusting the volume of the signal that goes through the mix board.

Step 5: Master The Mix
The master section of the mixing board consists of the subgroup control, mains, aux masters, and headphone level. This section is where everything comes together before it is sent out of the mixing board.

If proper sound checks have been done and the board has been set up correctly, you can spend most of your time mixing by adjusting levels, and can adjust subgroups to bring the mix together. You can also make minor individual channel-level changes, minor equalizer changes on individual channels, and small adjustments on the aux sends.

Once the mix is set up, you’re free to camp out at the master section to manage the mix.

Practice to Perform
To obtain good sound, a sound operator/engineer must have a good understanding of not only what all the buttons do on a console, but also the concepts behind pushing those buttons.

The most important thing that an operator can do to learn how to mix sound is to spend time experimenting with various buttons on the board. The time to do this is not during a service, however, nor during rehearsal for a worship service or performance. Rehearsal time should be spent with the musicians, adjusting sound levels as they rehearse and setting the mix for when they perform.

In addition, anyone mixing should read through the console manual to become familiar with its features and how to use them. Also, read articles on mixing sound and try to attend training sessions or workshops on sound.

The better we understand every part of a mixing console, the better our mixes sound.

Gary Zandstra is a professional AV systems integrator with Parkway Electric and has been involved with sound at his church for more than 25 years.

Posted by Keith Clark on 06/03 at 05:50 PM

Making All The Right Noises: Graham Burton On The Evolving Role Of System Tech

“I realized that it was fast becoming the most important job in live sound.”

 
Not many 16-year-olds are lucky enough to land their first job as a system tech on a Bon Jovi tour; however, Graham Burton was no ordinary teenager.

Technically, in fact, his pro audio career actually began some four years earlier, when he landed a work experience role with local UK firm Richard Barker PA Services.

That was 1989, and by the turn of the century, the British-born Burton had toured internationally as a monitor engineer, front of house engineer and tour manager with the likes of Eric Clapton, The Stylistics, and Billy Ocean.

But by 2005, he found himself drawn back to his system tech roots, a role he currently holds with South England-based rental house BCS Audio. Why?

“Simple,” he notes. “I realized that it was fast becoming the most important job in live sound.”

I recently caught up with him to find out more.

Paul Watson: So, system teching. Haven’t you ‘been there, done that’?

Graham Burton: [Laughs] First, it’s a very different beast now compared to back then. When I was out with Bon Jovi, I was somewhat down the food chain – you could call me a ‘mic tech,’ I suppose, though I learned a lot about the role of a main system tech too. I was putting mics on stands and cabling them up; it wasn’t until I started working for (hire companies) Soundcheck and Eurohire that I started working with speakers, really.

And that was shortly after the Bon Jovi stint, right?

Yeah, that’s right. Those two companies got me right into live PA systems as well as disco systems; it could be anything from a pair of Bose 802s on sticks to a truck full of Renkus-Heinz kit that I’d be dealing with.

What you have to remember is that we didn’t have line arrays back then, so system teching was often a case of deploying lots of big boxes, trying to point them in the right direction, and hoping for the best, really.

As simple as that, then. Or not, so to speak…

Indeed! You’d frequently end up putting a lot more PA into a venue than was necessary, just to get coverage, because the horizontal dispersion wasn’t as wide as it is now with line arrays. Then you’d end up with cone filtering and all kinds of things happening within the actual speaker boxes, which could make things tricky.

We would lug around our analog graphics (EQs), giant racks of comps and gates, and huge 48-channel analog consoles with at least 16 aux outs on them, like a Midas (Heritage) 2000, for example, and make our tweaks. It was cumbersome, and it could get pretty stressful.

We’re talking mid-1990s here, right?

Yes, this was also the time I started doing monitors for The Stylistics. Monitors, in my view, is the best place to start, because you learn your frequencies a lot quicker and the band is telling you what they want, whereas when you’re at FOH you’re really trying to interpret what the audience wants.

We had a 13-way mix on that stage, then 20 wedges with some single mixes, some paired. It was a big band, so I had to deal with all sorts of drums, percussion, horns, and keys – there was quite a lot going on there, and a lot of musicians to try and keep your eyes on.

Over the next decade, you built a solid reputation as a FOH guy and worked on some big tours, as well as many of the major UK festivals. What made you ditch the white gloves and relative glamour, dare I ask, to come back to the tricky world of system tech?

It was interesting. I was 28, so that’s eight years ago, and I’d broken my leg while riding my motorbike, which had put me out of work. This just so happened to coincide with the time when things were really changing in PA technology, and I was taking a very keen interest in loudspeakers.

When I was in plaster, all the bands I’d been working with had to hire other people because I couldn’t do the job, and by the time I got the cast off, it was festival season and they understandably didn’t want to change their engineers.

Then as luck would have it, I got a call from BCS Audio asking whether I was free to work a show. Lo and behold, it was system teching! This is when I had my ‘lightbulb moment,’ where I thought “hang on, there’s a hell of a lot more to this role now, and I can probably make more money doing it.” And sure enough, I got the bug again for setting up systems, and found that I got so much more out of it than engineering.

Because it was becoming a true art form?

Precisely, and the role continues to evolve. The really fascinating thing for me was that instead of thinking two-dimensionally, as you would do with the old boxes, I found myself thinking horizontally and vertically as well as left and right, working out how I’d get my PA coverage onto the audience without taking their attention away from where the performance is happening.

Will you elaborate?

Well, in venues with balconies, for example, we now fly the PA lower than we would have done in the old days, but we angle it up into the balconies so it still brings peoples’ attention down to the stage. It’s a lot more to think about.

Before, you’d have a groundstacked PA and a flown PA, and the flown PA would be in the eyeline of the audience in the balcony, so their attention was being drawn 10, maybe 20 feet above where the performance was happening.

So it’s a psychological thing as well, then?

It really is. You have to try and get across to the audience that the performance is happening there, and now we can develop that with systems; you can bring the audience into the show more, basically.

What are the other key differences compared to working the role a decade ago?

The gear is all a lot lighter now. And instead of time-aligning stuff with an old digital delay unit in the rack, which wasn’t all that accurate anyway, most of the loudspeaker systems now come with their own amplifier, which has its own delay circuit inside, so you can get everything absolutely spot on.

What’s your preferred kit when it comes to figuring delay times?

I use (Rational Acoustics) Smaart, but unlike many techs, I also use a laser measure. I then figure out how close the laser measurement is to the Smaart reading, which allows me to put that little bit of human element into it. I don’t like anything being absolutely perfect as it just feels clinical; my methods are more organic. Yes, it’s all going digital, but I prefer to have some ‘imperfections’ in there rather than it all being bang on the numbers all the time.
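As a rough illustration of the arithmetic involved – a generic sketch, not Burton’s actual workflow – a laser distance can be converted to a delay time at an assumed speed of sound and compared against the analyzer’s reading. The speed-of-sound constant and the example figures below are assumptions:

```python
# A generic sketch (not Burton's actual workflow): convert a laser distance
# measurement to a delay time and compare it with the analyzer's reading.
# The speed-of-sound figure and the example numbers are assumptions.

SPEED_OF_SOUND_M_S = 343.0  # roughly the speed of sound at 20 degrees C


def distance_to_delay_ms(distance_m):
    """Time (ms) for sound to cover the laser-measured distance."""
    return distance_m / SPEED_OF_SOUND_M_S * 1000.0


def offset_from_analyzer(laser_distance_m, analyzer_delay_ms):
    """Difference between the analyzer's delay reading and the geometric figure."""
    return analyzer_delay_ms - distance_to_delay_ms(laser_distance_m)


# Example: a delay position measured at 34.3 m with the laser, while the
# analyzer reports 101.2 ms to the same spot -- about +1.2 ms of 'human element'.
print(f"{offset_from_analyzer(34.3, 101.2):+.1f} ms")
```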

Burton making adjustments on an L-Acoustics LA8 amplified controller.

So basically, use the tools as a guide but don’t take them as gospel?

That’s the way I approach it. Besides, it’s very negotiable whether you can actually hear the differences when you’re talking about a millisecond or two. The time-alignment of systems is such a minefield these days, especially when figuring out what bits of the system you need to delay back to what point. The biggest trend we’re seeing at the moment is bands wanting to delay everything back to the kick drum.

Because it’s typically the furthest point back on stage?

Exactly, but again, this is all negotiable. It’s the very clinical side of system teching, and I tend to steer away from that. Yes, in the old days, you’d need a whole rack of digital delay units to do the job, and technology makes it far easier to accomplish now. But generally, I just don’t go down that road unless I’m asked to by the engineer – it’s their show after all, and I’m there to work with them, not against them.
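For readers curious about what “delaying back to the kick drum” works out to in numbers, here is a minimal sketch of the idea – not any particular console’s or tech’s method – assuming measured stage depths for each source. The channel names and distances are invented:

```python
# A minimal sketch of the "delay everything back to the kick drum" idea:
# sources closer to the PA than the kick get a small delay so their arrivals
# line up with the kick's acoustic arrival. The channel names and stage
# depths below are invented for illustration.

SPEED_OF_SOUND_M_S = 343.0


def align_to_kick(source_depths_m, kick_depth_m):
    """Per-channel delay (ms) so each source 'sits' at the kick drum's depth.

    Depths are measured upstage from the PA plane; the kick is assumed to be
    the deepest source, so every other channel gets a non-negative delay.
    """
    return {
        name: max(0.0, (kick_depth_m - depth) / SPEED_OF_SOUND_M_S * 1000.0)
        for name, depth in source_depths_m.items()
    }


stage = {"vocal": 1.0, "guitar amp": 2.5, "bass amp": 3.0}  # metres upstage of PA
print(align_to_kick(stage, kick_depth_m=4.0))
# roughly: vocal ~8.7 ms, guitar amp ~4.4 ms, bass amp ~2.9 ms
```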

What does the future hold for system techs?

Well, while many FOH engineers know how to mix, some still know little about the physics and science behind the systems, so they have to have trust in people like us. What we can do now is light years ahead of where it used to be, but I’m always learning new tricks. I find myself asking other techs “why are you doing it like that?” because I want to understand.

I reckon there are seven ways of doing everything and you’ll never find them all yourself; you have to learn from other people. Let’s all share the knowledge and try to make sure every gig is the best it can ever be. That’s what I say.

Paul Watson is the editor for Europe for Live Sound International and ProSoundWeb.

Posted by Keith Clark on 06/03 at 04:23 PM

80 Years On & Counting: Progress In “Getting It Right” With Speech Reinforcement

Are we now, finally, getting onto the right track?

April 27, 2013 marked the 80th anniversary of a historic milestone in the history of audio.

On that date in 1933, the Philadelphia Orchestra, under deputy conductor Alexander Smallens, was picked up by three microphones at the Academy of Music in Philadelphia – left, center, and right of the orchestra stage – and the audio transmitted over wire lines to Constitution Hall in Washington, where it was replayed over three loudspeakers placed in similar positions to an audience of invited guests. Music director Leopold Stokowski manipulated the audio controls at the receiving end in Washington.

The historic event was reported and analyzed by audio pioneers Harvey Fletcher, J.C. Steinberg and W.B. Snow, E.C. Wente and A.L. Thuras, and others, in a collection of six papers published in January 1934 as the Symposium on Auditory Perspective in Electrical Engineering, the journal of the American Institute of Electrical Engineers (a predecessor of the IEEE). Paul Klipsch referred to the symposium as “one of the most important papers in the field of audio.”

Prior to 1933, Fletcher had been working on what has since been termed the “wall of sound.” “Theoretically, there should be an infinite number of such ideal sets of microphones and sound projectors [i.e., loudspeakers] and each one should be infinitesimally small,” he wrote. “Practically, however, when the audience is at a considerable distance from the orchestra, as usually is the case, only a few of these sets are needed to give good auditory perspective; that is, to give depth and a sense of extensiveness to the source of the music.”

In this regard, Floyd Toole’s conclusions – following a career spent researching loudspeakers and listening rooms – are especially noteworthy. In his 2008 magnum opus, Sound Reproduction: Loudspeakers and Rooms, Toole notes that the “feeling of space” – apparent source width plus listener envelopment – which turns up in the research as the largest single factor in listener perceptions of “naturalness” and “pleasantness,” two general measures of quality, is increased by the use of surround loudspeakers in typical listening rooms and home theaters.

Given that these smaller spaces cannot be compared in either size or purpose to concert halls where sound is originally produced, Toole states that in the 1933 experiment, “there was no need to capture ambient sounds, as the playback hall had its own reverberation.”

Fletcher’s dual curtains of microphones and loudspeakers.

Localization Errors
Recognizing that systems of as few as two and three channels were “far less ideal arrangements,” Steinberg and Snow observed that, nevertheless, “the 3-channel system was found to have an important advantage over the 2-channel system in that the shift of the virtual position for side observing positions was smaller.”

In other words, for listeners away from the sweet spot along the hall’s center axis, localization errors due to shifts in the phantom images between loudspeakers were smaller in the case of a left-center-right (LCR) system compared with a left-right system. Significantly, Fletcher did not include localization along with “depth and a sense of extensiveness” among the characteristics of “good auditory perspective.”

Regarding localization, Steinberg and Snow realized that “point-for-point correlation between pick-up stage and virtual stage positions is not obtained for 2- and 3-channel systems.”

Further, they concluded that the listener “is not particularly critical of the exact apparent positions of the sounds so long as he receives a spatial impression. Consequently 2-channel reproduction of orchestral music gives good satisfaction, and the difference between it and 3-channel reproduction for music probably is less than for speech reproduction or the reproduction of sounds from moving sources.”

The 1933 experiment was intended to investigate “new possibilities for the reproduction and transmission of music,” in Fletcher’s words.

Many, if not most, of the developments in multichannel sound have been motivated and financed by the film industry in the wake of Hollywood’s massive financial investment in the “talkies” that single-handedly sounded the death knell of Vaudeville and led to the conversion of a great many live performance theatres into cinemas.

Given that the growth of the audio industry stemmed from research and development into the reproduction and transmission of sound for the burgeoning telephone, film, radio, television, and recorded music industries, it is curious that the term “theatre” continued (and still continues to this day) to be applied to the buildings and facilities of both cinemas and theatres.

This reflects the confusion not only in their architecture, on which the noted theatre consultant Richard Pilbrow commented in his 2011 memoir A Theatre Project, but also in the development of their respective audio systems.

Theatre, Not Cinema
Sound reinforcement was an early offshoot, eagerly adopted by demagogues and traveling salesmen alike to bend crowds to their way of thinking. As Don Davis notes in the 2013 edition of Sound System Engineering, “Even today, the most difficult systems to design, build, and operate are those used in the reinforcement of live speech. Systems that are notoriously poor at speech reinforcement often pass reinforcing music with flying colors. Mega churches find that the music reproduction and reinforcement systems are often best separated into two systems.”

The difference lies partly in the relatively low channel count of audio reproduction systems, which makes localization of talkers next to impossible.

Since delayed loudspeakers were widely introduced into the live sound industry in the 1970s, they have been used almost exclusively to reinforce the main house sound system, not the performers themselves. This undoubtedly arose from the sheer magnitude of the sound pressure levels involved in the stadium rock concerts and outdoor festivals of the era.

However, in the case of, say, an opera singer, the depth, sense of extensiveness, and spatial impression that lent appeal to the reproduced sound of the symphony orchestra back in 1933 likely won’t prove satisfying without the ability to localize the sound image of the singer’s voice accurately. Perhaps this is one reason why “amplification” has become such a dirty word among opera aficionados.

In the 1980s, however, the English theatre sound designer Rick Clarke and others began to explore techniques of making sound appear to emanate from the lips of performers rather than from loudspeaker boxes. They were among a handful of pioneers who used the psychoacoustics of delay and the Haas effect “to pull the sound image into the heart of the action,” as sound designer David Collison recounted in his 2008 volume, The Sound Of Theatre.

Out Board Electronics, based in the UK, has since taken up the cause of speech sound reinforcement with a unique delay-based input-output matrix in its TiMax2 Soundhub. It enables each performer’s wireless microphone to be fed to dozens of loudspeakers arrayed throughout the house – if necessary – with unique levels and delays to each loudspeaker, such that more than 90 percent of the audience can localize the voice back to the performer via Haas effect-based perceptual precedence, no matter where they are seated. Out Board refers to this approach as source-oriented reinforcement (SOR).

The delay matrix approach to SOR originated in the former DDR (East Germany), where in the 1970s, Gerhard Steinke, Peter Fels and Wolfgang Ahnert introduced the concept of Delta-Stereophony in an attempt to increase loudness in large auditoriums without compromising directional cues emanating from the stage.

In the 1980s, Delta-Stereophony was licensed to AKG and embodied in the DSP 610 processor. Though it offered only 6 inputs and 10 outputs, it cost as much as a small house.

Out Board started working on the concept in the early 1990s and released TiMax (now known as TiMax Classic) around the middle of the decade, progressively developing and enlarging the system up to the 64 x 64 input-output matrix, with 4,096 cross points, that characterizes the current generation TiMax2.

The TiMax Tracker, a radar-based location system, locates performers to within 6 inches in any direction, so that the system can interpolate softly between pre-established location image definitions in the Soundhub for up to 24 performers simultaneously.

The audience is thereby enabled to localize performers’ voices accurately as they move around the stage, or up and down on risers, thus addressing the deficiency of conventional systems regarding the localization of both speech and moving sound sources.
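In broad strokes, one row of such a delay matrix – the feeds for a single tracked performer – can be thought of as a delay and level per loudspeaker derived from the performer-to-loudspeaker distance. The sketch below illustrates only that general principle; it is not Out Board’s TiMax algorithm, and the speaker coordinates, the 3 ms precedence offset, and the level law are assumptions:

```python
# A general sketch of the source-oriented idea, not Out Board's actual TiMax
# algorithm: each loudspeaker feed for a given performer position gets a delay
# roughly equal to the acoustic travel time from performer to loudspeaker,
# plus a small offset so the earlier (stage-side) arrival wins on precedence.
# The speaker coordinates, the 3 ms offset and the level law are assumptions.

import math

SPEED_OF_SOUND_M_S = 343.0
PRECEDENCE_OFFSET_MS = 3.0  # keeps the amplified arrival just behind the natural one


def matrix_row(performer_xy, speakers_xy):
    """Delay (ms) and a crude distance-based level trim for one performer."""
    row = []
    for speaker in speakers_xy:
        d = math.dist(performer_xy, speaker)  # metres
        delay_ms = d / SPEED_OF_SOUND_M_S * 1000.0 + PRECEDENCE_OFFSET_MS
        level_db = -20.0 * math.log10(max(d, 1.0))  # simple inverse-distance trim
        row.append({"delay_ms": round(delay_ms, 1), "level_db": round(level_db, 1)})
    return row


# Performer at centre stage, three proscenium loudspeakers (coordinates in metres)
print(matrix_row((0.0, 0.0), [(-6.0, 2.0), (0.0, 3.0), (6.0, 2.0)]))
```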

Source Oriented
Out Board director Dave Haydon put it this way: “The first thing to know about source-oriented reinforcement is that it’s not panning. Audio localization created using SOR makes the amplified sound actually appear to come from where the performers are on stage. With panning, the sound usually appears to come from the speakers, but biased to relate roughly to a performer’s position on stage.

Out Board’s Dave Haydon with a TiMax2 Soundhub mix matrix and playback server.

“Most of us are also aware that level panning only really works for people sitting near the center line of the audience. In general, anybody sitting much off this center line will mostly perceive the sound to come from whichever stereo speaker channel they’re nearest to.

“This happens because our ear-brain combo localizes to the sound we hear first, not necessarily the loudest. We are all programmed to do this as part of our primitive survival mechanisms, and we all do it within similar parameters. We will localize even to a 1 millisecond (ms) early arrival, all the way up to about 25 ms, then our brain stops integrating the two arrivals and separates them out into an echo. Between 1 ms and about 10 ms arrival time differences, there will be varying coloration caused by phasing artifacts.

“If we don’t control these different arrivals they will control us. All the various natural delay offsets between the loudspeakers, performers and the different seat positions cause widely different panoramic perceptions across the audience. You only have to move 13 inches to create a differential delay of 1 ms, causing significant image shift.

The TiMax Tracker, which locates performers to within six inches in any direction.

“Pan pots just controlling level can’t fix this for more than a few audience members near the center. You need to manage delays, and ideally control them differentially between every mic and every speaker, which requires a delay-matrix and a little cunning, coupled with a fairly simple understanding of the relevant physics and biology,” Haydon says.
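A quick sketch of the arithmetic behind Haydon’s figures: 13 inches (about 0.33 meters) of extra path works out to roughly 1 ms at the speed of sound, and a differential arrival can be sorted against the windows he describes. The window boundaries follow the quote; the function names and wording are illustrative only:

```python
# The arithmetic behind the figures quoted above: 13 inches (about 0.33 m) of
# extra path is roughly 1 ms at the speed of sound, and a differential arrival
# can be sorted against the windows Haydon describes. The window boundaries
# follow the quote; the function names and wording are illustrative only.

SPEED_OF_SOUND_M_S = 343.0


def path_difference_to_ms(extra_metres):
    """Arrival-time difference (ms) produced by an extra path length."""
    return extra_metres / SPEED_OF_SOUND_M_S * 1000.0


def classify_arrival(diff_ms):
    """Rough classification of a differential arrival, per the quoted windows."""
    if diff_ms < 1.0:
        return "below the ~1 ms threshold"
    if diff_ms <= 10.0:
        return "localizes to first arrival, with comb-filter coloration"
    if diff_ms <= 25.0:
        return "localizes to first arrival (precedence effect)"
    return "perceived as a discrete echo"


print(f"{path_difference_to_ms(0.33):.2f} ms")       # about 0.96 ms for 13 inches
print(classify_arrival(path_difference_to_ms(5.0)))  # ~14.6 ms -> precedence
```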

More theatres are adopting this approach, including New York’s City Center and the UK’s Royal Shakespeare Company. A number of Raymond Gubbay productions of opera-in-the-round at the notoriously difficult Royal Albert Hall in London – including Aida, Tosca, The King and I, La Bohème and Madam Butterfly – as well as Carmen at the O2 Arena, have benefited from source-oriented reinforcement, as have numerous other recent productions.

Veteran West End sound designer Gareth Fry employed the technique earlier this year at the Barbican Theatre for The Master and Margarita, to make it possible for all audience members to continuously localize to the actors’ voices as they moved around the Barbican’s very wide stage. He notes that, in the 3-hour show with a number of parallel story threads, this helped greatly with intelligibility to ensure the audience’s total immersion in the show’s complex plot lines.

As we mark the 80th anniversary of that historic first live stereo transmission, it’s worth noting that, in spite of the proliferation of surround formats for sound reproduction that has to date culminated in the 64-channel Dolby Atmos, we are only now getting onto the right track with regard to speech reinforcement.

It’s about time.

Sound designer Alan Hardiman is president of Associated Buzz Creative, providing design services and technology for exhibits, events, and installations. He included the TiMax Soundhub in his design for the 4-day immersive theatre production The Wharf at York, staged at Toronto’s Harbour Square Park.

Posted by Keith Clark on 06/03 at 01:41 PM