Wednesday, September 04, 2013
Milestones: Adamson Systems At 30
An informed passion for design and innovation
Brock Adamson first got into audio as a teenager while living near Toronto, but entirely by coincidence. “When I was about 16,” he explains, “I was exposed to some fairly high-powered amplifiers and loudspeakers – a stereo system for the family home that had a lot of horsepower that came without any enclosures for the woofers. I don’t know why, but I went after it and built the boxes.”
In fact, as the company bearing his name, Adamson Systems Engineering, marks its 30th year, it’s clear his success in the audio industry is very much a result of his habit of single-mindedly going after things – a habit that informed the early work he did after leaving Ontario and heading west in the late 1960s. “I ran into a bunch of musicians in Edmonton and it was all downhill from there,” he says dryly.
While there, he connected with people working in every facet of the music and audio industries, from engineers and instrument makers to innovators experimenting with early portable recording rigs.
“Many of them were eccentric to say the least, but they did fantastic work and I learned a tremendous amount,” he notes. “That was when my understanding of electronics began to develop.” He adds that his skills as an engineer and designer were enhanced by a number of mentors who helped him develop a deeper understanding of the relationship between the loudspeaker and the human ear, and by experimenting with Brüel & Kjaer’s reflection-free measurement system.
Residing in Vancouver by 1978, he got involved with Rocky Mountain Sound and began working closely with Jeff Berryman, a computer scientist and loudspeaker designer employed by a competitor.
“Jeff’s a great mathematician,” Adamson observes. “We were both working on some very interesting stuff at the time, meeting often, and I started developing high-power prototypes and building things for the company.” Although he became a partner in Rocky Mountain Sound, his passion for design and innovation eventually led him to branch out and form Adamson Systems Engineering in 1983.
His reputation in the local industry grew, ultimately landing him a gig as an event site audio consultant for the 1986 Vancouver Expo “Transportation and Communication: World in Motion – World in Touch” exhibit. While it represented the recognition of his capabilities as a sound designer and raised his profile in the audio industry, it was also a trial by fire. The initial systems installed at 11 of the venues were far less than ideal for the performance applications they were intended for.
“I went at it hammer and tong and re-did them all. I had to ‘MacGyver’ it in about 10 days,” he explains. “Up to then, I’d been involved in one-offs in performance venues and recording studios. This was my first taste of commercial applications. I put in 200 hours over a week and a half, but I learned a lot and it all came off very well, except for Margaret Thatcher’s opening address, as she was notoriously microphone shy.”
The MH225, which paired a Kevlar M200 compression driver with an acoustic waveguide.
Meanwhile, he’d also been prototyping diaphragms and drivers, as well as building loudspeakers, but found the region difficult in terms of gaining access to the necessary tools and materials.
“I was frustrated with the situation in Vancouver because I’d been developing Kevlar speaker cones and it seemed like everything I wanted – machinery and technology – was available in the Great Lakes Basin. So I shipped everything across Canada and put it into a warehouse in Pickering (Ontario).”
His decision to relocate was also influenced by the fact that much of his immediate family was based in Ontario, including his brother-in-law, then a distributor for Crown who’d acquired one of the first Crown analyzers, the TEF 10. “That was a hell of a powerful piece of equipment then – clunky by today’s standards, but it worked well.”
Movers & Shakers
In the late 1980s, Adamson received a call from noted acoustician and researcher Floyd E. Toole, whose research at the National Research Council of Canada’s loudspeaker test and measurement facility into the correlation between loudspeaker measurement and user preference helped spur the growth of the Canadian loudspeaker industry.
“I didn’t know Floyd,” Adamson says, “but he’d heard I had the TEF 10 and said, ‘We’re having a confab in Ottawa and we need you to show up.’” In addition to Toole, Richard Heyser, then of the Jet Propulsion Laboratory, Stanley Lipshitz, John Vanderkooy and Laurie Fincham also attended.
These fortuitous contacts had an important, long-term impact on his designs, and they also led to later encounters with Heyser and German physicist Manfred R. Schroeder. “So through owning that analyzer, I met people that really expanded my horizons,” he says. “I mean, Heyser and Lipshitz in the same room – they’ve got to find something to argue about. One would fill a blackboard with equations then the other would go, ‘No, no, no,’ swing another blackboard over and start working on his own blackboard. It was like watching a tennis game when you’ve never held a racket before in your life.”
Toole also put Adamson in touch with Dr. Earl Geddes, whose Waveguide Theory was integral to the development of Adamson Systems Engineering’s MH225 – the first loudspeaker that paired the company’s Kevlar M200 mid-range compression driver with an acoustic waveguide. “The fundamental principles underlying Geddes’ theory are extremely solid,” Adamson points out, “and it changed the way we went about designing waveguides, as well as fundamentally affecting the conceptualization of the sound chambers that we use in line arrays today.”
The patented Co-Linear Drive Module at the heart of Y-Axis line array modules.
The release of the MH225 loudspeaker put the company on the map globally in the late-1980s/early-1990s, a milestone in efforts to eliminate distortion while maximizing directivity and SPL. Those early products also established the value of Kevlar diaphragms and informed the development of later technologies.
“The patented phase plug of the early MH225 and all the principles – the concentric rings, the way they sum as well as all the boundary considerations for how the wave is supposed to pass through these devices – were carried through into the line array environment,” he notes.
As line array technology became the de facto choice of touring acts in the late ’90s, the company brought its patented Co-Linear Drive Module to bear on one of the first line arrays designed and manufactured in North America, the Y-Axis Series. It helped cement Adamson Systems Engineering as an innovator in the loudspeaker market and led to substantial international growth in the early 2000s.
“Many audio manufacturers in North America, including us, were focused on the conventional approach to loudspeaker arrays,” he states. “(L-Acoustics) V-DOSC was made in France where my biggest distributor, DV2, was located. I was aware of V-DOSC’s impact because it was beating down the door in competition for sales in France.” Initially, he adds, “I was fixated on building a horizontal array, but then I went, ‘you know what? I’m going to build a vertical array. Let’s get on with it.’ But I still couldn’t get my head around the mid-range and I’d be damned if I’d copy somebody else.”
Arrays of MH225s left, center and right at a concert in Laguna Beach, CA, in the early 90s.
As it turned out, Adamson got his head around the problem while he and another engineer were trading off at a Toronto club, taking turns mixing a Reggae band. “I was trying to explain to him what couldn’t be done and I realized right then how to do it.”
Immediately he drew the initial design out on a napkin, and 93 days later, the company shipped the first Y-Axis Series line array. “It was brilliant. Everyone pitched in. Our production team worked so late into the night we often crashed at the warehouse. One night I actually used a pizza box as a pillow and woke up with a slice of cheese pizza stuck to my face.”
The subsequent evaluation of Y-Axis by French distributor DV2, as well as by the engineering team at EML Productions, led that leading European hire company to invest heavily in the system. Consequently, Adamson technology was deployed at some of the most prestigious festivals worldwide and ultimately became a staple for touring artists such as Linkin Park, Rob Zombie, John Legend and Simple Minds.
The company continued to expand its offerings with the T21 Sub, which incorporated the SD21 Symmetrical Drive, a multilayer 21-inch Kevlar driver, as well as the 2-way Metrix and 3-way Spektrix line source array cabinets.
Like Y-Axis, these products featured the Adamson Integrated Rigging (AIR) System, and they opened up a huge portion of the market to the company – specifically performance applications and venues in which full-on concert arrays weren’t necessary.
Lots Of Space
That growth prompted a move in 2004 to Adamson Systems Engineering’s current headquarters, a 37,000 square-foot design and manufacturing facility on 38 acres of land in Port Perry, Ontario.
In addition to accommodating the company’s expanding staff, the new location provided the type of working environment Adamson prefers. “I don’t like being jammed in and surrounded by the city and here we’ve got lots of space to measure, test and A/B big loudspeaker systems outside,” he explains.
Loudspeakers being built in the Port Perry facility.
Additionally, there’s been keen focus on tightly integrating loudspeakers with other audio system elements, transforming and broadening capabilities to also become an electronics engineering facility and manufacturer. “Convergence of technology is a well-known phenomenon,” he says. “I mean, you start off with a cell phone, and then you’re sending texts, then emails, and pretty soon you’re watching movies on your phone.”
The first result of this effort is Energia, incorporating patented E-capsule loudspeaker technology, networkable Class D amplification, DSP, cable and power distribution, and AVB network hardware with software integration of control, as well as 3-D simulation and diagnostics. First off the line was the E15 line array module, followed by the recently released, smaller E12, while the remaining components continue to be developed in phases, with the company working closely with a number of partners worldwide.
In response to this initiative, a number of leading hire companies – including Sound Image (U.S.), Wigwam (UK), and Norwest (Australia), as well as several French companies – are currently utilizing the E15.
Energia E15 arrays deployed for Paul McCartney in Latin America. (Credit: Simon Duff and William Shincapie)
It was also recently deployed by Adamson’s Colombian partner C Vilar for a series of Paul McCartney concerts in Latin America.
Adamson Systems Engineering continues to grow, bringing on new people to join those who have long shared the founder’s passion for innovation and a relentless work ethic. As Brock Adamson optimistically puts it regarding plans for a future focused on expanding the company’s product scope and production facilities even further: “We just haven’t put the shovel in the dirt yet.”
Based in Toronto, Kevin Young is a freelance music and tech writer, professional musician and composer.
Church Sound: Methods For Getting The Most From Your System
It starts with a working knowledge of gain structure and a few related concepts
If terms such as gain structure, impedance matching and headroom are unfamiliar, or worse, give you a headache, don’t worry, you’re not alone. Most church sound techs would rather have their gear work perfectly right out of the box than have to tweak it into compliance.
Nevertheless, when it comes to setting up and operating a sound system, a working knowledge of gain structure (and a few related concepts) will help you get the best possible performance from whatever equipment you use.
In short, gain structuring has to do with setting the relative levels of audio signals going into and out of two or more connected audio circuits.
Audio gear has a range of input and output signal levels within which it sounds good. Going outside of that range results in problems such as hiss, distortion, reduced fidelity (especially when dealing with digital gear) and lowered power output.
It’s vital to have a basic understanding of gain structure in a live sound system and to know some ways to optimize gain structure for each piece of gear in the signal chain. As an aid to understanding, see the diagram of a signal chain in Figure 1. Some sound systems will be simpler than the one depicted in the diagram, while others will be more complex, but the basic principles apply to any configuration.
Figure 1: In a typical P.A. setup’s signal chain, one of two input sources is connected to an input channel on a mixer, with a compressor patched in to the channel insert and an external effects unit patched in to the auxiliary loop. The channel signal is routed internally to an equalizer in the console’s master section, and the mixer’s master output bus feeds a power amp’s input.
All audio gear has a peak maximum signal level (above which the signal begins clipping) and what is referred to as its noise floor: the natural noise of the electronics when no input signal is present (Figure 2).
The total difference between the two extremes is called the dynamic range, which is expressed in decibels. For example, if the peak maximum signal level of a device is +24 dBu and the noise floor is -60 dBu, the device has a dynamic range of 84 dB (24 dB + 60 dB = 84 dB).
The difference between the noise floor and the nominal level at which gear operates (+4 dBu, which reads 0 on a typical VU meter) is called the signal-to-noise ratio (S/N ratio).
Finally, the difference between the nominal operating level and the maximum peak level is referred to as headroom.
Figure 2: Maximum peak level, nominal level, and noise floor level determine how much headroom you have as well as what your signal-to-noise ratio is.
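These three quantities are simple differences, which a minimal Python sketch can make concrete. The figures match the example in the text (+24 dBu peak, -60 dBu noise floor), with +4 dBu assumed as the nominal level:

```python
def dynamic_range(peak_dbu, noise_floor_dbu):
    """Total span between the clipping point and the noise floor, in dB."""
    return peak_dbu - noise_floor_dbu

def signal_to_noise(nominal_dbu, noise_floor_dbu):
    """Distance from the nominal operating level down to the noise floor."""
    return nominal_dbu - noise_floor_dbu

def headroom(peak_dbu, nominal_dbu):
    """Distance from the nominal operating level up to the clipping point."""
    return peak_dbu - nominal_dbu

# Example device: +24 dBu peak, -60 dBu noise floor, +4 dBu nominal
print(dynamic_range(24, -60))   # 84 dB
print(signal_to_noise(4, -60))  # 64 dB
print(headroom(24, 4))          # 20 dB
```

Note that headroom plus S/N ratio always equals the total dynamic range; raising the operating level trades headroom for S/N, and vice versa.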
Why is all that important, and what does it have to do with getting optimal performance from your audio gear?
As a rule, you want to drive the inputs of a piece of gear at as high a level as possible without inducing distortion.
So if the nominal level is +4 dBu and the peak maximum level is +24 dBu, theoretically you have 20 dB of headroom to work with. That means you’ll probably want to set the input level so that the loudest peaks fall a few decibels short of the maximum, say around +16 dBu on a PPM (peak program meter) scale.
I say probably because practice doesn’t always follow theory — use your ears as the final judge. Conversely, input signals that fall far below the nominal level become increasingly noisy as they approach the noise floor.
Setting the optimal output level is even trickier, because output stage characteristics can vary wildly from one piece of gear to another. For example, some begin distorting at relatively low settings, while others sound best when they’re wide open.
That’s one of the reasons it’s important to become familiar with each piece of gear in the signal chain. If one or more pieces of digital gear are in the chain, additional considerations will apply.
That may seem plain enough, and in general it is, but a number of additional variables must be considered at each link in the chain. And only by recognizing and dealing with those variables can you get optimal performance from your sound system. Let’s take a closer look…
When structuring gain relationships, you should always start at the beginning of the signal chain, which is, not surprisingly, the sound source.
In the case of direct line feeds from sources like instrument amplifiers, electronic keyboards and personal mixers on stage, a sound tech can do little more than request that the signals arrive at the mixer inputs at optimal levels.
But the tech does have control over at least two sources: condenser microphones with built-in pads and direct injection (DI) boxes with selectable output levels.
Many condenser microphones have a built-in pad (input attenuator) that reduces the signal between the capsule and the output electronics by 10 dB or so.
Generally, you won’t need to engage the pad unless the mic is used on an especially loud sound source. For example, if an AKG C 535 condenser mic is used on a loud vocalist or snare drum, then its 14-dB pad should be engaged. If a condenser mic is used on acoustic strings or as an overhead on a drum kit, the pad can stay off.
Engaging the pad for soft sound sources raises the noise floor of the capsule to a point where it can be noticeable during quiet passages; so use the pad only when necessary. The general rule of thumb is: if you hear something that sounds like clipping or limiting from a condenser mic itself, activate the mic pad. If not, then full steam ahead.
Many DI boxes have selectable output levels. For instance, DOD 265 Stagehand DI boxes have selectable 20-dB and 40-dB output pads, and some Whirlwind DI boxes have 20-dB pads.
Because many mixing consoles have pads on the input strips, it’s best to send as hot a signal as possible from a DI box without clipping the output of the box itself (something that usually happens only on active DI boxes).
That lets you trim back the signal to something usable at the console input while keeping the signal as hot as possible for its trip through the signal snake to the console.
This procedure helps attenuate the effects of any ground-loop problems that might exist on that line due to its interaction with, say, a bass guitar amplifier on stage.
A quick note regarding ground loops: If a passive DI box has a metal XLR jack and the mic cable’s shield isn’t properly disconnected from the XLR shell, then it’s impossible to break a ground loop using the ground-lift switch on the DI box.
In that case, you’ll need to replace the mic cable with one that has the shell properly floated, or use a short XLR-female-to-XLR-male adapter with the shield disconnected from the shell. You won’t believe how much grief that can save you.
In the Channel
Once you optimize the levels coming from the various sound sources, you’re ready to connect them to individual channel-strip inputs on your mixer. Nearly every mixing console has a trim (or gain) control on each channel strip, and many consoles also include a pad switch, usually labeled simply “pad.”
However, on many Allen & Heath consoles the mic/line selector switch is also used to engage the 20-dB pad for XLR sources. In any case, the pad and gain controls are used individually or in combination to make the level of the source signal coming into the console compatible with the input level of the channel preamp.
Mixer-channel pads generally reduce the input signal strength by a fixed amount, usually around 20 dB. The pad is placed ahead of any transformers or other electronics in the circuit and should be engaged only when the input signal is too hot to be comfortably handled by the channel preamp. The gain (or trim) control is a continuously variable potentiometer that adjusts the channel preamp gain.
Microphone preamps typically offer up to 60 dB of gain boost, far more than most other gain stages in the signal chain, so be particularly careful when adjusting them. If the input level is set too high, the preamp will be driven into clipping, causing distortion; if it is set too low, excessive noise will result.
Most consoles with a Solo function on the channel strip show the input level of a single channel on a meter when the channel is placed in Solo mode.
To adjust the trim, zero all the controls on that channel strip and lower the fader completely. Put the channel into Prefader Solo mode (sometimes called PFL for PreFader Listen) and monitor it with headphones so you can evaluate the sound source for distortion or hum.
Have the vocalist or instrumentalist sing, talk or play his or her instrument while you watch the solo meter level; bring up the gain level until the meter approaches 0 dB on the loudest transients. If you hear distortion on the headphones or the gain control can’t be turned down low enough to get the solo meter down to 0 dB on the peaks, engage the pad and bring up the gain as appropriate.
In practice, you’ll probably want to set the channel level to peak somewhere between -6 dB and -10 dB on the PFL meter during sound check. Things tend to get louder during the actual show, and it’s preferable to incur a little noise rather than clipping the input stages when the sounds get heavier onstage.
Only after you have set the input level properly should you bring up the fader and add the signal source to the house mix.
Levels can also change during the course of the show — guitarists are notorious for hedging their bets by playing softly during sound check and then cranking up the sound when the crowd arrives.
Have no fear, though: if the signal level starts creeping up into the hot zone, you’ll probably notice the peak-overload LED on the channel strip blinking at you.
Similarly, you may find that you need to pull the fader down really low to make the signal fit in the mix. In either case, adjust the input gain back on the offending channel and readjust the level of the channel fader in the final mix.
However, be aware that adjusting the input gain during a live show will also affect the monitor sends from that channel, which can make the musicians onstage very unhappy. You may have to turn down the input while turning up the monitor sends to counteract the reduction in signal strength.
Still, that beats having a clipping channel sound bad for the entire service or performance.
Inserts and Loops
Once the source sound has passed the channel preamp stage, it is routed to the mixer’s internal mixing buses, but it can make several stops along the way. Many mixers feature channel inserts that let you patch an outboard processor, usually a dynamics processor, into the signal path just after the preamp stage so that the entire audio signal must pass through it before reaching the EQ section and other internal circuitry.
Most channel inserts do not have send and receive level controls, so you will have to rely on the processor’s input and output level controls (assuming there are any) to set the gain at that point in the signal chain.
When using a compressor, the idea is to adjust the compressor’s output (or makeup) gain to compensate for any gain reduction caused by the compression itself. For example, if you read 10 dB of gain reduction on the compressor’s gain reduction meter, you may have to crank the output level up by 10 dB to get the volume level back in the game.
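The makeup-gain arithmetic is just addition and subtraction. A deliberately simplified sketch (it treats gain reduction as a fixed amount rather than the level-dependent value a real compressor produces; the function names are illustrative):

```python
def makeup_gain_db(gain_reduction_db):
    # Start by matching makeup gain to the reading on the compressor's
    # gain reduction meter, then fine-tune by ear.
    return gain_reduction_db

def output_level(input_db, gain_reduction_db, makeup_db):
    """Signal level after compression and makeup gain, in dB."""
    return input_db - gain_reduction_db + makeup_db

# 10 dB of gain reduction, compensated by 10 dB of makeup gain:
# the peaks come back to their original level.
print(output_level(0, 10, makeup_gain_db(10)))  # 0
```

In practice the meter reading is only a starting point, since gain reduction varies moment to moment; the goal is matching perceived loudness with the compressor bypassed and engaged.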
Similarly, if you want to patch in an external processor without sending the entire signal through it (such as for reverb or echo effects), or if you want to be able to route signals to it from more than one input channel, the usual method is to connect the processor to an effects or auxiliary bus.
The signal from the effects or aux bus send is routed to the external processor’s input, and the signal from the external processor’s output is returned to the effects or auxiliary return in the stereo output bus (or another mixer channel input if you want ultimate control).
Unlike channel inserts, effects and aux sends and returns nearly always have level controls. Try setting the effects or aux send and receive levels in the mixer’s master section to their halfway points, and then slowly turn up the processor’s input level control until you get a consistently robust level. If the processor has a mix control, set it for 100 percent wet/effect.
Something else to consider when using external audio processors is their operating level. The insert points and effects busses on large professional mixers generally operate at a +4 dBu level, whereas those on lesser-grade mixers generally operate at -10 dBV.
Fortunately, many outboard processors can be switched between the levels. Look for a little button near the input jacks on the back that’s marked -10/+4 or something similar and set it accordingly.
If the processor’s input gain control must be set very low to prevent clipping the meter, you’re probably asking its -10 dBV input to handle a +4 dBu signal from the console, which is not nice to do.
In that case, trim back the input of the console strip itself until you can get the processor’s input control somewhere up around 50 percent.
Conversely, setting the processor’s input to +4 dBu for a console with a -10 dBV operating level will result in extra noise or not enough signal to drive the processor properly.
In critical listening situations you can also get transformer-based audio level shifters from companies like Ebtech or Whirlwind, which will boost or attenuate the levels from -10 dBV to +4 dBu or from +4 dBu to -10 dBV as appropriate.
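The two standards are referenced to different voltages (0 dBu = 0.775 V RMS, 0 dBV = 1.0 V RMS), so the actual voltage gap between +4 dBu and -10 dBV works out to roughly 11.8 dB, not 14. A quick sketch of the arithmetic:

```python
import math

DBU_REF_V = 0.7746   # 0 dBu = 0.7746 V RMS
DBV_REF_V = 1.0      # 0 dBV = 1.0 V RMS

def db_to_volts(db, ref_v):
    return ref_v * 10 ** (db / 20)

def volts_to_db(v, ref_v):
    return 20 * math.log10(v / ref_v)

pro = db_to_volts(4, DBU_REF_V)         # +4 dBu, professional level
consumer = db_to_volts(-10, DBV_REF_V)  # -10 dBV, consumer level

# Express the pro level in dBV to compare apples to apples
gap_db = volts_to_db(pro, DBV_REF_V) - (-10)
print(round(pro, 3), round(consumer, 3), round(gap_db, 1))  # 1.228 0.316 11.8
```

Knowing the gap is about 12 dB explains why a mismatched connection is audibly wrong: that is a substantial chunk of headroom or S/N ratio to give away.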
It’s important to think it all out in advance and listen during sound check to avoid bad audio during the actual worship service.
Finally, outboard processors can behave in dramatically different ways, so you need to understand each one. For example, the overload LED on one processor might flash when the input signal reaches 6 dB below clipping, whereas on another it might not flash until the unit has been driven into distortion.
Or you might get a perfectly clean signal when cranking the output level of one processor to maximum and find that another one gets increasingly noisy past the halfway point. Listen, then adjust, listen, adjust, etc….
On the Bus
The internal bus structure of a mixing console is also subject to headroom and S/N considerations. Whereas some consoles like to have their mixing buses driven hard, others’ buses can be clipped quite easily.
A good example of an inexpensive live console that needed its buses driven hard was an old Peavey console I had 30-plus years ago. There was a lot of noise in the mixing buses, but by running the output faders down around 2 or 3 (on a scale of 1 to 10) and driving the input stages a little hotter, it was possible to get a decent S/N at the outputs.
On the other hand, a more recent vintage Alesis 16-channel live console didn’t have extra headroom in the mix bus but was very quiet. In that case, I ran the output faders up around 8 or 9 and then trimmed back the channel inputs until the output was at the right level.
The easiest way to determine the correct gain-staging approach is to plug in a dynamically consistent signal source, such as a drum machine or sampler, and listen with headphones for any crunching or distortion at the console output.
If there’s a lot of noise on the outputs with the faders up and no input signal present, bring the fader down until the noise is manageable.
Headphones make this easier to judge in a noisy room, so get yourself a quality pair and make friends with them.
If you hear distortion on the console outputs even when the meters read below 0 dB and the output faders are below halfway on the console, it means the internal mixing buses are clipping. In that case, bring the input faders down and the output faders back up.
High-end consoles have extremely quiet buses and a lot of headroom, so you typically won’t run into that sort of problem with them.
But many inexpensive consoles can be tweaked in the way I described to sound better than you might imagine.
If you want to go further, you can use an oscilloscope and a signal generator to actually see clipping in the various stages and adjust the levels accordingly. Yes, it’s the ultimate geek thing to do, but oscilloscopes can be great troubleshooting tools.
Hit ‘Em Hard
You can also tweak the gain structure between the equalizer and the amplifier to improve the S/N of the entire PA system.
For example, if you have sufficient gain from the equalizer output, you can raise its level by 10 dB and trim the input on the amp down by the same amount to attenuate any hum or ground-loop problems between the console and power amplifiers. That can really help in a quiet mixing situation such as a church service.
Proper grounding, balanced inputs and shielded cables should, in theory, allow for an ultra-quiet connection between the console equalizer and the amplifiers. However, that’s rarely the case in the real world. I’m always tweaking things one way or another to get the outputs as hot as possible without clipping and then turning down the inputs on the next stages.
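The reason the hot-output/trimmed-input trick works can be shown numerically: hum or ground-loop noise injected on the cable between stages gets cut by the next stage’s input trim without ever having been boosted, so the signal-to-hum ratio improves by exactly the amount you shifted. An illustrative sketch (the function and level names are mine, not a standard formula):

```python
def interstage_snr_db(signal_db, hum_db, shift_db):
    """Signal-to-hum ratio at the next stage when the previous stage's
    output is raised by shift_db and the next input is trimmed down by
    the same amount (net unity gain for the signal).

    The hum is injected on the line *after* the boost, so it only
    sees the input trim."""
    signal_at_next_stage = signal_db + shift_db - shift_db  # unchanged
    hum_at_next_stage = hum_db - shift_db                   # attenuated
    return signal_at_next_stage - hum_at_next_stage

baseline = interstage_snr_db(0, -70, 0)    # 70 dB signal-to-hum
improved = interstage_snr_db(0, -70, 10)   # 80 dB: hum is 10 dB quieter
print(baseline, improved)
```

The limit, of course, is the headroom of the stage you are driving hotter; push the EQ output too far and you trade hum for clipping.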
Nearly any sound tech can properly operate a really expensive console with plenty of headroom and low noise, but it takes someone with real skills to make a cut-rate, unforgiving board sound great.
I have observed many guest engineers working with the same equipment get results ranging from fabulous to mediocre or worse, depending on how they ran the levels. So don’t feel put down as a sound tech when you are given some inexpensive gear and asked to make it sound great.
Making an inexpensive system sound like a million bucks is the ultimate challenge. You can indeed spin straw into gold if you use your brains and experience.
Getting your church sound system to sound its best takes more than a great set of mixing ears for a particular music style. It requires understanding how each piece of gear in the signal chain works and exploiting its potential to the max while working around any weak points.
Once you reach that level of knowledge, you are truly sympatico with the sound system and can make it do most anything you want.
Mike Sokol is the chief instructor of the HOW-TO Church Sound Workshops. He has 40 years of experience as a sound engineer, musician and author. Mike works with HOW-TO Sound Workshop Managing Partner Hector La Torre on the national, 36-city, annual HOW-TO Church Sound Workshop tour.
In The Studio: Drum Damping Techniques
Effective ways to get drum sound firmly under control
What Is Damping?
Damping (or dampening if that’s how you roll) is controlling the decay and overtones of a drum. Damping the drum is NOT a way to deal with a poorly tuned drum.
Knowing how to effectively control the sound of the drums through damping is essential for every drummer, producer and recording engineer.
Here I’ll cover some inexpensive products, some do-it-yourself options, and then some kick drum specific methods. This article is dealing with recording live drums, not damping and muffling for practicing.
Moongels—These are great inexpensive, reusable jelly pads that stick to the top head of the drum to control the ringing.
Super simple to apply. I like these a lot. Two sets should last you for years. They work best on the top head of a snare or tom; they tend to fall off the bottom head.
O Rings/E-Rings—These are my favorite method of damping toms. They’re thin clear plastic rings that sit on top of the drum head. I love the instant gratification they provide.
On snare it’s not always my favorite sound, and it can get in the way for brush work. They’re also inexpensive and should last a long time unless they get folded or bent.
Do It Yourself Options
Gaffer tape or masking/painter’s tape—NEVER use duct tape or electrical tape on drums; it’s just gross. Gaffer (gaffa), masking or painter’s tape will apply and remove cleanly from the drum.
There are a couple of techniques for using tape to damp. Try making a loop of gaffer tape, sticky side out, and sticking it about an inch from the rim. With masking or painter’s tape, take a 4-inch strip and fold one end over so there’s a small “handle” for easy removal.
I’ll tend to use tape on the bottom of toms if E-Rings aren’t enough. You can also use tape on cymbals if they’re too ringy/washy.
Reused O-ring made from an old drum head—Next time you change your drum heads, cut the old ones into O-rings: cut off the outer edge and the center. This costs nothing but a little time.
Cotton balls—I once heard of putting a few cotton balls inside a tom to very naturally reduce the drum’s sustain. It sounds like a good trick, but I haven’t tried it yet.
Kick Drum Specifics
Pre-damped heads – There are a wide variety of drum heads available with damping built-in.
One of the most common is the Evans EMAD 2 which is a normal drum head with a plastic ring and foam damping insert. I really like the sound of these heads. Aquarian and Remo also have nice pre-damped heads.
Inside the kick – From pillows to more advanced systems, there are a lot of options inside the kick drum.
IMO all kick drums should have some kind of damping to sound good. You can take a blanket, fold it and place it inside so it’s just touching the batter head. Place something heavy like a sandbag or cement block on top so it doesn’t slide away during the performance.
I’ve read that some people will partially fill their kick with shredded newspaper but that seems really silly to me. A chunk of acoustic foam will work nicely as well.
Jon Tidey is a Producer / Engineer who runs his own studio, EPIC Sounds, and enjoys writing about audio on his blog AudioGeekZine.com. To comment and ask questions about this article go here.
Tuesday, September 03, 2013
Compact Console Connectivity: I/O, I/O, It’s Off To Patch We Go…
Taking stock of onboard and expansion connections
Back in the analog (only) days, hooking up to a console was a rather simple process. Stage inputs ran through a copper wire snake that plugged into the front of house console’s inputs. Console outputs were run back to the stage through the same snake line’s returns or they ran through a separate “drive” snake that sent the line level output signals to the amp racks.
If you needed to insert a processor to a channel or group, you simply grabbed a TRS to dual 1/4-inch cable and hooked up the unit to a console’s insert jack. If you wanted to use a board microphone or an audio playback deck, you simply reached around the back of the console, parted the rat’s nest of cables and found an unused input to plug in to. Even if the system used multi-pin connectors, setting up and patching FOH took a bit of time as everything was point-to-point.
Another challenging aspect was patching the stage on shows that had multiple bands. Most of us old sound folks paid our dues playing “patch monkey” on shows that required us to swap out stage snakes and stage lines into the main snake head, while trying to get every signal into the correct snake channel. Trying to do it all in the short turnover times the promoters wanted is why many of us became grumpy old sound folks.
While it was – and still is – relatively simple to interface analog gear, digital consoles have been a boon. Many digital consoles have a way to connect to a digital snake system, saving our backs from having to deal with heavy multicore copper cables.
Another obvious benefit is the need for far fewer FOH racks, as well as not needing to patch all of that outboard gear into the console. With built-in processing, digital consoles have lightened our trucks and shrunk the area required for the mix and control position, allowing promoters to sell more seats.
From a connection standpoint, the biggest change (at least to me) is that many digital consoles allow patching and routing of signals directly in the software. No more reaching into a pile of cable spaghetti to find an open jack while trying not to accidentally unplug something. With many digital boards, you can open a set-up/patch menu and assign inputs and outputs to wherever you need. Routing effects, processing and audio mixes has never been easier.
Another great feature is the ability of many digital systems to record either directly to a USB connected drive, to a DAW, or to a stand-alone multi-track deck. Being able to archive shows and do “virtual” sound checks (where you play back feeds directly recorded from the band and use these signals to help set up and tune the PA at the next gig before the band shows up) has been a real blessing to many who work behind a console.
Connectivity is what it’s all about, and today’s digital consoles have really hit the mark with their routing, patching and interfacing effectiveness.
All of these features and benefits are not just limited to large digital boards – many compact models also offer quite extensive interface and routing capabilities, so let’s have a look via this Photo Gallery Tour, taking stock of the connections that are onboard as well as expansion capabilities they have in terms of inputs and outputs.
And because many of these models only hit the market recently, we’ll also provide some additional details on their overall facilities.
Craig Leerman is senior contributing editor for Live Sound International and is the owner of Tech Works, a production company based in Las Vegas.
Church Sound: The Value Of Knowing How Equipment Fails
Diagnosing the problem correctly to get the system back up and running
Your worship leader has been knocked to the floor as his monitor wedge suddenly explodes with a huge volume increase. The church service ends and you’ve got 30 minutes until the next service to find the problem and fix it. Where do you start?
A very similar scenario happened to a friend of mine working in pro audio. Only it happened for four nights in a row until the problem was solved. And the primary reason it took so long to fix was that someone above him didn’t recognize the ways equipment can fail and thus denied his request for a swap of a particular piece of gear.
Equipment usually fails in these ways:
—No volume / no signal
—Signal cuts in and out
—Decrease in sound quality
These failures are usually simple things like a short in a cable, a bad cable, or a blown piece of equipment like a power supply or internal electronics. There are also problems like a blown loudspeaker.
What about the boost in the monitor volume? No, my friend didn’t crank the gain or any such thing.
Let’s look at the signal path:
—Signal comes into the mixer from mics, instruments, and a computer
—Mixer possibly routes signal to effects units
—Mixer sends signal out to power amplifier
—Amplifier sends signal to monitor
Using this path, let’s look at the possible sources of a problem:
Right off the bat, we can drop cables from our list of potential problem sources. Also, as all the input sources are boosted in the monitors, we can drop those off the list. It’s something after the signal gets to the mixer.
Now we’re down to the mixer, the amplifier, and the monitor itself. But monitors fail “down” – a blown power supply, speaker cone, or fuse takes the sound away rather than boosting it. Our list is getting shorter.
Where would you put your money between the mixer and the amplifier?
You’re going to say the amp…not because you think that’s going to be the source of the problem…but because you really don’t want the problem to be in the mixer. You’d have to get it serviced. Repair could be expensive. You’d have to borrow another mixer. “Denial ain’t the name of a river in Egypt.”
Truth be told, the amp is going to fail down as well.
Hate to break it to you…the mixer is busted. My friend knew this and wanted to swap mixers but the “higher up” continually denied that possibility and so forced him to try everything from cable replacement to monitor replacement until finally recognizing it was the mixer.
Stuff fails. You’re alerted to this failure by what you hear (or what you don’t hear). Knowing how equipment fails will help you diagnose the problem correctly and get the system back up and running as soon as possible.
Ready to learn and laugh? Chris Huff writes about the world of church audio at Behind The Mixer. He covers everything from audio fundamentals to dealing with musicians, and can even tell you the signs the sound guy is having a mental breakdown. To view the original article and to make comments, go here.
In The Studio: An Interview With Legendary Engineer Al Schmitt
A focus on microphone selection and approaches from a master
Here’s an excerpt of an interview that I did with legendary engineer/producer Al Schmitt that’s featured in the second edition of The Recording Engineer’s Handbook.
After 18 Grammys for Best Engineering and work on over 150 gold and platinum records, Al Schmitt needs no introduction to anyone even remotely familiar with the recording industry. Indeed, his credit list is way too long to print here (but Henry Mancini, Steely Dan, George Benson, Toto, Natalie Cole, Quincy Jones, and Diana Krall are some of them), but suffice it to say that Al’s name is synonymous with the highest art that recording has to offer.
Do you use the same setup every time?
Al Schmitt: I usually start out with the same microphones. For instance, I know that I’m going to immediately start with a (Neumann) tube U 47 about 18 inches from the F-hole on an upright bass. That’s basic for me and I’ve been doing that for years. I might move it up a little so it picks up a little of the finger noise.
Now if I have a problem with a guy’s instrument where it doesn’t respond well to that mic then I’ll change it, but that happens so seldom. Every once in a while I’ll take another microphone and place it up higher on the fingerboard to pick up a little more of the fingering.
The same with the drums. There are times where I might change a snare mic or kick mic, but normally I use a (AKG) D112 or a 47 FET on the kick and a (AKG) 451 or 452 on the snare and they seem to work for me. I’ll use a Shure SM57 on the snare underneath and I’ll put that microphone out of phase. I also mic the toms with (AKG) 414s, usually with the pad in, and the hat with a Schoeps or a B&K or even a 451.
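The polarity flip on the bottom snare mic is worth a quick illustration. Because the bottom head moves opposite to the top head, the two mics capture roughly inverted versions of the same signal; summed raw they cancel, while reversing one restores the reinforcement. Here’s a simplified numeric sketch in Python (an idealized model with identical mics, not a claim about any particular session):

```python
import math

def mic_sum(flip_bottom):
    """Sum a top and bottom snare mic. In this idealized model the
    bottom mic 'sees' the head moving the opposite way, so its
    signal is inverted relative to the top mic."""
    n = 128
    top = [math.sin(2 * math.pi * k / n) for k in range(n)]
    bottom = [-s for s in top]          # opposite side of the head
    if flip_bottom:
        bottom = [-s for s in bottom]   # polarity-reverse at the console
    mix = [t + b for t, b in zip(top, bottom)]
    return max(abs(s) for s in mix)     # peak level of the combined signal

print(mic_sum(flip_bottom=False))  # the mics cancel
print(mic_sum(flip_bottom=True))   # the mics reinforce
```

In practice the cancellation is never this total, but the model shows why the “out of phase” switch matters when combining the two snare mics.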
What are you using for overheads?
I do vary that. It depends on the drummer and the sound of the cymbals, but I’ve been using (Neumann) M 149s, the Royer 121s, or 451s. I put them a little higher than the drummer’s head.
Do you try to capture the whole kit or just the cymbals?
I try to set it up so I’m capturing a lot of the kit in there which makes it a little bigger sounding overall because you’re getting some ambience.
What determines your mike selection?
It’s usually the sound of the kit. I’ll start out with the mics that I normally use and just go from there. If it’s a jazz date then I might use the Royers and if it’s more of a rock date then I’ll use something else.
How much experimentation do you do?
Very little now. Usually I have a drum sound in 15 minutes so I don’t have to do a lot. When you’re working with the best guys in the world, their drums are usually tuned exactly the way they want and they sound great so all you have to do is capture that sound. It’s really pretty easy. And I work at the best studios where they have the best consoles and great microphones, so that helps.
I don’t use any EQ when I record. I use the mics for EQ. I don’t even use any compression. The only time I might use a little bit of compression is maybe on the kick, but for most jazz dates I don’t.
How do you handle leakage? Do you worry about it?
No, I don’t. Actually leakage is one of your best friends because that’s what makes things sometimes sound so much bigger.
The only time leakage is a problem is if you’re using a lot of crap mics. If you get a lot of leakage into them, it’s going to sound like crap leakage. But if you’re using some really good microphones and you get some leakage, it’s usually good because it makes things sound bigger.
I try to set everybody, especially in the rhythm section, as close together as possible. I come from the school when I first started where there were no headphones. Everybody had to hear one another in the room, so I still set everybody up that way. Even though I’ll isolate the drums, everybody will be so close that they can almost touch one another.
What’s the hardest thing for you to record?
Getting a great piano sound. You know, piano is a difficult instrument and to get a great sound is probably one of the more difficult things for me. The human voice is another thing that’s tough to get. Other than that, things are pretty simple.
The larger the orchestra the easier it is to record. The more difficult things are the 8 and 9 piece things, but I’ve been doing it for so long that none of it is difficult any more.
What mics do you use on piano?
I’ve been using the M 149s along with old Studer valve preamps on piano, so I’m pretty happy with it lately. I try to keep them up as far away from the hammers as I can inside the piano. Usually one captures the low end and the other the high end and then I move them so it comes out as even as possible.
It sounds like you’re a minimalist. You don’t use much EQ or compression.
No, I use very little compression and very little EQ. I let the microphones do that.
What’s your setup for horns?
I’ve been using a lot of (Neumann) 67s. On the trumpets I use a 67 with the pad in and I keep them in omnidirectional. I get them back about 3 or 4 feet off the brass. On saxophones I’ve been using M 149s. I put the mic somewhere around the bell so you can pick up some of the fingering. For clarinets, the mic should be somewhere up near the fingerboard and never near the bell.
How do you determine the best place in the studio to place the instruments?
I’m working at Capitol now and I’ve worked here so much that I know it like the back of my hand so I know exactly where to set things up to get the best sound. It’s a given for me here. My setups stay pretty much the same. I try to keep the trumpets, trombones and the saxes as close as possible to one another so they feel like a big band. I try to use as much of the room as possible.
I want to make certain the musicians are as comfortable as they can be with their setup. That means that they have clear sightlines to each other and are able to see, hear and talk to one another. This means having all the musicians as close together as possible. This facilitates better communication among them and that, in turn, fosters better playing.
I start by setting members of the rhythm section up as close to each other as possible. To get a tight sound on the drums and to assure no leaking into the brass or strings’ mics, I’ll set the drums up in the drum booth. Then I’ll set the upright bass, the keyboard and the guitar near the drum booth so they all will be able to see and even talk easily to each other.
If there’s a vocalist, 90 percent of the time I’ll set them up in a booth. Very few choose to record in the open room with the orchestra, although Frank Sinatra and Natalie Cole come to mind.
If you had only one mic to use, what would it be?
A 67. That’s my favorite mic of all. I think it works well on anything. You can put it on a voice or an acoustic bass or an electric guitar, acoustic guitar, or a saxophone solo and it will work well. It’s the jack of all trades and the one that works for me all the time.
Bobby Owsinski is an author, producer, music industry veteran and technical consultant who has written numerous books covering all aspects of audio recording. For more information be sure to check out his website and blogs. Get the second edition of The Recording Engineer’s Handbook here.
Friday, August 30, 2013
Church Sound: Microphone Choices And Approaches For Worship Services
Getting it right at the place where it all begins...
A church service is a dynamic, multifaceted event combining spoken word from experienced and inexperienced speakers (“talkers”), solo vocalists, choirs, bands ranging from mellow to rock levels, and responses from the congregation. Though the program is typically consistent from week to week, new variations arise, ranging from a holiday pageant to a business meeting.
The primary goal of the service is to engage the congregation in the worship experience through word and music. To help achieve this, the audio system must be as transparent as possible.
What the congregation hears begins with the microphones. If the audio sources aren’t captured with sufficient level and clarity, everything else in the system is playing catch-up.
Let’s take a look at miking options and approaches for the variety of sources involved in typical church services.
The pastor is the key person (and therefore audio source) in a typical service. Choosing the best mic and finding the optimal placement relative to the pastor’s mouth will make the difference between comfortable listening and straining to hear – and between having headroom in the system and riding the fader on the edge of feedback.
The major components in the church mic toolbox include goosenecks (AKG DAM+ Series), lavaliers (Countryman B2D), headsets (Audio-Technica BP892 MicroSet), handhelds (Shure SM86) and hanging choir (DPA 4098H). All of these suppliers and others offer a full line of these tools and more.
Since most pastors need their hands free, the most common choices are a gooseneck pulpit/podium mic, a lavalier, or a headset. Each has its own characteristics and benefits. When the presenter is in a relatively fixed position and using notes, a pulpit mic does a decent job in capturing the voice with sufficient level, especially when the person takes the time to adjust the mic position and doesn’t stand too far away.
Lavalier mics keep the hands free and the face unencumbered, and often have a shaped frequency response to compensate for “chest resonance” and the off-axis loss of the voice’s higher frequencies. Yet sometimes it’s difficult in live settings to achieve a full frequency response and sufficient level.
Lavs with omnidirectional polar patterns typically generate less noise from clothing, cable movement, and handling, and pick up sound sources relatively equally from all directions. Positioning is less critical than with a more directional pattern, and head turns aren’t as likely to result in significant drops in level or changes in frequency response. But the lack of directionality makes it harder to keep their pattern away from the loudspeakers, which can lead to feedback.
Directional lavs provide greater isolation along with the potential of more handling noise and heightened sensitivity to breath and plosive sounds. Additionally, the user’s movements need to be more consistent so that the head doesn’t dramatically move away from the pickup pattern and change the output level.
Consider a mic’s polar pattern in relation to its application and factors such as stage noise and loudspeaker location – the common patterns are omnidirectional, supercardioid and cardioid.
If the pastor is amenable, a headset mic is a more reliable choice, and the frequency response is invariably better for a live setting. Steve Diamond, co-founder and acoustics consultant at AGI Professional in Eugene, OR, notes that he recommends headsets in almost all cases due to their consistency and better frequency response in live settings.
Headset mics provide considerably greater gain before feedback, more natural voice quality with full frequency response, and more consistent audio level with movement. Place the element toward the corner of the mouth but out of the direct line of the voice to minimize breath noise and consonant pops, and use the provided windscreen.
Headsets with single- and dual-ear mounting are readily available, with some of the single-ear units I’ve tried providing sufficient hold for active movement, and staying in place when putting on reading glasses. Boom length on many models can be user-adjusted for optimal positioning, and shaped to fit the curve of the face.
For both lavs and headsets, some manufacturers offer different caps for their elements that alter the frequency response curve via acoustical equalization so that they can be used for different applications.
For a mic with a fairly flat frequency response, the addition of a cap can increase high-frequency response over a specified bandwidth for off-axis use. The same model is sometimes available with different sensitivities, so that its response characteristics and maximum SPL can be matched to the user.
Look for models with detachable cables at the mic end so that if a cable is damaged, it can be easily replaced. Most higher end models have rugged and sometimes even Kevlar-reinforced cables.
Moisture resistance is also important for these miniature mics, and most have tight seals at joints and connectors, barriers to protect the mic element, and small rings on the boom to channel moisture.
Often during a service, a variety of individuals will make announcements, lead a prayer, or speak a testimony. Before a less experienced presenter goes up to the pulpit, some basic coaching on how to approach the mic will be beneficial. If it’s a church that holds rehearsal and/or sound check prior to the service, all the better. Persuade these folks to attend to get some practice.
Too often when people hear their voice become louder, they speak more quietly or back off the mic; reassure them that the sound engineer will turn down the level if they’re too loud. Let them know to work fairly close to the mic – a hand-width away (or closer for greater level), and on-axis.
This distance allows the presenter some side-to-side head movement while still remaining in the mic’s coverage pattern. Encourage them to take a moment to adjust the mic to their height before speaking. The same advice applies if they’ll be using a handheld mic.
Particularly with inexperienced talkers, adjust mics for optimum performance.
With a gooseneck mic, keep an eye on where loudspeakers are located. Especially if they’re located above and slightly to the front of the pulpit and altar area, try to keep the mic element positioned level, more or less parallel with the floor and at mouth level to the presenter, rather than pointing up toward the ceiling (and the loudspeakers). This can increase gain before feedback.
When reinforcing the choir with mics, specialized small-diaphragm cardioid condensers are typically used, either suspended from the ceiling or on high stands. Use these mics sparingly, and follow the 3:1 rule: space adjacent mics apart by at least three times each mic’s distance to the nearest singer. Position them above the head of the tallest vocalists in the back row, and set the angles so that they are aimed toward the choir (rather than pointing straight down at the floor) and with their lower-sensitivity zones toward the loudspeaker system.
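A rough back-of-envelope shows why a 3:1 spacing works: under the inverse-square law, a singer three times farther from a mic arrives roughly 9.5 dB down in that mic, which keeps comb filtering between adjacent channels mild. A quick check in Python (an illustrative free-field approximation, not a room measurement):

```python
import math

def relative_level_db(distance_ratio):
    """Level of a source in the 'wrong' mic relative to its own mic,
    assuming inverse-square (free-field) propagation."""
    return 20 * math.log10(1 / distance_ratio)

print(round(relative_level_db(3), 1))  # 3:1 spacing: roughly -9.5 dB
print(round(relative_level_db(2), 1))  # 2:1 spacing: only about -6 dB
```

A leaked copy 9 dB or more down produces only a few dB of comb-filter ripple when the channels are summed, which is generally inaudible in a reinforcement context.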
Donnie Haulk, CEO of AE Global Media in Charlotte, NC, sometimes uses wireless for choir miking, especially when the choir is on portable risers or varies its location. He places choir mics on boom stands to position them overhead, connected to bodypack transmitters that relay the signal to the board.
Optimizing The Situation
Overall gain before feedback will be highest with judicious use of the channel mute to minimize the number of open mics. Mute any mic that is unused for a time during the service; by keeping an eye open and knowing the routine, you’ll have it unmuted by the time someone is ready to use it.
Encourage the “hands-length” rule.
Also, it’s often helpful in minimizing stage noise – as well as preventing noise from HVAC systems and other sources from adding to the mix – to engage a high-pass filter on a mic, or on the mixing console, starting at approximately 80 to 150 Hz.
When he runs sound during a service, AGI’s Diamond told me that he high-passes everything except bass, kick, and sometimes floor toms, setting the filter knee based on the particular microphone and the user’s voice to eliminate both noise and excessive proximity effect when a vocalist works overly close to the mic.
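The low-cut itself is simple to sketch. As a rough illustration (a one-pole filter in Python, not any particular console’s implementation), here’s what a 100 Hz high-pass does to stage rumble versus a vocal-range tone:

```python
import math

def highpass(x, fc, fs):
    """One-pole high-pass filter (6 dB/octave), a simplified stand-in
    for a console's low-cut switch. fc = corner frequency in Hz."""
    rc = 1.0 / (2 * math.pi * fc)
    dt = 1.0 / fs
    a = rc / (rc + dt)
    y = [0.0] * len(x)
    for n in range(1, len(x)):
        y[n] = a * (y[n - 1] + x[n] - x[n - 1])
    return y

fs = 48000
tone = lambda f: [math.sin(2 * math.pi * f * n / fs) for n in range(fs // 2)]
peak = lambda sig: max(abs(s) for s in sig[fs // 4:])  # after settling

hum = highpass(tone(50), fc=100, fs=fs)      # 50 Hz rumble: attenuated
voice = highpass(tone(1000), fc=100, fs=fs)  # 1 kHz content: passes
print(round(peak(hum), 2), round(peak(voice), 2))
```

Real console filters are usually steeper (12 or 18 dB/octave), but the behavior is the same: content well below the knee is pushed down while the voice passes essentially untouched.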
When musicians in the praise band have onstage amps, try having the amps side-fire, aimed across the stage rather than pointed directly at the pews and sending their output straight into the congregation.
Ask the musicians to adjust their desired tonalities, but at a lower level than if they were trying to fill the building with sound. If they need more of themselves on stage, either put more of their signal into the monitors or tip their cabinets a bit so that the speakers are pointing more toward their ears.
Try miking the amp’s speaker toward the outside of the cone to mellow the HF content of the signal. This technique will have several positive effects.
First, it minimizes the bleed of the guitar or bass into the vocal and other mics on stage. Second, it gives the mix engineer more level and audio shaping control over the entire mix, so that an overly loud guitar or bass part will not overwhelm the vocals or other instrumentation.
With praise bands, Diamond insists that the players and singers “do in the sound check what they’re going to do in the service.” Being “inspired” to change instruments, amp settings, positioning, and distances from the mics just before the service can create problems for the person at the console.
Wired Or Wireless?
In churches, as in other settings, more applications that traditionally were filled by wired mics are going wireless – even though wireless units are more costly per channel.
Wireless (versus wired) sells “easily three to one” to his church customers, says Mike O’Rourke of Peach State Audio in Suwanee, GA. About the only wired mics he supplies to these folks are for a podium or in fixed choir positions. Drums often still have wired mics, with most other instruments using DI boxes or going wireless.
The majority of vocalists in larger, more “production-oriented” churches are wireless, providing more range of motion as well as offering the mix engineer more flexibility for channel assignments and positions. AE Global’s Haulk adds that wireless “cleans up the stage” and is easier for the lay person to use since the receivers are “permanently wired into the system,” while the transmitters are able to be flexibly assigned for different applications and locations.
To add versatility to their wireless purchases, churches will often have both a handheld and a beltpack transmitter that can be used with a particular receiver, depending on the requirements of the event. Manufacturers accommodate this flexibility by selling transmitters separately, and in some cases providing pre-packaged sets which include a receiver and both types of transmitters.
When beginning the process of adding wireless, think through the possibilities to the end rather than just filling the immediate need for a minister’s wireless or a mic for a vocalist. By analyzing the possibilities, the current two channels might eventually lead to 12.
An investment in higher quality systems to begin with will make compatibility and interference avoidance easier in the future as well. Also keep an eye on current and upcoming FCC plans for the UHF spectrum with regard to “white spaces” use. All of the leading manufacturers can be helpful in this regard.
Gary Parks has worked in pro audio for more than 25 years, including serving as marketing manager and wireless product manager for Clear-Com, handling RF planning software sales with EDX Wireless, and managing loudspeaker and wireless product management at Electro-Voice.
In The Studio: Taming A Harshly Distorted Electric Guitar (Includes Video)
Keeping what you want while eliminating what you don't
One of the challenges when mixing a heavily distorted guitar is that sometimes it can sound a little bit harsh or brittle. This might be when the player slides his fingers across the strings, or maybe some fret noise comes into the performance — or even just certain notes/harmonics that are a bit unpleasant or too resonant.
You can try to notch out these frequencies with a conventional EQ by finding the parts of the performance that sound bad, finding the specific frequencies that sound bad, and reducing them by 3 or 6 dB.
However, a conventional equalizer is going to pull out those frequencies across the entire performance. So if there are parts that sound great, you’re still pulling out that harmonic content across the entire performance when you really just want to pull it out during parts that sound harsh. This is where a de-esser comes into play.
Conventionally a de-esser is used for vocals. When the singer makes “Sss” sounds, the idea is that the compressor is going to reduce the amplitude of those “Sss” sounds at the frequency you set, while letting the rest of the performance come through unprocessed. It’s a perfect application in this situation with a harsh distorted guitar, because it basically acts as an adaptive equalizer.
So if the performance gets a little too harsh, the compressor is going to turn those harsh parts down. However, when it’s not too harsh, it maintains the tone that you want to keep.
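To make that behavior concrete, here’s a minimal, hypothetical sketch in Python of the gain logic a de-esser applies to a single frequency band (this is not the Waves C-1 used in the video, and the per-block band levels are made up): only blocks exceeding the threshold get turned down, unlike a static EQ cut that would attenuate every block.

```python
def deess(band_levels_db, threshold_db=-20.0, ratio=4.0):
    """Frequency-selective compression: attenuate the 'harsh' band only
    in the blocks where it exceeds the threshold, leaving quieter
    blocks untouched (unlike a static EQ cut)."""
    out = []
    for level in band_levels_db:
        if level > threshold_db:
            over = level - threshold_db
            out.append(threshold_db + over / ratio)  # compress the overage
        else:
            out.append(level)                        # pass through unchanged
    return out

# Simulated harsh-band level (dB) per block of a guitar take:
blocks = [-30.0, -28.0, -10.0, -26.0, -8.0, -29.0]
print(deess(blocks))  # only the -10 and -8 dB blocks are turned down
```

A real de-esser does this continuously with an envelope follower rather than in discrete blocks, but the principle is the same: the cut adapts to the signal instead of being applied across the whole performance.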
Check out the video tutorial below, where I use a Waves C-1 compressor with frequency-specific sidechain settings to tame a harsh distorted electric guitar.
Eric Tarr is a musician, audio engineer, and producer based in Columbus, OH. He is a Ph.D. student at the Ohio State University in electrical and computer engineering studying digital signal processing.
Be sure to visit The Pro Audio Files for more great recording content. To comment or ask questions about this article go here.
A Note On The Signal Path Quality Of Measurement Systems
Does it really need to be "higher than high"?
Editor’s Note: Be sure to see the related article by Jamie entitled “Anatomy Of A System Measurement Rig: Probes, Preamps & Processors”.
While it may be counter-intuitive, for 99.52367 percent (roughly speaking) of common applications, the signal quality that our system measurement/analyzer rigs require to produce good measurements is, by pro audio standards, not really that high—particularly when compared to the signal quality demanded for studio recording or even simple listening.
What we require from the signal transmission path in our measurement rigs is really quite humble.
Frequency Response: Flat (+/-.25 dB) throughout the measurement bandwidth, a spec that should be easily achievable from 20 Hz to 20 kHz by all levels of audio gear.
The biggest offenders here are unexpected filters—HPF caused by wiring issues or faulty DC blocking and phantom isolation circuits, LPF from long transmission lines and poorly implemented anti-aliasing filters, and of course, the random unintentionally inserted EQ filter (a frustratingly common occurrence when grabbing measurement signal feeds off of extra board and processor outputs.)
Generally, this can be checked by simply examining the heavily averaged spectrum (RTA) of a known spectrally flat pink noise source. Any major FR deviations will show themselves quickly.
Channel Consistency (in FR and Latency): The requirement that the measurement signal channels have virtually the same response and latency (which is by far the norm) so that dual-channel system response measurements—made through the comparison of two signal channels—are not biased by any transfer function (FR) or latency differences between the signal channels of the measurement rig.
A quick way to check this is by applying the exact same signal to both input channels (or all channels in the case of a multi-channel rig) and then performing transfer function (FR) and delay (IR) measurements between channels. The FR should be flat in magnitude and phase (signifying no latency issues), and the delay between the channels should be zero.
It is possible that your delay measurement shows zero time offset and yet the phase response shows some deflection from flat in the VHF. This occurs when the two channels are offset by 1/2 sample due to uncorrected interleaving in the ADC (the ADC samples at 96 kHz and alternates between channels to produce two 48 kHz signals; the driver should correct for this 1/2-sample latency offset).
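Quick arithmetic shows why the deflection only appears up top: a half-sample offset is a pure time delay, so its phase shift grows linearly with frequency. A small Python check (assuming a 48 kHz rig):

```python
def half_sample_phase_deg(freq_hz, fs=48000):
    """Phase lag (degrees) produced by an uncorrected 1/2-sample
    inter-channel offset -- invisible in an integer delay readout."""
    delay_s = 0.5 / fs
    return 360.0 * freq_hz * delay_s

for f in (1000, 10000, 20000):
    print(f, round(half_sample_phase_deg(f), 1), "degrees")
```

At 1 kHz the lag is under 4 degrees and essentially invisible, while at 20 kHz it reaches 75 degrees – exactly the kind of VHF-only phase deflection described above.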
Signal-to-Noise and THD: Hold onto your hats – for most applications in standard measurement rigs, we only truly need a signal-to-noise ratio of >70 dB and THD performance of <1 percent! Of course we expect much higher performance in the electrical path of our measurement rigs, but for the basic requirements of the spectrum, FR and IR measurements we most commonly make, this level of performance will not significantly impact our data.
Think of it this way: it's not uncommon to have the SNR of the acoustic environment for our measurement be far below 70 dB (the old Spectrum Arena in Philadelphia comes to mind). The key here is to be aware of your noise floors (acoustic and electrical), measure well above them, and use all the tools at your disposal to help protect and improve your data quality (Amplitude and Coherence Thresholding, and liberal application of data averaging.)
Note that this is also why few measurement systems and situations see any appreciable difference between the use of 16-bit and 24-bit A/D conversion.
Channel Isolation: Verify that there is no significant cross-talk between your measurement channels (>70 dB isolation @ 1 kHz). There are a couple of easy ways to do this. The simplest is to input a sine wave on one channel, and then view the spectrum of the other.
Another way is to measure a piece of electronic gear with a bit of delay/latency. If there is significant (in terms of our measurement world) cross-talk, a look at the IR measurement will show an impulse at zero time as well as the correct delay through the device.
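The sine-wave version of the check can be simulated to show what you would look for. Here a 1 kHz tone is injected on channel A with a deliberate, made-up −80 dB leak onto channel B, and the isolation is read off the two spectra (a sketch, not a real rig):

```python
import numpy as np

FS = 48_000
N = FS                                       # one second of samples: 1 Hz bins
t = np.arange(N) / FS
tone = np.sin(2 * np.pi * 1_000 * t)         # 1 kHz test tone on channel A
leak = 10 ** (-80 / 20) * tone               # simulated -80 dB crosstalk on B

def level_at_1k_db(x):
    """Magnitude (dB re full-scale sine) of the 1 kHz FFT bin."""
    spectrum = np.abs(np.fft.rfft(x)) / (N / 2)
    return 20 * np.log10(spectrum[1_000])    # bin 1000 = 1 kHz with 1 Hz bins

isolation_db = level_at_1k_db(tone) - level_at_1k_db(leak)
print(f"Channel isolation: {isolation_db:.1f} dB")  # ~80 dB: passes the >70 dB bar
```

In practice you would read the same difference directly off your analyzer's spectrum display.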
The measurement signal path quality requirements detailed above would probably be horrifying to a recording engineer, audiophile nut, or even your standard pro-audio system engineer. But for the purposes of acquiring valid, stable and useful measurements with our rigs, that is where the bar is set. And this is good news to us, because our (99.52367 percent of us) measurement rigs don’t need to be comprised of high-end, esoteric, built-from-unobtainium gear.
Standard professional quality gear in functioning order will do most of us just fine. If you are involved in the (0.48633 percent), well, most professional quality gear will work for you too.
There are of course exceptions created by the demands of situations like measuring very low noise levels (low NC measurements)—and those are the applications where you need to spend the extra $$ on your signal chain.
Jamie Anderson is a founding member of Rational Acoustics, which provides training courses, hardware products/packages, and professional consulting for sound system measurement, analysis, and alignment. He has been teaching and working in the field of sound system engineering, measurement and alignment for almost 20 years. During his career, Jamie has worked as a technical support manager and SIM instructor for Meyer Sound Laboratories, as a system engineer on tour for A-1 Audio (kd Lang) and UltraSound (Dave Matthews Band).
Thursday, August 29, 2013
Just Because It’s Sound Doesn’t Mean It Has To Be Mixed
When you only have a hammer...
Mixing is like driving – everybody does it, it gets you from here to there, and it seems like it’s been part of the culture forever. Driving is so pervasive that it’s easy to forget there are other ways of getting around.
Most of the time, though, it seems that we get behind the wheel simply out of habit. It’s only on those rare occasions when we deliberately walk to the store to pick up a loaf of bread, for example, that we remember there are other ways, perhaps better ways, of doing the simple things that need to get done.
It’s been said that when the only tool you have is a hammer, everything looks like a nail. This is particularly apt in the case of mixing for staged events. The bias toward mixing that’s pervasive in the industry is traceable in part to the large overlap in the designs of traditional recording, broadcast and live consoles, as well as to schools teaching audio (i.e., “recording schools”) that continue to focus on the art and techniques of the mixdown.
Originating in broadcast and recording sessions involving multiple microphones, and refined in multitrack recording studios producing mono or stereo masters, mixing has become so entrenched in the industry and in the minds of many who dream of working in it that it’s almost as if no other way of working with sound is even remotely conceivable.
But of course it is. Mixing live sound is a relatively recent innovation. It wasn’t so long ago that virtually all pop music acts relied on instrument amplifiers and the acoustic output of the drum kit to fill a venue, the level of the vocals being brought into balance either via a relatively primitive house sound system or a couple of loudspeaker columns at the sides of the stage. Even the Beatles toured that way. It was only during their first tour of the U.S. and Canada in 1964 that the limitations of this setup became apparent, when their then state-of-the-art 100-watt Vox amplifiers were drowned out by thousands of screaming fans.
Sound reinforcement systems evolved rapidly over the ensuing decade, spurred by the needs of bands touring large venues and the staging of outdoor festivals, such as those at Monterey, the Isle of Wight, and Woodstock, where sound system designer Bill Hanley mixed sound for the three-day festival using several Shure 4 x 1 mic mixers.
Amplifiers and drums were miked along with vocals, and consoles soon grew larger as a result, offering more and more input channels. They evolved to include EQ, dynamics processing, and eventually an output matrix section providing the ability to route a variety of inputs to any or all of a series of outputs, a capability that had proven useful on Broadway and in London’s West End theatres.
One significant difference between rock concerts and live theatre, however, is that in concerts, the sound system is employed primarily to achieve high sound pressure levels throughout the venue, whereas in live theatre, the goal is to increase intelligibility. That line is often blurred in big, dynamic rock musicals, such as those of Andrew Lloyd Webber, whose sound designers evolved separate vocal and orchestra mixes, striving simultaneously for clarity in the voices and power in the orchestra.
When the deployment of delayed loudspeakers in the audience area became a staple of sound reinforcement systems in the late 1970s and early 1980s, it became possible to permit lower volume levels at the front of a large venue than would otherwise be required to provide sufficient reinforcement at the rear. For theatres, houses of worship, and corporate events, this is critical: if voices are presented at a level that is significantly louder than their natural level, any attempt to convey an illusion of realism goes out the window. As an added benefit, remote loudspeakers can be equalized to compensate for the inevitable acoustic deficiencies that lurk under balconies and in other less-than-favorable acoustic areas of a hall.
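The delay times involved are simple distance arithmetic: sound travels at roughly 343 m/s, so a remote loudspeaker is delayed by the extra path length from the main system, often plus a small offset (commonly on the order of 5–15 ms) so that the mains still arrive first and the precedence effect holds. A sketch with illustrative values:

```python
SPEED_OF_SOUND = 343.0  # m/s at roughly 20 degrees C

def fill_delay_ms(dist_main_m: float, dist_fill_m: float,
                  haas_offset_ms: float = 10.0) -> float:
    """Delay for a remote fill so its arrival trails the main system
    by a small precedence-effect offset."""
    path_difference_s = (dist_main_m - dist_fill_m) / SPEED_OF_SOUND
    return path_difference_s * 1000.0 + haas_offset_ms

# An under-balcony seat 34.3 m from the mains and 3.43 m from the fill:
print(f"{fill_delay_ms(34.3, 3.43):.1f} ms")  # 90 ms of path difference + 10 ms offset
```

The same arithmetic, applied zone by zone, is what "dialing in" the delays on a matrix output amounts to.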
“A matrix section on a console can help manage a complex PA system,” Craig Leerman wrote recently. “I’ll route the main console left and right outputs to the matrix and use the various matrix outputs to feed the different zones and delays. With EQ and delay available on the matrix outputs, it’s easy to tune and time align a loudspeaker zone to the rest of the PA.
“Once all of the various zones have been dialed in, any further overall level adjustments are simply handled via the left and right masters on the console. Using the matrix for this can give both stereo and mono feeds of the program, as well as a reduced stereo image feed.”
For recording or broadcast requirements with a limited channel count, a stereo or mono mix will usually fit the bill, but for the house, perhaps we can employ the matrix to go even further.
As a case in point, consider a talker at a lectern in a large meeting room. Conventional practice would dictate routing the talker’s microphone to two loudspeakers at the front of the room via the left and right masters, and then feeding the signal with appropriate delays to additional loudspeakers throughout the audience area. A mono mix with the lectern midway between the loudspeakers will allow people sitting on or near the center line of the room to localize the talker more or less correctly by creating a phantom center image, but for everyone else, the talker will be localized incorrectly toward the front-of-house loudspeaker nearest them.
In contrast to a left-right loudspeaker system, natural sound in space does not take two paths to each of our ears. Discounting early reflections, which are not perceived as discrete sound sources, direct sound naturally takes only a single path to each ear. A bird singing in a tree, a speaking voice, a car driving past – all of these sounds emanate from single sources. It is the localization of these single sources amid innumerable other individually localized sounds, each taking a single path to each of our two ears, that makes up the three-dimensional sound field in which we live. All the sounds we hear naturally, a complex series of pressure waves, are essentially “mixed” in the air acoustically with their individual localization cues intact.
Our binaural hearing mechanism employs inter-aural differences in the time-of-arrival and intensity of different sounds to localize them in three-dimensional space – left-right, front-back, up-down. This is something we’ve been doing automatically since birth, and it leaves no confusion about who is speaking or singing; the eyes easily follow the ears. By presenting us with direct sound from two points in space via two paths to each ear, however, conventional L-R sound reinforcement techniques subvert these differential inter-aural localization cues.
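Those inter-aural time differences are tiny but well characterized. The classic Woodworth spherical-head approximation gives ITD = (r/c)(θ + sin θ) for a distant source at azimuth θ; a sketch assuming a typical 8.75 cm head radius:

```python
import math

HEAD_RADIUS_M = 0.0875   # typical adult head radius (assumption)
SPEED_OF_SOUND = 343.0   # m/s

def itd_seconds(azimuth_deg: float) -> float:
    """Woodworth inter-aural time difference for a far source at azimuth_deg
    (0 = straight ahead, 90 = hard to one side)."""
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS_M / SPEED_OF_SOUND) * (theta + math.sin(theta))

print(f"{itd_seconds(90) * 1e6:.0f} microseconds")  # hard left/right: ~656 us
```

Cues well under a millisecond are enough for the ear-brain system to place a source, which is why a sound system that scrambles them is so easy to notice.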
On this basis, we could take an alternative approach in our meeting room and feed the talker’s mic signal to a single nearby loudspeaker, perhaps one built into the front of the lectern, thus permitting pinpoint localization of the source. A number of loudspeakers with fairly narrow horizontal dispersion, hung over the audience area and in line with the direct sound so that each covers a fairly small portion of the audience, will subtly reinforce the direct sound as long as each loudspeaker is individually delayed so that its output is indistinguishable from early reflections in the target seats.
Such a system can achieve up to 8 dB of gain throughout the audience without the delay loudspeakers being perceived as discrete sources of sound, thanks to the well-known Haas or precedence effect. A talker or singer with strong vocal projection may not even need a single “anchor” loudspeaker at the front at all.
As an added benefit to achieving intelligibility at a more natural level, the audience will tend to be unaware that there is a sound system in operation, an important step in reaching the elusive system design goal of transparency – people simply hear the talker clearly and intelligibly at a more or less normal level. This approach, which has been dubbed “source-oriented reinforcement,” precludes the sound system from acting as a barrier separating the performer from the audience, because it merely replicates what happens naturally, and does not disembody the voice through the removal of localization cues.
Traditional amplitude-based panning, which, as noted above, works only for those seated in the “sweet spot” along the center axis of the venue, is replaced in this approach by time-based localization, which has been shown to work for better than 90 percent of the audience, no matter where they are seated. Free from constraints related to phasing and comb-filtering that are imposed by a requirement for mono-compatibility or potential down-mixing – and that are largely irrelevant to live sound reinforcement – operators are empowered to manipulate delays to achieve pin-point localization of each performer for virtually every seat in the house.
Source-oriented reinforcement has been used successfully by a growing number of theatre sound designers, event producers and even DJs over the past 15 years or so, and this is where a large matrix comes into its own. Happily, many of today’s live sound boards are suitably equipped, with delay and EQ on the matrix outputs.
The situation becomes more complex when there is more than one talker, a wandering preacher, or a stage full of actors, but fortunately, such cases can be readily addressed as long as correct delays are established from each source zone to each and every loudspeaker on a one-to-one basis.
This requires more than a console level matrix with just output delays, or even assigning variable input delays to individual mics, since it necessitates a true delay-matrix allowing multiple independent time-alignments between each individual source zone and the distributed loudspeaker system.
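Conceptually, a true delay matrix is just a per-crosspoint table of level and delay indexed by source zone and output loudspeaker, rather than a single delay per output. A toy sketch (the zone names and distances are invented for illustration):

```python
SPEED_OF_SOUND = 343.0  # m/s

# Distance (m) from each stage zone to each distributed loudspeaker.
distances = {
    "stage_left":  {"spk_1": 6.0,  "spk_2": 12.0, "spk_3": 20.0},
    "stage_right": {"spk_1": 12.0, "spk_2": 6.0,  "spk_3": 18.0},
}

def build_delay_matrix(distances):
    """Per-crosspoint delays (ms): each source zone gets its own time
    alignment to each loudspeaker, unlike a single delay per console output."""
    return {
        zone: {spk: d / SPEED_OF_SOUND * 1000.0 for spk, d in spks.items()}
        for zone, spks in distances.items()
    }

matrix = build_delay_matrix(distances)
print(f'{matrix["stage_left"]["spk_3"]:.1f} ms')   # ~58.3 ms to speaker 3
print(f'{matrix["stage_right"]["spk_3"]:.1f} ms')  # same speaker, different zone
```

The point is that the same loudspeaker carries a different delay for each source zone, which is precisely what output-only delays cannot do.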
One such delay matrix that I have used successfully is the TiMax2 Soundhub, which offers control of both level and delay at each crosspoint in matrixes ranging from 16 x 16 up to 64 x 64 to define unique image definitions anywhere on the stage or field of play.
The Soundhub is easily added to a house system via analog, AES digital, and any of the various audio networks currently available, with the matrix typically being fed by input-channel direct outputs, or by a combination of console sends and/or output groups, as is the practice of the Royal Shakespeare Company, among others.
A familiar looking software interface allows for easy programming as well as real-time level control and 8-band parametric EQ on the outputs. A PanSpace graphical object-based pan programming screen allows the operator to drag input icons around a set of image definitions superimposed onto a jpg of the stage, a novel and intuitive way of localizing performers or manually panning sound effects.
For complex productions involving up to 24 performers, designers can add the TiMax Tracker, a radar-based performer-tracking system that interpolates softly between image definitions as performers move around the stage, thus affording a degree of automation that is otherwise unattainable.
Where very high sound pressure levels are not required, reinforcement of live events may best be achieved not by mixing voices and other sounds together, but by distributing them throughout the house with the location cues that maintain their separateness, which is, after all, a fundamental contributor to intelligibility, as anyone familiar with the “cocktail party” effect will attest. As veteran West End sound designer Gareth Fry says, “I’m quite sure that in the coming years, source-oriented reinforcement will be the most common way to do vocal reinforcement in drama.”
The TiMax PanSpace graphical object-based pan programming screen.
While mixing a large number of individual audio signals together into a few channels may be a very real requirement for radio, television, cinema, and other channel-restricted media such as consumer audio playback systems, this is certainly not the case for corporate events, houses of worship, theatre and similar staged entertainment.
It may sound like heresy, but just because it’s sound doesn’t mean it has to be mixed. We now have more than the proverbial hammer at our disposal. With the proliferation of matrix consoles, adequate DSP, and sound design devices such as the TiMax2 Soundhub, mixing is no longer the only way to work with live sound – let alone the best way for every occasion.
Sound designer Alan Hardiman is president of Associated Buzz Creative, providing design services and technology for exhibits, events, and installations.
Church Sound: Working Within Volume Limits
Dealing with the root cause of the problem
The topic for this post comes from a reader who wants to know what he should do when faced with the requirement to mix no louder than 85 dB SPL peaks. That’s right, 85 dB peaks.
Why I Hate Volume Limits
I used to own a video production company, and we were often hired to produce video of a predetermined length. I always tried to talk the client out of imposing a length limit on a project, saying, “The video needs to be as long as it needs to be, then it’s done.” I feel the same way about volume.
Ideally, the worship leader, front of house engineer and church leadership are all on the same page when it comes to volume. In that ideal world, the music will be mixed as loud as it needs to be to convey the power and energy (or lack thereof) required. The band, the song and the crowd will tell you how loud it needs to be. Go over that and it’s too loud; go under and it’s too soft.
Imposing an arbitrary limit on volume seems to me a bit like telling the pastor his sermon needs to be 3,000 words, no more, no less. But we live in a less-than-ideal world, and we have to live within arbitrarily defined volume limits. So what’s a sound guy to do?
The first thing I would do when faced with a limit like that is find out where the number is coming from. Is it based on an inaccurate reading of OSHA hearing protection guidelines? If so, educate yourself and have a rational conversation with your pastor. Help him to understand that 8 hours of exposure to 85 dBA SPL in a machine shop 5 days a week is a whole different animal than 85 dBA SPL peaks for 15 minutes of worship music.
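The arithmetic backs this up. OSHA's permissible exposure uses a 90 dBA criterion with a 5 dB exchange rate, so the allowed time halves for every 5 dB above 90, and dose is actual time over allowed time. A sketch of that calculation:

```python
def osha_allowed_hours(level_dba: float) -> float:
    """OSHA permissible exposure time: 8 h at 90 dBA, 5 dB exchange rate."""
    return 8.0 / (2.0 ** ((level_dba - 90.0) / 5.0))

def dose_percent(level_dba: float, exposure_hours: float) -> float:
    """Noise dose as a percentage of the OSHA-permitted exposure."""
    return 100.0 * exposure_hours / osha_allowed_hours(level_dba)

# A machinist's full shift at 85 dBA vs. 15 minutes of worship music at 85 dBA:
print(f"8 h shift:  {dose_percent(85, 8.0):.0f}% of the OSHA dose")   # 50%
print(f"15 min set: {dose_percent(85, 0.25):.1f}% of the OSHA dose")  # ~1.6%
```

(NIOSH recommends a stricter 85 dBA criterion with a 3 dB exchange rate, but even under that standard, 15 minutes at 85 dBA is a small fraction of a daily dose.)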
If that’s not the case, dig a little deeper and see where the number came from. Did someone wander by the booth one day and see 85 on the meter and think, “That sounds about right?” Are people complaining that it’s “too loud?” Is it really too loud or are there spectral balance issues? Or perhaps the setter of the number doesn’t like electric guitar. Or drums.
Acoustic drums will generate 85 dB peaks with the PA turned off, so you need to figure out where this is coming from.
System tuning and spectral balance are huge issues that can be addressed to give you a lot more leeway in mixing at an appropriate level. An 85 dBA mix can still be excruciating, while 100 dBA can be enjoyable if done well.
Live Within Your Means
Or in this case, your leadership. In my current church, I have a different definition of “too loud” than my senior pastor does. Since his is lower, I have to adapt my mixing style to suit him—he’s the boss after all. The challenge for me is that his definition changes week to week.
I’ve been told it’s “awesome” one week at 92-94 while it could be “too loud” at 90-91 next week. So I’ve spent a lot of time working on getting my mixes right, the balance correct and the system tuned to his liking.
I’ve also adopted a different way of metering my loudness. I use a software program called LAMA, which can display both a standard SPL readout (I use A-weighting, Slow) and an average (I’ve chosen 10 seconds). LAMA allows me to set colors at various levels, so my average number turns yellow at 85 dBA SPL and red at 91, which gives me a “corner of the eye” indication as to where I am.
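That kind of "corner of the eye" metering is easy to picture in code: keep the last ten one-second SPL readings, energy-average them, and map the result to a color. A rough sketch of the idea (LAMA's internals are not public; this only illustrates the behavior described):

```python
import math
from collections import deque

YELLOW_AT, RED_AT = 85.0, 91.0  # the thresholds chosen above (dBA)

class RollingLeq:
    """Energy-average the most recent one-second SPL readings."""
    def __init__(self, seconds: int = 10):
        self.readings = deque(maxlen=seconds)

    def add(self, spl_dba: float) -> float:
        self.readings.append(spl_dba)
        mean_power = sum(10 ** (l / 10) for l in self.readings) / len(self.readings)
        return 10 * math.log10(mean_power)

def color(avg_dba: float) -> str:
    if avg_dba >= RED_AT:
        return "red"
    if avg_dba >= YELLOW_AT:
        return "yellow"
    return "green"

meter = RollingLeq()
for spl in [83, 84, 88, 92, 90, 87, 85, 84, 86, 89]:
    avg = meter.add(spl)
print(f"{avg:.1f} dBA -> {color(avg)}")
```

Note that the energy average sits noticeably above the arithmetic mean of the readings, because the louder moments dominate the sum.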
I keep an eye on the standard readout as well, and occasionally my peaks run into the low to mid 90s, but for the last month and a half, if I keep my 10-second average below 90, my pastor is happy.
Personally, I’d be happier if it was louder. But I’m not paid to be happy; I’m paid to make him happy. I often say, “If you can’t abide by the limitations your leadership puts on you, then you need to leave.” Same applies here.
Again, I would talk to my pastor and find out where this is coming from. Ask him if it would be OK to try mixing to an 85 dB 10-second average and see how that feels. Address the spectral and mix balance issues; you might be surprised.
The reader asked if he should compress the inputs, and bus compress the mix to give him the power he wants, while staying under the “legal limit.” To me, that’s a little like putting your phone on speaker and holding it in front of you while you drive.
Yes, you could compress the inputs a few dB, then bus compress a few more, then compress the master a little further, and compress it again in the DSP. That would certainly raise your average SPL while keeping your peaks below 85.
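Stacking compressors multiplies their ratios, which is why the result clamps down so hard. If every stage is operating above its threshold, two 2:1 stages behave like one 4:1 stage, and four of them like 16:1. A sketch of that arithmetic (idealized static ratios with aligned thresholds, ignoring time constants):

```python
from functools import reduce

def composite_ratio(ratios):
    """Compressors in series, all above threshold: the ratios multiply."""
    return reduce(lambda a, b: a * b, ratios, 1.0)

def output_excursion_db(input_over_threshold_db: float, ratios) -> float:
    """How far above threshold the signal emerges after the whole chain."""
    return input_over_threshold_db / composite_ratio(ratios)

chain = [2.0, 2.0, 2.0, 2.0]  # input, bus, master, DSP: each a "gentle" 2:1
print(composite_ratio(chain))              # 16:1 overall
print(output_excursion_db(12.0, chain))    # a 12 dB peak squeezed to 0.75 dB
```

Four "gentle" stages add up to near-limiting, with the loss of dynamics that implies.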
However, it’s very likely that this technique will result in the perception that it’s even louder, which may cause your limit to be lowered further. You could also hard-limit your DSP so you can’t exceed 85; but again, if you suck all the dynamics out of the music, all the life goes with it, and it will also sound louder. This would be self-defeating on two fronts.
At the end of the day, I think you’re better off dealing with the root cause of the problem rather than trying to figure out how to stay below an arbitrary number.
Mike Sessler is the Technical Director at Coast Hills Community Church in Aliso Viejo, CA. He has been involved in live production for over 20 years and is the author of the blog, Church Tech Arts . He also hosts a weekly podcast called Church Tech Weekly on the TechArtsNetwork.
In The Studio: An Uncommon Cure For A Muddy Mix (Includes Video)
Addressing an element that can detract from clarity
In this video, Joe Gilder shares an uncommon cure for a muddy mix. Of course, there are numerous cures for the problem—a lot of it depends upon the specific cause of the problem. There are so many variables.
But here’s one you might not be thinking of, and it can lead you on a “wild goose chase” if it’s not addressed early: reverb. As helpful as reverb can be in enhancing a mix in any number of ways—adding fullness and depth and so on—it can also cause some problems, detracting from the clarity of the track.
Joe provides a discussion of the problem and then some solutions to address it.
Joe Gilder is a Nashville-based engineer, musician, and producer who also provides training and advice at the Home Studio Corner. Note that Joe also offers highly effective training courses, including Understanding Compression and Understanding EQ.
Tuesday, August 27, 2013
Do You Want To Get Paid For All Of This?
Do your homework before agreeing to waste your life
Yes, the goal of the business. Getting paid.
I was told once that there are three parts of the gig: 1) Getting the gig. 2) Working the gig. 3) Collecting the check.
It’s like three legs on a stool. All three need to be there or you have a problem. If you are a volunteer at a church, this is irrelevant, at least until you transition into the paid side. Might be good to know all of this in advance.
I used to have a venue that I did a lot of shows for. They got me for a good price and they gave me shows when the place was rented out. Good relationship.
A small time beauty pageant rented the building. They were given my number and we worked out the details. Never had any problem working like that there.
I was there early. I went above and beyond. I assisted their video crew to make sure everything went well. I handled the lighting for them. I helped carry their gear out when we packed up. Normal service level for every client.
When I went to collect the check, it was written out for half of what we agreed on. I tried to give them the benefit of the doubt. We never took a deposit; I needed the full amount.
“Where’s your contract?”
That’s what she said.
She actually took off, leaving her husband to run interference. That almost ended in a fistfight. I was furious. Half. Seriously. Half.
Two things happened after that. 1) That venue never let her work a show there again. 2) I completely changed how I work.
Unless it was a client I had good experiences with, everyone paid a deposit to hire me. Whether it was installing a system or running a show. Anywhere from 10-50 percent of the estimated cost. They had to put some skin in the game or I wasn’t blocking my schedule for them.
I created contracts and made sure there was a paper trail to each gig. Even the guys who used me regularly had to have a contract or paper trail.
I played dumb a lot. The regulars would call and try to get a verbal agreement. I would tell them to email me the details and I would confirm as soon as I got back. I never gave them a yes or no on the phone.
I told them how I was likely to forget the details. I was working on another project and couldn’t make notes or work it out with them right then: “Email me the dates and details. I will call you when I get time to go over them.”
Sometimes, they got frustrated. Eventually, they knew the routine. Whenever there was a conflict, after that, I just pulled out the contract or email and reminded them of what we agreed to do. Saved me a lot of headaches and time.
Think about it. Block a day or two out of your life. Don’t plan anything else, not even sleep. Plan to work yourself into a sweat and take orders from random people. Plan to spend your own money for lunch and gas to be there. Now. Plan to go home empty handed. No check. Wasted day. Hard work. Aggravated. Time taken away from your family. Cost you money to be there. It happens. Unless you plan ahead.
Don’t worry about them not calling you again. Don’t worry about losing that show. Do you want days like that? The legitimate clients understand that people need to get paid. The hustlers and hacks are the ones trying to get away with that crap. You don’t need their work anyway.
One of the crews I worked for taught me how to handle that. I saw this more than once.
We were hired to provide stage, sound, lights and techs for a local concert. They brought in some good bands and a good headliner. They rented a local stadium for about 3,000 people and expected to fill it up.
Our contract required a 50 percent deposit to hire us with the balance due as soon as the rig was up and operational. No exceptions. Not after the show or even after soundcheck. As soon as lights came on and we could check microphones we were to be handed the second payment.
The promoter had paid the deposit, but didn’t have the rest once we were live. He actually expected to raise it from walk-up ticket sales. He hadn’t sold enough to cover it by the time we finished setup.
The owner calls me over, tells me the story and has me pull power.
As the bands are setting up and the audience is listening to house music, I pull the power cables off the main racks. Everyone starts to freak out. We are an hour from the first scheduled soundcheck. Five bands waiting to set up. No sound or lights.
The owner apologizes to everyone, but clearly informs the people in charge what is about to happen.
“All of these guys on my crew are being paid to be here. Those trucks have burned a lot of fuel to haul this stuff here. That gear could be on another paid show right now, but it’s here because you signed a contract and agreed to pay us before soundcheck. I’m not spending any more time or money on a show that isn’t going to pay for it. Your deposit breaks us even as of right now. If the balance isn’t paid by soundcheck, we are loading it back in the trucks and going home.”
That promoter got busy. I don’t know what bank he robbed or if he raided grandma’s mattress, but that money was there by soundcheck.
Make sure the gig is worth your time. In the early days, you end up working free or cheap, just to get established. Nobody walks into six figures as a rookie tech. Get past that. You can find out what reasonable day rates are for the type of work you are doing. You can find out what is a fair price to run a whole show.
Do your homework before agreeing to waste your life. Do the math. If you will make more money working at Walmart, just do that. That nice check may sound great until you break it down over 14 hours, gas money and lunch. Not to even mention the steady job you could have instead of this hit and miss show money.
So. If you like working for free, keep that volunteer status. If you want a real career, learn the business side. If you aren’t willing to negotiate and discuss money, you will always be a volunteer. Whether you planned to or not.
If you need extra backup for the contract and collection side, check out the other way I make money on my blog. I work with a company that makes sure I have all the legal counsel I need. No more bad decisions or stupid advice for me. Watch the introduction video and see for yourself.
M. Erik Matlock is a 20-plus-year veteran of pro audio, working in live sound, install, and studios over the course of his career, as well as owning Soundmind Production Services. Erik provides advice for younger folks working (or aspiring to work) in professional audio at The Art Of The Soundcheck—Random Stories and Wisdom from an Old Soundguy. Check it out here.
Keeping The Boss Happy: The Monitor Scene For Bruce Springsteen & The E-Street Band
Talking with the dynamic duo of monitor engineering
Troy Milner and Monty Carlo have worked seamlessly side-by-side for more than a decade as monitor engineers, riding the faders for Bruce Springsteen and his 17-piece E-Street Band at stage left and stage right respectively – and they wouldn’t have it any other way. I recently caught up with the dynamic duo backstage prior to a show at London’s Wembley Stadium.
Paul Watson: So, four hands are better than two, then?
Troy Milner: I guess so! [laughs] We are completely independent of each other though; we each get our own splits, and we each have our own set of stage racks and Waves servers.
Monty Carlo: With 18 people on stage, it’s pretty involved, and with Bruce, you never know – he does a set list but he doesn’t follow it, ever, so we’re always on our toes!
TM: They’ve actually always had two monitor engineers. Monty’s been here a lot longer; I joined in 2000. It’s the way they’ve liked it for 20-plus years, but we can do a lot more now thanks to advancements in the technology.
How does your partnership work, exactly?
TM: Well, I take care of the drummer, the violin player, the guitarist, the bassist, and the keyboard player who is right here next to me; then I deal with various wedges that are located around the stage for some solos for Bruce.
MC: I handle pretty much everybody else, really. We each have a lot going on and there are a lot of cues for each song; and again, as Bruce doesn’t follow the set list, well…
I can’t see any wedges on stage – where are you hiding them?
MC: [smiles] There are a number of proprietary Solotech wedges embedded in the stage, a mixture of double 12s, single 12s and single 15s, and we’re using JBL VT4888s for side fills. The rest of the band is on in-ears, but Bruce is completely old school.
Troy Milner (left) and Monty Carlo at one of the DiGiCo SD7s in Springsteen monitorworld prior to a show at Wembley Stadium in London.
What in-ear systems are you running, and do you have any RF issues?
TM: We use Shure kit, PSM1000 IEM systems and the Axient wireless mics, which we like a lot. The boxes underneath are Albatross headphone amps that I use for the drummer – he’s hard wired. When he sits down he plugs right into his seat on his left side and never moves, so he doesn’t need to be wireless.
MC: We have 70 channels of RF between backline and myself and Troy, and although here [in the UK] it’s not too bad, when we’re in Italy… Well, it’s notorious for RF issues. Thankfully, the kit we’re using makes life a whole lot easier than it could be!
What does Bruce like to hear in his wedges?
MC: He’s got a little bit of everything – it’s so tough as each musician has their own wants and needs, but with Bruce, I just kind of fill it up around him between the side fills and the floor wedges so that he hears everything. I have everything panned to make it feel more “live” – the piano is coming from his left and the organ from the right, and the same with the horns, just to kind of open things up, and so he knows where it’s all coming from.
And after all these years, Bruce is still on a classic Shure SM58 capsule.
MC: Absolutely – it still does the job great. We’ve tried a few different things, but it’s still the best sounding and most reliable solution. Also, when it rains and Bruce is out running through the crowd, we don’t have to worry about it falling apart. You can build a house with it.
You both mix on DiGiCo SD7s. Is it essential that you’re on the same console?
Proprietary Solotech wedges in the stage keep it really clean.
TM: For our setup, absolutely. We have snapshots for all of the songs, and I’m up to 205. There are some songs that I know Bruce won’t do, but every one is programmed for me on the snapshots. I couldn’t do that without the SD7.
MC: On the whole, the SD7 has been really flexible. It’s also great for moving stuff around. Troy double-assigns the drums so the drummer has his own set of drum inputs and the rest of the band has their own set too, so in terms of tailoring things quickly, everything’s just so easy to do on this desk.
TM: That’s right, the drummer is a little more demanding, so I kind of mix him old school; the control groups are pretty static for him. I’ll hammer him with certain parts that he just wants to hear. For example, he might want two bars of the opening riff from the guitar player, then he wants to get rid of it, so I have to be very hands-on. Monty’s obviously got different stuff that he handles, too.
So on one hand you’re mixing dynamically, yet you’re also relying on hundreds of snapshots… It must get scary if Bruce throws you a curveball.
TM: Oh it can get pretty crazy, that’s for sure! Although the SD7 is pretty much instant access with regard to recalling snapshots, because Bruce has so many songs, it does slow the process down a little: for example, 27 of his songs start with the letter “s,” so it can still take me a second to locate them even with the shortcut buttons.
In fact, I recently asked one of the software guys at DiGiCo if he could let me search snapshots by the first two or three letters rather than just one – that would be perfect – and he was like, “You guys are worse than Broadway!” [laughs]
How advantageous is it having banks of 12 faders on the console rather than eight?
TM: Oh, very, and for drums especially. Also, having 12 in the center for the control groups is a real bonus – I have a bank for mixing control groups and another bank for mute groups and that works really, really well.
Additionally, the console’s assignable rotaries are perfect for me on my drum bank. I’m always writing thresholds on the gates for the drums because he is so dynamic, and having them assigned means I always know when I’m in the drum bank – it’s just a visual thing. These functions save me huge amounts of time.
What are your mix counts?
TM: With all the reverbs, tech mixes and crew mixes, we’re at 60 outputs; and there’s two of us, remember. I scratch my head and think “how did I get to 60?” But I have a lot of sends that I use and the keyboard player has his own mixer, so instead of doing direct outs I just send 16 stem mixes to his mixer, then he sends his mix back to me so I can broadcast it wirelessly for him.
Is there much digital processing in Bruce’s vocal chain, or is this also old school?
TM: I’m real simple on it, because Monty is doing Bruce’s monitor mix. I take care of the vocal for everybody else, so I can tailor it a little more and control it as he is so dynamic and all over the place, which is awesome.
But again, it means you’re having to ride the fader?
Springsteen’s mic is a blend of cutting-edge Shure Axient wireless technology and an old school SM58 capsule.
TM: That’s right. I’m feeding Bruce’s vocals to the six people I take care of. I run the multiband compressor, which is just great, and then use a little bit of EQ before running it through the Waves Blue 76 just as an overall “grab.”
How are you finding the Waves SoundGrid?
TM: It’s been great, but obviously the stuff in the desk has been great too. We do have some guitar amp sims and distortions though – Bruce plays the harp through his vocal mic and it sounds like a distorted miked amp, which we’re using a Waves guitar simulator for.
You’ve got two DiGiCo SD Racks each?
TM: Yes. In my world, I’m running old school copper snakes from Monty, and Monty is basically “control central.”
MC: I have all of the splits of everything and we split copper to Troy, copper to me, and then copper to front of house. We’ve talked about sharing racks between us; we haven’t done it yet, but maybe down the road it’s something we can do.
Communication between the two of you during a show must be crucial – how vocal are you?
TM: It depends, but we do have a great talkback system. I basically have a stereo mix of all the talkbacks that I send to a matrix, then I send whatever I’m cueing to that same matrix, which feeds my wireless cue system – so no matter what I’m listening to, the talkbacks are always there too.
MC: Exactly, so if there’s a problem on my side I can say “hey we’re gonna switch this.” We always make sure we have direct communication, as it’s another tool to keep us ahead of the band. Some shows we’re more vocal than others, but we’re always on top of things.
Sounds like you’ve got the perfect setup going on…
TM: Unless we’re in Italy…
MC: [laughs] Yep, we’re all good until we go to Italy!
Paul Watson is the editor for Europe for Live Sound International and ProSoundWeb.
The Old Soundman: Fending Off Sadistic Sidemen
Did you ever imagine OSM would take pity on an innocent young engineer who is being unfairly tormented on a gig, by a mean old wanker?
Dear Old Soundman,
One of the bands I mix (that does corporate shows) has an older bass player who does nothing but complain about the sound every night.
“It’s too high-end-y, the sax should sound fuller, forget the singers!
Turn their damn monitors down now! You suck, get your ears fixed, etc.”
Yes, but does he say anything about turkey basters?
Now I should note that he is not the leader of the band, nor does he sign my check. The rest of the band is happy with what I do.
Could you … perhaps … talk to the Leader Of The Band, maybe, or whoever else signs your check?
Ask them, hey, why doesn’t Fecal Matter over here have to speak to me politely?
Do the Elephant Man routine, the line about “I am not a monster! I am a … Man!” That ought to go over well.
How do I deal with this guy?
Last year I saw a hilarious interchange between a dark and stormy FOH/Tour Manager and his incredible stage crew. The techs were really good at what they did, and were relaxed and friendly.
Their boss called them a name and the drum tech replied in a thick New York accent, “So, what, you don’t have to speak to me like a person?” It irritated the guy no end when they laughed at him as he vowed that they would never work for him again.
I wasn’t prepared for the rudeness that this individual then sprayed at me, whose check he definitely didn’t sign. I have personally been on the peace tip for about seven years now, and have been rewarded by Society for doing so, but I wanted to put this guy into a wall, really badly.
See, I came from an environment where you slugged someone who bothered you. Eventually, I had to temper that inclination, once I had a handgun around the house.
To be serious for two seconds here, you also need to look at how you appear, how you think of yourself, and whether you are dissatisfied with the gig.
Do you look this bonehead in the eye, or do you slink away and just take it? Are you truly confident of your competence?
I was getting jerked around once by a big lug. He walked up to a keyboard I was soundchecking under a tight schedule, and reached over and changed the patch I was using.
I blew up, he walked away laughing, and when I had the stage done, I found him outside the venue.
I faced him and said “What is it going to take to get you to act right with me?” I had had it. He was genuinely perplexed, and said he hadn’t known he was bothering me that much.
So – are you giving this individual the excuse of saying they weren’t aware they were bugging you? Are you a frustrated musician who looks up to these lowlifes too much?
If you deserve better, are you willing to go elsewhere?
These are hard questions to answer. Contrary to popular opinion, every once in a while I do put the joking aside.
But don’t think I’m going to make a habit of it!
The Old Soundman
There’s simply no denying the love from The Old Soundman. Check out more from OSM here.