
Tuesday, August 19, 2014

Church Sound: Attaining The “Perfect” Volume

This article is provided by ChurchTechArts.

 
Last year, a worship leader named Jordan Richmond wrote a post on Thom Rainer’s blog. The post is entitled, “How Loud Should Our Church Music Be?” and it incited no small number of comments. In fact, if you have some time, go read the comments; some are quite amusing.

I think the article raises an important point and is a good place to start the discussion. However, I do take issue with a few things he said.

His premise is that as a worship leader, he’s not unfamiliar with volume complaints. But how do you solve that? While on vacation at Disney World, he pulled out his trusty iPhone and measured the SPL at the shows he saw there. He came up with 75 dB [sic] as “the answer” to the correct volume.

I put [sic] at the end of 75 dB because he didn’t specify A- or C-weighting, and that makes a big difference. But that’s not the only thing.

Uncalibrated iPhones with free SPL apps cause more harm than good. Now that every member of your congregation has an SPL meter in their pocket, the number of people telling us they have proof it’s too loud is going way up. The problem is, an uncalibrated iPhone or Android phone is not at all accurate.

When I attempted a calibration on mine (using an actual SPL calibrator), I found my (paid and “professional”) SPL meter was off by −10 dB. That translates to about double the perceived volume. Even after I calibrated it, it’s not truly calibrated, it’s just close.

So before we start talking absolute numbers, let’s be sure we are using an actual and calibrated SPL meter. Even those are not super-helpful in determining the appropriate volume, but we’ll get to that shortly.

I respectfully disagree that there is one perfect volume for all venues. Shoot, I won’t even agree that there is one perfect volume for one venue. Our church’s average and peak volumes vary by a good 5-8 dB depending on the song set, arrangements and band makeup.

Some churches demand loud, energetic worship. Others prefer quieter, more contemplative music. This is OK!! I get really frustrated when I hear people talking about setting a universal standard for music levels.

If you like quieter music, find a church that does quieter music. If you like it loud, go to a loud church. But don’t go to a church known for loud music and complain it’s too loud! Likewise, if you’re a worship leader or FOH person, don’t go to a quiet church and try to recreate a Hillsong concert. That’s just—dare I say it?—stupid.

I would also disagree that Disney is the standard. Sure, Disney gets a lot of things right. I like going there when I can. I think we can learn a lot from how they do things; they create great experiences for their guests. But to say that the volume of their shows is the ideal volume is a bit of a stretch. First of all, I suspect the actual level was higher than 75 dB. Second, it’s totally different material.

Realistically, I think you could find just as many people who think Disney shows are too quiet as those who think they’re too loud. And you can probably say the same for many churches. So I guess this is another way of restating my previous point. Finding the appropriate volume for a particular church is a tricky thing, and it’s a very individual thing.

I suspect many a church member, pastor and board member read that blog post and ran into the sound booth yelling, “Here it is! Proof that it’s too loud. Never more than 75 dB [sic] again!”

This does about as much good at solving the volume problem as painting a green lobby blue does at placating those who don’t like anything but yellow.

The article was not all bad, however. Aside from those three points, I think he’s on target; on balance, I agree with more than I disagree with.

I completely agree that spectral balance is key. He made the observation (from his free RTA Lite app) that the overall sound was balanced and smooth.

I would argue that spectral balance is more important than actual SPL levels in determining what is acceptable to a congregation and what is not.

For example, even if we agreed that 75 dB SPL (A- or C-weighted, it doesn’t matter for this illustration) is the “perfect” volume, I could drive everyone out of the room by playing a 1 kHz square wave at 75 dBA SPL. I could also put together a mix that sounds so offensive at 75 dBA SPL that people would still complain.

On the other hand, I’ve heard mixes that averaged well over 100 dBA SPL, and not only did people not complain, they had their hands up and wanted more.

The key is getting the spectral balance right. Too many young engineers (and to be fair, some old ones) put way too much emphasis either on the extreme low end, or the top end. I’ve been to a couple of conferences lately where this was certainly true. At both, the low end was so over-emphasized that you could almost see those 8-foot long waves gobbling up everything else. It sounded terrible—the volume didn’t matter at all.

But in a well-crafted mix, people want more. At least up to a point. But we’ll get back to that.

As with most things, content is king. He didn’t emphasize this as much in the article, but in subsequent comments he pointed out that Disney has professional talent on stage and in the booth. So the quality of the content in the mix was very high, and the mix itself was well done.

It was also being played through a well-tuned Meyer Sound PA. And he intimated that there was an appropriate amount of dynamic range to the program. Because the average level was comfortable, loud portions of the show felt good.

If most of the congregation thinks it’s too loud, it’s too loud. I’ll probably take some flak for this one, but I believe it’s true. I can’t figure out why churches with a mostly older demographic hire young worship leaders “to attract the younger people” then get upset when the worship gets loud.

On the other hand, I can’t figure out why worship leaders and sound people go into older churches and try to “turn the tide,” crank it up to 11 and then wonder why people get mad and leave.

If you are standing in the sound booth looking out over the congregation and many of them have their hands over their ears, something is wrong. You need to figure out what it is. It may be a mix issue, the drums on stage may be too loud, or the music might be entirely wrong for the congregation. Or it may just be too loud.

Either way, you’re not doing yourself or anyone else any favors by quoting OSHA guidelines or Bible verses about loud worship.

Most of the time, the absolute volume is not the issue, but when something is wrong, we need to investigate it and fix it. We’ll tackle what I think the most common issues are next time.

Mike Sessler now works with Visioneering, where he helps churches improve their AVL systems, and encourages and trains the technical artists that run them. He has been involved in live production for over 25 years and is the author of the blog Church Tech Arts.


In The Studio: Gain Structuring With Plug-Ins

Courtesy of Universal Audio.

 
For those of us who toiled over faders back when the earth was still cooling, the concept of gain structure was fairly easy to grasp.

Each separate box was a link in the audio chain, visibly connected via patch cables, and analog distortion was easy to hear and identify.

In today’s all-digital, all-in-the-box world, it’s not that simple. Signal paths can be unconventional and convoluted, and digital distortion can be subtle and sneaky.

But while the dawn of the DAW has fundamentally changed the way we make records, proper gain structure is no less critical to good recording. The user-friendly, forgiving design of computer audio programs can make it all too easy to overlook a poorly-thought-out signal chain, and the results can sneak up and bite you.

Everything To Gain
From its initial capture to its place in the final mix, a signal in a typical recording chain travels through a multitude of stages or devices. Each of these devices, whether “real” hardware or software plug-ins, needs to receive an optimal signal level at its input; not enough level can add noise, while too much can cause clipping and distortion.

Keeping an eye on the input and output levels of every plug-in in the chain can ensure that each device’s output feeds a clean signal to the next device’s input.

Except when a plug-in is used as a channel insert, its input level is controlled via the mixer’s effects send as well as the plug-in’s own input level control. Matching the mixer’s send level with the plug-in’s input level is key to proper gain structure.

Sending too low a level to the effects buss and then turning up the plug-in’s input level to compensate will result in a noisier signal. Conversely, sending too hot an effects send level and then turning down the plug-in’s input level will result in a distorted signal.

Generally speaking, unity gain is the goal. With some exceptions (most notably compressors and other dynamics processors), a good rule of thumb when building your gain structure is to try and achieve the same peak level whether the plug-in is inserted into the signal chain or not. If your signal level is noticeably higher or lower when you bypass the device, it’s a good idea to examine your gain structure.
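
To make that bypass comparison concrete, here’s a minimal Python sketch; the `plugin` function is a hypothetical stand-in for any insert in the chain (no particular DAW scripting API is implied, and NumPy is assumed):

```python
import numpy as np

def peak_dbfs(signal):
    """Peak level in dB relative to digital full scale (1.0)."""
    peak = np.max(np.abs(signal))
    return 20 * np.log10(peak) if peak > 0 else float("-inf")

def plugin(signal, gain_db=1.5):
    """Hypothetical stand-in for any insert: a gain stage
    that is slightly hotter than unity."""
    return signal * 10 ** (gain_db / 20)

fs = 48000
t = np.arange(fs) / fs
dry = 0.25 * np.sin(2 * np.pi * 1000 * t)   # 1 kHz sine at -12 dBFS

delta = peak_dbfs(plugin(dry)) - peak_dbfs(dry)
print(f"inserted vs. bypassed: {delta:+.2f} dB")
# Much more than a fraction of a dB either way suggests the
# gain structure around this device deserves a second look.
```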

If A Signal Clips In The Forest
Clipping can be particularly problematic in the digital domain. Raise the input signal to an analog device, and distortion will gradually rise until it clips. Digital circuitry has no such safety zone: a single dB too high will take your signal from clean to clipped.

Unlike analog clipping, this digital clipping can be difficult to hear, particularly when it’s just one element of a dynamic mix. If the clipping goes undetected, the digital information for that sound is permanently corrupted, even if the levels are brought back down later in the mix.

The distortion from digital clipping can have a subtle but undesirable effect on the sonic quality of your track, usually in the form of barely perceptible levels of a brittle, harsh digital sheen that can fatigue your listeners.

Even a relatively small bit of gain from certain plug-ins—for example, a high-pass filter—can boost peaks and transients pretty significantly. Don’t depend on your meters to alert you to these, either. In most DAW setups, plug-in inserts occur pre-fader, so even if you keep the levels of your channel strips below clipping, distortion within a given plug-in may not show up if the level was brought back down further along the signal chain.
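
Since the meters may not show it, an offline scan of a bounced track can act as a backstop for your ears. This is only a rough heuristic sketch, not any DAW’s actual detector: it flags runs of consecutive samples parked at or near full scale, the usual fingerprint of hard digital clipping.

```python
import numpy as np

def find_clipping(signal, threshold=0.999, min_run=3):
    """Flag runs of consecutive samples at or near full scale,
    a common heuristic for hard digital clipping."""
    hot = np.abs(signal) >= threshold
    runs, start = [], None
    for i, flag in enumerate(hot):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            if i - start >= min_run:
                runs.append((start, i))
            start = None
    if start is not None and len(hot) - start >= min_run:
        runs.append((start, len(hot)))
    return runs

# Example: a sine pushed 6 dB past full scale and truncated.
fs = 48000
t = np.arange(fs // 100) / fs
clipped = np.clip(2.0 * np.sin(2 * np.pi * 1000 * t), -1.0, 1.0)
print(f"suspect clipped regions: {len(find_clipping(clipped))}")
```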

Once again, your ears are your most important tools. Solo each device and listen.

What To Watch For
Needless to say, different types of signal processors will affect overall gain structure differently, and some are easier than others to work with.

With a reverb like the EMT 250 Classic Electronic Reverberator, distortion is typically not that hard to hear. But the “soft” nature of some reverb algorithms can mask other artifacts, including noise resulting from too low an effect send level.

UAD EMT 250

Multiband EQ can be particularly nefarious, especially when it comes to peaks and transients. With modern multiband EQ plug-ins like the Neve 88RS Channel Strip, it’s not hard to inadvertently overlap a range of frequencies in two different bands, and the cumulative boost can result in clipping.

Compression and dynamics processing present a different set of challenges, and an in-depth discussion of how they affect the signal chain is a subject for an article of its own.

Briefly, though, it’s important to pay attention to a compressor’s attack and gain settings, as these can have a major impact on the gain structure of the signal coming out of your compressor.

Get To Know Your Plug-Ins
Just as every guitar and every vintage amp has its own sonic character, so too does every signal processor. This is no less true for software plug-ins than it is for hardware.

Different devices have different ways in which they handle gain and clipping. And getting a good sense of how each of your plug-ins performs in different situations is as important as knowing any other instrument in your arsenal.

In the analog era, engineers would test each new box by running a sine wave through it and looking at the signal on an oscilloscope.

They could see where each device would clip at specific frequencies, what kind of distortion would occur, and other characteristics that helped to map out the device’s optimal gain settings and place in the chain.

You can easily do the same thing with your frequently used plug-ins. Open an oscilloscope or frequency display in your DAW, set the plug-in’s input (and output, if it has one) level to unity gain, and send a sine wave through the device. Watch the output as you gradually raise the send level.

Of course, the geek factor of testing with sine waves is no substitute for listening. Many tracks in a mix will have multiple plug-ins inserted in their signal path. Don’t forget to listen to each one individually, rather than the results of several effects combined.
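
If you’d like to script that sine-wave test, here’s a rough sketch of the idea; the `device` function is a hypothetical stand-in (a tanh soft clipper) for whatever plug-in is being profiled. Stepping up the send level shows where the apparent gain stops tracking unity:

```python
import numpy as np

def device(x):
    """Hypothetical stand-in for a plug-in under test:
    a tanh soft clipper, transparent at low levels."""
    return np.tanh(x)

fs = 48000
t = np.arange(fs // 10) / fs
sine = np.sin(2 * np.pi * 1000 * t)

# Step the send level up and watch where the output stops
# tracking the input linearly -- the onset of clipping.
for level_db in range(-24, 7, 6):
    level = 10 ** (level_db / 20)
    out_peak = np.max(np.abs(device(level * sine)))
    gain_db = 20 * np.log10(out_peak / level)
    print(f"send {level_db:+4d} dB -> apparent gain {gain_db:+6.2f} dB")
```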

UAD Neve 88RS

Like Watching Paint Dry
If you’ve gotten this far you’ve probably figured out that gain structure is neither exciting nor creative. But it’s one of those necessary facts of recording life, and ignoring it is not an option.

The more familiar you are with gain structure in general and your equipment in particular, the less time you’ll have to spend optimizing levels when you’ve got a room full of antsy musicians waiting to record.

Daniel Keller is a musician, engineer and producer. Since 2002 he has been president and CEO of Get It In Writing, a public relations and marketing firm focused on audio and multimedia professionals and their toys. Despite being immersed in professional audio his entire adult life, he still refuses to grow up. This article is courtesy of Universal Audio.


Monday, August 18, 2014

Backstage Class: Alternative & Effective Approaches To Sound Check

So much of what we do as sound engineers is based on habit and repetition. Better safe than sorry, if it ain’t broke don’t fix it, that’s the way everyone does it, and so on.

I enjoy questioning and testing the validity of these patterns. One of the beautiful aspects of live sound is that there is no true right or wrong way; rather, certain approaches are more likely to result in preferable outcomes than others.

With that in mind, let’s focus on the process we most commonly call “sound check.”

Why EQ the kick drum by itself with all the other microphones turned off? How often during the actual show do you mute every other mic to just hear that kick drum sound? How relevant and useful is it to waste oh-so-valuable sound check time EQ’ing solitary mics only to start over once the rest of the stage mic interactions are introduced?

Of course I understand doing a quick test of every mic individually, but beyond that, what we really need to know is how that instrument sounds with all the other mics turned on as well. Seems we forget that every mic hears everything on stage at some level. 

Want way more time to really get your sound dialed in and have the band love you at the same time? At the next gig, walk in and tell the band, “O.K., this is how I would like to sound check. After a quick tap line check to make sure everything works, you guys come on up and do whatever you want, rock some tunes, rehearse and jam.

“First we’ll get monitors sorted and close. To avoid confusion, here’s a simple hand signal method: point at what you want, then point at where you want it, and then point up or down so we know what to do. And while you’re rocking out, I’ll get all your sounds dialed in out front. I may stop you for a moment if there’s a particular problem, but what’s best for me is for you to play as many tunes as possible and get comfortable on this stage.

“Oh, and drummer person, if you can, lean into some extra toms so I can grab them as well.”

Important Things
Congratulations—you’ve just gone from having your band annoyed with being subjected to 50 hits on each drum to having happy musicians doing what they (hopefully) truly love.

With the artists playing, bring up each instrument and get a rough EQ; meanwhile, you also learn important things like how much the cymbals bleed into the toms, or how much guitar is getting into the vocal mics while you’re EQ’ing them.

As the band kicks out the jams, my approach is to bring up drums one at a time and do a rough EQ, then all the drums and refine the EQ. Add bass, check the drums and bass combo, then EQ the bass.

Next, lay guitars on top, get a rough EQ, and touch up bass and drums. Then give a listen to just guitar and bass without drums, and EQ them to fit.

All the while, I’m dialing in my compressors and gates. I finish by muting all the other mics and EQ’ing the vocals with the full band playing. Then, starting from just vocals, I add in guitars, then bass, and then drums. My headphones are always at the ready for cueing up and checking certain things.

If specific issues come up while the band is playing, hey, don’t worry about it yet. Get the rest of the sounds together first. The goal is obtaining a solid grasp of the bigger picture in the time it takes to test one mic at a time.

Plus you’re actually mixing, and are free to make drastic changes to hear those blends and combinations in a way that you can never do during an actual show. You’re also now the coolest engineer the band’s ever worked with.

Back In The Day
I started using a version of this approach about 25 years ago with a 60-piece orchestra I mixed weekly. Due to wind and the size of the area being covered, I needed to rely on fairly close mic’ing, and ended up with about 24 inputs.

It dawned on me pretty quickly that the whole “O.K., now will the third flute please play” method was a complete waste of time and left me scrambling to try to scrape a mix together when the show began.

So I devised a plan. Turn every gain knob all the way up, and have every channel muted with the fader down. As the various orchestra members showed up, tuned their instruments, and began playing, the clip light for the mic(s) near them would come on.

I would crank the gain down to below clip, PFL that channel to make sure it sounded fine, and then un-mute so I knew which channels had the gains set. As the channels were un-muted, I brought those faders up and began blending and EQ’ing, while waiting for the next clip light.

Finally, the conductor would have the orchestra play a short segment, and that was that. The whole process took about 15 to 20 minutes, I had the mix together, and then had time to go to the stage to fix any issues before the show started. 

Dave Rat heads up Rat Sound Systems Inc., based in Southern California, and has also been a mix engineer for more than 30 years.


Friday, August 15, 2014

Old Soundman: Deep Questions & Intriguing Quests

Greetings, Old Soundman, from the Great North Woods—

And greetings to you from my secret location!

I’m not all that old, only been riding the faders for a few years now.

Ride ‘em, soundman! Yee-ha!! Hey, did you ever hear of the X Bar X Boys?

The calluses are just about right on my fingers and in my ears.

What a bizarre viewpoint. I don’t really like to think about calluses. But whatever floats your boat!

I was one of those “dumb youngsters” who thought a fancy school was the way to go. Luckily, I didn’t pay my tuition right away and spent it on some crappy gear after dropping out (don’t tell the government).

They’ll have to torture me with old Bing Crosby records before I talk!

My first gig was with a 10-piece funk band with horns, lots of fun. But I made it and they kept hiring me.

You probably worked cheap.

Anyway, after a few years of fumbling through gigs and paying the first of my dues, I have two questions that haven’t been answered. First: What’s the best microphone to use on a sewing machine?

You’ve stepped across the line. I’ve told you people countless times: funny stuff—me; audio and philosophical questions—you.

But to answer your question, use a condenser mic, and crouch there all night, holding it up to the sewing machine. Don’t use a mic stand like the cheaters do.

Second:  When will I see the worst band ever?

Thanks,
Matt

Tomorrow night. If you survive that, you’ve only got 20-some years to go in order to catch up with my main man here…

Hello Old Soundman—

Like you, I’ve been at this for a long time, over 30 years.

Believe it or not, you’ve got me beat, brother!

Remember when bands didn’t use monitors?

I’ve heard tell of those days!

I appreciate and respect your words of wisdom, biting wit, and especially your ability to keep doing show after show.

But you have the same ability…Just like Willie Nelson and Blue Oyster Cult!

I’ve toured the world with large and small acts, and even became a dreaded FOH/tour manager to get out of banging gear.

Isn’t that the worst? Every whining musician on your case all of the time. Forgetful bandleaders. Insane agents. Demented wives and girlfriends. Checking everybody in/out of hotels. Waiting at airports. Selling merchandise. Ah, the devil was working overtime when he came up with the position of tour manager!

After 20 years on the road, I went back to school to get my degree in order to land a “suit ‘n’ tie” gig that paid off with stock options. I got the degree, but never the gig.

How come? Didn’t you ever apply at a Fortune 500 company? How about Wal-Mart? Toys-R-Us? Chili’s? National Public Radio? Chico’s Bail Bonds?

Is it just possible that I’ve done rock ‘n’ roll too long and there’s no hope of ever becoming a member of the establishment?

Signed,
JWL

It sounds like it to me. But the good news is that you’ve got plenty of company!

When you come to my club, the drinks are on me, pal of mine. We’ll solve all the world’s problems. And watch the sun come up!

Luv,
The Old Soundman

There’s simply no denying the love from The Old Soundman. Read more from him here.


Emulation Destination: Plug-Ins For Enhancing Live Applications

Ask live mix engineers their favorite effects and processors, and you’ll get dozens of different answers.

Some still prefer outboard gear, ranging from the more common to the esoteric. Vintage purists may want older tube gear that’s no longer manufactured, while others in this camp aren’t satisfied with anything less than a single particular unit that was used in (and possibly built into) a specific recording studio.

In the past it was difficult, if not impossible, for a production company or venue to assemble a rack of outboard gear that would satisfy every engineer. It was also hard for concert tours, particularly those consisting of one-off fly dates, to carry all of the outboard pieces that they wanted, or more importantly, that the artists they worked for needed.

Those days are largely in the past, as consoles with onboard digital processing and software plug-ins have taken over effects duties. Do you prefer the onboard effects and processing in your digital console? No problem, just patch them in where wanted.

Need a particular effect or processor that’s not available onboard? Again, no problem as you can probably use a plug-in directly with the console or augment it with a full server-type plug-in platform such as Waves SoundGrid or the Soundcraft Realtime Rack. Further, Avid consoles come with a collection of plug-ins, with VENUE consoles using the TDM VENUE format and the more recent S3L console accommodating the growing AAX format.

In a recent article (here), I provided an overview of the various types of plug-in formats as well as the platforms that support them, along with some general applications. I’m continuing the discussion here with a look at specific plug-ins that I’m using and/or that have piqued my interest.

For most bands, I keep things relatively simple from an effects standpoint. I begin by patching in a quality vocal reverb, an adjustable delay for vocals, and a solid snare drum reverb.

While the onboard effects and verbs of most consoles range from pretty good to great (and a few are quite stellar), there are times when I want something more specific to suit the needs of a particular performer or to better duplicate the recorded sound in the live realm. And for me, this is where plug-ins can come into play. Here are a few that have caught my eye.

Studio To The Road
An effect that has been limited to studios is double tracking, where a main track is duplicated on a second track, lending a fuller sound—and many times, with the two tracks being panned left and right, providing a stereo signal from a mono instrument.

We can sort of emulate double tracking live by applying a very short delay to a signal and feeding both the original and delayed signals to the PA. This gives a “thicker” sound that’s especially useful on guitars and vocals.

However, Waves recently introduced the Abbey Road Reel ADT, designed to emulate Abbey Road Studios’ process of Artificial Double Tracking, a signature effect created at the studio in the 1960s for The Beatles. The Reel ADT was developed in association with Abbey Road and the process has been a closely guarded secret of the facility until recently.

The original process was created by Abbey Road engineer Ken Townsend, who connected a primary tape deck to a second speed-controlled tape deck allowing two versions of the same signal to be played back simultaneously. By varying the speed of the second machine, the replayed signal could be moved around to simulate a separate take.

A retro look to go with the classic sounds supplied by the Waves Abbey Road Reel ADT.

The Abbey Road Reel ADT can even emulate the sound of tape complete with wow and flutter effects, providing the closest thing yet to real double tracking. Where was this a few years ago when I mixed a Beatles tribute band?

Lexicon has long supplied go-to reverbs, with the PCM96 stereo verb and effects processor one of the most popular. It has 28 legacy and new reverbs, delays and modulation effects that can be integrated into both digital audio workstations (DAW) as well as live rigs.

While the 1RU package is compact, it still takes up space and adds weight, while its price may not be in the budget. Plug-in format to the rescue, with Lexicon offering the PCM Bundle that utilizes the same algorithms and presets from the PCM96 hardware unit at about a quarter of the price.

A Vintage Plate from the Lexicon PCM Bundle.

A really cost-effective direction for getting the vintage Lexicon sound is the Native Instruments Reverb Classics plug-in that provides emulation of some reverbs based on the classic Lexicon 224 and the 480L, which are still used in many top studios. Although the number of room and hall choices compared to the PCM Bundle is limited, the Native Instruments Reverb Classics could be just the ticket for a vocalist who is singing classic hits.

Another classic tool that’s hard to drag out on the road is a real plate reverb.

These large units operate by inducing vibrations into a large plate of sheet metal with an electromechanical transducer and then using a pickup (or two for stereo) to capture the vibrations.

Elektro-Mess-Technik (EMT) introduced the EMT 140 in the late 1950s, and it became a popular model featured on many hit records.

But instead of lugging around a 600-pound plate reverb to gigs (and ticking off the stagehands), you can load up the EMT 140 Classic Plate Reverberator plug-in from Universal Audio. It replicates the sonic signatures of three different EMT 140s that are installed at The Plant Studios in Sausalito, CA.

Classic Squeeze
Compressors are sometimes tricky, with each different model subjectively better or worse than others depending on the source material and the expected results. Most engineers have their favorites for use on vocals, bass, drums and percussion, and I’m no exception. Some of these classic hardware units are still popular in the studio today, but aren’t too practical for live use due to cost and durability factors.

One solution stepping up to meet these needs from a plug-in standpoint is the Waves CLA Classic Compressors bundle, providing numerous emulations created with the aid of multiple-Grammy-Award-winning mixer and producer Chris Lord-Alge. They offer takes on the classic Teletronix LA2A and Urei 1176 units that I love for giving a “classic squeeze” on both live vocals and instruments.

The Softube Summit Audio TLA-100A plug-in sounds and looks like the original hardware.

Softube has teamed up with hardware manufacturer Summit Audio to develop the Softube Summit Audio TLA-100A, based on the Summit Audio Tube Leveling Amplifier, another staple in many studios.

This plug-in adds a saturation control, allowing the user to pump up the character of the processing without overdriving the input, which was required on the hardware version to get the same effect. It should find many uses onstage, especially with acoustic guitars and vocals.

Need more choices? Look no further than the new McDSP 6030 Ultimate Compressor, an AAX plug-in offering 10 different compressors in a single interface. Some of the units are emulations of existing gear with unique variations by McDSP, while others were designed from the ground up, providing a wide range of processing and dynamic options in one package.

The McDSP Ultimate Compressor provides 10 options in a single package.

For those who are doing live broadcast remotes or needing to tame the feeds to live webcasts and video recording, the Waves MaxxVolume may be just the ticket.

It combines the technologies from the L2 Ultramaximizer, C1 Parametric Compander, Renaissance Vox, and Renaissance Compressor, offering up a slew of versatile processing. From leveling dialogue at the podium to helping control acoustic instruments in the mix, MaxxVolume is handy indeed.

Senior contributing editor Craig Leerman is the owner of Tech Works, a production company based in Las Vegas.


Thursday, August 14, 2014

Digital Dharma: A/D Conversion And What It Means In Audio Terms

This article is provided by Rane Corporation.

 
Like everything else in the world, the audio industry has been radically and irrevocably changed by the digital revolution. No one has been spared.

Arguments will ensue forever about whether the true nature of the real world is analog or digital; whether the fundamental essence, or dharma, of life is continuous (analog) or exists in tiny little chunks (digital). Seek not that answer here.

Rather, let’s look at the dharma (essential function) of audio analog-to-digital (A/D) conversion.

It’s important at the outset of exploring digital audio to understand that once a waveform has been converted into digital format, nothing can inadvertently occur to change its sonic properties. While it remains in the digital domain, it’s only a series of digital words, representing numbers.

Aside from the gross example of the digital processing actually failing, causing a word to be lost or corrupted into uselessness, nothing can change the sound of the word. It’s just a bunch of “ones” and “zeroes.” There are no “one-halves” or “three-quarters.”

The point is that sonically, it begins and ends with the conversion process. Nothing is more important to digital audio than data conversion. Everything in-between is just arithmetic and waiting. That’s why there is such a big to-do with data conversion. It really is that important. Everything else quite literally is just details.

We could even go so far as to say that data conversion is the art of digital audio while everything else is the science, in that it is data conversion that ultimately determines whether or not the original sound is preserved (and this comment certainly does not negate the enormous and exacting science involved in truly excellent data conversion.)

Because analog signals continuously vary between an infinite number of states while computers can only handle two, the signals must be converted into binary digital words before the computer can work. Each digital word represents the value of the signal at one precise point in time. Today’s common word lengths are 16-bits, 24-bits and 32-bits. Once converted into digital words, the information may be stored, transmitted, or operated upon within the computer.

In order to properly explore the critical interface between the analog and digital worlds, it’s first necessary to review a few fundamentals and a little history.

Binary & Decimal
Whenever we speak of “digital,” by inference, we speak of computers (throughout this paper the term “computer” is used to represent any digital-based piece of audio equipment).

And computers, in their heart of hearts, are really quite simple. They can only understand the most basic form of communication or information: yes/no, on/off, open/closed, here/gone, all of which can be symbolically represented by two things - any two things.

Two letters, two numbers, two colors, two tones, two temperatures, two charges… It doesn’t matter. Unless you have to build something that will recognize these two states - now it matters.

So, to keep it simple, we choose two numbers: one and zero, or, a “1” and a “0.”

Officially this is known as binary representation, from Latin bini—two by two. In mathematics this is a base-2 number system, as opposed to our decimal (from Latin decima, a tenth part or tithe) number system, which is called base-10 because we use the ten numbers 0-9.

In binary we use only the numbers 0 and 1. “0” is a good symbol for no, off, closed, gone, etc., and “1” is easy to understand as meaning yes, on, open, here, etc. In electronics it’s easy to determine whether a circuit is open or closed, conducting or not conducting, has voltage or doesn’t have voltage.

Thus the binary number system found use in the very first computer, and nothing has changed today. Computers just got faster and smaller and cheaper, with memory size becoming incomprehensibly large in an incomprehensibly small space.

One problem with using binary numbers is they become big and unwieldy in a hurry. For instance, it takes six digits to express my age in binary, but only two in decimal. But in binary, we better not call them “digits” since “digits” implies a human finger or toe, of which there are 10, so confusion reigns.

To get around that problem, John Tukey of Bell Laboratories dubbed the basic unit of information (as defined by Shannon—more on him later) a binary unit, or “binary digit” which became abbreviated to “bit.” A bit is the simplest possible message representing one of two states. So, I’m six-bits old. Well, not quite. But it takes 6-bits to express my age as 110111.

Let’s see how that works. I’m 55 years old. So in base-10 symbols that is “55,” which stands for five 1’s plus five 10’s. You may not have ever thought about it, but each digit in our everyday numbers represents an additional power of 10, beginning with 0.

Figure 1: Number representation systems.

That is, the first digit represents the number of 1’s (10^0), the second digit represents the number of 10’s (10^1), the third digit represents the number of 100’s (10^2), and so on. We can represent any size number by using this shorthand notation.

Binary number representation is just the same except substituting the powers of 2 for the powers of 10 [any base number system is represented in this manner].

Therefore (moving from right to left) each succeeding bit represents 2^0 = 1, 2^1 = 2, 2^2 = 4, 2^3 = 8, 2^4 = 16, 2^5 = 32, etc. Thus, my age breaks down as 1-1, 1-2, 1-4, 0-8, 1-16, and 1-32, represented as “110111,” which is 32+16+0+4+2+1 = 55. Or double-nickel to you cool cats.

Figure 1, above, shows the two examples.
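
For readers who’d rather let the computer do the arithmetic, the same 55-in-binary example works out in a few lines of Python:

```python
# Verify the worked example: 55 in binary.
n = 55
bits = format(n, "b")          # '110111'
print(bits)

# Expand it back out, one power of two per bit, right to left:
total = sum(int(b) << i for i, b in enumerate(reversed(bits)))
print(total)                   # 55 = 32 + 16 + 4 + 2 + 1
```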

Building Blocks
The French mathematician Fourier unknowingly laid the groundwork for A/D conversion in the early 19th century.

All data conversion techniques rely on looking at, or sampling, the input signal at regular intervals and creating a digital word that represents the value of the analog signal at that precise moment. The fact that we know this works lies with Nyquist.

Harry Nyquist discovered it while working at Bell Laboratories in the late 1920s and wrote a landmark paper describing the criteria for what we know today as sampled data systems.

Nyquist taught us that for periodic functions, if you sampled at a rate that was at least twice as fast as the signal of interest, then no information (data) would be lost upon reconstruction.

And since Fourier had already shown that all alternating signals are made up of nothing more than a sum of harmonically related sine and cosine waves, audio signals are periodic functions and can be sampled without loss of information following Nyquist’s instructions.

This became known as the Nyquist Frequency, which is the highest frequency that may be accurately sampled, and is one-half of the sampling frequency.

For example, the theoretical Nyquist frequency for the audio CD (compact disc) system is 22.05 kHz, equaling one-half of the standardized sampling frequency of 44.1 kHz.

As powerful as Nyquist’s discoveries were, they were not without their dark side, the biggest part being aliasing frequencies. Following the Nyquist criterion (as it is now called) guarantees that no information will be lost; it does not, however, guarantee that no information will be gained.

Although by no means obvious, the act of sampling an analog signal at precise time intervals is an act of multiplying the input signal by the sampling pulses. This introduces the possibility of generating “false” signals indistinguishable from the original. In other words, given a set of sampled values, we cannot relate them specifically to one unique signal.

Figure 2: Aliasing frequencies.

As Figure 2 shows, the same set of samples could have resulted from any of the three waveforms shown, as well as from all possible sum and difference frequencies between the sampling frequency and the one being sampled.

All such false waveforms that fit the sample data are called “aliases.” In audio, these frequencies show up mostly as intermodulation distortion products, and they arise from random, white-noise-like content, or any sort of ultrasonic signal, present in every electronic system.
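
A few lines of code make the ambiguity concrete. In this sketch, an 800 Hz tone sampled at 1 kHz (Nyquist frequency: 500 Hz) produces exactly the same sample values, up to a sign flip, as a 200 Hz tone:

```python
import numpy as np

fs = 1000                        # sampling frequency, Hz
n = np.arange(16)                # sample indices
f = 200                          # in-band tone, below Nyquist
f_alias = fs - f                 # 800 Hz, well above Nyquist (500 Hz)

a = np.sin(2 * np.pi * f * n / fs)
b = np.sin(2 * np.pi * f_alias * n / fs)

# The samples of the 800 Hz tone are the exact mirror image of the
# 200 Hz samples -- once sampled, the two cannot be told apart.
print(np.allclose(a, -b))        # True
```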

Solving the problem of aliasing frequencies is what improved audio conversion systems to today’s level of sophistication. And it was Claude Shannon who pointed the way. Shannon is recognized as the father of information theory: while a young engineer at Bell Laboratories in 1948, he defined an entirely new field of science.

Even before then his genius shined through: while still a 22-year-old student at MIT, he showed in his master’s thesis how the algebra invented by the British mathematician George Boole in the mid-1800s could be applied to electronic circuits. Since that time, Boolean Algebra has been the rock of digital logic and computer design.

Another Solution
Shannon studied Nyquist’s work closely and came up with a deceptively simple addition. He observed (and proved) that if you restrict the input signal’s bandwidth to less than one-half the sampling frequency then no errors due to aliasing are possible.

So bandlimiting your input to no more than one-half the sampling frequency guarantees no aliasing. Cool…Only it’s not possible. In order to satisfy the Shannon limit (as it is called - Harry gets a “criterion” and Claude gets a “limit”) you must have the proverbial brick-wall, i.e., infinite-slope filter.

Well, this isn’t going to happen, not in this universe. You cannot guarantee that there is absolutely no signal (or noise) greater than the Nyquist frequency.

Fortunately there is a way around this problem. In fact, you go all the way around the problem and look at it from another direction.

If you cannot restrict the input bandwidth so aliasing does not occur, then solve the problem another way: Increase the sampling frequency until the aliasing products that do occur, do so at ultrasonic frequencies, and are effectively dealt with by a simple single-pole filter.

This is where the term “oversampling” comes in. For full-spectrum audio the minimum sampling frequency must be 40 kHz, giving you a usable theoretical bandwidth of 20 kHz - the limit of normal human hearing. Sampling at anything significantly higher than 40 kHz is termed oversampling.

In just a few years’ time, we saw the audio industry go from the CD system standard of 44.1 kHz, and the pro audio quasi-standard of 48 kHz, to 8-times and 16-times oversampling frequencies of around 350 kHz and 700 kHz, respectively. With sampling frequencies this high, aliasing is no longer an issue.

O.K. So audio signals can be changed into digital words (digitized) without loss of information, and with no aliasing effects, as long as the sampling frequency is high enough. How is this done?

Determining Values
Quantizing is the process of determining which of the possible values (determined by the number of bits or voltage reference parts) is the closest value to the current sample, i.e., you are assigning a quantity to that sample.

Quantizing, by definition then, involves deciding between two values and thus always introduces error. How big the error, or how accurate the answer, depends on the number of bits. The more bits, the better the answer.

The converter has a reference voltage which is divided up into 2^n parts, where n is the number of bits. Each part represents the same value.


Since you cannot resolve anything smaller than this value, there is error. There is always error in the conversion process. This is the accuracy issue.

Figure 3: 8-Bit resolution.

The number of bits determines the converter accuracy. For 8 bits, there are 2^8 = 256 possible levels, as shown in Figure 3.

Since the signal swings positive and negative there are 128 levels for each direction. Assuming a ±5 V reference [3], this makes each division, or bit, equal to 39 mV (5/128 = .039).

Hence, an 8-bit system cannot resolve any change smaller than 39 mV. This means a worst case accuracy error of 0.78 percent.
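
Here’s the same arithmetic in sketch form, assuming the ±5 V reference used above:

```python
n_bits = 8
v_ref = 5.0                       # +/-5 V reference

levels = 2 ** n_bits              # 256 total levels
step = v_ref / (levels // 2)      # 128 levels per polarity
error_pct = step / v_ref * 100    # worst-case accuracy error

print(f"step = {step * 1000:.0f} mV")          # 39 mV
print(f"worst case error = {error_pct:.2f}%")  # 0.78%
```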

Each step size (resulting from dividing the reference into the number of equal parts dictated by the number of bits) is equal and is called a quantizing step (also called quantizing interval—see Figure 4).

Originally this step was termed the LSB (least significant bit) since it equals the value of the smallest coded bit; however, that is an illogical choice for mathematical treatments and has since been replaced by the more accurate term quantizing step.

Figure 4: Quantization, 3-bit, 50-volt example.

The error due to the quantizing process is called quantizing error (no definitional stretch here). As shown earlier, each time a sample is taken there is error.

Here’s the not-so-obvious part: the quantizing error can be thought of as an unwanted signal that the quantizing process adds to the perfect original.

An example best illustrates this principle. Let the sampled input value be some arbitrarily chosen value, say, 2 volts. And let this be a 3-bit system with a 5 volt reference. The 3 bits divide the reference into 8 equal parts (2^3 = 8) of 0.625 V each, as shown in Figure 4.

For the 2 volt input example, the converter must choose between either 1.875 volts or 2.50 volts, and since 2 volts is closer to 1.875 than 2.5, then it is the best fit. This results in a quantizing error of -0.125 volts, i.e., the quantized answer is too small by 0.125 volts.

If the input signal had been, say, 2.2 volts, then the quantized answer would have been 2.5 volts and the quantizing error would have been +0.3 volts, i.e., too big by 0.3 volts.
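
Here is that worked example as a tiny round-to-nearest-step quantizer sketch; it reproduces both error figures above:

```python
def quantize(v_in, v_ref=5.0, bits=3):
    """Round an input voltage to the nearest quantizing step."""
    step = v_ref / 2 ** bits               # 0.625 V for 3 bits
    return round(v_in / step) * step

for v_in in (2.0, 2.2):
    q = quantize(v_in)
    print(f"in {v_in} V -> out {q:.3f} V, error {q - v_in:+.3f} V")
# in 2.0 V -> out 1.875 V, error -0.125 V
# in 2.2 V -> out 2.500 V, error +0.300 V
```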

These alternating unwanted signals added by quantizing form an error waveform that amounts to a kind of additive broadband noise; it is generally uncorrelated with the signal and is called quantizing noise.

Since the quantizing error is essentially random (i.e. uncorrelated with the input) it can be thought of like white noise (noise with equal amounts of all frequencies). This is not quite the same thing as thermal noise, but it is similar. The energy of this added noise is equally spread over the band from dc to one-half the sampling rate. This is a most important point and will be returned to when we discuss delta-sigma converters and their use of extreme oversampling.

Early Conversion
Successive approximation is one of the earliest and most successful analog-to-digital conversion techniques. Therefore, it is no surprise it became the initial A/D workhorse of the digital audio revolution. Successive approximation paved the way for the delta-sigma techniques to follow.

The heart of any A/D circuit is a comparator. A comparator is an electronic block whose output is determined by comparing the values of its two inputs. If the positive input is larger than the negative input then the output swings positive, and if the negative input exceeds the positive input, the output swings negative.

Therefore, if a reference voltage is connected to one input and an unknown input signal is applied to the other input, you now have a device that can compare and tell you which is larger. Thus a comparator gives you a “high output” (which could be defined to be a “1”) when the input signal exceeds the reference, or a “low output” (which could be defined to be a “0”) when it does not.

Figure 5A: Successive approximation, example.

A comparator is the key ingredient in the successive approximation technique as shown in Figure 5A and Figure 5B. The name successive approximation nicely sums up how the data conversion is done. The circuit evaluates each sample and creates a digital word representing the closest binary value.

The process takes the same number of steps as bits available, i.e., a 16-bit system requires 16 steps for each sample. The analog sample is successively compared to determine the digital code, beginning with the determination of the biggest (most significant) bit of the code.

Figure 5B: Successive approximation, A/D converter.

The description given in Daniel Sheingold’s Analog-Digital Conversion Handbook offers the best analogy as to how successive approximation works. The process is exactly analogous to a gold miner’s assay scale, or a chemical balance as seen in Figure 5A.

This type of scale comes with a set of graduated weights, each one half the value of the preceding one, such as 1 gram, 1/2 gram, 1/4 gram, 1/8 gram, etc. You compare the unknown sample against these known values by first placing the heaviest weight on the scale.

If it tips the scale you remove it; if it does not you leave it and go to the next smaller value. If that value tips the scale you remove it, if it does not you leave it and go to the next lower value, and so on until you reach the smallest weight that tips the scale. (When you get to the last weight, if it does not tip the scale, then you put the next highest weight back on, and that is your best answer.)

The sum of all the weights on the scale represents the closest value you can resolve.

In digital terms, we can analyze this example by saying that a “0” was assigned to each weight removed, and a “1” to each weight remaining—in essence creating a digital word equivalent to the unknown sample, with the number of bits equaling the number of weights.

And the quantizing error will be no more than 1/2 the smallest weight (or 1/2 quantizing step).
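
The weighing procedure translates directly into a short loop. This sketch (names and test values are illustrative, not from the handbook) tries each binary “weight” from the most significant bit down, keeping only those that don’t tip the scale past the input; like the scale before the final put-back step, it truncates rather than rounds:

```python
def sar_convert(v_in, v_ref=5.0, bits=8):
    """Successive approximation: test weights from the most
    significant bit down, keeping those that don't overshoot."""
    code = 0
    for bit in range(bits - 1, -1, -1):
        trial = code | (1 << bit)            # put the next weight on
        if trial * v_ref / 2 ** bits <= v_in:
            code = trial                     # it didn't tip the scale
    return code

code = sar_convert(3.3)
print(f"code {code:08b} ~ {code * 5.0 / 2 ** 8:.3f} V")  # 10101000 ~ 3.281 V
```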

As stated earlier, the successive approximation technique must repeat this cycle for each sample. Even with today’s technology, this is a very time-consuming process and is still limited to relatively slow sampling rates, but it did get us into the 16-bit, 44.1 kHz digital audio world.

PCM, PWM, EIEIO
The successive approximation method of data conversion is an example of pulse code modulation, or PCM. Three elements are required: sampling, quantizing, and encoding into a fixed length digital word. The reverse process reconstructs the analog signal from the PCM code.

The output of a PCM system is a series of digital words, where the word size is determined by the available bits. For example, the output is a series of 8-bit words, or 16-bit words, or 20-bit words, etc., with each word representing the value of one sample.

Pulse width modulation, or PWM, is quite simple and quite different from PCM. Look at Figure 6.

Figure 6: Pulse width modulation (PWM).

In a typical PWM system, the analog input signal is applied to a comparator whose reference voltage is a triangle-shaped waveform whose repetition rate is the sampling frequency. This simple block forms what is called an analog modulator.

A simple way to understand the “modulation” process is to view the output with the input held steady at zero volts. The output forms a 50 percent duty cycle (50 percent high, 50 percent low) square wave. As long as there is no input, the output is a steady square wave.

As soon as the input is non-zero, the output becomes a pulse-width modulated waveform. That is, when the non-zero input is compared against the triangular reference voltage, it varies the length of time the output is either high or low.

For example, say there was a steady DC value applied to the input. For all samples where the value of the triangle is greater than the input value, the output stays low, and for all samples where it is less than the input value, the output changes state and remains high.

Therefore, if the triangle starts higher than the input value, the output goes low; at the next sample period the triangle has increased in value and is still above the input, so the output remains low; this continues until the triangle reaches its apex and starts down again; eventually the triangle voltage drops below the input value, and the output goes high and stays there until the reference exceeds the input again.

The resulting pulse-width modulated output, when averaged over time, gives the exact input voltage. For example, if the output spends exactly 50 percent of the time with an output of 5 volts, and 50 percent of the time at 0 volts, then the average output would be exactly 2.5 volts.
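
Here’s a minimal numerical sketch of that averaging behavior, assuming a 0-5 V triangle reference and the comparator sense described earlier (output high while the input exceeds the reference):

```python
import numpy as np

fs = 100_000                     # samples per second
n = np.arange(fs)                # one second

# 1 kHz triangle reference sweeping 0..5 V.
# Sampling at mid-points avoids exact comparator ties.
phase = ((n + 0.5) * 1000 / fs) % 1.0
triangle = 5.0 * (1 - np.abs(2 * phase - 1))

v_in = 2.5                       # steady DC input, volts
pwm = np.where(v_in > triangle, 5.0, 0.0)   # 1-bit comparator output

print(f"duty cycle: {np.mean(pwm > 0):.0%}")         # 50%
print(f"time-averaged output: {np.mean(pwm):.3f} V")  # ~2.5 V
```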

This is also an FM, or frequency-modulated system—the varying pulse-width translates into a varying frequency. And it is the core principle of most Class-D switching power amplifiers.

The analog input is converted into a variable pulse-width stream used to turn on the output switching transistors. The analog output voltage is simply the average of the on-times of the positive and negative outputs.

Pretty amazing stuff from a simple comparator with a triangle waveform reference.

Another way to look at this is that this simple device actually codes a single bit of information, i.e., a comparator is a 1-bit A/D converter. PWM is an example of a 1-bit A/D encoding system. And a 1-bit A/D encoder forms the heart of delta-sigma modulation.

Modulation & Shaping
After 30 years, delta-sigma modulation (also sigma-delta) emerged as the most successful audio A/D converter technology.

It waited patiently for the semiconductor industry to develop the technologies necessary to integrate analog and digital circuitry on the same chip.

Today’s very high-speed “mixed-signal” IC processing allows the total integration of all the circuit elements necessary to create delta-sigma data converters of awesome magnitude.

Essentially a delta-sigma converter digitizes the audio signal with a very low resolution (1-bit) A/D converter at a very high sampling rate. It is the oversampling rate and subsequent digital processing that separates this from plain delta modulation (no sigma).

Referring back to the earlier discussion of quantizing noise, it’s possible to calculate the theoretical sine wave signal-to-noise (S/N) ratio (actually the signal-to-error ratio, but for our purposes it’s close enough to combine) of an A/D converter system knowing only n, the number of bits.

Doing some math shows that the signal-to-quantizing-noise ratio for a maximum (full-scale) sine wave input equals 6.02n + 1.76 dB. For example, a perfect 16-bit system will have a S/N ratio of 98.1 dB, while a 1-bit delta-modulator A/D converter, on the other hand, will have only 7.78 dB!
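
The formula is simple enough to tabulate directly:

```python
def sine_snr_db(bits):
    """Theoretical full-scale sine S/N for an n-bit quantizer."""
    return 6.02 * bits + 1.76

for n in (1, 16, 24):
    print(f"{n:2d} bits -> {sine_snr_db(n):6.2f} dB")
# 1 bit -> 7.78 dB, 16 bits -> 98.08 dB, 24 bits -> 146.24 dB
```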

Figures 7A - 7E: Noise power redistribution and reduction due to oversampling, noise shaping and digital filtering.

To get something of an intuitive feel for this, consider that since there is only 1 bit, the amount of quantization error possible is as much as 1/2 bit. That is, since the converter must choose between the only two possibilities of maximum or minimum values, the error can be as much as half of that.

And since this quantization error shows up as added noise, then this reduces the S/N to something on the order of around 2:1, or 6 dB.

One attribute shines true above all others for delta-sigma converters and makes them a superior audio converter: simplicity. The simplicity of 1-bit technology makes the conversion process very fast, and very fast conversion allows the use of extreme oversampling.

And extreme oversampling pushes the quantizing noise and aliasing artifacts way out to megawiggle-land, where they are easily dealt with by digital filters (typically 64-times oversampling is used, resulting in a sampling frequency on the order of 3 MHz).

To get a better understanding of how oversampling reduces audible quantization noise, we need to think in terms of noise power. From physics you may remember that power is conserved, i.e., you can change it, but you cannot create or destroy it; well, quantization noise power is similar.

With oversampling, the quantization noise power is spread over a band that is as many times larger as the rate of oversampling. For example, with 64-times oversampling, the noise power is spread over a band 64 times larger, reducing its power density in the audio band to 1/64th of what it would otherwise be.
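
In decibel terms, spreading flat quantizing noise over 64 times the bandwidth lowers the in-band noise floor by about 18 dB, before any noise shaping is applied:

```python
import math

osr = 64                            # oversampling ratio
in_band_gain_db = 10 * math.log10(osr)
print(f"{in_band_gain_db:.1f} dB")  # ~18.1 dB of in-band improvement
```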

Figures 7A through 7E illustrate noise power redistribution and reduction due to oversampling, noise shaping and digital filtering.

Noise shaping helps reduce in-band noise even more. Oversampling pushes out the noise, but it does so uniformly, that is, the spectrum is still flat. Noise shaping changes that.

Using very clever complex algorithms and circuit tricks, noise shaping contours the noise so that it is reduced in the audible regions and increased in the inaudible regions.

Conservation still holds: the total noise is the same, but the amount of noise present in the audio band is decreased while the out-of-band noise is simultaneously increased. The digital filter then eliminates it. Very slick.

As shown in Figure 8, a delta-sigma modulator consists of three parts: an analog modulator, a digital filter and a decimation circuit.

The analog modulator is the 1-bit converter discussed previously with the change of integrating the analog signal before performing the delta modulation. (The integral of the analog signal is encoded rather than the change in the analog signal, as is the case for traditional delta modulation.)

Figure 8: Delta-sigma A/D converter.

Oversampling and noise shaping pushes and contours all the bad stuff (aliasing, quantizing noise, etc.) so the digital filter suppresses it.

The decimation circuit, or decimator, is the digital circuitry that generates the correct output word length of 16-, 20-, or 24-bits, and restores the desired output sample frequency. It is a digital sample rate reduction filter and is sometimes termed downsampling (as opposed to oversampling) since it is here that the sample rate is returned from its 64-times rate to the normal CD rate of 44.1 kHz, or perhaps to 48 kHz, or even 96 kHz, for pro audio applications.

The net result is much greater resolution and dynamic range, with increased S/N and far less distortion compared to successive approximation techniques—all at lower costs.

Good Noise?
Now that oversampling helped get rid of the bad noise, let’s add some good noise—dither noise. Dither is one of life’s many trade-offs. Here the trade-off is between noise and resolution. Believe it or not, we can introduce dither (a form of noise) and increase our ability to resolve very small values.

Values, in fact, smaller than our smallest bit… Now that’s a good trick. Perhaps you can begin to grasp the concept by making an analogy between dither and anti-lock brakes. Get it? No? Here’s how this analogy works: With regular brakes, if you just stomp on them, you probably create an unsafe skid situation for the car… Not a good idea.

Instead, if you rapidly tap the brakes, you control the stopping without skidding. We shall call this “dithering the brakes.” What you have done is introduce “noise” (tapping) to an otherwise rigidly binary (on or off) function.

So by “tapping” on our analog signal, we can improve our ability to resolve it. By introducing noise, the converter rapidly switches between two quantization levels, rather than picking one or the other, when neither is really correct. Sonically, this comes out as noise, rather than a discrete level with error. Subjectively, what would have been perceived as distortion is now heard as noise.

Let's look at this in more detail. The problem dither helps to solve is that of quantization error, caused by the data converter being forced to choose one of two exact levels for each bit it resolves. It cannot choose between levels; it must pick one or the other.

With 16-bit systems, the digitized waveform for high frequency, low signal levels looks very much like a steep staircase with few steps. An examination of the spectral analysis of this waveform reveals lots of nasty sounding distortion products. We can improve this result either by adding more bits, or by adding dither.

Prior to 1997, adding more bits for better resolution was straightforward, but expensive, thereby making dither an inexpensive compromise; today, however, there is less need.

The dither noise is added to the low-level signal before conversion. The mixed noise causes the small signal to jump around, which causes the converter to switch rapidly between levels rather than being forced to choose between two fixed values.

Now the digitized waveform still looks like a steep staircase, but each step, instead of being smooth, is comprised of many narrow strips, like vertical Venetian blinds.

Figure 9: A - input signal; B - output signal (no dither); C - total error signal (no dither); D - power spectrum of output signal (no dither); E - input signal; F - output signal (with dither); G - total error signal (with dither); H - power spectrum of output signal (with dither).

The spectral analysis of this waveform shows almost no distortion products at all, albeit with an increase in the noise content. The dither has caused the distortion products to be pushed out beyond audibility, and replaced with an increase in wideband noise. Figure 9 diagrams this process.
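A quick way to convince yourself of this is to quantize a signal only a few LSBs tall, with and without dither. The sketch below (Python, assuming TPDF dither of ±1 LSB peak, a common choice) sets that up:

```python
import numpy as np

def quantize(x, bits=16):
    # round to the nearest step on a `bits`-deep grid
    scale = 2 ** (bits - 1)
    return np.round(x * scale) / scale

rng = np.random.default_rng(0)
t = np.arange(48_000) / 48_000
x = 1e-4 * np.sin(2 * np.pi * 1000 * t)       # signal only a few LSBs tall

lsb = 1 / 2 ** 15
# TPDF dither: sum of two uniform noises, triangular PDF, ±1 LSB peak
tpdf = (rng.uniform(-0.5, 0.5, x.size) + rng.uniform(-0.5, 0.5, x.size)) * lsb

plain = quantize(x)            # staircase: spectrum full of harmonics
dithered = quantize(x + tpdf)  # noisy strips: harmonics gone, noise floor up

# An FFT of `plain` shows discrete distortion products; `dithered` shows
# only the tone plus a smooth wideband noise floor.
```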

Wrap With Bandwidth
Due to the oversampling and noise shaping characteristics of delta-sigma A/D converters, certain measurements must use the appropriate bandwidth or inaccurate answers result. Specifications such as signal-to-noise, dynamic range, and distortion are subject to misleading results if the wrong bandwidth is used.

Because noise shaping purposely reduces audible noise by shifting the noise to inaudible higher frequencies, taking measurements over a bandwidth wider than 20 kHz results in answers that do not correlate with the listening experience. Therefore, it’s important to set the correct measurement bandwidth to obtain meaningful data.
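In practice, that means band-limiting the measurement. A hedged sketch of the idea, with made-up shaped noise standing in for a real converter's output:

```python
import numpy as np
from scipy import signal

fs = 192_000
rng = np.random.default_rng(1)
white = rng.normal(0, 1e-4, fs)
b, a = signal.butter(2, 30_000 / (fs / 2), "high")
shaped = signal.lfilter(b, a, white) * 20      # exaggerated out-of-band noise

def rms_db(x):
    return 20 * np.log10(np.sqrt(np.mean(x ** 2)))

# Limit the measurement to the audio band before computing the noise figure
lp = signal.butter(8, 20_000 / (fs / 2), "low", output="sos")
print("wideband noise:", rms_db(shaped), "dB")                   # looks awful
print("20 kHz-band noise:", rms_db(signal.sosfilt(lp, shaped)), "dB")  # what you hear
```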

Dennis Bohn is a principal partner and vice president of research & development at Rane Corporation. He holds BSEE and MSEE degrees from the University of California at Berkeley. Prior to Rane, he worked as engineering manager for Phase Linear Corporation and as audio application engineer at National Semiconductor Corporation. Bohn is a Fellow of the AES, holds two U.S. patents, is listed in Who’s Who In America and authored the entry on “Equalizers” for the McGraw-Hill Encyclopedia of Science & Technology, 7th edition.

 

Posted by Keith Clark on 08/14 at 02:08 PM

Wednesday, August 13, 2014

In The Studio: The Five Levels Of Mixing Quality

This article is provided by the Pro Audio Files.

 
The meaning of the word “good” is one of my odd and recurring fascinations. What makes a “good” mix?

Music is a subjective field with many general principles but very few hard and fast rules.

The arts are inherently up for interpretation, and so "good" to one person may be "bad" to another. I could cite examples of this, but there are so many that I feel there's really no need.

So I often re-pose the question to myself: what makes a "good" mix? After all, that's what I get paid for, right? To make "good"/"great"/"unfrickin' real" mixes.

In keeping with true blog format, here's a list of what I feel makes for the levels of "goodness" in a mix.

Level 1: Getting The Sound “Out of the Way”

At the most fundamental level, recordings are ultimately adulterated forms of a musical performance.

The fact is nothing really equates to the sound in the room, and when we start putting microphones in between the performance and the two measly speakers that are attempting to regurgitate that performance, it’s going to fall flat.

Couple that with any deficiencies of the recording space, equipment, or (hey, hey) tracking engineer — or lack thereof — and we soon find that the record pales in comparison.

Level 1 is the recognition that mixing is a necessary evil — someone has to compensate for all of this and “get the sound out of the way.”

Because it’s really hard to enjoy a performance when the guitar sounds like it’s under a blanket and the vocal sounds like the singer was chewing on the microphone in a space that sounds like a space-cavern and coffin at the same time.

No matter how good the performance is, bad sound is going to interfere with the listener’s experience.

Level 2: Making The Sound “Larger Than Life”

Once the sound is out of the way the most important job is done. But now we get to view mixing as a creative medium.

While a recording will never have the power and impact of a live performance, the recorded sound can do things that can't happen in nature. And that's a fantastic thing.

Records, like film, have grown into their own art because of the manipulation that can occur. We can create a sound that is “larger than life”.

This comes in many forms: giving the sound a greater space, elements that are more vivid than we would hear them even in the best of sound systems, shaping sounds to have a stronger perceived impact than they normally would, etc.

Level 3: Enhancing The Musicality

This is the level that separates the aspiring engineers from the inspired engineers.

The ability to “help the music along” is often lost on the bands and artists who need it the most — but for the vetted artists, bands, producers who hear and feel musicality — this is the real litmus test.

The engineer hears the basic mix and begins to interpret the musical intentions.

There are myriad means by which musicality is expressed, interpreted, and subsequently helped along, and a great deal of it is instinct, but when you hear it, you hear it.

All I can say is a great deal of this process involves automation — bringing key elements out at exacting moments.

Level 4: Understanding The “Bigger Picture”

Music does not exist in a box.

Having an appreciation for the culture of people creating and listening to that music is paramount.

This doesn’t mean strictly playing to the aesthetic of the audience, but also knowing how to manipulate their expectations. This means not only understanding what the listener wants, but also why, and what the effects of altering their expectations may be.

Level 5: Doing Everything To Serve The Song

The mixer’s role is generally understood to be the tailoring of elements within the production.

However, the mixing phase is still a production phase, and as such, there is still time for adding, removing, or changing the vision of elements.

I have done everything from adding crazy effects to muting instruments, replacing drums, overdubbing guitars, and even adding vocals onto records. The cornerstone of all of this is doing so in good taste.

The other important consideration is that the mixer sometimes must sacrifice their own importance. The things which “feel” the best aren’t necessarily the things that “sound” the best.

Putting things out of balance, leaving them muddy or thin, overly reverberant or awkwardly dry, can all go towards the main goal: the success of the song. The most successful songs are the ones that are the most compelling to the listener and that doesn’t always mean perfection.

Conclusion

There are two points I’d like to make about this article.

First, all five of these “levels” correlate. They’re not in fact separate stages or concepts, but more like degrees of mastery.

To this day I am still refining my skills in levels 1 & 2, even though my main goals are mastery of levels 3, 4 & 5.

My second point is that I didn’t choose these levels randomly. I put them in order of primary importance and difficulty of mastery.

The vast majority of mixes I hear do not have proper negotiation of levels 1 & 2, let alone 3, 4, or 5.

It takes a great deal of study, practice, and discipline to master the art of mixing, so don't ever be afraid to revisit the foundation during your journey!

 
Matthew Weiss engineers from his private facility in Philadelphia, PA. A list of clients and credits are available at Weiss-Sound.com. To get a taste of The Maio Collection, the debut drum library from Matthew, check out The Maio Sampler Pack by entering your email here and pressing “Download.”

Also be sure to visit The Pro Audio Files for more great recording content. To comment or ask questions about this article, go here.

Posted by Keith Clark on 08/13 at 06:14 PM

Church Sound: Mistakes Worship Teams Make That Can Compromise Services

The mistakes worship teams commit while approaching God do not preclude His presence, but they do erect obstacles to the flow of the Holy Spirit.

Here, then, are 10 common errors churches can avoid in the pursuit of God:

1. Turning minor mistakes into public spectacles. When a vocalist forgets to turn on a wireless microphone or a technician commits a track cueing error, the worst thing a worship leader can do is to proclaim the mistake to the entire congregation.

If the audience didn’t notice, why bring it up? And, if it was an obvious error, then everyone already knows about it.

Far from being a way to humanize the proceedings, public notifications only hinder the work of the Spirit and demoralize the person responsible. It’s better to go on with the service and discuss the incident in the context of love at a later debriefing.

2. Playing too much. Some musicians live to play and feel compelled to use every chord they know each service.

Just as too many cooks spoil the broth, so do too many notes spoil the song. If each segment can be given some air to breathe in the form of silence around the song, then each part that is played takes on added value and weight.

Ed Kerr says it best, “Make every note you play count toward the goal of communication and away from a focus on your ability.”

3. Playing too loudly. Worship “wars” are known for their resounding barrage of noise.

The goal of the band should not be to destroy the congregation’s hearing, but to play music that encourages the audience to participate in a journey to the throne of God. How loud is too loud is a question each team must answer based on the culture and circumstance of the local assembly.

However, a rule of thumb is to keep the stage level low enough that unamplified voices can be at least partially understood from a one-foot distance. The house mix level should be below 95 dB-A average response.

4. Choosing inappropriate material. I recently attended a worship service designed for 40-year-olds that incorporated a musical style more appropriate for 20-year-olds. While the audience seemed to appreciate the band’s efforts, they never became engaged in the proceedings. There were, though, a few “Gen Xers” in another room who were drawn to the sounds emanating from the sanctuary.

As a church consultant, I've been asked to referee many battles between the old and new, and have discovered the new is more readily digested when coated with cues from the old. No one wants to be outmoded, and there will always be someone who lives to hear Journey-esque music performed by a Steve Perry wannabe.

Keeping everyone happy is one way to direct people to Christ.

5. Selecting songs average people can’t sing. In a recent informal survey of non-participatory church goers, the majority cited the frustration they feel when they desire to worship in song, but are hindered by a musical selection beyond their range.

While the team members may impress themselves with their virtuosity and skill, the average Joe in the pew just gives up and stares into space.

Engaging people is never accomplished by making them feel inferior and inadequate.

In the words of Chariya Bissonette, “It doesn’t matter what you [the vocalist] can do. It only matters what Christ can do through you.”

6. Starting the service late. If the service is to begin at 10 am, then that's actually when it should start, lest those who made the effort to be there promptly are disenfranchised while those who failed to arrive early are greeted with an "it doesn't really matter" mentality.

One of the most precious commodities people have is time, and starting a service late implies their time gift is not important to the staff and team.

7. Treating rehearsal time as practice time. As Jamie Harvill states, “Rehearsal is crafted to polish the song, not to learn it. Individual practice time is when learning occurs.” Curt Coffield uses the time/money scale to weigh the value of rehearsal. If each member’s time is worth $25 per hour, imagine the total value of every rehearsal event and treat it appropriately.

8. Buying a Hyundai, then driving it like a Ferrari. Audio and video systems cost what they are worth. There is no way a modest system can perform like an expensive, properly designed system.

Churches love to set system budgets, and then try to force the integrator to “make it work.” Unfortunately, God’s laws of physics apply in His house just like they do at an Eminem concert.

As the cliché says, you get what you pay for. If a church needs to reproduce video and audio at a high level, it takes the right equipment and personnel to achieve the goal.

9. Presenting a hip image of Christianity in place of the image of Christ. God does not call us to make Christianity cool. There is nothing cool about suffocating to death on a cross while stripped naked.

The Gospel is a wonderful message and conveys hope, but not at the expense of truth. Our message must be applicable to all people for all time in all circumstance.

10. Creating virtual music. Performing "Muzak" versions of rock tunes, with guitars played through modeling modules and drums banged out on electronic drums, does not endear the message to someone raised on real rock 'n' roll.

If the situation is appropriate for virtual instruments and the room acoustics are atrocious, then virtual may be the answer. However, if authenticity is the goal, then authentic instrumentation is the means for success.

Discernment is needed to understand when to wail and when to use in-ears.

Kent Morris is noted for his church sound training abilities. He has more than 30 years of experience with A/V, has served as a front-of-house engineer for several noted performers and is a product development consultant for several leading audio manufacturers.

Posted by Keith Clark on 08/13 at 05:50 PM

Fill-osophy: Proper Fill Goes Well Beyond Just Adding Loudspeakers

Front fill loudspeakers are required in many sound reinforcement situations because the main loudspeakers don’t cover the first few rows of seats, and in some productions, they’re also used because the mains (often far above the seats or positioned to the sides) result in skewed localization.

Another less frequently encountered need is to correct the mix balance as heard from the front rows, primarily in musical theater.

Even very well-designed, pattern-controlled loudspeakers, clusters, and arrays lose pattern control at middle and low frequencies, so if we attempt to cover the front rows with higher frequency energy from the loudspeakers above, we can end up washing the front of the stage or platform with (uncontrolled) middle and low frequency energy.

This results in muddy sound on the stage (which may force stage monitor levels up) plus increased potential for feedback from the front line microphones.

It’s also often the case that the loudspeakers we own (sound departments, rental companies) or which we have selected for other specific reasons (installed systems) simply do not cover from the front to rear rows, or they do, but with significant overlap or projecting sound onto ceiling surfaces.

Other factors which dictate where we locate the primary loudspeakers (and therefore impact how they cover) include architect/event planner/client influence (appearance) and structural needs.

Finally, on rock n’ roll stages (clubs, larger spaces and outdoors), the stage monitor and back line volume levels may, at times, be so loud (due to musician demands) that even if we have complete coverage from the primary loudspeakers, the quality of the mix heard directly in front of the stage has taken a severe hit. This problem is also present in many churches employing louder contemporary worship and/or modern gospel music.

Location, Location
For theatrical sound where the primary loudspeakers cover to the edge of the stage, those seated in the front rows have the experience of seeing the actors directly ahead but hearing their voices coming distinctly from above. Transparency takes a big hit!

There can also be audible comb filtering if the acoustic source and the reinforced sound from the loudspeakers are near equal in level, because of the difference in their arrival times (location disparity). The need to prevent these distortions also occurs in other settings, such as worship spaces and corporate events.

In these situations, we employ front fill loudspeakers that can be laid across the stage lip/apron, attached via yoke mounts (to stage or pit rail) or propped up in front of the stage. Often they must be visually masked or completely concealed. Front fills should be mounted as low as possible (for sight lines), but their depth of coverage is compromised when mounted too low.

Other than in large-scale concert productions, there is seldom a need for very large front fill loudspeakers with high output. Close proximity to the target seats/patrons plus the efficiency of modern-day loudspeakers allow us to employ compact short throw devices, and in most cases, the vertical coverage is not as big of a concern as the horizontal.

Given the mounting height we’re faced with, and the throw distance required, the vertical pattern typically needs to be in the 40-degree to 60-degree range.

There are two characteristics that affect what we need in the horizontal axis: wider coverage (so fewer devices are needed) and seamless coverage, with no holes and no overlap across the target seats.

Figures 1 and 2 show two front fill systems in theater spaces. In Figure 1, the seats are far enough away that the 80-degree conical devices are well suited and few are required. In Figure 2, the front seats are much closer and require a larger quantity of wide coverage front fills for complete coverage across the front rows. This tendency for coverage to vary depending on seating versus loudspeaker locations exists in any other type of venue or event.

Figure 1: Three 80-degree (h) loudspeakers cover 17 seats across at front row. (Photo by Kai Harada)

Some stages and almost all platforms in worship spaces are low in elevation. When faced with this, along with the need to reach into several rows of seats, we must get the front fill loudspeakers as high as we can without encroaching into the line-of-sight of the audience.

Fortunately we can rely on the ability of middle and high frequencies to diffract, to some degree, around the listeners and often we can reach into 2-3 rows—“enough to get by”—when this condition exists.

Older and more traditional churches may provide platforms, pulpit design or other millwork at the front that can facilitate installing front fill loudspeakers at an appropriate height and without being conspicuously visible. In some cases we may even seek complete concealment which is great, provided that electro-acoustic performance is not compromised.

Figure 2: Five 100-degree (h) loudspeakers cover 13 seats across. (Photo by Tom Young)

Various Scenarios
Unlike the other classes of fill, front fills may need to be fed a different mix than that which is sent to the rest of the loudspeaker system. In musical theater, those who are seated close to the orchestra pit are often subjected to an imbalance of instruments over voices.

Correcting this with front fills is the sound designer’s responsibility and requires the cooperation of the music director (conductor) to rehearse and then control the pit band. In this case, the front fill mix (sent via a matrix or an aux mix bus) consists of voices, and the front fill levels are set so that these voices are blended with the instruments emanating from the pit. Localization is also improved as part of this process.

Figure 3: If the main loudspeakers have an A/B or A/B/C configuration, the approach also applies to fill loudspeakers. (Photo by Mark Frink)

Another loudspeaker-related requirement unique to some musical theater productions is the need for A/B or A/B/C loudspeakers in the primary loudspeaker system. Logically this will also be needed for front (and other) fill loudspeakers (Figure 3).

In large-scale concert sound, where there is an effort made to provide better sound at the very front, front fills need to have more horsepower than what we would need for smaller venues.

In this scale of production, we can almost always achieve what is needed with available loudspeakers such as stage monitors or medium size “full range” models that are yoked or propped up at the rear so they are aimed correctly, provided that these devices exhibit appropriate coverage patterns when in this orientation.

Figure 4: Front fills deployed on (left to right) large, medium and small concert stages.

Loudspeakers with rotatable high-frequency horns may be of benefit here (Figure 4). Although they may not be the best candidate, multiband line array elements may be used for front fill duty in this scale of venue—provided, of course, that the HF vertical coverage is not excessively narrow, mounting height is appropriate, and aiming is done correctly.

Finally, on large concert stages we more often see center subwoofer arrays at floor level, which can place more demands on the front fills, which often sit on, or directly above, the subwoofers.

Under & Over
Almost all point source and line array systems used in performance venues and churches are trimmed at a height that facilitates even coverage to all exposed seats, and in as compact a manner as possible.

In venues with balconies, there will be seats that are shadowed by the balcony overhang, and in many venues the ceiling, or the acoustic reflectors suspended over the audience, obstruct sound from reaching the rear-most and highest seating.

The devices used for under balcony fill are likely to vary from small (i.e., dual-5 or dual-6, 2-way) to medium small (single-8 or -10, 2-way) and installed as close to the ceiling as possible. Although multipurpose loudspeakers are often deployed for this, there are purpose-built fill devices available from a wide range of manufacturers.

Over balcony devices vary from medium-small (single-8 to single-10, 2-way) up to mid-size 2- and 3-way horn-loaded designs. Most often, we use multipurpose products adapted to this application (Figure 5).

For portable sound applications such as Broadway show bus and truck tours, under and over balcony loudspeakers are likely to be hung from lighting pipes and/or box trusses that are already installed or brought in for lighting.

Rock n’ roll shows in road houses seldom carry their own fill loudspeakers and will tie into the installed fill systems. When these are aligned and equalized to the primary system, they can work very well. However, not taking the time to measure and correct the alignment, etc. can result in severely compromised sound for those who are seated in the “wrong” seats.

Figure 5: Under balcony and over balcony fill deployment.

With installed systems, we have the opportunity (and often the need) to position under and over balcony devices in a more streamlined manner. Yoked fill loudspeakers are most often used, provided there are structural elements behind the ceiling surfaces to anchor them to. In some cases we can work with the architect/client to embed the loudspeakers into the ceiling and color-match them.

Placement & Coverage
In general, it’s good practice to position overhead fill loudspeakers forward of the target seats so that, along with delay, the acoustic energy from these is localized toward the stage.
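As a back-of-envelope sketch (my numbers, not the author's), the fill delay for a given seat comes from the path-length difference between mains and fill, plus a small precedence (Haas) offset so localization stays at the stage:

```python
SPEED_OF_SOUND = 343.0  # m/s, at roughly room temperature

def fill_delay_ms(mains_to_seat_m: float, fill_to_seat_m: float,
                  haas_offset_ms: float = 3.0) -> float:
    """Delay (ms) to apply to the fill feed for a given seat."""
    path_difference = mains_to_seat_m - fill_to_seat_m
    return path_difference / SPEED_OF_SOUND * 1000.0 + haas_offset_ms

# Example: mains 24 m from the seat, under balcony fill 6 m away
print(f"{fill_delay_ms(24.0, 6.0):.1f} ms")   # ~55.5 ms
```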

More often than not, under balcony ceilings do not work well with recessed ceiling loudspeakers, due to the direction in which the sound is projected as well as the limited area these will cover. However, acceptable results can be achieved with such loudspeakers when the ceiling is curved or angled upwards from front to back.

Where we place these devices and where their coverage begins are determined primarily by intuition and experience. In my experience, simply sighting the point where the balcony begins to block a seated listener's view of the HF elements in the primary loudspeaker(s) doesn't work; the fill devices need to be located 2-3 rows ahead of this point.

From this point to several feet above standing height at the rear wall determines the required vertical coverage. We should avoid reflections from the ceiling, especially when it is flat and low. Ceilings that are curved up, from front-to-back, are less of a concern. Reflections off untreated rear walls are seldom an issue because we typically angle the fill loudspeakers down and therefore the reflections are “grounded” (absorbed) by the audience.

Figure 6: Polar graphs showing coverage, in both axes, of a typical small-format 90-degree (h) by 40-degree (v) loudspeaker.

The depth of the under balcony seating determines whether one or two delay "rings" are required, as well as the coverage required from these devices. Unless complex computer models are constructed (and you trust them), when determining the best-suited fill loudspeaker and how many are needed, one should at the very least utilize the manufacturer's polar response graphs or beamwidth chart, paying particular attention to the coverage pattern over the 2 kHz to 12.5 kHz range.

This frequency range significantly affects clarity/intelligibility as well as timing cues (localization). Note that in most cases the specified coverage is less in both axes at these higher frequencies (Figure 6).

Tom Young is the principal consultant at Electroacoustic Design Services (EDS) with both worship and performance space projects in and around New York City and throughout New England. EDS specializes in sound reinforcement system design, loudspeaker system measurement and optimization, acoustic design and noise reduction. He’s also the moderator of the Church Sound Forum here on PSW.

 

Posted by Keith Clark on 08/13 at 08:29 AM

Tuesday, August 12, 2014

In The Studio: Five Steps To Checking Your Drum Phase When Mixing

This article is provided by Bobby Owsinski.

 
One of the most important yet overlooked parts of a drum mix is checking the phase of the drums. This is because not only will an out-of-phase channel suck the low end out of the mix, but it will get more difficult to fix as the mix progresses.

I covered how to check the polarity of the drum mics a few weeks ago (here), but here’s an excerpt from my Audio Recording Basic Training book that covers a way to check the phase when you’re setting up for a mix as well.

—————————————-

A drum microphone can be out of phase due to a mis-wired cable or poor mic placement. Either way, it’s best to fix it now before the mix goes any further.

1) With all the drums in the mix, go to the kick drum channel and change the selection of the polarity or phase control. Is there more low end or less? Choose the selection with the most bottom end.

2) Go to the snare drum channel and change the selection of the polarity or phase control. Is there more low end or less? Choose the selection with the most bottom end.

3) Go to each tom mic channel and change the selection of the polarity or phase control. Is there more low end or less? Choose the selection with the most bottom end.

4) Go to each cymbal mic or overhead mic channel and change the selection of the polarity or phase control. Is there more low end or less? Choose the selection with the most bottom end.

5) Go to each room mic channel and change the selection of the polarity or phase control. Is there more low end or less? Choose the selection with the most bottom end.

You'd be surprised how many times flipping the phase on one or two of the drum mic channels results in a better, fuller-sounding kit, even on one that's well recorded.
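For the curious, the same test can be expressed offline in code. This is a hedged sketch, not from the book; the greedy channel-by-channel search simply mirrors the steps above, keeping whichever polarity puts more energy in the low end of the summed mix:

```python
import numpy as np
from scipy import signal

def low_end_energy(mix, fs=48_000, cutoff=150):
    # energy below ~150 Hz, the "is there more bottom end?" question
    sos = signal.butter(4, cutoff / (fs / 2), "low", output="sos")
    return float(np.sum(signal.sosfilt(sos, mix) ** 2))

def best_polarities(tracks):
    """tracks: dict of name -> 1-D numpy array, all the same length."""
    flips = {name: 1.0 for name in tracks}
    for name in tracks:                      # kick first, then snare, etc.
        for candidate in (1.0, -1.0):
            flips[name] = candidate
            mix = sum(flips[n] * t for n, t in tracks.items())
            if candidate == 1.0:
                keep_energy = low_end_energy(mix)
            elif low_end_energy(mix) <= keep_energy:
                flips[name] = 1.0            # original had more bottom end
    return flips
```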

Bobby Owsinski is an author, producer, music industry veteran and technical consultant who has written numerous books covering all aspects of audio recording. For more information be sure to check out his website and blog. And go here for more info and to acquire a copy of Audio Recording Basic Training.

Posted by Keith Clark on 08/12 at 04:03 PM

Ample Dynamics: Sound Reinforcement For Ray LaMontagne’s Supernova

Recently we had the pleasure of connecting with Jon Lemon, a noted, seasoned mix engineer who has handled front-of-house mix duties for numerous top artists, among them Beck, Janet Jackson, The Smashing Pumpkins, Nine Inch Nails, as well as his current client, American singer-songwriter Ray LaMontagne and the Supernova tour.

Lemon’s working with system tech Kyle Walsh and monitor engineer Ed Ehrbar, who share their thoughts here on the systems they’re utilizing, supplied by Eighth Day Sound (Cleveland and the UK) to reinforce the live performances on the current shed tour by the Grammy-winning LaMontagne and his talented band.

PSW/LSI: How did you connect with this artist and tour?

Jon Lemon: This is our first tour together, and it came about in kind of a funny “it’s a small industry” way. I was meeting James Gordon (managing director of DiGiCo), who happened to be with my friend Kevin Madigan, who was front of house for Ray on his last tour. Because he is with CSN, Kevin suggested to Ray’s tour manager (Daniel Herbst) that I’d be a good replacement. It ended up that I knew Ray’s manager, Michael McDonald, too, so it all just kind of worked out.

At the time, Ray wasn’t tied to any specific sound company. Since I had a good working relationship with Eighth Day Sound (Cleveland and the UK), and the production manager Mark Jones had also worked with Eighth Day extensively, we were in complete agreement that they would be a good fit for the tour, and it has been.

The sound team in “deep thought mode” at one of the tour’s DiGiCo SD10 consoles. Clockwise from top left: Monitor tech Mike Veres, front of house engineer Jon Lemon, system tech Kyle Walsh, and monitor engineer Ed Ehrbar.

You’re using Adamson Systems Energia for your main arrays. Is that another situation where you have a long history with the company?

Lemon: I’ve always respected Adamson’s philosophy as a company and have enjoyed using the Y18s over the years. However, I was completely prepared to go out with another well-known rig that I’d used before and was happy with.

But when I met with the folks at Eighth Day, we started talking about the Energia system – I had expressed interest in using it last year but it wasn’t fully complete. They’d just added some of the newer boxes (E12 12-inch full-range modules and E218 18-inch subwoofers) and felt it was a really powerful system that would provide the flexibility this tour needs (currently out in sheds, LaMontagne will be playing in theater venues this fall).

Adamson Energia arrays and subwoofers in place prior to a show on the tour.

Since I’d used the E15 system with The Smashing Pumpkins last year on several occasions in France and had a good experience, I thought it was worth checking out the finished item. So Eighth Day flew a system for evaluation and I played with it in the shop for a few days. As it turns out, it’s probably one of the best PAs I’ve ever heard.

What were you looking for from the rig?

Lemon: I knew the E15s were fine – as I said, I’d used them before – but I was curious about the E12s and the E218 subs. So I put them through their paces, first playing some program material that I know well. What I really wanted was to check out how even the E12 was through the full bandwidth. It was great and it also couples seamlessly with the E15 as an underhung for close into the stage. What a great product.

LaMontagne and band mates in concert.

As far as the E218s are concerned, I knew Ray would never need super heavy bass, so I was confident these would more than do the job. For my first listen we ground stacked them and they sounded great, more than enough low end for our purposes.

From there I pulled up some live programming through the console to see how much headroom was left. Ray is very dynamic on stage – his performance ranges from whispering and light strumming to a heavy rock sound. Headroom is essential. Driven with the Lab.gruppen amplifiers (16 PLM 20000Qs stacked eight per side) – which I’m a huge fan of – it was no problem.

It was obvious that the system was going to be terrific and a great PA for our needs. We went to rehearse in Portland, Maine, and had Ben Cabot from Adamson on hand. Colin Studybaker from Lab.gruppen was also on site making sure the amps and the Lake LM 44s were running correctly with the new presets. Both Adamson and Lab.gruppen were very supportive of our efforts, so I felt comfortable from the get-go.

Main system power and processing by Lab.gruppen and Lake.

What’s the typical load-in process on this tour?

Kyle Walsh: Depending on the riggers, we can be in and up within about two hours. At each venue, I come in and shoot the room/mark points, and then go back to the bus and put all of the information into the Adamson Blueprint AV software. It creates specifics for handling the system and we're good to go.

Mike and I (Mike Veres is the monitor tech with Eighth Day and an integral part of the setup team) dump the truck and organize and set angles as the gear comes in. By the time we get “hands,” the points are up and we’re ready to fly the arrays.

After that, we place fills and subs, and then run snakes. Jon steps in to build front of house around his DiGiCo SD10 console, followed by my alignment and tuning. I utilize (Rational Acoustics) Smaart v7 to assist the tuning process, making adjustments on the Lake filtering in the Lab.gruppen amplifiers via a tablet interface. So it’s usually a pretty easy morning, depending on the rigging.

Load-out is even easier, able to be done in an hour depending on the push. Not having to zero the boxes while loading is a real time saver – you can land it with one person if need be.

A perspective of the main system.

How’s the new Blueprint AV software working out?

Walsh: It’s great, pretty much set and forget. I worked with Ben (Cabot) in the beginning and we knocked everything out. We have a few presets that we use and the software is very straightforward.

Jon mentioned that the system needs to have some degree of flexibility – can you provide some specifics?

Walsh: Some of the venues have weight restrictions. Fortunately it’s easy to reconfigure the system. I’ve flown all E12s, all E15s, or a mixture of both, and even ground-stacked them in a few places. It all transitions very easily and sounds terrific no matter what configuration we put together.

Jon, you’ve mixed many tours with DiGiCo consoles, correct?

Lemon: Yes, I had one of the first D5s back in the day and haven’t really mixed on any other console since. I think the DiGiCo boards, in general, have a really good, almost analog sound – they have since the beginning. As a company, they’re extremely receptive to suggestions from engineers like me, which in turn leads to the consoles being very user friendly.

Assembling an Energia array comprised of E15 and new E12 modules.

I tend to choose the specific console model based upon the reality of what I'm going to use. Sure, tons of channels are great, but if you don't need them, go with something smaller. I love the SD7 and all its features, but the SD10 is exactly the same in audio quality and has more than enough features for the needs of this particular tour. So I have an SD10 at front of house and 192 racks on stage enabling us to run at 96 kHz, and it's equipped to run SoundGrid-compatible Waves plug-ins that provide me with an even wider assortment of tools for the mix.

So you’re a fan of plug-ins?

Lemon: Absolutely. I love them. The more you get into SoundGrid, the more you can create specific nuances for the mix. The CLA-76 compressor/limiter really suits Ray's vocals, so I use that along with the Renaissance DeEsser and C6 for plosives and sculpting. There are four other vocalists on stage – really good singers – and I use the same chain for them, too. From there the four vocals go into a group that's tweaked with the CLA-3A limiter and C6 multiband dynamic compressor, which produces a very cohesive vocal sound.

I set up a lot of group busing; for example, I have two group buses for drums, a normal one and another for parallel compression (with an SSL compressor), so I’ll use the Waves NLS (non-linear summer) plug-in to drive that. I actually apply the NLS on all of the bus/groups in my mix – it gives me a real analog feel.

Jon Lemon at the ready for sound check.

Are you carrying outboard gear?

Lemon: Yes. I always have a Waves MaxxBCL (bass enhancement, compression and level maximization) at the top of my rack. I haven’t done a gig without this piece of gear for as long as it’s existed. Granted, I can get a plug-in to handle the same thing, but I just love having those knobs available to grab at. There’s also an Avalon VT-737sp channel strip that allows me to quickly EQ or compress Ray’s vocal if needed, and again, there’s just something nice about having the box right there. And, there’s a Summit TLA100A (tube leveling amplifier) for bass – this is on the bass group (electric and upright), so I wanted something simple, effective and flexible.

Because Ray is performing old and new songs (two distinctly different vocal styles) on this tour, I need to step up the reverb on certain passages, so I'm carrying three Bricasti M7 stereo reverb processors MIDI'd up to the SD10. I use one exclusively for backing vocals and another for drums. They're really impressive pieces of gear. And that pretty much does the trick.

Ed, you’ve also got an SD10 for monitors, correct?

Ed Ehrbar: Yes, I like the SD10 because it’s sonically the same as the SD7, which is usually my console of choice, but the SD10 is suiting my needs on this tour completely. I’ve been able to pare down the size of the console budget without sacrificing any quality. Waves also has come a long way, and the ease of using the various plug-ins on DiGiCo consoles has greatly improved.

A classic summer tour view with Jon Lemon at front of house for LaMontagne.

What’s happening on stage?

Ehrbar: We’ve got d&b audiotechnik M2 wedges for the performers and a couple Sennheiser G3 IEM mixes for the techs. This show is very straightforward with great music and great players. You don’t really need much more.

Lemon: All of the vocals are handled with Sennheiser e 935 dynamic microphones, with drums captured by a selection of classics – beyerdynamic TG M88 and a Shure SM91 on kick, two Telefunken M80s on the snares with Sennheiser eb 414s underneath, Neumann 184s for hi-hat and cymbals, and Shure VP88 stereo condensers over the kit and drummer. On guitars there’s a mix of a Shure SM57, a Neumann TLM 103 and a Telefunken M80.

What’s standing out in your mind on the tour at this point?

Lemon: The only unusual thing is how bloody consistent the PA is night after night. I find that surprising. I’ve used a lot of big-name systems, and this is very sophisticated. Other than that I’m just very lucky – I’m working with great people and a terrific sounding band, which makes it even easier to make them sound good. I wouldn’t change a thing.

Posted by Keith Clark on 08/12 at 01:46 PM

Church Sound: Advice For New Technical Artists

This article is provided by CCI Solutions.

 
The life of a church tech is crazy. You're the first to arrive and the last to leave, you get few days off, and you do it all for less money than your secular counterparts.

Despite that, I believe tech ministry is one of the most amazing ministries you can serve in. I’ve recently been asked for advice on starting a career as a church tech. Those who’ve asked have varying skills, personalities, specialties and areas needing improvement, but all of them received the same advice from me.

First, church techs must become proficient in multiple, if not all, of the tech disciplines of audio, video and lighting. Every tech has specialties, and some are blessed with multiple specialties. Most churches, however, only have the budget to hire one tech, and that person has to lead them all.

Even in churches that can afford multiple, more specialized techs, being well-versed in all disciplines makes you more effective, more valuable and better equipped to handle possible issues that could come your team’s way.

Second, be open to learning from those more experienced or knowledgeable. Many young artists struggle with being teachable. There are some seasoned artists who struggle with this too.

Often we get a little bit of knowledge and we think we know it all. I’ve certainly had prideful moments, but when I’ve taken the opportunity to learn from those who know more than me, I benefit greatly and so does everyone around me.

The best techs I’ve met have this trait. The other day I spoke with a well respected and seasoned sound guy who was experimenting with a new technique he learned from someone else. There is always something more or new to learn in the tech field, the trick is to stay open to learning it.

Finally, create boundaries that will guard the hearts of you and your family. This may ruffle feathers, but it’s easy for ministry to overtake your life, mess with your family and kill your zeal for serving.

One of the hardest things for me to learn was that I had to create boundaries to protect myself and family. For every church that has amazing leaders who are protective of their people, there are more that are just trying to get by and ask too much of their staff.

Churches don’t burn people out on purpose, but ultimately it’s not the church’s responsibility to protect you and your family. A church’s top priority must be the whole ministry before each person. Your priority must first be you and your family, and then your ministry.

Learn every discipline you can, take advantage of opportunities to learn more, and have healthy boundaries. For more than 12 years now I’ve loved both serving in and leading technical arts ministries. I believe it’s a very noble calling, one that is increasingly critical in the church today. 
 

Duke DeJong has more than 12 years of experience as a technical artist, trainer and collaborator for ministries. CCI Solutions is a leading source for AV and lighting equipment, also providing system design and contracting as well as acoustic consulting. Find out more here. Also read more from Duke at dukedejong.com.

Posted by Keith Clark on 08/12 at 09:14 AM

Monday, August 11, 2014

Compact Advantages: The Latest On Smaller Consoles & Mixers

Digital consoles have certainly changed our workflows and the ways we mix.

No more cumbersome large-frame analog consoles that take four or more stagehands to move and set up.

No more promoters and event planners crying about how much room said large-frame analog console and associated outboard drive and effects racks are occupying at front of house.

No more large, heavy analog snake cables to coil up at the end of the gig now that we can run a single coax, fiber or Cat cable for networking.

The “no more” list seems endless, but the initial buy-in to this digital revolution took some big bucks as early digital consoles were quite pricey.

As with most electronic devices, over time prices come down and feature sets go up, and digital consoles are no exception. In fact, a new category of digital console sprung up a few years ago when manufacturers listened to those of us who didn't necessarily want or need a large console for our small- to medium-sized gigs.

These "compact" units often offer most or all of the bells and whistles of their bigger siblings, just with reduced fader counts; or they may be different consoles altogether with a reduced feature set. Even with fewer features, they still pack a punch, offering far more processing than we ever had in our analog racks, as well as more routing capability than a large-frame 48-channel analog console ever could.

The compact digital mixers we’re referring to here fall into the 16- to 32-channel size, but don’t let fader count or onboard inputs fool you, because many offer increased channel capabilities by using fader layers and adding additional stage I/O units. Some can even be cascaded together allowing desks to conveniently increase capabilities or even form a larger console.

Many of these smaller mixers aren’t skimping in the routing department either, as many have quite a few mix buses and matrix outputs. The same goes with processing and effects. Even the most miniscule units offer multiple effects and processing like compression and gating on each channel, again providing more capabilities than the largest tours had with analog desks just 10 or so years ago.

And if you don’t like the onboard effects and processing, many models offer the ability to integrate plug-ins that are software processors crafted to emulate the operation and results of modern or vintage outboard gear, and they can also be used to formulate new creations offering a different take on a particular effect or processor.

As we're tasked to do more live recording, manufacturers have answered the call by providing a slew of recording options, including recording to USB, external hard drives and tablets, as well as multitrack recording to a DAW via a dedicated path like USB or FireWire, or a digital network protocol like MADI or Dante. Not satisfied with just recording the event, some of these consoles can be configured into a mix-down desk at the push of a button, delivering double duty as both a recording and live audio desk.

Topping it off, the majority of compact mixers can also be remotely operated via computer and/or tablet. Wired or wireless, these remote devices provide access to mix functions that allow a user to move around the venue rather than being anchored to the control surface. Some even accommodate multiple tablets (or smartphones), allowing performers to control their own monitor mixes.

With all these capabilities and more, it’s no wonder that compact consoles are a big hit. Enjoy our Real World Gear Photo Gallery Tour of a variety of compact consoles.

Posted by Keith Clark on 08/11 at 06:26 PM

In The Studio: Audio Effects Explained (Includes Audio Samples)

This article is provided by Audio Geek Zine.

 
A while ago I mentioned using modulation effects to help create movement within a mix. Here, I’ll explain the different types of modulation effects that we have available for mixing, and then move along to gates, compression, EQ, delay, reverb, de-essing, and a whole lot more.

The modulation effects I’ll be discussing include:

—Tremolo
—Vibrato
—Flanging
—Phasing or Phase Shifting
—Chorus

I’ll start with some easy ones then move on to the harder to explain—but more commonly used—effects.

All of them are built around a low-frequency oscillator, more commonly referred to as just an LFO. An LFO is a signal, usually below 20 Hz, that creates a pulsating rhythm rather than an audible tone.

These are used to manipulate synthesizer tones and, as you will see, to create various modulation effects. All of the effects listed here use a sine wave as the LFO wave shape.

Tremolo is an effect where the LFO modulates the volume of a signal. The depth controls the amount of attenuation, and the rate adjusts the speed of the LFO cycles.

Listen to an example of Tremolo
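In code, tremolo is about as simple as effects get. A minimal sketch (Python; the rate and depth values are illustrative, not canonical):

```python
import numpy as np

def tremolo(x, fs, rate_hz=5.0, depth=0.5):
    t = np.arange(len(x)) / fs
    lfo = 0.5 * (1.0 + np.sin(2 * np.pi * rate_hz * t))  # 0..1 sine LFO
    gain = 1.0 - depth * lfo       # depth=0 leaves it dry; depth=1 fully gates
    return x * gain

fs = 48_000
t = np.arange(fs) / fs
out = tremolo(np.sin(2 * np.pi * 440 * t), fs)
```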

Vibrato is an effect where the LFO modulates the pitch of a signal. This is accomplished by delaying the incoming sound and continually changing the delay time. The effect is usually not mixed in with the dry signal. The depth control adjusts the maximum delay time, and rate controls the LFO cycle.

Listen to an example of Vibrato

Flanging is created by mixing a signal with a slightly delayed copy of itself, where the length of the delay is constantly changing. Historically this was accomplished by recording the same sound to two tape machines, playing them back at the same time while pushing down lightly on one of the reels, slowing down one side. The edge of a reel of tape is called the flange, hence the name of the effect.

These days we accomplish the same effect in a much less mechanical way. Essentially the signal is split, one part gets delayed and a low frequency oscillator keeps the delay time constantly changing. Combining the delayed signal with the original signal results in comb filtering, notches in the frequency spectrum where the signal is out of phase.

We usually have depth and rate controls. The depth controls how much of the delayed signal is added to the original, and the rate controls how fast it will change.

Phasing (or phase shifting) is a similar effect to flanging, but is accomplished in a much different way. Phasers split the signal: one part goes through a chain of all-pass filters swept by an LFO, and is then recombined with the original sound.

An all-pass filter lets all frequencies through without attenuation, but shifts the phase of various frequencies. It actually is delaying the signal, but not all of it at the same time. This time, the LFO changes which frequencies are affected.

Phase shifters have two main parameters: Sweep Depth, which is how far the notches sweep up and down the frequency range; and Speed or Rate, which is how many times the notches are swept up and down per second.

Listen to an example of Phasing
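Here is a hedged sketch of a basic four-stage phaser: first-order all-pass sections whose break frequency is swept by the LFO, recombined 50/50 with the dry signal. Real phaser designs vary; this is just one way to realize the idea:

```python
import numpy as np

def phaser(x, fs, rate_hz=0.3, stages=4):
    n = np.arange(len(x))
    # sweep the all-pass break frequency between ~200 Hz and ~2 kHz
    f_break = 200 * 10 ** (0.5 * np.sin(2 * np.pi * rate_hz * n / fs) + 0.5)
    tan_w = np.tan(np.pi * f_break / fs)
    g = (tan_w - 1) / (tan_w + 1)          # first-order all-pass coefficient
    y = x.copy()
    for _ in range(stages):
        out = np.zeros_like(y)
        z = 0.0
        for i in range(len(y)):
            # first-order all-pass: out = g*in + state; state = in - g*out
            out[i] = g[i] * y[i] + z
            z = y[i] - g[i] * out[i]
        y = out
    return 0.5 * (x + y)                   # 50/50 dry/wet makes the notches

fs = 48_000
rng = np.random.default_rng(0)
swooshed = phaser(rng.normal(0, 0.1, fs), fs)
```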

Chorus is created in nearly the same way as flanging; the main difference is that chorus uses a longer delay time, somewhere between 20-30 ms, compared to flanging's 1-10 ms. It doesn't have the same sort of sweeping characteristic that flanging has; instead, it affects the pitch.

Again the LFO is controlling the delay time. The depth control affects how much the total delay time changes over time. Changing the delay time up and down results in slight pitch shifting.

Listen to an example of Chorus
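Since vibrato, flanging and chorus are all LFO-modulated delay lines, one sketch covers all three; only the delay range and wet/dry mix change (the parameter values below are assumptions for illustration):

```python
import numpy as np

def mod_delay(x, fs, base_ms=5.0, depth_ms=3.0, rate_hz=0.5, mix=0.5):
    n = np.arange(len(x))
    lfo = 0.5 * (1 + np.sin(2 * np.pi * rate_hz * n / fs))
    delay = (base_ms + depth_ms * lfo) * fs / 1000.0   # delay in samples
    read = np.clip(n - delay, 0, len(x) - 1)           # fractional read position
    i = read.astype(int)
    frac = read - i
    j = np.minimum(i + 1, len(x) - 1)
    delayed = (1 - frac) * x[i] + frac * x[j]          # linear interpolation
    return (1 - mix) * x + mix * delayed

fs = 48_000
t = np.arange(2 * fs) / fs
guitar = np.sin(2 * np.pi * 220 * t)
flanged = mod_delay(guitar, fs, base_ms=3.0, depth_ms=2.0, rate_hz=0.25)
chorused = mod_delay(guitar, fs, base_ms=25.0, depth_ms=5.0, rate_hz=1.5)
vibrato = mod_delay(guitar, fs, base_ms=5.0, depth_ms=3.0, rate_hz=6.0, mix=1.0)
```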

You may have noticed that the majority of the effects here involve delay. You can recreate most of them using a digital delay with rate and depth controls, such as Avid's Mod Delay II.

Reverb

Next, let's cover the different reverb types and methods; I'll also explain the most important parameters. I'll mostly be talking about the kinds you'll be using when mixing and what's available as plug-ins.

Digital Reverb Technology
There are two ways of creating a reverb effect in the digital world: by using mathematical calculations to create a sense of space, which is called algorithmic reverb, and by capturing an impulse response (a snapshot of a real space) and applying it to the sound, which is called convolution reverb.

Reverb is essentially a series of delayed signals, and algorithmic reverbs work pretty well to recreate this. Most reverb plugins, stomp boxes, and racks are algorithmic style.

When you want really realistic reverb, then convolution can not be beat. To create an impulse response the creator goes into a room and records the sound of a starter pistol going off and the natural reverb of the room.

The recordings are then deconvolved in software, which removes the sound of the starter pistol from the recording, leaving only the reverb.

Sine wave sweeps can also be used for the impulse creation. This is a more accurate way of creating reverb because it also captures the character of the room, and the way different frequencies react in the room.

The same process can be used to create impulse responses of speaker cabinets, guitar amps, vintage rack gear or basically anything that can make a sound.
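A minimal convolution-reverb sketch (Python, with a synthetic exponentially decaying noise burst standing in for a measured impulse response):

```python
import numpy as np
from scipy.signal import fftconvolve

fs = 48_000
rng = np.random.default_rng(0)

# Fake 1.5-second impulse response: noise decaying ~60 dB over its length
ir_t = np.arange(int(1.5 * fs)) / fs
ir = rng.normal(0, 1, ir_t.size) * np.exp(-6.9 * ir_t / 1.5)

dry = np.zeros(fs)
dry[0] = 1.0                               # a click, for audition
wet = fftconvolve(dry, ir)                 # the reverberated signal

mix = 0.7 * np.pad(dry, (0, wet.size - dry.size)) + 0.3 * wet
```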

Analog Reverb Types
In the analog world there are a few other ways, most of which will not be available to the home studio musician, except for their recreations in plug-ins. Analog reverbs come in three flavors—plate, spring, and chamber.

Invented in 1957 by EMT of Germany, the plate reverb consists of a thin metal plate suspended in a 4-foot by 8-foot soundproofed enclosure. A transducer similar to the voice coil of a cone loudspeaker is mounted on the plate to cause it to vibrate. Multiple reflections from the edges of the plate are picked up by two (for stereo) microphone-like transducers. Reverb time is varied by a damping pad, which can be pressed against the plate to absorb its energy more quickly.

This is what a plate reverb sounds like: platereverb.mp3

A spring reverb system uses a transducer at one end of a spring and a pickup at the other, similar to those used in plate reverbs, to create and capture vibrations within a metal spring. You'll find these in many guitar amps, but they were also available as standalone effects boxes. They were a lot smaller than plate reverbs and cost a lot less.

This is a spring reverb: springverb.mp3

The first reverb effects used a real physical space as a natural echo chamber. A loudspeaker would play the sound, and then a microphone would pick it up again, including the effects of reverb. Although this is still a common technique, it requires a dedicated soundproofed room, and varying the reverb time is difficult.

This is a chamber: Chamber.mp3

These three types of reverb are all available in digital form in addition to a few other styles simulating real spaces, and others not found in nature.

Natural Reverb Types

Room – A room is anything from a classroom to conference room. There is generally a short decay time of about 1 second: room.mp3

Hall – A hall is larger than a room, it could be from a small theatre with 1 second of decay up to a large concert hall with a decay time up to 2.5 seconds: hall.mp3

Church – The decay time of a church can vary between 1.5 seconds to 2.5 seconds: church.mp3

And Cathedral decay times can go above 3.5 seconds: cathedral.mp3

Remember, the sound of a room is not just the decay time. The materials it was built with make a huge impact on the character of the sound. Stone, wood, metal and tile all sound drastically different.

There are also a few other types of reverb that are not found in nature: Non-Linear, Gated, and Reversed.

Non-Linear has a decay that doesn’t obey the laws of physics: non-lin.mp4

Gated was a popular effect in the 1980s, but it’s sounding pretty cheesy these days: gated.mp3

Reversed sounds like this: reverse.mp3

Reverb Parameters

Reverb Type – What kind of reverb emulation it is. There are Halls, Rooms, Chambers, Plates, etc…

Size – What the physical size of the space is. This can range from small through large.

Diffusion – How far apart the reflections are from each other.

Pre-Delay – Sets a time delay between the direct signal and the start of the reverb.

Decay Time – Also known as RT60: how long it takes for the reverb to decay by 60 decibels (see the sketch after this list).

Mix (Wet/Dry) – Sets the balance between the dry signal and the effect signal. When you have the reverb on an insert, you need to adjust the wet/dry ratio; when you are sharing the reverb in a send-and-return configuration, you want the mix to be 100 percent wet.

Early Reflection Level – Controls the level of the first reflection you hear. Early reflections help determine the dimensions of the room.

High Frequency Roll Off – Controls the decay of high frequencies (as found in natural reverb).
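In an algorithmic reverb built from recirculating delay lines, the decay time maps directly to the feedback gain. Here is a minimal sketch of the standard relationship (the 30 ms delay and 2-second RT60 are arbitrary example values):

```python
def feedback_gain(delay_seconds: float, rt60_seconds: float) -> float:
    """Feedback gain for a recirculating delay line so the signal
    decays by 60 dB in rt60_seconds (the standard Schroeder relation)."""
    # Each pass through the loop loses 20*log10(g) dB; after
    # rt60/delay passes the total loss must reach 60 dB.
    return 10 ** (-3.0 * delay_seconds / rt60_seconds)

# A 30 ms delay line tuned for a 2-second, church-like decay:
print(feedback_gain(0.030, 2.0))   # about 0.90
```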

Tips For Using Reverb

—Using pre-delay can help keep your vocals up front while still giving them space.

—Try to keep decay times short for faster tempo music.

—Filter out low frequencies before the reverb to keep it from sounding muddy.

—Try de-essing the reverb to reduce harsh sibilance.

EQ & Filtering

The terms EQ and filter seem to mean different things.

Filtering is generally what we say when we want to remove frequencies, and EQ is when we want to shape the sound by boosting and cutting.

The truth is, it’s all filtering.

Parameters

Cutoff – Selects the frequency. This is measured in hertz (Hz).

Gain – How much boost or attenuation at the cutoff frequency. This is measured in decibels (dB).

Shape or Type – Chooses what kind of filter you will be using. The filter shapes are high- and low-pass, band-pass, peaking, notch, and shelf.

Quality or Width – Usually just referred to as Q, this sets the shape of the EQ curve and how much of the surrounding frequencies will be affected.

What does an equalizer actually do?
An equalizer adjusts the balance of frequencies across the audible range. EQ is an incredibly powerful tool for crafting a mix.

Filter Shapes

A low-pass filter, also known as a high-cut filter, removes frequencies above the cutoff. A high-pass filter, or low-cut filter, does the opposite: it removes everything below the cutoff.

Low-Pass Filter Clip (click to play)
High-Pass Filter Clip (click to play)

When you use both of these filter types at once, it’s called a band-pass filter; the top and bottom frequencies are removed. With these three filter shapes, Q affects the steepness of the filter.

Band-Pass Filter Clip (click to play)

A notch filter is the opposite of a band-pass filter: it lets all frequencies through except for a narrow notch in the spectrum, which is attenuated greatly. The Q affects the width of the notch.

Notch Filter Clip (click to play)

There are two filter shapes that allow you to control how much the frequencies will be attenuated.

A low-shelf EQ will boost or cut anything below the cutoff, while a high-shelf EQ gives you boost or cut above the cutoff. You can choose how steep the slope is with the Q control.

Low-Shelf Clip (click to play)
High-Shelf Clip (click to play)

A peaking filter is also known as a bell curve EQ; you can boost or cut any frequency with a peak at the cutoff and a slope on either side.

Peaking Filter Clip (click to play)
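Here is a rough sketch of how a few of these shapes can be built digitally, using SciPy’s filter design helpers plus the well-known RBJ “Audio EQ Cookbook” formula for the peaking bell. The sample rate, frequencies, and gains are arbitrary example values:

```python
import numpy as np
from scipy.signal import butter, iirnotch, sosfilt, lfilter

fs = 44100  # sample rate in Hz

# High-pass: remove everything below 80 Hz (rumble, mud).
hp_sos = butter(4, 80, btype="highpass", fs=fs, output="sos")

# Notch: surgically attenuate 60 Hz hum; higher Q means a narrower notch.
b_n, a_n = iirnotch(60, Q=30, fs=fs)

def peaking_biquad(f0, gain_db, q, fs):
    """Peaking (bell) EQ coefficients from the RBJ Audio EQ Cookbook:
    boost or cut of gain_db centered at f0, with width set by q."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    return b / a[0], a / a[0]

# Chain the filters over one second of white noise.
x = np.random.randn(fs)
y = sosfilt(hp_sos, x)                        # high-pass
y = lfilter(b_n, a_n, y)                      # notch out the hum
b_p, a_p = peaking_biquad(3000, 4.0, 1.0, fs)
y = lfilter(b_p, a_p, y)                      # +4 dB bell at 3 kHz
```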

EQ Designs

There are two main types of EQ designs: graphic and parametric.

Graphic equalizers have multiple peaking filters at fixed, overlapping frequency bands, each with adjustable gain. These are most commonly seen on consumer music players, but in professional audio they are very useful for mixing live music.

Parametric equalizers give you the most flexibility: you can choose the shape, cutoff, gain, and quality of each band. This is the type you will be using when mixing; nearly all plug-in equalizers are parametric.

EQ Usage Tips

—Use high-pass filters to remove unnecessary low frequencies from your tracks.

—Use notch filters to remove unwanted noises from a recording.

—Get rid of the frequencies you don’t need before boosting the ones you do. It may not be your first instinct when EQing, but it works a lot better.

—High Q values will cause ringing or oscillation when boosted; this is not usually something you want to happen.

—Adjust the EQ so that the level remains constant whether engaged or bypassed; it’s too easy to be fooled into thinking louder is better.

Some of my favorite equalizer plug-ins:

Apulsoft ApQualizer: Very clean EQ with 64 bands, frequency analyzer and complete control.

Stillwell Audio Vibe-EQ: A vintage-style EQ that has some nice coloration; I like it most on electric guitars.

Avid EQ III: Standard included Pro Tools plug-in does the job 99 percent of the time.

Delay

In its simplest form, a delay is made up of very few components: an audio input, a recording device, a playback device and an audio output.

Tape Delay
Early delay processors, such as the Echosonic, Echoplex and the Roland Space Echo, were based on analog tape technology. They used magnetic tape as the recording and playback medium.

Some of these devices adjusted delay time by adjusting the distance between the playback and record heads, and others used fixed heads and adjustable tape speed.

Analog Delay
Analog delay processors became available in the 1970s and used solid state electronics as an alternative to the tape delay.

Digital Delay
In the late 1970s, inexpensive digital technology led to the first digital delays. Digital delay systems sample the input signal through an analog-to-digital converter, record the signal into a storage buffer, and then play back the stored audio based on parameters set by the user. The delayed (“wet”) output may be mixed with the unmodified (“dry”) signal at the output.

Software Delay
And these days you’ll most likely be using plug-ins for your delay processing. The same principles apply, just without the moving parts. Additionally, they can sound pretty close to any of the other styles, or be totally unique like OhmBoyz below.

[Image: Ohmforce OhmBoyz delay plug-in]

Effect Parameters
OK, so that’s it for the history of delay processors. Now let’s move on to the parameters.

Delay Time—How long before the sound is repeated

Tempo Sync—Each repeat of the delay will be in time with the song, in quarter notes or eighth notes, for example

Tap Tempo—Tap this button along with the song to set the delay time

Feedback—Output is routed back to input for additional repeats

Mix/Wet-Dry—Mix of original signal with delayed signal

Rate—LFO rate to change delay time

Depth—Range of delay time change for LFO

Filter—Usually a high-cut filter, so each repeat sounds darker

Stereo delays often have separate Left and Right controls.
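A minimal NumPy sketch of the core loop these parameters control, assuming nothing beyond a mono float signal (the tempo, feedback, and mix values are arbitrary):

```python
import numpy as np

def feedback_delay(x, fs, delay_time, feedback=0.4, mix=0.35):
    """Basic delay line: each echo is the previous one scaled by
    `feedback`; `mix` sets the wet/dry balance."""
    d = int(delay_time * fs)            # delay time in samples
    y = x.copy()
    for n in range(d, len(x)):
        y[n] += feedback * y[n - d]     # feedback creates repeats of repeats
    return x + mix * (y - x)            # blend the echoes against the dry

fs = 44100
# Tempo sync by hand: a quarter note at 120 BPM lasts 60/120 = 0.5 s.
quarter_note = 60.0 / 120.0
x = np.zeros(2 * fs); x[0] = 1.0        # an impulse as a stand-in test signal
y = feedback_delay(x, fs, quarter_note)
```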

What Does It Do?
Now, what sort of sounds can we get with delay processors?

An automatic double tracking effect can be accomplished by taking a mono signal, running it into a stereo delay, leaving the left side unprocessed, and putting a very short delay on the right side. Have a listen here.

A slap back or slap delay has a longer delay time, from about 75 to 200 milliseconds. This sort of delay was characteristic of 1950s rock ’n’ roll records. Listen to it on guitar here.

A ping pong delay uses two separate delay processors that feed into each other. First the dry signal is heard; then the signal is sent to the left side, this delayed signal is sent to the right side, and the right side is sent back to the left (see the sketch below).
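A sketch of that cross-feedback routing in NumPy (a simplified model; the feedback value is arbitrary):

```python
import numpy as np

def ping_pong(x, fs, delay_time, feedback=0.5):
    """Ping-pong delay: left and right delay lines feed each other,
    so the echoes alternate between the two sides."""
    d = int(delay_time * fs)
    left = np.zeros(len(x))
    right = np.zeros(len(x))
    for n in range(d, len(x)):
        left[n] = x[n - d] + feedback * right[n - d]  # dry feeds left first
        right[n] = feedback * left[n - d]             # left echoes cross over
    # Dry signal up the middle, echoes bouncing left and right.
    return np.stack([x + left, x + right], axis=1)
```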

Chorus, flanging and phasing can all be created with delays as well. Listen to The Home Recording Show #11 or read about it here for more on that.

Tips On Using Delay

—On vocals, try using a short delay instead of reverb, sometimes it works better.

—Set up a ping pong delay after a large reverb, so the reverb seems to get steadily wider.

—Be careful with that feedback control, things can get very loud, very quickly.

Gates, Comps, De-Essers

A noise gate is a form of dynamics processing used to increase dynamic range by lowering the noise floor. It’s an excellent tool for removing hum from an amp, cleaning up drum tracks between hits, and reducing background noise in dialog; it can even be used to reduce the amount of reverb in a recording.

The common parameters for a noise gate are:

Threshold – Sets the level at which the gate will open; when the signal level drops below the threshold, the gate closes and mutes the output.

Attack – How fast the gate opens.

Hold – How long before the gate starts to close.

Release – a.k.a. decay—how long until the gate is fully closed again.

Range – How much the gated signal will be attenuated.

Sidechain – For setting an alternate signal for the gate to be triggered from, sometimes called a Key.

Filters – The filters section allows you to fine-tune the sidechain signal.

What’s It For?

The normal use for gating is removing background noise; it’s an essential tool for clean dialog recording. Some other uses for gates are gated reverb and using the sidechain to activate other effects.

How To Set A Noise Gate

To set up a gate properly, start with the attack, hold, and release as fast as possible. Set the range to maximum, and the threshold to 0 dB.

Start lowering the threshold until the sound starts to get chopped up by the gate. Slow down the attack time to remove any unnatural popping. Adjust the hold and release times to get a more natural decay.

If you don’t want the background noise turned down as much, you can reduce the range control.
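A stripped-down sketch of the gate logic described above in NumPy (no hold control, and a crude peak-follower envelope; all constants are arbitrary example values):

```python
import numpy as np

def noise_gate(x, fs, threshold_db=-40.0, attack_ms=1.0,
               release_ms=50.0, range_db=-60.0):
    """Minimal noise gate: opens when the envelope crosses the
    threshold, otherwise attenuates the signal by range_db."""
    thresh = 10 ** (threshold_db / 20)
    floor = 10 ** (range_db / 20)          # how far down the gate closes
    a_att = np.exp(-1.0 / (attack_ms / 1000 * fs))
    a_rel = np.exp(-1.0 / (release_ms / 1000 * fs))
    env, gain = 0.0, floor
    y = np.empty_like(x)
    for n, s in enumerate(x):
        env = max(abs(s), env * a_rel)     # crude peak-follower envelope
        target = 1.0 if env > thresh else floor
        coeff = a_att if target > gain else a_rel
        gain = coeff * gain + (1 - coeff) * target   # smooth the gain moves
        y[n] = s * gain
    return y
```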

Other Uses

Gated reverb was a popular effect in the 80s, mostly because of Phil Collins records.

To set it up, take your drum tracks and send them to a stereo reverb with a large room preset. After the reverb, insert a stereo gate. Adjust the gate settings so that the reverb is cut off before the next hit.

In this example you’ll hear the unprocessed drums, then with reverb, then adding the gate. (Listen)

Favorite Gates

The classic Drawmer DS201 is a hardware noise gate that is hard to beat.

The gate on the Waves SSL E-Channel is good, simple and effective.

The free ReaGate VST is quite good as well.

Noise gates aren’t very much fun to talk about, but they are a powerful tool that you need to know how to use.

Compression

Compression is an effect that can take a while to understand, because the results are not always as obvious as those of other effects. To explain it as simply as possible: when a signal goes into a compressor, it gets turned down. That’s it. How it does this, and how quickly and smoothly, is what makes each one unique.

Compressor Controls

Most compressors will have the same set of controls:

The Threshold control sets what level will start the gain reduction.

The Ratio sets how much gain reduction is applied: with a 4:1 ratio, for every 4 dB of signal above the threshold, only 1 dB will be allowed through.

The Attack control sets how fast the compressor reacts to peaks.

The Release control sets how fast the compressor recovers as the signal level drops.

Makeup gain is used to bring up the overall level of the compressor after the peaks have been reduced.

Sometimes there is an auto makeup gain control, which will increase the output level to match the gain reduction.

Some compressors have a knee control, which starts compressing at a lower ratio as the threshold is approached; this is very helpful for more natural compression.

Compressors will usually have a few meters: input level, gain reduction, and output level. If there are only two meters, there is usually a switch to change the output meter to show gain reduction. Gain reduction meters go in the opposite direction of the level meters.
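To tie these controls together, here is a minimal feed-forward compressor sketch in NumPy (hard knee and simplified level detection; the default settings are arbitrary):

```python
import numpy as np

def compress(x, fs, threshold_db=-20.0, ratio=4.0,
             attack_ms=5.0, release_ms=100.0, makeup_db=0.0):
    """Feed-forward compressor: levels above the threshold are scaled
    down by the ratio; attack and release smooth the gain changes."""
    a_att = np.exp(-1.0 / (attack_ms / 1000 * fs))
    a_rel = np.exp(-1.0 / (release_ms / 1000 * fs))
    level_db = 20 * np.log10(np.maximum(np.abs(x), 1e-9))
    # Static curve: 4 dB over the threshold comes out as 1 dB over (4:1).
    over = np.maximum(level_db - threshold_db, 0.0)
    target_gr = -over * (1 - 1 / ratio)        # gain reduction in dB
    gr = 0.0
    y = np.empty_like(x)
    for n in range(len(x)):
        coeff = a_att if target_gr[n] < gr else a_rel
        gr = coeff * gr + (1 - coeff) * target_gr[n]
        y[n] = x[n] * 10 ** ((gr + makeup_db) / 20)
    return y
```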

Setting A Compressor

This is my method for setting a compressor:

I choose a ratio depending on how aggressive I want the compression to be. The type of sound I’m using it on determines this: softer sounds like voice get lower ratios, bass gets a medium ratio and drums get a higher ratio.

I turn the attack and release controls to the fastest setting, and make sure the meter is showing gain reduction.

Then I lower the threshold level until I’m getting about 1 decibel of gain reduction on the peaks.

From there I’ll fine-tune the attack and release for whatever sounds most natural, and use the makeup gain to match the output with the input level.

If I want more compression, I’ll lower the threshold more.

Here’s an example of some electric guitar with and without compression. I’m using more compression than I normally would on this so that the effect will be easier to hear. It should be pretty obvious that the compressor has evened out the dynamics of the performance. (Listen)

Compression can bring out more details in a performance, but it will also bring up background noise, especially at higher ratios; that’s not usually what you want.

A slow attack will let some of the transient through; you can use this when you want to increase the punch of drums. You want to compress the sustain of the drum, then use the makeup gain to make the drums larger than life.

In this example there is an ambient room mic for a drum kit. First you will hear it without compression, then with (actually with a ton of compression), and I’ll increase (slow down) the attack time with each loop. Notice the increased bigness of the drums, and how the transients get through and keep it punchy. (Listen)

Limiter

A limiter is a compressor whose output stays at or below a specific level regardless of the input level. It only turns down, remember. The compression ratio starts at 10:1 and can go up to infinity. Limiters need very fast attack and release times to be effective.

A brick-wall limiter (aka maximizer) is a mastering tool used to increase the volume of a song as much as possible. Brick-wall limiters have an infinite ratio and will not let anything past the threshold. This type of limiter has two main controls, one for threshold and one for the maximum output level.

With these you basically set the maximum output level, something like -0.02 dB, and then crank the threshold to crush everything and make it sound really loud and obnoxious (like Death Magnetic). The misuse of the brick-wall limiter is often associated with the loudness war and with compression in general.
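The crudest possible sketch of that behavior in NumPy; note that real maximizers smooth the gain with lookahead rather than flat-topping the waveform like this (the drive and ceiling values are arbitrary):

```python
import numpy as np

def brickwall(x, drive_db=12.0, ceiling_db=-0.1):
    """Crude maximizer: apply input gain, then refuse to let anything
    past the ceiling (infinite ratio, zero attack)."""
    drive = 10 ** (drive_db / 20)
    ceiling = 10 ** (ceiling_db / 20)
    return np.clip(x * drive, -ceiling, ceiling)
```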


Multi-Band

Another common mastering tool is the multi-band compressor.

A multi-band compressor is actually four compressors in one. The frequency range is split into four bands, like an equalizer: low, low-mid, high-mid, and high. This can give you much smoother compression with a lot more control.

De-Esser

There is one more type of dynamics processor: the de-esser. A de-esser is designed to reduce the harsh “ess” sounds in a voice. The compression works on a single frequency or frequency range rather than the entire input signal. These are generally used for voice processing, but you might find some other uses for it.
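A minimal sketch keyed from a band-passed sidechain; note this version ducks the whole signal when the sibilance band gets loud, whereas many designs attenuate only that band (the band edges, threshold, and reduction amount are arbitrary):

```python
import numpy as np
from scipy.signal import butter, sosfilt

def deess(x, fs, band=(5000, 9000), threshold_db=-30.0, reduction_db=-8.0):
    """Minimal de-esser: watch the level of the sibilance band only,
    and duck the signal whenever that band gets loud."""
    sos = butter(4, band, btype="bandpass", fs=fs, output="sos")
    sibilance = sosfilt(sos, x)              # sidechain hears only the esses
    thresh = 10 ** (threshold_db / 20)
    duck = 10 ** (reduction_db / 20)
    env = np.abs(sibilance)
    k = max(int(0.002 * fs), 1)              # ~2 ms smoothing window
    env = np.convolve(env, np.ones(k) / k, mode="same")
    gain = np.where(env > thresh, duck, 1.0)
    return x * gain
```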

Recommended Plug-Ins

Simple compressor: Massey CT4

Advanced compressor: Avid Smack!

Master limiter: Massey L2007

Multi-band compressor: Wave Arts MultiDynamics 5

De-esser: Massey De-esser

Distortion

I find it hard to think about the electric guitar without thinking about distortion. There was a time when electric guitars were always clean. Hard to imagine now.

Traditionally, distortion was an unwanted artifact in amplifier design. Distortion only occurred when the amp was damaged or overdriven. Possibly the first intentional use of distortion was on the 1951 recording of “Rocket 88” by Ike Turner and the Kings of Rhythm.

Chuck Berry liked to use small tube amps that were easy to overdrive for his trademark sound and other guitarists would intentionally damage their speakers by poking holes in them, causing them to distort.

Leo Fender then started designing amps with some light compression and slight overdrive, and Jim Marshall started to design the first amps with significant overdrive. That sound caught on quickly, and by the time Jimi Hendrix was using Roger Mayer’s effects pedals, distortion would forever be associated with the electric guitar.

Not Just For Guitars

When you’re recording and mixing, you can use a bit of distortion to give any sound more edge, grit, energy and excitement. Drums, vocals, bass, samples – they can all benefit from a touch of distortion at times. Understanding the different ways distortion can be created and how they sound can help you get better sounds and make better recordings.

So What Is Distortion?

The word distortion means any change in the amplified waveform from the input signal. In the context of musical distortion, this means clipping the peaks off the waveform. Because both valves and transistors behave linearly only within a certain voltage region, distortion circuits are finely tuned so that the average signal peak just barely pushes the circuit into the clipping region, resulting in the softest clip and the least harsh distortion.

Because of this, as the guitar strings are plucked harder, the amount of distortion and the resulting volume both increase, and lighter plucking cleans up the sound. Distortion adds harmonics and makes a sound more exciting.

Amp Distortion—Tube & Solid State

Valve Overdrive. Before transistors, the traditional way to create distortion was with vacuum valves (also known as vacuum tubes). A vacuum tube has a maximum input voltage determined by its bias and a minimum input voltage determined by its supply voltage.

When any part of the input waveform approaches these limits, the valve’s amplification becomes less linear, meaning that smaller voltages get amplified more than the large ones. This causes the peaks of the output waveform to be compressed, resulting in a waveform that looks “squashed.”

This is known as “soft clipping,” and it generates even-order harmonics that add to the warmth and richness of the guitar’s tone. If the valve is driven harder, the compression becomes more extreme and the peaks of the waveforms are clipped, which adds odd-order harmonics, creating a “dirty” or “gritty” tone.

Valve distortion is commonly referred to as overdrive, as it is achieved by driving the valves in an amplifier at a higher level than can be handled cleanly. Multiple stages of valve gain/clipping can be “cascaded” to produce a thicker and more complex distortion sound.

In some modern valve effects, the “dirty” or “gritty” tone is actually achieved not by high voltage, but by running the circuit at voltages that are too low for the circuit components, resulting in greater non-linearity and distortion. These designs are referred to as “starved plate” configurations.

Transistor Clipping. On the other hand, transistor clipping stages behave far more linearly within their operating regions, faithfully amplifying the instrument’s signal until the input voltage falls outside that region, at which point the signal is clipped abruptly and without compression; this is “hard clipping,” or limiting. This type of distortion tends to produce more odd-order harmonics.

Electronically, it is usually achieved by either amplifying the signal to a point where it must be clipped to the supply rails, or by clipping the signal across diodes. Many solid state distortion devices attempt to emulate the sound of overdriven vacuum valves.
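A small NumPy sketch contrasting the two transfer curves; tanh is a common stand-in for a valve-style curve rather than a model of any particular amp, and the bias term supplies the asymmetry that produces even-order harmonics (all values arbitrary):

```python
import numpy as np

def hard_clip(x, limit=0.3):
    """Transistor-style hard clipping: flat-tops the waveform at the rails."""
    return np.clip(x, -limit, limit)

def soft_clip(x, drive=4.0, bias=0.2):
    """Valve-flavored soft clipping: tanh rounds the peaks gradually, and
    the bias makes the curve asymmetric, adding even-order harmonics."""
    y = np.tanh(drive * (x + bias)) - np.tanh(drive * bias)
    return y / np.tanh(drive)

t = np.arange(44100) / 44100.0
sine = np.sin(2 * np.pi * 220 * t)   # a 220 Hz test tone
warm = soft_clip(sine)               # compressed, rounded peaks
harsh = hard_clip(sine)              # abruptly flattened peaks
```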

Distortion Pedals

Distortion. While the general purpose is to emulate classic “warm-tube” sounds, distortion pedals can be distinguished from overdrive pedals in that the intent is to provide players with instant access to the sound of a high-gain Marshall amplifier, such as the JCM800, pushed past the point of tonal breakup and into the range known to electric guitarists as “saturated gain.”

Some guitarists will use these pedals along with an already distorted amp or along with a milder overdrive effect to produce radically high-gain sounds. Although most distortion devices use solid-state circuitry, some “tube distortion” pedals are designed with preamplifier vacuum tubes. In some cases, tube distortion pedals use power tubes or a preamp tube used as a power tube driving a built-in “dummy load.”

The Boss DS-1 Distortion is a classic pedal of this type. This is what it sounds like: Listen

Overdrive/Crunch. Some distortion effects provide an “overdrive” effect. Either by using a vacuum tube or by using simulated tube-modeling techniques, the top of the waveform is compressed, giving a smoother distorted signal than regular distortion effects. When an overdrive effect is used at a high setting, the sound’s waveform can become clipped, imparting a gritty or “dirty” tone that sounds like a tube amplifier “driven” to its limit.

Used in conjunction with an amplifier (especially a tube amplifier driven to the point of mild tonal breakup, short of what would generally be considered distortion or overdrive), or along with another, stronger overdrive or distortion pedal, these can produce extremely thick distortion.

Today there is a huge variety of overdrive pedals, including the Boss OD-3 Overdrive: Listen

Fuzz. This was originally intended to recreate the classic 1960s tone of an overdriven tube amp combined with torn speaker cones. Old-school guitar players would use a screwdriver to poke several holes through the guitar amp’s speaker to achieve a similar sound.

Since the original designs, more extreme fuzz pedals have been designed and produced, incorporating octave-up effects, oscillation, gating, and greater amounts of distortion.

The Electro-Harmonix Big Muff is a classic fuzz pedal: Listen

Hi-Gain. High gain in normal electric guitar playing simply refers to a thick sound produced by heavily overdriven amplifier tubes, a distortion pedal, or some combination of both; the essential component is the typically loud, thick, harmonically rich, and sustaining quality of the tone.

However, the hi-gain sound of modern pedals is somewhat distinct from, although descended from, this sound. The distortion often produces sounds not possible any other way. Many extreme distortions are either hi-gain or the descendants of such.

An example of a hi-gain pedal is the Line 6 Uber Metal: Listen

Power-Tube. A unique kind of saturation occurs when a tube amp’s output stage is overdriven. Unfortunately, this kind of really powerful distortion only happens at high volumes.

A Power-Tube pedal contains a power tube and optional dummy load, or a preamp tube used as a power tube. This allows the device to produce power-tube distortion independently of volume.

An example of a tube-based distortion pedal is the Ibanez Tube King: Listen

Other Ways To Distort

Tape Saturation. One way is with magnetic tape. Magnetic tape has a natural compression and saturation when you send it a really hot signal. Even today, many artists of all genres prefer analog tape’s “musical,” “natural” and especially “warm” sound. Due to harmonic distortion, bass can thicken up, creating the illusion of a fuller-sounding mix.

In addition, high end can be slightly compressed, which is more natural to the human ear. It is common for artists to record to digital and re-record the tracks to analog reels for this effect of “natural” sound. While recording to analog tape is likely out of the home studio budget, there are tape saturation plugins that you can use while mixing that simulate the effect quite well.

Here’s a bass guitar with a bit of tape saturation from the Ferox VST plug-in: Listen

Digital Wave Shaping. In recording, the word clipping is usually a bad thing. And generally it is, unless we’re trying to distort something on purpose. In the digital world we can use powerful wave-shaping tools to drastically distort and manipulate a sound.

Rather than subject you to the technical explanation of how it works, just listen to Nine Inch Nails, they use this a lot. It’s perfect for really harsh, aggressive, unnatural and broken sounds.

Here’s some examples of Ohmforce Ohmicide on a drum loop: Listen

Why Is This Important?

Knowing those sounds can help you be a better musician, engineer and producer. It will help you make decisions on what gear to purchase and what is appropriate for a song.

What Else?

Besides guitar, what else is distortion good for? Well, pretty much anything, as long as it’s appropriate for the song.

—Slight distortion can make something sound more exciting; too much can sometimes make it really tiny-sounding.

—When recording electric guitars, you can get a way bigger sound by using less gain and recording the same part multiple times, double or quad-tracking.

—Distortion can sound really cool on drums, but you may have to heavily gate them; the sustain can get out of control.

*Note: All audio samples except the last two were copied from various internet sources, mostly manufacturer websites.

Jon Tidey is a Producer/Engineer who runs his own studio, EPIC Sounds, and enjoys writing about audio on his blog AudioGeekZine.com. To comment or ask questions about this article go here.

 

Posted by Keith Clark on 08/11 at 01:51 PM

Wireless Update: Progress As The Next Era Unfolds

As the world around us increasingly embraces the benefits of wireless technology across multiple industries, demand for radio frequency (RF) spectrum has never been greater, and it continues to grow. In the U.S., the Federal Communications Commission (FCC) is charged with making it all work.

In 2012, Congress passed the Middle Class Tax Relief and Job Creation Act, part of which authorized the FCC to conduct a voluntary “Incentive Auction” among over-the-air television broadcast licensees, intended to release spectrum from 698 MHz downward for mobile broadband telecommunications.

The “incentive” refers to a one-time offer for broadcasters who give up their spectrum to share in the proceeds of the auction, which most believe will exceed $20 billion.

While this is great news for users of smartphones, tablets, and laptops, it also means that a large incumbent base of UHF spectrum users will have to relocate. This includes many TV broadcasters and Low Power Auxiliary Stations (LPAS) – which is FCC terminology for wireless microphones, intercoms, and in-ear monitor systems, among other devices.

Shifting Landscape
Having lost access to the 698-806 MHz (700 MHz) band in 2010 due to the transition to digital television broadcasting, it’s easy to view this latest auction plan as a major loss for the pro audio and production industry. But even though some portions of the 600 MHz band will eventually become off limits for wireless microphone operation, there are many aspects of the shifting landscape that are important for production teams to understand and pay close attention to in the months and years ahead, some of which will have long-term benefits for our industry.

A quick summary of what’s happening:

1) Portions of the 600 MHz frequency band will transition to mobile broadband service over the next five years and wireless microphones will no longer be legal to operate in the affected ranges.

2) Virtually all large-scale wireless users – those routinely using 50 channels or more – are eligible for the protection of Part 74 licensing, effective immediately.

3) The FCC recognizes the importance of the professional wireless user community, and has committed to providing adequate spectrum space for its reliable operations in the long term.

The precise details of the changing spectrum allocations will not be known for a while, as a large number of variables are in play, but defining the future of wireless is benefitting from ongoing participation in the process by pro audio wireless manufacturers and key power users. After more than a decade of providing education to the FCC on the issues affecting our products and applications, the commission now recognizes ours as a critical and unique use of spectrum that must be accommodated.

This recognition of the need for quality of service heightens in importance when viewed through the longer lens of overall wireless demand. Within our industry, we have seen explosive increases in wireless deployments. As systems have become more reliable, designers for major tours, theatrical shows, and house of worship environments have grown to believe that anything they imagine can be realized.

But it’s not just large productions. In-ear monitors have basically doubled the RF needs of even basic rock bands. In fact, it’s not uncommon to see a performer wearing three bodypacks these days – one for guitar, one for a headworn vocal mic, and a third for IEMs.

Priorities & Balance
While these changes have had a profound effect on the pro audio industry, they are modest compared with the leading source of spectrum demand: telecommunications and the internet.

Congress sees these arenas as ripe for innovation and job creation through the development of a robust national broadband infrastructure. In short, your and your neighbors’ phones, tablets, laptops, and home wireless networks increasingly tax the nation’s finite spectrum resources, forcing a conversation at the federal level about priorities and balance.

The U.S. government is not alone in this dilemma, by the way – it’s happening all over the world, pitting broadcasting against mobile telecommunications, and wireless computing against pro audio. In fact, the situation is so fluid that, depending on the particular issue, opposing interests on one topic may become allies on another and vice versa.

While the FCC’s recognition of wireless microphones as a class that deserves protection is encouraging, there is no denying that, by 2018 or so, there will be changes in the way we operate. These changes will focus on two primary areas: interference protection through licensing, and utilization of alternative frequency ranges.

Significant Development
The biggest recent change for our industry – and one with immediate impact – results from a recently approved Report and Order (R&O) regarding LPAS license eligibility under Part 74 of the FCC rules.

The R&O revised an outdated regulation that had for decades limited the availability of licensing to broadcasters, cable TV content producers, and motion picture makers. This is a significant development, and it underscores a recognition of the reach that wireless audio has in a variety of professional contexts.

Under Part 74 rules, licensed wireless operators can register their reservation of available open TV channels at a specific location and time in the national geolocation database, which houses data on all protected services (TV broadcast, public safety communications, and wireless microphones), and prevents interference from other sources.

Development of this system started in 2010 to usher in a new class of unlicensed RF products to the TV band, known commonly as “white space” devices, under a “spectrum sharing” arrangement – an increasingly popular concept among regulators.  As the VHF and UHF bands become more crowded after the Incentive Auction, licensed status and geolocation database access will be important levers for pro audio frequency coordinators and production teams. 

Part 74 licensing is now open to any sound company, rental firm, venue, or other professional entity that routinely uses 50 or more channels of wireless (including mics, IEMs, intercoms, and control systems). Eligibility is defined by usage, not equipment ownership, so a large house of worship, a Broadway musical, a touring rock show, and a convention center could all qualify. License terms are for 10 years.

For those users who do not qualify for a Part 74 license, database protection is still available, but it requires an additional request to the FCC that must be submitted 30 days in advance of the event. This process offers an extra measure of protection for smaller installations and productions, and is particularly suitable for regularly scheduled events like weekly church services, nightly theater presentations, or a concert series or tour with a firm calendar.

It’s important to note that, now and in the future, both licensed and unlicensed wireless microphones are legal to operate on any vacant TV channel. Post-auction, the rearrangement (“repacking”) of the remaining TV stations, the addition of mobile broadband services in the 600 MHz band, and changes to the designation of today’s wireless microphone channels from “exclusive” to “shared” will reduce the number of available channels.

This development, along with the likely increased deployment of white space devices, will make geolocation database reservations all the more critical. Licensed status is the key to “real time” database access and therefore should be pursued by all those who qualify.

Because the amount of spectrum repurposed through the auction is directly related to the number of broadcasters willing to participate, we won’t know the exact fate of the 600 MHz band until sometime after the Incentive Auction, currently planned for mid-2015. The plan then calls for a 39-month transition period for the broadcasters to move to their new channel assignments, and wireless microphones will be able to operate in the auctioned spectrum until the winners commence service.

Identifying Alternatives
The 700 MHz transition and the dawn of the white space device era encouraged wireless microphone manufacturers to explore development of robust technologies outside of the TV bands, and many of those products, such as those operating in the 900 MHz and 2.4 GHz spectrum, can be deployed successfully in many applications. But both the industry and the FCC have concluded that these offsets will not meet the long-term needs of professional audio, and that a concerted effort is needed to identify alternatives for the industry.

To that end, in the Incentive Auction Report and Order, the FCC stated: “Recognizing the many important benefits provided by wireless microphones, we will also be initiating a proceeding in the next few months to address the needs of wireless microphone users over the longer term, both through revisions to our rules concerning the use of the television bands and through the promotion of opportunities using spectrum outside of the television bands.”

This commitment is important, and it reflects the FCC’s understanding of the value of professional audio in our daily lives, despite the many large industries hungry to deploy wireless technologies. This future proceeding provides a great new opportunity for the audio community to shape its future. Rest assured, the industry representatives who have been engaged in the spectrum dialogue in Washington will continue to do so as the next era of wireless unfolds.

Mark Brunner is senior director, Global Brand Management, at Shure.

Posted by Keith Clark on 08/11 at 12:57 PM