Feature

Thursday, May 23, 2013

Church Sound: Of All The Knobs On The Console, This One Is The Key…

Getting this right just makes your day go so much easier
This article is provided by ChurchTechArts.

 
A typical mixing console may have dozens, even hundreds of knobs and buttons and faders.

Each one has a specific function, but one is more important than all the rest. It’s typically at the top of the channel strips and it’s called “gain” (or sometimes “trim”), and it’s perhaps the most misused and misunderstood control on the whole board.

Get it set wrong and no amount of fading, EQ or outboard processing will fix it. Get it right and the rest of your mix will come together much, much easier.

So, how do you get it right? Well, it depends. (Great - thanks Mike!) Seriously, it depends—on your board. I wish I could tell you to turn the gain up until you get to 0, then you’re done. That may work, but it may not be optimal.

You have to do some experimenting and listening. Follow along and I’ll walk you through the process, then we’ll look at some specific things to listen for.

For starters, adjusting gain needs to be done in a methodical manner. If you’ve ever been to a concert early and watched a sound check, you’ve seen how it’s done. Turn down all of the faders on the board, and start with the gains all the way counter-clockwise. Start with one instrument, say the kick drum.

Ask the drummer to kick, kick, kick, kick. He keeps going until you tell him to move on. If your board has a PFL or Solo button on the channel, push it for the kick drum (or whatever you’re starting with). The PFL should route that channel to your main meters (check your manual), so keep your eye on it and gradually bring the gain up.

When it starts to peak around 0, you are close. Now you can start bringing up the monitors, and then the house fader.

Repeat this process with all the other instruments on stage. Then move on to vocals. (We’ll deal with some more suggestions for a sound check in another post.) You should now have all the instruments and vocals hitting 0 or a little more. At this point, you may be done. Or not.

It all depends on your board. Some boards have a lot more headroom than others, and if you cap the levels at 0, you are not fully utilizing all the gain they have available, and are not maximizing your signal to noise ratio (the difference in signal level between the noise floor and the signal or music).

Other boards are pretty much spent at 0, and if you send 10-16 channels all at 0 to the main bus, it will overload and you will get distortion. Or you may just be on the verge of clipping all the time.
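
To put rough numbers on why that happens, here’s a minimal back-of-the-envelope sketch in Python. The channel count of 12 is just an assumed example, not a property of any particular console:

```python
import math

# A rough sketch, assuming a hypothetical mix of 12 channels,
# each metering at "0" on its own strip.
channels = 12

# Uncorrelated sources (typical of a live band mix) sum in power:
incoherent_rise = 10 * math.log10(channels)   # about +10.8 dB at the bus

# Identical, fully correlated signals (worst case) sum in voltage:
coherent_rise = 20 * math.log10(channels)     # about +21.6 dB at the bus

print(f"Uncorrelated: bus runs ~{incoherent_rise:.1f} dB above one channel")
print(f"Correlated:   bus runs ~{coherent_rise:.1f} dB above one channel")
```

Either way, a bus fed by a dozen channels at 0 can sit 10 to 20 dB hotter than any single channel, which is why a board with little headroom above 0 runs out so quickly.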

This is where you need to listen and pay attention to your board. The console we have at our church will take +8 inputs all day long, mixed into groups and to the mains with no hints of saturation or distortion. So we can run stuff hot, and maximize our signal to noise ratio.

On the other hand, I once used a console at another church that would be completely out of headroom if you ran all the inputs at 0.

Play with your board, try different levels and once you settle on a level that works, stick to it. Make it systematic so everyone uses the same gain structure.

Once it’s repeatable, you’ll have better, more consistent sound every week.

This weekend I was reminded of another gain setting that is just about equally important (perhaps even more so), and that would be the gain on the wireless mic the pastor is using. Here’s what happened.

I’ve been in the process of revamping our entire wireless mic family over the last few months. The new mics have been really great.

The challenge is that we have some speakers (people talking) who like the new mics, and one (turns out he’s the new senior pastor) who doesn’t so much. Since he’s new, I’m cutting him some slack and letting him use a lav (for now…).

And that’s the rub. We have one bodypack that is “assigned” to the speaker for the weekend. Sometimes we’ll plug in a lav, other times a different model, and each mic has a different sensitivity rating; some speakers are loud, others are quiet.

If you read the first part of this article carefully, you know where this is going. Just like the input gain on your console, the bodypack also has an input gain setting (at least it should – if it doesn’t, go order a new one that does).

Sometimes it’s a rather coarse “0”, “-10” switch; other times it’s a little control in the battery compartment that needs a tweaker; and yet other times, it’s a handy thumbwheel on the side of the transmitter.

The problem is that too often we sound engineers get so busy, we jack in a mic, drop in a battery and hand it to the speaker who is already running to the stage for a sound check. We crank up the gain on the board as he says, “Check one, two…are we done?” and hope for the best.

It’s not until he’s up on stage at the beginning of the message that you hear the familiar crackle of some sound gremlins having a bad day. You check your console gain, everything is fine; you may even check the compressor, the EQ and everything else.

Check the wireless receiver. If it’s a good one, it will have an audio level meter. A less good one will have a clip light. If you see clipping, or the meter is maxed out, you’re in a world of hurt. You’ve gone and done it – you’ve used up all the headroom in that little bodypack.

And it’s not like you can run up on stage during the message, reach into the pastor’s back pocket, grab the mic and tweak the little dial down a bit. Oh no, you’re in trouble.

Something I’m trying to get my engineers to be more cognizant of is the wireless mic gain. We used to put the mics in a tray and put them in the green room for the “on stage” folks to just pick up. But now we’re keeping them at the front of house console. That way, we can help them get the mic fitted properly, show them how to use it if it’s new to them and most importantly, adjust the gain on the pack before they’re 100 feet away on the stage (and while we can lay eyes on the receiver so we know what we’re doing!).

So here’s my procedure. Speakers and actors must pick up the mic at the sound board. Before they will strap on said mic, while standing there, they will give us a realistic level while we adjust the gain on the pack.

They will then proceed to the stage at the appointed time for sound check, and we’ll do the gain trimming and level adjusting for the house (and monitors if necessary).

At the end of the service, the mics will be delivered back to the sound board so that batteries can be recharged and so we don’t have to chase people all over the church looking for them.

Yep, that input gain control is the most important setting, whether it’s on the bodypack or the console. Getting this right just makes your day go so much easier. Get it wrong and you’ll hear, “Why was Jack all crackly and distorted for the whole message – it was really distracting!”

And that, my friends, is not good sound (apologies to Alton Brown).

Mike Sessler is the Technical Director at Coast Hills Community Church in Aliso Viejo, CA. He has been involved in live production for over 20 years and is the author of the blog, Church Tech Arts. He also hosts a weekly podcast called Church Tech Weekly on the TechArtsNetwork.

 

Posted by Keith Clark on 05/23 at 08:53 AM

Tuesday, May 21, 2013

One-Stop Shopping: Captain, What Does It Mean, This Term “Full Production”?

The sound company’s job is to advance the show with the artist and show up with a rig. Not so when the full production falls into your lap.

Sound companies handle “one-off” shows every day. It’s usually formulaic, and after a while, we do it by rote. 

But what happens when the client wants one-stop shopping? This is also known as “full production” or “turnkey service,” and it’s quite a bit more involved than an average show. Generally months of planning and coordination are needed, as well as work with a number of subcontractors. It just can’t be done by the seat of the pants.

Normally, when a sound company is hired for a show, the client is a promoter or a venue. They provide the stage, they provide the power, and they provide the labor. The sound company’s job is to advance the show with the artist and show up with a rig. Not so when the full production falls into your lap.

Particularly for large, multi-stage festivals, hiring a single source to handle all the entertainment elements of the event is almost a necessity. The event director has too many other things to handle to have to worry about the details of his entertainment. 

Steve Rosenauer, director of the St. Mary’s University Alumni Association Fiesta Oyster Bake in San Antonio, Texas, once told me his definition of full production: “As a client, full production means working with a knowledgeable and experienced company that can produce a turn-key operation with regard to organizing, building and operating the necessary staging, sound, lights and equipment needs, with all meeting the negotiated specifications of the event as well as the bands. A company that does this can greatly enhance the quality of the event and provide a solid peace of mind to the entertainers and the event organizers.”

For the purposes of describing the process of a full production event, I will use the Fiesta Oyster Bake as my example. It’s a two-day, six-stage festival which kicks off San Antonio’s annual Fiesta Celebration every April. Fiesta has been ranked as the second largest party in the U.S. (Mardi Gras being first) by the National Meeting Planners Association. (And yes, they bake tons of oysters!) For years, our company, Sound Services, worked with this event. (Note that we recently chose to close the company for reasons completely unrelated to business.)

PREP MAKES PERFECT
In order to be ready by mid-April, we would start working in November. To be fair, we had been doing this event for nearly a decade, and had amassed a team of subcontractors with whom we were all very comfortable. Until a company gets to this point, preparations probably need to commence even sooner.

In November, we would begin talking about what our needs were going to be. Because city electrical inspectors were involved, we checked the City Code Compliance for any new electrical requirements. For example, one year (and for the first time), we were required to ground all of the stages to the audio power distribution services, as well as provide non-conductive covering of all power cables running in public areas. Not fun to discover things like this at the last minute!

We provided staging, sound, lights, backline, labor and all technical personnel for the festival. Because the client used many more generators than just ours, they made those arrangements, but they used our generator provider so we were assured that power would not be a problem. The generator provider also stayed in contact on any change orders he received that might affect us.

Also by November, the client usually had more than half of the talent booked, so we got a vague idea of what to expect from headliners’ riders. By December, we started talking with our subcontractors, discussing what had changed from the previous year, giving them the firm dates, and requesting a firm price by January. 

After ringing in the new year, and still four months out, it was time to nail down the financials. Be very meticulous with this process!  Everything must be committed to paper, and math triple-checked in order to avoid any mistakes that could cost an entire profit margin.

It’s doubly vital to get this facet correct in the first year with an event, because the client will base future projections on those first year costs. Therefore, a mistake probably can’t be made up for next year.

Only after every cost is defined and listed, as well as those of the subcontractors, should the price be committed to the contract submitted to the client. Note: the one thing we found most often overlooked is the cost of a production manager. The hours and hours you spend working on this shouldn’t be done for free!

WORKING IN EARNEST
We would submit our contract on the first of February, with the understanding that requests on artists’ riders would probably cause an increase in total price. By this point, the client had all talent booked, so we could start working in earnest to learn just what those extra costs might be. My goal was to have all this information by the 15th of the month, still two months out.

There is a negotiation that happens around contract riders while advancing the show that can - with some diplomacy - help reduce the number of additional line items for your client. Because most headliners’ riders are based on arena shows, for example, they will often concede some lighting instruments.

On the other hand, you don’t want artist representatives to think your client is cheap, so know where and when to stop asking for concessions. It’s important to manage your client’s expectations in this regard as well. Most touring artists also understand that festivals differ from concerts, so if the stages are adequately stocked to begin with, most of the added line items will be for backline and spotlights.

Once we determined all of the additional artist-related expenses, we submitted a contract addendum. This addendum should include absolutely everything - a client will begin to lose confidence if presented with more than one price addition. His budget is set in stone by this time, and your math errors and oversights are not his fault.

MINIMUM OF 40
Because Sound Services was responsible for the entire Oyster Bake Festival, not just the two stages we were physically covering, it was imperative that we advance the show with every artist. In this case, we’re talking a minimum of 40 bands, which made for a lot of work. But it accomplished several very important things. 

First, we got a thorough look at the requirements of every stage, and were assured that each subcontractor could adequately cover the entertainment line-up. If there was a particularly tough set change on a stage at a particular time, we could arrange to have extra help on hand at that time. 

Second, it gave each artist a feeling of confidence to know that individuals who care about their performances run the festival. Third, we established consistency in the way the artists were handled. The subcontracting sound companies all appreciated this.

And fourth, we could apprise artists of the “special quirks” of this festival. For example, it’s held on a university campus that is, itself, located in a neighborhood, not on a major thoroughfare. Getting to the venue is difficult when 80,000 other people are also trying to do the same, and there is no alternate route.

Sometimes when we told first-time performers to allow three hours to arrive, some balked, but we remained adamant. The ones who didn’t believe us were invariably late, which is a no-win for everyone. (By the way, returning artists were never late!)

Further, artists can’t drive to any stages except the main one, because they’re all positioned among campus buildings. For this reason, full backline was provided at every stage, and musicians were discouraged from bringing more gear than they absolutely had to have. To accommodate this, the university set up a team of volunteers to ferry musicians and their gear to the stages. It took several years to streamline this process.

Once all the advance work was complete, we created stage plots and input lists for every stage, and for both days. These were then dispatched to the sound companies working the festival with us.

GETTING CLOSER
A pre-production meeting with the festival committee and all stage managers was held six weeks to two months out. Each committee reported on their progress and, although we weren’t involved in things like pizza ovens and beer sales, it helped us to know what was going to be happening around us. 

Entertainment production is an important part of this meeting, and we made it a real bonding experience. Construction of “Stage 1,” for example, meant an entire campus parking lot had to be closed two days prior to the event, and thus it was critical that the timing be executed properly by the university security department.

We also got to meet the stage managers and orient them as to what was expected of them. These folks are critical for smooth-running shows, and we let them know that. While their duties are light, the few things we needed from them are all important to the show.

Other things covered in this all-important meeting were issues of water, green rooms, use of volunteers (there are hundreds!) and getting musicians to the event and their respective stages. Over the years, and learning from our mistakes, we developed methods to efficiently accomplish these tasks, but until you’ve worked with an event for a long time, these issues are extremely important to thoroughly think through. For example, from experience we all learned that as much water as we thought we needed - double it!

At this time, we also walked the campus with the festival director, making note of things like trees that needed trimming or light poles that needed to be temporarily removed. (Grounds and electrical departments need to be notified in advance to schedule work like this!)

WHO’S DOING WHAT
By one month out, we had a firm grip on exactly who was doing what. For example, if there was a sound company short a monitor engineer, this was the time to step in and lend a hand. Each subcontractor provided us with a list of personnel and how many vehicles (and of what type) they would be bringing on site. One aspect to double-check: be sure each contractor is providing enough people. For example, backline duties done properly for six stages requires more than two techs.

At this point, we would tally up all production people (including stagehands and spotlight operators) and provide the festival director with the number of parking passes and wristbands needed. Remember - on a multi-day festival, each person might need a fresh wristband each day. We also padded this number by a few more to replace ones that were inevitably lost.

Very key: the best technical person on staff must be in charge of production management. Even with the best preparations, all kinds of little things can go wrong, especially at multiple stages. One person not involved in production at any one stage has to be free to fight the fires, and this person should be well versed in technical knowledge as well as diplomacy. 

Our production manager for the festival spent each day traveling between stages, providing a break to a beleaguered engineer here, dealing with a power problem there, handling a recalcitrant band engineer somewhere else.  He also carried a radio for instantaneous contact. And, this person must have healthy legs – in a very crowded festival, a golf cart won’t work!

Three weeks out, we assembled packets for all of the subcontractors involved.  These included parking passes and wristbands, a map of the campus showing all stages and parking areas, a complete schedule of the event, and for the sound providers, stage plots and input lists. Load-in times were also provided.

Scheduling personnel is critical at this point. We staggered the load-in times so that we could make the best use of our stagehands. Stagehands have a four-hour minimum, and each is usually scheduled to work at more than one stage during a shift. For load-out, we scheduled a much larger number of stagehands. This schedule was then filed with the labor company as a written work order, and note that it included spotlight operators as well.

IT’S SHOWTIME!
Two days before the festival, we began to build the stages. The provider arrived with semi-trucks loaded with staging, and we again walked the site with the festival director, spotting the stages, front-of-house risers, spot towers and security towers.

The day prior to opening, we loaded in at our two stages, which then left us free to address the mayhem of everyone else loading in the next morning. The lighting contractor also loaded in with us in order to be out of the way, and this left the lighting directors free to work with headliners who might arrive early. On-site security was continuous at this point.

Day one of the festival would arrive, and we were free to conduct headliner soundchecks on our stages. Fortunately, the first act didn’t begin until 6 pm, so the atmosphere wasn’t too stressful.

The production manager was also available to address the various surprises that unfold, as they invariably will. This is where months of planning pay off and you can look really good to the client, who’s running around putting out all kinds of fires while his production people are calmly doing their jobs.

If all subcontractors are competent and well prepared, the event should run like an average one-off show. One caveat, however: it’s still a multi-day, multi-stage festival, with thousands of people swarming all over, so competent, well-informed stage managers become critical to your existence. 

They aren’t needed to get artists on and off the stage – we had already planned that out. They are most definitely needed to competently answer artist questions - “Where are our food coupons?” and “Where is our dressing room?” and the like. They also kept lots of water on ice, and plenty of ice in the ice chests.

The most important thing stage managers did, however, was manage the radios. Each stage had a radio, as did the production manager and the lead backline technician, and they were on a common channel with the event director. 

As the production staff performed its various tasks, we didn’t have time to monitor a radio, but when we had a problem or needed help, we simply asked a stage manager to contact whomever we needed. Previously we carried individual radios, but learned that this alternative approach worked so much better for everyone, plus it gave the stage managers a sense of ownership of their jobs as well. 

The best advice: “be round.” Roll with the punches and don’t get too excited by the inevitable little surprises that spring up. Make the production of entertainment as smooth as possible and don’t create tension or problems. That’s a big reason you were hired!

THE AFTERMATH
When it’s all over, the results of diligent planning and scheduling should continue to pay off. We found that handling a large number of stagehands at the end of the festival worked best if we arranged for the crew chief to assemble all of them at a pre-arranged site and make assignments from there.

Stagehands were first dispatched to the stages manned by our subcontractors, then re-routed to our stages last.  We always got this show loaded out within our four-hour labor minimum, by the way.

The production manager continued to make a circuit of the stages, being sure each stage had its allotted stagehands and collecting any left-behind belongings. We later attempted to repatriate these items with their owners.

When all the dust cleared a week or two later, we sat down and created a recap of the event, and this went into the file for next year. We also sent this recap to the festival director. Included were a summary of any issues that came up, general incidents, what worked well and what didn’t, and suggestions for improving next year’s event.

By working with the client in this fashion, we made ourselves a part of the event team, and enjoyed a multi-year contract. We also ingratiated ourselves with our subcontracting partners, who appreciated the work and reciprocated when appropriate.

It’s just good business to develop this kind of working relationship with your clients and fellow business people, and it leaves you feeling pretty good about yourself as well.

Teri Hogan is a long-time audio professional and was co-owner of Sound Services Inc., a sound company based in Texas.

Posted by Keith Clark on 05/21 at 01:30 PM

You Make Music You Say? So, You’re In The Fashion Industry

Inside the evolution and development of sounds through the decades that have become marketable commodities

The sound of popular music can be clearly compared to other forms of seasonal commodities such as clothing fashion and hairstyles.

Much as bell-bottoms, the hula-hoop, and whitewall tires have become fashion statements associated with certain times past, “the sound” of most records will in later times provide a key identifier in time stamping that music.

Like fashion, some of these sounds periodically re-emerge; some become an ongoing fabric of music production, while others are never heard of again.

We live in a time of interchangeable parts; music is in an advanced state of industrialization, where mass produced components are used and reused in everything.

The 1990s was the re-decade of reissue, review, reflection, reuse, repurpose, and reprocess. Déjà vu was the prevailing vu for most of the population. The vast reissue on CD of long-lost catalogs has allowed all previous decades to be available in our time.

Today’s record producers have the texture, style and tone of all previous decades to draw upon. Many records today have some production aspect or sound taken from the past - the sound of nostalgia. Songs and fragments of songs from the past in the latest music stimulate the recollection of countless personal histories.

As in any other commodity/consumer paradigm, consumerism and commercialism drive this thirst for new sounds. The predominantly young music consumer seems to have a never-ending appetite for what is new from the music industry because it defines them as different from their older sister, their younger brother, their parents, all other generations.

Unlike the fashion industry, where styles are seldom worn past their season until a generation later when they may be rediscovered, greatest hits collections and classic albums from every period remain part of the active playlist of life.

In fact, the greatest hits of any year will be on the market by mid-year and blasting from every radio during end-of-summer long weekends. These recordings form sonic postcard collections of memories good and bad.

Before records were invented and technology began to shape music production, musical passages and phrases, a good story, a singer’s performance, and a memorable chorus were the primary identifiers of a pop song. They remain important, but these “hooks” now include “the sound”.

Those in music production often refer to the sound of a recording separate from the song. A recording can have a good or unique sound that catches the public ear and at the same time the song may be inconsequential. 

As Dick Clark once commented “(the sound is) what the kids listen for… the more different, the more original, the more unique the sound is, the more chance a record has of becoming a hit”.

“The sound,” in many cases, could be described as a formula that in and of itself is a marketable commodity that is, in the first instance, sold by a producer to the record label.

The ingredients that make up this formula come together as new devices become available, and naturally enough, each subsequent wave of music producers builds on and modifies earlier formulas. Often they are so identifiable with the state of the technological art of the time that the trained ear can place the year of production based on the sound.

Geography plays a part in identifying styles. At various times a swarm of hits will emerge from a certain geographical region where there is a vibrant scene and a community of musicians.  Their collective style becomes identified with that place.

The sound of New Orleans, Chicago, Motown, San Francisco, TexMex, Merseyside, Londonbeat, Memphis, Liverpool, West Coast, Mid Atlantic, Berlin, Nashville, Jamaican, Brazilian, Casa del Sol, and Euro are just a few of the places associated with a certain style of sound that have received worldwide attention during a certain period of time.

In 1925, Victor and Western Electric engineers oversaw the first electrical recording sessions that would produce usable masters for Victor.

Sound and music became a commodity incrementally starting at the beginning of the 20th century when records first captured and fixed a performance. It took a second leap when electrical recording first appeared in the mid-1920s, and another when tape recording was developed in the 1940s.

But the greatest leap occurred when Les Paul changed the recording studio from a place where a performance was documented to a place of creation. He was the first to use the technique of “sound on sound” or overdubbing, a cornerstone of the modern pop music sound, and was years ahead of what has become common practice.

Using this technique, Les Paul teamed up with vocalist Mary Ford and they changed the way pop music sounded. Together they had 28 hits from 1950 to 1957. The best known are “How High The Moon”, “Mockin’ Bird Hill”, “Vaya con Dios”, and “Tiger Rag”. 

By and large, the music industry of the day saw multi-track recording and the “overdubbing” technique as Les Paul’s sound, and not a general tool for music production. In the early 1960s, the Beatles and The Beach Boys would change all that.

Les Paul working his multitrack magic in the 1950s.

The “Les Paul” sound was a production technique that became a bankable commodity. Such techniques are often associated with sounds that people have never heard before.

In the 1950s, Ross Bagdasarian had been making records as a songwriter and arranger. He had a particular knack for writing novelty songs, and in 1958, had a hit with “Witch Doctor” in which he sonically conveyed a shrunken head singing by recording the vocals at half the tape speed used to record the instruments and then playing it all back at normal speed. The singing went up in pitch by an octave.
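
As an aside, the half-speed trick is easy to sketch digitally. This is a minimal illustration of the idea, not Bagdasarian’s actual process, and “vocal.wav” is only a placeholder file name:

```python
from scipy.io import wavfile

# Read a mono vocal recording ("vocal.wav" is a placeholder).
rate, vocal = wavfile.read("vocal.wav")

# Tape analogy: record at half speed, play back at normal speed.
# Digitally, keeping every other sample and playing the result at the
# original rate raises the pitch an octave and halves the duration.
# (A real implementation would low-pass filter first to avoid aliasing.)
chipmunk = vocal[::2]

wavfile.write("vocal_octave_up.wav", rate, chipmunk)
```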

Later that year, he invented an alter ego, David Seville, and introduced a vocal trio using the technique as on his earlier hit. “The Chipmunk Song” sold 3.5 million copies in five weeks, and from 1959-80, Alvin, Simon and Theodore made eight chart albums selling over 30 million copies.

In more recent times, they could be seen on Saturday morning cartoons, and a few years ago, Bagdasarian’s son brought the Chipmunks back, where they’ve starred in a number of successful animated features. These days pitch-shifting their voices is a plug-in.

Phil Spector created another landmark pop sound, noteworthy because he may be considered the first record producer to have attained “star” status. Spector’s “Wall of Sound” hits with The Crystals, The Righteous Brothers, The Ronettes, and Tina Turner’s classic “River Deep-Mountain High” were exclusively associated with the reverb chambers at Gold Star Studios in Hollywood.

Many other top artists of the era also used these same chambers, including Sonny & Cher, Iron Butterfly, The Rolling Stones, The Beach Boys, and Herb Alpert and the Tijuana Brass. In fact, the Gold Star chambers were so popular that the owners built two copies and leased them to other studios, linked by high-quality telephone lines. The Phil Spector sound and those chambers were a commodity for a while in the 1960s.

A unique sound may come from a state-of-the-art device or from a vintage piece of equipment, because unlike other types of media that drop the older state of the art in favor of the latest innovations, sound producers prize vintage microphones, sound generators and processing devices for the qualities they have.

Some of these qualities are so characteristic that modern digital multi-processors have presets that attempt to create the attributes of these vintage devices. The manufacturers of production equipment have, in a sense, made a commodity of sound processes.

Phil Spector (foreground) building “walls” in the 1960s.

And, of course, there is the emergence of manufacturers of audio equipment that pride themselves on offering exact duplicates of what are now legendary devices from an earlier time.

For the music producer/engineer, the latest techniques and “sounds” are directly associated with the hardware. Hundreds of production equipment manufacturers jostle for market share. NAB and AES trade show booths beckon aisle-strolling musicians, producers, and engineers to have a listen to the latest new thing. 

The most popular processes become commodities in the marketing and application of these devices. By the next show, today’s hot new sound will inevitably be copied and included in any number of devices by other manufacturers.

The sounds that these devices create may be as clearly identified as a specific preset program on a digital effect, or it might be quite beyond a layman’s description.

In either case, the sound created will be tangible to those who care, if by no other way than through some comparison to earlier sounds, processes, and recordings.

The variety of effects would seem to allow an infinite number of sounds to be used on the myriad of recordings made; however, in practice, this has not been the case. The desire of producers to make records that sound commercially familiar has created a remarkable number of productions that sound alike.

The phenomenon has been made easier by the variety of pre-set programs that come with most sound generators and processors. By switching from one setting to another, the sounds from dozens of records can be heard. The preset computer program in the processor generates nothing except in response to a signal passing through it. Here we have sonic artifacts that have become marketable commodities.

For instance, reverb sounds are cataloged by the type of instrument that might go through it as suggested by the manufacturer. A gated reverb snare exists on every processor made today.

Most reverb devices retain preset descriptions that reference devices of the past. A good example is “plate reverb”, a reference to a mechanical device invented by EMT that revolutionized how reverb was generated in a studio.

The heart of rock and roll may be Cleveland, but it sounds like guitar distortion; in this case a common denominator is the Celestion G12 loudspeaker that is in an overwhelming number of guitar amplifiers made by a number of different manufacturers.

Ken Bran, the engineer who headed the introduction of the Marshall amplifier that was first made famous by Hendrix and Led Zeppelin, acknowledged that without Celestion, they couldn’t have accomplished that sound.

The Marshall amp that first incorporated the classic Celestion G12 loudspeaker driver.

In the beginning, Celestion was probably unaware of the desirability of loudspeaker distortion. The company had spent decades trying to build distortion-less loudspeakers, but came to understand that distortion was actually a key selling point.

As Celestion’s promotional material states, “The paper edge cone of the classic G12 and its resonant break-up characteristics are the starting point from which many of the modern guitar loudspeakers have been developed”.

This loudspeaker was first introduced in 1969 and continues to be sold with the assurance that the characteristics of its distortion haven’t changed through the years. Here is the subjective, absolutely unquantifiable attribute of some form of unique distortion that has become a marketable commodity.

During the past few decades, an abundance of performance processors have been invented to add distortion to the sound coming out of a guitar. Two are particularly noteworthy.

The fuzz-tone effect pedal was first marketed in 1963, and as Charlie Watkins described, “It was an exciting sound - as if the speaker cone was being torn apart.” In 1963, “Hold Me” by P.J. Proby was one of the first chart records to use a fuzz-tone, but the record to place fuzz-tone effects at the top of any guitar player’s shopping list was the Rolling Stones’ 1965 classic “(I Can’t Get No) Satisfaction”.

Interestingly, Keith Richards dismissed the prominence of the fuzz effect as a gimmick. Countless recordings have been done with variations of that fuzz-tone sound.

The second is the cry baby wah wah pedal, which can make the guitar sound as though it were speaking. As the pedal is rocked back and forth, a resonant band of frequencies is emphasized and swept across the spectrum.

The wah wah remains a backbone sound of rock, blues, and soul music, and became a recognizable commodity when used by Jimi Hendrix on “All Along The Watchtower” and Eric Clapton on Cream’s 1967 hit “Tales of Brave Ulysses”.

Stereo itself became a fashion statement soon after it was introduced in the late 1950s when the first generation of “ultimate” playback systems appeared on the market.

People would fill hi-fi stereo shows that were held in most major cities. A great many successful recordings were produced for the stereo enthusiast. Trains, bongos and guitar solos would race from one speaker to the other.

No matter how small the listener’s accommodation might have been, the infinite space of recorded music could overlap and open the confined space the listener occupied. The sonic image of space had become a commodity. 

Since those days when the pan knob was added to consoles, signal processing has become significantly more elaborate, diverse, and much more than merely an enhancement tool for the sound entering the microphone. It has become an integral part of the overall production.

Today, where the “musical instrument sound” ends and the “effect processor sound” begins is in most cases seamless. The development of 5.1 surround sound has allowed the consumer hi-fi industry to reinvent itself, as the consumer now sits in the showroom and hears why six loudspeakers are better than two - all the better to hear spaceships whiz around the room.

Currently residing in Australia, Tom Lubin is an internationally recognized music producer and engineer, and is a Lifetime Member of the National Academy of Recording Arts and Sciences (Grammy voting member). He also co-founded the San Francisco chapter of NARAS. An accomplished author, his latest book is “Getting Great Sounds: The Microphone Book,” available here.

Posted by Keith Clark on 05/21 at 06:15 AM

Friday, May 17, 2013

From Simple to Complex: The Wide World Of Drum Microphone Techniques

Drum sound can enhance or degrade an entire mix, so be sure to pay it due diligence

A drum kit can be viewed as a single instrument. Like an orchestra, it can be captured with a pair of microphones in a stereo configuration (or a single stereo mic). Or, it can be viewed as a collection of individual instruments, picked up with numerous close mics.

A minimal approach (Figure 1, below), also sometimes called area miking, uses only one or two mics overhead (or maybe a stereo mic), another in the kick, and maybe one on the snare.

The mics pick up the set as a whole, and the balance among the drum kit pieces depends more upon the drummer. Area miking tends to work best with acoustic jazz and other types of acoustic music.

Typically, overhead mics are placed about 2 feet above the “front” of the cymbals, while the snare mic is a couple of inches over the rim aiming at the center of the head, and the kick mic is inside, near the beater.

For traditional jazz, where the kick drum is often tuned to be intentionally resonant – even a bit boomy – the mic is more commonly placed by the front head of the drum.

Something to try: reverse the polarity of the kick mic to see which polarity sounds best for your application.

Condenser models are a good choice for overheads because they provide a sharp transient response that accurately reproduces the cymbals. Large-diaphragm condensers tend to capture the fullness of the toms better than small-diaphragm units.

Also don’t be afraid to try dynamic mics. Many classic rack-tom and floor-tom sounds have been captured with Sennheiser MD 421s, Shure SM7s or SM57s, and quite a few others. 

Figure 1: A minimal mic approach.

The Ol’ XY
An overhead stereo pair can be coincident, near-coincident or spaced. The coincident-pair (XY) stereo technique yields a narrow stereo spread with sharp image focus, while the spaced-pair method provides a wide spread with more diffuse images. If the cymbals are too loud relative to the toms and snare, raise the overhead mics or lower the cymbals.

Since the mics are relatively far from the kit, they also pick up room acoustics and sometimes other instruments. Room reflections can make the sound a little distant and muddy. This can be good or bad, depending on the type of music, the sound you’re going for, and so on.

Careful placement (particularly moving the kit away from walls) can minimize or eliminate the problem. In churches (and some clubs), it’s common to see a plexiglass barrier placed around the drum kit, intended to keep the acoustic sound of the drums from overpowering the vocals and other instruments. 

When lowering the overhead mics, exercise caution or the cymbals can easily become too loud.

Another approach is to place the mics near each side of the drummer’s head, so that they “hear” the kit in stereo pretty much as the drummer hears it.

Apply EQ as needed to compensate for Fletcher-Munson loudness contours (how the ear hears different frequencies at different levels).

A nifty “minimalist” technique that can work with a small kit is clipping a mini omnidirectional condenser mic to the right side of the snare drum rim, a few inches over the rim, over the drummer’s knee (Figure 2). It will pick up the snare, toms and cymbals all around it.

The resulting sound is tight and full, somewhat like that of multiple mics.

To adjust the balance among the drum-kit parts, position the mic toward or away from the toms, and raise or lower the cymbals.

Figure 2: A single mini omnidirectional mic plus one for the kick.

The single-omni method limits what can be done with EQ. Boosting the higher frequencies (2 kHz and up) to bring out the toms’ attack can add an unwanted harsh or metallic effect to the cymbals. It’s better to use subtractive EQ in the low frequencies, roughly 100 Hz for the toms and 200 Hz for the snare.

Mic Multiplicity
With a multiple-mic approach (Figure 3), one mic is placed close to each drum and cymbal. That provides extra control of the EQ and effects for each drum independent of the others, resulting in a tighter, more present sound.

Toms, in particular, benefit from individual miking, for a richer and more distinctive sound. Multiple miking is the norm for rock and fusion.

A common snare mic is a cardioid dynamic type with a presence peak in the frequency response, such as the Shure SM57 or Beta 57A. Bring the mic in from the front of the set on a boom and place it a couple inches over the rim aiming at the center of the head.

Figure 3: Multiple miking.

The sound will be full with the mic near the top head, and thins out and becomes brighter as you move the mic toward the rim and down the side of the drum.

Some engineers like to mike the snare drum top and bottom. If this approach is taken, reverse the polarity of the bottom mic to prevent phase cancellations. The bottom (snare) head moves out when the top (batter) head moves in, so the heads are in opposite polarity and can cancel out each other’s sound if you mix their signals.

Some snare drums make an ugly ringing sound at a particular frequency. To eliminate it with EQ, insert a high-Q, narrow peak in the 200 Hz to 600 Hz range. Sweep the frequency up and down until you exaggerate the ring frequency, then apply cut.
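
For those working in the box, the same sweep-and-cut move can be sketched with SciPy. The 48 kHz sample rate and the 420 Hz ring frequency below are assumed example values, not prescriptions:

```python
from scipy.signal import iirpeak, iirnotch, lfilter

fs = 48000          # assumed sample rate
ring_hz = 420.0     # example ring frequency found while sweeping 200-600 Hz
Q = 12.0            # high Q = narrow band

# Step 1 (finding it): a narrow peaking filter exaggerates the ring
# as you sweep ring_hz up and down.
b_peak, a_peak = iirpeak(ring_hz, Q, fs=fs)

# Step 2 (fixing it): once the ring jumps out, switch to a notch
# at the same frequency to cut it.
b_notch, a_notch = iirnotch(ring_hz, Q, fs=fs)

def dering(snare_track):
    """Apply the notch to a snare track (1-D float array)."""
    return lfilter(b_notch, a_notch, snare_track)
```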

Here are some suggestions to prevent hi-hat leakage into the snare mic: mike the snare closely; bring the snare-mic boom in under the hi-hat and aim the snare mic away; use a piece of foam or pillow to block sound from the hi-hat; and consider using a de-esser on the snare.

For toms, a good choice is a cardioid dynamic mic with a presence peak and a deep low end such as the Sennheiser MD 421.

Place the mic about 2 inches over the rim and 1 inch inward, angled down to aim at the center of the head.

The usual EQ is a cut around 400 Hz to 600 Hz to remove the unwanted artifacts, as well as a boost around 5 kHz for attack. Another boost around 80 Hz to 100 Hz adds fullness.

It’s common to gate the toms to reduce the low rumble of vibrating heads and to prevent leakage, resulting in a tighter sound.

To do this, solo a tom track, insert a gate in the track, and then while the tom is playing, gradually turn up the gate’s threshold until the gate cuts off the sound between tom hits but does not cut off the hits themselves.

Finally, set the gate’s hold time to 0 to 100 milliseconds, whatever sounds good.
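
For anyone curious how that threshold-and-hold behavior works under the hood, here’s a deliberately bare-bones gate sketch in Python. Real gates add attack and release ramps to avoid clicks, which are omitted here:

```python
import numpy as np

def gate(x, fs, threshold_db=-35.0, hold_ms=50.0):
    """Hard noise gate sketch: pass samples above threshold, stay open
    for hold_ms afterward, mute everything else. The default values
    are arbitrary starting points, not recommendations."""
    threshold = 10 ** (threshold_db / 20.0)   # dB to linear amplitude
    hold = int(fs * hold_ms / 1000.0)         # hold time in samples
    out = np.zeros_like(x)
    open_for = 0
    for i, sample in enumerate(x):
        if abs(sample) >= threshold:
            open_for = hold                   # (re)start the hold timer
        if open_for > 0:
            out[i] = sample
            open_for -= 1
    return out
```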

Highs & Lows
Place overhead mics about 2 feet over the cymbals, and if possible, use condenser or ribbon types with a frequency response that is flat to 15 kHz or even 20 kHz. This characteristic captures the delicate, beautiful “ping” of the cymbal hits, while a peak in the highs tends to sound harsh.

Usually the snare mic, overheads, or room mic pick up enough hi-hat. But if you want to mike it separately, aim a condenser mic down about 8 inches over the edge farthest from the drummer. Apply a high-pass filter at 500 Hz or higher to reduce snare leakage and to remove the low “gong” sound that close miking would otherwise pick up.

You might apply a high-pass (low-cut) filter to the overhead mics that capture the cymbals to remove frequencies below 500 Hz to 1 kHz. This will reduce drum leakage and room reverb in the cymbal mics, which is especially helpful if the room acoustics are poor.

Kick drum is largely a matter of experimentation. Some place a blanket or folded towel inside the drum, pressing against the beater head to dampen the vibration and tighten the beat. The blanket shortens the decay portion of the kick-drum envelope. To emphasize the attack, use a wood or plastic beater – not felt – and tune the drum low.

Figure 4: Positioning the kick mic.

For starters, place the kick mic inside on a boom, a few inches from where the beater hits (Figure 4), if the front head has been removed. Mic placement close to the beater picks up a hard beater sound; off-center placement picks up more skin tone, and farther away picks up a boomier shell sound.

Current practice among many drummers is to use a front head with a small open sound hole, in which case the mic should be placed directly in front of the opening. Kick drums are usually tuned quite low for a sharp attack sound used in pop or rock, and much higher for traditional jazz.

To create a “tight” rock-type kick sound – almost like a dribbling basketball – cut several dB around 400 Hz and boost around 2 kHz to 4 kHz.

Mics designed for the kick drum usually have that characteristic built in.

If the kick drum does not have a hole in the front head, just mike the front head and boost around 2 kHz to 4 kHz to add attack if needed.

A popular microphone design for kick drum is a large-diameter, cardioid dynamic type with an extended low-frequency response.

Some mics are designed specifically for the kick drum, such as the AKG D112, Audio-Technica AE2500, Shure Beta 52A, and the Electro-Voice RE320, which has a response curve that can be switched to a setting optimized for kick drum applications.

Finally, when miking drums on stage, you don’t want a forest of unsightly mic stands and booms. I suggest “banquet style” mic stands and short mic holders that clip onto drum rims and cymbal stands (such as those from Mic-Eze at www.ac-cetera.com). Another option is to use mini drum mics with built-in clamps and goosenecks.

Drum miking can be simple or complex. As with all things audio, it depends.

One thing that really helps is being as familiar with the sound of as many aspects of the drum kit as possible. This means a lot of listening to a wide range of styles and different types of players.

Those who play very hard with heavy sticks present challenges that are quite different from those who play more quietly, or with a greater dynamic range.

The bottom line is that the drum sound can enhance or degrade an entire mix, so be sure to pay it due diligence.

Bruce Bartlett is a microphone engineer (www.bartlettmics.com), recording engineer, live sound engineer, and audio journalist. His latest book is “Practical Recording Techniques 6th Edition.”

Posted by Keith Clark on 05/17 at 04:08 PM

Thursday, May 16, 2013

Avoiding Poor Judgment & Getting A Handle On Subjective Audio

Using instinctive knowledge to judge the loudness of a system

I suspected trouble when I accepted responsibility for something I had no control over. There was a 50/50 chance of either being the hero or the whipping boy. I didn’t care for those odds, but I couldn’t change them.

It all started when a friend of the family asked me to help make sure that the sound for his wedding didn’t ruin the ceremony. The catch was that I wasn’t handling the sound - his cousin, the part-time DJ, had that job.

I’m not sure if he asked me because he respects my experience, or because he was afraid Cousin DJ would do something to cause the bride to cry. I prefer to think it was the former, and so chose not to ask for clarification.

Enter the rehearsal. Cousin DJ showed up at the outdoor location with a small powered mixer and a single loudspeaker, but forgot the adapter to patch in his laptop. Fortunately, someone had an iPod and dock with speakers; at least we managed to get playback, tinny as it was.

So the rehearsal was a waste, but we’d be able to make it up before the ceremony since Cousin DJ likes to show up two hours early to get everything set up and checked (paraphrased). Plenty of time to set up and prove the system, as well as soundcheck the guitarist/singer.

Come wedding day, Cousin DJ arrives 40 minutes before the ceremony is scheduled to start. Only 80 minutes late, but hey, at least he brought his trusty sidekick. I already had power waiting, a condenser microphone ready for the acoustic guitar, and when they pulled up, I helped muscle the gear into place. 

But once this was done, things changed. Immediately it became obvious that Cousin DJ and trusty sidekick had a “method” all their own -  and I was an outsider. I didn’t fight it. After all, the day wasn’t about me.

The loudspeakers were placed directly on the ground, the classic smile slapped on the graphic EQ, and the mic I brought for the acoustic guitar was pushed aside for an SM58. I put my mic back in the family minivan next to the two cases of stuff I’d brought along to cover for Cousin DJ in case he forgot something critical.

So how’d the sound for the ceremony turn out? 

The overall volume level should have been louder, but it couldn’t be turned up because the system would have gone into feedback. Actually, though, this was a blessing in disguise, because had it been louder, we would have been inflicting earlash (distortion, etc.) due to the EQ setting.

There was also a hash of noise from the laptop that’s typical of the onboard audio jacks of most sound cards. And last but not least, the woofer on the right speaker kept cutting in and out during the processional.

In the end, my involvement came down to a nervous groom wanting to be sure that everything was going to come out all right. Since ignorance is bliss, the sound was just fine as far as the wedding party was concerned. They had other things on their mind.

Beyond the technical problems that were encountered, the ceremony was marked by what I would classify as poor production judgment.

I highly recommend reading Bob Thurmond’s articles on PSW (here and here) about defining a good system.  He’s obviously a seasoned professional with much knowledge and experience, and I came away from his articles with renewed understanding.

Bob touches on the complexity of the problem of trying to uniformly quantify the attributes of a good sound system. Logically, this also extends to every aspect of our craft, where we’re trying to get a handle on what’s ultimately a subjective experience.

After the wedding, I had lunch with Chris Gauvin, a local corporate sound pro. Our conversation meandered here and there, and since he had a forthcoming concert gig, he asked me about my thoughts on band and music mixing. I answered as I always do: 80 percent of the job is getting the levels right - not only the individual elements, but the whole mix.

Yes, EQ, effects and compression can help make things better, but when it comes down to the meat and potatoes, getting the levels right is the majority of the craft. Achieving this takes a process of good production judgment.

But that judgment can only be properly made when you have the solid foundation of a personal quantification system. As humans, we’re wired for communication. The one absolute that we all have is knowing exactly how loud we have to speak for someone else to hear us clearly. 

Learning how to use that instinctive knowledge for judging the loudness of a system, and understanding where various types of events belong in that reference, can be a powerful tool.

Now, I’m not going to talk badly about those in our business who set an objective loudness standard and use SPL meters to stick to it. Mainly because I have absolutely no objection to this line of thinking. 

But speaking from experience, if you’re doing this, make sure it’s loudness you’re trying to keep a handle on - and not earlash. The worst thing you can do to treat earlash is to deprive the audience of the appropriate listening experience. 

Solve the earlash issue and you’ll probably find that the need for an objective loudness standard isn’t so rigorously necessary. Good? Again, yes, but not so absolute.

Since his start 29 years ago on a Shure Vocalmaster system, James Cadwallader remains in love with live sound. Based in the western U.S., he’s held a wide range of professional audio positions, performing mixing, recording, and technician duties.

Posted by Keith Clark on 05/16 at 05:08 PM

Square Waves And DC Content: Deconstructing Complex Waveforms

Illustrating the harmonic structure (content and phase relationship) of a signal

I’ve heard it argued that square waves contain DC. How else could they have the flat top and bottom that make them square?

Let’s look at a square wave and see what causes it to have its square shape.

A complex waveform can be constructed from, or decomposed into, sine (and cosine) waves of various amplitude and phase relationships.

This is the basis of Fourier analysis. A square wave consists of a fundamental sine wave (of the same frequency as the square wave) and odd harmonics of the fundamental.

The amplitude of the harmonics is equal to 1/N where N is the harmonic (1, 3, 5, 7…). Each harmonic has the same phase relationship to the fundamental.
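
In other words, the square wave is proportional to the sum over odd N of (1/N)·sin(2πNft). Here’s a minimal Python sketch of that construction, using the 200 Hz fundamental from the examples below and an assumed 48 kHz sample rate:

```python
import numpy as np

fs = 48000                      # assumed sample rate
f0 = 200.0                      # fundamental frequency
t = np.arange(fs) / fs          # one second of time

n_harmonics = 10                # number of odd harmonics to sum
square = np.zeros_like(t)
for n in range(1, 2 * n_harmonics, 2):      # N = 1, 3, 5, 7, ...
    square += (1.0 / n) * np.sin(2 * np.pi * n * f0 * t)

# An ideal square wave of amplitude 1 is (4/pi) times this sum; each
# added odd harmonic makes the result squarer, as the figures below show.
```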

If we construct a square wave from just the first two harmonic components we can begin to see how the square shape occurs (Figure 1).

We can see here that the red trace is the first harmonic (fundamental) and the green trace is the third harmonic at its correct amplitude.

Figure 1: Square wave generated from only two components.

When the first harmonic is at its maximum value the third harmonic is at its minimum value. By adding these two together we get the black trace, which is starting to resemble a square wave.

If we now use five harmonic components (1, 3, 5, 7 and 9) to construct the square wave we see it really starting to take shape (Figure 2).

Figure 2: Square wave generated from five components.

At the end of each half cycle of the square wave all of the harmonic components are going in the same direction (either positive going or negative going). This is what accounts for the sharp rise.

Towards the middle of each half cycle the harmonics alternate between being at minimum and maximum amplitude just as we saw in Figure 1. This is what accounts for the flat portion of the square wave. Increasing to 10 harmonic components results in Figure 3.

Figure 3: Square wave generated from 10 components.

Note that only the first six harmonics are shown individually, but 10 harmonics are used to generate the square wave. Increasing still further, square waves generated from 100 and 1,000 harmonic components are shown in Figure 4 and Figure 5, respectively.

Figure 4 - Square wave generated from 100 components. (click to enlarge)
Figure 5 - Square wave generated from 1,000 components. (click to enlarge)

From these graphs it should be well understood that the high frequency harmonic content, and not a DC component (0 Hz), is responsible for the shape of a square wave.

In fact, DC by its very definition cannot cause any frequency-dependent waveform shape.

The DC component of a signal is simply the average value of that signal.

Signals that are symmetrical about the time axis will have a DC value of zero.

Signals that are asymmetrical about the time axis may or may not have a DC value of zero.

If the area between the positive half of the waveform and the time axis is equal to the area between the negative half of the waveform and the time axis, there will be no DC component present.

If these areas are not equal there will be DC in the signal. In other words, the average value of the signal has to be non-zero if there is DC present in the signal.
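
As a quick numerical check (a sketch of my own, not part of the article), the mean of a symmetrical square wave is essentially zero, while flattening it asymmetrically shifts the average away from zero:

import numpy as np

fs = 48000
t = np.arange(fs) / fs  # one second at 48 kHz
square = np.sign(np.sin(2 * np.pi * 200 * t))  # symmetrical 200 Hz square wave

print(np.mean(square))  # ~0: no DC component
clipped = np.clip(square + 0.3, -1, 1)  # offset, then flatten the tops
print(np.mean(clipped))  # ~0.15: a DC component has appeared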

To test this for our square wave we will apply a high pass filter below the fundamental of the square wave.

Our square wave is at 200 Hz so let’s use a second order (12 dB/octave), 50 Hz high-pass filter. The result is shown in Figure 6.

Figure 6 - Square wave passed through a 50 Hz high-pass filter. (click to enlarge)
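
Here is one way to run the same experiment numerically - a sketch assuming NumPy and SciPy are available. Note the causal lfilter rather than the zero-phase filtfilt, since the filter’s phase shift is exactly what we want to observe:

import numpy as np
from scipy.signal import butter, lfilter

fs = 48000
t = np.arange(int(0.1 * fs)) / fs
# pi/4-amplitude square wave, matching the ~0.79 level discussed below
square = (np.pi / 4) * np.sign(np.sin(2 * np.pi * 200 * t))

b, a = butter(2, 50, btype='highpass', fs=fs)  # 2nd order Butterworth, 50 Hz
tilted = lfilter(b, a, square)  # causal filtering preserves the phase shift

print(f"peak before: {np.abs(square).max():.2f}")  # ~0.79
print(f"peak after: {np.abs(tilted[fs // 100:]).max():.2f}")  # noticeably higher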

Here we see that the end of each half cycle of the square wave is drooping (or tilting) back towards zero. Some may think this is due to DC being eliminated by the high-pass filter. This is not the case. The real cause of the tilt is the phase shift of the harmonic components that make up the square wave.

Notice that, in the middle of each half cycle, the maximum of the first harmonic (red) is no longer aligned with the maximum/minimum of the other harmonics. This can also be seen, perhaps more clearly, by looking at the zero crossings.

In the previous graphs, whenever the fundamental had a zero crossing all of the other harmonics also had zero crossings. This is not the case in Figure 6 with the high pass filter applied.

If we look at the response of the 50 Hz high-pass filter in the frequency domain (Figure 7), we can see how the phase shift of the harmonic components occurs. The phase shift at 200 Hz is approximately 21 degrees relative to the high frequency limit of 0 degrees (no phase shift), which results in the 292 usec (microsecond) time shift of the fundamental.

Figure 7 - Transfer function 2nd order Butterworth, 50 Hz high-pass filter. (click to enlarge)

The phase shift of the next component (3rd harmonic, 600 Hz) is approximately 7 degrees. This results in a 32 usec time shift of this component. All of the higher harmonics are subjected to less phase shift (are closer in-phase) and thus occur at roughly the same time. So Figure 7 predicts precisely what we see in Figure 6.
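
The phase-to-time conversion here is simple arithmetic: the time shift is the phase shift divided by 360 degrees, divided by the frequency. A one-liner (mine, not the author’s) reproduces both numbers:

def time_shift_us(phase_deg, freq_hz):
    # Convert a phase shift at a given frequency into microseconds.
    return phase_deg / 360.0 / freq_hz * 1e6

print(time_shift_us(21, 200))  # ~291.7 usec for the fundamental
print(time_shift_us(7, 600))  # ~32.4 usec for the 3rd harmonic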

To double check our assertion that the phase response of the 50 Hz high-pass filter is the cause of the square wave tilt and not the removal of any DC, let’s pass the square wave through a 50 Hz all-pass filter instead of the high-pass filter.

This will maintain roughly the same phase shift as the high-pass filter but the low frequency content of the signal will not be attenuated. If there is any DC component contributing to the square wave we should see a difference (i.e., the tilt should no longer be present).
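
One way to sketch this in code (my construction, not necessarily the exact filter the author used) is a digital all-pass that shares the poles of the 50 Hz Butterworth high-pass: mirror the denominator coefficients into the numerator. Its magnitude response is flat all the way down to DC, so nothing is attenuated, while its phase rotation is similar in character (though not identical) to that of the high-pass:

import numpy as np
from scipy.signal import butter, lfilter

fs = 48000
t = np.arange(int(0.1 * fs)) / fs
square = (np.pi / 4) * np.sign(np.sin(2 * np.pi * 200 * t))

b_hp, a_hp = butter(2, 50, btype='highpass', fs=fs)
b_ap, a_ap = a_hp[::-1], a_hp  # reversed denominator: flat magnitude, phase only

tilted = lfilter(b_ap, a_ap, square)
print(f"peak after all-pass: {np.abs(tilted[fs // 100:]).max():.2f}")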

A quick look at Figure 8 should confirm that it is virtually identical to Figure 6.

Figure 8 - Square wave passed through a 50 Hz all-pass filter. (click to enlarge)

This should confirm unequivocally that the flat topped structure of a particular waveform is not due to DC. It is instead due to the harmonic content of the signal and the phase relationship of those harmonics to the fundamental.

Hopefully you’ve noticed something interesting about the waveform after it was subjected to a high-pass filter (or an all-pass filter).

The phase shift of the components has caused the peak amplitude at the leading edge of the square wave to increase!

The amplitude of the raw square wave (no filtering) is approximately 0.79. After passing through the high-pass filter the peak amplitude is approximately 1.25.

Increasing the order of the filter or increasing the frequency (up to a limit related to the fundamental of the square wave) will increase the peak amplitude.

As seen in Figure 9, with a 4th order Butterworth high pass filter at 125 Hz the peak amplitude exceeds 2.2.

The peak amplitude is now almost 280 percent of the amplitude of the original square wave!
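
Swapping in the steeper filter of Figure 9 is a one-line change to the earlier sketch (again my code, using the article’s filter parameters):

import numpy as np
from scipy.signal import butter, lfilter

fs = 48000
t = np.arange(int(0.1 * fs)) / fs
square = (np.pi / 4) * np.sign(np.sin(2 * np.pi * 200 * t))

b, a = butter(4, 125, btype='highpass', fs=fs)  # 4th order Butterworth, 125 Hz
peaked = lfilter(b, a, square)
print(f"peak: {np.abs(peaked[fs // 100:]).max():.2f}")  # well above the raw ~0.79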

This highly peaked waveform (no longer a square wave) results when the fundamental and some of the lower harmonics are shifted such that their positive peaks occur at, or very close to, the same time as the positive peaks of the other harmonics.

Figure 9 - Square wave passed though a 4th order Butterworth, 125 Hz high-pass filter. (click to enlarge)

This also occurs for the negative peaks of the fundamental and the negative peaks of the harmonics. Careful inspection of Figure 9 will show this to be the case.

When signals are clipped by devices that do not have sufficient headroom to pass them, the tops (and bottoms) of the waveform become flattened.

Whether the signal is periodic, symmetrical or asymmetrical, the flattening of a waveform (by design or by clipping) can only be accomplished via the addition of in-phase harmonics—just as in the case of a square wave.

If the clipping of the signal also results in the area bounded by the waveform above the time axis and below the time axis no longer being equal then a DC component has been introduced into the signal as well.

This, however, has no effect on the general shape of the waveform. If the DC component was somehow removed without otherwise affecting the signal, the waveform would simply experience a vertical shift, either up or down, depending on whether the DC component that was removed was greater than or less than zero.

I hope that this has helped to illustrate that it is the harmonic structure (content and phase relationship) of a signal that is responsible for flat-top shaped waveforms. Square waves are merely one example of these types of waveforms.

Charlie Hughes has been in the pro audio industry for over 20 years, having worked at Peavey and Altec Lansing. He currently heads up Excelsior Audio Design & Services, a consultation, design and measurement services company located near Charlotte, NC. He is also a member of the AES, ASA, CEA and NSCA, in addition to being an active member of several AES and CEA standards committees.

 

{extended}
Posted by Keith Clark on 05/16 at 04:56 PM
AVFeatureBlogStudy HallAVMeasurementProcessorSignalPermalink

Pro Production: Understanding The Language Of The “Show”

A glossary of common terms you're likely to hear in a performance venue.

When first starting out in audio, a newcomer will hear terms that may seem like a whole new language. These terms are very common to hear in arenas, union halls, theatres, and similar venues.

While not so common in clubs, even there you can hear some of these terms used by the seasoned veterans of the business. This would be particularly true of clubs that work with regional and national touring acts.

You can expect to hear experienced production people barking out instructions such as: “That rack goes ‘front of house left’ and that case goes ‘upstage center.’”

Learning the common terms of the business can make the day go smoother for everyone. In the above phrase, ‘front of house’ refers to the house mixing position, and ‘front of house left’ would be the left side from the engineer’s perspective facing the stage.

In the second part of the example, ‘upstage’ refers to the part of the stage farthest from the audience while ‘center’ refers to… well… the center of the stage. Therefore, a piece to be positioned ‘upstage center’ would go to the center and rear of the stage.

Editor’s note: The ancient folk of Europe built their stages raked, so that the rearward portion was actually a little higher than the front-most part. That way, people in the audience could see all the action, rather than straining to see something transpiring at the back of the stage. This was the quite literal origin of the terms ‘upstage’ & ‘downstage’.

Perhaps the simplest and most commonly used terms are ‘stage left’ and ‘stage right’. These terms are referring to the performer’s perspective as they are facing the audience.

Standing on the stage and facing the audience the area on your left would be ‘stage left’. Obviously, following form, the area to your right would be ‘stage right’.

Below are simple but important terms that you should know for working in this field. Where appropriate, common abbreviations are shown. Many times you will find these abbreviations stenciled onto the road cases.

Also, it is common for only the abbreviations to be used on stageplots. With that in mind, it is important to also familiarize yourself with the common abbreviations.

Positional Terms
Stage Left (SL): Left side of the stage from the performer’s perspective facing the audience.

Stage Right (SR): Right side of the stage from the performer’s perspective facing the audience.

Downstage (DS): This would be the part of the stage closest to the audience. You would consider it the front of the stage.

Upstage (US): This would be the part of the stage farthest from the audience. You would consider it the rear of the stage.

Center Stage (CS or C): This one is self explanatory.

Off-stage: This refers to the area just off the main performing area of the stage.

The above terms can be combined for a more descriptive location on the stage. For example, Downstage Left (DSL) would be downstage (front area of stage nearest the audience) and stage left.

Other common stage position terms and abbreviations are:
Downstage Right (DSR)
Downstage Center (DSC)
Off-stage Left (OSL)
Off-stage Right (OSR)
Upstage Left (USL)
Upstage Right (USR)
Upstage Center (USC)

Note: The term ‘backstage’ does not apply to a position on the stage but rather refers to the area behind the stage (or sometimes beside the stage) that is used for dressing rooms, storage, production equipment, etc. This area is not viewable by the audience.

Front of House (FOH): When this term is used referring to a position then it means the area where the house mixing console is (or will be) located for the show.

This is also sometimes referred to as Mix World. When not used as a positional term, it appears in phrases such as FOH speakers, FOH console, FOH amps, etc. In those usages it denotes items used for the ‘house’ PA (the PA for audience sound, not the performers’ monitor system).

Monitor World (MON): This is the area, almost always offstage, where the monitor mixer is located. Also called ‘Monitor Beach’, ‘Stage Mix Position’.

Sound Wings: Separate risers on each side of the stage (or sometimes can be part of the main stage) where the speakers for the house are stacked. Sometimes the name is shortened to ‘Wings’.

Spot Bay: Area reserved for spotlight and operator. Can be a riser, scaffolding, or purpose built area. Also called ‘Spot(light) tower’.

Work Terms
Stage Plot: A stage plot is a ‘map’ of sorts showing you a rough layout of the stage.

Show-call: Anyone scheduled for ‘show-call’ will be doing work that must be done during the show.

Note: Proper decorum generally dictates that anyone working show-call wear ‘blacks’. That refers to black pants and a black shirt.

Hit: Sometimes someone will ask something such as “When does the show hit?”. ‘Hit’ would be the scheduled start time of the performance.

Strike: An item to be ‘struck’ is meant to be removed from the stage. You could be told “Strike the guitarist’s vocal mic”. In that case you are being told to remove the mic from the stage (or to skip it on the stage plot if before setup).

Set Change: This is the process of clearing the stage of the opening act’s gear and preparing the stage for the headlining act.

Loader(s): These are people assigned to the truck/trailer being unloaded. They remove items from the truck, and others take them from there to the stage during load-in.

Pusher(s): These people take the equipment from the ‘loaders’ and ‘push’ it to the stage area.

Stagehand(s): These workers await the equipment at the stage area and position it as the ‘pushers’ bring it to them. Also called ‘Hands’.

Note 1: Loaders and Pushers may assume other duties once their initial roles are complete.

Note 2: On load out the loaders will still be assigned to the truck/trailer but many times all other hands will tear down and then push their own ready work to the truck rather than having designated pushers for load out. This can vary so check with the Crew Chief.

Crew Chief/Stage Manager: This person(s) will direct the load in, setup, set change, and load out to see that it all occurs in an orderly fashion.

LD: Lighting Director. Main lighting operator.

Spot Op/Spot Operator: Person whose assigned duty will be operating the followspot during the show.

Tech: Generally refers to anyone with a ‘technical’ knowledge of the system or a part of the system.

Band Engineer (BE): The engineer that actually mixes the group.

System Engineer (SE): The engineer whose job it is to see that the system is configured and operating properly.

Focus: Once the light show is hung and raised the LD may want to ‘aim’ the fixtures to cover certain parts of the stage. This is called the ‘focus’.

Dressing Cables: Process of making cable runs neat and safer by taping them to the stage with Gaffer’s tape.

Spike the Stage: A term that means to mark the position of items that might or will be moved so that they can return to their original position during ‘set change’.

Items

Backline: This term refers to, and is interchangeable with, band gear. This includes guitar amps, drums, etc. This refers to the items themselves and NOT an area of the stage.

Wedge (MON): Common term indicating a stage monitor.

Drumfill: Drummer’s monitor. Can be a simple ‘wedge’ or can be a larger setup depending on the act’s requirements.

Sidefills: Stage monitors placed on the side of the stage to supplement the individual monitors of the musicians.

Stacks: Term used to denote house speakers for the audience (FOH Stacks).

Flown Speakers: Speakers that are suspended from overhead truss (or other means) rather than simply stacked near the stage.

Distro (PD): AC distribution center. Used for larger shows where the system needs more power than a few typical circuits can provide. Also called ‘Power Distro’.

Gaffer’s Tape (Gaff): Professional tape used to ‘dress cables’. While similar in appearance to duct tape, gaffer’s tape leaves less residue behind, has a non reflective appearance, is easier to tear, and is by far the professional’s choice.

Pin 1 Lift: This is an adapter or balanced microphone cable where the shield is intentionally not connected on one end of the cable or adapter. This is a much safer way to attempt to lessen or remove the noise caused by a ground loop. Lifting the AC ground is dangerous and not recommended.

{extended}
Posted by Keith Clark on 05/16 at 04:44 PM
ProductionFeatureBlogStudy HallProductionAudioLightingRiggingStagingVideoAnalogBusinessConcertEducationInterconnectLoudspeakerMixerSignalSound ReinforcementSystemPermalink

Can Award-Winning Recordings Be Made In A Home Studio?

A reasoned perspective plus thoughts from recording engineers Ed Cherney, George Massenburg and David Hewitt

To begin with, there are many things that you cannot do in a home studio. A competent recording of a live band - still the mainstay of the recording industry - is usually impossible in your bedroom. Fitting an orchestra in there is also challenging.

And even though the topic of this piece is whether award-winning (i.e., Grammy) music can be conceived, recorded and mastered in a home studio, there’s nothing to indicate that any such recordings actually have been.

What is possible theoretically is not always possible in practice. That being said, however, amazing things can be done in home studios, and it’s an interesting topic for recording enthusiasts to ponder.

Can all of the semi-pro equipment that promises great results actually deliver a recording that rivals the “majors”?

A Bit Of History
Making a record used to be a complex task, requiring engineering by highly trained technicians using specialized equipment in a large, dedicated facility. There were also many other people involved, each with a specific contribution made to the process. Musicians, vocalists, songwriters, arrangers, producers, publishers and engineers all had their own areas of expertise; each did a job and that job only.

As the business matured, these jobs started to blend, and recording technology also advanced. The equipment became smaller, cheaper and less complicated to operate. By around the mid-1970s, it was possible and practical for recording artists to purchase home versions of professional recording equipment in order to produce and record themselves as well.

In the last 20 or so years, the wide availability of low-cost, high-quality digital recording technology has greatly narrowed the price and performance gap between pro gear and semi-pro demo making tools.

There has been the rapid, massive evolution of the digital audio workstation as a primary recording medium. Further, it’s now possible for anyone with a computer, whether used mainly for word processing or surfing the Internet or whatever, to access the same technology that professional recording engineers, producers and artists currently use.

But back to the intriguing question: Is it possible for someone to actually record an award-winning product with a desktop, laptop, or any other currently available type of dedicated home recording device?

Let’s put it all into perspective. I contacted some of the most respected engineers in the industry, all of whom have worked on award-honored projects of their own, to get their thoughts on this topic.

After all, they should know what makes a great recording.

Ed Cherney

Ed Cherney

Grammy and TEC Award-winning engineer/producer Ed Cherney works in all genres of music; the only common denominator among his diverse credits is quality. As a recording engineer and/or mixer, he’s worked with - to name just a few - artists from Bette Midler, Bonnie Raitt, Wynonna and Poe to Jackson Browne, Keb’Mo, Bob Dylan, and the Rolling Stones.

“Yes, Grammy projects can be done in project studios, if you have a nut behind the wheel who knows how to drive,” Cherney says. “The one thing I’ve noticed, though, are a lot of really great computer and software manipulators that edit, tune and fix, but have very little knowledge about good audio, never having the opportunity to listen to music through Class A gear, great acoustical spaces and great microphones.

“And don’t forget that most contemporary music projects are conceived in home studios and then maybe expanded in commercial facilities.”

George Massenburg

George Massenburg

George Massenburg’s engineering and producing credits include Billy Joel, Kenny Loggins, Journey, Madeleine Peyroux, James Taylor, Randy Newman, Lyle Lovett, Aaron Neville, Little Feat, Michael Ruff, Tot, The Dixie Chicks, Mary Chapin Carpenter and Linda Ronstadt, among many others.

“Of course you can make a recording of a terrific piece of music anywhere, anytime,” Massenburg states. “To the degree that you as an artist/writer are unprepared, insecure, unready, unclear on the concept, unenlightened, unimaginative, lazy and/or uncommitted, you may need more technology, groovier decor, bigger crews and other artificial supports to prop you up.”

David Hewitt

David Hewitt

Hewitt owns and operates Remote Recording Services, and besides being a Grammy winner, he has done things like engineering for the Concert for America in Madison Square Garden, Woodstock 1999 and the 2002 Academy Awards.

“Can you make a Grammy-winning recording at home? Sure, as long as you use my mobile recording truck!” Hewitt remarks with a laugh. “Seriously, it’s really all about the music, artist, and the song; if it’s a worthy piece of work, why not?

“It could be done. Not to underestimate good engineering and good equipment, though. We have recorded albums at people’s homes with our truck. We did an Aerosmith album that way.”

“In the late 1970s, for Jackson Browne’s Running on Empty, Greg Ladanyi and I recorded it using the old Record Plant remote white truck for all the stage shows on the tour, but some of the songs were done on a four-track in hotel rooms along the way,” he continues. “One was even recorded on a two-track in the back of a bus somewhere in New Jersey, with cardboard boxes used as overdubs for the snare and bass drums!”

What’s The Difference?
I believe that most people would rather hear a great song, with a killer performance—even if it had a sound quality that was less than state of the art—instead of a badly performed, horrible piece of drivel that is “slick” sounding and well recorded.

If that’s the case, will there be any difference in the sound quality if a very talented artist records his or her new album at home rather than at a big studio, especially if the material is just as good as it was on the last record, and the same producer and same engineer are employed?

The software the artist owns is the exact same program and version that the big studio has. Does the hard drive on the studio’s computer sound better than the hard drive on the artist’s computer?

In the digital world, data is data, after all. Does a document on one computer read any differently than that same document on any other, assuming everything is working correctly?

So, is there any difference between the sound quality the talented artist will get if making the record at home as opposed to recording it in a very expensive studio? Theoretically, the answer is no, but in practice, things are not always so simple—especially in the recording world.

Remember, in my little example all the personnel are skilled, talented and very good at what they do. And so are the arrangements, players, productions, material, etc.

The Input Signal
Things start to differ a bit when we get to the actual sounds that are being recorded. First and foremost on my list is the quality of the input signal.

I like to explain this concept by using a lowly little audio cassette (remember those?) for my example. If you carefully transfer what you feel is the best sounding recording you have ever heard to that cassette, then play back the cassette copy, a semi-faithful reproduction of the original recording will be heard. There may be some added noise, distortion or signal coloration, but there will be a definite aural signature of the quality of the original signal.

In other words, if you record something you think is the best sounding, most professional, sonically perfect example of high-quality sound you’ve ever heard on to a cassette, it will sound very much like the original on playback, except for any noise and/or distortion that the cassette may impart on the signal.

However, if you then stick a little telephone answering machine microphone into that same cassette deck, and record yourself saying “Testing 1, 2, 3; How now, brown cow?” into that mic, anyone and everyone who hears it will be able to tell that the recording was made using a cheap mic plugged into the cassette machine.

That lowly little cassette still has the ability to allow the best sounding recording ever made to be distinguished from the homemade, cheap mic recording.

So, in a digital recording environment, which is supposed to be an uncolored neutral medium, people making recordings must make sure they have the highest quality input signal - even though it can be altered after the fact - so as to get the sound/image they want printed with the best quality, or, at least, with the quality that is desired.

What I’m saying is that while your recording gear will do a professional job of recording what you put into it, you must do a professional job on the front end in getting that sound. And that’s where the big studios can’t be beat.

The Big Studio Advantage
Where the big studio has another advantage is in the quality of the mics, preamps, equalizers, compressors etc., that they own.

To help get that input signal to sound its best, the big studio is usually second to none.

Accordingly there are many more choices than the artist would have at home. Plus, there is a difference of literally hundreds of thousands of dollars in the total worth of that equipment, compared to what the artist might own.

Still, if the artist chooses one very good front end for the home system, with a sound that he or she loves, then there is a good possibility that the sound of the big studio can be approximated for that artist.

But there will not be as many choices, variations, or options available at home. One factor that can help home studio owners is to use samples.

Because their samples are usually made in bigger studios with great gear, popular sample playing software like Cakewalk Studio Instruments can bring the home/project user closer to pro studio quality.

This further levels the playing field: when these samples are used, the quality is identical to what was captured in the larger facility.

Monitoring
Next on my list of things to get the home studio closer to the big studio is accurate, trustworthy monitoring, which is crucially important.

The trend that most engineers and producers follow for monitoring is to shy away from the large monitor speakers in a studio, and instead use smaller, near field systems that are found in many facilities and that can be easily carried around. These speakers are much smaller, and very affordable, which is another element in the artist’s favor when planning his or her project/home studio.

In the last several years, self-powered near field monitors have gotten very popular, and because the power amp is built into the speaker, once again it will be the same one, whether used in the studio or at home.

However, no one will try to argue that the sound you get out of a near field system in your bedroom can equal an $80,000 system in a well designed acoustical space, which brings us to…

Acoustics
As far as having a nice large recording room with high ceilings that is acoustically tuned, with isolation booths, and controlled reflections and reverberation, the big studio will always have the advantage. That is, unless the artist lives in an old former church or similar structure, or has spent a fortune treating the room.

However, because many instruments are recorded direct, and a space for recording vocals or a single instrument can be designed in a spare bedroom, it is less of a problem than it would be if trying to record an orchestra at home.

Finding a good place at home for your control room is a sticky challenge. Standing waves, bass build-up, high-frequency absorption, reflections of sound off the mixer or computer all have to be dealt with.

What you’re trying to do is to get an accurate sound in the room, which will let the mixes translate well on other systems. This is not always possible in a room at your house.

Fortunately, there is a lot of literature and information on the subject, and quite a few companies have come out with products that allow one to quite reasonably treat the room with baffles, diffusers and bass traps.

When the room is “tuned” for the best response, recording, mixing and monitoring there becomes a trustworthy process, and the artist can trust that the sounds being heard and the mixes being made are accurate and true.

The Final Consideration
Finally, there is mastering. Most big studios send their product out to dedicated mastering houses for the final finishing touches. No one will argue about the sterling quality of the world’s best mastering houses. If you want your project done right 100 percent of the time, that’s where it should go.

IK Multimedia T-RackS, which includes the vintage tube equalizer shown here, is one of many mastering software suites well suited for home/project studios.

However, more and more project studios do desktop mastering to go with their desktop recording. The software tools can do a surprisingly decent job, and many home studio owners find that they can master projects to their own satisfaction.

Yet the fact remains that home-mastered projects can suffer from a lack of good monitoring, and more often suffer from a lack of good sense on the part of the person doing the mastering.

There’s a reason that pro mastering engineers do a professional job, and it’s because they know what they’re doing, in addition to having the best tools. In the end, the project studio is best for people who have a good foundation in recording and know what they want.

It is a technological marvel that so much can be done in a bedroom studio, but the major studios serve a crucial purpose and always will.

Bob Buontempo has more than 30 years of professional recording experience, and has been the president/owner of Buontempo Entertainment Services since 1976. He has also taught numerous recording and audio educational courses over the years. Bob offers special thanks to Ed Cherney, George Massenburg and David Hewitt for their contributions to this article.

 

{extended}
Posted by Keith Clark on 05/16 at 03:41 PM
RecordingFeatureBlogStudy HallDigitalDigital Audio WorkstationsMonitoringSoftwareStudioSystemPermalink

The Single-Mic Technique: An “Old-Fashioned” Approach That’s Surprisingly Effective

Sometimes air can serve as the best mixer

What goes around comes around. From the 1920s through the 1940s, PA systems for music often used only a single microphone.

Band members would gather closely around this mic, balancing their sound by moving toward or away from the mic. Radio broadcasts and recordings often used one mic as well. And over the past several years, this “old-fashioned” technique is making a comeback.

Many bluegrass and folk bands use the one-mic method with surprisingly good results, typically using a large diaphragm cardioid condenser. It picks up sound with amazing clarity and usually with very good gain before feedback.

How can a single mic work so well? As the theory goes, the fewer the number of open mics, the better the gain before feedback. Also, a single mic picks up all instruments and vocals with a coherent, focused sound. There are no phase cancellations between multiple mics to color the tone or smear the transients.
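
The open-mic point follows the usual rule of thumb that potential acoustic gain drops by 10 x log10(NOM) dB as the number of open mics (NOM) grows - about 3 dB for every doubling. A quick illustration (my sketch, not from the article):

import math

def nom_penalty_db(open_mics):
    # Reduction in potential gain before feedback versus one open mic.
    return 10 * math.log10(open_mics)

for n in (1, 2, 4, 8):
    print(f"{n} open mic(s): -{nom_penalty_db(n):.1f} dB gain before feedback")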

Careful Placement
Want to try the single-mic method? Set up the mic on a stand, ideally in a shock mount. (Use a boom if you need more room for instruments). Place the mic at about chin height and 12 to 18 inches away from the performers. The stand should be positioned in the middle of two or three musicians. If the band is larger, every two people might be allocated a single mic.

In a typical bluegrass or folk group, you’ll see a fiddle, guitar, banjo, mandolin, singers and maybe a dulcimer or bass. It’s possible to get a good balance of all these elements through careful mic placement.

Raise the mic stand to make the vocals louder relative to the instruments, or vice versa. Another thing to try is aiming the mic slightly left or right of center to adjust the balance between performers. Some performers and engineers prefer to run the mic signal through a high-quality preamp and then feed the preamp’s line-level signal to front-of-house.

Feedback created by stage monitors is always a concern, and this method helps eliminate the potential for problems. And performers tend to hear each other just fine anyway because they are close together and generally not using guitar amps.

However, lead acoustic guitar players often want/need a monitor to hear themselves. It’s a good idea to start with no equalization, and then tweak a graphic EQ to notch out feedback frequencies.

Certain Advantages
One obvious advantage of the single-mic technique is that the stage looks cleaner. Gone is the forest of mic stands, booms and cables. Instead, you have a low-tech, old-fashioned look that fits in well with the music.

Setup is much quicker as well: just place the mic, plug it in, adjust position, and you’re done. The band determines the mix, rather than the sound mixer who might not be familiar with the music. Of course, musicians are happier with this arrangement than sound mixers!

However, with the single-mic method, fine control of the mix balance, EQ and effects is given up. The technique works best for small acoustic groups that have a good live balance. Also, sound may be a little thin because you’re not hearing the usual close-mic proximity effect. Some bass boost can help with this. One other disadvantage is that the method is unfamiliar to some house engineers.

It comes down to another theory, one that says air is the best mixer. The single mic capitalizes on this, capturing a balanced blend of all instruments and vocals from one point.

Give it a try, and you just might be delighted with the purity and simplicity of this technique.

AES and SynAudCon member Bruce Bartlett is a live sound and recording engineer, audio journalist, and microphone designer. His latest books are “Practical Recording Techniques, 6th Edition” and “Recording Music On Location.”

{extended}
Posted by Keith Clark on 05/16 at 03:08 PM
Church SoundFeatureBlogStudy HallMicrophoneMixerProcessorSound ReinforcementStagePermalink

Wednesday, May 15, 2013

Million Dollar Sound: Analog Style For Elton John At The Colosseum

"This is a place to practice the real art of live sound." -- Matt Herr, front of house engineer for Elton John

Elton John’s Million Dollar Piano is exactly that: A singularly stunning instrument that successfully marries technology and art, and cost just that much to create.

As the name of the legendary performer’s latest headlining act to take up residence at The Colosseum in Caesars Palace, The Million Dollar Piano runs two full hours, expanding beyond the core band heard during The Red Piano – his previous Caesars show – with a pair of cellists that give added verve to the early hits, four backing vocalists including Rose Stone of Sly & The Family Stone pedigree, and percussion avatar Ray Cooper, who has accompanied John on countless occasions for decades running.

Having established its own longstanding presence within Sir Elton’s orbit over the years, Clair Global came to The Colosseum stage for Million Dollar to manage audio orchestrations, with company veteran Matt Herr taking charge at front-of-house and independent engineer Alan Richardson given the keys to monitorworld.

Utilizing a Meyer Sound house PA including left/right hangs equipped with eight M3D cabinets, a pair of arrays using three MSL-4 enclosures and a CQ-1 at the bottom, and twin MICA arrays hung 11 deep in the center, the show is further reinforced with eight MILO 120s covering additional seating, two M’elodie arrays serving as side fills, and a brace of MM-4 miniature front fill loudspeakers mounted on the stage lip.

Left on a relatively loose leash to define what’s heard in the house, Herr concedes that “Elton is old school, there’s no getting around that. He loves to feel the PA, the tightness of the low-end, his wedges are loud as hell. He feeds on all that. He never tells me what it should sound like, he just tells me to turn it up.”

Deserving Of Analog
Guided by the tenets of solid musicianship and dedication to a common dynamic, John and his band are a collaborative entity that puts playing first, foremost, and above all.

Elton John and band on stage at The Colosseum. (click to enlarge)

Eschewing even the casual notion of using a click track, the group is as free-flowing as one of Sir Elton’s pant legs as it flaps in the tempest of sonically-displaced air coming from his wedges.

The sum product of musicians of this caliber is known in many cases to cry out for the good old warmth of analog, and in this instance Herr heeded that call without any reservations. “That’s the way I’ve always heard this music in my head,” he admits. “In my estimation it just deserves an analog sound.”

So it comes as no surprise when a pair of Yamaha PM5000 consoles are discovered at the house mix position. The main board, a 52-channel model, is supplemented by a 28-channel incarnation of the desk kept to one side at a 90-degree angle.

Front of house engineer Matt Herr at his Yamaha PM5000 console. (click to enlarge)

Outfitted with a dozen stereo channels used for John’s MIDI’d piano as well as effects returns and band member Kim Bullard’s keyboard rig, the big 5K is complemented by its baby brother, which is used solely to capture the myriad stage inputs arriving from Ray Cooper’s sprawling percussion section.

Housing tympani, xylophone, marimba, tubular bells, chimes, roto-toms, congas, vibraphone, enough toys to satisfy the child musician in a dozen adults, and much more, Cooper’s world leaves a sizable footprint within the 125 feet of horizontal space that the stage opens onto the audience.

All of the channels on the smaller desk are sent to a stereo out, then subbed into the 5K, where they can be controlled via a single fader and flown into a VCA. Herr then uses the big desk’s mute groups to mute Cooper’s input according to what he’s playing or not playing.

A 34-space, two-sided rack provides the real estate necessary for effects, which among a number of devices, includes a TC 2290 for vocal delay and harmonizers for background vocals.

“While there’s a little ‘verb and harmonizer on the backing girls, I use a Lexicon 480L on Elton’s vocals,” Herr explains. “A LARC controller mounted atop my main console gives me control of the 480L, which I also use on piano. With the LARC’s presets I can quickly punch-up six or seven reverb times and delays and such, and on the piano I use a small hall setting that I don’t change at all – it’s just a set-it-and-forget it kind of thing.”

Among his inserts, Herr maintains five gates on Cooper’s tympani, and four more on his roto-toms. Still more gates spread across drummer Nigel Olsson’s kick and toms. Arriving on the scene courtesy of a collection of vintage dbx 160 comp/limiters, compression is used somewhat sparingly on vocals, bassist Matt Bissonette’s electric bass, and guitarist Davey Johnstone’s acoustic guitar.

“I really don’t rely on subgroups,” Herr says of his mix strategy. “All of my inputs hit the stereo bus and then I just break them up under the VCAs. The desk has 12 VCAs, that’s one of the reasons I wanted it for this show so much. I use 11 of them all the time – I send all the drums to one, the bass, guitars, Elton’s vocals, boys’ vocals, girls’ vocals to others, and so on down the line right through the cellos and percussion.”

While Herr is simpatico with the PM5000, he also had the opportunity to mix John on a new Yamaha CL Series digital console at the Yamaha 125th anniversary celebration held at Disney’s Hyperion Theatre during the recent NAMM Show in Anaheim.

Left to right, long-time Elton John Band members Ray Cooper, Davey Johnstone, and Nigel Olsson. (click to enlarge)

“I enjoyed using the CL; in fact, when we have solo shows with Elton like this event, it will be my desk of choice,” Herr states. “The CL is very user friendly and sounded really good, in my opinion. The Neve inserts sounded fantastic; I used one of the compressors on Elton’s vocal. Normally, I use an outboard compressor, but this one worked quite well.”

The Million Dollar Piano’s input list takes a democratic approach to the matter of microphone selection, spanning its stage plot with everything from a hardwired Audio-Technica AE6100 at John’s vocal position to Sennheiser 421s at various points for drums and percussion, AKG 460 on ride cymbal, AKG 414s on overheads, Shure SM57s on snare top and bottom, and Beta 52 on kick.

Countryman was the DI of choice for bass and Bullard’s keys, while guitarist Davey Johnstone miked his electric guitar cabinets with both Sennheiser 409s and 609s, which are used selectively as needed.

Good Kind Of Loud
Four years went into the building of the highly-modified CFIIIS Yamaha grand piano that earned its Million Dollar name prior to Sir Elton simply calling it “Blossom.” A visual treat and sonic masterpiece, the instrument is equipped with its own video display on its audience side that manifests colorful graphics and imagery with amazing definition.

A good look at the piano, as well as the wedges for John and the stage arrangement. (click to enlarge)

At any given moment, you may see anything from Versace-inspired designs to video footage of past performances and ambience-enhancing expressions matching adjacent scenery or lighting effects.

Tipping the scales at nearly 2,000 pounds, the Million Dollar Piano stays put at Caesars, moving in and out on a lift in accordance with John’s dates at The Colosseum. Four stereo MIDI modules – two of them active all the time – are used to create the huge sound that comes out of the instrument.

“With our MIDI configuration, I can really fatten up the sound when he lays into the keyboard with his left hand,” Herr says. “The sound is super tight and it hits you in the chest like a kick drum. If I spread my Eventide Eclipse harmonizer across it too, it gets even more impressive. The sound of the strings really comes out. You get so much more than what we all expect to hear from a piano, even if it is a million-dollar one.”

No mics are used on Blossom, which is a good thing really, because the stage volume around the piano is so loud that their presence would only promote unwanted bleed. For the record, the source of all this unbridled gain is a collection of Clair 12 AMS wedges.

“I use two stereo mixes for Elton,” relates monitor engineer Richardson. “The first runs through a pair of single 12 AMS wedges, and is just his vocals and vocal reverb return only. Nothing else goes in there. I use a pair of double-12 Clair AMS wedges for the second stereo mix, and it’s for a band mix including his piano. If there’s anything truly characteristic of Elton’s mix, it’s the fact that it’s loud. It’s always been like that, and he wants it that way. He has a history of blowing up studio monitors, headphones, wedges, you name it.”

A historian of monitoring trends in the business, Richardson has carefully charted the progression of John’s escalating use of sonic horsepower onstage and applied lessons learned to his own strategies in use today.

“I took this job 17 years ago,” he notes, “and at that time they were using S4s. With the backlash you got off of those cabinets onstage, you had to adopt a piercing, high-end sound in order to hear yourself. As we moved away from S4s, however, Elton still thought he wanted to hear this wailing, high-pitched sound onstage, but he really didn’t need to anymore. What he thought he wanted to hear wasn’t necessary.

“I took it as my task to slowly draw him away from the needs of his past, and I’ve been successful to a large degree. Now it’s still loud, but a good kind of loud. Not an annoying rip-your-head-off loud.”

Monitor engineer Alan Richardson at the PM1D. (click to enlarge)

The Real Art
Working from a Yamaha PM1D, Richardson does all of his cueing on headphones, which provides him with the advantage of hearing feedback before John does, and also gives him a chance to react quickly if things are getting too loud in the house.

“When it’s getting loud in the house it creates this umbrella effect that starts enveloping the stage,” he explains. “When that starts happening, I can hear it immediately in my headphones, and I can call Matt out front and ask him to pull it back 2 dB. That’ll make all the difference in the world, and then we’re not battling each other. If it doesn’t get turned down out front, then I’d have to turn the stage up, and it would be all downhill from there.”

The layout and miking of Nigel Olsson’s drum kit. Note that he also uses a Soundcraft GB8 to create his own personal stage mix. (click to enlarge)

Beyond the full-frontal wedge approach, most of the remaining stage is on in-ear monitoring, with band members each giving expression to their own preferences on earbuds. Sennheiser G3 wireless systems handle transmission and receiving chores.

Back out front, Herr notes that all things loud considered, it’s still a joy working with the “handcrafted” old school, organic sound emanating from the stage. “For what we do it would be a shame to take it out of the analog world,” he adds on a parting note. “A lot of people see my gear and say ‘what the hell, you’re pretty young to be using that stuff’ [Herr is 38]. But this is my 20th year at Clair so I’ve been doing this long enough, and know a lot of the people who pioneered this route.

“I think a lot of times the digital world takes the fun out of mixing. You become more of an operator. This band never plays a song the same way twice. You have to be on your toes constantly, moving faders and keeping pace with everything transpiring on stage. This is a place to practice the real art of live sound.”

Gregory A. DeTogne is a writer and editor who has served the pro audio industry for the past 30 years.

 

{extended}
Posted by Keith Clark on 05/15 at 01:50 PM
Live SoundFeatureBlogConcertConsolesEngineerLine ArrayMixerMonitoringProcessorSound ReinforcementSubwooferTechnicianPermalink

Tuesday, May 14, 2013

The Bigger Picture On The Equalization Of Loudspeakers

Considering the off-axis as well as the on-axis response

The procedure often followed for equalizing a loudspeaker is to place the measurement microphone on-axis and adjust for the flattest frequency response.

This often involves boosting some filters when the axial response over a range of frequencies is lower than the average.

Those that are opposed to the use of boost filters may choose to arrive at the same resultant response by reducing (cutting) parts of the response to the lowest common denominator. This results in the same electrical curve, but without compromising headroom in the signal chain.

The whole procedure is initially performed looking at the direct field of the loudspeaker.

A Different Perspective
This article suggests a modification to this approach, by considering the off-axis as well as the on-axis response.

The goal of the equalization process is to produce a better sounding system for the audience. Yet a relatively small percentage of the audience sits in the on-axis position. It would therefore seem ill-advised to consider only the axial position when equalizing a system.

Another popular approach is to average the response of a number of seating positions to arrive at the best “common denominator” curve for the equalizer. This can also produce good results, but care must be taken to exclude artifacts due to room reflections from the measured data, as each movement of the mic places it in a different acoustic environment.

This makes it a challenge to limit the response measured at each test position to the direct field from the loudspeaker. Most contributions from the room will be random at different seating positions and not addressable by equalization. So, the spatial average is not a bad idea, it is just hard to implement.

Another Path, Same Destination
Another way to consider off-axis listener positions is to determine the base equalization curve for the loudspeaker by observing its 3-dimensional radiation balloon.

A properly gathered balloon will reveal the anechoic response of the loudspeaker at all listener angles.

Since the direct field of a loudspeaker is considered to be largely independent of the acoustic environment (at least at short wavelengths), direct field equalization based on balloon data has a strong theoretical basis.

Figure 1 shows the axial frequency response of a multi-way loudspeaker. The dip in response at 500 Hz is due to phase cancellation between multiple drivers in the box. A boost filter at this center frequency will restore the axial response to flat.

But observation of the entire radiation balloon at 500 Hz (Figure 2) shows that the devices come into phase at two off-axis positions, possibly producing a significant peak in the response for many of the audience members.

The “correction” made to the on-axis response will produce an even less ideal response off-axis, where a greater number of listeners are located - or even worse, at a microphone position, causing a feedback problem.

Figure 1 - The on-axis 1/3-octave frequency response magnitude of a multi-way loudspeaker. Conventional wisdom suggests the application of filters to “flatten” the response as shown. But doing so without examining the balloon data could produce off-axis problems. (click to enlarge)

Devices that have a destructive phase offset at one listener position are likely to have an in-phase relationship at another. The notch might better be addressed by the use of precision signal delay between the elements.
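
To see why, consider an idealized sketch (the spacing and phase offset below are assumed for illustration, not taken from the article): two point-source drivers with a 180-degree crossover phase offset at 500 Hz and 0.4 meters of spacing cancel on-axis, yet sum nearly in phase around 60 degrees off-axis, much like the balloon in Figure 2:

import numpy as np

c, f = 343.0, 500.0  # speed of sound (m/s), frequency (Hz)
wavelength = c / f
d = 0.4  # assumed driver spacing (m)
offset = np.pi  # assumed 180-degree crossover phase offset

for angle_deg in (0, 30, 60):
    # extra path length to the far driver at this listening angle
    path_phase = 2 * np.pi * d * np.sin(np.radians(angle_deg)) / wavelength
    total = abs(1 + np.exp(1j * (offset + path_phase)))  # two unit-level drivers
    level = 20 * np.log10(total + 1e-12)  # guard against log(0) on-axis
    print(f"{angle_deg:3d} deg: {level:7.1f} dB re one driver")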

In general, for a single transducer, equalization will inflate or deflate the balloon in a specified octave band. The same effect occurs at all angles around the device.

But what is often needed is a re-shaping of the balloon, which can be accomplished by using multiple elements and varying their physical spacing and relative delay.

Figure 2 - The 500 Hz directivity balloon reveals that the loudspeaker is producing as much sound pressure level at +/- 60 degrees as on axis. Boosting an equalizer filter could make the sound “boomy” for listeners at those angles. (click to enlarge)

This is the heart and soul of beam-steered loudspeakers. Boost and dip filters applied based on the axial response only can change the directivity of a multi-element device, since the phase shift introduced by the filter can cause a change in the way that the multiple elements interact.

It should be noted that psychoacoustics also plays a role in this.

Peaks in a frequency response are much more audible (and bothersome) to humans than dips in the response. We are more aware of “too much” than “too little.”

A safe approach to equalization that embodies the theories described herein is to avoid the use of boost filters when calibrating a sound system, since it is better to have “too little” than to have “too much.”

Figure 3 - The 4 kHz directivity balloon reveals that the loudspeaker has a smooth roll-off as the listener moves off-axis. Cutting the equalizer to improve the axial response is not likely to produce problems for an off-axis listener. (click to enlarge)

If a sound practitioner used the axial response for equalization, and cut-only filters to smooth the resonant peaks, acceptable results would likely be attained for most listeners since off-axis lobes will not be “inflated” by the process.

Conclusion
Loudspeaker balloon data is useful for direct field equalization of a loudspeaker. It is plausible that the complete equalization of the direct field can be determined from the axial frequency response and the spherical balloons.

The remaining equalization chores can be left to the installer or commissioner of the system. This can include compensation for boundary loading, device coupling, etc.

The possibility also exists based on the above for a manufacturer to provide an appropriate equalization curve in the form of a simple magnitude vs. frequency plot or as a setting that can be imported into popular digital signal processors.

Pat and Brenda Brown head up SynAudCon, the leading independent organization for in-depth, practical audio education.

{extended}
Posted by Keith Clark on 05/14 at 03:06 PM
AVFeatureBlogStudy HallAVLoudspeakerMeasurementProcessorSound ReinforcementPermalink

Monday, May 13, 2013

How Do You Know What Sounds Good?

Developing a frame of reference

Most of us feel that we know good sound when we hear it. We also have tools to help us analyze the properties of sound, and we’ve learned to interpret those measurements and equate them with good and bad.

But I find it fascinating that sometimes even when things measure “poorly,” we say that it sounds good anyway. Or, conversely, sometimes the measurements look spectacular but “there’s just something wrong.” The challenge is defining what actually constitutes “good” sound, and how we know it is so.

A major issue is that our references are all different from one another. Actually, it goes further than that - many of us have rather poor references. There might be a high-water mark in our memory of some time in the past when we heard a particularly amazing sound system and/or a really great mix.

But memories of sound (or most anything else for that matter) are quite fallible.

In addition, what medium are we using on a daily basis? If it’s something like a car stereo or an iPod, are we “listening through” their shortcomings, becoming oblivious to noticing things like their inherent distortion? Our frames of sonic reference very rapidly get used to what we’re hearing most often.

In a similar way, vision can be quickly skewed by the prevailing source of light. You’ve heard of white balance on cameras, right? Our ears do the same thing as our eyes (and both are wired to our brains) - they adjust to the prevailing conditions and consider that to be “normal.”

Live Is, Well, Live
It’s helpful to remember a time when you realized that there’s another level of sonic excellence. Not the sound, per se, but the fact that it was different and somehow “better” than average.

My favorite personal memory of this is the time I was walking across the campus of the University of Maryland, on the way to set up a recital recording, and out of one of the dorm windows, I heard a saxophone being played.

Then it struck me—I was instantly aware that it was a real, live saxophone being played by a real, live person, rather than a reproduction. Ever since that day, I’ve pondered why this was so obvious. The answers aren’t easy but one thing is clear - loudspeakers and amplifiers don’t quite do sound the justice it deserves.

Something is missing, even from the best playback or reinforcement systems on the planet. Studying this concept could help lead us toward developing better systems and/or operating them in a better fashion.

The MLA Experience
A couple of years ago, I followed up on a suggestion that I should check out the Zac Brown Band on tour, since they were coming through my neck of the woods here in Albuquerque. Upon arriving, Martin “Ferrit” Rowe provided me with a behind-the-scenes look at the tour’s then-new Martin Audio MLA (Multi-cellular Line Array) system.

After explaining the system and how it works, Ferrit led me around the seating area at the Hard Rock Pavilion to demonstrate the coverage. I can assure you that it was pretty amazing.

As he explained it, each driver in the array is fed independently via DSP, thus allowing the coverage to be very carefully shaped according to the needs of the space and the desires of the system techs and engineers.

I could clearly hear where the coverage began and where it stopped, and the evenness was very impressive. From 10 feet in front of the stage to 150 feet back, there appeared to be only 3 dB of difference, even in the mids and highs. I’ve never heard anything like it.

But more importantly, all this whiz-bang technology had provided good sound! Normally I’m not all that wild about shows like this, but it was definitely a treat in this case.

And kudos to Preston Soper on systems and Eric Roderick at front of house for their fine work. These guys had the system dialed in to a degree that, in my opinion, is more rare than it should be.

Of course the best-sounding system and mix in the world doesn’t mean much if the music is lousy. This was not the case with the Zac Brown Band - I really enjoyed the variety of material, the tightness of the band, and the fun that they were having along with the audience. It was just simply a great show.

Back to References
There’s nothing wrong with regularly listening to music in your car or on an iPod or both. You may also have a very good feel for what sounds good, and routinely deliver this with your own system and/or mix. But I also think we need to challenge our boundaries in order to find ways to listen to better systems as often as possible.

Not only that, but we should try to listen to acoustic instruments, whether they be pianos, drums, classical guitars or saxophones, as often as possible. There simply is no reference better than the real thing.

Those of you with a “day job” of mixing grindcore (death metal, industrial music) may scoff at the notion that listening to Chopin on a Steinway is going to help, and to an extent, you have a point. But anyone in this business for more than a few years knows you’re likely to work with a very wide variety of sources and systems over the course of a career.

There likely will come that day when knowing what a piano or a saxophone really sounds like will be quite valuable. Being able to bring it up in the house and remain confident that it actually sounds the way you think it should will be a comforting feeling. It also might give one a good reason to protect one’s hearing along the way…

My final thought: as audio professionals, we are personally the key to good sound. Today’s technology, deployed properly, reproduces sound better than ever before.

Yet a constant refrain in magazines and online forums, as well as our own conversations, is that good sound seems to be more elusive than it should be. We, the system designers, technicians and operators, are responsible.

Now get out there and attend a Chopin concert.

Karl Winkler is director of business development at Lectrosonics and has worked in professional audio for more than 15 years.


The Relationship Between Amplifier Damping Factor, Impedance & Cable

What it really means within the context of a system

Ever have one of your friendly amplifier reps walk in your office to present their new mondo-gazillion-watt beast and point out the damping factor spec of greater than a bazillion? Why, gee-whiz! That’s like 10 times more than the other guy! It must be awesome! Right?

Well, as we have seen before, it depends on how you are going to use it. Let’s start with defining damping factor and see what it means to us.

Amplifier damping factor is defined as “the ratio of the load impedance (loudspeaker plus wire resistance) to the amplifier internal output impedance.” 

This basically indicates the amplifier’s ability to control overshoot of the loudspeaker, i.e., to stop the cone from moving. It is most evident at frequencies below 150 Hz or so where the size and weight of the cones become significant.

A system where the damping factor of the entire loudspeaker/wire/amplifier circuit is very low will exhibit poor definition in the low frequency range. Low frequency transients such as kick drum hits will sound “muddy” instead of that crisp “punch” we would ideally want from the system.

The formula for calculating damping factor:

DF = ZL / (ZAMP + RW)

Where:
ZL = The impedance of the loudspeaker(s)
ZAMP = The output impedance of the amplifier
RW = The one-way resistance of the wire, times 2 for the total loop resistance

Very few amplifier spec sheets state the output impedance, but you can generally call the manufacturer for this spec or you can calculate it by dividing the minimum rated load impedance by the damping factor rating.

For example, if we are using an amplifier with a damping factor rating of 400 that requires a minimum load of 2 ohms, its output impedance calculates to 2 ÷ 400 = 0.005 ohms.

So let’s look at several examples and figure out what we can control in the design of our system to achieve the best results. Say we have two 8-ohm subwoofers (a 4-ohm parallel load) connected to an amplifier with a damping factor of 400 through 100 feet of 12 ga. wire. At a resistance of 0.00159 ohms/foot, 100 feet gives us 0.159 ohms each way.

Plugging the numbers into our formula, we get:

DF = 4 / (0.005 + (2 × 0.159)) = 4 / 0.323 ≈ 12

In this case, our system damping factor is just 12. Most experts agree that a reasonable minimum target damping factor (DF) for a live sound reinforcement system would be 20, so we need to consider changing something to get this up.
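If you want to sanity-check these numbers yourself, here is a minimal Python sketch of the formula above (the function and variable names are my own, not from any published worksheet):

def damping_factor(z_load, z_amp, r_wire_one_way):
    # RW in the formula is the one-way wire resistance times 2 (the full loop)
    return z_load / (z_amp + 2.0 * r_wire_one_way)

z_amp = 2.0 / 400.0  # min rated load / DF rating = 0.005 ohms
print(damping_factor(4.0, z_amp, 0.159))  # two 8-ohm subs, 100 ft of 12 ga.: ~12.4

Running it confirms the result above: roughly 12.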

The critical element in this definition is the “loudspeaker plus wire resistance” part. In this case, 100 feet of 12 ga. wire feeding a 4-ohm load results in around 0.7 dB of loss, much greater than the maximum target of 0.4 dB, so let’s try bigger wire. 10 ga. wire has a resistance of 0.000999 ohms/foot, so 100 feet gives us 0.0999 ohms each way, which gets us to the 0.4 dB target.
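Those dB figures come from the voltage divider formed by the loop resistance and the load impedance; here is a quick Python check, assuming the standard 20 × log10 voltage-loss formula:

import math

def wire_loss_db(z_load, ohms_per_ft, feet):
    loop = 2.0 * ohms_per_ft * feet  # out and back
    return -20.0 * math.log10(z_load / (z_load + loop))

print(wire_loss_db(4.0, 0.00159, 100))   # 12 ga. into 4 ohms: ~0.66 dB
print(wire_loss_db(4.0, 0.000999, 100))  # 10 ga. into 4 ohms: ~0.42 dB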

What will it do for DF?

DF = 4 / (0.005 + (2 × 0.0999)) = 4 / 0.205 ≈ 19.5

OK, now we’re pretty close to the 20 we were looking for. Notice that the loudspeaker impedance can also give us a big change.

The higher the circuit impedance, the less loss we have due to wire resistance.

What if we change our wiring so we have one 8 ohm loudspeaker connected instead of two? Going back to our 12 ga. wire, we calculate:

DF = 8 / (0.005 + (2 × 0.159)) = 8 / 0.323 ≈ 24.8

Even better! In fact, if you run the numbers a few times, you will see that in a system with some significant length of wire, the damping factor will generally be 20 or higher as long as the total wire loss is 0.4 dB or less.

What if we have a self-powered subwoofer? In this case, our loudspeaker wire is probably around 14 ga. and since the amplifier is in the loudspeaker enclosure, it is probably less than a couple feet long.

Assuming the manufacturer is connecting two 8-ohm loudspeakers to the amplifier with 2 feet of 14 ga. wire (0.00253 ohms/foot times 2 feet equals 0.00506 ohms each way), and our amplifier has a damping factor spec of 400, what do we get?

DF = 4 / (0.005 + (2 × 0.00506)) = 4 / 0.01512 ≈ 264.55

Wow! Now that’s a significant difference! Kind of supports the idea of using self-powered subwoofers, or at least putting the subwoofer amps as close as possible to the subs.

So we’ve looked at the differences in the size and length of our wire and the differences in hanging one loudspeaker on the line vs. two to change the impedance of the line.

What if we choose an amplifier with a higher damping factor spec, say 3,000? That’s a big difference, so we should see a much higher damping factor in our circuit, right?

Assuming this amplifier can also drive a minimum 2-ohm load, its output impedance works out to 2 ÷ 3,000, or roughly 0.0007 ohms; let’s round up to 0.001 ohms to keep the math simple. Plugging the numbers into our single-loudspeaker system with 12 ga. wire, we get:

DF = 8 / (0.001 + (2 × 0.159)) = 8 / 0.319 ≈ 25.1

Hmm, not such a big deal.

That higher amplifier damping factor only improved our system damping factor by 0.31 over the amplifier with a DF spec of only 400.

What if we use the amplifier with the 3,000 DF spec in our self-powered sub with 2 feet of 14 ga. wire?

DF = 4 / (0.001 + (2 × 0.00506)) = 4 / 0.01112 ≈ 360

Remember our calculation using the 400 DF amplifier was 264.55, so now we start to see when the amplifier spec becomes significant.

Essentially, in sound reinforcement systems where we have some significant length of wire between the amplifier and the loudspeaker, the amplifier DF spec has little effect on the performance of the system.

So what have we learned? In live sound reinforcement systems, damping factor is really driven by the length and size of our wire and the impedance of the loudspeakers we connect at the other end.

Since damping factor mostly affects the low frequencies, we should endeavor to keep our subwoofer loudspeaker lines as short as possible and/or use larger gauge wire. We should also keep the impedance of the connected load as high as possible by connecting only one transducer per wire run instead of two.

So is more amplifier damping factor better? As one of my colleagues recently said, “Sure! If the loudspeaker terminals are welded to the amplifier output terminals!” Well, maybe he overstated it a little bit, but yes, as long as the loudspeaker wire is really short, then by all means!

Jerrold Stevens has more than 25 years of experience in the audio industry, including contracting, independent sales representative, live sound and studio engineering, and audio system consulting and design. He now works with Eastern Acoustic Works (EAW).


Church Sound: Keeping The Workspace Organized Is A Big Key To Success

In life, and in mixing, a truth that holds is that little things make a big difference

Having the luxury of visiting and mixing in many different church sound booths, I’ve seen almost everything, from an 8-channel powered mixer sitting (precariously, I might add) on a chair to a fully loaded DiGiCo mixing board/workstation in a 20- by 40-foot decked-out production booth.

However, the most indelible impression that sticks with me about each and every booth that I have been in is the organization—or the lack thereof.

Mama always said, “Cleanliness is next to Godliness.” I don’t want to get into a theological discussion on the issue, but I am going to preach about the importance of keeping your sound booth clean and organized.

Feedback, Buzz & Chaos
I saw it myself: a stressed-out operator behind the board, a sound check that was supposed to be done 15 minutes ago, and a service start time just 10 minutes away. The problem: feedback, buzz, and chaos!

The booth was a whir of activity. The stressed-out operator was yelling things at the tech on the platform such as, “Try input 17 on the stage box!” The tech dutifully stuck his hand in the swirl of mic cables that could best be described as the proverbial rat’s nest. BANG! (Yep, the tech disconnected a live channel.)

Every musician and early arriving parishioner jumped about two feet in the air, and one even let out an involuntary “Hallelujah!” The operator—now embarrassed—barked back at the platform, “Not that channel! The other one!”

Needless to say, worship wasn’t necessarily wonderful that day. There are some obvious things that could have prevented this:

1) A pre-planned, written input list
2) Organized stage layout
3) Color-coded cables
4) Labeling the mixing board

In life, and in mixing, a truth that holds is that little things make a big difference. Applying the “little things” listed above could have helped set the stage for worship, rather than level it.

Organizational Tools
There are some little things that can be a big help in the sound booth:

— If you have multiple wireless mics, simply put a different colored piece of electrical tape on each system. Using the same color on the transmitter and receiver will allow for a quick visual check of which transmitter goes with which receiver.

— If you have a patch bay, use different color patch cables for input and output connections. If you only have one color patch cable, use colored tape on the connectors. Purchase a patch cable hanger to store the extra cables. You can even use a kitchen towel hanger.

— To store extra connectors, use one of the many plastic storage containers that have multiple compartments.

— For the various loose cables that inevitably end up in a booth, use a simple plastic storage unit/file cabinet. This same type of storage works well to hold wireless transmitters and assorted microphones.

Being organized can make a big difference in the peace you experience during sound check, and always results in a better service—at least from a sonic perspective…

Gary Zandstra is a professional AV systems integrator with Parkway Electric and has been involved with sound at his church for more than 30 years.


In The Studio: Seven Mixing Techniques That Can Really Pay Off… Or Get You In Trouble

It's so crazy it might just work!
This article is provided by the Pro Audio Files.

 

This article is about some pretty crazy techniques that can really take your sound up a notch or totally screw up your mix! These aren’t techniques I use all the time, but I reach for them often enough to warrant a mention.

If you’ve got some experience under your belt, here are a few things you can do when the situation warrants it.

1. Multiple Outputs For Group Sends
This is useful if you know you’re going to be doing some parallel processing. Most commonly I’ll use this approach on the drum bus.

Why? Well, if you use a single output into two send channels, you get the same levels going to both. With independent outputs, you can set them evenly to begin with, then apply your compression to your parallel track. From there you have finer control of how much of each element is going into the compressor.

Once you blend your parallel compression in, you might find that while your cymbals may start to fluff up nicely, your snare and kick are starting to “pancake” out. By individually controlling the output levels you can fine tune the compressor’s reaction.

Why not do this with auxiliary sends? Because DAW aux sends can be glitchy enough that there’s sometimes a delay on the return, and that doesn’t help you out if you’re comb filtering your return when the goal is to get a fuller sound.
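If you want to see that problem for yourself, here’s a quick numpy sketch (my own illustration, not tied to any particular DAW) of what even 1 ms of unexpected return latency does when summed with the dry signal:

import numpy as np

sr = 48000
delay = int(0.001 * sr)            # 1 ms of unwanted return latency
x = np.random.randn(sr)            # broadband test signal
late = np.zeros_like(x)
late[delay:] = x[:-delay]
summed = x + late                  # dry + late parallel return

spec = np.abs(np.fft.rfft(summed))
freqs = np.fft.rfftfreq(len(summed), 1.0 / sr)
mask = (freqs > 100) & (freqs < 1000)
# First comb notch lands at 1/(2 x delay) = 500 Hz, then every 1 kHz after.
print(freqs[mask][np.argmin(spec[mask])])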

2. Leaving The Midrange Bump In Vocals
Four out of five times I find myself cutting some kind of bulky tone between 350-600 Hz in vocal tracks. This usually leads to a cleaner sound.

However, in the mix you may find the solid midrange to be very helpful, especially once you’ve added some treble. One way to control this range but not remove it is to use a compressor with a customizable side-chain circuit.

Triggering the compression from a side-chain with the lows, upper mids and treble of the vocal filtered out, as opposed to regular compression, will open up the presence of the vocals like an EQ would, but without losing that dead-center midrange. If you have a knee control, you can make the compression very transparent even without the upper and lower ranges in the detector circuit. Conveniently, the stock digi compressor in Pro Tools will let you do all of this.
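Here’s a rough Python/scipy sketch of the idea (not the digi compressor itself, just a generic compressor whose detector only hears the 350-600 Hz band; all names and settings are illustrative):

import numpy as np
from scipy.signal import butter, sosfilt

def sidechain_compress(x, sr=48000, band=(350.0, 600.0), threshold_db=-24.0,
                       ratio=3.0, attack_ms=10.0, release_ms=120.0):
    # Band-pass the detector path only; the audio path stays full-range.
    sos = butter(2, band, btype="bandpass", fs=sr, output="sos")
    detector = sosfilt(sos, x)

    # One-pole envelope follower with separate attack and release times.
    atk = np.exp(-1.0 / (sr * attack_ms / 1000.0))
    rel = np.exp(-1.0 / (sr * release_ms / 1000.0))
    env = np.zeros_like(x)
    level = 0.0
    for i, s in enumerate(np.abs(detector)):
        coeff = atk if s > level else rel
        level = coeff * level + (1.0 - coeff) * s
        env[i] = level

    # Downward compression above threshold, computed in dB,
    # applied to the full-band vocal.
    env_db = 20.0 * np.log10(np.maximum(env, 1e-9))
    over_db = np.maximum(env_db - threshold_db, 0.0)
    gain_db = -over_db * (1.0 - 1.0 / ratio)
    return x * 10.0 ** (gain_db / 20.0)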

3. Overdrive For Compression
I discovered this while tracking vocals through my 1176. The transformer will actually apply its own compression when driven into saturation.

If I set the attack knob to its bypass position and really fine-tune the input, I can get a very subtle compression that brings up sustain without compromising any attack. The “drawback” here is that this process generates distortion. But if I can find that sweet spot, the distortion will excite the sound coming in. I’ve done this on vocals, guitars and bass guitar very successfully – compressing without compression engaged!

The way this works will vary from each piece of gear but you might just find something new about one of your favorite (or least favorite) items.
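To see why saturation behaves like gentle compression, here’s a generic soft-clipping sketch in Python, a stand-in for the transformer rather than a model of the 1176 or any other unit:

import numpy as np

sr = 48000
t = np.arange(sr) / sr
x = np.exp(-4 * t) * np.sin(2 * np.pi * 110 * t)  # decaying "pluck"

drive = 3.0
y = np.tanh(drive * x) / np.tanh(drive)  # normalized soft clipper

# Peaks are squeezed more than the quiet tail, so sustain comes up
# relative to the attack: the crest factor drops.
print("crest factor in: ", np.max(np.abs(x)) / np.sqrt(np.mean(x ** 2)))
print("crest factor out:", np.max(np.abs(y)) / np.sqrt(np.mean(y ** 2)))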

4. Clipping Instead Of Limiting
Taking that idea a step further: one of the riskiest but potentially coolest approaches is to square off a signal instead of limiting.

Clipping is essentially limiting with a zero attack and release time. Limiting is far more transparent in terms of frequency control, but often comes at the price of punch. Conversely, clipping is not transparent in terms of tone, but leaves the dynamics outside of the peak signal completely intact.

Now, this usually sounds terrible. But over very short spans of time, particularly on sources that have broad frequency content, it can actually sound fairly transparent or even good (snare drums anyone?).

Now a lot of audio guys will jump down my throat for this one, but remember, we soft-clip things all the time. Hard clipping your converters is really not so different than overdriving the outputs of an MPC, which hip-hop producers have been doing for years.
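For the curious, here’s a side-by-side Python sketch (my own simplified versions of each process) that makes the trade-off concrete: the clipper touches only the samples over the ceiling, while the limiter’s gain envelope alters the dynamics around the peak as well.

import numpy as np

def hard_clip(x, ceiling=0.9):
    # Zero attack/release: anything over the ceiling is squared off,
    # everything below it passes untouched.
    return np.clip(x, -ceiling, ceiling)

def simple_limiter(x, ceiling=0.9, release_ms=50.0, sr=48000):
    # Instant attack, exponential release: the gain hangs on after
    # each peak, so nearby dynamics are altered too.
    rel = np.exp(-1.0 / (sr * release_ms / 1000.0))
    gain = np.ones_like(x)
    g = 1.0
    for i, s in enumerate(np.abs(x)):
        target = min(1.0, ceiling / s) if s > 0 else 1.0
        g = target if target < g else rel * g + (1.0 - rel) * target
        gain[i] = g
    return x * gain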

5. Adding A “Note” To Kicks
You’ve probably triggered a sine wave from a kick drum to add weight to it. It’s one of many helpful ways of getting a beefier kick sound. This one will actually help glue the kick into the track.

Use a square wave and a low pass filter with the corner frequency set to the fundamental tone. Because slight overtones will still remain, you will get a “note” rather than just weight. This can really help the kick “belong” in the record. The pitfall is that you have to be careful what note you are choosing.

There’s only so much range in that sub area, so you have to be pretty careful about your tuning. Stay with the bass and/or the chord, whichever modulates the least, or stay a fifth down and away from those notes. Once your square gets up to about 65 Hz, you’re probably going to get too much tone.
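Here’s a short Python/scipy sketch of the generator; the 49 Hz (G1) fundamental and the simple decay envelope are just illustrative choices, not a rule:

import numpy as np
from scipy.signal import butter, sosfilt, square

sr = 48000
f0 = 49.0                        # G1; pick a chord tone, or a fifth below
t = np.arange(int(0.25 * sr)) / sr

tone = square(2 * np.pi * f0 * t)
# Corner at the fundamental: harmonics survive only as slight overtones,
# which is what turns plain "weight" into a "note."
sos = butter(4, f0, btype="lowpass", fs=sr, output="sos")
note = sosfilt(sos, tone) * np.exp(-12 * t)  # decay stands in for a trigger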

6. Parallel Distortion
A cool way to excite something is to create a mult, filter it, and add a touch of distortion. Then blend that mult under the original signal. Hi-passing everything under 1 kHz can be a great exciter for vocals. Hi-passing 100 Hz and low passing 1 kHz can be cool on rhythm guitars or bass to add body. It’s like EQ, but creates a harmonic signature that helps give things a lot of depth.

Now, normally I prefer to do this with linear phase EQs because it gives me exactly what I’m going for without any kind of phase cancellation between the dry and parallel signal.

However, sometimes that cancellation can be useful. If you set your corner frequency and slope just right you can gently ease off any frequency ranges you might not want too much of — so with a discerning ear, minimum phase can be the way to go.
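As a concrete example, here’s a minimal Python sketch of the vocal version (high-pass at 1 kHz, a touch of tanh drive, blended well under the dry track). Note that it uses a minimum-phase Butterworth filter, so you’ll get some of the phase interaction described above; a linear-phase EQ avoids it:

import numpy as np
from scipy.signal import butter, sosfilt

def parallel_excite(x, sr=48000, hp_hz=1000.0, drive=4.0, blend_db=-15.0):
    sos = butter(2, hp_hz, btype="highpass", fs=sr, output="sos")
    mult = sosfilt(sos, x)                       # the filtered copy
    dirt = np.tanh(drive * mult)                 # a touch of distortion
    return x + dirt * 10.0 ** (blend_db / 20.0)  # blend under the original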

7. Delay The Room

Oftentimes I get tracks to mix that were recorded with nice equipment but not in the most brilliant space. The space is either too tight or too loose. Loose can be OK; a little dynamics processing can help it out. Too tight, and it almost doesn’t sound like a room capture — and what do you do with it then? Turn it into the coolest delay ever!

If you have a tight room sound, instead of simply blending it in around 20 dB below the close sound, strap a delay on there with some feedback and turn it to a 100 percent wet return. Now you’ve got an echo that sounds exactly like the room it was tracked in — perfectly realistic, and it immediately blends with everything else.

If the room is loose you might be able to sneak a little more room in there by adding a slight delay (20-50 ms) with no feedback — essentially acting as a pre-delay. This way you can get a little more of it in there without losing the forwardness of your close capture.
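A bare-bones Python sketch of both moves (the delay time, feedback amount and repeat count are all to taste; these numbers are just placeholders):

import numpy as np

def room_delay(room, sr=48000, delay_ms=120.0, feedback=0.35, repeats=4):
    # 100 percent wet: only the delayed copies are returned.
    d = int(sr * delay_ms / 1000.0)
    wet = np.zeros(len(room) + repeats * d)
    for n in range(1, repeats + 1):
        wet[n * d : n * d + len(room)] += (feedback ** (n - 1)) * room
    return wet

# Feedback echo for a too-tight room:
#   wet = room_delay(room, delay_ms=120.0, feedback=0.35)
# Pre-delay for a too-loose room (20-50 ms, no repeats):
#   wet = room_delay(room, delay_ms=35.0, feedback=0.0, repeats=1)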

The last thing you can try is this: if you want something to feel far away, you can nudge the room capture forward in time. Instant “back wall” if you need it!

Remember the key here is discretion! These techniques can be great, but they can also destroy what’s there.

If 66 percent of the song is the performance, 20 percent is the tracking, and 10 percent is the basic mixing, then the last 4 percent is this kind of stuff. Level, pan, and basic signal processing (in that order) are far more important to get right. But every now and then you need a little extra “somethin’ somethin’.”

I hope I’ve given you a new technique to try out. I’d love for you to pay it forward by leaving a comment (here) with a cool technique you’ve used and what kind of results it has achieved!


Matthew Weiss engineers from his private facility in Philadelphia, PA. A list of clients and credits is available at Weiss-Sound.com.

Be sure to visit the Pro Audio Files for more great recording content. To comment or ask questions about this article go here.
