Feature

Wednesday, June 05, 2013

In The Studio: Compressing Drums—Setting Attack & Release

A wonderful tool, but if not used well, it can really “jack things up”
These videos are provided by Home Studio Corner.

In this two-part presentation, Joe focuses on compression, using a relevant example of applying compression on a drum bus.

As he notes, compression can be a wonderful tool, but if not used well, it can really “jack things up.”

He starts with a focus on what he calls the “secret ingredient” of compression—the attack knob. It can enhance “punch and depth,” but it can also make a mushy mess of things when not applied correctly. The goal is usually to keep sound open and present, while fitting appropriately within the mix.

Next, Joe moves on to the release portion of compression, which he describes as a kind of “tone knob” or control. Here, transients can be fine-tuned and sustained in order to attain the desired tone.
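The two knobs Joe describes map directly onto the envelope detector inside a digital compressor. As a rough illustration (not taken from the video; the sample rate and time values below are arbitrary), here is a minimal one-pole envelope follower in Python showing how attack and release times become smoothing coefficients:

```python
import math

def envelope_follower(samples, sample_rate, attack_ms, release_ms):
    """Track signal level with separate attack and release smoothing.

    A short attack lets the detector react quickly to transients
    (more control over "punch"); a long release lets the gain recover
    slowly, shaping the sustain: the "tone knob" effect.
    """
    # One-pole smoothing coefficients derived from the time constants.
    a_att = math.exp(-1.0 / (sample_rate * attack_ms / 1000.0))
    a_rel = math.exp(-1.0 / (sample_rate * release_ms / 1000.0))
    env = 0.0
    out = []
    for s in samples:
        level = abs(s)
        # Attack coefficient while the level is rising,
        # release coefficient while it is falling.
        coeff = a_att if level > env else a_rel
        env = coeff * env + (1.0 - coeff) * level
        out.append(env)
    return out
```

With a 1 msec attack the envelope snaps up almost immediately on a step input; with a 100 msec attack the same transient barely registers, which is exactly the punchy-versus-mushy trade-off described above.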


Joe Gilder is a Nashville-based engineer, musician, and producer who also provides training and advice at the Home Studio Corner. Note that Joe also offers highly effective training courses, including Understanding Compression and Understanding EQ.

Posted by Keith Clark on 06/05 at 11:39 AM

Tuesday, June 04, 2013

Not So Obvious: It’s About More Than Sound

There are plenty of other things to be aware of...

With festival season upon us, I got to thinking that there are plenty of articles that tell us which knob to turn, which button to push, and other mechanical methods of our trade. So I’ve set out here to provide information on some of the other “not so obvious” aspects to keep in mind when working gigs this summer and fall. 

These are things that I’ve encountered in touring as well as here in “home territory” at local festival settings. Some of this may not apply to your methods of working just yet, but may prove to be useful one day in your career. And, not all of them are directly show related, but have proven to serve me well nonetheless.

Fly in the day before the gig. Airplane cabins are pressurized at an altitude of 8,000 feet when at cruise altitudes, meaning your ears are subjected to several changes in “atmospheric pressure” every time you fly. We’ve all been there, even when driving—just a few hundred feet worth of elevation can result in a notable change in the pressure in our eustachian tubes. It can take a good while for the pressure to dissipate so that our hearing is stable enough to trust in a mixing environment. Plus the extra day may be the only chance you get to explore a new city…

Get plenty of sleep. Long gone are the days of “rock ‘n’ roll all night.” Remember that you have a job to do. Just as a guitar player has to take good care of his instrument, we have to take care of ours. In this case, our instruments are our awareness and our ears. We need to be sharp and hear well. Fortunately, we carry these instruments everywhere we go, so it’s convenient to use them as an excuse to stay away from some of the places we’re often asked to go after gigs. 

Don’t be “that guy.”
Avoid being the person complaining about every little detail of every little thing at the gig. Unless it’s something truly detrimental to the show, shut up and deal. And realize that some things may have been done on purpose, with your input neither wanted nor needed. No one likes a prima donna, and besides, that’s the band’s job. On the other hand, do be the person with the extra Sharpie or gaff/board tape or multitool at the ready. Event organizers are always watching, and your preparedness (or lack thereof) will be noticed. 

No one likes a perfectionist. Don’t spend an hour hollering into the talkback mic about “dialing in” the perfect verb. Presumably you brought your “cans” to the gig, so use them.

Once the show starts, you’re going to change it anyway, and there’s probably someone close to the PA trying to hang a banner or make some other contribution to the event—and you’re ticking them off. So just stop it.

Besides, you can have more fun tomorrow morning by “tuning the system” with some Whitesnake or Ozzy, waking the hippies out of their drug of choice-induced comas.

Every show is your dream gig. It doesn’t matter that maybe this particular band is lousy, or the humidity is stifling, or…whatever. Every show is the most important thing in your world every day.

And what if the management of your actual “dream gig” is standing right behind you, and you’re not doing your best? What are the odds of you actually landing that gig? 

Mix some bluegrass. Inside. In a gym. Doing monitors from FOH. Real bluegrass, not the “sellouts” that went all electrified and stuff. You know, big tube condenser for the band to crowd around, and a half-dozen more mics for instruments, one of which is on a fiddle pointed directly at a wedge due to artist taste. Go ahead. Try it. Rock and country mixers have nothing on the folks that mix this every day. Hats off to them.

Keep the horses in the barn. Just because a system has serious horsepower doesn’t mean you have to use it all…all the time. Be appropriate to the genre.  I recently finished a tour with a 60s band that was doing stuff that ran all the way from “The Sound of Silence” to “Good Times, Bad Times.” SPL ran from 68 dB to 104 dB. Dynamics anyone? 

If you’re mixing bluegrass and you’re good, you can get as loud as a rock band. Don’t. Other engineers know you’re just showing off, and we’re talking smack about you behind your back, and you deserve it. 

There are plenty of other things to be aware of, but I hope to have at least helped spark an internal dialogue on making your shows better for everyone, you included.

Here’s hoping that we all have a great season, and maybe we can catch each other at an event…the day before.

Todd Lewis is production manager at Stewart Sound in Leicester, NC.

Posted by Keith Clark on 06/04 at 05:31 PM

Why Power Matters: Beyond Amplifiers To The Big Picture

Audio people need not be electricians, but we must know how power distribution works

For professional audio people, the word “power” usually conjures up visions of racks of amplifiers driving the loudspeakers in a sound system. But the amplifier and other system components must have a stable power source from which to operate. 

Thus the issue of power distribution, all the way from Hoover Dam to your sound system, is vital. Some of the principles of audio signal distribution in sound systems are borrowed directly from utility companies, and so much can be gained by taking a look at how they do it.

Most of the useful stuff in the modern world requires an energy source to operate. Several have been widely exploited, including petroleum and its derivatives, natural gas, and even atomic energy. All of these energy sources can be used to generate another form of energy that exists naturally in the environment—electricity.

Our world is teeming with energy just waiting to be harnessed. The atomic age began when scientists discovered that matter is a vast energy source. Its mass is the “m” in E = mc². Even a small amount of matter multiplied by the speed of light squared equals a very large “E”—which stands for energy. We don’t create energy; we transform and modify it for our own use.

An energy source has the potential for doing something that scientists call work. From our friends at dictionary.com, work is defined as, “The transfer of energy from one physical system to another, especially the transfer of energy to a body by the application of a force that moves the body in the direction of the force. It is calculated as the product of the force and the distance through which the body moves and is expressed in joules, ergs, and foot-pounds.”

Once we understand the nature of work, power is easy. Power is the rate of doing work. When electricity was first being harnessed, a common power source was the horse. A typical horse can do a certain amount of work over a certain span of time.

James Watt determined one “horsepower” to be 33,000 foot-pounds per minute. Horses can be ganged together to multiply their power, so a team of horses can out-pull a single animal.

While horses are no longer a common power source in developed countries, horsepower lives on as a way of rating other sources. Any electrical power rating, such as watts, can also be stated in horsepower.
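Watt’s figure converts directly: 33,000 foot-pounds per minute works out to about 745.7 watts, so any wattage rating can be restated in horsepower. A quick sketch in Python (the 1,000-watt amplifier channel is my own example, purely for illustration):

```python
# James Watt's mechanical horsepower: 33,000 foot-pounds per minute,
# which is approximately 745.7 watts.
WATTS_PER_HP = 745.7

def hp_to_watts(hp):
    return hp * WATTS_PER_HP

def watts_to_hp(watts):
    return watts / WATTS_PER_HP

# A 1,000-watt amplifier channel is roughly 1.34 horsepower.
print(round(watts_to_hp(1000), 2))  # -> 1.34
```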

An important concept to grasp: power is a rate, and when properly specified it must be accompanied by words like “average” and “continuous” to be meaningful.

Most modern power sources can be combined in multiples to create bigger ones, like the engines on an airplane or the amplifiers in an equipment rack. This concept appears nearly everywhere that power is generated or consumed.

Not An Invention
Electricity is the power source of interest for producing and maintaining a modern lifestyle. It’s what makes life as we know it today possible.

While people existed long before electricity was harnessed, life became a lot easier once humans had a readily available electrical power source at their disposal. Its use is so widespread that we take it for granted.

Few would question the integrity of an electrical outlet found anywhere in a modern building. We just “plug in” without thought and expect it to work—and it usually does.

It’s a great thing that this source can be so reliable, but the bad part is that high reliability causes us to take it for granted, and discourages us from seeking an understanding of how it works.

Contrary to popular belief, electricity is not an invention. It has existed for as long as there has been matter in the universe. We know that all things are composed of atoms, and that electricity is the flow of atomic particles (electrons) from one place to another.

The rate of electron flow is called a current. The pressure under which it flows is called a voltage. Both of these quantities (and most units in general) were named to honor electrical pioneers. All electrical power sources can be characterized by their available voltage and current.

In fact, the simplest formula for electrical power is:

W = IE
where W is power in watts,
I is current in amperes,
and E is electromotive force in volts.
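As a sanity check, the formula takes only a few lines of Python (the 15-amp branch circuit is a hypothetical example, not from the article):

```python
def power_watts(current_amps, volts):
    """W = I * E: power is current times electromotive force."""
    return current_amps * volts

# A hypothetical 15 A branch circuit at 110 V can deliver at most:
print(power_watts(15, 110))  # -> 1650
```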

Alternating And Direct
If current flows in one direction only along a wire, it is called direct current or DC. If the current flows in two directions, such as back and forth along a wire, it is called alternating current or AC.

It’s possible to convert AC to DC, and DC to AC, and in fact, this is necessary to get a sound system component to work.

The utility company generates AC by using some other form of energy in the environment, such as flowing water turning a turbine (hydroelectric power), wind turning a turbine (wind power), the burning of coal, or even nuclear reactions. The discovery and utilization of new power sources is of prime importance to modern humans. Our continued existence depends on it.

The electrical power generation process was invented and refined with a great number of influences, including the physical limitations of electrical components and wire, and even the political and economic forces that existed at the time.

Because it’s highly impractical to change a system once it’s in place, many possible approaches were considered. I doubt if Edison or Westinghouse (or perhaps even Nikola Tesla, the genius inventor of AC and numerous other milestone technologies) could envision the far-reaching implications of their choices regarding electrical power distribution.

Thomas Edison and Nikola Tesla, pioneers of electrical distribution.

After much experimentation, debate, and lobbying, it was decided that the method of electrical power distribution in the U.S. would be AC, primarily due to its inherent advantages with regard to generation and transportation over long distances. (The true turning point of the debate proved to be the first successful use of widespread AC distribution at the 1893 World’s Fair in Chicago.)

AC has at least three advantages over DC in a power distribution grid:

1) Large electrical generators generate AC naturally. Conversion to DC involves an extra step.

2) Transformers must have alternating current to operate, and we will see that the power distribution grid depends on transformers.

3) It’s easy to convert AC to DC, but expensive to convert DC to AC.

The sine wave is the natural waveform produced by things that rotate, so it is the logical waveform for alternating current. The frequency of the waveform describes how often it changes direction. An optimum frequency for power distribution in North America was determined to be 60 Hz.

Any devices that required DC could simply convert the 60 Hz AC provided by the utility company, and this is exactly what sound system components do, even today.

Source Be With You
The core electrical power sources that are created to supply electricity to consumers are truly massive.

For example, Hoover Dam has a power generation rating of about 3 million horsepower (imagine feeding them!), and an electrical generation capacity of about 2000 megawatts.

This AC electrical power must be transferred from the dam to the consumer. The problem is that most people live at remote distances from the really big power sources. A power transmission system must be used to get electricity from point “A” to point “B” with a tolerable amount of loss.

There are electrical advantages to doing this at a high voltage, since it minimizes the losses in the electrical conductor used. Voltage and current can be “traded off” in a power distribution system. This is the job of the transformer.

Remember that W = IE, so we can make an equivalent power source with a lot of E and a little I, or a lot of I and a little E. Transformers are the devices used to make the trade-off.

With the power remaining the same (at least in theory) a step-up transformer produces a larger voltage at a smaller current, and a step-down transformer does the opposite. Both types are found in the power distribution grid (and also in the signal chain of a sound system!).
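An ideal transformer keeps the product of voltage and current constant while the turns ratio trades one for the other. A minimal sketch (the 240-volt, 100-amp source and 1000:1 ratio are arbitrary illustration values):

```python
def transformer_output(v_in, i_in, turns_ratio):
    """Ideal transformer: voltage scales by the turns ratio,
    current scales by its inverse, so power (W = IE) is unchanged."""
    return v_in * turns_ratio, i_in / turns_ratio

# Step up an arbitrary 240 V / 100 A source for transmission:
v, i = transformer_output(240, 100, 1000)
print(v, i)   # prints 240000 0.1
print(v * i)  # prints 24000.0, the same power as 240 * 100
```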

Typical voltages for long-distance transmission are in the range of 155,000 volts to 765,000 volts in order to reduce line losses. At a local substation, this is transformed down to 7,000 to 14,000 volts for distribution around a city.

There are normally three legs, or “phases.” Three-phase power is the bedrock of the power grid. A large building might be served by all three phases, while a house would normally be served by only a single phase. Before entering a building, this distribution voltage is converted to 220 to 240 volts using a step-down transformer (there is a tolerance, so your mileage may vary).

For our discussion here, we will use 220-volt/110-volt nominal values. The place where the power comes into the building is called the service entrance. This is of prime importance, because, in effect, it’s the “power source” of interest with regard to electrical components in the building. Any discussion of power distribution within a building is centered upon the service entrance and how its electricity is made available throughout the structure.

A food chain is now apparent. Electricity is harnessed from the environment, converted into a standardized form, distributed to various locations, and converted again into the form expected by electrical devices and products.

Most of the local wiring from the power substation is run above ground on utility poles, making it a likely target for lightning strikes, falling limbs, high winds and ice. Many locales choose to bury all or part of the wiring to reduce the risk of interruption and to remove the eyesore of wires strung from poles.

On To The Outlet
In most parts of the world, 220 volts (or close to it) is delivered to the electrical outlets for use by appliances.

In the U.S., the step-down transformer on the utility pole is center-tapped to provide two 110-volt legs. This is called “split phase,” and it requires a third wire (the neutral) from the center tap into the dwelling, with a neutral wire from each outlet running back to the service entrance of the building.

On the other hand, 220-volt circuits and appliances do not require a neutral, although one is often included if a sub-component of the appliance uses 110 volts.

In the U.S., most household appliances are designed for 110-volt power sources. Using a lower distribution voltage has pros and cons. The advantage is that a lower voltage poses a lower risk of electrocution while still providing sufficient power at an outlet for most appliances. The downside is that the lower the distribution voltage, the larger the wire diameter that must be used to minimize wire losses.

So a 110-volt circuit requires more copper (less resistance) to serve its outlets to maintain the same line loss as a 220-volt system. Most households have several appliances that are designed to take advantage of the full 220 volts delivered by the utility company. These appliances have high current demands and the higher distribution voltage allows them to be served with a smaller wire gauge.
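The copper penalty follows from I²R loss in the conductors. A small sketch (the 1,100-watt load and 0.5-ohm wire resistance are made-up round numbers chosen to keep the arithmetic clean):

```python
def line_loss_watts(load_watts, volts, wire_resistance_ohms):
    """Power lost in the wiring: I squared times R, with I = W / E."""
    current = load_watts / volts
    return current ** 2 * wire_resistance_ohms

# Same load, same wire: half the current at 220 V means 1/4 the loss.
print(line_loss_watts(1100, 110, 0.5))  # -> 50.0 (10 A through 0.5 ohm)
print(line_loss_watts(1100, 220, 0.5))  # -> 12.5 (5 A through 0.5 ohm)
```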

It’s unusual (but not unheard of) for sound reinforcement products to require a 220-volt outlet, at least in the U.S.

Many motorized appliances utilize AC in the form it’s delivered by the utility company. But other products (including sound reinforcement components) internally convert the delivered 110 volts AC to DC.

This is the very first step taken inside a product when power is delivered to it from an electrical outlet. The power supply provides the DC “rails” necessary to power the internal circuitry. These rail voltages can range from a few volts to hundreds of volts, depending upon the purpose of the product. Some power supplies are external to the product.

“Line lumps” and “wall warts” are commonplace in sound systems. These offer the advantages of mass production and more efficient certification, as well as keeping AC (and its potential audible side effects) out of a product. But external supplies can be inconvenient and proprietary, so consumers are split in their acceptance.

Battery-powered devices bypass the whole system and place a DC power source created by chemical reactions right where it’s needed: inside the product. The disadvantage is that there is no way to replenish this source other than replacing it or running a wire to an external power source.

Why This Matters
Audio people need not be electricians, but we must know how power distribution works. The AC power distribution scheme that is thoroughly entrenched in the U.S. infrastructure can be intermittent, noisy and even lethal.

The common practice of plugging different parts of a sound system into different electrical outlets can have very negative audible side effects, such as “hum” and “buzz” over a loudspeaker.

Far more serious, an improperly grounded system can prove deadly. Perhaps you’ve heard tales of unsuspecting musicians who lay their lips on microphones while touching their guitar strings. This is no urban myth - it can happen without proper power practice.

Pat Brown teaches SynAudCon seminars and workshops worldwide, and also online. Synergetic Audio Concepts has been the leading independent provider of audio education since 1973, with more than 15,000 “graduates” worldwide. For more information go to www.synaudcon.com.

Posted by Keith Clark on 06/04 at 04:37 PM

Church Sound: The Recording And Mixdown Of Energetic Gospel Music

Breaking it down to a logical process

One of the most memorable events in The Blues Brothers is the scene where a church congregation dances to the gospel band. The dancers do insanely high flips and cartwheels to this exuberant, joyful music.

I was honored to make a studio recording of similar music played by a top local gospel band, the Mighty Messengers. Here’s a look at my recording process, along with several audio examples in mp3 files.

The Recording
On the day of the session, we set up a drum kit in the middle of the studio. Surrounding the drummer were two electric guitarists, a bass player, a keyboardist and a singer who sang scratch vocals.

Because the bass, guitars and keys were recorded direct, there was no leakage from the drum kit, so we got a nice, tight drum sound. I mic’d each part of the kit. Kick was damped with a blanket and mic’d inside near the hard beater.

We recorded the guitars off their effects boxes, so we captured the effects that the musicians were playing through.

I set up a cue mix so the band members could hear each other over headphones. We recorded on an Alesis HD24XL 24-track hard-drive recorder, which is very reliable and sounds great.

Most of the songs required only one or two takes—a testament to the professionalism of this well-rehearsed band.

A few days later, after mixing the instrument tracks, we overdubbed three background harmony singers (each on a separate mic). Finally we added the lead vocalist.

The Mixdown
I copied all the tracks to my computer for mixing with Cakewalk SONAR Producer, a DAW which I like for its smooth workflow, top-quality plugins and 64-bit processing.

Figure 1: Project Tracks

Figure 1 shows a typical multitrack screen of the project.

Here’s a short sample (Listen) of the mix of “My Heavenly Father” (written and copyrighted 2010 by Dr. William Jones).

Let’s break it down. Click on each linked file throughout to hear the soloed tracks without any effects, then with effects.

First, the bass track: Listen

Figure 2: Bass EQ.

To reduce muddiness and enhance definition in the bass track, I cut 5 dB at 250 Hz and boosted 6 dB at 800 Hz. Then the bass track sounded like this: Listen

Note that in Figure 2 this EQ was done with all the instruments playing in the mix. EQ can sound extreme when you hear a track soloed, but just right when you hear everything at once.

Next, the kick drum track: Listen

To get a sharp kick attack that punched through the mix, I applied -3 dB at 60 Hz, -6 dB at 400 Hz, and +4 dB at 3 kHz. Here’s the result: Listen

Thinning out the lows in the kick ensured that the kick did not compete with the bass guitar for sonic space.

Now the snare track: Listen

I boosted 10 dB at 10 kHz to enhance the hi-hat leakage into the snare mic. This is extreme, but it was necessary in this case. I also added reverb with a 27 msec predelay and 1.18 sec reverb time. Finally, I compressed the snare with a 7:1 ratio and 1 msec attack to keep the loudest hits under control. Here’s the result: Listen
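For reference, a 7:1 ratio means that above the threshold, every 7 dB of extra input yields only 1 dB of extra output. A static gain-curve sketch in Python (the -10 dB threshold is an assumed value for illustration, not the author’s actual setting):

```python
def compressed_level_db(in_db, threshold_db=-10.0, ratio=7.0):
    """Static compressor curve: below threshold the signal passes
    unchanged; above it, each dB in yields only 1/ratio dB out."""
    if in_db <= threshold_db:
        return in_db
    return threshold_db + (in_db - threshold_db) / ratio

# A snare hit 7 dB over the threshold comes out just 1 dB over it:
print(compressed_level_db(-3.0))   # -> -9.0
print(compressed_level_db(-20.0))  # -> -20.0 (below threshold: untouched)
```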

The drummer did not play his toms on this song. But on other songs, to reduce leakage, I deleted everything in the tom tracks except the tom hits (see Figure 1). Cutting a few dB at 600 Hz helped to clarify the toms.

Next, here’s the cymbal track, unprocessed: Listen. Notice how loud the snare leakage is relative to the cymbal crash.

To reduce the snare leakage into the cymbal mics, I limited the cymbal track severely. The limiting reduced the snare level without much affecting the cymbal hits. This is very unusual processing but it worked. Note the cymbal-to-snare ratio with limiting applied: Listen

The guitars sounded great as they were… a little reverb was all that was needed. These two players should do a solo album! Check it out: Listen

The keyboards also needed only slight reverb: Listen

Moving on to the background vocals, we recorded three singers with three large-diaphragm condenser mics. Listen to the raw tracks with the vocals panned half-left, half-right and center.

We applied 3:1 compression and added a low-frequency rolloff to compensate for the mics’ proximity effect. I “stacked” the vocal tracks, not by overdubbing more vocals, but by running the vocal tracks through a chorus plug-in.

This effect doubled the vocals, making them sound like a choir—especially with a little reverb added. Here are the background vocals, with compression, EQ, chorus and reverb: Listen

Finally, here’s the unprocessed lead-vocal track. Listen to how the word “never” is very loud because it is not compressed.

To tame the loud notes I added 4:1 compression with a 40 msec attack. Figure 3 shows the settings.

Figure 3: Vocal Compressor

The track also needed a de-esser to reduce excessive “s” and “sh” sounds.

To create a de-esser, I used a multi-band compressor plug-in, which was set to limit the 4 kHz-to-20 kHz band with a 2 msec attack time. This knocks down the sibilants only when they occur.

De-essing does not dull the sound as a high-frequency rolloff would do. Listen to hear the lead vocal with compression, de-essing and reverb. The word “never” is not too loud now, thanks to the compressor.

The Completed Mix
We’re done. Listen to the entire mix without any processing.

And listen to the same mix again with all of the processing as described.

As we said earlier, we recorded the electric guitars playing through their effects stomp boxes, so I didn’t need to add any effects to them. Those players knew exactly what was needed.

For example, here’s another song mix that showcases the slow flanging on the right-panned guitar: Listen. By the way, this song is in 5/4 and 7/4 time.

Of course, every recording requires its own special mix, so the mix settings given here will not necessarily apply to your recordings. But I hope you enjoyed hearing how a recording of this genre might be recorded and mixed.

Bruce Bartlett is a microphone engineer (http://www.bartlettmics.com), recording engineer, live sound engineer, and audio journalist. His latest books are Practical Recording Techniques 6th Edition and Recording Music On Location.

Posted by Keith Clark on 06/04 at 12:43 PM

In The Studio: A Trip Through Bruce Swedien’s Mic Closet

Some of the mics deployed by a recording legend on all-time classic tracks
This article is provided by Bobby Owsinski.


Bruce Swedien is truly the godfather of recording engineers, having recorded and mixed hits for everyone from Count Basie, Duke Ellington and Dizzy Gillespie to Barbra Streisand, Donna Summer and Michael Jackson.

He’s a mentor of mentors, as so many of his teachings are handed down to a generation just now learning (his interview in The Mixing Engineer’s Handbook is a standout).

Bruce is also a collector of microphones and will not use one he doesn’t personally own, so he knows the exact condition of each.

Recently he posted a bit about the mics he uses on his Facebook page, and I thought it worth a reprint.

A few things stick out to me:

1. His use of a Sennheiser 421 on kick. I know it’s a studio standard for some reason (especially on toms), but I never could get it to work on anything in a way that I liked.

2. His use of the relatively new Neumann M149, because it’s, eh… new.

3. His synthesizer advice (under the M49) is a real gem.

Here’s Bruce.
——————————————————————

“Constantly being asked about my mics, so here goes:

“My microphone collection, spanning many of the best-known models in studio history, is my pride and joy. My microphones are prized possessions. To me, they are irreplaceable. Having my own mics that no-one else handles or uses assures a consistency in the sonics of my work that would otherwise be impossible.”

AKG C414 EB
“My first application would be for first and second violins. It’s a really great mic for the classical approach for a string section.”

Hear it on… the first and second violins in Michael Jackson’s ‘Billie Jean.’

Altec 21B
“This is a fantastic mic, and I have four of them. It’s an omni condenser, and [for jazz recording] what you do is wrap the base of the mic connector in foam and put it in the bridge of the bass so that it sticks up and sits right under the fingerboard. It wouldn’t be my choice for orchestral sessions, though.”

Hear it on… Numerous recordings for Oscar Peterson between 1959 and 1965.

RCA 44BX & 77DX; AEA R44C
“[The 44BX] is a large, heavy, mellow-sounding old mic with a great deal of proximity effect. This is very useful in reinforcing the low register of a vocalist’s range if that is desired. If I am asked to do a big band recording of mainly soft, lush songs, I almost always opt for ribbon mics for the brass. I suggest the AEA R44C or RCA 44BX on trumpets, and the RCA 77DX on trombones. Ribbon mics are great for percussion too.”

Hear it on… trumpets and flugelhorns in Michael Jackson’s ‘Rock With You’ (at 0:54); percussion in Michael Jackson’s ‘Don’t Stop Till You Get Enough.’

Sennheiser MD421
“The kick is about the only place I use that mic, and I mike very closely. I frequently remove the bass drum’s front head, and the microphone is placed inside along with some padding to minimise resonances, vibrations, and rattles.”

Hear it on… the kick drum in Michael Jackson’s ‘Billie Jean.’

Shure SM57
“For the snare I love the Shure SM57. In the old days it wasn’t as consistent in manufacture as it is now. I must have eight of those mics, and they’re all just a teeny bit different, so I have one marked ‘snare drum’.

“But the ones I’ve bought recently are all almost identical. On the snare drum, I usually go for a single microphone. I’ve tried miking both top and bottom of the snare, but this can cause phasing problems.”

Hear it on… snare drum in Michael Jackson’s ‘Billie Jean.’

Telefunken 251
“These mics have a beautiful mellow quality, but possess an amazing degree of clarity in vocal recording. The 251 is not overly sibilant and is often my number one choice for solo vocals.”

Hear it on… Patti Austin in ‘Baby, Come To Me,’ her duet with James Ingram.

Telefunken U47
“I still have one of the two U47s that I bought new in 1953, and it will still frequently be my first choice on lead vocal. This is a mic that can be used on a ballad or on a very aggressive rock track. It has a slight peak in its frequency response at around 7 kHz, which gives it a feeling of natural presence. It also has a slight peak in the low end around 100 Hz. This gives it a warm, rich sound.

“For Joe Williams, another mic would never have worked as well. I figured out that it was the mic for him when I heard him speak. After you’ve been doing this for as long as I have, you begin to have instinctive sonic reactions, and it saves a lot of time!”

Hear it on… Joe Williams in the Count Basie Band’s ‘Just The Blues.’

Neumann M149
“I have a pair of these that Neumann made just for me, with consecutive serial numbers, and they sound so great. That’s what I use now in XY stereo on piano.”

Neumann M49
“This is very close sonically to the M149, but not quite the same. It’s a three-pattern mic and the first that Neumann came up with which had the pattern control on the power supply… you could have the mic in the air and still adjust the pattern. I use these for choir recording in a Blumlein pair, which is one of my favourite techniques because it’s very natural in a good room.

“When I was recording with Michael and Quincy I was given carte blanche to make the greatest soundfields I could, so what I also did was pick a really good room and record the synths through amps and speakers with a Blumlein pair to get the early reflections as part of the sonic field. The direct sound output of a synthesizer is very uninteresting, but this can make the sonic image fascinating.

“You have to be really careful, though, to open up the pre-delay of any reverb wide enough to not cover those early reflections. They mostly occur below 120ms, so with 120ms pre-delay those sounds remain intact and very lovely.”

Hear it on… Andre Crouch choir in Michael Jackson’s ‘Man In The Mirror,’ ‘Keep The Faith.’

Neumann U67
“The predecessor to the U87, and an excellent microphone, but it’s not one of my real favourites, as a purely instinctive reaction. It’s just a little bit too technical perhaps, and it doesn’t have sufficient sonic character for me to use it on a lead vocal, for instance. It’s a good choice of microphone for violas and cellos, however, and the U87 can also work well in this application.”

Hear it on… violas and cellos in Michael Jackson’s ‘Billie Jean.’

I’ll post a bit of Bruce’s interview from The Mixing Engineer’s Handbook in a future post.

Bobby Owsinski is an author, producer, music industry veteran and technical consultant who has written numerous books covering all aspects of audio recording. For more information be sure to check out his website and blog.

{extended}
Posted by Keith Clark on 06/04 at 09:54 AM
RecordingFeatureBlogStudy HallEngineerMicrophoneStudioPermalink

Monday, June 03, 2013

Church Sound: Maximize Your Mix With A Step-By-Step Guide Through A Console

An operator must have a firm understanding of the concepts behind the buttons.

Good church sound often crescendos or crashes at the mixing board. A new whiz-bang mixing console will not improve the quality of sound one bit if your sound system is flawed in design, doesn’t have enough amplification, delivers uneven coverage, or has poor system processing.

But even if all of that is in sync, the board can still fail to orchestrate good sound if a sound engineer isn’t operating it properly.

A Complex Board
I’ve trained sound technicians in church ministries on systems ranging from a 12-channel mixer to a 56-channel mixer, and on systems that have a single loudspeaker to systems with multiple clusters cross-matrixed to deliver left-center-right information into a room.

All of this taught me that the mixing board is one of the most complex components of a sound system. To obtain good sound, an engineer must have a good understanding of not only what all the buttons do on a soundboard, but also the concepts behind pushing those buttons.

Employing some simple principles can go a long way to helping raise the performance level of your church console.

Step 1: Go For The Gain
One of the most important components of a mixing board is gain structure. If the gain structure is off, you will have distortion or noise (hiss) in your system.

A graphic showing various functions discussed in this article.

If gain structure isn’t right, the board also will be awkward and unpredictable in its response. For example, sliding the fader from the bottom to just part of the way up could result in parts of your loudspeaker system flying by your head. Well, it might not be quite that bad, but the effect could clean out your ears and the ears of everyone around you.

You may also find yourself riding the fader on the board all the way to the top without getting the right level of sound. In the process, you’ll generate all kinds of hiss.

On most mixing boards, the first knob on an input channel is the gain (or trim) knob. In layperson’s terms, this is the master volume-control button. If we liken the mixing board to a plumbing system, this knob is the master valve. If the valve is barely open, the water pressure (or sound) is low; if open too far, the pressure will be so high that it will produce incredible distortion.

Mackie has a very good paper outlining the way to set up channel gain. I’ll borrow the salient points here:

—Turn the input trim control of the desired channel all the way down.

—Connect the mic or line input to that channel (turn down or mute all other channels). Press the channel’s solo button, making sure it’s set to PFL mode if you have a PFL/AFL option. As a musician begins to sing or play, turn up the channel’s input trim. You should see the input level on the mixer’s meters. Adjust the input trim until the meter level is around zero (0) dB.

—Adjust the channel’s volume the way you want it with either fader or gain control. Don’t forget to turn up the master volume or you may not be able to hear the sermon.
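The arithmetic behind those steps is simple: trim adds gain ahead of the meter, and the goal is a meter reading near 0 dB with headroom above. Here is a minimal Python sketch of that idea — the dBu figures and function names are illustrative assumptions, not taken from the Mackie paper:

```python
def db_to_linear(db):
    """Convert a decibel value to a linear voltage ratio."""
    return 10 ** (db / 20)

def channel_meter_level(source_dbu, trim_db):
    """Meter reading (dBu) after the input trim stage: trim simply adds gain."""
    return source_dbu + trim_db

# Hypothetical mic signal arriving at -50 dBu: bring the trim up
# until the meter sits near 0 dB, leaving headroom above.
source = -50.0
trim = 50.0
print(channel_meter_level(source, trim))  # -> 0.0
```

Too little trim and you ride the fader to the top fighting hiss; too much and the stage clips before the fader ever gets a say.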

Step 2: “Aux” The Signal
Once you’ve set up the channel gain on your mixer, you can proceed to the auxiliary sends, sometimes referred to as “mon sends,” or the monitors. Each of these knobs (two to eight) operates as a kind of valve that allows you to send sound to another output.

One aux send could flow sound to monitors or loudspeakers on stage so that musicians and other people on the platform can hear each other. Another could flow sound to a digital or analog recorder for making recordings, another to the nursery loudspeakers, and still another to an overflow seating area.

The bottom line is that aux sends offer the sound engineer a way to send sound to various places without affecting the main speaker system.

What’s important to understand about an aux send is whether it is pre-fade or post-fade. If it is pre-fade, or pre-fader, then sound will be sent at a certain level regardless of the position of the fader at the bottom of the channel.

The gain (or trim) is the only valve that will affect the amount of sound that goes through the aux send. If the aux send is post-fade, or post-fader, sound will be sent in proportion to the fader at the bottom of the channel and in proportion to the gain. So if the gain is set properly but the channel fader is down, no sound will be sent through the aux send.

I like to run my stage monitors pre-fade. This allows me to make changes to the house sound mix without affecting the monitors.

Conversely, I like to run “recording send,” “effects send,” and “sends” post-fade to other sound systems. This allows those levels to follow what I am doing on the channel faders.
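The pre-fade/post-fade distinction can be expressed as a couple of lines of signal math. This Python sketch is illustrative (the function name and linear-gain convention are my own assumptions, not any console's internals):

```python
def send_level(gain, fader, aux_send, pre_fade=True):
    """Linear level arriving at an aux output.

    Pre-fade sends ignore the channel fader; post-fade sends
    scale with it. All values are linear gains (1.0 = unity).
    """
    level = gain * aux_send
    if not pre_fade:
        level *= fader
    return level

# Channel fader pulled all the way down:
print(send_level(gain=1.0, fader=0.0, aux_send=0.8, pre_fade=True))   # -> 0.8 (monitors still hear it)
print(send_level(gain=1.0, fader=0.0, aux_send=0.8, pre_fade=False))  # -> 0.0 (recording send follows the fader)
```

This is exactly why a house-mix move never disturbs a pre-fade monitor send, while a post-fade recording send tracks it.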

Step 3: Equalize The Mix
We could have many good discussions (OK, disagreements) about this point. But equalization offers sound mixers the opportunity to be creative, smart, and innovative (or on the other hand, inept). To begin, I recommend that sound engineers start with the equalizer section set flat. That means all level controls should be set at zero, or straight up.

Next, we need to understand how we hear sound. We hear sound from a low of about 20 Hz (Hz = hertz or cycles per second), which is a very low frequency. A kick drum is usually tuned between 80 Hz and 100 Hz.

At the other end, we hear sound up to about 20,000 Hz or 20 kHz (k = 1,000), which is a very high frequency. A dog whistle at around 22 kHz is out of the range of the human ear.

The equalizer on a mixing console allows you to select a frequency or frequency range and to increase or decrease the level of a specified range. For example, if I am hearing ringing or feedback, I try to equate it to a number. If the ringing I hear is about the level of an “A” on the music scale, it equates to about 440 Hz (a piano is tuned to A440).

I would then either turn the midrange section down on the mixing board, or I would select 440 Hz on the frequency selection knob, then turn down the level control for that frequency. The key to successfully using the equalization section is learning to translate what you hear into numbers representing Hz (cycles per second).
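Translating pitches to frequencies follows the equal-temperament formula, doubling every octave from A440. A small Python sketch (using MIDI note numbering, where A above middle C is note 69, purely as an illustrative convention):

```python
def note_frequency(midi_note, a4_hz=440.0):
    """Equal-temperament frequency in Hz for a MIDI note number (A4 = 69).

    Each semitone multiplies the frequency by 2**(1/12);
    twelve semitones double it (one octave).
    """
    return a4_hz * 2 ** ((midi_note - 69) / 12)

print(round(note_frequency(69)))  # A above middle C -> 440
print(round(note_frequency(81)))  # one octave up    -> 880
```

With a table like this in your head, "that ring sounds like an A" becomes "reach for the 440 Hz neighborhood on the EQ."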

Step 4: Route The Signal
The bottom of the channel strip offers another option for routing signals. On most mixing boards, you can choose left or right signals as well as subgroups. By selecting the right buttons, you can assign sound to travel directly to the main output of the mixer or through a subgroup.

Subgroups are good for controlling the volume of multiple inputs. For example, you could assign the worship leader’s microphone to the main mix and the background vocalists’ mics to a subgroup. This will allow you to bring the total level of all the background mics up or down with one fader.
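Conceptually, a subgroup just sums its assigned channels and scales the total by one group fader. A toy Python model (names and linear-gain values are illustrative assumptions):

```python
def subgroup_mix(channel_levels, channel_faders, group_fader):
    """Sum of the channels routed to a subgroup, scaled by one group fader.

    All values are linear gains (1.0 = unity); each channel's
    contribution is its level times its own fader.
    """
    return sum(lvl * fad for lvl, fad in zip(channel_levels, channel_faders)) * group_fader

# Three backing-vocal mics pulled down together with one move:
print(subgroup_mix([1.0, 0.9, 0.8], [1.0, 1.0, 1.0], 0.5))  # -> 1.35
```

The relative blend of the three vocal mics is untouched; only their combined level moves.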

The mute button is the channel’s on/off button. Caution: if you turn it off, it might also turn off the auxiliary sends. Check your console’s operating manual to see whether this is how your mute button works.

Regardless, the mute button will affect a channel in the subgroup as well as the main mix. When the pre-fade listen (PFL) button is pressed down on most soundboards, a channel can be assigned to headphones regardless of the channel fader position.

This is very handy for cueing up recordings (from hard disc, CD, tape or whatever) to be played through the system. You can listen via headphones to a recording without letting the signal go to the main mix.

The master section of a Mackie analog console.

However, proceed with caution. If you have any prefade aux sends turned on, the sound on the recording will be sent there. There’s nothing worse than checking to see if a recording is cued properly during prayer in a worship service and forgetting to turn off the prefade aux send monitors. Been there, done that, will never do that again.

The channel fader is the master volume control for that input. Most mixing takes place in adjusting the volume of the signal that goes through the mix board.

Step 5: Master The Mix
The master section of the mixing board consists of the subgroup control, mains, aux masters, and headphone level. This section is where everything comes together before it is sent out of the mixing board.

If proper sound checks have been done and the board has been set up correctly, you can spend most of your time mixing by adjusting levels, and can adjust subgroups to bring the mix together. You can also make minor individual channel-level changes, minor equalizer changes on individual channels, and small adjustments on the aux sends.

Once the mix is set up, you’re free to camp out at the master section to manage the mix.

Practice to Perform
To obtain good sound, a sound operator/engineer must have a good understanding of not only what all the buttons do on a console, but also the concepts behind pushing those buttons.

The most important thing that an operator can do to learn how to mix sound is to spend time experimenting with the various buttons on the board. The time to do this is not during a service, however, nor when rehearsing for a worship service or performance. Rehearsal time should be spent with the musicians, adjusting sound levels as they rehearse and setting the mix for when they perform.

In addition, anyone mixing should read through the console manual to become familiar with its features and how to use them. Also, read articles on mixing sound and try to attend training sessions or workshops on sound.

The better we understand every part of a mixing console, the better our mixes sound.

Gary Zandstra is a professional AV systems integrator with Parkway Electric and has been involved with sound at his church for more than 25 years.

{extended}
Posted by Keith Clark on 06/03 at 05:50 PM
Church SoundFeatureBlogStudy HallConsolesEngineerMixerMonitoringSound ReinforcementTechnicianPermalink

Making All The Right Noises: Graham Burton On The Evolving Role Of System Tech

“I realized that it was fast becoming the most important job in live sound.”

 
Not many 16-year-olds are lucky enough to land their first job as a system tech on a Bon Jovi tour; however, Graham Burton was no ordinary teenager.

Technically, in fact, his pro audio career began some four years earlier, when he landed a work experience role with local UK firm Richard Barker PA Services.

That was 1989, and by the turn of the century, the British-born Burton had toured internationally as a monitor engineer, front of house engineer and tour manager with the likes of Eric Clapton, The Stylistics, and Billy Ocean.

But by 2005, he found himself drawn back to his system tech roots, a role he currently holds with South England-based rental house BCS Audio. Why?

“Simple,” he notes. “I realized that it was fast becoming the most important job in live sound.”

I recently caught up with him to find out more.

Paul Watson: So, system teching. Haven’t you ‘been there, done that’?

Graham Burton: [Laughs] First, it’s a very different beast now compared to back then. When I was out with Bon Jovi, I was somewhat down the food chain – you could call me a ‘mic tech,’ I suppose, though I learned a lot about the role of a main system tech too. I was putting mics on stands and cabling them up; it wasn’t until I started working for (hire companies) Soundcheck and Eurohire that I started working with speakers, really.

And that was shortly after the Bon Jovi stint, right?

Yeah, that’s right. Those two companies got me right into live PA systems as well as disco systems; it could be anything from a pair of Bose 802s on sticks to a truck full of Renkus-Heinz kit that I’d be dealing with.

What you have to remember is that we didn’t have line arrays back then, so system teching was often a case of deploying lots of big boxes, trying to point them in the right direction, and hoping for the best, really.

As simple as that, then. Or not, so to speak…

Indeed! You’d frequently end up putting a lot more PA into a venue than was necessary, just to get coverage, because the horizontal dispersion wasn’t as wide as it is now with line arrays. Then you’d end up with cone filtering and all kinds of things happening within the actual speaker boxes, which could make things tricky.

We would lug around our analog graphics (EQs), giant racks of comps and gates, and huge 48-channel analog consoles with at least 16 aux outs on them, like a Midas (Heritage) 2000, for example, and make our tweaks. It was cumbersome, and it could get pretty stressful.

We’re talking mid-1990s here, right?

Yes, this was also the time I started doing monitors for The Stylistics. Monitors, in my view, is the best place to start, because you learn your frequencies a lot quicker and the band is telling you what they want, whereas when you’re at FOH you’re really trying to interpret what the audience wants.

We had a 13-way mix on that stage, then 20 wedges with some single mixes, some paired. It was a big band, so I had to deal with all sorts of drums, percussion, horns, and keys – there was quite a lot going on there, and a lot of musicians to try and keep your eyes on.

Over the next decade, you built a solid reputation as a FOH guy and worked on some big tours, as well as many of the major UK festivals. What made you ditch the white gloves and relative glamour, dare I ask, to come back to the tricky world of system tech?

It was interesting. I was 28, so that’s eight years ago, and I’d broken my leg while riding my motorbike, which had put me out of work. This just so happened to coincide with the time when things were really changing in PA technology, and I was taking a very keen interest in loudspeakers.

When I was in plaster, all the bands I’d been working with had to hire other people because I couldn’t do the job, and by the time I got the cast off, it was festival season and they understandably didn’t want to change their engineers.

Then as luck would have it, I got a call from BCS Audio asking whether I was free to work a show. Lo and behold, it was system teching! This is when I had my ‘lightbulb moment,’ where I thought “hang on, there’s a hell of a lot more to this role now, and I can probably make more money doing it.” And sure enough, I got the bug again for setting up systems, and found that I got so much more out of it than engineering.

Because it was becoming a true art form?

Precisely, and the role continues to evolve. The really fascinating thing for me was that instead of thinking two-dimensionally, as you would do with the old boxes, I found myself thinking horizontally and vertically as well as left and right, working out how I’d get my PA coverage onto the audience without taking their attention away from where the performance is happening.

Will you elaborate?

Well, in venues with balconies, for example, we now fly the PA lower than we would have done in the old days, but we angle it up into the balconies so it still brings peoples’ attention down to the stage. It’s a lot more to think about.

Before, you’d have a groundstacked PA and a flown PA, and the flown PA would be in the eyeline of the audience in the balcony, so their attention was being drawn 10, maybe 20 feet above where the performance was happening.

So it’s a psychological thing as well, then?

It really is. You have to try and get across to the audience that the performance is happening there, and now we can develop that with systems; you can bring the audience into the show more, basically.

What are the other key differences compared to working the role a decade ago?

The gear is all a lot lighter now. And instead of time-aligning stuff with an old digital delay unit in the rack, which wasn’t all that accurate anyway, most of the loudspeaker systems now come with their own amplifier, which has its own delay circuit inside, so you can get everything absolutely spot on.

What’s your preferred kit when it comes to figuring delay times?

I use (Rational Acoustics) Smaart, but unlike many techs, I also use a laser measure. I then figure out how close the laser measurement is to the Smaart reading, which allows me to put that little bit of human element into it. I don’t like anything being absolutely perfect as it just feels clinical; my methods are more organic. Yes, it’s all going digital, but I prefer to have some ‘imperfections’ in there rather than it all being bang on the numbers all the time.
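Turning a laser distance reading into a delay time is a one-line calculation: distance divided by the speed of sound. A quick Python sketch (the 343 m/s figure assumes roughly room-temperature air; the example distance is hypothetical):

```python
SPEED_OF_SOUND_M_S = 343.0  # approximate, in air at about 20 degrees C

def delay_ms(distance_m):
    """Propagation delay in milliseconds for sound to travel distance_m."""
    return distance_m / SPEED_OF_SOUND_M_S * 1000

# A delay fill measured at 17.15 m behind the mains:
print(round(delay_ms(17.15), 2))  # -> 50.0

# Sound covers roughly a foot per millisecond:
print(round(delay_ms(0.343), 2))  # -> 1.0
```

Comparing a figure like this against a Smaart measurement is one way to sanity-check both, then nudge in the "human element" Burton describes.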

Burton making adjustments on an L-Acoustics LA8 amplified controller.

So basically, use the tools as a guide but don’t take them as gospel?

That’s the way I approach it. Besides, it’s very negotiable whether you can actually hear the differences when you’re talking about a millisecond or two. The time-alignment of systems is such a minefield these days, especially when figuring out what bits of the system you need to delay back to what point. The biggest trend we’re seeing at the moment is bands wanting to delay everything back to the kick drum.

Because it’s typically the furthest point back on stage?

Exactly, but again, this is all negotiable. It’s the very clinical side of system teching, and I tend to steer away from that. Yes, in the old days, you’d need a whole rack of digital delay units to do the job, and technology makes it far easier to accomplish now. But generally, I just don’t go down that road unless I’m asked to by the engineer – it’s their show after all, and I’m there to work with them, not against them.

What does the future hold for system techs?

Well, while many FOH engineers know how to mix, some still know little about the physics and science behind the systems, so they have to have trust in people like us. What we can do now is light years ahead of where it used to be, but I’m always learning new tricks. I find myself asking other techs “why are you doing it like that?” because I want to understand.

I reckon there are seven ways of doing everything and you’ll never find them all yourself; you have to learn from other people. Let’s all share the knowledge and try to make sure every gig is the best it can ever be. That’s what I say.

Paul Watson is the editor for Europe for Live Sound International and ProSoundWeb.

{extended}
Posted by Keith Clark on 06/03 at 04:23 PM
Live SoundFeatureBlogBusinessEngineerLoudspeakerMeasurementSound ReinforcementSystemTechnicianPermalink

80 Years On & Counting: Progress In “Getting It Right” With Speech Reinforcement

Are we now, finally, getting onto the right track?

April 27, 2013 marked the 80th anniversary of a historic milestone in the history of audio.

On that date in 1933, the Philadelphia Orchestra, under deputy conductor Alexander Smallens, was picked up by three microphones at the Academy of Music in Philadelphia – left, center, and right of the orchestra stage – and the audio transmitted over wire lines to Constitution Hall in Washington, where it was replayed over three loudspeakers placed in similar positions to an audience of invited guests. Music director Leopold Stokowski manipulated the audio controls at the receiving end in Washington.

The historic event was reported and analyzed by audio pioneers Harvey Fletcher, J.C. Steinberg and W.B. Snow, E.C. Wente and A.L. Thuras, and others, in a collection of six papers published in January 1934 as the Symposium on Auditory Perspective in Electrical Engineering, the journal of the AIEE (a forerunner of the IEEE). Paul Klipsch referred to the symposium as “one of the most important papers in the field of audio.”

Prior to 1933, Fletcher had been working on what has since been termed the “wall of sound.” “Theoretically, there should be an infinite number of such ideal sets of microphones and sound projectors [i.e., loudspeakers] and each one should be infinitesimally small,” he wrote. “Practically, however, when the audience is at a considerable distance from the orchestra, as usually is the case, only a few of these sets are needed to give good auditory perspective; that is, to give depth and a sense of extensiveness to the source of the music.”

In this regard, Floyd Toole’s conclusions – following a career spent researching loudspeakers and listening rooms – are especially noteworthy. In his 2008 magnum opus, Sound Reproduction: Loudspeakers and Rooms, Toole notes that the “feeling of space” – apparent source width plus listener envelopment – which turns up in the research as the largest single factor in listener perceptions of “naturalness” and “pleasantness,” two general measures of quality, is increased by the use of surround loudspeakers in typical listening rooms and home theaters.

Given that these smaller spaces cannot be compared in either size or purpose to concert halls where sound is originally produced, Toole states that in the 1933 experiment, “there was no need to capture ambient sounds, as the playback hall had its own reverberation.”

Fletcher’s dual curtains of microphones and loudspeakers.

Localization Errors
Recognizing that systems of as few as two and three channels were “far less ideal arrangements,” Steinberg and Snow observed that, nevertheless, “the 3-channel system was found to have an important advantage over the 2-channel system in that the shift of the virtual position for side observing positions was smaller.”

In other words, for listeners away from the sweet spot along the hall’s center axis, localization errors due to shifts in the phantom images between loudspeakers were smaller in the case of a left-center-right (LCR) system compared with a left-right system. Significantly, Fletcher did not include localization along with “depth and a sense of extensiveness” among the characteristics of “good auditory perspective.”

Regarding localization, Steinberg and Snow realized that “point-for-point correlation between pick-up stage and virtual stage positions is not obtained for 2- and 3-channel systems.”

Further, they concluded that the listener “is not particularly critical of the exact apparent positions of the sounds so long as he receives a spatial impression. Consequently 2-channel reproduction of orchestral music gives good satisfaction, and the difference between it and 3-channel reproduction for music probably is less than for speech reproduction or the reproduction of sounds from moving sources.”

The 1933 experiment was intended to investigate “new possibilities for the reproduction and transmission of music,” in Fletcher’s words.

Many, if not most, of the developments in multichannel sound have been motivated and financed by the film industry in the wake of Hollywood’s massive financial investment in the “talkies” that single-handedly sounded the death knell of Vaudeville and led to the conversion of a great many live performance theatres into cinemas.

Given that the growth of the audio industry stemmed from research and development into the reproduction and transmission of sound for the burgeoning telephone, film, radio, television, and recorded music industries, it is curious that the term “theatre” continued (and still continues to this day) to be applied to the buildings and facilities of both cinemas and theatres.

This reflects the confusion not only in their architecture, on which the noted theatre consultant Richard Pilbrow commented in his 2011 memoir A Theatre Project, but also in the development of their respective audio systems.

Theatre, Not Cinema
Sound reinforcement was an early offshoot, eagerly adopted by demagogues and traveling salesmen alike to bend crowds to their way of thinking; as Don Davis notes in 2013 in Sound System Engineering, “Even today, the most difficult systems to design, build, and operate are those used in the reinforcement of live speech. Systems that are notoriously poor at speech reinforcement often pass reinforcing music with flying colors. Mega churches find that the music reproduction and reinforcement systems are often best separated into two systems.”

The difference lies partly in the relatively low channel count of audio reproduction systems that makes localization of talkers next to impossible.

Since delayed loudspeakers were widely introduced into the live sound industry in the 1970s, they have been used almost exclusively to reinforce the main house sound system, not the performers themselves. This undoubtedly arose from the sheer magnitude of the sound pressure levels involved in the stadium rock concerts and outdoor festivals of the era.

However, in the case of, say, an opera singer, the depth, sense of extensiveness, and spatial impression that lent appeal to the reproduced sound of the symphony orchestra back in 1933, likely won’t prove satisfying in the absence of the ability to localize the sound image of the singer’s voice accurately. Perhaps this is one reason why “amplification” has become such a dirty word among opera aficionados.

In the 1980s, however, the English theatre sound designer Rick Clarke and others began to explore techniques of making sound appear to emanate from the lips of performers rather than from loudspeaker boxes. They were among a handful of pioneers who used the psychoacoustics of delay and the Haas effect “to pull the sound image into the heart of the action,” as sound designer David Collison recounted in his 2008 volume, The Sound Of Theatre.

Out Board Electronics, based in the UK, has since taken up the cause of speech sound reinforcement, with a unique delay-based input-output matrix in its TiMax2 Soundhub that enables each performer’s wireless microphone to be fed to dozens of loudspeakers – if necessary – arrayed throughout the house, with unique levels and delays to each loudspeaker such that more than 90 percent of the audience is able to localize the voice back to the performer via Haas effect-based perceptual precedence, no matter where they are seated. Out Board refers to this approach as source-oriented reinforcement (SOR).

The delay matrix approach to SOR originated in the former DDR (East Germany), where in the 1970s, Gerhard Steinke, Peter Fels and Wolfgang Ahnert introduced the concept of Delta-Stereophony in an attempt to increase loudness in large auditoriums without compromising directional cues emanating from the stage.

In the 1980s, Delta-Stereophony was licensed to AKG and embodied in the DSP 610 processor. While it offered only 6 inputs and 10 outputs, it came at the price of a small house.

Out Board started working on the concept in the early 1990s and released TiMax (now known as TiMax Classic) around the middle of the decade, progressively developing and enlarging the system up to the 64 x 64 input-output matrix, with 4,096 cross points, that characterizes the current generation TiMax2.

The TiMax Tracker, a radar-based location system, locates performers to within 6 inches in any direction, so that the system can interpolate softly between pre-established location image definitions in the Soundhub for up to 24 performers simultaneously.

The audience is thereby enabled to localize performers’ voices accurately as they move around the stage, or up and down on risers, thus addressing the deficiency of conventional systems regarding the localization of both speech and moving sound sources.

Source Oriented
Out Board director Dave Haydon put it this way: “The first thing to know about source-oriented reinforcement is that it’s not panning. Audio localization created using SOR makes the amplified sound actually appear to come from where the performers are on stage. With panning, the sound usually appears to come from the speakers, but biased to relate roughly to a performer’s position on stage.

Out Board’s Dave Haydon with a TiMax2 Soundhub mix matrix and playback server.

“Most of us are also aware that level panning only really works for people sitting near the center line of the audience. In general, anybody sitting much off this center line will mostly perceive the sound to come from whichever stereo speaker channel they’re nearest to.

“This happens because our ear-brain combo localizes to the sound we hear first, not necessarily the loudest. We are all programmed to do this as part of our primitive survival mechanisms, and we all do it within similar parameters. We will localize even to a 1 millisecond (ms) early arrival, all the way up to about 25 ms, then our brain stops integrating the two arrivals and separates them out into an echo. Between 1 ms and about 10 ms arrival time differences, there will be varying coloration caused by phasing artifacts.

“If we don’t control these different arrivals they will control us. All the various natural delay offsets between the loudspeakers, performers and the different seat positions cause widely different panoramic perceptions across the audience. You only have to move 13 inches to create a differential delay of 1 ms, causing significant image shift.

TiMax Tracker that locates performers to within six inches in any direction.

“Pan pots just controlling level can’t fix this for more than a few audience members near the center. You need to manage delays, and ideally control them differentially between every mic and every speaker, which requires a delay-matrix and a little cunning, coupled with a fairly simple understanding of the relevant physics and biology,” Haydon says.
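The precedence trick Haydon describes can be sketched as geometry: delay each speaker feed so the performer's own acoustic wavefront reaches every seat a few milliseconds before the amplified copy. This toy Python model uses flat 2D positions and an assumed 3 ms precedence offset; it is an illustration of the principle only, not how TiMax actually computes its matrix:

```python
import math

def dist(a, b):
    """Straight-line distance between two 2D points (meters)."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def matrix_delay_ms(performer, speaker, listener, c=343.0, offset_ms=3.0):
    """Delay (ms) to add to one speaker feed for one listener so the
    direct sound from the performer arrives offset_ms earlier than the
    amplified copy, keeping localization on the performer (Haas effect)."""
    t_direct = dist(performer, listener) / c    # acoustic path, seconds
    t_speaker = dist(speaker, listener) / c     # speaker path, seconds
    return max(0.0, (t_direct - t_speaker) * 1000 + offset_ms)

# Performer center stage, speaker hung 4 m to the side,
# listener sitting close to that speaker:
p, s, seat = (0.0, 0.0), (4.0, 0.0), (4.0, 5.0)
print(round(matrix_delay_ms(p, s, seat), 2))
```

A real delay matrix would evaluate something like this for every mic-to-speaker cross point and every audience zone, which is why the 64 x 64 matrix with thousands of cross points matters.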

More theatres are adopting this approach, including New York’s City Center and the UK’s Royal Shakespeare Company. A number of Raymond Gubbay productions of opera-in-the-round at the notoriously difficult Royal Albert Hall in London – including Aida, Tosca, The King and I, La Bohème and Madam Butterfly – as well as Carmen at the O2 Arena, have benefited from source-oriented reinforcement, as have numerous other recent productions.

Veteran West End sound designer Gareth Fry employed the technique earlier this year at the Barbican Theatre for The Master and Margarita, to make it possible for all audience members to continuously localize to the actors’ voices as they moved around the Barbican’s very wide stage. He notes that, in the 3-hour show with a number of parallel story threads, this helped greatly with intelligibility to ensure the audience’s total immersion in the show’s complex plot lines.

As we mark the 80th anniversary of that historic first live stereo transmission, it’s worth noting that, in spite of the proliferation of surround formats for sound reproduction that has to date culminated in the 64-channel Dolby Atmos, we are only now getting onto the right track with regard to speech reinforcement.

It’s about time.

Sound designer Alan Hardiman is president of Associated Buzz Creative, providing design services and technology for exhibits, events, and installations. He included the TiMax Soundhub in his design for the 4-day immersive theatre production The Wharf at York, staged at Toronto’s Harbour Square Park.

{extended}
Posted by Keith Clark on 06/03 at 01:41 PM
AVFeatureBlogStudy HallAVDigitalLine ArrayLoudspeakerProcessorSignalSound ReinforcementPermalink

In The Studio: Best Practices For Backing Up Your Data

The importance of data backup cannot be overstated
This article is provided by the Pro Audio Files.

 

What Is A Backup?
A backup is a working safety copy of your production data. The goal of a systematic approach to backups is to keep data loss from stopping or significantly delaying your work.

If properly implemented, a backup system will contain current production data for all in-progress projects as of the conclusion of the most recent session.

As many readers know very well, the importance of data backup cannot be overstated. What you may think of as “your data” are someone else’s proprietary master recordings, not to mention their art.

Preventing data loss will protect those valuable assets and preserve your professional integrity.

Best Practices, Best Practices
As the heading suggests, best practices for backups are all about redundancy. In particular, I’d like to share three types of redundancy that are critical to maintaining working safety copies.

Redundancy #1: Multiple Copies
This point may seem obvious, but a systematic approach to backups should facilitate at least two complete, secure copies of the project data. Before you laugh, note the word ‘secure’. I’d encourage anyone who is concerned about the safety and integrity of client data to think of ‘secure’ copies as ‘non-public’ copies.

For example, the practice of leaving client data on local hard drives in commercial studio facilities is not secure. Anyone who needs more drive space can delete the data. Anyone who is bored or curious enough can open it or copy it.

Additionally, shared folders and drop-boxes that aren’t password protected are essentially public places. A network is a community in a very real way.

Proprietary data should be secured behind closed doors. Online cloud backup tools like Gobbler make that easier than ever.

Redundancy #2: Multiple Locations
The two secure copies of your client data should be stored in two different locations. This practice will prevent the ugliest of the long list of data loss events from affecting both of your copies.

Fire, flood, theft, and power surge will invariably affect an entire facility. Nobody wants to consider that (myself included). Unfortunately there is a long list of stories that can be recounted in which something really bad happened to all of the “redundant” media.

The easiest way to facilitate multiple locations is to take advantage of the inherently remote ‘location’ that cloud storage provides. Just be sure to actively log in and copy your data at the end of each session. Most media applications suggest that automation tools like Time Machine be disabled to optimize system performance.
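As an illustration, the end-of-session copy step can be scripted so it’s hard to forget. This is a minimal sketch, not any particular backup product; the destination folders (say, a NAS mount and a cloud-synced directory) are hypothetical placeholders you’d replace with your own:

```python
import shutil
from datetime import date
from pathlib import Path

def backup_session(session_dir, destinations):
    """Copy the session folder to each destination, stamped with today's date.

    Pointing `destinations` at two different drives in two different
    locations gives the multiple-copy and multiple-location redundancy
    described above.
    """
    session = Path(session_dir)
    stamp = date.today().isoformat()
    created = []
    for dest in destinations:
        target = Path(dest) / f"{session.name}_{stamp}"
        shutil.copytree(session, target)  # raises if the target already exists
        created.append(target)
    return created

# Hypothetical example destinations -- substitute your own paths:
# backup_session("~/Sessions/MySession",
#                ["/Volumes/NAS/backups", "~/CloudSync/backups"])
```

Running this at the end of every session is the kind of five-minute habit the author describes below.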

Redundancy #3: Multiple Technologies
There are two ways that you can provide technological redundancy within your backup regimen.

The first is by choosing different storage media for your two different copies.

This redundancy can eliminate the risks introduced by unforeseen factors like defective optical media and infant mortality in hard drives.

Some simple examples of technological redundancy in the primary backup technology could include:

—Copy #1 copied to network attached storage; copy #2 burned to DVD-ROM
—Copy #1 copied by the facility’s AIT library; copy #2 on your removable Firewire drive

This difference doesn’t have to be perfect or dramatic. The key is to avoid high-risk scenarios like two identical drives from the same manufacturer, or multiple DVDs.

The other type of technological redundancy is the secondary technology. This includes any hardware or software necessary to write or read data to/from the primary storage medium.

Common examples include:

—Automated archival applications like Retrospect or PresStor
—Optical media drives
—Tape drives

As a rule of thumb, it’s a good idea to rely on as little intermediate technology as possible when backing up client data. Each additional mechanism can represent a barrier to future retrieval. When secondary technology is needed, make it unique to one copy.

Good Habits Make It Easy
The difficult thing about unexpected crises is the whole ‘unexpected’ part. There’s no way to sit at the end of a session and know whether this is going to be one of the times you’ll end up using a backup to restore client data (and avert a larger crisis). The only option is to get in the habit of a sufficiently redundant, systematic backup routine.

I’ve found that there are a number of 5- to 10-minute odd jobs that I can accomplish around the studio while my data copies to secure cloud and local storage. For that matter, having a little email time before barreling on to the next thing can be pretty luxurious as well.

The Limits Of Backups
While backups can save the day during the production cycle, they’re not particularly useful once the project is over and the masters are ready for delivery. Since backups are working safety copies of your production data, they’re still completely reliant on specific versions of production technology, like DAW and plugin applications.

There are detailed, standardized practices for archives which will be discussed in a different article. For reference, check out the Recommendation for Delivery of Recorded Music Projects (pdf) published by the Producers and Engineers Wing of The Recording Academy.

Rob Schlette is chief mastering engineer and owner of Anthem Mastering (anthemmastering.com) in St. Louis, MO, which provides trusted specialized mastering services to music clients across North America.

Be sure to visit the Pro Audio Files for more great recording content. To comment or ask questions about this article go here.

Posted by Keith Clark on 06/03 at 09:39 AM

Pro Production: The Wide World Of Road Cases

Storage, protection & getting things from point A To B

If audio gear doesn’t arrive at the venue ready to work, you’re out of luck. It’s the primary reason why racks and cases are so important (if overlooked).

And in addition to the obvious purpose of protecting equipment from damage, they also serve other important functions, including helping to organize, inventory, store, transport and set up systems.

Being able to locate things quickly on a show is a must, especially when you’re working with local stagehands who are not familiar with your gear. Having system gear well organized in labeled cases, trunks and racks makes things go smoother and faster at load in and load out. 

From a logistics standpoint, cases assist the production team by making items that are awkward to move around on their own easier to move, store and transport.

Wheeled cases can be rolled around the shop, at the gig, or easily up a truck ramp, and case handles help with moving as well as stacking cases in trucks.

Road trunks and cases may also be moved by forklift trucks, allowing larger items to be easily relocated.

Racks offer a huge advantage: equipment can be wired up and ready to go.

Three letters often mentioned with cases are ATA, which stands for the Air Transport Association of America, and more specifically, this organization’s Specification 300, which covers reusable transit and storage containers. The specification sets guidelines and testing standards that put every case into one of three category certifications: a Category I rating means a case can survive 100 airline shipments; Category II refers to 10 airline shipments; and Category III refers to a single airline shipment.

The variations of cases are endless. Here’s a recently introduced line of Grundorf cases specifically for the PreSonus StudioLive Series digital mixer line.

Many styles can meet ATA standards, and note that not all “ATA-style” cases are actually certified. It’s best to check with the manufacturer to find out what level of protection a case really offers.
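The three Spec 300 certification levels reduce to a simple lookup. This sketch is just my own summary of the categories described above (not an official ATA tool), but it shows how the ratings map to shipment counts:

```python
# ATA Spec 300 category -> rated number of airline shipments
# (summary of the three certification levels described above)
ATA_RATED_SHIPMENTS = {
    "I": 100,   # Category I: survives 100 airline shipments
    "II": 10,   # Category II: 10 airline shipments
    "III": 1,   # Category III: a single airline shipment
}

def rated_shipments(category):
    """Return the rated airline-shipment count for an ATA Spec 300 category."""
    try:
        return ATA_RATED_SHIPMENTS[category]
    except KeyError:
        raise ValueError(f"Unknown ATA category: {category!r}")
```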

Bags To Ammo

Cases can be made from a wide variety of materials. The most common in our industry are also often called “flight cases” or “ATA cases,” and they are made with laminate-covered plywood panels joined with extruded aluminum edging and heavy ball corners for added protection.

Quarter-inch laminate-covered plywood panels are used for smaller and lighter-duty cases, while three-eighth-inch and half-inch panels are used for larger heavy-duty ones. Other common case materials include injection molded plastic, rotationally (roto) molded plastic, and plywood.

While there are dozens of case manufacturers, many production folks opt to build at least some of their own. Wooden road trunks are popular among the do-it-yourself crowd, and for those wanting to build ATA style cases, companies like TCH, Penn Elcom and Reliable Hardware sell all the parts needed. 

Many manufacturers offer cases and trunks sized to fit two-, three-, or four-across in standard trucks and trailers. These “truck pack dimension” cases usually feature “stacking cups” in the lid, allowing a similar sized case to ride securely atop another, with the wheels of the uppermost case secured from movement in the recessed stacking cups of the case below.

Let’s take a look at the most common types of cases:

Bags. Soft-sided bags are popular for soft goods like stage skirting or curtains and smaller production gear like tripod loudspeaker stands. While they keep the items clean and may make transporting a pair of stands a little easier, they don’t offer much in the way of protection. 

A compact case with space for 15 mics and accessories.

Covers. While technically not cases, covers are popular for larger loudspeakers, especially subwoofers. Many are padded and help keep the units clean as well as protect against dings and scrapes when transporting. Some manufacturers offer covers for particular models of loudspeakers, and there are several aftermarket manufacturers that make covers for a variety of stock speakers, as well as custom models to fit everything else.

Hampers. Large canvas hampers (a.k.a., hamper trucks) are commonly used to store and transport curtains. Plastic models (a.k.a., tubs) are sometimes seen holding cables, and are popular with some regional companies because they can stack in the shop, saving space.

Totes. Plastic totes are often utilized for organizing small parts, and can be placed inside larger trunks to provide an organized storage solution. Totes with attached lids are also popular with some companies for carrying mic cables and gear.

Ammo Cans. Military ammo cans are the de facto storage containers for truss bolts in show business. These waterproof cans are great for organizing small parts.

Riding The Rails

Racks are yet another type of case, and they house electronic components that are bolted into the case on “rack rails.”

Standard rack rail width is 19 inches, and gear is designated by how many vertical “spaces” (or rack units) it uses in a rack: a single space (1RU) is 1.75 inches, 2RU is 3.5 inches, 3RU is 5.25 inches, and so on.
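The rack-unit math above is simple multiplication; as a quick sketch (my own illustration of the 1RU = 1.75-inch convention, nothing more):

```python
RU_HEIGHT_INCHES = 1.75  # one rack unit (1RU), per the standard dimension above

def rack_height_inches(rack_units):
    """Front-panel height in inches for gear occupying `rack_units` spaces."""
    if rack_units < 1:
        raise ValueError("rack units must be at least 1")
    return rack_units * RU_HEIGHT_INCHES

# 1RU -> 1.75 in, 2RU -> 3.5 in, 3RU -> 5.25 in, matching the figures above
```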

Rack-mount gear has “ears” that extend on either side of the front panel, and these allow the item to be bolted onto the rails.

Rails can be tapped for bolts (usually a 10-32 thread), or left open and used with bolts and captive cage nuts that slip into place on the slotted rail. Even though they’re bolts, the fasteners are normally referred to as rack screws, and have large flanged heads to help support the electronic gear.

Phillips and square drive are the most common heads on rack screws, but high security applications may use screws with heads that require a special tool. Standard road racks usually have two removable doors (called lids) that latch on securely and provide protection when the unit is transported. 

Along with holding electronic equipment, racks can be used to house drawers at various heights, as well as stationary or sliding shelves, lock-box storage units, connector panels, lighting panels, ventilation and fan panels, and blank panels to fill up unused spaces in the rack and to help manage airflow inside the rack.

A rack (or three) in full.

Some racks are outfitted with a single set of rails in the front, while others offer an additional set of rails in the rear to help secure larger/heavier equipment. While the width of the racked equipment on the rails is always 19 inches, the depth of the rack can vary with the application and requirements, with depths between 18 and 22 inches being the most common.

Standard racks have their rails bolted directly to the sides of the rack. Shock-mounted racks use an inner frame with rails that is isolated from the outer case by foam, springs, or rubber mounts. These racks offer more protection to fragile electronics.

Slip-over racks are basically a standard rack that fits into a road case with a slip-over top. These offer the extra protection of a shock rack, but with added flexibility. The rack can be used by itself out of the case, sitting in the tray portion of the case, angled into the tray portion of the case, or sitting on top of the closed exterior case. The variety of positioning options is one of the reasons why slip-over cases are very popular at front of house and for video switching.

There are a lot of variations in designs for racks. Smaller racks may be used to house individual pieces, and may feature rubber feet, while larger racks holding multiple pieces of gear usually have wheels to make them easier to move around.

More Variations

Slant-top rack mixer cases bolt a mixing console to rails in the case, oriented at a comfortable operating angle. A variation called the “pop up” case features rack rails that adjust to different angles, allowing the user to select the most comfortable operating position for the mixer.

This case offers a wide range of storage options, all with convenient access.

In addition, this style allows for a more compact case as the mixer can be stowed in a lowered position. Another variation is the slant-top with rack spaces, allowing a console to be mounted in operating position at the top of the rack, along with additional rack gear below. Some of these larger cases have lids that are used as tables to increase the work surface area.

Side-by-side racks (a.k.a., “double wides”) mount two racks next to each other in the same case. A variation is the side-by-side, with a mixing console area on top under a lid, particularly popular for installed systems where equipment needs to be rolled into different positions, such as in a multi-purpose room.

A more elegant version of this style can be found in many churches, where a roll-top design secures the console when it’s not in use, and doors secure the rack contents.

Some manufacturers incorporate the rack lids into stand-alone tables, or tables that hook on to one side of the rack. When packed for transport, the table legs fold up into the case lid, and the lid is attached to the rack as usual.

Hitting The Road

Road trunk is a term for a larger case. These can be item-specific, like a feeder cable trunk, or generic ones that get loaded differently depending on what is needed at the gig.

Some models offer removable divider panels to allow different compartmentalization options. 

“Pullover” or “slipover” cases are common for larger items like loudspeaker cabinets and backline amplifiers. The equipment can be removed from the case for use, or used while in the lower tray portion of the case keeping the gear on wheels.

Mixing console cases usually offer a compartment built into the case at the rear of the mixing console called a “doghouse,” which allows cables or a snake fan to be pre-connected to the console and stored in the case when not in use.

“Mic boxes” are cases designed to hold and transport microphones, direct boxes and accessories. They commonly include foam inserts that provide protection and organization for the mics.

Workbox cases feature drawers and storage areas to organize tools and supplies at a show. Workstations are a specialized version of a workbox designed around a particular job, such as guitar maintenance.

With covers removed, these units remain handy as loudspeaker carts.

A guitar workstation may include a padded top area to hold the guitar, drawers for items like strings and tools, and even a built-in guitar tuner. These cases are very popular with road crews who need to maintain equipment on tour.

Production office cases provide a desk area and storage; some can hold computers, printers, copiers, coffee makers, and fax machines, providing every office amenity at the venue.

Road boxes are large workbox units that feature drawers and shelves to organize all the tools, parts and supplies for a tour or production. Some feature power outlets for charging tools, coffee pots and even small refrigerators, allowing crews to bring some of the comforts of home with them on the road.

Crates & Carts

Crate is another term sometimes used for a trunk, and it usually refers to custom-built containers that hold trade show booths and equipment. These are usually made from wood and have forklift pockets at the bottom so they can easily be moved about the yards and docks at a convention center. While they’re built rugged, they’re typically not designed for daily use and usually lack features we require in our trunks, like quick-latch doors, wheels, and even handles.

Carts are another item used to store and move gear. Basically a cart is a rolling metal frame or dolly system made for specific items, or adapted to hold equipment. The most common carts in live music production are called “meat racks” – large metal frame carts used to store and transport lighting instruments.

On movie and video sets, carts are used to hold and transport the mixers, media storage devices, wireless receivers, microphones and boom poles used by the audio department. In theater, they’re used extensively to transport scenic items.

Also note that cases can be custom ordered (or built) to meet virtually any specific need. The only limitation is your imagination.

Maintenance Required

—Regularly inspect cases and racks for any damage or problems. 

—Repair or replace parts, especially important items like castors, handles, hasps and hinges.
 
—Lubricate castors, hasps and hinges with the manufacturer’s recommended product.

—Be sure that castors and rack rails are bolted firmly to the case.

—In older cases, deteriorating foam can be replaced, making the case like new again.

Craig Leerman is senior contributing editor for Live Sound International and ProSoundWeb, and is the owner of Tech Works, a production company based in Las Vegas.

Posted by Keith Clark on 06/03 at 06:53 AM

The Old Soundman: A Rare Glimpse Into The Tool Bag Of A Legend

A rather seedy individual pushes OSM's buttons, giving him the chance to rant -- his favorite thing!

An individual who corresponds to the description of a fugitive known to viewers of “America’s Most Wanted” and “Cops,” identified only as “JR,” sent in this question, which gives me a chance to pontificate, something that I very much enjoy doing.

The beautiful thing is that I can go on about stuff that my questioners are correct about, or just as well when they are off the mark. It’s all fuel for my turbine, my reactor, my photovoltaic converters.

What’s in the essential gig bag for OSM? Does he carry a duffel with anything besides a drum key, bottle of bourbon and crescent wrench?

Let’s take things in order.

I just bought two new drum keys due to guys like you making off with the last one I had. It always happens, either the drummer doesn’t have one (what’s up with that?) or his tech doesn’t.

So I lend them mine, then I go out to have a smoke, and forget to persecute the wicked to get my drum key back. I mean, they only cost a few bucks! What’s wrong with humanity?

Just a few weeks ago, I was mixing a show at someone else’s club, and the house guy told the drummer that his rack tom was resonating. But neither one of them had a frickin’ key! So I saved the day. I did it for everybody’s sake, for the sake of the show. Capisce?

Number two: It has been many years since I felt a heightened sense of excitement by hoping to be like Keith Richards clutching his stupid bottle of Jack, or Slash with the eternal Marlboro.

Sure, I smoke and drink in moderation, but it is my personal feeling that Jack Daniel’s and Jim Beam are more consistently violence-producing than heroin or crack cocaine. Oh, I’m sorry, were those companies about to buy a banner ad? Whoops…

The crescent wrench—now you’re preaching to the choir! I got my crescent, my needle nose, my clippers, my big electronics store tweezers, my soldering gun and solder, my beer bottle openers, Sharpies of different colors, fine point sharpies, my pliers, my personal nobody-else’s-loogies talkback mic.

My Sony headphones, with their ridiculous bag with the seams that split down the sides; why haven’t they rectified this awful design flaw in the last decade? Everybody I know hates these things!

I also have some generic wire to repair XLRs with. I have some black trick line, and some cable ties. I have some orange mountaineering line. I have a polarity tester and an XLR tester, a bag of XLR Y-cables, pin one lifts, and sex changers.

I have RCA to quarter adapters. One of your buddies stole my vise grips. Foam earplugs for sensitive friends and suffering strangers. Tweakers. Greenies. A reversible big-ass screwdriver. Flashlights.

Yes, it’s heavy, Soundman! You couldn’t handle this bag!

Extra batteries. Guitar tuner. Lanyards. One-inch gaff, two-inch gaff. AC ground lifts. Electrical tape. In-ear monitors. Mini stereo to quarter-inch stereo jacks. Orange three-banger AC combiners. A razor knife. Multimeter. Hex key sets, inch and metric. Ten-foot tape measure.

What’s the Ultimate Sound Man (USM) need to get the job done?

Hey, bro? One more time—I handle the jokes around here. I make funny. You laugh. That is the sequence, not the other way around.

I do not resemble an Ultimate Support keyboard stand. I would not mind being compared to the Ultimate Support percussion table, those things have many uses. I am not an Ultimate Support mic stand with the squeezy thing. I am also not a low profile collapsible Ultimate Support guitar stand, but those were a cool innovation, wouldn’t you agree?

Enough. Begone!

Luv,
The Old Soundman

There’s simply no denying the love from The Old Soundman. Check out more from OSM here.

Posted by Keith Clark on 06/03 at 06:46 AM

The Invention Of The Phonograph: From Early Recordings To Modern Time

The storied history of a device that has significantly shaped modern culture

For the purposes of introduction, the gramophone and the phonograph are treated here as a single invention, since the former evolved out of the latter to eventually replace it. Both were inscribing, groove-based systems. The gramophone/phonograph was the keystone innovation on which the record industry was built.

A good place to begin an exploration of the technological development of the record and the industry it spawned is in 1857, when a Frenchman named Léon Scott, working in Paris, came up with a device called the phonautograph for tracing vocal patterns.

A thin hog’s bristle quill was attached to the centre of a compliant diaphragm at the back of a tapered horn that concentrated the sound. The other end of the quill was a sharp fine point. A piece of smoked glass would travel past the point as a sound entered the horn causing a wavy line that corresponded to the sound vibrations to be scratched into the soot.

A later version of the device positioned the stylus against a cylinder of heavy paper coated with soot. The cylinder was rotated by hand, and if someone shouted into the horn, an image of the vibrations was captured on the paper. Scott’s device was never manufactured, as it was not a marketable product aside from its scientific applications, but it did point the way toward later inventions.

In 1877 a French inventor, Charles Cros, proposed the Phono-Graphos. He never had the money to build his idea, but it was much closer to what would become the gramophone. Cros described a method for recording on a round flat glass plate. He also suggested a means of playback. Like Scott, Cros failed to commercialize his idea because he could not see a wide market for the Phono-Graphos.

By the 1860s, the telegraph and Morse code had become widely used. Two major problems were that messages could not be stored, and they could not be transmitted very far without the weak signal being listened to, copied and repeated by an operator.

One such young telegraph operator was Thomas Edison. In the late 1860s, he figured out that a magnetically actuated stylus could be vibrated up and down to emboss a waxed paper disc.

To repeat the indented message, the disc was flipped over so that the indented dots and dashes were now seen as a series of bumps of two different lengths. A “playback” stylus rode over the bumps, and, as it moved up and down, would make contact with a switch that repeated the original message.

Dictating with a cylinder phonograph.

This was the first of a long line of commercially successful inventions by Edison, whose inventiveness was only matched by his ability to develop his ideas into commercial products.

One day he heard his telegraph repeater operating at high speed and noted it sounded somewhat like music (Edison was not a particularly musical person). He took the same horn-and-membrane type “microphone” that Scott had invented and mounted it on a screw mechanism. This was made to travel across a revolving cylinder that had been wrapped in tin foil. When he cranked the cylinder and talked loudly into the horn, a continuous groove was embossed that represented the sound.

He then took the cylinder and played it back on a similar mechanism with a lighter and more compliant stylus and diaphragm. The first working model of the phonograph used an up-and-down motion called “hill and dale” recording and was patented in December 1877. Edison’s first recorded words were “Mary had a little lamb.”

The quick success of his invention had as much to do with his marketing genius as with its ability to record and play back sound. Edison believed the phonograph would be used for archival purposes, not as a means of playing distributed duplicates. He wrote:

“We will be able to preserve and hear again, one year or one century later a memorable speech, a worthy tribute, a famous singer, etc… We could use it in a more private manner: to preserve religiously the last words of a dying man, the voice of one who has died, of a distant parent, a lover, a mistress.”

Within the scientific intellectual community, the phonograph was seen as a means of preserving truth and maintaining cultural stability. A prominent magazine of the time, The Electrical Review (1888), speculated:

“Had Beethoven possessed a phonograph the musical world would not be left to the uncertainties of metronomic indications which we may interpret wrongly, and which at best we have but feeble suggestions; while Mozart, who had not even a metronome, might have saved his admirers many a squabble by giving the exact fashion in which he wished his symphonies to be played.”

In 1887 Edison licensed Jesse Lippincott to franchise the phonograph as a dictating machine. Lippincott set up 33 franchises across the U.S. to lease and service phonographs. All of them went bankrupt pursuing this misconceived application, with the exception of the District of Columbia franchise, incorporated in 1889 by Edward Easton as the Washington D.C. arm of Lippincott’s North American Phonograph Company.

The D.C. operation quickly realized that the device was not suitable for dictation when, after renting a hundred units to Congress, it got all of them back because they were evaluated as unsuitable for Hansard purposes.

The only precursor of the record’s ultimate commercial success was the common practice of using the voice to demonstrate the quality of recordings. Naturally, a few demonstrations were sung, and of course some instrumental accompaniment was added. By and large, however, the first companies involved with phonographs saw the device as a business machine: a tool for documentation, dictation, and sound analysis for historical and scientific purposes, and for office, court, and hospital reporting.

The phonograph was soon being used by researchers for anthropological and cultural field studies to document the oral histories and ceremonies of indigenous peoples; among the earliest such recordings were those made of the Torres Strait Islanders.

Many publications speculated on how it might have been if there had been a recording available of great events of the past and how in the future there would be such recordings. There were also descriptions of famous people making mistakes and these too became part of the record. There was also speculation that in decades to come, sound production would allow the removal of anything undesirable.

Unfortunately, the anticipated market had several problems with the device’s capabilities. The amount of time that could be recorded on each cylinder was very limited.

Out of technological necessity, recording and playback evolved into separate machines to optimize performance, eliminating the notion that one machine could do both. The machines were also so bulky that portability was out of the question.

Other Edison licensees began commercializing cylinder phonographs in applications better suited to the device’s strengths, and the Washington D.C. franchise followed suit. A market grew in amusement arcades, carnivals, amusement parks, nickelodeons, and other public and semi-public establishments.

In a considerably less noble application, the phonograph would find a market in whorehouses. Recorded music became a common backdrop for amorous endeavour.

There were probably many times when the privacy of a theatre box provided a place for discreet interludes accompanied by music, and just as likely the hedonist might occasionally find a piano player in the salon of a favorite whorehouse, but a client needed to be a bit of an exhibitionist to perform while a live musician chorused him on. Records provided music without the musician, eliminating a physical presence and a source of inhibition.

Columbia Records
Then the public began to buy phonographs. These playback applications required a ready supply of pre-recorded material, and in 1890 the D.C. operation began to sell pre-recorded cylinders under the Columbia label, named after the District of Columbia (thus the oldest record label in the world came into existence). The survival of this one company was due to its astute pursuit of alternate markets.

By 1891, Columbia had 200 titles in its catalog and was the largest record company in the world. But a major obstacle in commercializing pre-recorded material was the difficulty in duplicating a title. To produce a batch of 200 recordings of a march, it had to be played 20 times in front of a battery of 10 recording horns. For the phonograph to prosper as public entertainment, the production process and duplication had to be simplified and cost effective.

By 1900, the company had opened a London office and was selling both Edison cylinders and Berliner disks. Due to financial problems during World War I, the U.S. operation was forced to sell its British subsidiary to the local manager Louis Sterling. A year later, U.S. Columbia also failed and the British operation bought it from receivers to get access to the recently developed electric cutting system that was only available to U.S. companies.

The company was reorganized in 1925 and went international, operating under different names in different countries. In the U.S. it was known as the General Phonograph Company Inc. The company invested in broadcasting by taking over United Independent Broadcasters and renamed the U.S. operation the Columbia Phonograph Broadcasting Co. During the depression in the 1930s, the company again had financial woes, and in particular, the performance of the U.S. record operation was poor (sales were 6 percent of 1927 levels).

The broadcast network had potential but was equally unprofitable, so poor sales and the likelihood that the company was in violation of antitrust laws, due to its 50 percent interest in the Victor Talking Machine Company in the U.S., caused it to divest its U.S. interest in Columbia. Meanwhile, Columbia U.K. was merged with HMV (the U.K. operation of the Victor Talking Machine Company) in 1931, and the company was renamed Electrical and Music Industries (EMI). The U.S. radio network continued as the Columbia Broadcasting System and became profitable during the next decade.


The U.S. Columbia Records was sold to Grigsby-Grunow, a manufacturer of refrigerators and radios. That company went bankrupt in 1934, and Columbia was sold to the Brunswick label. The American Record Company (ARC) had been formed in 1929 through the merger of several small labels, including Oriole, Perfect, Romeo, and Banner.

ARC acquired the Brunswick label (started in 1916) in 1931 and changed the name of the entire company to Brunswick Record Corporation, which is what it was called at the time of the Columbia acquisition. CBS bought Brunswick in 1938, deactivated the Brunswick label, reactivated the Columbia label, and later sold Brunswick to Decca in 1942.

Outside the U.S., the Columbia label would remain EMI’s flagship pop label until the early 1950s when CBS pulled out of its overseas arrangement with EMI. We’ll return to the EMI, HMV and Victor Talking Machine Company connections subsequently.

Berliner’s Gramophone
A German inventor living in the U.S. solved the duplication problems associated with the cylindrical shape of the Edison phonograph. Emil Berliner would have been aware of Scott’s invention and it is widely accepted that he knew of Cros’s patent. Berliner’s approach used a round flat plate as a recording surface.

This approach made duplication significantly easier: records could be pressed much like waffles, not very differently from the way records were made until the CD came along (in fact, CDs are also a pressed medium).

In 1895, Berliner formed the Berliner Gramophone Company and began to sell a hand driven player. An associate of Berliner’s, Eldridge Johnson, incorporated a wind-up spring motor and the modern gramophone was complete. The two of them started the Victor Talking Machine Company in 1901 for the purposes of manufacturing gramophones and producing records.

Berliner’s foreign rights agent travelled to London in May 1898 in order to raise enough funds to establish a recording and pressing facility. To pay for an expensive patent war with Columbia, Berliner sold his patent rights in Britain and Europe to a group of English investors called the Gramophone Company.

The initial catalog of records was pressed by Berliner’s brother in a factory set up in Hanover, Germany. During World War I, the Hanover operation became Deutsche Grammophon, which would evolve into Polygram. In 1902, Columbia and Victor pooled their patents and put aside a pending legal case. This freed them both to put all their efforts into promoting their products. Between 1902 and 1906, the Victor Company, in order to stimulate record sales, gave away models of the Type P Premium Player when a customer purchased several records at once.

Columbia began selling disks in the U.S. in 1901, but Victor quickly became the dominant label in America. In 1902, Gramophone manufactured the first 78RPM record and in 1907 a double-sided disk was issued. In 1912, Edison introduced the diamond tipped stylus which further improved the reproduction quality. By 1917, both Columbia and Victor were emphasizing the ability of their machines to supplant a dance orchestra at the most elegant and stylish affairs.

In 1900, the Gramophone Company bought the rights to Francis Barraud’s painting of his dog Nipper sitting in front of a phonograph. The artist was paid £50 for the painting and £50 for the copyright, provided he changed the phonograph to a gramophone. Barraud painted a gramophone right over the original. The name of the painting was “His Master’s Voice,” which became a trademark and label (HMV) for the Gramophone Company.

When Berliner visited London, he asked to use the image as a trademark in the U.S. Berliner, then Victor, then RCA all used Nipper. In Egypt, India, and Muslim countries it was not used at that time by HMV because dogs were considered unclean. In India, a listening cobra was substituted for the dog on those records with Indian artists. In Italy it was never used because there, a bad singer was said to sound like a dog. Victor also used the logo for releases on its Japanese subsidiary, which was sold to Japanese interests before World War II.

HMV and the Victor Talking Machine Company maintained ownership stakes in each other for 50 years. Johnson sold his interests to the bankers Seligman and Speyer in 1926. During the Depression, record sales dropped through the floor. The Camden, New Jersey pressing plant was converted to making radios. Victor dropped most of its artists, though many later appeared back on the label through the HMV connection.

In 1929, the Radio Corporation of America bought Victor from the bankers. RCA had no intention of continuing the record operation but wanted the company for its radio manufacturing facilities. When ASCAP began to claim that the radio industry should pay royalties for air play, which eventually led to the first arrangement with the National Association of Broadcasters (1932), RCA realized that the Victor catalog was a gold mine and decided to continue the label, using the pressing plants it had acquired along with the radio plant. Thus the RCA-Victor label was begun.

Formalizing The Blues
The exclusive patent rights held by Edison, Columbia, and Victor ended in 1917, but they continued to dominate the record industry, though others were finding opportunities at the fringes. The growth of independent production accelerated with the introduction of the vacuum tube to the process, which made recording easier and more portable.

By the beginning of the 1920s, regional, ethnic and culturally different music was beginning to be recorded by traveling field recordists who carried their equipment in the backs of their cars. They would set up in hotel rooms, bars, and music halls and record the best local performers.

The artist was paid a small fee and the recordist/producer would press records and attempt to sell the recording to the big city record companies. Initially, these recordings had little impact on the music industry but would have a significant influence on future generations of musical artists. Music that had never before been written down, documented or copyrighted was now being codified for future generations. Did the recording of ethnic and regional blues, hillbilly, or folk music change this music? Of course it did.

The recording process forced a honing of these songs into a structure that was just long enough to fill a 10-inch, 78 rpm record. This restriction became a catalyst in formalizing the structure of the ethnic song. It also immortalized forever those emotions that previously were conveyed only in the presence of the blues, hillbilly, or folk singers’ performance. These recordings captured the music of the people, in particular black music, as well as hillbilly (later called country and western) and folk.

Music that only existed as oral histories was now quantifiable and directly comparable outside of a public performance. This music now had stable roots from which others would build. Once created, the records provided a means of musical recollection of the life and times of these popular performers.

Recordings also changed the standards of ethnic composition and performance. When this music was delivered entirely orally, clichés and borrowed music weren’t a problem for performers, since those listening would seldom have heard the original source, and if they had, it was unlikely they would have been able to make comparisons from memory.

The commodification of the performance allowed aficionados of the music to quickly compare and know when they heard someone play a stolen classic.

Building On The Roots
Throughout the 20th century, the records of blues, hillbilly and folk artists of earlier times have been available for all to hear, providing a starting point for later performers to build on in their times. While this ethnic and regional music continued to evolve as an oral tradition, it was no longer necessary to follow these singers from one gin joint, music hall, or saloon to another in order to hear what they had to say.

Once the records existed, the music could be discovered by each new wave of performers. Through the lyrics and performances of these songs, the recordings captured an impression of the lives, times and places of these singer/songwriters.

Bessie Smith learned to sing the blues by going on the road with Ma Rainey, but Billie Holiday could listen to Bessie’s records, and Aretha could build her music on what she heard in the records of Billie and Bessie.

Not only have late 20th century singers been able to look through the phonographic window to the past, so too have instrumentalists. Eric Clapton (and so many others) could sit with Robert Johnson’s records and pick up classic licks, even though Clapton was born seven years after Johnson died.

Few pop music artists are devoid of influences from artists of the past. Records have connected all the eras since the turn of the 20th century into a continuum, with all the music of the past available at any moment just by playing a record.

As Simon Frith wrote some years ago, “Popular music came to describe a fixed performance, a recording with the right qualities of intimacy or personality, emotional intensity or ease. ‘Broad’ styles of singing taken from vaudeville or the music hall began to sound crude and quaint; ...

This change also coincided with a different type of music entrepreneur, the record producer, who, unlike the music hall operator, had little contact with the audience or any experience with trying to please the public on the spot. For the record industry, the audience was essentially anonymous…”

In Europe… In The Beginning… Opera
Right after the turn of the century, armed with a portable recording system, Gramophone’s talent-scouting technician Fred Gaisberg traveled all over Europe and into Asia, looking for and recording every form of folk music and pub entertainer.

In the beginning “serious” opera stars were loath to involve themselves with such gadgetry, but just when phonograph records were in peril of becoming an exponent of all things rustic, exotic and vulgar, legendary Italian tenor Enrico Caruso was signed in 1902 by the Victor Company and recorded by Gaisberg.

Actually, Caruso was signed to the Gramophone and Typewriter Company Of Italy. He would later sign directly with Victor, where he would have more than 40 top-10 hits. This marked a turning point for the fledgling HMV/Victor record label that now had the opportunity to market high culture into the home.

It made ownership of a gramophone not only acceptable but essential to the best families. The record labels promoted operatic arias that were short enough to fit on a record, and they soon became accepted as examples of popular music.

Caruso was a famous opera singer long before making a record, but the recordings he made between 1913 and his death in 1921 made him an international figure and extremely wealthy. His voice became known to millions of people who had never been to an opera, and almost everyone with a gramophone had a Caruso record. He was the first artist to have a successful recording career, and Gramophone and Victor owed their success and survival through their infancy to Caruso’s popularity.

Further development of the recording process and the medium of distribution, the record, became a matter of perfecting the original invention. The amplifier was invented and introduced to the process and new materials and techniques were introduced, but there was little change to the original concept until magnetic recording came along in the 1950s, and CD supplanted records in the 1980s (though in the 21st century some still prefer vinyl).

The Record Player
The expiration of the Edison-Columbia-Victor gramophone and phonograph patents in 1917 meant that dozens of companies entered the market. Record players were built for every taste. Some had features that today make one wonder what the designers were thinking.

The Ko-Hi-Ola, for instance, was a phonograph with a built-in grandfather clock, a storage area for records, and a “special” secret compartment. As the ad for the unit described, “The Ko-Hi-Ola is more useful than the ordinary phonograph, more ornamental than the usual grandfather’s clock and has exclusive features not found in other machines.”

There were many manufacturers of phonographs. Most machines had some unique sales feature that had little to do with the sound, and most were commercial failures, as were the companies that made them. Phonograph players remained mechanical devices for some time after electric recording began in the early 1920s, but in order to hear the extended response of the electric recordings, larger horns were needed.

The need for better reproduction equipment to hear the improved record quality pushed the development of electric record players, so that by the mid 1930s, the large playback horns were disappearing, replaced with speakers. This was made possible by the development not only of amplification but of a low-cost stylus and cartridge that generated an electrical signal that could be amplified. The stylus was now connected to a piezoelectric crystal that, when twisted back and forth along the modulating groove, would generate an easily amplified alternating current.

Beginning in the 1920s, the gramophone became a centerpiece for an evening’s entertainment with friends. It was a more social instrument compared to the radio that was also becoming a part of the modern home. No one knew for sure what would be on the radio, while on the other hand, the person running the gramophone party had control of the material played. There could be discussion about what was heard and how it differed from other recordings.

For three generations to come, such record parties would be social events, but the nature of the event would change. By the 1950s, a party-goer would be younger and come with a “thumb full” of 45s, and the player would be set up in the garage where there was enough room to do and demonstrate all the latest dances.

What made the party a good one was if the participants brought with them enough of the latest releases. The idea of sitting and listening to a recording, then discussing it, continued through the 1960s and 1970s, the hippie generation, and the concept album. (For more on that, be sure to check out this article.)

Currently residing in Australia, Tom Lubin is an internationally recognized music producer and engineer, and is a Lifetime Member of the National Academy of Recording Arts and Sciences (Grammy voting member). He also co-founded the San Francisco chapter of NARAS. An accomplished author, his latest book is “Getting Great Sounds: The Microphone Book,” available here.

{extended}
Posted by admin on 06/03 at 06:23 AM
RecordingFeatureBlogStudy HallAnalogEducationEngineerSignalStudioSystemTechnicianPermalink

Sunday, June 02, 2013

In The Studio: Observations On Improving Bass Guitar Recording

A great bass will groove tight with the drums and support the guitars...
This article is provided by Audio Geek Zine.

 
Bass doesn’t always get the attention it deserves in a recording situation. I see a lot of recordists rush through bass recording, only to later be frustrated with the bass when it comes time for mixing. It’s really too bad, because the bass is the foundation of the song.

A great bass will groove tight with the drums and support the guitars. Fitting it in the mix will take minimal effort and you will be loving life.

A great recording starts with a great source. When it comes to tracking bass guitar, the source comprises many factors.

Musician:
—Technique and playing position – Playing with a pick, fingers, or thumb; intensity; playing close to the bridge, in the middle, or close to the neck. Choose what is appropriate for the song.
—What is played – Playing bass lines that serve the song and don’t clash with the drums or guitars rhythmically or melodically.
—Tuning – Check the tuning often.

Bass:
—Strings – New strings usually sound best and give you the brightest tone to start with.
—Electronics (Pickups and EQ) – The pickup selection and tone settings.
—Wood and construction – The wood used in the neck and body really affects the sound. Maple and ash are bright and punchy; mahogany is thicker and darker.

Amplification chain:
—Cable – It’s debatable how much impact this has; just use one that doesn’t hum or crackle when you move it.
—Pedals – If a particular pedal helps get you the desired tone, go for it. I would hold off on spatial effects (delay, reverb) until mixing as they require their own attention.
—Amplifier and EQ settings – Tube or solid state. As a starting point, put all EQ knobs at 6.
—Cabinet – 1×12-inch, 4×10-inch, 1×15-inch.
—Cabinet position – Where in the room, close to walls, on the floor or elevated.

Everything contributes to the sound you’ll be recording; do whatever you can to get it close to what you need from the start. It won’t be the same for every song, so you may want to have a few options for basses, though a Fender Jazz bass or a MusicMan is versatile enough to get you what you need 80 percent of the time. Rent or borrow what you don’t own before looking for magical plugins to solve all your bass problems.

In my experience, getting good bass gear for recording made my life so much easier further along in my projects. For recording you don’t necessarily need a massive bass rig; I use a Sterling Ray 34 (a low-end Music Man: swamp ash body, maple neck, humbucker pickup with active EQ) into a small Ampeg BX112 solid state combo amp with a single 12-inch woofer. Greatest bass recording gear ever? Ha—far from it, but it got me so much closer to the sound I was looking for.

Prior to that I was fighting with a mahogany bass that was deep but had almost no midrange when recorded, making it hard to hear clearly in the mix.

A great bass tone in the room is more likely to inspire a great performance. Now you need to capture and enhance it.

The Recording Chain:
—Mic selection – Dynamic mics and large diaphragm condensers are most common for bass amps. Spend some time comparing.
—Mic position – distance and angle play a big part as always.
—Direct Box (DI) – A direct box allows you to split the signal from the bass, one side continues to the amp, the other goes to a preamp.
—Microphone Preamp – Every preamp has its own tone. A pad option may be required.
—Compressor – Optional, but worth testing if you have the option. It’s very common in pro studios to compress the DI track while recording.

Record at a conservative level: if you’re really digging in for a grindy tone, keep the peaks no higher than -6 dBFS (DAW metering). An average level anywhere from -18 dB to -12 dB is all you need. Dynamics will likely be reduced and additional processing is inevitable by the final mix. A clipped signal is useless.
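Those level targets are easy to check numerically. Here’s a minimal sketch in plain Python (the function names are mine, and the test tone is just an assumed example) that computes peak and average (RMS) levels in dBFS for float samples normalized to ±1.0:

```python
import math

def peak_dbfs(samples):
    """Peak level in dBFS for float samples normalized to +/-1.0."""
    peak = max(abs(s) for s in samples)
    return 20 * math.log10(peak) if peak > 0 else float("-inf")

def rms_dbfs(samples):
    """Average (RMS) level in dBFS."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(rms) if rms > 0 else float("-inf")

# One second of a 100 Hz sine at 48 kHz, peaking at half scale (0.5)
tone = [0.5 * math.sin(2 * math.pi * 100 * t / 48000) for t in range(48000)]
print(round(peak_dbfs(tone), 1))  # -6.0: right at the suggested ceiling
print(round(rms_dbfs(tone), 1))   # -9.0: a sine's RMS sits 3 dB below its peak
```

A real bass performance has far more dynamic range than a steady sine, which is why the average target (-18 to -12 dB) sits well below the -6 dBFS peak ceiling.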

Can you get a great bass recording with just a DI (or plugging right into the interface)? Yes. Can you get a great bass recording with just an amp? Yes. Splitting the signal with a DI before the amp and recording to two tracks gives you more flexibility when it comes to mixing. You may prefer the sound of one over the other, or a blend of the two.

When you do blend the DI and miked amp signals in your DAW, it is very likely that you will hear some phase issues. The problem is that the DI signal gets to the interface before the mic signal does, causing a slight delay between the two tracks.
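The size of that delay is simple to estimate: sound covers roughly 343 meters per second, so a mic placed a foot or so from the speaker cone lags the electrically instantaneous DI by under a millisecond. A quick sketch (the 30 cm distance and 48 kHz session rate are assumed example values):

```python
SPEED_OF_SOUND = 343.0  # meters per second, at typical room temperature
SAMPLE_RATE = 48000     # assumed session sample rate

def mic_delay(distance_m):
    """Delay of the mic signal relative to the DI, as (milliseconds, samples)."""
    seconds = distance_m / SPEED_OF_SOUND
    return seconds * 1000.0, seconds * SAMPLE_RATE

ms, samples = mic_delay(0.30)  # mic about 30 cm from the cab
print(round(ms, 2), round(samples))  # 0.87 ms, 42 samples
```

A few dozen samples is exactly the scale of adjustment discussed below, which is why nudging the DI by samples rather than milliseconds is often all it takes.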

Try inverting the polarity of one of the tracks. This will usually yield a dramatic improvement in the low frequencies. It can be further improved by delaying the DI track, often by just a few milliseconds or even samples. You might find it easiest to start with the tracks “out of phase,” then adjust the delay until you have the most cancellation, and invert the polarity again (now in phase). You may not get it absolutely perfect, but do try to find the best compromise.
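That invert-and-nudge procedure can also be automated with a brute-force cross-correlation search. This is only a sketch with a synthetic test signal (the `align` helper is my own, not any DAW’s built-in tool):

```python
import math

def align(di, mic, max_lag=200):
    """Return (lag, flip): how many samples to delay the DI track, and
    whether to invert its polarity, chosen as the lag whose correlation
    with the mic track is strongest in absolute value."""
    def corr(lag):
        return sum(di[i] * mic[i + lag] for i in range(len(di) - lag))
    lag = max(range(max_lag + 1), key=lambda l: abs(corr(l)))
    return lag, corr(lag) < 0

# Fake DI: a decaying "pluck". Fake mic: the same signal arriving
# 37 samples later with inverted polarity from the speaker/mic chain.
n = 1000
di = [math.exp(-t / 200) * math.sin(2 * math.pi * 8 * t / n) for t in range(n)]
mic = [0.0] * 37 + [-s for s in di[:n - 37]]

lag, flip = align(di, mic)
print(lag, flip)  # 37 True: delay the DI 37 samples and flip its polarity
```

In practice you would run this on a short, transient-rich section of the real tracks; the ear-based method described above (maximize cancellation, then flip) converges on the same answer.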

By now you should have a good, very usable, better than average bass track recorded into your DAW. We won’t get into processing and mixing bass in this article; if you really need info on that right now, check out the September 2011 issue of Sound On Sound, which has great tips on mixing bass.

Have a listen to the audio examples I’ve prepared. Compare the different playing styles, mic positions, and mic types.

In the delay compensation file, notice how the tone changes quite dramatically just by delaying the DI in increments of 10 samples.

Bass Recording Examples by RevolutionAudio

Jon Tidey is a producer/engineer who runs his own studio, EPIC Sounds, and enjoys writing about audio on his blog AudioGeekZine.com.

{extended}
Posted by Keith Clark on 06/02 at 03:44 PM
RecordingFeatureBlogStudy HallDigital Audio WorkstationsEducationMicrophoneStudioPermalink

Friday, May 31, 2013

Musicians Are From Jupiter, Engineers Are From Saturn

Communication amongst aliens...

Being the online forum lurker that I sometimes am, there are times when I’m struck by a pattern or trend that was never clear before (for whatever reason). It happened just a few weeks ago, and the context is relevant to what we do in our business of live sound.

This particular discussion, on a music forum, was a thread about “the great tone we’ve spent years perfecting that is destroyed when it goes through a PA.” (You guessed it – I’m talking about guitar players here.) These guys and gals do indeed often spend an inordinate amount of time fiddling with strings, pickups, pedals, guitar cables, tubes and the like, all in an effort to realize the holy grail: the perfect guitar tone for a particular song or passage.

In a way, this quest can never be fully satisfying, because there really is no “perfect” tone. However, the sonic signature coming from a guitar rig is very personal and can indeed make a big difference in a song. Some players are known for their particular tone, with one of the most famous being the “brown sound” of Eddie Van Halen, epitomized on the band’s first album.

But what do these folks mean when they say their tone is lost in the PA? Further, the thread included a second group pointing out that this doesn’t have to be the case. Several of them explained that with a proper PA, the right microphone and a decent bloke behind the desk, the player’s tone should be amplified just as he or she desired it to be.

Worlds Apart?
It got me to thinking about how difficult communication can be between different types of people on and around the stage. The four types that I can ID in this scenario include the artist (musician), his/her tech, the monitor engineer, and the front of house engineer. In my view, this chain contains the most important parts of the equation. Perhaps the system tech can have an influence as well, but generally will plan for a flat PA and correct gain structure, with perhaps a touch of “flavor” per the FOH engineer’s taste.

What I’ve observed over the years is that in many ways, these four parties don’t necessarily communicate as well as possible. Sure, artists and monitor engineers are most likely talking and getting things sorted out. Instrumentalists and their techs are probably communicating effectively. But are the techs and FOH, and maybe even FOH and monitors communicating? For that matter, how about the artists and FOH?

This is likely the heart of the issue, for several reasons. FOH engineers and the artists they work with often have different purposes. The audience is focused on the artist, while the house system exists to deliver what they expect to hear. This may seem like a contradiction, but really, performing and mixing are two different worlds. Which brings me back to the thing about guitarists and the PA. 

Of IEMs & Men
What the artist wants and/or needs to hear in their in-ear monitors and/or wedges is not at all the same as what comes out of the house system. And therein lies one of the fundamental issues. In many cases, the artist doesn’t know what his/her tone sounds like from the audience perspective. By the same token, the FOH engineer doesn’t necessarily know what the mix sounds like in the artist’s ears.

Thus artists and monitor engineers share their own world, while FOH is on another planet. Of course, this is largely the way things should be. But what if the FOH engineer could tap into various monitor mixes to get an idea of what the artists are hearing? What if monitors could listen in on the main house mix? Would this be of value? It probably depends on the artists and engineers involved.

Certainly today, with the proliferation of digital transport (think Dante or AVB), this is no longer an impossible task. Depending on the system, it’s probably not cumbersome or cost prohibitive either. Anyway, it’s something to consider.

Stageside Manner
Another aspect is some animosity (and dare I say, at times, outright disrespect) between the various parties. Some on the engineering side seem to think that musicians are, well, idiots, and many musicians believe that engineering types don’t have the first clue about music.

It’s because the two specialties are often at cross purposes. Some musicians are not technically oriented, and certainly their goals and perspectives can be different from engineers who are dealing with a wide range of challenges and often don’t share the same concerns as musicians. Again, none of this is necessarily wrong.

But if our goal is to connect the artists to the audience and vice versa, perhaps we can be a bit more empathetic. Doctors call it “bedside manner,” but it applies to our field too. For instance, if a guitarist suggests that “all I need is a ‘57 placed on the upper left speaker,” sometimes an engineer’s inclination is to say “well, I can make it work better with a 421.” But in reality, if we can’t get a decent guitar tone with a ‘57, we shouldn’t be in this business.

Perhaps a better way to handle it would be to accept what the musician says and just make it happen to the best of our ability. Or, one step further: get things set up and then offer the musician the chance to listen from the house and determine if this is the sound he/she is after. People are often more comfortable if some of their ideas are incorporated into the solution.

By the same token, it would be nice for musicians to trust engineers more to preserve the core of their sound and represent them properly. A “stageside manner” will help, but so will competence and confidence as long as it doesn’t turn into arrogance. Something to keep in mind with any musician listening to the mix is the “3 dB rule,” which in essence states that most musicians will want to hear themselves about 3 dB louder than is appropriate. Thus it’s smart to leave some room to boost the sound, which can then be readjusted accordingly come showtime.

As for the audience, shows are usually an overall “experience” that they want to enjoy. They may not know if there are problems, or the really fine points of an amazing mix. They probably won’t say “I heard a great show last night” but they will say “I went to a great show last night.” And their experience will be enhanced by quality sound reinforcement, especially if they can hear the lyrics clearly and feel the groove.

As has often been said: “No one ever leaves a show humming the lights.” But they probably don’t leave humming the kick drum or the bass line in most cases, either. The priority is still the artist, the song, and the audience. If we can learn to communicate better, we’ll be in good stead with this holy trinity.

Karl Winkler is director of business development at Lectrosonics and has worked in professional audio for more than 20 years.

{extended}
Posted by Keith Clark on 05/31 at 04:31 PM
Live SoundFeatureBlogOpinionEngineerMixerSound ReinforcementStageTechnicianPermalink

In The Studio: Checking Out The Slate Digital Virtual Console Collection

How effective are these plug-ins? Let's find out.
This article is provided by Audio Geek Zine.

 

It seems like every few weeks there’s some new piece of audio software that claims to make your music bigger, louder, deeper, and more badass in every way. Every new plug-in is announced as a “total game changer.”

Like that means something.

When it was released a while back, Steven Slate’s Virtual Console Collection (VCC) was one of those so-called game changing plug-ins. There was SO MUCH HYPE about this product that I was completely put off by the idea of it and tried to ignore it for a while.

VCC is a plug-in that claims to make your mixes sound more analog and to make your DAW react exactly like an analog console. Not only that, but you get a choice of several consoles that you can use in any combination.

Say you wanted your guitars mixed on an SSL, drums on a vintage Neve, bass on a vintage RCA tube console, everything else through a Trident console and finally all those tracks summed through an API. Impossible in real life, but accomplished in a minute with VCC.


Details
Actually, VCC is a pair of plug-ins – the channel and the mix bus. Generally you stick the mix bus on the master fader and a channel on every track of your project. You can also use the mix bus plug-in on submixes if you prefer.

The channel plug-in models the inputs of the console, while the mix bus plug-in is the summing and main out of the console, and it includes some crosstalk in the algorithm.

You get a choice of five consoles:
—Brit 4K, a 4000 series SSL
—US A, a classic API
—Brit N, a Neve 8048
—Ψ, a Trident 80B
—RC-Tube, a hybrid of two vintage RCA tube broadcast consoles

Each console algorithm is made to match the frequency response and overload reaction of the original console. If you push them hard, the console reacts differently; this is completely unlike what you’re used to when mixing digitally. Each console has its own sound. It’s not a huge dramatic change, but it makes a noticeable difference.
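As a rough illustration of what “overload reaction” means (this is a generic waveshaper of my own, emphatically not Slate’s algorithm), a drive control can be modeled as a gain stage feeding a soft clipper:

```python
import math

def soft_clip(sample, drive_db=0.0):
    """Toy console-style saturation: boost by drive_db, round off the
    peaks with tanh, then compensate the gain back down."""
    gain = 10 ** (drive_db / 20)
    return math.tanh(sample * gain) / gain

# Quiet material passes nearly untouched; hot peaks get squashed,
# which is why the effect only shows up when you push the channel hard.
print(round(soft_clip(0.1, 6.0), 3))  # 0.099: almost linear
print(round(soft_clip(0.9, 6.0), 3))  # 0.474: heavily saturated
```

The nonlinearity is level-dependent, which is why the same plug-in setting can sound subtle on one track and obvious on another.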

The interface is really simple and easy to understand right away. For the channel, there is a VU meter at the top, a console selection knob, an input trim that tweaks ±6 dB, and a drive control over the non-linear saturation, also ±6 dB. The mix bus plug-in has stereo VU meters, console selection, and drive control.


Both plug-ins have a group option, which opens an advanced settings panel. You can have up to eight groups or keep channels independent. Grouped plug-ins have all their controls linked, which makes it really easy to try out different algorithms.

My first experience with VCC was when Slate released the new RC-Tube console option as a separate plug-in. It was about $60, including an iLok 2, which is required to run either version of VCC. I figured if it was anything close to the hype, it would be well worth it.

I really enjoyed using RC-Tube. It lived up to the hype and genuinely seemed to make my mixes better. This model softens the highs and makes the bottom end a little tubbier. It wasn't long before I was using it on every track in every session.

When the upgrade price for the full version dropped to $130 last year, I bought it. The other models are just as useful, and the option to mix consoles across multiple groups makes VCC much more flexible, though a little more time-consuming to set up.

It takes a while to get used to hearing the differences with VCC—you might think it's doing nothing at all until you bypass it, and suddenly the whole mix falls apart. This is the sort of effect that works best when set up near the start of a mix. You can add it at the end, but it will alter your balances and EQ.

Examples
I made a demo track in a hard rock style: Steven Slate Drums 4, three direct guitars played through Amplitube 3, and a bass guitar recorded direct through an MXR M80 Bass DI+. The drums are split out to multiple channels, with an instance of the VCC channel plug-in on each track.

There's no additional processing: no EQ, compression, or reverb beyond what was part of the Amplitube preset. It is UNMIXED.

All the VCC channels are on the same group. The input is at -2 and drive is at the default.

VCC bypassed: Listen

All VCC channels on, now set to Brit 4k, SSL console: Listen

US A, API console: Listen

Brit N, Neve console: Listen

Ψ, Trident console: Listen

RC-Tube, RCA broadcast console: Listen

As you can hear, the effect is not dramatic. It's also unique; it's not like adding compression, EQ, or distortion.

It's not an effect you need for a great mix, but it definitely helps, and once you've tried it you may well find it essential. For me it lives up to the hype; it makes a real difference, and I don't want to mix without it now.

For a look at what’s actually going on ‘under the hood’ of these plug-ins, Eric Beam has done some pretty extensive testing of VCC.

Jon Tidey is a producer/engineer who runs his own studio, EPIC Sounds, and enjoys writing about audio on his blog AudioGeekZine.com.

{extended}
Posted by Keith Clark on 05/31 at 12:53 PM
RecordingFeatureBlogDigital Audio WorkstationsProcessorSoftwareStudioPermalink