Friday, June 28, 2013
Properly Training Stagehands Can Make A Big Difference
It's productive to take a proactive approach to working with and helping train stagehands.
Any time a group of production professionals congregates to chat, the conversation invariably turns to the topic of labor. That’s when the griping and complaining begins. It seems that less-than-stellar stagehands are an epidemic - at least to hear all of the talk about it.
Our sound company has had nothing but excellent experiences with stagehands because we chose to address this issue head on. We do business with several labor companies in our general region, and we developed a training curriculum that was subsequently presented to management at each of these companies.
The crux of our offer was simple: here’s what we want and expect from stagehands, and we’ll provide the training to meet these standards, free of charge. Perhaps not surprisingly, the response has been overwhelmingly positive.
Every labor company has an “A-Team” and then “a bunch of other guys.” The goal of our training program is to bring all of them up to A-Team status, so that no matter who shows up on a call, we can be confident that we won’t have to hold their hands, or lose valuable time and/or risk injury, regardless of what task is assigned.
One key aspect is scheduling compatible times to conduct the training. Several sessions covering each topic have been held in order to allow as many stagehands as possible to attend.
To overcome the “recalcitrant” (some might say lazy) individuals who don’t really want to exert the effort, we’ve informed labor company management that we’ll no longer accept stagehands that haven’t chosen to “bother” with our training program.
The program’s selling points to stagehands:
- The benefits of our experience, which will help make them more employable.
- The training pertains to every customer, not just our company.
- It’s provided at absolutely no cost to them.
In addition, we DO NOT share with them what we tell their employers: that without training they won’t be allowed to work our calls. Negativity begets negativity.
Half of the program’s curriculum pertains to attitude, since it’s the most important aspect of any job. We’d much rather work with a stagehand with very little experience but a great attitude than a self-proclaimed “seasoned pro” who knows it all and complains about everything. Positive work habits are also covered in this section.
The other half of the program largely focuses on techniques. Properly rolling cables and stowing microphone stands, as well as a host of other job-related activities, are covered. It’s important to make actual hands-on demonstrations a centerpiece of this effort.
For example, our company rolls all cables in the circular method - thumb and forefinger style. However, we also teach over-and-under, because stagehands also need to know how to do this for other customers.
Whenever possible, we also share our training classes with our lighting partners so that these various specialized techniques can be addressed, further increasing the overall value of the time spent.
On the Same Page
A very important thing to remember: a program like this can be fun! It also helps get everyone on the same page from the outset, and it fosters better communication and working relationships. Productivity thrives in an environment like this.
Also keep in mind that if every sound company would offer training along these lines, we could all collectively improve the labor situation in very short order. The key is understanding that it’s a win-win situation for everyone involved, and it must always be presented that way.
Let’s have a look at an overview of the training curriculum we’ve developed. By the way, this information is always provided in handout form AFTER a training session. Giving it out before or during a session leads participants to be reading ahead rather than paying full attention to the presentation.
To be early is to be on time. To be on time is to be late. To be late is to not show up!
Do not impose your own personal dictates. Be observant. Ask questions.
Always approach a job or project with a positive attitude. Always try to think, “I can do that” or “I can get that done.” This goes a long way toward determining how the rest of your day will go.
Conversely, shouting, cursing, complaining and lewd language are not conducive to a good working environment. These things create tension. Note that we have discharged stagehands for actions of this type.
Do not show up for work under the influence of drugs or alcohol. It’s the single best way to be cut and banned from returning.
Unless specifically instructed, never, ever touch the musicians’ instruments. This is a professional show, not “Star Search.”
You do not need to be accessible to every person you know on a “24-7” basis. Unless you have an impending family emergency, when on the job, turn your cell phone off. This is what voice mail is for - check it and return calls during breaks.
Wear sturdy shoes - and no sandals.
When working outside in the sun, black is the worst color to wear.
On an all-day show, having an entire change of clothing on hand is a good idea.
A sweat towel also comes in quite handy. At the very least, your feet will thank you for a clean pair of socks midway through the day.
And, please - don’t make us have the “Stinky Talk” with you. Yes, it’s often a dirty, smelly job, but don’t start the day that way.
When in doubt, ask!
There is no one correct way to roll cables - BUT- there is only one correct way for each sound company. Ask. And learn how to roll cables: circular, over and under, etc. Cables never “forget,” and if they’re rolled differently than usual, they can be damaged. This can get expensive!
Roll cables as if you’re going to be the person to use them next.
When rolling cables, be aware that there are many different types, and they usually go in different places. As a general rule, it’s best to keep them separated so that stowing them is both accurate and swift. Note in each trunk what size and type of cables are already rolled and packed. Follow that lead.
Cases and their respective lids are usually identified by matching numbers, words like “FRONT” and “BACK” written across both the lid and case, color codes or the like. Pay attention - putting on the wrong lids, or putting them on upside down, can warp or otherwise damage cases.
Be gentle with things like snake latches and other multi-pin connectors. They are delicate and very costly.
When you see something that is broken or obviously should be repaired before it does break, bring it to the sound company’s attention.
When dealing with mic stands, find out how the company wants its stands to be stored. Usually, fully collapsed is the accepted method. Leaving a boom telescoped out means there’s a good chance it will get bent, and therefore ruined. Again, look carefully at the cases to figure out what is to be stored where.
When loading trucks, be respectful of the case wheels. DO NOT ram the wheels onto the lift gate. This will bend the casters, which cost at least $25 each.
Never ride a lift gate up to the truck box while holding gear. If a case, loudspeaker stack, etc., begins to roll, it’s almost impossible to stop. And it will likely take you off with it. This is one of the more unsafe aspects of stagehand work (on the ground, that is).
If faced with a falling loudspeaker stack - please, please, please - don’t try to put your body between it and the ground. You’ll lose every time. Yes, loudspeakers are expensive, but they aren’t as important as your safety.
Teri Hogan is a veteran audio professional who co-owned Sound Services, a performance audio company in Texas.
Wednesday, June 26, 2013
Church Sound: In-Depth Primer On Fixing Coverage & Related Issues In Houses Of Worship
Becoming more familiar with the concepts of acoustics, processing, system configuration and more
Audio is an essential element in any modern-day religious service. What is heard by the congregation is a combination of the acoustic qualities of the room and the performance of the audio system.
Some of the desirable acoustic qualities in a house of worship are:
Reverberance: When well controlled with early decay, the effect is perceived as a beautiful sound that enhances the quality of the audio. See the Rane Pro Audio Reference for a definition of “reverberation.”
Clarity: The ratio of the energy in the early sound compared to that in the reverberant sound. Early sound is what is heard in the first 50 - 80 milliseconds after the arrival of the direct sound. It’s a measure of the degree to which the individual sounds stand apart from one another.
Articulation: Determined from the direct-to-total arriving sound energy ratio. When this ratio is small, the character of consonants is obscured, resulting in a loss of intelligibility of the spoken word.
Listener Envelopment: Results from the energy of the room coming from the sides of the listener. The effect is to draw the listener into the sound. Where a conference room would be optimized for articulation and clarity, a symphony hall is optimized for reverberance and listener envelopment. A good house of worship is optimized as a compromise between the somewhat conflicting requirements of music performance and the spoken word.
Articulation must be excellent but sufficient reverb is required to complement music performances. All reflections must be well controlled to achieve this balance and ensure the best possible listener experience.
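Clarity, as defined above, can be computed directly from a measured impulse response. Here is a minimal sketch of that early-to-late energy ratio; the synthetic impulse response stands in for a real measurement, and all values are illustrative assumptions:

```python
import math

def clarity_db(impulse, fs, early_ms=50):
    """C50-style clarity: ratio of early to late energy in an impulse
    response, in dB. `impulse` is a list of samples, `fs` is in Hz."""
    split = int(fs * early_ms / 1000)
    early = sum(x * x for x in impulse[:split])
    late = sum(x * x for x in impulse[split:])
    return 10 * math.log10(early / late)

# Synthetic impulse response: a strong direct arrival followed by an
# exponentially decaying reverberant tail (illustrative values only).
fs = 1000
ir = [1.0] + [0.1 * math.exp(-n / 300) for n in range(1, 500)]
print(round(clarity_db(ir, fs), 1))  # positive dB: direct sound dominates
```

Removing the direct arrival from that impulse response drops the figure into negative territory, which is exactly the situation described above where individual sounds no longer stand apart from one another.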
An Example Of Good Sound
There are other possible examples, but the author really likes this one: In some mosques, cathedrals and tabernacles there are wonderful domed ceilings that have marvelous natural acoustic properties.
The acoustic coupling from performers to the congregation grouped under the dome makes for a very (dare I say) “spiritual” experience. For the purpose of this article, this level of performance is a “gold standard” to which other acoustic spaces will be compared in the search for improvements and recommendations.
The USA Pavilion at Florida’s Epcot Center makes for an interesting case study. There is a dome ceiling in the pavilion, under which an eight-part a cappella group called the “Voices of Liberty” performs. For those under the dome listening to the group, the sound is beautiful and inspiring. Moving out from under the dome, the “magic” is gone.
This level of performance is not feasible in a typical house of worship, but it does establish an icon as to what could be if there was sufficient skill (and budget) applied to the acoustic and audio system design.
And Now The Ugly World In Which We Live
Contrast this to a typical public address system squawking bad sound to the congregation. That which was good is replaced with misery.
You reach for a bottle of aspirin to calm the headache induced by a pair of blaring powered loudspeakers. Some of the problems encountered by audio designers/consultants include:
Excessive Reverberation, such that articulation and clarity are poor.
Echo, where a discrete sound reflection returns to a listener more than 50 milliseconds after the direct sound and is significantly louder than the reverberant sound.
Flutter Echo, repeated echoes that are experienced in rapid succession that occur between two hard parallel surfaces. All echoes ruin the acoustic properties of a room and a flutter echo is particularly damaging.
Coloration Due To Reflections, where a reflection destructively recombines with the direct sound, modifying the frequency response in the process. These are non-minimum-phase colorations as correction with equalization is not possible.
Delayed Sound, from coupled volumes (contamination from adjacent rooms storing sound energy and then returning the energy to the main room).
Psychological Preconditioning, a common problem where the clergy and congregation are so preconditioned by bad sound that they become resistant to change and find it difficult to (at first) recognize good sound.
This can also work in the audio consultant’s favor when the customers are preconditioned by good sound and are willing to invest the required resources toward good audio design.
For those of us designing audio for houses of worship with a rectangular room, flat walls and probably a vaulted ceiling, some form of sound reinforcement is required. Through attention to detail and careful design of the audio system, the experience of the congregation can be non-aspirin inducing and the system simple to use.
Common Signal Processing Blocks
Let’s begin by looking at the universal signal processing chain common to all audio systems.
In the simplest systems these functions are accomplished in an audio mixer that feeds a pair of powered loudspeakers.
More sophisticated systems include equalization, compression, limiting, automation, feedback suppression, electronic crossovers and other tools of the trade. These days it is possible to include all of these functions in a DSP (digital signal processor).
Figure 1: Microphone to amplifier chain.
One example of the signal chain from the minister’s microphone to the power amplifiers is shown in Figure 1.
The signal processing flow starts at the analog input. A 2-band parametric equalizer filters out-of-band low frequencies. The microphone signals are summed together in an automatic mixer. An AGC (automatic gain control) reduces the dynamic range and a high-pass filter in the side chain improves the performance of the AGC.
The level control can be tied to a pot on the wall or a smart remote. There is a feedback suppressor for good measure. A 2-way crossover supports a biamplified system. The 10-band parametric equalizers are utilized for both wide- and narrow-band corrections.
Generally, wide-band filters correct minimum-phase frequency response irregularities in the speaker drivers and in the room response.
Narrow-band filters are useful to partially correct non-minimum-phase related problems such as energy stored in room modes (reverberant energy).
A limiter could also have been added to protect the system from clipping if that feature is not included in the power amplifier.
Now let’s take a look at some of these signal processing blocks in greater detail.
Analog Input / Microphone Preamp
It’s surprising how often even experienced audio consultants will configure an audio input incorrectly. It’s important that as much gain as possible is accomplished at the front end of the system in the analog gain stage.
Any additional gain from digital trim after the input stage degrades optimum signal-to-noise performance. As an example, let’s set the input gain to a value of +40 dB.
One way: the analog gain is set to +45 dB and the digital trim to -5 dB (as in Figure 2). The measured input-referred noise is -127 dBu.
Figure 2: Rane Drag Net input block.
A common (but incorrect) way would have the analog gain set to +30 dB and the digital trim set to +10 dB (the author has seen this repeatedly) to give the same mic gain of 40 dB—but now the input-referred noise is degraded to -114 dBu.
That is an increase of 13 dB in the noise floor, or a change (in the bad direction) of 8 dB in the maximum SNR (signal-to-noise ratio). Your exercise is to determine why the SNR was only degraded by 8 dB rather than the intuitively obvious value of 13 dB.
Answer: The noise floor does drop by 13 dB, but this combination of settings causes the analog input stage to clip at an input level that is 5 dB lower. Hence, the change in system SNR is 8 dB.
Applying attenuation after the input stage (rather than gain) reduces overload headroom, so it should be used with skill and discretion. Even so, putting the gain in the analog stage is the proper technique to maximize noise performance.
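The 8 dB answer can be checked with a little arithmetic. This is a minimal numeric sketch, assuming the analog stage and the A/D converter clip at a common reference level (that assumption and the 0 dB reference are mine; only the noise-floor figures come from the text):

```python
# Input gain staging: find where the chain clips and compute the SNR.
CLIP = 0.0  # common clip level for analog stage and converter, arbitrary reference

def system_clip_input(analog_gain, digital_trim):
    # Input level (dBu) at which the first stage in the chain clips.
    analog_limit = CLIP - analog_gain                    # analog stage clips here
    converter_limit = CLIP - analog_gain - digital_trim  # converter clips here
    return min(analog_limit, converter_limit)

noise = {"A": -127.0, "B": -114.0}   # measured input-referred noise (dBu, from text)
clip_a = system_clip_input(45, -5)   # config A clips at -45: analog stage first
clip_b = system_clip_input(30, 10)   # config B clips at -40: converter first
snr_a = clip_a - noise["A"]          # 82 dB
snr_b = clip_b - noise["B"]          # 74 dB
print(clip_b - clip_a, snr_a - snr_b)  # prints 5.0 8.0
```

The high-analog-gain configuration clips 5 dB sooner, which is why its 13 dB quieter noise floor buys only an 8 dB improvement in maximum SNR.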
Figure 3: Drag Net parametric for input low cut.
Input Low-Cut Filter
A very good idea is to add a low-cut filter set to ~80 Hz after the input stage to minimize the effects of undesirable low-frequency noises such as bumps and thumps that come from handling the mic and also wind blasts and pops from speaking too closely into the microphone.
In Figure 3, both 2nd-order filters are set to the same frequency to produce a 4th-order filter.
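The response of that cascade can be sketched with the analog Butterworth magnitude formula. This is my illustration, not Rane's figure: two identical 2nd-order sections at one corner give a 4th-order (Linkwitz-Riley-style) slope, down about 6 dB at the corner and falling 24 dB per octave below it:

```python
import math

def hp2_mag_db(f, fc):
    # Magnitude response of one 2nd-order Butterworth high-pass, in dB.
    r = (f / fc) ** 2
    return 20 * math.log10(r / math.sqrt(1 + r * r))

fc = 80.0  # the ~80 Hz low-cut corner suggested in the text
for f in (20, 40, 80, 160):
    total = 2 * hp2_mag_db(f, fc)  # two identical sections -> 4th order
    print(f"{f:4.0f} Hz: {total:6.1f} dB")
```

A 20 Hz thump, two octaves below the corner, comes out roughly 48 dB down, while content an octave above the corner is essentially untouched.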
There should also be a low-cut filter in line with the SC (side chain) input of the AGC (automatic gain control).
Figure 4: Drag Net parametric for AGC side chain.
This filter can be set to a higher corner frequency (such as 120 Hz in Figure 4) to improve the performance of the AGC by rejecting the effects of low-frequency noises.
The Auto Mixer
A Little Automation Buddy
Figure 5: Drag Net auto mixer block.
An auto mixer (shown in Figure 5) is a good idea when there is more than a single open microphone.
Auto mixers combine the signals from multiple microphones and automatically correct for the changing gain requirements as the NOM (Number of Open Microphones) changes.
Threshold with “last on” is a useful setting for all microphones used in a worship service (Figure 6).
Figure 6: Auto mixer input edit block.
Unused microphones (input levels are below threshold) are gated. When the input of a microphone is above threshold then other inputs with a lower assigned priority level are ducked.
Automatic Gain Control
A compressor would be the expected processing block at this link of the audio chain. Something is needed here to prevent exuberant preaching from melting down the congregation.
Surprisingly, an AGC can be very useful in this position but configured to behave more like a specialized compressor by using the settings shown in Figure 7.
Figure 7: Drag Net AGC block.
The value of “Threshold re: Target” is set to have an offset of 0 dBr so that “Threshold” has the same value as the “Target.” “Maximum Gain” becomes 0 dB and the gain curve starts to look like a compressor but there are additional controls in an AGC for Hold and Release that are useful when the input level is below threshold.
These settings avoid the problem of compressor “pumping” when an exuberant speaker is at the microphone, because attenuation levels are held between spoken phrases. Then, when a more reserved speaker takes over, the hold time (below threshold) expires quickly enough that the gain returns to a normal level.
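The behavior just described can be modeled per frame. This is an illustrative sketch, not Rane's implementation; the threshold, ratio, hold length and release rate are all assumed values:

```python
THRESHOLD_DB = -20.0  # assumed threshold (equal to the AGC "Target")

def agc_gain_db(level_db, ratio=4.0):
    """Static curve: Max Gain = 0 dB (never boost), attenuate above
    threshold like a compressor. Settings are illustrative assumptions."""
    if level_db <= THRESHOLD_DB:
        return 0.0
    return -(level_db - THRESHOLD_DB) * (1.0 - 1.0 / ratio)

def process(levels_db, hold_frames=8, release_db_per_frame=1.0):
    """Frame-by-frame sketch: hold the last attenuation while the input
    is below threshold, then release back toward 0 dB once hold expires."""
    gain, held, out = 0.0, 0, []
    for lv in levels_db:
        if lv > THRESHOLD_DB:
            gain, held = agc_gain_db(lv), hold_frames   # track level, re-arm hold
        elif held > 0:
            held -= 1                                   # hold between phrases
        else:
            gain = min(gain + release_db_per_frame, 0.0)  # gentle release to unity
        out.append(gain)
    return out

# A loud phrase (-5 dB) followed by silence (-60 dB):
g = process([-5.0] * 5 + [-60.0] * 30)
print(g[4], g[12], g[-1])  # prints -11.25 -11.25 0.0
```

Note how the attenuation reached during the loud phrase is frozen through the hold period rather than snapping back the instant the talker pauses; that frozen stretch is what prevents audible pumping between phrases.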
An Exciting Labor-Saving Tip:
Put a Control On the Wall
Here is an exciting tip: A level control can provide attenuation as needed under the control of a pot on the wall or a smart remote.
This is handy in systems where a minister needs to run a system alone without the assistance of an audio specialist who is running a mixing board.
The remote can be located on or close to a pulpit which places control of the audio system at the fingertips of the minister.
The DSP control is shown in Figure 8.
A Gift From Above?
The next item in this processing chain is somewhat controversial. It is a feedback suppressor.
To some audio consultants, these are heresy!
The argument is that a properly calibrated system has no need of such a band-aid.
This is generally true, but there is one case when it is wise for an audio consultant to suffer the ignominy of using a feedback suppressor (Figure 9): lay clergy, where the person speaking is untrained and/or unfamiliar with proper use of a microphone.
Figure 8: Drag Net level block mapped to a remote level control.
The author has witnessed such a person cup their hands (in the attitude of prayer) directly around the microphone capsule.
The hands form a resonant chamber that results in squealing feedback.
A good feedback suppressor would have locked on to the offending tone and notched it out post-haste.
Figure 9: Drag Net feedback suppressor.
Now We’re Having Real Fun
Parametric equalizers are used for both wide and narrow band corrections. Generally, wide-band and shelf filters can correct for minimum-phase frequency response irregularities.
One interesting detail of Figure 10 is Hi-Shelf Filter 1. This filter was added after achieving flat in-room response. Since the system was calibrated in an empty room, this extra high-frequency energy is intended to compensate for the high-frequency absorption of the congregation when the room is full of people.
There is also a noise-masking effect in some congregations that will tend to obscure the intelligibility of the spoken word.
In practice this approach of adding a bit of extra high-frequency energy into the room works well.
Figure 10: Drag Net parametric block (may have up to 15 bands per block).
Narrow-band filters (see Figure 11) are useful to partially correct non-minimum-phase related problems such as energy stored in room modes.
At low frequencies this energy causes bass to sound indistinct, and in midrange to lower treble this energy is perceived as reverberation.
These filters attenuate the frequencies that bounce about the room. In an acoustically live room, room resonances can propagate for a surprisingly long time causing these frequencies to “build up.”
Figure 11: Parametric with narrow-band filters.
Narrow-band filters are just a partial solution. Greatest effectiveness is achieved when filters are used in conjunction with acoustic room treatments such as diffusers, high/mid frequency absorbers and bass traps.
Next, let’s look at some specific examples to better illustrate these points.
Example #1: A Small Church
The ceiling is low suspended acoustic tile over an open space covered with thin carpet. The RT60 (the time it takes the reverberant sound to decrease by 60 dB) is short, so controlling reverberation is not a problem for audio clarity.
In fact, the room is a touch “dry” for music, and the content of the worship service includes live musical performances.
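For readers who want to estimate RT60 rather than measure it, the classic Sabine formula gives a rough figure. The room dimensions and absorption coefficients below are illustrative assumptions for a small tiled-and-carpeted room like this one, not measurements from the article:

```python
def sabine_rt60(volume_m3, surfaces):
    """Sabine estimate: RT60 = 0.161 * V / A, where A is the total
    absorption (sum of surface area times absorption coefficient)."""
    a = sum(area * alpha for area, alpha in surfaces)
    return 0.161 * volume_m3 / a

# Small room with acoustic-tile ceiling and thin carpet (assumed figures):
rt60 = sabine_rt60(
    12 * 9 * 3,                      # ~324 m^3
    [(12 * 9, 0.55),                 # suspended acoustic-tile ceiling
     (12 * 9, 0.30),                 # thin carpet over the floor
     (2 * (12 + 9) * 3, 0.05)],      # painted gypsum walls
)
print(round(rt60, 2))  # prints 0.53
```

Roughly half a second is indeed short, consistent with the observation that this room is a touch dry for music; replacing the absorptive ceiling with something reflective in a remodel would push the estimate up sharply.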
The sources of audio are the minister with a wireless microphone and the band. Additional sources are DVD/CD players and other devices as needed. Control is via a 24-channel mixer with all inputs used.
Output is to a pair of powered loudspeakers mounted high in the corners of the room in a stereo configuration. This installation was done by members of the congregation without consultation with an audio professional.
The quality of the audio is poor with numerous problems including uneven frequency response. An experienced sound person is required to run the mixer for all audio system use.
Further, there is poor coverage of the congregation from the stereo speaker pair. People sitting in the “hot spots” just in front of the loudspeakers are blasted with excessive level, and the rest of the congregation is exposed to a strong interference pattern between the two loudspeakers (Figure 12).
Figure 12: Stereo loudspeaker pair coverage.
The system is uncompensated for room modes, room response and loudspeaker response irregularities. There is a small “sweet spot” in the center of the room where the two speakers combine coherently, but there is an aisle down the center of the seats. Since the aisle has no chairs, no one is seated in the “sweet spot.”
So does this audio system work the way it is? Yes, but everyone knows the congregation may not be receiving the best possible audio experience. This example is rich in possibilities.
Improvements to this system are accomplished in a number of ways. A DSP can be used for equalization, other processing and to add automation to the minister’s microphone. The entire worship band could be run through a mixer with each individual input processed by an AGC.
There are admittedly downsides to automating the audio mixing of a large group, as the automation is not as intelligent as an experienced sound person, but it is possible in some cases. In addition, the loudspeakers are examined to look at options that provide more even coverage of the congregation.
Improvements to this audio system can be introduced in phases.
Add a DSP box between the output of the mixer and the feeds to the main speakers and on-stage monitors.
Features added could be:
Parametric wide-band equalization. This alone would greatly improve this system.
Parametric narrow-band equalization. A short RT60 makes this unnecessary at this time. However, remodeling could increase RT60 to where narrow-band equalization would be needed. (This room could use bass absorbers).
High-pass filtering. If not in the 24-channel mixer already.
Compression. Always a good idea with microphones, because the level at the microphone varies sharply with the distance from the preacher’s mouth (the inverse square law).
Feedback suppression. If needed.
Automation is incorporated with automixers and remote controls. There are many exciting ways to add these features depending on the needs of individual congregations.
The most obvious upgrade would be to add the ability for a minister to turn on and control the main microphones from a simple control panel located in easy reach at the front of the room.
The very uneven coverage of the congregation by the stereo loudspeaker pair needs to be addressed, as shown in Figure 12. The seats directly in front of the speakers have enough level to kill small animals.
If the audio system were perfect then each seat in the congregation would have the same audio level. In the author’s experience, similar rooms have been controlled within a couple of dB.
In this example, the seat closest to each loudspeaker is about 15 dB louder than the worst seat on the floor, and interference between the two speakers adds a very lumpy and unpleasant frequency response.
Another problem is that the FOH (front of house) mixer is placed where the sound is good; a mix set by ear at that position leaves the levels at the ends of the front rows way too loud.
Line Array Loudspeakers
One improvement is to remove the stereo pair of point-source loudspeakers and install a floor-to-ceiling line array located in the center of the back wall as shown in Figure 13 (below).
Coverage of the congregation is more even, and the level at the FOH mixer location is very similar to the coverage level over the whole floor of the congregation.
The level of the stage monitors is greatly reduced and some of the stage monitors may no longer be needed depending on the individual needs of the musicians.
Within the near field of the line array there is a range where the audio level decreases by only 3 dB for each doubling of distance, which greatly helps even out the coverage across the entire floor.
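The contrast with a conventional point source is easy to put in numbers. A far-field point source loses 6 dB per doubling of distance, while an ideal line source in its near field loses only 3 dB; the sketch below is the textbook idealization, not a prediction for any particular box:

```python
import math

def point_source_drop_db(d, d_ref=1.0):
    # Far-field point source: level falls 6 dB per doubling of distance.
    return -20 * math.log10(d / d_ref)

def line_source_drop_db(d, d_ref=1.0):
    # Ideal line source, near field: level falls 3 dB per doubling.
    return -10 * math.log10(d / d_ref)

for d in (1, 2, 4, 8, 16):
    print(d, round(point_source_drop_db(d), 1), round(line_source_drop_db(d), 1))
```

Sixteen meters out, the point source is down about 24 dB while the line source is down only about 12 dB, which is exactly why the front-row-blasting problem of the stereo pair fades with a line array.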
One other characteristic of this application is that the audio is distributed across the whole line so that even if a microphone is right next to the line there is little tendency to feedback.
In this example, there is a low suspended-acoustic-tile ceiling that shortens the length of a line array speaker. This limits some of the good qualities of a line array, so it might not be the best solution.
If the room were remodeled so there was a high ceiling, then a line array would make more sense because a longer line array would fit.
This is especially true if the newly remodeled ceiling was acoustically reflective causing the RT60 of the room to be much greater.
Figure 13: Line array loudspeaker coverage.
The high directivity of a long line array greatly helps to project the audio out to the floor rather than have it directed toward the ceiling, where it contributes to the reverberant energy and slap echoes in the room.
Supplemental Distributed Array Loudspeakers
Because of the dropped ceiling, another option would be a distributed array of supplemental ceiling loudspeakers in the back of the room as shown in Figure 14. The loudness level of the main stereo pair could be reduced by at least 12 dB.
Figure 14: Distributed array loudspeaker coverage.
This would greatly diminish the effects of the hot spots in the front of the room but would leave the level at the back of the room way too low. Ceiling loudspeakers can be added in the locations shown to fill in the audio in the back of the room.
It would be very important to include a speaker over the mixer location so the audio at that location matches the level in the congregation to aid in achieving an accurate mix.
Why The Delay?
The ceiling loudspeaker signals should be delayed in time, so their output combines coherently with the output from the point-source pair in the front of the room.
If the rear loudspeakers are not correctly delayed then the loudspeakers in the room will not combine correctly.
This room is too small for audio from the front of the room to be perceived as a distinct echo.
Applying a proper delay to the ceiling loudspeakers can minimize the problem of localization confusion that occurs if the first arrival sound is coming from the overhead loudspeakers and not the front of the room.
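The required delay is simple distance arithmetic at the speed of sound. The sketch below adds a small extra "Haas" offset so the fronts arrive slightly first and localization stays forward; that offset and the example distances are my assumptions, not figures from the article:

```python
SPEED_OF_SOUND = 343.0  # m/s at roughly 20 degrees C

def fill_delay_ms(main_to_listener_m, fill_to_listener_m, haas_ms=10.0):
    """Delay for a ceiling fill speaker so the mains arrive first.
    The small Haas offset keeps localization toward the front."""
    path_difference = main_to_listener_m - fill_to_listener_m
    return path_difference / SPEED_OF_SOUND * 1000.0 + haas_ms

# Rear seat 18 m from the mains and 2.5 m below a ceiling fill (assumed):
print(round(fill_delay_ms(18.0, 2.5), 1))  # prints 55.2
```

In practice these numbers are a starting point; the delay is then fine-tuned by ear or with a measurement system at the seats under each fill speaker.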
Example #2: A Mid-Sized Contemporary House Of Worship
This second example is a medium-sized house of worship. The vaulted ceiling is high, and the floor in the congregational seating area is covered with hard industrial vinyl.
The RT60 is longer than in the first example, at approximately 1.5 seconds, so reverberation is a problem in an empty room. The sources of audio are again ministers on microphones and a worship band.
Control is via a 32-channel mixer. The speaker system is an array of three large boxes mounted as a central cluster high in the peak of the ceiling. A professional audio company did the installation and calibration of the audio system.
The quality of the audio in this church is much better than in the first example. An interesting question is: how good is “good enough”? When interviewed, members of this congregation say they can usually hear. The audio is rarely painful to listen to, so some say the quality is fully acceptable.
This is a good time to reflect back on the example in the introduction where domed ceilings were held up as an icon of natural acoustic wonderfulness. Let’s examine each individual audio characteristic previously discussed and see how this audio system installation stacks up.
Reverberance is not well controlled and is dependent on the configuration and occupancy of the room. Low-mid frequencies are a particular problem as the energy builds up and is never trapped or controlled.
Clarity is fairly good and meets a minimum standard. Articulation is acceptable but not outstanding; the %ALcons (Articulation Loss of Consonants) rating of this room is within the acceptable range, but there is room for improvement.
Listener envelopment is nonexistent and completely pales in comparison to the example of a domed ceiling.
Again, as in the first example, an experienced sound person is required to run the mixer for any use of the audio system, as there is no automation in the audio system.
There is good coverage of the congregation from the central cluster, but people sitting in the area where the coverage patterns between two of the loudspeakers overlap experience uneven frequency response due to the comb filtering caused by the interference between these two loudspeakers.
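The comb filtering in those overlap zones follows directly from the path-length difference between the two boxes. As a hedged sketch (the seat distances are assumed, not taken from this room), the cancellation frequencies fall at odd multiples of half the reciprocal of the arrival-time difference:

```python
SPEED_OF_SOUND = 343.0  # m/s

def comb_nulls_hz(d1_m, d2_m, count=5):
    """First few cancellation frequencies for a listener hearing the
    same signal from two loudspeakers at different distances. Nulls
    fall at odd multiples of 1/(2*dt), where dt is the time difference."""
    dt = abs(d1_m - d2_m) / SPEED_OF_SOUND
    return [round((2 * k + 1) / (2 * dt)) for k in range(count)]

# Seat 6.0 m from one cluster box and 6.5 m from its neighbor (assumed):
print(comb_nulls_hz(6.0, 6.5))  # prints [343, 1029, 1715, 2401, 3087]
```

A half-meter offset puts the first null in the low mids and stacks the rest right through the vocal range, which matches the uneven frequency response those listeners report.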
Bass response is particularly poor. The poor bass response leads to the impression that the system lacks sufficient power.
A DSP is already in the system and can be used for additional equalization and other tasks. The same recommendation applies to add enough automation so that a simple service can be done without bringing in a sound person.
The loudspeaker system may already be fully adequate. The first temptation may be to add a subwoofer to add bass power, but after a quick survey it is probable that the buildup of mid-bass energy in this room makes the quality of the bass so poor that adding more bass will only make matters worse.
To fix the room, the ceiling and walls could be completely covered in bass absorptive panels, but this is not really practical so a compromise is to add bass traps to the corners of the room and the ridge of the ceiling.
If it is not possible to tame the room with traps, then narrow-band filtering techniques could be employed.
Figure 15: Distributed array loudspeaker coverage.
This is where the room is evaluated for the natural modes that build up energy in the room and these frequencies are notched out with a very narrow filter. A combination of some absorptive panels and narrow-band filters might be the best compromise.
There are regions (as shown in Figure 15) where the coverage from the individual speakers in the cluster interfere with each other rather than combine cooperatively. This interference is frequency-dependent.
The solution is to reduce the contribution of some of the loudspeakers at those problem frequencies so that interference is minimized. The system would then require re-calibration to complement the above changes. That should do it.
At the time of publication Michaël Rollins was a senior digital design engineer for Rane Corporation. This and other educational articles are available in the RaneNotes Library, a subset of the Rane Pro Audio Reference.
The Studio Curmudgeon: Using Multiband EQ To “Fix It In The Mix”
Try these suggestions, but then try something totally different. Innovate -- don’t blindly follow.
Before there was digital recording, before spring reverb, even before analog tape, there was EQ. Equalization is one of the oldest tools in the audio engineer’s arsenal, and one of the most useful.
Used judiciously, EQ can do wonders to de-clutter a crowded soundscape. Used with precision, it can remove offending sounds we hadn’t necessarily intended to capture. Used correctly, a bit of EQ can be all that’s needed to make peace between dueling guitars, scoop the mud from the heaviest drums, or make a mundane vocal stand up and shine.
But all too often, EQ is misused and misunderstood, typically in a vain attempt to fix a poor recording.
Rule Number One in recording still applies: garbage in equals garbage out. A little EQ is great for helping make a good track sound better, but no amount of EQ will make a bad track sound good.
The best mix starts with the best recording, so try to capture the best sound you can to begin with. Move mics, listen at the source, not just in the control room or in your headphones. Make sure what’s being recorded sounds as close to what’s being played as possible – before it’s too late to do anything about it.
Your ears are the bottom line when it comes to applying EQ. While we can talk about a few general principles, every instrument has its own unique characteristics and timbre, and will react differently to boosting or cutting specific frequencies. So take these and all suggestions with a few grains of salt; use them as a starting point but make your decisions based on what sounds good.
EQ Giveth & EQ Taketh Away
When it comes to EQing, less truly is more, and in nearly all cases it’s better to take away than to add. Many less experienced users try to make an instrument stand out by boosting frequencies, but the cumulative results can be dangerous. Boost the same range by 2 dB on two different instruments and, when they excite the same frequencies (and trust me, they will, and probably at the worst possible moment), the combined buildup can approach 4 dB. Add too much EQ and your mix can easily turn to mud. It’s often a better idea to try attenuating those same frequencies in other instruments instead.
Another good reason to minimize your use of additive EQ: while cutting frequencies is a passive process, boosting frequencies makes your EQ function as a preamp within the signal flow. Adding any preamp means adding noise and distortion, and the preamps in most EQ circuitry are less than optimal.
All those arguments aside, sometimes it’s simply more effective to boost one element of the mix, rather than rolling off dozens of others. Once again, the operative word here is moderation – a little boost of 1 or 2 dB goes a long way.
EQing Drums – If it Doesn’t Fit, You Must EQ It
If your mix includes drums, it’s a good bet you’ll spend a considerable portion of your mixdown time EQing them. Because drums cover such a wide tonal range, there’s plenty of other stuff in the mix that can compete with those frequencies. Kick and snare in particular tend to be prominent parts of the song’s sonic fabric, and when it comes to helping them play nicely with other instruments and vocals, EQ is your best friend.
Of course, assuming you’re working with a live drum kit (as opposed to isolated drum samples), you’re not working in a vacuum. Since every drum track also contains leakage from the other mics, boosting a frequency on one track can also bring up the off-axis sounds of adjacent mics, potentially creating more problems than it solves.
For a dull sounding kick drum, adding a slight boost anywhere around 80 Hz to 120 Hz will produce more boom and a more rounded “thud.” (Typically, the kick tends to compete with the bass guitar for that frequency range, and it’s a good idea to decide which of the two should occupy the lower and upper edges of that zone. See the section on bass later in this article for more on this.)
Adding a tiny bit of 500 Hz can bring out the “click” of the beater hitting the drum head, and can be helpful in preventing the kick from disappearing once your track hits the listener’s earbuds in the inevitable low-fi MP3 version.
Snares come in such a wide range of sizes and materials, it’s a bit tough to generalize about frequencies. But the sound of the snare wires rattling lives in the 5 kHz to 10 kHz range, and a bit of gain there is great for brightening up a dull snare. If you’re plagued with a boxy sounding snare, try rolling off a bit of 300 through 800 Hz.
With toms, a common mistake is to try boosting low end to make them stand out. Adding a couple of dB at 100 Hz will increase their power, but at the expense of muddying the mix. A better strategy for perking up those tom fills is to leave the bottom end alone and add a tiny bit of 5 kHz to bring out the attack. And as with the snare, play around with rolling off that same 300 through 800 Hz range to eliminate boxiness.
Almost every tom has a resonant ring, and some can be problematic. Of course, the basics apply: tune the toms first and foremost to reduce or eliminate ringing. Whatever problem resonance remains can be addressed using a surgical approach with a multiband EQ. Select a narrow Q and boost the gain as you sweep the midrange band. When you locate the offending frequency, apply a few dB worth of cut to make it go away.
Overhead mics can be a mixed blessing. Their position and relative distance from the kit makes them great for adding air and ambience, but loud cymbals can overpower the mix. Try adding a bit of 10 kHz to brighten the track, and then backing off the overall level to get the air without too much metal.
The Bottom Line On Bass
Since bass and kick occupy the same frequency range and (hopefully) work together, it’s almost always necessary to use EQ to differentiate them in the mix. As mentioned earlier, it’s best to pick one as the rounder, bottom-y sound and make the other a bit more bright and punchy; which is which will be dictated by the song.
One of the questions I hear most often is whether it’s best to record the bass direct or mic the amp. The answer, as you might expect, is “it depends.” Ideally, many engineers opt to record both the amp and a direct track simultaneously, balancing them in the mix for the best possible tone.
Of course, in today’s project studio world, it’s not always possible to record live at the volume you’d like. If you’re working with a bass track that was recorded direct, chances are it’s a bit flat and nondescript compared with a mic’ed bass amp. The good news is, that flatness will ultimately make EQing the DI track far easier, since there’s less coloration to begin with.
Like the kick drum, boosting the 80-120 Hz range on an electric bass will add roundness and bottom end. To add presence and attack, go for a slightly higher range than with the kick, around 1 kHz. Don’t add too much or you’ll bring out the finger noise as well.
Making Space For Guitars And Keys
Guitars are among the most versatile instruments; that same versatility can make them a real challenge. With electric guitars, if you’re fortunate to have a player who knows their amp and their sound, your best bet is to change as little as possible.
If you’ve got two rhythm guitar parts going, a bit of panning and EQ can help distinguish one from the other. Try a slight boost at around 100 Hz on one to bring up the lower mids (with perhaps a corresponding cut on the other guitar). Experiment with higher frequencies on the second part – boosting different frequencies between about 750 Hz and 10 kHz will each bring out a different type of sparkle. Scooping out a bit of 250 to 500 Hz can help eliminate some harshness and woofiness.
Acoustic guitar is a very different animal. Each has its own unique tone and timbre, and much will depend on the player, the sound of the room, the mics you’ve used and where you’ve placed them. A mic too close to the sound hole will deliver a boomy sound; a slight cut at 100 Hz can help. Close miking can also pick up some boxiness from the wood’s resonance, especially around the midrange. Try dropping a bit of the 300 to 400 Hz range. And of course, bring out the shimmer and strumming sound by boosting the upper ranges, from 750 Hz up to around 10 kHz (watching out again for finger noise).
Acoustic pianos, like their guitar counterparts, are organic instruments subject to a number of unique conditions. Every piano has its own character and tone to begin with, further affected by the room, the mics, mic placement and of course, the player. Few instruments cover as wide a range of frequencies and overtones as the piano, which can be both a blessing and a curse. What you do with regard to EQ depends largely on the song - a dense part with close-clustered chords is probably best treated with subtractive EQ, while a spare, melodic passage might benefit from a bit of boost in the upper mids.
Keyboards are a whole other issue, and could easily be the subject of an entire article alone. Synths cover such a wide range of sounds, it’s impossible to generalize about what will work on any given patch. For the most part, you’re quite literally playing it by ear.
Listen Before You Look
I’ll close with the same point I opened with: take this and all advice as nothing more than suggestions. There are no hard and fast rules except one: use your ears. If it sounds wrong, it probably is. So close your eyes and listen. Adjust your EQ, close your eyes and listen again. Don’t just solo the track, either - listen to your changes in the context of the whole mix.
Especially in today’s DAW-oriented world, we all have a tendency to stare at the screen. But it’s important not to depend on spectrum analyzers and meters instead of listening. Try out these suggestions, but then try something totally different. Innovate - don’t blindly follow. Every song is unique, every instrument and room is different, and so is every artist.
What worked for one person on one recording won’t necessarily be what’s right for you.
Daniel Keller is a musician, engineer and producer. Since 2002 he has been president and CEO of Get It In Writing, a public relations and marketing firm focused on audio and multimedia professionals and their toys. Despite being immersed in professional audio his entire adult life, he still refuses to grow up.
Posted by Keith Clark on 06/26 at 11:47 AM
Tuesday, June 25, 2013
When Hearing Starts To Drift, How To Avoid Becoming “EQ Oblivious”
Wonder if your ears are the same trustworthy instruments they once were?
Ever notice that some shows sound really bright - I mean, the “ouch kind-of-crazy painful” type of bright - and wonder what the engineer is thinking?
You’re a month into a tour, getting off a plane en route to another show. Hmm… Wonder if your ears are the same trustworthy, spring-fresh little helpers they were three weeks ago?
Or maybe - just maybe - the rigors of travel combined with that head cold, eight beers and four hours of sleep last night has dulled the senses a bit, and your mix has drifted away from being a sonically perfect masterpiece.
I refer to the phenomenon of our ears misleading us, and the resulting variation in the tonal balance of the mix, as “drift.” As in “it sounded great early on, but somewhere over the course of the last 30 shows, the mix has drifted in the ear bleed zone.”
There are many methods to equalize sound systems, and a mind-boggling quantity of different systems and venues out there to be equalized. Though there is a somewhat common goal of a smooth, flat sound, there is no standardized way to EQ, and not even a universally accepted sound to go for.
Every show has unique needs, and each engineer has his/her own style and approach to mixing, so finding a common method and result is pretty much out of the question. And even if a standard did exist, the reality is that the opening act or the third band at a “no soundcheck” festival is at the mercy of the whims of whomever EQ’d the system.
To make matters worse, the touring life can wreak havoc on our most important tool, our ears. Even on a good day, well rested and healthy, is the mix going to be as well balanced and as smooth as it was 30 shows, two continents and five plane flights ago?
Getting a grasp on a consistent sonic footprint to present to each new audience is one of the most difficult and overlooked aspects of being a sound engineer. How, among all the variables, does the engineer find that “grounding point” that can be carried from show to show?
And how does one avoid being oblivious to possibly subjecting that 30th audience to a mix that sounds like broken glass and razor blades?
There are a few fairly simple methods that can help engineers avoid drifting away and getting lost in the sonic landscape of misperception. Having reference points is extremely useful for identifying, locking on and preventing a mix from tonally drifting over the course of a tour or even throughout the show.
Our ears are our primary source of information, but they’re also just one of our five basic senses. However, smell is highly unlikely to be useful in all but the most extreme situations, and we don’t have the time to really run around tasting the equipment. So we’re left with sound, sight and touch, along with the ability to compare with the past to help us out.
Some people use their voice, some use a CD, while others use pink noise and some sort of sophisticated test equipment such as (Rational Acoustics) Smaart, (Meyer) SIM or an RTA (real-time analyzer).
From a sound engineer’s perspective, there are several dilemmas and obstacles in the real world. As already noted, our hearing can be inconsistent over time for numerous reasons, and using your voice or a CD is perception based. Therefore, they can typically be inconsistent over time.
So using an RTA or one of the PC-based analyzers can be a consistent non-perception-based method that’s helpful. But note that this approach sometimes just isn’t practical, and I’ve yet to use a measurement system that provides consistently usable results without also needing “touch up” via ear.
Often there isn’t the opportunity to play a test CD, to say nothing of the problem of subjecting 30,000 people to “check one, two” or pink noise for 15 minutes of tuning.
Simply put, what is needed is a trustworthy, repeatable reference point that can be easily carried and utilized anywhere, anytime without affecting the show, a way to get enough information for an optimized mix without putting a single instrument through the system.
Copying Is Easy
Rather than trying to remember the sound you’re looking for, and hoping your ears are honest, there’s an easier, more dependable method. For example, a small system with an accurate tonal balance already dialed in can serve as a good point of comparison.
Simply, “A-B” the two systems and adjust the big system to sound similar to the smaller one. All that’s really needed is an accurate pair of sealed headphones and a CD that sounds similar to the mix to be achieved.
Though most headphones won’t do many favors in the very low frequency region, a decent pair will provide a very usable reference point from 100 Hz or so, on up. Using the headphones as a real-time comparative reference, play the CD through the system while it’s also cued up in the headphones, and then EQ the system to sound like the headphones.
If the chosen CD is relatively well balanced and mixed, and the engineer has done a reasonable job of copying its sound, the result should be a surprisingly good system EQ.
Think of the system EQ as the tool for compensating and bringing the venue/system combination to a “correct” tonal balance. Think of channel EQ as the tool for getting the instrument/microphone combination to the “desired” tonal balance.
If done properly, the mix bus of the console should carry signal that is somewhat close in tonal balance to prerecorded music. Therefore, CDs should sound good through the system with no channel EQ.
An offshoot of this - usually used at festivals where it’s not possible to play a preferred CD - is to simply use whatever house music is being played as the comparative reference in my headphones. The system can be EQ’d on the spot with no disruption to the event.
The beauty of this is that if your ears are dull, both the headphones and system will sound dull, and therefore you should still be able to match them up. It can also make you aware that your ears are indeed dull when you experience an overwhelming desire to crank up the high-frequency EQ.
Having a comparative audible reference is one useful tool in the bag of tricks, but not enough to guarantee being on the right track. Another tool is all about using the eyes.
Whether it’s a conventional RTA or a PC-based analyzer, having an accurate visual reference is extremely helpful in preventing a mix from drifting. What’s needed is an analyzer with at least a 30 dB window and 2 dB (or better) resolution.
To make any analyzer truly useful in achieving show-to-show tonal consistency, there has to be a way to determine and store a desirable curve. With a bit of attention to the analyzer, it should become obvious that the best sounding shows tend to look a certain way. (And be sure to use a very slow release time on the analyzer.)
Typically this “curve” shows up as an angled line gently sloping downward from left to right, at a rate somewhere between 1 dB and 3 dB per octave. For a good starting point, try a straight diagonal line starting at the top of the screen at 20 Hz and ending 20 dB down at 20 kHz. It doesn’t really matter how the curve is saved/stored - some RTAs allow curves to be stored and displayed, others don’t.
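That straight-line starting point can be expressed numerically. The sketch below (a simple calculation, not a feature of any particular analyzer) computes the target level at a few frequencies for a 20 dB drop from 20 Hz to 20 kHz, which works out to roughly 2 dB per octave:

```python
import math

def house_curve_db(f_hz, f_lo=20.0, f_hi=20000.0, total_drop_db=20.0):
    """Target level (relative dB) at frequency f for a straight-line
    'house curve': 0 dB at f_lo, falling linearly on a log-frequency
    axis to -total_drop_db at f_hi."""
    octaves_total = math.log2(f_hi / f_lo)     # ~9.97 octaves, 20 Hz-20 kHz
    slope = total_drop_db / octaves_total      # ~2.0 dB per octave
    return -slope * math.log2(f_hz / f_lo) + 0.0   # +0.0 normalizes -0.0

for f in (20, 100, 1000, 10000, 20000):
    print(f, round(house_curve_db(f), 1))
# → 20 at 0.0, 100 at -4.7, 1000 at -11.3, 10000 at -18.0, 20000 at -20.0
```

Tilting `total_drop_db` up or down moves the slope within the 1 to 3 dB per octave range mentioned above.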
An easy “recall” method I’ve come up with is “ultra-high tech.” Stick a piece of clear tape to the analyzer screen, and use a thin indelible marker (“Sharpie”) to draw your line. Trace a show that really sounded good. Or, as I’ve done when I don’t have my personal Sound Technology RTA 4000 (or even clear tape), peel a long thread from a roll of gaff tape and stick it to the screen.
Keep in mind that I’m not telling you what’s absolutely “right” and/ or “wrong,” or what a curve should look like; but rather, focusing on the tools to consistently recreate the desired sound. When in doubt, cheat. Use the answers from yesterday!
Relatively The Same
A third reference point is the mechanical position of the console knobs. Assuming the system is tuned so that the reference CD sounds correct in the “super accurate” headphones, similar mics are being used from the previous show, and the band gear is somewhat similar, then the console EQ knob positions should be in relatively the same spots regardless of the venue or system type.
This may sound like a stretch, but think about it. There’s no real reason for the console knobs to radically change from day to day, particularly if the system/room combinations are similar. I realized this when doing a live recording of a band I’ve mixed for many years. While in the recording truck, I noticed that the console EQ knobs were nearly identical to the settings I use live.
I then noticed, over the course of many shows, that when my console EQ knobs varied more than slightly from those settings, it was because the system was EQ’d poorly. Further, when I played CDs, they also required EQ to sound balanced.
If you find yourself boosting high frequencies on every channel, then most likely your system EQ is too dull or you’re mixing bright. If 2.5 kHz is cut on more than half of the channels, then most likely there’s too much 2.5 kHz in the system EQ. Use this information of how the knobs are positioned to help refine system EQ. Over time, refine the curve.
If you think you’re getting off course, pop in the reference CD mid show if needed (be sure it’s muted in the PA of course) and listen to it on headphones. Do a comparison to your mix during the show. Even with different music or songs, this can provide a good idea of whether the mix is tonally balanced or drifting.
Finally, run a board tape from the left and right mix, pre system EQ. This will show if the system EQ method is working. It’s a checkpoint - the tapes should sound listenable and tonally balanced when compared to a CD on headphones (and, for that matter, on smaller hi-fi systems).
If not, make the adjustments in the system EQ to compensate at the next show, and remember: a dull tape means the system EQ is too bright, and vice versa.
The key is to use the tools at your disposal to help keep you on track. I’ve seen many a great engineer scrambling for a mix at festivals. I’ve seen tour after tour come through where you can tell how long the sound crew has been on the road by how hard the power amplifiers for the high-frequency two-inch drivers are clipping.
With a bit of thought and self-control, each and every mix can be made to sound a certain way, regardless of how honest our ears might be at a given time. Now, actually mixing the show - that’s a whole different deal!
Dave Rat is the co-founder and owner of Rat Sound, a leading sound reinforcement company based in Southern California.
In The Studio: Recording The Bass Amp
Many times miking a bass amp is completely overlooked
Today everyone is so conditioned to go direct with the bass guitar that miking a bass amp is many times completely overlooked.
That’s too bad because it can bring something to the track that you just can’t get any other way.
Here’s an excerpt from my Audio Recording Basic Training book that provides an exercise for bass amp miking.
Back in the 60s and 70s, the way engineers recorded the electric bass was by miking the bass amp. As direct boxes became more and more available, the trend eventually swung the other way, with most bass recording done direct.
Today it’s very common to record a bass using a combination of both an amp and direct, which provides the best of both worlds. While the bass will sound full and warm with a direct box, the amp can add just enough edge to help the bass punch through a mix.
When using a direct box, be aware that they’re not all created equal in that some will not give you the low fundamental of the bass that you expect when recording this way.
Active DIs do a better job at this than passive, although some passive boxes (like the ones made by Radial) do an excellent job because of the large Jensen transformer used in the circuit.
Depending on the sound that fits the track best, mix the amp track with a DI track. The sound will change substantially depending upon the balance of the DI and miked amplifier.
ALWAYS check the phase relationship between the amp and DI to make sure there’s no cancellation of the low end. Flip the polarity switch to the position that has the most bottom. Also remember that there’s no rule that says that you have to use both tracks, so don’t hesitate to use just a single track if it sounds best in the mix.
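The polarity check can be illustrated with a toy simulation. The signals below are made-up sine waves standing in for real DI and amp tracks; the point is only that summing both ways and comparing levels tells you which polarity keeps the most bottom:

```python
# Sketch of the DI-vs-amp polarity check: sum the tracks both ways and
# keep the polarity that leaves the most low end. Signals are synthetic.
import numpy as np

fs = 48000
t = np.arange(fs) / fs
di = np.sin(2 * np.pi * 60 * t)            # stand-in for the DI track
amp = -0.8 * np.sin(2 * np.pi * 60 * t)    # amp mic arrived inverted

def rms(x):
    return float(np.sqrt(np.mean(x ** 2)))

normal = rms(di + amp)      # polarities clash: the 60 Hz partly cancels
flipped = rms(di - amp)     # flipping the amp track restores the bottom

best = "flipped" if flipped > normal else "normal"
print(best)   # → flipped
```

In a real session the same comparison is done by ear with the polarity switch, listening for the position with the most low end.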
Miking The Bass Amp
A) Listen closely to the amp as the bass player plays. If there are multiple speakers, find the one that sounds the best.
B) Place a large diaphragm dynamic mic like an (AKG) D-112, (Electro-Voice) RE20 or (Shure) Beta 52 a little off-center and a couple of inches away from the cone of the best sounding speaker in the bass cabinet.
C) Move the mic across the cone. Is there a spot where it sounds particularly good? Keep the mic at that spot. Is the sound balanced frequency response-wise? Can you hear any of the room reflections?
D) Move the mic towards the edge of the cone. Is there more low end? Is it more distinct sounding?
E) Move the mic towards the center of the speaker. Is there more low end? Is it more distinct sounding?
F) Move the mic about a foot away from the speaker. Is there more low end? Is it more distinct sounding?
G) Move the mic about 2 feet away from the speaker. Is there more low end? Is it more distinct sounding? Can you hear more of the room? Does it work with the rest of the instruments?
H) Raise the cabinet about a foot off the floor. Is there more low end? Is it more distinct sounding?
I) Place the mic where it gives you the best balance of body and definition, and balance between the direct and ambient room sound.
Bobby Owsinski is an author, producer, music industry veteran and technical consultant who has written numerous books covering all aspects of audio recording. For more information be sure to check out his website and blog.
Pro Production: Key Factors In A Lighting Design Plot
The right amount of luminance, enough contrast, and proper color, projection and effects
Most manufacturers of automated lighting provide photometric data showing the throw distance, the beam, and field width as well as the illuminance in lux and foot-candles (fc).
By referencing the chart below (Figure 1), the designer can quickly gauge whether or not an instrument will provide the target illuminance and field width at a given throw distance.
A chart like this will get you in the ballpark, allowing you to select lighting instruments and rough in the lighting positions, but it doesn’t provide photometric data for every situation.
For example, a Martin MAC 2000 Wash fixture with a medium Fresnel lens and the zoom at median (27 degrees) produces an illuminance of 136 fc (1461 lux) at a throw distance of 16 m or 87 fc (935 lux) at 20 m, according to the manufacturer’s photometric data provided online. But it provides no information about the illuminance between 16 and 20 m.
You can extrapolate these in-between throw distances if you know the right formulas. Some photometric charts, however, also provide the luminous intensity of a fixture in candelas (cd), from which you can calculate the center beam illuminance at any given throw distance.
Figure 1: A photometric chart showing throw distance, beam, and field width as well as illuminance in lux and foot-candles.
For example, in the same photometric chart it says the fixture produces 374,000 cd. To find the illuminance at, say, 18 m, you can use the inverse square law, which says that to find the illuminance you divide the luminous intensity by the square of the throw distance.
You can also find the throw distance to meet your target illuminance if you’re given the luminous intensity.
For example, if the fixture you plan to use for key light produces 250,000 cd and you want a minimum illumination of 1614 lux (150 fc) on stage, then using the inverse square law, you can calculate the maximum throw distance.
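Both directions of the inverse square law calculation can be sketched in a few lines, using the figures quoted above:

```python
import math

def illuminance_lux(intensity_cd, throw_m):
    """Center-beam illuminance from the inverse square law: E = I / d^2."""
    return intensity_cd / throw_m ** 2

def max_throw_m(intensity_cd, target_lux):
    """Largest throw distance that still meets the target illuminance."""
    return math.sqrt(intensity_cd / target_lux)

# The 374,000 cd fixture from the text at an 18 m throw:
print(round(illuminance_lux(374_000, 18)))       # → 1154 (lux)

# The 250,000 cd key light with a 1614 lux (150 fc) target:
print(round(max_throw_m(250_000, 1614), 1))      # → 12.4 (m)
```

So the hypothetical key light can be hung no more than about 12.4 m from the subject and still hit 150 fc.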
From the throw distance you can calculate the trim height using the formula for a right triangle, which is the Pythagorean Theorem (Figure 2).
Figure 2: Right triangle showing sides a, b, and c. The side across from the right angle (a) is called the hypotenuse.
For optimal results we want the key and fill lights to be a maximum of 45 degrees above the horizon. A steeper angle will produce harsh shadows under the eye sockets, nose, and jowls.
For a 45-degree angle of projection, the vertical distance from the subject to the light is the same as the horizontal distance from the subject to the light, as shown in Figure 3. We’ll call this distance X.
Figure 3: Illustration showing elevation, setback, and trim. For a 45-degree projection, the elevation from the target is equal to the setback.
Keep in mind that we have to add the height of the stage and the height to the center of the target to find the elevation distance from the floor to the light. This is the trim height.
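Put together, the 45-degree geometry gives a simple trim-height calculation. The throw, stage height, and target height below are hypothetical numbers for illustration:

```python
import math

def trim_height_m(throw_m, stage_height_m, target_center_m):
    """Trim height for a 45-degree key light. At 45 degrees the setback
    equals the elevation above the target (call it X), and the throw is
    the hypotenuse of the right triangle, so X = throw / sqrt(2). Add the
    stage height and the height to the center of the target to get the
    elevation from the floor."""
    x = throw_m / math.sqrt(2)
    return x + stage_height_m + target_center_m

# Hypothetical: 12.4 m throw, 1 m stage, target centered 1.5 m above stage
print(round(trim_height_m(12.4, 1.0, 1.5), 1))   # → 11.3 (m)
```

The setback from the subject is the same X, which also tells you where the lighting position lands in plan view.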
Once the throw distance has been determined, we can then calculate the beam diameter at the subject given the beam angle in the photometric data. That will show if we have sufficiently covered the acting area in question.
For example, if the beam angle of a fixture is 24 degrees, then use the formula for a right triangle to calculate the beam diameter for a given throw distance. Figure 4 shows the beam and field angles of a fixture. If we draw a line from the fixture to the subject, that’s our throw distance (in feet or meters).
The line also bisects the beam angle and creates a right triangle where the angle closest to the fixture is half of the beam angle, the side opposite that angle is half of the beam width (in feet or meters), the adjacent side is the throw distance, and the remaining side is the hypotenuse.
Figure 4: By drawing a line from the focal point in the fixture to the subject we can create a right triangle where the near angle is half of the beam angle, the opposite side of the angle is half of the beam width, the adjacent side is the throw distance, and the remaining side is the hypotenuse.
Then we can divide the width of the stage to be covered by the width of the beam at the given throw distance to figure out how many fixtures we need to uniformly wash the stage. If the diameter of the field is too narrow then move the lighting position back.
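The beam-diameter and fixture-count arithmetic can be sketched like this (the 12 m throw and 16 m stage width are assumed values for illustration; the 24-degree beam angle is from the example above):

```python
import math

def beam_diameter_m(throw_m, beam_angle_deg):
    """Beam diameter at the subject. The line to the subject bisects the
    beam angle, so half the width is throw * tan(half the beam angle)."""
    return 2 * throw_m * math.tan(math.radians(beam_angle_deg / 2))

def fixtures_needed(stage_width_m, throw_m, beam_angle_deg):
    """Fixtures required for an even wash across the stage width."""
    return math.ceil(stage_width_m / beam_diameter_m(throw_m, beam_angle_deg))

# A 24-degree fixture at a hypothetical 12 m throw:
print(round(beam_diameter_m(12, 24), 2))   # → 5.1 (m)
print(fixtures_needed(16, 12, 24))         # → 4, for a 16 m wide stage
```

Note that dividing stage width by beam diameter assumes the beams are butted edge to edge; in practice designers usually overlap adjacent beams for a smoother wash, which raises the count.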
But remember that the inverse square law says that the illuminance is inversely proportional to the square of the throw distance; a small change in the throw distance results in a much larger change in illuminance.
If the diameter of the field is too wide we can choose to leave it or move the lighting position closer. If the lighting position is left as is, then the field will be wider than planned, which is okay as long as the light doesn’t spill onto areas where it shouldn’t be.
To provide enough illuminance on stage for a live audience, the combination of the key light and fill light can be anywhere from about 100 fc (1076 lux) or less to about 150 fc (1615 lux) or more, depending on the amount of detail you want to reveal.
Ideally the ambient lighting should be as close to a blackout as possible, with the exception of egress lighting and any other lighting required by local code. Low ambient lighting helps to focus attention on the performance and enhances the stage lighting.
If video or film is involved, then the target illuminance is likely to change. It can vary from as little as 80 fc (860 lux) to as much as 300 fc (3228 lux) or more. New cameras are much more sensitive and require less light, but they still produce less grain and greater depth of field with higher light levels.
It’s a good idea to design the lighting system for about 125 percent of the maximum desired illumination to account for lumen maintenance, voltage fluctuations, filters and diffusion, and dirty optics.
Color Temperature & Green/Magenta Balance
Two critical issues to be aware of when using automated lighting for key light, especially with video acquisition, are the variation of color temperature across the fixtures and the color balance between green and magenta.
Discharge lamps lose color temperature as they’re used. An MSR-type lamp will lose about a half a degree of color temperature per hour of operation and an HMI lamp will lose about one degree of color temperature per hour of operation.
After 200 hours of use, an MSR lamp with an initial color temperature of 5600 K will have a color temperature of approximately 5500 K and an HMI lamp with an initial color temperature of 5600 K will have a color temperature of approximately 5400 K.
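Those aging rates make for a trivial but handy calculation (the rates are the rough figures quoted above, not exact data for any particular lamp):

```python
def aged_color_temp_k(initial_k, loss_per_hour_k, hours):
    """Approximate color temperature after a given number of operating
    hours, using the rough loss rates quoted above (~0.5 K/h for MSR,
    ~1 K/h for HMI lamps)."""
    return initial_k - loss_per_hour_k * hours

print(aged_color_temp_k(5600, 0.5, 200))   # MSR → 5500.0
print(aged_color_temp_k(5600, 1.0, 200))   # HMI → 5400.0
```

Running the same calculation across a rig with mixed lamp hours quickly shows how far apart the fixtures have drifted.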
Consequently, the more fixtures there are, the more likely they are to vary in color temperature from fixture to fixture. If there are five or more fixtures, the color temperature could vary quite a bit among them. If you’re renting or hiring fixtures, some rental shops are very good about replacing aging lamps. If you’re doing an important video shoot it’s a good idea to ask for fresh lamps when you’re negotiating rates.
Also, some automated lights exhibit a color balance away from magenta and toward green. It’s particularly problematic with certain lamps and luminaires. Sometimes the light may look white to the naked eye but in reality have more green than magenta energy.
Before a video shoot, the video cameras are balanced by illuminating a white card with the key light and focusing the cameras on the card. The engineer looks at the video signals as they’re displayed on a vectorscope, which is an instrument used to visually display the color balance of a video signal (Figure 5).
Figure 5: A vectorscope displays the white balance of a light as seen by a video camera. (A) White light with too much green and (B) white light balanced correctly.
It looks somewhat like an oscilloscope, except it has a round display and the signal is represented as a single dot. If the light and the cameras are balanced correctly, then the vectorscope will show no bias toward any color, but a balance between them all.
The color temperature of a lamp is a measure of the balance between blue and red, but it doesn’t take into consideration the balance between green and magenta. A lamp can have a high correlated color temperature and still have a high green content.
Sometimes the gas mix, salt additives, reflector design, infrared filter, or a combination of any of these elements produces a shift toward green. And as a lamp ages the green spike can grow and worsen the problem. When that happens, skin tones will look sickly and pale, particularly on camera.
The video engineer can address the problem by balancing the camera away from green, but then the other sources will shift toward magenta. One way to address the problem is to use a green-absorbing gel filter designed to balance between green and magenta, such as a Roscolux Minusgreen or a GamColor Minus Green CineFilter.
The latter is available in 1/8 steps from 1/8 to full Minus Green and “Xtra” Minus Green. These filters can be taped to the exit lens of the fixture so they won’t burn or melt due to the heat of the lamp.
Finishing The Lighting Plot
Once the key, fill, and back lights are laid into the lighting plot, the rest of the set can be lit. Unless there are specific requirements for the lighting, as in theatrical productions where specials accent the composition or light particular objects, the designer is free to add lights to create mood and enhance the aesthetics of the production.
Automated lighting excels at color wash and image or beam projection.
Soft-edge, color-changing automated lights can be used in two basic ways: to light the surfaces of objects like cycs, set pieces, performers, etc. or to create shafts of aerial beams in the air. To light set pieces the fixtures are typically placed on balcony rails, a front of house (FOH) truss, a downstage truss, or, in some instances, a side truss.
In rock and roll shows where aerial beams are often emphasized, they’re typically placed on upstage or mid-stage truss, on truss pods or stands, or on floor stands where they can produce upward-pointing shafts of light.
Most automated color wash luminaires use dichroic color changers with discharge lamps, both of which have some unique characteristics. Dichroic color lends itself to more saturated color than gel filters, and the range of colors from a discharge lamp is somewhat limited compared to an incandescent lamp.
Technically speaking, an automated light with 8-bit CMY color mixing has 16,777,216 (256³) possible color combinations, but the visible differences are limited by the spectral distribution of the lamp and the characteristics of the dichroic color mixing filters.
If the show is being videotaped or if there’s I-mag, then the cameras and displays introduce another factor in the final rendering of the color. In those instances it’s important for the designer to monitor the video display to see how the rendered scene looks on camera.
Image & Beam Projection
The complement to color wash is image and beam projection provided by profile spot fixtures. In general, spot fixtures located more toward the downstage position can be used more effectively for projecting images on upstage set pieces. The longer the throw, the larger the diameter of the projection and the lower the illuminance on the projection surface. (Remember the inverse square law!)
In rock and roll shows, aerial beams are typically a very important part of the design; therefore, many designers place a number of profile spots in upstage and midstage positions as well as on lighting pods, lighting stands, and floor stands, and they use an interference medium like fog or haze. Without interference there are no aerial beams to be seen.
Also, beams that are projected toward the audience look brighter than those projected away from the audience.
Lighting design is very often a compromise between the designer’s vision and the reality of the budget for a production. Given a large enough budget, having profile spot fixtures in a number of different positions provides great flexibility in building scenes.
Spots in FOH positions can paint scenic elements or drops, providing a nice canvas for pattern projection or for creating texture with shape, form, light, and shadow.
Spots in side positions can help sculpt a subject with hard or soft light, bringing form and definition to the subject. Placing spots in the upstage positions allows for aerial beam projection with very interesting cones, tunnels, and other laser-like effects. The same upstage spot fixtures positioned on the floor provide another vantage point from which to create aerial projections.
The more varied and diverse positions you can find, the more interesting scenes you can create.
A final lighting design should cover all the bases for achieving your design goals. It should provide the right amount of illuminance for the project, enough contrast between light and dark to reveal the subject, and enough color, projection, and effects to match the aesthetics of the production. In the final analysis, the success of the lighting design is measured by the response of those who experience it.
Read part 1 of this article here.
Go here to acquire Automated Lighting, 2nd Edition, published by Focal Press. Use the promo code FOC20 during checkout to receive a 20 percent discount.
Richard Cadena is the technical editor for PLASA, an authorized WYSIWYG trainer, and 20-year veteran of the entertainment lighting industry including stints with two of the world’s largest automated lighting manufacturers. He has a background in electrical engineering and electronics, and he is a freelance lighting designer with a portfolio of several major lighting designs and installations and is proficient in WYSIWYG, LD Assistant, and Vectorworks.
(1) Lux is the SI unit of measurement for illuminance and is defined as one lumen per square meter. A foot-candle is the imperial unit of measurement for illuminance and is defined as one lumen per square foot. When using lux, the unit of measure for distance is the meter, and when using foot-candles, the unit of measure for distance is the foot.
Pro Production: Making The Best Of It In Upgrading An Existing Lighting System
What's happening now provides something to build on in the future
Our existing lighting system is pretty bad. When it was installed, money was tight and the church did what churches do: put in the wrong stuff, installed wrong.
We have 72 channels of dimming installed, roughly 45 of which work. And some of the “working” channels won’t hold a full load, so we’ll have lights shut off at random intervals.
When the electricians came out to survey the system, they spent about two hours going through it, only to come back saying, “Uh, yeah. There’s nothing here we can save. It would be best to re-do everything.”
So that’s what we’re going to do.
The system was over a year in the making. Our budget is still tight, so we had to make the most of every dollar. As much as I would like to upgrade our fixtures, switch over to new LED lights, add some movers and generally blow it out, what we can do is re-build our infrastructure.
We’ll be putting in two ETC Sensor racks, a 96-channel and a 48-channel. We’re actually wiring for about 124 circuits and buying 80 channels of dimmers. We’ll use the racks as patch bays of sorts, putting dimmers in the channels we’ll use most often and swapping them around as needed.
Part of the reason for this is power (as well as cost); we don’t technically have enough to feed both racks fully loaded. This sounds like a flaw in the logic, but follow my thinking.
Right now, we’re getting by with about 40 channels of dimming (including 12 for house lights). We’ll be more than doubling our dimmer count, and putting dimmed outlets all over the place, making it easy to put lights where we need them.
Over the next few years, I expect to be buying LED fixtures, which will just need low-amperage power, not dimming. My plan is to slowly convert many of our 80 dimming channels to non-dim, switched power outlets that will supply juice for the LEDs. We’re also wiring the house lights so we can easily convert the 12 incandescent circuits to 2-4 LED circuits.
What we’re not skimping on is networking. Right now, we have 2 DMX universes, both of which drop down to a single distro location. In the new system, we’ll have 15 outlets, a mix of Net3 and DMX.
We’ll have Net3 to DMX touring adapters so we can put DMX wherever we want. As LEDs take over, we can handle 64 DMX universes over the system (that should do…).
We’re also putting in twenty-four 208-volt relayed drops for moving lights. Right now, we have to use circuit breakers to turn the movers on and off when we rent them, which is a drag. Soon, it will be a cue on the Hog.
We’re putting in two 10-button panels on the floor, one at the back and one on stage to call up frequently used scenes without firing up the lighting console.
And we’ll have an LCD touchscreen at lighting world which will do the same, and more. That will enable us to get rid of our old static lighting console that we use just for turning the lights on and off.
That’s a quick look at the new system. It’s a lot of invisible stuff that no one will ever see, but it adds a bunch of functionality as well as reliability (not to mention safety) to our room. And it gives us something to build on, which is always a welcome change!
Note: This article was written a couple of years ago but still contains a wealth of useful information.
Mike Sessler is the Technical Director at Coast Hills Community Church in Aliso Viejo, CA. He has been involved in live production for over 20 years and is the author of the blog, Church Tech Arts. He also hosts a weekly podcast called Church Tech Weekly on the TechArtsNetwork.
Pro Production: Total Environmental Dynamic Range, The Key To Maximum Visual Impact
Achieving killer imagery in the target venue
A key factor to consider when selecting a projection display is understanding the minimum Total Environmental Dynamic Range required to achieve killer imagery in the target venue.
The term Total Environmental Dynamic Range (TEDR) describes the actual contrast ratio achieved in a venue, including the impact of ambient light.
Thus, the TEDR value defines the dynamic range, or contrast, the viewers actually perceive.
You will see that even in venues with a minimal amount of ambient light, the TEDR value will be much lower than the contrast ratio defined in projector specification sheets.
The method of calculating TEDR is relatively simple:
TEDR = Projector Screen Brightness / (Projector Screen Black Level + Ambient Light reflected by the Screen)
The most complicated part of the process is converting the three values in the formula to one standard of measurement. For purposes of this article, we will use Foot Lamberts (FtL). FtL is a measurement that defines the light being reflected by the screen.
Step 1: Projector Screen Brightness
Convert the lumens produced by the projector to Projector Screen Foot Lamberts (PFL) using the following formula:
PFL = (Projector Lumens / Screen Area in Sq.Ft.) x Screen Gain
Step 2: Projector Black Level
Projector Black Level (PBL) must also be defined in terms of Foot Lamberts. The approximate PBL value can be calculated in terms of lumens, by simply dividing the projector lumen spec by the projector’s specified contrast ratio.
As an example, a 1,000 lumen projector with a 1000:1 contrast ratio, in theory, should produce a PBL of 1 lumen. We then use the same formula we used in step one to convert the lumen-based PBL to a FtL based value (PBFL).
PBFL = (PBL in Lumens / Screen Area in Sq. Ft.) x Screen Gain
Step 3: Ambient Light
For new construction, defining the ambient light that will fall on the screen is best done with the help of the lighting designer.
For existing installations, it’s recommended to take a real-world measurement using an accurate incident light meter positioned where the screen will be, held parallel to the screen surface and aimed toward the viewers’ position.
Many luminance meters measure incident light in terms of Lux. If this is the case with your meter, the Lux value will need to be converted to Foot Lamberts.
To convert Lux to Foot lamberts, use the following two formulas:
A) Convert ambient incident Lux to Lumens:
Ambient Incident Lumens = Ambient Incident Lux x Screen Area in Sq. Meters
Ambient Incident Lumens = Ambient Incident Lux x (Screen Area in Sq. Ft. / 10.76)
B) Convert Ambient Incident Lumens to Ambient Incident FtL (AIFL)
AIFL = (Ambient Incident Lumens / Screen Area in Sq.Ft.) x Screen Gain
Step 4: Bringing it All Together
Now that all of our variables are expressed in terms of FtL, we can use the formula to calculate the TEDR that will be achieved:
Total Environmental Dynamic Range = Projector Foot Lamberts / (Projector Black Level in FtL + Ambient Incident Light in FtL)
Or, stated in short form, using our acronyms:
TEDR = PFL/(PBFL + AIFL)
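The four steps can be combined into one small calculation. A sketch in Python (the function name and the example numbers are hypothetical, not from the article):

```python
SQFT_PER_SQM = 10.764  # 1 square meter = 10.764 square feet

def tedr(projector_lumens, contrast_ratio, screen_sqft, screen_gain, ambient_lux):
    """Total Environmental Dynamic Range, following the four steps above.
    All screen-reflected quantities are expressed in foot-lamberts (FtL)."""
    # Step 1: projector screen brightness in FtL
    pfl = (projector_lumens / screen_sqft) * screen_gain
    # Step 2: projector black level, first in lumens, then in FtL
    pbl_lumens = projector_lumens / contrast_ratio
    pbfl = (pbl_lumens / screen_sqft) * screen_gain
    # Step 3: ambient incident lux -> lumens falling on the screen -> FtL
    ambient_lumens = ambient_lux * (screen_sqft / SQFT_PER_SQM)
    aifl = (ambient_lumens / screen_sqft) * screen_gain
    # Step 4: bring it all together
    return pfl / (pbfl + aifl)

# Hypothetical example: 5,000-lumen projector, 2000:1 contrast,
# 100 sq ft screen with gain 1.0, 10 lux of ambient light
ratio = tedr(5000, 2000, 100, 1.0, 10)
```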
Putting Total Environmental Dynamic Range To Work
As an approximate guide, here are some Total Environmental Dynamic Range targets for the application categories listed below:
—Conference Room (PowerPoint, Spreadsheets, some Video or HD): 10-20:1 rear or front-screen
—Worship (Hymn Text, IMAG, some Video): 10-20:1 rear-screen, 20-40:1 front-screen
—Staged events and broadcast applications (CG content, Video and HD): 30-50:1
—Theater or screening room (Video, HD, CG content): at least 500-1000:1
—Home Media and Entertainment (Video, HD, Videogames, Web): at least 15-30:1 rear-screen, 30-50:1 front-screen
Of course, customer preferences and content present additional variables, meaning no simple set of rules will work for every application. However, as you start to consider TEDR in the systems you design, you will define the TEDR values that work best, as well as the projector, screen and lighting configurations that deliver those TEDR values.
It is all about dynamic imagery. The final simple rule: Reduce ambient light as much as possible. If TEDR values are still too low, bring more lumens to the task and consider the use of high gain and/or rear projection screens.
This article provided by Digital Projection.
Thursday, June 20, 2013
A Detailed Guide To Constant-Voltage Audio Systems
Clarifying and defining key power aspects with constant-voltage (or high-impedance) systems
Electric power companies have a good idea that has been applied to audio engineering. When they run power through miles of cable, they minimize resistive power loss by running the power as high voltage and low current.
To do this, they use a step-up transformer at the power station and a step-down transformer at each customer’s location. This reduces power loss due to the I²R heating of the power cables.
The same solution can be applied to audio communications in the form of a constant-voltage system (typically 70 volts in the U.S. and 100V overseas).
Such a system is often used when a single power amplifier drives many loudspeakers through long cable runs (over 50 feet). Some examples of this condition are distributed speaker systems for PA, paging, or low-SPL background music.
The label “constant voltage” has been confusing because the voltage is really not constant in an audio program. A better term might be “high impedance.”
A typical high-impedance system is shown in Figure 1. A transformer at the power amplifier output steps up the voltage to approximately 70 volts at full power.
Each loudspeaker has a step-down transformer that matches the 70-volt line to each loudspeaker’s impedance.
The primaries of all the loudspeaker transformers are paralleled across the transformer secondary on the power amplifier.
Figure 1. A typical high-impedance system using a step-up transformer on the amplifier output.
There are three options at the power-amp end for 70-volt operation:
• an external step-up transformer
• a built-in step-up transformer
• a high-voltage, transformerless output
These options are covered in detail later in this article.
The signal line to the loudspeakers is high voltage, low current, and usually high impedance. Typical line values for a 100-watt amplifier are 70 volts, 1.41 amperes, and 50 ohms.
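Those quoted line values follow directly from Ohm’s-law arithmetic. A sketch (the function name is mine):

```python
def line_values(amp_watts, line_v=70.7):
    """Current and impedance on a constant-voltage line at full sine-wave
    power: I = P / E and Z = E^2 / P."""
    return amp_watts / line_v, line_v ** 2 / amp_watts

i, z = line_values(100)  # 100 W amplifier -> ~1.41 A and ~50 ohms
```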
How did the 70-volt line get its name? The intention was to have a 100-volt peak on the line, which is 70.7 volts rms.
The technically correct value is 70.7 volts rms, but “70-volt” (or “70V”) is the common term. The line carries 70 volts at maximum amplifier output with a sine-wave signal. The actual voltage depends on the power amplifier wattage rating and the step-up ratio of the transformer. The audio program voltage in a 70V system might never reach 70V; conversely, peaks in the program might exceed it.
Other high-voltage systems run at other voltages. Although rare, 200V systems have been used for cable runs exceeding one mile.
ADVANTAGES OF 70V OPERATION
As stated before, a 70V line reduces power loss due to cable heating.
That’s because the loudspeaker cable carries the audio signal as a low current.
Consequently you can use smaller-gauge loudspeaker cable, or very long cable runs, without losing excessive power.
Another advantage of 70V operation is that you can more easily provide the amplifier with a matching load. Suppose you’re connecting hundreds of loudspeakers to a single 8-ohm amplifier output. It can be difficult to wire the loudspeakers in a series-parallel combination having a total impedance of 8 ohms.
Also it’s bad practice to run loudspeakers in series because if one loudspeaker fails, all of the loudspeakers in series are lost. This changes the load impedance seen by the power amplifier.
With a 70V system you can hang hundreds of loudspeakers in parallel on a single amplifier output if you provide a matching load. Details of impedance matching are covered later. In addition, a 70V distributed system is relatively easy to design, and allows flexibility in power settings.
Let’s compare a standard low-impedance system to a constant-voltage system. Imagine that you want to provide PA for a runway at an airshow. A low-impedance system might employ 30 speaker clusters spaced 100 feet apart, each cluster powered by a 1000W amplifier for extra headroom. A high-impedance version of that system might use only one amplifier providing 140V. The cost savings is obvious.
DISADVANTAGES OF 70V OPERATION
One disadvantage of a 70V system is that the transformers add expense. Particularly if you use large transformers for extended low-frequency response, the cost per transformer may run $70 to $200. Low-power paging systems, or those with limited low-frequency response, can use small transformers costing around $4.95 each. Many loudspeakers are sold with 70V transformers included.
Another disadvantage is that transformers can degrade the frequency response and add distortion. In addition, a 70V line may require conduit to meet local building code.
The main component of a 70V system is the loudspeaker transformer.
Its secondary winding has taps at various impedances. You choose the tap that matches the loudspeaker impedance.
For example, if you’re using a 4-ohm loudspeaker, connect it between the 4-ohm tap and common.
The primary winding has taps at several power levels. These power taps indicate how much maximum power the loudspeaker receives. For example, suppose you have a 70V transformer with the primary tapped at 10W and the secondary tapped at 8 ohms. Then a loudspeaker rated at 8 ohms should receive 10W at its voice coil when the primary is connected to a 70V line.
Transformers have insertion loss mainly due to resistance. Precise system calculations should take insertion loss into account. These calculations are covered in the Appendix later in this article.
With this background in mind, let’s proceed to installation practices. Here’s a basic procedure that neglects transformer insertion loss:
1. Do NOT connect the 70V loudspeaker line to the power amplifier yet.
2. Install a transformer at each loudspeaker location, or use loudspeakers with built-in transformers.
3. Connect each loudspeaker to its transformer secondary tap. The tap impedance should equal the loudspeaker impedance.
4. Connect each transformer primary to the 70V line from the power amplifier. Choose the tap that will deliver the desired wattage to that loudspeaker.
5. Add the wattage ratings of all the primary taps. This sum must not exceed the amplifier’s wattage rating. If it does, change to a lower-wattage primary tap of one or more transformers, or use a higher power amplifier.
6. Connect the 70V loudspeaker line to the 70V output of the amplifier.
As an example, suppose you are setting up a 70V system with 8-ohm loudspeakers and a 60W power amp. Connect the 8-ohm secondary taps to each speaker. Suppose the total loudspeaker wattage is 55 watts. This is acceptable because it does not exceed the amplifier power rating of 60 watts.
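Step 5 of the procedure above is just an arithmetic check. A minimal sketch (the tap list is a made-up example):

```python
def check_70v_tap_sum(tap_watts, amp_watts):
    """Sum the primary-tap wattage ratings and confirm the total does not
    exceed the amplifier rating (insertion loss neglected, as above)."""
    total = sum(tap_watts)
    return total, total <= amp_watts

total, ok = check_70v_tap_sum([10, 10, 10, 10, 10, 5], 60)  # 55 W on a 60 W amp
```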
Here’s a more detailed procedure that emphasizes impedance matching:
1. Compute the minimum safe load.
The minimum safe load impedance that can be connected to the amplifier is given by:

Z = E² / P

Z = minimum safe load impedance, in ohms
E = loudspeaker line voltage (25V, 70.7V, 100V, etc.)
P = maximum continuous average power rating of power amplifier, in watts

An example: For an amplifier rated at 100 watts continuous average power, the minimum load impedance that may be connected safely to the 70.7V output is:

Z = (70.7)² / 100 ≈ 50 ohms
2. Choose transformer taps.
Tap the primary at the desired power level for the loudspeaker, and tap the secondary at the impedance of the loudspeaker. The sum of all the power taps for all the loudspeakers should not exceed the power output of the amplifier.
Note: Changing the power tap also changes the load impedance seen by the amplifier. Raising the power tap lowers the load impedance, and vice versa.
Also, changing the power tap changes the SPL of the loudspeaker. Reducing the power tap by half reduces the SPL by 3 dB, which is a just-noticeable difference in speech sound level.
If a particular loudspeaker is too loud or too quiet, you can change its power tap. Just be careful that the total power drain does not exceed the power output of the amplifier.
3. Connect the loudspeakers together.
Connect all the loudspeaker-transformer primaries in parallel. Run a single cable, or redundant cables, back to the power-amplifier transformer secondary. But DON’T CONNECT IT YET.
4. Measure the load impedance.
Before connecting the load, first measure its impedance with an impedance bridge (a simple low-cost unit is adequate). Here’s why you must do this: If the load impedance is too low, the power amplifier will be loaded down and may overheat or distort. It’s a myth that you can connect an unlimited number of loudspeakers to a 70V line.
If the load impedance measures too low, re-tap all of the loudspeakers at the next-lower power tap. This raises the load impedance. Measure again.
Usually, it’s no problem if the load impedance measures higher than the matching value (the calculated minimum safe load impedance). The system will work, but at reduced efficiency. Typically there is more than enough power available, so efficiency is not a problem.
If for some reason the power is limited, then the system should be wired for maximum power transfer. This occurs when the measured load impedance matches the calculated minimum safe load impedance. If the load impedance measures above this value, you can re-tap all the loudspeakers at the next-higher power tap and measure again. This tap change lowers the load impedance.
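The measure-and-retap logic above can be sketched as a small helper (hypothetical names; tap choices are still up to the installer):

```python
def check_load(measured_z, line_v, amp_watts):
    """Compare a measured constant-voltage-line load impedance against the
    minimum safe value (Z = E^2 / P) and suggest a retap direction."""
    z_min = line_v ** 2 / amp_watts
    if measured_z < z_min:
        return z_min, "too low: re-tap all loudspeakers one power tap lower"
    return z_min, "safe: at or above the minimum load impedance"

z_min, advice = check_load(40, 70.7, 100)  # 40 ohms measured vs ~50 ohms minimum
```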
Many people don’t realize that a transformer labeled for use with a specific voltage will work just as well at other voltages. See the constant voltage calculator here. It determines the power delivered from a transformer tap when driven with other than the rated voltage.
Since a 70V line is relatively high-impedance, it is more sensitive to partial shorts than a low-impedance line. Consequently, you may want to avoid running 70V lines in underground conduit which may leak water.
Use high-quality transformers with low insertion loss. Otherwise, the power loss in the transformer itself may negate the value of the 70V system.
Avoid driving small transformers past their nominal input voltage rating. Otherwise, they will saturate, draw more than the indicated power (possibly overload the amplifier) and will distort the signal.
You may want to insert a high-pass filter ahead of the power amplifier to prevent strong low-frequency transients which can cause core saturation.
Crown CTs amplifiers include a high-pass filter that can be set to 70 Hz, 35 Hz, or bypass. Crown CH amplifiers insert a 70 Hz high-pass filter when placed in high-impedance mode.
As stated earlier, there are three power-amplifier options for 70V operation: The amplifier might have:
• an external step-up transformer
• a built-in step-up transformer
• a high-voltage, transformerless output
Let’s consider each option.
Amplifier with external transformer
This system is shown in Figure 1 (on page 1). If you use an external transformer, select one recommended or supplied by the amplifier manufacturer.
If you have a conventional amplifier with low-impedance outputs only, and you want 70V or 100V operation, Crown has the needed accessories. The TP-170V is a panel with four built-in autoformers that convert four low-impedance outputs to high impedance. The T-170V is a single autoformer for the same purpose.
Choose a transformer with a power rating equal to or exceeding the wattage of the power amplifier. The turns ratio should be adequate to provide 70.7V at the secondary when full sine-wave power is applied to the primary. Use the following formula for a 70.7V line:

T = 70.7 / SQR(P x Z)

T = turns ratio
70.7 = voltage of constant-voltage line
P = amplifier power output in watts
Z = amplifier rated impedance
SQR means square root
Better yet, measure the amplifier’s output voltage at full power into its rated load impedance, and use the formula:

T = 70.7 / E

T = turns ratio
70.7 = voltage of constant-voltage line
E = measured output voltage at full power into the rated impedance
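Both versions of the turns-ratio calculation reduce to dividing 70.7 by the amplifier’s full-power output voltage, since that voltage equals the square root of P times Z into the rated load. A sketch (the function names are mine):

```python
import math

def turns_ratio_from_rating(power_w, rated_z, line_v=70.7):
    """Turns ratio from the amplifier's power rating and rated impedance:
    T = line_v / sqrt(P x Z)."""
    return line_v / math.sqrt(power_w * rated_z)

def turns_ratio_from_measurement(measured_v, line_v=70.7):
    """Preferred method: turns ratio from the measured full-power output
    voltage into the rated load: T = line_v / E."""
    return line_v / measured_v

t = turns_ratio_from_rating(100, 8)  # 100 W into 8 ohms -> E ~= 28.3 V, T ~= 2.5
```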
Amplifier with built-in transformer
If the transformer is already built into the power amplifier, simply look for the output terminal labeled “70V,” “25V,” “100V,” or “high impedance.”
Amplifier with transformerless, high-voltage output
Figure 2 shows how a power amplifier with a high output voltage can power a distributed system without a step-up transformer.
Figure 2. A constant-voltage system using a high-voltage power amplifier.
Many high-power amplifiers can drive 70V lines directly without an output transformer. For example, Crown CH amplifiers have a built-in autotransformer (except the CH 4). CTs amplifiers can provide direct constant-voltage (70V/100V/140V/200V) or low-impedance (2/4/8 ohm) operation.
In Dual Mode, the CTs 600/1200 can power 25/50/70V lines; the CTs 2000/3000 can power 25/50/70/100V lines. In Bridge-Mono mode, the CTs 600/1200 can power 140V lines; the CTs 2000/3000 can power 140V and 200V lines.
With CTs Series amps, one channel can drive low-impedance loudspeakers, while another channel drives loudspeakers with 70V transformers. This makes it easy to set up a system with large, low-Z loudspeakers for local coverage and distributed 70V loudspeakers for distant rooms—all with a single amplifier.
The Crown CTs 2000 is adept at providing constant power levels into various loads. In Dual mode, it delivers 1000 watts into 2/4/8 ohms and into a 70V line. In Bridge-Mono mode, it delivers 2000 watts into 4, 8, or 16 ohms, 2000 watts into a 140V line, and 2000 watts into a 200V line.
The Crown Commercial Audio series of amplifiers and mixer-amps provides both low-Z and constant-voltage operation. For example, the 180MA and 280MA mixer-amps offer 4-ohm, 70V, and 100V outputs.
Pros and cons of transformerless systems
The high-voltage, transformerless approach eliminates the drawbacks of amplifier transformers:
• limited bandwidth
• core saturation at low frequencies.
On the other hand, transformers are useful to prevent ground loops, ultrasonic oscillations and RFI. Some local ordinances require transformer-isolated systems.
Let’s look at the core-saturation problem in more detail. Sound systems can generate unwanted low frequencies, due to, say, a dropped microphone or a phantom-powered mic pulled out of its connector.
Low frequencies at high power tend to saturate the core of a transformer. The less the amount of iron in the transformer, the more likely it is to saturate.
Saturation reduces the impedance of the transformer, which in turn may cause the amplifier to go into current limiting. When this occurs, negative voltage spikes are generated in the transformer that travel back to the amplifier—a phenomenon called flyback. The spikes cause a raspy, distorted sound. In addition, the extreme low-impedance load might cause the power amplifier to fail.
Some Crown amplifiers are designed with high-current capability to tolerate these low-frequency stresses.
Production amplifiers are given a “torture test.” Each amplifier must deliver a 15-Hz signal at full power into a saturated power transformer for 1 second without developing a hernia!
Many transformers are reactive, so their impedance varies with frequency. Some 8-ohm transformers measure as low as 1 ohm at low frequencies. That’s another reason for specifying an amplifier with high current capability.
Using a high-voltage system greatly simplifies the installation of multiple-loudspeaker PA systems. It also minimizes power loss in the loudspeaker cables. If you take care that your load does not exceed the power and impedance limits of your power amplifier, you’ll be rewarded with a safe, efficient system.
APPENDIX: HISTORY OF CONSTANT-VOLTAGE SYSTEMS
In early industrial sound systems, multiple loudspeakers were carefully configured to provide a matching impedance load to the amplifier. But as these systems grew in size, several problems arose: how to connect multiple loudspeakers to the same amplifier without loading it down, how to individually control the sound power level fed to those loudspeakers, and how to overcome the power loss associated with the typically long lines that ran between the power amp and loudspeakers.
By the late 1920s and early 1930s, the “step-up, step-down” idea had been applied to loudspeaker lines in what became known as “constant voltage” distributed systems (Radio Physics Course 2nd Ed., Radio Technical Publishing Co., N.Y., 1931).
Various voltages have been tried such as 25, 35, 50, 70, 100, 140, and 200 volts, but the 70V system has become the most widespread.
After World War II, we find constant-voltage systems depicted in such reference works as Radio Engineering 3rd Ed. (McGraw-Hill, N.Y., 1947). By the end of that decade, several standards had evolved to regulate 70V specifications for amplifiers and transformers (Radio Manufacturers Association, SE-101-A and SE-106, both from July 1949). By the 1950s, the use of 70V systems was very well established, as evidenced by Radiotron Designer’s Handbook 4th Ed. (RCA, N.J., 1953) and Radio Engineering Handbook 5th Ed. (McGraw-Hill, N.Y., 1959).
As component design improved, 70V systems began to achieve high-fidelity status, but there were two weak links in the chain: the step-up and step-down transformers. Good broadband transformers that could resist core saturation and distortion were expensive.
Half of this problem was solved in 1967 when Crown International introduced the DC-300. It was most likely the first high-powered, low-distortion, solid-state power amplifier capable of directly driving a 70V line without a step-up transformer. And in June 1987, the Macro-Tech 2400 was introduced with the capability of directly driving a 100V line. Thus, today only the loudspeaker needs a transformer to step down the voltage.
APPENDIX: TRANSFORMER INSERTION LOSS
Transformers have insertion loss (power loss due mainly to resistance). This loss should be included in system calculations for precision.
Converted to a power ratio, insertion loss can be expressed as
PR = 10^(L/10)
PR = power ratio
L = insertion loss in dB (always a positive number).
Some transformer manufacturers compensate for insertion loss by adding extra windings. In that case, the power delivered to the loudspeaker is the rated value of the tap. The primary draws the rated power times the power ratio of the insertion loss.
In this case, you can calculate the primary impedance as follows:
Pt = Ps + L
Pt = total power in dBm
Ps = power to the loudspeaker in dBm
L = insertion loss in dB
Pt = Ps * L
Pt = total power in watts
Ps = power to loudspeaker in watts
L = insertion loss (as a ratio).
Then the primary impedance is calculated as follows:
Z = (70.7)^2/Pt = 5000/(Ps * 10^(L/10))
Z = primary impedance in ohms
Pt = total power in watts
Ps = power to loudspeaker in watts
L = insertion loss in dB.
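The compensated-transformer formula above can be sketched as follows (a minimal Python example; the function name is illustrative):

```python
def primary_impedance_compensated(ps_watts, insertion_loss_db):
    """Primary impedance of a compensated 70.7 V transformer tap.

    The tap delivers its rated power to the loudspeaker, so the primary
    draws the rated power times the insertion-loss power ratio:
        Z = 70.7^2 / (Ps * 10^(L/10))
    """
    power_ratio = 10 ** (insertion_loss_db / 10)
    total_power = ps_watts * power_ratio   # Pt, watts drawn from the line
    return 70.7 ** 2 / total_power

# A 10 W tap with 0.5 dB insertion loss:
z = primary_impedance_compensated(10, 0.5)
print(round(z, 1))  # 445.5
```

Note the impedance comes out a bit lower than the 500 ohms a lossless 10 W tap would present, because the primary draws slightly more than 10 W.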
Other transformer manufacturers do not compensate for insertion loss. In this case, the primary impedance matches its rating. However, the power delivered to the loudspeaker is less than the power applied, due to the insertion loss.
Ps = Ptr/L
Ps = power to loudspeaker in watts
Ptr = power drawn by transformer in watts
L = insertion loss (as a ratio)
To determine whether a transformer is compensated, measure the power (E2/Z) delivered to the loudspeaker when connected to 70.7 volts. If it is less than the rated power, the transformer is not compensated for insertion loss.
When making loudspeaker SPL calculations based on sensitivity ratings, subtract the insertion loss in dB from the loudspeaker sensitivity rating (if the transformer is not compensated for insertion loss). In transformers that compensate for insertion loss, the loudspeaker receives the power indicated on the tap, but each transformer draws a little more power from the line than the tap indicates. If you sum tap ratings up to the full amplifier power, the resulting line impedance will be too low.
With non-compensated transformers, the labeled power is not the power received, so the loudspeaker SPL will be lower than calculated. The impedance will read correctly, but the acoustic output will be lower than expected.
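For the non-compensated case, the delivered power and the SPL correction follow directly from the formulas above (Python sketch; function names are illustrative):

```python
def delivered_power_uncompensated(tap_rating_watts, insertion_loss_db):
    """Power actually reaching the loudspeaker through an uncompensated transformer.

    The primary impedance matches the tap rating, but the loudspeaker
    receives less power: Ps = Ptap / 10^(L/10).
    """
    return tap_rating_watts / 10 ** (insertion_loss_db / 10)

def spl_correction(sensitivity_db, insertion_loss_db):
    """Subtract the insertion loss from the sensitivity rating (uncompensated case)."""
    return sensitivity_db - insertion_loss_db

# A 10 W tap with 1 dB insertion loss, on a loudspeaker rated 96 dB SPL (1 W / 1 m):
ps = delivered_power_uncompensated(10, 1.0)
spl = spl_correction(96, 1.0)
print(round(ps, 2), spl)  # 7.94 95.0
```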
APPENDIX: LINE LOSS
See the line loss calculator here.
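In lieu of an external calculator, line loss can be estimated from the cable's round-trip resistance in series with the load (a hedged sketch; the per-1000-ft resistance figures are approximate published values for copper wire, not from this article):

```python
import math

# Approximate copper resistance, ohms per 1000 ft (one conductor):
OHMS_PER_1000FT = {12: 1.59, 14: 2.53, 16: 4.02, 18: 6.39}

def line_loss_db(length_ft, awg, load_impedance_ohms):
    """Loss from cable resistance in series with the loudspeaker line.

    Round-trip wire resistance forms a voltage divider with the load:
        loss = 20 * log10((R_load + R_wire) / R_load)
    """
    r_wire = 2 * length_ft / 1000 * OHMS_PER_1000FT[awg]  # out and back
    return 20 * math.log10((load_impedance_ohms + r_wire) / load_impedance_ohms)

# 500 ft of 16 AWG into a 50-ohm 70V line vs. an 8-ohm low-impedance load:
loss_hi = line_loss_db(500, 16, 50)
loss_lo = line_loss_db(500, 16, 8)
print(round(loss_hi, 2), round(loss_lo, 2))  # 0.67 3.54
```

The comparison shows why high-voltage distribution minimizes cable loss: the same cable run costs a fraction of a dB at 70V line impedances but several dB at 8 ohms.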
Daniels, Drew. Notes on 70-Volt and Distributed System Presentation, db, March/April 1988.
Davis, Don. Sound System Engineering, 2nd Ed., Indianapolis, Howard W. Sams Co., 1987, pp. 85-87, 402-405.
This article provided by Crown Audio.
Church Sound: The Advantages Of Active Loudspeakers
Do they offer a lower entry cost for a system?
For years, when we thought of active loudspeakers, often the image was of studio monitors. However, now when you pick up a catalog, it’s obvious that nearly every manufacturer has an active loudspeaker line.
I often hear people ask, “What’s so special about active loudspeakers? Why would you want to put all of your eggs in one basket? What are the advantages? Aren’t they overpriced?”
Read on, let’s find out!
Active Or Powered
Active loudspeakers are also known as powered loudspeakers.
I refer to them as “active” because there’s much more happening inside the box besides amplification. An active loudspeaker includes the enclosure, drivers, electronic crossover, compressor/limiters, delay, equalization, amplifiers, and increasingly, mini mixers and very flexible input/output functionality.
The truth is that powered loudspeakers should sound better than conventional loudspeaker designs. All of the crossover points, equalization, time alignment, compression, limiting and amplification matching are fine-tuned to meet the manufacturer’s intended sound.
The key here is the intended sound quality! We all feel that we can do a better job tweaking the loudspeaker than the manufacturer, right? What most people don’t understand is where the break-point is.
Today, processors give loudspeaker manufacturers more control over crossover points and equalization. Proper gain or power (wattage) matching is one of the most important elements of making a loudspeaker sound good and ensuring the longevity of the components.
I come from the old audio school of thought that there is no such thing as too much “available” power. There is, however, such a thing as wasting money on power. You don’t need 5 kW power amplifiers for the high frequencies.
Don’t forget, though, that under-powering any loudspeaker can be harmful as well. The key to success is controlling or harnessing amp power. Quality active loudspeakers match amplifier power (wattage) to component need (i.e., power handling). The manufacturer has already done the math for you.
Bold, But True
Powered loudspeakers, as a total sound system design, can cost less than equivalent conventional component PA systems.
For this bold statement to be true, one must consider everything that makes up a PA such as crossovers, equalizers, compressor/limiters, loudspeaker processors, amplifiers, rack space and cabling.
The material cost of an active loudspeaker is comparable to that of a passive-type box, since active and passive designs alike require crossover networks, equalization, compressor/limiting, time alignment and power.
With powered loudspeakers there is much less gear to install and store; don’t underestimate this reduction in required space. Most importantly, once your loudspeaker system goes “active,” the total system cost is reduced.
Let’s break this down into fundamentals, and see how this is achieved.
The audio phrase “front-end” refers to the first part of the signal path. In this case, it is the input. Depending on the loudspeaker application, this input will be at line level, mic level, or both.
A good front-end design is essential to prevent those nasty RF signals from bleeding into the audio signal. It’s pretty embarrassing to be sitting in church and hear a local trucker on his CB radio bleed through the sound system during the sermon, saying “Some smoky bear is on his a—-.” Not good. So why not use technology that prevents this issue?
For many small jobs, an obvious cost savings comes when no mixer is required. Still, every advantage comes at a price. As such, from a design and material cost viewpoint, quality front-end design is one of the few areas where active loudspeakers can add direct cost.
The back end of the electronics in an active loudspeaker is where processing, such as active crossovers, EQ points, compressor/limiters and delay, takes place. These multiple functions are why considerable savings can be realized via internal processing compared to external digital loudspeaker processing.
With an active loudspeaker, electrical parameters are pre-determined by the manufacturer. This is not all digital processing wizardry either. Precise filtering can also be implemented using low-cost analog components. In fact, it’s still cheaper to use analog components compared to operating in the digital domain.
As soon as there is a need for external processing adjustments, this may no longer be true. However, if tweaking is ever allowed, the main purpose of an out-of-the-box, acoustically aligned loudspeaker would be effectively eliminated.
Eventually, the digital domain will become more cost effective over analog, especially when audio signal remains all-digital from the mixer to the loudspeaker front-end. The high cost of digital is the conversion from analog to digital and from digital back to analog.
The essential difference between stand-alone amplifiers and active loudspeakers is the required wattage the power supply has to deliver.
Traditional component amplifiers must be designed to handle various external loads. This requirement inevitably causes component amplifiers to be overbuilt.
With active loudspeakers, the load is predetermined, as is the maximum current draw. With the current requirement fixed, you can reduce the requirements of the power supply, thus reducing the design cost.
As soon as the external elements faced by stand-alone amplifiers are removed, a designer needs only to implement the exact amount of circuitry required. Maximum current requirements are then fixed and cannot be altered from an outside source.
Once you know this, the need for short circuit protection, additional output transistors, larger pre-drivers and massive heatsinks is practically eliminated.
Another advantage active loudspeakers have over stand-alone amplifiers is weight loss (i.e., less metal). The elimination of large heavy heatsinks and power supply transformer seriously reduces overall system weight.
The need for an expensive, heavy rack mount chassis is also eliminated, and should yield further cost reductions, right? One could only hope!
I’m certainly not trying to say that active loudspeakers are the solution to every audio application, as there will always remain a need for traditional PA systems. My point is that anyone designing a new system should consider active loudspeakers as an option, especially if the system requires occasional portability.
I suspect you’ll find extra cash in your budget and fewer headaches down the road if you decide to go with active loudspeakers.
Jeff Kuells is an audio engineer and audio manufacturing consultant and was previously director of engineering for a major amplifier manufacturer.
In The Studio: DIY Building A Simple But Useful Diffusor From Salvaged Wood
Putting used (and free) boards to effective use
This week one of my neighbors left his unwanted Ikea bed frame in the alley. Among the parts of the bed was a SULTAN LADE slatted base.
In other words: twenty 3/4-inch pine boards for free. Keep an eye out for these because they can be used for a ton of simple DIY projects.
With these, I decided to make some super simple diffusors to cover up the bare wall around the closet at the back of my studio. The goal was to use the least amount of materials, hardware and effort. This design accomplished that and I didn’t even need to use a saw.
Each diffusor is made of five boards in a wide V shape. I used extra boards to get the spacing right, then held it firm with a pair of C-clamps while hammering. The clamps were a huge help to prevent the boards from shifting around.
I had just enough nails of the right length to build two diffusors; I would have built four if I had more nails. These aren’t very heavy, so for now I have them mounted with a single drywall screw and picture hanger.
I’m sure an expert will disagree with the design as an effective diffusor. QRDs these are not. However, just holding it to the wall I could hear it was doing something far better than a bare wall.
Unpainted soft wood like pine is porous and I could hear it softening the highs a little. Not sure if it scatters the sound at all but surely it is doing something more than the drywall was. QRDs are complicated, heavy and extremely labor intensive to DIY.
There are two downsides to building with free/salvaged wood like this:
1) Needing to remove staples, screws or nails before you can build.
2) Sometimes the wood is warped, which is hard to fix.
These don’t sit as flush on the wall as I’d like because of some warping.
I’m undecided whether I will leave these natural or stain them. If you’re looking for a simple wood stain, vinegar and steel wool left in a jar for a few days will give you a nice grey aged fence/barn wood look. Toss coffee grounds in the jar too and you can get a pretty dark, almost chocolate brown stain.
Teas, cocoa or spices can give you different colors. Steep longer and apply repeatedly for darker color. Again, super simple and practically free, but also it doesn’t stink up your house for days with toxic fumes.
I have some more ideas for diffusors which I will explore at a later date. One idea is to use the curved SULTAN LUROY bed slats, staggering them symmetrically at a few different heights and depths. It would probably look really nice and modern in a live room, especially behind a drum kit.
Jon Tidey is a producer/engineer who runs his own studio, EPIC Sounds, and enjoys writing about audio on his blog AudioGeekZine.com.
Posted by Keith Clark on 06/20 at 04:08 PM
(Almost) Set & Forget: Dedicated Digital Recorders For Live Recording
If you want to record with the least amount of hassle, a dedicated recorder is definitely a great way to go
Did the client request a big recording rig, but can’t afford the cost? There is another way.
While the advantages of recording with a full Pro Tools, Nuendo or Sonar digital audio workstation are many, these rigs are somewhat complicated and time-consuming in the fast-paced lives of most live sound engineers.
The DAW consists of at least a laptop and an external audio interface, but most really big rigs require the power of a desktop computer that needs a keyboard, mouse and monitor as well. Let’s face it, many engineers just want to push faders and aren’t that keen on being a computer tech during the show.
That’s where dedicated multitrack recorders come in. The beauty of this approach is that it’s pretty much hassle free. Setup is easy because all monitoring and location information can be found right on the box instead of a computer monitor.
Plus, there’s no external interface (although you may have a breakout cable) and they’re very easy to use - just arm the tracks, hit record and you’re on your way.
Many of the most popular recorders currently available have at least 24 tracks and can be expanded as needed. Let’s take a look at what’s on the market in terms of large-format recorders.
The Roland Systems Group R-1000 is a dedicated, stand-alone 48-track recorder and player designed to work with the company’s popular V-Mixing Systems in any live event or production. The R-1000 can also be used with any digital console that has MADI output capabilities by using the Roland S-MADI REAC MADI Bridge.
It records up to 48 tracks of 24-bit audio in BWF format, and a removable hard disk drive ensures smooth integration with DAWs. Approximately 20 hours of recording (44.1/48 kHz) are provided using a 500 GB HDD. Channels can be set to “pass thru” live from the digital snake inputs, record the live input, or play back from the recorder.
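The quoted 20-hour figure can be sanity-checked from the raw data rate (a simple back-of-the-envelope sketch; the function name is illustrative):

```python
def recording_hours(disk_bytes, tracks, sample_rate_hz, bits):
    """Approximate recording time for uncompressed multitrack audio."""
    bytes_per_second = tracks * sample_rate_hz * (bits // 8)
    return disk_bytes / bytes_per_second / 3600

# 48 tracks of 24-bit / 48 kHz audio on a 500 GB drive:
hours = recording_hours(500e9, 48, 48000, 24)
print(round(hours, 1))  # 20.1
```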
The R-1000 also plays up to 48 tracks of 24-bit audio via REAC, and data can be loaded from external devices. A marker function enables playback at any designated point.
It includes an analog monitor output and a headphone output. The versatile feature set includes video sync, timecode, GPI and RS-232C. USB ports allow backing up data and connecting a PC for further software control. U.S. MSRP is $3,495.
The JoCo BBR-1 Blackbox provides 24 tracks at up to 96 kHz/24-bit resolution. It only takes up a single rack space and it’s possible to chain Blackboxes together and control them from one unit in really large recording situations.
The Blackbox doesn’t have any onboard storage so you have to supply an external hard drive, and that’s where it gets interesting. Until now, almost any kind of simultaneous recording of 8 tracks or more required a Firewire interface, but JoCo has figured out how to record 24 tracks at the same time on a drive using just a standard USB2 interface. This means that you can use a relatively inexpensive drive (well under $100) that can be directly connected to your DAW for editing afterwards.
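A rough data-rate check shows why USB 2.0 suffices here (a sketch; USB 2.0's practical sustained throughput of roughly 30-35 MB/s is a commonly cited figure, not from the article):

```python
def multitrack_data_rate_mb_s(tracks, sample_rate_hz, bits):
    """Sustained write rate in megabytes per second for uncompressed recording."""
    return tracks * sample_rate_hz * (bits // 8) / 1e6

# 24 tracks at 96 kHz / 24-bit:
rate = multitrack_data_rate_mb_s(24, 96000, 24)
print(round(rate, 1))  # 6.9
```

At under 7 MB/s sustained, the challenge is less raw bandwidth than guaranteeing that no buffer ever starves, which is presumably where JoCo's engineering effort went.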
The unit was made with live recording in mind so it’s extremely simple to operate. It also has zero latency, with inputs routed directly to the outputs during record for safety against power failure. The BBR-1 requires three breakout cables (supplied) and has a street price of around $2,500.
The Alesis ADAT HD-24 has been around for a while now, but it’s still a solid performer. It has 24 tracks of up to 96 kHz/24-bit resolution (the track count goes down to 12 at 96k though), and multiple units can be synchronized to create a larger system.
The HD-24 comes with an onboard 40 Gigabyte hard drive and has two hot-swappable drive bays for recording and backup. It also has both analog and digital connectors on the back of the box so there’s no need for breakout cables.
The only downside to this recorder is that it records in a proprietary file format, so the transfer to a workstation can be tricky and isn’t as plug and play as some of the other units available. MSRP is around $1,600.
The Tascam X-48 is a 48-track recorder that provides full 96kHz/24-bit resolution across all tracks.
It can be used as a stand-alone unit or, with the addition of a keyboard and monitor, as a full-on workstation, since it has full editing capabilities and a built-in 60-channel automated mixer.
What’s more, the X-48’s files are standard Broadcast WAVE, so they’re completely compatible with all of the latest DAWs.
The unit has a built-in 80 Gig hard drive and Firewire connections for external hard drive backup and delivery.
It’s capable of either analog or digital I/O (or both) depending upon the interface cards selected.
The X-48 also has a street price of about $2,500.
Finally, the Fostex D2424LV MKII offers 24 tracks of 96 kHz/24-bit recording with full balanced analog and digital I/O.
One of the more interesting features of the D2424LV is that it has a slot for a compact flash (CF) card for file transfer and backup. It won’t be long before you see every recorder use solid state memory instead of hard drives.
The D2424LV has twin drive bays (the CF interface is built into bay 2), and records with a standard WAVE file format for compatibility with all DAWs. One of the best features for live recording work is that the front panel acts as a detachable remote that can be located up to 30 feet from the main unit, so it can reside near the console for easy access.
The D2424 MKII has a street price of around $1,500.
If the goal is a professional-quality recording, I recommend staying away from some of the more “consumer-type” products with built-in mixers.
On the other hand, there are some high-quality, very professional 8-track digital recorders available like the Sound Devices 788T (a standard for television and film), and Zaxcom Deva (another broadcast standard) that are very light (they’re made for field use) and sound great.
These are often also more expensive than the units mentioned above and have fewer tracks, but can be rented pretty easily. In addition, most audio rental companies also will usually have an X-48 or HD-24, and more and more are now offering the Blackbox as well.
If you want to record with the least amount of hassle, a dedicated recorder is definitely a great way to go, and if you just need a rental, it can be a very economical alternative to just about anything else. It can mean the difference between concentrating on your mix or the recording.
Bobby Owsinski is a veteran audio professional and the author of several books about live and recorded sound.
Wednesday, June 19, 2013
Gone But Not Forgotten - Older Mic Models That Still Do The Job
Somewhere between the esoteric mega-buck audio salon and the trash heap...
There are advantages to getting older. Years of learning something new at every gig add up to that thing we call experience.
You remember how well the first Soundcraft console you ever used responded to your touch. You recall, despite its simple interface and lack of things to tweak, how great the reverb of a Lexicon PCM-60 sounded.
Another advantage to getting older is that the gear you lusted after as a youngster becomes affordable. The downside of this can be making a purchase decision based on the romance of nostalgia.
Certain pieces of audio gear remain popular long after being discontinued, transforming into that romantic term “vintage.”
Depending upon how well it works and how rare it is, some gear goes up in value and actually can sell for more used than when it was new.
Sometimes the popularity of something on the used market is so great it catches the attention of manufacturers who either reissue the same model or introduce a new model that “sounds just like the old one.”
Witness various flavors of Neumann microphones and the Universal Audio 1176LN.
And a lot of older audio gear is still viable in today’s sound reinforcement world. Somewhere between the esoteric mega-buck audio salon and the trash heap, you can find things that, although decades old, still do the job.
Let’s start at the head of the signal chain by looking at some favorite older vintage microphones.
Sennheiser MD-409
Resurrected a while back as the e609, then re-resurrected as the e609 Silver at an even lower price, the MD-409 is still much in demand.
Incredibly versatile, it’s used for vocals, drums, horns, and is also a mic of choice for electric guitar cabinets.
The 409 commands good money on the used market, with prices over $300 for a single unit in good condition not uncommon.
Shure SM59
A real “sleeper” you can get for a song! Ruler-flat frequency response from 100 Hz to 10 kHz means this mic has no built-in “hype.”
This feature, along with relatively low output, probably contributed to its eventual demise, since it was marketed primarily as a vocal mic.
But these same features make it great for horns, guitar cabinets, and other places you might not want a SM57 “presence peak”.
Not terribly common, but as of a few years ago at least, they could be found for under $100.
AKG D224
A mic with a very interesting technology, it has a pair of coaxially mounted diaphragms connected together via an electronic combining network.
Since AKG was able to optimize the diaphragms for their intended frequency ranges, the mic was said to “bring the sound of a condenser microphone to a dynamic”.
This design addressed the common problem of phantom power being unavailable in the field.
Electro-Voice N/D308
Part of the original N/DYM line that debuted about 20 years ago, the N/D308 cardioid is a great workhorse utility mic.
Its wider pattern is just the thing for splitting a pair of rack toms. The flat front of the yoke-mounted “tea egg” makes for easy visual identification.
Stedman N-90
I’ve been told these were “a side-address EV RE20.” When I contacted Stedman, they told me, “The capsule was supplied by EV, and built to our specifications.”
Not employing the EV “Variable-D” technology found in the RE20, the N-90 will have some good ol’ proximity effect.
A large-diaphragm dynamic, it’s great for low brass like baritone sax and trombone, or the low rotor of a Leslie if you’re lucky enough to get to do that sort of thing.
Get an N-90 and try it on snare bottom! (And thank me later!!)
AKG D12E
A large-diaphragm dynamic designed for vocals that found its way inside kick drums and in front of brass.
Said to be the predecessor of the D 112, it doesn’t have the top end “click” of the egg-shaped AKG.
However, the D12E’s low end is, as some might say, “like buttah!” Useful in the popular two-mic kick drum technique, employed along with your favorite flavor boundary mic.
Easy to find, and not terribly expensive.
Dave Dermont has worked in professional audio for well over two decades and is a frequent contributor to ProSoundWeb and the LAB Forum.
Church Sound: Proper Applications Of Passive And Active DI Boxes
Getting signal from here to there optimally
A direct box, DI unit, DI box, or simply DI is an electronic device that connects a high-impedance, unbalanced, line-level output (i.e., a piece of equipment) to a low-impedance, mic-level, balanced input, usually via an XLR connector. DIs are frequently used to connect an electric guitar, electronic piano, or electric bass to a mixing console’s microphone input.
The DI performs level matching, balancing, and either active buffering or passive impedance bridging to minimize noise, distortion, and ground loops. DIs do not perform impedance matching.
DI (pronounced dee EYE, not DIE as in “die feedback, die!”) is variously claimed to stand for direct input, direct injection or direct interface. DI units are used with professional and semi-professional PA systems and sound recording studios.
The basic component of a DI unit is a step-down transformer. Transformers consist of two or more wires that wind many times around a metal core. The ends of each coil of wire protrude from the windings; one pair of ends is the input, and the other pair is the output. The input coil is called the primary, and the output coil is called the secondary.
When an electrical signal passes through the primary coil, it creates a magnetic field around the coil. The field then induces an analogous signal in the secondary coil, which appears at the output leads. If the primary has more windings than the secondary, it’s called a step-down transformer because the signal level and impedance are lower at the output than they are at the input.
If the secondary has more windings than the primary, it’s called a step-up transformer because the signal level and impedance are higher at the output. However, the power does not increase with respect to the input. Step-up transformers are used at the input stage of mic preamps and adapters to connect a microphone to a line-level or guitar-amp input.
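The impedance relationship implied here is that the load seen at the primary scales with the square of the turns ratio (a minimal Python sketch with illustrative values):

```python
def reflected_primary_impedance(turns_ratio, secondary_load_ohms):
    """Impedance seen at the primary scales with the square of the turns ratio."""
    return turns_ratio ** 2 * secondary_load_ohms

# A roughly 22:1 step-down transformer loading a 100-ohm input reflects
# about 48 kOhms back to the source, a light load for a guitar pickup:
z_p = reflected_primary_impedance(22, 100)
print(z_p)  # 48400
```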
Passive DI Units
A passive DI typically consists of an audio transformer used as a balun. (A balun is an electrical device that converts between a balanced signal and an unbalanced signal.) A typical impedance ratio is about 500:1 (a turns ratio of roughly 22:1), to match a nominal 50 kOhm signal source such as the magnetic pickup of an electric guitar to a 100-Ohm input.
Cutaway view of a Radial ProDI passive direct box.
Less commonly, a passive DI may consist of a resistive load, with or without capacitor coupling. Such units are best suited to outputs designed for headphones or loudspeakers.
Cheaper passive DIs are more susceptible to hum, and passive units tend to be less versatile than active ones. However, batteries are not required, they’re simple to use, and the better units are extremely reliable.
Some models have no settings, while others can have a ground-lift switch (to avoid ground loop problems) and a pad switch (to accommodate different source levels). Some passive DIs also have a filter switch for coloring the sound.
Active DI Units
An active DI contains a preamplifier. Active DIs can provide gain and are more complex yet versatile than passive units.
Active DIs require a power source, via batteries or a standard AC outlet connection, and may contain the option for phantom power use. Cheaper units offering both options may perform far better on fresh batteries than on phantom power, or vice versa, so it is important to test a prospective purchase in the mode in which it will be used.
Most active DIs provide switches to enhance versatility. These include gain or level adjustment, ground lift, power source selection, and mono/stereo mode. Ground-lift switches often (perhaps unintentionally) disconnect phantom power.
A pass-through connector is a second output, sometimes simply connected to the input connector, that delivers the input signal unchanged, allowing the DI to be inserted into a signal path without interrupting it. This is essential in many applications. Pass-through is more common on active than on passive DIs, and is commonly referred to as a bypass.
True bypass occurs when the signal goes straight from the input jack to the output jack with no circuitry involved and no loading of the source impedance. False-bypass or simply “bypass” occurs when the signal is routed through the device circuitry with no intentional change to the signal.
However, due to the nature of electrical designs there is almost always some slight change in the signal. The extent of change and how noticeable it may be can vary widely from unit to unit.
Countryman Type 85 active DI box.
Direct boxes are typically used in instances of instruments or other devices that only contain an unbalanced 1/4-inch output that needs to be connected to an XLR input.
Headphone Outputs. A DI can be used to receive a signal from any headphone jack, such as those on personal stereo systems or keyboards. If the signal is to be connected to a single input then a mixing facility is required in the DI unit. If stereo is required, then either two DI units or a single stereo unit can be used. The jack cannot normally be used for headphones as well.
Passive resistive load
Acoustic Or Electric Instruments. DIs can be used on instruments with electronic circuitry and pick ups that do not contain an XLR balanced output. An example of this application would be an electric keyboard that needs to be connected to a mixer board, either directly or through a snake. Another example would be an acoustic guitar with pickups or an electric guitar or bass guitar that would be mixed through a mixing console into a main or monitor mix.
Electric Keyboards. For best results use the line output(s), unless the keyboard has built-in balanced outputs (some high end units only) which are essentially built-in DI units and should give the best results of all. If monitor amplifiers are also to be driven directly from the keyboard, the DI unit must have a passthrough connector. Alternatively, take a signal from the amplifier instead, see below.
Passive balun type
Electric Guitar. A DI can be used to take a line in from an electric guitar. When dealing with electric guitars and electric guitar amplifiers, better results will often (not always) be obtained by instead using a microphone in front of the loudspeaker. This is because the tone of the guitar is often shaped by the amplifier and speaker used in the setup. A DI in the chain before the speaker or amplifier will often result in a loss of fullness or pleasant tone.
Using a microphone eliminates hum from ground loops which are often troublesome when using DI units with mains-powered amplifiers. But a microphone will of course pick up background noise which a DI connection will not, and will most often be more susceptible to feedback in live situations.
If an electric guitar is to be connected to a DI and an amplifier is to be connected as well, then the DI unit must have a pass-through connector. Alternatively take a signal from the amplifier, see below. For players using effects (including distortion) built in to their amplifiers, this is the only option, otherwise the contribution of these effects will be lost. If a pass-through is used, normally the DI unit is between any effects units and the amplifier for the same reason.
Passive balun type
Electric Bass Guitar/Acoustic Guitar. When dealing with electric bass or acoustic guitar, a DI is most often preferable to using a microphone on an amplifier. This is because these instruments are often valued in a mix for being clean.
The signal path from the instrument should go into the DI unit and should then pass through to any sort of instrument amplifier. Often any amp used in this setup would be for monitoring purposes only, with the major component of the sound coming from the balanced send of the DI. The DI should be chosen with the specifications of the individual instrument in mind.
Often the best possible tone is achieved by one stage of preamplification. Following this idea, an active instrument, which means that the instrument has a preamplifier inside of it, should utilize a passive DI unit, while a passive instrument, meaning there is no preamplifier inside, should utilize an active DI.
Instrument Amplifiers. Some high-end instrument amplifiers contain built-in DI units. Most of these work as well as or better than any external unit, as they are well matched to the signal, but caution should be used with these as they often are not transformer isolated.
Even so, better results will often (not always) be obtained by instead using a microphone in front of the loudspeaker, especially with electric guitar, for the same trade-offs noted above: a microphone avoids ground-loop hum but picks up background noise and is more prone to feedback live.
—From line or slave output
—From loudspeaker output (parallel to loudspeakers)
—From effects loop (may need pass-through)
Passive resistive load type
—From loudspeaker output (parallel to loudspeakers, and check impedance and power handling capacity of the DI!)
Passive balun type
—From line or slave output
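The warning to check impedance and power handling on a resistive-load DI can be made concrete. Wired in parallel with the loudspeaker, the DI sees the full speaker-level voltage, V = sqrt(P x R_speaker), and its input network must dissipate V^2 / R_di. The amplifier and DI figures below are hypothetical, chosen only to show the arithmetic:

```python
import math

def di_dissipation_w(amp_power_w, speaker_ohms, di_input_ohms):
    """Power the DI's input network must absorb when connected in
    parallel with the loudspeaker output (illustrative sketch)."""
    v_rms = math.sqrt(amp_power_w * speaker_ohms)  # speaker-level RMS voltage
    return v_rms ** 2 / di_input_ohms

# A 100 W amp into 8 ohms puts about 28 V RMS across the speaker terminals.
p = di_dissipation_w(100, 8, 15_000)  # assumed 15 kohm DI input resistance
print(f"{p * 1000:.0f} mW")  # prints "53 mW"
```

A high input resistance keeps the dissipation trivial in this example, but a low-impedance pad on a high-power amplifier scales that figure up quickly, which is exactly why the spec check matters.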
Tips & Tricks
One of the most common applications for DI boxes is connecting equipment with high-impedance outputs (such as synths or passive guitars) to a mixer’s low-impedance inputs over long cables. If you were to run a long cable (say, 100 feet) from a guitar straight to an amp, it would load down the guitar’s pickups; you’d lose high-frequency response and add noise.
However, if you connect the guitar to a nearby DI box with a short instrument cable, you can then run a 100-foot mic cable to a mic preamp near the guitar amp. The mic preamp’s output is then connected to the input of the amp.
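The reason the short-cable-plus-DI approach preserves top end can be sketched with a quick calculation: the pickup's source impedance and the cable's capacitance form an RC low-pass filter with cutoff f = 1 / (2 pi R C). The figures below (10 kohm source, 30 pF per foot) are illustrative assumptions, not measurements of any particular rig:

```python
import math

def cable_cutoff_hz(source_ohms, cable_pf_per_ft, length_ft):
    """Approximate -3 dB point of the RC low-pass formed by a
    high-impedance source driving a capacitive cable run."""
    c_farads = cable_pf_per_ft * length_ft * 1e-12
    return 1.0 / (2 * math.pi * source_ohms * c_farads)

# Hypothetical figures: ~10 kohm pickup source impedance,
# ~30 pF per foot of unbalanced instrument cable.
print(round(cable_cutoff_hz(10_000, 30, 100)))  # 100 ft straight to the amp -> 5305
print(round(cable_cutoff_hz(10_000, 30, 10)))   # 10 ft into a nearby DI -> 53052
```

Pulling the rolloff from roughly 53 kHz down to about 5 kHz is plainly audible dulling; the DI's low-impedance balanced output drives the long run without that penalty.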
Ready to learn and laugh? Chris Huff writes about the world of church audio at Behind The Mixer. He covers everything from audio fundamentals to dealing with musicians, and can even tell you the signs the sound guy is having a mental breakdown. To view the original article and to make comments, go here.
In The Studio: Six Steps To Your Best Mix Ever
We’re manipulating emotional cues in the form of sound
Audio engineering is a surprisingly competitive arena. We meek and mild mixers often find ourselves in head-to-head competition, or, even tougher, competing against some imaginary beacon of greatness.
But this ain’t basketball. We don’t know who wins based on points. In fact, the only people who really keep score are other engineers — the kick in that song is a 9.2 out of 10, but the vocal reverb is only a 7 out of 10. Most people don’t really think or judge this way.
What makes a great mix? Well, most producers will tell you a great performance and great arrangement mixes itself. There’s a reason for this. A great mix isn’t really separate from a great production, and a great production isn’t really separate from a great song. The mix isn’t really the balancing of the production elements. The mix is facilitating the song on record.
This facilitation comes through the balancing of elements, the manipulation of tone and dynamics and the orchestration of space. But the whole goal is to make the listener hear, and feel, a song in the artist’s intended way. We aren’t really manipulating sounds, we’re manipulating emotional cues in the form of sound.
Let’s break it down:
1. Figure out the emotions of the song. This is the sum of the parts. When you listen to the song, the lyrics and the performance, there are feelings and intentions. Now, some of them will be clear, others will be ambiguous, and some will be contrasting. But we’ll get to that. For now, the question is what should the end listener be feeling when they are listening to the song.
This may vary section to section. Or the feeling might come from the difference between sections. The point is: figure out how the song is meant to hit the listener. The more you can figure this out, the stronger the foundation you have for your mix.
2. Figure out how each element supports the emotions. Emotions are complex. You might have a “sad” sounding piano riff. If the whole effect were sadness, you might have a sparse, dragging, and/or lightly played drum part (or maybe no drums). But, you might have the sad piano riff contrasted with driving drums. This might create the feeling of fighting through something, or feeling distressed, or a host of other emotions.
Figuring out how each part interacts gives you context for your mix. If the parts contrast in feel, perhaps they should contrast tonally or dynamically as well? Is the piano supposed to be sad — as in depressed — or sad as in haunting? Perhaps emphasizing lower tones in the former and higher tones in the latter will help convey that intention. I can’t prescribe any kind of formula for this, that’s the beauty and subjectivity of mixing.
3. Figure out what’s important.
Once you have an idea of what everything is contributing to the song and how, you can start figuring out what’s most important to the feeling, and when. This way, if you are, say, EQ’ing to separate elements, you know which element is bowing out of the way for the other.
If the bass has all the inside groove, you don’t want to EQ the bass to make room for the kick. Or, if you do, you want to do it because you’re turning the bass up louder than the kick.
Similarly, if the piano is expressing the feeling you want featured, and the bass is really just there for support, you probably want the piano to dominate in the record. In fact, it might even be good if the piano is masking the bass a bit in that scenario.
4. Scrutinize your vocals. As humans, there is nothing we understand more clearly than the human voice. Even if the song is in a different language we hear joy, pain, anger and love fairly clearly.
There is some degree of universal language that supersedes words. Find the parts of the performance that convey the feeling, and bring those out. Check the entrances and exits of words, notes and phrases. A lot of interesting stuff tends to live in those entrances and exits.
5. Think of associations. Literal meaning tends to be underwhelming in a song. A literal meaning in a song would be when the performer tells the listener what to feel. It can be useful to a degree, but ultimately you want the listener to find their own emotions in the song.
One way to do this is to think of associations. An association is when something makes the listener think/feel something else. In this regard, the listener digs the emotional response out from within.
An easy example: putting an echo on something. Echoes are often associated with loneliness because we tend to hear echoes in empty places. If the context is right, the listener will pull that association up themselves.
6. Focus on transitions and variation. I asked on my Facebook page which main elements make for a great song. Almost everyone mentioned “contrast.”
We are built to detect contrast. We have a built-in kinetic sense that we naturally use to focus our attention on whatever is changing. And we enjoy change. Making sure these changes are well orchestrated is paramount to an effective song, primarily to keeping it engaging (at the very least).
And that’s how you make the best mix. Things like compression, EQ, choosing reverbs — these are all a technical means to an end. The end is the artistic intention, emotion, and how well it translates over the listener’s playback system.
Matthew Weiss engineers from his private facility in Philadelphia, PA. A list of clients and credits is available at Weiss-Sound.com.
Be sure to visit the Pro Audio Files for more great recording content. To comment or ask questions about this article go here.