Tuesday, March 11, 2014

Church Sound: EQ Is Really Just A Numbers Game

Do you remember how you were taught to use EQ? Were you really taught, or did you just figure it out?

One of the questions that I get asked all the time is how I know what to adjust on the EQ section of the console. I’ve found that if you can make it into a numbers game, turning the right knob becomes second nature.

When I teach sound, one of the things we spend a lot of time on is frequency response. That’s because for me, when frequency response was explained, the light bulb came on. All of a sudden things made sense and I could quantify what I was hearing.

Once I explain frequency (you know the drill—hearing is 20 Hz to 20 kHz or 20,000 Hz, low to high), the lesson continues by giving anchor points. I start by drawing a horizontal line, putting 20 Hz on the left side and 20 kHz on the right side of the line.

Then I begin talking about a kick drum, normally tuned in the 80 Hz to 100 Hz range, and draw a vertical line just to the right of the 20 Hz mark. This is anchor point #1.

I then ask what a piano is tuned to, and every time someone in the audience will throw out A440. Bingo, 440 Hz. If a piano is available, I then hit the A440 key (the A above middle C) on the piano, or if a piano isn’t available I use a pitch pipe.

I then draw a vertical line to the right of 80 Hz, approximately one-third of the way along the horizontal line. This is anchor point #2.

Unfortunately, the next illustration is becoming less and less applicable as the tone is no longer used very often. If you’re over the age of 35, you probably remember the tone generated by the Emergency Broadcast System (EBS). For those too young to have heard or remember it, it was a 1,000 Hz (or 1 kHz) tone played on radio and television stations, accompanied by an announcer saying, “This is a test of the Emergency Broadcast System, this is only a test…”

Regardless, I still use it in training, doing my best to replicate the 1,000 Hz tone vocally by saying “DOOOOOOOOOOOOOOOO” and trying to hold the pitch at or near 1 kHz. (Those who have been to one of my training sessions know that this is a weak point…:>)

For the younger crowd, and to the delight of those who may someday be in one of my training classes, I downloaded a frequency generator app to my phone. While my phone’s speaker won’t reproduce much below 150 Hz, I’m able to clearly play A440 and the EBS tone.

Anyway, back to the lesson. After hearing the EBS tone, I draw a vertical line to the right of 440 Hz, approximately halfway along the horizontal line. Anchor point #3.

I then selectively put the sound system I’m using into feedback by boosting frequencies on an open mic channel. When the feedback starts, I ask: is it above or below 80 Hz? Above? OK, then is it above or below 440 Hz? Above? OK, then is it above or below 1 kHz? Below? OK, our feedback issue is between 440 Hz and 1 kHz. Now, is it closer to 440 Hz or 1 kHz?

You get the point. Now they’re getting an understanding of where to adjust the EQ.

I then set the frequency (sweep) knob between 440 Hz and 1 kHz and cut the level at and around that point, moving the sweep knob up and down until I hit the frequency that best solves the feedback problem.
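For readers who like to see the numbers game written out, here’s a minimal sketch (my own illustration, not from the original lesson) of the same halving process in Python. It assumes a helper, feedback_is_above(), standing in for your ear’s answer to “is the ringing above this frequency?”

def find_feedback_frequency(feedback_is_above, low_hz=20.0, high_hz=20000.0, passes=8):
    """Narrow in on a feedback frequency by repeatedly asking whether the
    ringing sits above or below the midpoint, just like the anchor-point
    questions above."""
    for _ in range(passes):
        # Use the geometric midpoint, since we hear pitch logarithmically.
        mid = (low_hz * high_hz) ** 0.5
        if feedback_is_above(mid):
            low_hz = mid      # ringing is higher, discard the bottom half
        else:
            high_hz = mid     # ringing is lower, discard the top half
    return (low_hz * high_hz) ** 0.5

# Example: pretend the system is ringing at 700 Hz.
ringing = 700.0
print(round(find_feedback_frequency(lambda f: ringing > f)))  # about 695

In practice your ears and the sweep knob do the same job; the point is simply that every above-or-below answer cuts the remaining search range in half.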

The goal is for this numbers game to translate to greater understanding of EQ. It can be taken further if needed, but usually, what I’ve described here unlocks the mystery for most fledgling audio techs.

Gary Zandstra is a professional AV systems integrator with Parkway Electric and has been involved with sound at his church for more than 30 years.

Posted by Keith Clark on 03/11 at 12:44 PM

Bedside Manner: Working More Effectively With Clients

The term “bedside manner” is usually associated with doctors, but I think it’s equally appropriate for any situation where customers are being served in a technical capacity and communication between the parties is essential.

Psychology matters, and should be considered in the presentation, the choice of words, and certainly the attitude of the vendor or service provider.

Case in point: when I first moved to Albuquerque in 2004, I owned a 1996 BMW 328i. After visiting the local dealer for service a couple of times, I decided that not only did they charge too much, they were snobs. The next time, I took the car to an independent mechanic. He wasn’t a snob, but I still didn’t like his bedside manner.

Long story short, I ended up at another local shop and have been going there ever since (about eight years so far). They just “get it” and know how to talk to their customers. They customize the dialog based on the level of technical knowledge their customers have, and make the expensive repairs just a little less painful with an easygoing manner. And, they’re 100 percent trustworthy. I wouldn’t go anywhere else.

Jedi Mind Tricks
As many of you know, I’m a classical musician on the side. The orchestra that I perform with puts on concerts six times a year at a variety of venues throughout the Albuquerque area.

Every time we visit one particular venue, we arrive to find monitor wedges strung across the front of the stage. No one from the orchestra has asked for them, but they are there. Work was put into hauling them out and hooking them up. Time and effort wasted.

What worries me about this is that an unsophisticated audience might easily equate “sound equipment on the stage” with “sound reinforcement is involved.” This is a subtle thing, but I’ve been at gigs where patrons have complained about the “loud volume” and asked to have “the speakers turned down” when the PA wasn’t in use and there were no microphones on stage, although loudspeakers were visible.

When I’ve answered to that effect, I’ve gotten scowls because as far as I can tell, these concerned patrons don’t believe me. Part of the problem is related to my decades-long rant about how shows often really are too loud; in other words, the audience does care, and for acoustic music they don’t want to have their collective face ripped off.

O.K., back to my original point. Who cares if the monitors are on stage if they aren’t being used? To me, it points to a gap in communication. Just like in the military, the job is often to “do exactly what you’re told, nothing more, nothing less.”

The flip side is that in pro audio, it also helps to read minds. Placing monitors on a stage for a 100 percent acoustic symphony orchestra, without being asked to do so, shows that someone is not very good at reading minds or considering the needs of their customers.

A counterexample is a recent experience I had with a really good systems tech from a local sound company.

I mentioned Scott Boers of AE Productions (a regional sound company in Albuquerque) in my previous article. He was terrific on that job, and found the perfect balance of deferring to me (I was FOH) most of the time while getting in a few good suggestions at the same time.

Scott obviously has a very good sense of what can and perhaps should be said without being intrusive. He knows gear and brought along everything that was needed, plus a few extra items.

When I asked questions, he had answers. In turn, he asked some good questions about certain things, which caused me to think a bit more. For example, he suggested adding a mic on the upright basses, and I thought it was worth a try. It resulted in an improvement, and wouldn’t have happened without his suggestion.

The main thing about his approach is that it put me, the customer, at ease. I had no doubt he was A) good at his job, B) sensitive to our needs, and C) paying attention to what was going on around him, i.e., the job at hand.

Psych 101
A big part of what we have to do as pro audio practitioners is think a few steps ahead of our customers—both the patrons and talent, including their BE or ME if they have them.

If we’ve been working with an artist or group for a while, we’ve hopefully gotten used to their quirks and can maybe even read their minds some of the time. If we’re working with a new client for the first time, building trust must happen quickly and is often based on this elusive idea of bedside manner.

We have to be chameleons, changing our colors with the client’s needs, with the style of music being presented, and with the times. Technology is constantly changing, and we need to constantly change as well.

Nothing builds trust faster than a client feeling that they’re being heard and understood, and that you’re doing everything in your ability to get it right. Sure, there are some jerk artists out there who have fired people for a single mistake. But the more common story is that something goes wrong, the tech fesses up, promises to get it fixed fast, and does so. This kind of approach is usually what gets us re-hired.

As in sales, role-playing can be a good tool to work out some of your responses, questions and approaches. I suggest finding a trusted co-worker or colleague or two, and trying it out. Come up with solid, realistic scenarios and work through how you’d respond.

If you work with an old road dog, get him to share some “war stories” (what old road dog doesn’t love to tell war stories?). In addition to being great to hear, they usually offer a lot of learning potential.

And, most of all: have fun. Developing a winning bedside manner doesn’t come naturally to many of us. But it can be learned and cultivated, and enjoying the process along the way usually leads to a far better result.

Karl Winkler is director of business development at Lectrosonics and has worked in professional audio for more than 20 years.

Posted by Keith Clark on 03/11 at 09:42 AM

Monday, March 10, 2014

In The Studio: The Importance Of Good Monitors

A good studio mix should sound and "feel" right on any set of monitor loudspeakers...
This article is provided by BAMaudioschool.com.

Simply put, good monitors are very important.

Video productions monitor using screens. Audio productions monitor using loudspeakers that are driven by amplifiers.

Every decision made in a musical production (not only regarding sounds but also arrangements and even performances) is based on what everyone is hearing.

For example, if you’re recording a bass sound and the monitors sound thin, you may mistakenly believe that the bass sound itself is thin and compensate by adding more bottom. Then when you hear the same sound in a different location you may find that the sound has too much bottom and is muddy.

Likewise, if a singer is performing and cannot hear the proper blend of music and their voice, they will not be able to perform as comfortably and to their fullest potential.

As stated above, every decision made in a musical production will be based on what people hear. What people hear can be affected by many things, including loudspeaker type and condition, amplifier type and condition, headphone type and condition, quality of wire and soldering, ear wax, sinus congestion due to colds or allergies, ear fatigue, and the changing acoustics as people move around in the room.  Even air pressure (based on weather and altitude) can affect how things sound.

Going back to the video comparison, you would not crank up the amount of “red” on the screen before you adjusted the color of the video itself. Video monitors can be set differently in a variety of ways, and screens display colors differently as they age. Colors on a screen can even look different if viewed in a dark rather than a well-lit room. And looking at the sun for a few seconds makes it difficult to properly see and judge any image (much like the ear fatigue from listening to very loud music makes it difficult to properly hear and judge any sound).

Similarly in audio, volume is very important. Human sensitivity to frequencies (our ability to hear bass, midrange and treble) is different depending on the volume of what we are listening to (thanks Fletcher & Munson). When we listen quietly, we hear frequencies less evenly. When we listen loudly, we fatigue our ears and the speakers.

Another variable that can change how you hear something is perspective (actually, perspective affects how you judge what you hear). Listening to material that has extra bass or treble will affect how you perceive and even judge the next thing you hear. If you listen to rap and reggae all day and then try to mix a punk rock song, you may judge that the kick and bass sounds need to have more bottom when they may actually be appropriate for the musical style.

Let’s drop most of the variables.

First, your wiring should be solid (a bad solder joint can affect sound), your patchbay should be clean, and all of the knobs and faders you’re going through should be well maintained.

A good amplifier is as important as good loudspeakers (both matter, so don’t skimp on one or the other). The amp should be powerful enough that you will not blow your loudspeakers due to distortion (clipping can flatten a sine wave into a square wave, which can blow loudspeakers because the cone is driven abruptly rather than smoothly and the clipped waveform carries far more high-frequency energy – more on this later).
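To put a rough number on that, here’s a small sketch of my own (not from the original article) using numpy, comparing the high-frequency energy in a clean sine wave with the same sine hard-clipped toward a square wave, assuming that energy above 5 kHz is what ends up stressing a tweeter:

import numpy as np

fs = 48000                            # sample rate in Hz
t = np.arange(fs) / fs                # one second of audio
tone = np.sin(2 * np.pi * 1000 * t)   # clean 1 kHz sine

clipped = np.clip(3.0 * tone, -1.0, 1.0)  # overdriven, then hard-clipped

def hf_energy_fraction(signal, cutoff_hz=5000):
    """Fraction of the signal's energy that lies above cutoff_hz."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
    return spectrum[freqs > cutoff_hz].sum() / spectrum.sum()

print(f"clean sine: {hf_energy_fraction(tone):.4%} of energy above 5 kHz")
print(f"clipped:    {hf_energy_fraction(clipped):.4%} of energy above 5 kHz")

With these made-up numbers, several percent of the total energy shifts above 5 kHz once the wave is clipped, where a clean sine has essentially none – one common explanation for why clipped amplifiers take out tweeters.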

The combination of amplifiers and loudspeakers is crucial. Different loudspeakers sound different. Different amps sound different. Therefore, the sound you hear is based on the combination of the amp and loudspeakers you use.

I’ve often mixed different amps with different loudspeakers in order to get a sound that I thought would translate properly to other sound systems.

Of course, some companies (such as Genelec) make loudspeakers with built-in amplifiers that have been designed together so the combination of amp and loudspeaker will always be consistent.

Finally, be aware of how your room acoustics will affect the sound of your monitors. Mind parallel surfaces that can make sound bounce back and forth, and be careful of high frequencies bouncing off the glass of the studio window.

Oh, and don’t forget to listen in mono now and then.

I personally like to monitor through ProAc loudspeakers (with a Bryston 4B amplifier), Yamaha NS-10s, Genelec 1031s, and a bunch of small boom boxes (Sony, Panasonic, etc). I sometimes use Auratones but prefer the small built-in speakers in Studer two-track machines.

I hardly ever listen to “big” loudspeakers as I get great bottom and depth from the ProAcs. However, I do listen to “small” loudspeakers frequently, and even listen from another room for final decisions.

I used to have a cheap stereo in college that sounded terrible, but in ways I was used to. When I started mixing, sometimes I would hear problems on that cheap stereo that I did not notice when listening on “nice” loudspeakers. 

Now I try to listen to as many loudspeakers as possible in the hopes that one pair will show me something wrong that I did not notice on “nice” speakers. I even take copies of what I’m doing to stereo stores and listen in each system and boom box until I find something that needs attention.

The people who hear your mixes do not usually listen on “nice” systems, but rather on overly-hyped systems made for volume rather than even sound. They even listen with stereo speakers split in different rooms (right in one room, left in another) and many people share ear buds with friends (each hearing only one side of the music). My parents had a stereo with a broken left speaker that they never fixed. Imagine my surprise when I finally heard both sides of Beatles songs!

Does that mean we back off from making things sound as good as we can? Does that mean that we should avoid extreme stereo so the people who share earbuds will hear the correct parts? Of course not.

A good mix should “feel” right on any set of loudspeakers. There are exceptions (such as reggae sounding right on a loudspeaker without any bottom) but for the most part, your goal should be something that feels right no matter where you hear it.

STORY TIME: Once upon a time, I was mixing a TV theme song recorded by a very famous musician and produced by another very famous musician. During the song there was a break with the following rhythm: “da da da-da da da da-da.” I was being slick (it was for stereo cable TV) so I panned each part of the rhythm from left to right.

Unfortunately, when they were creating graphics for the intro, the stereo in the graphics suite had only one working loudspeaker, so they only heard half of the rhythmic part. They created graphics that hit along with that half of the part, and when it was shown in stereo with the full part, it looked stupid. We then had to go back and change the song to fit the wrong graphics (it was the cheaper fix).

Bruce A. Miller is a veteran recording engineer who operates an independent recording studio and the BAM Audio School website.

Posted by Keith Clark on 03/10 at 04:21 PM

Tech Tip Of The Day: What Goes Into A Church System?

10 significant aspects to keep in mind when seeking a new sound system for a church...
Provided by Sweetwater.

Q: I’m on a committee to purchase a new sound system for my church. Are there any special considerations that go into this type of installation?

A: In many ways, designing an effective system for a house of worship is one of the most demanding jobs in the audio business.

While you are undoubtedly interested in good stewardship of your congregation’s funds, keep in mind that the following points are not “luxuries,” but are essentials for a good sound system.

#1 Dynamic Range: Church sanctuaries are usually quieter than other gathering places. In fact, the noise floor sometimes resembles recording studio environments more than auditoriums.

So, the sound system must be quieter than usual to prevent audible noise in the audience area. You should specify a system with as much as 96 dB of dynamic range.

#2 Signal-to-Noise Ratio: Many listening environments have a “sweet spot” for which the sound system performance is optimized. But in a house of worship, every seat must be optimized for adequate signal-to-noise ratio. Generally, a minimum of 25 dB S/N ratio is appropriate for every seat in the audience area.
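As a rough illustration (my own made-up numbers, not Sweetwater’s), checking that criterion comes down to a simple subtraction between the program level at a seat and the room’s noise floor:

# Hypothetical survey: program level and ambient noise, in dB SPL,
# measured at a few representative seats. Values are invented for illustration.
seats = {
    "front center":  {"program": 78, "noise": 38},
    "under balcony": {"program": 66, "noise": 42},
    "back corner":   {"program": 63, "noise": 40},
}

REQUIRED_SNR_DB = 25  # the minimum suggested above

for name, levels in seats.items():
    snr = levels["program"] - levels["noise"]
    verdict = "OK" if snr >= REQUIRED_SNR_DB else "needs attention"
    print(f"{name:13s}: S/N = {snr} dB -> {verdict}")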

#3 Uniform Coverage: Many auditoriums are plagued with “hot” and “cold” spots in sound coverage. This can usually be attributed to interaction between multiple loudspeakers, and is unavoidable when more than one loudspeaker is used to provide sound coverage. A good design assures that there is even coverage in the audience area, and that no seats are unusable because of loudspeaker interaction.

#4 Versatility: While it is possible to design sound systems that are optimized for speech or music, your system must perform well for speech and music. The attributes of these two types of systems are often at odds, so this is a very difficult task.

Your system must have the accuracy and clarity needed for speech reproduction, while maintaining the extended frequency response and power handling required for music.

#5 Hum and Buzz: Audible AC hum is a major detriment to a church sound system. It usually results from improper grounding practices, either in the wiring or the actual equipment. Remember that off-the-shelf equipment must often be modified to work without hum.

#6 Gain Before Feedback: Whenever a microphone is placed in the same room as a loudspeaker, the potential for feedback exists.

Things that aggravate this further are multiple microphones and long miking distances - necessities for most churches. Your sound system must be extremely stable, meaning that loudspeaker array design and mic placement are critical to the end result.

Your sound personnel must understand the limitations of the sound system and be trained to manage the open microphones and working distances for people using the system.

#7 Wireless Microphones and RFI: Radio frequency interference can adversely affect the performance of a sound system. The system must be properly shielded against it, with appropriate filtering devices installed when necessary.

In addition, the operating frequencies for your wireless mics must be carefully selected to work properly in the presence of other RF broadcasts in your area.

#8 “Clean” Installation: An important yet often overlooked aspect of sound system design is the installation. Proper interconnect practices must be carried out, and all applicable electrical codes must be observed.

In addition, a “clean” installation means that wiring has been concealed as much as possible, and that the finished system blends well with the decor of the building.

#9 Professional Equipment: Selecting marginal equipment is usually false economy. You need a system that provides reliable, quality performance for years to come.

It’s best to deal only with companies that provide reliable, repairable products. Loudspeakers should be “stress tested” for safety, so they can be suspended above a congregation with confidence.

#10 Calibration, Training and Documentation: A properly calibrated sound system will be much easier for your personnel to operate. A significant amount of expertise is required to make a system “user friendly.”

Your sanctuary is a critical listening environment for speech and music. Your sound system must provide adequate gain, intelligible speech, even coverage and extended bandwidth to all listener seats. The best value in a sound system is one that meets all of these criteria.

For more tech tips go to Sweetwater.com

Posted by Keith Clark on 03/10 at 12:57 PM

Troubleshooting Radio Frequency Interference (RFI) Problems

Setting forth a methodical procedure to effectively deal with this problem

You turn on the sound system and you hear a radio station. Now what?

Let’s lay aside the “magic fixes” and “voodoo methods” and set forth a methodical procedure to deal with this problem, which is called Radio Frequency Interference (RFI).

The key is to “divide and conquer.” It’s essential that the problem be localized to one part of the sound system. If more than one problem exists, these tests will help disclose that also.

Start with simple tests and proceed to more rigorous ones. Problems range from simple to complex, but most fall into the simple category.

You’ll need some basic tools to troubleshoot RFI problems. Note that there are MANY more esoteric gadgets out there which can prove invaluable under many circumstances.

But before we pull out the “big guns” let’s look at some inexpensive tools that will locate most of the problems:
—Headphone amplifier
—Dynamic microphone
—Mic to line preamp
—Battery-operated phantom power supply
—Ohm meter

Divide and Conquer
Start by dropping the level of the main fader on the console/mixer. If the RFI drops in level, you’ve localized it to the front end of the mixer (at least ahead of the main potentiometer).

If it doesn’t go away, unplug the output of the mixer from the rest of the system. If there is still RFI at the output of the system, there is help later in this document.

If unplugging the mixer output stopped the RFI, the problem lies in the mixer and/or devices connected to it.

Check Mic Lines
Drop the channel faders on the console one by one. If the problem goes away when a specific channel is dropped, you have isolated the problem.

If the RFI drops in level a little with each channel, you may have found many problems! It is important to determine whether the RFI is getting into the mic lines (very common) or somewhere else.

Listen to each mic line individually through the headphone amplifier. You may need an external phantom power supply for condenser microphones.

If the mic lines are clean through the headphone amp, but have RFI through the mixer, the mixer input may not be grounded properly or may not be RFI immune. See the section on mixer inputs for some other things to try.

Mixer Problems?
Pick up the mixer (if it’s not too large/heavy), and turn it. The RFI will either get better, get worse, or stay the same. If this changes the level of the RFI, consult the manufacturer of the mixer.

Too Many Grounds?
Using an appropriate outlet tester, check the AC socket that the mixer is plugged into to make sure that it is properly grounded. Disconnect each mic line from the mixer.

Using an ohm meter, check for shorts between any of the three conductors and a building ground. The building ground should be accessible on the third (ground) prong of the AC outlet that the mixer is plugged into.

Mic lines often get “unbalanced” by a conductor getting shorted to conduit, etc. somewhere up the path. They will still work, but will be noisy since there is no common-mode rejection at the balanced input of the device.

Electricians often ground audio shields to conduits, jack plates, etc. The only ground on a mic line should be at the mixer! Find the improper grounds and disconnect them.

If you’re in doubt at all about the condition of the building electrical ground, have a qualified electrician check it out.

It’s usually a bad idea to drive a dedicated ground rod for the audio system, as it establishes different ground potentials for different electrical devices in the same building.

Shorted Shields?
Using an ohmmeter, check for shorts between shields of pairs of multi-conductor cables. Each pair of a snake cable should be individually shielded, and each shield should be isolated from the other shields.

When the outer jacket is removed from a snake cable for wiring purposes, it is important to heat shrink the individual pairs to maintain their isolation.

This is a time consuming process, and many installers overlook it. If there are shorts between shields that can’t be located at either end, you may need to pull new wire.

Ron Steinberg of Rentcom Communications told of an installation that was an RFI/EMI nightmare. Upon testing the newly installed mic lines, it was discovered that there was no continuity between the drain wire and the foil shield (an extra layer of Mylar was isolating them).

Very unusual, but not impossible. The manufacturer replaced the wire without question. The moral of the story? Assume nothing.

Choir Mic Problems?
Are the choir mics causing RFI problems? Most choir mics have a “module” that goes on the end of the line from the mixer.

A small diameter cable proceeds from that point to the microphone. On most choir mics, this is an unbalanced line.

Such lines should be cut to length and never coiled up. Unplug the mic line from the module, leaving the module plugged into the mixer.

Does the problem go away? No? Unhook the module and plug a regular dynamic microphone into the mixer through the same mic line.

Does the problem go away? You may need to install filters on the mic input to the module. Better yet, consult the manufacturer of the microphone for some remedies.

Chances are you aren’t the first one that has had this problem.

RFI Source
Caution! The following test should be performed with care.

Sometimes RFI problems are intermittent. A useful tool is a broadband RFI source that can be used to “infect” a component with RFI. Sound expensive? Not really. I use an electric fence charger for this purpose.

These are available from any farm supply store, priced at about $75 or less. (If you’re unfamiliar, as the name implies, a fence charger is a high-voltage/low-current source for powering electric fences.)

This simple box has an AC cord and two terminals - one is “hot” and the other is ground. Take two pieces of 12 gauge solid copper wire (Romex works well) and connect them to each terminal.

Strip back the last .25 inch of insulation from each, exposing the copper conductor. Bend the two wires so that the exposed ends are about one-eighth inch apart.

Plug the unit in. You will get a spark at one-second intervals. The spark is a very broadband source of RFI, and interference from it will be audible on any radio station and all TV stations - both VHF and UHF.

If there is an RF path into your sound system, this will find it! Use caution and common sense here. The shock from these devices is significant (can put a bull on its knees), so make sure that you set it up safely.

Also, make sure that any nearby computers or DSP processors are switched off, as the generator may interfere with them. This is EXTREMELY important.

The RF energy will obey the inverse-square law, so a location for the generator within 5-10 feet of the mixer should be adequate.
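If you want a feel for how quickly the spark’s energy falls off with distance, here’s a tiny sketch of my own (purely illustrative) of the inverse-square relationship expressed in dB:

import math

def relative_level_db(distance_ft, reference_ft=5.0):
    """Level of an inverse-square source at distance_ft, relative to reference_ft."""
    return -20 * math.log10(distance_ft / reference_ft)

for d in (5, 10, 20, 40):
    print(f"{d:3d} ft: {relative_level_db(d):+6.1f} dB relative to 5 ft")

Roughly 6 dB per doubling of distance, so parking the generator a few feet from the mixer keeps the test signal strong without sitting right on top of the gear.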

The pulsing “snap” of the spark will be clearly identifiable as you listen to the outputs of the mixer, mic lines, etc. For increased resolution, use an oscilloscope.

If you can keep this out of your sound system, you can keep out anything.

Back to Pin 1
Make certain that the mixer is earth grounded on pin 1 of all input and output connectors. This is a VERY common problem that can result in RFI getting into the internal circuitry through the pin 1 connection.

One test that will reveal this is the “hummer” test described in our last newsletter. The “hummer” test involves feeding about 100 mA of current into pin 1 and listening to the mixer output for hum.

Figure 1: Hummer schematic

See Figure 1 for a hummer schematic. If your mixer has a pin 1 problem, consider breaking the shields out of the XLR plugs and tying them directly to the mixer case.

You may have to scrape a little paint to get to metal, but it is important that the noise currents in the shield go to earth and not onto the circuit board. (This procedure is well-documented in the June 1995 AES Journal, with a number of grounding and shielding authorities commenting on the procedure.)

Try a Different Mixer
Before you go to too much trouble, try substituting another mixer for the one that you are using.

I keep a small, four channel unit for this purpose. It has transformer-balanced inputs and outputs and runs off of batteries which keep it isolated from the building AC and grounds.

If your mic lines still have RF when hooked-up to this mixer, your problem is getting more serious.

If the problem goes away, you will quickly learn why some mixers cost much more than others. Remember, it’s the stuff on the inside that counts.

Rule Out the Wire
Substitute a different mic line. I keep a 100-foot length of “star quad” cable for making acoustic measurements.

Substitute such a cable for your installed mic line. Just lay it out along the same path and hook the mic to it.

If the RFI goes away, you have learned why some wire costs more than others. In sensitive installations, this is a test that you’ll want to perform before you spec the wire.

Filtering Out RFI
When all else fails, you may need to install some filters on input and output lines. These filters come in several forms.

Figure 2: Soldering of ceramic disc capacitors.

The most readily available (and simplest) are ceramic disc capacitors. These are soldered from pins 2 and 3 to pin 1 (Figure 2).

The capacitor forms a low-pass filter, providing a low-impedance path to ground for frequencies above its corner frequency.

What is the correct value? A common one is 0.01 microfarads. Since the filtering characteristics are dependent upon the circuit impedance, the best thing to do is to start with a small value and increase it until the high-frequency roll-off becomes audible (using broadband pink or white noise as a source).
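For a back-of-the-envelope starting point, here’s a short sketch of my own using a simple first-order RC model and a few assumed mic-circuit impedances (the real corner depends on the actual circuit, so treat these as ballpark figures):

import math

def corner_frequency_hz(source_impedance_ohms, capacitance_farads):
    """First-order RC low-pass corner: fc = 1 / (2 * pi * R * C)."""
    return 1.0 / (2 * math.pi * source_impedance_ohms * capacitance_farads)

cap = 0.01e-6  # the common 0.01 microfarad value mentioned above
for r in (150, 300, 600):  # assumed source impedances, in ohms
    fc = corner_frequency_hz(r, cap)
    print(f"{r:3d} ohms with 0.01 uF -> corner near {fc / 1000:.0f} kHz")

With these assumed impedances the roll-off starts well above the audio band, which is why the practical advice is to keep increasing the value until the roll-off just becomes audible rather than relying on calculation alone.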

A capacitor substitution box works well for this purpose. Be sure to use the same value on each pin. Sometimes it is necessary to achieve a steeper roll-off than a single capacitor can provide.

You can accomplish this by using a series inductor (or “choke”) along with the capacitor. These can be acquired at the local Radio Shack.

Ferrite beads are also useful as input filters. These can be slipped over a small length of bus wire and placed in series with the signal. RFI filters can get quite complex, and only some simple examples have been listed here.

Before you get too elaborate, make certain that you are not attempting to “cover-up” a design flaw in one of the sound system components.

The most practical way to fix such problems is to consult the manufacturer or substitute a different device for the one giving you headaches. Sometimes manufacturers are reluctant to admit that their product(s) have a problem, so prove it to yourself by the substitution method.

When you find yourself resorting to these kinds of methods, the best solution often is to install input transformers on the offending mic lines. Transformers offer the highest RFI immunity and RFI “blocking” capability of all methods discussed.

Their sole drawback is cost. Then again, if you are making your fifth trip back to a venue to troubleshoot RFI problems, transformers start to look quite economical!

RFI troubleshooting chart

It is rare, but not impossible, for RFI to get into the system after the mixer. Make certain that you isolate the problem to pre- or post-mixer immediately. This can save hours of troubleshooting.

RFI can be a major problem for system installers and designers. The measures contained herein represent ideas contributed by a large group of audio professionals, and should suffice in correcting most RFI problems that are not design flaws in signal processing equipment.

When in doubt as to a device’s RFI immunity, SUBSTITUTE A KNOWN-GOOD DEVICE.

One learns to appreciate very quickly the price and performance of professional-quality equipment in high-RFI environments.

Pat & Brenda Brown lead SynAudCon, conducting audio seminars and workshops online and around the world. For more information go to www.synaudcon.com.

Posted by Keith Clark on 03/10 at 11:54 AM
AVFeatureBlogStudy HallInstallationMeasurementSignalSound ReinforcementSystemWirelessPermalink

Friday, March 07, 2014

Summation With Others: Focus On Loudspeaker Arrayability

ar•ray•a•bil•i•ty [uh-ray-a-bil-i-tee]: 1. The properties of a transducer, or a transducer system, that permits summation with others of like kind, thereby providing an overall increase in energy.

While this definition includes devices such as microwave and sonar antenna arrays, here I’m specifically referring to an increase in acoustical energy.

And with that, it’s fitting to say that it’s hard to find “arrayability” in most dictionaries. That’s because it’s a word that I made up in 1984 when I started my loudspeaker company, Apogee Sound. Even “arrayable” isn’t in most dictionaries, but both terms show up incessantly in today’s loudspeaker marketing claims.

Essence Of The Issue
Sound travels quite slowly in air. This makes arrival time, which dictates the relative phase relationship between two or more sonic sources, a critical issue with respect to how they will interact when one wavefront meets another.

So what makes a loudspeaker arrayable? “Any loudspeaker that is intended to be used in multiples must be carefully designed and thoroughly tested with like units to establish its viability as a member of an array that collectively provides reasonably flat constructive summation.” Well, that’s a mouthful. And unfortunately, careful design and thorough testing is not always the case.

There is a significant cost involved in building multiple prototypes, then possibly re-engineering the initial design, and then re-building and re-testing additional iterations until optimal acoustic summation has been achieved. It’s far easier (and cheaper) to predict what the array will do, rather than construct it in the physical domain and actually measure the response of various groupings of loudspeakers. And predicted response is precisely what most manufacturers concentrate on.

Let’s keep in mind that the goal of any loudspeaker array is to behave as a larger version of its individual elements. This doesn’t mean that the array cannot possess new properties of its own (and indeed it will) but rather, that the fundamental response of each array element should be constructively incorporated into the overall array response.

Sound Meets Light
It’s easy to think of sonic energy in the same way that we think of light sources: aim two or more luminaires at an object and they will collectively brighten the object more than a single lamp. Lights don’t care if they are of equal intensity, differing intensity, or widely varying distances from the subject that they are illuminating. That’s because of how incredibly fast light waves travel.

A copious quantity of loudspeaker arrays.

Not so with sound. In the audio domain, two or more loudspeakers focused on a given point in space may or may not produce a greater level of intensity. And even if multiple loudspeakers do increase the overall intensity, they are likely to cause all sorts of unevenness in the spectral response, thus degrading the sonic quality from that of a single loudspeaker. And that leads us to arrayability…

First, let’s establish that loudspeakers are not so easily defined as arrayable or not arrayable. In a manner of speaking, all loudspeakers possess both characteristics to some degree. In practice the issue is complex, and involves an understanding of wavelengths, pressure, and acoustic power. (See “Understanding Wavelengths” for more).

Schools Of Thought
Belief System #1: Many sound system designers and audio practitioners have adopted the notion that if two trapezoidal loudspeaker enclosures with a stated angle of dispersion – let’s say 30 degrees for this example – are placed adjacent to one another at a 30-degree included splay angle (this means that the side walls of the trapezoidal enclosures are probably 15 degrees each), then the result will be a seamless 60 degree system. Or, if three enclosures are used adjacent to each other, they will form a seamless 90 degree system, and so on.

If this was as complicated as it gets, then all 30-degree loudspeakers would be the same as all others. Moreover, the total power output of the array could never be greater than that of each array module. But that’s absolutely not the case.

There is no such thing, in practical usage, as a 30-degree loudspeaker, nor a 45, a 60, or a 120 (though there are plenty of 360-degree loudspeakers; most are known as subwoofers). To understand why this is true, let’s look at how directivity is stated in loudspeaker product literature.

The angle of dispersion that defines acoustical coverage is almost always based on -6 dB “down” points, at which the off-axis energy is 6 dB less than the on-axis energy. And that’s usually measured at 1 kHz, or at some frequency at which the HF horn (and MF horn, if applicable) exhibits some measure of control. But this nearly ancient way of characterizing a loudspeaker’s dispersion ignores the vast differences between LF, MF, and HF. Acoustical behavior is not cut and dried.

At lower frequencies, the LF content will get wider and wider in dispersion until it becomes essentially omnidirectional, under all but the most abnormal of conditions (think 60-foot long horns…abnormal, right?). Very long wavelengths (low “E” on a bass guitar is about 30 feet in length) need either very large horns or very large arrays to provide pattern control. Think of it like this: doing much of anything that might control LF dispersion takes a lot of building materials.
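The wavelengths behind that statement are easy to check with wavelength = speed of sound / frequency; here’s a quick sketch of my own, assuming roughly 1,130 feet per second for the speed of sound at room temperature:

SPEED_OF_SOUND_FT_S = 1130  # approximate, at room temperature

def wavelength_ft(frequency_hz):
    """Wavelength in feet: speed of sound divided by frequency."""
    return SPEED_OF_SOUND_FT_S / frequency_hz

for f in (41, 100, 1000, 10000):   # low E on a bass, kick range, 1 kHz, HF
    print(f"{f:5d} Hz -> {wavelength_ft(f):6.2f} ft")

A wave nearly 28 feet long simply doesn’t notice a horn mouth a couple of feet across, which is why LF pattern control takes either enormous horns or very large arrays.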

Coverage provided by vertical and horizontal L-Acoustics ARCS arrays.

So here we have our first conundrum: the directivity pattern of a loudspeaker, thought by many to dictate what an array will (or will not) accomplish, typically varies by about two orders of magnitude from a loudspeaker’s LF range (assuming it can reproduce 40 Hz) to its HF range (assuming it can reproduce at least 4 kHz).

And that brings us to our second conundrum: an important property of dispersion, and especially how it affects arrayability of like devices, is often overlooked. The property is what happens after the -6 dB down point. A given horn might cut off drastically in amplitude, whereas another might fall off slowly and gently as the dispersion angle increases.

In other words, a horn stated to be 90 degrees at 1 kHz might provide very usable response at 120 degrees, perhaps being only 9 dB below its on-axis reference amplitude. And that could be a good thing if the response remains reasonably flat. The off-axis seating is likely to be physically located much closer to the loudspeaker than the on-axis seating, which means that additional “fringe-fill” loudspeakers may not be needed.

By contrast, another 90-degree horn or horn array might be as much as -24 dB lower at 120 degrees, making its broad-band off-axis response rather unusable. But more to the point, these widely varying horn characteristics will not behave the same when configured into an array. And this leads us to…

Belief System #2: In this second look at arrayable loudspeakers we see a smaller following among loudspeaker designers because the rules aren’t as easy to comprehend. If you put two 90 degree loudspeakers next to each other, and splay them apart at a 20 degree included angle, should you not get 90 + 90 + 20 = 200 degrees of dispersion? It would seem so. But that’s not at all what happens.

Instead, the pressure from the two or more radiating devices, that’s the horns and cones, will combine in some particular manner. Exactly how they combine is the grand issue behind whether or not they will form a useful array or conversely, are just a bunch of speakers placed near each other that combine like oil and water.

For the record, this is the belief system that Apogee Sound based its designs upon. And it really can be called a “practice” rather than a “belief system” because the patterns and response curves of the resultant arrays were exhaustively measured outdoors, 30 feet above ground level, and carefully documented using Hewlett Packard and TEF measurement equipment. It was a process that took several months of full-time work.

The net value, simplified as much as possible from the exhaustive process, was that low pressure audio sources combine much more readily than high pressure sources. Stated another way, sonic energy emanating from the mouths of horns is far more amenable to summing with like units than at the throat of horns.

The New Math
So it turns out that a certain 90-degree horn sitting adjacent to another 90-degree horn – both with very gradual fall-off characteristics – actually combines with its neighbor quite well. The behavior is something like an old-fashioned multi-cellular horn, producing a summed output that is on the order of 70 degrees.

Add a third horn to the array and the -6 dB points become even narrower, about 60 degrees. Build it larger, let’s say seven 90-degree horns, and the -6 dB points become narrower still, about 40 degrees.

But here it’s important to note that we’re focusing on -6 dB points which, as noted earlier, are the industry standard for defining dispersion angle. The forward radiated power is much greater than that of a single horn, and that’s exactly what proper arrays should accomplish.

The useful angle of coverage of this array is much wider than its -6 dB beamwidth would suggest (again, the stated angle is based on -6 dB points). What do we mean by useful angle? That would be any point in the overall dispersion pattern that remains reasonably flat. If only 50 Hz to 100 Hz exhibits 180 degrees of coverage, but higher frequencies are far narrower, then 180 degrees would not be considered a useful angle of coverage.

Loudspeaker cones are altogether different from horns. Horns are designed to cover a certain dispersion angle. Cones are designed for everything but. The diameter and the geometry of a given cone’s flare-rate dictate the angle of coverage of an individual cone driver which, as one might suspect, will always be conical in nature.

The angles and configurations in which multiple cones are arrayed with respect to one another affect how a grouping of cone drivers will array together (and by the way, “array” is both a noun and a verb). That said, and as previously noted, in practical terms all acoustical behavior is a function of frequency.

So here we have some diameter of cone driver that will exhibit a wide pattern at lower frequencies and a narrower pattern at higher frequencies. And we wish to combine it with others of like kind. And that leads us to line arrays…

Line Array Directionality
Somewhat opposite to the concept of arraying groups of predominantly horn-loaded loudspeakers in the quest to ultimately generate large-system pattern control, the modern line array is a mixture of horns (or waveguides as many manufacturers name their devices) combined with various sizes of cone drivers.

The pursuit is clear: improved directivity, with the benefit of delivering sound to the seated audience areas while minimizing ‘acoustic overflow’ into unseated regions of a venue. By reducing unwanted reflections from walls and ceilings, the direct sound should be clearer and cleaner.

However, line arrays tend to excel at sending coherent sound over long distances, which makes reflections more of a discrete echo rather than the scattered reverberation of less coherent systems. The trick, it seems, is to aim the line array very, very carefully to avoid discrete reflections.

There’s an easy way to explain what happens in a line array formation. The multiple sources in the line generate acoustical addition with one another, mostly because they are closely arrayed in the physical plane, while they simultaneously generate cancellation lobes off-axis. This is what gives a line array its exceptional ability to provide tightly defined directionality along the long axis of the line.

As you walk towards a well-designed line array and then walk directly under it, it will sound as if the level has been attenuated by a very large amount. And that’s a good thing when dealing with feedback and other stage-related contamination.

Line ‘em up: subwoofer and full-range line arrays.

Let’s look at it like this: stack up a bunch of acoustic sources in a line – curved or not – and there will be a measure of acoustic addition and a measure of cancellation. Account for the band-limited parameters that always exist in relation to the transducer sizes and shapes (with DSP no doubt), and one can create a system that ultimately provides broad-band directional control.
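That addition-and-cancellation behavior can be sketched with the textbook array factor for a straight line of identical point sources. This is a simplified model I’m adding for illustration – not Apogee’s or any manufacturer’s design math – with the box count, spacing, and off-axis angle all assumed:

import math

def array_factor_db(n_sources, spacing_m, freq_hz, angle_deg, c=343.0):
    """Far-field level of n equal point sources in a straight line,
    relative to on-axis, at the given angle off the line's broadside."""
    k = 2 * math.pi * freq_hz / c
    psi = k * spacing_m * math.sin(math.radians(angle_deg))
    if abs(psi) < 1e-9:
        return 0.0
    den = n_sources * math.sin(psi / 2)
    if abs(den) < 1e-12:
        return 0.0  # grating-lobe direction: the sources add back to full level
    num = math.sin(n_sources * psi / 2)
    return 20 * math.log10(abs(num / den))

# Eight boxes on 0.35 m centers, evaluated 10 degrees off the long axis:
for f in (125, 500, 2000, 8000):
    print(f"{f:5d} Hz: {array_factor_db(8, 0.35, f, 10):+6.1f} dB at 10 degrees")

With these assumed numbers the pattern in the long axis of the line narrows dramatically as frequency rises, and once the spacing exceeds a wavelength the response starts to lobe – which is part of why real designs lean on waveguides, angled cones, and DSP rather than raw geometry alone.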

But it’s not so easy. Combining conical sources, such as 6- or 8-inch cone drivers in the MF range, so that they acoustically mate with HF waveguides that are designed to keep high frequency output as narrow as possible, does not make for an easy outcome.

All sorts of things are then done to try to get vastly different dispersion patterns to roughly agree with each other. Angling the cone drivers is usually the first step. The next is to provide various forms of physical baffles that interrupt the cone driver’s native dispersion characteristics. Finally, DSP is used in an attempt to manage pattern control.

The good news is that almost anything that produces sound, when combined in a closely coupled line, will take on a measure of vertical control all on its own.  Thus, most line arrays, whether carefully engineered and tested – or not – will exhibit a measure of useful pattern control.

As the world of professional audio continues to grow and develop, it’s likely that a steady rate of product improvement will be on the menu. The job of the end user is to separate idle claims from demonstrable benefits. And that’s something we, as a group, do pretty well.

Senior technical editor Ken DeLoria has mixed innumerable shows and tuned hundreds of sound systems with an emphasis on taming difficult acoustical environments, and as the founder of Apogee Sound, developed the TEC Award-winning AE-9 loudspeaker.

Posted by Keith Clark on 03/07 at 06:29 PM

In The Studio: My Top 10 Microphones

Here's what I find myself using over and over when I record if they're available...
This article is provided by Bobby Owsinski.

It’s time for another top 10 list, and this time we’ll be looking at the mics I use. Once again keep in mind that I’ve excluded lots of great microphones only because I don’t have frequent access to them.

Mics are such a personal choice because the decision comes down to how your ear matches up with what the mic is capturing.

That said, here’s what I find myself using over and over when I record if they’re available, but you could ask me again next week and the order might be different and some models would be replaced with something else.

1. Neumann U 87: I’ve had a love/hate relationship with the 87 over the years, but I have to say that it’s back on the love side of things. Without getting into splitting hairs on the models, I know what I’m going to get and the sound is always up to the high standard that I expect from the model. I love using it on toms, percussion, guitar and bass amps and vocals (especially in omni).

Royer R-121

2. Royer R-121: What a wonderful mic! It sounds great on most anything, takes EQ incredibly well, and is as rugged as a ribbon mic can get. I’ll use it on electric and acoustic guitars, overheads, percussion from a distance, and even on vocals occasionally.

3. AKG C414: Once again, you can split hairs over the models (there’s no doubt that the older versions sound better), but I know what to expect from the mic and it generally does not disappoint. I’ll use this most anywhere I’d use an 87 where the sound might need a bit more edge.

4. Neumann KM 84: This is one of the best small diaphragm mics ever, in my opinion. I’ll use a 184 if I can’t get the real thing, but once again, it’s not the same. I love it on the snare, hat and acoustic guitars.

Shure SM7B

5. Shure SM7B: One of the most overlooked mics ever, it’s really an SM57 on steroids with a much bigger and bolder sound. This is a great vocal mic, especially for singers with an edge to their voice. It’s also a great mic for voice-overs, which I do a lot of.

6. Electro-Voice RE20: This is one of my favorite kick drum mics. There’s no hype involved, just a nice big bottom if the drum has it in the first place. It also makes a great vocal mic on some singers.

7. Shure Beta 57: While many engineers swear by the regular old SM57 for guitars and snare, I like the newer Beta 57 better. It has a tighter pickup pattern so the isolation is better, and it’s a little crisper. I’ll use it on snare, guitar amps (along with an R-121), and hand drum percussion. I do prefer the older Beta 57 to the new 57A a little, but they’re much harder to find these days.

Mojave MA-200

8. Mojave MA-200: David Royer is a smart fellow and his tech knowledge extends beyond ribbon microphones, as his Mojave condenser mics testify. The MA-200 is a multi-pattern large diaphragm condenser that can be used anywhere I’d normally use a U 87. I’d have to say that it’s a bit smoother sounding, although sometimes that’s not what you want. I’ve used it on acoustic guitars, strings, and percussion. I’d love to try it on toms, but there are rarely enough of them around for an entire kit.

9. AKG C12: I have this down lower on my list not because it’s not a favorite mic, but because you can’t always find one. I love it on vocals, it’s magic on overheads and with strings, and it’s great on a bass amp (thanks Ken Scott, who uttered the words seared into my brain - “This is what I used to use on Paul.”).

ADK 3 Zigma C-LOL-12

10. ADK 3 Zigma C-LOL-12: That’s a mouthful of a model, but it’s basically an AKG C12 emulation and it’s a really good one. I like it on toms and bass. It’s very impressive for the price.

Once again, you might find some of the choices unusual. I’m not trying to say that other mics don’t deserve to be on a top 10 list somewhere, but these are the ones I find myself reaching for if they’re available.

Bobby Owsinski is an author, producer, music industry veteran and technical consultant who has written numerous books covering all aspects of audio recording. Get The Mixing Engineer’s Handbook here.

Posted by Keith Clark on 03/07 at 05:45 PM

Church Sound: Understanding Mic, Line & Speaker Signal Level

This article is provided by ChurchTechArts.

The topic for this post came out of a very real conversation I had with my new audio volunteers. We got to talking about the various signal levels we deal with in the world of audio, and it became very clear that they had not yet been exposed to any of this nomenclature. I figured it’s entirely possible that some out there are also unclear on what all these terms mean.

So this will be an introductory course on basic signal level. To keep it simple, I’m not going to go into all the background math that gets us here, or define every term; I’m going to stick with the practical implications of signal level. With those two caveats, simple and practical, let’s begin.

Level And Resistance
Two terms that need defining are level and resistance. For the purposes of this discussion, level refers to voltage. Voltage is analogous to water pressure. More pressure, more flow. Voltage is a measurement of the amount of force behind the signal traveling down the wires.

The voltage we’re dealing with is pretty small, so to make it easier for us sound guys to deal with, we don’t talk about it in terms of volts at all. Rather, we use the decibel, or dB. A dB is a unitless scale used to compare like values; in this case voltage.

We have a reference value, in this case 1 volt = 0 dBV (the V signifies voltage), and we reference everything else to that voltage. Thus, -60 dBV corresponds to 1 millivolt. Someday I’ll go into the math on how we get there; for now, take my word for it. And I encourage you to do some research on your own to learn more about voltage.
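For those who do want to peek at the math, here’s a tiny sketch of my own using the standard 20 × log10 voltage formula behind the dBV scale:

import math

def volts_to_dbv(volts):
    """Voltage expressed relative to the 1 V = 0 dBV reference."""
    return 20 * math.log10(volts / 1.0)

def dbv_to_volts(dbv):
    return 10 ** (dbv / 20)

print(round(volts_to_dbv(0.001), 1))   # 1 mV, typical mic level -> -60.0
print(round(volts_to_dbv(1.0), 1))     # the reference itself    ->   0.0
print(round(dbv_to_volts(-60), 4))     # and back the other way  ->   0.001

(Professional “+4” line level uses the same formula, just referenced to 0.775 V instead of 1 V, which is why it’s usually written dBu rather than dBV.)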

The other term we need to know is resistance, which is measured in ohms. Resistance (often expressed as impedance in our world—and yes, I know that impedance is actually DC resistance plus reactance; I’m trying to keep this simple) is exactly what it sounds like: the resistance, or impediment, to signal flow.

Going back to our water analogy, think of impedance as the size of the pipe you’re pushing water through. Obviously it’s a lot harder to push a large volume of water through a straw than it is a 4-inch-diameter pipe. In audio-land, we have two basic values of impedance, low (roughly 250-600 ohms) and high (roughly 1,000-10,000 ohms). At this time, don’t worry too much about the exact values, just get the concepts. Impedance matching is a whole ‘nother post.

Right, so we have that down? Signal level (dBV) and Impedance (low and high). With that as our backdrop, let’s consider the three most common types of signals we face in audio; mic level, line level and speaker level.

Mic Level
Think of a mic level signal as a low-level, low-impedance signal. Mic level is nominally around -60 dBV, so we’re looking at 1 mV (mV=millivolt, or 1/1000 of a volt, or .001 V), give or take. The impedance is also low, in the area of 250-600 ohms.

Now, even though the voltage is low, we can send mic level signals a reasonably long distance because the impedance is fairly low. When sent over a good balanced cable, mic levels will travel hundreds of feet and arrive pretty much intact.

We see mic level coming from mics (obviously) as well as DIs (direct injection boxes). A DI turns unbalanced, high impedance signals into a balanced, low impedance mic level signal so it can be sent a longer distance.

I’ll deal with balanced/unbalanced signals in another post; for now think of them this way—balanced = 3 wires = better for longer runs, unbalanced = 2 wires = good only for short (2-15-foot) runs.

Line Level
Most professional audio gear running at line level operates at a nominal +4 dBu (the same decibel idea, but referenced to 0.775 V rather than 1 V), which corresponds to roughly 1.2 V. Whereas mic level signals are almost always balanced, line level can be either balanced or unbalanced.

Line level is typically high impedance, on the order of 1,000 ohms (1K ohm) or so, but because the signal level is so much higher than mic level, we can send it long distances (at least if it’s balanced).

I want to pause here for a moment to consider some practical implications. Let’s say you plug a line level signal into an input that’s designed for mic level. What would happen?

Going back to our numbers, you have an input that’s looking for about 1 mV and you shove more than a volt into it. That’s over 1,000 times as much signal as it’s expecting (see why we use dB instead of volts? We can say “a difference of more than 60 dB” instead of “a ratio of more than 1,000”). You don’t have to be an electrical engineer to guess that the result will not be pleasant. While the input is not likely to be destroyed, the audio signal will be. Gross distortion will be the audible result.

On the other hand, if we plug a mic level signal into an input that is expecting a line level signal, what might happen? Again, we’re feeding a signal that’s roughly 1,000 times lower than expected, so the result will be low signal level and high noise. Starting to make sense?
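To put rough numbers on both scenarios, here’s a small Python sketch using the same voltage-ratio math as before (the 1.58 V figure is simply +4 dBV expressed in volts):

```python
import math

def level_difference_db(actual_volts, expected_volts):
    # Positive = hotter than the input expects; negative = weaker than expected
    return 20 * math.log10(actual_volts / expected_volts)

print(round(level_difference_db(1.58, 0.001)))   # line into mic input: ~ +64 dB too hot
print(round(level_difference_db(0.001, 1.58)))   # mic into line input: ~ -64 dB too quiet
```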

Speaker Level
The third common type of signal we deal with in audio is speaker level. Speaker level is very high level (roughly +21 to +39 dBV, which works out to around 11 to 89 volts) and very low impedance (typically 4-16 ohms).
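For a sense of where those speaker-level voltages come from, the voltage across a loudspeaker relates to amplifier power and load impedance by V = √(P × R). Here’s a quick illustrative sketch in Python (the power and impedance figures are just examples):

```python
import math

def speaker_volts(power_watts, impedance_ohms):
    # RMS voltage needed to deliver a given power into a given load
    return math.sqrt(power_watts * impedance_ohms)

print(round(speaker_volts(100, 8), 1))    # 100 W into 8 ohms: ~28.3 V
print(round(speaker_volts(1000, 4), 1))   # 1,000 W into 4 ohms: ~63.2 V
```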

With that kind of signal level on hand, it’s pretty clear why we don’t want to plug a speaker level signal into a mic level input. That might actually blow something up. And it’s also why we can’t drive a speaker with the output of a microphone, at least not directly.

Now we could talk about the intricacies of these signal levels for the next two weeks, but I’m trying to keep the post length manageable. In the meantime, do some research on your own. You’ll be amazed at how much more of audio makes sense when you have a firm grasp of these concepts.

Mike Sessler is the Technical Director at Coast Hills Community Church in Aliso Viejo, CA. He has been involved in live production for over 20 years and is the author of the blog Church Tech Arts. He also hosts a weekly podcast called Church Tech Weekly on the TechArtsNetwork.

Posted by Keith Clark on 03/07 at 03:03 PM

Three Tips For Overcoming Open-Space AV Challenges

Open-plan office layouts look nice, but where is the AV gear supposed to go?
This article is provided by Commercial Integrator

One of the many challenges facing commercial audiovisual professionals today is the lack of space for installed equipment. The move to open space in the corporate world has burdened many design engineers with the challenge of where to put the gear.

In many cases, the traditional equipment rack is being removed from the room or eliminated altogether. Today’s office spaces are taking on the characteristics of living rooms, home-style kitchens and dens. Gone are the cookie-cutter, four-walled conference rooms and cubicle spaces.

This switch in office design is pushing us in the AV industry to change our traditional approach to system integration.

The residential side of the AV business has been dealing with the open-plan concept for years. Unlike the residential side, the commercial industry has been slow to develop commercial-grade products that are both aesthetically pleasing and functional. Commercial product manufacturers have favored durability, functionality, and practicality over aesthetics.

Don’t get me wrong—this is not a bad thing to strive for, but can we at least try to make it look appealing if left out in the open?

Take a step back next time you install a new commercial display with a 2-inch-thick bezel or one that sticks out 4 inches from the wall and ask yourself if it looks good. Good luck explaining how this is the latest in technology when the customer expects something as sleek as the TV they just installed in their own living room.

So what can we do?

AV designers and engineers have to do their homework. Look to the residential space and see what is being used in the homes of your customers. I’m not asking you to specify consumer products into commercial jobs solely on the aesthetics, but look for design cues that can be translated into the commercial space and incorporate equipment with the aesthetic in mind when designing.

Here are some ideas:

Think Small—Where space is limited you have to design accordingly. The days of rack rooms and closets are dwindling quickly. Smaller, multifunctional gear will obviously be key to designing in tight spaces. If a product does not meet your requirements, make a call—there are plenty of manufacturers in our industry looking for input on improving designs. Products like in-wall boxes that can house equipment are one way to make small spaces work.

Survey Your Space—Use your surroundings to your advantage and maximize every nook to stash equipment without compromising looks or serviceability. Integrating furniture into your design is a great win when space is tight. Even the traditional mindset of ceiling-mounted speakers is being challenged as architects and designers move toward higher ceilings to let the light shine through. Look to soundbars to achieve quality audio where ceiling speakers may not be feasible.

Get Creative—Think outside the box when it comes to delivering on customer expectations. Listen to what they want and find technology that can help make it possible. That may include breaking away from traditional go-to products. In other words, get your test lab ready: to be creative and maintain your reputation, you will have to get “hands on.” Ready your bench area and your imagination, roll up your sleeves, and test new ideas, concepts, and products before unleashing them on your customer, in order to ensure success.

Compare what consumers are buying for their homes and find the commercial equivalent. You may have to design a hybrid combination of consumer and commercial gear to achieve that goal but that’s what we as “integrators” specialize in where others fail.

Christopher Neto, CTS, is a consultant with AV Helpdesk Inc., a firm specializing in all aspects of AV design, engineering, project management, and programming. He is an active member of InfoComm and blogs regularly on his website AVshout.com.

Go to Commercial Integrator for more content on A/V, installed and commercial systems.

Posted by Keith Clark on 03/07 at 01:27 PM

Tech Tip Of The Day: 70.7 Volts Of Confusion

I tried adding another ceiling loudspeaker but then everything went dead. What's that all about?
Provided by Sweetwater.

Q: I help out with the audio at my church, and when our choir director asked me to add a loudspeaker for the choir loft, I thought it sounded a little challenging but not overly difficult.

Well, when I connected the wire to one of the loudspeakers in the hallway, the entire system went dead! The plate on the back of the loudspeaker said something about 70.7 volts. What’s going on, and what the heck is the 70.7 volts all about?

A: Indeed, this situation is a bit more complex than you’d anticipated, and the answer is actually quite involved. However, we’ll save the detailed explanation of constant-voltage distributed audio systems for another time and just provide the information that’s pertinent to your situation.

In your case, you’re going to need a “70.7 volt line transformer” to add a new loudspeaker to the system.

First, however, note that there are two types of loudspeaker wiring systems. The first is the type you’re familiar with, the “voice coil” system, which typically connects a loudspeaker (or loudspeakers) directly to the power amplifier. The speaker wire length is usually less than 30 feet.

The second type is called “70.7V” (commonly just “70 volt”), which uses a special transformer at each loudspeaker and a special 70-volt output on the amplifier. 70-volt systems are used in multiple-loudspeaker installations where wire lengths may be long, typically hundreds of feet.

70-volt transformers have input “taps” (1, 2, 5, 10 watts) to select the amount of power you wish to take from the system for the loudspeaker, which doesn’t necessarily have anything to do with the power rating of said loudspeaker.

They also have output taps to match the impedance of the loudspeaker (4, 8, 16 ohms) being used, and this does need to be matched to the speaker for proper operation. It’s important that the total power requested (the load) does not exceed the total power available from the amplifier.

To calculate the “load,” add together the power requested by each loudspeaker transformer in the system.

For example: If you have a 100-watt amplifier and your system has two 20-watt loudspeakers and ten 5-watt loudspeakers, you have a 90-watt load. (2 x 20 watts = 40 watts; 10 x 5 watts = 50 watts; 40 watts + 50 watts = 90 watts) Therefore, you can safely add a 10-watt loudspeaker, or two more 5-watt loudspeakers.
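If you’d rather not do the tap arithmetic by hand, here’s a minimal Python sketch of the same load check (the tap values and amplifier rating are the ones from the example above):

```python
amp_power_watts = 100
taps = [20, 20] + [5] * 10   # two 20-watt loudspeakers, ten 5-watt loudspeakers

load = sum(taps)
headroom = amp_power_watts - load
print(f"Load: {load} W, headroom: {headroom} W")   # Load: 90 W, headroom: 10 W

# A new loudspeaker is safe to add only if its tap fits within the headroom
new_tap = 10
print("OK to add" if new_tap <= headroom else "Too much load for this amplifier")
```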

For more tech tips go to Sweetwater.com

Posted by Keith Clark on 03/07 at 10:46 AM

Thursday, March 06, 2014

Ongoing Innovation: Advances In The Art & Science Of Microphones

Particularly over the past couple of decades, meeting the needs for greater freedom and flexibility in live performance has meant that microphone companies have expended considerable effort designing and building the hardware to deliver the output wirelessly. Virtually every mic manufacturer has its own brand of wireless, and these units make up a significant portion of their microphone sales.

Add the ongoing changes in the RF landscape and a voracious demand for spectrum from communications and consumer devices, and the pressure for manufacturers to pour resources into the wireless side of mic development becomes relentless. As an example, within the past three years we’ve seen new digital wireless systems – in some cases several models – from most of the established players.

But what about the microphones themselves? What improvements, technological advances, and innovations are being created within the transducers themselves to allow them to better capture the acoustic sounds of voices and instruments – before they’re delivered to the sound reinforcement system?

Critical Parameters
Mic designers work with a number of key parameters, including the range of frequencies the mic can effectively capture, the flatness of that response over those frequencies, the consistency of the coverage pattern over the relevant frequency range, sensitivity to the nuances of the sound, maximum SPL before unacceptable distortion is introduced into the signal, the minimum and maximum distances at which the mic can capture a usable signal, and gain-before-feedback in a live sound reinforcement setting.

Additional critical factors include the electrical noise floor the transducer system produces, resistance to breath and wind noises, attenuation of handling noise and other mechanical vibration, filtering and other internal equalization selections, amount of proximity effect when used closely, and a form factor that lends itself to the desired application. How are manufacturers balancing, varying, and improving these and other parameters?

Frequency Response
The goal is to achieve as flat and even a frequency response as possible across the desired spectrum – nominally 20 Hz to 20 kHz – yet with desirable variations to benefit certain applications.

Having a vocal mic be sensitive to very low frequencies would do nothing for capturing a voice whose fundamental is more than two octaves above 20 Hz, and would pick up extraneous stage noise and lower frequencies from bass and drums; thus a mechanical, electronic, or combination high-pass filter is useful for vocals.

DPA d:facto II microphone, capsule for wireless, and wireless adapter.

A “presence peak” to emphasize some of the overtone frequencies produced by consonants aids intelligibility, and is commonly designed in. However, the ideal vocal mic might not have the low-end sensitivity to mic an acoustic bass or kick drum, and a flat response without a presence peak would likely sound more natural on an acoustic guitar or wind instrument.

With today’s sophisticated DSP options available for equalizing the mic’s raw signal, smooth adjustments for both tone quality and intelligibility can be made after the fact – as long as the desired frequencies have been captured by the microphone.

Frequency response of the Earthworks SR40V.

The design goals for several recently introduced vocal mics have included a very flat frequency response across the vocal fundamentals and overtones to yield a neutral, “transparent” sound quality. The Earthworks SR40V and the DPA d:facto II vocal mics emphasize linearity in their frequency response, and both companies’ background in creating measurement mics for acoustic engineering and system optimization applications contribute to this trait. 

Earthworks mics capture frequencies up to 40 kHz, according to Daniel Blackmer, because “the ear is highly sensitive to phase and time differences to localize a sound source and perceive it as ‘live’.”

Neumann KMS 105 and its onboard PCB.

On the other hand, DPA doesn’t emphasize these highest frequencies, yet within the full vocal range its frequency response stays within +/-2 dB.

The supercardioid Neumann KMS 105 and Shure KSM9 are also vocal condensers that exhibit a relatively flat, natural frequency response. Neumann varied the KMS 104-plus cardioid vocal mic specifically for female vocal in the pop and rock genres, with “a more extended bass frequency response.”

Polar Pattern Control
Continuing with the KSM9, Shure recently released a version called the KSM9HS, featuring switchable hypercardioid and the rarely used subcardioid patterns.

Both KSM9 variations use a dual-diaphragm design, which in general provides greater polar pattern control at lower frequencies than a single-diaphragm design. This technology attenuates these lower frequencies at the rear so the mic is less likely to pick up audio from behind and can be less prone to feedback.

From my tests with this mic a few months ago, the subcardioid pattern resembles an omni that is attenuated at the rear, with a wide frontal pickup having similar level and frequency response, yet being resistant to feedback. The audio quality of both patterns is almost the same, with the hypercardioid setting exhibiting moderate proximity effect when used closely.

Electro-Voice has moved its proven Variable-D technology, which minimizes proximity effect when using a directional mic at close distances, into the RE320 dynamic vocal and instrument microphone. It’s especially useful when close-miking instruments with lower fundamentals such as acoustic bass or larger brass instruments, or to preserve the tonal quality and apparent level of a singer or speaker when they’re continually moving closer and farther from the mic. (The RE320 is the latest generation of the RE20 and RE27 designs.)

Shure KSM9HS offers a selection of polar patterns.

Miniature capsules and directional control aren’t an obvious pairing, yet both Countryman and DPA have made significant advances. With the new H6 headset, Countryman offers omni, cardioid, and hypercardioid patterns, enhanced with a set of acoustic “caps” that enclose the mic element.

The directional elements themselves use a “micro-drilling technique normally used to create cooling holes in jet turbine blades,” with precisely tailored arrays of holes that are smaller than a hair – with each capsule tested and adjusted for maximum null depth and consistent frequency response. The caps allow the user to select the polar pattern that is best for a given application, and switch to another one at a different time. Other caps vary the high-frequency response from flat to enhanced.

In the 4099 series of instrument mics, DPA maintains directionality with the use of acoustic interference tubes integrated into the assembly. A precise balance of tube length, materials, and porting allows directional sound to enter the mic without interference, while off-axis pressure waves are acoustically canceled within the tube with minimal coloration.

This effective technology has been enhanced over the past decade, and is also used in the 4080 cardioid and 4081 supercardioid miniature instrument mics and the 5100 surround sound mic. The result is a smooth and accurate directional capture of the audio source.

Tiny mics with transparent full-range performance extend the ability to unobtrusively place a mic on an instrument, vocalist, or actor/presenter in a live setting, and achieve sound quality akin to using studio-type models.

Audio-Technica BP894 with capsule within a rotating housing on the boom.

Audio-Technica has recently released the BP894 headset, with a miniature cardioid element. The capsule is contained within a small rotating housing at the end of the mic boom, allowing the active side to be precisely positioned toward the corner of the mouth. Unlike most headsets, where the capsule comes right off the side of the boom, A-T placed it at a right angle to the boom so that it has a T-shaped profile.

Countryman has pioneered miniaturization in headset applications. The H6 is offered at three sensitivity levels, to accommodate applications ranging from normal speaking to high-level operatic vocals. DPA also focuses on these applications, producing mics that are widely used on acoustic instruments, pianos, and more.

Proprietary pre-polarized backplate technology yields a combination of high sensitivity and extreme SPL handling. Both companies provide a wide variety of adapters for different applications and instruments, ranging from violin to woodwinds to acoustic guitars.

Countryman H6 with rugged cables and acoustic “caps” for the mic element.

Miniaturization means thin cables and tiny connectors, which must be extremely durable, as well as water-resistant and immune from RF and EMI interference. According to Chris Countryman, the H6 uses para-aramid fibers (in the Kevlar family) to “more than double the pull strength” of the mic’s cables, along with specialized polymers for the inner insulator and outer jacket to maximize puncture resistance while minimizing induced mechanical noise.

He adds that water resistance is a big focus: custom connectors based on aerospace designs, combined with housings made of medical-industry plastics and hydrophobic coatings, help achieve an IP67 water and dust protection rating.

EQ, Filters, Switches
As much as designers strive to create a certain frequency response curve with their mics, sound engineers sometimes need to modify that response for particular applications. Some manufacturers offer internal circuitry for that purpose.

One of AKG’s latest achievements is Versatile Response analog filter circuitry. It can be found in the D12 VR large-diaphragm dynamic cardioid mic for vocal, bass drum, and bass amp applications. 

The newly designed capsule has an extremely thin diaphragm that is especially sensitive to lower frequencies, mated with the VR circuitry and the original transformer used with the classic 1970s-era C414 studio mics. When the mic is phantom powered, one of three switchable active filter presets can be selected to customize the frequency response; these settings are for open and closed kick drum, and “vintage sound.” The mic can also be used without phantom power, delivering the unfiltered response of the capsule.

The new Telefunken Elektroakustik M82 large-diaphragm microphone also incorporates analog filter circuitry, with two switches that act independently to yield four frequency-response settings. These passive circuits are “Kick EQ” and “High Boost,” with the former reducing midrange frequencies centered around 350 Hz and the latter tilting up the mid and high frequencies with a knee at about 2 kHz. With both filters off, the mic can be applied to vocals, guitar amps, and brass instruments.

AKG D12 VR with Versatile Response analog filter circuitry.

The Avlex Superlux PRO-38MKII condenser vocal mic has an internal 12 dB/octave high-pass filter centered at 100 Hz to minimize pickup of stage noise. Its 1-inch gold-plated thin-film diaphragm is very responsive, as well as fostering a well-behaved cardioid polar pattern and a flat, uncolored frequency response.

Mic diaphragms vary in size, thickness, tension, and material – each of which affects the sensitivity, frequency response, and a variety of other characteristics. Audix uses proprietary VLM (Very Low Mass) technology within its OM Series handhelds, based on “a very lightweight diaphragm that allows for extremely fast, accurate processing of incoming signals” that yields extended frequency response with high SPL handling.

The AKG D7 vocal mic employs a specially developed Laminate Varimotion diaphragm – which varies in thickness from center to edge – allowing response tuning within the mic assembly without the use of acoustic resonators. This technology is also used in the D5 vocal and D40 instrument mics.

Audio-Technica AT4081 ribbon with higher output from a neodymium magnet motor.

Traditionally, ribbon technology has been more delicate, and its use in live sound was pioneered by beyerdynamic. The company’s recently introduced TG V90r handheld vocal mic uses an ultra-light, thin aluminum ribbon for excellent responsiveness to transients, housed in a well shock-mounted body. The frequency response covers the vocal range, with specifications of 50 Hz to 14 kHz, with high SPL handling and an output sensitivity of -61 dBV.

Audio-Technica holds several patents in this technology, including the Microlinear ribbon imprint that protects dual ribbons from lateral flexing and distortion.  The AT4081 provides a higher output level with the use of a neodymium magnet motor structure, and it has the audio quality to be used for recording and the durability for live use.

Ribbon technology is also used in the Royer R-122 Live and Cascade Fat Head II for guitar cabinet miking and similar applications, while the AEA N-22 can be used for applications ranging from vocals to acoustic instruments to cabinets.

Shure KSM313/NE and KSM353 ribbon mics include proprietary Roswellite “molecularly-bonded film” ribbon material, which provides the tensile strength and durability to withstand high SPL applications. These mics are precisely manufactured, with each ribbon frame being optically measured and its ribbon then custom-cut with a laser to fit it exactly – to a tolerance of .001 of an inch.

These manufacturing processes are normal for Shure, notes John Born, product manager for wired microphones. An emphasis on documentation and process control in engineering and manufacturing, consistency, and optimization is central to a philosophy that states “performance shall not change” over the years and manufacturing runs for a particular microphone model – even as internal changes might be made to improve consistency or find substitutes for obsolete components. 

With the introduction of the Digital 9000 wireless system, Sennheiser also made changes to its mic capsule suspension. According to the company’s Brian Walker, the new suspension system “greatly reduces mic handling noise while maintaining excellent audio quality.”

Sennheiser’s new mic capsule suspension.

The bands hold the capsule from six posts, offering significant mechanical isolation from the rest of the mic assembly. Sennheiser has also standardized all of its wireless mic capsules to be used interchangeably across the evolution, 2000, and 9000 Series systems, so that an investment in a particular capsule may be separate from the wireless delivery method.

Across a variety of handheld mic brands and models, interchangeable mic heads thread onto the wireless transmitter using a 3-conductor, concentric-ring connector that mates with spring-loaded pins within the transmitter body. This innovation allows a preferred mic head to move across a variety of transmitters, rather than the former method of hard-wiring the head into the transmitter.

These are just some of the examples of the ongoing innovations in mic technology. What’s currently available provides many excellent choices, yet I strongly suspect the search for ever more transparent, unobtrusive, durable, and overall great-sounding microphones will continue.

Gary Parks is a pro audio writer who has worked in the industry for more than 25 years, including serving as marketing manager and wireless product manager for Clear-Com, handling RF planning software sales with EDX Wireless, and managing loudspeaker and wireless product management at Electro-Voice.

Posted by Keith Clark on 03/06 at 07:18 PM

In The Studio: Understanding & Applying Reamping

This article is provided by Audio Geek Zine.

Reamping is a technique used by recording and mixing engineers to process recorded audio with analog hardware, specifically guitar equipment.

A reamp box takes a clean, direct guitar or bass signal that was recorded earlier and runs it back out into a guitar amp later in the production process.

It’s often said that a reamp box is just a passive DI box in reverse, which is untrue. A DI box converts a high-impedance, unbalanced, instrument-level signal to a low-impedance, balanced, mic-level signal. A reamp box converts a low-impedance, balanced, line-level signal to a high-impedance, unbalanced, instrument-level signal.

The goal of the reamp box is to make the amplifier react in exactly the same way a live guitar would, but with a pre-recorded audio source.

Without the reamp box there will be an impedance mismatch and loss of tone. Hearing your guitar rig play itself can take some getting used to!

How To
Once you have the clean DI recorded, set the track’s output to analog out 3 of your interface (you will need an interface with more than just monitor outputs). No other tracks should be set to this output. 


Connect output 3 of the interface to the reamp box with a TRS to XLR-M cable. Connect the reamp to your pedals or amplifier with a standard instrument cable.

Check that you’re getting a good signal to the amplifier, and adjust the output level to match a live guitar input. If the signal is noisy, try the ground lift switch.

Record the amplifier as normal with one or more microphones. Double check your routing to prevent feedback.

Why Reamp?
The decision to reamp could be for a number of reasons:

Convenience: You may like to write and record guitar parts at 3 am, but your neighbors just don’t understand. Record with an amp simulator silently and reamp through your big rig later.

Budget: There are many professionals with big amp collections offering reamping services for much less than the cost of even renting the amp of your dreams.

Experimentation: After you’ve captured the perfect take, you can experiment with different amps, cabs, mics, and mic positions as much as you like.

Plan B: All too often we are rushed in the studio, and it’s easy to make a mistake with mic position or end up with a bit more overdrive than we’d like. If the performance is fine, we only have to tweak the amp and reamp rather than re-record.

For Best Results
While there are many fancy direct boxes, if you’re planning on reamping, you will likely find the best results are achieved through the simplest direct capture.

A tube DI or tube preamp may be too colored and change the sound of your amp. Warming up the sound of the mic on the amp, rather than the DI, is the better option. Feel free to experiment, of course.

Usually the signal you send to the amplifier will be completely unprocessed; however, you may wish to experiment with gating, light compression, and EQ.

Much like the way your guitar volume affects the character of the amp, sending the right level out of your DAW can be critical.

Jon Tidey is a Producer/Engineer who runs his own studio, EPIC Sounds, and enjoys writing about audio on his blog AudioGeekZine.com.

Posted by Keith Clark on 03/06 at 06:03 PM

Church Sound: IEM Mixing For Worship

Many worship venues have made the transition from “wedges” to “ears” for stage monitoring purposes, and often find that this can be a surprisingly tricky process.

Essentially, wedges are loudspeakers that are laid sideways and angled up at the performers. The sound content or “mix” in each monitor, or group of monitors, is customized for the performers’ needs and each often have a much different balance than the house sound mix.

In-ear monitor systems have become popular in recent years. This places earphones in the ear of the performer, mostly sealing the ear (both generic fit and custom molded models are common). They may be mixed from the front-of-house console, a dedicated monitor console, a personal on-stage mixer, or a wireless computer or tablet. Regardless of the type of delivery or mixing system, monitoring earphones (either wired or wireless) usually eliminate the need for a monitor loudspeaker “wedge.”

There are a number of advantages to in-ear monitoring including: lower stage volume (no open wedges blaring, better house sound, greater gain-before-feedback), artist mobility (no sweet spot to stand in as with most wedges), hopefully lower monitoring levels for increased hearing safety, the ability to listen deeper into the monitor mix details, a custom mix and loudness for each user, a discreet path for talkback to the user’s ears, system portability (wedges weigh a whole lot more than earphones), aesthetics, and acoustic isolation.

Those last two points may be argued as disadvantages too, but most users agree that earphones are visually less distracting than wedge monitors. Read on for more regarding the acoustic isolation.

Worship leaders and techs transitioning to in-ear monitoring should plan for increased communication and expect to spend more time building the right monitor mix. The acoustic isolation (occlusion effect) of in-ear monitoring offers extreme control, but also requires more attention to detail. But before going further into that, let’s think a bit about human hearing and stage monitoring. Consider this scenario:

A worship leader is downstage center with a single wedge monitor and his guitar and vocal mic. The tech mixes both signals into the wedge plus any other signals the WL requests, which might include other instruments. He monitors comfortably. Does he monitor in mono or stereo?

Well, it is true that a single wedge monitor reproduces a mono audio signal, but is that all he hears? No. He not only hears the sound from the wedge, but also the sounds from all around him including other performers, audience sounds, room reverberation, and more. He hears all these things with a true sense of space and dimension. Humans hear binaurally. Next, consider this scenario:

A church buys its first wireless personal monitoring system to replace the worship leader’s wedge. The sound tech removes the wedge and routes the regular monitor signal to the in-ear monitor system, whether wireless or wired (mixed from FOH). At sound check the WL puts his new earphones in and soon says, “My mix is different!” The tech responds, “Nope, it’s the same mix you’ve always had.” Who’s right?

They both are. With the wedge, the WL heard the monitor signal and his acoustic surroundings as an integrated listening experience, with both ears open. Now that his ears are essentially plugged by earphones, he hears only the monitor signal provided in them, and does not hear his acoustic surroundings. He relies nearly 100 percent on the mix he receives. 

So, the WL is correct that he’s hearing a different balance than before, and the audio operator is correct in the sense that he’s feeding the same old monitor mix to the WL. The difference is the delivery method, with its DRAMATICALLY different acoustic experience. For this reason, the transition can be startling and potentially frustrating for new in-ear monitor users.

Professional earphones for stage monitoring are designed to seal the ear, acoustically isolating the user from nearby sounds. When people with normal hearing close off the opening to the ear canal, things change big time. Occlusion/isolation is a big deal, but so is the way that the user hears their own voice via bone conduction (vibration of the bones in the head).

Typically vocalists express a sudden change in the tone of their own voice when using earphones for the first time. Our WL mentioned earlier would have certainly noticed this. He didn’t hear much of his voice through air conduction any longer, due to occlusion.

Before the vocal microphone is mixed into the in-ear mix, most of what a vocalist hears is the bassy, muffled sound of their own voice due to this bone conduction. For this reason, vocalists often have the toughest time adjusting to wireless personal monitors.

So a vocalist using in-ear monitors is certainly going to need to hear others on the stage (such as the band or orchestra) in their monitor mix, but usually needs a lot more of their own vocal. This helps overcome the occlusion. The vocal will often need to be, by far, the loudest thing in their mix.

If vocalists do not hear sufficient level of their own voice, the bone-conducted tone of their voice is predominant and they are uncomfortable.  Ever heard a vocalist trying “ears” for the first time say, “I sound really weird!”? This is probably why. Instrumentalists using professional, sealed earphones also experience isolation, but they do not have the challenge of their instrument being mounted in their skull :>),  and hence, don’t have to deal with tonal distortion due to bone conduction.

So because of the isolation provided by proper earphones, artists no longer hear sounds naturally as they traditionally have with wedges. If there is something they want to hear, it must be deliberately routed to their monitor mix. It becomes critical that the sound tech auditions the monitor mix with earphones, preferably of the same type. And mix adjustments that required four or five knob “clicks” with a wedge might need only two or three “clicks” (or fewer) in great earphones. Sonic details are simply much more obvious.

Consider a worship leader with a choir behind him: in a wedge application, he may hear plenty of the choir naturally, without any choir being folded back. But with “ears,” he will certainly want the choir mixed in his mix if he intends to hear them at all.  So the acoustic isolation offers wonderful control, but requires increased attention and effort.

Full Vs. Partial Mixes

A straight-up full IEM mix might sound much like the front-of-house mix, a commercial CD mix, or similar, with every element processed and blended at the proper “finished product” balance. A typical IEM mix, on the other hand, intentionally omits non-essential elements (for that particular user!) so that the remaining elements may be monitored clearly, without unnecessary “clouding” from a busy mix. Clouding is sometimes also loosely referred to as “crowding.”

For instance, a bass player’s IEM mix in a modern worship band setting will certainly have his bass, the kick drum, the basics of the rhythm section, the lead vocal, and maybe a few other things he may request. But it might omit the choir mics, orchestra sounds, background singers, playback devices, “talking heads,” or other elements that are not really essential in helping him get the pitch and time cues he must focus on.

It is often helpful to remind each other (musicians and techs alike) to contrast the terms “listening” and “monitoring,” and remember the purpose of stage monitoring. This can sometimes become a race for the perfect full mix in a user’s wireless personal monitor, when oftentimes that’s really not the point at all (if it were, we could all save a ton of money and effort by routing the main PA mix to all IEMs, but that would be a disaster…)

Downward Mixing

“Downward” or “subtractive” mixing describes the idea of “less is more” in monitoring mixing, and this technique applies very well to both wedges and earphones. So when we have an artist continually asking for more and more level from various sources in their ears, we should instead turn other elements down.

The artist still gets the balance adjustment they desire, but without an overall volume increase. And this is also a better approach when it comes to the science of proper gain structure in our mixing consoles and personal monitor mixing products (wired or wireless).

Mono Vs. Stereo Mixes

Mono in-ear mixes can be made to work. But those that use their systems in stereo eventually discover that there is a world of increased monitoring flexibility available to them. And humans aren’t designed for mono.

A mono IEM mix means that everything is heard “dead center.” That is, above the head in the virtual center of the sound image, or “phantom center.” A stereo mix allows the placement of sources to be panned across the stereo space in the listener’s head.

Stagger Panning

Here, various sources are intentionally panned in different places across the stereo image for the purpose of “un-mixing” them for monitoring.  It is interesting to watch and see that musicians can (whether consciously or not) train themselves to “point” their listening to different directions in their head, depending on what sound they want to focus on at any moment. 

It is important that the user’s own “me” signal stays prominent and in the center/top of their head, or “up the middle.” Say a musician has his acoustic guitar and the worship leader vocal both placed center in his head (good), and an electric guitar is panned to 10 o’clock in his ears, another guitar may be panned to 2 o’clock, stereo keyboards might be mixed real wide across or not, and some other sources might be panned to 11 o’clock, or 4 o’clock, and so on. While we would usually not do this for a mix intended for an audience, this “un-mixing” by stagger panning can be very effective for stage monitoring. One excellent worship bassist stated:

“When running an IEM system in mono, I hear the mix dead center. That is a problem when I need to hear kick, snare, overhead drum mics, bass, acoustic guitar, electric guitars, click, percussion, background vocals, the worship leader, a choir, loops, etc… I have to choose three to five things to monitor and everything else takes the back seat…”

(he just described clouding from a full mix)

“...when I use IEMs in stereo, I have a much larger sound field to use. I’ll pan background vocals slightly left, the worship leader slightly right, acoustic around 30 percent right, piano around 30 percent left, kick and bass dead center, overhead drum mics around 50 percent right, and so on…”

(sounds like his version of stagger panning)

“With a stereo mix, things don’t compete as much… In mono, the only way to get more room is to increase the gain, which takes my mix louder, whereas a stereo mix allows me to take my mix wider. In fact, I am able to use 25-35 percent less volume with a stereo mix.”—Andrew Catron, associate of worship, Lee Park Church

And here is a quote on this topic from a veteran professional monitor mixer:

“...I’ve found that creating a stereo mix with slight spread of sources with the artist’s own voice or instrument dead center allows me to keep levels under control. I also get a lot less of the ‘more me’ requests with this approach.”—Scott Fahy, lead audio engineer, Living Word Christian Center

While Catron has a good working audio knowledge, he is a musician first and it’s interesting that he sorted out the thoughts above while transitioning from a mono to stereo in-ear mix. Fahy is not a musician but a very skilled and experienced audio engineer, and usually provides several dozen monitor mixes at a time on a dedicated console in a complex worship environment. Both, from very different approaches, are convinced that stereo (vs. mono) in-ear monitoring makes for easier monitoring, happier users, and lower volume.

One Or Two Ears?

Yep, I went there… it’s important. 

Have you ever seen an artist remove one earphone on stage? Why is that? One common reason is that they “can’t hear” or are uncomfortable with their mix when wearing both earphones.

It’s most common with vocalists, in my own experience. They are certainly most comfortable with their raw, open ear(s), as they’ve been using them reliably their entire lives. But if the monitor mix is really needed and is suitably delivered to the earphones, they should wear them both. One good way to achieve this is to work toward a proper and comfortable IEM mix so that the artist is not tempted to remove either earphone. That may include ambient miking and mixing, and we’ll get to that shortly…

Using one earphone on the live stage often brings an accompanying increase in monitoring volume, finally controlled by the user at his/her bodypack receiver. This author has witnessed this in enough scenarios to note a trend: it seems that a single earphone (all else equal) tends to be run at least 10 dB louder than two earphones! 

One reason is simple: the open ear is not sealed and hears the array of surrounding stage sounds, which are often fairly loud, and the single earphone in the other ear naturally must be turned up considerably to compete clearly. Also, when one earphone is removed, “binaural summation” is defeated. This is a psychoacoustic phenomenon that very positively affects listener perception of loudness—but it only works with two ears. Yikes! 

So on top of the already increased monitoring volume, the loss of binaural summation causes even higher listening levels to be needed. It’s easy to believe, then, that a single earphone may be run well beyond twice as loud as two earphones. In the interest of hearing health and safety, anything we can do to minimize the sound pressure exposure for all users (IEMs, wedges, or any other application) is the right thing to do.

Avoiding the single ear method for extended use is highly recommended. The better move is to work toward a proper two-ear mix. It is worth the effort.

Ambience/Audience Response

Once the balance of sources is mixed well in an IEM mix, the hard part is done. And for some users, we’re finished.

But others feel the need to overcome the isolation, and we need to find a way around that. After all, performers on stage want to feel like they’re still in the venue with the worshippers—not in a tight iso-booth at the local recording studio. This means hearing the audience sounds and room ambience. Some of this “space” happens naturally through leakage, but sometimes we must deliberately provide it. Consider this:

A worship tech sets up a new wireless IEM for his worship leader and he knows that isolation is part of the game. So, he sets up a stereo pair of cardioid condenser mics in an X-Y configuration (Figure 1), front and center, facing the audience. This simple stereo technique provides a good image of the audience sounds and some room ambience. He pans the mics hard left and right (for the worship leader’s perspective) and blends them into the IEM.

Figure 1

This can work very well when blended just right. When the WL faces forward, it’s simple…  If, say, a sound comes from an audience member hollering a response or applauding on the worship leader’s left (house right) it will be heard and seen on the WL’s left. So, his eyes and ears agree, and the brain likes that.

...That is, as long as he remains facing forward, and center stage. But suppose he moves and turns to face a stage left guitar player during a musical moment, with his right ear now facing down stage. What has happened?

That same audience sound is still heard just as easily as before, but now there is a localization error. What is heard on the worship leader’s left side is seen on his right. His eyes and ears disagree. The brain hates this. With stereo IEMs, this “stationary ambience” issue may be a problem for stage performers. His head orientation moved, but his artificial ears (the ambience mics) did not.

Our eyes and ears like to perceive sources from the same, correct direction, and when they don’t agree, it’s a problem. In some cases, it’s just annoying. In other cases, it can be completely disorienting.

One approach would be to have a monitor operator updating the pan pots of the ambient mics on the fly, following the artist in real time by watching their movements and updating the directional cues. Yeah, right! Not a very reliable or repeatable solution. 

Or maybe a GPS-enabled pan-tracker gizmo in the future. :) So in most worship environments, we live with stationary ambience. It’s manageable, and it’s still far better than no ambience.

Also, because the “aesthetics police” are often present, that X-Y mic pair often gets removed from the center downstage location. They typically wind up one on each end of the stage, crossed toward the back of the audience. That’s OK; it’s a compromise that can still provide a usable spacious image.

But what if we were to mount subminiature ambient mics (which are essentially serving as artificial ears in this application) on either side of the head, or on the outside of the earphones themselves? Then, no matter where the user moves, the directional cues always work because the mics move with the user. Nifty. There are a few technologies emerging on the market that integrate some type of binaural miking with in-ear monitors.

Another market trend is the inclusion of an ambient mic on a personal, on-stage monitor mixer or even clipped onto a user’s lapel. These are great for communication (especially during rehearsals) and a little ambient sound, but do not provide accurate directional cues or a stereo sound field.

Potential Timing Issues With Ambient Mics

Sometimes, sound engineers will place ambient mics further back into the audience area, attempting to minimize sound leakage from the stage and PA into these mics. While that may decrease the leakage, it creates latency: there is still some leakage, but it now takes a little while for that sound to travel from the stage and PA to the mics. The further away the mics are from the PA, the longer it takes. 

When such distantly placed mics are combined into an in-ear monitor mix, the timing offset can be problematic for musicians attempting to play tightly together, as they hear slightly out-of-time musical leakage and sometimes degraded fidelity due to comb filtering. These mic placements may be more useful for recording or broadcast applications, where they can be carefully used to help convey venue size to the audience.

But in such applications, no musician is relying on those mixes for critical performance monitoring. So, keeping any ambient mics that may be mixed into IEMs close to the PA is a wise move. After all, we’re talking about LIVE sound, not LATE sound.
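To get a feel for how quickly that lateness adds up, here’s a rough Python sketch using the speed of sound (about 1,130 feet per second at room temperature); the distances are arbitrary examples:

```python
def ambient_mic_delay_ms(distance_feet, speed_ft_per_s=1130.0):
    # Time for sound to travel from the PA/stage to an ambient mic
    return distance_feet / speed_ft_per_s * 1000.0

for distance in (10, 30, 60):
    print(f"{distance} ft away: ~{ambient_mic_delay_ms(distance):.0f} ms late")
# 10 ft: ~9 ms, 30 ft: ~27 ms, 60 ft: ~53 ms -- plenty to make a mix feel "late"
```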

Kent Margraves began with a B.S. in Music Business and soon migrated to the other end of the spectrum with a serious passion for audio engineering. Over the past 25 years he has spent time as a staff audio director at two mega churches, worked as worship applications specialist at Sennheiser and Digidesign, and toured the world as a concert front of house engineer. Margraves currently serves the worship technology market at WAVE (wave.us) and continues to mix heavily in several notable worship environments including his home church, Elevation Church, in Charlotte, NC. His mission is simply to lead ministries in achieving their best and most un-distracted worship experience through technical excellence. His specialties are mixing techniques, teaching, and RF system optimization.

Posted by Keith Clark on 03/06 at 03:16 PM

In-Depth Primer: Speech Intelligibility In Sound Reinforcement

What it is, what affects it, how it’s measured, and much more...

Section 1: Introduction

Most people have had this experience:

You’re driving along in your car, windows down and the radio playing. It’s a new song, one you’ve never heard before by an artist you don’t recognize, and you’ve got to get the name so you can buy the disc. The music ends, the announcer comes on and . . .

. . . you can’t understand him over the road noise.

As this simple example illustrates, there’s an important difference between music and speech. The brain is capable of “filling in” a fair amount of missing information in music, because there’s a high degree of redundancy. (If you didn’t get the bass line in the first four measures, you’ll pick it up when it repeats in the next four.) But speech is rich in constantly-changing information and has less redundancy than music. If even a modest percentage of the information is garbled or missing, the brain can’t decipher the message.

Speech communication systems therefore are subject to more stringent requirements than music systems. These pages discuss speech intelligibility in sound reinforcement - what it is, what affects it and how it’s measured.

The Speech Signal

Human speech is a continuous waveform with a fundamental frequency in the range of 100-400 Hz. (The average is about 100 Hz for men and 200 Hz for women.) The harmonics at integer multiples of the fundamental are shaped by a series of changing resonances called “formants,” which are determined by the resonant characteristics of the vocal tract.

Formants create the various vowel sounds and transitions among them. Consonant sounds, which are impulsive and/or noisy, occur in the range of 2 kHz to about 9 kHz. (Below is a vocal spectrum graph for male and female speakers with an “idealized” human vocal spectrum superimposed.)


The sound power in speech is carried by the vowels, which average from 30 to 300 milliseconds in duration. Intelligibility is imparted chiefly by the consonants, which average from 10 to 100 milliseconds in duration and may be as much as 27 dB lower in amplitude than the vowels. The strength of the speech signal varies as a whole, and the strength of individual frequency ranges varies with respect to the others as the formants change.

Speech Comprehension

The listener’s challenge is to parse speech sounds into meaningful units of language - a complicated task. Gaps in the sound don’t necessarily correspond to word or syllable breaks. Speech sounds also are not discrete events: rather, they merge and overlap in time, and the articulation of a given phoneme differs in different contexts and with different speakers.

In fact, the precise ways in which the ear-brain mechanism decodes speech remain something of a mystery. Such factors as loudness, duration and spectral content certainly affect speech perception, but how they may interact is not fully understood.

Diminished intelligibility is associated with a loss of information that is coded in a number of highly interactive elements, and many factors influence it. Background noises can mask the speech. Both the direction of the source, relative to the listener, and the direction of the interfering noise can alter the degree of masking. Intelligibility is also affected by the predictability of the message, the speaker’s enunciation and, not least, the acuity of the listener’s hearing.


Section 2: Factors That Affect Intelligibility in Sound Systems

The goal of a speech reinforcement system is to deliver the speaking voice to listeners with sufficient clarity to be understood.

Given the complexity of the speech signal, the task of providing high-quality speech reinforcement in real-world, less-than-ideal conditions is doubly complicated.

Below is a diagram (Figure 1) of a simplified speech reinforcement system showing the main factors that affect intelligibility.

As the diagram indicates, a number of acoustic, electromechanical and electronic factors need to be considered if intelligibility is to be maintained. In order to deal with all of these factors effectively, one must understand how each affects the speech signal.


The most common obstacle that speech system designers face is the intrusion of unwanted sounds that inevitably interfere with the speech signal. The effect is called “masking” — a general term that covers a very wide variety of situations.

Figure 1

Masking noise can come from acoustical sources such as ventilation equipment, traffic, crowds and commonly, reverberation and echoes. It can also arise electronically from thermal noise, tape hiss or distortion products. If the sound system has unusually large peaks in its frequency response, the speech signal can even end up masking itself.

The relationship between the strength of the speech signal and the masking sound is called the signal-to-noise (S/N) ratio, expressed in decibels. Ideally, the S/N ratio is greater than 0 dB, indicating that the speech is louder than the noise. Just how much louder the speech needs to be in order to be understood varies with, among other things, the type and spectral content of the masking noise.

The most uniformly effective mask is broadband noise. Figure 2 is a chart showing word articulation versus S/N when the masking source is noise spanning 20 Hz to 4 kHz. Notice that the signal must be 12 dB louder than the broadband noise to achieve 80-percent word recognition.

Figure 2
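As a simple illustration of working with that figure, here’s a short Python sketch that computes S/N from measured speech and noise levels and compares it against the roughly 12 dB needed for 80-percent word articulation with broadband noise (the level values below are made up for the example):

```python
def snr_db(speech_db_spl, noise_db_spl):
    # Signal-to-noise ratio is simply the level difference, in dB
    return speech_db_spl - noise_db_spl

speech_level, noise_level = 78, 70          # example dB SPL readings
ratio = snr_db(speech_level, noise_level)
print(f"S/N = {ratio} dB")
print("Likely adequate" if ratio >= 12 else "Expect degraded word articulation")
```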

Although narrow-band noise is less effective at masking speech than broadband noise, the degree of masking varies with frequency. Figure 3 is a chart showing word articulation versus S/N for two noise bands — 135 to 400 Hz (the fundamental frequency range of speech) and 1800 to 2500 Hz (the strongest consonant frequency range).

Figure 3

High-frequency noise masks only the consonants, and its effectiveness as a mask decreases as the noise gets louder. But low-frequency noise is a much more effective mask when the noise is louder than the speech signal, and at high sound pressure levels it masks both vowels and consonants.

This is why the proximity effect of cardioid microphones can be so harmful to speech intelligibility: it causes the speech signal to mask itself. While cardioids are very useful for minimizing noise pickup at the source, they should always be used with a steep (12 dB/octave or greater) high-pass tuned to about 100 Hz (or higher, if the speaker’s voice range allows) so that proximity effect problems are minimized.
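As a sketch of what that filter might look like in practice, here’s a 2nd-order (12 dB/octave) Butterworth high-pass at 100 Hz built with SciPy; the 48 kHz sample rate and the placeholder signal are assumptions for the example, and this is just one way to realize the filter described above:

```python
import numpy as np
from scipy.signal import butter, sosfilt

fs = 48000  # assumed sample rate
# 2nd-order Butterworth = 12 dB/octave rolloff, cutoff at 100 Hz
sos = butter(2, 100, btype="highpass", fs=fs, output="sos")

speech = np.random.randn(fs)      # placeholder: one second of noise standing in for speech
filtered = sosfilt(sos, speech)   # low frequencies (and proximity-effect boost) attenuated
```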

A human voice delivering a competing message, sometimes called a “distractor,” is also very good at masking speech — particularly at or below 0 dB S/N. In addition, the masking effect increases with the number of distractor voices. Figure 4 is a diagram comparing masking for one, two and three voices.

Figure 4

Notice that, below 0 dB S/N, three voices become just as effective a source of masking as broadband noise. Above 0 dB S/N, however, intelligibility improves rapidly as the S/N increases. This illustrates the importance of having sufficient power in a paging system to overcome crowd noise.

The direction from which a masking sound arrives, relative to the direction of the speech signal, can affect the degree of masking. If the noise comes from the same place, the masking is greatest; it decreases as the distance between the noise and the speech increases because this makes it easier for the brain to discriminate between them. The masking effect is lowest when the presentation is through headphones, with the speech in one ear and the mask in the other. (Unfortunately, we can’t take advantage of that feature in sound reinforcement).

From this discussion, we can see why reverberation is so destructive of intelligibility, especially beyond critical distance. Being itself caused by the speech, reverb mimics the speech spectrum, but generally with greater low-frequency energy.

Sufficiently long reverb and echoes — such as are encountered in cathedrals and large sports arenas — can actually function like multiple distractor voices. And by its nature, reverberant energy arrives from all angles, so it’s hard to separate from the speech using directional clues.

Frequency Response

One of the most obvious aspects of sound system performance that affect intelligibility is frequency response. Severely band-limited systems deliver speech poorly. For instance, telephones are generally limited to a 2 kHz bandwidth, and this makes it hard to distinguish between “f” and “s” or “d” and “t” sounds.

High-quality speech systems need to cover the frequency range of about 80 Hz (for especially deep male voices) to about 10 kHz (for best reproduction of consonants, which are crucial to intelligibility). Response below 80 Hz must be eliminated to the extent possible: not only do these frequencies fall below the range of the speech signal, but also they will cause particularly destructive masking at high sound levels.

It’s important, also, for the system response to be reasonably flat throughout its range. The gradual high-frequency rolloff that many reinforcement professionals favor for music applications will tend to de-emphasize consonants, which are already as much as 27 dB less loud than vowels. Likewise, prominent peaks or dips in the response can cause either self-masking or loss of consonant articulation.

Finally, the coverage of the system must be consistent throughout the intended listener area, with minimal response cancellations or off-axis dropoff in the critical high frequencies. This requirement very often dictates either a distributed loudspeaker system or carefully aimed and delayed fill loudspeakers. Using high-Q (highly directional) loudspeakers also helps raise the ratio of direct speech level to reverberant level.


Distortion

Early studies of intelligibility in communication systems suggest that clipping the peaks of the speech signal, and then amplifying it to restore its peak-to-peak amplitude, improves intelligibility.

The trick works in very noisy situations because clipping generates partials that are harmonically related to the fundamental — and thus less likely to mask the speech — and because it both accentuates consonants and increases the sound power of the signal.

As such, it has been helpful for band-limited communication systems that are used in very noisy environments, such as the deck of an aircraft carrier.
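The harmonic behavior described above is easy to verify numerically. The following sketch (Python/NumPy) hard-clips a 440 Hz tone and inspects the spectrum; the strongest components land only on harmonics of the fundamental. The tone frequency and clip level are arbitrary choices for illustration.

```python
# Quick numerical check: hard-clip a 440 Hz tone and look at the spectrum.
# With symmetric clipping, the added partials fall only on (odd) harmonics
# of the fundamental. Frequency and clip level are arbitrary choices.
import numpy as np

fs = 48_000
t = np.arange(fs) / fs                     # one second of signal
tone = np.sin(2 * np.pi * 440.0 * t)       # 440 Hz fundamental

clipped = np.clip(tone, -0.2, 0.2)         # severe symmetric clipping
spectrum = np.abs(np.fft.rfft(clipped))
freqs = np.fft.rfftfreq(len(clipped), d=1 / fs)

# The five strongest components land at 440, 1320, 2200, 3080 and 3960 Hz.
peaks = freqs[np.argsort(spectrum)[-5:]]
print(sorted(peaks.round()))
```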

The fact is, however, that clipping the signal to improve intelligibility works only in cases where the signal-to-noise ratio is very poor. Figure 5 is a chart showing word articulation versus S/N for an infinitely clipped and an unclipped speech signal. Notice that the intelligibility score for the clipped signal levels out to around 50 percent at 0 dB S/N; above about +3 dB S/N, the unclipped signal scores better.

Figure 5

In real-life speech reinforcement systems, clipping should be avoided. Obviously, it will sound objectionable through a high-quality sound system. It also will increase the masking from any noise that is picked up by the microphone, since that noise will be clipped along with the speech.

Another type of distortion that is very destructive to intelligibility is intermodulation distortion. While it is easily controlled in the electronics of a sound system, significant IM can be generated when some types of loudspeakers (particularly two-way coaxials) are driven at high levels. IM produces sum and difference products that are not harmonically related to the fundamental frequency. As such, they have a much greater masking effect than the harmonic products of clipping.
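Again, this is simple to demonstrate numerically. The sketch below (Python/NumPy, with an arbitrary second-order nonlinearity standing in for a misbehaving driver) passes two tones through a distortion and shows sum and difference products appearing at frequencies that are not harmonics of either tone.

```python
# Quick numerical check: pass two tones (700 Hz and 1 kHz) through a simple
# 2nd-order nonlinearity (a stand-in for a misbehaving driver) and note the
# sum (1700 Hz) and difference (300 Hz) products, which are not harmonics of
# either tone. The frequencies and the nonlinearity are arbitrary choices.
import numpy as np

fs = 48_000
t = np.arange(fs) / fs
two_tones = np.sin(2 * np.pi * 700.0 * t) + np.sin(2 * np.pi * 1000.0 * t)

distorted = two_tones + 0.2 * two_tones**2          # crude nonlinearity
spectrum = np.abs(np.fft.rfft(distorted))
freqs = np.fft.rfftfreq(len(distorted), d=1 / fs)

strong = freqs[spectrum > 0.01 * spectrum.max()]
print(sorted(strong.round()))   # includes 300 and 1700 Hz alongside the originals
```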

Time Response

Perhaps because it remains poorly understood and its effects are more subtle, phase response in communication systems has received scant attention. In fact, most published research about “phase” and intelligibility actually deals with the effects of relative polarity. It’s been shown, for instance, that when speech is presented with noise over headphones, intelligibility increases by about 25% if the speech signal in one ear is inverted relative to the other ear. But this result has no application in sound reinforcement, other than for in-ear stage monitors.

Section 3: Statistical Measures of Speech Intelligibility

Statistical intelligibility measurements use human beings, rather than electronic test instruments, to assess speech communication systems.

First proposed in 1910 and refined with the introduction of the telephone and the advent of electronic communication systems in World War II, such tests are still considered to be the most accurate and reliable measures of intelligibility.

While many variations are in use, this discussion deals most directly with the American National Standards Institute’s approved procedure (ANSI S3.2-1989, “Method for Measuring the Intelligibility of Speech Over Communication Systems”).

Method and Applications

The statistical measurement process uses trained, English-fluent talkers speaking standardized word lists through the communication system to trained, English-fluent listeners. The word lists are crafted to evaluate specific aspects of speech transmission; the ability of the listeners to identify individual words or word pairs indicates the quality of the transmission.

Such tests are used in a wide variety of applications, from examining the acoustics of conference rooms to evaluating intercoms for deep-sea divers. In professional sound reinforcement, statistical tests provide crucial information for architects and consultants, both in designing speech reinforcement systems and refining their performance in the field. They may also be used to evaluate the contributions that specific microphones, loudspeakers and signal processors make to speech intelligibility.


In order for the results of any intelligibility test to be valid, those conducting the test must be well versed in experimental design and statistical data analysis. Since human subjects are central to the tests, the experimenters must also understand the psychological factors involved, including the effects of motivation and learning through repetition. Finally, they must, of course, know how to operate the sound system properly so as to avoid introducing errors. For all of these reasons, intelligibility tests invariably are made by trained consultants who specialize in the field.

The tests use a minimum of five talkers and five listeners; larger subject groups reduce the margin of error. Talkers and listeners are selected to assure a representative cross-section of age and gender.

All must speak English as their first language and have normal hearing. Talkers must have good articulation, and are trained both to speak at a consistent level and to synchronize their words with timing signals so that the rate of presentation doesn’t skew the test results in any way. Listeners must have good discrimination, and are familiarized with all the test words that will be used, the sound of each talker’s voice and the method of recording responses.

A number of specialized word lists are in common use for testing various aspects of speech communication. The ANSI standard specifies three:

—The Modified Rhyme Test
—The Diagnostic Rhyme Test
—The set of twenty Phonetically Balanced Word Lists

Other examples of word lists include:

—The Diagnostic Alliteration Test
—The Diagnostic Medial Consonant Test
—The Spelling Alphabet Test


If at all possible, the sound system should be tested under conditions of actual use: if there are potential sources of masking noise such as outside traffic or an HVAC system, these should be present during the testing and documented for the report.

It’s also important that the system gains be set to a representative sound pressure level. Pre-recorded test material can be used as long as the recording and playback equipment don’t introduce significant noise or distortion.

At a minimum, each talker is given three PB or MRT word lists - or the complete DRT list - to read. Where only one sound system is being tested, the trained subjects are first tested face-to-face or in similarly ideal conditions to establish a “control” or baseline measurement. (Under these circumstances the intelligibility should be nearly perfect.)

This score is then used as a reference to which the system under test can be compared. During testing, supplementary information such as the speed and certainty of the listeners’ responses and their subjective opinions about the sound system should be gathered.

Analyzing the Results

There are many ways of analyzing the test data depending on the characteristics of the particular word list and the variables being tested. At the least, a set of percentage scores is calculated showing the number of times words were identified correctly by each listener. Taking an average of these can produce a single overall score. If either the DRT or MRT is used, the results are adjusted mathematically to account for guessing (no adjustment is required for the PB test). Deeper statistical analyses can yield more detailed information about the sound system if undertaken carefully.
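As an illustration of the scoring step, here is a hedged sketch of the standard correction-for-guessing formula used with forced-choice word tests. The DRT presents two alternatives per item and the MRT presents six; whether ANSI S3.2-1989 specifies exactly this adjustment is an assumption here, so treat it as indicative only.

```python
# Sketch of the usual correction-for-guessing applied to forced-choice word
# tests: adjusted score = (right - wrong/(alternatives - 1)) / total.
# The DRT offers 2 alternatives per item and the MRT offers 6. Whether ANSI
# S3.2-1989 specifies exactly this formula is an assumption here.
def adjusted_score(right: int, wrong: int, alternatives: int) -> float:
    """Return a chance-corrected percentage score."""
    total = right + wrong
    corrected = right - wrong / (alternatives - 1)
    return max(0.0, 100.0 * corrected / total)

# Example: 270 of 300 MRT items correct (six-way forced choice) -> 88.0
print(round(adjusted_score(right=270, wrong=30, alternatives=6), 1))
```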

Section 4: Machine Measures of Speech Intelligibility

Statistical tests using trained talkers and listeners are by far the most accurate and reliable methods for intelligibility testing. Unfortunately, they are complicated to set up, time-consuming to conduct and require extensive statistical analysis to interpret.

Hence, consultants and acousticians have long sought an automated, machine-based test that could quickly and easily yield meaningful intelligibility scores for speech systems. A number of methods have emerged over the past fifty-odd years that fall into two basic categories: analyses of the reverberant field, and measurements based on signal-to-noise ratio.

Reverberation Analysis

From at least the ancient Classical period, architects have recognized that reverberation and echoes hamper intelligibility. Indeed, that realization resulted in the development of the Greek amphitheater, a durable architectural model that survives to this day.

Modern acousticians have at their disposal several different methods to test reverberation in enclosed spaces. The most commonly used of these are:

—%ALcons - a measure that’s familiar to many sound system engineers
—Direct-to-Reverberant Ratio
—Useful-to-Detrimental Sound Ratios
—Early-to-Late Sound Energy Ratio

Each of these tests can tell us something about the reverberant qualities of a space and, therefore, how intelligible speech could be in that space. Since they deal predominantly with reverberation, however, they fail to take into account the majority of the factors that can affect a speech reinforcement system’s performance.

Signal-to-Noise Methods

With the advent of electronic communication systems and their complex potential problems, acousticians and engineers recognized that different machine testing approaches were needed.

Beginning as early as the 1940’s with telephony research at Bell Laboratories, several instrument-based tests have evolved, each of which relies on signal-to-noise measurements in one form or another. They are:

—AI - Articulation Index
—STI - Speech Transmission Index
—RASTI (another measure that’s familiar to some sound system engineers)
—SII - Speech Intelligibility Index

AI is now of interest chiefly for having demonstrated the relative importance of different frequency bands in the speech spectrum; because it doesn’t effectively account for reverberation, it has been largely superseded by the newer methods. Of these, only RASTI is available in a simple, reasonably-priced instrument.

SII (which is proposed as ANSI standard S3.5-1997) is the most robust of the machine intelligibility measures, but it requires sophisticated equipment and the calculations that it entails are quite complex. Given the prodigious computing power that’s now available at reasonable cost, however, a practical, affordable SII instrument could soon become a reality.

Limitations of Machine Measures

Their relative convenience notwithstanding, all machine-based intelligibility measures have inherent limitations.

Every machine testing method requires that the operator have significant experience and analytical skill if the results are to be accurate and useful. It can be very difficult to identify inaccurate or misleading scores and determine their causes. Most significantly, adjustments to the system that improve intelligibility may not positively affect the measured score - and adjustments that improve the measurements may not enhance intelligibility.

In addition to these factors, each testing method has its own particular limitations that must be weighed both when carrying out the tests and when interpreting the results.


%ALcons (Percentage Articulation Loss of Consonants)

This machine measure of intelligibility is closely associated with the TEF sound analyzer. It is computed from measurements of the Direct-to-Reverberant Ratio and the Early Decay Time using a set of correlations defined by SynAudCon, and is specified in percent.

Since %ALcons expresses loss of consonant definition, lower values are associated with greater intelligibility. It is generally assumed that the maximum allowable value for typical paging applications is 10%, assuming that the environment is relatively free of masking noise. For learning environments and voice warning systems, the desired value is 5% or less.
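Those rules of thumb are simple enough to encode directly; the helper below (Python) assumes only the 10 percent and 5 percent limits quoted above.

```python
# A tiny helper encoding the rules of thumb above: at most 10% ALcons for
# general paging, at most 5% for learning environments and voice warning.
def alcons_acceptable(percent_alcons: float, critical_listening: bool = False) -> bool:
    limit = 5.0 if critical_listening else 10.0
    return percent_alcons <= limit

print(alcons_acceptable(8.0))                           # True for paging
print(alcons_acceptable(8.0, critical_listening=True))  # False for a classroom
```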

The %ALcons method is widely used by acoustical consultants (particularly in the United States), but it has significant drawbacks. First, it is based on measurements in a single one-third octave band centered on 2 kHz; all other frequencies are ignored, so the system’s frequency response must be verified in some other way for the %ALcons score to be meaningful.

Moreover, the method does not account for many factors that can dramatically affect intelligibility, including signal-to-noise ratio, the background noise spectrum, distortion, late reflections or echoes, system frequency response, compression, non-linear phase, equalization and acoustic power. %ALcons measurements of sound systems therefore often yield overly optimistic scores. Where reverberation or strong, late-arriving reflections are the primary problem, however, they can sometimes be more useful and accurate than RASTI.

Direct-to-Reverberant Ratio

The ratio between the intensities of the direct sound and reverberation. There are several measures for this quantity. C50, one of the most popular, expresses speech clarity as the ratio of the energy arriving in the first 50 milliseconds (the direct sound and earliest reflections) to the later-arriving reverberant energy, with 0 dB being the minimum acceptable value and +4 dB or above preferred.

A similar measure, C7, is used in Germany; C35 is yet another version. Measurements are made in a single frequency band (usually centered on 1 kHz). Each of these measures can be more reliable and repeatable than %ALcons, which also deals with the direct-to-reverberant ratio.
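Since C50 is just an early-to-late energy ratio, it can be computed directly from a measured room impulse response. The sketch below (Python/NumPy) assumes you already have the impulse response as an array; the 50-millisecond split point can be changed to obtain related measures.

```python
# Minimal sketch: C50 computed from a measured room impulse response as
# 10*log10(energy in the first 50 ms / energy arriving later). The impulse
# response array and its sample rate are assumed inputs.
import numpy as np

def c50(impulse_response: np.ndarray, fs: int, split_ms: float = 50.0) -> float:
    """Early-to-late energy ratio in dB; change split_ms for related measures."""
    split = int(fs * split_ms / 1000.0)
    energy = np.asarray(impulse_response, dtype=float) ** 2
    early = energy[:split].sum()
    late = energy[split:].sum()
    return 10.0 * np.log10(early / late)
```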

Useful-to-Detrimental Sound Ratios

The logarithmic ratio between the energy of sounds that are useful to intelligibility and those that are detrimental to it, expressed in decibels.

“Useful” sounds are the integrated energy of speech sounds arriving within the first 50 or 80 milliseconds after the direct sound, and “detrimental” sounds are the sum of later-arriving speech energy and ambient noise. In practice, both quantities may be found by integrating appropriate portions of the room impulse response.

Early-to-Late Sound Energy Ratio

Proposed in 1996 by G. Marshall, ELR is similar to C50 but is weighted for speech and incorporates measurements in more than one frequency band. As with other direct-to-reverberant methods, however, factors other than reverberation are not accounted for.


Articulation Index (AI)

One of the earliest attempts to measure the intelligibility of a speech transmission system by machine, the Articulation Index was developed by Bell Telephone Laboratories in the 1940s.

AI is based on the idea that the response of a speech communication system can be divided into twenty frequency bands, each of which carries an independent contribution to the intelligibility of the system, and that the total contribution of all the bands is the sum of the contributions of the individual bands. (AI may also be measured using one-third octave or octave bands.) Signal-to-noise ratios are computed for each individual band, then weighted and combined to yield an intelligibility score.

The AI varies in value from 0 (completely unintelligible) to 1 (perfect intelligibility). An AI of 0.3 or below is considered unsatisfactory, 0.3 to 0.5 satisfactory, 0.5 to 0.7 good, and greater than 0.7 very good to excellent.
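As a rough illustration of the banded signal-to-noise idea, here is a hedged sketch that clamps each band's S/N to a 0 to 30 dB useful range, normalizes, and sums. Equal band weights are assumed for simplicity; the published AI method specifies particular band edges and per-band weights.

```python
# A hedged sketch of the banded S/N idea: clamp each band's speech-to-noise
# ratio to a 0-30 dB useful range, normalize to 0-1, weight and sum. Equal
# weights are assumed; the published AI specifies particular band weights.
import numpy as np

def articulation_index(band_snr_db, band_weights=None) -> float:
    snr = np.clip(np.asarray(band_snr_db, dtype=float), 0.0, 30.0)
    if band_weights is None:
        band_weights = np.full(len(snr), 1.0 / len(snr))   # equal weighting
    return float(np.sum(band_weights * snr / 30.0))

# Example: 20 bands all at 15 dB S/N gives 0.5 ("satisfactory").
print(articulation_index([15.0] * 20))
```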


Speech Transmission Index (STI)

Developed in the early 1970s, the Speech Transmission Index (STI) is a machine measure of intelligibility whose value varies from 0 (completely unintelligible) to 1 (perfect intelligibility).

In STI testing, speech is modeled by a special test signal with speech-like characteristics. Following on the concept that speech can be described as a fundamental waveform that is modulated by low-frequency signals, STI employs a complex amplitude modulation scheme to generate its test signal. At the receiving end of the communication system, the depth of modulation of the received signal is compared with that of the test signal in each of a number of frequency bands. Reductions in the modulation depth are associated with loss of intelligibility.
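The core of that conversion can be sketched as follows (Python). A measured modulation depth m maps to an apparent signal-to-noise ratio, which is clamped to plus or minus 15 dB and scaled to a 0 to 1 transmission index; the full standard then averages such indices over many modulation frequencies and carrier bands with published weightings, which are omitted here.

```python
# Sketch of the core STI conversion: a measured modulation depth m maps to an
# apparent S/N, which is clamped to +/-15 dB and scaled to a 0-1 transmission
# index. The full method averages this over many modulation frequencies and
# carrier bands with published weightings, omitted here.
import numpy as np

def transmission_index(m: float) -> float:
    m = min(max(m, 1e-6), 1.0 - 1e-6)                 # keep the log finite
    apparent_snr = 10.0 * np.log10(m / (1.0 - m))     # dB
    apparent_snr = float(np.clip(apparent_snr, -15.0, 15.0))
    return (apparent_snr + 15.0) / 30.0

# Undisturbed modulation scores near 1; heavily smeared modulation scores low.
print(round(transmission_index(0.97), 2), round(transmission_index(0.30), 2))
```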


RASTI (Rapid Speech Transmission Index)

RASTI is a machine method of testing for intelligibility in sound systems associated with Brüel and Kjaer, the instrumentation company that manufactures a portable device to implement it.

RASTI was developed as a simpler alternative to the more complex STI (Speech Transmission Index). In contrast to STI, RASTI measures only in two octave bands centered at 500 Hz and 2 kHz, respectively. It uses a speech-like excitation signal and, like STI, correlates reductions in modulation depth to loss of intelligibility.

RASTI has been implemented in a simple, portable instrument that can make very rapid intelligibility measurements, both acoustically and with an installed sound system. For this reason, it has been adopted for a number of European standards and civil system specifications. Being a radically simplified version of STI, however, it suffers compromises that have forced reevaluation of those standards.

For example, RASTI tests in only two frequency bands, with the assumption that the sound system’s response actually extends in a reasonably flat fashion from 100 Hz or lower to 8 kHz or higher. While this might well be the case in a properly-designed auditorium system, many types of paging systems fall short of such performance. In these cases, RASTI almost invariably gives an overly optimistic picture. (In fact, a sound system that reproduced only the two frequency bands in question could receive a perfect rating.)

Moreover, because it affects modulation depth, any compression or limiting in the system can cause an artificially low RASTI value - despite the fact that it may, in actuality, be acting to enhance intelligibility. RASTI also does not take system distortion or non-linear amplitude and phase into account.


Speech Intelligibility Index (SII)

Derived from and in essence identical to STI, SII is the machine method for measuring speech intelligibility that is currently proposed in draft form as ANSI Standard S3.5-1997.

In the Standard, four measurement procedures are allowed, each using a different number and size of frequency bands. In descending order of accuracy, they are:

—Critical band (21 bands)
—One-third octave band (18 bands)
—Equally-contributing critical band (17 bands)
—Octave band (6 bands)

The value of SII varies from 0 (completely unintelligible) to 1 (perfect intelligibility).

SII is a highly capable testing method that, under the right conditions, shows good correlation with statistical tests. It features both wide bandwidth (150 Hz to 8.5 kHz) and, especially in the critical band procedure, far greater resolution than any other method. SII properly includes reverberation, noise and distortion, all of which are accounted for in the modulation transfer function. Experienced test operators can go beyond generating a single intelligibility score to diagnosing the source of a loss in intelligibility.

Under certain conditions, however, SII can yield misleading results. In particular, late-arriving reflections and echoes can distort the measurement significantly. Like RASTI, SII is susceptible to giving artificially low intelligibility scores if compression or limiting is introduced in the system. And because even the critical-band procedure ignores frequencies below 100 Hz, it may very well miss significant low-frequency masking sources.

Finally, SII does not take non-linear phase into account. Nonetheless, when used correctly by a skilled operator, it remains the most reliable and accurate of the machine methods.

Section 5: Future Directions

Despite their inherent limitations, all of the machine testing methods that we’ve discussed can show good agreement if the system under test is reasonably well behaved.

But intelligibility testing is most consequential (and potentially most useful) when the system has problems severe enough to impair speech transmission. Such problems can arise from a variety of sources and conditions, many of which can “fool” any of the machine testing methods.

Contemporary sound systems are sophisticated complexes of diverse, interacting components. As the simplified diagram in Figure 6 illustrates, they invariably include signal processing elements whose effects on intelligibility, and on the instruments designed to measure it, may be difficult to predict.

While the consequences of relatively simple analog processing (such as equalization and limiting) generally are benign, the same may not be true of new, powerful digital signal processing technologies.

Figure 6

For example, much attention is now focused upon using DSPs to “deconvolve” the response of a space in order to suppress echoes and subtract or add reverberation. Because the algorithms that are involved affect the time order of the signal, there may be large consequences if these devices are misadjusted. Furthermore, if speakers are repositioned, or the acoustics of the space changes (when a curtain is closed, for example), then the particular deconvolution likely will no longer be valid and may, in fact, cause very destructive effects.

None of the present machine measures for intelligibility accounts for time distortions. In fact, we could conceive of a hypothetical system that reversed the time aspect of a signal, like playing a tape backward: no machine method would show any decrease in the intelligibility score for such a system, though it would obviously render speech unintelligible.

What’s needed is an analyzer that’s sufficiently “smart” to detect all of the factors which affect intelligibility, and render a conclusive judgement, without relying heavily on the operator’s interpretation. But the unavoidable truth is that, as sophisticated as machine-based measurement systems may be, they cannot yet approach the complexity of the human ear/brain mechanism informed by a lifetime of experience decoding speech.

We can only model those aspects of that exquisitely fine-tuned mechanism that we have come to understand. The many remaining questions regarding how it works and what factors may affect it can only be answered by further research.

These papers were written by Ralph Jones, edited by Rachel Murray, P.E., and provided by Meyer Sound.

Posted by Keith Clark on 03/06 at 01:44 PM

Church Sound: Mixing Like A Pro, Part 6—The Channel Strip

This article is provided by CCI Solutions.

In previous articles in this series (here), we’ve spent considerable time on EQ and gain, but this time we’re going to pick it up a little bit and cover a number of other buttons and knobs that typically exist on each channel. These all exist on a digital console too, but may not be in the same order as we’ll tackle them here.

Pad
Most of the time you won’t need this, but occasionally an input sends a much stronger signal than usual and you run out of room to turn the gain down. Engaging the pad button attenuates the incoming signal, usually by 20 dB, which restores room to adjust the gain up or down without overloading the input.

Phantom Power (48V)
Phantom power is required to operate certain types of microphones and is usually supplied by the mixing console. While we’re not looking to cover all types of microphones in this article, we’ll make a distinction between dynamic and condenser mics for the sake of our discussion on phantom power.

Dynamic microphones like the Shure SM57 and SM58 are relatively inexpensive, durable, moisture-resistant and less prone to feedback. Condenser microphones tend to produce a higher quality sound (flatter and extended frequency response) and are more sensitive to picking up sound.

Condenser microphones are good at picking up more of the detail and nuance of acoustic instruments and vocals. They also require power, and that’s where our “48V” button, otherwise known as phantom power, comes in. You might have the gain set correctly and the fader set to a normal level, but if the phantom power is not turned on, there won’t be sound from condenser mics.

AUX (or Mix)
Just as the faders are used to build the house mix, the aux sends are simply another mix that can be put together. They work exactly the same way the faders do: turn the aux knobs to increase or decrease the level of each input source in that mix.

For most people, Aux sends will feed monitor wedges, in-ears or effects. Regardless of where the final send goes, AUX sends are simply a different way to mix inputs into an output.

Pan
If you’re mixing a stereo house, one where both the left and right loudspeakers can be heard from most seats, the pan knob can help create a little bit of space in the mix and a stereo image for the listeners.

When operating with a mono system, or a stereo system where each side of the house only hears one of the loudspeakers, it’s best to leave the pan knob at center so everyone gets to hear the entire mix.

Mute
Simple enough, this button eliminates that channel’s audio from its output destination. On some consoles (Yamaha especially), the mute button is replaced with an “on” button; in that case, switching the “on” button off eliminates the audio.

PFL/AFL (Solo)
Primarily known as Pre-Fader Listen and After-Fader Listen, this button is also called Solo. Pressing it provides the opportunity to monitor only that input in your headphones, to check for anomalies or listen closely for specific things.

Some mixers have a Solo with the ability to choose whether to hear the solo pre-fader (the input right as it comes into the mixer and after the gain knob) or post-fader (the input with channel strip processing and the channel fader volume applied).

Assign (Subgroups)
The assign buttons allow you to route the signal of that channel directly to the master output or to a subgroup. The more technical term for a mixer’s subgroup is VCA (Voltage Controlled Amplifier), and the digital mixer equivalent is the DCA (Digitally Controlled Amplifier).

When mixing 24-48 inputs, it can be tough to keep up with the dynamics of all the live musicians when dealing with each fader individually. Creating relevant groups by assigning multiple channels to a single subgroup allows you to adjust that group of channels with just one fader.

For example, let’s say we have eight inputs for our drums, bass, acoustic, electric, two stereo keyboards, a variety of orchestra instruments, six vocalists and a choir. In order to make mixing all of those inputs more manageable, we’ll assign them to the subgroups.

One possible breakdown for grouping could be:

1) Drums
2) Guitars
3) Keyboards
4) Orchestra
5) Lead Vocals
6) Background Vocals
7) Choir
8) Playback sources (CD, i-device, DVD player, etc.)

While you may prefer a slightly different arrangement (which is fine), right off the bat mixing has been streamlined through the use of subgroups. Finding that background vocals are getting a bit lost, but the blending of them is solid? Just push up the entire group a bit.

When you hit that big a cappella section of the song with just the drums, push just the drums and vocals a bit with two faders instead of grabbing 12. If one song is guitar-led and the next is keyboard-led, make that adjustment quickly as well, without changing the overall balance of what’s in each group.

If you’re going to be an active sound person (and I hope you are), assigning inputs to subgroups will help make group changes quickly.

Next time: what goes into creating an effective mix.


Duke DeJong has more than 12 years of experience as a technical artist, trainer and collaborator for ministries. CCI Solutions is a leading source for AV and lighting equipment, also providing system design and contracting as well as acoustic consulting. Find out more here.

Posted by Keith Clark on 03/06 at 12:46 PM