Feature

Tuesday, January 07, 2014

The Long And Short Of It: Acoustic Wavelengths

Most sound practitioners know that low-frequency wavelengths are much longer than high-frequency wavelengths. But because we can’t see them, how well do we really understand them?

This is an important subject because understanding the nature of wavelengths can aid in optimally setting up and operating the various types of sound systems that most of us come in contact with.

Let’s look at the physical differential between low frequencies and high frequencies. It’s quite radical, in the sense that we do not often encounter such a degree of variance in other fields.

A 20 Hz wavelength is roughly 56 feet long, or about 680 inches. A 20 kHz wavelength is roughly 0.056 feet long, or about 0.68 inches. That’s an enormous differential: a ratio of 1,000:1, or three orders of magnitude.
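As a quick sanity check on those numbers, wavelength is simply the speed of sound divided by frequency. Here’s a minimal Python sketch; the 1,130 feet-per-second figure is an assumption for roughly room-temperature air, so treat the results as approximations.

```python
# Wavelength = speed of sound / frequency.
# Assumes ~1,130 ft/s (roughly room temperature); actual speed varies with temperature.
SPEED_OF_SOUND_FT_S = 1130.0

def wavelength_ft(frequency_hz: float) -> float:
    """Return the acoustic wavelength in feet for a given frequency in Hz."""
    return SPEED_OF_SOUND_FT_S / frequency_hz

for f in (20, 120, 1000, 20000):
    wl = wavelength_ft(f)
    print(f"{f:>6} Hz -> {wl:8.3f} ft ({wl * 12:7.2f} in)")

# 20 Hz comes out near 56 ft and 20 kHz near 0.68 in -- a 1,000:1 ratio.
```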

What does the length of a wave really mean? In two words, a lot. Sound travels at the relatively low speed of approximately 760 miles per hour in air, compared to light, which travels at approximately 671 million miles per hour.

A long, low-frequency wavelength requires some time to propagate, which means that it must first develop in the atmosphere before the sonic energy can be perceived as a note or tone. A single cycle at 20 Hz takes 1/20th of a second to complete, which is equal to 50 milliseconds.

By contrast, a short, high-frequency wavelength takes very little time to propagate and become audible, and can do so in small spaces, whereas a low-frequency wavelength needs adequate space in which to develop. This is why studio control rooms and other critical listening environments, particularly those that are on the smaller side, will often use bass traps to even out the bass response. Bass traps are acoustic energy absorbers designed to dampen low-frequency energy in order to provide a flatter, more even, low-frequency room response by reducing LF resonances.

Low-frequency (above) and high-frequency waves.


When low frequencies propagate into an echoic room, which describes all rooms that have reflective surfaces, they generate standing waves. Standing waves are pressure nodes created when a sound wave reflected from a wall collides with the direct sound from the loudspeaker. At some frequencies the reflections will reinforce the direct sound, creating an increase in level, while at other frequencies the reflections cancel the direct sound, thereby lowering the level.

One or more bass traps, often located in the corners of the room for maximum effectiveness, will absorb the LF energy rather than let it reflect outward. Non-parallel walls and an angled ceiling can also help reduce standing waves. Incidentally, one reason that early trapezoidal loudspeaker enclosures were developed was to reduce internal cancellations. Within limits, the trapezoidal shape does exhibit certain advantages.

The difference in bass response from one room to another is one of the most apparent and meaningful effects that a room contributes to the sonic quality.

On several occasions I’ve analyzed and equalized sound systems in large tents – once for Cirque du Soleil, another at a large business conference, and several times for general entertainment. Because tent walls are so flexible, low-frequency reflections are hardly evident at all. The LF energy literally moves the tent walls (you can feel it), thus being damped instead of reflected.

This certainly was a surprise to me after spending years tuning systems in concrete, steel, glass, and wooden structures. In the LF range it was similar to the response you expect to see when measuring outdoors. But unlike outdoors, mid and high frequencies showed pronounced reflectivity and resonance, which is nearly the exact opposite of hard-walled rooms.

Waves & Ripples
Long low-frequency wavefronts can be visualized by imagining large tsunami-type ocean waves crashing into buildings on the shore; they do not “see” the building as an obstacle and simply pass around it (assuming the building is of sufficient strength not to be destroyed). This is why it works to hang subwoofers behind a line array; they also do not “see” the line array as a barrier.

Conversely, short wavelengths can be visualized by imagining small ripples in the water that will break up, or reflect, when meeting an obstacle. As a case in point, even the typical perforated metal loudspeaker grille has a reflective and diffusing effect on high frequencies, though it is fairly minor in most cases.

How does understanding wavelengths lead to better sound system management? By knowing approximate wavelengths for various frequencies, and even visualizing them, you can potentially make better choices when it comes to loudspeaker placement.

System design factors, such as controlling the distance between subwoofers and full-range loudspeakers, or planning the distances from one subwoofer to another, become clearer when you think in terms of frequency and wavelengths. One important topic is how the range of wavelengths will be affected throughout the crossover region from full-range loudspeakers (often flown) to the subwoofers (often ground stacked). When two sources are separated by a quarter wavelength or more, constructive and destructive interference will occur, depending on the position of the listener or the measurement mic.
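To put a number on that quarter-wavelength guideline, the sketch below estimates how small a quarter wavelength actually is at typical subwoofer crossover frequencies; the speed of sound value and the example frequencies are assumptions for illustration.

```python
# Estimate the spacing at which two sources reach a quarter-wavelength offset
# at a given frequency -- beyond this, position-dependent summation and
# cancellation through the crossover region become significant.
SPEED_OF_SOUND_FT_S = 1130.0  # assumed; varies with temperature

def quarter_wavelength_ft(frequency_hz: float) -> float:
    return (SPEED_OF_SOUND_FT_S / frequency_hz) / 4.0

# Example frequencies around a typical sub-to-mains crossover region.
for f in (80, 100, 120):
    print(f"{f} Hz: quarter wavelength is about {quarter_wavelength_ft(f):.1f} ft")
# At 100 Hz that's roughly 2.8 ft -- easily exceeded by a flown array
# over ground-stacked subs, hence the interference described above.
```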

With respect to crossovers, it’s important to understand that a 120 Hz crossover to the subwoofers (for example) does not only affect frequencies at 120 Hz. If the crossover slope is 12 dB per octave, which is common, then at roughly 85 Hz and 170 Hz, the half-octave intervals, there will still be a potential for cancellations – or beneficial summation – although it won’t be as pronounced as it is at the crossover’s center frequency. At those points one of the loudspeakers will be roughly 6 dB lower in amplitude than the other, assuming that the crossover slope is symmetrical.

While the cancellation or summation effect of the combined sources will be reduced in amplitude, it will still be present. This makes a good case for steeper crossover slopes, with 24 dB per octave or 48 dB per octave often being rapid problem solvers. But steeper is not always better – an involved discussion of its own that’s outside our scope here.

Noise Abatement
You may sometimes be called upon to control noise “pollution” with respect to neighboring structures or to adjacent events. These might be the “air walls” for adjoining meeting rooms in a hotel environment. Or perhaps more specifically, the occupants of the staff residences located behind the rear of the outdoor Greek Theatre at the University of California at Berkeley, who aren’t thrilled with high-volume, late-night concerts. In order to help control such problems you’ll benefit by knowing about wavelengths and their effect on radiated directivity.

This is not just conjecture; in 1982 I was asked by Bill Graham to help the Greek Theatre stay in operation. There was no easy answer at that time – prior to the advent of modern line arrays – but of course we did the best we could with the tools that were then available. The outcome was essentially to reduce LF output across the board and more drastically, reduce overall operating levels.

This issue gave birth to the SPL police in the San Francisco Bay Area, and at the end of the day, nobody was pleased. Fortunately, today there are better ways of dealing with that type of problem. Line arrays and cardioid subwoofers can greatly aid in keeping the sound where it is needed, and minimizing it where it is not. 

Array Directivity
When planning the number of modules – and therefore the size of a line array or a conventional array – it will be easier to determine the required size of the array when you think about the lengths of the wavefront. An array must be quite large if the low-frequency energy is to be controlled and directed in the lower segment of the audible sound spectrum. A small array of four or five elements may control upper mids and highs adequately, but if it’s only several feet tall, it’s certainly not going to provide effective LF pattern control.

Various publications expound about line array length versus the frequencies that a line of drivers can control. A wide range of opinions are stated, sometimes varying substantially. Even the nature of the line array wavefront, whether it is cylindrical in nature – or not – is the subject of debate among loudspeaker manufacturers as well as non-partisan authors. It’s very difficult to determine which authoritatively stated opinion, or collection of opinions, should govern your system design choices.

Low-frequency wavefronts make it feasible to fly subs behind line arrays.


In support of practical applications, a good rule of thumb is that array size must equal at least a half-wavelength of the lowest frequency at which you’re seeking pattern control, but that’s just scratching the surface. A half wavelength will just begin to develop some semblance of control. If you’re looking to keep LF energy from bouncing off the rear walls, it’s wise to increase the array length to at least a full wavelength at the absolute minimum, and preferably several times greater.
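Here’s a small sketch of that rule of thumb, again assuming roughly 1,130 feet per second for the speed of sound; the frequencies chosen are just examples.

```python
# Minimum line-array length for pattern control at a given frequency,
# per the half-wavelength / full-wavelength rule of thumb discussed above.
SPEED_OF_SOUND_FT_S = 1130.0  # assumed

def array_length_ft(lowest_controlled_hz: float, wavelength_multiples: float = 1.0) -> float:
    wavelength = SPEED_OF_SOUND_FT_S / lowest_controlled_hz
    return wavelength * wavelength_multiples

for f in (60, 125, 250):
    print(f"{f:>4} Hz: half wave = {array_length_ft(f, 0.5):5.1f} ft, "
          f"full wave = {array_length_ft(f, 1.0):5.1f} ft")
# 60 Hz needs roughly 9.4 ft just to reach a half wavelength and nearly
# 19 ft for a full wavelength -- hence the 100 ft figure quoted below
# for around five wavelengths.
```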

Soon, however, this becomes impractical in the real world. A 100-foot-tall line array, which would be roughly five wavelengths at 60 Hz, will presumably provide very effective vertical control, but is unlikely to be achievable in all but the most esoteric conditions.

DSP Support
There are things that level control of line array modules, time delay, and complex DSP frequency shading can do to potentially improve large-scale array performance beyond the obvious aspect of simply flattening the composite response.

Beam steering is one method that may be the answer to keeping array size manageable, while creating directional control that seems to defy the laws of physics. Beam steering is based on delaying some modules in relation to others, thereby increasing or altering the cancellation effect that is the very essence of how the line array principle works to control directivity. Complex DSP control is a field that’s developing rapidly, and is likely to offer continued improvements in performance for the foreseeable future.
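As a rough illustration of the delay-steering idea – the textbook geometric relationship, not any particular manufacturer’s algorithm – the tilt of the main lobe for a straight line of elements can be estimated from the per-element delay. The element spacing and speed of sound below are assumed values.

```python
import math

# For a straight vertical line of elements with uniform spacing d, applying a
# progressive delay dt between adjacent elements tilts the main lobe by roughly
# theta = arcsin(c * dt / d). Generic relationship, not a specific DSP product.
SPEED_OF_SOUND_M_S = 343.0   # assumed, ~20 degrees C
ELEMENT_SPACING_M = 0.35     # assumed module spacing

def steering_angle_deg(delay_s: float) -> float:
    ratio = SPEED_OF_SOUND_M_S * delay_s / ELEMENT_SPACING_M
    return math.degrees(math.asin(ratio))

for delay_us in (50, 100, 200):
    print(f"{delay_us:>4} us per element -> about "
          f"{steering_angle_deg(delay_us * 1e-6):4.1f} degrees of tilt")
```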

It’s not an easy proposition to assemble and measure large-scale line arrays, let alone to attempt the thousands of variations, in an inert acoustical environment, that are needed to determine precisely what the effect of complex DSP intervention can – or cannot – achieve in performance advantages. Fortunately, computer modeling makes it much easier and less expensive to explore differing scenarios, and that is exactly what drives the majority of present-day research and development.

Getting Results
Arrays of various types have been with us for decades. Some have proven to be very effective, providing cohesive and consistent sound quality to large numbers of people, while others have been poor attempts at combining loudspeakers that have little business being used together in any sort of deployment, not even in a disjointed cluster. But progress goes on.

By understanding the fundamentals of sonic energy, which in large part is being able to grasp the nature of wavelengths, you can authentically evaluate marketing claims, make informed decisions when planning and deploying loudspeaker systems, and deliver optimal results to your audience.

This short introduction to acoustic wavelengths is just that: an introduction. To fully understand how the nature of sonic energy affects the wide range of situations that the practicing sound engineer might encounter, I encourage a serious commitment to learning and understanding acoustical principles, and how they relate to real-world applications.

Senior technical editor Ken DeLoria has mixed innumerable shows and tuned hundreds of sound systems with an emphasis on taming difficult acoustical environments, and he’s also the founder and former owner of Apogee Sound, which developed the TEC Award-winning AE-9 loudspeaker.

Posted by Keith Clark on 01/07 at 11:37 AM

Monday, January 06, 2014

50 Must-Read Pro Audio Articles From 2013

This article is provided by the Pro Audio Files.

 
Well here we are again. 2013 was fun. We dove heavily into tutorials with the launch of our YouTube channel and our first two in-depth tutorials from Matthew Weiss: Mixing Hip-Hop and Mixing with Compression. Lots of new tutorials are on deck for 2014.

We’re also currently redesigning the site, and we’re incredibly excited to share the new design with you. We’ll be switching over within the next month.

It’s become a tradition to do our year-end article roundup. We love sharing useful information and we love supporting our friends. This post lets us do both. And who doesn’t love a good listicle? Have a great 2014.

 
—-
 
Production Advice:

Sonic Scoop:

Randy Coppinger:

Recording Hacks:

Mark Marshall:

Kim Lajoie:

Designing Sound:

The Home Recording Show:

Audio Geek Zine:

ProSoundWeb:

Go here to check out the Top 20 articles on PSW for 2013.

Mix Notes:

Wink Sound:

Home Studio Corner:

Audio Issues:

Behind the Mixer:

The Recording Revolution:

Dan Comerchero is the founder and editor of the ProAudioFiles.com, a community blog where audio professionals from around the world share pro audio related articles, techniques, and advice on recording, mixing, production and more.

Be sure to visit The Pro Audio Files for more great recording content. To comment or ask questions about this article, go here.

 

Posted by Keith Clark on 01/06 at 03:41 PM

Friday, January 03, 2014

How To Assemble And Terminate Ethernet Cable

As with many types of cable, you can save some money by purchasing Ethernet cable and connectors and assembling/terminating them yourself.

Of course, keep in mind that putting together cables of any type requires time, patience and some know-how. Without these essential ingredients, you’ll be much better served by buying pre-terminated cables.

That said, let’s get started.

First, to attach any Ethernet RJ-45 connector ends to the lengths of cable, you’ll need an RJ-45 crimping tool.

Start with a good quality cable, and cut a piece to the appropriate length. For the best result, choose a size that is at least 3 feet (about 1 meter) long, but not more than 328 feet (100 meters). A cable that is either too long or too short can cause transmission problems.

Next, use wire strippers to remove about 1 inch of the cable jacket, taking care not to nick or cut the wires.

If the wires are damaged even slightly, cut off the stripped section and start over—even minor abrasions can lead to crosstalk between the wires within the cable. 

Then separate the individual wires, arranging them in the order of the desired configuration, either straight-through or crossover. Be careful not to untwist the wires down into the jacket; doing so will cause the cable to be out-of-spec and can also cause crosstalk.
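For reference, the two standard conductor orders can be written out as a quick lookup, as in the snippet below; this reflects the commonly published T568A/T568B color orders and is not a substitute for the TIA/EIA-568 standard itself. A straight-through cable uses the same order on both ends, while a crossover uses T568A on one end and T568B on the other.

```python
# TIA/EIA-568 conductor order, pin 1 through pin 8 (with the connector clip
# down and the front facing away from you, pin 1 is on the left).
T568A = ["white/green", "green", "white/orange", "blue",
         "white/blue", "orange", "white/brown", "brown"]
T568B = ["white/orange", "orange", "white/green", "blue",
         "white/blue", "green", "white/brown", "brown"]

# Straight-through: same scheme (usually T568B) on both ends.
# Crossover: T568A on one end, T568B on the other.
for pin, (a, b) in enumerate(zip(T568A, T568B), start=1):
    print(f"pin {pin}: T568A={a:<13} T568B={b}")
```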

Above, a common RJ-45 crimp tool. And, below, be sure to line up everything carefully; it’s a one-shot deal.

Select the right connector for your cable. They’re available for solid or stranded conductors, but are not interchangeable.

Flatten the wires into a line, the way they’ll be inserted into the connector, and cut them to a length of about half an inch from the end of the jacket. Make sure that all ends are even with each other. (And don’t strip the wires!)

Next, insert the group of wires into the connector. By holding the RJ-45 connector with the clip facing down and the connector front facing away from you, pin #1 will be on your left. Be certain that each wire is inserted all the way so the tips of the wires at the front of the connector can be seen.

The cable jacket should extend into the RJ-45 connector end by about a quarter of an inch, because there’s a secondary mechanism that will also clamp the jacket. Inspect the cable to verify that all wires are in the correct order and are fully inserted into the connector.

Then use the crimping tool to crimp the connector to the cable. The tool presses the connector in at the two points to make a connection with the wires and to secure the cable jacket.

If crimped properly, the connector clamps both the wires and jacket.

Check the completed cable to verify once again that the wires are in the proper order, that the connector is solidly attached to the cable, and that all wires are fully inserted up to the front of the connector.

Use a cable tester that supports RJ-45 connectors to check your work. If the cable doesn’t work, you’ll need to cut off the connector and start over again, and you will not be able to re-use the connector. (It’s a “one shot” deal!)

And voila! Your connector should be properly terminated, and you can move on to attaching the next one.

May the bandwidth be with you…

Chris Bushick is an electrical engineer and sound engineer based in Austin, Texas.

Posted by Keith Clark on 01/03 at 02:45 PM

Thursday, January 02, 2014

What Have You Done For Your Ears Lately?

Chances are you make at least part of your living with your ears. Stop and think about it. Could you perform your job as well…would your income level be the same…would your professional reputation be intact if you suffered severe hearing loss?

Both musicians and the live sound technicians who work with them need to be able to hear things. Not just hear them well, but hear them better than the average person. This should make us stop and consider our own hearing health, and the environments that we work in.

What have you done for…(and to) your ears lately?

Work-Related Hazards
Did you have your head deep inside a bass bin, listening for a 60-cycle hum, when somebody pushed “play” on the CD player? Were you walking past the tri-amplified sidefill stack, with your ear at compression driver level, when the lighting crew’s ladder knocked the center stage vocal mic stand over into the floor wedge to induce non-stop feedback? Did the drummer hit his primary crash cymbal, hard, 3 inches from your ear, while you were on the drum riser adjusting the hi-hat microphone?

Each of these typical events can be a daily occurrence on a typical concert stage, but any one of them might be the accident that causes you to have either temporary or permanent hearing loss. This could result in a shortened career and a decreased ability to earn a living with your chosen skill.

Accidents are one thing. Constant and intentional exposure to high sound levels is yet another. Did you just finish a 50-show run in tiny concert clubs with that new speed metal band? Was your powerful cue monitor wedge placed on end only one foot from your right ear as you mixed stage monitors for that entire world tour? Do you check 64 house mic line inputs every day with a ragged set of stereo headphones while listening to a clipping headphone amp?

Chances are good that your ears at least need a rest; but there are also certain techniques that can be employed to offer the maximum amount of protection to your hearing as you continue to do your job.

Hearing Protectors
Earplugs are now in use more and more frequently by ushers, security guards, video crew persons, and others who must work at their job while surrounded by the high-level sound intensity of today’s rock music concert programs.

Throw-away foam-type plugs are often issued on a daily basis at arenas and auditoriums for the working crews; some facilities have a nurse or public health official who will provide these items to any member of the general public audience who complains about loud sound levels.

If you’re a technician who works around powerful sound systems, but is not actually responsible for mixing sound during the show, it is a good idea to have some sort of hearing protection device handy.

The same is true if you are a sound professional who is waiting around for your band to come on while listening to someone else operate a loud system. Here are some basic options:
Disposable Foam Plugs. This type of hearing protection device comes in a small cardboard or plastic pouch, and several can easily be stuffed in a shirt pocket or a briefcase pouch. They are disposable, intended for one-time use. Common brands are E.A.R. and DeciDamp from North Health Care. Such devices offer a noise reduction rating of about 12-20 dB, depending on frequency. These plugs mainly reduce high frequencies.

Re-usable Silicone Insert Plugs. These rubberized insert cushions conceal tiny metal filtering diaphragmatic mechanisms to attenuate sound levels. They are often seen in use by gun buffs, construction workers and heavy equipment operators. The Sonic Valve II comes in its own plastic storage case with a key chain attached, and offers about a 17 dB noise reduction rating. Often available in gun shops or industrial safety supply stores, a pair can run from $15-20.

Personal Custom-Fit Earmolds. The best hearing protection device, and the one most applicable to working around musical sound, is one that attenuates all frequencies evenly. When correctly designed and properly fitted, custom-molded flexible plastic earmolds can offer 15-20 dB of balanced noise level reduction; in other words, full-frequency sound is still heard, but at a reduced level. There are numerous suppliers, who provide custom fitting services as well, such as Sensaphonics.

Industrial Headsets. When maximum attenuation of very loud sounds is desired, particularly at low frequencies, the cushioned headset works well. Offering up to 30 dB of attenuation, hearing protectors from David Clark have cushioned headpads and tight-fitting earseals. This is also an option for persons who do not wish to stick standard earplugs inside the ear. This is the type of protection often seen in use on airport runways and in the cabs of tractors and heavy cranes at construction sites.

Protecting Your Hearing On The Job
Use mini-nearfield monitors as a cue system for live mixing instead of headphones whenever possible.

By placing one or two small, powered monitors at your mixing console position and giving them the output from your stereo cue bus, you are able to solo up a mic input or an output mix and hear the signal without having to put on regular stereo headphones.

Roland, TOA, Yamaha, Tascam and other musical-instrument oriented manufacturers offer a variety of compact products.

This is particularly handy during setup and sound check. Using this method, you’ll have less loud, direct sound putting pressure on your eardrums, yet you will still hear the needed information.

Dummy headphones can be used as a quick way to lower the sound level of what you hear. Simply put on your regular stereo headphones, but don’t plug them into anything. Run the cord into your pocket. This will offer isolation from the louder acoustical environment that surrounds you during a show, while your ears have a chance to rest.

Rests away from the job site should be taken whenever possible. Remove yourself from the noisy environment and take time to have a meal, a nap, read a book, or whatever there is to be done in a quieter space. Focus on finding a ‘quiet zone’...no blaring TV or Walkman headphones. This can mean a walk outdoors, finding a secluded dressing room, or whatever.

The important thing when working around loud sound levels is to give your hearing system and ear mechanism time to recover. If you work in a loud environment, your hearing will be more sensitive and ‘fresh’ if you take regular breaks like this.

Sound Level Meters
If you do not already include a hand-held, battery powered SPL meter in your working toolkit, get one. Don’t rely on assumed level readings from your 1/3-octave real-time analyzer unless you are absolutely sure that the correct microphone is in use (mic sensitivities can vary greatly, causing erroneous SPL readings) and that the system is properly calibrated. It’s better to have a small portable unit that you can keep in front of you on the mix console, or carry around the venue with you as you check coverage.

These handy devices can range in price from $65 (Radio Shack) to $2,500 (Bruel & Kjaer). I recommend the General Radio 1565-B Sound Level Meter (about $600); this is a hand-held battery powered meter that is approved by US Government agencies for environmental noise measurements. With its OSHA certification sticker, it helps you stand up to noise regulation officials, many of whom may have less sensitive and reliable gear.

Almost any type of SPL meter will do what you need; the accuracy difference between the cheapest and the most expensive can be about 1-2%...this would mean a possible error, plus or minus, of 1-2 dB at around 100 dB SPL. The more sophisticated, expensive units are best for critical situations.

Learn the difference between ‘A’ and ‘C’ weighting filter scales (US Government agency guidelines stipulate the use of C-weighted measurements for noise environments dominated by frequencies below 500 Hz; A-weighted measurements are most useful for making comparative readings in live show environments and discussing levels with others).
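To connect level readings to exposure time, here’s a small illustrative calculation using the widely published OSHA criterion level and exchange rate; those reference values are assumptions included for illustration, not figures drawn from this article, so check current regulations before relying on them.

```python
# Permissible exposure time under the commonly cited OSHA formula:
# T = 8 / 2 ** ((L - 90) / 5) hours, where L is the A-weighted level in dB.
# The 90 dBA criterion and 5 dB exchange rate are assumed here for
# illustration; verify against current regulations for your jurisdiction.
def permissible_hours(level_dba: float) -> float:
    return 8.0 / (2.0 ** ((level_dba - 90.0) / 5.0))

for level in (90, 95, 100, 105, 110):
    print(f"{level} dBA -> about {permissible_hours(level):5.2f} hours")
# 100 dBA works out to roughly 2 hours; 110 dBA to about 30 minutes.
```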

Use the sound level meter to get useful information in the front rows, the high balcony, the back of the hall, at the console…wherever you need to know the actual, average sound pressure level of your show.

Find the ideal ‘pocket’ where your show mix is as exciting and powerful as it needs to be, yet where you do not get audience complaints about excessive volume.

Use your meter as a daily reference guide, regardless of the type of acoustical environment.

Paying attention to the level of your system’s operation will be one more step toward protecting your own hearing, as well as that of others.

Long-Term Effects Of Loud Sound
We have probably all experienced TTS (Temporary Threshold Shift) after being exposed to very loud music or other sounds. This is the sensation that someone has stuffed cotton in your ears after you have already walked out of a loud environment; after one or two hours of high-level listening, your hearing threshold may shift by as much as 40 or 50 dB.

In other words, your ears have ‘shut down’ to reject the extra-loud sounds that you have exposed them to. Recovery may take from a few hours to several days.

Prolonged exposure to very loud music can bring on tinnitus, which is a ringing sensation that you hear in your ears, even though no loud sounds are present around you. If you experience this ringing several days after exposure to a powerful sound system, consider that to be your own body’s way of giving you a danger signal. Heed the warning.

Have a regular hearing checkup. Get to know your audiologist or hearing specialist. Once or twice a year, get checked for both air and bone conducted sound sensitivity, speech understanding, and make sure that your inner ear parts are functioning properly.

If your job involves working with live sound, and you want to continue doing it, take time to carefully consider what your own personal approach is going to be as you work to conserve your hearing. You are also preserving your livelihood in the process.

David Scheirman is vice president, tour sound at JBL Professional and is also a long-time contributor of pro audio and sound reinforcement editorial.

Posted by Keith Clark on 01/02 at 03:04 PM

Tuesday, December 31, 2013

PSW Top 20: Most-Read Articles Of 2013

Be sure to cast your votes in the 2014 PSW Readers Choice Awards.

 

As we turn the page on 2013, we present the 20 articles that were the most-read over this past year on ProSoundWeb, based upon total page views.

Note that some of the articles that delivered top results over the past 12 months were actually written and posted well over a year ago, but they continue to prove of high interest and value to our worldwide readership.

In addition, some very popular articles posted more recently have not had as much time to accumulate traffic as others that have been posted for a longer period of time. We suspect you’ll see some of those fine articles on next year’s list.

Without further ado, here are the top 20 articles on PSW for 2013.

Thanks for reading, and here’s to a great 2014!

 

 

Most-Read Articles #20 - #16 (Posted Thursday, December 26)

#20: Frickin’ Lasers: Exploring Better Drum Sound
By Andrew Greenwood

#19: Making It Sing: Microphones For Lead And Background Vocals
By Gary Parks

#18: Church Sound: Understanding And Effectively Using Parallel Compression
By Mike Sessler

#17: Four Suggestions For Surviving In The Pro Audio Business
By M. Erik Matlock

#16: Loudspeaker Design Trends: The Latest In A Constant Cycle Of Innovation
By Craig Leerman

Most-Read Articles #15 - #11 (Posted Friday, December 27)

#15: FIR-ward Thinking: Examining Finite Impulse Response Filtering In Sound Reinforcement Systems
By Pat Brown

#14: The Keys To Becoming A Great Technical Artist
By Mike Sessler

#13: Energy & Exposure: Presenting the Audience With The Optimum Balance
By Dave Rat

#12: Critique Your Mix By Asking These 11 Questions
By Chris Huff

#11: Dialing It In: Delivering Consistent Concert Sound For Paramore
By Kevin Young

Most-Read Articles #10 - #6 (Posted Monday, December 30)

#10: Legitimate Leslie Substitute? Inside The Neo Ventilator
By Danny Abelson

#9: An App For That: Recent Developments In Digital Console Software
By Craig Leerman

#8: How To Open Up Space In A Studio Mix
By Matthew Weiss

#7: Million Dollar Sound: Analog Style For Elton John At The Colosseum
By Greg DeTogne

#6: Everyday Carry: The Right Tools For The Job
By Craig Leerman

Most-Read Articles #5 - #1 (Posted Tuesday, December 31)

#5: Mixer Inside The Mixer: Applications Of Console Matrix Sections
By Craig Leerman

#4: From Simple To Complex: The Wide World Of Drum Techniques
By Bruce Bartlett

#3: 50 & Counting: Sonic Truth For The Rolling Stones Latest Tour
By Danny Abelson

#2: 10 Things About Sound You May Not Know…
By Bobby Owsinski

#1: Seven Obscure Mixing Techniques Used By The Pros
By Matthew Weiss

Posted by Keith Clark on 12/31 at 06:08 PM

Tuesday, December 24, 2013

Equalizing The Room—What It Really Means

More than semantics, equalization has very real practical consequences

“I’m going to equalize the room.” We’ve all heard that statement so many times that we scarcely think about what it literally means. We know that in practical terms it means adjusting an equalizer to suit your taste. It may be done with the latest high-technology analysis equipment, voodoo magic or simply tweaking away “until it sounds right.”

In any case, are we really “equalizing the room”? What exactly are we doing? There are lots of disagreements on this topic but all agree on one thing: You cannot change the architecture of the room with an equalizer.

You can, however, equalize the response of the speaker system. Where the room fits into all this is a matter of debate; it is much more than semantics, and it has very real practical consequences for our approach to sound system alignment.

What do equalizers “equalize” anyway?
Let’s assume that we have a speaker system with a flat (or otherwise desirable) free field frequency response. That is to say, it requires no further equalization. There are three categories of interaction that will cause the frequency response to change, to become, for lack of a better word, “unequalized.”

The first of these interactions are between speakers. When a second speaker is added the combination results in a modified frequency response at virtually all locations. This is true of all speaker models and all array configurations, regardless of any claims to the contrary.

The summation of the two responses varies the frequency response at each position, depending upon the relative time arrival and level between the two speakers. As additional speakers are added the variations in response increase proportionally.

The second category is the interaction of the speaker(s) with the room. These are generally termed coupling, reflections or echoes. The mechanism is similar to the speaker interaction above. The response varies from position to position, depending upon the relative time arrival and level between the direct and reflected sound.

Both of the above effects are the result of a summation in the acoustical space of multiple sources, either speaker and speaker, or speaker and reflection. Therefore the solutions for these interactions are very closely related.

The third interaction is caused by the effects of dynamic conditions of temperature, humidity and changing absorption coefficient. However, the effects of these interactions are small by comparison with the other two, so we will not touch on them further here.

Are any of these problems solvable with an equalizer? The answer is a qualified “Yes”. The magnitude of the above problems can be reduced by equalization, and substantial progress can be made toward restoring the original desirable frequency response.

If equalizers were totally ineffective, then why have we been loading these things into our racks for the last 35 years? However, in a practical sense the equalizer can only provide complete success in equalizing the response when applied in conjunction with other techniques such as architectural modification, precise speaker positioning, delay and level setting.

To what extent is the speaker/room interaction equalizable? This has been a matter of debate for more than 15 years. In particular the advocates of various acoustical measurement systems have come down hard on these issues.

What we are doing is equalizing, among other things, the effects of the room on the speaker system. Why is this controversial? It stems from the historical relationship of equalizers and analyzers. Let’s turn on the way-back machine and take a look.

Early analysis
In ancient times (the 1970s), the alignment of sound systems centered around a crude tool known as the Real-Time Analyzer (RTA) and a companion solution device, the graphic equalizer. The analyzer displayed the amplitude response over frequency in 1/3 octave resolution and the equalizer could be adjusted until an inverse response was created, yielding a flat combined response.

It takes a negligible skill level to learn to fiddle with the graphic EQ knobs until all the LEDs line up on the RTA. It is so simple that a monkey could do it, and the result often sounded like it.

Although these tools were the standard of the day, they have severe limitations, and these very limitations can lead to gross misunderstanding of the interaction of the speakers to each other and the room, resulting in poor alignment choices.

One such limitation is the fact that the RTA lacks information regarding the temporal aspects of the system response. There is no phase information nor any indication as to the arrival order of energy at the mic.

The RTA cannot discern direct from reverberant sound, nor does it indicate whether the response variations are due to loudspeaker interaction alone or to loudspeaker/room interaction. Therefore the RTA provides no help in terms of critical speaker positioning, delay setting or architectural acoustics.

Second, the RTA gives no indication as to whether the response at the mic is in any way related to the signal entering the loudspeakers. The RTA gives a status report of the acoustical energy at the microphone, with no frame of reference as to the probable causes of response peaks and dips.

These peaks and dips could be due to early room reflections or speaker interactions, which can respond favorably to equalization. However, the irregularities in response could be from late reflections, noise from a forklift engine or reflections from a steel beam in front of the loudspeaker.

The equalizer will be ineffective as a forklift or steel beam remover, but the RTA will give you no reason to suspect these problems. A system that is completely unintelligible could look the same as one that is clear as a bell.

Third is the fact that 1/3-octave frequency resolution is totally insufficient for critical alignment decisions. In addition, there is the misconception that a matched analyzer/filter set system is desired. It is not. The analyzer should be three times the resolution of the filter set in order to be able to provide the visible data needed to detect the center frequency, bandwidth and magnitude of the response aberrations.

A 1/3 octave RTA is only able to reliably determine bandwidths of an octave or more. What appears as a 1/3 octave peak may be much narrower. What appears as a broad 2/3 octave peak may actually be a tall, narrow peak centered between the 1/3 octave points. What will your graphic equalizer do with this?

Unfortunately the absence of this critical information lulled many users into a sense of complacency predicated on the belief that equalization was the only critical parameter for system alignment. In countless cases, equalizers were employed to correct problems they had no possibility of solving, and could only make worse.

Graphic equalizers have no possibility of creating the inverse of the interactive response of the speakers with the room. Simply put: “You can’t get there from here.”

The audible results of all this tended to create a generally negative view of audio analyzers. Many engineers concluded that their ears, coupled with common sense, could provide better results than a blindly followed analyzer.

As a result, though RTAs were often required on riders, they only received cursory attention on show day.

Modern Analysis
Technological progress led to the development and acceptance of two analysis techniques in the early 80s: Time Delay Spectrometry (TDS) and dual-channel FFT analysis. Both of these systems brought to the table whole new capabilities, such as phase response measurement, the ability to identify echoes and high-resolution frequency response.

No longer could an unintelligible pile of junk look the same as the real McCoy on an analyzer. The complexity of these analyzers required a well-trained, highly skilled practitioner in order to realize the true benefits.

Advocates of both systems stressed the need for engineers to utilize all tools in their system, not equalizers alone, to remedy the response anomalies. Delay lines, speaker positioning, crossover optimization and architectural solutions were to be employed whenever possible. And now we had tools capable of identifying the different interactions.

But on the issue of “equalizing the room” a division arose. All parties agreed that speaker/speaker interaction was somewhat equalizable. The critical disagreement was over the extent the loudspeaker/room interaction could be compensated by equalization.

The TDS camp advocated that speaker/room interaction was not at all equalizable, and that therefore the measurement system should screen out the speaker/room interaction, leaving only the equalizable portion of the loudspeaker system on the analyzer screen. The inverse of that response would then be applied via the equalizer, and that was as far as one should go.

The TDS system was designed to screen out the frequency response effects of reflections from its measurements via a sine frequency sweep and delayed tracking filter mechanism, thereby displaying a simulated anechoic response. The measurements are able to clearly show the speaker/speaker interaction of a cluster and provide useful data for optimization.

Such an approach can be effective in the mid and upper frequency ranges, where the frequency resolution can remain high even with fast sweeps, but it is less effective at low frequencies. Low frequencies have such long periods that it is impossible to get high-resolution data without taking long time records, thereby allowing the room into the measurement.

For example, to achieve 1/12th-octave resolution, the equivalent of the Western tempered scale, one must have a time record 12 times longer than the period of the frequency in question. For 30 Hz you will need a time record of about 400 ms (12 x 33.3 ms). If fast sweeps are made to remove echoes from the measurement, the low frequency data has insufficient resolution to be of practical use.
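That relationship is easy to tabulate; here’s a small sketch of the cycles-times-period arithmetic (the frequencies are arbitrary examples).

```python
# Time record needed for a given fractional-octave resolution, using the rule
# of thumb from the text: the record must span N periods of the frequency in
# question to resolve roughly 1/N-octave detail there.
def time_record_ms(frequency_hz: float, cycles: int = 12) -> float:
    period_ms = 1000.0 / frequency_hz
    return cycles * period_ms

for f in (30, 100, 1000, 10000):
    print(f"{f:>6} Hz, 12 cycles -> {time_record_ms(f):7.1f} ms")
# 30 Hz needs roughly 400 ms of data, which inevitably lets the room into
# the measurement; 10 kHz needs only about 1.2 ms.
```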

Dual-channel FFT analyzers utilize varying time record lengths. In the HF range, where the period is short, the time record is short. As the frequency decreases, the time record length increases, creating an approximately constant frequency resolution.

The measurements reveal a constant proportion of direct sound and early reflections, the most critical area in terms of perceived tonal quality of a speaker system.

The most popular FFT systems utilize 1/24th-octave resolution, which means that the measurements are confined to the direct sound and the reflections inside a 24-wavelength time window across the board. This is a good practical level of resolution, allowing us to accurately equalize at around the 1/8 octave level.

With the FFT approach, more and more of the room enters the response as frequency decreases. This is appropriate because at low frequencies the room/speaker interaction is still inside the practical equalizability window.

For example, the arena scoreboard reflection is 150 ms later than the direct signal. At 10 kHz, the peaks and dips from this reflection are spaced 1/1500 of an octave apart. At 30 Hz, they will be only 1/3 octave apart. Thus the scoreboard is in the distant field relative to the tweeters, and applying equalization to counter its effects will be totally impractical.

An architectural solution such as a curtain would be effective. But for the subwoofers, the scoreboard is a near-field boundary and will yield to filters much more practically than the 50 tons of absorptive material required to suppress it acoustically.

Many years ago, the FFT camp boldly stated that the echoes in the room could be suppressed through equalization. Unfortunately, these statements were made in absolute terms without qualifying parameters, leaving the impression that the FFT advocates thought it was desirable or practical to remove all of the effects of reverberation in a space through equalization.

While it can be proven from a theoretical standpoint that the frequency response effects of a single echo can be fully compensated for, that does not mean it is practical or desirable. The suppression can only be accomplished if the relative level of the echo does not equal or exceed that of the direct and that no other special circumstances arise that cause excess delay. (Excess delay causes a “non-minimum phase” aberration and is outside the scope of this article.)

If the direct level and echo level are equal the cancellation dip becomes infinitely deep and the corresponding filter required to equalize it is an infinite peak. As we know from sci-fi movies, bad things happen when positive and negative infinity meet up.

Compensating for the response requires adjustable bandwidth filters capable of creating an inverse to each comb filter peak and dip in the response. As the echo increases, you will need increasing numbers of ever narrowing filters.

A 1 ms echo corrected to 20 kHz will require some 40 filters, because there are 20 peaks and 20 dips varying in bandwidth from 1 octave down to 0.025 octave. A 10 ms echo would need 400, with bandwidths down to a 1/400 octave.
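Those counts fall straight out of the comb-filter spacing a single echo produces; the quick sketch below works through the arithmetic for a few example delays (the delay values themselves are just illustrations).

```python
# A single echo delayed by t seconds produces comb-filter features spaced
# 1/t Hz apart (dips midway between peaks). This sketch counts the peaks and
# dips below 20 kHz for a few example delays.
def comb_summary(delay_ms: float, top_hz: float = 20000.0) -> None:
    delay_s = delay_ms / 1000.0
    spacing_hz = 1.0 / delay_s            # peak-to-peak (and dip-to-dip) spacing
    peaks = int(top_hz // spacing_hz)     # roughly one peak per spacing below top_hz
    print(f"{delay_ms:>5.1f} ms echo: features every {spacing_hz:8.1f} Hz, "
          f"about {peaks} peaks and {peaks} dips below {top_hz/1000:.0f} kHz")

for d in (1.0, 10.0, 150.0):
    comb_summary(d)
# The 1 ms case gives the ~20 peaks and 20 dips cited above; at 150 ms
# (the scoreboard example) the features are only about 6.7 Hz apart.
```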

Obviously, it would be insane to attempt to remove all of the interaction at even a single point in the hall. In the practical world, we have no intention of attacking every minuscule peak and dip, but instead will go after the biggest repeat offenders. The narrower the filters are, the less practical value they have because the response changes over position.

Practical Implications
It is indeed possible and practical to suppress some of the effects of speaker/room interaction. If this was not possible, it would be standard practice to equalize your rig in the shop, put a steel case around the EQ rack and hit the road. The practical side of this is that we must be realistic about what is attainable and what are the best means of getting there.

The variations in frequency response due to both speaker/speaker interaction and loudspeaker/room interaction will always change with position. Once you have seen high-resolution data at multiple positions, you can never go back to thinking that your equalization will solve problems globally.

A system that has the minimal amount of the above interactions will have the greatest uniformity throughout the listening environment and, therefore, stand to gain the most practical benefit from equalization. If it sounds totally different at every seat, let’s just tweak the mix position and head to catering.

To minimize the speaker/speaker interactions requires directional components, careful placement and precise arraying. In areas where the speakers overlap, time delays and level controls will minimize the damage in the shared area. To minimize loudspeaker/room interaction, the global solutions lie in architectural modification (it’s curtain time), the selection of directionally controlled elements and precise placement.

Finally you are left with equalization. For each subsystem with an equalizer, map out the response in the area by placing a microphone in as many spots as you can and see what the trends are.

In particular, measure around the central coverage area of the speaker. Stay away from areas of high interaction, where the response will vary dramatically every inch.

Examples of this include the seam between two cabinets in an array or very close to a wall. Each position will be unique, but if you place filters on the top four to six repeat offenders you will have effectively neutralized the response in that area.

Conclusion
Modern analyzers are capable of displaying a dizzying array of spectral data. But little practical benefit will come to us if we continue with the antiquated approach of the RTA era. To fully take advantage of the benefits of equalization, we must fully comprehend how to identify the mechanisms that “unequalize” the system.

With modern tools, it becomes possible to analyze the response such that the interactive factors of speaker systems can be distilled and viewed separately. This allows the alignment engineer to prepare the way for successful equalization by using other techniques that reduce interaction and maximize uniformity in the system.

“Equalizing the room” will remain in the domain of architectural acousticians, but with advanced tools and techniques, we can proceed forward to better equalize the speaker system in the room.

Bob McCarthy serves as director of system optimization with Meyer Sound. Find out more about Bob here.

Posted by Keith Clark on 12/24 at 07:40 AM

Monday, December 23, 2013

A Primer On Ethernet Cabling For Digital Audio

Context, specifics and rules of thumb for the multiple twisted pair stuff we call CAT-5 and CAT-6

More and more these days, audio systems are going digital, not just for digital signal processing (DSP), but also for general signal transmission between devices.

Let’s take a look at the care and feeding of Ethernet cabling, specifically the multiple twisted pair stuff we call CAT-5 and CAT-6.

General signal cabling has been organized into categories which we refer to as CAT-X cables. All are available using either solid or stranded conductors.

Their designations, according to EIA/TIA-568-B standards, are as follows: UTP means Unshielded Twisted Pair. STP means Shielded Twisted Pair, for use in extremely noisy environments, employing a shield around each pair plus an overall shield. ScTP, or Screened Twisted Pair, also called FTP, or Foiled Twisted Pair, is a variation of STP, but with only an overall foil shield.

The first use of 100-MHz CAT-5e cabling to carry a high-channel count of digital audio deterministically (synchronized without network crashes, extensive buffering, or re-transmissions) occurred with the development of CobraNet by Peak Audio. 

CobraNet is still licensed by Cirrus Logic, and its standard use over a 100Base-T network is a 20-bit, 48 kHz, 128-channel bidirectional configuration, but it can operate at 24 bits providing 112 bidirectional channels, as well as many more channels over a gigabit network.

Initially starting out with a CobraNet developer’s kit, the French company Digigram instead came out with their own digital audio protocol and interface hardware called EtherSound. It carries 64 channels of 24-bit, 48-kHz digital audio, but does have lower latency than CobraNet, which is attractive to the live sound market.


The newest player on the block using CAT-5e or above is Audinate Dante. Audinate is the first to make use of the IEEE’s new standard for transporting clock across an asynchronous network to make the system deterministic. The advantage of Dante over CobraNet or EtherSound is that it requires no special hardware beyond an Ethernet connection to transport digital audio.

A plain vanilla laptop running a DAW can split XXX channels of XX-bit digital bidirectional audio out to a standard Ethernet network over its normal network port. However, it uses a UDP/IP approach rather than the more familiar TCP/IP for its signal transmission.
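To make the UDP point concrete, here’s a minimal, generic sketch of pushing raw audio frames over UDP with Python’s standard library. It illustrates UDP’s fire-and-forget delivery (no acknowledgements or retransmissions), which is part of why it suits low-latency streaming; the address, port and frame size are placeholder assumptions, and this is in no way Dante’s actual wire format.

```python
import socket

# Minimal illustration of UDP's "fire and forget" delivery: each packet of
# audio samples is sent once, with no acknowledgement or retransmission.
# Generic sketch only -- NOT the Dante (or any other) wire protocol.
DEST = ("192.168.1.50", 5004)   # assumed destination address and port
FRAME_BYTES = 288               # assumed frame size, e.g. 48 samples x 3 bytes x 2 channels

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def send_frame(pcm_frame: bytes) -> None:
    """Send one frame of raw PCM; UDP adds no delivery guarantees or delays."""
    sock.sendto(pcm_frame, DEST)

# Example: send a few frames of silence.
for _ in range(10):
    send_frame(bytes(FRAME_BYTES))
```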

At this point, some discussion of the data network “seven layers” may be useful. The International Organization for Standardization (ISO) created a framework to define network functions back in the 1980s. These seven layers will help us to understand some of the network acronyms commonly thrown around these days.

Figure 1: The seven layers of the ISO reference model for data networks.

It also helps in understanding the difference between digital audio transport systems that can use off-the-shelf network hardware and those that require proprietary routing and I/O devices. See Figure 1 for the chart on the seven layers of data networks.

There are other proprietary systems that use Ethernet-type cable, but they do not technically qualify as an Ethernet-based system unless they implement both layer 1 (the physical layer) and layer 2 (the data link layer). Aviom and ProCo Momentum are examples of these systems that cannot make use of off-the-shelf Ethernet hardware, using their own purpose-built products instead.

Finally, there are other digital audio systems that use non-Ethernet cabling. MADI, for example, uses primarily co-axial cable and most any of these can be ported over a fiber-optic link as well.

Most of the digital audio transmission systems we are familiar with, however, use Ethernet-type cabling, and the rules for this type of cable are a little different than what you might be used to for typical analog audio cabling.

There are two general types of Ethernet cabling. “Backbone” Ethernet cabling uses solid conductors and, as a result, is good for up to 100 meters of transmission distance.

There are “patch” cables manufactured that employ stranded conductors and more flexible jacket material.

They are more rugged and more pliable, but are designed for distances only to 25 meters. This is similar to telephone cables. Those intended to be moved and flexed use stranded conductors, while those intended to be permanently installed without flexing employ solid conductors.

Often, we end up using a solid-conductor, stiff-jacketed Ethernet cable intended for backbone use as a patch cable or for extended portable applications.

Backbone cables used in that manner will not hold up under hard or extended use. Be sure to rotate them out before the solid conductors begin breaking. It’s always better to toss them early than to have one go down during a show. Always be sure to have a couple of spares on hand, just in case.

For fixed installations, the pulling force on Ethernet cabling is MUCH less than for analog cables. This is because as the cable is pulled, the twisting and spacing between conductors can change, which can affect the bandwidth that can be transmitted properly. DO NOT pull hard on Ethernet cabling. Its maximum pulling tension is only 25 pounds.


When dressed into racks, be sure to avoid very sharp right-angle turns of the cabling. The minimum radius for a turn is 1 inch, or around a 50-cent piece. This, like pulling too hard, changes the twist-rate and capacitance, which affects the bandwidth capability.

Tightly dressing multiple Ethernet cables into neat combs and tight evenly spaced tie wrapping can induce comb-filter notches into the cable’s overall bandwidth.

Loose and a little messy is actually better when it comes to Ethernet cable routing. You may not notice any effect to these practices if you are transmitting only a few channels, but if you are transmitting lots of channels, you may get bit when you least expect it!

Also be aware of the limitations of cheap network analyzers. The really low-cost ones are only checking for continuity of each conductor. The really good ones, the ones that confirm continuity AND bandwidth, can easily cost $5,000 or more.

John Murray is a 35-year industry veteran who has worked for several leading manufacturers, and has also presented two published AES papers as well as chaired numerous SynAudCon workshops. He is currently the principal of Optimum System Solutions, a consulting firm.

Posted by Keith Clark on 12/23 at 12:54 AM

Friday, December 20, 2013

A Useful Tool: Creating & Applying FIR Filters

 
In part 1 of this article series (here), I laid out a few guiding principles regarding Finite Impulse Response (FIR) filters. Here’s a quick review:

1) With FIR filters I can have very steep LP and HP filters, such as might be used in a crossover network, that have linear group delay (same delay for all frequencies).

2) The magnitude and phase response of a FIR filter can be adjusted independently. In an Infinite Impulse Response (IIR) filter they are interdependent – you can’t change one without changing the other.

3) WAV impulse responses by definition are FIR filters, since they are fixed in length and cannot (theoretically) decay for infinity. Any IR you measured can potentially be used as a filter.

So, let’s create some FIRs.

Creating The FIR Filter
I created my FIR filters in software, using the freeware rePhase to define a high-pass filter (HPF) and low-pass filter (LPF) based on a few input variables (Figure 1). You’ll want to add rePhase to your toy box for investigating FIRs. (I’ll show some other FIR-creation tools in future installments of this series.)

I’ll create a 1 kHz brickwall HP filter. FIR filters are defined by the number of “taps.” This is a way of specifying the length of the impulse response. The number of taps is equal to the number of samples.

Figure 1 – FIR filters generated with rePhase.

Basically, the more taps, the more samples, the longer the time length, and the greater the precision. The greater the precision, the greater the “sharpness” that can be attained in the frequency domain. Just as you need a long time window to resolve low frequencies when performing acoustical measurements, you need a lot of taps (long IRs) for detailed low frequency FIR filters.

Let me demonstrate. Figure 2 shows the response of a 60 tap, 1 kHz HPF, the response of a 600 tap HPF, and the response of a 6000 tap HPF. Clearly, the 6000 tap filter can be described as “brickwall.”

That’s the one we want, right? This is audio, and you never get something for nothing.

Figure 2 – At the top we see a 60 tap FIR HPF. Note the lack of sharpness due to the small tap size. Next, adding more taps (more samples or a longer time length) creates a sharper filter, and then at the bottom, we have a 6000 tap filter that is 136 ms in time duration, with latency of 1/2 of the length (68 ms).


The latency through an FIR is one-half the number of taps, so the signal through this filter will be delayed by 3000 samples. At 44.1 kHz, that’s about 68 milliseconds (ms). While that might be acceptable in a home playback system, it might as well be next week for live sound. I’m going to stay with this to illustrate the concept, but in the real world we’d have to give back some of the precision to reduce the latency.
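For readers who want to experiment outside of rePhase, a linear-phase FIR high-pass with a chosen tap count can be generated with SciPy. This is a generic windowed-design sketch rather than a recreation of the rePhase filters shown here, so the exact response will differ; the sample rate, tap count and corner frequency roughly follow the discussion above.

```python
import numpy as np
from scipy.signal import firwin, freqz

FS = 44100      # sample rate, Hz (matching the text)
TAPS = 6001     # an odd tap count keeps a type-I linear-phase high-pass
FC = 1000.0     # high-pass corner, Hz

# Linear-phase FIR high-pass; the window/design details are a generic choice,
# not a recreation of the rePhase filters described in the article.
h = firwin(TAPS, FC, fs=FS, pass_zero=False)

latency_samples = (TAPS - 1) // 2
print(f"group delay: {latency_samples} samples "
      f"({1000.0 * latency_samples / FS:.1f} ms at {FS} Hz)")

# Inspect the magnitude response near the corner.
w, H = freqz(h, worN=8192, fs=FS)
corner_db = 20 * np.log10(np.abs(H[np.argmin(np.abs(w - FC))]))
print(f"response at {FC:.0f} Hz: {corner_db:.1f} dB")
```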

Figure 3 – The FIR generation and export settings in rePhase.


I created a 1 kHz LP filter in the same manner. Since these brickwall filters essentially do not overlap, I don’t have to worry about how they interact. Filter interaction is a main concern for analog and IIR digital filters.

These are “brickwall” filters that produce no frequency-dependent delay. Note how wacky the phase response gets in the stop band. That’s because the magnitude response is down about 80 dB.

If there is no magnitude response (i.e., no signal output from the filter), then the phase is meaningless and looks like garbage.

rePhase allows the filters to be saved in multiple formats, including WAV files (Figure 3). They can be convolved with any audio signal in real time or as a post process.
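As a post process, that convolution is only a few lines. Here’s a hedged sketch using SciPy; the file names are placeholders, and both files are assumed to be mono WAVs at the same sample rate.

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import fftconvolve

# Placeholder file names: a FIR exported as a WAV and some program material,
# both assumed to be mono and at the same sample rate.
fs_fir, fir = wavfile.read("fir_1khz_hpf.wav")
fs_audio, audio = wavfile.read("program_material.wav")
assert fs_fir == fs_audio, "FIR and audio must share a sample rate"

# Convert integer WAV data to floats before convolving
if fir.dtype.kind == "i":
    fir = fir / np.iinfo(fir.dtype).max
if audio.dtype.kind == "i":
    audio = audio / np.iinfo(audio.dtype).max

filtered = fftconvolve(audio, fir)   # offline (post-process) convolution
wavfile.write("filtered.wav", fs_audio, filtered.astype(np.float32))
```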

Figure 4 – The software GUI for the miniDSP OpenDRC 2×2.

We’ll need some hardware to do the convolution in real time. I saved them as 32-bit IEEE-754 mono .bin files, and dropped them into a miniDSP OpenDRC 2×2 to allow measurement and listening (Figure 4). This DSP can handle FIRs up to 6144 taps, so the 6k tap filter just makes it.

This is a very simple FIR, for illustrative purposes. An FIR filter can actually have a very detailed shape, including corrective equalization filters. One FIR filter can replace a whole bank of parametric EQ filters along with HP and LP filters. It’s like having one super-duper filter rather than a bunch of simple ones.

Analog Filter Comparison
For comparison, I also generated a 1 kHz, 4th-order Linkwitz-Riley filter pair. These are typical of the crossover filters implemented by most DSPs.

These are IIR filters similar to what are produced by analog processors. I showed the transfer functions of these filters (and their sum) in part 1, so I won’t include them here.

While the magnitude response of the summed filters is flat, the phase response is not linear. The filters have produced a frequency-dependent phase shift in the response.
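That behavior is easy to confirm numerically. The sketch below (my own illustration, not the author’s measurement) builds a 1 kHz LR4 pair in SciPy by squaring 2nd-order Butterworth sections, sums the low-pass and high-pass outputs, and shows that the summed magnitude stays essentially flat while the phase does not stay linear.

```python
import numpy as np
from scipy import signal

fs = 48000     # sample rate, Hz (assumed for this illustration)
fc = 1000.0    # crossover frequency, Hz

# A 4th-order Linkwitz-Riley section is a squared 2nd-order Butterworth
b_lp, a_lp = signal.butter(2, fc, btype="low", fs=fs)
b_hp, a_hp = signal.butter(2, fc, btype="high", fs=fs)

w, h_lp = signal.freqz(np.convolve(b_lp, b_lp), np.convolve(a_lp, a_lp), worN=4096, fs=fs)
_, h_hp = signal.freqz(np.convolve(b_hp, b_hp), np.convolve(a_hp, a_hp), worN=4096, fs=fs)

h_sum = h_lp + h_hp
mag_db = 20 * np.log10(np.abs(h_sum))               # stays ~0 dB across the band
phase_deg = np.degrees(np.unwrap(np.angle(h_sum)))  # clearly not a straight line

print(f"magnitude ripple: {mag_db.max() - mag_db.min():.3f} dB")
print(f"phase swing: {phase_deg.max() - phase_deg.min():.0f} degrees")
```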

Applied To A Loudspeaker System
What good is a filter unless you use it to filter something? An interesting way to experiment with crossover networks is to use two identical loudspeakers. Since they’re the same, the response of the filters can be easily observed.

One will be the LF device and one will be the HF device (Figure 5). These are small cube loudspeakers that I built to use for classroom demos.

The responses of these two devices have been matched using the parametric EQ filter blocks in the OpenDRC DSP. The parametric equalizer settings were determined using the EQ module of Room EQ Wizard (REW), a freeware measurement program.

Figure 5 – My little traveling cubes.

REW calculates the filters required to correct the loudspeaker’s response, and saves them in a format that can be directly imported by the DSP. This saves the “grunt work” required to manually adjust the filters until the response is flat.

Note that this is not “auto EQ.” I properly collected and windowed the axial response of each loudspeaker, using 1/12th-octave smoothing to get rid of some of the detail. I then let REW do a curve-fit between 100 Hz and 10 kHz, and give me the list of the filters required to flatten it.

When satisfied, I imported this list into the DSP. The responses are shown in Figure 6. Note that they are different, because the loudspeakers are unavoidably different.

Figure 6 – The correction curve (IIR parametric filter set) for each cube loudspeaker.

There’s a good lesson there for those who think they can do high resolution equalization on multiple devices by measuring one of them, but that’s a different article.

Figure 7 shows the full-range overlaid responses of the two loudspeakers with the IIR parametric filters applied, followed by the overlaid responses of the two loudspeakers with the FIR crossover filters applied, then the measured full-range response (both magnitude and phase) with all filters applied, and finally, the measured LP, HP and summed responses using the LR24 crossover.

Figure 7 – At the top (1), we see the frequency response magnitude of each cube loudspeaker, post equalization. Next (2), a FIR filter is applied to each box. Note the minimal overlap of their responses due to the steep slopes. Following that (3), we see the transfer function of the summed axial response as measured in the far-field. At the bottom (4), we see the summed response using a Linkwitz-Riley 24 dB/oct crossover network. The magnitude is exactly the same as the FIR response, but the phase response shows the expected phase shift caused by the IIR filters.

Note that the magnitude responses are the same for the LR24 and brickwall crossovers. The difference will be in the phase response, and the polar response of the pair.

Proof In The Polars
I spaced the two loudspeakers 1 wavelength apart at 1 kHz (about 1.1 feet), as shown in Figure 8.

Figure 8 – A horizontal polar was measured on the two devices. They were offset 1 wavelength at 1 kHz.

This configuration should produce a polar that looks like a shamrock on St. Patrick’s day, since the loudspeakers will be 180 degrees out-of-phase at 1 kHz at several angles around the polar, and in-phase at other angles. I used this configuration to evaluate the polar response at the crossover frequency of 1 kHz.
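A simple free-field, two-point-source model (my own idealization, not the author’s measurement) predicts exactly that shape. With the sources one wavelength apart and both radiating in the crossover region, the pair cancels where the path difference is half a wavelength and sums where it is zero or a full wavelength.

```python
import numpy as np

c = 1130.0                  # speed of sound, ft/s
f = 1000.0                  # crossover frequency, Hz
d = c / f                   # one-wavelength spacing, ~1.13 ft
theta = np.radians(np.arange(-180, 181, 5))

# Far-field phase difference produced by the offset between the two sources
phase = 2 * np.pi * f * (d * np.sin(theta)) / c

# LR24-style crossover: both drivers radiate at fc (each about -6 dB), so they interfere.
pair = np.abs(0.5 + 0.5 * np.exp(1j * phase))
level_lr = 20 * np.log10(np.maximum(pair, 1e-6))   # deep nulls near +/-30 and +/-150 degrees

# Brickwall FIR crossover: only one driver radiates at any given frequency,
# so this idealized polar stays at 0 dB at every angle.
level_fir = np.zeros_like(level_lr)
```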

On the left in Figure 9, we see the polar response at crossover for the Linkwitz-Riley crossover network. The response lobes, because both transducers are “talking” in the crossover region. Since they are offset physically, lobing is unavoidable. As expected, the response resembles a four-leaf clover.

On the right is the polar response at crossover for the FIR crossover network. The lobing is eliminated because the offset transducers are not overlapping in frequency.

Figure 9 – The polar response at 1 kHz (1/3-oct, 5-degree angular resolution) using the LR24 crossover (A) and the linear phase brickwall crossover (B).


The transducers in multi-way loudspeakers have always been interdependent, and lobing has always been a big issue. The use of linear phase brickwall filters can eliminate lobing by allowing the passbands to behave independently. Given the axial transfer function and polar response, it is impossible to determine that this is a two-way system with offset transducers – pretty amazing.

The Whole Story
While it sounds like FIRs have brought us to a new level of loudspeaker performance, I must point out that multi-way loudspeakers that do not exhibit lobing through their crossover region have existed for decades and can be created without brickwall linear phase FIR filters.

Coaxial and co-entrant designs accomplish this by the physical placement of the transducers. Yes, FIRs can provide steep slopes at crossover, but if the loudspeaker produces sufficient SPL without them, there may be no benefit.

My point? Don’t be a “FIR snob” and turn up your nose at loudspeaker designs that use IIR and analog filters. I’d always take a well-designed multi-way loudspeaker with passive crossover over a complicated FIR-driven active system, IF the passive system met the needs of the application. Less can be better.

It’s often true that the major benefit of advancements in digital technology is that it gives us more ways to salvage bad designs and practices. FIR filters can often improve the performance of marginal loudspeaker designs. But when FIR technology is combined with excellent electro-acoustic design practices, there is indeed the potential to reach new levels of performance. While not a panacea, FIR filters are a nice tool to have in the toolbox.

Pat & Brenda Brown lead SynAudCon, conducting audio seminars and workshops online and around the world. For more information go to www.synaudcon.com.


Posted by Keith Clark on 12/20 at 06:23 PM

Thursday, December 19, 2013

Addressing The Myths Of Wireless System Transmitter Power

One of the topics that I’ve seen poorly understood, and even deliberately used to mislead people, is the issue of wireless microphone transmitter power and the effects said power has on system performance.

Let’s start with the basics: all things being equal, more transmitter power = more range for the system, but not in a linear way.

In broad terms, when discussing analog wireless systems, the receiver wants to see a signal from a transmitter (carrier) that is at least about 4-6 dB greater than the noise floor before it can make use of that signal - this is known as the receiver CNR or Carrier to Noise Ratio. The better the receiver front end, the lower this number can be.

And we all remember the inverse square law, right? In other words, every time you double the distance between transmitter and receiver (in free space), the amount of RF power received drops to one-quarter. Thus distance is a greater factor in losing your signal than the level of power from the transmitter.

Another way to look at it is that you need to quadruple the RF power in order to have the same reception at double the distance. And because real-world propagation losses are even steeper than free space, twice the power at the transmitter typically yields only about a 20 percent increase in range.
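For a back-of-the-envelope check, a simple path-loss model can be used (an illustration of the idea, not anything from a specific manufacturer): range scales with transmit power according to the path-loss exponent, which is 2 in free space and often closer to 3 or 4 indoors and around people.

```python
def range_gain(extra_power_db, path_loss_exponent):
    """Relative increase in usable range for a given increase in transmit power.

    Free space has a path-loss exponent of 2 (the inverse square law);
    cluttered real-world environments often behave closer to 3 or 4.
    """
    return 10 ** (extra_power_db / (10 * path_loss_exponent))

print(range_gain(3, 2))   # doubling power in free space: ~1.41x the range
print(range_gain(3, 4))   # doubling power with an exponent of 4: ~1.19x (about 20%)
print(range_gain(6, 2))   # quadrupling power in free space: ~2x the range
```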

But the other problem is that things are never equal. There are many other factors affecting the potential range of the system, from the type of surfaces nearby (or lack thereof), to weather conditions, how many people are around, and of course, potential interference from other sources. Antenna system design and positioning can also significantly affect range.

In other words, transmitter power is one factor, and relatively speaking, can be a minor one. This is not to say that more power doesn’t help, because it very well may. Background noise is nearly always present, and has lately been on the increase due to DTV and other sources such as digital data services, so greater transmitter power can help to “cut through” and provide a solid signal with no dropouts.

In some cases, however, it can also be advantageous to use the lowest power you can manage with your wireless transmitters. For one thing, there is less battery draw, which may be an important factor depending upon the type of application.

Another drawback of increasing transmitter power is the resulting increase in the level of inter-modulation (IM) products. Normally, when a larger-scale wireless system is set up, careful frequency coordination is done to minimize the impact of these IM products, and usually, a certain amount of them can be ignored because they’re so low in level. 

If the level of these products increases, then so does the potential for interference and reduced range.

What causes these IM products to be created in the first place? Any time multiple RF signals (such as from wireless microphones, TV transmissions, etc.) are combined in a non-linear device (any active device such as a transmitter output stage, receiver front end, antenna combiner, etc.), these signals can interact and create new signals which, although lower in level, can act just like additional transmitters.

Unfortunately, a typical multi-channel wireless setup generates thousands or even millions of these IM products! So keeping a handle on the level of these unwanted signals is important.

If there’s need to coordinate a large number of channels in a fairly small geographic area, transmitter power should be carefully considered, and the antenna system should be designed to match.

This particular subject has been a matter of debate in some circles. The IM problem is usually at its worst when a lot of transmitters are physically close to each other (such as in a theater situation).

Some manufacturers, such as Lectrosonics, have classically oriented their wireless mic systems toward high power transmission, and address the IM problem in their product designs by using an “isolator” to prevent the mixing of signals in the transmitter.

Although this does nothing to prevent these signals from mixing in the receivers or receiver antenna systems, generally simpler, even passive antenna systems can be used effectively, along with receivers incorporating a robust front end.

Other manufacturers have maintained that “low power is better” and have oriented their system designs around this concept, with more complex antenna systems and highly selective receivers.

Another factor is that most real-world situations involve products from a variety of manufacturers (for example, IEMs from one manufacturer, handhelds from another, and belt-packs from yet another), and it is important to understand how all of the system components (transmitters, antenna systems, splitters, receivers, etc.) will react to the different power levels of the various transmitters.

As you can see, this issue is not always a simple matter of “more is better,” nor for that matter, is it always “more is worse.” Manufacturers of wireless equipment have addressed this issue in different ways, and one of the best is to offer variable power settings on transmitters, which has been done by Shure, Sennheiser and Lectrosonics, among others, in the past few years.

Mike Wireless is the “nom de plume” of a long-time RF geek devoted to better entertainment wireless system practices the world over.

Posted by Keith Clark on 12/19 at 07:05 PM

In The Studio:The Trouble With Cheap Mics

On the whole, inexpensive audio gear sounds better than ever and is a much better value than even a decade ago, and yet...
This article is provided by Bobby Owsinski.

 
In many ways we’re in the golden age of audio gear. On the whole, inexpensive audio gear (under $500) sounds better than ever and is a much better value than even a decade ago and way better than 20 years ago.

The same can be said for mics, as there is a large variety of cheap mics that provide much higher performance for the price than we could have imagined back in the 70s and 80s.

That said, there are some pitfalls to be aware of before you buy. Here’s an excerpt from The Recording Engineer’s Handbook, 3rd Edition, that covers the potential downside of inexpensive mics.

One of the more interesting recent developments in microphones is the availability of some extremely inexpensive condenser and ribbon microphones in the sub-$500 category (in some cases even less than $100).

While you’ll never confuse these with a vintage U 47 or C 12, they do sometimes provide an astonishing level of performance at a price point that we could only dream about a few short years ago. That said, there are some things to be aware of before you make that purchase.

Quality Control’s The Thing
Mics in this category all have one thing in common: either they’re entirely made in China or all of their parts are, and to some degree, mostly in the same factory. Some are made to the specifications of the importer (and therefore cost more) and some are just plain off-the-shelf.

Regardless of how they’re made and to what spec, the biggest issue from that point is how much quality control (or QC, also sometimes known as quality assurance) is involved before the product finds its way into your studio.

Some mics are completely manufactured at the factory and receive only a quick QC just to make sure they’re working, and these are the least expensive mics available. Others receive another level of QC to get them within a rather wide quality tolerance level, so they cost a little more. Others are QC’d locally by the distributor with only the best ones offered for sale, and these cost still more.

Finally, some mics have only their parts manufactured in China, with final assembly and QC done locally, and of course, these have the highest price in the category.

You Can Never Be Sure Of The Sound
One of the byproducts of the rather loose tolerances due to the different levels of QC is the fact that the sound can vary greatly between mics of the same model and manufacturer.

The more QC (and the higher the resulting price), the less difference you’ll find, but you still might have to go through a number of them to find one with some magic. This doesn’t happen with the more traditional name brands that cost a lot more, but what you’re buying (besides better components in most cases) is a high assurance that your mic is going to sound as good as any other of the same model from that manufacturer.

In other words, the differences between mics are generally a lot smaller as the price rises.

The Weakness
There are two things that contribute to a mic sounding good or bad: the capsule and the electronics (this can be said of all mics, really). The tighter the tolerances and the better the QC on the capsule, the better the mic will sound and the closer each mic will sound to another of the same model.

The electronics are another matter entirely, in that a bad design can cause distortion at high SPLs and limit the frequency response, or simply change the sound enough to make it less than desirable. Component tolerances these days are a lot closer than in the past, so that doesn’t enter into the equation as much when it comes to having a bearing on the sound.

In some cases, you can have what could be a great inexpensive mic that’s limited by poorly designed electronics. You can find articles all over the web on how to modify many of these mics, some that make more of a difference to the final sound than others.

If you choose to try doing a mod on a mic yourself, be sure that your soldering chops are really good since there’s generally so little space that a small mistake can render your mic useless.

Bobby Owsinski is an author, producer, music industry veteran and technical consultant who has written numerous books covering all aspects of audio recording. Get the 3rd edition of The Recording Engineer’s Handbook here.

Posted by Keith Clark on 12/19 at 06:01 PM

Church Sound: Shrinking Buildings & What That Means For Worship Tech

We’re seeing a significant shift from big worship to an emphasis on groups
This article is provided by ChurchTechArts.

 
I’ve been saying this for a while, but there is a change a comin’ in the church. While some disagree with me, I suggest that we are nearing the end of the era for big church buildings with big production going on.

Not that big buildings will disappear all together, but there will be fewer of them and they will not be the sought-after goal of most churches.

I’m hearing similar thoughts from other church leaders, and the latest to chime in is Thom Rainer. Last week he wrote a post listing seven reasons church worship centers will get smaller.

Unlike the last time I referenced a post in an article, I’m pretty much in agreement with him on this one. I really believe this is coming, and it’s going to affect what we do as technical leaders. Let’s consider some of his points.

Multi-Site & Multi-Venue Churches Are On The Rise

We see this everywhere. More and more churches are discovering that they can have a significantly more powerful impact on their community by launching multiple, smaller campuses instead of one big one. Or perhaps they will do multiple venues with different worship styles. Either way, this trend is here to stay (until the next trend, anyway).

What does this mean for us? On the plus side, all of these venues will need at least a basic production technology package. Often, it will need to be portable. So we’ll have a lot of gear to manage.

However, with smaller campuses come smaller congregations (that’s kind of the point, right?), and smaller budgets. Not many churches will be hiring full-time guys to run campuses. I suspect what we’ll see is churches hiring one or maybe two technology directors who will oversee all the campuses, helping recruit, train and keep volunteers going.

In this scenario, technical leaders won’t be nearly as hands-on; we won’t be able to be in five places at once. But we will need to be really good at putting together packages of gear that can survive being loaded in and out by volunteers each week. If you have holes in your technical systems knowledge, now is the time to fill them in.

We’re Seeing A Significant Shift From Big Worship To An Emphasis On Groups

Thom points out—correctly in my opinion—that churches are starting to move away from the worship service being the central event of the church. It’s not going to go away, but there will be more emphasis on groups. Churches will be needing to raise up more leaders who can lead groups.

What does this mean for us? We’re already seeing it. Churches are becoming less interested in hands-on techs and more interested in technical leaders who can train and develop others to do the work.

Again, this won’t be binary. There will likely always be churches with large tech staffs who do the work. But I suspect we’ll see a shift towards volunteer teams, even in larger churches. If you’re a hard-core tech with no people skills, this is going to be a challenging transition for you.

But if you’re a builder of people and teams, you will do well. Now is the time to start honing those leadership and discipleship skills; you’ll be needing them!

We Will Be Spending Less On Buildings, More On Ministry

Again, more and more churches are foregoing a large, expensive worship center (or sanctuary, auditorium or whatever you want to call it), so they have more funds to invest in community ministry programs. I’ve always been conflicted with how much production technology costs.

On the one hand, I believe if we’re going to commit to doing production, we should do it well, and that takes money. On the other hand, I wonder sometimes if our priorities are misplaced. I’m not settled on this, and I suspect we’ll always live in tension in this regard.

What does this mean for us? Budgets will continue to shrink. We’ll have to find ways to do more with less. We will need to get very creative in how we do production. It may be that we do less production, but do what we do very well.

Hard choices will need to be made, and this will be a problem for some. If you refuse to work on anything but a Grand MA2 or a DiGiCo SD7, this may be hard for you. But if you’re open to scaling back and still doing production with excellence, this is going to be a lot of fun.

Mike Sessler is the Technical Director at Coast Hills Community Church in Aliso Viejo, CA. He has been involved in live production for over 20 years and is the author of the blog Church Tech Arts. He also hosts a weekly podcast called Church Tech Weekly on the TechArtsNetwork.

Posted by Keith Clark on 12/19 at 05:35 PM

Wednesday, December 18, 2013

Phase & Polarity: Causes And Effects, Differences, Consequences

The terms "polarity" and "phase" are often used as if they mean the same thing. They do not.

Polarity and Phase - these terms are often used as if they mean the same thing. They are not.

POLARITY: In electricity this is a simple reversal of the plus and minus voltage. It doesn’t matter whether it is DC or AC voltage. For DC, turn a battery around in a flashlight and you have inverted or, more commonly stated, reversed the polarity of the voltage going to the light bulb. For AC, interchange the two wires at the input terminals of a loudspeaker and you have reversed the polarity of the signal coming from that loudspeaker.

PHASE: In electricity this refers only to AC signals and there MUST be two signals. The signals MUST be of the same frequency and phase refers to their relationship in time. If both signals arrive at the same point at the same time they are in phase. If they arrive at different times they are out of phase. The only question is how much are they out of phase, or stated another way, what is the phase shift between them?

The important point to note in these definitions is that you can reverse the polarity of one signal and you can measure this change. You need two signals to measure a phase shift.

For convenience, the word “speaker” will be used in place of the more correct term “loudspeaker” in the rest of this article.

A picture is worth 1,000 words… but a few words of explanation can help.

The following figures show the differences and some consequences of polarity and phase. Figures 1 through 12 show graphs of sine wave signals. Actually it is a sine wave from one signal source split two ways. Except for figure 1, one of the splits is “processed” by reversing its polarity and/or by delaying it (phase shifting it) as described. To put this in the real world, imagine two speaker systems side-by-side, each reproducing one of the signal splits. (More precisely, the graphs show what you would see on an oscilloscope looking at the output of a mixing console with each split going to a separate input after one of the splits has been “processed”.)

The vertical scale in the graphs is in arbitrary units of -2 to +2 with lines at each 0.5 interval. If you like, consider this as -2 to +2 volts. Because phase shifts are measured in degrees, the horizontal scale in the graphs is labeled in degrees with a vertical line at each 90-degree point. One full cycle or period of a sine wave is 360 degrees.

Assume that the signals shown are 1 kHz sine waves, in which case each vertical line represents 1/4 millisecond of time. Sound travels in air about 3.4 inches (85 mm) in 1/4 millisecond so each vertical line also represents this distance. Note that in the graphs the signals all start 1/4 millisecond or more from the left so you can clearly see when each signal starts. (The importance of this will be seen in figure 9.) There is no signal along the flat line from -90 to 0 degrees.

Signals In Polarity, In Phase
Figure 1: This shows 3 periods or 3 cycles of two simple sine waves. Both are +/-1 volt high at their peaks = total of 2 volts. One is shown in blue the other in red.

Figure 1: Two Sine Waves - Same Polarity & Phase.

Figure 2: This is what happens when the two are combined (= added together). This is exactly what would happen on a line exactly between the two side-by-side speakers. The two signals, being in phase and in polarity, add up so the peaks are now at the +/- 2 volt lines = 4 volts or twice the original signals. Acoustically this is an increase of 6 dB = 20 x log(1+1).

Figure 2: Sine Waves in Fig. 1 Added.

Signals Out of Polarity
Figure 3: This is like figure 1 but the second sine wave, shown in red, has been reversed in polarity. As you can see the + and - voltage points are exactly opposite from the first sine wave, shown in blue. This would be accomplished by reversing the +/- input connection on the speaker reproducing the red sine wave.

Figure 3: Two Sine Waves - Red = Polarity Reversed.

Figure 4: This is what happens when the two are combined. Each point of the two signals, being in phase but opposite in polarity, adds up to zero. Acoustically this is an infinite decrease of output. Because you can’t take the log of 0, assume the difference is actually 0.0…01 volts (the dots representing 58 more zeros). 20 x log of this number is -1200 dB. That should be pretty quiet. You can’t easily hear this with two speakers because you have two ears. But using a very carefully positioned microphone to measure this in a place with no sound reflections, you would find almost no signal.

Figure 4: Sine Waves in Fig. 3 Added.

Signals Out Of Phase

Figure 5: The second sine wave, shown in red, starts 1/4 millisecond later (90 degrees later) than the first one, shown in blue. Put another way, the second signal has been delayed by 1/4 millisecond.

Figure 5: Two Sine Waves - Red = Phase Shifted 90 Degrees.

Figure 6: This is what happens when the two are combined and it’s pretty interesting. First notice that the peaks are almost at the +/-1.5 volt lines. The value is actually +/-1.414 volts. This is a 3 dB increase. This would be like listening to two speakers but the one reproducing the red sine wave is 3.4 inches (85 mm) further away from you than the other. The first thing you hear is only from the speaker reproducing the blue sine wave. The black line starts when the sound from the second speaker is heard and this line is the combined signal of both speakers.

Figure 6: Sine Waves in Fig 5 Added

Suppose the speaker reproducing the red signal were only 2.25 inches (57 mm) further away. The signals would be shifted by only 60 degrees. The increase for the combined signal would be about 4.8 dB. So the amount of phase shift is important.

The second thing to notice is what happens at 1/4 millisecond or 90 degrees after the blue signal starts when the second signal “kicks” into the picture represented by the line turning black. There is a distinct change in the waveform.

The third thing to notice is that the entire waveform after the “glitch” is shifted in time by about 45 degrees (the average of 0 and 90 degrees) compared to the original signal.
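The levels seen in figures 2, 4 and 6 (and the 60-degree case above) can be worked out directly by treating the two signals as equal-amplitude phasors. A small sketch, purely for illustration:

```python
import numpy as np

def combined_level_db(phase_deg, polarity=+1):
    """Level of two equal-amplitude sine waves summed, relative to one signal alone.

    phase_deg -- phase shift of the second signal, in degrees
    polarity  -- +1 for normal polarity, -1 if the second signal is polarity reversed
    """
    total = np.abs(1 + polarity * np.exp(1j * np.radians(phase_deg)))
    with np.errstate(divide="ignore"):
        return 20 * np.log10(total)

print(combined_level_db(0))        # +6.0 dB (figure 2)
print(combined_level_db(90))       # +3.0 dB (figure 6)
print(combined_level_db(60))       # +4.8 dB
print(combined_level_db(0, -1))    # -inf: complete cancellation (figure 4)
```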

Signals Out Of Phase And Polarity

Figure 7: The second sine wave, shown in red, is a combination of the sine wave in figures 3 and 5. The signal not only has its polarity reversed but it is shifted in phase by 90 degrees compared to the first signal, shown in blue. In this case the speaker reproducing the red sine wave has its +/- input connection reversed in polarity and is 3.4 inches (85 mm) further away from you than the one reproducing the blue sine wave.

Figure 7: Two Sine Waves - Red = Phase Shifted 90 Degrees & Polarity Reversed.

Figure 8: This is what happens when the two signals are combined. The picture is similar to figure 6 with two important differences. First the “glitch” at the point where the second signal starts is different. This is the point where the line turns black. Second is that the entire waveform is shifted by 45 degrees again but this time to the left of the original signal.

Figure 8: Sine Waves in Fig. 7 Added.

The “Glitches”
The glitches in figures 6 and 8 give an indication of what happens during the onset of a signal. While the so-called steady state portion of the combined signal (shown by the black portion of the lines) looks the same except for the amplitude change, these glitches will affect the transient attack of sounds. This is not to say that either will sound horrible, but a phase shift between otherwise identical replicas of a sound WILL make a difference in the sound of the initial transient attacks, depending on the frequency and amount of phase shift.

This is exactly the kind of phenomenon that can occur in the crossover region of a speaker. This is because the distance from each driver to the listener is usually different and the crossover itself shifts the phase of the signal between the drivers. Speaker designers are often faced with a choice between something like what you see in figures 6 and 8. Neither is “correct,” so a designer can only choose the one that “listens” better. Just looking at these two, I would bet the waveform in figure 8 might sound better and the choice would be to reverse the polarity of one of the drivers. These crossover “glitches” occur only over a small range of frequencies where both drivers reproduce the sound. It is well accepted by designers that this kind of “improvement” is sonically more significant than the fact that frequencies above and below the crossover point may be out of polarity.

Signal Phase Shifted 180 Degrees
This is where many get into trouble in thinking that phase and polarity are the same thing, meaning that it is often assumed that a 180 degree phase shift and reversing the polarity are the same.

Figure 9: In this figure each sine wave lasts for only 2-1/2 cycles. The second sine wave, shown in red, is shifted in phase 180 degrees from the first, shown in blue. This is what would happen if the speaker reproducing the red sine wave were about 6.8 inches (170 mm) further away from you than the one reproducing the blue sine wave. You can see that between 180 and 900 degrees the signals LOOK like they are simply out of polarity, but they are NOT. It is VERY important to note that if you could not see the beginning or the end of these signals you could not tell whether they were out of polarity or 180 degrees out of phase. Too often this is what causes confusion between a polarity reversal and a 180 degree phase shift.

Figure 9: Two Sine Waves - Red = Phase Shifted 180 Degrees.

Figure 10: This is the result of combining the two signals. Unlike figure 4, where the signals are simply out of polarity and completely cancel, there are clearly two positive halves of a sine wave visible before and after the two signals cancel along the black line between 180 and 900 degrees. The first is from the blue sine wave in figure 9 that occurs before the start of the red sine wave. The second is from the red sine wave in figure 9 that continues after the blue sine wave has stopped.

Figure 10: Sine Waves in Fig. 9 Added.

Signal Phase Shifted 180 Degrees And Reversed In Polarity

Figure 11: This is the same as figure 9, but the polarity of the red signal has been reversed.

Figure 11: Two Sine Waves - Red = Phase Shifted 180 Degrees & Polarity Reversed.

Figure 12: This is the two signals in figure 11 combined. Between 180 and 900 degrees, the signals add much like in figure 2. However there are significant differences in the overall 90 to 1080 degree signal. The first 1/2 sine wave of this signal is only from the blue sine wave in figure 11. The last 1/2 sine wave is only from the red sine wave in figure 11. You can clearly see that both of these 1/2 sine waves are only 1 volt at the peaks. This is a clear difference from figure 2 where all the peaks reach 2 volts.

Figure 12: Sine Waves in Fig. 11 Added.

The reason is that the two signals in figure 11, even though identical, are offset by 180 degrees. They add together only between 180 and 900 degrees when both are being heard. More importantly, during this time period DIFFERENT parts of the same signal have added together. For example you can see that between 180 and 360 degrees it is the second 1/2 of the blue signal’s first complete sine wave that adds to the first 1/2 of the red signal’s first complete sine wave.

Real Audio Signals
Sine waves are easy to look at to dramatically show the difference between polarity and phase. Armed with this knowledge you can look at figures 13 through 18 that show something like a real audio signal where the effects of polarity and phase are more difficult to see.

The signal shown in these figures was generated by a mathematical algorithm that produces something close to a pink noise signal. Pink noise contains all frequencies with an equal amount of energy in each octave band. Real audio signals don’t look much different than pink noise (but one would hope they sound better!). The scales on these graphs are arbitrary. You can look at the vertical scales as +/-3 volts if you like. However, because of the way the signal was generated, there was no way to define absolute time or degrees along the horizontal scales. Suffice it to say that the phase-shifted signal used in these figures was shifted by one data point out of the 240 data points that make up the signal lines.

There is one important thing to understand about phase shift. The amount of time one signal is delayed from another will have different effects at different frequencies. Assume there is a 1 millisecond time difference between two identical signals. At 500 Hz the result will be as shown in figure 10 because at 500 Hz the 1 millisecond time difference is a phase shift of 180 degrees. The signals are offset by 1/2 a cycle.

At 1 kHz the signals will be offset by 1 complete cycle. In other words you would hear one cycle from the first signal then both combine then you’d hear the one cycle from the second signal after the first stopped. This is similar to what is shown in figure 12 (which shows only 1/2 cycle) but is not the result of the same conditions that were used to make figure 12.

At 250 Hz the effect would be as shown in figure 6 because a 1 millisecond time difference corresponds to a 90 degree phase shift at 250 Hz or an offset of 1/4 cycle. At lower frequencies the phase shift would be even less and the signals would tend to add as in figure 2, approaching but never quite reaching the 6 dB increase shown in that figure.
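The same arithmetic can be written out for any delay. This short sketch (just an illustration of the numbers above) converts a fixed 1 millisecond offset into the phase shift it produces at a few frequencies:

```python
delay_ms = 1.0                         # fixed time offset between the two signals
for freq_hz in (125, 250, 500, 1000):
    phase_deg = 360.0 * freq_hz * delay_ms / 1000.0
    print(f"{freq_hz:5d} Hz: {phase_deg:6.1f} degrees of phase shift")
# 125 Hz -> 45 degrees, 250 Hz -> 90 (figure 6), 500 Hz -> 180 (figure 10),
# 1 kHz -> 360 degrees, a full cycle of offset
```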

Contrary to phase, polarity affects all frequencies the same way. It makes the positive portions negative and the negative portions positive. Put another way, it simply flips the signal over the same way at all frequencies. With these things in mind, examine figures 13 through 18.

Effects of Polarity and Phase On “Real” Audio Signals
Figure 13: This shows a pink noise signal generated as noted above.

Figure 13.

Figure 14: This shows both the original signal in blue and what happens when an identical but phase shifted signal is added to it, as shown in red. The red signal is similar to the combined signal shown in figure 6. Note the increases in signal level and the changes in the waveform (many glitches). However you can also see the combined signal follows the original fairly closely.

Figure 14.

Figure 15: This shows both the original signal in blue and what happens when the phase shifted signal is also reversed in polarity and combined with it, as shown in red. In this case there are huge differences between the original and combined signal.

Figure 15.

Figure 16: To better understand what is going on, this figure shows an averaged or integrated version of the pink noise signal in figure 13. This is basically what you would see if you graphed the readings from a typical SPL meter for the signal in figure 13.

Figure 16.

Figure 17: This shows the averaged signal from figure 16, in blue, and the averaged combined signal from figure 14, in red. Note that there are primarily level differences (mostly increases). Otherwise the two lines look very similar.

Figure 17.

Figure 18: This really shows what is going on in figure 15. The blue line is the averaged signal from figure 16. The red line is the averaged signal from figure 15. The red line shows that the out of polarity and phase-shifted signal approaches a straight line. Because you are looking at a broad frequency range, you are seeing a severe cancellation of the lower frequencies due to the polarity reversal. However, unlike the low frequencies, the upper frequencies do not completely cancel due to the phase shift. The red line contains primarily high frequency energy. In the blue signal the higher frequencies are the small “bumps”. These can be clearly seen in the red signal and most of them correspond to those in the blue signal.

Figure 18.

Figure 18 is a prime example of what you would hear if you stand exactly between two speakers playing the same signal (i.e. mono) with one speaker out of polarity. The bass will disappear. But there will always be a difference in distance between you and the speakers due to the spacing of your two ears, and probably a slight overall difference in distance between you and each speaker. A difference in distance means a difference in time arrival, and thus there will be phase shifts between the sound from the two speakers. The amount of shift will vary with frequency. Because of the shorter wavelengths at high frequencies, the phase shifts allow most of the highs to be heard. They may be out of polarity but the effect is like what is shown in figure 8. Also, in a room you would hear sound reflections from the floor, walls, and ceiling. You would only hear something like the red line in figure 18 outdoors away from any reflective surfaces or in an anechoic chamber.

Figure 19.

The small distance between your ears and any small difference in distance from you to each speaker do not cause appreciable phase shifts at low frequencies. This is because of the considerably larger wavelengths. The difference in your distance from each speaker might be only 1 inch (25 mm). However, the wavelength of even a 1 kHz sound is roughly 1 foot (300 mm) and at 100 Hz roughly 10 feet (3 m). At the lower frequencies the polarity difference predominates because the phase shifts due to the difference in your distance from the speakers are very small compared to the wavelengths of the low frequencies. Thus the lower frequency signals, being nearly in phase but out of polarity, will cancel like in figure 4. The lower the frequency, the less the phase shift between the two speakers and the greater the cancellation.
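Putting numbers on that: for a path difference of roughly an inch, the phase shift it causes is tiny compared to a low-frequency wavelength and large compared to a high-frequency one. A quick sketch with illustrative values:

```python
speed_of_sound_mm_per_s = 343000.0     # ~343 m/s expressed in mm/s
path_difference_mm = 25.0              # ~1 inch difference in distance to each speaker

for freq_hz in (100, 1000, 10000):
    wavelength_mm = speed_of_sound_mm_per_s / freq_hz
    phase_deg = 360.0 * path_difference_mm / wavelength_mm
    print(f"{freq_hz:5d} Hz: wavelength {wavelength_mm:7.0f} mm, shift {phase_deg:6.1f} degrees")
# 100 Hz -> ~2.6 degrees (nearly in phase, so a polarity flip cancels it)
# 1 kHz  -> ~26 degrees
# 10 kHz -> ~262 degrees (phase shift dominates, so the highs largely survive)
```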

A Polarity / Phase Field Trip!!
(As with all physical exercise, check with your doctor first, who might not recommend you do this for some reason.)

Find two railroad tracks, lie across them, and wait.

Two trains, one on each track, come along. Both are right side up and both hit you at exactly the same time. The trains are in polarity and in phase.

The same thing happens again and both trains hit you at exactly the same time. However, this time one train is upside down. That is a polarity reversal.

The third time both trains are right side up but one hits you first and the other hits you shortly after the first. That is a phase shift.

The last time the second train is upside down and hits you later than the first. That is both a polarity reversal and a phase shift.

Summary
So there you have it. Although this has only touched on a few areas concerning phase and polarity issues, it is hoped you better understand the difference between the two and a few of the effects of each. Remember that the audio frequency range covers wavelengths of over 30 feet (10 meters) at the lowest frequencies to less than an inch (under 25 mm) at the highest frequencies.

While a reversal of polarity will affect all frequencies identically, a difference in time arrival between two otherwise identical signals will have very different effects on the phase between them. The amount of phase shift will be different at different frequencies and this will depend on how much time difference there is between the arrival of the two signals.

Posted by Keith Clark on 12/18 at 03:35 PM

Tuesday, December 17, 2013

Through The Years: A Look At Notable Microphone Developments

A wide range of significant design breakthroughs after decades of research

Microphones as we know them date back to about the mid-1800s, when many different inventors were trying to electronically transmit sounds from one place to another.

Before then, the term microphone was used to describe an acoustical device (like an ear trumpet or stethoscope) that helped amplify sounds.

One of those inventors was German physicist Johann Philipp Reis (1834-1874), who designed a sound transmitter consisting of a metallic strip resting on a membrane with a metal point contact that would complete the electrical circuit when sound waves moved the membrane (a.k.a., diaphragm).

A bit later, American Elisha Gray, an inventor and one of the founders of Western Electric, formulated a liquid transmitter that immersed a rod connected to a diaphragm into an acid solution. A second fixed rod was also immersed in the liquid, and a battery connected the two rods. As the sound waves moved the first rod, the distance between it and the second rod varied in proportion to the sound, producing corresponding changes in the electrical resistance of the cell and thus in the current flowing through it.

While Gray may have actually invented the liquid microphone first, history recognizes Alexander Graham Bell as the inventor of the telephone because he filed for a patent a few hours before Gray.

A replica of the Bell liquid telephone circa 1876. Looks similar to a microphone…

As children, we were told the story of Bell accidentally spilling some acid and calling for his assistant, saying “Watson, come here, I need you.” Watson heard Bell’s voice over the system, and it’s noted as the first “phone call.” Other inventors who contributed significantly to early microphone technology include Emile Berliner, David Edward Hughes and Thomas Edison.

As a self-confessed “mic geek,” I find the history of these devices fascinating, and thought it would be interesting to share some of the highlights of mic development. Note that this is by no means intended to be comprehensive, but focuses on what I see as many of the notable design breakthroughs through the years.

Western Electric 618A (Credit: John Schneider, Flickr)


Early Pioneers
By the late 1920s, Western Electric had developed the first practical dynamic microphone, the 618A, which became very popular with broadcasters. According to the company’s instructions for use, the 618A sported a “thin duralumin diaphragm of low mechanical stiffness” and a magnet made from “high-grade cobalt steel.”

The document goes on to state that “a number of air chambers and slot openings connecting them have been associated with the diaphragm in order to obtain substantially uniform response over a frequency range from 35 to 9,500 cycles per second.

One of these acoustic elements, in addition to exerting a control on the motion of the diaphragm, allows air to be transferred from the front to the back of the diaphragm. This eliminates effects due to changes in barometric pressure.”

RCA 77D

beyerdynamic was founded in Germany in the mid-1920s, initially developing loudspeakers for the motion picture industry. By 1939 the company had introduced its first dynamic microphone for studio use, the M19, which also became a favorite of broadcasters.

Neumann unveiled the CMV3 in 1927, arguably the first commercially available condenser mic for broadcast. Later, the U47—although perhaps not the first multi-pattern condenser—was certainly a marquee product, becoming a standard in the 1950s; by 1960 it had largely replaced the ribbon mics used for recording.

Meanwhile, in 1932, RCA introduced the massive 77A. Designed by Dr. Harry F. Olson, it featured two vertical in-line ribbons and an acoustic labyrinth inside the case which enabled it to be unidirectional.

According to the manual, “The figure 8 pattern of the velocity-actuated part of the ribbon combines with the circular pattern of the pressure-actuated half to provide an overall cardioid pattern.” The 77A was no handheld model: at 11.5 x 3.75 inches and weighing in at 4.5 pounds, it used a 1/2-inch pipe thread for mounting.

In the 1940s, RCA would introduce the classic model 77D, an Art Deco beauty with adjustable pattern control.

A tube connected the ribbon to the labyrinth in the mic’s body, and a user-adjustable shutter closed off portions of the tube openings, allowing pattern control from omnidirectional to bi-directional (figure 8).

In the mid-1920s, Sidney Shure founded a radio parts supplier in Chicago named the Shure Radio Company, later changed to Shure Brothers when Sidney’s brother Samuel came onboard.

By 1932, the company was producing microphones to fill a rapidly growing market, and debuted the model 33N, a 2-button carbon microphone model that was the first lightweight, high-performance product in a field largely dominated by bulky units.

Shure went on to release the iconic model 55 in 1939, the first single-element unidirectional mic. The design is called UNIDYNE (short for “unidirectional dynamic”) and it revolutionized the field.

Before the 55, attaining anything other than an omnidirectional pattern meant using more than one diaphragm and combining their outputs, which resulted in a large mic head that usually didn’t sound all that good because the elements were spaced apart and usually had different frequency responses.

Shure 55 Unidyne

Shure solved this by using small ports that allowed sound waves to reach both sides of the diaphragm at different times, resulting in a more linear frequency response compared to using different diaphragms.

And to reduce handling noise, the capsule was suspended on springs dampened with foam to isolate the diaphragm from vibrations. Because the design only required a single diaphragm, the mic became smaller and less expensive. Factor in the timeless aesthetic design of the housing, and it’s no wonder that the 55 is still a popular model after all these years, with the current Shure catalog offering two updated models.

Electro-Voice 664

About 100 miles to the east in South Bend, Indiana, Al Kahn and Lou Burroughs converted their business from servicing radios to developing mics, and in 1930 incorporated under the name Electro-Voice. By the mid-1930s the company was up and rolling, producing a steady stream of innovations.

In 1934, while going through some old technical journals, Kahn came upon what he called “an ancient watt meter – patented in 1892 or thereabouts” which had a balanced winding to cancel hum from the stray 60 Hz fields that the watt meter might pick up.

As Kahn described it, “A little light bulb went off above my head, and I rushed back…got some tin snips, cut some laminations out, and I made a transformer and put it in and it worked.” Thus, the humbucking coil was born, and solved a major problem for mic users. 

The company moved to larger facilities in nearby Buchanan, MI, and also diversified its product offerings into loudspeakers, phono cartridges, and consumer electronics. Still, mic work continued, with the model 664, a.k.a., “The Buchanan Hammer,” hitting the market in the mid-1950s. The nickname derived from legendary durability, but the single-element cardioid, dynamic type mic was also the first model to incorporate the company’s patented Variable-D (Variable Distance) design still found in several EV mics to this day, including the RE20 and the recently introduced RE320. Variable-D uses multiple rear and side ports to achieve pattern control.

Sennheiser MD-421

Further Choices
Dr. Fritz Sennheiser founded Laboratorium Wennebostel in Germany in 1945 and started producing mics the next year. Later the company would change its name to Sennheiser, and by 1960 produced one of the most enduring models in pro audio, the MD-421.

While it currently sports a glass composite body, the original 421 body was made of DuPont’s Delrin polymer resin, one of the first mics to feature a molded body. The MD-421 also offers a user-adjustable bass roll-off filter, and is still extremely popular with live and studio engineers.

Next door in Austria in 1947, Dr. Rudolf Görike and Ernst Pless started AKG, supplying movie equipment to theaters in post World War II Vienna.

Just a few short years later, AKG had introduced new mic technologies that included an early (and perhaps the first) high-quality condenser, the first remote-controlled multi-pattern capacitor mic, and one of the first dynamic cardioid models.

The D12 was highly coveted by sound engineers for its great sound and cardioid characteristics, and 1971 saw the introduction of the 414 series, a large diaphragm condenser with multiple variable pickup patterns that is still a staple of studio and stage work.

Shure stepped up again in 1959 with the Unidyne III, the first high-quality unidirectional design meant to be addressed by speaking into the end rather than the side of the microphone.

It was the predecessor to the SM57 and paved the way for modern handheld designs.

AKG D12

By the mid-1960s, the SM58 (for “studio microphone”—bet you didn’t know that!) launched, delivering a combination of sound quality and rugged reliability that’s made it the standard for live vocal use to this day.

Hideo Matsushita founded Audio-Technica Corporation in Tokyo in 1962 and since that time the company has developed a wide range of designs. My favorite comes from 1985 with the UniPoint Series of ultra-compact condenser mics.

Over the years the line has grown to include more than 30 models, including hanging, boundary, gooseneck and even handheld microphones.

Audio-Technica UniPoint U853A

The UniPoint Series continues going strong as a contractor and sound operator favorite.

About the same time, beyerdynamic introduced the M88, incorporating a new low-mass diaphragm element that offered fast transient response coupled with the ability to handle high SPL levels. It caught on with engineers around the world, and a version of the M88 is still a current item in the catalog more than 50 years later.

beyerdynamic M88

In 1964, Bell Labs received a patent for the electret microphone, a new type of electroacoustic transducer. Electret condensers offered greater reliability, higher precision, lower cost, and a smaller size than anything available at the time.

During this period, AKG launched the world’s first 2-way cardioid microphones. One of these was the D202, a handheld model with two diaphragms, one for the lows (20 Hz - 800 Hz) and a second one for the highs (800 Hz - 20 kHz).

AKG D202, nicknamed the “rocket” for obvious reasons

The mic sported a sintered bronze grill on the front, rear ports at the cable end and a 3-position bass cut switch. Another model, the D202ES, moved the crossover point to 500 Hz.

New Directions
Carl Countryman incorporated the family business in 1978 and since that time has been making innovative miniature mics for people and podiums. In the 2000s, the E6 earset mic was developed and has become a favorite earworn miniature model.

And, introduced a few years ago, the model B2D is the smallest directional lavalier available, with a capsule diameter of only 0.1 inch (2.5 mm).

Sennheiser decided to go small in 1983 and came out with two significant advances. The first was the development of the first directional clip-on microphone, the MKE 40, followed by the smallest studio clip-on microphone available at the time, the MKE 2.

More than 25 years later, the MKE 2 is still extremely popular with broadcasters and corporate audio folks.

With VLM technology first deployed in the OM Series that debuted in 1986, Audix put itself on the map just a couple of years after its founding. VLM (“very low mass”) is based on using a lightweight diaphragm that allows for extremely fast, accurate processing of incoming signals, while still offering extended frequency response and high SPL handling.

Countryman E6


In the late 1980s, David Blackmer founded Earthworks Audio in New Hampshire with an initial goal of designing and manufacturing audiophile loudspeakers. He was dissatisfied with the measurement tools of the day, and set about to improve the situation.

The first tool he designed was an omnidirectional mic, which led to the manufacture of the OM1. He wanted to get back to designing loudspeakers, but his colleagues begged to differ, persuading him to design other mics as well as preamps.

Earthworks OM1


The overall design philosophy is extended frequency response, very fast impulse response, near “perfect” polar pattern and pure signal path, with the goal of better emulating the time resolution of human hearing (10 microseconds or better). This is now provided in a wide range of condenser models for vocals, instruments and measurement.

Speaking of precision, DPA Microphones (originally Danish Professional Audio) was founded in the early 1990s by two former employees from high-end measurement tool manufacturer Brüel & Kjær.

Sennheiser MKE 40

Headquartered in Allerød, Denmark, DPA was first recognized for the 4011 cardioid followed by two headset designs—the 4066 cardioid and 4088 directional—that have helped shape the popular market genre we enjoy today.

In the late 1990s, Neumann forever altered the landscape with the introduction of the KMS 105. This condenser model proved a watershed mic for live performance, “breaking the ice” for other high-performance condensers in the live market such as the Shure KSM9, Audix VX10, Rode S1 and others.

As we moved along to a new millennium, Milab went digital with the DM-1001, a microphone with AES/EBU and S/PDIF outputs. The DM-1001 uses two large diaphragms, each with its own A/D converter, with the polar pattern calculated in the DSP from a mix of the front and back signals from the elements.

DPA 4011

A separate programmable control offers a choice of standard or user-configured patterns. Neumann also entered the digital realm with the Solution-D, a studio-oriented design with integrated DSP processing and an A/D converter that allows for gain adjustments to be made digitally inside the mic.

Craig Leerman is senior contributing editor for Live Sound International and is an avid collector of vintage microphones. Read more about some of his mics here.


Church Sound: Mixing Like A Pro, Part 4—Making EQ Work For You

Looking at the actual sonic makeup of the sounds that come from the stage
This article is provided by CCI Solutions.

 
Editor’s Note: Go here to read previous installments in this series.

In the previous article, we took a look at various equalization (EQ) tools, identified their functions and what they can do for us. After all, we can’t use our tools effectively if we don’t know what they are or how to use them.

We’ve covered what EQ does to shape frequencies in our sound system audio, but to know how to make EQ work for us we have to look at the actual sonic makeup of the sounds that come from the stage.

Putting EQ Into Words
Before we focus on what we’re EQ’ing, we need to learn how to translate common language into tangible EQ adjustments. You know what I’m talking about, comments like “it’s too edgy” or “it sounds muddy.” What do those actually mean?

The chart below gives us some hints using words like rumble, muddy and edge. With the help of this chart, we can take an educated guess that when someone says an input sounds “honky,” they’re referring to something in the 440-1,000 Hz range.

It’s not an exact science, I know, but getting a good feel for what responses are elicited by certain frequencies can help us make EQ adjustments quickly. So the next time someone tells you the electric guitar sounds “edgy” or “crunchy,” you know to look right away at the 2,000-4,000 Hz range to attack the problem.

Go here for an interactive version of this chart.
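As a quick illustration of putting such a chart to work, here’s a small Python sketch that maps descriptive words to starting frequency ranges. The “honky” and “edgy/crunchy” ranges are the ones cited above; the “rumble” and “muddy” values are common rule-of-thumb assumptions, not taken from the chart.

```python
# Hypothetical lookup from listener comments to starting EQ ranges (in Hz).
# "honky" and "edgy"/"crunchy" follow the ranges mentioned in this article;
# "rumble" and "muddy" are assumed rule-of-thumb values.
DESCRIPTOR_RANGES_HZ = {
    "rumble": (20, 100),    # assumed
    "muddy": (100, 400),    # assumed
    "honky": (440, 1000),
    "edgy": (2000, 4000),
    "crunchy": (2000, 4000),
}

def where_to_look(comment: str) -> list:
    """Return the frequency range(s) suggested by descriptive words in a comment."""
    comment = comment.lower()
    return [rng for word, rng in DESCRIPTOR_RANGES_HZ.items() if word in comment]

print(where_to_look("The electric guitar sounds edgy"))  # [(2000, 4000)]
```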

Focusing On Frequency Ranges
Everything we hear is made up of a range of frequencies. Each sound that hits our eardrums is a collage of frequencies at a blend of sound pressure levels that our brain interprets as “the sound.”

Just as each person’s voice has a unique makeup and signature, every instrument or vocal has a frequency makeup that is unique to it. In order to talk about how to EQ an input, we need to learn what frequencies are involved in the sound sources we’re working with.

The chart is a great place to begin understanding what frequencies make up the sounds we experience on a Sunday morning. It shows us the range of any given source and the frequencies we need to focus on—and those we don’t.

For example, the range of a guitar will typically start around 80 Hz and will top off around 5 kHz. Knowing there is nothing being produced below 80 Hz, the first thing we can do is turn on the low cut/high pass filter to eliminate any low frequency junk that our guitar isn’t actually producing.

We also know that the guitar isn’t producing frequencies over 5 kHz, so turning up the highs above that just adds unhelpful noise. Based on this chart, we know our focus needs to be between 80 Hz and 5 kHz.
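To make the low-cut idea concrete, here’s a minimal Python sketch of a high-pass filter removing sub-80 Hz rumble while leaving a guitar-range note largely untouched. The sample rate, filter order, and test frequencies are assumed for the example; this illustrates the concept, not any particular console’s filter.

```python
# Rough illustration of a low-cut / high-pass filter on a guitar channel:
# attenuate energy below ~80 Hz that the instrument isn't actually producing.
import numpy as np
from scipy.signal import butter, sosfilt

fs = 48000        # assumed sample rate, Hz
corner_hz = 80    # low-cut corner frequency from the chart discussion

# 4th-order Butterworth high-pass, expressed as second-order sections
sos = butter(4, corner_hz, btype="highpass", fs=fs, output="sos")

# One second of test signal: 40 Hz "stage rumble" plus a 196 Hz guitar G note
t = np.arange(fs) / fs
rumble = 0.5 * np.sin(2 * np.pi * 40 * t)
note = 0.5 * np.sin(2 * np.pi * 196 * t)
filtered = sosfilt(sos, rumble + note)

# After the filter settles, the output is dominated by the 196 Hz note;
# the 40 Hz rumble is attenuated by roughly 24 dB.
print(np.max(np.abs(filtered[fs // 2:])))
```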

Critical Sound — The Voice
Our most critical source in the church, the human voice, also has clear-cut frequency ranges, whether those voices are singing or speaking.

While everyone’s voice has minor variations, the male voice produces frequencies between 100 Hz and 16 kHz. The female voice shares the same 16 kHz top end, but doesn’t typically produce any frequencies below 240 Hz.

The first thing this should tell you is that your low cut or high pass filter should almost always be engaged on these inputs. As you can see on our chart (above), the warmth or boominess of the voice is between 100-250 Hz, so most of the time there is nothing worth having below 100 Hz.

The most important frequency range in the voice in my opinion, and the one I see most commonly mis-adjusted, is the intelligibility range in the high-mids (2 kHz to 4 kHz).

When listening to vocals that are “honky” or “tinny,” I often see sound guys reach for the high-mids and adjust those down to try and improve the sound.

As we can see on the chart, lowering the high-mids actually attacks the intelligibility while missing the “honky/tinny” sound that’s in the 400 Hz - 2 kHz range. It seems like such a small miss on paper, but this mistake will often cost the vocals their clarity in the mix.

What To Do With Q
If you’re fortunate enough to have a full parametric EQ with a Q knob, you have a tool that allows you to get very specific with your EQ adjustments or make general, sweeping changes. Most of the time we want to make fairly general adjustments, and a roughly one-octave change is great, which corresponds to a Q value of about 1.4 (a Q of 1 is only slightly wider).

If you need to subtract a bit of “honkiness” from your guitar, though, a 2 dB cut centered at 700 Hz with a lower Q (maybe 0.7) will give you a broader, wider cut to affect everything in that 400 Hz - 1 kHz range.

Or if you’re fighting a particular frequency for feedback, you can make your Q very high (maybe 5 or 6) so that you are narrowly notching out the frequency that’s causing you issues, leaving the rest of the frequencies alone and keeping some semblance of natural sound.

The Q, if you have it, really gives you a great deal more flexibility in the adjustments you make.
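For those who like numbers, here’s a small Python sketch of the standard relationship between Q and bandwidth in octaves, which puts figures on the “broad” versus “narrow” settings discussed above. The specific Q values are the ones used as examples in this article.

```python
# Standard relationship between a peaking filter's Q and its bandwidth in
# octaves: Q = 1 / (2^(N/2) - 2^(-N/2)) for a bandwidth of N octaves.
import math

def q_from_octaves(n_octaves: float) -> float:
    """Q value for a peaking filter whose bandwidth spans n_octaves."""
    return 1.0 / (2 ** (n_octaves / 2) - 2 ** (-n_octaves / 2))

def octaves_from_q(q: float) -> float:
    """Inverse: bandwidth in octaves for a given Q."""
    return 2.0 / math.log(2) * math.asinh(1.0 / (2.0 * q))

print(round(q_from_octaves(1.0), 2))   # ~1.41 -> a one-octave-wide adjustment
print(round(octaves_from_q(0.7), 2))   # ~1.92 -> the broad cut around 700 Hz
print(round(octaves_from_q(6.0), 2))   # ~0.24 -> a narrow notch for feedback
```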

Feedback
Speaking of feedback, one last thing our chart can help us with is learning what individual frequencies sound like. We’ve all dealt with feedback at some point: a frequency the system is sensitive enough to that, once amplified, the mic picks it up from a monitor or loudspeaker again and again, creating a feedback loop.

When feedback occurs, one common way to attack it is adjusting the EQ to decrease the gain of that frequency. To be able to do that quickly and effectively, we need to know what individual frequencies sound like.

Frequency Killing
At the bottom of the chart is a standard 88-key piano that shows what frequency each note produces. When it comes to training your ears to be able to quickly respond to feedback, sitting at a piano or keyboard with this chart can help you learn exactly what frequencies sound like.

Try it sometime: sit at a keyboard and focus on a typical problem range of 200-500 Hz and press one key over and over, training your brain to recognize the tone of each frequency.

Middle C is a great place to start, at a frequency of about 262 Hz; I find this area to be a common problem for many churches. Then jump up to the A above it, at 440 Hz. Do this occasionally, spanning the entire frequency spectrum, and you’ll be a frequency killer in no time!
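If you’d rather let a computer handle the note-to-frequency math while you train your ears, here’s a minimal Python sketch using the standard equal-temperament formula (A4 = 440 Hz). The key numbering follows the usual 88-key convention, and the helper name is just an illustration.

```python
# Map each key of a standard 88-key piano to its frequency in A440 equal
# temperament. Key 1 is the lowest A (27.5 Hz), key 49 is A4 (440 Hz),
# key 40 is middle C.
def piano_key_to_hz(key_number: int) -> float:
    """Frequency of a piano key (1-88) in 12-tone equal temperament, A4 = 440 Hz."""
    return 440.0 * 2 ** ((key_number - 49) / 12)

print(round(piano_key_to_hz(40), 1))  # middle C, ~261.6 Hz
print(round(piano_key_to_hz(49), 1))  # the A above middle C, 440.0 Hz

# Keys falling in the common 200-500 Hz "problem" range mentioned above
problem_keys = [k for k in range(1, 89) if 200 <= piano_key_to_hz(k) <= 500]
print(problem_keys)  # keys 36 through 51
```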

Wrap Up
Hopefully at this point you’re feeling more confident in what EQ is and what it can do for you. The difficult part is that there is no clear cut formula for what will and won’t work.

I can’t tell you that you should always cut certain frequencies to get a great sounding input. Our vocals and instruments are living, breathing, unique things and they all have their own flavor.

On top of that, every sound system and facility has its own nuances that come into play. When it comes to EQ, you have to trust what you’re hearing. Use the chart. Print one out and keep it at your sound console for reference.

 

Duke DeJong has more than 12 years of experience as a technical artist, trainer and collaborator for ministries. CCI Solutions is a leading source for AV and lighting equipment, also providing system design and contracting as well as acoustic consulting. Find out more here.

 


In The Studio: Tips For Better Take Management

One of the major differences between an aspiring producer and an established one...
This article is provided by the Pro Audio Files.

 
One of the major differences I’ve seen between an aspiring producer and an established producer is simple playlist (take) management. Great producers will usually have a very clean session in regard to organization and take management.

Gizmos
Technology is great. It allows us to do things that have never been done before, all in the comfort of our own homes. But when is it a hindrance? When do we become a prisoner of all the possibilities? When do we start to drown in endless options?

Established producers often have a lot of clarity within their sessions. They’re not concerned with countless possibilities, but rather with the best option.

This means when it comes to comping tracks and saving takes, decisions are made quickly.

Saving 20 takes per part may seem like a reasonable idea to many. What if you want a different variation on the part? Not sure the timing is locked? Not sure which take has the best tuning? What if? What if? What if?

Too many “what if’s” lead to a muddy production. It’s important to make decisions. Clarity throughout the process is important. Firstly, because it affects the performances.

A guitar chord that’s off is going to cause the bass note to be off, and then the percussionist has a hard time locking in. Before long, you have a session where the whole band is a little shaky. A lack of decisiveness can create a spiral effect on the stability of the production.

Momentary Lapse Of Reason
There is also the memory lapse effect. You record a bunch of takes and while you’re working, everything seems clear in your mind: Take 12 had a good bit, take 15 was mostly good, but you want to grab the beginning from take 4.

If you put the song down for a few days and come back to the session it’s going to be hard to remember the nuances between takes.

Commit. If it’s still not good enough, re-track it. At this point, you’re better off getting a single take than a patched edit for the sake of feel. I’m always in favor of replaying the part rather than making extensive edits. It will take the same amount of time, and the full take will still sound better.

Worm Hole
Aspiring producers/musicians get caught in the trap of playing too much and not listening. I like to set a rule of stopping after 4 takes and giving a really good listen. Don’t set the recorder to loop endlessly. Loop recording means you’re not listening, and most likely spacing out at times.

Perspective
It’s hard to hear the music the way it really sounds while you’re playing. This is another reason why you need to stop and listen as often as you can. If you’re both the producer and the player, your perspective is biased.

When you stop, put your instrument down and trust your ears. Listen, make notes, and re-take. Don’t be noodling on your instrument while listening. This is the only way to make really fine adjustments. It may seem like it’s the long approach, but in reality, it will save you time.

Hit It
Here is how I like to track a vocal session.

First, I’ve taken time to choose the correct mic, preamp, compressor, incense, tea, lighting and dialed in a headphone mix. (Note: It’s very important to have a great headphone mix. It will result in less fatigue and frustration from the performer.)

Next, I like to record a couple of full passes before we even think about punches. Let the performer get into the vibe of the song.

After 3-4 takes, stop. Take a short break for water and then listen. Before we listen, I make sure we both have a pencil and paper. As we review each take, we write notes on what we liked or didn’t.

Listening to 8 takes in a row is overwhelming! It’s too much to digest. Plus, I’ve heard that if you listen to 9 takes in a row it could cause bowel irritation. Ok, I made that up. But, if I have to listen to 9 takes in a row of the 3rd part background vocal I’m going to be calling my friend Johnny Walker Red… And we’re gonna have a loooong chat, if ya know what I mean.

When the last take has finished playing, we compare notes and see if we have a comp. If the overall performance isn’t there, we repeat the 3-4 take run, break, then listen, take notes, and comp.

If we just need a few bits, we comp the take and punch in where needed. Notice I mention we comp BEFORE we punch!

Performance Drift
There is something I like to call “Performance Drift.” This is when the artist’s performance changes dramatically from the first take to the last. Volume, expression, and enthusiasm may have shifted during flight. Limiting tracking to 3-4 takes at a clip prevents performance drift, as the breaks and reviewing keep it fresh.

Hash It Out
Don’t use recording as your practice. Need to review something because it’s not right? Stop playback and run it. Work it out. Be prepared and ready when the red light is on.

Don’t have the mindset of “I’ll fix it later.” The performance will always suffer. Even though we know comping and punching are options, it’s good to pretend that they’re not. A coherent take will always sound better.

Binary Composting
Don’t be a take-hoarder. Go ahead and delete! Don’t be afraid. Why live in the past, when you can be in the present? Last take only so-so? DELETE.

It’s also a good idea to delete all unused audio from your sessions. There’s no use carrying around that baggage, and no reason to have 20 gigs of audio that you’re not using. A bloated session is harder to back up and makes it harder to track down a file if need be. Plus, it takes longer to load.

If you’re not using it, send it off to greener pastures (aka your trash bin). Think of it as composting for 1’s and 0’s. Dare I say binary composting?!?!

Adios
Before I tell the musician a session is over, I make sure I have a comp I can live with, with all crossfades and edits cleaned up. I want to know I have the part and what it sounds like. Leave nothing to the imagination…except which island your summer home will be on after your single blows up.

 
Mark Marshall is a producer, songwriter, session musician and instructor based in NYC.

Be sure to visit The Pro Audio Files for more great recording content. To comment or ask questions about this article, go here.
