
Friday, September 26, 2014

First Things First: Thoughts On The System Optimization Process

Very often the topic of “room tuning” comes up in the practice of pro audio, but what we’re really talking about is “system optimization.” And over the course of many years, we’ve used many tools that seemed to—or actually did—contribute to desirable results.

But system optimization is not just about turning knobs (virtual or otherwise) until things sound good. Sure, you can do that, and maybe that’s all you have time for under certain circumstances, but it’s not likely to constitute the highest possible grade of work.

I’ve been pondering this topic for many years, also noting a wealth of existing reference sources (articles and books) that go to various depths in offering solid and useful information about the craft.

Against that backdrop, it occurred to me that there’s not been much focus on basic practices—things that need to be considered before we get into an involved system optimization process. That idea prompted this discussion.

Key Questions
There are shows and then there are shows. Many are simple, using only left and right loudspeakers at the stage that are mounted on stands, or ground-stacked, or flown. These are the easier ones, because the time it takes to equalize and tailor coverage of a left-right source is, generally speaking, not something that will cause you to need to consult a calendar.

Sometimes, however, it’s not so simple or straightforward. If the event is a corporate presentation, or perhaps a theatrical performance with a band in the orchestra pit, what will the optimal loudspeaker complement be? Our goal is to ensure that every audience member has an equally enjoyable experience, and that’s rarely an easy task.

How long will it take to design the system, plan it, install it, and then optimize it? Now it’s time to get out the calendar and negotiate a reasonable timeframe with the event organizer. Time is money, especially when renting high-profile venues and employing numerous support people, so the faster that you can perform effective work, the more the client is likely to be impressed and ask for you again for future events. 

Will front fill loudspeakers be required? It’s quite likely they will, in order to attain proper coverage in the front seating rows that are tough for the mains to reach.

Or it can be something more specific like live music from an orchestra pit drowning out the vocalists in the first few rows, which you can mitigate by adding vocal-only reinforcement from a series of compact, low-profile loudspeakers. (A tip: use a matrix output of the console to route only vocals to these loudspeakers).

What about the balcony? Are there seats under and over (on) it? Both spaces usually exhibit different acoustical characteristics than the orchestra seats. And perhaps there’s also a loge, or a dress circle, or a tier of balconies that each need to receive attention. Ultimately, these areas will likely need reinforcement from additional loudspeakers.

Now, the trick is that all of these loudspeakers, perhaps arranged in several zones, need to be combined together so that the sonic energy in one area will not cause problems in another area. If the room is fairly reflective, the “corruption” of areas bleeding into other areas is common, and can be one of the more difficult aspects of optimizing the system. If the room is highly reflective, then the challenge grows significantly greater and the calendar should reflect the need for enough time to do the job properly.

Accurate & Uncolored
This leads me to consider the concept of making a system “flat” versus equalizing to a target curve.

Think about it—do you want a console that alters the frequency and phase response on every channel in some arbitrary fashion? Of course not.

You might want to alter a given channel yourself with respect to EQ, dynamics, timing, effects, but how far would you get if there was a hidden deviation from a relatively flat, even response in the signal path that could not be removed? You’d spend all of your time working around such an obstacle, instead of optimizing the musicality and intelligibility of the performers.

The same is true with loudspeakers. Whatever goes in is what should come out. No more, no less. You can apply “house” curves if you like, but the basic foundation of the system should be a response that’s as flat as you can reasonably make it, in the time allowed.

It’s simple to exaggerate bass or high frequencies (or both) for hip-hop, dance, metal (or whatever) music in order to overwhelm the listeners. But it’s quite a different story to deploy such exaggerations (and I’m not saying there’s anything wrong with them) to a system that is accurate and uncolored.

When a reasonably even response has been achieved as the starting point, then the process of “tuning to taste by ear” becomes quicker, simpler, and far more effective. Instead of a mountain of bass, you’ll have definition. The tonality of the kick drum won’t be buried in the bass guitar, and vice versa.

A typical Fletcher-Munson curve graph.

Some practitioners look at Fletcher-Munson curves and come to the conclusion that a good sound system should have a different response at different sound pressure levels, but that would be mistaking the intent of these admirable researchers. What Fletcher and Munson did do was to identify that human hearing exhibits varying sensitivities, at varying frequencies, as the overall SPL is increased or decreased.

This is not the same as saying that a sound system should change its response characteristics as the operating level is altered. Quite to the contrary, it should normally remain equalized exactly the same way, regardless of operating level.

Staying Faithful
Of course, there are exceptions. If the system is at low level, such as background music in a noisy environment, then some degree of exaggeration of the LF and HF will give it more presence. But if it’s a performance system, intended to command the attention of the audience, then it should provide as even and flat a response as possible.

This might seem confusing because many sound systems become brittle, strident, and outright distorted as they’re driven hard, and especially as they approach their maximum power capability.

This justifiably causes a conscientious operator to want to reduce the HF content to help save everyone’s ears—not least his or her own. But it has nothing to do with Fletcher-Munson curves.

Distortion is far more damaging to human hearing than clean, accurate sound presented at an identical SPL. This is because of the exaggerated harmonic content that takes place when LF, MF, and HF drivers enter into their “breakup mode,” which means they no longer are faithfully reproducing the input signal but are adding harmonics of their own that are not present in the program source.

It also applies to amplifiers that are driven into clipping, as well as preamps, microphones, and just about anything else in the signal chain. In a word, it doesn’t sound good when distortion occurs.

Unfortunately, some practitioners push a given system into severe distortion because it’s what they believe represents a fat, rich sound. Fat it may be, but bearing any resemblance to music, it is not.   

Subtractive EQ
Let’s assume that you have five zones to equalize: left, right, front fill, underbalcony, and overbalcony.

We’ve already talked about front fill; generally speaking, it should be equalized to make the vocals—or other program content—sound natural and full.

But wait! There’s more. Always start by equalizing, listening, and generally making sure that you’ve achieved a balanced, even sound quality from the largest and most powerful loudspeakers or clusters/arrays.

They almost always provide more bass than is necessary in most of the areas of the room, and therefore, those front fill loudspeakers, which are probably quite small, will not need to reproduce much LF because it’s already being delivered by the main system.

Consequently, the front fills can probably get much louder, whenever needed, if they’re high-passed so that they’re not being expected to provide much LF content. So EQ them while the main system is operating and use them to “fill in” the portion of the audible spectrum (or program content) that’s not getting into the seating areas that they’re covering. This takes some time to master, and is aided greatly by using an accurate, reliable measurement system.
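
If it helps to see the high-pass idea concretely, here’s a minimal code sketch (my own illustration, not from the article) of high-passing a front fill feed, assuming a 48 kHz sample rate and a 120 Hz corner; in practice the corner is chosen by measurement and by what the small boxes can actually handle.

    # Minimal sketch: high-passing a front fill feed (assumed values, not from the article)
    import numpy as np
    from scipy import signal

    fs = 48000            # sample rate (assumption)
    corner_hz = 120       # high-pass corner for the small front fills (assumption)

    sos = signal.butter(4, corner_hz, btype="highpass", fs=fs, output="sos")

    t = np.arange(fs) / fs
    feed = np.sin(2 * np.pi * 60 * t) + 0.5 * np.sin(2 * np.pi * 1000 * t)  # 60 Hz + 1 kHz test signal
    front_fill_feed = signal.sosfilt(sos, feed)  # LF content removed; the mains carry it instead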

By the way, I suggest never doing this by ear alone—very few ears are that good, if any.  Our hearing just isn’t geared towards being refined enough to replace a high-resolution spectrum analyzer.

The same approach, that of subtractive equalization, can be used with great effectiveness for equalizing delayed loudspeakers, whether they’re under a balcony, over a balcony, on box booms, or anywhere else in the room. Before even bothering to listen to these loudspeakers (except, of course, to ascertain that they’re working properly), evaluate the energy that’s coming from the mains. You’ve already adjusted their spectral content in the seating areas that they’re intended to cover. (Right?)

Is the energy that’s arriving under and/or over the balcony the same? Or is it skewed rather badly? Chances are it’s the latter. The task is to use the loudspeakers in these regions to “fill in” the portions of the spectrum that aren’t arriving from the mains. And don’t forget the need to critically time them to support—rather than clash with—the main system, or the result will be all sorts of nasty comb-filtering.
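
For the timing part, the arithmetic is simple enough to sketch in a few lines (an illustration, not a substitute for measuring): delay the fill by the path-length difference to the mains divided by the speed of sound, roughly 343 m/s at room temperature. Adding a few extra milliseconds on top, which some practitioners use to keep the image pulled toward the mains, is a common practice rather than something prescribed here.

    # Minimal sketch: delay for an under-balcony (or other) fill relative to the mains
    SPEED_OF_SOUND_M_S = 343.0  # roughly 20 degrees C (assumption)

    def fill_delay_ms(main_to_listener_m, fill_to_listener_m, offset_ms=0.0):
        """Delay the fill so its arrival lines up with (or sits just behind) the mains."""
        path_difference_m = main_to_listener_m - fill_to_listener_m
        return path_difference_m / SPEED_OF_SOUND_M_S * 1000.0 + offset_ms

    # Example: mains 30 m from the listener, fill 4 m away -> about 75.8 ms, plus any chosen offset
    print(round(fill_delay_ms(30.0, 4.0), 1))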

Further, by equalizing the delay loudspeakers to provide the spectral energy that’s not being provided by the main loudspeakers, you will be greatly reducing the excitation of the reverberant field in the room. You can always add in reverb later; it’s a lot harder to take it out. 

As noted at the outset, this is only a primer to get the thought process in motion. Many additional procedures exist that further your goal of going from “passable” to a beautiful sonic experience!

Over the course of more than four decades, Ken DeLoria has tuned hundreds of sound systems, and as the founder of Apogee Sound, he developed the TEC Award-winning AE-9 loudspeaker.


Thursday, September 25, 2014

In The Studio: Mid-Side Microphone Recording Basics

Courtesy of Universal Audio.

 
When most people think of stereo recording, the first thing that comes to mind is a matched pair of microphones, arranged in a coincident (XY) pattern. It makes sense, of course, since that’s the closest way to replicate a real pair of human ears.

But while XY microphone recording is the most obvious method, it’s not the only game in town. The Mid-Side (MS) microphone technique sounds a bit more complex, but it offers some dramatic advantages over standard coincident miking.

If you’ve never heard of MS recording, or you’ve been afraid to try it, you’re missing a powerful secret weapon in your recording arsenal.

More Than Meets The Ears
Traditional XY recording mimics our own ears. Like human hearing, XY miking relies on the time delay of a sound arriving at one input milliseconds sooner than the other to localize a sound within a stereo field.

It’s a fairly simple concept, and one that works well as long as both mics are closely matched and evenly spaced to obtain an accurate sonic image.

One of the weaknesses of the XY microphone technique is the fact that you’re stuck with whatever you’ve recorded. There’s little flexibility for changing the stereo image once it’s been committed to disk or tape. In some cases, collapsing the tracks into mono can result in some phase cancellation.

The MS technique gives you more control over the width of the stereo spread than other microphone recording techniques, and allows you to make adjustments at any time after the recording is finished.

Mid-Side microphone recording is hardly a new concept. It was devised by EMI engineer Alan Blumlein, an early pioneer of stereophonic and surround sound. Blumlein patented the technique in 1933 and used it on some of the earliest stereophonic recordings.

The MS microphone recording technique is used extensively in broadcast, largely because properly recorded MS tracks are always mono-compatible. MS is also a popular technique for studio and concert recording, and its convenience and flexibility make it a good choice for live recording as well.

What You Need
While XY recording requires a matched pair of microphones to create a consistent image, MS recording often uses two completely different mics, or uses similar microphones set to different pickup patterns.

The “Mid” microphone is set up facing the center of the sound source. Typically, this mic would be a cardioid or hypercardioid pattern (although some variations of the technique use an omni or figure-8 pattern).

The “Side” mic requirement is more stringent, in that it must be a figure-8 pattern. This mic is aimed 90 degrees off-axis from the sound source. Both mic capsules should be placed as closely as possible, typically one above the other.

How It Works
It’s not uncommon for musicians to be intimidated by the complexity of MS recording, and I’ve watched more than one person’s eyes glaze over at an explanation of it.

But at its most basic, the MS technique is actually not all that complicated. The concept is that the Mid microphone acts as a center channel, while the Side microphone’s channel creates ambience and directionality by adding or subtracting information from either side.

The Side mic’s figure-8 pattern, aimed at 90 degrees from the source, picks up ambient and reverberant sound coming from the sides of the sound stage.

Since it’s a figure-8 pattern, the two sides are 180 degrees out of phase. In other words, a sound that produces a positive signal when it arrives at one side of the mic’s diaphragm produces an equal negative signal when it arrives at the other side. The front of the mic, which represents the plus (+) side, is usually pointed to the left of the sound stage, while the rear, or minus (-) side, is pointed to the right.

Mid-Side recording signal flow.

The signal from each microphone is then recorded to its own track. However, to hear a proper stereo image when listening to the recording, the tracks need to be matrixed and decoded.

Although you have recorded only two channels of audio (the Mid and Side), the next step is to split the Side signal into two separate channels. This can be done either in your DAW software or hardware mixer by bringing the Side signal up on two channels and reversing the phase of one of them. Pan one side hard left, the other hard right. The resulting two channels represent exactly what both sides of your figure-8 Side mic were hearing.

Now you’ve got three channels of recorded audio– the Mid center channel and two Side channels – which must be balanced to recreate a stereo image. (Here’s where it gets a little confusing, so hang on tight.)

MS decoding works by what’s called a “sum and difference matrix,” adding one of the Side signals—the plus (+) side—to the Mid signal for the sum, and then subtracting the other Side signal—the minus (-) side—from the Mid signal for the difference.

If you’re not completely confused by now, here’s the actual mathematical formula:

Mid + (+Side) = left channel
Mid + (-Side) = right channel

Now, if you listen to just the Mid channel, you get a mono signal. Bring up the two side channels and you’ll hear a stereo spread. Here’s the really cool part—the width of the stereo field can be varied by the amount of Side channel in the mix!
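
If it’s easier to see the matrix as code, here’s a minimal sketch (my own, assuming the Mid and Side tracks are already recorded as separate mono arrays) of the sum-and-difference decode described above, with a width factor that simply scales how much Side is mixed in.

    # Minimal sketch: Mid-Side decode (sum and difference) with an adjustable width factor
    import numpy as np

    def ms_decode(mid, side, width=1.0):
        """Return (left, right) from recorded Mid and Side tracks.
        width = 0 gives mono (Mid only); larger values widen the stereo image."""
        left = mid + width * side   # Mid + (+Side)
        right = mid - width * side  # Mid + (-Side)
        return left, right

    mid = np.array([0.2, 0.5, -0.1])    # stand-in samples
    side = np.array([0.1, -0.3, 0.05])
    left, right = ms_decode(mid, side, width=1.0)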

Why It Works
An instrument at dead center (0 degrees) creates a sound that enters the Mid microphone directly on-axis.

But that same sound hits the null spot of the Side figure-8 microphone. The resulting signal is sent equally to the left and right mixer buses and speakers, resulting in a centered image.

An instrument positioned 45 degrees to the left creates a sound that hits the Mid microphone and one side of the Side figure-8 microphone.

Because the front of the Side mic is facing left, the sound causes a positive polarity. That positive polarity combines with the positive polarity from the Mid mic in the left channel, resulting in an increased level on the left side of the sound field.

Meanwhile, on the right channel of the Side mic, that same signal causes an out-of-phase negative polarity. That negative polarity combines with the Mid mic in the right channel, resulting in a reduced level on the right side.

An instrument positioned 45 degrees to the right creates exactly the opposite effect, increasing the signal to the right side while decreasing it to the left.

What’s The Advantage?
One of the biggest advantages of MS recording is the flexibility it provides. Since the stereo imaging is directly dependent on the amount of signal coming to the side channels, raising or lowering the ratio of Mid to Side channels will create a wider or narrower stereo field.

The result is that you can change the sound of your stereo recording after it’s already been recorded, something that would be impossible using the traditional XY microphone recording arrangement.

Try experimenting with this—listen to just the Mid channel, and you’ll hear a direct, monophonic signal. Now lower the level of the Mid channel while raising the two Side channels.

As the Side signals increase and the Mid decreases, you’ll notice the stereo image gets wider, while the center moves further away. (Removing the Mid channel completely results in a signal that’s mostly ambient room sound, with very little directionality – useful for effect, but not much else.) By starting with the direct Mid sound and mixing in the Side channels, you can create just the right stereo imaging for the track.

Another great benefit of MS miking is that it provides true mono compatibility. Since the two Side channels cancel each other out when you switch the mix to mono, only the center Mid channel remains, giving you a perfect monaural signal.
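
Put as arithmetic, using the formula from earlier: the left channel is Mid + Side and the right is Mid - Side, so folding them together gives (Mid + Side) + (Mid - Side) = 2 × Mid. The Side content cancels completely, leaving only the Mid signal.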

Mid-Side recording signal flow.

And since the Side channels also contain much of the room ambience, collapsing the mix to mono eliminates that sound, resulting in a more direct mix with increased clarity. Even though most XY recording is mono compatible, the potential for phase cancellation is greater than with MS recording. This is one reason the MS microphone technique has always been popular in the broadcast world.

Other Variations
While most MS recording is done with a cardioid mic for the Mid channel, varying the Mid mic can create some interesting effects. Try an omni mic pattern on the Mid channel for dramatically increased spaciousness and an extended low frequency response.

Experimenting with different combinations of mics can also make a difference. For the most part, both mics should be fairly similar in sound. This is particularly true when the sound source is large, like a piano or choir, because the channels are sharing panning information; otherwise the tone quality will vary across the stereo field.

For smaller sources with a narrower stereo field, like an acoustic guitar, matching the mics becomes less critical. With smaller sources, it’s easier to experiment with different, mismatched mics. For example, try a brighter sounding side mic to color the stereo image and make it more spacious.

As you can see, there’s a lot more to the MS microphone technique than meets the ear, so give it a try. Even if the technical theory behind it is a bit confusing, in practice you’ll find it to be an incredibly useful method to attain ultimate control of the stereo field in your recordings.

Daniel Keller is a musician, engineer and producer. Since 2002 he has been president and CEO of Get It In Writing, a public relations and marketing firm focused on audio and multimedia professionals and their toys. Despite being immersed in professional audio his entire adult life, he still refuses to grow up. This article is courtesy of Universal Audio.


Monday, September 22, 2014

Some Things I’ve Noticed About Working With Sound…

As with politics, it can be very difficult to be rational when we think, discuss and make decisions about sound. Of course, much about sound is subjective, even if there are quantifiable aspects to what we do.

No matter how it looks on Smaart, the end result has to be something that satisfies the audience, or at least satisfies us – and we should (hopefully) be the toughest customer of our own product.

Opinions Vary…
The first thing that came to mind while thinking about all of this is that microphones seem to be one subject about which people have more passionate opinions than just about any other piece of kit.

I was reminded of all this when perusing forums here on PSW, noting that of all the questions asked, the ones that get the most responses are along the lines of “What’s the best vocal mic for a female singer?” or “What’s the best choice for guitar cabinet mic?”

In other words, opinions are like microphones, everyone has at least three favorites… Maybe this is because there are so many microphones, and even models that have been on the market for 40 years are still being used today.

But I suspect that it is also because mics are where sound gets magically changed into electrical impulses, and thus there is such a huge opportunity to get things “right” or “wrong.” People seem really polarized about this, and count me among them.

There’s just something about the fact that microphones are the focal point where art meets technology. The emotive sound of the human voice becomes electrons moving on a wire.

The beautiful sparkle of that pre-war Martin moves the air, which moves a diaphragm and coil, and somewhere a meter moves in response. And then a loudspeaker changes the final resulting signal into a much louder version so thousands can hear that sparkle, but it all starts at the microphone.

To me, the right choice of mics and the knowledge of where to put them makes such a huge difference in the end result. First of all, it makes mixing much easier and reduces the need for EQ on the console, which is where we tend to get ourselves in trouble.

With modern DSP-based consoles, this is less of a problem, but still an issue. Why add 6 dB at 10 kHz when we can just use a brighter-sounding microphone at the starting point?

One last thought is that I think the choice of mics has evolved along with other changes in the PA world. The use of in-ear monitors (IEM) and the fabulous loudspeaker systems of today mean that you can actually choose mics based on the way they sound, rather than just simply to avoid feedback or to overcome the loss of highs in the mains.

In my opinion you have to become familiar with these microphones and trust your own ears on your artist’s sound rather than relying on what anyone else says. Of course it can be useful to see what others say in order to narrow the choices.

But beyond that, it’s up to you.

Everyone Notices
Picture without sound is surveillance. You’ve heard that one, right? And sound without picture is radio… you get the idea. Although I’ve heard some people (OK, specifically it was guys) argue that porn doesn’t need sound. I’ll leave that one alone.

But my point is that sound is so critically important to any type of entertainment, and yet it seems to be an afterthought in so many cases. When things go wrong with sound, everyone notices.

Ever had massive feedback at one of your gigs? If so, then you know what I’m talking about. Or how about a loud hum? Same thing – people notice.

But I would be willing to bet that if one of the banks of lights didn’t work at a show, very few people would notice.

Unfortunately, most of us have experienced the seemingly universal disdain for the “sound guy” or the “techie” or whatever semi-derogatory phrase might be used in a given culture for “those nerds around here that screw up the sound.”

Why does the event producer, sometimes the corporate client, festival organizer, city cultural officer, or other “person in charge” always seem to take offense when it is brought up that the budget for the PA is not even close to adequate for the event at hand?

Meanwhile, why is there always plenty of money for decorations, the hotel ballroom, spokesmodels, door prizes, etc.?

Sometimes I think it is because they see too many movies, and not only that, but Hollywood gets it wrong. They have created this illusion that:

A) Whenever someone walks up to a microphone, we should hear feedback;
B) Loudspeakers don’t exist, yet everyone somehow has great sound; and maybe
C) Sound reinforcement in the 1950s, despite the horrible mics of the day and incredibly limited PA systems, sounded just like the records of the day (in other words, great).

The result is this disconnect between how people imagine an event and what it really takes in terms of budget, logistics, sightlines, AC power, etc. to really make things work.

How do we fix this?

I’m not sure we can directly do anything. But of course it helps to be confident, competent, and be ready to give references. Maybe even talk to a few of your past good clients (if you haven’t already) and prep them that future potential customers may be getting in touch with them.

And finally, be diplomatic. Sure, there are times when it’s appropriate to simply drop the job, but usually there is a professional way to nudge those involved to consider just how important sound really is. Maybe the thing to do is make a short video that starts out with no sound at all, then when the sound comes in, it’s distorted, then clears up a bit but there’s a loud buzz.

Then, at last, the sound is clean and clear. Ask the client at which point the sound became acceptable and then explain that you are the vendor that can provide them with quality sound because you know how important it is to them.

All Roads Lead To… Wireless?
This last muse is on a topic to which I’m very close, since I work for a manufacturer making wireless microphone products. But my observation isn’t about being a supplier, it’s about the fact that whenever I’ve been around a show and anything, I mean anything, goes wrong with the sound, the first blame gets placed on the wireless microphones.

Let me relate a story about that…

A while back, I provided the wireless microphones and backline wireless for a show in San Diego. Between comm, IEM, microphones and backline, we had about 45 channels of wireless on the stage – certainly not a system where you would want to “guess” the frequencies. To coordinate everything, I used IAS software from Professional Wireless Systems.
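
For anyone curious what “coordination” is actually checking, here’s a rough sketch of the most basic test (my own illustration, not the IAS algorithm): make sure no carrier lands near a strong third-order intermod product (2A - B) generated by any pair of the other transmitters. Real coordination tools go much further, accounting for local TV channels, higher-order products, and receiver spacing.

    # Minimal sketch: flag carriers that sit near a 2A - B intermod product (illustrative values)
    from itertools import permutations

    def third_order_products(freqs_mhz):
        # 2A - B products for every ordered pair of transmit frequencies
        return sorted({round(2 * a - b, 3) for a, b in permutations(freqs_mhz, 2)})

    def clashes(freqs_mhz, guard_mhz=0.3):
        # guard_mhz is an assumed minimum spacing, not a universal rule
        products = third_order_products(freqs_mhz)
        return [(f, p) for f in freqs_mhz for p in products if abs(f - p) < guard_mhz]

    # These example frequencies are made up; two of them collide with intermod products
    print(clashes([470.1, 470.9, 472.3, 474.5]))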

During rehearsals, one of the main guitar players in the house band was having problems and he mentioned to me that his level was going up and down, sounding “weak”, etc. Note that he came to me first… and that’s the point of this story.

I proceeded to swap out his transmitter and receiver and put him on a different pre-coordinated frequency, thus eliminating any one of the three things that could have been causing the problem IF the wireless was at fault.

Guess what? He still had the same problem. He was very understanding and realized that it probably wasn’t the wireless, although he didn’t know what to check next. Meanwhile, the stage manager started giving me a hard time, and no amount of calm explanation on my part would sway him. He was simply convinced that it was the wireless causing the problem.

I knew that the problem was likely the guitarist’s pedal board, seeing that it was the most complex part of the chain and had the most opportunities for intermittent connections, etc. I made a note of this, and worked in a quick trip to Radio Shack. There, I got some contact cleaner and ran across the street to a hardware store to get some Scotch Brite.

After I got back, during a break in rehearsals, I went through that pedal board with a fine-toothed comb—cleaning all the quarter-inch plugs and jacks, and DC connectors—then put it all back together. From then on, it worked perfectly. I was there to provide RF mics, but I was the only one convinced that the wireless had nothing to do with the problem and thus set about to find a solution.

After that, the stage manager seemed to give me more respect. And I think this is one way that we can all improve our standing when handling wireless mic systems – know your system inside and out and be prepared for anything.

A lot of people still seem to believe that wireless systems are run on voodoo because they don’t understand some of the fundamentals involved. But math and physics are what determine success in the RF world.

A quick side story is that one of the house IATSE guys told me that just a few weeks before, he’d had a show in that same theater with all kinds of problems with the wireless. When they called the manufacturer for help and described their location (downtown San Diego), they were told, “You guys are basically screwed,” because of the heavy use of the spectrum by TV broadcasts.

I was pretty shocked to hear that, and invited the house guy to listen to some of my channels. Not a blip, hit, dropout, nothing. He asked how this was possible, and I explained that we had done careful frequency coordination and set the antennas up properly.

Another related bit is that the monitor engineer told me that he wasn’t surprised that the wireless mics worked so well, but he wanted to know, “How come the IEM system seems so solid?” His experience had been that wireless IEMs were usually prone to problems.

Again, I explained that we had done a good coordination and thus we would be able to count on these systems to work – end of story.

The Bottom Line
All these issues are inter-related and to me, they point to the fact that we really have to know our craft inside and out. I’ve said it before, and here it is again: We should never stop learning and never think that we know it all.

We can’t assume that what worked before will necessarily work again. And we have to be studying our systems and the underlying technology all of the time in order to stay current. As we move more and more into digital consoles and wireless mics and IEMs, there is no excuse not to master these systems.

Things are constantly getting more complex, but at the same time, the possibilities for excellence are ever more available to us. Don’t forget to rely on your quality resources—the good manufacturers are always there to help.

And the basics of physics never change—until scientists and mathematicians tell us they do. Until then, have fun and make some good sound.

Karl Winkler is director of business development at Lectrosonics and has worked in professional audio for more than 20 years.


Friday, September 19, 2014

Backstage Class: Dynamic Versus Compressed

A while back I was pondering mixing live shows, as I strangely so often find myself doing, and I began to analyze the varying aspects of dynamics in live reverberant fields.

Is there something more legitimate than personal preference that would add credibility to using compression?

The studio humans and mastering labs use a ton of it, but comparatively, we live engineers use fairly little. I know it works well to control the variations in a band’s playing, and also helps with smoothing the sound, but is there yet another advantage of compression that is not so readily apparent?

On the surface it’s quite obvious that compression can be used on bass to reduce the differential between the louder and softer notes resulting in a more consistent sound. Same with vocals, and I put compressors on guitars as well.

I even take it further and run kick and snare into a subgroup that has a bit of compression on it to keep the two a bit more locked-in, volume-wise, to each other.

So what got me started (again) on this train of thought?

Not long ago I was listening to a super punchy horn-loaded sound system. Boom, crack, boom, crack, as the drums jump out at me - and they do sound cool.

But I also know from experience that the reverb decay from those loud, “on-top,” super punchy sounds blurs the intelligibility of everything else that immediately follows.

So if an uncompressed snare is 10 dB “on-top” of the mix, then the correspondingly loud roar of the room-reverb-decay-level from that snare will hurt overall intelligibility long after the original snare hit has been heard and ended.
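
To put a rough number on it (my example, not a measurement): reverb decays 60 dB over one reverb time, so in a room with a 2-second RT60 the tail falls at about 30 dB per second. A snare tail that starts 10 dB above everything else therefore stays on top of the rest of the reverberant field for roughly a third of a second before it falls back into the pack, which is plenty of time to smear whatever comes next.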

Conversely, that means that if the instruments are all compressed to a fairly narrow volume range, they then would stay at an even level consistently above the room reverberation rather than the loud sounds setting off room reverberations louder than the following softer sounds.

What I’m getting at is that controlling the differential between the loudest and softest sounds not only improves intelligibility by reducing volume inconsistencies, it’s also helpful in dealing with reverberant room acoustics.

The sacrifice? The loss of some of that “slam-hit eye-blinking” impact.

But hey, the upside is your mix will sound a bit more like an album, the audience will be able to hear the various instruments and vocals, especially in reverberant rooms, and you’ll be able to get more overall volume from the PA with fewer clip lights flashing.

Just a thought…

Dave Rat heads up Rat Sound Systems Inc., based in Southern California, and has also been a mix engineer for more than 30 years.


Wednesday, September 17, 2014

Church Sound: Got The Low-End Frequency Blues?

This article is provided by Behind The Mixer.

 
Do you suffer from the low-end blues?

The symptoms include instruments lacking clarity, vocals lacking distinction, and a general feeling that “something’s rotten in the state of Denmark.”

You’ve never been diagnosed with it until today. Time to determine the source of the condition and prescribe a cure.

Consulting with churches on their audio quality, I’ve found a handful of common problems and the most common is excessive low-end frequencies in the mix. It’s the result of many factors, three of which are outlined below.

1. Poor Bass Definition

I didn’t say “poor bass EQ” because the blame doesn’t fall entirely on the sound tech. The tone of the bass guitar comes from the bass, the bass amp, and any effects pedals the musician uses. And believe me, I’ve heard a wide range of tones coming from a bass player. I’ve heard a mush of low end coming from a bass channel and I’ve heard what I’ll call outer space sounds.

Why it’s a problem…

You’re at the mercy of the musician and whatever they send down the line. A bass tone without definition is just a source of low-end mush. But even a great bass tone might have more low end than needed. It still needs to be distinct from the kick drum.

What to do…

Listen to the raw tone coming through the bass channel. All of the mix work will either be working with this sound or against it. If the bass is sending mush, it’s not going to be easy. EQ the bass so it’s distinct from the kick drum. 

Don’t think that just because it’s a bass that the low end should be cranked. I’ve cut a bass channel below 40 Hz because it cleared up the mix and gave the bass guitar definition. Many of the key bass frequencies exist above the 100 Hz mark.

The biggest way to improve bass definition is to listen to recordings of the song. YouTube, Spotify, iTunes, Rdio, whatever works, just pick one and listen to where the bass sits in the mix. Know the song before the sound check so the bass sound is in your head.

Put on headphones and listen to the recording while mixing if it helps. Compare the recording to your mix to determine what needs to be done.

2. Poor Keyboard Mix

Keyboards can create a wide range of sounds and frequencies. These sounds can drop into the same range as bass and electric guitars, even drum toms.

Why it’s a problem…

The more similar frequencies occupying a mix, the less instrument and vocal distinction. A song using a bass, an electric rhythm guitar, and the normal drum kit is already filling up the lower frequencies and requires proper EQ’ing for separation. By adding in a keyboard, it’s another source of low end, the amount depending on the keyboard voicing.

What to do…

Apply the high-pass filter and modify the EQ so the keyboard fills in the mix as intended by the arrangement. By cutting the lower frequencies, a simple volume change might be all that’s needed. 

I find that by using a substantial low-end cut on the keyboards, not only does it clear space for the lower instruments but it also adds clarity to the keyboard.

3. Low-End Flooding

The more microphones on stage, the more channels capturing stage volume. Add a bass amp and drum kit on the stage and there are low-end frequencies bouncing all over the place, both from the amp and drums as well as through the monitors.

Why it’s a problem…

Every little bit matters in mixing and when it comes to microphones, every little bit of extra low end adds up to a lot of low end in the house mix. Low-end frequencies will bounce around the room, lasting longer and flooding the room.

What to do…

Apply a high-pass filter (HPF) to all vocal microphones. Start around the 80 to 100 Hz range. On mixers with variable HPF frequencies, roll the filter up until it negatively affects the sound, then back it off. This can range anywhere from 120 to 220 Hz.
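
For a rough sense of what that filter does (assuming a typical 12 dB-per-octave console HPF, which yours may not be): set at 100 Hz, it is about 3 dB down at the corner, roughly 12 dB down an octave lower at 50 Hz, and roughly 24 dB down at 25 Hz, which is where much of the stage rumble and handling noise lives.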

If it seems like I’ve been talking about HPF a lot recently, it’s because I have and because I believe it’s under-utilized.

Next, use some form of drum shield to reduce the lows. But wait, there’s more! Here’s where an oversight can be costly. If the drummer is sitting in front of a flat surface, the drum shield is only going to reflect the sound back to the wall and then back into the drum shield and the microphones. 

To control the lows, use a full drum enclosure or cover the wall with sound absorbing material. This can be anything from acoustic panels to heavy theater curtains.

Finally, in the case of stage amps, reduce them to only their required volume and point them at the musician. For example, a guitarist with a guitar amp needs the amp pointed at their head. If they’re using a full-stack amp, then mic the amp and turn down the amp. If they want the amp cranked, look into isolation cabinets.

The Take Away

The low-end blues can be overcome. Focus on these three points and listen to the results after each one. Eventually, the low-end frequencies will be reduced in the room and clarity will return to the mix.

Ready to learn and laugh? Chris Huff writes about the world of church audio at Behind The Mixer. He covers everything from audio fundamentals to dealing with musicians, and can even tell you the signs the sound guy is having a mental breakdown. Chris is also the author of Audio Essentials For Church Sound, available here.


Tuesday, September 16, 2014

The Great Pyramid: Early Reflections & Ancient Echoes

Editor’s Note: This fascinating article first appeared in the July/August 2000 issue of Live Sound International. What follows encapsulates audio, acoustics, truth, fiction, legend, innuendo, road rage, taboo, and prognostication. We hope you find it fine reading for an unspoiled moment.

For some 20 years I worked for Intersonics, a company that developed experimental space flight hardware for sounding rockets and the space shuttle, and we also did contract R&D. It was while there that the Boss let me launch the Servodrive part of the company. The caveat was “as long as all it cost was space and lights.” So we were off in our “spare time” to create some “perfect bass.”

Multi-Modal TEF
With so much of the company’s NASA work having to do with acoustics and having good measurements, we were also one of the early companies to get a TEF machine. Being the main “acoustics guy,” I used the TEF to measure vibration resonances in space flight payloads and locate flaws in concrete blocks (looking for echoes).

Another task was to measure/develop new transducers for acoustic levitation, another for producing a sonic boom. I even used TEF for measuring resonant modes on pecan shells. Let’s just say it got seriously “multi-tasked.”

While at Intersonics, a movie company asked to film the acoustic levitation process used by our space flight hardware. I ended up demonstrating it and being in the movie (“Mystery of the Sphinx” with Charlton Heston). During filming I had made a wisecrack to the producer about going to Egypt and measuring the pyramids.

Tom Danley in his lab.

Several years later, the same producer calls up out of the blue and asks if I was interested in “finding out why the inside of the Great Pyramid sounded so weird.” This would be for another movie — all expenses paid and a decent “nut” to boot.

Some quick research on the pyramid revealed it was a lot bigger than I imagined. It had a number of chambers and levels above the “King’s Chamber” — opening the possibility that it was not a “simple” acoustic system. A rough (and I mean rough) estimation of the resonance of the granite ceiling beams in the King’s Chamber put them at about 300 Hz. A somewhat less rough Helmholtz resonator and transmission line model suggested resonances starting at 2.5 Hz or so.

Into The Wild Blue Yonder
My long, fairly comfortable flight on Egypt Air from New York City landed in Cairo where my Egyptian adventure got off to a bad start.

Not being sure what to expect, or if my modeling meant anything in the real world, I’d packed two speaker systems for producing test tones, one for above 100 Hz and a second much larger unit for below. Both were shipped in sealed boxes containing a power amplifier, my trusty TEF 12, a B&K microphone, and an accelerometer (and a brace of cables).

Unfortunately, my power amplifier “got lost,” when we arrived at the Cairo airport. I suppose they thought the elaborate “tour” they took me on, through the dank, dark caverns under the airport looking for it, would offset the loss. Maybe they hoped this search party experience would combine with relentless jet lag to dissuade my pursuit.

Come to think of it, they never did pay up on the insurance claim (grumble, grumble). I was immediately on the phone to the Crown dealer in Heliopolis to arrange for indigenous amp rental. Thankfully several days of free time were scheduled before I was “on camera.”

My 01:30 trip from the Cairo airport to Giza, where the pyramids are, demonstrated the chaos of local Egyptian “road rules.” Car horns are the “lingua franca” for inter-driver communication. Headlights are almost never used at night on highways, but are frequently flashed in a fashion similar to horn honking.

Also curious, a marked 3-lane road can often have five lanes of door-to-door, bumper-to-bumper traffic, consisting of a zillion wannabe Formula 1 drivers in beat up cars. This routine requires constant lane changing, horn honking and jockeying for the pole position all at 15-30 mph.

During a return to Cairo proper, I discovered that even the traffic lights are different. Like everywhere else, green means “GO,” but yellow also means “GO” and red means “GO” if nobody is coming. A cop standing directly in front of your car means stop, but if you are in the car next to the one with the cop, you can go regardless of his hand motions or how hard he blows his whistle.

Photo 1: How the Great Pyramid looked on the cover of the July/August 2000 issue of LSI.

Go With The Flow
I settled in at the Movenpic hotel near the pyramids. Several days of pre-film preparation remained. It was immediately clear that many production details remained undefined.

When I asked whether or not we had any sort of production schedule, outline or anything, the answer was “it isn’t ready yet…don’t worry.” I thought that maybe that’s the way Hollywood is…real casual. So I made my best attempt to go with the flow.

The first day we went to look around the Great Pyramid (Photo 1) and the area inside that we needed to reach. The thing is huge! It’s 500 ft (152 m) across and 480 ft (146 m) high, and made of about 2.5 million 3-4 sq ft (1 sq m) blocks of limestone, with an interior constructed of Red Granite.

To enter the Great Pyramid, one must first enter the cave El-Mamun. A would-be robber bored into the limestone here in around 600 AD. This tunnel goes in approximately 50 ft (15.2 m) to a point where they were supposedly about to give up, but they heard a noise inside and re-directed the tunnel to the left.

There they hit the Red Granite casing on one of the interior passages and by following it (the tools they had couldn’t cut granite), they eventually located the Great Pyramid’s interior. From the end of this tunnel, one climbs about 120 ft (36.6 m) stooped over in a space barely 1 yard/meter high.

This section is fairly steep with an approximate 30-degree incline. Without the wooden boards fastened to the stone for footing, it would be almost impossible to make this climb while carrying gear. For me, this path created a whole new meaning to the term “walk like an Egyptian.”

Walking With The King
Next you enter the Grand Hall. It’s also inclined but now about 40 ft (12.2 m) tall with a corbeled (stepped) ceiling. After trudging another 120 ft (36.6 m) up the grand hall one finally reaches the entry to the King’s Chamber, which is another tunnel. This time, however, it’s level and about 10 sq ft (1 sq m) and perhaps 20 ft (6.1 m) long.

The King’s Chamber is about 40 ft (12.2 m) long, 20 ft (6.1 m) wide and 20 ft (6.1 m) high. The walls, floor and ceiling are all made of Red Granite. The granite blocks that make up the walls are huge. The one over the door is nearly 8 ft (2.4 m) high, 14 ft (4.3 m) long and 5 ft (1.5 m) thick, yet all the blocks fit so tightly you can’t get a business card between them. They are polished to a surprisingly smooth finish.
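
Those dimensions alone go a long way toward explaining the low-end behavior. As a back-of-the-envelope check (my arithmetic, not part of the original measurements): the lowest axial mode of a rectangular room falls near the speed of sound divided by twice the longest dimension, so 343 m/s over 2 x 12.2 m puts the length mode near 14 Hz, while the 6.1 m width and height each contribute a mode near 28 Hz, close to the strong resonance near 30 Hz described later.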

However, it was kind of a pain just to get this far (and this was the easy part). When I expressed concern about getting the gear up to the King’s Chamber, I was encouraged to hear the producer’s plan to hire some locals to haul our gear in and out. He acknowledged the degree of difficulty, and he was right.

The producer was also right about the acoustics in the King’s Chamber. It sounded very weird inside there. Think of the “livest room” you’ve ever experienced, and then double that. It was acoustically “solid as a rock.” Given a minimum of 200 ft (61 m) of stone in all directions, it should be.

Photo 2: Our crew at idle.

On The Skids
Our crew had a heavy heap of equipment (larger than my gear pile). So, our fearless leader decided we would go to a wood shop and have a skid made that could be dragged up the incline. After measuring the passages, we were off the next day to find a “wood shop” known to the staff’s hired Egyptian cab driver.

When we arrived at the shop, I was underwhelmed to say the least. Our wooden skid was promised to be ready in a few days. The crew (Photo 2) were already scheduled to explore/plan for other parts of the film, so I tagged along. During these few days it became obvious that the producer and his financial backers were following two somewhat different game plans.

Photo 3: The Sphinx from the front paw view.

“Plan A” included the production goals that included me. Specifically, we sought to access the cavity they had found with sonar and ground penetrating radar under the paw of the Sphinx. Psychic/cult hero Edgar Cayce predicted this cavity to be there in the 1930s.

“Plan B” was to produce a TV documentary, which overlapped Plan A as much as possible. We had permits that essentially gave us free access to everything, so we spent a couple days filming at the Sphinx compound (Photo 3). Note: I ate lunch one day sitting between its paws.

The radar also suggested there was an underground tunnel leading from the cavity, passing under the Sphinx and continuing on.

At the rear of the Sphinx, the sound guy and I spotted an opening near the bottom, and after seeing that no one was around, both of us went in (Photo 4).

Photo 4: The Sphinx rear entry point.

This cave has two forks. One fork goes down about 12 ft (3.6 m) and sounds hollow if you stomp on the floor. The other fork goes up into the body and stops.

There were no other ways of reaching the cavity and by this time the permit to drill a hole for a fiber optic camera had mysteriously been yanked by the antiquities department. Being problem solvers, the bosses decided to look for another way to reach the Sphinx cavity.

Optional Methods (Plans C, D & E)
The producer lived in Egypt part-time and had heard about a water well on the causeway.

The causeway, by the way, is the big stone ramp used to haul the stones up from the Nile for the Pyramid. One enters from the side through a short tunnel into the side of the actual roadway.

From here, you carefully climb over an iron gate and try NOT to fall into the 30 ft (9.1 m) deep hole immediately on the other side. Then you carefully descend a decrepit iron “ladder” down into the dark.

The bottom opens into a room about 20 x 20 ft (6.1 x 6.1 m). At the far side is a down shaft about 6 x 6 ft (1.8 x 1.8 m) or so. You carefully get on another iron ladder (it looks to be 3/8-in round rusty steel bar) and climb down into the blackness about 60 ft (18.3 m). This was spooky. All we had for light was a helmet-mounted flashlight (Photo 5).

This climb ends by opening into a tomb with three sarcophagi. They are set into deep niches in the walls. One is very large, made of smooth black stone, apparently precision made, with sharp corners even on the inside edges. How did they do that? The other two, made of limestone, are much smaller and in poor shape. All had been robbed.

Photo 5: Arches in the water well, author in frame.

Good To Go
After the lighting gear made it down and was set up, I immediately noticed the room was rectangular with square corners and had about 8 ft ceilings. At one edge was yet another downshaft. This one was smaller, maybe 4 x 4 ft (1.2 x 1.2 m), and went down quite a long way.

After climbing down an even worse ladder with ropes for safety, one encounters two pillars which would have held up the ceiling of the next “space,” which seems to have once been a two-story room. The remains of the substantial rubble pile disguise that this room was ever man-made. This level was a fairly creepy place with ample broken pottery shards and many human bones in the rubble.

The back walls are squared off. There’s a 7 ft (2.1 m) deep trench-like affair (full of water) around the back and two sides. It was like we were standing on top of the rubble pile created when the second story collapsed on the first. Anyway, they dug away a little at the center mound and about 10 in (25 cm) below the surface was a large granite slab. Radar detected a cavity below this slab about 6 ft (1.8 m) tall, and it seemed to lead off towards the Sphinx.

At this point more permits were needed so this discovery and all further work was snatched up by the head of the antiquities department.

(For semi-related fun I strongly suggest a visit to http://guardians.net/hawass/osiris1.htm.)

Showtime
We had gone as far as we could in the water well. The wooden skid was now done, and it was time for my part of the show. We had the Great Pyramid to ourselves every night after about 20:00 when the last of the sightseers depart and only the Bedouin guards remain.

So as not to look the “wimp,” I grabbed a decent sized handful of cables and trudged up the slanting tunnel. When you get to the King’s Chamber, most people will have worked up a sweat. I am no marathon runner, and I had to stop at the top and catch my breath for a bit. These skinny kids came staggering up to the top with the gear, then turned around and went back down to get more.

The same guy carried my TEF 12 (which isn’t light), my main woofer (which weighed 80 lbs), and made three trips with lighting batteries (each is a big car battery in a plastic cooler). I was impressed. And I realized that I was a wimp and there was no way around it. From then on, when the crew hired locals to carry everything, I knew they were earning a good wage — by local standards.

The lighting guy tapped into the AC mains (240-volt, 50 Hz) and set up his transformer, and we were ready. I picked a spot on the wall in the King’s Chamber to set up my stuff. I placed the source at one wall and the microphone at the opposite wall and was ready to go.

Complementary Slopes
I figured that the use of a “known” sealed-box woofer (whose roll-off slope would roughly complement the room-gain slope) would allow useful measurements to extremely low frequencies (LF). The producer wanted to get the sound on film clearly. He asked that I test it at as loud a level as practical.

I applied the first loud slow sweep starting at 200 Hz to 10 Hz — a comfortable level. Around 90 Hz I observed a strong room mode and sweeping at 1.1 Hz/sec — some real energy was transferred.

What really made everyone get up and run to the exit was the resonance near 30 Hz. At that moment I aborted that test. This was a good resonance, it got nice and strong and scared the wits out of a several crew members. Frankly, I was a little concerned myself. High-Q resonances at low frequencies can be very exciting!

The chances of something bad happening are small, but the consequences are large. Not wanting to be known as the first person in modern times to be buried in the pyramid, I moved the TEF and myself to the tunnel entry way instead of inside the King’s Chamber.

I spent several nights taking measurements there and was filmed without incident. I observed a good distribution of room modes and curiously, the red granite sarcophagus displayed several resonant modes, which directly corresponded to these room modes.

What The Witness Heard
Lying in the sarcophagus, one finds it’s nearly impossible to hum any note other than ones related to the main resonances. In that position, when you do hum at the “right” frequency, it’s easy to make it seem very loud. But for someone standing next to you, it’s not loud at all. Also, when lying in it, the outside sounds that couple through are colored, giving other people’s voices a very “Darth Vader” effect.

My general observation is that the pyramid’s dimensions, the pyramid’s construction materials, and the box inside the King’s Chamber were designed to passively (as in zero electricity) enhance whatever sounds were present inside the chamber.

It also appears that any wind pressure across the pyramid’s internal air shafts, especially when it was new and smooth, was like blowing across the neck of a Coke bottle. This wind pressure created an infrasound harmonic vibration in the chamber at precisely 16 Hz.

Being a musician myself, I was especially interested to discover a patterned musical signature to those resonances that formed an F-sharp chord. Ancient Egyptian texts indicate that this F-sharp was the resonant harmonic center of planet Earth. F-sharp is (coincidentally?) the tuning reference for the sacred flutes of many Native American shamans.

Bottom line: We have 2.5 million blocks piled up in Egypt. Halfway around the world you have a guy whittling a tree into a musical instrument with exactly the same F-sharp resonance.

How Do They Do That?
The producer and crew were hot to film me placing an accelerometer on the big red granite beams which make up the roof of the King’s Chamber. Each of these beams weighs up to 90 tons (91,444 kg), and they were quarried at Aswan some 600 mi (966 km) away. They are also about 150 ft (46 m) high inside the pyramid. Another “how did they do that” question.

To reach the upper levels above the King’s Chamber, one re-enters the grand hallway, then climbs 40 ft (12.2 m) up an old extension ladder to a hole in the wall. A small bundle of knotted cords comes out of the hole, which is also the entrance to a small tunnel.

Once in the tunnel, you make a right turn and crawl a little more to an enlarged area carved out around a Red Granite wall with a hole in it. Climbing through that opening, you come into the chamber directly above the King’s Chamber. This room is only about 4 ft (1.2 m) tall but is the same length and width as the King’s Chamber. The ceiling is flat and is covered with some very old graffiti.

The floor consists of big rounded bulges, which are the center beams that run the width of the room. It took some time to haul all the camera and lighting stuff up, set up, then blow all the dust out of the sensitive gear before preparing to roll.

Let Me Take You Higher
After filming at that level and climbing up through a tunnel, we got to the next level up to do the same filming bit.

It was in this room that we found a huge pile of burlap sacks filled with the chips the diggers had removed from the level below.

This room also featured a large trash pile and hundreds of water bottles from the diggers. It’s clear they were at work for some time.

Our passage to the rest of the upper levels was a real pain. Whoever did this part of the work used explosives. Essentially, this turns the experience into rock climbing. I got as far as I was able to go without help.

Fortunately, the camera and lighting guys were climbers and helped me up the last step. The top level has a peaked ceiling. There I had some time to look into any and all cracks I could find with my headlamp. I found a place that looked like it opened up into a room, as I could not see anything past the edge I was peering over.

On the next trip up, the camera guy put a 40 ft (12.2 m) fiber-optic bundle into the crack to see what it was. It turned out to be a very long (couldn’t see the end) row of blocks all aligned together (instead of the normal stagger pattern) — all with big parts of the lower corner broken off.

While the accelerometer footage was good for the movie, the measurements were not informative. The signal was totally swamped by 50 Hz and other electrical noise. I had a DAT recorder on hand, recording the test and mic signal for later analysis.

After being home for a few months and digging into what else might be revealed on the DAT tape using Hyperception software, I found several things I couldn’t have seen with the TEF. The TEF had shown a large number of room modes, some going below 20 Hz.

How Loud Does It Get?
While doing an FFT on the between-sweep, or quiet, parts of the recording, I found some very LF sound — resonances that start at a few Hz and go upward to 15 to 20 Hz or so. At least some of these were the same LF resonances I had excited with my sweeps, but not all of them. This sound was present even when everyone was silent.
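
Here’s a minimal Python sketch of that kind of offline analysis: take a quiet stretch of the recording and look for low-frequency spectral peaks. The file name, FFT segment length, and peak threshold are assumptions.

    import numpy as np
    import soundfile as sf
    from scipy.signal import welch, find_peaks

    # "chamber_quiet.wav" is a placeholder for a quiet, between-sweep segment.
    x, fs = sf.read("chamber_quiet.wav")
    if x.ndim > 1:
        x = x[:, 0]                          # use one channel

    # Long FFT segments give the frequency resolution needed below 20 Hz.
    f, pxx = welch(x, fs=fs, nperseg=1 << 17)
    low = f <= 20.0
    pxx_db = 10 * np.log10(pxx[low] + 1e-20)

    # Report peaks that stand well above their surroundings (threshold assumed).
    peaks, _ = find_peaks(pxx_db, prominence=6.0)
    for p in peaks:
        print(f"{f[low][p]:6.2f} Hz   {pxx_db[p]:6.1f} dB")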

I crunched the results of the measurements, and they were sent on to a musicologist who was part of the staff. As mentioned, he identified a pattern of frequencies that roughly forms an F-sharp chord.

Not all of the resonances fell in the right place, but many did, and some repeated the pattern over many octaves. In other words, the chamber was roughly tuned to F-sharp across many octaves.

It has been suggested (by others) that the Great Pyramid is NOT a tomb at all but actually a temple of sorts and that these resonant frequencies were “designed into” the structure. While many exotic and often far-fetched properties have been ascribed to “the power of the pyramid,” I see a possible argument that some of the phenomena people experience in it may be caused by the acoustical properties that were measured.

The effects of LF sound on humans were studied extensively by various government agencies, partly for the space program. One of the things discovered is that infrasound (very LF sound) can affect one’s brain wave activity (alpha rhythms, etc.) and other biological functions.

If, as some suggest, these pyramids were constructed as a “temple” or for an initiation ritual rather than as a tomb, then the LF sounds may have been deliberate and served a sacred purpose — with the sound triggering and even forcing changes in brain wave state (i.e., one’s level of consciousness).

Brain Waves, Sound Waves
One of the latest rages in controlling one’s brain wave state is the light/sound machine, which uses black glasses and headphones, with flashing lights in the glasses and LF pulsing sound in the headphones, to trap the brain into synchronizing at a pre-programmed frequency.

It would seem like a sort of meditation ride. You need no practice to do it; it just takes you. The frequency range that causes this effect is at the low end of the audio spectrum, or even below the LF that we can hear (infrasound).
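
As a rough illustration of the audio half of such a device, here’s a minimal Python sketch that gates an audible tone on and off at a low target rate (an isochronic-style pulse). The carrier frequency, pulse rate, and sample rate are illustrative assumptions, not values from any particular product.

    import numpy as np

    fs = 48000                 # sample rate (assumed)
    duration = 10.0            # seconds of signal to generate
    pulse_rate = 10.0          # Hz, the target "entrainment" rate (assumed)
    carrier = 200.0            # Hz, audible carrier tone (assumed)

    t = np.arange(0, duration, 1.0 / fs)
    tone = np.sin(2 * np.pi * carrier * t)

    # Gate the tone on and off at the pulse rate (an isochronic-style pulse).
    gate = (np.sin(2 * np.pi * pulse_rate * t) > 0).astype(float)
    signal = 0.5 * tone * gate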

Low-pitched sounds have long been known to cause emotional responses. The massive pipe organs of the ancient cathedrals were built (at considerable difficulty, one should remember) to produce powerful LF sound extending to frequencies below “audibility,” or infrasound, because of the powerful emotional and physiological effects it has on people.

Music and movie soundtracks are reproduced loudly to have an emotional effect on most people. Before the industrial revolution (and the attendant noise pollution), humans likely had more sensitive hearing than we do now. Accordingly, to the ancients, the sound in the pyramids would have seemed even more powerful.

How Did We Forget?
Apparently man has been intentionally designing acoustic spaces for quite some time. In 1996, a Journal of the Acoustical Society of America paper authored by Paul Devereux and Robert G. Jahn detailed a number of ancient structures in England and Ireland that were apparently designed to enhance the bass frequencies of the voice range. Among other conclusions, Devereux and Jahn believed this was done because of the group chanting used in rituals there. Mantras were often part of the meditation process, and still are.

The dimensions of the sarcophagus in the main chamber are also such that there’s acoustical reinforcement of the LF voice range.
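
The arithmetic behind a statement like that is the standard axial-mode formula for a closed cavity, f = nc/2L. Here’s a minimal Python sketch; the interior dimensions are placeholders rather than measurements.

    def axial_modes(length_m, c=343.0, n_modes=3):
        """Axial (one-dimensional) standing-wave frequencies of a closed
        cavity: f_n = n * c / (2 * L)."""
        return [n * c / (2.0 * length_m) for n in range(1, n_modes + 1)]

    # Placeholder interior dimensions in metres for a box-like cavity.
    for name, length in (("length", 2.0), ("width", 0.68), ("depth", 0.87)):
        modes = ", ".join(f"{f:.0f}" for f in axial_modes(length))
        print(f"{name:6s} {length:4.2f} m -> axial modes near {modes} Hz")

With a cavity around two meters long, the lowest axial modes land in the range of the low male voice, which is the kind of reinforcement being described.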

As such, it seems architectural acoustics is simultaneously a very old and yet virtually new science. The ancients had a grip on these principles, yet the acoustic sciences seem to have been nearly lost for thousands of years. We ask why, yet we have no answer.

In many cases, architectural designs made more than 100 years before the computer are still considered to be among the best there are. This is further evidence, it seems, of how cyber-analysis can never fully replace life experience. Still, these days, architectural acoustics exist almost entirely within the computer.

One Last Thing
Lacking a time machine, one cannot “know” what the designers really had in mind when they built the Egyptian Pyramids. Clearly, they went to an amazing amount of work and had a powerful reason for doing it.

Equally clear, they had techniques and skills for its construction that go beyond what we are aware of; what they did looks impossible given what is known about them. Still, it obviously was possible.

“High Technology” (aliens, etc.) seems very unlikely as the pyramid’s interior nooks and crannies are very roughly shaped. If they had a laser or other high-tech voodoo tools, logic predicts they would have used them everywhere, not just where it showed.

On the other hand, machining marks were visible on the inside of the sarcophagus wall from some “rotary” type cutting process. Obviously they had some mechanical help.

Anyone who has been in the Great Pyramid and chanted or hummed will tell you that it feels weird, and that the acoustic effect is powerful. In short, it is possible that the ancient builders may well have been aware that sounds, even inaudible ones, can have a profound effect on one’s consciousness.

The fact that they were able to quarry huge red granite blocks six hundred miles away, transport them, “machine” them to a precise fit and then polish them, implies that there is an ocean of things about the ancients we don’t know — especially regarding their application of acoustic science.

Tom Danley is the inventor of the ServoDrive, and as one of the most innovative loudspeaker designers in the world, makes his home at Danley Sound Labs.

Posted by Keith Clark on 09/16 at 12:06 PM

Monday, September 15, 2014

Top 10 Tips For Mixing In-Ear Monitoring

 
1) Both Ears Or None. Using only one ear encourages much higher sound levels. Good custom molds provide 20 to 25 dB of isolation, allowing lower monitoring levels for artists and less stage noise competing with front of house. Mixed-mode monitoring that combines wedges and personal monitors ultimately results in higher overall SPL.

2) Hard-Wired Equals Hi-Def. Wired personal monitors will always sound better than wireless. In addition to loss of stereo separation and frequency response, multiplexed stereo wireless transmissions are susceptible to noise and multipath distortion.

Panned wireless mixes must be extreme, as 7 o’clock and 5 o’clock pan positions become 9 o’clock and 3 o’clock due to reduced separation, and response falls off above 15,000 Hz for most wireless personal monitor systems.

3) Use A Helical Antenna. Wireless personal monitors are inherently non-diversity RF systems, where the benefits of a helical antenna’s directivity and full-angle 360 degree RF modulation can eliminate dropouts that affect multiplexed stereo transmissions more than mono.

4) Custom-Fitting Molds. Generic-fit transducers rely on replaceable foam or triple-flanged “Christmas tree” fittings to seal the ear and block out other sound. Most generic devices can also be fitted with custom molded sleeves, made of soft silicone from audiologist impressions, at half the cost of custom molds.

5) Stereo Is Better. A realistic mix, with performers on the stage’s far side panned to that ear, makes listening easier. Panning inputs in a stereo mix allows similar-sounding instruments to stand out at lower volumes. Reverb naturally sounds better in stereo. Singers can harmonize better when their voice is centered and others are panned.

6) Drum Sub. The impact that shakers and subwoofers add often helps drummers perform at their best. Other band members may decide this is something they also want, but there’s a diminishing return in adding subs, as it doesn’t help other musicians as much and it quickly begins polluting the stage with excessive low end. When the drummer and bass player are in close proximity, they naturally share the drum sub.

7) Side Fills And A Downstage Pair Of Wedges Aren’t Necessary. That is, for a band that’s entirely on personal monitors. However, having them allows two things to happen:

A) Front of house (or control room) talkback is more easily understood, not just by performers, but also by stagehands and technicians to better assist during line and sound check.

B) It’s easier for onstage visitors, whether they’re hosts or MCs interacting with your talent or the occasional sit-in with a guest performer, who may not use personal monitors in that situation.

8) Talkback (Monitors Only) Inputs For Stage Communication. Extra inputs for vocal mics on stage that can only be heard in the monitor system allow private on-stage conversations that provide security and comfort. Ideally every musician has a mic dedicated to inter-communication.

9) Individual Reverbs. All singers benefit from having their own separate vocal reverb, dedicated to their own voice and not shared with other singers. Singers can also benefit from a classic dual micro-pitch shift that is simply one cent up and down with a 5 or 10 ms delay (see the sketch following these tips).

Grouped drum or instrument reverbs use several aux buses, but dedicated vocal effects can be either direct- or insert-patched to economize on mix buses.

10) Digital Consoles. An analog desk for personal monitors is great as long as you either bring it with you or have enough time to patch all your comps, gates and effects and you get a full sound check.

But there’s no greater joy than recalling a monitor scene on a digital desk at a festival and having everything right back where it was at the previous show. A 16 kHz low-pass filter on all personal monitor outputs from a digital desk can reduce digital artifacts.
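
For the curious, the numbers in tip 9 work out as shown in this minimal Python sketch of the dual micro-pitch settings; the pan assignments are an assumption.

    # Ratio for a pitch shift of n cents: 2 ** (n / 1200)
    up = 2.0 ** (1.0 / 1200.0)      # ~1.000578, one cent sharp
    down = 2.0 ** (-1.0 / 1200.0)   # ~0.999422, one cent flat

    # The "dual micro pitch" thickener from tip 9, expressed as two voices.
    voices = [
        {"pitch_ratio": up,   "delay_ms": 5.0,  "pan": "left"},    # pan assumed
        {"pitch_ratio": down, "delay_ms": 10.0, "pan": "right"},   # pan assumed
    ]
    for voice in voices:
        print(voice)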

Mark Frink is an independent author, editor, consultant and engineer who has mixed monitors for numerous top artists.

Posted by Keith Clark on 09/15 at 12:54 PM

Sunday, September 14, 2014

In The Studio: Mastering Your Songs In Six Steps

This article is provided by Bobby Owsinski.

 
When I began writing the latest 3rd edition of The Mastering Engineer’s Handbook, one of the things that I wanted to find out from some of the mastering greats was how they approached a project.

In other words, what were the steps they took to make sure that a project was mastered properly? Interestingly, the majority of them follow six primary steps, some consciously and some unconsciously. Here’s an excerpt from The Mastering Engineer’s Handbook that outlines the technique.
———————————————

If you were to ask a number of the best mastering engineers what their general approach to mastering was, you’d get mostly the same answer.

1. Listen to all the tracks. If you’re listening to a collection of tracks such as an album, the first thing to do is listen to brief durations of each song (10 to 20 seconds should be enough) to find out which sounds are louder than the others, which ones are mixed better, and which ones have better frequency balances. By doing this you can tell which songs sound similar and which ones stick out.

Inevitably, you’ll find that unless you’re working on a compilation album where all the songs were done by different production teams, the majority of the songs will have a similar feel to them, and these are the ones to begin with. After you feel pretty good about how these feel, you’ll find it will be easier to get the outliers to sound like the majority than the other way around.

2. Listen to the mix as a whole, instead of hearing the individual parts. Don’t listen like a mixer, don’t listen like an arranger and don’t listen like a songwriter. Good mastering engineers have the ability to divorce themselves from the inner workings of the song and hear it as a whole, just like the listening public does.

3. Find the most important element. On most modern radio-oriented songs, the vocal is the most important element, unless the song is an instrumental. That means that one of your jobs is trying to make sure that the vocal can be distinguished clearly.

4. Have an idea of where you want to go. Before you go twisting parameter controls, try to have an idea of what you’d like the track to sound like when you’re finished. Ask yourself the following questions:

—Is there a frequency that seems to be sticking out?

—Are there frequencies that seem to be missing?

—Is the track punchy enough?

—Is the track loud enough?

—Can you hear the lead element distinctly?

5. Raise the level first. Unless you’re extremely confident that you can hear a wide frequency spectrum on your monitors (especially the low end), concentrate on raising the volume instead of EQing. You’ll keep yourself out of trouble that way. If you feel that you must EQ, refer to the section on EQing later in the chapter.

6. Adjust the song levels so they match. One of the most important jobs in mastering is to take a collection of songs, like an album, and make sure they each have the same relative level. Remember that you want to be sure that all the songs sound about the same level at their loudest. Do this by listening back and forth to all the songs and making small adjustments in level as necessary.
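
As a rough numerical cross-check of step 6 (not a substitute for listening back and forth), a short Python sketch like this can compare how loud each mix is at its loudest; the file location and window length are assumptions.

    import glob
    import numpy as np
    import soundfile as sf

    def loudest_rms_db(path, window_s=3.0):
        """RMS level (dBFS) of the loudest window in the file, a rough
        stand-in for how loud the song is at its loudest."""
        x, fs = sf.read(path)
        if x.ndim > 1:
            x = x.mean(axis=1)                   # fold to mono
        win = int(window_s * fs)
        hop = max(1, win // 2)
        levels = [np.sqrt(np.mean(x[i:i + win] ** 2))
                  for i in range(0, max(1, len(x) - win), hop)]
        return 20 * np.log10(max(levels) + 1e-12)

    # "mixes/*.wav" is a placeholder for wherever the album mixes live.
    for path in sorted(glob.glob("mixes/*.wav")):
        print(f"{path:40s} {loudest_rms_db(path):6.1f} dBFS")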

Following these steps just like the mastering greats do will ensure that not only will your project sound better, but you’ll avoid some of the pitfalls of mastering your own material as well.

Bobby Owsinski is an author, producer, music industry veteran and technical consultant who has written numerous books covering all aspects of audio recording. For more information about his books, including The Mastering Engineer’s Handbook, be sure to check out his website.

Posted by Keith Clark on 09/14 at 07:41 AM

Friday, September 12, 2014

Church Sound: Locating Your Loudspeakers & Related Issues

The location of your sanctuary main and monitor loudspeakers will have a decided impact on the success of your presentation.

In a perfect world, as it relates to audio systems for worship, it’s best practice to place the sanctuary main loudspeakers in a central cluster above the front edge of the chancel riser.

The loudspeaker (or loudspeakers) should be selected to provide pattern coverage over the entire seating area without putting acoustic energy on the walls, floor or ceiling.

When we put sound on people, it is largely absorbed, and only minimal reflections continue elsewhere in their journey about the room. But when the pattern coverage is poorly designed, putting acoustic energy on highly reflective surfaces such as walls, floors and ceilings, the reflected sound can pass the listener’s ears several times, degrading articulation and speech intelligibility.

A properly designed central cluster allows the sound to reach the listener only once, thereby creating the most concise possible listening situation. In many sanctuaries, however, there are physical limitations such as low ceilings or tall crosses that require an alternate consideration.

What if we can’t use a central cluster?

When forced to consider an alternate placement, the choice is usually left side and right side. It’s important to remember that sound will arrive at two different time intervals to people seated along the sides, and so we must attempt to select loudspeakers with a narrower coverage pattern.

The goal is to put sound on people at the left with the left loudspeaker(s), and on people at the right with the right loudspeaker(s), with as little acoustic energy crossing over the middle as possible.

How high should the loudspeakers be hung/flown?

Generally speaking, loudspeakers should be flown as high as possible (however, generally not to exceed 18-22 feet) in order to increase their distance from the front pew.

If the room has extremely low ceilings, we can arrive at a condition where people seated at the front are complaining that it is too loud, while the people at the rear are commenting that the sound needs to be turned up. In such an instance, it’s advisable to turn the system down to a comfortable level and hang a second and even third set of loudspeakers perhaps every 25-30 feet as we grow in distance from the chancel.

Because sound traveling through the air takes time, the second set of loudspeakers will need to utilize a time delay so that the sound traveling from the chancel coincides perfectly with the sound emanating from the second set of loudspeakers. A third set of loudspeakers will have to be delayed at yet a different setting to coincide with the sound emanating from the first two sets of loudspeakers.

In this manner, all sound source material reaches the ears of the listener at the exact same moment in time, regardless of how far back they are seated in the room, thereby maintaining speech intelligibility.
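
The delay arithmetic itself is straightforward. Here’s a minimal Python sketch, assuming sound travels about 1,130 feet per second and using placeholder distances for the delayed loudspeaker sets.

    SPEED_OF_SOUND_FT_PER_S = 1130.0   # roughly 343 m/s at room temperature

    def delay_ms(distance_ft):
        """Delay for a loudspeaker this far downstream of the mains so its
        output lines up with sound arriving from the chancel."""
        return distance_ft / SPEED_OF_SOUND_FT_PER_S * 1000.0

    # Placeholder spacing: a second set 30 ft back, a third set 60 ft back.
    for distance in (30.0, 60.0):
        print(f"{distance:4.0f} ft -> {delay_ms(distance):5.1f} ms of delay")

In practice, many operators add a few extra milliseconds on top of the calculated value so the chancel loudspeakers still read as the apparent source.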

Though a sanctuary may have adequate ceiling height, if the room is very deep it’s still advisable to use multiple loudspeaker placements on delay lines.

Even if the chancel mains could be turned up loud enough to be heard at the back of the room, the sense of distance is audible (due to wall and ceiling reflections) and intelligibility is again adversely affected.

How can we minimize the possibility of feedback?

Despite the general public’s degree of sophistication in regard to quality audio, it’s not commonly understood that microphones need to be out of the live sound field whenever possible in order to minimize the possibility of feedback and annoying lingering overtones.

In other words, keep loudspeaker enclosures in front of the mics, not behind them. Of course, almost all pastors wear wireless mics, and many like to move about the room while speaking. A good church sound operator will be able to provide equalization so this may be done without feedback.

Attempt to keep monitor sound confined to the chancel riser.

Monitor loudspeakers are a wonderful benefit for the performers using them, but they can have a deleterious effect on the sanctuary sound.

If the monitors are positioned so that the monitor mix bounces off the back of the chancel and reflects back out to the congregation, it combines at a different time interval with the sanctuary main mix, adversely affecting the speech intelligibility we had been striving so hard to create out front.

How loud should the monitors be?

Monitors should be just loud enough to keep the performers comfortable. If the monitors are too loud in relationship to the sanctuary main loudspeakers, no amount of positioning will help maintain clarity in the general seating area.

Since many praise band players are now middle-aged veterans of once-youthful rock bands, gently remind them that the purpose of the monitor line is to lend support and enunciation so that they may execute the material more perfectly.

If the monitors are intended to provide a studio-perfect mix of all instruments and voices for the listening enjoyment of the players, then you will need to be blessed with highly experienced and adequately funded audio technicians. Many larger churches in metropolitan areas are able to create this benefit for the praise musicians.

Jon Baumgartner is a veteran system designer for Sound Solutions in eastern Iowa, a pro audio engineering/contracting division of West Music Company.

Posted by Keith Clark on 09/12 at 04:17 PM

Thursday, September 11, 2014

Step By Step: A Growing Company Does Festival Double-Duty

A light summer rain was falling as I entered the grounds of the Buckle-Up Festival, a three-day, six-stage country music event held in mid-July on the banks of the Ohio River in the Sawyer Point region of downtown Cincinnati.

It’s a beautiful setting for this first-ever festival, and it also served as the site of the Bunbury Music Festival the week prior, which featured a variety of rock performances.

My first goal was locating Grant Cambridge, the managing director of Event Enterprises, which serves as the production company for both festivals. Eventually we both arrived behind the Main Stage front of house position, covered by a tent intended to provide shade from the summer sun but now serving a vital role in warding off the steady drizzle.

I’d not met Grant in person and was a bit surprised by the youthful looks of the fellow hiking up to me with a full backpack and ubiquitous comm radio, but he exhibited the demeanor of an old pro despite serving as the tech pivot point in support of nearly 80 live performances over three days, including headliners such as Willie Nelson, Eli Young Band, Alabama, Emmylou Harris, and The Band Perry.

Event Enterprises managing director Grant Cambridge at a Yamaha QL5 console.

Making A Go
Upon graduating from Ohio University in Athens in 2003 with degrees in audio production and music, Cambridge started working free-lance audio gigs in the Cincinnati area, where he’d grown up. The work began to come more steadily, so he bought out a small local recording studio, essentially for the live gear that included a small PA and a couple of mixers.

A steady affiliation with the MidPoint Music Festival, an indie music event held annually in late September, led to strong ties with festival organizer Bill Donabedian, who’s gone on to found Bunbury and now Buckle Up festivals. “It goes to show the value of business relationships,” Cambridge notes. “And even though Bill eventually sold the MidPoint festival, we’re still a vendor for them as well.”

The “right price” NEXO GEO S8 line arrays and CD12 cardioid subs at the Lawn Stage.

He continued with a “day job” to supplement his income through 2009, eventually reaching a point where he could see making a go of working sound full time. It came down to having acquired consistent repeatable business combined with enough new prospects to make it a realistic pursuit. He marks January 1, 2010 as the official start of his full-time venture, christening it Event Enterprises.

Next came the process of building the business while still staying busy enough (and liquid enough financially) to pay the bills. Acquiring additional inventory was essential to the plan of becoming a full-service audio (as well as backline and lighting) provider and rental house, but he resisted the urge to go on a gear splurge, taking a more calculated approach.

“Believe it or not, the recession actually kind of helped our business,” he says. “Some folks were getting in over their heads on gear, so I kept my eyes open for opportunities.”

The NEXO Alpha rig stacked at the Amphitheater Stage.

For example, a church that had overextended itself led to the “right price” for a barely used compact main system comprised of NEXO GEO S8 line arrays, CD12 cardioid subwoofers and NX Series processors. And he smiles while recalling driving a box truck roundtrip to Nashville the day after Thanksgiving to pick up NEXO Alpha E full-range boxes as well as some processors.

By 2012, Event Enterprises occupied a small shop and began adding staff, as well as working with a host of free-lancers, to keep up with a growing client base.

New Directions
The Cincinnati market for production is showing growth, Cambridge notes, with a steady increase in festivals and street fairs offering live entertainment that requires production. The Major League Baseball All-Star game (and all of the festivities that surround it) is coming to the city next year.

There’s also a strong arts community with world-class symphony, ballet and opera, several theaters that host a steady schedule of live performances, and a new casino in the city along with several others within an hour of downtown.

Despite the optimistic outlook, Cambridge stresses careful planning. “As a business owner, I’m looking at how to make more from our existing inventory, and every decision to add inventory must be carefully considered. What to spend, how much to spend, what it’s going to mean over the long-term—these factors have to be analyzed,” he says, adding that his father, an engineer with a great head for numbers, has been exceptional in mentoring him on the power of the spreadsheet to carefully track costs such as depreciation and taxes.

It’s perhaps a bit of a different future than he imagined when coming out of college with two degrees but just a single (required) economics class under his belt. Yet the delicate dance between plying the audio trade and being informed on matters of commerce is essential for those seeking to also run the business.

“We’re in a simple supply and demand industry,” he states. “That’s what drives the decisions. The goal is to have every band and every engineer show up for every gig, look over what you’re supplying, and say, ‘Great. We’ll sound good today.’”

Columbus-based Phil Fox Band was one of almost 80 acts on the Buckle Up Festival bill.

NEXO and Yamaha Commercial Audio have proven to fit very well into this vision, he adds. “They’re extremely supportive and very available, and we really appreciate it. You can easily reach informed people on the phone, even on their cell phones at odd hours, when you have a question or a need. They’re really strong on training, and have even helped us in reconfiguring our amp/processing racks for different systems and scenarios.”

He also shares an anecdote about a time when a couple of processors malfunctioned the day before a gig, with the company shipping replacements overnight with no questions asked except a follow-up inquiring if the situation was back on track. “There are some other factors in play with Yamaha and NEXO, such as rider-friendliness, and the gear is very affordable in terms of what you’re getting for your money,” he says. “This type of 1-2-3 punch is what has gotten and held our attention.”

Walking The Grounds
Cambridge points to Yamaha consoles as fitting very well within his company’s worldview. “It’s exactly what you need. Solid, reliable, a load of functionality in a package that most folks are familiar with, and of course, great sound quality. Also efficient to pack, and kind to the budget.”

Event Enterprises has invested in Yamaha M7 and LS9 digital consoles and continues to see a great return on them, and is now looking to the future with the CL and the new QL Series. In fact, every stage at both Buckle-Up and Bunbury had a Yamaha board for house and monitor mixing, accompanied by Rio stage boxes.

Nicholas Radina and Grant Cambridge at one of the QL5 consoles at the Bud Light Stage.

Cambridge notes, a bit wistfully, that he hasn’t had a chance to mix on one of the new QL consoles yet due to being occupied with management duties. “But the feedback from all of the engineers who’ve used the QL consoles here is that they’re digging them,” he adds. “The similarities and consistency with the other Yamaha consoles are great, while having the new effects packages and other advantages means more tools in a friendly, familiar package.”

Despite the misting rain that continued to fall (it thankfully ceased toward evening), we visited each stage to experience the various systems and take in several performances. The Amphitheatre Stage, sunken into a “concrete bowl,” was outfitted with the aforementioned Alpha E loudspeakers, and while they’re an “older” technology, they can still get it done.

Moving along, the Lawn Stage was flanked by the (also aforementioned) GEO S8 arrays, groundstacked on the wings next to the CD12 cardioid subs, while the performances on the humble Acoustic Stage were appropriately amplified with a couple of EAW KF Series loudspeakers. The River Stage, literally on the north bank of the Ohio River, presented another concrete bowl setting, with JBL VerTec arrays subcontracted from a local vendor flying left and right.

We next stopped by front of house at the Bud Light Stage, chatting with Nicholas Radina, who was providing mixes on a Yamaha QL5, feeding a PA with 10 NEXO GEO S12 modules per side, flown, with eight NEXO RS18 “Ray Gun” subwoofers placed equidistantly on the deck in front of the stage.

All loudspeakers were driven by NXAMP (4 x 4) DSP/amplifiers. Another QL5 was posted stage right for monitors, with both consoles networked with Rio stage boxes.

“The new Yamaha QL5 is a joy to use,” Radina tells me. “A marked improvement in overall sound quality, and I also enjoy the premium effects and additional rack spaces and wonderful routing over the Dante network. Custom fader layers make visiting band engineers feel right at home. Well done, Yamaha.”

True Detective
In February of this year, Cambridge began talking with John Mills, VP of Morris Light and Sound in Nashville, about the NEXO STM system in general, and more specifically, about the possibility of supplying an STM rig to serve the Main Stage at both Buckle Up and Bunbury. Morris had added a large-scale STM system last year and deployed it for Kenny Chesney’s North American tour.

NEXO STM arrays flanking the Main Stage.

“I wanted to do some ‘detective work’ on the system in a real working environment, see it up close—how it packs and comes off of a truck, how it goes together—all of the things you don’t get to see if you just go to a show,” Cambridge explains. “Morris is a known entity and I was quite confident the system would sound great, so I was also looking to network, define a partner to work with from a gear standpoint as well as learn from their considerable expertise.”

Those conversations came to fruition, with the Nashville company delivering an STM rig, S118 subs, and NXAMPs for both festivals. Specifically, main left and right arrays were made up of a dozen M46 main modules mated with B112 bass modules.

The system concept enables building line arrays that scale up or down depending on the application, and in addition to the main and bass modules, the S118 subbass module, sharing the same footprint as the other two, can also be utilized in arrays.

An STM array comprised of M46 main modules mated with B112 bass modules.

Mills was also on hand to serve as system engineer, demonstrating assembly, answering questions, and performing final tuning. “I saw how easily John and the crew were able to get it dialed in,” Cambridge says. “That was impressive, as well as how far it could throw and how it sounds.”

Yamaha CL5 consoles were posted at front of house and monitors, networked with Rio stage boxes. Other consoles could be easily swapped in when requested by certain artists and engineers, and there was also an analog snake as a back-up.

“Several guest engineers have tracked me down just to tell me how great it all sounds,” he says. “With the STM rig, I really like the modularity, the way the individual modules can be scaled for any situation—big, small, unusual—whatever the gig calls for.

Up and coming artist Eric Paslay on the Main Stage.

“There are a lot of efficiencies there. It’s very organized and packaged well. For a company like mine, this could be quite valuable, where every day presents a different type of gig. The flexibility is very attractive to a business of our type.”

I received the final word from John Mills: “The team at Event Enterprises are an amazing group. Grant’s a pleasure to work with, and as ever-changing as the details of a festival go, he’s always on top of finding an answer for us. He and his team are all very professional and work extremely hard.”

Keith Clark is editor in chief of ProSoundWeb and Live Sound International.

Posted by Keith Clark on 09/11 at 06:21 PM

In The Studio: My Top 10 Microphone Mistakes

Article provided by Home Studio Corner.

 
Do you ever revel in someone else’s mistakes? Ever learn something from them?

Yeah, me too.

Here’s a list of some of my “best” mistakes I’ve made when it comes to using mics.

’Tis both enjoyable (and educational):

1. I almost blew up a $1,600 ribbon mic because I plugged it into a preamp with phantom power already on. (Turns out that’s a bit of a myth, but I freaked for a while.)

2. I stepped up to the mic in front of 400-plus people to sing, and…I had forgotten to turn on the wireless mic.

3. I sang an entire take of a vocal into the back of a condenser mic without realizing it.

4. I bought one of those headworn condenser mics (a.k.a., the “Garth Brooks mic”) to do podcasts and webinars. Turns out it sounded like garbage, and I looked like a dork.

5. Recently tracked drums and didn’t realize I overloaded the overhead mics at the preamp. (Sounded cool in the end, but embarrassing that I didn’t realize it was happening at the time.)

6. Tracked lead vocals for an album through a (Shure) SM7B from roughly 1-foot away. It sounded okay, but had way too much sibilance. I was too far from the mic.

7. Spent a day tracking acoustic guitar (with two mics), only to realize afterwards that I had the mics too close and was recording a very boomy-sounding guitar.

8. After technical difficulties setting up a headphone mix, I tracked a female vocalist without really checking to see if I liked the vocal tone. Turns out I didn’t like it that much.

9. Close-miked a lead vocal once on a condenser with a hypercardioid pattern. Ended up sounding really weird due to the exaggerated proximity effect.

10. Recorded acoustic guitar for an EP right next to a window, while it was raining (during the great 2010 Nashville flood). The sound of rain is all over the EP.

While we’re on the topic of microphones, be sure to check out Joe’s recent free webinar: No Frills Guide to Choosing and Using Microphones in Your Home Studio.

 
Joe Gilder is a Nashville-based engineer, musician, and producer who also provides training and advice at the Home Studio Corner. Note that Joe also offers highly effective training courses, including Understanding Compression and Understanding EQ.

Posted by Keith Clark on 09/11 at 05:04 PM

Keep It Cool: Three Rack Ventilation Methods

This article is provided by Commercial Integrator

 
It’s a truism that almost nothing is 100 percent efficient; a measure of the inefficiency of most devices we deal with is how much heat they produce.

Heat is energy that has been lost for one reason or another and is not available to do the task at hand, whether that is moving our car along the road, moving a loudspeaker’s cone to produce sound, or moving large quantities of 1’s and 0’s around at very high speeds.

Heat not properly dealt with in our AV or IT systems can cause problems.

Digital electronics — be they satellite receivers, DVD players, codecs, or computers — may “lock up” and become unresponsive when overheated. Analog components appear to be more heat-tolerant, but in reality electrolytic capacitors are drying out and thinner-than-hair wires inside integrated circuits and transistors are being subjected to repeated thermal cycles of excessive expansion and contraction, leading to premature failure.

Modern AV and IT systems consist of various electronic components frequently mounted in racks, which may themselves be freestanding or in closets or other enclosures. Each electronic component in the system will generate some heat, and the systems designer and end user can ignore this at their peril.

The trivial case, in which a few devices are mounted in a skeletal rack frame, in the open, in conditioned space, and consume very little power, can safely be ignored. But such systems are few and far between today. More typical is a rack containing many power-hungry devices, either shrouded by side and back panels or located in a closet, millwork — or both.

In these cases, ignorance of likely damage from heat will be far from blissful. Overheated components will express their displeasure in any number of ways, from sub-par performance to catastrophic failure.

There are several ways to reduce the temperature within a rack. One is passive thermal management: allowing natural convection currents to let heated air rise and exit at the top of the rack while cooler air enters through an opening at a low point.

Convection, while ‘free,’ is a very weak force. It is dependent on the small difference in density of hot and cold air, which is why a hot air balloon is huge, yet capable of lifting only light loads.

Convection currents are easily blocked or disrupted should a vent be even partially obscured. Heat loads today, given the increasing use of digital devices and the tendency to install more equipment in smaller racks and enclosures, are too often beyond the ability of convection to even approach the necessary level of heat removal.

Another way to cool a rack is through air conditioning, or active refrigeration. Air conditioning systems, properly sized and installed, let us set rack temperatures as low as we want; the only caveats being that we don’t cool below the dew point and condense moisture on our equipment, or raise our energy bill to unacceptable levels.

While expensive to buy, install, and operate, air conditioning systems that are dedicated to electronic systems may be the only practical solution when heat loads are large.

Be aware when the air conditioning system is shared with people, as when the supply and/or return ducts are an extension of an HVAC system that also serves the building and its occupants.

The danger is that the thermostat may turn the system off when the occupants are comfortable or keep it from running at all in the cooler parts of the year, while the electronics are still generating the same amount of heat.

There is also the extreme situation of HVAC systems installed in temperate areas. They can become the building’s heating system in cold weather.

If these potential problems can be avoided, dedicated air conditioning is an effective cooling technique, and in some cases the only practical solution to avoid damage by overheating.

Guidelines are not complicated; cool air should be delivered via a supply point high and in front of the rack, while the return for heated exhaust air should be located high and behind the rack.

Of the many types of analog and digital equipment being installed today, almost all fan-equipped components draw cooling air in at their front panels and exhaust it to the rear. The arrangement described allows a “waterfall” of cold air to fall in front of the rack where it can be pulled in, while a high-mounted exhaust fan in the top of the rack, or high on its rear panel, pulls heated air out into the return duct.

Integrators can accommodate those components without internal fans by placing passive vent panels below them. If the exhaust fan has been properly sized, it will pull conditioned air in. In some cases, it may be necessary to use one or more small fans inside the rack to prevent pockets of stagnant heated air from accumulating.

If the building’s HVAC system can accommodate the extra heat load, it may only be necessary to use the third rack cooling technique. This will provide active thermal management using only strategically-located fans, eliminating the cost and complexity of refrigeration.

Moving the necessary number of cubic feet of air through a rack every minute can be accomplished using ventilation systems available on the market. For freestanding racks, it is a matter of pulling heated air out from the top of the rack and replacing it with cool room air entering at the bottom (we have made the assumption that the rack is in a conditioned space, and that the building’s HVAC system can deal with the heat generated in the rack).
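
Sizing that airflow is simple arithmetic. Here’s a minimal Python sketch using the common sea-level approximation of CFM equal to about 3.16 times the heat load in watts divided by the allowable temperature rise in degrees Fahrenheit; the heat load and allowable rise below are placeholders.

    def required_cfm(heat_watts, allowed_rise_f):
        """Approximate airflow needed to carry heat out of an enclosure:
        CFM ~= 3.16 * watts / temperature rise (deg F), for sea-level air.
        Derate (allow more airflow) at higher altitudes."""
        return 3.16 * heat_watts / allowed_rise_f

    # Placeholder load: 800 W of dissipation, 15 deg F allowable rise.
    print(f"{required_cfm(800, 15):.0f} CFM before any safety margin")   # ~169 CFM

Fan ratings fall off quickly under back pressure, so it’s common to specify considerably more free-air CFM than the bare calculation suggests.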

Fan systems are available which can be mounted near or at the top of the rack. They draw heated air up from below and discharge it through their front panel into the room. Other systems discharge the heated air straight up through the top of the rack.

Neither of these systems is effective when the rack itself is enclosed in a closet or millwork. In this case, we must first get the hot air out of the rack, and then get the hot air out of the closet. Systems are available that perform both functions; they pull air up from lower parts of a rack, then move it through flexible tubing to an area outside the closet.

Better ventilation systems represent a trade-off between moving air and generating noise. When the system is in a remote equipment room, noise is not an issue; when it’s in the board room, noise from fan motors and air movement becomes bothersome. Consulting with cooling system makers’ technical personnel is a great help during the design process.

Frank Federman is CEO of Active Thermal Management.

Go to Commercial Integrator for more content on A/V, installed and commercial systems. And, read an extended version of this article at CorporateTechDecisions.com.

Posted by Keith Clark on 09/11 at 01:37 PM

A Quick, Easy Way To Preset Console Input Gain

 
Both analog and digital mixing consoles have an input gain control ahead of the channel fader.

The gain control’s job is to scale the input signal to an appropriate level. Its setting is source-dependent, which means that proper setting requires a sound check that includes the vocalist and their respective microphone.

There are three variables in play:

1. Talker level

2. Microphone sensitivity

3. Talker-to-microphone distance

It’s not always possible to set the input gain in advance, but there is a simple method for “ball parking” the setting without the “talent” being present. I use a small, battery-powered loudspeaker with internal pink noise source (Figure 1).

Note that the ATI Audio NG-1 shown in Figure 1 has been discontinued, but an alternative is the Talkbox from Bedrock Audio. An iPod and Bluetooth speaker can also be used but must be calibrated first.

Figure 1: An ATI Audio NG-1 portable noise generator.

This device provides an acoustic reference level, and when maxed out, produces about 70 dBA-Slow at 1 meter, which is about the level of a “raised-voice” talker.

The level at the grill is about 100 dBA-Slow (Figure 2), which would be about the level of a raised-voice talker with their lips on the microphone.

1. Place the vocal mic against the grill of the reference source (Figure 3).

2. Set the channel fader and main fader to the desired setting (usually at or near “zero”).

3. Adjust the input gain of the mixer channel to a strong reading on the mixer’s main meter, allowing some “summing room” for the other channels that will be added to the mix (about 10 dB is usually sufficient for mixes of 12 channels or less; see the sketch after these steps).

Figure 2: Near-field level of acoustic source.

4. Repeat for the other vocal microphones.
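
That 10 dB figure follows from how uncorrelated channels add when they are summed. Here’s a minimal Python sketch of the arithmetic.

    import math

    def summing_headroom_db(n_channels, correlated=False):
        """Level increase when n similar channels are summed.
        Uncorrelated sources (typical program): 10 * log10(n).
        Fully correlated (identical) signals:   20 * log10(n)."""
        factor = 20.0 if correlated else 10.0
        return factor * math.log10(n_channels)

    print(f"{summing_headroom_db(12):.1f} dB")                    # ~10.8 dB, the rule of thumb
    print(f"{summing_headroom_db(12, correlated=True):.1f} dB")   # ~21.6 dB worst case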

I now have the input gain “roughed in” for a raised-voice talker with their lips against the mic. Now, we all know that the level presented by the talent will be different than this. If the level is lower, advance the input gain a bit to compensate. If the level is stronger, reduce it a bit.

Since a reference has been set, the input gain is compensating for the difference between “raised-voice” and “actual.” It’s much faster and less obvious than starting from scratch with a dead microphone. This technique is especially useful for wireless mics, since there are gain structure considerations ahead of the mixer.

I recently mixed for a local school play that used a pool of wireless mics of differing brands and types. Here is what I did to set up each microphone.

Figure 3: The NG-1 is mounted on a bracket to allow clamping to the microphone stand.

1. Place the mic against the grill of the reference source.

2. Adjust the transmitter to produce the desired signal level at the receiver (yellow on the receiver’s meter) (Figure 4).

3. Adjust the mixer’s input gain as described above.

4. Repeat for each wireless microphone.

This optimized the gain structure of the wireless system and produced the same level into the mixer (post input gain) from all of the wireless systems (a potpourri of Shure and Lectrosonics systems with three different types of headworn mics), including handheld and headworn units. Now any mic could be handed to any performer, and the input gain of the mixer quickly tweaked to compensate for their actual level relative to the reference.

Figure 4: With 100 dBA-Slow at the microphone, I adjusted the transmitter level to an optimal level on the receiver (Shure ULX-D).

To finish the system gain structure, I played a music track to “meter zero” on the console and adjusted the amplifier levels (in this case, powered loudspeakers) for the desired SPL in the house. Remember that meter zero is approximately what the multiple channels will sum to with all mics in use.

Of course, there are other ways to get there, but getting levels in the ballpark using a reference acoustic source greatly simplifies system setup.

Pat & Brenda Brown lead SynAudCon, conducting audio seminars and workshops online and around the world. For more information go to http://www.synaudcon.com.

Posted by Keith Clark on 09/11 at 12:08 PM

Wednesday, September 10, 2014

The Art Of Sound Company Logistics

I’ve worked in this business for well over three decades, and some things haven’t changed and probably never will. The promoter always wants more for less, the rider sent to me was the “old” one, and if the gear doesn’t work I don’t get paid.

There’s little that can be done about the first two, but I can make sure my equipment stays in top shape in order to be able to pay the rent, staff, and bills.

Taking care of gear is an absolute top priority, making it far less likely to break down or malfunction unexpectedly at a show. In addition, keeping it clean and looking good inspires confidence from clients who are apt to pay more because the stuff looks and (probably) sounds better than the competition.

Whether touring the world or doing a gig at the nearby tavern, successful production companies master the art of logistics, which I define as the management of materials and equipment between the warehouse and the end user.

Make A List (Or Three)
The first aspect to address is inventory management, or where to store the gear. One small system may be easier to keep together in a truck or trailer. (Just remember to safeguard against extreme temperatures.)

Most of us have more stuff than that, however, so we need to keep it in a garage, self-storage unit or office/warehouse. Devote dedicated floor areas for larger items and use shelving, cabinets, and/or drawer units for storage of smaller items.

At my company, we re-purpose old filing cabinets to store and organize microphones and a lot of small parts. The mics are kept in their factory cases or small foam-lined pistol cases, organized by type into different drawers along with stand adapters, drum claws and clamps, mic clips and DIs. 

Next, think about preventative maintenance and figure out a maintenance schedule. (I’ve covered this in depth previously, most recently here.) Things like cables might need attention after each gig while other items, such as road cases, might only need a little attention once a year.

We do a check of gear both as we set it up and then pack it up at each show, making a list of any items that may need attention back at the shop (like a bad caster) and marking any bad cables and separating them so they don’t get taken to the next gig by mistake.

When booking a show, we create an event equipment list, a listing of all the equipment and spares that are required for that particular event. There are software programs that do inventory management, or you can simply set up spreadsheets to list all items required for a particular system.

It becomes our pull list for the gig, serving as a handy way to check off items as we gather them for the show and stage them in one spot for packaging and loading into the truck.

The events my company handles run the gamut from simple to complex, and we end up with a lot of smaller items that need to get to the gig. Instead of schlepping a lot of individual small cases, we use some larger road trunks that can carry all of the little items in one package.

To make things more efficient during load out, we label the large trunks as to their contents so stage hands can figure out which trunk each item goes into.

Some folks prefer trailers on smaller shows, while others use cargo vans or small box trucks. Larger shows typically require big box trucks or even tractor trailers. No matter what, securing the load in the truck or trailer in a safe manner so it will ride well down the road is a must.

There are a few options for cargo retention, with truck straps being the most common (and best) option. The straps can hook onto D-rings and truck cargo rails, or they can be used with E-track, a metal track that has a series of slots that allow straps to clip at any place along the track. Packing blankets can be used to pad items that are not in cases to keep them looking good.

Truck load bars are also a common way to secure cargo for over-the-road commercial trucks. They come in two main styles—bars that simply clip into E-track, and bars with a ratcheting system to expand and wedge themselves between the truck walls. The ratchet-style bars are not the best choice for cargo that’s on wheels, like road cases, because they rely on friction alone and can slip as the truck moves.

Moving & Protecting
We do a lot of corporate gigs at venues with loading docks, so we prefer dock-height trucks, but for those who rarely or never encounter docks, then a truck with a lower deck might be the better choice because it places the truck’s center of gravity lower, making for a more stable ride.

If you prefer ramps over lift gates (as we do), then a lower deck provides a more shallow ramp angle, making it easier to push heavy things into the truck. Lift gates are great in moving large, heavy items to the ground and back up, but they add some weight to a smaller box truck, lessening its overall carrying capacity.

Because we like a dock-height truck with a ramp, the ramp angle is pretty steep. A trick to get around this is to mount a 12-volt automotive winch in the box and use it to pull the heavy items up the ramp. Just don’t pull by the item’s handles or you might just pull them off. Instead, wrap a spanset or two around the item to distribute the force around the box, and hook the winch to the spanset.

Don’t forget to include truck and trailer maintenance on your equipment maintenance schedule. Regular lube and oil changes coupled with equipment and safety inspections help keep trucks in good shape and can also be useful in catching smaller problems before they turn into big ones.

To protect equipment and keep it looking good, consider investing in covers, cases and trunks. I like to think of them as an insurance policy that pays off bit by bit, every time we move gear. Sure, cases can cost quite a bit, sometimes even more than the items they carry, but every cost analysis shows they’re worth it in the long run.

Covers are normally used for loudspeakers, especially subwoofers because they’re often just too large to put in a road case. Many manufacturers offer covers for their products, and there are also several companies that provide quality padded covers and custom covers for just about anything you can think of.

Plenty Of Options
Cases and road trunks make up the bulk of what protects audio equipment, with larger sizes offering wheels for ease of movement. They’re made from a variety of materials, including plastic, metal or plywood.

A variety called ATA or flight cases uses thin (1/4-inch to 1/2-inch) laminate-covered plywood panels joined with extruded aluminum edging and heavy ball corners for added protection. (By the way, ATA is short for the Air Transport Association of America, and more specifically, the organization’s Specification 300, which covers reusable transit and storage containers.)

Many manufacturers offer cases and trunks sized to fit 2-, 3-, and 4-across in standard trucks and trailers. These truck pack dimension cases usually include stacking cups in the lid that allow a similarly sized case to ride securely atop another, with the wheels of the uppermost case prevented from movement in the recessed stacking cups of the case below.

Road trunk is the common term for larger, heavy-duty cases. They can be item-specific, like a feeder cable trunk, or more generic, loaded differently depending on what is needed at the gig. Some are outfitted with removable dividers that allow different compartmentalization options. (Nice trunks are sometimes referred to as “Cadillacs.”)

Cases may have a hinged lid or be of the “pullover” or “slipover” style, where most of the case, except for a small lower tray, is lifted off. This allows for removal of the item, or use of it while it stays in the lower rolling tray, and is popular for backline amplifiers, snakes on reels, and loudspeakers.

As the name implies, mic boxes are cases designed to secure and transport microphones, direct boxes and accessories. They commonly include foam inserts that provide specific protection and organization advantages. Work boxes offer drawers and storage areas to organize tools and supplies at shows. I keep separate work boxes for audio, lighting, and backline so the specific tools, parts and supplies needed for each area of production are present and easily accessible.

Cases for mixing consoles come in a variety of styles. Smaller mixers might be stored in briefcase-sized foam-lined satchels, while larger consoles often travel in cases with lift-off lids, with the console sitting in the case bottom during use. A feature many console cases offer is a doghouse, a compartment at the rear of the case that allows cables and snake fans to be pre-connected to the console and stored in the case when not in use.

Riding The Rails
Racks are specialized cases that house electronic components, held in place on rack rails. The standard rack rail dimension is 19 inches wide, and gear is designated by how many vertical spaces (or rack units) it uses in a rack: a single space (1RU) is 1.75 inches, 2RU is 3.5 inches, 3RU is 5.25 inches, and so on. The equipment, shelves or drawers have “ears” that extend on either side of the front panel, and these allow the item to be bolted onto the rails, which are normally tapped for a 10-32 thread bolt.
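
Here’s a quick Python sketch of that rack-unit arithmetic, with a placeholder equipment list.

    RACK_UNIT_IN = 1.75   # one rack unit (1RU) in inches

    def rack_space_in(unit_counts):
        """Total vertical space consumed by a list of components, in inches."""
        return sum(unit_counts) * RACK_UNIT_IN

    # Placeholder load-out: a 2RU amp, 1RU processor, 3RU drawer, 1RU vent panel.
    print(rack_space_in([2, 1, 3, 1]), "inches of rail used")   # 12.25 inches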

Racks may offer front rails only or an additional set of rear rails to help support larger and heavier equipment like amplifiers. Shock-mount racks suspend an inner rack inside an outer shell, either on springs or by surrounding it with foam, offering more protection for fragile electronics.

Another rack variation is a mixer rack, combo rack or slant top rack that can house a rack-mount mixer on top, oriented in a comfortable operating position with space for additional rack equipment below. We use these styles a lot with our smaller mixers because they allow us to roll in and have everything needed all wired up in one unit—just pop off the lids and plug in the loudspeakers.

A more recent trend has manufacturers incorporating table legs into the removable lids of the racks, turning them into tables. In fact, I won’t buy a new mixer rack or work box without a table option, because it provides a handy place to set up computers and other items by the mixer without having to scrounge up a table at the venue.

Not every case and rack has wheels, so additional helpers like hand trucks and mover’s dollies (a.k.a. “skateboards”) are required on many gigs. Hand trucks must be magical, as they have a habit of growing legs and walking off when we’re not looking, so we bring a bicycle lock to secure them to a large road case or lock them back in the truck at a gig.

Don’t forget to add cases, racks, hand trucks and dollies to your preventative maintenance schedule so they too stay in good working condition. A nifty trick for hand trucks that use pneumatic tires is to put sealant (a.k.a., “slime”) inside the tires to lessen the chances of air leaks. 

Senior contributing editor Craig Leerman is the owner of Tech Works, a production company based in Las Vegas.

Posted by Keith Clark on 09/10 at 05:21 PM

Designer Notebook: Adamson Systems E219 Subwoofer

The new E219 subwoofer joins the steadily growing line of Adamson Systems Energia Series (“E Series”) components that kicked off with the E15 (15-inch) full-range line array module, later joined by the more compact E12 (12-inch) full-range line array module and the E218 (dual 18-inch) subwoofer.

An Energia system is holistic and cohesive, as opposed to an assembly of individual, unrelated elements. All loudspeakers and subwoofers are expressly designed to work with networkable Class D amplification, DSP, cable and power distribution, software integration of control, and 3D simulation and diagnostics.

There are solid engineering reasons for providing turnkey rigs for touring, one-offs, festivals, and permanent installations. Approaching “system technology” as a whole helps ensure a consistently high level of performance, delivers a known, quantifiable level of reliability, and offers the ability to provide software updates that future-proof the investment.

Specifically, all loudspeakers are powered by Lab.gruppen PLM 20000Q amplifiers with integrated Lake digital signal processing and Dante networking capability. Before shipping, the processors are equipped with a selection of optimized frame presets that cover the most common array and subwoofer configurations, while also accommodating differing array design characteristics.

The concept has proven to be quite successful in the marketplace, with Energia systems now a staple in the inventories of sound companies around the world, regularly deployed to serve a wide range of top-tier tours, festivals, and live events.

Lighter & Tighter
Extensive customer input indicated that the line would benefit from the addition of a musically exciting low-frequency experience, which served as the genesis of what would become the E219.


It can also be viewed as a superior successor to the T21 subwoofer, a dual 21-inch design developed for the legacy Y-Axis Series. To the point, the E219 was conceived to be smaller and lighter while capable of delivering high output and exhibiting improved efficiency in its intended bandwidth.

As the name implies, the dual cone drivers have been scaled down to 19 inches, a key factor in reducing enclosure size and weight. Further, the smaller, lighter drivers are able to produce a punchier, tighter LF sonic quality than their 21-inch predecessors while also displacing “more air” than 18-inch drivers.

The 19-inch format has proven to be an ideal middle ground between the two. Directly compared to an 18-inch cone, it provides additional piston area for increased power transfer, but only a very small increase in mass, hence the rapid and impactful response characteristics.
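To put rough numbers on that middle ground, cone area scales with the square of diameter. The quick Python sketch below uses nominal cone sizes as a stand-in (effective piston diameters are smaller and vary by driver, so treat the percentages as illustrative only):

import math

def piston_area_sq_in(nominal_diameter_in):
    """Approximate piston area from a nominal cone diameter (illustrative only)."""
    return math.pi * (nominal_diameter_in / 2) ** 2

for d in (18, 19, 21):
    print(f"{d} in. cone: ~{piston_area_sq_in(d):.0f} sq. in.")

# Relative to an 18-inch cone, a 19-inch cone offers roughly 11 percent more
# area ((19/18)**2 - 1), while a 21-inch cone offers about 36 percent more,
# typically at a larger penalty in moving mass.
print(f"{(19/18)**2 - 1:.1%}  {(21/18)**2 - 1:.1%}")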

The design specification called for the E219 to be optimized to perform in the bandwidth heavily occupied by modern musical styles, 30 Hz to 60 Hz, with maximum output of more than 140 dB SPL (at 1 meter) from a single enclosure.

To achieve these goals, a focus on obtaining uniform impulse response played a strong role in the R&D effort, resulting in a whole lot of computer modeling time.

Impulse response tells us a lot about an audio system’s performance, revealing time-domain characteristics that cannot be characterized by other means. Impulse response is generally described as a short-duration time-domain signal, often notated as h(t) for continuous-time systems or h[n] for discrete-time systems.

The net effect is measured at the output of a system when an impulse is applied to the system’s input. In other words, impulse response describes the reaction of the system as a function of time.

And because accurate reproduction of music and speech depends heavily on time-domain behavior, the impulse response of a given system is an effective means of identifying that system’s characteristics.

Why is this useful? It allows us to measure the system’s transient response and phase response, and the data can be readily converted to frequency response plots by applying a Fast Fourier Transform (FFT). Since the impulse function contains all frequencies, it accurately defines the response of a linear time-invariant system for all frequencies.
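As a simple illustration of that conversion (a generic NumPy sketch, not tied to any particular measurement platform), an impulse response can be turned into magnitude and phase responses with a single FFT; the impulse response here is synthetic, standing in for a real measurement:

import numpy as np

fs = 48_000  # sample rate of the measurement, Hz

# Stand-in impulse response: a decaying 45 Hz resonance. In practice h would
# come from a measurement (e.g., swept-sine or MLS deconvolution).
t = np.arange(0, 0.5, 1 / fs)
h = np.exp(-t / 0.05) * np.sin(2 * np.pi * 45 * t)

# The FFT converts the time-domain impulse response to a complex spectrum,
# from which both magnitude and phase response fall out directly.
H = np.fft.rfft(h)
freqs = np.fft.rfftfreq(len(h), d=1 / fs)

magnitude_db = 20 * np.log10(np.abs(H) + 1e-12)  # frequency response, dB
phase_deg = np.degrees(np.unwrap(np.angle(H)))   # phase response, degrees

# The synthetic system's peak shows up near its 45 Hz resonance.
print(f"peak near {freqs[np.argmax(magnitude_db)]:.1f} Hz")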

Figure 1: The polar plots on the left represent a standard end-fired subwoofer system, resulting in 1 null. The plots on the right depict Adamson’s EF66 array response.

Figure 1 shows how the E219’s impulse response translates to improved directional control and musicality, particularly when used in cardioid configurations. As is evident, the improvement in directionality and overall pattern control is at least an order of magnitude greater with the Adamson algorithms, and even more so in the 40 Hz plot.

Shape Matters
The E219’s long-excursion SD19 drivers, manufactured by Adamson, employ Kevlar cone material, dual 5-inch voice coils and neodymium magnetic assemblies (Figure 2). Adamson pioneered the use of Kevlar with the MH225 loudspeaker, introduced back in 1987, and it’s at the heart of the company’s proprietary Advanced Cone Architecture design topology.

Made possible through the use of Kevlar, which has a higher Young’s modulus (greater stiffness at lower mass) than a standard paper cone, the driver has a lighter, stiffer cone assembly with faster acceleration and deceleration for a tight, punchy sonic quality, greater power handling before cone breakup, lower distortion, and improved durability.

The coil is centered by means of dual silicone spiders that serve to control excursion and keep the coil within Xmax, its maximum linear travel.

Figure 2: A look at one of the new SD19 Kevlar neodymium drivers.

The dual voice coil system, known as Symmetrical Drive Technology, is reinforced by a dual spider design providing added stability under high-excursion demands.

The SD19’s dual 5-inch voice coils provide exceptional power handling, determined through rigorous lab testing, as well as a reduction in power compression under severe operating conditions.

Additionally, the bass-reflex E219 enclosure, with the drivers front loaded, utilizes a unique tangential flow venting system that reduces harmonic distortion by minimizing air turbulence, thus fostering higher maximum SPL at lower distortion levels.

The E219 is designed to function as a “normal” subwoofer, while a system of E219s can provide true cardioid output by physically reversing one or more of the enclosures to face rearward relative to the forward-facing enclosures (Figure 3, below) and selecting the appropriate DSP settings, which include a unique all-forward-facing “End Fire” preset that enhances multi-band rear cancellation without smearing impulse response.

Long a proponent of cardioid subwoofer deployment, the Adamson design team developed Convertible Cardioid Technology that was initially seen in the SpekTrix subwoofer in 2003. The experience gained from that project led to the development of a series of advanced DSP algorithms for Energia subwoofers.

Standard presets include regular non-directional usage plus three commonly used directional configurations: two enclosures arranged front-back (FB), three enclosures arranged front-back-front (FBF), and enclosures (quantity dependent upon how many are stacked) configured in an end-fired array (EF66) (Figure 4, below).

All three configurations offer improved directional control in comparison to “standard” sub arrays, with the EF66 offering the greatest directionality by using advanced filters that achieve wide-band cancellation with only two sources – while maintaining impulse response integrity.

In a standard end-fired configuration, the null frequencies are determined by the spacing and number of sources, but Adamson’s proprietary algorithms are intended to provide wider rejection with only two sources, allowing the user to achieve maximum rearward rejection from a minimal footprint (see the sketch below). All cardioid presets are stand-alone, with no extra delay or phase reversal required to take advantage of the benefits of cardioid low-frequency propagation.
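To ground that spacing-and-null relationship, here is a minimal two-source end-fire model in Python using a plain delay (the textbook approach, not Adamson’s proprietary EF66 filtering), with a 1-meter spacing assumed purely for illustration:

import numpy as np

c = 343.0    # speed of sound, m/s
d = 1.0      # assumed front-to-rear spacing between the two subs, m
tau = d / c  # delay applied to the downstage (front) source

freqs = np.array([31.5, 40.0, 50.0, 63.0, 80.0, 100.0])  # Hz

# Toward the audience, the delayed front source lines up with the rear
# source's arrival, so the pair always sums coherently (0 dB re: full sum).
# Toward the rear, the delay and the extra path combine into an offset of
# 2*d/c, so cancellation is deepest where 2*d equals half a wavelength
# (about 86 Hz for 1 m spacing) and shallower elsewhere.
front = np.abs(1 + np.exp(-1j * 2 * np.pi * freqs * 0.0))
rear = np.abs(1 + np.exp(-1j * 2 * np.pi * freqs * (2 * d / c)))

for f, a_front, a_rear in zip(freqs, front, rear):
    print(f"{f:5.1f} Hz   front {20*np.log10(a_front/2):+6.1f} dB"
          f"   rear {20*np.log10(a_rear/2):+6.1f} dB")

The way the rejection narrows away from that single null frequency is exactly the limitation that wide-band filtering, of the kind described for the EF66 preset, is meant to overcome with only two sources.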

Figure 3: The E219 FBF (front/back/front) configuration.

The Full Package
As with all Energia enclosures, the E219 is constructed of marine grade birch plywood, measuring 23.5 x 55.8 x 35 inches (h x w x d) and weighing 249 pounds.

Boxes can be ground-stacked or flown using modular rigging frames made of a combination of aircraft-grade aluminum and high-carbon steel. The integrated rigging permits a 0- or 3-degree box-to-box angle; in most applications, the 3-degree splay is used so that a flown subwoofer column does not block the HF coverage of the lower boxes of a steeply curved main hang when the two are flown side by side.

When landing an E219 array of enclosures set to 3-degree splay angles, the cabinets will automatically collapse to 0 degrees.

Figure 4: FB (front/back) configuration (top), and EF66 end-fired configuration.

No physical manipulation needs to be done. Transport carts that hold up to three E219s are supplied as standard with each turn-key system.

Lab.gruppen PLM 20000Q amplifiers supplied with Energia systems offer 4 channels in a 2RU package, supplying 5,000 watts per channel at 2.2 to 3.3 ohms, and 4,400 watts per channel at 4 ohms. A Universal Regulated Switch Mode power supply (100 to 240 volts) allows the amplifiers to operate properly anywhere in the world, while PFC (power factor correction) significantly reduces AC current draw on the mains, as well as parasitic noise on the power grid.

The onboard Lake processing provides crossovers, EQ, delay and protective limiter settings. It can be controlled from the front panel or from the Lake Controller software on a tablet, and it can also be addressed, controlled, and monitored, including vital signs, via Ethernet or WiFi.

Additionally, Dante networking capability is built-in and offers a fully redundant hardware configuration that switches to AES, analog, or secondary Dante signal sources if signal loss is detected.

Lake processor presets are included for the Energia line as well as all other Adamson loudspeakers. The latest preset library (v2.6) includes new tools to adapt the system’s response in an intuitive manner, allowing users to bring focus back to the music and not get sidetracked by technical terminology.

And, all presets are equipped with impedance fingerprints in order to verify the transducer and cabling status before and after a show.

E219 subs interface with the amplifiers by means of high-grade cables terminated with Neutrik NL8 Speakon connectors. While only two wire pairs are used for an E219, the use of the 8-pole NL8 makes pin-swap output connections possible to maximize efficient amplifier deployment.

In addition to the expected paralleled I/O connectors, a dedicated NL8 output-only connector allows four E219s to be powered on a single NL8 cable.

Automating Quality
To aid in the task of efficiently planning and achieving an optimal Energia solution for each specific application, Adamson developed a 2D/3D modeling suite called Blueprint AV (Figure 5). It provides an accurate means of predicting directional performance via an easy-to-use application, allowing the designer to visually observe how a system will perform in a given venue on the computer screen—before deploying it. Parameters that can be manipulated include array size, cardioid configuration, individual gain and delay, coverage patterns, and splay angles.

Figure 5: Blueprint AV provides an accurate means of predicting directional performance and more.

E219s can be used as the only subwoofers in an Energia system or combined with the smaller E218s, a combination regularly recommended for larger events because it provides a complementary tonal character that adds to the punch and impact of the LF band. A typical configuration might consist of E219s on the ground with E218s flown with full-range line array modules.

Rigging fittings are fully compatible with Energia’s E-Frame rigging. Additionally, the frame works as an adapter frame for underhang; that is, suspending E15 or E12 full-range enclosures beneath E219 or E218 subs.

The new E219 subwoofer furthers the ability of the Energia family to serve the specific needs of high-end touring sound and festival applications. The plug-and-play approach consumes very little setup time while ensuring an optimum result, freeing up the sound team’s time for other critical duties.

That said, simply specifying a system without the portable road racks makes a range of configurations equally at home in fixed installations such as performing arts centers, medium and large houses of worship, sheds, and arenas.

Ken DeLoria has had a diverse career in pro audio over more than 30 years, including being the founder and owner of Apogee Sound.

Posted by Keith Clark on 09/10 at 05:01 PM