Monday, May 11, 2015

Fast Forward: Can We Be Too Up To Date With Technology?

It’s becoming increasingly difficult to keep up with technology. What used to be “merely” hardware now has a software component. Everything has layers, touch screens, and incredible power and flexibility, which is great. But the not-so-great part is that the learning curve is ever steeper.

And while there are standards for MIDI, USB, .WAV, Ethernet, MADI, and many other methods of keeping all this gear connected and communicating, each mixing platform, DSP box and loudspeaker management unit has its own proprietary file structure and operating system. In other words, live sound and all that it entails has become more complex than ever.

Meanwhile, most of us also have smart phones, tablets, laptops and PCs loaded with multiple programs and apps, using Wi-Fi, LTE, 4G and other ways to access the Internet. And we communicate with each other via SMS, Facebook, Twitter, Instagram, Skype, message boards, e-mail, and phone calls (via cell or good ol’ land lines). We set up these devices and services, develop at least a working knowledge of each, and oh, by the way, need to keep track of passwords for all of them.

Shooting Ourselves In The Foot?
Of course, it’s important to keep up with these things in a general sense. But how “up to date” do we really need to be? Let’s start with the hardware.

What was once the occasional software update has turned into a near-bombardment of update notices for our firmware (hardware), software and apps of all kinds. Most of the time these updates are beneficial, fixing bugs, beefing up security, adding features or generally improving the user interface.

But sometimes, an update can “break” an otherwise working unit. We’ve probably all experienced this one. For me, it was when I was working on a DVD project on my iMac a couple of years ago and updated to the latest OS. I quickly discovered that not only had they stopped supporting DVD authoring, but something else was broken so that my photos were rendered in strange colors. To finish the project I had to borrow a friend’s MacBook that hadn’t received the update.

And that’s when I learned a valuable lesson (again): the latest version isn’t always the best. In fact, it was basically folly on my part to do a major OS update like that during the middle of a project, and if I’d done any research beforehand, I would have known to expect problems.

Deliberate Obsolescence?
Now, not every manufacturer deliberately obsoletes its equipment or decides to negate interoperability with other equipment, but it does happen. And in our business, having the show go on is key. In fact, it’s everything. To paraphrase the late Albert Lecesse of Audio Analysts, “priority one is to make sound, and priority two is to keep making sound.”

Sure, that fancy vocal microphone or “black box” processor may be really nifty, but is it suitable for the tough, demanding world of touring sound? Or even for the perhaps less demanding but still critical world of worship sound? One reason we’re still using some “classics” in our daily work is because they keep working, day after day. Anything delicate, fussy, unreliable or otherwise non-conducive to this consistency will end up getting shelved in favor of devices that always work.

What’s The Point?
There’s another aspect to this, though. The best way I can describe it is that some of us tend to be, well, addicted to the latest technology, apps, social media or whatever it might be that’s new. Sure, it’s cool to be up on these things, but where should we draw the line between professionalism and being “cutting-edge” just for the sake of it?

Clearly this is a personal question, but sometimes we don’t ask it enough. Certainly we have to know about the platforms on which we work, including the standard plug-ins, how to do updates, and what the critical updates are for our systems.

But beyond that, it’s not usually wise to get caught up in an endless cycle of keeping up with the Joneses. It’s one thing to be perpetually hip when you’re a free-wheeling college student, but very much another thing when you’re tasked with the responsibility to deliver quality results, night after night, with a complex and powerful sound reinforcement system.

What most training manuals for technical equipment said in the past was along the lines of, “Learn the basics then choose just a few of your favorite settings, and use just those settings until you get really good at it.” This still applies today – if we can’t be fluid and confident with some of the basic setups using cutting-edge equipment and systems, we simply need more practice. Forget the fancy stuff, at least at the beginning of the learning curve, and master the fundamentals.

Potential Or Results?
One thing that I’ve found annoying over the years is an emphasis on potential at the expense of actually producing something of value. In other words, “Wow – look at all of the amazing features and all of the other whiz-bang things in this new box and what we can do with it!”

I love technology, ideas and “what ifs” as much as the next person (and maybe more). But in the final, clear-eyed analysis, we need to be producing consistently good results. This is what it means to be professional.

The “wouldn’t it be cool if” statements must be grounded in reality, and experience is what gives us the ability to determine what might be feasible or productive and what might not. It’s one reason the old salts in pro audio are so hard to impress – they’ve seen it all and then some. And the best of them are the first to acknowledge something useful and beneficial – after they see it proven.

To me, the bottom line is that it’s important to keep up and stay savvy about the latest technology, while at the same time keeping an eye on the real prize: productivity and professionalism. It is still – and will always be – the best path to success and to advancing our careers.

Karl Winkler is director of business development at Lectrosonics and has worked in professional audio for more than 20 years.

Posted by House Editor on 05/11 at 12:16 PM

Friday, May 08, 2015

Consistent Tide Of Console Upgrades

One of the notable aspects of newer digital consoles is that they can be upgraded, often to a significant degree, via new software and firmware that’s usually available as a convenient download.

It’s a great way to garner even more capability without the need to invest in a new console (or often, additional hardware), which is always a most welcome development.

There’s been a significant amount of activity of late in this regard, and we thought it useful to offer a round-up of what’s new, as well as perhaps overlooked upgrades, with digital console software and control app capabilities. As always, we encourage you to do further investigation.

New Allen & Heath GLD firmware version 1.5 provides DEEP plug-in architecture that allows users to select from a number of different processing units on every input and mix channel (more here).

Two new RMS-VCA inspired compressor models, the 16T and 16VU, are also included. Integrated within the mixer’s channel processing, all six compressor models can be selected on any of the input and mix channels on the fly, without burning valuable FX slots or adding latency.

Allen & Heath GLD Chrome.

In addition, Chrome firmware includes several additions to the onboard FX suite, including a new Stereo Tap Delay, with 2.7-second maximum delay time, split L/R beat fraction control, millisecond mode, and tap tempo functions. This is joined by a new Bucket Brigade delay emulating the non-linearity of solid state delay units, and Echo, an emulation of the classic tape echo system popular in the 1970s.

Recently released version 3.0 firmware for Yamaha Commercial Audio CL and QL Series consoles (more here) adds a new 8-band parametric EQ and a real-time analyzer. Specifically, an 8-band PEQ can now be selected in both the GEQ Rack and the Effect Rack.

Yamaha Commercial Audio QL Series.

Version 3.0 also provides four banks of enhanced user defined keys, 5.1 panning and monitoring for surround broadcasts, and a newly developed bus compressor for insertion in the stereo mix bus. In addition, Dan Dugan Sound Design automatic mic mixing already included in QL is now also provided for CL Series consoles, and because turnabout is fair play, Mix Minus features previously only available in CL are now also provided in QL consoles.

Meanwhile, the new StageMix v5.0 control app for several Yamaha Commercial Audio consoles includes a 61-band real-time analyzer that receives input from the built-in iPad microphone. This function is integrated with the PEQ/GEQ displays, allowing the engineer to move around the stage while checking for problem frequencies at various locations as well as use PEQ or GEQ to make appropriate adjustments on the spot. And new v2.0 for R Remote I/O racks provides built-in Dante networking, and the GUI has been revised to allow numeric entry of gain values as well as generally smoother, more efficient operation.

The new Soundcraft Vi7000 and Vi5000 digital consoles (more here) provide a wealth of capability, with Vi’s patent-applied-for VM2 radio microphone status monitoring feature providing native control and monitoring of Shure ULX-D and QLX-D digital wireless systems as well as AKG DMS800 and WMS4500 systems. This new integration with the console enables automatic Shure device discovery, identification, and mapping of each wireless system to the appropriate mixer channel. When that channel is selected on the console, all essential wireless parameters are displayed.

Soundcraft Vi7000.

It also enables live monitoring of the channel’s RF and audio metering, with the ability to adjust receiver gain from the console, much like trim adjustment for a wired mic. In addition, battery life status for re-chargeables and standard AA alkalines is supported. Both Shure systems use Ethernet connectivity to deliver system data to the consoles. Along with shipping on the new Vi5000 and Vi7000, a subsequent software update will bring Shure wireless integration to the current Vi1 and Vi3000 consoles.

Along the same lines, new Yamaha TF mixers (more here) also provide a range of input and output channel presets created in cooperation and consultation with mic manufacturers such as Audio-Technica, Sennheiser and Shure. The input channel presets are made to match a span of musical instruments and voices, covering parameters such as head amp gain, EQ, dynamics and much more, right down to details like channel name and color. Output channel presets include parameters optimized for in-ear monitors and Yamaha powered loudspeakers.

Soon to be released (Q2, 2015) Remote Control Software (RCS) for the now-shipping Roland Pro AV M-5000 console (more here) provides operation of the console from a computer (Mac/Windows). Connection can be made via USB or remote connector, allowing operation over a LAN.

Roland Pro AV M-5000.

The GUI for the M-5000 RCS allows multiple windows, and includes support for high-resolution displays and other optimizations. This enables use of a second display for viewing even more windows, such as a large meter view of inputs and outputs.

The dedicated M-5000 Remote app (also due Q2) supports remote control from an iPad, with three methods of connection offered. When using the console’s dock connector for iPad attachment, the iPad can be used to perform 2-channel recording and playback, and input sources and output channels can be assigned as desired. The GUI for the app offers full support of Retina displays for crystal-clear graphics.

The new DiGiCo S21 console (more here) offers exceptional capabilities, and this bodes well for future upgrades as well.

Using a high power QuadCore SoC, associated with high bandwidth memory, the S21 connects to a low-power 484-ball array FPGA which in turn connects to fourth-generation control SHARC DSP, capable of not only controlling the FPGA, but offering the potential for additional processing in the future.

The just-unveiled Avid flagship VENUE | S6L live mixing system (more here) utilizes the E6L engine to handle huge channel and plug-in counts at very low latency. All processing is at 96 kHz, with support for higher sample rates due to the amount of processing power available. Onboard plug-in processing is greater than previously available, and Pro Tools integration provides streamlined recording and playback functionality without the need for a separate audio interface.

The new system is powered by the same VENUE software as other Avid live sound systems. I/O can be shared across multiple networked systems with Avid True Gain advanced gain tracking technology.

New HD mode firmware for PreSonus StudioLive AI-series mixers (more here) makes them capable of recording and playing back audio streams at sample rates up to 96 kHz over the onboard FireWire 800 interface.

PreSonus StudioLive AI-series.

The number of recording and playback streams is unaffected. HD mode limits cascading and output processing while retaining bus mixing, Fat Channel processing on every input channel, and one reverb and one delay processor.

Because included Capture and Studio One Artist for Mac and Windows already record high-definition audio, no updates to these applications are required. In addition, the company’s Nimbit online, direct-to-users marketing, promotion, and sales service now supports high-definition (24-bit, 96 kHz) audio.

Recently released V3 software introduces more than 40 new features and updates for the SSL Live console range (more here). With the software update, the L500 console increases from 192 mix paths to 256 with a doubling of effects processing power (depending on the effects selected).

The L300 also increases from 128 to 192 mix paths. V3 also includes remote control software, console expander mode, new effects, enhancements to the solo system, user interface changes, an optional Dante card and Super-Q, the next generation of the Query function, offering workflow flexibility at the touch of a single button.

In addition, new SOLSA (SSL Offline Setup Application) software allows creation and editing of SSL Live console show files on a laptop, desktop or tablet PC. Almost anything that can be done on a console can be manipulated and configured using SOLSA.

Mackie recently announced the availability of new Master Fader v3.0 (more here), providing significant upgrades for the new DL32R wireless digital mixer with iPad control as well as DL1608 and DL806 mixers. New features include the addition of four subgroups and four VCAs. Subgroups can be stereo-linked and provide dedicated processing, while VCAs offer flexible control over groups of channels. Users can dial in the mix and get single-fader control over groups like drums, guitars and more.

Mackie My Fader v3.0.

Also new is the overview screen, which delivers at-a-glance information for all input and output channels. In addition, digital trim has been added to each DL1608/DL806 input.

Mackie has also released My Fader v3.0, an upgraded version of the control app for its digital mixers. My Fader v3.0 allows on-stage performers to control their own monitors and provides engineers with quick mobile control over a mix. The new version is headlined by a new updated user interface and an expanded feature set for additional control.

The QSC TouchMix Series (more here) just got a firmware upgrade to version 2.1 that includes multiple languages. In addition to English, the mixer’s built-in info system and demo screens now include user selectable options for Chinese (Mandarin), French, German, Russian and Spanish.

QSC TouchMix-16.

In addition, both TouchMix-16 and TouchMix-8 now offer password protected multi-level security access, programmable user buttons, expanded channel presets, and the ability to assign aux buses to the left and right main outputs, allowing the auxiliary to function as a sub group.

The TouchMix Series also now offers expanded Wi-Fi options (including wired connection to an infrastructure router) and an updated iOS app for remote control as well as personal mix apps for both iPhone and iPod Touch, which the user may limit or allow on a per-device basis. The new software also offers iPhone control of the record/playback transport controls, with Android support announced for the future.

Senior contributing editor Craig Leerman is the owner of Tech Works, a production company based in Las Vegas.

Posted by Keith Clark on 05/08 at 11:32 AM

Climbing The Sound Mountain: A Fictional (But Highly Instructional) Tale

It Begins: Anticipation

Ben awoke at dawn filled with hope and excitement – this was the day he was to kneel before the audio gods at Frank’s Sound Company and Show Rentals.

Ben had been waiting for this day for two years while he worked his way through the daunting challenges placed before him at the community college audio program.

He was ready to prove his worth and show his mettle to the gods, and he knew they would be impressed. There wasn’t a knob he didn’t know. There wasn’t an audio formula he couldn’t recite. There wasn’t a cool-guy piece of equipment whose alphanumeric name he couldn’t rattle off like machine-gun fire.

But even more impressive, he thought, would be his system test CD. Yes – he had IGY, Babylon Sisters, Toy Matinee, and some Diana Krall. He knew the gods would recognize one of their own when they saw this.

And last but not least, Ben would have a Sharpie and a greenie subtly placed in his right rear pocket in just a way that these gods would see, without being too obvious. He was ready.

Ben poured himself a bowl of Captain Crunch, doused it with 2 percent milk, and sat down to fortify himself for the coming gauntlet. Through the rhythmic munching and the heaping spoonful of goodness brought to his mouth every 6,500 milliseconds, he thought about the things he would need to know today.

In his mind’s eye, he visualized the Inverse Square Law in three dimensions. He silently calculated the impedance of a series/parallel loudspeaker cabinet loaded with six 8-ohm drivers. He recited to himself the resistance, both ways, of a 12AWG loudspeaker cable that was 50 feet long.

Ben’s education had been a good one – his teachers made every effort to ensure that their students understood the fundamentals.

But Ben was even more excited about mixing. He knew that the experience he’d gained by working on everything from that recording project with the reggae band to being an assistant system tech at the Spring Fling festival at his community college had perfectly prepared him to “mix the big time.”


And this is where Ben saw himself – up on the big platform at the big show with the big artist. “The Big Time,” Ben said as a few drops of peanut butter-tainted milk dribbled down his chin and visions of thousands of screaming fans echoed in his head.

He was ready to take on the world as he grabbed his resume and CD, headed down the staircase at his apartment complex to his car, and started out on his way to Frank’s Sound Company. It was only a few miles away – just enough time to hear IGY again in the car.

“I wonder what this CD sounds like on Frank’s best sound system. Probably a line array,” thought Ben as he pulled into the parking lot at the sound company. It was 9:54 a.m., right on time for the interview at 10.

As he stepped out of his car and readied himself to enter the realm of the audio gods, Ben tucked his black EV t-shirt into his black Levis and looked down at his black Reeboks.

“I’m as ready as I’m ever gonna be,” he muttered under his breath as he strode purposefully toward the big double doors at the front of the building. He paused at the door, took a deep breath to still the twinge in his stomach, and then reached for the handle. He gave it a manly tug before entering the lobby with his head held high and walked straight to the reception desk.

“I’m here to see Frank,” he said with the most mature but casual voice he could muster for someone of a mere 20 years of age.

“And you are?” asked the receptionist, with just a faint trace of haughtiness, as if Ben could clearly not really be someone that Frank would know.

“Ben Davis,” he said, again with the calm, mature and casual voice he had used seconds before. “I’m here for an interview.” He could not see that she had rolled her eyes while looking down at the appointment book.

“Please have a seat,” she uttered. “I’ll let him know you’re here.”

Ben looked around at the lobby and noticed that there was a nice-looking leather couch along the wall behind him.

But what really caught his eye was the array of large, framed pictures of what looked like super-awesome shows and tours from years gone by. “Whoa…” came out of his mouth almost as a whisper as he chose one of the pictures and approached it. He stood staring at the rock stars on stage covered in a wash of colored light and dripping with sweat.

He walked over to another picture, this one of a political rally. “This is exciting stuff,” he said under his breath just as he heard doors open behind him and the sound of footsteps approaching as someone walked into the room.

Ben turned on his heel just as the man said “Ben?”

“That’s me,” was his simple reply. The man introduced himself as Frank and asked Ben to follow him to his office. Frank Martin had the look of mature confidence to which Ben aspired. And although he must have been only 40, to Ben he looked like “the old guard.” Ben felt a tickle of electricity as the moment had finally come.

He felt for his Sharpie and greenie. Check.

He looked down at the CD jewel case in his hand. Check.

He had his resume in his hand. Check.

He was ready, and he knew it…

Taylor Jensen is a freelance pro audio writer.

Posted by Keith Clark on 05/08 at 10:31 AM

In The Studio: The Evolution Of Recording

This article is provided by BAMaudioschool.com.

Once upon a time, there was no recorded music. 

To hear music you needed to go to a live performance. Eventually sheet music was printed and available to buy. If you liked the song, you bought the sheet music … and if you were lucky enough to have an instrument and could play, you could actually hear the song.

The piano (or other musical instrument) was an important part of home entertainment.

In the late 1800s, Thomas Edison developed the idea of moving a piece of tin foil under a needle that was attached to something like a stretched out balloon. When he spoke into the piece of balloon, the attached needle vibrated and those vibrations were stored in the sheet of tin that he moved under the needle. He invented the Phonograph and in 1887 formed the first record label, selling records that were cylinders with sound scratched along the outside, played on a hand-cranked device.

Emile Berliner (who had a hand in the invention of the microphone) patented a flat disc system, called the Gramophone, that was better than the cylinder. His system eventually incorporated a spinning flat disc with sound scratched in a spiral, played back on systems with needles connected to stretched-balloon type membranes that were themselves connected to large open flaring horns (like a tuba) to help the sound waves radiate out in a single direction with extra resonance from the horn itself. The system was crank-wound, and elaborate springs and gears would then spin the disc at a constant speed.

Eventually, sound was being captured by microphones and stored magnetically on steel wire magnetic recorders, which used spools of wire that would follow a path from a “supply” spool to a “take up” spool, passing a record head that stored the magnetic sounds onto the wire and a play head that could read the magnetic signals back from the wire. This was accomplished using electro-magnetic transducers rather than early technologies that utilized physical transfer of sound energy.

Steel wire recorders were developed using technology that was first proposed in the late 1870s, and were used at times to send secret messages (for example, in a shipment of piano wire).

In the 1950s, oxide-based magnetic tape replaced steel wire as the medium for storing magnetic signals. Tape used magnetically sensitive particles glued to a strip of plastic, which allowed for more focused and controllable recording and eventually the ability to record multiple bands (tracks) rather than a single sound.

Magnetic tape recorders utilized many of the same features as steel wire recorders, including supply reel, take-up reel, record head, playback head, and tape path.

Magnetic tape can be saturated, which means that if the tape is overloaded it will compress the sound. Magnetic tape also adds a “hiss” on playback. Noise reduction methods such as dbx and Dolby boost the quieter parts of the signal while recording and then bring them back down on playback, pushing the tape hiss down along with them. “Single-ended noise reduction” devices do not change the recorded sound, but rather gate the high frequencies on playback.

Les Paul (yes, the Les Paul) created a magnetic tape recorder with a sync head (a record head with limited playback abilities). This was the invention of sound-on-sound recording. Previously, if you listened to material already on the tape while recording something new, the new material would not land at the same physical point along the tape, because the playback head and the record head were in two different physical locations. The sync head meant that you could listen back to sound and record new sounds at exactly the same point along the tape.

Suddenly, we have multi-track recording, allowing people to record up to eight different tracks individually and listen to the tracks on newly developed equipment called mixers that controlled the volume of the tracks both going into and out of the machine, and also mixed those sounds together at those controlled volumes. 

Suddenly, you could re-perform one part on one instrument rather than be forced to re-perform the entire song with all the musicians. You could even erase and replace small parts of individual tracks rather than have to re-do everything, as long as the sounds were separated enough when you recorded them that each track contained only the sound of one instrument. Replacing small parts involved going into and out of record at specific times, which was called punching in and punching out.

Isolating the instruments, even if it was just using a separate microphone for everyone rather than a single common microphone, had other benefits. By raising or lowering the volume of the microphones it was possible to either enhance or in some cases create dynamic interaction between the instruments. 

Tom Dowd used this possibility to further the expressive nature of the music he was recording and mixing, and was the first modern recording engineer. 

Engineers used to be only technicians who told musicians and producers what they could not do in order to make sure that records had usable grooves that would allow a needle to properly play without skipping or jumping. There were no moving mics, no riding levels while recording, nothing but documenting whatever happened to happen in the room with the microphones in it.

When a producer was forced to use Tom Dowd because his usual engineer was booked, he was able to make more expressive music as a result of having an engineer that would manipulate the equipment as required by the music, rather than change the music to fit the equipment limitations. This was the beginning of Tom using technology to enhance the creativity of the music he was working on.

He isolated instruments for better control later. He pioneered the fader console, with devices to control sound such as EQs to change tone or limiters to control volume built into the console channels. He was a musician as well as a tech, so everything he did served the music. He was also a nuclear physicist involved in the development of the atomic bomb when he was young (in fact, since the work he did was top secret and could not be discussed in schools or industry, he did not pursue a physics career after the Army as planned, but went back into music).

Tom built relationships with the people he worked with, overcoming the typical expectation that an engineer was just a technician without creativity. He was able to do this in different musical styles, and he became instrumental in very important music with pivotal artists throughout the years.

Tom Dowd brought out the best out of the people, the songs, and the sounds. 

Overdubbing means adding new parts to pre-recorded ones. This meant that it was no longer necessary for all musicians to play at the same time. A band could record one day and the singer could record the next day. Since the singer was now on a separate track, the singer could continue to re-perform the song and re-record the track until they were satisfied with their performance.

Now that recordings were an artificial combination of sounds rather than capturing a natural music occurrence, you had to mix the sounds together to simulate either a natural sound environment or even to create a new sound environment.  Normal dynamics that would take place between people performing music together had to be simulated, because now the people were performing at different times, or even in different locations.

Once music was stored on tape, people started to edit, which means to cut it up and move whole sections or individual parts around. Mono tapes were edited long before multi-tracks, but with multi-track recording it became possible for new tracks to be either newly recorded or flown (played from another tape machine) from other performances.

Things carried on in much the same way for a while: recording and overdubbing microphones and sound-generating instruments onto individual tracks of magnetic tape recorders, then mixing those tracks through the separate channels of a mixing board into a cohesive combined sound.

Then came digital.

Digital tape recorders (DTRs) first looked and operated like analog magnetic tape recorders, with a supply reel, tape path, take-up reel, etc. Since DTRs recorded digital information onto the tape rather than actual magnetic signals it was only a matter of time before computer technology allowed you to record without the tape, directly to a computer hard drive. 

Pro Tools was the first virtual digital tape recorder (meaning it was tapeless) that existed completely within a computer. Early Pro Tools was very limited in quality and capabilities, but with the introduction of non-destructive editing, music production was changed forever.

Now music could be edited with a click of a mouse instead of a flick of a razor blade. And you now have undo. That’s right, undo in an industry that had always involved permanent decisions with physical tape and razors rather than backed up computer files.

Once digital audio was in a computer rather than a tape machine, it was easy to start to manipulate it. Moving, quantizing, replacing and harmonizing sounds became as easy as clicking on a button. Auto-Tune (a program that fixes out-of-tune vocals) is responsible for many of the “in tune” vocals heard today. Before harmonizers and Auto-Tune, you actually had to be able to sing in order to be a singer. Now you only need to look and dance well and the music part can be fixed automatically. Click.

Original Pro Tools systems cost tens of thousands of dollars. These days you can get much more powerful systems that actually work well in home computers for hundreds.

Now everyone with a home computer is an artist/musician/producer/engineer. The age of the “prosumer” is here.

Bruce A. Miller is an acclaimed recording engineer who operates an independent recording studio and the BAM Audio School website.

Posted by House Editor on 05/08 at 09:21 AM

Thursday, May 07, 2015

Apple Watch, Meet A/V Integration

This article is provided by Commercial Integrator

It only took the industry a few years to figure out how to utilize the iPad in a meaningful way to improve the user experience of audio-visual technology.

This isn’t a knock on AV, it’s just a fact. Most of the early iterations of iPad control and integration left something to be desired. The good news is we have moved past that.

Today, Android and iOS are part of our lives. In fact, you may be reading this on your mobile device right now. If you’re a business owner, you have probably heard about “Mobilegeddon,” or the fact that companies whose websites aren’t mobile friendly will be penalized with lower search rankings.

Another thing you have probably heard about lately is the launch of the Apple Watch.

As a tech geek, I dig the idea of one more way to stay connected. And while smart watches are nothing new, the marriage of Apple and just about anything means that I am going to buy it. If I could figure out why they didn’t make it the iWatch I would be a bit happier, however, I digress.

What hasn’t been addressed, at least not yet in the commercial integration space, is whether or not the Apple Watch or wearables in general are going to have any meaningful impact on what we do. The answer is it won’t, but it will. Confused yet? Good, then I’m doing my journalistic job of leading the reader.

Mobile Is The Future Of Everything
While we were solving the problem of getting iPads to control projectors and switchers, there was another shift brewing. We were evolving into a world where people are always on, always connected and almost always expecting technology to support this.

If you think about the technology you buy today, does any of it really require us to know how to use it? If you are anything like me, you haven’t read a manual in the better part of a decade, and you certainly don’t read one when you get a new mobile device. Apps, browsers and tools on our devices are so user friendly that we just launch and use. This is going to hugely affect the integration industry as well.

Remember all of the time we spent training and educating the user on how to utilize the technology that we installed? Often it included large binders and multi-hour trainings on how to fire up a projector and switch the input. The more parts and pieces in the system, the more time we spent training, re-training and then supporting the installation. This became commonplace, and it was accepted as the way things are.

Then we got smart devices. We can now shoot video on our iPhones and control our houses with an app on our iPad. We have Nest, FitBit, sleep monitoring and nutritionists in our pocket and on our wrists at any given time. No training required; s#it just works.

The Apple Watch may not immediately control the rooms we install or change commercial integration drastically, but devices of its kind are a representation of a new world where life and business are experienced differently. We no longer are tethered to our desks or bound to a conference room to have a meeting. We can connect in the cloud, watch content on our mobile device and be notified of what is important on our wrist.

These new ways of living are a direct window into the future and they speak to how we will engage with technology in the future. The offerings and solutions that integrators deliver need to encompass this. It isn’t a smart room thing anymore, but a smart world thing and most certainly something to consider in every strategic move we make with our businesses in the future.

Daniel L. Newman currently serves as CEO of EOS, a new company focused on offering cloud-based management solutions for IT and A/V integrators. He has spent his entire career in various integration industry roles. Most recently, Newman was CEO of United Visual where he led all day to day operations for the 60-plus-year-old integrator.

Go to Commercial Integrator for more content on A/V, installed and commercial systems.

Posted by Keith Clark on 05/07 at 11:49 AM

Taming The RF Beast

Wireless systems are a key component in almost every facet of live entertainment production, especially concerts and corporate meetings and events. The demand continues to increase as the supply of available bandwidth is both shrinking and becoming more congested.

As a result, pre-planning and wireless frequency coordination are becoming more important, particularly as the FCC (Federal Communications Commission) is preparing to sell off more of the UHF spectrum where the majority of wireless microphones and monitoring systems operate (currently 470-698 MHz in the U.S.).

And, our systems already share portions of the broadcast spectrum with ever-proliferating TV band devices (TVBDs, formerly known as white space devices, or WSDs).

Add the problems of intermodulation interference (intermod) into the mix, and as audio professionals, we really need to focus on frequency coordination and wireless system design. Intermod happens when two (or more) transmitter signals mix in an active device (transmitter, active antenna, active splitter, receiver) and produce additional frequencies above and below each original transmitting frequency. These intermod products fall above and below the original carriers at the same spacing that separates the original frequencies.

Let’s say you need three wireless systems onstage – two mic systems and a guitar system. You select 498.000 MHz for one of the mic systems and 499.000 MHz for the other mic system. The spacing between the frequencies is 1 MHz, so don’t place the guitar system on 497.000 MHz or 500.000 MHz because that’s where the third-order intermod products will occur. 

This is simple enough to figure out for a few systems, but every time another frequency is added, it must be coordinated with every other frequency in service. It gets pretty complicated on bigger gigs, and astronomically so on larger events where there can be hundreds of frequencies in use.
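To see why dedicated tools exist, here’s a minimal sketch (in Python, purely illustrative and not any manufacturer’s algorithm) that lists the third-order products of a set of carriers and flags a candidate frequency landing too close to a carrier or product; the 25 kHz guard band is an arbitrary assumption for the example.

```python
# Illustrative only: third-order intermod products for a list of carriers (MHz).
from itertools import permutations

def third_order_products(carriers_mhz):
    """Products of the form 2*f1 - f2 for every ordered pair of carriers."""
    return {round(2 * f1 - f2, 3) for f1, f2 in permutations(carriers_mhz, 2)}

def conflicts(candidate_mhz, carriers_mhz, guard_mhz=0.025):
    """List anything the candidate lands within the guard band of:
    an existing carrier or a third-order product of the existing carriers."""
    danger = set(carriers_mhz) | third_order_products(carriers_mhz)
    return sorted(d for d in danger if abs(candidate_mhz - d) < guard_mhz)

mics = [498.000, 499.000]          # the two mic systems from the example above
print(third_order_products(mics))  # {497.0, 500.0}
print(conflicts(497.000, mics))    # [497.0] -- don't put the guitar system here
print(conflicts(501.500, mics))    # [] -- clear of the carriers and their products
```

Real coordination software goes much further, checking higher-order products, known TV channels and every frequency against every other – exactly the bookkeeping that becomes unmanageable by hand.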

Frequency coordination tab of Shure Wireless Workbench 6.11.

Modern Developments
There are several software programs that we can turn to for help with frequency coordination. Some are free, such as Sennheiser Intermodulation and Frequency Management software that provides rapid calculation of intermod-free frequencies, and Shure Wireless Workbench, with recently released version 6.11 also offering several new features to help manage wireless system performance over the network, from pre-show planning to live performance monitoring. RF Guru from Stage Research and IAS (Intermodulation Analysis System) from Professional Wireless Systems are two more programs that can help, available for a fee.

To make sure our wireless systems don’t get stepped on by TV transmitters and other high-power users (and/or cause any interference to them), we can turn to several websites that can help us steer clear of problems. The Shure Wireless Frequency Finder is a free tool where users can input their location, with the program selecting frequency bands and offering information on known TV transmitters in that region. Electro-Voice and Sennheiser are two more manufacturers that offer this kind of online help. Along with intermod calculations, the aforementioned software programs also pull in data from the FCC on active TV stations.

One way that manufacturers have adapted to the shrinking spectrum is to make their systems more efficient, with several systems now able to operate within the same amount of bandwidth that used to be occupied by a single system. For example, the Shure ULXD system can operate up to 47 active transmitters in one 6 MHz TV channel space (or 63 in one 8 MHz TV channel in high-density mode).

Another notable innovation is Lectrosonics Digital Hybrid Wireless technology, which utilizes a proprietary algorithm to encode the digital audio information into an analog format, which is then transmitted over an analog FM wireless link.

The receiver employs high-end filters, RF amplifiers, mixers and a detector to capture the encoded signal, and a DSP recovers the original digital audio. This hybrid approach enhances immunity to noise without compromising spectral efficiency, power efficiency or operating range.

Lectrosonics Venue receivers also work in tandem with System Designer software that includes a spectrum scanner providing a visual display of RF activity within the tuning range of the system to quickly locate clear operating frequencies. In addition, a walk test recorder generates a visual of RF levels during a walk test of a project site.

Frequency-agile systems that can detect problems and automatically (and seamlessly) switch both transmitter and receiver to another (open) frequency represent another effective approach. The new System 10 Pro wireless from Audio-Technica (subject of a recent Road Test here) is a good example. The receiver and transmitter are actually transceivers that are constantly communicating with one another. If interference occurs, the units switch over to a clear frequency without a hitch.

Lectrosonics System Designer software that includes a spectrum scanner, walk test recorder and more.

Further Options
One way to avoid a lot of the headaches in the UHF band is to operate outside of it. Many years ago most (if not all) wireless microphones operated in the VHF (TV band channels 2-13) spectrum from 150-216 MHz. My company still has a few VHF systems that work quite well because the spectrum is pretty empty.

Radio Active Designs (RAD) recently introduced the UV-1G wireless intercom system, which can help in crowded UHF environments. The system offers up to 30 base stations and 180 belt packs in the same footprint as one base station and four belt packs that use traditional FM technology. And the belt packs operate in the VHF band, freeing up valuable UHF spectrum for wireless microphones and IEMs. The system uses proprietary Enhanced Narrow Band technology, a unique modulation scheme that is more spectrally efficient than current FM (Frequency Modulation) technology. Each channel in a UV-1G system has an occupied bandwidth of only 25 kHz.

Several wireless manufacturers also offer systems that operate in the 2.4 GHz band,  most commonly used by Wi-Fi. While it’s certainly true that there’s a lot of activity within the 2.4 GHz range, manufacturers have developed ways to make it work, including the aforementioned System 10 Pro that uses frequency agility to switch frequencies.

A Sennheiser evolution wireless D1 instrument system.

The Line 6 XD-V70 was an early professional caliber wireless system operating in the 2.4 GHz band. The system has proprietary technology called Digital Channel Lock (DCL) that distinguishes its own digital audio from any other third-party signal, including Wi-Fi. Up to 12 XD-V70 systems can operate in the band simultaneously, while the newer XD-V75 allows up to 14 systems.

The Sennheiser evolution wireless D1 is a recent addition to the 2.4 GHz club, with transmitters and receivers that automatically pair and select suitable transmission frequencies, while multiple D1 systems can automatically coordinate themselves.

Systems are also coming on the scene that operate in the 1.9 GHz range, such as the recently released Sennheiser SpeechLine Digital Wireless system with Automatic Frequency Management technology that searches and can switch to a clear frequency if transmission is disturbed. In addition, network integration enables the system status to be remote controlled and monitored using the Wireless System Remote (WSR) app, AMX or Crestron. 

Additional Aspects
And that brings up another concept, which is monitoring. Particularly when using multiple wireless channels, this is a good idea, with many newer systems offering networking and the ability to link receivers together as well as remote monitoring of the system as a whole.

Soundcraft and Shure recently announced a new collaboration that enables native monitoring and control of select Shure wireless systems on Soundcraft Vi Series digital consoles. The new Soundcraft Vi5000 and Vi7000 digital consoles support Shure ULX-D and QLX-D systems as well as AKG DMS800 and WMS4500 systems.

There are some excellent third-party spectrum analyzers available that are also useful in keeping an eye on things. A spectrum analyzer monitors a range of the frequency band that is determined by the user. It sweeps across the range over and over, measuring the strength of the present RF signals and displaying the results. Users can see the background noise level as well as their own wireless system signals and any unwanted or unknown signals that may cause interference.

RF Explorer offers a compact, portable spectrum analyzer that’s available in different models depending on the frequency bands the user needs to view. The unit can also interface with a computer for data storage, control and monitoring.

Kaltman Creations offers several handy tools along these lines, including the RF Command center that interfaces with a computer and the portable RF-Vue that interfaces with a tablet allowing the user to walk around and take measurements. The company also offers a nifty unit called the RF-id SOLO, a small frequency counter that can identify the exact frequency a transmitter is operating on as well as confirming power output, which helps in identifying and troubleshooting wireless problems both onsite and back at the shop.

The portable RF-Vue from Kaltman Creations.

Another way to reduce the potential for problems is to use directional antennas, especially for transmitting with IEMs. Directional “paddle-style” (a.k.a., log-periodic dipole array) antennas can provide up to 6 dB of gain, and Helical antennas can provide up to 10 dB. With receivers, they can reduce the amount of unwanted noise picked up in comparison to an omnidirectional antenna, and offer forward gain that helps when they’re placed at a decent amount of distance from the transmitters.

Being Prepared
Earlier I mentioned the importance of pre-planning. Along with monitoring the airwaves during an event, it’s really the key to a successful show involving wireless systems. Before every event where we’re using wireless, we check to see what transmitters are in the area and if any other wireless systems will be in use. Because we do a lot of work in Las Vegas casinos, it’s not uncommon for an in-house show in a theater that uses dozens of wireless systems to be located on the other side of the wall from a ballroom.

We also coordinate with the in-house A/V department and any other production vendors working in the building. Once we’ve selected clear frequencies that work well with each other (as well as a bunch of backup frequencies) we monitor the airwaves during the event, checking for potential problems.

Recently we were hired to provide some wireless systems at a general session. It wasn’t huge, just eight wireless mics plus four stations of wireless intercom, but there were more than 200 frequencies in use on three floors of meeting space in the facility.

It could have been a disaster, but fortunately the event manager contracted a wireless coordinator, who made sure all of the frequencies for our event, and the other frequencies in use at the property, all “played nice” together. On our end, I monitored with a spectrum analyzer during rehearsals and on show day, and had a list of backup frequencies we could switch to if needed. With this little bit of attention to the details, we had a great event with no wireless problems over the entire week.

Senior contributing editor Craig Leerman is the owner of Tech Works, a production company based in Las Vegas.

Posted by Keith Clark on 05/07 at 09:22 AM

Wednesday, May 06, 2015

Church Sound: Effectively Communicating With Your Worship Team & Musicians

It’s not easy being a church tech. We face a barrage of challenges every time we enter the room. Most often, it’s our diplomatic skills more than our mixing skills that keep us alive.

We must stay focused, and effective, while encountering one ridiculous situation after another. If you’ve mixed for more than nine minutes, you know some of the folks who seem determined to put you in a straitjacket.

For the church tech, it’s different than mixing clubs or concerts. It’s going to be a different crowd than the one that bought a ticket or has a few drinks in them. The church tech is mixing for a room full of very diverse people. Many are easily offended. Many have old-school ideas about music. We usually end up being far more conscious of sound pressure levels within the church.

Let me offer a few examples of classic control problems, and possible solutions.

The lead guitar player, stuck in the 80s.
Sooner or later, your guitar player will become a “gear junkie.” He will want to recreate some effect or sound. It will require him to buy more stuff to plug his guitar into. Every piece he buys takes up space on the stage. It adds more noise to the room. It consumes more electricity. Left unchecked, he will end up looking like Frankenstein in his lab, surrounded by flashing lights and lightning, bringing some uncontrollable thing to life.

The last encounter I had with this type required heavy diplomacy. After agreeing to spend a Saturday afternoon helping him adjust his tone, I finally cracked. I made him happy with his monitor, then made him use a wireless unit and play in the middle of the sanctuary. After he commented that it sounded completely different, I applied some logic.

“Yes, it does. That 15-inch wedge, six feet in front of you, will always sound different than a full rig, hanging 15 feet in the air. You are also being mixed with a full band and vocal team. Your tone has to find a place in the mix. One instrument is not going to dominate this room. You are at my mercy. You have to trust me. I promise your time and talents will not be wasted.”

The Muppet Show-style drummer.
Finesse. What a magical word. It implies the ability to truly feel things, and adjust yourself to the situation. Many drummers just play like Animal from the Muppet Show. “Beat drums! Beat drums!” It’s their mantra. They refuse to finesse their drums. Kick drums must cause headaches. Snares must cause hearing damage. Hi-hats must cause the same emotional effect as a ticking time bomb. Yeah. That guy.

When the drummer is allowed to dominate the stage, it affects everything. Everything gets louder. Everyone is competing with those drums. Every monitor, every amp, every singer has to push past the level of those drums, just to hear themselves.

I once took all the cymbals off a drum kit, before a service. When the drummer freaked out, I told him to focus on keeping a beat and keeping it under control. I told him I absolutely could not make the house mix any better, if the stage noise didn’t calm down.

The oblivious bass player.
One of the bass players from my past had incredible talent. He could take a simple riff and walk all over the neck, making beautifully complex patterns. He had a great tone and solid rhythm. But, if he got lost or missed a note, he would just stop playing. Cold. Dead. Stop. Not a good thing, when the room is rocking with the cleanest subs you ever experienced.

I caught him one morning, before service and asked him about it. “Why do you just stop sometimes?”

He was terrified of playing the wrong note, so he didn’t play any notes. I made him imagine the audience dancing. They are leaning into the music, they feel it. The subs are hitting hard and the room feels great. But, when you let go of the bass, it falls flat. You are propping everyone up. We are leaning into that sound. When you stop, we feel like we are going to fall down. The mix goes flat.

After that conversation, it never happened again. He would still get lost, once in a while. But he would keep something moving until he got back in. My mixes stayed full and rich.

The keyboard player from outer space.
So. We brought in a new keyboard player. He rehearsed with the band a few times. Played a few Sundays. Got into the groove. Then he did it. He exposed his true musician inside. He did one of the most ridiculous things ever.

He was waiting for me, early, when I unlocked the building. He was pulling a large rolling suitcase. No big deal. But, once inside, he began unpacking gear all around his keyboard. Just before rehearsal began, he dragged out two ten-inch speakers and set them up on either side of the stage. Facing the audience.

Another actual experience. Honestly. I am not making these up.

In my most diplomatic tone, I asked him what those were for. He informed me that he didn’t believe my mix was properly conveying his stereo image into the audience. He felt that they truly needed to experience the full presence of his sound.

So I unplugged his speakers.

Then I informed him that I had no intention of competing with a second sound system in that room. He got offended and pouted. I also told him that his keyboard, like every other instrument and voice onstage, would find a place in the mix.

And, for the record, I ended up being great friends with each of these guys.

Except the keyboard player. He left.

The backseat-mixing worship leader.
I have experienced virtually every worship leader personality. The bone crusher, who has authority issues, demanding perfection. The micromanager, who wants to select the exact delay time for the secondary effect on the background vocals.

There’s more. But it gets less comical as I identify them.

Over time, I developed respect for each personality. Not always to the point where we were going to hang out after service, but respect.

I remember rehearsing for six weeks, getting ready for a production on Easter Sunday. Not really a show, just a strong choral performance with some powerful songs. Easter morning, an hour and a half before service, the worship leader walks into rehearsal and makes an announcement.

“I know we have been working on this song list for a while, but I want to change it.”

Seriously. Just like that. An hour’s worth of performance material was changed. We ended up doing a completely different set. The funny thing about it was, it worked. He knew his choir and he knew his congregation. It was a mix of old and new music. It worked fine.

I had to come to terms with my relationship with worship leaders. After listening to some griping and complaining about them, I made a decision. If I trust my pastor to speak into my life and help me make decisions, why wouldn’t I trust their judgment in selecting a worship leader? They have been given a job to do. They are in their place. Just like I was chosen to do my job because the pastor trusted me to do it effectively.

I eventually made a point of openly defending our worship leaders and making sure everyone knew that we were on the same team. It made a difference in their micromanaging, once they saw me edify them to the band and choir. The backseat mixing ended once they knew I was in their corner.

Diplomacy usually does the trick, but we have to be concerned with what we are producing and allowing into our mix. We can’t allow one musician to dominate the worship service because they are on some kind of ego trip. Get on the same page as the worship leader. Figure out what works and what doesn’t. Don’t be afraid to get involved and make suggestions. They need your input and assistance to bring the whole team together. That kind of communication creates greatness.

But hang on to the straitjacket, just in case.

Senior editor Erik has worked in professional audio for more than 20 years in live, install, and recording. Read more of his random rants and tirades here.

This article is an excerpt from Erik’s latest book, Basic Training for the Church Audio Technician, available here.

Posted by House Editor on 05/06 at 12:54 PM

Tuesday, May 05, 2015

In The Studio: Five Simple EQ Tips That Work On Anything

PSW Top 20 presented by Renkus-Heinz

Equalization is one of the most difficult parts of recording to get the hang of, since there’s an almost infinite number of possibilities.

Most of us learn by experience and usually massive amounts of trial and error, but there are some brief general guidelines that can be an enormous help for those new to the process.

Here’s an excerpt from The Mixing Engineer’s Handbook 3rd Edition featuring five simple EQ tips that will work in just about any situation.

1. If it sounds muddy, cut (decrease the level) at around 250 Hz. Although you can get that muddy sound from other lower frequencies (especially anything added below 100 Hz), start here first.

2. If it sounds honky or veiled, cut at around 500 Hz. This is where a huge build-up of energy occurs when close-miking instruments because of the proximity effect inherent in directional mics. Just cutting a bit in this area can sometimes provide instant clarity.

3. Cut if you’re trying to make things sound clearer. If the sound is cloudy, there’s usually a frequency band that’s too loud. It’s easier to decrease it than to raise everything else.

4. Boost if you’re trying to make things sound different. Sometimes you don’t want clarity as much as you want something to sound just different or effected. That’s the best time to boost EQ.

5. You can’t boost something that’s not there in the first place. You may be better off decreasing other frequencies than trying to add a huge amount, like 10 or 15 dB, to any frequency band.

Although there are exceptions to every one of the above guidelines, you’ll always stay out of sonic trouble if you consider these tips first.
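
If you like to experiment outside the console, here's a minimal sketch of the kind of move tip 1 describes, assuming Python with NumPy and SciPy on hand. The filter is the standard "audio EQ cookbook" peaking biquad; the sample rate, Q and 3 dB cut depth are illustrative starting points, not prescriptions.

```python
# A gentle parametric cut around 250 Hz to clean up "mud" (tip 1), built from the
# standard RBJ peaking-EQ biquad. All settings here are illustrative starting points.
import numpy as np
from scipy.signal import lfilter

def peaking_eq(fs, f0, gain_db, q=1.0):
    """Return (b, a) biquad coefficients for a peaking boost/cut centered at f0 Hz."""
    a_lin = 10 ** (gain_db / 40.0)              # amplitude factor
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * a_lin, -2 * np.cos(w0), 1 - alpha * a_lin])
    a = np.array([1 + alpha / a_lin, -2 * np.cos(w0), 1 - alpha / a_lin])
    return b / a[0], a / a[0]

fs = 48000                                       # sample rate in Hz
b, a = peaking_eq(fs, f0=250.0, gain_db=-3.0)    # roughly a 3 dB cut at 250 Hz

x = np.random.randn(fs)                          # one second of noise standing in for a track
y = lfilter(b, a, x)                             # the "de-mudded" result
```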

Bobby Owsinski is an author, producer, music industry veteran and technical consultant who has written numerous books covering all aspects of audio recording. For more information be sure to check out his website, and go here to acquire a copy of The Mixing Engineer’s Handbook 3rd Edition.

Posted by Keith Clark on 05/05 at 11:39 AM

Monday, May 04, 2015

Backstage Class: How Not To Record A Live Album

Whenever we record music there are certain “golden rules” that aid us in getting the end results we require. These rules are based on years of experience and have been handed down from engineer to engineer as a useful guide to the best practices. While they’re not rigid, experience has taught us that it is wise not to break them if we hope to produce a recording of the required standard (i.e., to be released).

When recording a live concert we tend to rely heavily on these rules because we’re no longer in the controlled environment of the recording studio. The needs of the recording are secondary to the immediate requirement of running the show and providing a pleasing live experience for the audience, not to mention the fact that there’s only one take.

For a few years now I’ve been working as front of house engineer for a band that produces vibrant and energetic live shows, so I suggested trying to capture that energy by recording a live album. An upcoming European tour presented an opportunity but there was no budget available so I had to borrow what equipment I could in order to achieve this aim. The process caused me to pretty much break every one of the golden rules, yet the result was a releasable album.

Making The Math Work
The band consisted of seven musicians playing 10 instruments: full drum kit, bass guitar, acoustic guitar, electric guitar, charango (a 10-stringed South American instrument traditionally made from the shell of an armadillo), strum stick (a three-stringed instrument designed to be simple to play), violin, melodica, trumpet and clarinet. Combine all of that with two additional floor toms (used for the encore) and three vocals, and there was a grand total of 27 inputs. We had a Korg D888 (eight-track, eight-channel, eight-bus) hard disk recorder, and this is where I broke my first rule…

Rule 1: Always record all inputs separately. Due to the inherently fleeting nature of live performance it’s wise to record every single stage input on a separate channel because this enables maximum flexibility when it comes to treating and blending the individual elements into the final mix. It also provides the ability to repair any mistakes and even overdub replacement or additional parts – believe it or not, very few live albums are “warts and all” unaltered depictions of the actual event. In the context of a live concert any wrong notes or mistakes are fleeting and soon forgotten, all part of the immediacy of live music.

But if those same mistakes are captured on a live recording and replayed multiple times they quickly become glaringly obvious, jumping out every time you listen to it. So the ability to separate individual instruments and repair, if necessary, can be vital.

However, the recorder afforded no such luxury. I had to find a way to get those 27 inputs down to eight. The key was the realization that while there were multiple instruments on stage, there were still only seven musicians, and while they’re a talented bunch, none of them have yet figured out how to play two instruments at once (Figure 1). All I had to do was assign one channel to each musician (while separating the lead and backing vocals), which resulted in the following track listing:

Figure 1: Live and recording inputs.

1 - Drums
2 - Bass
3 - Guitars (acoustic, electric, charango and strum stick)
4 - Violin (plus melodica)
5 - Trumpet
6 - Clarinet (top and bottom microphones)
7 - Lead vocal
8 - Backing vocals

The hardest decision I had to make was to record all of the drums to one channel. This would be a major issue in the final mix, but there was simply no choice. A drum kit comprises multiple elements that cover the widest frequency range of just about any instrument (with the possible exception of the pipe organ). It’s also a physically wide instrument that benefits from spatial separation of the individual elements via panning, an option which would now be denied because I was recording all 11 inputs in mono on one track.

Rule 2: Always record the raw input signals. The common practice when recording a live show is to use a splitter to take a complete copy of all of the stage inputs and thus record them before they’re relayed to the front of house/monitor consoles. This ensures capturing the raw signals unaffected by any processing employed for the live mix.

The reason for this rule is that the demands of the recording mix are quite different from those of the live mix. The live mix comprises those elements required to reinforce and enhance the ambient sound coming from the stage to create a clear, precise and above all visceral experience for the audience.

The recording mix is much more about presenting a coherent sound that reproduces the energy and excitement of the concert while withstanding repeated listening.

The live mix is also designed to work on a specific sound system in a particular room, whereas the recording mix needs to take into account the fact that the end result will be listened to on a wide variety of reproduction equipment in various environments. The requirement of recording to eight tracks meant a splitter was not an option.

For the three tracks with just one input (bass, trumpet and lead vocal), I was able to use the desk channel direct outputs to record an unprocessed signal. For the other five tracks, there was no choice but to use sub groups to combine the channels I needed, which meant those signals would be post everything (i.e., inserts, EQ and fader).

This required that I be acutely aware at all times of how any of the processing would affect the recording mix, and to try to seek a compromise between the demands of the two outlets. Thankfully my methods for applying EQ, gates and compressors are quite subtle so I was confident this would work; the key was being careful not to make any large or sudden moves.

Rule 3: Always employ redundancy and have a back-up recording system. You only get one take at a live recording, so any equipment failure that causes the recording to fail is catastrophic. A second recorder, ADAT machine or laptop can provide a handy back-up should the primary device fail, and this redundancy should be considered a vital part of the recording set-up.

However, this option wasn’t available on this project, so I decided that the best option would be to record all 15 shows on the tour and hope that the best performances could be compiled into a coherent album.

Rule 4: Adequately capture the audience/ambient sound. Hearing the crowd respond to the live performance really helps the listener to immerse themselves in the playback and imagine they’re present at the concert. Ambient microphones will also pick up some of the sound of the band and PA system propagating in the room which, when added to the dry multi-channel recording, can really add life and glue the whole recording together.

But because the recorder didn’t have any additional tracks available, I came up with the simple solution of recording the audience sound separately using a stereo microphone plugged into my laptop. This meant I would have to manually synchronize the two recordings in the mix, but there really wasn’t any other choice.

Necessity & Invention
By far the biggest challenge in the mix was to inflate that single track of mono drums into something resembling a fully fledged drum kit – it would sound weak if the drums weren’t pumping.

The solution was to duplicate the drum track four times and EQ each of the copies differently: one low-passed to bring out the kick, one band-passed to bring out the snare, and a pair high-passed to act as overheads (one of which was delayed a small amount to create a pseudo stereo effect when panned). This enabled me not just to modify and mix the key elements of the kit, but also to apply reverb only where I wanted it.
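
For anyone curious what that trick looks like outside a DAW, here's a rough sketch in Python (NumPy/SciPy assumed). The crossover points and the delay length are guesses for illustration, not the settings used on the actual album.

```python
# Four filtered copies of one mono drum track: a low-passed "kick," a band-passed
# "snare," and a high-passed pair acting as pseudo-stereo "overheads."
import numpy as np
from scipy.signal import butter, lfilter

fs = 44100
drums = np.random.randn(fs * 2)                  # stand-in for the mono drum track

def filt(x, kind, freq, order=4):
    b, a = butter(order, freq, btype=kind, fs=fs)
    return lfilter(b, a, x)

kick     = filt(drums, "lowpass", 120.0)             # brings out the kick
snare    = filt(drums, "bandpass", [150.0, 4000.0])  # brings out the snare
oh_left  = filt(drums, "highpass", 4000.0)           # one "overhead"
delay    = int(0.012 * fs)                           # ~12 ms offset for a pseudo-stereo spread
oh_right = np.concatenate([np.zeros(delay), oh_left[:-delay]])

# In the mix, these four tracks are balanced and panned separately (oh_left hard left,
# oh_right hard right), and reverb can be applied only where it's wanted.
```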

The tracks featuring multiple inputs, which I had been forced to record post fader via sub groups, actually presented few problems. The fader moves done live translated well to the recording mix, balancing the instruments in the same way I had done live. Any wayward fader movements were easily corrected with fader automation.

The decision to record multiple shows in lieu of having a back-up system resulted in many hours of recordings to trawl through to find the best performances, but fortunately there were no issues with the equipment and we managed to capture every note of every show.

Synchronizing the audience tracks with the eight-track recording proved to be fiddly but ultimately straightforward. In most cases there were stick hits on the drum tracks that could be used to visually align them. I then fine tuned the alignment by panning the audience recording to one side and the eight-track recording to the other so that when I listened on headphones it was much easier to identify and correct any timing discrepancies.
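
That visual-plus-headphones approach is what was actually used; for completeness, a rough first-pass offset can also be estimated by cross-correlating the two recordings, as in this sketch (Python with NumPy/SciPy assumed, both files loaded as mono arrays at the same sample rate).

```python
# Estimate how far the audience recording is offset from the eight-track recording by
# cross-correlation. This is an alternative first pass, not the method described above.
import numpy as np
from scipy.signal import correlate

def estimate_offset(reference, other, fs):
    """Seconds by which `other` leads `reference`; positive means delay `other` to align."""
    corr = correlate(reference, other, mode="full", method="fft")
    lag = np.argmax(corr) - (len(other) - 1)     # peak position converted to a lag in samples
    return lag / fs

# offset = estimate_offset(drum_track, audience_track, fs=44100)
# Fine-tuning by ear, as described above, is still the final arbiter.
```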

About halfway through the project I needed to go out on tour again but the mix needed to be finished, leading me to break one last golden rule.

Rule 5: Never mix exclusively on headphones. When we listen to music via loudspeakers, our ears don’t just hear the signal from the nearest loudspeaker(s), they also hear signal from the further loudspeaker(s), which is slightly obscured by the acoustic shadow of our head. There are also reflections from the walls, ceiling and floor that combine in a complex way to create what we consider to be a natural sound.

But when using headphones, the left ear only hears signal from the left loudspeaker, and vice versa, which can make it harder to judge the sound and positioning of individual elements as well as the mix as a whole. This is why it’s generally considered a bad idea to mix exclusively on headphones.

The requirement to finish the recording mix while on tour meant I had no choice but to use headphones. Therefore, I had to trust the EQ and panning decisions made when mixing on loudspeakers, using them as my reference points for making any further changes to the mix. Every time I made a major change, it was saved as a different incremental version so that I could constantly reference back to the original to make sure I wasn’t straying too far from the path.

The Korg D888 recorder used on the project.

Different Path
The series of decisions forced by equipment restrictions ultimately resulted in spending much more time on the project than if I’d done it “by the book.” A healthy combination of stubbornness and perseverance helped in achieving the desired result of producing a releasable album, but this project would not have been possible without modern hard disk recording technology and DAW software.

While I enjoyed the challenges, this is not a path I recommend. Let it be an object lesson in how not to record a live album.

Andy Coules is a sound engineer and audio educator who has toured the world with a diverse array of acts in a wide range of genres.

Posted by House Editor on 05/04 at 12:01 PM

Friday, May 01, 2015

In The Studio: Preventing Hum And RFI

A handy guide to greatly reduce the likelihood of hum and radio frequency interference (RFI) in your studio system

You patch in a piece of audio equipment, and there it is: HUM!

This annoying sound is a common occurrence in sound systems. Hum is an unwanted 60 Hz tone—50 Hz in Europe—maybe with harmonics. If the harmonics are especially strong, the hum becomes an edgy buzz.

Your sound system also might be plagued by RFI (Radio Frequency Interference). It’s heard as buzzing, clicks, radio programs, or “hash” in the audio signal.

RFI is caused by CB transmitters, computers, lightning, radar, radio and TV transmitters, industrial machines, cell phones, auto ignitions, stage lighting, and other sources. This article looks at some causes and cures of hum and RFI. Following these suggestions goes a long way in keeping your audio clean.

Hum And Cables
One cause of hum is audio cables picking up magnetic and electrostatic hum fields radiated by power wiring in the walls of a room. Magnetic hum fields can couple by magnetic induction to audio cables, and electrostatic hum fields can couple capacitively to audio cables. Magnetic hum fields are directional and electrostatic hum fields are not.

Most audio cables are made of one or two insulated conductors (wires) surrounded by a fine-wire mesh shield that reduces electrostatically induced hum. The shield drains induced hum signals to ground when the cable is plugged in. Outside the shield is a plastic or rubber insulating jacket.

Cables are either balanced or unbalanced. A balanced line is a cable that uses two conductors to carry the signal, surrounded by a shield (Figure 1). On each end of the cable is an XLR (3-pin pro audio) connector or TRS (tip-ring-sleeve) phone plug.

Figure 1. A 2-conductor shielded, balanced line.

Each conductor has equal impedance to ground, and they are twisted together so they occupy about the same position in space on the average.

Hum fields from power wiring radiate into each conductor equally, generating equal hum signals on the two conductors (more so if they are a twisted pair). Those two hum signals cancel out at the input of your mixer, because it senses the difference in voltage between those two conductors—which is zero volts if the two hum signals are equal. That’s why balanced cables tend to pick up little or no hum.
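
A toy numerical illustration of that cancellation, assuming Python with NumPy (the levels and frequencies are arbitrary):

```python
# Hum induced equally on both conductors disappears when the balanced input takes the
# difference between them; on an unbalanced line it rides along with the signal.
import numpy as np

fs = 48000
t = np.arange(fs) / fs
signal = 0.5 * np.sin(2 * np.pi * 1000 * t)      # the wanted 1 kHz audio
hum    = 0.1 * np.sin(2 * np.pi * 60 * t)        # 60 Hz hum induced by power wiring

hot  = +signal + hum                             # conductor carrying the signal
cold = -signal + hum                             # inverted signal, same induced hum

balanced_input   = hot - cold                    # 2 x signal, zero hum
unbalanced_input = signal + hum                  # hum remains

print(np.max(np.abs(balanced_input - 2 * signal)))   # ~0: the hum has cancelled
```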

An unbalanced line has a single conductor surrounded by a shield (Figure 2). At each end of the cable is a phone plug or RCA (phono) plug. The central conductor and the shield both carry the signal.

They are at different impedances to ground, so they pick up different amounts of hum from nearby power wiring. There’s a relatively big hum signal between hot and ground that results in more hum than you get with a balanced line of the same length.

Figure 2. A 1-conductor shielded, unbalanced line.

Sometimes it’s impossible to avoid long unbalanced cables, and some cables used between pieces of equipment are unbalanced. An unbalanced line less than 10 feet long usually does not pick up enough hum to be a problem.

Wherever you can, use balanced cables going into balanced equipment. Keep unbalanced cables as short as possible (but long enough so that you can service them). Check inside cable connectors to make sure that the shield and conductors are soldered to the connector terminals. Route mic cables and patch cords away from power cords; separate them vertically where they cross. This prevents the power cords from inducing hum into the mic cables.

Also keep audio equipment and cables away from computer monitors, power amplifiers, lighting dimmers and power transformers.

Ground Loops
Another major cause of hum is a ground loop: a circuit made of ground wires.

It can occur when two pieces of equipment are connected to the building’s safety ground through their power cords, and also are connected to each other through a cable shield (Figure 3).

The ground voltage may be slightly different at each piece of equipment, so a 50- or 60-Hz hum signal flows between the components along the cable shield. It becomes audible as hum.

Also, the cable shield/safety ground loop acts like a big antenna, picking up radiated hum fields from power wiring. For example, suppose your mixer’s power cord is plugged into a nearby AC outlet.

The musicians’ amps are plugged into outlets on stage, so the mixer and amps are probably fed by two different circuit breakers at two different ground voltages.

When you connect an audio cable between the mixer and power amps, you create a ground loop and hear hum. To prevent ground loops, plug all audio equipment into outlet strips powered by the same breaker. (Make sure the breaker can handle the current requirements).

Figure 3. A ground loop.

Run a thick AC extension cord from the stage outlets to the mixer, and plug the mixer’s power cord into that extension cord. That way, the separated equipment chassis will tend to be at the same ground voltage—there will be very little voltage difference between chassis to generate a hum signal in the shield.

Caution: Some people try to prevent ground loops by putting a 3-to-2 safety ground lifter on the AC power cords. Never do that. It creates a serious safety hazard.

If the chassis of a component becomes accidentally shorted to a hot conductor in its power cord, and someone touches that chassis, the AC current will flow through that person rather than to the safety ground. Lift the shield in the receiving end of the signal cable instead, and plug all equipment into 3-pin grounded AC outlets.

Figure 4. Lifting the shield from the pin-1 ground in a male XLR connector.

Let’s explain the signal ground lift in more detail. The hum current in a ground loop flows in the audio cable shield, and can induce a hum signal in the signal conductors.

You can cut the audio cable shield at one end to stop the flow of hum current. The shield is still grounded at the other end of the cable, and the signal still flows through the two audio leads inside the cable.

So, to break up a ground loop, disconnect the cable shield from pin 1 in line-level balanced cables at the male XLR end (Figure 4). You can either cut the shield, or plug in an inline audio cable ground-lift adapter.

Removing the shield connection at one end of the audio cable makes the connection sensitive to radio-frequency interference (RFI). So solder a 100 pF capacitor between the shield and XLR pin 1 (Figure 5). This effectively shorts RFI to ground, but is an open circuit for hum frequencies.

Figure 5. Supplementing the lifted shield with a capacitor prevents RFI.
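
The arithmetic behind that 100 pF value is simple: the capacitor’s impedance magnitude is 1 / (2πfC), so it looks like an open circuit at hum frequencies and close to a short at radio frequencies. A quick check in Python (the spot frequencies are just examples):

```python
# Impedance of a 100 pF capacitor at a few example frequencies.
import math

C = 100e-12                                      # 100 pF

def z_cap(f_hz, c=C):
    return 1.0 / (2 * math.pi * f_hz * c)

print(f"{z_cap(60):,.0f} ohms at 60 Hz")         # ~26,500,000 ohms: open for hum
print(f"{z_cap(1e6):,.0f} ohms at 1 MHz")        # ~1,600 ohms
print(f"{z_cap(100e6):.1f} ohms at 100 MHz")     # ~15.9 ohms: a near-short for RFI
```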

Some engineers create a partial ground lift by placing a 100 ohm resistor between the cable shield and male XLR pin 1 (Figure 6). This limits the current passing through the cable shield but still provides a good ground connection.

Label the XLR connector “GND LIFT” so you don’t use the cable where it’s not needed. For example, mic cables must have the shield tied to pin 1 on both ends of the cable. The ground lift is only for line-level cables.

Here’s another way to prevent a ground loop when connecting two balanced or unbalanced devices. Connect between them a 1:1 isolation transformer or hum eliminator.

Other Tips
Even if your system is wired properly, hum or RFI may appear when you make a connection. Follow these tips to stop the problem:

• Unplug all equipment from each other. Start by listening just to the output of your studio monitors or PA speakers. Connect components to the system one at a time, and see when the hum starts.

• Remove audio cables from your devices and listen to each device by itself. It may be defective.

• Partly turn down the volume on the amps, and feed them a higher-level signal from your mixer (0 VU maximum).

• Do not wire XLR pin 1 to the connector-shell lug because the shell can cause a ground loop if it touches grounded metal. If you are sure that the shell won’t touch metal, wire XLR pin 1 to the shell lug to prevent RFI.

• Try another mic. Some dynamic mics have hum-bucking windings.

• If you hear hum or buzz from an electric guitar, have the player move to a different location or aim in a different direction. Magnetic hum fields are directional, and moving or rotating the guitar pickup can reduce the coupling to those fields.

• If the hum is coming from a direct box, flip its ground-lift switch.

Figure 6. A ground lift using a 100 ohm resistor and a 100 pF capacitor.

• Turn down the high-frequency EQ on a buzzing bass guitar signal.

• If you think that a specific cable is picking up RFI, wrap the cable several times around an RFI choke (available at Radio Shack or other electronics supply houses). Put the choke near the device that is receiving audio.

• Install high-quality RFI filters in the AC power outlets. The cheap types available from local electronics shops are generally ineffective.

• Connect cable shields directly to the equipment chassis instead of to XLR pin 1, or in addition to pin 1. Some equipment is designed this way to prevent the “pin 1 problem.” The cable shield should be grounded directly to the chassis, not to a ground terminal on a circuit board inside the chassis.

• Periodically clean connector contacts with Caig Labs DeoxIT, or at least unplug and plug them in several times.

By following all these tips, you can greatly reduce the likelihood of hum and RFI in your audio system. Good luck!

AES and SynAudCon member Bruce Bartlett is a recording engineer, microphone engineer and audio journalist. His latest books are Practical Recording Techniques (5th Ed.) and Recording Music On Location.

Posted by admin on 05/01 at 02:30 PM

Old Soundman: Beware Of The Evil DJ Invasion!

Hello Old Soundman:

Greetings from the shore, where the clams and oysters are fresh but the local celebrity DJ’s gear is not.

Then throw it out like you would do with bad seafood! But I guess that would get you in trouble. You probably want to keep your job – it sounds pretty swinging by the ocean.

I normally don’t have a problem with visiting DJs when I put them on stage to spin tunes. I set my channel EQ flat, give them enough juice to power their mixer, and then sit back, thankful I have an easy night ahead of me.

This was the norm for me for quite a while – until recently.

What happened recently? I sense that it wasn’t nothin’ nice, as they say in the pen.

The club recently booked a well-known local DJ to bring in some extra paying customers to help pay the bills on slower nights.

Did they ever consider half-nekkid young ladies prancing around? That has proven to be quite profitable in some establishments in my neck of the woods.

This particular DJ is famous for drinking, blowing out speakers, drinking, insane fader and gain maneuvers, drinking, and last but not least, running a mixer that was state of the art 15 years ago.

Then put him in the way-back machine! He needs to do his drinking somewhere in the past and not bother you any more!

I put this guy on a limiter, he freaks; I play the house EQ game, he freaks; I refuse to turn up my mains, he freaks; I stand there and shake my head no, he freaks.

You are an excellent torturer – I’m very proud of you!

I actually get enjoyment out of doing all of this, but I’ve been told to help him out because, once again, he is “the guy” that people come to see. He also has a sidekick who “helps” him (setting up his gear, booze and lights).

That is the problem. You need to eliminate that guy. And then the DJ will be helpless.
Like the Lone Ranger without Tonto.
Like Batman without Robin.
Like Conan without Andy.

Well, you get the point…

I know you’ve dealt with this in the past and would appreciate any advice. I’ve tried every trick I know to appease this guy, and he doesn’t realize that job security is a two-way street when it comes to performer (sorry, DJ!) and the house sound tech.

Thanks for letting me vent,

I find that a compressor with a brickwall limiter allows you to resume your relaxation.

Also if you happen to have an ex-con friend who is about as big as a house, with tattoos on his neck, he can be really helpful in dealing with guys like this DJ and his buddy.

Luv -
The Old Soundman

Posted by admin on 05/01 at 09:58 AM

Thursday, April 30, 2015

Industry Insight: Planning For Profit In Your Sound Reinforcement Company

This is a great business.

We all love the romance of providing sound systems for the top musicians in the world, world political leaders and other luminaries. The travel is fun, the adrenaline rush is great. What’s not to like?

Well… It just seems really hard for the companies that own all this equipment to make much money.

Let’s take a hard look at pricing your services and maybe move toward some changes. It’s vital to start thinking in terms of “planning for profits.” But before beginning, I have good news and bad news.

First, the good news. Because the market sets the price for the equipment and labor that we put on the road, and competitors that have been around for a long time have set the accepted market rates, it will take a long time for a new company to go out of business.

Now, the bad news. If you don’t plan carefully and understand overhead costs in your business, then you’ll go out of business. And fast. What do I mean?

As a consultant, I spent a fair amount of time helping companies price their products and services. Sometimes I worked with manufacturers, and sometimes I worked with service providers like sound companies - the same types of companies whose staff typically reads this publication.

What I saw consistently, regardless of whom I was working with, were real problems with pricing. Simply put, most companies don’t understand it. Instead, they rely on the market to set the price and take a given job (or choose to pass on it) based on an emotional rather than fiscal response.

How a sound rental company arrives at prices for its goods and services is a very interesting proposition. As most of you are aware, the government restricts companies from setting pricing as a group. This would be a violation of anti-trust statutes and people do sometimes go to jail for breaking laws along these lines.

Most sound company owners and managers lack the financial skills and experience to understand pricing strategies, and usually resort to a “forensic accounting” method of managing their business. That is to say, they add up all of the expenses and revenue at the end of each reporting period and feel good about the profit, or bad about the lack of it.

While we can’t all get into a room together and set pricing, there are some components of setting pricing structure that are critical to profitably running a successful business over the long haul.

Rule number 1: Allocate your overhead to the planned business you’re expecting over the next year.

Overhead is made up of all of the costs associated with the activities in your business that are not billable directly to the client.

Rent, utilities, insurance, shop tools, indirect labor like administrative support staff, etc. all need to be factored into your pricing model to make sure that, at the end of the year, your company is profitable.

Rule number 2: Develop a pricing model that breaks out the following components each time you quote a job:
  • The cost of the equipment that goes out on the project;
  • The cost of the labor that is directly linked to the job;
  • The overhead cost of running your business allocated to the job.

Rule number 3: Charge not only for the equipment that goes out of the shop for tours and projects, but also for the equipment that does not leave the shop as often. Everyone has “dog” inventory; everyone also has “funky” equipment in inventory that only goes out once in a while.

In a perfect world, you sell off the dog inventory and never buy anything else like it again. Likewise, funky inventory is best sold to a friendly local company that you can rent it back from on those rare occasions when it’s needed. If you have to carry inventory in these categories that doesn’t get rented out much, the carrying cost needs to be built into your pricing model.

Rule number 4: Old equipment that you essentially got for nothing should be billed as if it were new. If you don’t bill the amount of the replacement cost for this equipment, you’re charging less than you would if you had to buy a whole new system to get the job.

In other words, as your business expands, you’re going to hit a point where you’re locked into old system pricing but have equipment costs that are much higher.

If you don’t create a budget plan to compensate for this, expect a lower margin for every new customer added to your client roster. Rank your customers, and resign accounts that are less profitable in favor of new customers willing to pay more for your company’s unique value proposition.
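
Pulling rules 1, 2 and 4 together (with rule 3’s carrying cost folded into the equipment percentage), a quote might be built up along these lines. This is only a sketch; every number below is an illustrative assumption, not a recommended rate.

```python
# A toy quoting model: equipment + direct labor + allocated overhead, plus margin.
# All figures are made up for illustration.

annual_overhead = 250_000                # rent, utilities, insurance, admin staff, etc.
planned_show_days = 200                  # billable days realistically expected next year
overhead_per_day = annual_overhead / planned_show_days   # rule 1: allocate overhead

def quote(equipment_replacement_value, days, crew_day_rates, margin=0.15):
    """Rule 2: break out equipment, direct labor and allocated overhead on every job."""
    # Rule 4: bill gear against its replacement value, even if it was bought for a song.
    equipment = 0.05 * equipment_replacement_value * days
    labor = sum(crew_day_rates) * days
    overhead = overhead_per_day * days
    return (equipment + labor + overhead) * (1 + margin)

# A three-day gig with a $180k replacement-value rig and two techs at $400/day each:
print(round(quote(180_000, 3, [400, 400])))
```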

The three most critical issues in running a business today? Planning, planning and planning! You must plan for each month, and review your plan on a regular basis. If things are “going off the rail,” the sooner the problem is seen, the sooner necessary changes can be made to fix it.

The secondary benefit is that if you meet with a bank about a short-term loan to cover a problem, it will be a far easier task if the loan officer sees that you have a plan. And that you manage by it.

In this case, the bottom line is indeed the bottom line. It’s imperative to the present and future of your business to understand overhead and cost structure.

Running a sound company is hard enough; you shouldn’t have to wait until the end of the year for the outcome, like trying to finish a really long novel. It can be particularly tough when it comes out like a Greek tragedy.

Michael MacDonald is the president of leading production company ATK Audiotek, based in Valencia, CA, and has been involved in the professional audio industry for more than 30 years. Beginning as a freelance mixer/engineer in the 1970s, he transitioned to working for manufacturers and has been employed by, developed products for, and consulted with major companies such as JBL Professional.

Posted by Keith Clark on 04/30 at 01:08 PM

Wednesday, April 29, 2015

Taking An Audio Power Trip

Trying to characterize amplifiers or loudspeakers only by power ratings is akin to trying to completely characterize a person by a photograph. There is always much more than meets the eye.

Few subjects generate more confusion in the audio world than power. There is a very good reason for this—it’s a confusing subject, and one that can easily fool our intuition.

Most of us are on a power trip—our attitude is that “more is better.” We want bigger amplifiers and more “powerful” loudspeakers so that our sound systems will be louder.

In fact, power ratings are often the main (or only) criteria considered regarding amplifiers and loudspeakers by equipment buyers. But here, let’s take a bigger look at the role of power in sound systems, hopefully without diminishing or overemphasizing its importance.

Power ratings are only one piece of a larger puzzle.

To form our understanding about power, let’s initially forget about sound systems (with the exception of an occasional reference) and consider power in light of other ways that we use it in daily life.

We will begin with some basics. Power is both generated and consumed. From the perspective of generation—more is better. We always want to have more power available than what we need. From the perspective of consumption—less is better. If a task can be accomplished using less power, we save money since power generation usually costs money.

Power is wasted if it is not doing something useful. In sound systems, amplifiers and loudspeakers are both consumers and generators of power. The amplifier consumes power from the electrical service and generates power to drive the loudspeaker. The loudspeaker consumes power from the amplifier and generates sound power into the room.

Rating methods are used to describe both power generation and consumption (both are in watts). Great care must be taken to be sure which one a power rating is describing, since a larger rating may be better for a power generator, but a smaller rating may be better for a power consumer.

Universal Principles
No, this isn’t a self-help infomercial—it’s a discussion of some of the properties that affect the flow of power. Power principles are analogous in electrical and mechanical systems. Mechanical examples are more prevalent in our everyday lives, so it is easier to look at power from that vantage point.

We begin with energy. In fact, everything began with energy. Energy sources include power plants, automobiles, locomotives, bombs, animals and humans, and less-obvious sources such as plants, water, wind and even garbage.

All energy sources must get their energy from somewhere else, making the “big picture” question a religious one (we won’t go there). Most of the energy on planet Earth comes from the sun, the ultimate power source in our physical sphere of existence.

Let’s get the terminology in place. Energy is measured in joules. It can just sit captive (potential energy) or it can be put into motion (kinetic energy), be it water turning a turbine or a husband taking out the trash.

Work is the result of using a force to move something over a distance, so work is equal to force times distance. Power is the rate of doing work.

That’s an important point, and we will come back to it. Power can be rated in watts, with one watt being equal to one joule per second. So for power to be generated, work must be done for a period of time. If the time span is reduced, so is the generated power. Remember that when you look at specs such as “instantaneous peak power.”

You As Power Generator
A good way to get a better feel for power generation and consumption is to consider a generator that we all possess—our bodies. We humans consume energy in the form of food, store energy in the form of fat, and then burn it up in the course of day-to-day living. We convert energy from one form to another.

It is a process that we continually experience but rarely measure, other than the occasional reluctant glance at the bathroom scales, which provides a composite view of intake vs. output since the last glance.

Let’s attempt to measure power by hopping on an exercise bike—one that reads in watts, but could just as well read in calories, cycles/min, heart rate, or even body temperature.

There are many ways to measure power, and most methods are just estimates. Start pedaling and watch the display. If you work pretty hard, you can get it up to 100 joules per second (100 watts). As you pump the pedals, you are generating 100 watts of continuous average power.

If you do this for one hour, you have generated 100 watt-hours. If your bike were hooked to the power grid, you could sell your hour’s effort to the utility company for about a penny, assuming that they can convert all of the generated power into electricity.

After observing your efforts for a period of time, we could ascribe a rating that indicates how well you perform as a generator. The rating would indicate how much power you can produce on a continuous basis. The rating could be in watts, but there are other possibilities.

We could compare you to a horse and specify your abilities in horsepower. One horsepower equals 746 watts, which would put your power rating at about 0.13 horsepower. So it would take seven or eight people to generate as much power as one horse, which is why a good horse (or the modern equivalent, a tractor) might be considered of greater value than a person, at least when it comes to farm work.
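
The quick arithmetic behind those comparisons, as a sketch in Python (the electricity price is an assumed retail figure of roughly 12 cents per kWh; everything else follows from the definitions):

```python
pedal_watts = 100
hours = 1.0

energy_kwh = pedal_watts * hours / 1000                 # 0.1 kWh for an hour of pedaling
price_per_kwh = 0.12                                    # assumed retail $/kWh
print(f"${energy_kwh * price_per_kwh:.3f}")             # about a penny's worth of electricity

watts_per_hp = 746
print(f"{pedal_watts / watts_per_hp:.2f} hp")           # ~0.13 horsepower
print(f"{watts_per_hp / pedal_watts:.1f} people per horse")   # ~7.5 people to match one horse
```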

A person who could maintain 100 watts of power generation longer than another would be considered a more powerful source. If we exclude the time metric, then a meaningful comparison between two power sources would not be possible. If no time metric is stated, then the assumption is that the source can sustain its rated power indefinitely (not likely for our human generator). Remember this when you shop for amplifiers!

The Electric Bill
The utility company generates power by burning things, turning things, flowing stuff, or causing chain reactions—and we consume it. None of us boast about how much power we use, but rather about how little power that we can get by on.

The loudspeaker must be considered on this basis. Don’t just tell me what it uses, tell me what it produces!

Shopping for loudspeakers by looking for the one with the highest power rating might be like shopping for cars using only the miles-per-gallon rating, and then picking the one that gets the worst mileage!

We all know experientially that if we move something from point A to point B, there is always a force present that opposes the movement.

In a mechanical system, one such force is friction. Friction converts some of the applied power into heat (another form of energy). The analogous parameter in an electrical circuit is resistance. Resistance opposes the flow of current through a component or conductor, and dissipates some of the applied power in the form of heat.

Power is paid for by the kilowatt-hour, and the prudent owner or renter tries to get as much benefit as possible out of the least amount of consumption.

Resistance forms a load on the power source—something which it must overcome. Reducing the resistance causes more power to flow due to the lower opposition, and we say that the power source is under a greater load. This is why two loudspeakers in parallel cause an amplifier to produce more overall power than it would into a single loudspeaker (but usually less power per loudspeaker).

A bucket with two holes in it loses its water twice as fast as a bucket with a single hole in it. There is less resistance to the water leaving the bucket, and hence more flow.
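
Worked numbers for the parallel-loudspeaker case, treating the amplifier as an ideal voltage source (a real amplifier sags into lower impedances, which is why the per-loudspeaker figure usually ends up somewhat less):

```python
# Power into one 8-ohm loudspeaker versus two in parallel, at a fixed output voltage.
v_rms = 28.3                          # ~28.3 V RMS is 100 W into 8 ohms

def power(v, r):
    return v * v / r

one_speaker  = power(v_rms, 8.0)      # ~100 W total
two_parallel = power(v_rms, 8.0 / 2)  # 4-ohm load: ~200 W total, ~100 W per loudspeaker

print(round(one_speaker), round(two_parallel))
```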

Power consumption is all about heat. If there is no heat, there is no power consumption. The factor that determines a loudspeaker’s power rating is its ability to dissipate heat. An overpowered loudspeaker is one that is getting too hot.

Note that there are other ways to devour loudspeakers besides toasting them (over-excursion for example), but these are not necessarily power issues.

The Light Bulb Deception
One reason for the confusion surrounding power in audio systems can be attributed to the light bulb. Light bulbs are rated in watts, and we all draw a correlation between the wattage rating and the brightness. This produces a “more power, more light” mentality, which many intuitively apply to sound.

But if you read the package closely, the real parameter of interest regarding the light generating properties of a light bulb is its luminosity—the lumens generated by the applied power. The power rating indicates how much power is dissipated in the process of generating the rated number of lumens.

Both numbers must be considered to evaluate the bulb’s performance. If I can get more lumens and consume less power, I have a better bulb—assuming that I have not compromised some other key parameter, such as longevity.

Like the light bulb, a loudspeaker has a wattage rating that indicates how much power it consumes continuously as it does its job (its job being to produce “X” amount of sound). But the real parameter of interest is the amount of sound power that results from the consumed electrical power. In fact, most of the applied power gets converted into heat rather than sound.

The sound power, like the electrical power, is rated in watts. A perfectly efficient loudspeaker would convert all of the applied electrical power into sound, with no resultant heat, so one watt of electrical power would yield one watt of acoustic power. 

In reality, the conversion rate is much lower, typically less than 25 percent for compression drivers and less than 10 percent for cone loudspeakers (ironically similar to the light bulb’s efficiency, with similar heat production!). It’s a good thing that we can’t touch those voice coils during use.

The ratio between the applied electrical power and the radiated sound power is the loudspeaker’s efficiency, which indicates the percentage of electrical power that is converted into sound power.

We can already see that it may make more sense to increase a loudspeaker’s efficiency than to increase its power handling. If less is wasted, we don’t need as much from the source. And some means of increasing the power handling of a loudspeaker actually reduce its efficiency, yielding a higher power rating but less sound production!

So it is entirely possible that a loudspeaker with a lower power rating is actually a better transducer than one with a higher rating. The only way to know is to consider the efficiency, which is something that is often neglected by equipment buyers.
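
To put rough numbers on that, here’s a sketch (Python assumed). The 112 dB constant is the commonly quoted half-space approximation relating efficiency to 1-watt/1-meter sensitivity; treat it, and the 2 percent efficiency figure, as ballpark illustrations rather than specifications.

```python
# How much of the applied electrical power becomes sound, and what that efficiency
# implies for sensitivity under the common half-space approximation.
import math

electrical_watts = 100.0
efficiency = 0.02                                     # 2 percent, a typical cone-driver ballpark

acoustic_watts = electrical_watts * efficiency        # 2 W of sound
heat_watts = electrical_watts - acoustic_watts        # 98 W of heat in the voice coil and magnet

sensitivity_db = 112 + 10 * math.log10(efficiency)    # ~95 dB SPL at 1 W / 1 m
print(acoustic_watts, heat_watts, round(sensitivity_db, 1))
```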

Back To Sound Systems
Now let’s bring all of this back to sound systems. The power generators in a sound system are the amplifiers and loudspeakers.

Notice that I didn’t say power amplifiers. Power isn’t amplified, it is generated. An amplifier that is rated at 100 watts continuous can do just that - generate 100 watts continuously just like the dude on the exercise bike. And while it must be measured using a load, the available power is actually independent of the load.

A greater or lesser load does not change how much power is available from a source. An automobile rated at 240 horsepower has the same available power whether it is coasting down hill or pulling a trailer. You are just more likely to max it out pulling the trailer. A light bulb rated at 1000 lumens can do so in either full sunlight or complete darkness.

Amplifier specifications do not usually describe available power, but rather how much power can be generated into a specific load, such as an 8-ohm loudspeaker. This number is always less than the available power from the amplifier to allow for stable operation, longevity and high fidelity.

Although it is often overlooked, the power that the amplifier consumes from the electrical service is also important. Some amplifier types consume less power (and run cooler) than others in the course of providing their rated power. Others may serve us well as a space heater while generating a few watts for our hi-fi system.

Loudspeaker Ratings 101
The electrical power rating for a loudspeaker is a measure of consumption. Loudspeakers consume electrical power and convert it into heat and sound via mechanical motion. Since the sound is the part we are interested in, the main parameter of interest should be how much sound we get for the applied power, not how much power we can apply.

The ideal loudspeaker would generate the desired sound level consuming no electrical power. But because this is impossible, we must rate the loudspeaker’s ability to dissipate the waste.

The electrical power rating of a loudspeaker is a measure of waste removal, not a measure of sound production. A higher wattage rating is a good thing only if it was achieved by a method that doesn’t reduce the efficiency. Like the light bulb’s luminosity, the loudspeaker’s sound power is the most important specification.

And as important as it is, you won’t often find efficiency ratings on a specification sheet. Instead you will find sensitivity ratings—numbers which describe the sound levels that result from confining the sound power to smaller areas (directivity) and increasing the power transfer to the air (horns, baffles, etc.).

Hopefully this clarifies some of the relationships and terminology regarding power generation and consumption.

Pat & Brenda Brown lead SynAudCon, conducting audio seminars and workshops online and around the world. For more information go to www.prosoundtraining.com.

Posted by Keith Clark on 04/29 at 06:03 AM

Tuesday, April 28, 2015

Bruce Swedien On Vocal Recording Microphone Techniques

To me, the first, the most interesting, the most capable musical instrument of all is the human voice. It has the ability to communicate a wide variety and range of emotion. A well-trained vocalist can easily accomplish delicate timbral shading as well as an amazingly wide dynamic range.

The range of instantly recognizable vocal personality is astonishing. In other words, two voices can have the same range classification, so by category we could say that they are the same instrument, but the different vocal personality of the two individuals is fantastically clear and apparent.

Let’s talk a bit about the basic theory of voice production. The human voice can be regarded as a musical tone-generating system consisting of an oscillator and a tube resonator. In short, it is called the vocal tract. The sound that radiates from the vocalist’s vocal tract contains the individual physical peculiarities that help the vocal system shape a sound with a unique sonic character.

Looking at the vocal tract just a bit more medically, we can say that the voice organ is an instrument consisting of a power supply (the lungs), an oscillator (the vocal folds), and a resonator (the larynx, pharynx, and mouth). Singers adjust the resonator in special ways to produce music.

Even if two singers of the same voice classification sing the same vowel on the same pitch, we hear a distinct timbre difference, which enables us to discern that this is singer X and that is singer Y. This incredible range of sonics has made the voice a very fascinating subject for the music-recording person. It follows, then, that microphone choice and recording technique for a vocalist are among the most important jobs we will encounter in the studio.

My father was a choir director in our church, and my mother was a fine vocal soloist, so I guess it’s only natural that recording the human voice has been of special interest to me since the beginning of my recording career. My mother sang with, and was a featured soloist in, the Minneapolis Symphony’s Women’s Chorus. So, as a kid I went to chorus rehearsals with her, and in addition heard many Sunday afternoon concerts with that world-class musical organization.

My early years in the business were spent in my home town of Minneapolis, listening to and recording many of the fine church and college choirs of that area. Hearing those excellent vocalists sing in good acoustical surroundings gave my ear a benchmark that has been impossible for me to ignore. This valuable experience has stuck with me and has been a big help throughout my career.

Let’s talk for a minute about the technical aspects of the human voice. While the human voice is quite limited in frequency range, its sibilant sounds (the high, hissing sound present in “S,” “T,” and “F” – mainly the “S” sound) extend well into the high-frequency spectrum. The subtle yet extreme shading of dynamics (range of soft volume to high volume level) and great variation in timbre (i.e., a scratchy, harsh voice versus a soft, delicate voice) equals or exceeds that of any other musical sound source.

When recording a solo or lead vocal, it is also very important to consider the type of music to be performed. Generally speaking, jazz may be treated in a similar way to classical music. I almost never use an extreme close-mike solo vocal technique in either jazz or classical. A classical solo vocal always demands an even more conservative approach than a jazz vocal. The type of music to be recorded frequently dictates whether the lead vocal must be recorded at the same time as the orchestra. When all the musicians and singers are recorded at the same time, this is usually referred to as a “straight-ahead” session.

On a straight-ahead session, the lead singer is most often placed in a vocal booth, a smallish, satellite studio that affords good isolation of sound but allows the singer or singers to see the musicians and hear them through headphones or a small speaker.

Alternatively, you can record the lead vocalist while he or she is in the studio with the musicians. To accomplish this, use a group of “gobos,” or isolation flats, to screen off some of the orchestral sound from the vocal mike. This type of recording requires a musical arranger who is very aware of the problems particular to this style of recording. Most often in pop music, the rhythm tracks are recorded first, then the vocals, and then the rest of the orchestra. This technique allows the engineer a great opportunity to experiment with different sounds.

As far as mike technique goes, the pop music field is wide open to our imagination.

This is one thing that has always made it very exciting for me as an engineer. I have always enjoyed combining mike and recording techniques to come up with a “hybrid” sound.

Of all the types of music that I have recorded, I love popular music the most for one simple reason: In recording and creating sound images in pop music, I am limited only by my imagination and the equipment at hand.

There are few boundaries or restrictions. It is incredibly exciting to create sound-fields that could not possibly occur in nature, and yet I can put a little taste of reality there, to give the ear a focal, or grounding, point to relate to.

Recording Vocals
Recording vocals for any type of music requires a good deal of thought and preparation. Whether it’s a solo voice, a choir of eighty voices, or a back-up chorus of five singers, there are many things to consider.

First off, the type of music to be recorded is most important. Pop, jazz, rhythm and blues, country, and classical all require a different approach. The biggest single difference in studio mike technique for vocal recording comes from recording of vocal sound sources in classical music, contrasted with pop vocal sound sources. The first and most important consideration is that I would never mike the vocalist in a classical recording as closely as I would a vocalist in a pop music recording.

Second, the vocal effect is important to consider. In other words, in a group vocal, is a choral effect with a massive sound desired, or should it be a smaller, warm, intimate vocal group sound? Occasionally, a mixture of the two can be musically very pleasing.

Good choral recorded sound is best achieved by using as few microphones as possible, with the singers placed well back from the microphones. This technique places most of the sound mixing responsibility on the room acoustics and the vocalists. Obviously, this approach requires an excellent studio or a room with extremely good acoustics.

This technique coupled with really good singers and a fine room yields a result that is not merely satisfying, but a thrilling musical experience. Close-miked vocal group sound requires several mikes and places most of the sound-mixing responsibilities on the engineer. It also removes a great deal of acoustical support from the sound.

When using this technique, it’s probably best to divide the miking first by voice quality, and next by harmony parts in the vocal arrangement. As a rule of thumb, you can figure four or five singers require two mikes and ten singers, five mikes. The singers work from 6 inches or less to 2 inches or more from the mikes.

With excellent singers, the result is very pleasing. When doubling, or stacking, vocal parts, I like to do a stack, or “layer,” with the singers moved back from the mikes far enough to add some early reflections to the sound. In choosing a microphone and recording technique for a solo or lead vocal in a pop or rock recording, the most important thing to consider is the artist’s vocal timbre.

To review, timbre is the sound characteristic of the voice (is it soft and breathy, or is it loud and penetrating?).

Your choice of vocal microphone should be made on the basis of the artist’s vocal quality and the sonic personality you want to project – nothing else.

In this instance, we must remember to think of music recording as an artist would think of trying to capture a scene on his canvas. (Our canvas is our recording medium.)

Like the artist, we cannot capture the true reality of the scene, and it would be a mistake to even try. We must project our interpretation of that reality.

As in everything else in music recording, there are no set rules, so at first you must realize some experimentation is usually in order. What may seem to be an obvious choice may not work well at all.

After a bit of experience, you will be able to hear someone speak or rehearse a vocal part, and you will instinctively know what mike will be a good choice.

Extreme equalization is most definitely not the way to achieve a superb vocal recording, though a small amount of EQ may be beneficial. If you find yourself having to apply a great deal of EQ to the mike channel to achieve an acceptable vocal sound, it’s time to try another microphone.

Here are some suggestions:

Well rounded, naturally good-sounding voice
Neumann U 47 tube mike
Telefunken 251
AKG C-12
Audio-Technica 4050
Sony C800-G
AKG C 414 eb
Neumann U 87
Neumann U 67

Good voice, thin, but not too sibilant

Neumann U 47 tube mike
Neumann U 47 fet
Audio-Technica 4050
Sony C800-G
AKG C 414 eb
Telefunken 251
Neumann U 67

Soft, quiet voice with little projection
Neumann U 47
Telefunken 251
Shure SM7
Audio-Technica 4050
AKG C 414 eb
Neumann U 67

Thin, weak voice (using the proximity effect to dramatize the low register)
RCA 44bx

Loud, brassy voice with good projection
Shure SM7
AKG C 414 eb
RCA 44bx

Good voice, but too sibilant
Shure SM7 (windscreen)
RCA 44bx
Neumann U 87 (windscreen)
Neumann U 67 (windscreen)

Of course this list could go on and on, but these suggestions should give you an idea or two.

Stacking, or “doubling,” a lead vocal is helpful. I frequently change the tape speed of the master recorder slightly during the recording of the “double,” or “stack” (or pitch down the cue mix coming from the digital audio workstation) and play that to the vocalists while recording the “double.” During this process, the amount of pitch change used should be very small.

To use this vocal recording technique, you must have a singer who is extremely good, with an excellent sense of relative pitch, because, of course, changing the speed of the tape machine when recording the double changes the pitch of the basic track.

When this sped-up or slowed-down vocal double is combined with the original, it seems to add a bit of sonority (a full, rich quality) to the lead vocal that makes it more interesting.

I ordinarily slow the music playback down during the recording process, not speed it up, if I am using this technique. If I am working with a digital audio workstation, the process is almost the same. I pitch down the cue mix coming from the DAW and play that to the vocalists while recording the “double.” The amount of pitching down that I do is usually only 3 or 4 cents in pitch.
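
For those who like to see the numbers, 3 or 4 cents is a very small change indeed. The cents-to-ratio conversion below is standard (1,200 cents per octave); the example values are just to show the scale of the shift.

```python
# What a 4-cent pitch-down works out to as a speed/pitch ratio.
cents = -4                                     # pitching the cue mix down 4 cents
ratio = 2 ** (cents / 1200)                    # ~0.9977
print(f"playback ratio: {ratio:.5f}")          # about a 0.23 percent slow-down
print(f"440 Hz becomes {440 * ratio:.2f} Hz")  # ~438.98 Hz
```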

When mixing a “stack,” or “double,” of a lead vocal track, I frequently keep the “double” at a slightly lower level in the mix than the basic lead vocal track. This serves to add support to the vocal without making it appear to be an obvious trick.

The final playback medium of the recording is important to consider when preparing for a vocal recording. Whether the eventual product is to be monaural or stereo should influence your vocal miking technique. If the eventual hearing medium is going to be monophonic, and if it is a complex vocal arrangement with few singers and a lot of harmony and interacting parts, you will probably need more mikes than if it is a basically unison background vocal arrangement.

By adjusting mike levels during the recording process, we can make sure that each part in the arrangement can be heard clearly and in its proper relationship to the other parts of the vocal arrangement. This, of course, is true to a degree in stereophonic recording, but monophonic recording gives no “panoramic” acoustical support to the vocal parts.

If the final product is a stereophonic recording medium such as a phonograph record or a motion picture score, I always try to preserve as much natural, acoustical stereo sound as possible and then keep this audio information as close to the original soundfield as possible right through to the final mix.

Here is how I would record a vocal group of five singers. My first choice of mikes would be two high-quality, good-condition, large-capsule condenser microphones such as the AKG C 414 eb or similar. Lately, I have been using my Audio-Technica 4050s with a great deal of success. The singers are positioned facing each other. The mikes are placed close together, back-to-back about 4 inches apart, or less.

This method of keeping the mikes close together allows some mixing of sound to occur acoustically. It also gives good phase coherency, so that when the mix is heard monaurally there will be no change in balance or quality. (Phase coherency is achieved by keeping the mikes close together so that sound from each source arrives at the two mikes at about the same time, thus minimizing phase distortion.)
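To get a rough feel for why a few inches of spacing is safe, here is a back-of-the-envelope sketch. The numbers are assumptions for illustration only: it treats the full 4-inch spacing as the worst-case extra path a voice could travel to the second capsule and shows where the first comb-filter cancellation would land if the two tracks were summed to mono with that delay between them.

# Back-of-the-envelope sketch with assumed numbers, not a measurement:
# worst-case arrival-time difference for a given capsule spacing, and the
# first comb-filter null if the two tracks were summed to mono with that
# full delay between them.

SPEED_OF_SOUND = 343.0  # meters per second, at roughly room temperature

def worst_case_delay_ms(spacing_m: float) -> float:
    """Arrival-time difference for a source directly in line with both capsules."""
    return spacing_m / SPEED_OF_SOUND * 1000.0

def first_null_hz(delay_ms: float) -> float:
    """First cancellation frequency of the resulting comb filter: 1 / (2 * delay)."""
    return 1.0 / (2.0 * delay_ms / 1000.0)

for spacing_in in (4.0, 2.0, 1.0):  # back-to-back spacings in inches
    spacing_m = spacing_in * 0.0254
    delay = worst_case_delay_ms(spacing_m)
    print(f"{spacing_in:.0f} in spacing: ~{delay:.2f} ms delay, first mono null near {first_null_hz(delay):.0f} Hz")

# Even the worst case at 4 inches is only about 0.3 ms. In practice the
# singers face the pair, so the real arrival difference is a small fraction
# of that and any cancellations sit well above the range that matters.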

I would record a basic vocal track using two channels of the multitrack tape, one mike on each track. I would then ask the singers to step back from the mikes about 2 feet or so, and record a stack (double) of the original part. This also would be recorded in discrete stereo on two channels of the multitrack.

It is very important to carefully watch the volume levels of the individual vocal tracks and keep them as consistent as possible. Because the singers step back from the mikes on this pass, keeping the track levels consistent forces us to raise the gain of the two mikes, which brings in more of the room and thus gives greater acoustical support to the sound.
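As a rough, hypothetical illustration of how much gain that step back costs (the article only says “about 2 feet or so,” so the starting distance below is an assumption): in a free field the direct sound from a point source drops by about 20 times the log of the distance ratio, in decibels.

import math

# Hypothetical illustration of why stepping back forces the mike gain up:
# in a free field, level from a point source falls by 20 * log10(d2 / d1) dB.
# The 1 ft starting distance is an assumption, not a figure from the article.

def level_drop_db(d_near_ft: float, d_far_ft: float) -> float:
    """Approximate drop in direct-sound level (dB) moving from d_near to d_far."""
    return 20.0 * math.log10(d_far_ft / d_near_ft)

drop = level_drop_db(1.0, 3.0)  # e.g. a 1 ft close pass versus a 3 ft stacked pass
print(f"Moving from 1 ft to 3 ft drops the direct sound about {drop:.1f} dB,")
print("so the mike gain comes up by roughly that much -- and the room comes up with it.")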

Finally, I normally mix these four tracks in the final mix in the same proportion on the same side of the stereo panorama as they occurred during the performance. In some recordings of vocal groups, I hear the stereo tracks flopped over or reversed in the stereo panorama in the final mix in an effort to get a mixture, or as my old pal Phil Spector calls it, a “Wall of Sound.” The problem with this, to my ear, is that the acoustics also mix in reverse order.

This technique, to my way of thinking, removes much of the personality, or character, of the recording environment from the sound-field, sometimes resulting in a bland, not-too-interesting vocal group sound. I must take this opportunity to point out that this is entirely a matter of personal taste.

Very often in vocal recording, I hear a lot of single, monaural tracks merely panned either left or right in a halfhearted attempt at stereo sound. All this really creates is left and right mono; I call this technique “Two-Channel Monaural.” It has nothing to do with true stereo recording and affords little or no acoustical support to the music. The additional effort and planning required to preserve real stereo, and the acoustical support it provides, is well worth it.
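A quick, purely illustrative sketch of that point (not from the book): panning a single mono track only distributes the same signal between two channels, so left and right stay perfectly correlated, whereas a genuine stereo pickup carries two partially different signals.

import numpy as np

# Illustrative sketch only: a constant-power pan of a mono track produces
# two perfectly correlated channels ("two-channel monaural"), while a real
# stereo pickup carries two related but distinct signals.

def constant_power_pan(mono, position):
    """position: -1.0 = hard left, 0.0 = center, +1.0 = hard right."""
    angle = (position + 1.0) * np.pi / 4.0
    return mono * np.cos(angle), mono * np.sin(angle)

rng = np.random.default_rng(0)
mono = rng.standard_normal(48000)  # stand-in for one second of a vocal track
left, right = constant_power_pan(mono, -0.5)
print(f"panned mono, L/R correlation: {np.corrcoef(left, right)[0, 1]:.3f}")  # 1.000

# Crudely faking a stereo pair by mixing in different room components gives
# a correlation noticeably below 1 -- two related but distinct channels.
room_l, room_r = rng.standard_normal(48000), rng.standard_normal(48000)
stereo_l, stereo_r = mono + 0.7 * room_l, mono + 0.7 * room_r
print(f"simulated stereo pair, L/R correlation: {np.corrcoef(stereo_l, stereo_r)[0, 1]:.3f}")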

When recording vocal duets, I frequently look for microphones for the vocalists that have an obviously different sonic character. This difference in microphone character adds to the already different timbre of the two voices and makes the resulting sonic picture more fascinating.

This article is excerpted from Make Mine Music, a fantastic book by a true recording legend. To acquire the book, click over to musicdispatch.com.

Posted by Keith Clark on 04/28 at 02:18 PM

Church Sound: Are In-Ear Monitors Right For Your Ministry?

This article is provided by Church Audio Video.

In-ear monitors (IEMs) are finding their way onto the wish list of nearly every church technical director and worship leader these days. But are they the right choice for your ministry?

Answering “yes” to any of the following questions may provide direction.

• Does your church suffer from excessive stage noise?

• Does your sound operator have difficulty establishing a decent mix?

• Does your sound (especially your music) always seem a bit muddy?

• Does your talent always complain about the monitor mix?

• Does your talent wish that they had “more me” in the monitors?

OK, so I answered yes to all of those questions…but what in the name of sound reinforcement are in-ear monitors?

IEMs provide the talent with a monitor mix sent to a set of earbuds rather than to the traditional loudspeaker wedges found on stage. They can be either wired or wireless, and can also be user-adjustable (personal monitor mixing). Some advantages include:

Lower Stage Volume
Since fewer wedges will be needed on the stage, the overall SPL on stage will be considerably lower. This lower stage volume will provide for a cleaner, more intelligible house mix, and it will improve the effectiveness of any monitor wedges left on the stage.

It will also improve any audio recording due to less acoustic leakage into any open mics on stage. And less stage noise will also lead to fewer instances of acoustic feedback.
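For a rough sense of how much the wedges themselves contribute, here is a small sketch; the 100 dB figure is an assumed example, not a measurement. Uncorrelated sources sum on a power basis, so four wedges at a similar level run about 6 dB hotter than one.

import math

# Assumed example levels, not measurements: uncorrelated sources combine as
# L_total = 10 * log10( sum(10 ** (L_i / 10)) ).

def combined_spl(levels_db):
    """Combined SPL (dB) of several uncorrelated sources."""
    return 10.0 * math.log10(sum(10.0 ** (l / 10.0) for l in levels_db))

wedges = [100.0, 100.0, 100.0, 100.0]  # four wedges, each ~100 dB at mid-stage
print(f"Four wedges together: ~{combined_spl(wedges):.1f} dB SPL")
print(f"One wedge remaining:  ~{combined_spl(wedges[:1]):.1f} dB SPL")

# Pulling three of the four wedges drops their combined contribution about
# 6 dB, and replacing them all with IEMs removes it entirely -- which is
# what cleans up the house mix, the open mics, and the feedback margin.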

Greater Flexibility & Mobility
With wireless IEMs, the talent can move anywhere they see fit without any noticeable change in their monitor mix. If a personal monitor mixing system is also used, the sound engineer will no longer hear “I Need More ME!” because the talent can take care of it on their own.

The use of IEMs requires a lot less volume than typical stage monitors and can save your hearing if worn and used correctly. If you value your hearing, then this is the way to go.

The use of IEMs allows for discreet communication from front of house. You won’t ever have to worry about getting the attention of the talent (or the first six rows) when trying to fix a sound issue on the fly.

Tip: Place a couple of ambient mics around the room and feed the signal to the IEMs, or you'll have talent taking one of their earbuds out because they feel isolated from the congregation. Wearing only one IEM will in most cases require an increase in SPL, resulting in an increased chance of hearing loss in that ear.

Casey Watson is a project manager and Certified Church Consultant for Church Audio Video.

Church Audio Video specializes in the design, installation and support of high-quality and affordable custom audio, video, lighting, broadcast and control systems for worship facilities. For more information, visit the company website.

Posted by admin on 04/28 at 08:10 AM