Friday, April 11, 2014
Yorkville Sound Now Shipping New Parasource Series Active Loudspeakers
Suitable for main, fill and monitoring applications; outfitted with Class D amplification, integral DSP and an onboard mini mixer
Yorkville Sound is now shipping the new Parasource Series of active 2-way loudspeakers that can be utilized for main, fill, and stage monitoring applications.
Parasource loudspeakers are available with a 10-, 12- or 15-inch ceramic woofer, joined by a 38 mm (1-inch exit) compression driver feeding a large conical horn that delivers constant directivity and improved midrange response. Transducers are bi-amplified, driven by efficient, high-headroom Class D switch-mode power amplifiers with passive cooling that reduce overall cabinet weight.
Integrated DSP eliminates the need for complex external processing. An integrated 2-channel mini mixer, with controls on the back panel, offers master, mic and line level controls, joined by XLR microphone, 1/4-inch and RCA line inputs. The mixer also includes a 1-button high-pass filter switch that rolls off low-frequency material when the loudspeaker is used with a subwoofer. Activating the dynamic bass boost enhances low-end response without cluttering midrange program or affecting the loudspeaker’s overall intelligibility.
Multi-band limiting delivers maximum output and very even frequency response while also protecting the components. Further, it ensures essential elements like vocals and solo instruments in the essential midrange band aren’t being modulated by transient attacks of low-frequency material like kick drums and bass tracks.
Parasource Series loudspeakers are manufactured in Canada using the same rugged light-weight, high-impact ABS cabinet construction as the company’s popular Paraline Series. Cabinets are paintable. Integrated, reinforced flypoints mean that the loudspeakers can be flown quickly and easily. Ergonomic handles are provided for easy transport, and cabinets also include integrated metal stand mounts.
Models in the Parasource Series:
PS10P: 10-inch woofer (with 2-inch voice coil), 90 x 70-degree dispersion (conical), 800/1600 watts (program/peak), 23 x 14 x 12 inches (h x w x d), 40 pounds, $1,399 U.S. MSRP
PS12P: 12-inch woofer (with 3-inch voice coil), 85 x 50-degree dispersion (conical), 1400/4400 watts (program/peak), 26.5 x 16.7 x 13.5 inches (h x w x d), 60 pounds, $1,549 U.S. MSRP
PS15P: 15-inch woofer (with 3-inch voice coil), 85 x 50-degree dispersion (conical), 1400/4400 watts (program/peak), 30.7 x 20.5 x 14.2 inches (h x w x d), 65 pounds, $1,799 U.S. MSRP
Cokesbury United Methodist Chooses Allen & Heath GLD-80 For Multiple Campuses
GLD consoles were chosen after an evaluation demo with several competitors
Cokesbury United Methodist Church in Knoxville, TN, has three locations as well as an online ministry that streams services on the web. Tech director Mischa Goldman recently chose to implement several new Allen & Heath GLD-80 consoles.
The GLD consoles were chosen after an evaluation demo with several competitors that was provided by ML Sound, also of Knoxville, which has served as the church’s audio consultant and contractor for more than a decade.
During the demo, Goldman and his team of engineers listened to the same audio track several times through an assortment of mixing consoles, with their backs turned. “We did a four-console shoot-out, and we all chose the GLD-80,” Goldman states, succinctly.
As a result, contemporary services at the church’s north campus are handled by two GLD-80s, one for front of house and the other for monitors, outfitted with a Dante card for networking. Mix engineers are also taking advantage of the iPad app to check sound levels throughout the sanctuary.
At the south campus, the sanctuary hosts more traditional services in a space that’s acoustically very “open,” with a thrust altar jutting out into the sanctuary. “Given this, the sound must be reinforced as opposed to replaced; it has to be amplified yet transparent, and not sound amplified,” Goldman says. “This is where the older congregation worships, and they don’t like a loud sound; they’ll come up to you and say so. The GLD-80 processing really delivers an ideal curve on all the audio.”
The third Cokesbury church is a portable campus, hosted at a high school auditorium each Sunday, with the system needing to be assembled prior to services and then struck immediately afterward. “It’s just like doing sound for a road tour, but with a shorter amount of time,” Goldman notes. “We needed a console with a digital snake option, 48 channels of flexibility and processing that can happen on the fly; in other words, drag and drop what you need and forget the rest.”
“I felt like all these years I’ve been listening to our sound through a paper bag,” he adds. “We turned the GLD-80 on without any processing, and it was a whole new sound spectrum. It’s warmer with very clear articulation, and the audio presence is very clean.”
“Allen & Heath has been embedded in our company for over two decades,” concludes Joe Hamilton of ML Sound. “I used their analog boards in the past, and they were always way above the other available mixers.”
Allen & Heath
American Music And Sound
Thursday, April 10, 2014
Allen & Heath Appoints Christian Luecke As Sales And Marketing Director
Previously managed European B2C business for Sony and Samsung
Allen & Heath has appointed Christian Luecke as its new sales and marketing director. He brings to the position 15 years of experience in the consumer electronics market, where he managed European B2C business for Sony and Samsung.
Joining the board of directors, which already comprises managing director Glenn Rogers, finance director Dave Jones, operations director Tony Williams, and R&D director Rob Clark, Luecke will oversee Allen & Heath’s global sales and marketing operations.
“I’m excited to be joining the pro audio industry, and specifically the Allen & Heath team,” Luecke states. “My initial impressions of the company are that it is very passionate about audio design and innovation, two areas which are paramount in its product development process. With so many great products already on the market and many more in the pipeline, I am arriving at a very exciting time.”
Rogers adds, “Christian’s appointment will strengthen the existing leadership team and help us to significantly grow the business over the coming years. His extensive experience places him in a position to develop growth strategies and enable us to better support our customer network.”
Allen & Heath
Wednesday, April 09, 2014
In The Studio: Six Nuances You Feel, Not Hear
Identifying the "little things" that really add up over the course of a mix...
Have you ever believed that there’s just something badass that engineers do that the rest of the world isn’t privy to? Are you disappointed when everyone on forums seems to agree that engineers are just using really good judgment and generally using basic processing?
Well, don’t get your hopes up too much. 95 percent of a great mix stems from great decision making and the use of basic processing that everyone has access to. But, that last 5 percent does contain a bit of secret sauce. Secret awesome sauce. Every seasoned engineer will have their own recipe. I certainly have mine.
I want to share some personal techniques. These are little things I do that really add up over the course of a mix. Each one of these techniques is based around one idea: you don’t really hear it when it’s there, but you miss it when it’s gone.
By building these subtle effects into my mix I create something that elevates the overall sound without dramatically changing it — which is often a desirable goal when mixing. They also amount to some of the things which just seem to separate a finished mix from a rough mix in that way that’s hard to put a finger on.
1. Fast decaying reverbs
One of my principal approaches to mixing is to create depth and polish.
Oftentimes I want something to have a 3D image and “glossed” tone, but I don’t necessarily want to hear an audible reverb or delay.
Tucking very short reverbs into generally dry sounds very quietly can add just a bit of depth and hi-fi-ness to the source sound. I’m constantly experimenting with algorithms, timing, and various other settings and I recommend you do the same.
The only generality here is that I tend to lean a bit more toward early reflections with medium diffusion (when diffusion settings are an option). There are also a few presets in FabFilter’s delay plugin, Timeless, that I like for this purpose.
You don’t need a lot of this stuff. I’m turning my returns down as low as -15 to -20 dB below the source sound. Just enough so you miss it when it’s gone!
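To get a feel for how quiet that is, a level offset in dB converts to a linear amplitude ratio with the standard 20·log10 relationship. A minimal sketch (the function name is my own, not tied to any DAW):

```python
def db_to_gain(db: float) -> float:
    """Convert a level offset in dB to a linear amplitude ratio."""
    return 10 ** (db / 20)

# A reverb return sitting 15 to 20 dB below the source is only a
# small fraction of the dry signal's amplitude:
print(db_to_gain(-15))  # ~0.178
print(db_to_gain(-20))  # 0.1
```

In other words, at -20 dB the return is running at one-tenth of the dry signal’s amplitude: just enough to miss when it’s gone.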
2. Subtle distortion or saturation
A touch of distortion can really make a sound pop in a mix. If it doesn’t sound “distorted” but brings a bit of harmonic energy into the fold I’m usually into the idea.
Not to sound like a FabFilter commercial here, but I like to experiment with Saturn because it gives me very fine control over the specifics and degree of the distortion.
3. Micro panning
Finding movement is paramount to a successful mix. A tiny degree of panning, almost too little to hear unless you solo the source, can go a long way in this regard.
This is a go-to move for sequenced hi-hats (I’ll tend to pan quickly). And very useful for background pads/noises as well (a slightly slower pan is usually good for the sustainy sounds). Delay returns are also a great place to play with moving pan positions.
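One way to see how subtle these moves are is a constant-power (sin/cos) pan law, a common design for panners. This is a generic sketch; the function name and the [-1, 1] pan range are assumptions, not any particular DAW’s implementation:

```python
import math

def constant_power_pan(pan: float) -> tuple[float, float]:
    """Return (left, right) gains for pan in [-1.0, 1.0]
    using a constant-power (sin/cos) pan law."""
    theta = (pan + 1.0) * math.pi / 4.0  # map [-1, 1] onto [0, pi/2]
    return math.cos(theta), math.sin(theta)

# Center versus a micro-pan of 5 percent toward the right:
center = constant_power_pan(0.0)   # both gains ~0.707
micro = constant_power_pan(0.05)   # roughly (0.679, 0.734)
```

Because L² + R² stays at 1 for every pan position, a small automated sweep shifts the image without any audible level change, which is exactly why it reads as movement rather than a volume ride.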
4. Subtle volume rides at section changes
Volume automation is not just good for evening things out — it can also be great for creating contrast. Next time you’re going from the verse of a song to the chorus try a few of these little techniques.
Bump the chorus up on your submix/master fader channel by 1 dB. Bump the very first moment of the chorus up 1 dB above that, and quickly return it back down. Find a sustaining element right before the chorus and start pulling it up a bit in level creating a subtle crescendo movement.
Even the vocal reverb/delay return can be good to bump right at that transition point.
5. EQ/compression/distortion on reverb and delay returns
I have a cool video tutorial on this but felt that it was worth mentioning here.
Reverb/delay returns are elements in the mix just like anything else. Coloring the ambience in a slightly unique way can help create tonal complexity and augment the sense of depth.
6. Removal of unwanted sounds
A great deal of what you’re hearing in a great mix is what you’re not hearing.
The removal of bleed and mouth noises, the reduction of breaths, the taming of plosives and sibilance. All of these excess sounds add up to one thing: distraction.
Not to say breath noises don’t have their place — but you’re the master of the playback so be decisive about what you don’t want, what you do want and how much.
Ultimately we as engineers are doing our best to get the music through the speakers in the most captivating way possible. Sometimes that’s about the big picture. But it’s also about all the little things, the subtle decisions we make, that amount to something bigger than the sum of its parts. That’s why I may do things that the average listener probably won’t consciously hear.
Matthew Weiss engineers from his private facility in Philadelphia, PA. A list of clients and credits are available at Weiss-Sound.com. To get a taste of The Maio Collection, the debut drum library from Matthew, check out The Maio Sampler Pack by entering your email here and pressing “Download.”
Also be sure to visit The Pro Audio Files for more great recording content.
Tuesday, April 08, 2014
TC Furlong Hosting New Yamaha QL Series Console Demos This Month In Chicagoland
Demo sessions to be held in Lake Forest, Rosemont and Chicago
TC Furlong of Lake Forest, IL, will be hosting a series of demonstrations of the new Yamaha Commercial Audio QL Series of digital consoles in the Chicago area this month. Attendance is free. Dates and locations:
Wednesday, 4/16, 4:30 pm—7 pm
TC Furlong in Lake Forest, IL
Tuesday, 4/22, 9 am—1 pm
Regus Offices in Rosemont, IL (with Yamaha district manager Mike Eiseman)
Tuesday, 4/22, 4 pm—7:30 pm
The Moody Church in Chicago, IL (with Yamaha district manager Mike Eiseman)
The new QL Series inherits CL console features such as all-in-one mixing, processing, and routing capability for small- to medium-scale live touring sound, houses of worship, corporate A/V, and speech applications.
Yamaha Commercial Audio
Monday, April 07, 2014
Zen On Stage: The Latest On IEM & Personal Monitoring
Reliable monitoring is essential to performers on stage, allowing them to blend their musical contributions with the other players – keeping them in time, on pitch, and able to creatively interact.
Traditionally, this function was performed by low-profile loudspeakers aimed generally toward the areas where the performers were active, with level control, sufficient coverage, bleed into open microphones, and feedback all issues that needed to be overcome. Another issue, especially with acts performing at high levels, was, and remains, a contribution to hearing loss.
I first became aware of in-ear monitors, and wireless delivery of the mix, more than 20 years ago when I was asked to check out a prototype from a company called Garwood, based in the UK. The system consisted of a transmitter and a commercial stereo receiver unit (as I recall it was from Sony) operating in the FM band, with a pair of ear buds for monitoring.
In talking with some sound engineers for corroboration, I heard that a handful of singers were trying the system but rarely the other players, and that not hearing other musicians and the audience “live” was a common objection. (It should also be noted that Future Sonics was another pioneer of this approach at the time.)
Fast-forward a couple of decades, and in-ears and other personal monitoring solutions see wide usage. Let’s explore how manufacturers and engineers are pushing the boundaries of monitoring.
Monitoring a performance and what the rest of the band is doing while wearing in-ear monitors has a number of advantages over monitor loudspeakers. Typically, the performer has personal control of the level of the mix via a wireless beltpack receiver or other interface, and unless the level controls and limiters are overridden, that level will be safer than the uncontrolled output of stage monitors and additional sound sources.
So there’s less potential for hearing damage as well as listening fatigue, but still enough level to stay present with the performance. And no matter where the artist moves on stage, the mix will remain consistent and much cleaner.
That mix can be even more highly controlled, either by the monitor engineer or by personal monitor-mix stations where the performer can select exactly what they want to hear at which relative levels, and make adjustments on the fly. With the mix going straight from the board into the ears, personalization of a mix is much more refined, and can make achieving a satisfactory mix faster and easier.
For the engineer and audience, having fewer or no monitor wedges lowers the level coming off the stage into the house, so that the house loudspeaker system isn’t competing with the stage for attention. This can be further enhanced with isolation boxes on instrument amplifiers, aiming amps toward the side or rear of the stage, and similar methods. Also, either having no wedges on stage or having them at lower levels to supplement in-ear monitors will help with gain-before-feedback as well as mic isolation.
A major part of performing is making the connection with the audience, and that energy is part of the “live” feeling that can be compromised by wearing isolating in-ears delivering a clean personal mix. An early and ongoing solution to this problem is adding side-stage audience mics to feed applause and other ambient sounds into the monitor mix. Pulling out one ear bud or loosening them to hear what’s going on can defeat the benefits of hearing protection and a more consistent mix.
Engineer Sean Quackenbush (O.A.R., Robert Randolph) with part of the Sensaphonics 3D Active Ambient IEM system.
Performers also need to interact on stage, and this includes being able to talk with each other during or between tunes. Artists also want to communicate with techs and the monitor mixer during a show. With in-ear monitors sealing the ear canal and attenuating ambient sounds by 20 dB or more, that communication can be much more difficult.
A solution from Sensaphonics addressing these challenges is the 3D Active Ambient IEM system. Each custom earpiece contains a microphone, and what it “hears” can be added to a monitor mix at any desired level.
The beltpack has a toggle switch that goes between a performance mix with your preferred ambient level mixed in, and a communications mode that brings up the level of the mic and dials down the monitor mix for those necessary conversations. Another approach is found in the JH Audio Ambient FR earpiece, which has an “ambient bore” to let in an attenuated version of outside sounds.
At The Ear
The elements for personal monitoring include the method of mixing the sources – the monitor console or individual mixers for the musicians, the delivery system for those signals, and the transducers themselves.
Though headphones are occasionally used, ear pieces or “buds” as they’re commonly called are much less obtrusive. Some of the differences among these in-ear devices involve custom-molded versus standard foam tips, the number of individual transducers used to reproduce full-bandwidth audio, the types of drivers used and how they are crossed over, and how they are constructed.
Some companies, such as Ultimate Ears, Future Sonics, JH Audio and Sensaphonics, only offer custom-molded in-ears that fit the exact contours of a particular musician’s ear canals. This precision leads to a tighter seal to attenuate the ambient sound, potentially greater comfort, and a more controlled audio environment. The process begins with a visit to an audiologist who takes molds of both ears.
Some companies even provide guidance in finding a qualified audiologist, with precise instructions on how deep into the canal the mold should go and a note that the person’s mouth should be open during the process “to ensure a more secure fit while the artist is singing, playing an instrument, or talking.”
Having a tight seal within the ear canal also enhances bass performance. Jack Kontney of Sensaphonics notes that “the soft silicone flexes with the ear canal when singing and changing facial expressions” so that a complete seal is maintained. An incomplete seal can lead to a loss of low frequencies, especially below 100 Hz – and maintaining a complete seal is especially important when using balanced-armature drivers.
A tight seal also prevents the loud ambient sounds from entering, so that effective monitoring can be attained at lower levels. Further, according to Sensaphonics, medical-grade silicone provides several dB better attenuation than acrylic, reducing outside sounds by greater than 30 dB.
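Those isolation figures are easy to put in perspective: the fraction of outside sound pressure that gets past a seal follows the same 20·log10 rule. A generic calculation (my own illustration, not a manufacturer spec):

```python
def pressure_fraction(attenuation_db: float) -> float:
    """Fraction of outside sound pressure passing a seal
    rated for `attenuation_db` dB of isolation."""
    return 10 ** (-attenuation_db / 20)

print(pressure_fraction(20))  # 0.1   -> 10% of outside pressure gets in
print(pressure_fraction(30))  # ~0.032 -> about 3%
```

So the jump from 20 dB to 30 dB of attenuation cuts the leakage to roughly a third of what it was, which is why seal quality matters so much for monitoring at low levels.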
Inside an Ultimate Ears 18 Pro Custom earpiece.
Ear buds use either dynamic or balanced-armature drivers, or a combination of both, to reproduce the audio signal. Dynamic drivers function similarly to loudspeaker cones, only miniaturized. They can be more efficient at reproducing bass frequencies, with potentially less detailed highs.
Balanced armatures suspend a rod surrounded by a coil within a magnetic field, and the motion of the rod is coupled to a diaphragm. Their response tends to be highly detailed. As an example, the Audiofly AF140 uses a dynamic and a balanced-armature driver in tandem for the lows, crossed over to a balanced armature for the highs.
JH Audio JH16 and Future Sonics mg6pro multi-driver ear buds.
With some buds, the frequency spectrum is divided between a pair of drivers; others use multiple drivers with several crossover points, in models with three, four, or more drivers per ear. Ultimate Ears 18 Pro Custom IEMs have six balanced-armature drivers – two LF, one each low and high mid, and two HF – while the Audiofly AF180 offers four balanced armatures and the JH Audio JH16 is a 3-way design with eight drivers per ear (double dual LF, dual mid, and dual high).
Recently introduced mg6pro ear buds from Future Sonics incorporate multiple 13 mm proprietary miniature dynamic transducers, crossover-free, and with proprietary +/-20 dB Ambient Noise Rejection (A.N.R.).
Universal-fit in-ear monitors are available from a variety of companies, such as Shure, Audiofly, Avlex MIPRO, Westone, Etymotic, and others. These units couple the earpiece with replaceable foam tips that conform to the contours of the ear canal.
While they don’t offer the fit and seal of a customized system, they are high-performance audio devices, like studio headphones. Listening recently to a CD through a pair of Audiofly AF140s, I had a “what’s that?” reaction and realized that I was hearing the detail of the flute player’s breathing on the recording.
Making It Personal
Going beyond a handful of different mixes provided by the monitor engineer, compact monitor mixers can be positioned by the individual musicians who can then customize their own mixes.
Professional personal mixers allow musicians to select and custom-mix 16 channels or more (discrete channels or sub-mixes) of digital audio from all available channels, adjust levels, pan, EQ and effects for each channel, plus save and recall presets of previous mixes.
Aviom is a pioneer in personal mixing, and recently introduced the A360, offering 16 mono or stereo channels that can be selected from a 64-channel A-Net or Dante digital audio network, plus an additional dual profile channel that gives the musician instant access to the most important channel of their choice. The system also has an onboard mic that can be enabled for one-touch ambience, or a stereo ambience feed from the console can be tied to this control.
The Roland Systems Group M-48 provides access to either 16 or 40 channels of digital audio when the appropriate Roland digital snake is connected to a Roland V-Mixer console. The setup of connected M-48s can be controlled locally or via software on a control computer. The personal mixer offers multiple outputs to feed a pair of floor wedges as well as headphones or IEMs.
The Allen & Heath ME-1 personal mixer works seamlessly with the company’s iLive and GLD digital mixers, complemented by the ME-U hub that opens it up to use with other consoles via Dante, EtherSound or MADI. ME-1 also has an Aviom compatibility mode.
The dbx professional PMC16 personal monitor controller can be used with the dbx TR1616 converter or any other Harman BLU link compatible device, and multiple PMC16s can be daisy chained using Cat-5e, allowing each user to receive 16 channels. It also is outfitted with onboard Lexicon reverb. The Movek myMix system has a powerful yet simple interface that includes a large backlit screen, rotary controller, and four function push buttons, allowing the user to select and control a 16-channel mix.
Movek myMix and Allen & Heath ME-1, both personal mixers.
And another step farther, Pivitec and PreSonus combine hardware with configuration and control software running on tablet PCs and smart phones. The Pivitec system is based on AVB Ethernet protocols, using compatible network routers and switches plus 16-channel rack-mountable input modules.
PreSonus offers an app called QMix to provide up to 10 musicians with individual wireless mixes on their iPhone or iPod Touch, when used in conjunction with the company’s StudioLive console. The iOS device will detect all StudioLive mixers on the network, and can create a mix that includes all mixer channels. Aviom has also announced that iOS support for the A360 is coming this year.
Today’s performer may be wearing at least two wireless packs – one to transmit voice or instrument to the console, and one to receive a personalized stereo mix. Being wireless provides freedom of movement while retaining a clean, consistent monitor mix. Several wireless microphone manufacturers also offer wireless IEM systems, including Shure, Sennheiser, Audio-Technica, Lectrosonics and others.
Shure offers the PSM900 single-channel and PSM1000 dual-channel wireless personal monitoring systems, which operate in the UHF band. They are analog systems with a frequency response of 35 Hz to 15 kHz, with a stereo separation of 60 dB. The PSM900 covers 36 MHz of spectrum, and up to 20 compatible frequencies can be used together. Transmitter power is selectable at three levels – 10, 50, and 100 mW. The slim bodypack is ruggedly constructed with a metal chassis, and has a detachable whip antenna, stereo mini jack, and a rotary level control.
Audio-Technica M2 (above) and Shure PSM 900 single-channel wireless monitoring systems.
Audio-Technica offers the M2 single-channel wireless monitoring system operating in the UHF band over 33 MHz of spectrum, with multiple bands available. Up to 10 systems can operate together per band. In addition to L/R inputs, an additional input for a click track or ambient mic is provided.
Meanwhile, Sennheiser SR 2000 single-channel and SR 2500 dual-channel wireless IEMs also operate in the UHF band, with the system spanning a 75 MHz band. The transmitter has a 5-band graphic equalizer that can be accessed via the menu.
Note that, as the term “wireless” makes clear, these systems use RF spectrum, so they need to be coordinated along with wireless mic, instrument, and intercom systems at every show.
Quality in-ear monitors are available at many price points, ranging from a couple hundred to a couple thousand dollars. Hearing a consistent mix is certainly easier when using them, and at less damaging levels. There are benefits to be had in isolation, comfort, and sound quality with some of the custom units. For performers who want to instantly adjust their mix during the performance, the technology is available.
With all the movement on stage, many choices of reliable wireless delivery are available, and to my ears they sound as good as wired. In the end, it all boils down to meeting the needs and preferences of the musicians for quality monitoring.
Gary Parks is a pro audio writer who has worked in the industry for more than 25 years, including serving as marketing manager and wireless product manager for Clear-Com, handling RF planning software sales with EDX Wireless, and managing loudspeaker and wireless product management at Electro-Voice.
Avid Previewing User Collaboration Tools At NAB 2014
Cloud collaboration workflows, an online marketplace, and an open metadata schema that will help audio professionals to work together
At the NAB 2014 show in Las Vegas this week, Avid (booth SU902) is previewing cloud collaboration workflows, an online marketplace, and an open metadata schema that will help audio professionals to work together with remote contributors, manage projects and media, and find new outlets for content monetization.
“The media industry is going through a period of unprecedented change, and to be successful, audio professionals need to create, share, manage, track, and distribute their content in powerful new ways,” states Chris Gahagan, senior vice president of products and technology at Avid. “Our exciting Pro Tools technology preview demonstrates how the Avid Everywhere strategic vision meets the unique needs of audio customers by enabling them to collaborate via the cloud, monetize their content through a broader marketplace, and manage content more powerfully using an open metadata schema.”
Through a series of presentations at NAB, Avid is previewing several developments for Pro Tools audio software:
Collaborate via the cloud — To enable music and audio professionals to collaborate from anywhere, Avid announced plans to add cloud-based collaboration workflows to Pro Tools. Musicians, producers, mixers and other contributors will be able to work together on the same music session or soundtrack, in real time or offline, no matter where they are.
With track-based collaboration Pro Tools users will be able to:
—Post sessions to cloud storage and invite others to collaborate
—Work on the same session at the same time or offline, and share updates directly within Pro Tools
—Record, edit, and mix tracks that will be pushed to all other collaborators upon completion
—Automatically keep track of all contributions and changes, as files are automatically tagged with rich metadata
With streaming capabilities, users will be able to:
—Securely stream mixes to an iOS device for real-time review and approval
—Sync collaborators’ Pro Tools sessions together to work on audio for video projects, such as remote ADR or voiceover sessions
—Stream audio across synched Pro Tools sessions
Securely share and archive work locally or in the cloud — To ensure that users and their collaborators can maintain access to all parts of their projects, even on systems that don’t have the same plug-ins, Avid is developing the PXF archival format. The format will enable Pro Tools sessions to be exported with rich metadata and effects “frozen” into the media, so that projects can be accessed and played back even if technologies change or become unavailable, no matter how far in the future users return to them.
Monetize content through an online marketplace — Content creators will be able to connect and collaborate with other media professionals, as well as connect with consumers, through a public marketplace, enabling them to share and monetize media, with all rights managed and delivery secured across the environment. Additionally, studios and media companies will be able to set up private marketplaces that enable collaboration and streamline production.
The new marketplace will allow audio professionals to:
—Publish session files, multichannel stems, and stereo mixdowns directly from Pro Tools for license in the public marketplace
—Gain exposure and opportunities to make money by connecting with media professionals looking to license music and sound assets
—Quickly find professional-quality content in the style and formats they need, as all files will contain rich, searchable metadata
—Rate and provide comments for media assets in the marketplace to help others in the community make more informed purchasing decisions
—Buy and sell music and audio content with peace of mind, as all rights will be managed and protected across the marketplace
—Create a private marketplace for media enterprise, making it possible to sell media assets, and control to whom they are available, through a storefront hosted in Avid’s marketplace
—Search for and purchase marketplace content and audio plug-ins directly from within Pro Tools—with no application restart required after installation
Manage content using a new metadata schema — A new universal open metadata schema will enable users to manage, protect, and track every single media asset created and edited across the entire production and media value chain, from content creation through consumption. The metadata schema will be integrated into Pro Tools, and document the roles of all creative contributors, as well as manage, protect, and track how the media performs in the marketplace.
Ahead Of The Game: Console Strategies For Festivals
The goal is to be as prepared as possible. Spring is nigh...
Mixing at festivals – good times! Or is it?
Anyone who has worked as either a guest mixer or system tech in a festival environment probably has stories about the inherent ups and downs and, certainly, the hyper pace and stress that are part of the gig. And we’ve all heard a few horror stories of artists hitting the stage patched incorrectly or without a sound check.
But there’s also the unique thrill of mixing in a hyped environment with tens of thousands of fans on hand, and sometimes in really cool outdoor settings. The goal of the mix engineer is to be as prepared as possible, particularly when it comes to working with the console. Spring is nigh…
Preferences & Strategies
It’s been common for years to see multiple consoles “leap-frogged” between acts, allowing one or more offline consoles to be dialed in while another is live. They may be switched over by the system engineer or sub-mixed to a master console, and in the latter case, gain structure or ground loop hum/noise issues can pop up between consoles. Carrying in-line pads and audio isolation transformers is always a good idea.
Digital consoles have obviously changed the workflow at festivals by allowing preset show files to be prepped and uploaded, which helps in terms of establishing baselines and promoting efficiency. Premium analog boards may still be carried by certain headliner acts, but they’re usually not shared.
Whatever the console(s) in use, advancing the date is still the most important step in a successful gig. Even the best system techs can’t prepare properly if they don’t have enough information in advance. Further, even when this info is available and shared ahead of time, it’s still wise to arrive at the gig with a copy of the stage plot, patch list, input list, and whatever else is important to the production.
Having mixed at plenty of festivals and other multi-act events, I’ve developed a number of personal preferences and strategies. And I’ve observed that the balance of science versus art that we know as “live mixing” tends to weigh heavily toward the science side when the “run-and-gun” mode common to festivals kicks in. After all, things just have to work, first.
A Yamaha CL5 provided by Gand Concert Sound to serve as the house console at the annual Pitchfork Music Festival in Chicago.
But as veteran freelance mix engineer Chris McMillan (John Mark McMillan/Promenade Media) told me, “Mixing is much better when the art takes priority over the science, and that means ergonomics can determine how nuanced your mix becomes. I like channels grouped the way I’m used to so that I see what I need and never know anything else exists.”
This is where festivals are so different from tours. Touring engineers get very used to their daily setup being consistent, and can take advantage of that repeatability to achieve highly detailed mixes. System techs who aren’t mixers should try to keep in mind that mix engineers aren’t always crazy or unrealistic when they want their console laid out a certain way.
It’s about familiarity. It really does matter if the lead vocal gets patched to the rack tom channel. Things like this can be dealt with in a pinch, and maybe quickly, but they can impact the end result by causing either a failure or a compromised (weaker) mix.
In talking with Chris and a couple of other festival mixing veterans, and thinking about my own experiences, certain themes are clear. Mix engineers desire a “perfect” console setup and the ultimate processing tweaks to satisfy their mix plans. But when working festivals, they do realize that it’s a daunting task to support many acts a day as opposed to one artist on multiple tour dates. As a result, they just hope for a reasonably well-tuned PA, a thoughtful system approach, solid gain structure, and an intelligent output bus layout.
Input patching is critical – particularly at festivals. What’s the best way to handle it? If the sound company has the qualified hands and there is enough change-over time, it’s great when stage inputs can be updated for each artist on the bill.
Whether the consoles are digital or analog, this extra effort goes a long way in helping keep things familiar for visiting engineers.
And if troubleshooting becomes necessary, engineer(s) are likely to have the stage patches for their artists memorized and know things like “hats are on line 5” and the like. All of this said, it’s simply not all that realistic in most festival situations…
Festival stages are typically patched in a logical order with plenty of lines, and the patches don’t change between acts. If one drummer needs 10 lines and another needs only six, then the latter has four open lines during his set – the overall count remains the same.
“Soft patching” on digital consoles allows laying out input and output channels in any order without making physical patch changes. This is extremely powerful. No longer does snake line 1 have to appear on input channel 1. Each engineer’s preferred console layout can be implemented without impacting the physical patches. But this requires sharing console show files in advance (pun definitely intended) or doing it on site while another act is playing.
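To make the idea concrete, here’s a minimal sketch of what soft patching amounts to conceptually: the physical snake lines stay fixed, and each engineer’s preferred layout is just a mapping on top of them. All names and the data layout here are hypothetical illustrations, not any console’s actual show-file format.

```python
# Physical patch: snake line number -> source on stage (fixed for the festival).
# Hypothetical example data.
SNAKE = {1: "kick", 2: "snare", 3: "hat", 4: "bass", 5: "gtr_L", 6: "lead_vox"}

def soft_patch(preferred_order):
    """Map console channel numbers to snake lines so the sources appear in
    the visiting engineer's preferred order, without re-plugging anything."""
    by_source = {src: line for line, src in SNAKE.items()}
    return {ch: by_source[src] for ch, src in enumerate(preferred_order, start=1)}

# An engineer who wants lead vocal on channel 1, then drums:
layout = soft_patch(["lead_vox", "kick", "snare", "hat", "bass", "gtr_L"])
print(layout)  # channel 1 now points at snake line 6, and so on
```

The physical patch (`SNAKE`) never changes between acts; only the per-engineer mapping does, which is exactly why snake line 1 no longer has to appear on input channel 1.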
It’s common to use matrixes to drive PA outputs such as main left and right, down fills, front fills, delay zones, subwoofers, etc. Many engineers simply distribute their stereo mix across these various zones (either L/R or L/R+sub), while some actually mix to each zone, which requires building specific mixes into each matrix. The exact PA zones and distribution vary per event and per stage, and not all companies do it the same way.
But whatever the configuration, it’s imperative that the console’s output patches match the PA. With digital consoles this means soft patching the output patches, and for this reason, system techs need to be careful when loading each act’s show file, as output patching errors or surprises can create a perfect storm and wreck a system real quick.
A Soundcraft Vi6 as the front of house console provided by Premier Production & Sound Services for the main stage at Louisiana State University’s Groovin’ on the Grounds multi-act concert in Baton Rouge.
A couple of times I’ve worked as a guest mix engineer at a festival and then stayed on as a pre-booked system tech. While this isn’t my forte or preference, I found it very interesting to work from the other (host) side of things. Many visiting engineers arrive with an expectation of certain doom, and it was fun to “make their day” with exceptional support and PA organization.
In one case, the long-time mix engineer for a well-known classic rock band clearly wasn’t happy about the digital console at FOH. He just wanted to “get by and get out of there.” I knew this desk inside and out and did everything possible to make it painless for him. He sought to keep it simple, with input faders and EQs accessible, in order, but with no other processing – not even DCA groups.
Further, he actually broke out his console tape and Sharpie and proceeded to label the input channels analog style, in spite of the nice programmable LCD labels! When I pointed out that the tape was only applicable on “Bank A” and would be inaccurate as soon as he banked the faders, he simply replied, “I don’t bank.” The band fit on the 24 input faders without any banking (layering), and by the end of the first song, it sounded absolutely amazing. Simple setup, talented musicians, and great ears.
In considering this topic, I did some Q&A with long-time mix engineers Daniel Ellis (David Crowder Band, Jesus Culture) and the aforementioned Chris McMillan.
Here’s what they had to say.
What do you appreciate most from the host system tech in terms of console prep and work flow?
Chris McMillan: I love it when signal flow and busing are simple. That’s really the most important thing. I want to know I’m just responsible for a stereo mix and maybe a send for subs, and everything else is going to be fine. If that’s right, and there’s a solid talkback situation, then we’re golden. It’s also much appreciated when the system tech has thought through the input list and our specific goals and considered what that means in terms of the system configuration. There’s nothing as useless as taking the time to advance a show only to have nothing prepared and no feedback.
Daniel Ellis: I want to see a production console for videos, emcees, and things that I do not need/want in my show file. This also means that I can load and prep my show file in between acts without waiting for the perfect 30-second gap where nothing is happening on stage.
What’s your take on “festival patch”?
CM: In an ideal world, I stay away from festival patch, although this is pretty much only accomplished with a show file. I like channels grouped the way I’m used to so that I see what I need and never know anything else exists. You know, the typical spoiled brat method of engineering.
DE: As a headliner I want my show to be patched per my input list. The only problem with this is that many festival patch guys for some reason can’t get it right the first time so half of the sound check ends up being “fixing the patch.” At least this is how it works at Christian festivals. Sometimes it seems like a random guy has been hired off the street to patch when in essence, patching is one of the most important jobs.
A DiGiCo SD5 that’s one of numerous SD models supplied by Clearwing Productions for the annual Summerfest in Milwaukee.
Do you carry a show file if it’s a compatible digital console or do you send it in advance? Or neither?
CM: I carry a show file if it seems like it will make a difference. Sometimes the process of conforming a show file or the time it takes to be convinced it’s correct isn’t worth the effort, because patching and busing can become compromised. Anyway, the acts I work with aren’t doing anything so weird that a default festival scene can’t work as a great starting point.
DE: I always try to know ahead of time what console I’ll be using and have a show file ready. Even if it’s a blank show file built on my laptop, I find that it helps because at least I know where all of my inputs are. If you try to run a 48-input show from a festival console file, you spend the entire time switching between banks trying to remember where everything is. It helps me tremendously to have the same workflow every time even if I’m starting with flat EQ and no processing on anything.
Do you find that “artist EQ” or “output bus processing” is usually enough to get your sound or do you often wish (ask?) for access to the PA processing?
CM: Limited bus-style processing is usually acceptable, if not from a creative standpoint, then from the understanding that everyone else is working off of that same tuning.
DE: Lately I often find myself at an Avid desk at festivals, so I just slap a Waves Q10 (10-band paragraphic EQ plug-in) across the stereo bus. Luckily I haven’t had to do much to the systems themselves. Just two or three small cuts on the Q10 in problem areas and I’m usually happy. If it’s a console that doesn’t work with Waves, I simply use the parametric on the master out.
What makes for a good system tech?
CM: I don’t hesitate to communicate with the system tech about expectations and any changes I feel the PA needs. Most good techs can balance the reality of the promoter and their employer’s expressed interests and still meet your creative and technical needs. A good tech wants a good sounding show in reality and not just on paper.
DE: Good attitude and good ears! And please don’t set up a measurement mic in one spot and put in 15 EQ adjustments.
Kent Margraves began with a B.S. in Music Business and soon migrated to the other end of the spectrum with a serious passion for audio engineering. Over the past 25 years he has spent time as a staff audio director at two mega churches, worked as worship applications specialist at Sennheiser and Avid, and toured as a concert front of house engineer. He currently works with WAVE in North Carolina and can be contacted at firstname.lastname@example.org.
Mixing Multiple Guitars Live: Sculpting For Distinction, Unity & Placement
The year, 1992. The country music charts were topped by songs from Alan Jackson, Pam Tillis, and Garth Brooks, and peaking at the number 2 spot was Hal Ketchum’s “Past the Point of Rescue.” Pop on the headphones, cue the song, and let’s talk mixing multiple guitars.
The song arrangement includes three guitars. Panned right is a melodic finger-picked guitar. Panned left is rhythm. Dead center is the lead. Three guitars distinct in the mix, complementing each other, and sitting in precisely the right spot. Sculpting any multi-guitar mix, even in mono, comes down to two things: knowing each guitar’s role, and how to treat it.
Common guitar roles are lead, rhythm, and a melodic role such as the finger-picked guitar noted above. How guitars are used in these roles varies by music genre, arrangement, and the limitations of the band. Mixing an 8-piece country band requires a different approach than mixing a 5-piece rock band. Ketchum’s song has been performed with only an acoustic guitar – the overall sound was just as good but the mix requirements were quite different. Knowing the role of each guitar enables mixing in the right way for the song. And no role is less important than another.
The year 1992 also brought the release of the film “A Few Good Men,” with Jack Nicholson nominated for Best Supporting Actor for his role. Imagine the movie without him – what a difference! The same applies to mixing multiple guitars. It’s a mistake to perfect the lead mix while only spending moments on a supporting guitar(s). Consider mixing multiple guitars in two common arrangements: two rhythms, and a lead and rhythm.
Synchronized strumming is an arrangement that happens with multiple acoustic guitars, electric guitars, or a combination. The key is blending with uniqueness. An easy mistake to make is boosting the same EQ areas in an attempt to create perfectly matching sounds. As discussed in my two previous articles (February and March 2014 issues), each guitar has a unique sound. The EQ work for one acoustic guitar usually doesn’t work for another.
A rhythm electric combined with a rhythm acoustic provides a significant amount of tonal variation. Such variation increases the range of frequency dynamics and adds depth to the mix. In the case of two rhythm acoustic guitars, hope that one musician is playing chords with different colorings. Otherwise, it truly is a case of synchronized strumming.
Coloring comes from alternate chord formations that bring added depth because of sonic differences. A major chord is made up of the 1, 3, and 5 of the major scale. For example, a C chord would have the notes C-E-G. By dropping the 3, the guitar is now ringing out only C and G notes. A guitarist might use a short-cut capo for such alternate coloring, and the capo also enables playing the same notes in a different octave.
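The 1-3-5 construction and the “drop the 3” coloring can be sketched in a few lines. This is just an illustration of the interval math (the function name and structure are my own, not from any music library):

```python
# Chromatic scale; intervals are counted in semitones from the root.
NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def major_chord(root, drop_third=False):
    """Return the note names of a major triad (scale degrees 1-3-5) built on
    `root`; with drop_third=True, only the root and fifth ring out."""
    i = NOTES.index(root)
    intervals = [0, 7] if drop_third else [0, 4, 7]  # root, (major 3rd), 5th
    return [NOTES[(i + s) % 12] for s in intervals]

print(major_chord("C"))                   # ['C', 'E', 'G']
print(major_chord("C", drop_third=True))  # ['C', 'G']
```

Removing the third is what makes the voicing sound open and ambiguous, which is why two guitarists using different colorings of the same chord blend rather than stack.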
While two guitars can be similar in volume, their contributions to the mix are quite different. But there are a few ways of blending them together. The first method is mixing a single unified sound. Identify any frequencies that make one guitar clearly stand out from the other and apply a slight cut. Consider cuts in the 800 Hz to 1,000 Hz range for smoothing out an otherwise harsh sound.
In the case of guitarists playing the same chords and the same type of guitar, this is fairly easy. But if the guitar types and/or playing styles differ, it leads to a second method: for those unique frequencies, instead of boosting, apply a slight cut in the same area for the other guitar. By carving out frequency areas, each guitar’s uniqueness can shine through without dominating the overall band mix.
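The “slight cut” in both methods above corresponds to a single parametric (peaking) EQ band. As a minimal sketch – not any console’s actual DSP, just the widely published Audio EQ Cookbook biquad formulas – here’s how such a cut can be computed and checked:

```python
import cmath
import math

def peaking_eq(fs, f0, gain_db, q):
    """Biquad coefficients for a peaking (bell) EQ band, per the standard
    Audio EQ Cookbook formulas -- the kind of filter behind a console's
    parametric EQ. A negative gain_db carves a cut into the band."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    b = [1 + alpha * A, -2 * math.cos(w0), 1 - alpha * A]
    a = [1 + alpha / A, -2 * math.cos(w0), 1 - alpha / A]
    return [x / a[0] for x in b], [x / a[0] for x in a]  # normalize a[0] to 1

def response_db(b, a, fs, f):
    """Magnitude response of the biquad at frequency f, in dB."""
    z = cmath.exp(1j * 2 * math.pi * f / fs)
    h = (b[0] + b[1] / z + b[2] / z ** 2) / (a[0] + a[1] / z + a[2] / z ** 2)
    return 20 * math.log10(abs(h))

# A slight 3 dB cut centered at 900 Hz (the 800 Hz to 1 kHz area noted above)
b, a = peaking_eq(fs=48000, f0=900, gain_db=-3.0, q=2.0)
cut_at_center = response_db(b, a, 48000, 900)  # about -3 dB at the center
```

The same band applied to one guitar and not the other is what lets each instrument own its area without a fight.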
Despite various methods for mixing multiple guitars, song arrangement is a limiting factor in creating an emotional mix. An instrumental interlude with two synchronized strummers lacks excitement, with few exceptions. Unless another instrument brings melodic variation, there isn’t much to do.
Additionally, unless the two guitarists are in lock-step, synchronized strumming can get ugly. In these situations, it’s a monitor problem, so try increasing the in-synch guitar volume to the out-of-synch guitarist’s monitor. If that doesn’t work, pull back the volume on the out-of-synch guitar and let the other take the starring role.
Lead Me On
Enter the lead guitar, which can carry a song hook, a melody, or a solo. These sub-roles within the lead role define how it should be mixed. Melodic lines, not instrumental solos, should sit under the lead singer but above the rhythm guitar.
For my initial mix work, I set the lead vocal level and then tuck the lead guitar underneath. Only then is the rhythm guitar pulled in to support the lead.
Separating the lead and rhythm guitars is done through volume and EQ work. Delving into volume levels, this requires experimentation and knowledge of the arrangement and the music genre.
Dave Pensado, in episode 38 of his excellent “Into The Lair” video series available on YouTube, demonstrates instrument volume differences and their effect on the overall mix. Volume changes can take a mix from rock to punk to pop. In some cases, the changes take the mix from amateurish to perfect.
The EQ work depends on the guitar types and the song needs. Electric guitars for both lead and rhythm have different requirements than if the rhythm is an acoustic guitar. The more similar the sound, the more instrument distinction is required. Pull back on the frequency band of the rhythm guitar where the lead should own those frequencies.
Owning a frequency band is a mixing rule of thumb, but be careful. Having experienced it myself, I can say it’s possible to have two guitars that each should own the same frequency band because they have the same sweet spot. The result is an awful mess. Let the lead own it and back it out in the rhythm guitar.
EQ & Volume
Once the lead mix and rhythm mix have been established, the job’s not done. A lead mix isn’t the same as a solo mix. A guitar solo benefits from additional mixing in both EQ and volume. The volume part is about pushing it out front in the mix, like a lead vocal. The EQ work is about beefiness and clarity so the solo stands out. Try boosting in the 2-4 kHz range for bite and the 8 kHz range for clarity.
And watch the 4 kHz area on guitars with distortion – the goal is bite, not added hiss. Also keep in mind that a guitarist can switch patches for a solo, so don’t be surprised if a solo suddenly sounds different without any mix changes.
Guitar arrangements vary from song to song, so use the methods discussed here as a basis for mixing these variations. The arrangement for Hal Ketchum’s “Past the Point of Rescue,” originally written by Mick Hanly, uses each guitar for a purpose. One is no less important than another. Each sounds unique. Each sits in the right spot. All three work together to support the song.
Oh, did I mention 1992 was also the year I started in live audio production?
Chris Huff is a long-time practitioner of church sound and writes at Behind The Mixer, covering topics ranging from audio fundamentals to dealing with musicians – and everything in between.
Yamaha Commercial Audio Announces NUAGE Version 1.5
Allows remote control of R Series audio interface head amplifiers from NUAGE Fader/Master control surfaces, and more
At this week’s NAB 2014 show in Las Vegas, Yamaha Commercial Audio Systems (booth C2143) has announced version 1.5 software of the NUAGE advanced production system, which will be available in May via free download.
The new version will allow remote control of R Series audio interface head amplifiers from NUAGE Fader/Master control surfaces. In addition to providing a broader selection of I/O options for NUAGE systems, the software also allows Yamaha CL Series digital console inputs to be shared via a Dante network for significant system expansion capability.
Although NUAGE already allows switching between up to three different DAWs, with v1.5 the “NUAGE PT Bridge” driver for Avid Pro Tools control gains OSX 10.9 compatibility so that Pro Tools running on Mac platforms is fully supported.
Also, with the addition of Quick Control to the NUAGE Master unit, specified parameters can be assigned to the multi-function display, giving the user even greater customization control. The NUAGE Master unit now has the ability to instantly access a wide range of VST instruments from the display and knobs for more efficient, effective sound crafting. The multi-function knobs will also provide as much as 512 times finer control than has previously been available in Fine Mode.
“All of the powerful refinements in NUAGE v1.5 add up to enhanced efficiency and improved workflow for end users,” states Marc Lopez, marketing manager, Yamaha Commercial Audio Systems. “With ongoing feedback from the field, NUAGE will continue to evolve as the new standard for DAW system performance.”
Launched in 2012, the NUAGE system provides tight DAW software integration, exceptional operability, modular architecture, and Dante networking capability for audio post-production and recording facilities.
Yamaha Commercial Audio Systems
Fairlight Unveils EVO.Live Digital Mixing System For Live & On-Air Productions
Available in different chassis or table-mount configurations from 12 to 60 faders
At this week’s NAB 2014 show in Las Vegas, Fairlight is introducing EVO.Live, a new digital mixing system for demanding live and on-air productions that’s built on the company’s audio processing and control surface technologies, offering a compact, modular design well-suited for performing arts venues, houses of worship, broadcast facilities and OB trucks.
The console is available in different chassis or table-mount configurations from 12 to 60 faders. The ergonomic control surface design with touch TFT monitors offers immediate access to all critical live functions with excellent visualization. The system maintains full redundancy with automatic takeover on any component failure.
Fairlight’s interactive control surface includes unique Picture Keys which self-label instantly for each task performed, displaying the right commands and functions at the right time. In addition, Fairlight’s new iCan (Integrated Control Across Network) technology provides the operator with an easy to use editor to design customized layouts.
The console also incorporates complete dual-operator functionality, allowing each audio engineer to independently access their own set of faders, solos, channel selections and monitoring controls.
Audio processing takes place in Fairlight’s FPGA-based Crystal Core audio engine, ensuring high channel and bus counts, low latency and high audio quality. Comprehensive Mix-Minus, advanced comms, extensive metering and flexible busing are also implemented.
The compact 2U Live Audio Processor includes an extensive range of built-in I/O. Adding a second hot-swappable FPGA based engine with dual-input power supplies further enhances system reliability. A choice of modular remote I/O is also available to meet the demands of a wide range of applications.
EVO.Live reaches beyond live production to provide full recording facilities, off-line preparation via laptop, a built-in cart machine for flying in sound effects, and control extensions to lighting systems, third-party DAW and sound library databases.
After the live event has completed, EVO.Live can be reconfigured to a post production system with audio editing, full video integration, plug-ins, automation and sophisticated 3D surround panning.
Jean-Claude Kathriner, CEO of Fairlight, states “Fairlight, as the brand trusted by high-end audio professionals for over 30 years, is proud to extend its state-of-the-art technology to the live production industry. EVO.Live includes an array of innovations that will benefit customers in terms of productivity gains, reliability and especially value for money. The intuitive level of customization of the user interface and its unique ability to switch between live and post modes will change audio production forever.”
Friday, April 04, 2014
Allen & Heath Releases V1.4 Firmware For Qu Series Consoles
Offers multiple access user permissions, Windows drivers for multitrack streaming, per scene recall filters and FX user libraries
Allen & Heath has released a new firmware update for its Qu Series of compact digital mixers. Downloadable from the company website, v1.4 introduces several new features, including multiple access user permissions, Windows drivers for multitrack streaming, per scene recall filters and FX user libraries.
V1.4 includes the much anticipated Windows drivers, enabling bit-perfect ASIO and WDM compliant multitrack playback and recording.
New user permissions enable three types of user—Admin, Standard and Basic—with different levels of access, for example, for volunteer operators in houses of worship, or guest engineers visiting venues. The Admin user has access to all functions and can protect selected functions and allocate passwords if required for the other users.
Other new additions include per Scene Recall Filters, FX User Libraries, RTA Peak Band indication, and Qu-Drive transport Soft Key functions, alongside a number of improvements to the User Interface such as 0 dB markers and PEQ band fill colours.
“V1.4 firmware makes the Qu Series an even more appealing proposition to millions of Windows users across the globe, be it musicians, recording engineers or venue owners,” states A&H product specialist Nicola Beretta.
A&H sales and marketing manager Debbie Maxted adds, “The Qu Series has been enthusiastically received in the market, with the Qu-24 receiving a Best in Show award when it was launched at NAMM. Since shipping started nine months ago, we have sold thousands of Qu-16s and have been back-ordered since launch, while Qu-24 is having a similar impact since shipping started in February, with similar back orders this year.”
Allen & Heath
SSL Appoints Dr. Enrique Perez Gonzalez As Chief Technology Officer On Board Of Directors
Spearheaded development of Tempest processing platform and new SSL Live console
Solid State Logic has announced the appointment of Dr. Enrique Perez Gonzalez as chief technology officer of the SSL board of directors, a promotion from his current role as head of R&D.
“Enrique is one of those rare individuals who has deep technical ability, first-hand experience of our markets and the skills to manage product research and development effectively,” states SSL CEO Antony David. “Even more importantly, he has the vision to build upon our technical heritage and the resources to take us forward.”
Perez joined SSL in 2011, spearheading the development of the Tempest processing platform and the new SSL Live console. An electronics and communications graduate from ITSEM, Mexico, which included a year at Australia’s Royal Melbourne Institute of Technology, he is an alumnus of the University of York (UK), and holds a doctorate in electronic engineering from the Queen Mary University of London.
“I am very excited at the opportunity to provide the technology, vision and leadership for a company with such an impressive technological heritage,” says Perez. “It is an honour to be able to work on a daily basis with the best engineering team there is in the industry. I look forward to a future of innovation and technology that will deliver the user experiences our customers have come to expect from SSL.”
Solid State Logic
Thursday, April 03, 2014
Church Sound: Stage Monitoring & Keeping Those Performers Smiling
Simple approaches and techniques to help optimize the situation...
A stage monitoring system is, simply, a complete and independent second sound system for the performers rather than the audience, made up of monitor loudspeakers (also called monitors for short or wedges due to their shape), power amplification and signal processing.
A monitor system can also have its own mixer/console, or receive a feed (or feeds) from the main system console. Note that adjustments made to the main system mix do not affect the monitor system mix, and vice versa.
People who sing and play instruments derive much enjoyment from both listening to and participating in music. Music sounds best when it is clear and balanced, when you can hear instruments and voices at appropriate levels. And singers and musicians play their best when experiencing these circumstances.
There are several keys to performer satisfaction with monitors and monitor mixes. Let’s start with positioning the performers. On more than one occasion I’ve been asked to come to a church because “we can’t hear the monitors”.
Upon arrival, I typically find that the monitor loudspeakers themselves are actually tuned pretty well, and that I can stand on the platform or riser and hear crisp, clear, and well balanced sound. This is when I’ve learned to ask where the people who “can’t hear” are physically positioned on the riser, and the answer, predictably, is that they’re almost always somewhere out of the primary monitor coverage field - too far to either side, too far forward, or too far back.
Getting performers to stand in the coverage field is usually a training issue - if necessary, the sound operator should show them where to stand, or, consider repositioning the monitor(s). The vast majority of times, it comes down to using what is already there (repositioning) rather than the need to add more equipment.
Other times I’ll come to a facility and experience monitor sound that is feeding back or poorly tuned - the equalizer (EQ) is not adjusted properly. EQ can help eliminate feedback, get rid of annoying, lingering overtones, and provide a way to “shape” the sound so it is pleasing to the ear.
What I find is that monitors tuned by novices are frequently bass heavy and “thick” in the mid frequencies, when instead the sound should be rich and full, crisp and distinct. How to tune using an EQ is a complete topic in and of itself, and has been covered in a previous article.
Assuming monitors are well tuned, the next consideration is relative balance in relation to the main system. Aside from the problem of sound bleeding from the stage into the house, monitors that are too loud can also cause considerable distress to the performers.
Surprisingly, though, often when performers tell me they “can’t hear the monitors” it is because the monitors are too loud, rather than too soft. Being too loud robs all of the ambiance—a sense of space and the sound within that space. This causes a lot of discomfort, and really, what the performers should be saying is not “we can’t hear” but rather, “we’re not happy.”
Ever notice how cool it is to sing in a parking garage, a gymnasium, or a canyon? We enjoy the sound of our own voices returning to us from a distance, and this phenomenon also occurs, albeit at a much smaller scale, with stage performances. But if the monitors are too loud, the sound is too dry, in your face, and quite unmusical.
If you’re faced with this situation as a church sound operator, try this at rehearsal: tell the performers that you’re going to shut the monitors off, so at the beginning of the song, they’re only going to hear the sound of the main system.
Then let them know you’re going to slowly bring up the master monitor volume, and to raise their hands when the mix is clearly but gently lending enunciation to the sound they’re hearing from the room.
Assuming the voices and instruments are good quality, in tune and well presented, this is a nirvana situation for most performers. Sound is big and full in the room yet crisply defined by the monitors.
On smaller platforms/risers this method is particularly successful. Once the basic master level is set, tweak and adjust individual levels or do some physical repositioning until all the players and singers can hear everything in balance. The result is very musical and your players and singers will have a lot more fun.
When it comes to multiple monitor mixes, well, I may well get a lot of flack for saying it, but multiple mixes are frequently not necessary, particularly in smaller to mid-sized stage situations.
Players can position themselves so they can all hear each other adequately, and usually, the vocals are most prominent in the monitors while the instruments are far less prominent.
Now, on wide stages/platforms (especially those that are not very deep), multiple monitor mixes might be a good idea, because performers at stage right can have a hard time hearing the performers at stage left, and vice versa. Multiple mixes allow them to request sonic information from the other side of the stage in their own mix.
Multiple mixes require separate signal pathways—from the aux on the console/mixer, to the equalizer, to the power amp, to the monitors. As such, they require more investment in equipment.
Multiple mixes can also be necessary simply because performers differ on what they prefer their monitor mix to be. This preference is part of what has given rise to earworn (in-ear) monitor systems, wired or wireless, that provide a remote control station for each performer to adjust the mix to their own liking.
Communication between the performers and sound operator is also vital. Performers must understand that the sound operator is not on stage with them, so if they want monitor sound changes and improvements, they need to tell the operator, and further, to be as specific as possible. Some praise when the monitor sound is good never hurts, either.
Operators, in turn, need to understand that requests and complaints aren’t usually personal, just a desire for improvement and the expression of frustration at a subpar situation. Address each issue with patient attention, and it usually works out.
And don’t be shy about asking performers if they’re happy and comfortable with the monitor sound. This can both head off problems before they start and result in further improvements.
Remember, some performers may be holding their tongues due to less-than-professional operator responses in the past.
Jon Baumgartner is a veteran system designer for Sound Solutions in Eastern Iowa, a pro audio engineering/contracting division of West Music Company.
Behringer X32 Digital Console Pulling Double-Duty On Capital Cities Tour
Console and companion S16 digital snake handling both house and monitors
A Behringer X32 digital console and S16 digital snake are pulling double-duty on the current tour by indie pop band Capital Cities in support of the multi-platinum single “Safe and Sound.”
Production manager and FOH engineer Jason Stiegler made the call on taking the X32 and S16 combo on the road: “It was just a no-brainer. You’re looking at a desk that is the most efficient product you could possibly get, in addition to the fact that it’s the most reasonably priced.”
He adds, “It’s a powerful machine that has tons of control at a price point where almost any band can get their hands on it, with everything needed all on board. We don’t use any external anything; everything we use is all internal – every reverb, every effect.”
The Capital Cities sound team also takes advantage of the X32’s remote control capability. “We’re able to tour without a monitor engineer, because all of my guys use their iPhones to control their own monitor mixes,” he explains. “We save time and money from not having to have another man on tour with us, and everybody gets the perfect mix.”
“We’re excited that Capital Cities has put their trust in our X32 digital console and S16 digital snake for this major tour,” states Music Group product manager Jan Duwe. “Music Group is proud to provide the equipment that turns artists’ dreams into reality, and we wish Capital Cities the greatest success as they climb the ladder to stardom.”