Tuesday, September 03, 2013
In The Studio: An Interview With Legendary Engineer Al Schmitt
A focus on microphone selection and approaches from a master
Here’s an excerpt of an interview that I did with legendary engineer/producer Al Schmitt that’s featured in the second edition of The Recording Engineer’s Handbook.
After 18 Grammys for Best Engineering and work on over 150 gold and platinum records, Al Schmitt needs no introduction to anyone even remotely familiar with the recording industry. Indeed, his credit list is far too long to print here (Henry Mancini, Steely Dan, George Benson, Toto, Natalie Cole, Quincy Jones, and Diana Krall are just a few), but suffice it to say that Al’s name is synonymous with the highest art that recording has to offer.
Do you use the same setup every time?
Al Schmitt: I usually start out with the same microphones. For instance, I know that I’m going to immediately start with a (Neumann) tube U 47 about 18 inches from the F-hole on an upright bass. That’s basic for me and I’ve been doing that for years. I might move it up a little so it picks up a little of the finger noise.
Now if I have a problem with a guy’s instrument where it doesn’t respond well to that mic then I’ll change it, but that happens so seldom. Every once in a while I’ll take another microphone and place it up higher on the fingerboard to pick up a little more of the fingering.
The same with the drums. There are times where I might change a snare mic or kick mic, but normally I use an (AKG) D112 or a 47 FET on the kick and an (AKG) 451 or 452 on the snare and they seem to work for me. I’ll use a Shure SM57 on the snare underneath and I’ll put that microphone out of phase. I also mic the toms with (AKG) 414s, usually with the pad in, and the hat with a Schoeps or a B&K or even a 451.
What are you using for overheads?
I do vary that. It depends on the drummer and the sound of the cymbals, but I’ve been using (Neumann) M 149s, the Royer 121s, or 451s. I put them a little higher than the drummer’s head.
Do you try to capture the whole kit or just the cymbals?
I try to set it up so I’m capturing a lot of the kit in there which makes it a little bigger sounding overall because you’re getting some ambience.
What determines your mic selection?
It’s usually the sound of the kit. I’ll start out with the mics that I normally use and just go from there. If it’s a jazz date then I might use the Royers and if it’s more of a rock date then I’ll use something else.
How much experimentation do you do?
Very little now. Usually I have a drum sound in 15 minutes so I don’t have to do a lot. When you’re working with the best guys in the world, their drums are usually tuned exactly the way they want and they sound great so all you have to do is capture that sound. It’s really pretty easy. And I work at the best studios where they have the best consoles and great microphones, so that helps.
I don’t use any EQ when I record. I use the mics for EQ. I don’t even use any compression. The only time I might use a little bit of compression is maybe on the kick, but for most jazz dates I don’t.
How do you handle leakage? Do you worry about it?
No, I don’t. Actually leakage is one of your best friends because that’s what makes things sometimes sound so much bigger.
The only time leakage is a problem is if you’re using a lot of crap mics. If you get a lot of leakage into them, it’s going to sound like crap leakage. But if you’re using some really good microphones and you get some leakage, it’s usually good because it makes things sound bigger.
I try to set everybody, especially in the rhythm section, as close together as possible. I come from the school where, when I first started, there were no headphones. Everybody had to hear one another in the room, so I still set everybody up that way. Even though I’ll isolate the drums, everybody will be so close that they can almost touch one another.
What’s the hardest thing for you to record?
Getting a great piano sound. You know, piano is a difficult instrument and to get a great sound is probably one of the more difficult things for me. The human voice is another thing that’s tough to get. Other than that, things are pretty simple.
The larger the orchestra, the easier it is to record. The more difficult things are the 8- and 9-piece dates, but I’ve been doing it for so long that none of it is difficult any more.
What mics do you use on piano?
I’ve been using the M 149s along with old Studer valve preamps on piano, so I’m pretty happy with it lately. I try to keep them up as far away from the hammers as I can inside the piano. Usually one captures the low end and the other the high end and then I move them so it comes out as even as possible.
It sounds like you’re a minimalist. You don’t use much EQ or compression.
No, I use very little compression and very little EQ. I let the microphones do that.
What’s your setup for horns?
I’ve been using a lot of (Neumann) 67s. On the trumpets I use a 67 with the pad in and I keep them in omnidirectional. I get them back about 3 or 4 feet off the brass. On saxophones I’ve been using M 149s. I put the mic somewhere around the bell so you can pick up some of the fingering. For clarinets, the mic should be somewhere up near the fingerboard and never near the bell.
How do you determine the best place in the studio to place the instruments?
I’m working at Capitol now and I’ve worked here so much that I know it like the back of my hand, so I know exactly where to set things up to get the best sound. It’s a given for me here. My setups stay pretty much the same. I try to keep the trumpets, trombones and the saxes as close as possible to one another so they feel like a big band. I try to use as much of the room as possible.
I want to make certain the musicians are as comfortable as they can be with their setup. That means that they have clear sightlines to each other and are able to see, hear and talk to one another. This means having all the musicians as close together as possible. This facilitates better communication among them and that, in turn, fosters better playing.
I start by setting members of the rhythm section up as close to each other as possible. To get a tight sound on the drums and to assure no leaking into the brass or strings’ mics, I’ll set the drums up in the drum booth. Then I’ll set the upright bass, the keyboard and the guitar near the drum booth so they all will be able to see and even talk easily to each other.
If there’s a vocalist, 90 percent of the time I’ll set them up in a booth. Very few choose to record in the open room with the orchestra, although Frank Sinatra and Natalie Cole come to mind.
If you had only one mic to use, what would it be?
A 67. That’s my favorite mic of all. I think it works well on anything. You can put it on a voice or an acoustic bass or an electric guitar, acoustic guitar, or a saxophone solo and it will work well. It’s the jack of all trades and the one that works for me all the time.
Bobby Owsinski is an author, producer, music industry veteran and technical consultant who has written numerous books covering all aspects of audio recording. For more information be sure to check out his website and blogs. Get the second edition of The Recording Engineer’s Handbook here.
Community Professional Helps “Horses with Hope” Rescue Horses
The new Broadview Farms facility includes an audio system designed around Community Professional loudspeakers.
Broadview Farms, near Hope, Maine, is the home of “Horses with Hope,” an organization whose mission is to rescue and rehabilitate abused, neglected and abandoned horses and match them with compatible homes.
In 2013, Horses with Hope completed a spacious new venue at Broadview Farms with an indoor riding arena, a viewing area, horse stalls and living quarters for staff and visitors.
Designed and built by “Houses and Barns by John Libby”, the facility includes an audio system designed to recreate the sounds from a commercial venue, such as a riding tournament, and to play music and provide communication for horse training.
General contractor Houses and Barns by John Libby retained AV Systems of Maine of Bowdoinham, Maine, to design and install the audio system.
Owner Bill Wayman chose Community R-Series loudspeakers for the arena and Community D-Series ceiling loudspeakers for the viewing area, offices and other spaces.
“The arena is open to the outside,” said Wayman, “so I wanted loudspeakers that sounded good and could deal with the temperature and humidity extremes we have in Maine.”
Wayman designed an array for the arena with four Community R1-94TZ loudspeakers equipped with 200-watt auto-transformers. He used a single R.5-99TZ for down-fill.
For the viewing area, offices and other spaces, Wayman designed a 70-volt distributed system with six pairs of Community model D6 ceiling loudspeakers.
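A 70-volt distributed system like this one lets many ceiling loudspeakers share a single amplifier channel: each loudspeaker is set to a fixed wattage tap, and the amplifier only needs to cover the sum of the taps plus some headroom. As a rough sketch of the sizing arithmetic (the 25 percent headroom figure and the 10-watt tap values are common rules of thumb, not specs from this installation):

```python
def required_amp_power(tap_watts, headroom=1.25):
    """Minimum 70 V amplifier channel power for a set of loudspeaker taps.

    headroom=1.25 reflects a common 25% rule of thumb; actual practice
    varies with program material and amplifier ratings.
    """
    return headroom * sum(tap_watts)

# Twelve ceiling loudspeakers tapped at a hypothetical 10 W each:
# required_amp_power([10] * 12) -> 150.0 watts
```

The convenience of the constant-voltage approach is that loudspeakers simply sum: adding a zone means adding its taps to the total and checking the amplifier still covers it.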
The audio system uses an Ashly 8-channel power amplifier and an Ashly controller/processor. Wayman provided two wireless headset microphones to allow the riders and trainers to talk to visitors in the viewing areas.
Trainers also use these microphones to communicate with riders. Users can select microphones, music or natural background sounds and control the system volume from a wall panel in each zone.
Wayman says the owners are very pleased with the sound quality and coverage of the audio system. “They agreed to have me install two additional zones after hearing the arena system,” he said.
Friday, August 30, 2013
In The Studio: Taming A Harshly Distorted Electric Guitar (Includes Video)
Keeping what you want while eliminating what you don't
One of the challenges when mixing a heavily distorted guitar is that sometimes it can sound a little bit harsh or brittle. This might be when the player slides his fingers across the strings, or maybe some fret noise comes into the performance — or even just certain notes/harmonics that are a bit unpleasant or too resonant.
You can try to notch out these frequencies with a conventional EQ by finding the parts of the performance that sound bad, finding the specific frequencies that sound bad, and reducing them by 3 or 6 dB.
However, a conventional equalizer is going to pull out those frequencies across the entire performance. So if there are parts that sound great, you’re still pulling out that harmonic content across the entire performance when you really just want to pull it out during parts that sound harsh. This is where a de-esser comes into play.
Conventionally a de-esser is used for vocals. When the singer makes “Sss” sounds, the idea is that the compressor is going to reduce the amplitude of those “Sss” sounds at the frequency you set, while letting the rest of the performance come through unprocessed. It’s a perfect application in this situation with a harsh distorted guitar, because it basically acts as an adaptive equalizer.
So if the performance gets a little too harsh, the compressor is going to turn those harsh parts down. However, when it’s not too harsh, it maintains the tone that you want to keep.
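Under the hood, this kind of frequency-selective processing is essentially split-band compression: isolate the harsh band, follow its level, and attenuate only that band when it exceeds a threshold. Here’s a minimal Python sketch of the concept (the biquad filter, the 5 ms envelope release, and the threshold/ratio values are illustrative assumptions, not the actual processing inside the Waves C-1):

```python
import numpy as np

def biquad_bandpass(x, fs, f0, q=1.0):
    """RBJ-cookbook band-pass biquad (0 dB peak gain), direct form I."""
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b0, b1, b2 = alpha, 0.0, -alpha
    a0, a1, a2 = 1 + alpha, -2 * np.cos(w0), 1 - alpha
    b0, b1, b2, a1, a2 = b0 / a0, b1 / a0, b2 / a0, a1 / a0, a2 / a0
    y = np.zeros_like(x)
    x1 = x2 = y1 = y2 = 0.0
    for n, xn in enumerate(x):
        yn = b0 * xn + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2
        y[n] = yn
        x2, x1, y2, y1 = x1, xn, y1, yn
    return y

def deess(x, fs, f0=4000.0, q=1.0, threshold=0.1, ratio=4.0):
    """Split-band 'de-esser': compress only the harsh band, pass the rest."""
    band = biquad_bandpass(x, fs, f0, q)   # sidechain = the harsh band itself
    rest = x - band                        # everything else stays untouched
    env = np.abs(band)                     # envelope: instant attack, ~5 ms release
    release = np.exp(-1.0 / (0.005 * fs))
    for i in range(1, len(env)):
        env[i] = max(env[i], release * env[i - 1])
    gain = np.ones_like(env)
    over = env > threshold
    gain[over] = (threshold / env[over]) ** (1.0 - 1.0 / ratio)
    return rest + band * gain              # the band is ducked only while loud
```

Because the gain reduction is computed from the band’s own envelope, quiet passages and everything outside the band pass through unchanged, which is exactly the “adaptive equalizer” behavior described above.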
Check out the video tutorial below, where I use a Waves C-1 compressor with frequency-specific sidechain settings to tame a harsh distorted electric guitar.
Eric Tarr is a musician, audio engineer, and producer based in Columbus, OH. He is a Ph.D. student in electrical and computer engineering at The Ohio State University, studying digital signal processing.
Be sure to visit The Pro Audio Files for more great recording content. To comment or ask questions about this article go here.
Thursday, August 29, 2013
Just Because It’s Sound Doesn’t Mean It Has To Be Mixed
When you only have a hammer...
Mixing is like driving – everybody does it, it gets you from here to there, and it seems like it’s been part of the culture forever. Driving is so pervasive that it’s easy to forget there are other ways of getting around.
Most of the time, though, it seems that we get behind the wheel simply out of habit. It’s only on those rare occasions when we deliberately walk to the store to pick up a loaf of bread, for example, that we remember there are other ways, perhaps better ways, of doing the simple things that need to get done.
It’s been said that when the only tool you have is a hammer, everything looks like a nail. This is particularly apt in the case of mixing for staged events. The bias toward mixing that’s pervasive in the industry is traceable in part to the large overlap in the designs of traditional recording, broadcast and live consoles, as well as to schools teaching audio (i.e., “recording schools”) that continue to focus on the art and techniques of the mixdown.
Originating in broadcast and recording sessions involving multiple microphones, and refined in multitrack recording studios producing mono or stereo masters, mixing has become so entrenched in the industry and in the minds of many who dream of working in it that it’s almost as if no other way of working with sound is even remotely conceivable.
But of course it is. Mixing live sound is a relatively recent innovation. It wasn’t so long ago that virtually all pop music acts relied on instrument amplifiers and the acoustic output of the drum kit to fill a venue, the level of the vocals being brought into balance either via a relatively primitive house sound system or a couple of loudspeaker columns at the sides of the stage. Even the Beatles toured that way. It was only during their first tour of the U.S. and Canada in 1964 that the limitations of this setup became apparent, when their then state-of-the-art 100-watt Vox amplifiers were drowned out by thousands of screaming fans.
Sound reinforcement systems evolved rapidly over the ensuing decade, spurred by the needs of bands touring large venues and the staging of outdoor festivals, such as those at Monterey, the Isle of Wight, and Woodstock, where sound system designer Bill Hanley mixed sound for the three-day festival using several Shure 4 x 1 mic mixers.
Amplifiers and drums were miked along with vocals, and consoles soon grew larger as a result, offering more and more input channels. They evolved to include EQ, dynamics processing, and eventually an output matrix section providing the ability to route a variety of inputs to any or all of a series of outputs, a capability that had proven useful on Broadway and in London’s West End theatres.
One significant difference between rock concerts and live theatre, however, is that in concerts, the sound system is employed primarily to achieve high sound pressure levels throughout the venue, whereas in live theatre, the goal is to increase intelligibility. That line is often blurred in big, dynamic rock musicals, such as those of Andrew Lloyd Webber, whose sound designers evolved separate vocal and orchestra mixes, striving simultaneously for clarity in the voices and power in the orchestra.
When the deployment of delayed loudspeakers in the audience area became a staple of sound reinforcement systems in the late 1970s and early 1980s, it became possible to permit lower volume levels at the front of a large venue than would otherwise be required to provide sufficient reinforcement at the rear. For theatres, houses of worship, and corporate events, this is critical: if voices are presented at a level that is significantly louder than their natural level, any attempt to convey an illusion of realism goes out the window. As an added benefit, remote loudspeakers can be equalized to compensate for the inevitable acoustic deficiencies that lurk under balconies and in other less-than-favorable acoustic areas of a hall.
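The timing itself is simple arithmetic: the delay for a remote loudspeaker is the extra path length from the main system divided by the speed of sound, usually plus a few milliseconds so the mains still arrive first and remain the perceived source. A hedged sketch, assuming roughly 343 m/s at room temperature and a nominal 10 ms precedence offset (both common working values, not figures from this article):

```python
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C

def fill_delay_ms(dist_from_mains_m, dist_from_fill_m, precedence_offset_ms=10.0):
    """Delay for a fill/under-balcony loudspeaker so its output trails the
    main PA at the listener, keeping the mains as the perceived source."""
    path_difference_m = dist_from_mains_m - dist_from_fill_m
    return path_difference_m / SPEED_OF_SOUND * 1000.0 + precedence_offset_ms
```

For a seat 40 m from the mains but only 5 m from an under-balcony fill, this works out to roughly 112 ms, which is why delay lines rather than level alone are what make distributed systems sound coherent.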
“A matrix section on a console can help manage a complex PA system,” Craig Leerman wrote recently. “I’ll route the main console left and right outputs to the matrix and use the various matrix outputs to feed the different zones and delays. With EQ and delay available on the matrix outputs, it’s easy to tune and time align a loudspeaker zone to the rest of the PA.
“Once all of the various zones have been dialed in, any further overall level adjustments are simply handled via the left and right masters on the console. Using the matrix for this can give both stereo and mono feeds of the program, as well as a reduced stereo image feed.”
For recording or broadcast requirements with a limited channel count, a stereo or mono mix will usually fit the bill, but for the house, perhaps we can employ the matrix to go even further.
As a case in point, consider a talker at a lectern in a large meeting room. Conventional practice would dictate routing the talker’s microphone to two loudspeakers at the front of the room via the left and right masters, and then feeding the signal with appropriate delays to additional loudspeakers throughout the audience area. A mono mix with the lectern midway between the loudspeakers will allow people sitting on or near the center line of the room to localize the talker more or less correctly by creating a phantom center image, but for everyone else, the talker will be localized incorrectly toward the front-of-house loudspeaker nearest them.
In contrast to a left-right loudspeaker system, natural sound in space does not take two paths to each of our ears. Discounting early reflections, which are not perceived as discrete sound sources, direct sound naturally takes only a single path to each ear. A bird singing in a tree, a speaking voice, a car driving past – all of these sounds emanate from single sources. It is the localization of these single sources amid innumerable other individually localized sounds, each taking a single path to each of our two ears, that makes up the three-dimensional sound field in which we live. All the sounds we hear naturally, a complex series of pressure waves, are essentially “mixed” in the air acoustically with their individual localization cues intact.
Our binaural hearing mechanism employs inter-aural differences in the time-of-arrival and intensity of different sounds to localize them in three-dimensional space – left-right, front-back, up-down. This is something we’ve been doing automatically since birth, and it leaves no confusion about who is speaking or singing; the eyes easily follow the ears. By presenting us with direct sound from two points in space via two paths to each ear, however, conventional L-R sound reinforcement techniques subvert these differential inter-aural localization cues.
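To put rough numbers on those inter-aural time differences: a common far-field approximation (the Woodworth model, assuming a head radius of about 8.75 cm, both assumptions of this sketch rather than figures from the article) gives a maximum difference of well under a millisecond, yet that tiny interval is the cue the ear uses:

```python
import math

def itd_ms(azimuth_deg, head_radius_m=0.0875):
    """Woodworth far-field approximation of interaural time difference.

    azimuth_deg: source angle off the median plane (0 = straight ahead).
    """
    theta = math.radians(azimuth_deg)
    return head_radius_m / 343.0 * (theta + math.sin(theta)) * 1000.0
```

A source directly ahead produces zero difference; one fully to the side reaches the far ear only about 0.65 ms later. That fraction of a millisecond is the entire time-of-arrival cue that conventional L-R reinforcement subverts.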
On this basis, we could take an alternative approach in our meeting room and feed the talker’s mic signal to a single nearby loudspeaker, perhaps one built into the front of the lectern, thus permitting pinpoint localization of the source. A number of loudspeakers with fairly narrow horizontal dispersion, hung over the audience area and in line with the direct sound so that each covers a fairly small portion of the audience, will subtly reinforce the direct sound as long as each loudspeaker is individually delayed so that its output is indistinguishable from early reflections in the target seats.
Such a system can achieve up to 8 dB of gain throughout the audience without the delay loudspeakers being perceived as discrete sources of sound, thanks to the well-known Haas or precedence effect. A talker or singer with strong vocal projection may not even need a single “anchor” loudspeaker at the front at all.
As an added benefit to achieving intelligibility at a more natural level, the audience will tend to be unaware that there is a sound system in operation, an important step in reaching the elusive system design goal of transparency – people simply hear the talker clearly and intelligibly at a more or less normal level. This approach, which has been dubbed “source-oriented reinforcement,” precludes the sound system from acting as a barrier separating the performer from the audience, because it merely replicates what happens naturally, and does not disembody the voice through the removal of localization cues.
Traditional amplitude-based panning, which, as noted above, works only for those seated in the “sweet spot” along the center axis of the venue, is replaced in this approach by time-based localization, which has been shown to work for better than 90 percent of the audience, no matter where they are seated. Free from constraints related to phasing and comb-filtering that are imposed by a requirement for mono-compatibility or potential down-mixing – and that are largely irrelevant to live sound reinforcement – operators are empowered to manipulate delays to achieve pin-point localization of each performer for virtually every seat in the house.
Source-oriented reinforcement has been used successfully by a growing number of theatre sound designers, event producers and even DJs over the past 15 years or so, and this is where a large matrix comes into its own. Happily, many of today’s live sound boards are suitably equipped, with delay and EQ on the matrix outputs.
The situation becomes more complex when there is more than one talker, a wandering preacher, or a stage full of actors, but fortunately, such cases can be readily addressed as long as correct delays are established from each source zone to each and every loudspeaker on a one-to-one basis.
This requires more than a console level matrix with just output delays, or even assigning variable input delays to individual mics, since it necessitates a true delay-matrix allowing multiple independent time-alignments between each individual source zone and the distributed loudspeaker system.
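In code terms, such a true delay matrix is just a two-dimensional array of crosspoints, each with its own level and delay, between source zones and loudspeaker outputs. A generic minimal sketch (illustrating the concept only, not any particular product’s processing):

```python
import numpy as np

def render_delay_matrix(sources, delay_ms, level, fs):
    """Mix N source zones to M loudspeaker feeds with an independent
    delay and level at every crosspoint.

    sources:  (N, S) array of source-zone signals
    delay_ms: (M, N) per-crosspoint delays in milliseconds
    level:    (M, N) per-crosspoint linear gains
    returns:  (M, S) array of loudspeaker feeds
    """
    n_src, n_samp = sources.shape
    n_out = delay_ms.shape[0]
    out = np.zeros((n_out, n_samp))
    for m in range(n_out):
        for n in range(n_src):
            d = int(round(delay_ms[m, n] * fs / 1000.0))  # delay in samples
            out[m, d:] += level[m, n] * sources[n, :n_samp - d]
    return out
```

The key property is that the delay from source zone 1 to loudspeaker A is independent of its delay to loudspeaker B, which is what allows every seat to receive the correct precedence relationship for every performer.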
One such delay matrix that I have used successfully is the TiMax2 Soundhub, which offers control of both level and delay at each crosspoint in matrixes ranging from 16 x 16 up to 64 x 64 to define unique image definitions anywhere on the stage or field of play.
The Soundhub is easily added to a house system via analog, AES digital, and any of the various audio networks currently available, with the matrix typically being fed by input-channel direct outputs, or by a combination of console sends and/or output groups, as is the practice of the Royal Shakespeare Company, among others.
A familiar looking software interface allows for easy programming as well as real-time level control and 8-band parametric EQ on the outputs. A PanSpace graphical object-based pan programming screen allows the operator to drag input icons around a set of image definitions superimposed onto a jpg of the stage, a novel and intuitive way of localizing performers or manually panning sound effects.
For complex productions involving up to 24 performers, designers can add the TiMax Tracker, a radar-based performer-tracking system that interpolates softly between image definitions as performers move around the stage, thus affording a degree of automation that is otherwise unattainable.
Where very high sound pressure levels are not required, reinforcement of live events may best be achieved not by mixing voices and other sounds together, but by distributing them throughout the house with the location cues that maintain their separateness, which is, after all, a fundamental contributor to intelligibility, as anyone familiar with the “cocktail party” effect will attest. As veteran West End sound designer Gareth Fry says, “I’m quite sure that in the coming years, source-oriented reinforcement will be the most common way to do vocal reinforcement in drama.”
The TiMax PanSpace graphical object-based pan programming screen.
While mixing a large number of individual audio signals together into a few channels may be a very real requirement for radio, television, cinema, and other channel-restricted media such as consumer audio playback systems, this is certainly not the case for corporate events, houses of worship, theatre and similar staged entertainment.
It may sound like heresy, but just because it’s sound doesn’t mean it has to be mixed. We now have more than the proverbial hammer at our disposal. With the proliferation of matrix consoles, adequate DSP, and sound design devices such as the TiMax2 Soundhub, mixing is no longer the only way to work with live sound – let alone the best way for every occasion.
Sound designer Alan Hardiman is president of Associated Buzz Creative, providing design services and technology for exhibits, events, and installations.
PreSonus Now Shipping New Studio One 2.6 DAW Software
Offers advanced integration features as well as 50-plus improvements
PreSonus is now shipping Studio One 2.6, a significant upgrade to the company’s digital audio workstation software for Mac and Windows that adds integration with PreSonus StudioLive AI-series mixers, Nimbit and SoundCloud, as well as more than 50 other improvements and workflow enhancements.
Upon launching, users will notice that the Start page has been enhanced. A new Nimbit dashboard on the Start page provides access to up-to-date Nimbit user account statistics (number of fans, number of active promotions, and sales) directly from Studio One. In addition, the user receives help messages on how to engage with fans and customers and boost sales.
A new SoundCloud dashboard displays key statistics from a user’s SoundCloud account, as well as a scrolling display of the SoundCloud activity stream.
Further, when recording to the newly updated Capture 2.1, users can now save StudioLive AI mixer scenes along with the Capture audio. In Capture, this means audio can be played back through the mixer using the original scene that was in use during recording, even if it was recorded on a different StudioLive AI mixer.
When a Capture 2.1 session is opened in Studio One 2.6 Artist, Producer, or Professional, and a mix scene is present, all the fader, pan, mute, and Fat Channel settings for each track are automatically imported into Studio One.
What enhances this is the new Fat Channel Native Effects plug-in, which is a native version of the StudioLive 32.4.2AI mixer’s Fat Channel, including the gate, compressor, limiter, and four-band fully parametric EQ. With the plug-in and the saved mixer scene, users can play tracks using the same processing and settings that were being used during recording, even without a StudioLive mixer available.
The Fat Channel presets are compatible with their StudioLive AI counterparts and can be exported from Studio One to a StudioLive AI mixer via Universal Control. The Fat Channel plug-in also is a regular Native Effects plug-in, so its powerful processing can be used on any Studio One tracks and mixes.
Studio One has long integrated with Mackie Control/HUI-compatible hardware controllers; in version 2.6 (all varieties, including Studio One Free), this support has been considerably enhanced. Mackie Control/HUI integration now includes Send slot navigation, Sends support, Control Link mapping, momentary Mute/Solo, Track Edit mode, FX Bypass mode (EQ Button), add insert/send/instrument, plug-in/instrument list and preset list navigation, and more.
The metronome features have been enhanced significantly, including accent and offbeat click, convenient click-track rendering, custom click samples with drag-and-drop and menu options, and the ability to save all metronome settings, including click sounds, as a preset. A visual numerical count-in is provided when the user hits the Record button.
Studio One Professional users will also find enhancements to the Project page. A new CD time display shows the current CD length of the project, and the Transport bar shows relative song position. The update also delivers improved ISRC text input.
Version 2.6 also adds a long list of workflow and editing improvements, enhancements to the MIDI engine, and more. For more about Version 2.6, go here.
Harman’s Soundcraft Launches “Mixing with Professionals” For Si Series Consoles
Hands-on training with Si Expression and Si Performer consoles
In response to popular demand, Harman Professional’s Soundcraft has launched a new “Mixing with Professionals” program for its Si Series of digital mixing consoles. Building on the program for the Vi Series, these sessions will offer full hands-on training, focusing on Soundcraft Si Expression and Si Performer consoles.
“Mixing with Professionals” Si Series training sessions will be held throughout the United States and hosted by factory-trained Soundcraft sales representatives. Guest engineers will join them to share their experiences with Soundcraft consoles and to provide additional training.
These 3-hour sessions will follow a classroom format of six “stations,” with one to four people per station, working on either an Si Expression or Si Performer console.
“Our ‘Mixing with Professionals’ training programs have been so overwhelmingly popular that, combined with the success of our Si Expression and Si Performer lines, we have received considerable demand to expand our training to include these products,” states Katy Templeman-Holmes of Soundcraft Studer. “Training is free and open to anyone wanting to learn, or to more advanced users looking for additional tips and tricks to get the most out of their Si consoles.”
Anyone interested is invited to register online at http://usa.soundcraft.com/mwp/events.aspx or navigate to the Events Calendar at http://usa.soundcraft.com/mwp/events. Attendees are asked to bring their own headphones to the sessions (mini or 1/4-inch).
23rd Annual Parsons Audio Expo Slated For This Coming November
Expo will include Yamaha Rolling Showroom with NUAGE, and more
Parsons Audio (Wellesley Hills, MA) will hold its 23rd annual EXPO this coming Thursday, November 14, from 10 am to 7 pm at the Dedham Holiday Inn in Dedham, MA.
Parsons Audio, a leader in offering professional audio products in the New England area, will begin the day with Professional Development Workshops taught by industry leaders and focusing on current trends. At noon, the manufacturer exhibit floor opens with products from all of the professional audio manufacturers represented by Parsons Audio.
This year, the expo will include the Yamaha Commercial Audio Systems Rolling Showroom with the new NUAGE system. Yamaha staff will be on hand for presentations and demonstrations.
NUAGE, a collaboration between Yamaha and Steinberg, is a hardware and software system that adds the power of the Audinate Dante audio network to world-class recording, post production, live-to-tape broadcast, and house of worship recording for re-broadcast.
“We enjoy keeping our customers and potential customers up to date on what is new in the professional audio market,” states Roger Talkov, general manager, Parsons Audio. “The Professional Development seminars are extremely useful in providing the latest industry trends.”
For more information go to www.paudio.com/expo.
In The Studio: An Uncommon Cure For A Muddy Mix (Includes Video)
Addressing an element that can detract from clarity
In this video, Joe Gilder shares an uncommon cure for a muddy mix. Of course, there are numerous cures for the problem; much depends on its specific cause, and there are many variables.
But here’s one you might not be thinking of, and it can lead you on a “wild goose chase” if it’s not addressed early: reverb. As helpful as reverb can be in enhancing a mix in any number of ways—adding fullness and depth and so on—it can also cause some problems, detracting from the clarity of the track.
Joe provides a discussion of the problem and then some solutions to address it.
Joe Gilder is a Nashville-based engineer, musician, and producer who also provides training and advice at the Home Studio Corner. Note that Joe also offers highly effective training courses, including Understanding Compression and Understanding EQ.
New 400i And 600i Join Yamaha STAGEPAS Portable PA Line
Offer high-efficiency amplifiers, newly designed loudspeakers and high-performance DSP
Yamaha has introduced the STAGEPAS 400i and 600i, successors to the 300 and 500 in the company’s popular STAGEPAS portable PA line.
Combining high-efficiency amplifiers, newly designed loudspeakers and high-performance DSP, the new STAGEPAS models deliver a significant increase in power output with substantial improvements in sound quality and reliability. They also include the addition of iPod/iPhone connectivity, SPX digital reverbs, an onboard feedback suppressor, and more versatile EQ in meeting a wider range of applications.
Both the STAGEPAS 400i and 600i include two lightweight loudspeakers and a detachable powered mixer, along with a pair of loudspeaker cables and power cord. The mixer also fits into one of the loudspeaker enclosures, creating an even smaller footprint when in use.
Delivering 400 watts and 680 watts respectively, the 400i and 600i offer a substantial increase in power output compared to previous models, with vital components protected by advanced limiter circuits.
The mixers include four mic/line inputs and six line inputs (the 400i includes four line inputs). Channels 3 and 4 feature a combo jack that can accommodate XLR and 1/4-inch cables.
In addition, both models offer RCA jacks, a 1/8-inch stereo mini jack and an iPod/iPhone USB input, allowing for a wide variety of connectivity options. The iPod/iPhone connection offers users high-quality playback while charging their devices.
The detachable mixer includes different high-resolution SPX reverbs that are available with the twist of a knob, while Yamaha’s 1-knob Master EQ offers optimized settings to match the sound with its environment via a single control. An onboard feedback suppressor lets users remove unwanted squeals with the touch of a single button.
Basic mixer functions have also been upgraded. The 400i offers a 2-band channel EQ, while the 600i adds mid-range control for 3-band EQ that particularly benefits guitar and vocal performances. A flexible new feature, switchable stereo/mono inputs, can transform each stereo channel into two independent mono channels if an application requires more input capability.
STAGEPAS also offers switchable Hi-Z inputs for hassle-free connection of passive pickup instruments, as well as phantom power for condenser microphones. The user can also turn the reverb off or on instantly using an optional footswitch, a handy feature for MC-ing an event or speaking during solo performances.
Both the 400i and 600i come equipped with monitor and subwoofer outputs, providing seamless expandability. For applications requiring a more powerful front-of-house setup, a monitor mix or a more prominent bottom end, plug in powered speakers such as Yamaha DXR Series loudspeakers or a DXS Series subwoofer.
“We designed these new models to offer high quality sound that anyone can dial in quickly so public speakers, presenters and musicians can get to what they do best,” said John Schauer, product manager, Yamaha Live Sound. “The STAGEPAS 400i and 600i continue the legacy of providing everything necessary to easily transform any environment into a personal stage.”
Yamaha Corporation of America (YCA)
Tuesday, August 27, 2013
Do You Want To Get Paid For All Of This?
Do your homework before agreeing to waste your life
Yes, the goal of the business. Getting paid.
I was told once that there are three parts of the gig: 1) Getting the gig. 2) Working the gig. 3) Collecting the check.
It’s like three legs on a stool. All three need to be there or you have a problem. If you are a volunteer at a church, this is irrelevant, at least until you transition into the paid side. Might be good to know all of this in advance.
I used to have a venue that I did a lot of shows for. They got me for a good price and they gave me shows when the place was rented out. Good relationship.
A small time beauty pageant rented the building. They were given my number and we worked out the details. Never had any problem working like that there.
I was there early. I went above and beyond. I assisted their video crew to make sure everything went well. I handled the lighting for them. I helped carry their gear out when we packed up. Normal service level for every client.
When I went to collect the check, it was written out for half of what we agreed on. I tried to give them the benefit of the doubt. We never took a deposit, I needed the full amount.
“Where’s your contract?”
That’s what she said.
She actually took off, leaving her husband to run interference. That almost ended in a fistfight. I was furious. Half. Seriously. Half.
Two things happened after that. 1) That venue never let her work a show there again. 2) I completely changed how I work.
Unless it was a client I had good experiences with, everyone paid a deposit to hire me. Whether it was installing a system or running a show. Anywhere from 10-50 percent of the estimated cost. They had to put some skin in the game or I wasn’t blocking my schedule for them.
I created contracts and made sure there was a paper trail to each gig. Even the guys who used me regularly had to have a contract or paper trail.
I played dumb a lot. The regulars would call and try to get a verbal agreement. I would tell them to email me the details and I would confirm as soon as I got back. I never gave them a yes or no on the phone.
I told them how I was likely to forget the details. I was working on another project and couldn’t make notes or work it out with them right then: “Email me the dates and details. I will call you when I get time to go over them.”
Sometimes, they got frustrated. Eventually, they knew the routine. Whenever there was a conflict, after that, I just pulled out the contract or email and reminded them of what we agreed to do. Saved me a lot of headaches and time.
Think about it. Block a day or two out of your life. Don’t plan anything else, not even sleep. Plan to work yourself into a sweat and take orders from random people. Plan to spend your own money for lunch and gas to be there. Now. Plan to go home empty handed. No check. Wasted day. Hard work. Aggravated. Time taken away from your family. Cost you money to be there. It happens. Unless you plan ahead.
Don’t worry about them not calling you again. Don’t worry about losing that show. Do you want days like that? The legitimate clients understand that people need to get paid. The hustlers and hacks are the ones trying to get away with that crap. You don’t need their work anyway.
One of the crews I worked for taught me how to handle that. I saw this more than once.
We were hired to provide stage, sound, lights and techs for a local concert. They brought in some good bands and a good headliner. They rented a local stadium for about 3,000 people and expected to fill it up.
Our contract required a 50 percent deposit to hire us with the balance due as soon as the rig was up and operational. No exceptions. Not after the show or even after soundcheck. As soon as lights came on and we could check microphones we were to be handed the second payment.
The promoter had paid the deposit, but didn’t have the rest once we were live. He actually expected to raise it from walk-up ticket sales. He hadn’t sold enough to cover it by the time we finished setup.
The owner calls me over, tells me the story and has me pull power.
As the bands are setting up and the audience is listening to house music, I pull the power cables off the main racks. Everyone starts to freak out. We are an hour from the first scheduled soundcheck. Five bands waiting to set up. No sound or lights.
The owner apologizes to everyone, but clearly informs the people in charge what is about to happen.
“All of these guys on my crew are being paid to be here. Those trucks have burned a lot of fuel to haul this stuff here. That gear could be on another paid show right now, but it’s here because you signed a contract and agreed to pay us before soundcheck. I’m not spending any more time or money on a show that isn’t going to pay for it. Your deposit breaks us even as of right now. If the balance isn’t paid by soundcheck, we are loading it back in the trucks and going home.”
That promoter got busy. I don’t know what bank he robbed or if he raided grandma’s mattress, but that money was there by soundcheck.
Make sure the gig is worth your time. In the early days, you end up working free or cheap, just to get established. Nobody walks into six figures as a rookie tech. Get past that. You can find out what reasonable day rates are for the type of work you are doing. You can find out what is a fair price to run a whole show.
Do your homework before agreeing to waste your life. Do the math. If you will make more money working at Walmart, just do that. That nice check may sound great until you break it down over 14 hours, gas money and lunch. Not to even mention the steady job you could have instead of this hit and miss show money.
So. If you like working for free, keep that volunteer status. If you want a real career, learn the business side. If you aren’t willing to negotiate and discuss money, you will always be a volunteer. Whether you planned to or not.
If you need extra backup for the contract and collection side, check out the other way I make money on my blog. I work with a company that makes sure I have all the legal counsel I need. No more bad decisions or stupid advice for me. Watch the introduction video and see for yourself.
M. Erik Matlock is a 20-plus-year veteran of pro audio, working in live sound, install, and studios over the course of his career, as well as owning Soundmind Production Services. Erik provides advice for younger folks working (or aspiring to work) in professional audio at The Art Of The Soundcheck—Random Stories and Wisdom from an Old Soundguy. Check it out here.
Keeping The Boss Happy: The Monitor Scene For Bruce Springsteen & The E-Street Band
Talking with the dynamic duo of monitor engineering
Troy Milner and Monty Carlo have worked seamlessly side-by-side for more than a decade as monitor engineers, riding the faders for Bruce Springsteen and his 17-piece E-Street Band at stage left and stage right respectively – and they wouldn’t have it any other way. I recently caught up with the dynamic duo backstage prior to a show at London’s Wembley Stadium.
Paul Watson: So, four hands are better than two, then?
Troy Milner: I guess so! [laughs] We are completely independent of each other though; we each get our own splits, and we each have our own set of stage racks and Waves servers.
Monty Carlo: With 18 people on stage, it’s pretty involved, and with Bruce, you never know – he does a set list but he doesn’t follow it, ever, so we’re always on our toes!
TM: They’ve actually always had two monitor engineers. Monty’s been here a lot longer, and I joined on in 2000. It’s actually the way they’ve always liked it for 20-plus years, but we can do a lot more now due to the technology advancements.
How does your partnership work, exactly?
TM: Well, I take care of the drummer, the violin player, the guitarist, the bassist, and the keyboard player who is right here next to me; then I deal with various wedges that are located around the stage for some solos for Bruce.
MC: I handle pretty much everybody else, really. We each have a lot going on and there are a lot of cues for each song; and again, as Bruce doesn’t follow the set list, well…
I can’t see any wedges on stage – where are you hiding them?
MC: [smiles] There are a number of proprietary Solotech wedges embedded in the stage, a mixture of double 12s, single 12s and single 15s, and we’re using JBL VT4888s for side fills. The rest of the band is on in-ears, but Bruce is completely old school.
Troy Milner (left) and Monty Carlo at one of the DiGiCo SD7s in Springsteen monitorworld prior to a show at Wembley Stadium in London. (click to enlarge)
What in-ear systems are you running, and do you have any RF issues?
TM: We use Shure kit, PSM1000 IEM systems and the Axient wireless mics, which we like a lot. The boxes underneath are Albatross headphone amps that I use for the drummer – he’s hard wired. When he sits down he plugs right into his seat on his left side and never moves, so he doesn’t need to be wireless.
MC: We have 70 channels of RF between backline and myself and Troy, and although here [in the UK] it’s not too bad, when we’re in Italy… Well, it’s notorious for RF issues. Thankfully, the kit we’re using makes life a whole lot easier than it could be!
What does Bruce like to hear in his wedges?
MC: He’s got a little bit of everything – it’s so tough as each musician has their own wants and needs, but with Bruce, I just kind of fill it up around him between the side fills and the floor wedges so that he hears everything. I have everything panned to make it feel more “live” – the piano is coming from his left and the organ from the right, and the same with the horns, just to kind of open things up, and so he knows where it’s all coming from.
And after all these years, Bruce is still on a classic Shure SM58 capsule.
MC: Absolutely – it still does the job great. We’ve tried a few different things, but it’s still the best sounding and most reliable solution. Also, when it rains and Bruce is out running through the crowd, we don’t have to worry about it falling apart. You can build a house with it.
You both mix on DiGiCo SD7s. Is it essential that you’re on the same console?
Proprietary Solotech wedges in the stage keep it really clean. (click to enlarge)
TM: For our setup, absolutely. We have snapshots for all of the songs, and I’m up to 205. There are some songs that I know Bruce won’t do, but every one is programmed for me on the snapshots. I couldn’t do that without the SD7.
MC: On the whole, the SD7 has been really flexible. It’s also great for moving stuff around. Troy double-assigns the drums so the drummer has his own set of drum inputs and the rest of the band has their own set too, so in terms of tailoring things quickly, everything’s just so easy to do on this desk.
TM: That’s right, the drummer is a little more demanding, so I kind of mix him old school; the control groups are pretty static for him. I’ll hammer him with certain parts that he just wants to hear. For example, he might want two bars of the opening riff from the guitar player, then he wants to get rid of it, so I have to be very hands-on. Monty’s obviously got different stuff that he handles, too.
So on one hand you’re mixing dynamically, yet you’re also relying on hundreds of snapshots… It must get scary if Bruce throws you a curveball.
TM: Oh it can get pretty crazy, that’s for sure! Although the SD7 is pretty much instant access with regard to recalling snapshots, because Bruce has so many songs, it does slow the process down a little: for example, 27 of his songs start with the letter “s,” so it can still take me a second to locate them even with the shortcut buttons.
In fact, I recently asked one of the software guys at DiGiCo if he could give me the first two or three letters rather than just one to search snapshots, as that would be perfect, and he was like, “You guys are worse than Broadway!” [laughs]
How advantageous is it having banks of 12 faders on the console rather than eight?
TM: Oh, very, and for drums especially. Also, having 12 in the center for the control groups is a real bonus – I have a bank for mixing control groups and another bank for mute groups and that works really, really well.
Additionally, the console’s assignable rotaries are perfect for me on my drum bank. I’m always writing thresholds on the gates for the drums because he is so dynamic, and so that I always know when I am in the drum bank – it’s just a visual thing. These functions save me huge amounts of time.
What are your mix counts?
TM: With all the reverbs, tech mixes and crew mixes, we’re at 60 outputs; and there’s two of us, remember. I scratch my head and think “how did I get to 60?” But I have a lot of sends that I use and the keyboard player has his own mixer, so instead of doing direct outs I just send 16 stem mixes to his mixer, then he sends his mix back to me so I can broadcast it wirelessly for him.
Is there much digital processing in Bruce’s vocal chain, or is this also old school?
TM: I’m real simple on it, because Monty is doing Bruce’s monitor mix. I take care of the vocal for everybody else, so I can tailor it a little more and control it as he is so dynamic and all over the place, which is awesome.
But again, it means you’re having to ride the fader?
Springsteen’s mic is a blend of cutting-edge Shure Axient wireless technology and an old school SM58 capsule. (click to enlarge)
TM: That’s right. I’m feeding Bruce’s vocals to the six people I take care of. I run the multiband compressor, which is just great, and then use a little bit of EQ before running it through the Waves Blue 76 just as an overall “grab.”
How are you finding the Waves SoundGrid?
TM: It’s been great, but obviously the stuff in the desk has been great too. We do have some guitar amp sims and distortions though – Bruce plays the harp through his vocal mic and it sounds like a distorted miked amp, which we’re using a Waves guitar simulator for.
You’ve got two DiGiCo SD Racks each?
TM: Yes. In my world, I’m running old school copper snakes from Monty, and Monty is basically “control central.”
MC: I have all of the splits of everything and we split copper to Troy, copper to me, and then copper to front of house. We’ve talked about sharing racks between us; we haven’t done it yet, but maybe down the road it’s something we can do.
Communication between the two of you during a show must be crucial – how vocal are you?
TM: It depends, but we do have a great talkback system. I basically have a stereo mix of all the talkbacks that I send to a matrix, then I send whatever I am cueing to that same matrix to my wireless cue system, so no matter what I am listening to, the talkbacks are there too.
MC: Exactly, so if there’s a problem on my side I can say “hey we’re gonna switch this.” We always make sure we have direct communication, as it’s another tool to keep us ahead of the band. Some shows we’re more vocal than others, but we’re always on top of things.
Sounds like you’ve got the perfect setup going on…
TM: Unless we’re in Italy…
MC: [laughs] Yep, we’re all good until we go to Italy!
Paul Watson is the editor for Europe for Live Sound International and ProSoundWeb.
Church Sound: Mixing Like A Pro, Part 1—Gain
Getting it right can change your mixes for the better
One of the more common audio mistakes we see in churches is improper setting of input gain levels on the house mixer. Getting input gain right is one of the most critical steps in creating good audio mixes.
Setting the input gain affects absolutely everything we do on the console and it can make or break your house mix, monitor mix, recording mix, etc. Before we start mixing the house on the channel faders or do anything else, every channel’s input gain should be dialed in.
A Simple Analogy
Think of your audio channel as a garden hose, and the gain knob (a.k.a., trim or HA/head amp knob) as the water spigot. If we were going to fill up a bucket, there is an ideal amount to open the spigot.
Opening it up too little means we get very little water and while it works, it takes a while to fill the bucket and is not terribly effective. Opening it up too much often means it’s going to get messy with water going everywhere.
This concept holds true with audio. Too little gain gives us weak, inefficient sound. It’s not that there isn’t sound, but it doesn’t sound as big and full as we’d typically like. Overcompensating later to get that weak level back up to where it should be also introduces unpleasant noise into the system.
We also make a mess when we turn our gain up too much. Pushing too much signal through causes clipping—not a pretty sound!
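In the digital domain, that mess is easy to demonstrate with a little arithmetic. This is a deliberately minimal sketch with hypothetical sample values: once the gained-up signal exceeds the converter’s full scale, the peaks simply get flattened (clipped).

```python
def apply_gain(samples, gain_db, full_scale=1.0):
    """Apply gain and hard-clip anything beyond full scale,
    which is what happens when a digital input stage runs out of headroom."""
    gain = 10 ** (gain_db / 20.0)  # convert dB to a linear multiplier
    return [max(-full_scale, min(full_scale, s * gain)) for s in samples]

# A signal peaking at -6 dBFS (0.5 linear)
peaks = [0.5, -0.5]
print(apply_gain(peaks, 0))   # unity gain: passes through clean
print(apply_gain(peaks, 12))  # +12 dB pushes 0.5 to ~1.99 -- flattened at 1.0
```

The flattened peaks are the “not a pretty sound” the text describes: the waveform’s shape, and therefore its tone, is destroyed.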
How Do I Know When Gains Are Set Right?
Unfortunately it’s a little different on every console, but there is a pretty easy indicator that you can use to find the sweet spot. On most consoles, analog or digital, you usually find an input meter—or at the least a little level indicator light.
On the meter, you usually see three different colored sections: green, yellow and red. Even if you have a single light, typically it’ll show green if you have signal, yellow if you’re getting a lot of signal and red if you’re getting too much.
On most consoles, a great target for your gain is right where the green and the yellow lights meet. On many consoles that’s the number 0 on the meter, or 0 dB. For others it’s a different number (for example, on Yamaha digital consoles this happens around -18 dB).
Regardless of the number, setting your gain to where the green light meets the yellow light should give you a big, full sound with plenty of head room to avoid clipping.
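The reason a weak gain setting can’t simply be made up later can be shown with a little dB arithmetic. The -90 dBFS noise floor below is a hypothetical figure chosen for illustration, not a measurement of any particular console:

```python
def snr_db(signal_dbfs, noise_dbfs):
    """Signal-to-noise ratio in dB: the gap between signal and noise floor."""
    return signal_dbfs - noise_dbfs

NOISE_FLOOR = -90.0  # hypothetical preamp noise floor, in dBFS

# Gain set correctly: signal sits at the green/yellow boundary (-18 dBFS here)
print(snr_db(-18.0, NOISE_FLOOR))  # 72 dB between signal and noise

# Gain set 22 dB too low, then "made up" at the fader: the fader raises
# signal AND noise together, so the ratio never improves.
print(snr_db(-40.0 + 22.0, NOISE_FLOOR + 22.0))  # still only 50 dB
```

In other words, whatever signal-to-noise ratio the input gain stage establishes is the best the channel will ever have.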
Remember, the first thing you need to do is set input gains before anything else. In fact, my ideal way to begin sound check is to have the band run through 1-2 minutes of the biggest song on their list for that week.
You could do each instrument individually, but in order to save time and verify that the band and singers are giving me their full effort, I’ve found that having them run through something big for two minutes is enough time for me to quickly move through each channel and set gains accordingly. I don’t touch anything else in this period of time—in that 120 seconds I’m only looking at gain, making sure every channel is giving me enough and not too much.
Once completed, all house volume changes are made at the fader and monitor mix changes are done at the auxiliary (AUX) knobs.
Don’t Touch That Dial
Once I feel good about my gain, I do my very best to not touch it again. Sometimes someone jumps all over his/her mic or instrument and you have to decrease your gain a bit, but if at all possible I leave the gain alone at this point and just adjust their fader to change the house sound.
Why? If I adjust the gain at this point, I change the house mix, monitor mixes, recording mixes and everything else on the console. At that point I’ve also taken away the musician’s reference point. What she thinks was ground zero for her volume is no longer true. As she tries to play dynamically, she no longer knows whether she can trust what she is hearing.
We want big, full sound from our instruments and vocals in order to produce a good mix. Our musicians need consistency in their monitor mixes in order to be comfortable and know what they are playing is translating well in the mix. We want our live recordings and web/broadcast feeds to have good consistent sound.
All of these things are affected by how we set the input gains on our house mixer. We need to get our input gains set right at the beginning of the sound check by setting levels close to 0 dB, or between the green and yellow signal lights on our mixer. Once we get the gains set, we leave them alone to keep mains, monitor, recordings, etc., consistent.
Take the time to get your gain right, and it’ll change your mix for the better!
Duke DeJong has more than 12 years of experience as a technical artist, trainer and collaborator for ministries. CCI Solutions is a leading source for AV and lighting equipment, also providing system design and contracting as well as acoustic consulting. Find out more here.
Monday, August 26, 2013
WavesLive Master Class Series Continuing With Session At Delicate Productions In CA
Live engineers to discuss integrating plug-ins into mixing consoles, and more
Waves Audio is continuing its Master Class event series—presented by live sound division WavesLive—with an upcoming mixing workshop at Delicate Productions in CA. This event is the second in a series, with the first recently held at Maryland Sound International (MSI) in Baltimore.
This next Master Class is scheduled for 10 am to 5 pm on Thursday, September 5 at the Delicate Productions facility located at 874 Verdulera Street in Camarillo, CA.
Front of house engineers Greg Price (Ozzy Osbourne, Black Sabbath), Ken “Pooch” Van Druten (Linkin Park, Kid Rock, Kiss) and Sean “Sully” Sullivan (Beck, Beastie Boys, Sheryl Crow) will be on hand to inform live engineers about how to integrate Waves plug-ins into live mixing consoles.
Specifically, they will be presenting an up-close look at plug-in “problem-solvers” and a demonstration of how they use, and benefit from using, Waves plug-ins in their live mixing duties.
The presentation will be followed by an open Q&A session, with the opportunity for attendees to do some hands-on mixing using various consoles and Waves plug-ins.
Waves product specialists Luke Smith and Noam Raz will also be on hand to offer in-depth information on Waves plug-ins and new technologies, including a presentation of DiGiGrid MGO and MGB (http://www.digigrid.net/products/) solutions on a Midas PRO console, focusing on playback of multitrack audio and real-time low-latency processing (running multiple plug-ins) via MultiRack/SoundGrid.
Jason Alt, president of Delicate Productions, states, “It is Delicate Productions’ pleasure to host the WavesLive event which gives new and existing users the opportunity to explore and gain insight into Wave products.”
Price notes, “I’m very excited to be part of the WavesLive Master Class Series coming to Los Angeles. This is a great way for all of us to get that “hands on” look at the tools great engineers are using in audio production today.” Van Druten adds, “These events are some of my favorites. The information exchanged is so invaluable, I always come away having learned heaps. If you’re a live sound engineer, I don’t know how you could not be there.” Sullivan concludes, “I’m really looking forward to the Delicate/WavesLive workshop. What a great way for people to get to see and hear the amazing plug-ins Waves has created for us in the live sound industry.”
To register, go here (http://waveslive-delicate.eventbrite.com/).
Wednesday, August 21, 2013
Maximizing The Mix Of Live Recordings
Ten ways to make your live mix shine in post production
Anyone who’s ever mixed a live recording knows that it’s a lot different than mixing tracks recorded in the studio.
How? Dealing with audience tracks for one, and leakage for another. With that in mind, here are a series of steps you can use to maximize that mix.
1. Set Mixing Priorities
Assuming that you’re not going to replace instruments that are played badly or don’t sound that great, your mix may be determined not by the instruments themselves but by how they’re played.
The best strategy is to emphasize the strongest performances and keep the weakest ones low in the mix.
As an example, while it’s normal for most engineers to build their mix around the drums (although some mixers start with the bass and others with the lead vocal), that process might not work well if the drums are the weakest played part.
Emphasizing a stronger part will take the focus off some shaky playing, and even though the mix might not have the punch you’d like, the final product will be better for it.
2. Place The Audience
A big question for someone new to mixing a live show is how to balance the audience against the instruments. Generally speaking, the level of the audience is determined by the sound of the room and by the needs of the artist.
If the sound of the audience tracks is pretty good, they can be placed higher in the mix to add some “glue” where it’s appropriate.
Likewise, if the artist wants the energy of the audience as a prominent feature the audience level can also be raised.
If the audience and room sound bad, if the tracks contain little room sound and mostly audience, or if the artist or producer prefers a drier sound, then the audience tracks are pulled back in the mix.
Regardless of where the audience tracks end up in the final product, don’t wait until the end to bring them into the mix. Check them against every instrument to see what the interaction is.
3. Embrace The Leakage
For those who normally mix studio tracks, instrument and monitor leakage is a departure from what you’re used to. The tracks are never as clean as in the studio, unless all the instruments are recorded direct (even then, the vocals will have some sort of leakage).
The real trick is to use that leakage to your advantage to fill out the sound. Don’t try to get a clean mix because it won’t likely happen. Best to get a general balance first and gently massage levels with the leakage in mind.
4. Gates Can Help
One way to clean up the leakage a bit is to use a gate on any instrument with excessive leakage, but use it judiciously. If a gate is set to decrease the level of an instrument to infinity (mute it), it may sound unnatural and probably affect the sound of other instruments leaking into the one you gated.
Best to set the gate to gently decrease the level by 3 or 6 dB and see how that affects the sounds of other instruments.
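A gate with a gentle “range” control rather than a full mute can be sketched in a few lines. This is a deliberately simplified illustration with made-up threshold and sample values; a real gate adds attack, hold, and release envelopes so the attenuation doesn’t switch abruptly:

```python
def gate(samples, threshold, range_db=6.0):
    """Simplified noise gate: attenuate (rather than mute) samples below
    the threshold by range_db, so leakage is reduced but still audible.
    No attack/hold/release smoothing is modeled here."""
    attn = 10 ** (-range_db / 20.0)  # e.g. 6 dB of range -> ~0.5 linear
    return [s if abs(s) >= threshold else s * attn for s in samples]

# Quiet leakage around a single loud hit; the hit passes untouched,
# the leakage is pulled down by ~6 dB instead of being cut off entirely.
signal = [0.01, -0.02, 0.6, 0.015]
print(gate(signal, threshold=0.1, range_db=6.0))
```

Setting `range_db` very high approximates the full mute the text warns against; keeping it at 3-6 dB preserves the natural blend of the other instruments bleeding into the mic.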
5. Pan To The Picture
If the show is being mixed to picture, then you’re pretty much stuck with panning as you see it. That being said, it almost always helps with the phase if everything is panned the way the band was set up on stage.
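One common way to map those stage positions into the stereo field is an equal-power pan law. This generic sketch is an illustration of the technique, not any particular console’s implementation:

```python
import math

def equal_power_pan(sample, pan):
    """Equal-power pan law: pan runs from -1.0 (hard left) to +1.0 (hard
    right). Using cos/sin keeps perceived loudness roughly constant as a
    source moves across the stereo field."""
    angle = (pan + 1.0) * math.pi / 4.0  # map [-1, 1] onto [0, pi/2]
    return sample * math.cos(angle), sample * math.sin(angle)

# A guitarist who stood slightly left of center on stage
left, right = equal_power_pan(1.0, -0.3)
print(round(left, 3), round(right, 3))  # left channel louder, as expected
```

At center (pan = 0) both channels carry the signal at about -3 dB, which is why a source doesn’t jump in level as it is panned through the middle.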
6. Be Careful Of Phase
Speaking of phase, you can’t always be sure of the polarity of the microphone cables and signal chain, so it’s best to check every track as you add it to the mix.
Check both positions of the phase switch and select the one with the most low end.
This is especially important with audience mics, which can be subject to comb effects because of their distance from the stage and each other.
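The arithmetic behind those comb effects is straightforward. The sketch below assumes free-field propagation at roughly 343 m/s and a hypothetical one-meter path difference between two summed mics:

```python
SPEED_OF_SOUND = 343.0  # m/s, approximate value at room temperature

def comb_notches(path_difference_m, count=4):
    """First few comb-filter notch frequencies (Hz) when the same source
    arrives at two summed mics with a path-length difference.
    Cancellation occurs where the delay equals an odd number of half-cycles."""
    delay = path_difference_m / SPEED_OF_SOUND  # arrival-time difference, s
    return [round((2 * k + 1) / (2 * delay), 1) for k in range(count)]

# Audience mics whose distances to the stage differ by 1 meter
print(comb_notches(1.0))  # notches at ~171.5, 514.5, 857.5, 1200.5 Hz
```

Notches that low and that closely spaced land right in the body of the audience sound, which is why checking polarity and mic spacing matters so much on those tracks.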
7. Be Careful Of Adding Effects
Because you already have a lot of leakage on different instruments, adding effects to one instrument or vocal can sometimes spill over onto other instruments or vocals, which might begin to wash the mix out.
Also, depending upon how large the venue is, there may be a lot of built-in ambience that you can use instead of adding something artificial (unless it’s a unique effect required by the song).
8. Be Careful With EQ
Just like with effects, any EQ added to one instrument or vocal that has leakage on the track can affect the sound of other instruments as well.
This is a good reason to build your mix first, then add any individual EQ to see how it affects everything else in the total mix.
9. Use The High Pass Filter
Just like in studio recording (maybe even more so), the high pass filter is your friend. The HPF can clean up the sound considerably on every instrument and vocal, even kick and bass.
Especially in a live recording, there’s a lot of low frequency information that gets recorded (like leakage from the subs) that isn’t useful and the HPF effectively eliminates it, which cleans the mix up like magic.
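As a rough illustration of what an HPF does, here is a minimal first-order (6 dB/octave) filter in Python. Console and DAW high-pass filters are usually steeper (12-24 dB/octave), so treat this strictly as a sketch of the principle:

```python
import math

def highpass(samples, cutoff_hz, sample_rate=48000):
    """First-order high-pass filter: passes content above cutoff_hz and
    steadily removes energy below it (including DC and sub rumble)."""
    rc = 1.0 / (2 * math.pi * cutoff_hz)
    dt = 1.0 / sample_rate
    alpha = rc / (rc + dt)
    out, prev_in, prev_out = [], 0.0, 0.0
    for s in samples:
        y = alpha * (prev_out + s - prev_in)  # standard one-pole HPF recurrence
        out.append(y)
        prev_in, prev_out = s, y
    return out

# A constant offset (like DC, or very low rumble) decays away over time
print(highpass([1.0] * 5, cutoff_hz=80)[:3])
```

The constant input steadily shrinking toward zero is exactly the behavior you want on sub leakage: steady low-frequency energy is drained out while transients above the cutoff pass through.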
10. Don’t Over Compress
Once again, just like with effects, any compression added to one instrument or vocal that has leakage may affect the sound of another instrument as well.
Compression on individual tracks is almost always needed to control the dynamics of a live recording, but you’re better off to start with a gentle 2 or 3 dB of compression, then increase if needed until it begins to negatively affect the sound of another instrument.
It’s also best to build your mix first, then add compression. If you need more compression than what you can comfortably add to individual tracks, you can always squash the stereo bus.
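That 2 or 3 dB target is easy to reason about with a static compressor gain curve. The threshold and ratio below are illustrative values, not a recommendation for any particular source:

```python
def gain_reduction_db(level_db, threshold_db=-20.0, ratio=4.0):
    """Static compressor curve: dB of gain reduction applied to an input
    level above threshold. Attack and release behavior is not modeled."""
    if level_db <= threshold_db:
        return 0.0  # below threshold: compressor does nothing
    over = level_db - threshold_db
    return over - over / ratio  # reduction = over * (1 - 1/ratio)

# With a 4:1 ratio, peaks 4 dB over threshold get the gentle ~3 dB
# of reduction the text suggests starting with.
print(gain_reduction_db(-16.0))  # 3.0
```

Watching this number on the meter (rather than guessing by ear alone) is a practical way to confirm you are still in the gentle 2-3 dB zone before pushing harder.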
Follow these 10 steps, and you’ll find that you’ll be able to produce a better sounding product – and in less time than you think. Happy mixing!
Bobby Owsinski is a veteran audio professional and the author of several books about live and recorded sound.
Audinate And Aviom Announce Licensing Agreement
Simplifies connection of Aviom personal mixers to digital consoles with Dante connectivity
Aviom has announced that it has licensed Audinate Dante digital audio networking technology.
“Audinate’s Dante has become a standard in digital audio networking for the pro audio industry, and incorporating Dante connectivity into our personal mixing system makes it possible for our personal mixers to be connected directly to many of the best digital consoles on the market,” explains Aviom president and CEO Carl Bader. “By incorporating Dante, we have both simplified the setup of systems with the new A360 personal mixer and added versatility for the user.”
Aviom’s new D800-Dante A-Net Distributor, which will start shipping later this year, connects Aviom personal mixers directly to an existing Dante audio network, making up to 64 channels available from a console so that each performer with an A360 can select a unique combination of mix channels from the network.
Using Network Mix Back, the stereo mix from each A360 can also be routed back into the Dante network for distribution to wireless in-ear systems or monitoring by an engineer.
“Aviom is known as one of the leaders in personal monitor mixing systems worldwide,” states Audinate CEO Lee Ellison. “We are very happy that Aviom has added Dante connectivity to their product line. Aviom’s personal mixing systems can now be directly networked with Dante to hundreds of products by a wide variety of manufacturers.”
The Dante networking solution is a self-configuring network that uses standard Internet protocols over both 100 Mbps and Gigabit Ethernet. Setup is easy, with devices that automatically discover one another and one-click signal routing with user-editable names. Dante has now been adopted by more than 95 OEMs.