Wednesday, February 03, 2016
Genelec 8351 Smart Active Monitor (SAM) Receives NAMM TEC Award
Genelec announces that its 8351 three-way Smart Active Monitor (SAM) has received a NAMM Technical Excellence & Creativity (TEC) Award for Outstanding Technical Achievement in the Studio Monitor Technology category.
Presented by the NAMM Foundation, the awards celebrate the best in professional audio and sound production. The 31st Annual TEC Awards were presented Saturday, January 23, 2016, at the NAMM Show in Anaheim, California.
“We are indeed grateful to those who voted for the 8351 as the leader in monitoring technology,” stated Will Eggleston, Genelec marketing director. “We are also humbled by those creative contributors, engineers and producers who use our products every day, year after year. The 8351 is without a doubt a real game changer in the monitoring world – an outstanding example of market vision and engineering implementation.”
Universal Audio Releases New UAD And Apollo Plugins
New software v8.5 includes Marshall JMP 2203 Guitar Amplifier, Oxford Envolution, and bx_digital V3 EQ plug-ins
Universal Audio announces the release of three new direct developer plug-ins for UAD hardware and Apollo interfaces — the Brainworx bx_digital V3 EQ Collection, Sonnox Oxford Envolution, and the exclusive Marshall JMP 2203 Guitar Amplifier, developed by Softube.
All three plug-ins are part of UAD Software v8.5, now available for download at the UA Online Store.
Marshall JMP 2203 Guitar Amplifier Plug-In — $199 [Upgrade for owners of any UAD Marshall plug-in — $149]
Developed by Softube exclusively for UAD-2 and Apollo interfaces, the Marshall JMP 2203 Guitar Amplifier Plug-In is an emulation of the 100-watt amplifier used by everyone from Iron Maiden and Slayer, to Jeff Beck and My Bloody Valentine. All Marshall plug-ins include UA’s Unison technology; Apollo users’ guitar pickups see the exact impedance load they would if plugged into a vintage Marshall amp — a feature found only on Apollo audio interfaces.
Introduced in 1975, the JMP 2203 is widely regarded as one of Marshall’s premier amplifier designs. The amp quickly caught on with players of all stripes with its abundance of dense crunch and power.
Marshall JMP 2203 Guitar Amplifier Plug-In Key Features:
—Licensed plug-in emulation of the classic Marshall JMP 2203 amplifier
—Unison technology for Apollo interfaces offers authentic tone, touch, and feel of original JMP 2203
—Five essential virtual microphone options
—Over 50 presets designed by AC/DC engineer Tony Platt
Sonnox Oxford Envolution Plug-In — $249
Developed by Sonnox, the Oxford Envolution is a frequency-dependent envelope shaper that can add presence/distance to drums, piano, guitar, and any other percussive content. With separate transient and sustain sections, the Envolution can radically modify the sound of sources, boosting sustain for added ambiance or cutting it for quick, precise gating.
Providing a balance between flexibility and ease of use, the Envolution offers greater control of presence in a mix than an EQ or compressor, and it can also be used to generate negative ratio compression effects.
Sonnox Oxford Envolution Plug-In Key Features:
—Control of key shaping parameters including attack, hold, and release
—Separate and frequency dependent control of transients and sustain
—Tilt/parametric targeting of frequencies
—Huge gain ranges: Transients ±24 dB; Sustain -48 dB to +24 dB
—Warmth control for harmonic saturation/tone shaping
Brainworx bx_digital V3 EQ Plug-In Collection — $299 [Upgrade for UAD bx_digital V2 plug-in owners — $49]
The Brainworx bx_digital V3 EQ Collection is a reimagining of the bx_digital V2 EQ plug-in — a mastering EQ and M/S (mid/side) processor and tool for many of the world’s top mixing, mastering, and post-production engineers.
The bx_digital V3 EQ adds a Dynamic EQ section, updated filters, and a new proportional Q mode to the M/S capabilities of the plug-in. New user-friendly features also include Gain Scale and three tones of focus for the onboard Bass and Presence ‘Shifters’.
Brainworx bx_digital V3 EQ Plug-In Collection Key Features:
—Precise Mid/Side mastering processor and EQ with expanded feature set
—Controls for stereo and Mid/Side panorama, levels, and stereo width
—EQ features extended frequency range, proportional Q bandwidth, and Auto-solo function for critical listening applications
—Revamped Dynamic EQ/De-esser section
—Gain Scale feature for artifact-free alternative to Dry/Wet processing
Tuesday, February 02, 2016
iZotope Celebrates 15th Birthday With Free Vinyl Plugin
Now 64-bit compatible on both Mac and PC with AAX and VST3 support, the plug-in emulates the characteristics of vintage records and record players.
iZotope announces the re-release of Vinyl, their free vinyl simulation plugin.
By emulating the characteristics of vintage records and record players, Vinyl is a lo-fi weapon for anyone looking to add the dirty, dusty feel of a different era to their sound.
The new version of Vinyl has been updated to support modern operating systems and audio editing software: the plug-in is now 64-bit compatible on both Mac and PC and supported formats now include AAX and VST3.
Vinyl also features a new Spin Down button that simulates slowly stopping playback of a record, with results that can range from a dramatic “record stop” to a subtler nuance on an individual instrument.
All of Vinyl’s classic features allow a user to simulate the dust, scratches, crackles, pops, and warping of a worn record as well as the mechanical noises of a turntable. In addition, the Warp features are now available for use in all supported plug-in formats.
Re-issued as part of the company’s 15th birthday celebration, the re-release hearkens back to iZotope’s very beginnings in a dorm room at MIT (Massachusetts Institute of Technology). Back in 2001, Vinyl was iZotope’s very first product: experimental audio processing software made by a small group of like-minded engineers and music lovers who wanted to bring more high-quality creative tools to the audio community. The novelty of a free digital effect that offered such flexibility over nostalgic, analog sounds made the plug-in an instant success. Before long, Vinyl’s touch could be heard on everything from songs to soundtracks to sound effects for film, television, and video games.
“When we look back on iZotope’s history, we’re so grateful to the early fans of Vinyl who championed our efforts,” says Mark Ethier, iZotope’s CEO and co-founder. “It’s their support that paved the way for Ozone, Trash, RX, and everything that we have created since. In this birthday year, we’re excited to bring Vinyl back as a thank you to our old friends, and to help us meet new friends, too.”
Availability and Pricing
Vinyl is available now for free.
How to get Vinyl:
New customers may download Vinyl for free from the iZotope website.
Customers who have downloaded previous versions of Vinyl will receive their free update details via email and within their iZotope Account.
In The Studio: Successful Rhythm Section Tracking
Rhythm section tracking is the most important recording session in the production cycle of a record. The recording engineer captures the feel and sound of the musicians as they interpret the song and support the artist’s performance.
The rhythm track’s sound is a component of the production style and identifies the record’s musical genre. I liken the track to the foundation of a house: you can’t build very high on a weak base!
Subsequent overdubbed sweetening is just “window dressing” to reinforce and/or beautify what was laid down originally by the rhythm track musicians.
Recording one musician at a time is a valid process when the vision of the song is still hazy (maybe you are still writing it) or you don’t have the facility for recording everyone at once. The “one-brush-stroke-at-a-time” method is especially good in the case of small home/project studio productions.
The all-at-once-approach is old-school—and valuable when there are budget and time constraints. Just like in the old days, when primitive recording studios only “documented” a live performance directly to an acetate disc and then later to a tape recorder, many Jazz and alternative Rock producers (like Frank Black) today are recording bands directly to two-track stereo machines.
I have always thought that the more music you could record at one time, the better. For me, from a music mixer’s “big picture” standpoint, hearing everyone playing together results in a better recording, a better mix and a better vibe - even though you can always fix and change it later in a computer.
Convincing fickle artists and trend du jour-following A&R people of this fact is sometimes difficult. Seems like some people get more heat out of the Pro Tools rig than from the tracking room.
Whether you record all at once or one musician at a time, you’ll probably want live drums, and I want to focus on tracking drums because I find it’s what people struggle with.
Starting the track recording sessions with drums is another holdover from early recording days when most drummers got to the session early to set up and work on sounds with the engineer.
Whether you’re tracking everyone together or overdubbing the drums on an existing track, the drum kit needs the most floor space. The rest of the musicians should set up around the drummer for good eye contact and easy communication.
Unless I’m familiar with the room beforehand, I start out putting the drums in the middle of the room away from walls. If you place the kit with the drummer’s back to a wall, you’ll get a tight reflection and a sonic coloration from the wall’s construction material…be it plaster, wood, cinder block or a glass window—usually to be avoided, but could be good.
Putting the kit in a corner will increase low frequencies and add the reflections of two walls, which also could be good! The drummer will probably like to set up on a carpet throw rug if the studio is not carpeted. Some of the best sounds come from studios with wood or tile floors where area rugs will “stop” the room down to a reverb time (RT) of around one to two seconds…or much less for a very dry, funk drum sound. I have used a wooden riser that adds a woody, overall stage quality, especially to the bass drum.
Some metal bands use a huge PA power amplifier with separate mics on the kick and snare that drive subwoofers built into the riser…essentially reinforcing the low frequencies for the benefit of the room mics. This subwoofer rig, done right, sounds huge! Similar studio PA schemes will add a very live quality to the drum recording.
Miking the kit can take lots of microphones… or not! If you have a large console and lots of microphones and a patient drummer (that will work with you tweaking in the control room), go for using a lot of mics on everything that moves on the kit.
A lot of engineers believe that by separately miking a kit, they’ll get isolation between the different drums and cymbals. All manner of gating, wacky equalization, and strange mic positioning goes on in trying to obtain this drum-machine quality.
In my opinion, you should have your drummer play pads and trigger samples if that is what you’re looking for, because it is unrealistic to think that a well-miked, real drum set will ever sound that way. Think of the close microphones as “spotlights” that enhance the sound of the overheads. The close mics add low frequencies, attack, and panoramic image focus to the overheads’ sound.
Tracking with limited console real estate (ten inputs or fewer) requires you to use fewer mics and spend more time on their exact choice, placement, processing, and submix. Naturally, drum tuning and balance (the drum mix the drummer produces when playing), room ambience, cymbal choice, and playing volume are even more important and less controllable after the fact than with a multi-miked setup you get to “remix” later.
I get a great drum sound, as good as anything on the radio with about five microphones: kick, snare, hat and two overheads. One way to conserve microphones and inputs is to know what the drummer is going to play beforehand.
It’s kind of silly to mike up tom-toms, put them on separate tracks, and never have the drummer use them! Actually, some of the coolest “dialed-in” drum sounds I ever got came when the drummer and I worked on the actual drum part in the song and tailored each microphone’s position and processing together to exactly “fit” the groove.
If you have any outboard mic preamps or good condenser mics, I would use them on the overheads first. The sound in the overheads is the sound of the kit.
Another fallacy is that the overheads are for cymbals only, as if you can get rid of the rest of the kit through some kind of engineering science that defies the laws of physics! Get those “overs” sounding good and balanced with your drummer, and the rest is easy!
It’s a major commitment to mix the room mics into the drum mix. Back in the day of 4 and 8-track machines, I used to record 2-track drums: kick on one track and everything on the other track.
Later, with 16-track, stereo drums required three tracks: left and right and the kick and using a separate snare track was a luxury. I would mix the close mics along with the overs to the stereo tracks using pan pots that matched the actual physical position of the overhead mics over the kit. If I used room mics, they would also go into those stereo tracks.
Looking at the drummer from the audience’s perspective, the right overhead is over the hi-hat side (assuming a right-handed drummer) and I’ll add in the close hi-hat mic panned mid-right.
I use audience perspective when monitoring and most drummers like to hear themselves in drummer’s perspective which I provide in their headphones… or they can just wear the phones backwards.
Nowadays, unless you have a great sounding room and most of the drum sound comes from room mics, John Bonham style, you should record those mics on extra tracks. If you like to compress the room mics or the overheads, do it while recording…commit to that process.
Compressors react much better and differently to live sound mic sources than already recorded tracks. If you are unsure of what you have, record it with and without compression at the same time and later, erase the tracks you don’t need.
Headphones and their mix always seem to be a major source of complaint. When recording a drummer, two guitar players, bass, keyboard and a singer, you’ll probably need more than one headphone mix and cue system. When recording one musician at a time, you’d need only one, and it can usually be the same mix you have in the control room.
Drummers require loud headphones that fit firmly on their heads. That means expensive, low-impedance studio headphones and a big power amp to run them.
You can get volume with a 5-watt amp and a pair of Walkman phones, but you can’t get clean dynamic range and good low frequencies - the drummer will need to feel the bass instrument, the warmth of the track mix, and a sense of the volume changes he and the other musicians make.
It’s very important that the cue mix represent what each musician is playing dynamically. If you do a lot of live recording, it is worth spending money on good phones and a powerful amplifier to run them.
If you can only derive two cue mixes, one of them should be for the drummer. The mix I like to give the drummer (in order of loudness): plenty of himself so that he doesn’t play overly hard (unless you want that), a click track (if you use one), good bass, and the rest of the band, including just enough vocal to know where he is in the song.
I use stereo cues and try to get a spacious sound mix with subtle effects and wide panning of instruments that is hopefully inspiring. Click tracks and instruments for keeping time should be dry and mono…in the middle.
Auto Talkback Mic
Instant communication is super important during the tracking session. Close miking of amps and DI recording won’t allow you to hear normal conversation levels.
If you’ve ever tried using an extra microphone just to hear what your players are saying, you know you have to make sure you always mute it when they start playing, or risk a loud surprise that could do damage.
Here’s a trick for an automatic talkback mic. I use a single omnidirectional microphone placed in the center of the recording room. I use very high microphone gain and patch the pre-amp’s output through a compressor cranked all the way up (minimum threshold).
Use a 20:1 ratio or higher with the fastest attack possible (1 ms or faster) and a slow release (5 seconds). Any adjustable compressor will work, but I use a Universal Audio (UA) 1176LN for this purpose and route the output to its own track and, of course, the monitor mix and the headphones.
This compressor will nearly always be in constant gain reduction mode but when nobody is playing, compression releases, and you’ll hear the quietest sounds due to the high mic pre-amp setting. If the drummer hits his drum unexpectedly, the fast attack will grab immediately and compress completely and avoid hurting anybody’s ears.
When the band starts playing, the compressor squashes the gain of the microphone so much, it’s not heard any more in the phones or the monitor mix. Once things get quiet again, the compressor releases and you’ll hear conversation levels easily again.
I use this system as a “set and forget” auto talkback, and the musicians seem to like it because they can hear themselves talking to one another without ever taking their headphones off.
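The “set and forget” behavior comes entirely from the attack/release asymmetry: the fast attack clamps loud playing almost instantly, while the slow release lets quiet conversation bleed back through. A minimal envelope-follower sketch illustrates the idea; it models no particular hardware, and the threshold and signal levels are made-up illustration values:

```python
import math

def auto_talkback_gain(samples, sr, threshold_db=-50.0, ratio=20.0,
                       attack_s=0.001, release_s=5.0):
    """Per-sample gain of a high-ratio compressor set to a very low
    threshold: fast attack, slow release, as in the talkback trick."""
    att = math.exp(-1.0 / (attack_s * sr))   # fast attack coefficient
    rel = math.exp(-1.0 / (release_s * sr))  # slow release coefficient
    env = 0.0
    gains = []
    for s in samples:
        level = abs(s)
        # envelope rises quickly (attack) and falls slowly (release)
        coeff = att if level > env else rel
        env = coeff * env + (1.0 - coeff) * level
        level_db = 20.0 * math.log10(max(env, 1e-9))
        over_db = level_db - threshold_db
        # everything over the threshold is reduced by the 20:1 ratio
        gain_db = -over_db * (1.0 - 1.0 / ratio) if over_db > 0.0 else 0.0
        gains.append(10.0 ** (gain_db / 20.0))
    return gains

sr = 8000
talk = [0.01] * sr   # quiet conversation, roughly -40 dBFS
hit = [0.9] * sr     # sudden loud drumming
g = auto_talkback_gain(talk + hit, sr)
# quiet talk passes at moderate gain reduction; loud playing is
# squashed so far down it effectively disappears from the cue mix
print(g[sr // 2], g[-1])
```

The ducking here is the same effect the 1176LN produces when pinned at minimum threshold: conversation stays audible, playing does not.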
Recognize that no human ever born stays exactly with the click at all times! I have worked with many drummers and they tell me that they can “push” or “lay back” on the click but are rarely “inside” the click for very long time periods.
A drummer might complain that they don’t groove as well having to focus on staying with a click. Other drummers practice to click all the time and develop a need for it when recording.
If you have a track recorded with a sequencer, there is no choice but to overdub drums to the sequencer’s click or a drum machine track or loop. If you’re recording all at once, using a click will ensure (more or less) that you can add MIDI-sequenced instruments later.
One trick is to let the drummer count off and start the song listening to the click and then turn it off. Then, at least the song started out at an agreed upon tempo.
I leave the choice of click sound to the drummer. You can use a cowbell sound playing quarter notes, or eighth notes if the tempo is slow and/or there is an implied eighth-note feel. A fat, low-pitched cowbell sound can be very loud in the drummer’s phones and not hurt too much.
I’ve seen drummers request both the cowbell and a simple drum machine loop just for feel and vibe. I give them whatever they want because they have to work with it! Generally the rest of the musicians DO NOT want to hear the click. Finally, on playback, check that the click is not leaking to the room or drum mics.
If you’re overdubbing a drummer and there is no click or the only tempo reference are pre-recorded musical instruments, your drummer will play more freely, but you may have to fix downbeats and areas where things come unglued. It may be necessary, at times, to play only the single instrument that represents the pulse of the music and that pulse may be represented by other instruments at differing places in the song.
Strict tempo is one of two stigmas we inherited from 1980s computer-based music making; the other is the strict intonation made possible by the digital tuner and quartz-locked synthesizers. Now music consumers (even though they don’t realize it) are much more aware of tuning and timing than ever.
Expensive vs. Cheap Mics
There is no doubt about it: expensive mics sound great! Expensive condenser microphones are investments in your recording future. No matter what comes and goes in musical trends or recording gear, you’ll always need microphones.
I get e-mails about using less expensive microphones when recording and people wonder if they are good enough for the task. If you can, rent some expensive microphones for your session just to see the differences. I think you’ll be surprised when and where you can and can’t hear the money.
I once recorded, as sort of an experiment, the entire drum kit for a song using only Shure SM57s. The producer and I called it the $100 drum sound. The album, by Daryl Hall and John Oates, was a big hit and the song “Sara Smile” was a top 10 record. We received more phone calls than anything else asking how we got this cool drum sound. What did we use? Where did we record them?
Moral of the story: use whatever mics you have. Keep them in good shape by putting them away and not abusing them with cigarette smoke, moisture and rough handling.
If you’re recording everyone at the same time and there is leakage, it is not the end of the world. Actually the overall track might sound better WITH the leakage than cleaned up.
Rather than thinking that you are “stuck” with not being able to replace some of the musicians’ tracks later because they leaked onto tracks you want to keep, you make a commitment to the band’s performance and resolve that the track is great, or as good (given all circumstances) as it is going to be.
I personally think that a band going into a live tracking session believing any of it can be replaced is self-defeating, because there will be no urgency to achieve a great performance. Committing puts a sense of purpose and responsibility on all involved, including the engineer and producer, to be part of this super effort.
If you feel that recording together is a good idea but want to reserve the ability to fix it later, just make sure nothing leaks, that’s all!
Leakage can work for you. In my early tracking days, I would place the grand piano right next to the drummer. People would say “wow you are going to get a lot of drum leakage being so close.”
Fact is, I used very thick and dense gobos or baffles and blankets to cordon off the piano from the drums. Sure, I got a little low frequency drum leakage into the piano mics, but since the two instruments were only about six feet apart, the time delay in the leakage was very short and only added to the drum sound.
Moving the piano further away would have increased the delay of the leakage and washed out the drum sound. Putting all the musicians close together is always a good idea anyway.
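The six-foot spacing is easy to sanity-check with back-of-the-envelope arithmetic: sound travels roughly 1,130 feet per second at room temperature (an approximate figure), so leakage delay is just distance divided by that speed:

```python
SPEED_OF_SOUND_FT_PER_S = 1130.0  # rough room-temperature value

def leakage_delay_ms(distance_ft):
    """Time for leakage to arrive from a source this many feet away."""
    return distance_ft / SPEED_OF_SOUND_FT_PER_S * 1000.0

print(round(leakage_delay_ms(6.0), 1))   # piano-to-kit spacing: ~5.3 ms
print(round(leakage_delay_ms(25.0), 1))  # across a big room: ~22.1 ms
```

A delay of a few milliseconds fuses with the direct drum sound and fattens it; push the source tens of feet away and the longer delay starts to smear the attack instead.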
Recording rhythm tracks requires a lot of preparation and planning. With precise ideas on exactly how you want the session to proceed, you’ll be ready for a good time capturing great performances.
Barry Rudolph is a veteran L.A.-based recording engineer as well as a noted writer on recording topics. Visit his website at www.barryrudolph.com.
Russia’s Television Technical Centre Ostankino Chooses Calrec Consoles
Complex of 40 studios in Moscow adds 5 more consoles, bringing the total to 33 Calrec units.
Television Technical Centre Ostankino (TTC) has installed five more Calrec digital audio consoles as part of its continued effort to upgrade studios in its Moscow facility.
The studios are used by Russian broadcast companies such as Channel One, Technostyle, and NTV Plus on a wide variety of programs, including the May 9th “Victory Day” parade in Red Square.
“We recommended that TTC install Calrec consoles because Calrec is known for its operational reliability and system redundancy,” said Roman Katrovsky, sound specialist for OKNO-TV, the Russian system integrator that supplied and installed the consoles. “These consoles give TTC the flexibility to accommodate any number of broadcasters and their various requirements — for both studio recordings and live production — all in 5.1.”
TTC has installed Apollo consoles in eight of its studios, each with a companion Artemis console that can be used on the studio floor for tasks such as mixing a band or a live audience.
Katrovsky added: “It is important to TTC that the Apollo and Artemis both sit on the Hydra2 network, as mixed audio can be sent between the consoles via Calrec’s Hydra Patchbay system. This enables TTC to share signal paths without having to use any physical I/O boxes.”
The latest Apollo installation in Studio 27 is used to produce programs such as Channel One’s “The Field of Wonders.” The Studio 27 installation brings the total number of Calrec consoles in the Ostankino complex to 33.
“TTC Ostankino houses more than 40 studios and is the largest broadcast production company in Eastern Europe,” said Michael Reddick, European sales manager, Calrec Audio. “Among the studios now equipped with Calrec technology is Studio 1, which is the largest studio within the TTC complex at 1,000 square meters. This repeat order is a vote of confidence from TTC and confirms it made the right choice in deciding to standardize on Calrec. I’m happy to say that Calrec consoles will play a major role in Russian television for years to come.”
Posted by House Editor on 02/02 at 08:27 AM
Monday, February 01, 2016
Manley Labs Wins 2016 TEC Award For Outstanding Technical Achievement
Manley FORCE four-channel high voltage vacuum tube microphone preamplifier recognized in the category of Microphone Preamps
The Manley FORCE from Manley Labs has been awarded a 2016 Technical Excellence and Creativity Award, recognized for Technical Achievement in the category of Microphone Preamps.
Celebrating its 31st year, the TEC Awards are presented by the NAMM Foundation in celebration of the finest in professional audio and sound production.
Chosen by a select panel of pro audio and music industry professionals, this year’s winners were announced Saturday, January 23 during the NAMM Show in Anaheim, CA.
The Manley FORCE is a four-channel high voltage vacuum tube microphone preamplifier incorporating proprietary hand-wound Manley Iron mic input transformers and a 12AX7 vacuum tube amplifying stage. Each channel includes a high-impedance, ¼-inch instrument input as well as an XLR microphone input. As with all Manley gear, each unit is hand-wired using silver solder and audiophile-grade components and hand-built in California.
“With the FORCE, we set out to build a mic preamp that would blow the doors off the price-to-value ratio, while retaining every single ounce of Manley quality, reliability, and integrity,” remarked Manley Labs president and co-founder EveAnna Manley. “This award is testament to our success, and to the hard work of everyone at Manley Labs. Every entry in this category is a worthy contender, and it’s an honor to be named the winners.”
Earlier the same day, in a separate ceremony, the Manley VOXBOX was inducted into the NAMM TECnology Hall of Fame. This year’s nominees also included the Neumann KM84 and Shure SM58 microphones, Auratone Sound Cubes, Eventide H3000 UltraHarmonizer, Lexicon PCM41, Roland RE-201 Space Echo, and the decibel.
Don Was, one of music’s most significant artists and executives, received the evening’s highest honor, the Les Paul Award, and performed live. Jeff “Skunk” Baxter, along with Record Plant’s Chris Stone and the late Gary Kellgren, became the newest inductees to the NAMM TEC Awards Hall of Fame.
Posted by House Editor on 02/01 at 08:50 AM
Grammy-Winner Pablo Stennett Chooses Mackie DL32R
Renowned bassist chooses digital mixer for everything from working with major artists to creating music for film soundtracks, stage productions, and video games.
Bassist Paul “Pablo” Stennett is a winner of multiple Grammys with a long list of credits as a producer, composer, session player, and performer with such artists as Ziggy Marley, Willie Nelson, Raphael Saadiq, Chaka Khan, Diana King, Pink, and Jimmy Cliff.
His reputation depends on top-quality sound, which is one reason Stennett is a devotee of Mackie‘s DL32R 32-channel rack-mount digital mixer.
“I’ve always loved Mackie products, and my studio is based on Mackie gear,” Stennett begins. “Mackie’s sound quality is consistently excellent. I used a Mackie 32-channel 8-Bus analog mixer for 20 years, and I did a lot of good work with it. But now I’ve switched to the DL32R digital mixer, and I’m even happier. It has all the features I need, and I love the sound. Everything in the studio is patched into the DL32R, including my vintage keyboards, my outboard gear, and my computer DAW. From the mixer, my audio is routed to a Mackie Big Knob, which controls my Mackie monitor speakers. I control the DL32R with the Master Fader app for iPad. So it’s a Mackie system from end to end.”
In addition to his work with major artists, Stennett creates music for film soundtracks, stage productions, and video games. He composes, tracks, edits, and mixes everything with the DL32R.
“The DL32R is at the core of everything I produce,” he explains. “I can track a group, add live instruments, then pull it into the computer, add virtual instruments, and mix, all with the DL32R. I often cut live drums in my studio, and they sound totally authentic coming from the DL32R, with no grit or coloration.”
Although he long relied on the physical faders of his Mackie 8-Bus console, Stennett is very happy mixing with an iPad. “The interface in Mackie’s Master Fader control software is amazing,” he maintains. “It made the transition to iPad mixing easy.”
A long-time fan of Ampeg’s SVT-4PRO bass amplifier head and PN-410HLF and PN-115HLF bass cabinets, Stennett has recorded and toured extensively in recent years as the bassist for Ziggy Marley. He also maintains a busy studio schedule for his other projects.
“Right now, I’m doing a lot of music for Sony Playstation games,” he relates, “and I use my DL32R for everything. I know I can rely on it completely, as I have always relied on Mackie.”
Thursday, January 28, 2016
Focusrite Adds Drawmer S73 Plugin To Softube Time And Tone Bundle
Clarett, Scarlett and Saffire owners offered access to S73 Intelligent Master Processor Plugin.
Focusrite announces that, beginning February 2, 2016, the Focusrite/Softube Time and Tone bundle (which already includes three of Softube’s plugins: TSAR-1R Reverb, Tube Delay and Saturation Knob) will be expanded to include the Drawmer S73 Intelligent Master Processor plugin.
The Drawmer S73 features an enhanced multi-band compressor sound design, fine-tuned to improve your mix and to make the choices a mastering engineer would typically make when treating your mix.
Use the Type parameter to switch between ready-made mastering processing techniques, featuring multi-band compression, equalization, and mid-side (M/S) processing, to quickly find the sound that best suits your mix.
Under the hood is technology created by Drawmer for the 1973 Three Band FET Stereo Compressor, a multi-band analog compressor design famous for its precision and flexibility.
Softube has carefully modeled all the characteristics of the original hardware and taken the concept a step further by adding sound design that incorporates tricks used by mastering engineers to create the Intelligent Master Processor – the perfect way to balance and shape your mix and get great results fast.
All Focusrite Clarett, Scarlett and Saffire owners who registered on or after September 1, 2015, are eligible to download the bundle, and it is available with any new purchase of those interface ranges. The Drawmer plugin, with a value of $99, brings the total bundle value to $297.
Grammy-Winning Producer Josh Wilbur Chooses Metric Halo
Multiple Grammy-nominated producer/engineer selects ChannelStrip plugin for Lamb of God, Steve Earle, and Sons of Texas albums.
Josh Wilbur picked up a Grammy for engineering Steve Earle’s Washington Square Serenade and has subsequently earned three Grammy nominations with metal powerhouse Lamb of God, which is up for another Grammy in the Best Metal Performance category for “VII: Sturm und Drang.”
Wilbur produced, recorded, and mixed the project, and, as with all of his recent work, he used Metric Halo’s ChannelStrip plug-in on all the drums, guitars, and bass.
“I kind of accidentally fell into producing,” claims Josh Wilbur. “I started playing in bands in high school and then naturally gravitated to the other side of the glass. I thought I was just an engineer until someone pointed out that I was really engineering and producing the bands I was working with.”
That’s fine with Wilbur, whose passion is great music and the creative process that brings it to life. In his view, recording technology is merely a tool to elicit and record great music. He started out in rock, moved on to pop and R&B acts like NSYNC and Lil’ Kim, and over the years gravitated back to his rock roots.
Other recent work of Wilbur’s that features Metric Halo ChannelStrip on all of those “organic” instruments includes Gojira’s “L’Enfant Sauvage,” Hatebreed’s “The Concrete Confessional,” and Sons of Texas’ debut “Baptized in the Rio Grande.”
“I’m especially proud of Sons of Texas,” Wilbur said. “I think they’re a really excellent band. I was able to get in on the ground floor with them and help them define their sound.”
He continued: “I like to consider myself a ‘band-first’ producer. A lot of producers get caught up in themselves and start thinking that they’re the rock star. I focus on helping a band be the very best band it can be, period, without putting my stamp all over it. The way I see it, if people start talking about the production, then I failed. They should be blown away by the music.”
To that end, Wilbur is always conscientious about making every discussion a two-way affair. Moreover, he works hard to get the bands he works with to write and rehearse as a group. “The vibe is almost always better when every band member gets to add their thing to a song,” he said. “The modern approach of writing everything in a computer and then coming into the studio to lay it down tends to make things stale and predictable.”
Wilbur first started using ChannelStrip in the early 2000s, shortly after Metric Halo created it.
“I remember thinking at the time that of all the plug-ins on the market, ChannelStrip was the closest to an analog console in terms of both the sound and the way it reacted. Especially the compressor; it knocks the same way an SSL compressor knocks.” But then Wilbur moved studios and lost track of ChannelStrip.
“Around the time that Pro Tools 10 came out, I was reading an article that mentioned ChannelStrip, and I thought, ‘what ever happened to that plug-in?’ I downloaded the demo and immediately kicked myself. It was like, ‘oh yeah, this is awesome.’ It’s still at the top of the heap. I’ve used ChannelStrip on every album I’ve mixed since then.”
Wilbur is particularly pleased with how quickly he can get a great sound with ChannelStrip.
“I’m not going to sit there and fuss with things,” he said. “I work with my gut instincts. Either something feels good or it doesn’t, and if it doesn’t, I just move on. ChannelStrip makes it easy to set a few controls quickly and to A/B different parts of the plug-in. I rip through it and find the sound that feels right, and then I’m off and away, on to something else.”
In The Studio: Bruce Swedien On Developing Your Own “Sonic Personality”
Excerpted from the excellent “Make Mine Music” by Bruce Swedien, available from musicdispatch.com.
It’s my opinion that after all is said and done, psychoacoustics is really why we are interested in recording music in the first place.
Psychoacoustics can be defined simply as the psychological study of hearing. The true aim of psychoacoustic research is to find out how our hearing works.
In other words, to discover how sounds entering the ear are processed by the ear and the brain in order to give the listener useful information about the world outside. I’ve never felt that psychoacoustics is concerned with how sounds produce a particular emotional or cognitive response. That is another matter entirely.
To me, the three most fascinating areas of psychoacoustic analysis are:
—How does the human ear separate sounds occurring simultaneously (e.g., two musical instruments playing at once)?
—How do we localize sounds in space?
—How does the human ear determine the pitch of, say, a sound source or, more important, a musical instrument?
Psychoacoustics is not about sonic mind control. (I must confess that I was a little disappointed when I first learned that fact!)
Determining the abilities and limitations of human hearing is invaluable to those of us involved in the production of music recordings. Any resource that produces sound for the purpose of human listening should take into account what the listener’s ears are going to do with that sound, if we are going to take that resource to its utmost potential.
I have always been a very curious person. I have always had to know why things are the way they are, especially when it comes to the recording of music.
Sound is so important to us in so many different areas that it has always been fascinating to me to think about why we perceive sounds the way that we do. I have heard it said that the purpose of the ears is to point the eyes.
Knowing that, I think it is safe to say that the primary use of our sense of hearing is to localize sound sources. Keep these thoughts in mind the next time you are doing a mix.
Sound as a stimulus is the arena of the physicist. Sound as a sensation is the arena of the psychologist. We, as professional music recording people, fall somewhere in between these two areas of study.
In actuality, to be truly successful in music recording, we may have to be a little bit of both. So, what I hope to accomplish is to help you discover, with the help of the little bit of the psychologist that I think is present in all of us, your own “sonic personality.”
A Card-Carrying Record-Buying Junkie
I think the first step on the road to developing our own “sonic personality” is to find a benchmark for our mind’s ear that has as its basic component true “reality” in sound. From that stark, uncolored point we can then add a new viewpoint for the listener that we can call truly our own.
Many recording engineers and producers spend a lot of their time listening to and trying to learn their craft from records. In my opinion, this is a serious mistake and is precisely the reason why there are so few engineers and producers in the industry today who have a truly unique sonic character to their work.
A certain amount of information can be gained by listening to other people’s records, but my problem with this approach is that one’s own “audio personality” is short-circuited.
In other words, if you try to learn about music mixing by listening to records, in actuality what is happening is that you are hearing the music, or sonic image of the music, with someone else’s “audio personality” already imposed on the sonic image.
I do believe that it is true that we must listen to records to keep up with sonic styles and trends.
Personally speaking, I am a bona-fide, card-carrying record-buying junkie. When I hear a record on the radio or in a club that has interesting music or an interesting sonic hook, I am off to the record store in a minute and buying a copy for myself.
However, to have an “audio personality” that is truly your own, you must start your personal sonic development with a knowledge of natural, acoustical sounds.
Let’s Talk About Acoustical Support
To take that line of thought a step further, I think I should say that I feel that the best way to develop your ears’ “benchmark” is to hear good acoustical music in a fine acoustical setting. How many of you get out to hear live music on a regular basis? It’s very important!
Let’s talk about acoustical support as it relates to music. All music is conceived to be heard with some sort of acoustical support. This does not necessarily mean long concert-hall-type reverberation. It can mean very short, closely-spaced early reflections and minimal reverb content. Both of those components constitute acoustical support.
Once we know what music sounds like in a natural setting with good-quality acoustical support, we can then take that “audio benchmark,” and through our work, give our sonic images our own distinctly personal touch.
An engineer’s or producer’s listening ability does not descend on him in a single flash of inspiration. It is built up by countless, individual listening experiences. So let’s make a real effort to hear the music and sound with as open a mind as possible.
One of our most important abilities as a professional listener is judging balance. So let’s consider balance as the first thing to listen for today. The balance of an orchestra’s instruments in classical music is the sole responsibility of the conductor.
In our work of recording music, that responsibility is transferred to us. It doesn’t matter whether the orchestra is acoustical instruments or the orchestra is represented by a synthesizer. We must be able to judge balance.
Over a long period of time, if we have the native ability, we will develop a seemingly uncanny sense of hearing nuances of balance and sound that would pass unnoticed by the inexperienced.
This ability seems to be acquired almost by osmosis through thousands of seemingly insignificant listening experiences. This random approach is effective and vital.
The antithesis of balance is imbalance. When you are at a concert listening to good music in a good acoustical situation, listen for any imbalances that might be there. Think about your spontaneous reactions later.
When you are at a concert, ask for very good seats. That way, you should be able to judge balance and many other elements with a certain amount of accuracy. Listen for spectral balance. In other words, how well balanced is the frequency spectrum of the orchestra in that specific acoustic setting?
See how your ears and psyche react to the overall volume level of the orchestra, particularly at fff (extremely loud) dynamic levels. How does the orchestra sound at ppp (extremely quiet) dynamic levels?
Make sure that you have a good working knowledge of the different levels of musical dynamics and learn how they are expressed in musical terms. This will help you later on when you discuss these very important values with the musicians and composers that you will be working with.
Here are some important aspects of sonic values to listen for when you are listening to good music in a good acoustical situation:
—Listen for early reflections in the acoustical support of the hall. Listen for the reverb quality of that specific room.
—Listen for reverb spectrum.
—Listen for the amount of reverb that you perceive in relation to the direct sound of the orchestra – in other words, reverb balance.
Let’s Talk A Bit About Reverberation And Echo
Most of the time, we are unaware of how much of the sound that we hear comes from reflections from environmental surfaces.
Even when we are outdoors, a significant amount of sonic energy is reflected back to the ears by the ground and nearby structures – even by surrounding vegetation.
We only begin to notice these reflections when the time delay is more than about 30 to 50 milliseconds, in which case we become consciously aware of them as individual sounds and call them echoes.
Special rooms called anechoic chambers are built as research rooms to absorb reflected sound energy. In a test situation staged in an anechoic chamber, only the directly radiated sound energy reaches the ears.
Upon entering an anechoic chamber for the first time, most people are astonished by how much softer and duller any sound source sounds. If reflected sound is so common in an ordinary acoustic environment, I’ve always wondered why these reflections don’t interfere with our ability to localize sound sources.
I guess it’s because our binaural hearing sense can quickly adapt to a new acoustic environment. I do know that our hearing system uses only partially understood mechanisms to suppress the effects of reflections and reverberation.
The fact that we localize sound sources on the basis of which signals reach our ears first is known as the precedence effect. This is not to say that we are unaware of the reflections that follow. Actually, we subconsciously use the subsequent reflections to estimate range, or the distance we are from the sound source. In my opinion, a music producer/engineer is no better than his tools.
Our main tools are, of course, a good pair of ears and the wonderful brain to which the ears are connected. If the hearing is faulty, only faulty judgments can result. Please try to remember that good hearing is a rare and wonderful gift.
How Do We Achieve Depth, Or That Third Dimension, In A Stereo Image?
The feeling of depth in a recording is the result of a combination of values, including the ratio of direct to reverberant sound, the intensity of a sound source relative to others in the same field, and even EQ, especially in the presence area of about 1.5 kHz to, say, 5 kHz.
Probably the most important factor in creating a feeling of depth is the change in the ratio of direct to reverberant sound. As reverberant energy becomes more prominent, the source appears to move back.
The absence of early reflections in a sound source makes it seem much closer. As you change the quality of early reflections in a soundfield, they greatly affect the depth of field. These reflections are generally less than 40 milliseconds.
When they are longer than that, the ear can pick them out as individual reflections, but below 40 milliseconds they tend to smear into one sound. Early reflections in a sound source must be part of the sound-field of the original recording to be effective. There are virtually no effects devices that seriously address this important issue!
Thinking out and carefully designing a sound-field yields big benefits. Careful thought and intent will make your work memorable and separate you from the also-rans.
That is also why intelligent use of pre-delay with reverb devices can give a tremendous feeling of depth of field. By increasing the length of the pre-delay of a reverb device (to make sure the reverb itself does not cover the early reflections), your recording will have a unique sonic character that is truly your own.
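As a rough illustration of the arithmetic involved, a pre-delay time can be converted to a sample count at a given sample rate, so the reverb onset lands past the roughly 40-millisecond early-reflection window described above. This is a minimal sketch; the function name and the example values are purely illustrative, not taken from any particular reverb device:

```python
def predelay_samples(predelay_ms, sample_rate=48000):
    """Convert a reverb pre-delay time in milliseconds to a
    whole number of samples at the given sample rate."""
    return round(predelay_ms / 1000.0 * sample_rate)

# Early reflections tend to fuse with the direct sound below
# roughly 40 ms, so a pre-delay a little beyond that window keeps
# the reverb tail from covering them (illustrative settings only).
print(predelay_samples(50))          # 50 ms at 48 kHz -> 2400
print(predelay_samples(50, 44100))   # 50 ms at 44.1 kHz -> 2205
```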
The Attack Wall
Here’s something about “tube traps” that you may find interesting. I am very excited about a new and intriguing recording-room acoustical treatment.
In essence, this new theory creates a reflection-free listening zone for music recording and mixing. The concept was perfected by my friend Arthur Noxon of Acoustic Sciences Corporation.
The “attack wall” at Westviking Studio.
It’s called the “attack wall.” It is a free-standing wall that surrounds the monitor speakers. (I think we could call this speaker position “midfield” monitoring.) It acoustically loads the monitor speakers and causes them to play as if they were actually mounted in a wall. This gives the monitor speakers increased acoustic efficiency.
With an array of studio traps behind the listening space, the “attack wall” makes a 100 percent acoustically “dead” space. This creates a reflection-free zone for music mixing and recording.
I have found that with the “attack wall,” no monitor EQ is necessary.
With good monitor speakers, you hear smooth, linear sound. The low end is exceptionally clean and articulate. One of the additional advantages of the “attack wall” is its portability. It can be moved from place to place with a great deal of predictability and reliability.
In my next article, I’ll be discussing speakers, amplifiers, control room volume levels and much more…
This is an excerpt from Bruce Swedien’s Make Mine Music. To acquire a copy of this book, click over to www.musicdispatch.com.
Synchron Stage Vienna Opens New Studio With Solid State Logic
Vienna Symphonic Library selects SSL Duality δelta analog consoles with an L500 Live console for new scoring stage and studio complex
Sample library and virtual instrument developer Vienna Symphonic Library has opened the main rooms in its new scoring stage and studio complex, Synchron Stage Vienna.
Located in the West of the Austrian capital, the complex features extensive analog and digital infrastructure from Solid State Logic, acoustic and architectural design by WSDG, plus system consultation and equipment supply by TSAMM Professional Audio Solutions.
Two SSL Duality δelta analog consoles, an L500 Live console, and a high-capacity Dante Network via SSL Alpha-Link MX converters and SSL Network I/O products, form the technical backbone of the studio.
Herb Tucmandl, CEO and founder of Vienna Symphonic Library GmbH, identified the need for a hybrid studio some years ago – one that would provide uncompromising orchestral recording facilities yet was flexible enough to accommodate all modern production approaches. Tucmandl eventually purchased an original 1940s stage that had been neglected for some time. Local regeneration had seen the demolition of many buildings of similar age in the same area, though this building had been saved by a preservation order on a rare cinema organ housed in the central live space.
“We could not have built a building this size from scratch,” comments Tucmandl. “...We were lucky that it became available at the same time we were looking.”
The studio features a substantial, original, room-in-room construction with up to a three-meter gap around the central 540m2 Stage A. “It sounds really open,” notes Bernd Mazagg, technical director & chief audio engineer at Synchron Stage Vienna. “I think it’s one of the best rooms in the world.”
Additional studio and office facilities surround that central stage, including the 80m2 ‘B’ live room, Control Rooms A and B, production lounges and facilities, and office space for the Vienna Symphonic Library staff and operations. There is also a temperature-controlled basement instrument storage area with direct elevator access to the main stage.
The centerpiece of Control Room A is the 96-channel SSL Duality δelta Pro Station console, while Control Room B will house a 48-channel Duality δelta. Monitoring for the musicians in Stage A is managed by an SSL L500 Live console.
According to Mazagg, the Duality δelta console was their first choice for several reasons, not least being its flexible input path. This allows the operator to choose between using just the ultra-clean Super Analogue path or adding the VHD (Variable Harmonic Drive) circuit for an adjustable, characterful input.
The Duality Pro Station ‘wrap-around’ frame allows the center section to accommodate a large display so that the operator can have direct DAW control from the sweet spot, while keeping the rest of the console within easy reach and taking advantage of the Duality’s DAW control aspect. Dualities were chosen for both the A and B control rooms so that clients can easily move between the two spaces, depending on need. This also allows both consoles to be used simultaneously, effectively creating a single 144-channel recording console.
In addition to analog wiring between the control rooms and recording spaces, audio is converted and distributed via a high-capacity, redundant Dante network to all spaces and rooms in the facility for recording and monitoring. SSL Alpha-Link MX 16-4 and 4-16 interfaces do the A/D and D/A conversion, while MADI-Dante bridging and sample rate conversion is handled by SSL Network I/O MADI-Bridges. Pro Tools HD I/O is taken care of by SSL Delta-Link interfaces.
The Network is clocked at the recording sample rate (normally 192kHz) and, by dividing this word clock and using the fast, high-quality sample rate conversion in the Network I/O Bridges, other components in the system can be clocked for appropriate sample rates. Pre-lay sessions can be played back at 48kHz, for example, while the monitoring feeds and the L500 console can be clocked at 96kHz. The network latency is less than 0.7 milliseconds from microphone back to headphones - something that Mazagg notes was only possible using the SSL Network I/O and Alpha-Link combination together with careful network design.
Mario Reithofer of TSAMM Professional Audio Solutions helped specify much of the technology in Synchron Stage Vienna, and supplied not only all of the SSL products, but most of the other equipment as well. He was instrumental in specifying the Dante infrastructure. “First we came up with a MADI solution,” he explains, “But it was too complicated…. Dante has given them fantastic flexibility and redundancy in a simple network.”
For monitoring, the small footprint, high channel count, and flexible structure of the L500 made it an ideal choice for this role. Monitoring inputs are picked straight from the Dante network and mixed on the L500 to a large number of monitor outputs - sometimes over twenty for large orchestral sessions. “The Query function makes it really easy to handle those mixes,” says Mazagg. “You can simply select the monitor path you want to mix into and you have everything at your fingertips.”
Initial recording tests at the facility were undertaken with full orchestra, and attended by a number of influential film music personalities, including conductor and orchestrator Conrad Pope, and music scoring mixer Dennis Sands. The Duality pre-amps were preferred for most inputs, with the console’s VHD circuit used to add just the right amount of color to certain inputs.
“For me it was quite amazing to hear the complete studio for the first time,” says Mazagg. “....I was nervous. I knew that it could sound really good, but when I heard the Orchestra playing through the speakers, and though the whole signal path for the first time it was simply amazing… It was much better than we’d hoped it could be.”
TSAMM’s Mario Reithofer comments: “I think this is by far the best studio I have been involved in. There’s no other place like this on the planet… I am very happy to have been a part of its creation.”
In a recent video made in Synchron Stage Vienna’s Control Room A, Dennis Sands said: “It’s incredible… The sound translates so well into this space. It’s very smooth and open… And it sounds good no matter where you are… Truly a world class facility… Certainly one of the best rooms in the world, there’s no question.”
Solid State Logic
Posted by House Editor on 01/28 at 07:39 AM
Wednesday, January 27, 2016
Keeping It Transparent
Editor’s Note: This is an excerpt from Audio Production and Critical Listening: Technical Ear Training, available from Focal Press.
In the recording process, engineers regularly encounter technical issues that cause noises to be introduced or audio signals to be degraded inadvertently.
To the careful listener, such events remove the illusion of transparent audio technology, revealing a recorded musical performance and reminding them that they are listening to a recording mediated by once invisible but now clearly apparent technology.
It becomes more difficult for a listener to completely enjoy any artistic statement when technological choices are adding unwanted sonic artifacts.
When recording technology contributes negatively to a recording, a listener’s attention becomes focused on artifacts created by the technology and drifts away from the musical performance.
There are many levels and types of sonic artifacts that can detract from a sound recording, and gaining experience in critical listening promotes increased sensitivity to various types of noise and distortion.
Distortion and noise are the two broad categories of sonic artifacts that engineers typically try to avoid or use for creative effect. They can be present in a range of levels or intensities, so it is not always easy to detect lower levels of unwanted distortion or noise.
In this excerpt we focus on extraneous noises that sometimes find their way into a recording as well as some forms of distortion, both intentional and unintentional.
Although some composers and performers intentionally use noise for artistic effect, we will discuss some of the kinds of noise that are unwanted and therefore detract from the quality of a sound recording. Through improper grounding and shielding, loud exterior sounds, radio frequency interference, and heating, ventilation, and air conditioning (HVAC) noise, there are many sources and types of noise that engineers seek to avoid when making recordings in the studio.
Frequently, noise is at a low yet still audible level and therefore will not register significantly on a meter, especially in the presence of musical audio signals. Some of the various sources of noise include the following:
Clicks: Transient sounds resulting from equipment malfunction or digital synchronization errors
Pops: Sounds resulting from plosive vocal sounds
Ground hum and buzz: Sounds originating from improperly grounded systems
Hiss: Essentially low-level white noise originating from analog electronics, dither, or analog tape
Extraneous acoustic sounds: Sounds that are not intended to be recorded but that exist in a recording space, such as air-handling systems or sound sources outside of a recording room
Clicks are various types of short-duration, transient sounds that contain significant high-frequency energy. They may originate from analog equipment that is malfunctioning, from the act of connecting or disconnecting analog signals in a patch bay, or from synchronization errors in digital equipment interconnection.
Clicks resulting from analog equipment malfunction can often be random and sporadic, making it difficult to identify their exact source. In this case, meters can be helpful to indicate which audio channel contains a click, especially if clicks are produced in the absence of program material. A visual indication of a meter with peak hold can be invaluable to chasing down a problematic piece of equipment.
With digital connections between equipment, it is important to ensure that sampling rates are identical across all interconnected equipment and that clock sources are consistent. Without properly selected clock sources in digital audio, clicks are almost inevitable and will likely occur at some regular interval, usually spaced by several seconds.
Clicks that originate from improper clock sources are often fairly subtle, and they require vigilance to identify them aurally. Depending on the digital interconnections in a studio, the clock source for each device needs to be either internal, digital input, or word clock.
Pops are low-frequency transient sounds that have a thump-like sound. Usually pops occur as a result of vocal plosives that are produced in front of a microphone.
Plosives are consonant sounds, such as those that result from pronouncing the letters p, b, and d, in which a burst of air is produced in the creation of the sounds. A burst of air resulting from the production of a plosive arriving at a microphone capsule produces a low-frequency, thump-like sound.
Usually engineers try to counter pops during vocal recording by placing a pop filter in front of a vocal microphone. Pop filters are usually made of thin fabric stretched across a circular frame.
Pops are not something heard from a singer when listening acoustically in the same space as the singer. The pop artifact is purely a result of a microphone close to a vocalist’s mouth, responding to a burst of air.
Pops can distract listeners from a vocal performance because they are not expecting to hear a low-frequency thump from a singer. Usually engineers can filter out a pop with a high-pass filter inserted only during the brief moment while a pop is sounding.
Hum and Buzz
Improperly grounded analog circuits and signal chains can cause noise in the form of hum or buzz to be introduced into analog audio signals. Both are related to the frequency of electrical alternating current (AC) power sources, which is referred to as mains frequency in some places.
The frequency of a power source will be either 50 Hz or 60 Hz depending on geographic location and the power source being used. Power distribution in North America is 60 Hz, in Europe it is 50 Hz, in Japan it will be either 50 or 60 Hz depending on the specific location within the country, and in most other countries it is 50 Hz.
When a ground problem is present, there is either a hum or a buzz generated with a fundamental frequency equal to the power source alternating current frequency, 50 or 60 Hz, with additional harmonics above the fundamental.
A hum is identified as a sound containing primarily just lower harmonics and buzz as that which contains more prominent higher harmonics. Engineers want to make sure that they identify any hum or buzz before recording when the problem is easier to solve. Trying to remove such noises in post-production is possible but will take extra time.
Because a hum or buzz includes numerous harmonics of 50 or 60 Hz, a number of narrow notch filters are needed, each tuned to a harmonic, to effectively remove all of the offending sound.
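As a sketch of that approach — assuming nothing about any particular noise-reduction product — a standard second-order (biquad) notch filter can be cascaded over the first several harmonics of the mains frequency. The function names, Q value, and harmonic count below are illustrative choices, not a recommendation:

```python
import math

def notch_coeffs(f0, fs, q=30.0):
    """Biquad notch coefficients (audio-EQ-cookbook form),
    normalized so the leading denominator coefficient is 1."""
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    a0 = 1 + alpha
    b = [1.0 / a0, -2 * math.cos(w0) / a0, 1.0 / a0]
    a = [1.0, -2 * math.cos(w0) / a0, (1 - alpha) / a0]
    return b, a

def apply_biquad(x, b, a):
    """Direct-form-I filtering of a list of samples."""
    y, x1, x2, y1, y2 = [], 0.0, 0.0, 0.0, 0.0
    for s in x:
        out = b[0] * s + b[1] * x1 + b[2] * x2 - a[1] * y1 - a[2] * y2
        x2, x1, y2, y1 = x1, s, y1, out
        y.append(out)
    return y

def remove_hum(x, fs=48000, mains=60.0, harmonics=8):
    """Cascade one narrow notch per mains harmonic below Nyquist."""
    for k in range(1, harmonics + 1):
        f0 = mains * k
        if f0 < fs / 2:
            b, a = notch_coeffs(f0, fs)
            x = apply_biquad(x, b, a)
    return x
```

With narrow notches (here a bandwidth of f0/Q, i.e. only 2 Hz at 60 Hz), program material well away from the mains harmonics passes essentially untouched while the hum fundamental and its buzz harmonics are strongly attenuated, which is why post-production hum removal takes this multi-notch form.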
Although we are not going to discuss the exact technical and wiring problems that can cause hum and buzz and how such problems might be solved, there are many excellent references that cover the topic in great detail, such as Giddings’s book titled Audio Systems Design and Installation.
Bringing up monitor levels while musicians are not playing often exposes any low-level ground hum that may be occurring.
If dynamic range compression is applied to an audio signal and the gain reduction is compensated with makeup gain, low-level sounds, including the noise floor, will be brought up to a more noticeable level. If an engineer can catch any ground hum before getting to that stage, the recording will be cleaner.
Extraneous Acoustic Sounds
Despite the hope for perfectly quiet recording spaces, there are often numerous sources of noise both inside and outside of a recording space that must be dealt with.
Some of these are relatively constant, steady-state sounds, such as air-handling noise, whereas other sounds are unpredictable and somewhat random, such as car horns, people talking, footsteps, or noise from storms.
With most of the population concentrated in cities, sound isolation can be particularly challenging as noise levels rise and our physical proximity to others increases. Besides airborne noise there is also structure-borne noise, where vibrations are transmitted through building structures and end up producing sound in a recording space.
Although engineers typically want to avoid or remove noises such as previously listed, distortion, on the other hand, can be used creatively as an effect, or it can appear as an unwanted artifact of an audio signal.
Sometimes distortion is applied intentionally—such as to an electric guitar signal—to enhance the timbre of a sound, adding to the palette of available options for musical expression. At other times, an audio signal may be distorted through improper parameter settings, malfunctioning equipment, or low-quality equipment.
Whether or not distortion is intentional, an engineer should be able to identify when it is present and either shape it for artistic effect or remove it, according to what is appropriate for a given recording. Fortunately engineers do have an aid to help identify when a signal gets clipped in an objectionable way.
Digital meters, peak meters, clip lights, or other indicators of signal strength are present on most input stages of analog-to-digital converters, microphone preamplifiers, as well as many other gain stages. When a gain stage is overloaded or a signal clipped, a bright red light provides a visual indication as soon as a signal goes above a clip level, and it remains lit until the signal has dropped below the clip level.
A visual indication in the form of a peak light, synchronous with the onset and duration of a distorted sound, reinforces an engineer’s awareness of signal degradation and helps identify if and when a signal has clipped. Unfortunately, when working with large numbers of microphone signals, it can be difficult to catch every flash of a clip light, especially in the analog domain.
Digital meters, on the other hand, allow peak hold so that if a clip indicator light is not seen at the moment of clipping, it will continue to indicate that a clip did occur until it is reset manually by an engineer.
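The momentary-versus-held behavior described above can be sketched in a few lines of Python. This is a minimal illustration, not any particular meter’s implementation; it assumes floating-point samples normalized so that an absolute value of 1.0 corresponds to the clip level (0 dB FS).

```python
# Minimal sketch of a digital clip indicator with peak hold.
# Assumes floating-point samples normalized so that |1.0| = the clip level.

class ClipIndicator:
    def __init__(self, clip_level=1.0):
        self.clip_level = clip_level
        self.momentary = False   # lit only while the current block clips
        self.held = False        # stays lit until reset() is called

    def process(self, samples):
        # Momentary indicator tracks the current block only.
        self.momentary = any(abs(s) >= self.clip_level for s in samples)
        # Peak hold latches: once a clip occurs, it stays indicated.
        if self.momentary:
            self.held = True

    def reset(self):
        self.held = False

meter = ClipIndicator()
meter.process([0.2, 0.5, 1.0, 0.3])   # one sample reaches the clip level
print(meter.held)                     # True: the hold latches the clip event
meter.process([0.1, 0.2])
print(meter.momentary, meter.held)    # False True: hold persists until reset
```

The momentary flag models an analog clip LED that an engineer can easily miss; the held flag models the digital peak-hold behavior that survives until manually reset.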
For momentary clip indicators, it becomes that much more important to rely on what is heard to identify overloaded sounds because it can be easy to miss the flash of a red light. In the process of recording any musical performance, engineers set microphone preamplifiers to give as high a recording level as possible, as close to the clip point as possible, but without going over.
The goal is to maximize the signal-to-noise ratio or the signal-to-quantization-error ratio by recording a signal whose peaks reach the maximum recordable level, which in digital audio is 0 dB full scale. The problem is that the exact peak level of a musical performance is not known until after it has happened.
Engineers set preamplifier gain based on a representative sound check, giving themselves some headroom in case the peaks are higher than what is expected. When the actual musical performance occurs following a sound check, often the peak level will be higher than it was during sound check because the musicians may be performing at a more enthusiastic and higher dynamic level than they were during the sound check.
Although it is ideal to have a sound check, there are many instances in which engineers do not have the opportunity to do so, and must jump directly into recording, hoping that their levels are set correctly. They have to be especially concerned about monitoring signal levels and detecting any signal clipping in these types of situations.
There is a range of sounds or qualities of sound that we can describe as distortion in a sound recording. Among these unwanted sounds are the broad categories of distortion and noise. We can expand on these categories and outline various types of each:
Hard clipping or overload. This is harsh sounding and results from a signal’s peaks being squared off when the level goes above a device’s maximum input or output level.
Soft clipping or overdrive. Less harsh sounding and often more desirable for creative expression than hard clipping, it usually results from driving a specific type of circuit designed to introduce soft clipping such as a guitar amplifier.
Quantization error distortion. This results from quantizing with too few bits in PCM digital audio (e.g., converting from 16 bits per sample to 8 bits per sample). Note that we are not talking about low bit-rate perceptual encoding but simply about reducing the number of bits per sample used to quantize signal amplitude.
Perceptual encoder distortion. There are many different artifacts, some more audible than others, that can occur when encoding a PCM audio signal to a data-reduced version (e.g., MP3 or AAC). Lower bit rates exhibit more distortion.
There are many forms and levels of distortion that can be present in reproduced sound. All sound reproduced by loudspeakers is distorted to some extent, however insignificant. Equipment with exceptionally low distortion can be particularly expensive to produce, and therefore the majority of average consumer audio systems exhibit slightly higher levels of distortion than those used by professional audio engineers.
Audio engineers and audiophile enthusiasts go to great lengths (and costs) to reduce the amount of distortion in their signal chain and loudspeakers. Most other commonly available sound reproduction devices such as intercoms, telephones, and inexpensive headphones connected to digital music players have audible distortion. For most situations such as voice communication, as long as the distortion is low enough to maintain intelligibility, distortion is not really an issue.
For inexpensive audio reproduction systems, the level of distortion is usually not detectable by an untrained ear. This is part of the reason for the massive success of the MP3 and other perceptually encoded audio formats found on Internet audio—most casual listeners do not perceive the distortion and loss of quality, yet the size of files is much more manageable and audio files are much more easily transferable over a computer network connection than their PCM equivalents.
Distortion is usually caused by amplifying an audio signal beyond an amplifier’s maximum output level. Distortion can also be produced by increasing a signal’s level beyond the maximum input level of an analog-to-digital converter (ADC).
When an ADC attempts to represent a signal whose level is above 0 dB full scale (dB FS), called an over, the result is a harsh-sounding distortion of the signal.
Hard Clipping and Overload
Hard clipping occurs when too much gain is applied to a signal and it attempts to go beyond the limits of a device’s maximum input or output level. Peak levels greater than the maximum allowable signal level of a device are flattened, creating new harmonics that were not present in the original waveform.
For example, if a sine wave is clipped, the result is a square wave whose time domain waveform now contains sharp edges and whose frequency content contains additional harmonics.
A square wave is a specific type of waveform that is composed of odd numbered harmonics (1st, 3rd, 5th, 7th, and so on). One of the results of distortion is an increase in the numbers and levels of harmonics present in an audio signal.
Technical specifications for a device often indicate the total harmonic distortion for a given signal level, expressed as a percentage of the overall signal level. Because of the additional harmonics that are added to a signal when it is distorted, the sound takes on an increased brightness and harshness.
Clipping a signal flattens out the peaks of a waveform, adding sharp corners to a clipped peak. The new sharp corners in the time domain waveform represent increased high-frequency harmonic content in the signal, which would be confirmed through frequency domain analysis and representation of the signal.
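This can be demonstrated numerically. The sketch below, written for illustration only, hard-clips a pure sine wave and then measures the level of each harmonic by direct DFT-style correlation; symmetric clipping produces only odd harmonics, and the total harmonic distortion (THD) figure mentioned above falls out of the same measurements.

```python
import math

N = 1000   # samples per analysis block
F = 10     # sine frequency, in whole cycles per block (so DFT bins are exact)
sine = [math.sin(2 * math.pi * F * n / N) for n in range(N)]

# Hard-clip at half the original peak: everything beyond +/-0.5 is flattened.
clipped = [max(-0.5, min(0.5, s)) for s in sine]

def harmonic_level(signal, k):
    """Magnitude of the k-th harmonic of F, via direct correlation (DFT bin)."""
    re = sum(s * math.cos(2 * math.pi * k * F * n / N) for n, s in enumerate(signal))
    im = sum(s * math.sin(2 * math.pi * k * F * n / N) for n, s in enumerate(signal))
    return 2 * math.sqrt(re * re + im * im) / N

# The clipped wave gains odd harmonics (3rd, 5th, ...) absent from the pure sine;
# even harmonics stay at zero because the clipping is symmetric.
for k in (1, 2, 3, 4, 5):
    print(k, round(harmonic_level(clipped, k), 4))

# Total harmonic distortion: energy in harmonics 2..9 relative to the fundamental.
h = [harmonic_level(clipped, k) for k in range(1, 10)]
thd = math.sqrt(sum(x * x for x in h[1:])) / h[0]
print(f"THD = {thd:.1%}")
```

Running the same analysis on the unclipped sine shows only the fundamental, which is the frequency-domain confirmation the text refers to.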
A milder form of distortion known as soft clipping or overdrive is often used for creative effect on an audio signal. Its timbre is less harsh than that of hard clipping, and the shape of an overdriven sine wave does not have the sharp corners that are present in a hard-clipped sine wave.
As is known from frequency analysis, the sharp corners and steep vertical portions of a clipped sine waveform indicate the presence of high-frequency harmonics. Hard clipping distortion is produced when a signal’s amplitude rises above the maximum output level of an amplifier. With gain stages such as solid-state microphone preamplifiers, there is an abrupt change from linear gain before clipping to nonlinear distortion.
Once a signal reaches the maximum level of a gain stage, it cannot go any higher regardless of an increasing input level; thus there are flattened peaks. It is the abruptness of the change from clean amplification to hard clipping that introduces such harsh-sounding distortion.
In the case of soft clipping, there is a gradual transition, instead of an abrupt change, between linear gain and maximum output level. When a signal level is high enough to reach into the transition range, there is some flattening of a signal’s peaks but the result is less harsh than with hard clipping. In recordings of pop and rock music especially, there are examples of the creative use of soft clipping and overdrive that enhance sounds and create new and interesting timbres.
Quantization Error Distortion
In the process of converting an analog signal to a digital PCM representation, analog amplitude levels for each sample get quantized to a finite number of steps. The number of bits of data stored per sample determines the number of possible quantization steps available to represent analog voltage levels.
An analog-to-digital converter records and stores sample values using binary digits, or bits, and the more bits available, the more quantization steps possible.
The Red Book standard for CD-quality audio specifies 16 bits per sample, which represents 2^16 or 65,536 possible steps from the highest positive voltage level to the lowest negative value. Usually higher bit depths are chosen for the initial stage of a recording.
Given the choice, most recording engineers will record using at least 24 bits per sample, which corresponds to 2^24 or 16,777,216 possible amplitude steps between the highest and lowest analog voltages. Even if the final product is only 16 bits, it is still better to record initially at 24 bits because any gain change or signal processing applied will require requantization.
The more quantization steps that are available to start with, the more accurate the representation of an analog signal. Each quantized step of linear PCM digital audio is an approximation of the original analog signal. Because it is an approximation, there will be some amount of error in any digital representation. Quantization error is essentially the distortion of an audio signal.
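A short Python sketch makes the approximation error visible. The `quantize` helper is our own illustration of rounding a normalized sample to the nearest of 2^bits steps, not a production converter model.

```python
def quantize(x, bits):
    """Round a normalized sample (-1.0..1.0) to the nearest available step."""
    steps = 2 ** (bits - 1)        # steps per polarity
    return round(x * steps) / steps

x = 0.30103   # an arbitrary analog level between quantization steps
for bits in (16, 8, 4):
    q = quantize(x, bits)
    print(bits, q, abs(x - q))     # the error grows as bit depth falls
```

At 16 bits the error is on the order of a few millionths; at 4 bits the same sample snaps to 0.25, an error of roughly 0.05 — the approximation has become audible distortion.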
Engineers usually minimize quantization error distortion by applying dither or noise shaping, which randomizes the error. With the random error produced by dither, distortion is replaced by constant noise which is generally considered to be preferable over distortion.
The interesting thing about the amplitude quantization process is that the signal-to-error ratio drops as signal level is reduced. In other words, the error becomes more significant for lower-level signals.
For each 6 dB that a signal is below the maximum recording level of digital audio (0 dB FS), 1 bit of binary representation is lost. For each bit lost, the number of quantization steps is halved. A signal recorded at 16 bits per sample at an amplitude of -12 dB FS will only be using 14 of the 16 bits available, representing a total of 16,384 quantization steps.
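The 6-dB-per-bit rule above is simple enough to check by hand, or in a few lines of Python:

```python
BITS = 16
peak_dbfs = -12.0                    # signal peaks 12 dB below full scale

bits_lost = peak_dbfs / -6.0         # roughly 1 bit lost per 6 dB of headroom
effective_bits = BITS - bits_lost
steps = 2 ** int(effective_bits)     # each lost bit halves the step count

print(bits_lost)        # 2.0 bits unused
print(effective_bits)   # 14.0 effective bits
print(steps)            # 16384 quantization steps, matching the text
```

The same arithmetic explains why recording at 24 bits is forgiving: even with 24 dB of headroom, a 24-bit recording still retains 20 effective bits, more than the full 16-bit word.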
Although the signal peaks of a recording may be near the 0 dB FS level, there are often other lower-level sounds within a mix that can suffer more from quantization error. Many recordings that have a wide dynamic range may include significant portions where audio signals hover at some level well below 0 dB FS.
One example of low-level sound within a recording is reverberation and the sense of space that it creates. With excessive quantization error, perhaps as the result of bit depth reduction, some of the sense of depth and width that is conveyed by reverberation is lost. By randomizing quantization error with the use of dither during bit depth reduction, some of the lost sense of space and reverberation can be reclaimed, but with the cost of added noise.
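The linearizing effect of dither can be demonstrated with a small experiment. The sketch below, a simplified illustration rather than a mastering-grade dither stage, requantizes a level that sits between two 8-bit steps. Without dither, the rounding error is fixed and correlated with the signal (distortion); with TPDF dither (the sum of two uniform random values, spanning plus or minus one step), the error averages out to zero, leaving noise instead.

```python
import random

def requantize(x, bits, dither=True):
    """Reduce a normalized sample to `bits` of resolution. TPDF dither
    randomizes the rounding error before quantization."""
    steps = 2 ** (bits - 1)
    if dither:
        x = x + (random.random() - random.random()) / steps   # +/-1 step, triangular
    return round(x * steps) / steps

random.seed(1)
x = 0.1004      # sits between two 8-bit quantization steps
bits = 8

undithered = requantize(x, bits, dither=False)   # always the same step: distortion
dithered_avg = sum(requantize(x, bits) for _ in range(20000)) / 20000

print(abs(x - undithered))     # fixed, signal-correlated error
print(abs(x - dithered_avg))   # far smaller: the error has become random noise
```

This is the trade the text describes: a constant noise floor in exchange for removing signal-correlated distortion, which is why dithering during bit depth reduction preserves some of the low-level reverberant detail.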
Jason Corey is an assistant professor of audio engineering and performing arts technology at the University of Michigan School of Music, Theatre & Dance, and is an active member of the Audio Engineering Society. Go here to find out more and order a copy of Audio Production and Critical Listening: Technical Ear Training.
Fullerton College Invests In Audient
Audio Recording program in Fullerton, California adds ASP8024 analog mixing console to studio.
Markus Burger, coordinator of Music Technology at Fullerton College, is in no doubt that the recent installation of an Audient ASP8024 analog mixing console in the studio is an important addition to the Audio Recording program.
“The era of consoles is coming back,” he declares with confidence.
“We decided to buy the Audient console to allow students to experience a real studio environment,” he explains, citing the fact that ALL students benefit from the upgrade.
“Having a real board is an integral part of the authentic studio experience. The console itself helps conjure up that extra magic, highlighting to the musician that they are in a special situation. Seeing and hearing a great console also makes sure everyone is in that moment together.”
Burger knows what he’s talking about, as he has a foot in both camps: as a studio engineer and a professional musician. His personal technology collection includes an Audient ASP880 8-channel mic pre and the iD22 audio interface. “Wherever I go, I take these with me. We’ve done multiple recordings with the ASP880 and our RME gear,” he says. “My piano recordings with the iD22 are really good.”
Burger’s drive and dedication to the Recording Program at Fullerton has had a direct effect on student intake, too.
“We grew from 40 students when I took over the program, to about 450 students a year. Every one of these 450 students will come into contact with the console during the year.” If Burger gets his way, this number will increase, too. “I believe that students who study traditional music and play an instrument should learn how to record themselves. If all of them were to take our classes as well, we would more than triple the demand for our classes. That may come once we get a new building.”
So what does he think of the desk? “It sounds great. From an educational standpoint, it’s important for the student to really understand signal flow. A DAW will be the first tool most students see, but with a console all of a sudden it all comes to life.
“It’s also the reason we bought more preamps – and will buy more Audient preamps in the future. They sound great for jazz, classical and pop music alike and offer great value for money. They are clean, transparent and breathe without forcing too much color onto any given recorded sound, and yet still have that real analog warmth.”
Some Fullerton students have already bought themselves an iD22 or iD14 after their experience with the Audient desk at the College. “They are excited to have the ‘big console’ sound in their home studios,” says Burger, clearly pleased that he’s passed on the audio technology bug.
Today he continues to champion Fullerton College’s music technology courses and to help propel them into the 21st century. “We cannot fear technology. People forget that someone like Bach was an avid music technology fan – organs were the synths of his day.”
Shure: A Long Journey That Continues To Pick Up Steam
Seventy-five years ago, a confident young entrepreneur in Chicago embarked on his career, combining an interest in sound with his work.
On April 25, 1925, Sidney N. Shure rented a one-room office at 19 South Wells Street for five dollars per month and founded the Shure Radio Company, a business that sold kits for building radios at a time when factory-made radios were not yet available.
As a child, S. N. Shure was fascinated by radio. In 1913, when he was eleven years old, he received his license to operate an amateur radio station.
Even at this young age, Mr. Shure’s instincts would serve him well. Listening to the radio that he built aroused his interest in many subjects.
By 1928, the company’s sales were climbing. At this time, S. N. Shure’s brother, Samuel, was invited to join the new business, and the company name was changed to Shure Brothers Company. Despite rapid growth, unfortunately, Shure Brothers had some tough times in store.
In 1929, the stock market crashed, and the Great Depression gripped the U.S. and the world.
In addition, factory-built radios entered the marketplace, making it unnecessary for consumers to buy parts kits to build their own radios. These hard times forced S. N. Shure to lay off most of his employees.
Shure logos through the years.
With the steep decline in business, Samuel Shure decided to pursue a different career. (Though the “brother” left the firm in 1930, the company retained “Brothers” in its name until 1999, when it became Shure Incorporated.)
While selling radio parts kits, S. N. Shure had published a mail-order catalog which advertised other products as well.
Among them was a microphone produced by a small manufacturer, for which Shure Brothers was the exclusive distributor.
After the depression, S. N. Shure decided to go into the microphone business.
The first microphone manufactured by Shure was the Two-Button Carbon Microphone (1932).
As the first lightweight, quality product in a market dominated by large, costly devices, it quickly gained acceptance.
President Franklin Delano Roosevelt and U.S. news broadcaster Walter Winchell were among the well-known individuals who used early Shure microphones.
But if the Two-Button Carbon Microphone got Shure started, it was the 1939 introduction of the Unidyne I that secured the company’s place in audio history.
Famed figures such as Elvis Presley, Groucho Marx, John F. Kennedy, Martin Luther King, Jr., and Indira Gandhi have been photographed with a version of this popular microphone. Over the years, it has become a cultural icon.
As the first single-element unidirectional microphone, the Unidyne I was smaller, better sounding, and more affordable than any other microphone on the market.
The iconic Shure Unidyne.
Thanks to this innovative product, broadcasters were no longer the only ones seen stationed behind a microphone. More and more, microphones were becoming part of the everyday world.
When the United States entered World War II in 1941, Shure organized its operation to supply specialized microphones for war communications, including throat microphones for bomber pilots, “battle announce” microphones for the Navy, and microphones in plastic cases for tanks.
As a result of strict military specifications, or MILSPEC, new standards of ruggedness and reliability were necessary for these products to do the job.
Shure’s version of “Rosie The Riveter” – as in other industries during World War II, women took up the slack in producing products vital to the war effort.
The company worked hard to meet the stringent specifications, developing testing procedures to ensure that its products would work under the most adverse conditions.
After the war, Shure returned to the manufacture of civilian products. Its first phonograph cartridge had been developed in 1937, and by the mid-1940s, Shure was producing cartridges for major manufacturers of the popular phonographs of the era, including Philco, RCA, Emerson, and Magnavox.
Then, in 1958, the company introduced a new stereophonic cartridge, the M3D, which gained acclaim as the first cartridge that effectively met the performance requirements of stereo recording.
With further innovative products, such as the V15 Stereo Dynetic Cartridge in 1964, Shure became the market leader in phono cartridges, a legacy that continues today through the hip-hop artistry of turntablists and scratch DJs.
In the early 1960s, Shure engineer Ernie Seeler led a team to build the ideal vocal microphone, one that provided high-quality sound and was rugged and dependable.
After three years and hundreds of tests involving dropping, throwing, cooking, freezing, salt spray, and water immersion, the SM microphone series was born.
Easily recognized by its unique ball-shaped grille, the SM58 proved its worth in its early days by surviving field tests performed by young rock-and-roll bands like the Rolling Stones.
Into 2000, musical performers such as Beck, Buddy Guy, and Melissa Etheridge, along with broadcasters, politicians, and speakers the world over are heard through the SM58.
One of the most recognizable and most used audio products in the world, the Shure SM58 has been the best-selling, all-purpose vocal microphone for over 30 years.
Shure then leveraged its expertise in audio to embark upon the creation of a small, integrated sound system widely used by musicians, religious institutions, schools, auditoriums, etc. The Shure Vocal Master, introduced in 1967, integrated a power amplifier, mixer, and speakers in a compact package – the first “portable total sound system.”
Subsequently, in 1968, the company introduced the first of a line of mixers that brought greater mobility to broadcasters - the M67, a lightweight, rugged, portable mixer.
Eddie Kramer, famed producer and engineer for such artists as Jimi Hendrix, Led Zeppelin, Santana, and David Bowie, proved that the M67 could be applied in live music when he used three of them to record all of the live performances at the Woodstock music festival in 1969.
The late ‘80s and early ‘90s were a time of both audio refinement and the desire for greater freedom of movement, resulting in development of the Beta line of microphones and the company’s return to the wireless market more than three decades after an early foray in 1953.
Shure also introduced another key product for performers in 1997 with the PSM 600. This in-ear personal monitoring (IEM) system went a long way toward establishing the then-new concept across the broader sound reinforcement market by making it affordable while retaining professional quality.
S. N. Shure died at the age of 93 in 1995. A true visionary, whose rare blend of integrity and perseverance made an impact on the world, his legacy continues to guide the company today.
Link to related articles:
Timeline of Notable Achievements
Interview With Michael Pettersen
Shure Incorporated Mourns Passing Of Chairman, Mrs. Rose L. Shure
Chicago Tribune Obituary For Sydney N. Shure - 1995
Tuesday, January 26, 2016
Audio-Technica Releases New AE2300 Dynamic Cardioid Instrument Microphone
Designed to capture sound from guitar amps, brass and percussion instruments, and other high-SPL sources
Audio-Technica has unveiled the new Artist Elite AE2300 dynamic cardioid instrument microphone, incorporating the company’s proprietary double-dome diaphragm construction to enhance high-frequency and transient response.
With rugged brass construction, a low-profile design (able to be placed unobtrusively in a variety of applications) and the ability to handle high SPLs, the AE2300 is versatile, able to capture sound from guitar amps, brass and woodwinds, and drums and percussion instruments with clarity and precision.
The double-dome diaphragm construction allows the AE2300 to maintain directionality across the entire frequency range, with little off-axis coloration (frequency response is nearly identical at 0, 90 and 180 degrees) for excellent phase cohesion in multiple-mic setups.
The microphone is also equipped with a switchable low-pass filter that cuts out harsh, high-frequency noise, such as hiss from a guitar amp or hi-hat bleed, without negatively affecting the overall tone of an instrument.
Polar Pattern: Cardioid
Frequency Response: 60 Hz – 20 kHz
Open Circuit Sensitivity: -57 dB (1.4 mV) re 1V at 1 Pa
Impedance: 250 ohms
Weight: 14.6 oz (415 g)
Dimensions: 3.7 inches (94.0 mm) long, 1.1 inch (28.0 mm) maximum diameter
Output Connector: Integral 3-pin XLRM-type
Accessories Included: AT8471 isolation stand clamp for 5/8”-27 threaded stands; 5/8”-27 to 3/8”-16 threaded adapter; soft protective pouch
The AE2300 will be available in February 2016 with a U.S. street price of $269.