Thursday, September 08, 2016
Crane Song Debuts Updated Egret With Quantum DAC
Entire line of digital hardware products has been updated to take advantage of proprietary 5th generation digital to analog converter technology.
Crane Song (AES Booth #1123) announces that its entire line of digital hardware products has been updated to take advantage of its proprietary 5th generation digital to analog converter technology.
With its AES debut, the Egret 8-channel D/A converter/summing mixer joins the Avocet monitor controller, the HEDD 192 AD/DA converter, and the Solaris standalone digital to analog converter to complete the lineup of Crane Song products equipped with Crane Song’s Quantum DAC.
The Quantum DAC uses a 32-bit converter and asynchronous sample rate conversion for jitter reduction, with upsampling to 211 kHz. The reference clock uses a proprietary reconstruction filter for accurate time-domain response, with jitter of less than 1 ps.
“I have done several years of measurable analysis and subjective listening in the development of this technology; the Quantum series DAC is the most accurate that I have ever designed,” explains Crane Song founder and developer Dave Hill. “Typical jitter from 10 Hz to 20 kHz from the internal clock is 0.055 ps, and from 1 Hz to 100 kHz it is less than 1 ps. The result is a very 3D sound that is exceptionally transparent and accurate.”
The Crane Song 5th generation Quantum DAC has been shipping in the Avocet IIA since November 2015, and in April 2016 Crane Song quietly updated the HEDD 192. As of the AES show, the Egret will be shipping with the upgraded DAC. This completes the updating of the DACs in all Crane Song digital hardware.
Egret is a flexible digital audio workstation back end. It contains eight channels of Quantum D/A converters and a stereo line level mixer with color options to help bring analog summed digital mixes to life. Each channel of the stereo mixer has a level control, an aux send (which is post level control), a color control, and a pan control. Each channel also contains an analog / digital source button, and solo - mute buttons. The color function is adjustable from a transparent sound to a complex mix of second and third harmonic content, creating the possibility of having clean modern sounds mixed with vintage sounds.
DAC upgrades are available for previous generation Crane Song digital hardware products.
See Crane Song at AES in Los Angeles at Booth #1123 at the Los Angeles Convention Center, Sept. 29-Oct. 1, 2016.
There are a lot of things to focus on during a tracking session, especially when you’re recording a dozen or more inputs at once.
You want to make sure you’re getting a good sound from each microphone. That’s step one. Let’s be honest, you’ll spend the rest of your recording life perfecting step one…
For now, I want to focus on step two – getting good levels, both when you’re tracking and during mixing.
When I first started recording, I was taught that you want to get the level as close to peaking as humanly possible without going into the red.
I would keep cranking up the mic pre on the snare drum mic until it was pixels away from clipping.
What happened? Everything clipped, of course. Apparently musicians play louder during the actual take than they do during sound check.
So I would turn the preamps down a little bit. Everything looked good, then BAM! More clipping.
I would keep turning down the preamps little by little until the clipping stopped. By this time, the musicians were tired of me coming over the talkback and saying, “Whoops! Sorry guys, that clipped. Let’s start again.”
Not a good scenario.
The belief that you need to really “peg” the meters is a leftover from the analog days. The harder you hit tape, the better the recording would sound. If you recorded at lower levels, the tape noise would become much too audible.
Today, however, just about everyone is recording a 24-bit digital signal. Digital signals don’t sound better when you turn them up, they simply get louder.
If you record the same track really close to the clip light and then again with plenty of headroom, you won’t notice a difference in the quality of the signal, only the volume.
Analog equipment tends to saturate and add color the harder you drive it. Digital systems do not.
What does this mean?
If you’re recording at 24-bit (and you should be), you’ve got a whopping 144 dB of dynamic range to work with. That means the noise floor of your system is significantly lower than on an analog system. In fact, the noise floor of a decent digital system is virtually non-existent.
Let’s say you record a snare drum in Pro Tools, and its loudest part is 6 dB below clipping. So, you technically could have recorded it 6 dB louder, but even 6 dB down you still have 138 dB of signal left in your system. You’re still WAY above the noise floor.
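The headroom arithmetic above can be sketched in a few lines (using the common rule of thumb of roughly 6 dB of dynamic range per bit; Python here purely for illustration):

```python
# Rough illustration of the headroom math: each bit of a linear PCM word
# contributes about 6.02 dB of dynamic range.

def dynamic_range_db(bits):
    """Approximate dynamic range of an n-bit linear PCM system."""
    return 6.02 * bits

total = dynamic_range_db(24)   # ~144.5 dB for a 24-bit system
headroom = 6                   # peaks sitting 6 dB below clipping
usable = total - headroom      # still ~138.5 dB above the noise floor

print(round(total, 1))
print(round(usable, 1))
```

Even with generous headroom, the signal remains far above the noise floor, which is why tracking a few dB down costs nothing audible.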
My suggestion? Give yourself some room to breathe! Rather than trying to make the signal get as close to the top of the meter as possible, have it max out somewhere between one-half and three-fourths of the way up the meter.
This way the drummer can do an awesomely loud fill without clipping every track in the session, and you won’t end up smacking your forehead every time the clip light comes on during an awesome take.
So, what about mixing, anyway? What was your biggest issue when you mixed your first song? I bet you a nickel it was getting the levels right.
You probably got halfway through the mixing process, and suddenly several of your tracks were clipping, or your master fader was clipping.
So you turn the clipped tracks down a bit. Well now the mix doesn’t sound right, so you try to turn every other track down by the same amount. Still doesn’t sound right.
You go back to work, re-balancing everything. Before you know it, your tracks are clipping again.
You think to yourself, “Did I really turn these up that much again?” You slam your fists into your desk…or kick the dog…or yell at the cat…or maybe you do all of these at the same time.
Welcome to the world of mixing.
You are not alone. This was my experience, and I bet (another nickel) that if you asked any experienced engineer, he’d share a similar story of his own frustrated journey.
My Advice to You
This is the part where I could go into a list of techniques for keeping your track levels down to prevent clipping, but you know what? I’m not gonna do that.
Why? Because I think there is one single reason why people have such trouble with clipping during mixing. I’ll get to that in a second.
The first thing you can do to make your life easier as a mix engineer is to make sure you don’t record everything at a super-hot level. I talk about this in Setting Levels for Recording.
You don’t have to peg the meters to get a great-sounding recording. If you just get a decent level, you’ll be much better off when it comes time to mix.
Sometimes you don’t have control over the levels. Perhaps you’re only mixing the song, not recording it. If so, then you’re at the mercy of the recording engineer who recorded the tracks.
Okay, back to getting rid of all that nasty clipping on your tracks. My suggestion to you?
Turn up your monitors/headphones!
Seriously, this is the biggest reason I get clipping on my mixes: I keep the volume knob too low on my monitors and headphones.
Rather than turn up the monitor volume, I push up the track levels in my mix. That’s a recipe for failure. (It’s also a recipe that will produce more dog-kicking outbursts if you don’t fix it.)
Let’s say you’re mixing a rock tune, and you’re listening to just the drums. Before pushing the kick drum up to zero or (even worse) above zero, reach for the volume knob on your speakers or headphones instead.
You’ll be able to hear everything better, and your mixing levels will be well below clipping. Remember this next time you’re mixing. I bet (yet another nickel) it will help.
Joe Gilder is a Nashville based engineer, musician, and producer who also provides training and advice at the Home Studio Corner.
Allen & Heath Launches Xone:PX5 DJ Performance Mixer
4+1 channel mixer combines Xone analog audio with digital connectivity and next generation performance-focused FX processing.
Allen & Heath has launched the Xone:PX5, a new 4+1 channel DJ performance mixer, that combines Xone analog audio quality with digital connectivity and next generation performance-focused FX processing.
Equipped with the Xone VCF filter, 3-band total kill EQ on all channels, and a new internal Xone:FX engine, the PX5 also features a 20-channel USB sound card with Xone:Sync and MIDI integration, all packaged within an intuitive surface layout.
The PX5’s FX engine features the newly developed Xone:Xcite library of performance-focused FX, coupled with familiar hands-on controls for instant, instinctive operation. A hybrid FX mode combines the internal and external send & return FX for advanced FX processing. Meanwhile, the Xone filter system includes HPF, BPF, LPF, resonance control and frequency sweep, plus the option to route the Aux channel and External Return to the filter.
The internal 20-channel, 24-bit/96 kHz USB2 sound card is class compliant on Mac and enables 5 stereo channels to be streamed into the mixer from performance software, whilst the new Xone:Sync engine features comprehensive MIDI clock options, enabling the connection of external software/hardware instruments, such as drum machines and synthesizers. Xone:PX5 also features plug-and-play connection via X:LINK to the Xone:K series controllers for further MIDI control options.
“By combining our extensive knowledge with essential feedback from DJs, we have designed a new breed of DJ performance mixer,” comments Allen & Heath’s DJ Sector specialist, Adrian Pickard. “The Xone:FX engine has uncompromising attention to detail in terms of the Xone:Xcite FX algorithms, whilst focusing on a familiar workflow for performance. The Xone:Sync engine enables tight integration of MIDI tools, while the newly engineered sound card has been specifically created for the DJ environment and offers electronic music artists unparalleled options for integrating into every performance setup.”
If you have trouble with hum, buzz, or a higher than normal noise floor, the cause could be twisted pairs that are unbalanced, or even “slightly” unbalanced. The Rebalancer (PA819) resolves many of these issues.
A badly balanced input signal emerges from the other side as a properly balanced signal, with improved common mode rejection, lower noise, and lower EMI/RFI.
Many classic old devices (limiters, compressors etc.) have poorly balanced inputs or outputs. A Rebalancer solves most noise problems.
The Rebalancer also passes digital audio signals. Although digital devices are much less prone to noise problems, noise that is not audible can still affect the clock recovery and the conversion of the digital signal back to analog.
The Rebalancer (PA819) balances signals into a true balanced-line performance.
If you’re looking for a prime example of what Toffler wrote about in Future Shock, look no further than analog tape.
In little more than a decade, the two-inch multitrack tape machine has gone from studio staple to rare relic. And while many audio veterans wax nostalgic for that warm analog sound, few will admit to missing the work that went with it.
These days, owning an analog tape machine is somewhat akin to driving a classic car, with ongoing maintenance, scarcity of parts, and exotic fuel (analog tape) that’s expensive and hard to find.
So while a handful of top studios still offer those classic spinning reels (and the engineers to maintain them), the good news for the rest of us is that there are now more convenient ways to achieve that classic magnetic sound.
A Bit of History
Analog recording, of course, predates tape — with everything from wax cylinders to wire being used to capture a performance. But when American audio engineer Jack Mullin discovered a pair of German Magnetophon machines during World War II, he knew right away he was on to something big.
The format offered two major advantages over the acetate disks of the day: a recording time of more than 30 minutes, and the ability for recordings to be edited. It was the first time audio could be manipulated.
Mullin brought the two Magnetophons back home after the war and demonstrated them for Bing Crosby at MGM Studios in 1947. Crosby immediately saw the potential for prerecording his radio shows, and invested a small fortune of $50,000 in a local electronics company called Ampex to develop a production model.
Ampex and Mullin soon followed with commercial-grade recorders. One of the first Ampex Model 200 recorders was given to guitarist Les Paul, who took the concept of audio manipulation to a higher level. Paul had already been experimenting with overdubbed recording on disks and, quickly realizing the potential for adding more channels and additional recording and playback heads, came to Ampex with the idea for the first multitrack tape recorders.
The format evolved from two tracks to three and four, and although Ampex built some of the first eight-track machines in the late 1950s, most commercially available machines were limited to four tracks until 1966, when Abbey Road recording engineers Geoff Emerick and Ken Townsend began experimenting with multiple machines during the recording of Sgt. Pepper’s Lonely Hearts Club Band.
The Studer A800 Multichannel Tape Recorder records up to 24 tracks of audio.
Ampex responded to the demand the following year, introducing the revolutionary MM-1000, which recorded eight tracks on one-inch tape. Scully also introduced a 12-track one-inch design that year, but it was quickly overshadowed by a 16-track version of the MM-1000, using two-inch tape. MCI followed in 1968 with 24 tracks on two-inch tape, and the two-inch 24-track became the most common format in professional recording studios throughout most of the 1970s and 1980s.
With the prevalence of home and project studios and digital technology in the late 1980s and 1990s, a number of other tape formats emerged, including various multitrack on-reel and cassette configurations as well as multiple digital tape formats. But for the sake of this article, we’ll focus mainly on multitrack analog tape, the most sonically revered recording medium of all time.
How A Tape Machine Works
In the simplest of terms, magnetic tape consists of a thin layer of Mylar or similar material coated with iron oxide. The tape machine head exerts a charge on the oxide, which polarizes the oxide particles and effectively “captures” the signal. It’s a process that creates some interesting byproducts, many of which directly influence the sound of the recording.
Probably the most commonly cited characteristic of analog recording is its “warmth.” Tape warmth adds a level of color to the sound, primarily softening the attacks of musical notes and thickening up the low frequency range. Recording at slightly hot levels to analog tape can also produce a pleasing distortion that works well with certain types of music, such as rock, soul, and blues.
As multitrack recording evolved, a number of different manufacturers began to emerge. By the early 1980s, Ampex was no longer the dominant multitrack manufacturer, facing stiff competition from MCI, Studer, 3M, and Otari. Although a handful of smaller manufacturers, including Stephens, Aces, and a few others also entered the fray, Ampex, Studer, 3M, MCI (later owned by Sony), and Otari became the dominant brands.
Each of these manufacturers’ models became loved (or despised) for its mechanical attributes and characteristic sound. Back in the day, a recording studio’s model of multitrack tape recorder was considered as intrinsic to its sound as its acoustics, console, or microphone collection.
The Subtle Differences
A multitude of factors influence each machine’s characteristic sound, beginning with the tape heads, amplifiers, and other electronics.
Beyond that, other factors have a bearing on the sound of an analog recording, some of which are unique to each particular machine. Variations in the machine’s speed stability (wow and flutter), alignment of the tape heads and the angle of the tape, condition of the heads (cleanliness, magnetization, etc.), tape tension, and other physical factors are just a few of the things that can affect the sound of a recording.
Besides the machine itself, other factors can affect the particular sound of an analog recording, including the brand of tape used. Back in the heyday of analog, the major brands of tape each had their supporters and detractors. Ampex tape was one of the leading brands, with their 456 formula being the most prominent.
The brand of tape used subtly affects the tonal color of a recording.
Other popular brands included AGFA and 3M. Each tape formulation imparted its own subtle sound to a recording, and each machine had to be realigned each time a brand was changed. Some studios made a policy of sticking to one brand of tape, but it was not uncommon for variations to occur even within different batches of the same brand of tape.
Tape speed is another major factor. Faster tape speeds tend to deliver cleaner sound quality, since the signal is spread over a larger area and the signal-to-noise ratio is increased. The most commonly used speeds with two-inch tape are 15 and 30 IPS (inches per second). Although 30 IPS delivers better overall sound quality, most pros agree that lower frequencies sound better at 15 IPS. Indeed, in the modern era, when tape is most often being used for its sonic effect, slower speeds prevail.
Getting That Analog Tape Sound
Although owning a classic two-inch Studer or Ampex tape machine certainly earns bragging rights, in today’s DAW-oriented world, the fact is that fewer of us would opt to record to analog tape, even if we could. Space considerations, cost considerations, and the scarcity of tape and parts are only the beginning.
The fact is, tape’s destructive editing can be a slow and tedious process in a world where time is truly money. And even the medium itself is no longer cost effective. A single reel of two-inch tape averages around $200. At 15 IPS, that tape holds around 30 minutes of 24-track recording time (half that at 30 IPS). Compare that to a 2TB hard disk, currently selling for just over $100, that can hold many hours of multitrack audio, and you can see why running tape for every project is not an option for most of us.
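The reel-time arithmetic above can be checked with a quick sketch (a 2,500-foot reel is assumed here as a common size for two-inch tape; actual footage varies by brand):

```python
# Rough tape-time arithmetic: recording time for a reel of tape at a given
# speed. A 2,500-foot reel is an assumed typical size, not a spec.

def runtime_minutes(reel_feet, ips):
    """Recording time in minutes for a reel at a given speed (inches per second)."""
    inches = reel_feet * 12
    return inches / ips / 60

print(round(runtime_minutes(2500, 15), 1))  # roughly half an hour at 15 IPS
print(round(runtime_minutes(2500, 30), 1))  # half that again at 30 IPS
```

At around $200 per reel, that works out to several dollars per minute of 24-track recording time, which makes the hard-disk comparison stark.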
Screenshot of the Studer A800 plug-in from UA.
Fortunately there are a number of great sounding plug-in processors for your DAW that can bring some of that analog tape warmth and “glue” to your tracks. For example, Universal Audio’s popular Studer A800 Multichannel Tape Recorder is a very faithful emulation of the original machine’s sound, having been painstakingly developed over a one-year period with input from the original manufacturer. In fact, the sonic differences between the A800 plug-in and the original A800 hardware are so minute that many of the world’s top engineers opt to use the plug-in for their day-to-day work.
Aside from the obvious convenience factor, one of the biggest advantages of tape emulation plug-ins is their flexibility. You can choose to process only certain tracks, rather than the whole mix — imparting the warmth, low-end bump, and cohesive properties of tape to the drums and bass tracks, for example, without adding any tape color to guitars and vocals. Or you can add just a hint of tape compression to the mix, without oversaturating things the way an actual two-track machine might.
Regardless of how you choose to implement it, the sound of analog tape can be a great addition to your digital mix. Take a UAD tape emulation plug-in like the Studer A800 and test it on a few tracks. You might find it’s just the thing to add a bit of nice, understated warmth, cohesiveness and punch to your mix. Tape lives on.
Daniel Keller is a musician, engineer and producer. Since 2002 he has been president and CEO of Get It In Writing, a public relations and marketing firm focused on audio and multimedia professionals and their toys. Despite being immersed in professional audio his entire adult life, he still refuses to grow up. This article is courtesy of Universal Audio.
Offers digital and analog operating modes featuring full transformer balanced output isolation and 24-bit digital to analog conversion.
ARX announces the release of its 2nd Generation USB DI, the first with dual mode audio input interface - digital and analog - for two DIs in one.
“ARX released our first USB DI back in 2007 and the unit has become an industry standard for USB - analog audio conversion” says ARX managing director, Colin Park.
“Recently we’ve been looking at where and how ARX’s USB DIs are being used, and concluded that along with USB enabled devices, some users also stored their audio playback material on Tablet Devices, Phones and iPads which have no direct USB Digital outputs, and that interfacing these devices outputs reliably with the professional world of balanced audio can be a problem.”
“At ARX we concluded that our new second generation ‘USB DI Plus’ should offer users a “one box - dual mode” solution to both their USB digital and analog audio interface needs. To put it simply - Two DIs in one.”
Park continues, “In analog mode the USB DI Plus offers both L & R 6.5mm and broadcast quality Amphenol mini jack socket inputs. To ensure reliability, analog mode is passive, with no batteries or phantom power required for use.”
“Switching over to USB digital mode offers a standard USB type B port with an inbuilt 24-bit high resolution 44.1/48 kHz digital to analog converter (DAC). This installs as a fully compatible generic ‘plug and play’ USB audio device, requiring no special driver installation; digital mode draws its power from the USB connection.”
“Both left & right outputs also feature ground lift and mono mode switching.”
In conclusion, Park says, “Both the USB DI Plus digital and analog operating modes feature full transformer balanced output isolation. This eliminates earth loops / ground hum and helps remove extraneous interaction noise and distortion ensuring ultra low noise operation even in the harshest audio environments.”
Greg Fidelman Tracks Metallica With BAE Audio Preamps (Video)
Producer selects 1073 and 1028 preamps to match the sound of the band's existing vintage gear for new album.
When producer Greg Fidelman hunkered down to begin work on Metallica’s latest record, he knew that the 30 channels of vintage preamps the band already had in the studio would not be enough to simultaneously facilitate the multi-mic drum sound and detailed guitar miking setups he was envisioning for the sessions.
He turned to BAE Audio preamplifiers to nearly double his available inputs, matching the sound of Metallica’s existing vintage gear with the added reliability of BAE’s modern construction and components.
“They’re really busy guys, so we wanted to have all of our sounds for drums, bass, and guitar set up all at once and leave them that way,” Fidelman says. “That way they can get in there and cut a few takes without having to spend time dialing in sounds or breaking down mics each time.”
Fidelman had first worked with BAE Audio preamplifiers on a recent Slipknot record he was producing.
“On that record we were working on a full vintage console but had to change studios halfway through recording to one with a more modern console,” Fidelman recalls. “The band was concerned about retaining a consistent guitar sound so we took extensive notes of our mic placements and EQ settings and took snapshots of direct and reamped guitar to test in the new space. The new studio had a rack full of BAE 1073s and other BAE preamplifiers so we ran everything through those with our notated settings and A/B’d it with what we had done in the previous studio.”
Fidelman said the results were indistinguishable. “I think if anything our sounds were 5-10% better because of the new components in the BAE gear,” he adds.
Fidelman’s experience with BAE Audio gear on the Slipknot record gave him the confidence to recommend it for the Metallica sessions to expand their vintage input count. He procured 11 channels of the BAE Audio 1073 and eight channels of the 1028 in a mix of module and standalone rack formats. Fidelman opted for the mix of vintage and new vintage inputs over the studio’s built-in modern console because of the unique qualities of the vintage circuit design.
“With the 1073, the way you can manipulate the bottom end is pretty unique, with the low-end boost and the filter working together,” Fidelman says. “There’s also a quality to the top end that’s always musical. If you need a little extra you can really dig into it without it becoming harsh. And not just the high frequency boost/cut, but also the higher frequencies in the midrange band.”
Fidelman notes that the midrange band is particularly key for articulating the top end of kick and snare drums.
“It’s pleasing with drums, you can boost what you want without the other garbage,” he says. “To get the core guitar sounds for Metallica I’m sometimes routing a mic into the 1073 and then out into the direct input of another 1073 or 1028 to get access to another midrange or low end band for extra control.”
Fidelman appreciates the additional frequency selections provided by the 1028 on things like overheads. “You can dig in deeply with some of those additional frequencies to define the sound you’re looking for,” he says. “It provides the versatility I need.”
Though Fidelman says he “grew up” on vintage consoles and loves their sound, he acknowledges that working with older gear has its perils. “I was working at a studio in Hollywood recently with a great vintage desk, but even with the techs working through one or two modules every day, the reality was that stuff was failing faster than they could keep up with,” Fidelman says. “BAE has nailed the sound, and since you’re not dealing with 30-year-old contacts, dusty pots, and worn connectors, it’s way more reliable.”
Both preamps sport the same Carnhill/St Ives transformers specified in the original vintage circuit design and feature BAE’s renowned hand-wired construction, conducted at their facility in California, enabling them to capture the vintage sound that has been the signature of many beloved recordings.
Fidelman and the band’s approach have proven fruitful over the course of the tracking sessions.
“We began these sessions back in June and have been tracking bits and pieces as recently as two weeks ago,” Fidelman says. “We never had to stop and reset things to switch instruments, and we’ve got consistent sound on every single channel, whether it’s with our vintage channels or the BAE channels. We can hop from laying down Kirk’s guitars to Lars’s drums seamlessly and know we have sounds worthy of a Metallica record ready to go at all times.”
Working in tandem with vintage and “new vintage” gear by BAE Audio, Fidelman has also kept a coherent and consistent sound on a record that’s been in process for several months.
“There are always interruptions when you’re working on a high-profile record, but we were able to eliminate technical interruptions from the project with the consistency and reliability of BAE hardware, all without sacrificing that vintage sound that I love.” BAE preamps are the new first choice for Fidelman. “I can’t really tell the difference between BAE and the original.”
Editor’s Note: This article originally appeared in 1997, but the principles are still valid today and worthy of repeating.
Like everything else in the world, the audio industry has been radically and irrevocably changed by the digital revolution. No one has been spared.
Arguments will ensue forever about whether the true nature of the real world is analog or digital; whether the fundamental essence, or dharma, of life is continuous (analog) or exists in tiny little chunks (digital). Seek not that answer here.
Rather, let’s look at the dharma (essential function) of audio analog-to-digital (A/D) conversion.
It’s important at the outset of exploring digital audio to understand that once a waveform has been converted into digital format, nothing can inadvertently occur to change its sonic properties. While it remains in the digital domain, it’s only a series of digital words representing numbers.
Aside from the gross example of having the digital processing actually fail and cause a word to be lost or corrupted beyond use, nothing can change the sound of the word. It’s just a bunch of “ones” and “zeroes.” There are no “one-halves” or “three-quarters.”
The point is that sonically, it begins and ends with the conversion process. Nothing is more important to digital audio than data conversion. Everything in-between is just arithmetic and waiting. That’s why there is such a big to-do with data conversion. It really is that important. Everything else quite literally is just details.
We could even go so far as to say that data conversion is the art of digital audio while everything else is the science, in that it is data conversion that ultimately determines whether or not the original sound is preserved (and this comment certainly does not negate the enormous and exacting science involved in truly excellent data conversion.)
Because analog signals continuously vary between an infinite number of states while computers can only handle two, the signals must be converted into binary digital words before a computer can work with them. Each digital word represents the value of the signal at one precise point in time. Today’s common word lengths are 16, 24, and 32 bits. Once converted into digital words, the information may be stored, transmitted, or operated upon within the computer.
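As a rough sketch of what “converting to a digital word” means in practice (the scaling convention below is an assumption for illustration, not a description of any particular converter):

```python
# Hypothetical sketch of quantizing an analog sample into an n-bit word.
# Convention assumed here: full scale is -1.0 to +1.0, mapped onto a
# signed integer range; real converters differ in details.

def quantize(x, bits=16):
    """Map a sample in [-1.0, 1.0] to a signed n-bit integer code."""
    max_code = 2 ** (bits - 1) - 1     # largest positive code
    x = max(-1.0, min(1.0, x))         # clip to full scale
    return round(x * max_code)

print(quantize(0.5, 16))    # half of positive full scale
print(quantize(1.0, 24))    # positive full scale for a 24-bit word
```

The finer the word (more bits), the more closely the stored integer tracks the original analog value at each sample instant.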
In order to properly explore the critical interface between the analog and digital worlds, it’s first necessary to review a few fundamentals and a little history.
Binary & Decimal
Whenever we speak of “digital,” by inference, we speak of computers (throughout this paper the term “computer” is used to represent any digital-based piece of audio equipment).
And computers in their heart of hearts are really quite simple. They only can understand the most basic form of communication or information: yes/no, on/off, open/closed, here/gone, all of which can be symbolically represented by two things - any two things.
Two letters, two numbers, two colors, two tones, two temperatures, two charges… It doesn’t matter. Unless you have to build something that will recognize these two states - now it matters.
So, to keep it simple, we choose two numbers: one and zero, or, a “1” and a “0.”
Officially this is known as binary representation, from Latin bini—two by two. In mathematics this is a base-2 number system, as opposed to our decimal (from Latin decima a tenth part or tithe) number system, which is called base-10 because we use the ten numbers 0-9.
In binary we use only the numbers 0 and 1. “0” is a good symbol for no, off, closed, gone, etc., and “1” is easy to understand as meaning yes, on, open, here, etc. In electronics it’s easy to determine whether a circuit is open or closed, conducting or not conducting, has voltage or doesn’t have voltage.
Thus the binary number system found use in the very first computer, and nothing has changed today. Computers just got faster and smaller and cheaper, with memory size becoming incomprehensibly large in an incomprehensibly small space.
One problem with using binary numbers is that they become big and unwieldy in a hurry. For instance, it takes six digits to express my age in binary, but only two in decimal. But in binary, we’d better not call them “digits,” since “digit” implies a human finger or toe, of which there are 10, so confusion reigns.
To get around that problem, John Tukey of Bell Laboratories dubbed the basic unit of information (as defined by Shannon—more on him later) a binary unit, or “binary digit,” which became abbreviated to “bit.” A bit is the simplest possible message, representing one of two states. So, I’m six bits old. Well, not quite. But it takes six bits to express my age as 110111.
Let’s see how that works. I’m 55 years old. So in base-10 symbols that is “55,” which stands for five 1’s plus five 10’s. You may not have ever thought about it, but each digit in our everyday numbers represents an additional power of 10, beginning with 0.
Figure 1: Number representation systems.
That is, the first digit represents the number of 1’s (10^0), the second digit represents the number of 10’s (10^1), the third digit represents the number of 100’s (10^2), and so on. We can represent any size number by using this shorthand notation.
Binary number representation is just the same except substituting the powers of 2 for the powers of 10 [any base number system is represented in this manner].
Therefore (moving from right to left) each succeeding bit represents 2^0 = 1, 2^1 = 2, 2^2 = 4, 2^3 = 8, 2^4 = 16, 2^5 = 32, etc. Thus, my age breaks down as 1-1, 1-2, 1-4, 0-8, 1-16, and 1-32, represented as “110111,” which is 32+16+0+4+2+1 = 55. Or double-nickel to you cool cats.
Figure 1, above, shows the two examples.
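The base-2 arithmetic above is easy to check in a few lines of code. This is a minimal sketch (the function names are my own, not from the article) that converts between decimal and binary by the same sum-of-powers reasoning:

```python
def to_binary(n):
    """Return the base-2 string for a non-negative integer."""
    if n == 0:
        return "0"
    bits = []
    while n:
        bits.append(str(n % 2))   # remainder is the current bit (1s, 2s, 4s, ...)
        n //= 2
    return "".join(reversed(bits))

def from_binary(s):
    """Sum each bit times its power of two: 110111 -> 32+16+0+4+2+1."""
    return sum(int(b) << i for i, b in enumerate(reversed(s)))

print(to_binary(55))        # "110111", the six-bit age from the text
print(from_binary("110111"))  # back to 55
```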
The French mathematician Fourier unknowingly laid the groundwork for A/D conversion in the early 19th century.
All data conversion techniques rely on looking at, or sampling, the input signal at regular intervals and creating a digital word that represents the value of the analog signal at that precise moment. The fact that we know this works lies with Nyquist.
Harry Nyquist discovered it while working at Bell Laboratories in the late 1920s and wrote a landmark paper describing the criteria for what we know today as sampled data systems.
Nyquist taught us that for periodic functions, if you sampled at a rate that was at least twice as fast as the signal of interest, then no information (data) would be lost upon reconstruction.
And since Fourier had already shown that all alternating signals are made up of nothing more than a sum of harmonically related sine and cosine waves, then audio signals are periodic functions and can be sampled without loss of information following Nyquist’s instructions.
This became known as the Nyquist Frequency, which is the highest frequency that may be accurately sampled, and is one-half of the sampling frequency.
For example, the theoretical Nyquist frequency for the audio CD (compact disc) system is 22.05 kHz, equaling one-half of the standardized sampling frequency of 44.1 kHz.
As powerful as Nyquist’s discoveries were, they were not without their dark side, with the biggest being aliasing frequencies. Following the Nyquist criteria (as it is now called) guarantees that no information will be lost; it does not, however, guarantee that no information will be gained.
Although by no means obvious, the act of sampling an analog signal at precise time intervals is an act of multiplying the input signal by the sampling pulses. This introduces the possibility of generating “false” signals indistinguishable from the original. In other words, given a set of sampled values, we cannot relate them specifically to one unique signal.
Figure 2: Aliasing frequencies.
As Figure 2 shows, the same set of samples could have resulted from any of the three waveforms shown. And from all possible sum and difference frequencies between the sampling frequency and the one being sampled.
All such false waveforms that fit the sample data are called “aliases.” In audio, these frequencies show up mostly as intermodulation distortion products, and they come from the random-like white noise or other ultrasonic signals present in every electronic system.
Solving the problem of aliasing frequencies is what improved audio conversion systems to today’s level of sophistication. And it was Claude Shannon who pointed the way. Shannon is recognized as the father of information theory: while a young engineer at Bell Laboratories in 1948, he defined an entirely new field of science.
Even before then his genius shone through: while still a 22-year-old student at MIT, he showed in his master’s thesis how the algebra invented by the British mathematician George Boole in the mid-1800s could be applied to electronic circuits. Since that time, Boolean Algebra has been the rock of digital logic and computer design.
Shannon studied Nyquist’s work closely and came up with a deceptively simple addition. He observed (and proved) that if you restrict the input signal’s bandwidth to less than one-half the sampling frequency then no errors due to aliasing are possible.
So bandlimiting your input to no more than one-half the sampling frequency guarantees no aliasing. Cool…Only it’s not possible. In order to satisfy the Shannon limit (as it is called - Harry gets a “criteria” and Claude gets a “limit”) you must have the proverbial brick-wall, i.e., infinite-slope filter.
Well, this isn’t going to happen, not in this universe. You cannot guarantee that there is absolutely no signal (or noise) greater than the Nyquist frequency.
Fortunately there is a way around this problem. In fact, you go all the way around the problem and look at it from another direction.
If you cannot restrict the input bandwidth so aliasing does not occur, then solve the problem another way: Increase the sampling frequency until the aliasing products that do occur, do so at ultrasonic frequencies, and are effectively dealt with by a simple single-pole filter.
This is where the term “oversampling” comes in. For full spectrum audio the minimum sampling frequency must be 40 kHz, giving you a useable theoretical bandwidth of 20 kHz - the limit of normal human hearing. Sampling at anything significantly higher than 40 kHz is termed oversampling.
In just a few years’ time, we saw the audio industry go from the CD system standard of 44.1 kHz, and the pro audio quasi-standard of 48 kHz, to 8-times and 16-times oversampling frequencies of around 350 kHz and 700 kHz, respectively. With sampling frequencies this high, aliasing is no longer an issue.
O.K. So audio signals can be changed into digital words (digitized) without loss of information, and with no aliasing effects, as long as the sampling frequency is high enough. How is this done?
Quantizing is the process of determining which of the possible values (determined by the number of bits or voltage reference parts) is the closest value to the current sample, i.e., you are assigning a quantity to that sample.
Quantizing, by definition then, involves deciding between two values and thus always introduces error. How big the error, or how accurate the answer, depends on the number of bits. The more bits, the better the answer.
The converter has a reference voltage which is divided up into 2^n parts, where n is the number of bits. Each part represents the same value.
Editor’s Note: For those working the math, “2^n” means 2 raised to the nth power. Use the x^y function on a scientific calculator to get the correct result.
Since you cannot resolve anything smaller than this value, there is error. There is always error in the conversion process. This is the accuracy issue.
Figure 3: 8-Bit resolution.
The number of bits determines the converter accuracy. For 8-bits, there are 2^8 = 256 possible levels, as shown in Figure 3.
Since the signal swings positive and negative there are 128 levels for each direction. Assuming a ±5 V reference, this makes each division, or bit, equal to 39 mV (5/128 = 0.039).
Hence, an 8-bit system cannot resolve any change smaller than 39 mV. This means a worst case accuracy error of 0.78 percent.
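The step-size arithmetic above is easy to reproduce. This sketch mirrors the text's numbers (±5 V reference, 8 bits, 128 levels per polarity; the variable names are mine):

```python
n_bits = 8
v_ref = 5.0                       # volts, one polarity of the ±5 V reference
levels = 2 ** (n_bits - 1)        # 128 levels per polarity
step = v_ref / levels             # the 39 mV quantizing step from the text

# Worst-case accuracy error as the text states it: one step out of full scale.
error_pct = step / v_ref * 100

print(round(step * 1000, 1), "mV")   # about 39 mV
print(round(error_pct, 2), "%")      # 0.78 percent
```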
Each step size (resulting from dividing the reference into the number of equal parts dictated by the number of bits) is equal and is called a quantizing step (also called quantizing interval—see Figure 4).
Originally this step was termed the LSB (least significant bit) since it equals the value of the smallest coded bit; however, it is an illogical choice for mathematical treatments and has since been replaced by the more accurate term quantizing step.
Figure 4: Quantization, 3-bit, 50-volt example.
The error due to the quantizing process is called quantizing error (no definitional stretch here). As shown earlier, each time a sample is taken there is error.
Here’s the not-so-obvious part: the quantizing error can be thought of as an unwanted signal which the quantizing process adds to the perfect original.
An example best illustrates this principle. Let the sampled input value be some arbitrarily chosen value, say, 2 volts. And let this be a 3-bit system with a 5 volt reference. Three bits divide the reference into eight equal parts (2^3 = 8) of 0.625 V each, as shown in Figure 4.
For the 2 volt input example, the converter must choose between either 1.875 volts or 2.50 volts, and since 2 volts is closer to 1.875 than 2.5, then it is the best fit. This results in a quantizing error of -0.125 volts, i.e., the quantized answer is too small by 0.125 volts.
If the input signal had been, say, 2.2 volts, then the quantized answer would have been 2.5 volts and the quantizing error would have been +0.3 volts, i.e., too big by 0.3 volts.
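The worked example above can be checked in code. This is a minimal sketch (rounding to the nearest quantizing level, as the text describes; the function is my own):

```python
def quantize(v, n_bits=3, v_ref=5.0):
    """Round v to the nearest quantizing level of an n-bit, v_ref system."""
    step = v_ref / 2 ** n_bits        # 0.625 V for the 3-bit, 5 V case
    return round(v / step) * step     # nearest level

for v_in in (2.0, 2.2):
    v_q = quantize(v_in)
    print(v_in, "->", v_q, "error:", round(v_q - v_in, 3))
```

Running it reproduces the text: 2.0 V quantizes to 1.875 V (error -0.125 V) and 2.2 V quantizes to 2.5 V (error +0.3 V).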
These alternating unwanted signals added by quantizing form a quantizing error waveform: a kind of additive broadband noise that is generally uncorrelated with the signal and is called quantizing noise.
Since the quantizing error is essentially random (i.e. uncorrelated with the input) it can be thought of like white noise (noise with equal amounts of all frequencies). This is not quite the same thing as thermal noise, but it is similar. The energy of this added noise is equally spread over the band from dc to one-half the sampling rate. This is a most important point and will be returned to when we discuss delta-sigma converters and their use of extreme oversampling.
Successive approximation is one of the earliest and most successful analog-to-digital conversion techniques. Therefore, it is no surprise it became the initial A/D workhorse of the digital audio revolution. Successive approximation paved the way for the delta-sigma techniques to follow.
The heart of any A/D circuit is a comparator. A comparator is an electronic block whose output is determined by comparing the values of its two inputs. If the positive input is larger than the negative input then the output swings positive, and if the negative input exceeds the positive input, the output swings negative.
Therefore, if a reference voltage is connected to one input and an unknown input signal is applied to the other input, you now have a device that can compare and tell you which is larger. Thus a comparator gives you a “high output” (which could be defined to be a “1”) when the input signal exceeds the reference, or a “low output” (which could be defined to be a “0”) when it does not.
Figure 5A: Successive approximation, example.
A comparator is the key ingredient in the successive approximation technique as shown in Figure 5A and Figure 5B. The name successive approximation nicely sums up how the data conversion is done. The circuit evaluates each sample and creates a digital word representing the closest binary value.
The process takes the same number of steps as bits available, i.e., a 16-bit system requires 16 steps for each sample. The analog sample is successively compared to determine the digital code, beginning with the determination of the biggest (most significant) bit of the code.
The description given in Daniel Sheingold’s Analog-Digital Conversion Handbook offers the best analogy as to how successive approximation works. The process is exactly analogous to a gold miner’s assay scale, or a chemical balance as seen in Figure 5A.
This type of scale comes with a set of graduated weights, each one half the value of the preceding one, such as 1 gram, 1/2 gram, 1/4 gram, 1/8 gram, etc. You compare the unknown sample against these known values by first placing the heaviest weight on the scale.
If it tips the scale you remove it; if it does not you leave it and go to the next smaller value. If that value tips the scale you remove it, if it does not you leave it and go to the next lower value, and so on until you reach the smallest weight that tips the scale. (When you get to the last weight, if it does not tip the scale, then you put the next highest weight back on, and that is your best answer.)
The sum of all the weights on the scale represents the closest value you can resolve.
In digital terms, we can analyze this example by saying that a “0” was assigned to each weight removed, and a “1” to each weight remaining—in essence creating a digital word equivalent to the unknown sample, with the number of bits equaling the number of weights.
And the quantizing error will be no more than 1/2 the smallest weight (or 1/2 quantizing step).
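The gold-scale analogy maps directly onto code. This is a simplified sketch of successive approximation that tries each "weight" from heaviest to lightest and keeps it only if the running total does not tip the scale (it truncates to the largest code not exceeding the sample, omitting the text's final put-back refinement; the function name is mine):

```python
def sar_convert(v_in, n_bits, v_ref):
    """Successive-approximation sketch: one comparison per bit."""
    code = 0
    step = v_ref / 2 ** n_bits
    for bit in range(n_bits - 1, -1, -1):
        trial = code | (1 << bit)       # tentatively add this weight
        if trial * step <= v_in:        # scale not tipped: keep the weight
            code = trial
    return code

# 2 V into a 3-bit, 5 V converter: steps of 0.625 V, code 3 (binary 011 = 1.875 V)
code = sar_convert(2.0, 3, 5.0)
print(code, format(code, "03b"))
```

Note that the loop runs exactly once per bit, which is why a 16-bit conversion takes 16 steps per sample, as stated above.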
As stated earlier, the successive approximation technique must repeat this cycle for each sample. Even with today’s technology, this is a very time-consuming process and is still limited to relatively slow sampling rates, but it did get us into the 16-bit, 44.1 kHz digital audio world.
PCM, PWM, EIEIO
The successive approximation method of data conversion is an example of pulse code modulation, or PCM. Three elements are required: sampling, quantizing, and encoding into a fixed length digital word. The reverse process reconstructs the analog signal from the PCM code.
The output of a PCM system is a series of digital words, where the word-size is determined by the available bits. For example, the output is a series of 8-bit words, or 16-bit words, or 20-bit words, etc., with each word representing the value of one sample.
Pulse width modulation, or PWM, is quite simple and quite different from PCM. Look at Figure 6.
Figure 6: Pulse width modulation (PWM).
In a typical PWM system, the analog input signal is applied to a comparator whose reference voltage is a triangle-shaped waveform whose repetition rate is the sampling frequency. This simple block forms what is called an analog modulator.
A simple way to understand the “modulation” process is to view the output with the input held steady at zero volts. The output forms a 50 percent duty cycle (50 percent high, 50 percent low) square wave. As long as there is no input, the output is a steady square wave.
As soon as the input is non-zero, the output becomes a pulse-width modulated waveform. That is, when the non-zero input is compared against the triangular reference voltage, it varies the length of time the output is either high or low.
For example, say there was a steady DC value applied to the input. For all samples, when the value of the triangle is less than the input value, the output stays low, and for all samples when it is greater than the input value, it changes state and remains high.
Therefore, if the triangle starts higher than the input value, the output goes high; at the next sample period the triangle has increased in value but is still more than the input, so the output remains high; this continues until the triangle reaches its apex and starts down again; eventually the triangle voltage drops below the input value and the output drops low and stays there until the reference exceeds the input again.
The resulting pulse-width modulated output, when averaged over time, gives the exact input voltage. For example, if the output spends exactly 50 percent of the time with an output of 5 volts, and 50 percent of the time at 0 volts, then the average output would be exactly 2.5 volts.
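The comparator-against-a-triangle idea is simple enough to simulate. This toy model (an idealized comparator and triangle reference of my own devising, not the article's circuit) shows that time-averaging the PWM output recovers a DC input:

```python
def pwm_average(v_in, v_high=5.0, steps=10_000):
    """Average the comparator output over one triangle period."""
    total = 0.0
    for i in range(steps):
        phase = i / steps
        # Triangle reference sweeps 0 -> v_high -> 0 over one period.
        tri = v_high * (2 * phase if phase < 0.5 else 2 * (1 - phase))
        total += v_high if v_in > tri else 0.0   # comparator output, high or low
    return total / steps

print(pwm_average(2.5))   # ~2.5 V: a 50 percent duty cycle at half scale
```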
This is also an FM, or frequency-modulated system—the varying pulse-width translates into a varying frequency. And it is the core principle of most Class-D switching power amplifiers.
The analog input is converted into a variable pulse-width stream used to turn-on the output switching transistors. The analog output voltage is simply the average of the on-times of the positive and negative outputs.
Pretty amazing stuff from a simple comparator with a triangle waveform reference.
Another way to look at this is that this simple device actually codes a single bit of information, i.e., a comparator is a 1-bit A/D converter. PWM is an example of a 1-bit A/D encoding system. And a 1-bit A/D encoder forms the heart of delta-sigma modulation.
Modulation & Shaping
After 30 years of waiting, delta-sigma modulation (also called sigma-delta) emerged as the most successful audio A/D converter technology.
It waited patiently for the semiconductor industry to develop the technologies necessary to integrate analog and digital circuitry on the same chip.
Today’s very high-speed “mixed-signal” IC processing allows the total integration of all the circuit elements necessary to create delta-sigma data converters of awesome magnitude.
Essentially a delta-sigma converter digitizes the audio signal with a very low resolution (1-bit) A/D converter at a very high sampling rate. It is the oversampling rate and subsequent digital processing that separates this from plain delta modulation (no sigma).
Referring back to the earlier discussion of quantizing noise, it’s possible to calculate the theoretical sine wave signal-to-noise (S/N) ratio (actually the signal-to-error ratio, but for our purposes it’s close enough to combine) of an A/D converter system knowing only n, the number of bits.
Doing some math shows that the value of the added quantizing noise relative to a maximum (full-scale) input equals 6.02n + 1.76 dB for a sine wave. For example, a perfect 16-bit system will have a S/N ratio of 98.1 dB, while a 1-bit delta-modulator A/D converter, on the other hand, will have only 7.78 dB!
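The rule of thumb above is quickly confirmed in code (the function name is mine; the formula is the text's):

```python
def ideal_snr_db(n_bits):
    """Theoretical sine-wave S/N ratio for an ideal n-bit converter."""
    return 6.02 * n_bits + 1.76

print(ideal_snr_db(16))  # ~98.1 dB for a perfect 16-bit system
print(ideal_snr_db(1))   # ~7.78 dB for a 1-bit delta modulator
```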
Figures 7A - 7E: Noise power redistribution and reduction due to oversampling, noise shaping and digital filtering.
To get something of an intuitive feel for this, consider that since there is only 1-bit, the amount of quantization error possible is as much as 1/2-bit. That is, since the converter must choose between the only two possibilities of maximum or minimum values, then the error can be as much as half of that.
And since this quantization error shows up as added noise, then this reduces the S/N to something on the order of around 2:1, or 6 dB.
One attribute shines true above all others for delta-sigma converters and makes them a superior audio converter: simplicity. The simplicity of 1-bit technology makes the conversion process very fast, and very fast conversions allows use of extreme oversampling.
And extreme oversampling pushes the quantizing noise and aliasing artifacts way out to megawiggle-land, where they are easily dealt with by digital filters (typically 64-times oversampling is used, resulting in a sampling frequency on the order of 3 MHz).
To get a better understanding of how oversampling reduces audible quantization noise, we need to think in terms of noise power. From physics you may remember that power is conserved, i.e., you can change it, but you cannot create or destroy it; well, quantization noise power is similar.
With oversampling, the quantization noise power is spread over a band that is as many times larger as is the rate of oversampling. For example, for 64-times oversampling, the noise power is spread over a band that is 64 times larger, reducing its power density in the audio band by 1/64th.
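Expressed in decibels, spreading a fixed noise power over an M-times wider band lowers its density in the audio band by 10·log10(M) dB. A quick sketch (my own framing of the text's 64-times example):

```python
import math

def oversampling_gain_db(m):
    """In-band noise-density reduction from m-times oversampling alone."""
    return 10 * math.log10(m)

print(oversampling_gain_db(64))   # ~18 dB of in-band improvement
```

Note this is the gain from plain oversampling only; noise shaping, discussed next, buys considerably more.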
Figures 7A through 7E illustrate noise power redistribution and reduction due to oversampling, noise shaping and digital filtering.
Noise shaping helps reduce in-band noise even more. Oversampling pushes out the noise, but it does so uniformly, that is, the spectrum is still flat. Noise shaping changes that.
Using very clever complex algorithms and circuit tricks, noise shaping contours the noise so that it is reduced in the audible regions and increased in the inaudible regions.
Conservation still holds: the total noise is the same, but the amount of noise present in the audio band is decreased while the out-of-band noise is simultaneously increased; the digital filter then eliminates it. Very slick.
As shown in Figure 8, a delta-sigma modulator consists of three parts: an analog modulator, a digital filter and a decimation circuit.
The analog modulator is the 1-bit converter discussed previously with the change of integrating the analog signal before performing the delta modulation. (The integral of the analog signal is encoded rather than the change in the analog signal, as is the case for traditional delta modulation.)
Figure 8: Delta-sigma A/D converter.
Oversampling and noise shaping pushes and contours all the bad stuff (aliasing, quantizing noise, etc.) so the digital filter suppresses it.
The decimation circuit, or decimator, is the digital circuitry that generates the correct output word length of 16-, 20-, or 24-bits, and restores the desired output sample frequency. It is a digital sample rate reduction filter and is sometimes termed downsampling (as opposed to oversampling) since it is here that the sample rate is returned from its 64-times rate to the normal CD rate of 44.1 kHz, or perhaps to 48 kHz, or even 96 kHz, for pro audio applications.
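The rate arithmetic behind the decimator is straightforward. A quick sketch of the text's 64-times, 44.1 kHz case (variable names are mine):

```python
oversample_ratio = 64
output_rate = 44_100                          # Hz, the CD output rate
modulator_rate = oversample_ratio * output_rate

print(modulator_rate)                         # 2,822,400 Hz, on the order of 3 MHz
```

The decimator's job is to take the 1-bit stream at modulator_rate back down to multi-bit words at output_rate.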
The net result is much greater resolution and dynamic range, with increased S/N and far less distortion compared to successive approximation techniques—all at lower costs.
Now that oversampling helped get rid of the bad noise, let’s add some good noise—dither noise. Dither is one of life’s many trade-offs. Here the trade-off is between noise and resolution. Believe it or not, we can introduce dither (a form of noise) and increase our ability to resolve very small values.
Values, in fact, smaller than our smallest bit… Now that’s a good trick. Perhaps you can begin to grasp the concept by making an analogy between dither and anti-lock brakes. Get it? No? Here’s how this analogy works: With regular brakes, if you just stomp on them, you probably create an unsafe skid situation for the car… Not a good idea.
Instead, if you rapidly tap the brakes, you control the stopping without skidding. We shall call this “dithering the brakes.” What you have done is introduce “noise” (tapping) to an otherwise rigidly binary (on or off) function.
So by “tapping” on our analog signal, we can improve our ability to resolve it. By introducing noise, the converter rapidly switches between two quantization levels, rather than picking one or the other, when neither is really correct. Sonically, this comes out as noise, rather than a discrete level with error. Subjectively, what would have been perceived as distortion is now heard as noise.
Let’s look at this in more detail. The problem dither helps to solve is that of quantization error caused by the data converter being forced to choose one of two exact levels for each bit it resolves. It cannot choose between levels; it must pick one or the other.
With 16-bit systems, the digitized waveform for high frequency, low signal levels looks very much like a steep staircase with few steps. An examination of the spectral analysis of this waveform reveals lots of nasty sounding distortion products. We can improve this result either by adding more bits, or by adding dither.
Prior to 1997, adding more bits for better resolution was straightforward, but expensive, thereby making dither an inexpensive compromise; today, however, there is less need.
The dither noise is added to the low-level signal before conversion. The mixed noise causes the small signal to jump around, which causes the converter to switch rapidly between levels rather than being forced to choose between two fixed values.
Now the digitized waveform still looks like a steep staircase, but each step, instead of being smooth, is comprised of many narrow strips, like vertical Venetian blinds.
Figure 9: A - input signal; B - output signal (no dither); C - total error signal (no dither); D - power spectrum of output signal (no dither); E - input signal; F - output signal (with dither); G - total error signal (with dither); H - power spectrum of output signal (with dither).
The spectral analysis of this waveform shows almost no distortion products at all, albeit with an increase in the noise content. The dither has caused the distortion products to be pushed out beyond audibility, and replaced with an increase in wideband noise. Figure 9 diagrams this process.
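The sub-step resolving power of dither shows up even in a toy simulation. This sketch (my own, assuming an idealized 1 V quantizing step and uniform dither of one step peak-to-peak) quantizes a DC level of 0.3 steps: without dither it always reads zero, while the average of many dithered samples converges on the true sub-step value:

```python
import random

random.seed(1)
step = 1.0
signal = 0.3 * step                        # smaller than one quantizing step

undithered = round(signal / step) * step   # always 0.0: the detail is lost

# With dither, the converter rapidly flips between the two nearest levels,
# high in proportion to how close the signal is to the upper level.
n = 100_000
dither_avg = sum(
    round((signal + random.uniform(-step / 2, step / 2)) / step) * step
    for _ in range(n)
) / n                                      # hovers near 0.3

print(undithered, round(dither_avg, 3))
```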
Wrap With Bandwidth
Due to the oversampling and noise shaping characteristics of delta-sigma A/D converters, certain measurements must use the appropriate bandwidth or inaccurate answers result. Specifications such as signal-to-noise, dynamic range, and distortion are subject to misleading results if the wrong bandwidth is used.
Because noise shaping purposely reduces audible noise by shifting the noise to inaudible higher frequencies, taking measurements over a bandwidth wider than 20 kHz results in answers that do not correlate with the listening experience. Therefore, it’s important to set the correct measurement bandwidth to obtain meaningful data.
Dennis Bohn is a principal partner and vice president of research & development at Rane Corporation. He holds BSEE and MSEE degrees from the University of California at Berkeley. Prior to Rane, he worked as engineering manager for Phase Linear Corporation and as audio application engineer at National Semiconductor Corporation. Bohn is a Fellow of the AES, holds two U.S. patents, is listed in Who’s Who In America and authored the entry on “Equalizers” for the McGraw-Hill Encyclopedia of Science & Technology, 7th edition.
Hardware-based model leverages FPGA technology for low latency performance and the character of the original vintage gear.
Antelope Audio announces the arrival of its FET-A76 solid-state compressor to its latest hardware interfaces, as well as a further expansion of its EQ offerings with four new vintage hardware models.
Each hardware-based model leverages Antelope’s proprietary Field Programmable Gate Array (FPGA) technology, and joins a growing range of signal processing offered by Antelope Audio with near-zero latency performance and the character of the original vintage gear.
The FET-A76, VEQ-HLF, Helios 69, NEU-PEV and Lang PEQ2 add more essential tools for tracking and mixing to the growing Antelope Audio digital platform, and are available immediately as an update for the Goliath, Zen Tour and Orion Studio hardware.
FET-based compression has been a staple in the studio since its invention in the late 60s. The Antelope FET-A76 captures all of the nuances of a vintage FET compressor, and like its analog progenitor, is useful not only for controlling dynamics and sculpting tone but also for its ability to add punch and presence to anything passing through its circuit. Mirroring the original hardware’s easy-to-use interface, the FET-A76 features input and output gain controls and a selectable 4-way ratio control with hidden “all-buttons” mode for a more aggressive compression character.
The FET-A76 shines in a wide range of applications, from vocals to bass guitar to buss compression, and is a companion to Antelope Audio’s already-available FPGA EQ models, including the recently released BAE Audio 1073.
“The FET compressor has been a studio workhorse for decades, and now thanks to our proprietary FPGA technology we can model the character of this classic outboard gear in-hardware on our interfaces with near-zero latency,” says Marcel James, director of U.S. sales, Antelope Audio. “Whether you apply it to individual tracks or to the mix bus, the FET-A76 puts the musical analog character and all the incredible dynamics shaping potential of this classic circuit at the fingertips of Antelope Audio’s users to make their mixes shine.”
Ferrofish A32 Converter & RME Interface Play Handy Role For Ellie Goulding On Tour
New Ferrofish A32 AD/DA converter for the keyboard and playback rig, joined by RME MADIface XT 394-Channel, 192 kHz USB 3.0 audio interface
On the last leg of Ellie Goulding’s recent tour of Europe, next coming to North America, the sound team upgraded the keyboard and playback rigs with technology from Ferrofish and RME, which are both distributed by Synthax.
Specifically, Will Sanderson, the tour’s MIDI, playback and keyboards technician, decided to deploy the new Ferrofish A32 AD/DA converter for the show’s keyboard and playback rig and the RME MADIface XT 394-Channel, 192 kHz USB 3.0 audio interface.
“The aim was to design a system that would handle keyboard sounds, electronic drum sounds, playback, plus autocue and timecode while enabling the musical director and the musicians to have free rein with their sound design and full control of it during the show from on stage,” Sanderson explains. “I knew the system would need to sound great and be robust, yet remain flexible enough to handle any changes or developments as the campaign progressed. With so many elements of the show connected to this rig, it could not be the weak link.”
Sanderson continues, “I’m very lucky that Ellie Goulding’s musical director, Joe Clegg, and the musicians in the band are all very open to embracing new technology. Together, we’ve designed a rig that allows us to explore both the creative and technical possibilities. I needed to select hardware that would not compromise the band’s workflow and give them the performance they needed while also providing the quality, reliability, and flexibility that I require to function efficiently on the road.
“To address these requirements, the RME and Ferrofish combination was a natural choice. The gear neatly handles all of our keyboard sounds, electronic drum sounds and playback, as well as autocue and timecode. In the end, we selected four RME MADIface XT units, the RME MADI Router, DirectOut EXBOX.BLDS MADI switchers (for auto-switching redundancy) and the Ferrofish A32.”
Sanderson provides specifics on the application of the Ferrofish A32. “The sound quality of this converter is fantastic and it couldn’t be easier to operate,” he reports. “We were one of the first people to get hold of it and we didn’t have a lot of time for extensive testing. As it turns out, it really was a case of unboxing it, bolting it into the rack, and turning it on. Straight out of the box, it worked exactly as you’d expect it to. The Ferrofish A32 is a very well thought out and intuitive piece of equipment. And while I’m certain we’re only using it for a fraction of its capabilities, for our requirements, it couldn’t be better.”
“With other artists I’ve worked for,” he adds, “I had a lot of success using RME products, so I knew the gear would work well. We needed equipment that was going to be robust, not compromise on sound or build quality, and give us the flexibility we required. The gear needed to be scalable up and down for different touring requirements.”
Sanderson also clarified why he and his team chose to go with the RME MADIface XT over the smaller RME MADIface USB, which was also considered. “Because this setup was never going to be a carry-on fly rig, compact size was less important. A key reason for choosing the XT is the unit’s visual display, which makes operation easier and speeds up problem diagnosis.
“Another important consideration is that I have the option to run them from the Thunderbolt port via an adapter. This way, I can free up the USB architecture within the computer should it need to focus solely on MIDI.”
Sanderson reports that using MADI provides him extra flexibility and more sophisticated channel routing options. “We were quite ambitious with what we wanted to achieve regarding channel routing,” he notes, “and it was these products that enabled us to commit to a design without having to feel constricted by hardware specifications.”
He also points out the importance of having redundancy options in a live playback rig. The team uses two duplicate playback and keyboard rigs, ensuring that, if a fault were to occur during a show, the backup rigs would immediately take over, resulting in no audio dropouts.
“There are two sides to this system,” he says. “There’s a keyboard rig and a playback rig that also provides all of our electronic drum sounds and timecode, which gets sent out to FOH, as well as the Lighting and Video departments. Both sides to the rig have redundant back-up systems, but I also wanted the flexibility to be able to run the keyboard rig from the playback rig if necessary. The extra security provided by this system breeds confidence in the setup—for everyone involved.”
The rig is positioned offstage, though it is remotely controlled by the band on stage. The musicians are in total control of the show. There are five keyboards and two sample pads, plus nine drum triggers on the drum kit—all of which communicate directly with this system via MIDI. “I’m basically just monitoring the rig during the show, though I can make adjustments on the fly as required,” he says.
SSL AWS 948 Console Making Music At The New Anexe Studio In The UK
Owners Steve and Lindsey Trougton decided that the SSL AWS 948 could draw a wide crowd in addition to offering the desired functionality
The Anexe Studio, a new ground-up commercial facility in Exeter, UK, has invested in a Solid State Logic (SSL) AWS 948 console.
Steve and Lindsey Trougton’s long-held ambition to own and operate their own commercial recording studio has finally been realized with The Anexe. Returning from a spell in New York, where the pair originally went in 2011 to study sound recording after a dramatic change in careers, they saw opportunity in the region’s growing cultural credentials.
“It’s an up-and-coming city with a vibrant music and arts scene,” explains Lindsey. “There’s a lot of investment and growth, so we thought it would be an exciting place to build our studio, and our home.”
Both come from musical backgrounds. Steve is a trumpet and bass player as well as a vocalist, and has been in many bands from the age of 12. Lindsey is a vocalist who has trained in stage performance and spent a lot of time in recording studios over the years.
“There’s no direct competition for us here, which we thought was quite exciting,” continues Lindsey. “We’re only a two-hour train journey from London and we have a local airport with regular flights to and from Europe.”
They decided that the SSL AWS 948 could draw a wide crowd. It uses a dual path channel design to fit 48 channels into a 24-channel frame, and offers three operating modes, selectable per channel. “To have 48 channels in such a small footprint is fantastic,” Steve says. “That was a massive selling point.
“It’s got 4-band EQ on every channel, selectable between E and G-Series,” he continues, “and SSL dynamics, and of course the master bus compressor from the G-Series console. I love that.”
The hybrid nature of the AWS is another aspect that the duo finds suits their purposes well. The console’s Focus button switches the control surface between analog and DAW operator focus, reassigning the meters, faders, select switches and V-pots. “Even when we’re tracking we’re on that Focus button doing the rough mix,” Steve explains, “which is pretty much ready to go when the band has finished a take. That workflow is incredible.”
The Anexe Studio was built from scratch, starting with a 2.6-meter hole so the building could be partially submerged into the landscape, giving it 4-meter-high ceilings in the live room without imposing too much on the local skyline.
UK studio design and installation specialists Studio Creations came up with the architectural, acoustic, and technical plans for the studio, and completed the construction, internal finishes, and technical installation.
Mark Russell, Studio Creations director, is complimentary about Steve and Lindsey’s vision for the studio: “The flow of the place, the vibe, everything is high quality. They have a fantastic eye for style and were very clear on their preferences. It’s not a ‘typical’ or ordinary studio. There’s a great mixture of materials used inside, including the cedar paneling taken from the original building, the New York-inspired brick work, fabric panels. Lots of things.”
To help with the design process, the Studio Creations service includes 3D renderings of the designs plus an ‘auralization’ demo, which allows clients to assess different levels of sound proofing and isolation before committing to anything, and to satisfy any planning concerns.
Inside, the studio consists of a control room, a main live room with three isolation booths, and a recreational room with kitchen facilities. Even the bathroom sink is special, a one-off moulded concrete basin created from the body of one of Steve’s Telecasters.
The Trougtons have been putting the new facility through its paces with a stream of local acts. “We’re getting to know our space at the moment,” says Lindsey. “It’s been a learning experience for us and we’ve had some great feedback.
“Bands love the fact that they can record live. The local laptop studios have their place, but there’s no substitute for a dedicated live space and great equipment. Everybody knows the SSL name, though of course not all bands have experienced an SSL. They all love the look of the AWS 948, but they especially love the sound.”
A live concert or event often serves a wider audience than those seated in the arena. In addition to the audio in the house and video at the side of the stage, it might also be simulcast to a separate location, streamed on the web, recorded for archival or other purposes, and/or broadcast via radio or television.
To accomplish this task, audio signals from microphones and instruments must be split beyond the traditional front of house and monitor positions and sent to other locations, be they other control rooms at the venue or inside production trucks.
Complicating matters more, these locations will probably be on different legs of the electrical system, potentially adding ground loops and hum to the signal.
With the more widespread adoption of digital consoles and networking in live sound applications, there’s much more potential to access the same signals and be tied together in ring or other topologies with the audio remaining in the digital domain. I was curious whether (and how) these newer technologies are being used.
My search began at the Monterey Jazz Festival, where the main stage shows are simulcast to the Jazz Theatre and other locations onsite as well as often shared via radio or webcast. McCune Audio (San Francisco) handles the audio and video for the festival. I then talked with folks at several major sound companies about how they handle the issues of signal splitting, grounding, and interfacing consoles.
Main Stage Simulcast
The 6,500-seat arena at the Monterey Jazz Festival, where the main acts perform, is the focal point of the simulcast and broadcast activities. DiGiCo SD10 consoles are located at FOH and monitors, with their D-Racks at the side of the stage near the monitor station. Four video cameras are positioned in the house for full-stage shots as well as on either side of the stage for close-ups.
“Split world” adjacent to the monitor position at the Jazz Theater at Monterey, the focal point of simulcast and broadcast activities. (Photo by Eva Bagno)
The video and simulcast control center is set up behind and underneath the main stage. All camera shots are called from this location, audio is embedded with the video, simulcast feeds are processed and distributed, and in 2014, the show was also webcast. Archiving is done from a production truck outside the arena, with an Avid VENUE Profile console building its own mix.
This mix is also sent to radio, as it has been every year with the exception of the 2015 edition of the festival. The raw feeds from the onstage cameras are provided to screens at both FOH and the archiving truck so that the engineers have visual information about what’s happening on stage, even during changeovers between acts.
Splitting & Routing
To accommodate the different mix requirements for FOH, monitors, archiving, and simulcast, stage signals are split three ways.
Though all mixing consoles are digital, with A/D converters on stage, the split itself is analog via a custom Ramtech STGBX-54 three-way splitter. Each of the 54 channels can be input directly or via four 12-channel and one 6-channel Ramtech CPC onstage sub-snakes.
The SD10 console at FOH is directly connected, and the monitor and archiving consoles receive transformer-isolated feeds. “Any mic that’s in the system, even if it’s only for a record input, has to connect to FOH for phantom power,” adds Nick Malgieri, FOH mixer.
From the splitter, the stage signals go from each of the three multi-pin outputs to the A/D converter boxes. Both FOH and monitors use two 32-input DiGiCo D-Racks, with an optical fiber loop to transmit the signals to the consoles. To feed the Profile console for archival recording, the third split goes to a VENUE Stage Rack using MADI digital protocol. The video control area receives two separate stereo mixes, from FOH and the archival truck, and uses one or the other to embed with the live-edited video signal – which goes from there to simulcast, recording, and webcast.
On the decision to use an analog split rather than digital networking to share audio signals, Malgieri explains, “There are ways we could network it all together, but for speed and efficiency we keep it an analog split. That way, no one is tied to anyone else. Sharing preamps together would make us interdependent, which wouldn’t be conducive to a festival-style event with fast changes and guest engineers.”
Recording & Archiving
Ron Davis has been mixing and producing the Monterey Jazz archival recordings for many years. The mix is independent of FOH, starting with the raw signal “straight off the mic” plus the audience mics above the stage and in the house for crowd response. Having his own multi-channel feed from the stage allows him to “fine-tune the mix for recording purposes,” he notes, since his environment is more conducive to critical listening.
Ron Davis, mixer and producer of Monterey Jazz archival recordings, at his Avid VENUE Profile console. (Photo by Eva Bagno)
The VENUE Profile console interfaces with Pro Tools, and Rob Macky monitors the recording along with other technical details. Other members of the team include an onstage liaison, who is in touch continually with a comms person in the truck (also connected with FOH, video, and other positions throughout the venue), and another who archives the recordings to digital media as soon as they’re finished.
Davis says that 48 channels are usually more than enough for the acts plus the audience mics, though at times he needs to drop a couple of inputs. In those cases, he may choose one of a stereo pair of mics or just use the DI from the bass rather than adding the mic on the cabinet. His mix is patched to the video area as a potential simulcast feed, and is fed to any radio broadcast trucks airing the show. The archival truck also receives the FOH mix for redundancy.
Monterey Jazz Simulcast
Because simulcast is an important feature at the festival, a control area is designated for video and tasked with live video for the side-stage screens, creating the simulcast feed and monitoring the venues where it plays, video archiving and performance MP4 recordings for the artists, and at times, webcasting.
The crew includes the camera operators, directed by Jesse Block from the control room, a person controlling the video and audio embedding, another on recording, and a “grounds technician.”
A portion of the simulcast control center underneath the main stage at Monterey Jazz. (Photo by Eva Bagno)
Simulcast receives mixes from FOH and the record truck, plus an ambient mic submix. “Depending on who’s ready first,” Malgieri states, “video makes a judgment call on which mix they’re going to go with, based on what’s coming down the pipe. This could change for each act.” Also, having both mixes available provides a backup in case of trouble, giving the same content but different mixes.
The simulcast feed goes to the Jazz Theatre, where patrons who have purchased ground passes for the other stages can experience what’s happening on the arena stage. The signal is sent about 750 feet via fiber to the theatre, and then decoded into L/R audio to full-range loudspeakers and subwoofers, with video projected on a large screen. The Premier (VIP) Lounge is a smaller venue, closer to the arena, and it also receives the simulcast via HD/SDI.
On The Road
Beyond Monterey, I checked in with several other touring companies to learn how they handle signal splitting for live events, especially when multiple splits are required for broadcast, recording, or similar applications.
In most cases, an “old-school” analog splitter with transformer-isolated outputs is the rule on the road (at least among those I spoke with), rather than sharing a common digital signal among the various applications.
Dave Skaff, senior tour support for Clair Global (Lititz, PA), says that “There seems to be two distinct camps between the live world of traveling music and broadcast. The live mixers are very ingrained with having their own head amp control. To give them that control, a digital split is kind of ruled out.”
A downside is that each console position needs its own stage racks with an analog split. Continuing, he adds, “In the broadcast world, the idea of using one set of head amps, and having several people follow with digital trim or some kind of gain tracking is a fairly accepted way of doing things – their comfort level is higher.”
On a recent U2 tour, Skaff notes, “We did entertain the idea of having digital splits, with certain people having control of stage racks and others using digital trim for levels.” During the planning, he adds that there were incidents where, if the digital loop went down, “you lost a lot of control.” The show was especially complex, with six different consoles that would be on the loop – FOH plus a backup, and three separate monitor setups with a backup.
Skaff notes that some of the tour staff’s fear of relying on the newer technology came from second-hand conversations they’ve “heard from others,” plus Clair’s own observations of small glitches that persuaded them to stay with the tried-and-true methods.
The company went back to a custom-designed 6-way analog splitter for the U2 shows so that each console would have control over levels, with proper loading for six mic preamps per channel and transformers that accept a wide range of signal levels without saturation. There were also conventional splitters that facilitated 3- or 4-way passive splits.
For a show that also needs to accommodate broadcast trucks, the engineer might ask for an isolated analog split or a digital split sent to them as AES3, or possibly a MADI split off the stage racks; Skaff has had all of the above requested recently. Many tours will provide an open analog split, available for a recording truck or other production application.
Smaller Or Larger Flexibility
Dave Rat of Rat Sound Systems (Oxnard, CA) discussed with me a “baseline method” of signal splitting, using a custom-built XLR panel with two 56-pair Whirlwind W4 MASS connector outputs. For smaller shows, the choice is usually a single panel for FOH and monitors, while for larger shows or ones that require separate recording or broadcast feeds, the approach is multiple panels with a single input and a pair of ISO outputs – with the direct signal going to FOH.
A portion of the isolated split recording approach with the Red Hot Chili Peppers in Europe earlier this year.
Rat Sound has also designed multi-connector panels that are fed from stage boxes, and by changing the tails, the switch between opening and headlining acts can be accomplished more quickly and reliably. Occasionally there are bands where both the FOH and monitor engineer are working with the same mixing console, and each will use a common digital split; any additional feeds for a production truck are likely to come from the analog splitter.
Rat observes that recording seems to fall into two categories.
The first is an isolated split recording where the signals go to a recording truck, or in the case of the Red Hot Chili Peppers, to a console located in a remote room in the venue that is fed from a separate A/D rack stage-side and mixed there. This mix might go to broadcast or another application.
The second is that the FOH or monitor console sends a recording feed, taking the mic outputs in their raw form to a multitrack recorder.
Greg Snyder of Thunder Audio (Livonia, MI) also confirms that even though many of the latest mixing consoles can share a stage rack and digitally split the signal, his team often opts not to share mic preamps and to use an analog split. He finds that off-the-shelf splitters can be very reliable, and that “with today’s digital consoles, we find that passive splitters are very easy to use as go-to packages.”
Snyder adds that quite often the FOH engineer will create a mix to be embedded with video, which is transported from the console to the video truck via an analog snake or fiber interface. He notes that mixing for both live and video “requires that the engineer be very conscious of the mix they’re providing so that it will be usable for broadcast.”
Hall Of Fame & More
When I caught up with him recently, Mark Dittmar, the live broadcast events engineer at Firehouse Productions (Red Hook, NY), had just returned from the Rock & Roll Hall of Fame show, which he’s worked for several years running. This year’s event combined a live show for an audience of about 15,000, plus broadcast, at Brooklyn’s Barclays Center arena.
In addition to Firehouse’s live audio setup that included infrastructure, splits and comms, All Mobile Video provided the television truck and a Music Mix Mobile truck did the audio mixing. Firehouse handles overall coordination of the show, and then, Dittmar notes, “informs the others how we’re handing things off to them.”
Both 3-way and 6-way splitters were deployed to route audio signals to the various stage racks, and then to mixing consoles in the venue and out to the trucks. “We always split everything analog; we don’t do any digital sharing,” he explains. “That has a major negative impact on the speed and workflow that we’re doing. We keep everything analog in the split world, and then it goes digital from that point on out. With how fast changes come at us in this type of show, it’s proven to be impractical for any type of preamp sharing.”
He also points out that the FOH music and production desks, monitor desk, music mix desk, and broadcast desk are usually different makes/models, and there’s typically only one song during rehearsal to set levels and EQ, along with a quick camera check, and then it’s on to the next act.
Firehouse utilizes modified Whirlwind splitters, with Dittmar noting that “one of the cornerstones of our company is having absolutely zero split issues.” It’s not uncommon to see a 192-input show split six ways, so the company uses a very specific grounding scheme and is “militant” about sticking to it.
Part of the splitter’s design is focused on enforcing proper grounding, and the tech crew also follows a rigid power distribution scheme that also reinforces best practices in grounding. Dittmar concluded our conversation by stressing the basics: “Splitting and grounding is something where you can be 99.9 percent correct, and the 0.1 percent that you’re wrong about brings everything down.”
Grounded In Analog?
While digital networking has matured greatly over the past several years and can reliably distribute audio signals to multiple destinations, there are still some areas within the audio chain where old-school analog devices remain a standard. Signal splitting and isolation seems to be one of those areas.
In part, this is a practical decision driven by the nature of shows being set up in different venues every night while accommodating the rapid changes between acts and the desire of engineers to have full control over the inputs into their consoles. There also seems to be some resistance to surrendering that control, based on prior experiences with earlier networking technology and anecdotes from fellow engineers.
The bottom line is that analog splitting is a proven solution for sharing live audio. It will take more time, positive experiences and perhaps technical development before digital splitting becomes more commonplace in live sound.
Gary Parks is a writer who has worked in pro audio for more than 25 years, holding marketing and management positions with several leading manufacturers.
A-Designs Audio Unveils Mix Factory Summing Unit For Enhanced Workflows
Accommodates up to 16 audio channels that come into the device on two D-sub inputs and sum to stereo XLR outputs
A-Designs Audio has introduced Mix Factory, a new concept and approach to “out-of-the-box” summing for engineers and musicians looking to enhance their current sound and workflow.
“Our new Mix Factory isn’t just any old summing unit,” says A-Designs Audio’s Peter Montessi. “It delivers analog warmth with the depth and imaging needed to make your mixes truly stand out from the crowd.”
Based on a concept developed by producer/engineer/mixer Tony Shepperd and brought to life by designer Paul Wolff, Mix Factory accommodates up to 16 audio channels, which come into the device on two D-sub inputs and sum to stereo XLR outputs.
All 16 channels have a continuous FDR (gain) knob, a pan pot with center detent, and a cut (mute) switch that doubles as a signal indicator: its audio-sensitive LED glows green when signal is passing through the channel, intensifies as the signal gets stronger, and illuminates red when the channel is muted.
There are two eight-channel groups on the Mix Factory: 1-8 and 9-16. Each group has an insert for a compressor or EQ, and there is also a master insert for all 16 channels, along with three mute buttons, one for each insert.
Mix Factory has a push-button option to go from clean—the standard setup bypassing the transformers—to tonal, using custom-made output transformers manufactured by Cinemag. Switching the transformers in or out gives users a choice of transparent sound or added analog tone and color.
In situations where more than 16 channels are necessary, Mix Factory is linkable, providing 64 or more channels.
“Like many other present day engineers, I used to think that everything could be done ‘in the box’,” recalls Shepperd. “However, I finally came to realize that analog and digital could, in fact, co-exist very well together, and the hybrid of the two was the special combination that took my mixes to a whole new level. It’s what ultimately prompted many other recording engineers to call me for tips and practically demand to know how I made my recordings sound so great. After several years of R&D on this product, I can say that nothing will make a mix pop quite like A-Designs’ new Mix Factory.”
An external, switchable power supply allows the unit to be used for both 120-volt domestic (U.S.) studio environments as well as 230-volt export markets. Including the PSU, the 2U device weighs 17 pounds.
Immediately available, Mix Factory carries a street price of $2,750 (USD).
One day on a freelance gig I walked into the room to discover that the A/V company provided me with an older analog console with two racks of outboard gear. While setting up front of house and patching in all of the effects and processing, I found myself wishing for a digital console.
The very next freelance gig I was presented with a brand-new digital console and no extra gear at front of house. As I waded through menus trying to set up a console that I’d never used before and struggled to read the manual I’d just downloaded on my phone, I found myself wishing for an analog console and some simple outboard processing.
Analog consoles and processing haven’t died yet, nor will they go away in the foreseeable future. Many sound companies and installations still utilize analog components, and there are “old sound guys” like me who still like to grab knobs instead of sorting through menus.
For shows with just a few inputs, analog is usually the most cost-effective choice, and for some larger shows, analog consoles are proven items that (often) sound great while offering all of the necessary features.
If you work in only one venue, you learn the console and system and adapt a workflow for that gear. If you freelance a lot like me, there’s the need to adapt to whatever console and equipment is provided. Here are some of the approaches I’ve developed over the years.
I have slightly different setup routines depending on whether it’s a digital console or an analog model with outboard processing. The first thing I do, if at all possible ahead of time, is ask what gear is being provided so I can download the manuals and quick start guides if I don’t already have a copy. There are folders on my laptop filled with manuals for consoles, outboard processors and recording units, so answers can be found quickly at a show even if I don’t have internet access.
If I find out about the gear in advance, I print out a quick start guide and/or pertinent pages of a manual (like how to change aux sends from pre to post) for quick reference at the gig. It’s also a good idea to read the manuals on unfamiliar gear before the show.
With analog consoles at FOH, I start by setting up the outboard racks close to the console so the patch cables can reach. Depending on how the equipment is packaged I may place the console on top of the outboard racks to save space or put the outboard racks off to the side at a 90-degree angle.
Next up is figuring out what’s needed with respect to inputs, outputs and processing. Digital consoles usually have processing available for every input and output, but analog requires a bit of forethought, including strategies and “workarounds” when processing channels and devices are limited.
Digital consoles, particularly more recent models, carry an impressive amount of processing onboard.
Patching It Up
Once the game plan is ready, I kick things off by patching the outputs for the main loudspeakers as well as any delay and fills. This involves running short patch cables between the console outputs and the outboard EQs in the rack, then patching the amplifiers (or powered loudspeakers) from the outboard EQ units.
There may be the need to patch in a delay unit as well if running remote loudspeakers. Once the loudspeakers are patched, it’s time to address any output feeds that are needed, such as an audio send to video world or a feed to lobby loudspeakers.
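As a rough illustration (the function and device names here are mine, not from any console manual), the output patch order just described can be sketched in a few lines of Python:

```python
# Hypothetical sketch of the FOH output patch order described above.
# Names are illustrative only; no real gear or API is referenced.
def output_chain(remote=False):
    """Return the patch order from console output to loudspeaker."""
    chain = ["console output", "outboard EQ", "amplifier", "loudspeaker"]
    if remote:
        # Remote/delay loudspeakers get a delay unit patched after the EQ
        chain.insert(2, "delay unit")
    return chain

print(" -> ".join(output_chain(remote=True)))
```

With powered loudspeakers, the amplifier stage simply collapses into the speaker itself; the EQ-before-amp ordering stays the same.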
Next up is patching in effects and channel processing like compressors and gates. Most analog systems that I run across have between two and eight channels of compression available, so it’s a good idea to plan ahead with a limited number.
With analog consoles, outboard processing is usually “inserted” into a channel via an insert jack. Some consoles have two send and return connectors that are usually 1/4-inch TRS (balanced) for inserting external gear, but more common is a single 1/4-inch TRS insert jack that provides an unbalanced send and return on a single plug.
A special “insert cable” that consists of a 1/4-inch TRS plug on one end that breaks out into two 1/4-inch TS plugs at the processing end is used to interface the outboard gear.
The cable is wired so the Tip of the TRS is the send to one of the breakout legs, the Ring is the return on the other breakout leg, and the Sleeve is wired to the sleeves on both breakout legs. I always carry some insert cables to shows as they seem to be the most forgotten item on the pack list.
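For reference, here is a minimal sketch (in Python, with hypothetical names) of the insert cable wiring just described, handy as a sanity check when soldering or testing your own:

```python
# Wiring map for a standard TRS insert cable, per the description above:
# the console's single 1/4-inch TRS insert jack carries an unbalanced
# send and return, broken out to two TS plugs at the processor end.
# Plug labels "A" and "B" are illustrative, not an industry standard.
INSERT_CABLE = {
    "tip":    ("TS plug A", "tip"),        # send: console -> processor input
    "ring":   ("TS plug B", "tip"),        # return: processor output -> console
    "sleeve": ("TS plugs A+B", "sleeve"),  # common ground shared by both legs
}

def describe(conductor):
    """Return a readable description of where a TRS conductor lands."""
    leg, contact = INSERT_CABLE[conductor]
    return f"TRS {conductor} -> {leg} ({contact})"

for c in ("tip", "ring", "sleeve"):
    print(describe(c))
```

Note that some consoles reverse the tip/ring convention (ring as send), so it pays to check the manual before blaming a dead insert on the cable.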
Now it’s time to plug in the inputs and label the console. I also take the time to label the aux sends as well as the outboard gear so I know where they’re patched in.
A Bit Easier, But…
Digital consoles are a bit simpler to set up as most offer the necessary processing onboard. Before doing anything with a digital desk, I like to start from scratch, wiping it back to the factory settings. Some models have a default scene that can be recalled, some require a few clicks in the menu, and others may require a boot-up while holding down a few buttons to reset back to the default.
An insert cable with 1/4-inch TRS plug and two 1/4-inch TS plugs.
My reasons for starting with a clean slate are simple. I don’t know what the last user did and don’t want to get “bit” during my show trying to make an adjustment only to find out that the last user changed a setting that was not readily apparent, like switching all of the auxes to post fader.
Once the console is reset, I still start at the outputs and make sure there’s an EQ in-line with the loudspeakers. Many models have an EQ assigned to each output, but a few require assigning any needed processing to a specific output from a limited number of items onboard.
After outputs comes inputs, and I assign each channel the processing it requires. Again, some boards have a limited number of processing and effects, and these must be chosen and patched before they can be used. If the console has scribble strips, label each input and output as you go. I also like to label the console with tape, especially for items that don’t have scribble strips like user-assigned buttons. At this point it’s also a good idea to save the settings on the console in a scene.
If I’m using a remote app for mixing on a tablet, now is the time to make sure it’s working and then walk around the venue to see if it stays connected around the room.
On most corporate gigs and small festival band gigs, the supplied analog console usually has between 24 and 32 inputs and four or eight subgroups. Larger shows may get a larger frame board with 40-plus inputs and VCAs. Since VCAs aren’t available on many shows, I use the subgroups to make my job a little easier by grouping like inputs together.
With four groups available on corporates, I normally place the podium mic into its own group, any presenter wireless into a group, table microphones for panel discussion into a group and Q+A audience mics in another. With four subgroups available on band gigs I split up the groups into drums, guitars, keyboards with horns (if any) and vocals.
It can help to add a bit of compression to podium and lavalier mics used by presenters. With music, I’m likely to add some compression to kick drum, bass guitar, lead guitar and vocals.
I work with a lot of headline singers who have some background vocalists behind them. One trick that comes in handy when there’s not a lot of outboard compression channels is to run all of the background vocals into a subgroup and compress the group as a whole, but insert a compressor into the lead singer’s channel and then just run them straight to the L+R masters.
That way there’s tailoring available for the lead vocalist and the ability to add compression to the background singers, and without taking up an entire subgroup for one vocalist.
On corporate gigs it’s not uncommon for me to get stuck with a very small analog console with limited channel EQ but there’s a high-profile presenter who needs some drastic EQ help to sound good in the room. If the console has a channel insert jack, I insert an outboard graphic EQ or parametric EQ into the channel.
I carry a stereo 15-band graphic just in case there’s not an extra one provided onsite. This EQ has often come in handy for the mains when the A/V company thinks that a small console with built-in five- to seven-band graphic is all you need and they don’t provide an outboard EQ.
Other outboard gear I regularly carry to freelance gigs includes a stereo leveler, a stereo compressor/gate and a multi-channel feedback suppression processor. While some audio folks laugh when they see my feedback unit, they soon realize that it does a great job taming wireless lavalier mics because each of the 24 filters can have a bandwidth as narrow as 1/80 of an octave. It works great when inserted on the lavalier subgroup.
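To put that 1/80-octave figure in perspective, a quick back-of-the-envelope calculation (a sketch, not tied to any particular processor) converts a fractional-octave bandwidth to Hz:

```python
# Convert a fractional-octave filter bandwidth to Hz.
# For a filter spanning b octaves centered geometrically on f:
#   f_hi = f * 2**(b/2),  f_lo = f * 2**(-b/2),  BW = f_hi - f_lo
def notch_bandwidth_hz(center_hz, octaves):
    """Bandwidth in Hz of a filter `octaves` wide, centered on `center_hz`."""
    return center_hz * (2 ** (octaves / 2) - 2 ** (-octaves / 2))

# A 1/80-octave notch at 1 kHz is under 9 Hz wide -- narrow enough to
# pull out a feedback frequency without audibly coloring the mic.
print(round(notch_bandwidth_hz(1000, 1 / 80), 1))
```

Compare that to the roughly 230 Hz span of a full 1/3-octave graphic EQ band at the same frequency, and it’s clear why such a narrow notch is far less audible.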
Another limitation on some smaller analog consoles is a lack of buses. More than a few times I’ve been mixing monitors from a smaller front of house board and have run out of aux sends. If the console has a matrix section, it can be used to set up side fill mixes, and in a pinch, some individual performer mixes.
Even the smallest digital consoles usually include comps and gates on every channel but many of them lack VCAs (also called DCAs) or subgroups. An easy way to get subgroup control on a digital board that does not offer groups is to use a post fade aux bus. Simply assign every input channel that you want in the group to the same post fade aux send. Make sure to un-assign those channels from routing to the main L+R outputs. Now assign the output of that aux send to the L+R mains and it acts as a subgroup.
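The gain math behind this trick can be sketched in a few lines. This is a hypothetical illustration (the function and values are mine, not from any console's firmware): because the aux send is post-fade, each channel's signal passes through its fader, its send level, and then the shared aux master, so the aux master scales every assigned channel together while their relative balance is preserved, exactly like a group fader would.

```python
# Minimal gain-staging sketch (hypothetical names and values) showing why a
# post-fade aux behaves like a subgroup: every assigned channel's post-fader
# signal is scaled by the same aux master, so one control moves them all.

def post_fade_aux_mix(channels, aux_master):
    """Sum each channel's signal * fader * send, then scale by the aux master."""
    return sum(sig * fader * send for sig, fader, send in channels) * aux_master

# Three backing-vocal channels: (signal level, channel fader, send at unity)
bvs = [(1.0, 0.8, 1.0), (1.0, 0.7, 1.0), (1.0, 0.9, 1.0)]

full = post_fade_aux_mix(bvs, aux_master=1.0)  # "group fader" at unity
half = post_fade_aux_mix(bvs, aux_master=0.5)  # pull the aux master down 6 dB

# The whole group drops together; the channels keep their relative balance.
print(round(full, 2), round(half, 2))  # 2.4 1.2
```

The same arithmetic explains why the channels must be un-assigned from the L+R mains: otherwise each signal reaches the mains twice, once directly and once through the aux.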
While the aux master may be on a different layer, many consoles have a user-configurable layer that can be tailored to specific needs. For example, I place my “money channels” like lead vocalist, lead guitar, podium, presenter wireless and others on the user layer, along with any effects masters and subgroups or VCA/DCAs. This way, mixing can be done mostly on a single layer.
One of the few drawbacks with digital desks, at least for me, comes when using a digital audio network instead of an analog snake. The problem is that an intercom channel or lighting DMX can’t be run on the same cable as can be done if the analog snake has extra channels. Sure, the solution is running a few single cables from FOH to backstage, but finding hundreds of feet of extra XLR cable onsite can be an exercise in futility and carrying around hundreds of feet of “extra” cable as a freelancer is not an option.
Catapult, a recently released 4-channel Cat snake from Radial Engineering.
My solution is to deploy a snake system that provides four analog audio runs down a single shielded Cat cable. A small reel with 100 meters of cable makes for a compact package I can keep in my truck. Radial Engineering, Whirlwind and others offer nice 4-channel snake boxes for Cat cable. In addition to comms and DMX, they work great for analog mics, line level returns and even AES signals.
Speaking of networks, Dante has become the de facto standard for audio networking, so I carry a few small gigabit switches to shows to help route signals. I also just added a small Wi-Fi router to my bag. It comes in handy for interfacing an iPad for remote console control, and might also prove useful now that the new version (3.10) of Dante Controller can connect to networks over Wi-Fi.
Finally, for almost every gig, I bring a small utility mixer just in case the console has a problem or a submixer is needed (or would come in handy). In the past this was an 8-channel analog unit, but I recently switched to a very compact digital mixer. With 16 inputs that have 4-band EQ and onboard multitrack recording, I sometimes just replace the console provided by the A/V company with my own mixer and a small rack.
The key as a freelance audio technician is being able to adapt to the equipment that is provided, analog or digital, and being prepared beyond that to make the show happen as well as possible. Keeping the client happy ensures I get another call.
Senior contributing editor Craig Leerman is the owner of Tech Works, a production company based in Las Vegas.
Precision Sound Studios Steps Up To Solid State Logic
Malvicino Design Group outfits Manhattan recording facility with 48-channel SSL Duality δelta Pro Station SuperAnalogue console.
Precision Sound Studios owner and engineer/producer Alex Sterling has created a technical and creative oasis on the Upper West Side of Manhattan, close to the city's American Museum of Natural History, Central Park, and many cosmopolitan shops and restaurants.
As part of a recent re-fit and technical upgrade package, Sterling has installed a 48-channel Solid State Logic Duality δelta Pro Station SuperAnalogue console in the Precision Sound Control Room A.
The Studio upgrade, specified by Sterling and implemented by the Malvicino Design Group, also included a comprehensive new wiring scheme, a large video screen for Film/TV Post work, and overall layout adjustments and refurbishment.
"I have always wanted to create a working space for music production that has the comfort of a person's home or living room but with the technical and professional capabilities of a larger commercial facility," explains Sterling.
One of the most striking features of the studio is the live room space, which also happens to house a library of around 3,000 books. "Believe it or not," says Sterling, "they have an acoustic value as well as an aesthetic value."
The live room can host around 15 musicians, which means that Sterling is as in demand for band recording, film, and television work as he is for electronic, pop, and hip-hop mixing and production. For Sterling, the new Duality δelta console is a creative tool that meets his own high standards, yet also puts his studio onto a more high-end commercial footing with outside producers.
"During my console search I carefully researched and demoed several other modern consoles, many of which did have some substantial sonic attributes; however, the Duality has the most developed functionality for a modern workflow, and its sonics are nothing less than spectacular.
“The integration with the DAW was very important to me, as was the high channel count - and having a full complement of processing available on every channel… I could be spending twice as much to get full filters, dynamics, and EQ on every channel with another console, and I still wouldn’t be getting any of the DAW control functionality that Duality offers.”
Precision Sound has now been up and running with the new console for several months. The very first session on the new console was a TV scoring session for composer Michael Bacon. “That was a good first test,” says Sterling. “Everything was flawless, everything sounded great…
"I've used the console's channel preamps for most of the tracking that I've done through the desk… I was not expecting to like the preamps as much as I do. For tracking, the SSL preamp is as transparent as any of the esteemed, clean boutique preamps, and it's extremely low noise, which some other preamps just can't claim."
Sterling is also complimentary about the SuperAnalogue bus architecture of the Duality.
“One of the things I’ve been experimenting with is using the console’s mix bus to give me volume and level for a final mix print, but without having to use peak limiting. By driving the console mix bus with a lot of level, I am able to get a much more aggressive full and forward sound - without needing to lose or cut off transients with a dynamics processor for volume.
"I've been shocked how rich and full I can make things sound by essentially ignoring the VU meters and letting them pin completely into the red… just completely brutalizing the capture chain. The desk can really take it. You can clip the channels a bit, but the mix bus itself is pretty much unclippable. At least, I haven't managed to do any damage with it yet…"
For Sterling, the last few months have proven that the Duality delivers superb sonics, an integrated DAW workflow, and a creative approach to production.
“To my ear, signal processing is generally superior in the analogue domain,” he says, “But some of the creative things that people are doing now really only exist within the DAW environment. To not become disconnected from the DAW while working on the console was very important to me because I’m working on modern productions that have modern production requirements… This console really has set the professional standard for this decade.”