Friday, September 23, 2016
Recording The Masters Launches Kerwax Replica
New analog tube processor is a 2-channel excerpt of the 24-channel custom tube mixer installed at Kerwax recording studio
The Kerwax Replica analog tube processor was designed at Kerwax Recording Studio, a residential facility located in Brittany, France. It is manufactured by the Recording The Masters factory for worldwide distribution.
Its unique design makes it easy to change and combine tubes, vintage or modern, to sculpt and color the sound artistically. The controls interact with each other to create a wide variety of tube harmonic distortion characteristics.
The Kerwax Replica is a 2-channel excerpt of the 24-channel custom tube mixer installed at Kerwax recording studio, a unique mixing desk conceived in the pure tradition of historical studios to meet the requirements of in-house sound engineers and producers. Its 2 independent channels are ideal for stem or individual instrument processing.
The Kerwax Replica creates a wide variety of tube harmonic distortion characteristics, with an easy tweaking combination of gain drive and output volume stages, bias adjustment and EQ knobs. It is the ideal companion as an insert or a front-end unit for processing and warming up your stems or mix.
Now you can build your distinctive sound and stand out in the digital crowd. You can easily warm up and add distortion to any input audio signal: use the trim and bias settings to adjust the depth of the sound and produce harmonics, and engage the drive function with the gain setting to add progressive saturation.
Thanks to its gentle Baxandall curves, the integrated treble and bass EQs are very musical and smooth. In case you need to remove some harmful low frequencies, a high-pass filter allows you to choose between 80 Hz and 120 Hz frequencies.
The Kerwax Replica uses two vacuum tubes per channel: one for preamplification, the other for saturation. We selected the 12AX7 because it offers the best audio results across a wide variety of sources. However, if you want to experiment and obtain different results, the tubes are replaceable: you can use any tube from the 12AX7 family.
Available for pre-order from September 15, 2016, the Kerwax Replica will be showcased and demonstrated exclusively by its designer, Christophe Chavanon, at the Recording The Masters booth during the upcoming AES show in Los Angeles (September 29 - October 1). Recording The Masters will be represented at booth #526 by experts and US representatives (RMGI USA), on hand to discuss analog recording, tape and machines.
With the plethora of new (and affordable) digital mixers on the market, it’s easy to be overwhelmed with features, options and pricing.
So how do you determine the right board for your situation? I use the same criteria as purchasing an analog console with just a few additional twists.
When purchasing an analog console, there are five primary aspects that I suggest looking at. (Well, six if you include price.)
1) Quality. I’m a big fan of “touching and feeling” a console. Being a tactile person, the overall construction quality can be useful in terms of determining reliability and lifespan. I’m also a big fan of listening before purchase. Some of the apparently well-built consoles have sounded just as rugged as they look! Touch and listen carefully.
2) Reputation. With the internet, it’s pretty easy to find out about a company’s reputation, which is a good way to start further informing your decision. I also rely heavily on the peer network I’ve developed over the years—the opinion of my colleagues matters greatly to me. If you’re newer to the craft, tap into some seasoned veterans via product reviews and by posting questions for the community on the PSW Church Sound Forum.
3) I/O. How many inputs and outputs do you need today? And how many do you anticipate needing in the future? My rule is to add at least 25 percent more to what you are currently using. For example, say you’re currently using 15 inputs and 4 aux/monitors. Look at the next step up from a 16-channel board, which is most likely a 24-channel board with at least 6 aux/monitors. A word of caution: not all manufacturers count channels the same way. My own methodology is to count the number of channels with preamps and consider that the total.
4) Busing. This ties closely with I/O when talking about the number of aux sends available. There should also be focus on what other outputs a console offers (i.e., matrix, control room), and what type of routing is available. Are there sub groups? Can I use an insert on a sub group? And so on…
5) EQ. How robust is the EQ section? Are there sweepable EQs? Is there a Q knob? Is there high-pass filtering? Also, listen to the EQ when making changes: does it sound responsive? Does it seem to overly color the sound?
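As a quick illustration of the 25 percent headroom rule from item 3 above, here is a minimal sketch. The helper function and its name are mine, invented for illustration, not from the article or any console specification:

```python
import math

def recommended_channels(current_inputs, headroom=0.25):
    """Minimum channel count after adding 25 percent headroom.

    Hypothetical helper for illustration only; real consoles come in
    fixed sizes (16, 24, 32, ...), so round up to the next size a
    manufacturer actually offers.
    """
    return math.ceil(current_inputs * (1 + headroom))

# 15 inputs today -> at least 19 channels, so the next standard
# size up from a 16-channel board (typically 24) is the target.
print(recommended_channels(15))  # prints 19
```

Keep in mind that, as noted above, not all manufacturers count channels the same way; counting only the channels with preamps is the safer baseline before applying the headroom.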
I/O configuration. Digital consoles usually offer a digital snake and more I/O than analog models. The challenge is to figure out, physically, where you need the I/O. Does the console itself have limited I/O? Will you need to add a local input rack? How does the digital snake connect to the console? What configuration do the stage boxes come in, and where do you need to place them?
iPad/remote mixing integration. The majority of digital consoles now offer iPad remote mixing integration. This is an incredible tool! However, look carefully at how it integrates, taking into consideration things like the need to add a host computer or a router. This will increase cost and complexity.
I also wouldn’t get overly hung up on the actual app (unless you’re going to mix exclusively from an iPad). There are many scenarios where the iPad option is really helpful, such as walking around the room and checking the mix, as well as setting monitors while standing on stage.
But I don’t see many scenarios with a live band where I’d want to do the entire mix on an iPad because there’s just not enough surface area to feel comfortable (at least for me). That said, I’ve been mixing exclusively on consoles for decades, and with that experience comes the comfort factor of doing what you know.
Personal monitoring options. With the popularity of personal monitor mixing, it’s a good idea to find out what options digital console manufacturers offer, and also to look at integrating third-party solutions such as Aviom.
Some console makers have chosen to use their own proprietary digital bus, so be aware that your existing personal monitoring system might not interface with your new digital console in a very elegant way. And if you don’t have a personal monitoring system, it’s still a good idea to understand future options.
Storage/recall/presets. Evaluate how the presets/snapshots work. Are they global, or can you recall individual channel settings separately from a global preset? Manufacturers handle this in several different ways.
A friend who is new to the world of mixing purchased a board that gave him suggested settings for different types of inputs. He starts by simply recalling (per channel) the suggested setting for a particular instrument or vocalist. He loves this feature because it gives him a great starting point.
Recording. What type of recording software is available? Can you record multi-track? Is there a USB option that allows recording stereo right to a USB drive?
Onboard effects. I suggest not only evaluating the available effects, but also the EQ and dynamics (compression/gate) sections. Find out exactly how many things can be used at the same time; some boards have limited processing.
Best wishes on selecting your new digital mixing console—mixing has never been more fun and never have so many tools been available in a console that most can afford.
Gary Zandstra has worked in church production and as an AV systems integrator for more than 35 years. He’s also contributed numerous articles to ProSoundWeb over the past decade.
RE/P Files: An Interview With The 1971 NARAS Engineer Of The Year Roy Halee
From the November/December 1971 issue of the late, great Recording Engineer/Producer (RE/P) magazine, this feature is an in-depth look back at the career of a legendary engineer.
When a child is learning to walk, he is able to do no more than put one foot in front of the other and shift his weight.
He learns quickly (after a couple of falls), that he must master these basics before he can advance to running, skipping, or dancing.
His thoughts may not be quite so complex as we seem to imply, but the fact remains that for the time, he can only do one thing, and that without much skill: he can walk.
After several years, he becomes adept at any number of methods of getting himself from place to place.
He may walk, skip, run, dance, or whatever. It all depends on the situation he finds himself in. Further, if he’s been aware of his learning process, he knows he’s gotten beyond his walking stage by experimenting, by playing around with his balance and coordination.
Great dancers are great experimenters. They’ve discovered that they have to go beyond just walking in order to fully express themselves, and to respond artfully to the music to which they dance.
We think the analogy is not too strained when we compare the artistry of a great dancer to the artistry of a great recording engineer.
Such an engineer is beyond the elementary repetition of, “It worked then, and it’ll work now. Why take chances?”, just as the dancer is beyond carefully putting one foot in front of the other and merely walking.
The techniques of the engineer and dancer are always growing, changing, expanding, in order to better express the music and feeling they deal with daily.
Roy Halee dances with his fingers. For his artistry and technique as an engineer, he was awarded a Grammy and the title Engineer of the Year for 1971 by the National Association of Recording Arts and Sciences.
He is the man-behind-the-scenes, engineer, friend, and cohort of Simon and Garfunkel. The classic album “Bridge Over Troubled Water” is the product of his engineering skill combined with the musical genius of Paul Simon and Art Garfunkel. A quick listen to the album will show that its greatness is not music alone.
Columbia has been giving out gold records for only two years, but Halee already has 16 of them.
His association with Simon and Garfunkel began just as he graduated from Columbia’s editing room to the studio. He was starting as a recording engineer when Simon and Garfunkel arrived for their first audition.
He engineered, they played, and that first audition became their first album, “Wednesday Morning, 3 A.M.” He’s been with them ever since.
Just as the dancer must obey the law of gravity, there are certain limits that Halee must work within. But those limits are becoming frayed and dented by his insistent forays against them.
Halee says it succinctly: “I don’t like to make hard and fast rules. When experimentation goes out the window, new sounds go out the window.”
It’s hard to pin the man down on exact and repeatable techniques; he’s always changing. We managed, however, to get some insight on him and the contribution he’s made to engineering artistry.
It is well known that Halee gets tasteful though unusual sounds from his drums.
He normally mikes them with a U-47 overall, snare top and bottom with a salt shaker or RE-20, floor tom often top and bottom, a mike over the sock cymbal (high hat) high enough to get some splash from the snare, and the bass drum front and back.
The second mike on the snare allows him to get a bit of crack without over eq’ing, a technique that is especially effective on the louder rock dates. Double miking of the other drums allows similar effects to be employed.
Phasing seems generally not to be a problem, but readers are cautioned when employing such a technique to be certain that any double miking done is constructive, both electrically and acoustically. Halee is unhappy with copying for the sake of copying. As he puts it:
“Some people think, ‘That record was successful, so I’ll always mike drums the way I did [or Halee did] on that date.’ But that was that day, that temperature, that studio, and it was Hal Blaine’s set of drums. Tomorrow you’ve got another drummer coming in with his set of drums and the humidity is 80%.”
The point is well-taken. Creativity is most productive when one first recognizes exactly what he is dealing with, and then builds from there.
The creative building of tracks takes strange forms on occasion. There was the day Halee put a drummer next to an elevator shaft,
“We wanted to get an explosion effect, so I put the guy out in the hall next to an elevator at 49 East 52nd street in New York. The hallway itself was extremely live, so I put mikes in the shaft and in the hall, and limited the hell out of them. And we got an explosion sound. It’s in ‘The Boxer’.”
In “Bridge Over Troubled Water” there is a snapping sound, like a whip in the distance. It was created by physically placing the drummer inside an echo chamber.
The willingness to seek unusual methods and sounds is certainly worthwhile, but to be effective, it must be coupled with a more gut feeling for the effect of music on a listener.
When we asked Halee about his use of stereo spread on drum and piano mikes, his response was typically noncommittal, but at the same time clear.
“The degree of stereo spread I use depends on the piece. If it’s disconcerting to the song and music, I won’t do it. A lot of times I put drums on one track. It depends. If you’re doing a thing like “Cecilia”, where you want it to dance around, it adds to the arrangement.”
“And generally, when I mike the drums and split them into stereo, I won’t split them extreme left and right. I’ll split them left-center, right-center, and center.”
“If a tune calls for a lift, as when it goes into the waltz section, or some such thing where the tune elevates itself or picks up, I will sometimes pan the drums extreme left and right, to give it more motion.
You won’t even be aware that it happened unless you have headphones on. But it does create the effect of lifting that particular section of the tune. Then I’ll bring it back again.”
Simon and Garfunkel’s first album was guitar and voice. To this day, though other instruments overwhelm the guitar in much of their music, it is still given as much attention as ever, particularly in terms of how it is miked.
The exact configuration is dependent on what kind of guitar it is, how it is being played, and whether it is being finger picked, strummed, or flat picked. When a flat pick is used, Halee likes to stay away from a condenser mike, and uses a dynamic mike instead. Otherwise, one, two, or three condensers are often employed.
Normally, two is the number, one at an angle over the hole (to prevent the hand coming between the hole and the mike), and one down over the guitarist’s right side, behind the hand.
When Halee does other stringed instruments, he invariably uses condensers, notably a U-87, U-67, or M-49. He’s made the comment that he likes a “wall” of strings. How does he accomplish this?
“I try to use more than one track, like for violins, so I can spread, and the same for low strings. Instead of putting all the violins on one track, and violas and celli on another, I try to use a lot of tracks.”
“For eight violins I’d use two mikes. Again it depends on what you’re doing. If it’s a hard rock date, where you can’t get far away because of leakage problems, I’ll mike every two players. But on overdubs, I use an average of two mikes; with eight violins, one on the front four and one on the back four. If there are two violas and two celli, I’ll put a mike on the violas and a mike on the celli.”
“That’s why I got them to put this mixer in this console [A small mixer, independent of the standard console inputs, is mounted on the right hand side of the console].
If I had strings on a hard rock date, I might mike every two violinists, as I said, to get a lot of presence on the strings, bring them up and mix them on this mixer, and take them all in on one channel. All eight mikes.”
“Then again, there’ve been occasions where I’ve used one mike on twelve violins, and I’d put it far away. But in a room where the air conditioning and rumble weren’t ridiculous.”
“I mult the strings, and I usually use a little tape reverb. Just a little bit. Then afterwards in the mix. It depends, again, on what the mix is, what the tune is, what the arrangement is, whether they’re going to have echo or be dry, whether they’re going to have delayed echo. But I always put a little tape reverb on strings.”
Piano is approached according to the nature of the piece. In “Bridge Over Troubled Water” (the song), the piano was miked as would be a classical piano solo.
Three mikes were placed high and back, about six feet horizontally and six feet vertically. Condenser microphones were employed.
On a rock date however, the dynamics and presence of the piano are altered considerably, and dynamic microphones, with an occasional condenser, are placed in tight.
Listening to tracks Halee has done immediately impresses the listener that considerable innovation and work have gone into their making. Technique follows technique, building a complex stereo matte of interweaving and overlaid effects.
“Through the years I’ve put guitars in bathrooms, drums in bathrooms. Sometimes I phase the echo. You don’t know it’s being phased, but it is. I did that on “At The Zoo.”
I’ve put a couple of choruses of voices inside a bathroom or an echo chamber. Sometimes I put Dolbies in when I record and then take them out when I mix. I use a lot of tape reverb. Sometimes we use it to create our own rhythms. When I say “our”, I mean Simon and Garfunkel. We create our own rhythms in the mix.”
“That happened, for instance, on “Bridge Over Troubled Water.” Hal Blaine is playing the bass drum, but he’s not playing the part you hear. There’s tape reverb on that bass drum.
Oddly enough, when we did it, we found that it would only work with a Scully four track. If we used another machine, because of the head distance, it was out of rhythm.”
“When you hear ‘Ba da Ba da da da da’, that’s not what he’s playing. He’s playing something like ‘Ba . . . da da’. Something like that. I’d have to listen to the original tape to get the exact figure he plays, but what you hear is different.”
“I do an awful lot of that. I like to create a lot of crazy rhythms. A lot of times it’s out of rhythm, and you strike out. But I like to fool around and find out when it’s in rhythm, and then if it’s good, I’ll use it.
I’ll flip it in for a couple of bars, and take it out. Sometimes I’ll program it to another track, so it’ll answer itself.”
Halee has been asked to remix “Bridge Over Troubled Water” for quad, but so far he has been reluctant to do so. This might seem out of character for a man so willing to experiment and try new techniques, but Halee feels the quality of quad is not up to par yet.
Further, he says he’d like to get into miking and recording in quad, rather than just remixing.
“I don’t care for drums in back of me, or swishing around. There are phasing problems.”
“It seems to me the direction everybody is going now is to have completely isolated tracks, so you can place things in definite positions. I’d like to get into room sound—dimension rather than direction. I feel a lot more experimentation is in order. Until I’ve done it, I’m not going to get into remixing old Simon and Garfunkel tunes. I want it to be good.”
What about the controversy over more tracks, 24, 32, or more?
“The more the merrier,” says Halee.
The technology doesn’t seem to frighten him. He plays with it until he knows it well enough to master it. Like a dancer, he finds new movements through experimentation, and learns them well before performing them for the public.
The spirit, coordination, and balance are all there. The engineer is an artist. Roy Halee dances with his fingers.
Editor’s Note: This is a series of articles from Recording Engineer/Producer (RE/P) magazine, which began publishing in 1970 under the direction of Publisher/Editor Martin Gallay. After a great run, RE/P ceased publishing in the early 1990s, yet its content is still much revered in the professional audio community. RE/P also published the first issues of Live Sound International magazine as a quarterly supplement, beginning in the late 1980s, and LSI has grown to a monthly publication that continues to thrive to this day.
Our sincere thanks to Mark Gander of JBL Professional for his considerable support on this archive project.
In The Studio: Control Room Techniques To Foster Great Vocals
I remember a session when an artist was on mic out in the studio, ready to start vocal overdubs, and the producer asked: “How do we look in here from out there?”
Interesting, because he knew the appearance of the control room to the artist might affect the vocal performance. The control room (from the studio) does look like an aquarium with the huge window and the silent action of the animals encased within it.
Reactions to performances, reflected in facial expressions and body language, are everything to singers and musicians isolated out in the studio. The concern is that working in the studio should not feel like being in a Petri dish under the microscopic scrutiny of the control room.
A great vocal sound starts with a good singer who has the artistic goal of performing the best vocal possible. Control room personnel—producer, engineer, assistants, gofers, etc.—all have a professional responsibility to work in pursuit of the artist’s goals.
For the first hour of a new vocal session, everyone in the control room is on a kind of “audition” until the artist feels comfortable and performs well. It is the producer’s job to create the studio setting—the whole “vibe” to help get a good vocal performance from the artist and to make everyone else produce their best work.
During a vocal session, the producer is the arbiter of the feeling and quality of the vocal performance. The producer is the artist’s confidant, coach, good friend, creative partner, mentor, and, most importantly, the de facto proto-audience: the first public ears on the artist and their music.
Because a well-prepared singer might give the best and freshest performance immediately, at the beginning of the day and even during the microphone audition process, the engineer should be prepared and ready to record and capture a great vocal sound. The producer may want those mic audition recordings later and will be thankful for their usable fidelity.
The engineer’s vocal signal chain should be powered up, working, and adjusted somewhere in the “ballpark”; the song should be booted up in the DAW with a new vocal track (or tracks) ready to record; and a suitable monitor mix should be made, with a usable cue mix done and checked in the singer’s headphones.
How Pro Can You Go?
Getting to know your favorite signal chain intimately is very useful for getting good vocal sounds quickly—especially in the case of the aforementioned first takes/microphone audition.
You need to know what different combinations of mic pre-amps, EQs and compressors produce in terms of vocal sound possibilities. Experiment often if time and your clientele’s interest permits.
I find the overarching difference between true pro gear and lower-end products is that professional gear is much more forgiving in its operational requirements than cheaper gear.
And that is not to say you cannot record usable sound using a $300 mic pre-amp versus a multi-thousand-dollar boutique piece. But you’ll have to work a lot harder to get a good sound with the low-dollar gear, and you can make just as crappy a recording with either it or the high-end boxes!
For example, pro gear usually has much more headroom, a lower noise floor and higher dynamic range. You’re less likely to overload the front end of pro mic pre-amps with a signal from a hot mic and a loud singer.
High-end pro gear is also smoother with less harmonic distortion at any operating level so the sound is automatically purer. Finally, pro gear will more often interface well, i.e., drive any subsequent processor you’d like in your vocal recording chain—from pro to junk.
I try to start with the best gear possible and I have my own collection to use when I’m “camping out” in a studio that does not offer my fave pieces. As an independent engineer, it’s a smart investment to own a high quality professional vocal recording chain you can use anywhere.
For reasons not necessary to cover here, there are two schools of thought about the design philosophies and sound of mic pre-amps:
—“Super pristine”: transparent, to convey the microphone’s signal accurately
—“Enhancement”: sonic embellishment through harmonic coloration and/or the inherent characteristics of non-linear, small-signal amplifiers
Both types have their place in today’s recording, but my preference for vocals (unless directed otherwise by the artist and producer) is for a transparent and clean signal chain. Some of my choices for clean, discrete transistorized mic pre-amps include the George Massenburg Labs GML 8302, Audio Engineering Associates RPQ, Avalon Design M5, and Millennia Media HV-3 and STT-1.
For tube-based pre-amps that can be operated in clean modes, I’ve always liked the Manley Mic/EQ-500 Combo, Groove Tube ViPre, DW Fearn VT-1, and old Telefunken V72 units. Tube mic pre-amps, by virtue of the tubes, have a built-in “personality.” They can be very clean but, when overdriven, get into coloration zones unique to each of them.
“Colorful” transistor microphone pre-amps I like are the old British Neve 1066, 1073 and 1084 modules for their thick-sounding Class-A design and line input, mic input and output transformers; for more punch and purity, I like the API (Automated Processes Inc.) model 512C amplifier for its Class-AB amp that gives a harder and “in your face” presence; the Helios Type 69 mic/EQ unit is ‘60s-era technology and a very “vibey” sounding unit also from England; and the Chandler Germanium, which uses esoteric transistors to produce its unique sound.
Mic and Signal Chain System
When deciding on a vocal recording chain, consider both the mic and mic preamp as a system. If you’re looking for a super warm and “tubey” sound, try using a tube condenser and a tube mic pre-amp.
Such was my choice for recording Rod Stewart on a couple of albums. He sounded best on a completely stock and original Neumann U67 tube condenser (no pad and no roll-off) into a tube-based Manley EQ500 Mic/EQ Combo that I followed with a TubeTech CL1-B compressor—a slightly colorful and all tube signal chain. This chain did not accentuate Rod’s raspy vocal quality we all love, yet it kept enough mid-range cut to compete with the track.
A much cleaner and more pristine path might be a Brauner Phanthera FET-based condenser microphone into a GML 8302 mic pre-amp followed by a dbx 160SL compressor that uses a VCA (voltage controlled amplifier) for nearly transparent gain control.
This signal chain would produce a more neutral or uncolored sound that is completely faithful to the source. I’ve found recording choirs with a super-clean chain like this reproduces the rich harmonic content in the best way.
I liked the Phanthera into a Neve 1073 module followed by a UA 1176LN (Rev D) limiter for recording Pat Benatar’s vocals. The Phanthera will handle all the loud levels Patty can produce right on top of it without clipping.
The Neve 1073, like all old Neve modules, doesn’t sound good when clipping, and it’s a little unforgiving when it comes to getting an exact gain setting, so I set it a little low for additional headroom.
After compression, I made up the record level within the very distinctive sounding 1176LN. Between the thickness of the Neve, the gritty edge of the 1176LN, and the pristine sound capture of the Brauner, this is a killer rock vocal sound signal chain.
EQ & Compressors
Generally, adding equalization while recording is meant to make up for what the microphone is not giving you. In some studios there is not a big choice of mics, so you have to add or carve out frequencies to try to mimic the sound you’d get automatically with the right mic. Along with a signal chain, owning a few classic vocal mics is an obvious asset for a recording engineer.
Again, unless requested by the producer or artist, I go very conservative with EQ when recording. For example, if you are adding a lot of low frequencies, there is something wrong with the microphone or the pre-amp, or more likely with the way the singer is addressing the mic.
If you find that adding a lot of high frequencies sounds better, then you’ve got the wrong mic: as if you were using an old RCA 77BX ribbon when you were really looking for the ultra-bright sound of a modern Sony C800G condenser.
The same goes for compression. There is a wealth of sonic possibilities using vocal compression especially with vintage classics like the Fairchild 670 limiter.
I love those sounds but when and how much depends very much on the “bigger picture” - the mix!
If you and/or the producer are unsure, compress only enough (at a low ratio) to get it recorded at a good level without distortion and errant peaks—and then back the compression down from there. For a “vibey” sound, go with a tube compressor like the TubeTech CL1-B or UA Teletronix LA-2 leveling amp.
Cleaner or more transparent compression comes from VCA-based units such as a dbx 165. You could also record the vocals on two tracks: one with compressor and the other without. I like to provide as many options for the mixer as possible.
Broadcast Devices Introduces 8/16 Series Passive Audio Switcher
New unit is designed to fill the requirement of preventing single points of failure in analog and digital audio systems.
Broadcast Devices (BDI) announces the release of a new product for the sound contractor and broadcast markets, the 8/16 Series passive audio switcher.
The 8/16 Series passive audio switcher is designed to fill the requirement of preventing single points of failure in analog and digital audio systems. This product can accept either 8 or 16 sets of (A/B) balanced pairs and route them to a common output.
Two base models are available: the 8-input 8/16-8 and the 16-input 8/16-16.
Because the switching function is passive, it can perform emergency switching for analog and digital audio signals as well as other types of control pairs. The product’s intended uses are failure bypass of routers, consoles, monitor systems, or just about any application where multiple pairs need to be shifted from one input to another.
Audio connections use the standard Tascam 25-pin interface, and the remote control interface is a standard DB25 connection. Accessory DB25-to-XLR and/or 75-ohm BNC interfaces are available.
Four versions are available within the series: the 8/16-8 and 8/16-16 (8 and 16 channels, respectively), plus dual power supply versions, the 8/16-8-D and 8/16-16-D.
United Recording Launches Comprehensive Archiving Division
New head of archiving, Dan Johnson, brings extensive experience as a former recording engineer at Capitol Studios and Ocean Way.
United Recording of Hollywood, California has launched its new archiving division, it was announced by studio manager Robin Goodchild. United was founded in 1957 by recording engineer, studio designer and electronics inventor Bill Putnam and expert archiving is a key element of the studio’s heritage.
“United has a 60-year history of uncompromised audio excellence and innovation,” commented Goodchild. “We have assembled a vintage treasure trove of virtually all modern recording machine formats and the ancillary equipment crucial to accurate archiving, to ensure the new masters will be preserved for the ages.”
United’s new archiving suite is a secure, climate-controlled space that features such attention to detail as a specially built anti-static floor to prevent electrical mishaps. A full-time dedicated maintenance staff means the gear is well cared for and running correctly at all times.
United’s new head of archiving, Dan Johnson, spent the past five years as a dedicated audio preservation engineer working with priceless masters by such artists as Led Zeppelin, Jimi Hendrix, The Doors, Eagles, Prince, Red Hot Chili Peppers, The Ramones, Van Halen, Rod Stewart and Otis Redding. Prior to that, Johnson was a recording engineer at Capitol Studios and Ocean Way (now United).
“I started my engineering career at United almost 20 years ago, and opening an audio archiving facility here is a timely decision,” commented Johnson. “The studio’s high standard of quality and excellence, as well as the commitment to an unparalleled legacy provided me with the foundation that I have built my career on. It’s good to be home.”
Archiving masters lowers insurance/storage costs by having digital back-ups, rescues audio from deteriorating tapes, and keeps assets viable after the tape becomes unplayable. Sending digital copies for mastering/mixing means no damage/mishaps to fragile tapes. Digital files can be electronically delivered, which saves on shipping and packing costs. Assets are always safe and available.
The archiving process begins with each tape’s condition being precisely checked and processed accordingly. Formats are correctly determined, and documentation is checked regarding speed, noise reduction, etc. All tape boxes, notes, and track sheets are scanned at 300 dpi. Tape preparation includes baking when necessary, as well as replacing damaged splices and bad leader tape. Multi-track tapes are transferred in real time and synchronized to Pro Tools. Final assemblies of recorded assets are transferred to archival DVD or Blu-ray discs and .wav files.
Entire line of digital hardware products has been updated to take advantage of proprietary 5th generation digital to analog converter technology.
Crane Song (AES Booth #1123) announces that its entire line of digital hardware products has been updated to take advantage of its proprietary 5th generation digital to analog converter technology.
With its AES debut, the Egret 8 channel D/A converter / summing mixer joins the Avocet monitor controller, the HEDD 192 AD/DA converter, and Solaris stand alone digital to analog converter to complete the line up of Crane Song products equipped with Crane Song’s Quantum DAC.
The Quantum DAC uses a 32-bit converter and asynchronous sample rate conversion for jitter reduction, with upsampling to 211 kHz. The reference clock uses a proprietary reconstruction filter for accurate time domain response, with jitter of less than 1 ps.
“I have done several years of measurable analysis and subjective listening in the development of this technology; the Quantum series DAC is the most accurate that I have ever designed,” explains Crane Song founder and developer Dave Hill. “Typical jitter from 10 Hz to 20 kHz from the internal clock is 0.055 ps, and from 1 Hz to 100 kHz it is less than 1 ps. The result is a very 3D sound that is exceptionally transparent and accurate.”
The Crane Song 5th generation Quantum DAC has been shipping in the Avocet IIA since November 2015, and in April 2016 Crane Song quietly updated the HEDD 192. As of the AES show, the Egret will be shipping with the upgraded DAC. This completes the updating of the DACs in all Crane Song digital hardware.
Egret is a flexible digital audio workstation back end. It contains eight channels of Quantum D/A converters and a stereo line level mixer with color options to help bring analog summed digital mixes to life. Each channel of the stereo mixer has a level control, an aux send (which is post level control), a color control, and a pan control. Each channel also contains an analog / digital source button, and solo - mute buttons. The color function is adjustable from a transparent sound to a complex mix of second and third harmonic content, creating the possibility of having clean modern sounds mixed with vintage sounds.
DAC upgrades are available for previous generation Crane Song digital hardware products.
See Crane Song at AES in Los Angeles at Booth # 1123 at the Los Angeles Convention Center, Sept 29-Oct 1, 2016.
There are a lot of things to focus on during a tracking session, especially when you’re recording a dozen or more inputs at once.
You want to make sure you’re getting a good sound from each microphone. That’s step one. Let’s be honest, you’ll spend the rest of your recording life perfecting step one…
For now, I want to focus on step two – getting good levels, both when you’re tracking and during mixing.
When I first started recording, I was taught that you want to get the level as close to peaking as humanly possible without going into the red.
I would keep cranking up the mic pre on the snare drum mic until it was pixels away from clipping.
What happened? Everything clipped, of course. Apparently musicians play louder during the actual take than they do during sound check.
So I would turn the preamps down a little bit. Everything looked good, then BAM! More clipping.
I would keep turning down the preamps little by little until the clipping stopped. By this time, the musicians are tired of me coming over the talkback and saying, “Whoops! Sorry guys, that clipped. Let’s start again.”
Not a good scenario.
The reason people tend to think that you need to really “peg” the meters is a holdover from the analog days. The harder you hit tape, the better the recording would sound. If you had lower levels, the tape noise would become much too audible.
Today, however, just about everyone is recording a 24-bit digital signal. Digital signals don’t sound better when you turn them up, they simply get louder.
If you record the same track really close to the clip light and then again with plenty of headroom, you won’t notice a difference in the quality of the signal, only the volume.
Analog equipment tends to saturate and add color the harder you drive it. Digital systems do not.
What does this mean?
If you’re recording at 24-bit (and you should be), you’ve got a whopping 144 dB of signal to work with. What does that mean? The noise floor of your system is significantly lower than on an analog system. In fact, the noise floor of a decent digital system is virtually non-existent.
Let’s say you record a snare drum in Pro Tools, and its loudest part is 6 dB below clipping. So, you technically could have recorded it 6 dB louder, but even 6 dB down you still have 138 dB of signal left in your system. You’re still WAY above the noise floor.
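The arithmetic behind those numbers is easy to verify: each bit of word length adds roughly 6.02 dB of theoretical dynamic range, so 24 bits gives about 144 dB. A minimal Python sketch (my own illustration, not part of the original article):

```python
import math

def dynamic_range_db(bits: int) -> float:
    """Theoretical dynamic range of linear PCM audio:
    20 * log10(2 ** bits), roughly 6.02 dB per bit."""
    return 20 * math.log10(2 ** bits)

print(round(dynamic_range_db(16), 1))      # 96.3 dB for 16-bit
print(round(dynamic_range_db(24), 1))      # 144.5 dB for 24-bit

# Peaking 6 dB below clipping still leaves enormous room
# above a 24-bit system's theoretical noise floor:
print(round(dynamic_range_db(24) - 6, 1))  # 138.5 dB
```

Real-world converters fall short of these theoretical figures, but the point stands: a few dB of headroom costs essentially nothing in a 24-bit system.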
My suggestion? Give yourself some room to breathe! Rather than trying to make the signal get as close to the top of the meter as possible, have it max out somewhere between one-half and three-fourths of the way up the meter.
This way the drummer can do an awesomely loud fill without clipping every track in the session, and you won’t end up smacking your forehead every time the clip light goes off during an awesome take.
So, what about mixing, anyway? What was your biggest issue when you mixed your first song? I bet you a nickel it was getting the levels right.
You probably got half-way through the mixing process, and suddenly several of your tracks are clipping, or your master fader is clipping.
So you turn the clipped tracks down a bit. Well now the mix doesn’t sound right, so you try to turn every other track down by the same amount. Still doesn’t sound right.
You go back to work, re-balancing everything. Before you know it, your tracks are clipping again.
You think to yourself, “Did I really turn these up that much again?” You slam your fists into your desk…or kick the dog…or yell at the cat…or maybe you do all of these at the same time.
Welcome to the world of mixing.
You are not alone. This was my experience, and I bet (another nickel) that if you asked any experienced engineer, he’d share a similar story of his own frustrated journey.
My Advice to You
This is the part where I could go into a list of techniques for keeping your track levels down to prevent clipping, but you know what? I’m not gonna do that.
Why? Because I think there is one single reason why people have such trouble with clipping during mixing. I’ll get to that in a second.
The first thing you can do to make your life easier as a mix engineer is to make sure you don’t record everything at a super-hot level. I talk about this in Setting Levels for Recording.
You don’t have to peg the meters to get a great-sounding recording. If you just get a decent level, you’ll be much better off when it comes time to mix.
Sometimes you don’t have control over the levels. Perhaps you’re only mixing the song, not recording it. If so, then you’re at the mercy of the recording engineer who recorded the tracks.
Okay, back to getting rid of all that nasty clipping on your tracks. My suggestion to you?
Turn up your monitors/headphones!
Seriously, this is the biggest reason I get clipping on my mixes: I keep the volume knob too low on my monitors and headphones.
Rather than turn up the monitor volume, I push up the track levels in my mix. That’s a recipe for failure. (It’s also a recipe that will produce more dog-kicking outbursts if you don’t fix it.)
Let’s say you’re mixing a rock tune, and you’re listening to just the drums. Before pushing the kick drum up to zero or (even worse) above zero, reach for the volume knob on your speakers or headphones instead.
You’ll be able to hear everything better, and your mixing levels will be well below clipping. Remember this next time you’re mixing. I bet (yet another nickel) it will help.
Joe Gilder is a Nashville based engineer, musician, and producer who also provides training and advice at the Home Studio Corner.
Allen & Heath Launches Xone:PX5 DJ Performance Mixer
4+1 channel mixer combines Xone analog audio with digital connectivity and next generation performance-focused FX processing.
Allen & Heath has launched the Xone:PX5, a new 4+1 channel DJ performance mixer, that combines Xone analog audio quality with digital connectivity and next generation performance-focused FX processing.
Equipped with the Xone VCF filter, 3-band total kill EQ on all channels, and a new internal Xone:FX engine, the PX5 also features a 20-channel USB sound card with Xone:Sync and MIDI integration, all packaged within an intuitive surface layout.
The PX5’s FX engine features the newly developed Xone:Xcite library of performance-focused FX, coupled with familiar hands-on controls for instant, instinctive operation. A hybrid FX mode combines the internal and external send & return FX for advanced FX processing. Meanwhile, the Xone filter system includes HPF, BPF, LPF, resonance control and frequency sweep, plus the option to route the Aux channel and External Return to the filter.
The internal 20 channel / 24bit / 96kHz USB2 sound card is class compliant on Mac and enables 5 stereo channels to be streamed into the mixer from performance software, whilst the new Xone:Sync engine features comprehensive MIDI clock options, enabling the connection of external software/hardware instruments, such as drum machines and synthesizers. Xone:PX5 also features plug n’ play connection via X:LINK to the Xone:K series controllers for further MIDI control options.
“By combining our extensive knowledge with essential feedback from DJs, we have designed a new breed of DJ performance mixer,” comments Allen & Heath’s DJ Sector specialist, Adrian Pickard. “The Xone:FX engine has uncompromising attention to detail in terms of the Xone:Xcite FX algorithms, whilst focusing on a familiar workflow for performance. The Xone:Sync engine enables tight integration of MIDI tools, while the newly engineered sound card has been specifically created for the DJ environment and offers electronic music artists unparalleled options for integrating into every performance setup.”
If you have trouble with hum, buzz, or a higher-than-normal noise floor, it could be due to twisted pairs that are unbalanced, or even “slightly” unbalanced. The Rebalancer (PA819) resolves many of these issues.
A badly balanced input signal emerges from the other side as an improved, balanced signal, with improved common mode rejection, lower noise, and lower EMI/RFI.
Many classic old devices (limiters, compressors etc.) have poorly balanced inputs or outputs. A Rebalancer solves most noise problems.
The Rebalancer also passes digital audio signals. Although digital devices are much less prone to noise problems, noise that is usually not heard can affect clock recovery and the conversion of the digital signal back to analog.
The Rebalancer (PA819) balances signals into a true balanced-line performance.
If you’re looking for a prime example of what Toffler wrote about in Future Shock, look no further than analog tape.
In little more than a decade, the two-inch multitrack tape machine has gone from studio staple to rare relic. And while many audio veterans wax nostalgic for that warm analog sound, few will admit to missing the work that went with it.
These days, owning an analog tape machine is somewhat akin to driving a classic car, with ongoing maintenance, scarcity of parts, and exotic fuel (analog tape) that’s expensive and hard to find.
So while a handful of top studios still offer those classic spinning reels (and the engineers to maintain them), the good news for the rest of us is that there are now more convenient ways to achieve that classic magnetic sound.
A Bit of History
Analog recording, of course, predates tape — with everything from wax cylinders to wire being used to capture a performance. But when American audio engineer Jack Mullin discovered a pair of German Magnetophon machines during World War II, he knew right away he was on to something big.
The format offered two major advantages over the acetate disks of the day: a recording time of more than 30 minutes, and the ability for recordings to be edited. It was the first time audio could be manipulated.
Mullin brought the two Magnetophons back home after the war and demonstrated them for Bing Crosby at MGM Studios in 1947. Crosby immediately saw the potential for prerecording his radio shows, and invested a small fortune of $50,000 in a local electronics company called Ampex to develop a production model.
Ampex and Mullin soon followed with commercial-grade recorders. One of the first Ampex Model 200 recorders was given to guitarist Les Paul, who took the concept of audio manipulation to a higher level. Paul had already been experimenting with overdubbed recording on disks and, quickly realizing the potential for adding more channels and additional recording and playback heads, came to Ampex with the idea for the first multi-track tape recorders.
The format evolved from two tracks to three and four, and although Ampex built some of the first eight-track machines in the late 1950s, most commercially available machines were limited to four tracks until 1966, when Abbey Road recording engineers Geoff Emerick and Ken Townsend began experimenting with multiple machines during the recording of Sgt. Pepper’s Lonely Hearts Club Band.
The Studer A800 Multichannel Tape Recorder records up to 24 tracks of audio.
Ampex responded to the demand the following year, introducing the revolutionary MM-1000, which recorded eight tracks on one-inch tape. Scully also introduced a 12-track one-inch design that year, but it was quickly overshadowed by a 16-track version of the MM-1000, using two-inch tape. MCI followed in 1968 with 24 tracks on two-inch tape, and the two-inch 24-track became the most common format in professional recording studios throughout most of the 1970s and 1980s.
With the prevalence of home and project studios and digital technology in the late 1980s and 1990s, a number of other tape formats emerged, including various multitrack on-reel and cassette configurations as well as multiple digital tape formats. But for the sake of this article, we’ll focus mainly on multitrack analog tape, the most sonically revered recording medium of all time.
How A Tape Machine Works
In the simplest of terms, magnetic tape consists of a thin layer of Mylar or similar material coated with iron oxide. The tape machine head exerts a charge on the oxide, which polarizes the oxide particles and effectively “captures” the signal. It’s a process that creates some interesting byproducts, many of which directly influence the sound of the recording.
Probably the most commonly cited characteristic of analog recording is its “warmth.” Tape warmth adds a level of color to the sound, primarily softening the attacks of musical notes and thickening up the low frequency range. Recording at slightly hot levels to analog tape can also produce a nice distortion that works well with certain types of music such as rock, soul, and blues.
As multitrack recording evolved, a number of different manufacturers began to emerge. By the early 1980s, Ampex was no longer the dominant multitrack manufacturer, facing stiff competition from MCI, Studer, 3M, and Otari. Although a handful of smaller manufacturers, including Stephens, Aces, and a few others also entered the fray, Ampex, Studer, 3M, MCI (later owned by Sony), and Otari became the dominant brands.
Each of these manufacturers’ different models became loved (or despised) for their mechanical attributes and characteristic sound. In the day, a recording studio’s model of multitrack tape recorder was considered as intrinsic to its sound as their acoustics, console or microphone collection.
The Subtle Differences
A multitude of factors influence each machine’s characteristic sound, beginning with the tape heads, amplifiers, and other electronics.
Beyond that, other factors have a bearing on the sound of an analog recording, some of which are unique to each particular machine. Variations in the machine’s speed stability (wow and flutter), alignment of the tape heads and the angle of the tape, condition of the heads (cleanliness, magnetization, etc.), tape tension, and other physical factors are just a few things that can affect the sound of a recording.
Besides the machine itself, other factors can affect the particular sound of an analog recording, including the brand of tape used. Back in the heyday of analog, the major brands of tape each had their supporters and detractors. Ampex tape was one of the leading brands, with their 456 formula being the most prominent.
The brand of tape used subtly affects the tonal color of a recording.
Other popular brands included AGFA and 3M. Each tape formulation imparted its own subtle sound to a recording, and each machine had to be realigned each time a brand was changed. Some studios made a policy of sticking to one brand of tape, but it was not uncommon for variations to occur even within different batches of the same brand of tape.
Tape speed is another major factor. Faster tape speeds tend to deliver cleaner sound quality, since the signal is spread over a larger area and the signal-to-noise ratio is increased. The most commonly used speeds with two-inch tape are 15 and 30 IPS (inches per second). Although 30 IPS delivers better overall sound quality, most pros agree that lower frequencies sound better at 15 IPS. Indeed, in the modern era, when tape is most often being used for its sonic effect, slower speeds prevail.
Getting That Analog Tape Sound
Although owning a classic two-inch Studer or Ampex tape machine certainly earns bragging rights, in today’s DAW-oriented world, the fact is that few of us would opt to record to analog tape, even if we could. Space considerations, cost considerations, and the scarcity of tape and parts are only the beginning.
The fact is, tape’s destructive editing can be a slow and tedious process in a world where time is truly money. And even the medium itself is no longer cost effective. A single reel of two-inch tape averages around $200. At 15 IPS, that tape holds around 30 minutes of 24-track recording time (half that at 30 IPS). Compare that to a 2TB hard disk, currently selling for just over $100, that can hold many hours of multitrack audio, and you can see why running tape for every project is not an option for most of us.
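Those figures are easy to sanity-check. The quick Python sketch below uses the prices quoted above; the disk-capacity estimate assumes 24 tracks of 24-bit/96 kHz audio, which is my own assumption, not a figure from the article:

```python
# Back-of-envelope cost comparison using the figures quoted in the article.
REEL_COST = 200.0          # dollars per reel of two-inch tape
REEL_MINUTES_15IPS = 30.0  # ~30 min of 24-track recording at 15 IPS
DISK_COST = 100.0          # dollars for a 2 TB hard disk

tape_per_hour = REEL_COST / (REEL_MINUTES_15IPS / 60)
print(f"Tape: ${tape_per_hour:.0f}/hour at 15 IPS")      # $400/hour
print(f"Tape: ${tape_per_hour * 2:.0f}/hour at 30 IPS")  # $800/hour

# Assume 24 tracks of 24-bit (3 bytes/sample), 96 kHz WAV audio:
bytes_per_hour = 24 * 3 * 96_000 * 3600   # ~24.9 GB per hour
disk_hours = 2e12 / bytes_per_hour
print(f"Disk: ~{disk_hours:.0f} hours of 24-track audio for ${DISK_COST:.0f}")
```

Even at the highest common resolution, the disk holds roughly 80 hours of 24-track audio for half the price of a single 30-minute reel.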
Screenshot of the Studer A800 plug-in from UA.
Fortunately there are a number of great sounding plug-in processors for your DAW that can bring some of that analog tape warmth and “glue” to your tracks. For example, Universal Audio’s popular Studer A800 Multichannel Tape Recorder is a very faithful emulation of the original machine’s sound, having been painstakingly developed over a one-year period with input from the original manufacturer. In fact, the sonic differences between the A800 plug-in and the original A800 hardware are so minute that many of the world’s top engineers opt to use the plug-in for their day-to-day work.
Aside from the obvious convenience factor, one of the biggest advantages of tape emulation plug-ins is their flexibility. You can choose to process only certain tracks, rather than the whole mix — imparting the warmth, low-end bump, and cohesive properties of tape to the drums and bass tracks, for example, without adding any tape color to guitars and vocals. Or you can add just a hint of tape compression to the mix, without oversaturating things the way an actual two-track machine might.
Regardless of how you choose to implement it, the sound of analog tape can be a great addition to your digital mix. Take a UAD tape emulation plug-in like the Studer A800 and test it on a few tracks. You might find it’s just the thing to add a bit of nice, understated warmth, cohesiveness and punch to your mix. Tape lives on.
Daniel Keller is a musician, engineer and producer. Since 2002 he has been president and CEO of Get It In Writing, a public relations and marketing firm focused on audio and multimedia professionals and their toys. Despite being immersed in professional audio his entire adult life, he still refuses to grow up. This article is courtesy of Universal Audio.
Offers digital and analog operating modes featuring full transformer balanced output isolation and 24-bit digital to analog conversion.
ARX announces the release of its 2nd Generation USB DI, the first with dual mode audio input interface - digital and analog - for two DIs in one.
“ARX released our first USB DI back in 2007 and the unit has become an industry standard for USB - analog audio conversion” says ARX managing director, Colin Park.
“Recently we’ve been looking at where and how ARX’s USB DIs are being used, and concluded that along with USB enabled devices, some users also stored their audio playback material on Tablet Devices, Phones and iPads which have no direct USB Digital outputs, and that interfacing these devices outputs reliably with the professional world of balanced audio can be a problem.”
“At ARX we concluded that our new second generation ‘USB DI Plus’ should offer users a “one box - dual mode” solution to both their USB digital and analog audio interface needs. To put it simply - Two DIs in one.”
Park continues, “In analog mode the USB DI Plus offers both L & R 6.5mm and broadcast quality Amphenol mini jack socket inputs. To ensure reliability, analog mode is passive, with no batteries or phantom power required for use.”
“Switching over to USB digital mode offers a standard USB type B port with an inbuilt 24-bit high resolution 44.1/48 kHz digital to analog converter (DAC). This installs as a fully compatible generic ‘plug and play’ USB audio device, requiring no special driver program installation; in digital mode, the unit is powered from the USB connection.”
“Both left & right outputs also feature ground lift and mono mode switching.”
In conclusion, Park says, “Both the USB DI Plus digital and analog operating modes feature full transformer balanced output isolation. This eliminates earth loops / ground hum and helps remove extraneous interaction noise and distortion ensuring ultra low noise operation even in the harshest audio environments.”
Greg Fidelman Tracks Metallica With BAE Audio Preamps (Video)
Producer selects 1073 and 1028 preamps to match the sound of the band's existing vintage gear for new album.
When producer Greg Fidelman hunkered down to begin work on Metallica’s latest record, he knew that the 30 channels of vintage preamps the band already had in the studio would not be enough to simultaneously facilitate the multi-mic drum sound and detailed guitar micing setups he was envisioning for the sessions.
He turned to BAE Audio preamplifiers to nearly double his available inputs, matching the sound of Metallica’s existing vintage gear with the added reliability of BAE’s modern construction and components.
“They’re really busy guys, so we wanted to have all of our sounds for drums, bass, and guitar set up all at once and leave them that way,” Fidelman says. “That way they can get in there and cut a few takes without having to spend time dialing in sounds or breaking down mics each time.”
Fidelman had first worked with BAE Audio preamplifiers on a recent Slipknot record he was producing.
“On that record we were working on a full vintage console but had to change studios halfway through recording to one with a more modern console,” Fidelman recalls. “The band was concerned about retaining a consistent guitar sound so we took extensive notes of our mic placements and EQ settings and took snapshots of direct and reamped guitar to test in the new space. The new studio had a rack full of BAE 1073s and other BAE preamplifiers so we ran everything through those with our notated settings and A/B’d it with what we had done in the previous studio.”
Fidelman said the results were indistinguishable. “I think if anything our sounds were 5-10% better because of the new components in the BAE gear,” he adds.
Fidelman’s experience with BAE Audio gear on the Slipknot record gave him the confidence to recommend it for the Metallica sessions to expand their vintage input count. He procured 11 channels of the BAE Audio 1073 and 8 channels of the 1028 in a mix of module and standalone rack formats. Fidelman opted for the mix of vintage and new vintage inputs over the studio’s built-in modern console because of the unique qualities of the vintage circuit design.
“With the 1073, the way you can manipulate the bottom end is pretty unique, with the low-end boost and the filter working together,” Fidelman says. “There’s also a quality to the top end that’s always musical. If you need a little extra you can really dig into it without it becoming harsh. And not just the high frequency boost/cut, but also the higher frequencies in the midrange band.”
Fidelman notes that the midrange band is particularly key for articulating the top end of kick and snare drums.
“It’s pleasing with drums, you can boost what you want without the other garbage,” he says. “To get the core guitar sounds for Metallica I’m sometimes routing a mic into the 1073 and then out into the direct input of another 1073 or 1028 to get access to another midrange or low end band for extra control.”
Fidelman appreciates the additional frequency selections provided by the 1028 on things like overheads. “You can dig in deeply with some of those additional frequencies to define the sound you’re looking for,” he says. “It provides the versatility I need.”
Though Fidelman says he “grew up” on vintage consoles and loves their sound, he acknowledges that working with older gear has its perils. “I was working at a studio in Hollywood recently with a great vintage desk, but even with the techs working through one or two modules every day, the reality was that stuff was failing faster than they could keep up with,” Fidelman says. “BAE has nailed the sound, and since you’re not dealing with 30-year-old contacts, dusty pots, and worn connectors, it’s way more reliable.”
Both preamps sport the same Carnhill/St Ives transformers specified in the original vintage circuit design and feature BAE’s renowned hand-wired construction, conducted at their facility in California, enabling them to capture the vintage sound that has been the signature of many beloved recordings.
Fidelman and the band’s approach have proven fruitful over the course of the tracking sessions.
“We began these sessions back in June and have been tracking bits and pieces as recently as two weeks ago,” Fidelman says. “We never had to stop and reset things to switch instruments, and we’ve got consistent sound on every single channel, whether it’s with our vintage channels or the BAE channels. We can hop from laying down Kirk’s guitars to Lars’s drums seamlessly and know we have sounds worthy of a Metallica record ready to go at all times.”
Working in tandem with vintage and “new vintage” gear by BAE Audio, Fidelman has also kept a coherent and consistent sound on a record that’s been in process for several months.
“There are always interruptions when you’re working on a high-profile record, but we were able to eliminate technical interruptions from the project with the consistency and reliability of BAE hardware, all without sacrificing that vintage sound that I love.” BAE preamps are the new first choice for Fidelman. “I can’t really tell the difference between BAE and the original.”
Editor’s Note: This article originally appeared in 1997, but the principles are still valid today and worthy of repeating.
Like everything else in the world, the audio industry has been radically and irrevocably changed by the digital revolution. No one has been spared.
Arguments will ensue forever about whether the true nature of the real world is analog or digital; whether the fundamental essence, or dharma, of life is continuous (analog) or exists in tiny little chunks (digital). Seek not that answer here.
Rather, let’s look at the dharma (essential function) of audio analog-to-digital (A/D) conversion.
It’s important at the outset of exploring digital audio to understand that once a waveform has been converted into digital format, nothing can inadvertently occur to change its sonic properties. While it remains in the digital domain, it’s only a series of digital words, representing numbers.
Aside from the gross example of having the digital processing actually fail and cause a word to be lost or corrupted beyond use, nothing can change the sound of the word. It’s just a bunch of “ones” and “zeroes.” There are no “one-halves” or “three-quarters.”
The point is that sonically, it begins and ends with the conversion process. Nothing is more important to digital audio than data conversion. Everything in-between is just arithmetic and waiting. That’s why there is such a big to-do with data conversion. It really is that important. Everything else quite literally is just details.
We could even go so far as to say that data conversion is the art of digital audio while everything else is the science, in that it is data conversion that ultimately determines whether or not the original sound is preserved. (This comment certainly does not negate the enormous and exacting science involved in truly excellent data conversion.)
Because analog signals continuously vary between an infinite number of states while computers can only handle two, the signals must be converted into binary digital words before the computer can work. Each digital word represents the value of the signal at one precise point in time. Today’s common word lengths are 16-bits, 24-bits and 32-bits. Once converted into digital words, the information may be stored, transmitted, or operated upon within the computer.
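To make that idea concrete, here is a minimal Python sketch (my own illustration, not from the article) of the basic operation: a continuously varying sample value, normalized to the range -1.0 to 1.0, is mapped to the nearest signed 16-bit integer word.

```python
def quantize_16bit(sample: float) -> int:
    """Map an analog sample in [-1.0, 1.0) to a signed 16-bit
    integer word. Values outside the range clip, just as an
    overdriven A/D converter would."""
    # The largest positive 16-bit value is 32767, i.e. 32768 - 1,
    # so the usable positive range tops out just below 1.0.
    clipped = max(-1.0, min(sample, 1.0 - 1 / 32768))
    return round(clipped * 32768)

print(quantize_16bit(0.5))   # 16384
print(quantize_16bit(-1.0))  # -32768
print(quantize_16bit(2.0))   # 32767 (clipped)
```

The rounding step is where quantization error enters: the infinite set of analog values is collapsed to 65,536 discrete levels at 16 bits (and about 16.7 million at 24 bits).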
In order to properly explore the critical interface between the analog and digital worlds, it’s first necessary to review a few fundamentals and a little history.
Binary & Decimal
Whenever we speak of “digital,” by inference, we speak of computers (throughout this paper the term “computer” is used to represent any digital-based piece of audio equipment).
And computers in their heart of hearts are really quite simple. They only can understand the most basic form of communication or information: yes/no, on/off, open/closed, here/gone, all of which can be symbolically represented by two things - any two things.
Two letters, two numbers, two colors, two tones, two temperatures, two charges… It doesn’t matter. Unless you have to build something that will recognize these two states - now it matters.
So, to keep it simple, we choose two numbers: one and zero, or, a “1” and a “0.”
Officially this is known as binary representation, from Latin bini, two by two. In mathematics this is a base-2 number system, as opposed to our decimal (from Latin decima, a tenth part or tithe) number system, which is called base-10 because we use the ten numbers 0-9.
In binary we use only the numbers 0 and 1. “0” is a good symbol for no, off, closed, gone, etc., and “1” is easy to understand as meaning yes, on, open, here, etc. In electronics it’s easy to determine whether a circuit is open or closed, conducting or not conducting, has voltage or doesn’t have voltage.
Thus the binary number system found use in the very first computer, and nothing has changed today. Computers just got faster and smaller and cheaper, with memory size becoming incomprehensibly large in an incomprehensibly small space.
One problem with using binary numbers is they become big and unwieldy in a hurry. For instance, it takes six digits to express my age in binary, but only two in decimal. But in binary, we’d better not call them “digits,” since “digit” implies a human finger or toe, of which there are ten, so confusion reigns.
To get around that problem, John Tukey of Bell Laboratories dubbed the basic unit of information (as defined by Shannon—more on him later) a binary unit, or “binary digit” which became abbreviated to “bit.” A bit is the simplest possible message representing one of two states. So, I’m six-bits old. Well, not quite. But it takes 6-bits to express my age as 110111.
Let’s see how that works. I’m 55 years old. So in base-10 symbols that is “55,” which stands for five 1’s plus five 10’s. You may not have ever thought about it, but each digit in our everyday numbers represents an additional power of 10, beginning with 0.
Figure 1: Number representation systems.
That is, the first digit represents the number of 1’s (10^0), the second digit represents the number of 10’s (10^1), the third digit represents the number of 100’s (10^2), and so on. We can represent any size number by using this shorthand notation.
Binary number representation is just the same except substituting the powers of 2 for the powers of 10 [any base number system is represented in this manner].
Therefore (moving from right to left) each succeeding bit represents 2^0 = 1, 2^1 = 2, 2^2 = 4, 2^3 = 8, 2^4 = 16, 2^5 = 32, etc. Thus, my age breaks down as 1-1, 1-2, 1-4, 0-8, 1-16, and 1-32, represented as “110111,” which is 32+16+0+4+2+1 = 55. Or double-nickel to you cool cats.
Figure 1, above, shows the two examples.
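The repeated-division method for finding those bits can be sketched in a few lines of Python (the function name is ours, for illustration):

```python
def to_binary(n):
    """Repeatedly divide by 2; the remainders are the binary digits."""
    bits = ""
    while n > 0:
        bits = str(n % 2) + bits   # remainder is the next digit, right to left
        n //= 2
    return bits or "0"

# 55 decimal = 32 + 16 + 0 + 4 + 2 + 1
print(to_binary(55))       # 110111
print(int("110111", 2))    # 55, using Python's built-in base-2 parser
```

Python’s built-in `int(s, 2)` performs the reverse lookup, confirming the hand calculation above.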
The French mathematician Fourier unknowingly laid the groundwork for A/D conversion in the early 19th century.
All data conversion techniques rely on looking at, or sampling, the input signal at regular intervals and creating a digital word that represents the value of the analog signal at that precise moment. The fact that we know this works lies with Nyquist.
Harry Nyquist discovered it while working at Bell Laboratories in the late 1920s and wrote a landmark paper describing the criteria for what we know today as sampled data systems.
Nyquist taught us that for periodic functions, if you sampled at a rate that was at least twice as fast as the signal of interest, then no information (data) would be lost upon reconstruction.
And since Fourier had already shown that all alternating signals are made up of nothing more than a sum of harmonically related sine and cosine waves, audio signals are periodic functions and can be sampled without loss of information following Nyquist’s instructions.
This became known as the Nyquist Frequency, which is the highest frequency that may be accurately sampled, and is one-half of the sampling frequency.
For example, the theoretical Nyquist frequency for the audio CD (compact disc) system is 22.05 kHz, equaling one-half of the standardized sampling frequency of 44.1 kHz.
As powerful as Nyquist’s discoveries were, they were not without their dark side, with the biggest being aliasing frequencies. Following the Nyquist criteria (as it is now called) guarantees that no information will be lost; it does not, however, guarantee that no information will be gained.
Although by no means obvious, the act of sampling an analog signal at precise time intervals is an act of multiplying the input signal by the sampling pulses. This introduces the possibility of generating “false” signals indistinguishable from the original. In other words, given a set of sampled values, we cannot relate them specifically to one unique signal.
Figure 2: Aliasing frequencies.
As Figure 2 shows, the same set of samples could have resulted from any of the three waveforms shown. And from all possible sum and difference frequencies between the sampling frequency and the one being sampled.
All such false waveforms that fit the sample data are called “aliases.” In audio, these frequencies show up mostly as intermodulation distortion products, and they come from the random-like white noise, or any sort of ultrasonic signal, present in every electronic system.
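This ambiguity is easy to demonstrate numerically: sampled at some rate fs, a cosine at frequency f and another at fs - f produce exactly the same sample values. A Python sketch, with arbitrary example frequencies of our choosing:

```python
import math

fs = 1000.0                        # sampling frequency, Hz
f_low, f_alias = 200.0, fs - 200.0 # 800 Hz lies above the 500 Hz Nyquist frequency

s_low   = [math.cos(2 * math.pi * f_low   * n / fs) for n in range(32)]
s_alias = [math.cos(2 * math.pi * f_alias * n / fs) for n in range(32)]

# The two sample sets are indistinguishable: 800 Hz "aliases" to 200 Hz
print(all(abs(a - b) < 1e-9 for a, b in zip(s_low, s_alias)))   # True
```

Given only the 32 sampled values, there is no way to tell which of the two tones was at the input.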
Solving the problem of aliasing frequencies is what improved audio conversion systems to today’s level of sophistication. And it was Claude Shannon who pointed the way. Shannon is recognized as the father of information theory: while a young engineer at Bell Laboratories in 1948, he defined an entirely new field of science.
Even before then his genius shined through: while still a 22-year-old student at MIT, he showed in his master’s thesis how the algebra invented by the British mathematician George Boole in the mid-1800s could be applied to electronic circuits. Since that time, Boolean Algebra has been the rock of digital logic and computer design.
Shannon studied Nyquist’s work closely and came up with a deceptively simple addition. He observed (and proved) that if you restrict the input signal’s bandwidth to less than one-half the sampling frequency then no errors due to aliasing are possible.
So bandlimiting your input to no more than one-half the sampling frequency guarantees no aliasing. Cool…Only it’s not possible. In order to satisfy the Shannon limit (as it is called - Harry gets a “criteria” and Claude gets a “limit”) you must have the proverbial brick-wall, i.e., infinite-slope filter.
Well, this isn’t going to happen, not in this universe. You cannot guarantee that there is absolutely no signal (or noise) greater than the Nyquist frequency.
Fortunately there is a way around this problem. In fact, you go all the way around the problem and look at it from another direction.
If you cannot restrict the input bandwidth so aliasing does not occur, then solve the problem another way: Increase the sampling frequency until the aliasing products that do occur, do so at ultrasonic frequencies, and are effectively dealt with by a simple single-pole filter.
This is where the term “oversampling” comes in. For full spectrum audio the minimum sampling frequency must be 40 kHz, giving you a useable theoretical bandwidth of 20 kHz - the limit of normal human hearing. Sampling at anything significantly higher than 40 kHz is termed oversampling.
In just a few years’ time, we saw the audio industry go from the CD system standard of 44.1 kHz, and the pro audio quasi-standard of 48 kHz, to 8-times and 16-times oversampling frequencies of around 350 kHz and 700 kHz, respectively. With sampling frequencies this high, aliasing is no longer an issue.
O.K. So audio signals can be changed into digital words (digitized) without loss of information, and with no aliasing effects, as long as the sampling frequency is high enough. How is this done?
Quantizing is the process of determining which of the possible values (determined by the number of bits or voltage reference parts) is the closest value to the current sample, i.e., you are assigning a quantity to that sample.
Quantizing, by definition then, involves deciding between two values and thus always introduces error. How big the error, or how accurate the answer, depends on the number of bits. The more bits, the better the answer.
The converter has a reference voltage which is divided up into 2^n parts, where n is the number of bits. Each part represents the same value.
Editor’s Note: For those working the math, “2^n” is read “2 to the nth power.” Use the x^y function on a scientific calculator to achieve the correct result.
Since you cannot resolve anything smaller than this value, there is error. There is always error in the conversion process. This is the accuracy issue.
Figure 3: 8-Bit resolution.
The number of bits determines the converter accuracy. For 8-bits, there are 2^8 = 256 possible levels, as shown in Figure 3.
Since the signal swings positive and negative there are 128 levels for each direction. Assuming a ±5 V reference, this makes each division, or bit, equal to 39 mV (5/128 = 0.039).
Hence, an 8-bit system cannot resolve any change smaller than 39 mV. This means a worst case accuracy error of 0.78 percent.
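The same arithmetic works for any bit depth. A small Python check of the 8-bit figures just quoted (the function name is ours):

```python
def step_size(n_bits, vref):
    """Smallest resolvable voltage for a bipolar converter with a +/-vref reference."""
    return vref / 2 ** (n_bits - 1)   # 2^(n-1) levels per polarity

step = step_size(8, 5.0)
print(round(step * 1000, 1))        # 39.1 -- millivolts per step
print(round(step / 5.0 * 100, 2))   # 0.78 -- worst-case percent error
```

Doubling the bit count to 16 shrinks the step to about 153 microvolts, which is why every added bit matters.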
Each step size (resulting from dividing the reference into the number of equal parts dictated by the number of bits) is equal and is called a quantizing step (also called quantizing interval—see Figure 4).
Originally this step was termed the LSB (least significant bit) since it equals the value of the smallest coded bit; however, it is an illogical choice for mathematical treatments and has since been replaced by the more accurate term quantizing step.
Figure 4: Quantization, 3-bit, 50-volt example.
The error due to the quantizing process is called quantizing error (no definitional stretch here). As shown earlier, each time a sample is taken there is error.
Here’s the not obvious part: the quantizing error can be thought of as an unwanted signal which the quantizing process adds to the perfect original.
An example best illustrates this principle. Let the sampled input value be some arbitrarily chosen value, say, 2 volts. And let this be a 3-bit system with a 5 volt reference. The 3 bits divide the reference into 8 equal parts (2^3 = 8) of 0.625 V each, as shown in Figure 4.
For the 2 volt input example, the converter must choose between either 1.875 volts or 2.50 volts, and since 2 volts is closer to 1.875 than 2.5, then it is the best fit. This results in a quantizing error of -0.125 volts, i.e., the quantized answer is too small by 0.125 volts.
If the input signal had been, say, 2.2 volts, then the quantized answer would have been 2.5 volts and the quantizing error would have been +0.3 volts, i.e., too big by 0.3 volts.
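The 3-bit, 5-volt example can be verified with a round-to-nearest quantizer. A Python sketch (the function name is ours):

```python
def quantize(v, n_bits=3, vref=5.0):
    """Snap v to the nearest of the 2^n equally spaced levels."""
    step = vref / 2 ** n_bits          # 0.625 V for 3 bits and a 5 V reference
    return round(v / step) * step

for vin in (2.0, 2.2):
    q = quantize(vin)
    print(vin, "->", q, "error", round(q - vin, 3))
# 2.0 -> 1.875 error -0.125
# 2.2 -> 2.5 error 0.3
```

The printed errors match the text: too small by 0.125 volts in the first case, too big by 0.3 volts in the second.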
These alternating unwanted signals added by quantizing form a quantizing error waveform: a kind of additive broadband noise that is generally uncorrelated with the signal, called quantizing noise.
Since the quantizing error is essentially random (i.e. uncorrelated with the input) it can be thought of like white noise (noise with equal amounts of all frequencies). This is not quite the same thing as thermal noise, but it is similar. The energy of this added noise is equally spread over the band from dc to one-half the sampling rate. This is a most important point and will be returned to when we discuss delta-sigma converters and their use of extreme oversampling.
Successive approximation is one of the earliest and most successful analog-to-digital conversion techniques. Therefore, it is no surprise it became the initial A/D workhorse of the digital audio revolution. Successive approximation paved the way for the delta-sigma techniques to follow.
The heart of any A/D circuit is a comparator. A comparator is an electronic block whose output is determined by comparing the values of its two inputs. If the positive input is larger than the negative input then the output swings positive, and if the negative input exceeds the positive input, the output swings negative.
Therefore, if a reference voltage is connected to one input and an unknown input signal is applied to the other input, you now have a device that can compare and tell you which is larger. Thus a comparator gives you a “high output” (which could be defined to be a “1”) when the input signal exceeds the reference, or a “low output” (which could be defined to be a “0”) when it does not.
Figure 5A: Successive approximation, example.
A comparator is the key ingredient in the successive approximation technique as shown in Figure 5A and Figure 5B. The name successive approximation nicely sums up how the data conversion is done. The circuit evaluates each sample and creates a digital word representing the closest binary value.
The process takes the same number of steps as bits available, i.e., a 16-bit system requires 16 steps for each sample. The analog sample is successively compared to determine the digital code, beginning with the determination of the biggest (most significant) bit of the code.
The description given in Daniel Sheingold’s Analog-Digital Conversion Handbook offers the best analogy as to how successive approximation works. The process is exactly analogous to a gold miner’s assay scale, or a chemical balance as seen in Figure 5A.
This type of scale comes with a set of graduated weights, each one half the value of the preceding one, such as 1 gram, 1/2 gram, 1/4 gram, 1/8 gram, etc. You compare the unknown sample against these known values by first placing the heaviest weight on the scale.
If it tips the scale you remove it; if it does not you leave it and go to the next smaller value. If that value tips the scale you remove it, if it does not you leave it and go to the next lower value, and so on until you reach the smallest weight that tips the scale. (When you get to the last weight, if it does not tip the scale, then you put the next highest weight back on, and that is your best answer.)
The sum of all the weights on the scale represents the closest value you can resolve.
In digital terms, we can analyze this example by saying that a “0” was assigned to each weight removed, and a “1” to each weight remaining—in essence creating a digital word equivalent to the unknown sample, with the number of bits equaling the number of weights.
And the quantizing error will be no more than 1/2 the smallest weight (or 1/2 quantizing step).
As stated earlier the successive approximation technique must repeat this cycle for each sample. Even with today’s technology, this is a very time consuming process and is still limited to relatively slow sampling rates, but it did get us into the 16-bit, 44.1 kHz digital audio world.
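The weigh-scale procedure maps directly into code. A minimal Python sketch of an n-bit successive approximation conversion (truncating, just like the scale analogy; the function name is ours):

```python
def sar_convert(vin, vref, n_bits):
    """Try each 'weight' from MSB down, keeping it only if the total still fits."""
    code = 0
    for bit in range(n_bits - 1, -1, -1):
        trial = code | (1 << bit)                 # place the next-smaller weight
        if trial * (vref / 2 ** n_bits) <= vin:
            code = trial                          # it didn't tip the scale: keep it
    return code

# 3-bit, 5 V reference, 2.0 V input: code 3 (3 x 0.625 = 1.875 V, as in Figure 4)
print(sar_convert(2.0, 5.0, 3))   # 3
```

Note the loop runs exactly n_bits times, one comparison per bit, which is why a 16-bit conversion needs 16 steps per sample.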
PCM, PWM, EIEIO
The successive approximation method of data conversion is an example of pulse code modulation, or PCM. Three elements are required: sampling, quantizing, and encoding into a fixed length digital word. The reverse process reconstructs the analog signal from the PCM code.
The output of a PCM system is a series of digital words, where the word-size is determined by the available bits. For example, the output is a series of 8-bit words, or 16-bit words, or 20-bit words, etc., with each word representing the value of one sample.
Pulse width modulation, or PWM, is quite simple and quite different from PCM. Look at Figure 6.
Figure 6: Pulse width modulation (PWM).
In a typical PWM system, the analog input signal is applied to a comparator whose reference voltage is a triangle-shaped waveform repeating at the sampling frequency. This simple block forms what is called an analog modulator.
A simple way to understand the “modulation” process is to view the output with the input held steady at zero volts. The output forms a 50 percent duty cycle (50 percent high, 50 percent low) square wave. As long as there is no input, the output is a steady square wave.
As soon as the input is non-zero, the output becomes a pulse-width modulated waveform. That is, when the non-zero input is compared against the triangular reference voltage, it varies the length of time the output is either high or low.
For example, say a steady DC value is applied to the input. Whenever the value of the triangle is less than the input value, the output stays low, and whenever it is greater than the input value, the output changes state and remains high.
Therefore, if the triangle starts higher than the input value, the output goes high; at the next sample period the triangle has increased in value but is still more than the input, so the output remains high; this continues until the triangle reaches its apex and starts down again; eventually the triangle voltage drops below the input value and the output drops low and stays there until the reference exceeds the input again.
The resulting pulse-width modulated output, when averaged over time, gives the exact input voltage. For example, if the output spends exactly 50 percent of the time with an output of 5 volts, and 50 percent of the time at 0 volts, then the average output would be exactly 2.5 volts.
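A Python sketch of the modulator makes the duty-cycle relationship visible. We assume here, as an illustrative convention, that the output is high while the input exceeds the triangle; the opposite polarity works just as well:

```python
def pwm_duty(vin, samples=10000):
    """Compare vin against one period of a triangle in [-1, 1]; return duty cycle.
    Convention: output is high while the input exceeds the triangle."""
    high = 0
    for k in range(samples):
        t = k / samples
        tri = 2 * abs(2 * t - 1) - 1      # triangle wave, +1 down to -1 and back
        if vin > tri:
            high += 1
    return high / samples

print(pwm_duty(0.0))   # ~0.5: zero input gives a 50 percent square wave
print(pwm_duty(0.2))   # ~0.6: the duty cycle tracks the input linearly
```

Averaging the high/low output over a period recovers the input level, exactly as described above.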
This is also an FM, or frequency-modulated system—the varying pulse-width translates into a varying frequency. And it is the core principle of most Class-D switching power amplifiers.
The analog input is converted into a variable pulse-width stream used to turn-on the output switching transistors. The analog output voltage is simply the average of the on-times of the positive and negative outputs.
Pretty amazing stuff from a simple comparator with a triangle waveform reference.
Another way to look at this is that this simple device actually codes a single bit of information, i.e., a comparator is a 1-bit A/D converter. PWM is an example of a 1-bit A/D encoding system. And a 1-bit A/D encoder forms the heart of delta-sigma modulation.
Modulation & Shaping
After 30 years, delta-sigma modulation (also sigma-delta) emerged as the most successful audio A/D converter technology.
It waited patiently for the semiconductor industry to develop the technologies necessary to integrate analog and digital circuitry on the same chip.
Today’s very high-speed “mixed-signal” IC processing allows the total integration of all the circuit elements necessary to create delta-sigma data converters of awesome magnitude.
Essentially a delta-sigma converter digitizes the audio signal with a very low resolution (1-bit) A/D converter at a very high sampling rate. It is the oversampling rate and subsequent digital processing that separates this from plain delta modulation (no sigma).
Referring back to the earlier discussion of quantizing noise, it’s possible to calculate the theoretical sine wave signal-to-noise (S/N) ratio (actually the signal-to-error ratio, but for our purposes it’s close enough to combine) of an A/D converter system knowing only n, the number of bits.
Doing some math shows that the value of the added quantizing noise relative to a maximum (full-scale) input equals 6.02n + 1.76 dB for a sine wave. For example, a perfect 16-bit system will have a S/N ratio of 98.1 dB, while a 1-bit delta-modulator A/D converter, on the other hand, will have only 7.78 dB!
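The formula is simple enough to tabulate directly:

```python
def ideal_snr_db(n_bits):
    """Theoretical full-scale sine-wave S/N for an ideal n-bit quantizer."""
    return 6.02 * n_bits + 1.76

for n in (1, 8, 16, 24):
    print(n, "bits:", round(ideal_snr_db(n), 2), "dB")
# 1 bit: 7.78 dB, 16 bits: 98.08 dB, 24 bits: 146.24 dB
```

Each added bit buys about 6 dB of signal-to-noise ratio, which is the rule of thumb behind the oversampling and noise-shaping tricks that follow.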
Figures 7A - 7E: Noise power redistribution and reduction due to oversampling, noise shaping and digital filtering.
To get something of an intuitive feel for this, consider that since there is only 1-bit, the amount of quantization error possible is as much as 1/2-bit. That is, since the converter must choose between the only two possibilities of maximum or minimum values, the error can be as much as half of that.
And since this quantization error shows up as added noise, it reduces the S/N to something on the order of 2:1, or 6 dB.
One attribute shines true above all others for delta-sigma converters and makes them a superior audio converter: simplicity. The simplicity of 1-bit technology makes the conversion process very fast, and very fast conversions allows use of extreme oversampling.
And extreme oversampling pushes the quantizing noise and aliasing artifacts way out to megawiggle-land, where they are easily dealt with by digital filters (typically 64-times oversampling is used, resulting in a sampling frequency on the order of 3 MHz).
To get a better understanding of how oversampling reduces audible quantization noise, we need to think in terms of noise power. From physics you may remember that power is conserved, i.e., you can change it, but you cannot create or destroy it; well, quantization noise power is similar.
With oversampling, the quantization noise power is spread over a band that is as many times larger as the rate of oversampling. For example, with 64-times oversampling the noise power is spread over a band 64 times larger, reducing its power density in the audio band to 1/64th of what it would otherwise be.
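Because the total noise power is conserved, spreading it over an oversampled band buys 10·log10(OSR) dB of in-band improvement before any noise shaping, about 3 dB, or half a bit, per doubling of the rate. A quick check in Python (the function name is ours):

```python
import math

def oversampling_gain_db(osr):
    """In-band quantization-noise reduction from spreading the fixed
    noise power over an osr-times wider band (flat spectrum, no shaping)."""
    return 10 * math.log10(osr)

print(round(oversampling_gain_db(64), 1))         # 18.1 dB for 64-times oversampling
print(round(oversampling_gain_db(64) / 6.02, 1))  # roughly 3 extra effective bits
```

Oversampling alone clearly cannot turn a 1-bit converter into a 16-bit one; that is where the noise shaping described next comes in.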
Figures 7A through 7E illustrate noise power redistribution and reduction due to oversampling, noise shaping and digital filtering.
Noise shaping helps reduce in-band noise even more. Oversampling pushes out the noise, but it does so uniformly, that is, the spectrum is still flat. Noise shaping changes that.
Using very clever complex algorithms and circuit tricks, noise shaping contours the noise so that it is reduced in the audible regions and increased in the inaudible regions.
Conservation still holds, the total noise is the same, but the amount of noise present in the audio band is decreased while simultaneously increasing the noise out-of-band—then the digital filter eliminates it. Very slick.
As shown in Figure 8, a delta-sigma modulator consists of three parts: an analog modulator, a digital filter and a decimation circuit.
The analog modulator is the 1-bit converter discussed previously with the change of integrating the analog signal before performing the delta modulation. (The integral of the analog signal is encoded rather than the change in the analog signal, as is the case for traditional delta modulation.)
Figure 8: Delta-sigma A/D converter. (click to enlarge)
Oversampling and noise shaping pushes and contours all the bad stuff (aliasing, quantizing noise, etc.) so the digital filter suppresses it.
The decimation circuit, or decimator, is the digital circuitry that generates the correct output word length of 16-, 20-, or 24-bits, and restores the desired output sample frequency. It is a digital sample rate reduction filter and is sometimes termed downsampling (as opposed to oversampling) since it is here that the sample rate is returned from its 64-times rate to the normal CD rate of 44.1 kHz, or perhaps to 48 kHz, or even 96 kHz, for pro audio applications.
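To illustrate the idea only, here is a crude Python decimator using a boxcar (moving-average) lowpass; real converters use much steeper multi-stage digital filters:

```python
def decimate(samples, factor):
    """Crude decimator: average each group of `factor` samples (a boxcar
    lowpass), then keep one output per group, reducing the rate by `factor`."""
    out = []
    for i in range(0, len(samples) - factor + 1, factor):
        out.append(sum(samples[i:i + factor]) / factor)
    return out

x = list(range(64))       # 64 input samples at the oversampled rate
y = decimate(x, 4)        # 16 output samples at 1/4 the rate
print(len(y))             # 16
print(y[0])               # 1.5, the average of 0, 1, 2, 3
```

In a real delta-sigma converter the reduction factor is the oversampling ratio, e.g. 64, bringing the ~3 MHz modulator rate back down to 44.1 kHz.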
The net result is much greater resolution and dynamic range, with increased S/N and far less distortion compared to successive approximation techniques—all at lower costs.
Now that oversampling helped get rid of the bad noise, let’s add some good noise—dither noise. Dither is one of life’s many trade-offs. Here the trade-off is between noise and resolution. Believe it or not, we can introduce dither (a form of noise) and increase our ability to resolve very small values.
Values, in fact, smaller than our smallest bit… Now that’s a good trick. Perhaps you can begin to grasp the concept by making an analogy between dither and anti-lock brakes. Get it? No? Here’s how this analogy works: With regular brakes, if you just stomp on them, you probably create an unsafe skid situation for the car… Not a good idea.
Instead, if you rapidly tap the brakes, you control the stopping without skidding. We shall call this “dithering the brakes.” What you have done is introduce “noise” (tapping) to an otherwise rigidly binary (on or off) function.
So by “tapping” on our analog signal, we can improve our ability to resolve it. By introducing noise, the converter rapidly switches between two quantization levels, rather than picking one or the other, when neither is really correct. Sonically, this comes out as noise, rather than a discrete level with error. Subjectively, what would have been perceived as distortion is now heard as noise.
Let’s look at this in more detail. The problem dither helps to solve is that of quantization error, caused by the data converter being forced to choose one of two exact levels for each bit it resolves. It cannot choose between levels; it must pick one or the other.
With 16-bit systems, the digitized waveform for high frequency, low signal levels looks very much like a steep staircase with few steps. An examination of the spectral analysis of this waveform reveals lots of nasty sounding distortion products. We can improve this result either by adding more bits, or by adding dither.
Prior to 1997, adding more bits for better resolution was straightforward, but expensive, thereby making dither an inexpensive compromise; today, however, there is less need.
The dither noise is added to the low-level signal before conversion. The mixed noise causes the small signal to jump around, which causes the converter to switch rapidly between levels rather than being forced to choose between two fixed values.
Now the digitized waveform still looks like a steep staircase, but each step, instead of being smooth, is comprised of many narrow strips, like vertical Venetian blinds.
Figure 9: A - input signal; B - output signal (no dither); C - total error signal (no dither); D - power spectrum of output signal (no dither); E - input signal; F - output signal (with dither); G - total error signal (with dither); H - power spectrum of output signal (with dither).
The spectral analysis of this waveform shows almost no distortion products at all, albeit with an increase in the noise content. The dither has caused the distortion products to be pushed out beyond audibility, and replaced with an increase in wideband noise. Figure 9 diagrams this process.
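The effect is easy to simulate. The sketch below uses uniform (RPDF) dither for simplicity, though triangular (TPDF) dither is the usual choice in audio; the point is that averaging the dithered output recovers a value smaller than one quantizing step:

```python
import random

random.seed(42)
true_value = 0.3   # in LSB units: smaller than the smallest step

# Without dither the quantizer always rounds to 0 and the signal vanishes
print(round(true_value))   # 0

# With dither, the quantizer flips between 0 and 1 in proportion to the
# input; averaging many conversions recovers the sub-LSB value
dithered = [round(true_value + random.uniform(-0.5, 0.5)) for _ in range(100000)]
print(sum(dithered) / len(dithered))   # ~0.3
```

What would have been a hard distortion (the signal simply disappearing) becomes benign noise riding on a recoverable signal, which is the trade-off the text describes.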
Wrap With Bandwidth
Due to the oversampling and noise shaping characteristics of delta-sigma A/D converters, certain measurements must use the appropriate bandwidth or inaccurate answers result. Specifications such as signal-to-noise, dynamic range, and distortion are subject to misleading results if the wrong bandwidth is used.
Because noise shaping purposely reduces audible noise by shifting the noise to inaudible higher frequencies, taking measurements over a bandwidth wider than 20 kHz results in answers that do not correlate with the listening experience. Therefore, it’s important to set the correct measurement bandwidth to obtain meaningful data.
Download a PDF copy of this article from Rane here.
Dennis Bohn is a principal partner and vice president of research & development at Rane Corporation. He holds BSEE and MSEE degrees from the University of California at Berkeley. Prior to Rane, he worked as engineering manager for Phase Linear Corporation and as audio application engineer at National Semiconductor Corporation. Bohn is a Fellow of the AES, holds two U.S. patents, is listed in Who’s Who In America and authored the entry on “Equalizers” for the McGraw-Hill Encyclopedia of Science & Technology, 7th edition.
Hardware-based model leverages FPGA technology for low latency performance and the character of the original vintage gear.
Antelope Audio announces the arrival of its FET-A76 solid-state compressor to its latest hardware interfaces, as well as a further expansion of its EQ offerings with four new vintage hardware models.
Each hardware-based model leverages Antelope’s proprietary Field Programmable Gate Array (FPGA) technology, and joins a growing range of signal processing offered by Antelope Audio with near-zero latency performance and the character of the original vintage gear.
The FET-A76, VEQ-HLF, Helios 69, NEU-PEV and Lang PEQ2 add more essential tools for tracking and mixing to the growing Antelope Audio digital platform, and are available immediately as an update for the Goliath, Zen Tour and Orion Studio hardware.
FET-based compression has been a staple in the studio since its invention in the late 60s. The Antelope FET-A76 captures all of the nuances of a vintage FET compressor, and like its analog progenitor, is useful not only for controlling dynamics and sculpting tone but also for its ability to add punch and presence to anything passing through its circuit. Mirroring the original hardware’s easy-to-use interface, the FET-A76 features input and output gain controls and a selectable 4-way ratio control with hidden “all-buttons” mode for a more aggressive compression character.
The FET-A76 shines in a wide range of applications, from vocals to bass guitar to buss compression, and is a companion to Antelope Audio’s already-available FPGA EQ models, including the recently released BAE Audio 1073.
“The FET compressor has been a studio workhorse for decades, and now thanks to our proprietary FPGA technology we can model the character of this classic outboard gear in-hardware on our interfaces with near-zero latency,” says Marcel James, director of U.S. sales, Antelope Audio. “Whether you apply it to individual tracks or to the mix bus, the FET-A76 puts the musical analog character and all the incredible dynamics shaping potential of this classic circuit at the fingertips of Antelope Audio’s users to make their mixes shine.”