Analog

Thursday, September 11, 2014

Keep It Cool: Three Rack Ventilation Methods

This article is provided by Commercial Integrator

 
It’s a truism that almost nothing is 100 percent efficient; a measure of the inefficiency of most devices we deal with is how much heat they produce.

Heat is energy that has been lost for one reason or another and is not available to do the task at hand, whether that is moving our car along the road, moving a loudspeaker’s cone to produce sound, or moving large quantities of 1’s and 0’s around at very high speeds.

Heat that is not properly dealt with in our AV or IT systems can cause problems.

Digital electronics — be they satellite receivers, DVD players, codecs, or computers — may “lock up” and become unresponsive when overheated. Analog components appear to be more heat-tolerant, but in reality electrolytic capacitors are drying out and thinner-than-hair wires inside integrated circuits and transistors are being subjected to repeated thermal cycles of excessive expansion and contraction, leading to premature failure.

Modern AV and IT systems consist of various electronic components frequently mounted in racks, which may themselves be freestanding or in closets or other enclosures. Each electronic component in the system will generate some heat, and the systems designer and end user can ignore this at their peril.

The trivial case, in which a few devices are mounted in a skeletal rack frame, in the open, in conditioned space, and consume very low amounts of power, can safely be ignored. But such systems are few and far between today. More typical is a rack containing many power-hungry devices, either shrouded by side and back panels or located in a closet or millwork, or both.

In these cases, ignorance of likely damage from heat will be far from blissful. Overheated components will express their displeasure in any number of ways, from sub-par performance to catastrophic failure.

There are several ways to reduce the temperature within a rack. One is passive thermal management: allowing natural convection currents to let heated air rise and exit at the top of the rack while cooler air enters through an opening at a low point.

Convection, while ‘free,’ is a very weak force. It is dependent on the small difference in density of hot and cold air, which is why a hot air balloon is huge, yet capable of lifting only light loads.

Convection currents are easily blocked or disrupted should a vent be even partially obscured. Heat loads today, given the increasing use of digital devices and the tendency to install more equipment in smaller racks and enclosures, are too often beyond the ability of convection to even approach the necessary level of heat removal.

Another way to cool a rack is through air conditioning, or active refrigeration. Air conditioning systems, properly sized and installed, let us set rack temperatures as low as we want; the only caveats being that we don’t cool below the dew point and condense moisture on our equipment, or raise our energy bill to unacceptable levels.

While expensive to buy, install, and operate, air conditioning systems that are dedicated to electronic systems may be the only practical solution when heat loads are large.

Be wary when the air conditioning system is shared with people, as when the supply and/or return ducts are an extension of an HVAC system that also serves the building and its occupants.

The danger is that the thermostat may turn the system off when the occupants are comfortable or keep it from running at all in the cooler parts of the year, while the electronics are still generating the same amount of heat.

There is also the extreme case of HVAC systems installed in temperate areas, which can switch over to heating the building in cold weather.

If these potential problems can be avoided, dedicated air conditioning is an effective cooling technique, and in some cases the only practical solution to avoid damage by overheating.

Guidelines are not complicated; cool air should be delivered via a supply point high and in front of the rack, while the return for heated exhaust air should be located high and behind the rack.

Of the many types of analog and digital equipment being installed today, almost all fan-equipped components draw cooling air in through their front panels and exhaust it to the rear. The arrangement described allows a “waterfall” of cold air to fall in front of the rack where it can be pulled in, while a high-mounted exhaust fan in the top of the rack, or high on its rear panel, pulls heated air out into the return duct.

Integrators can accommodate those components without internal fans by placing passive vent panels below them. If the exhaust fan has been properly sized, it will pull conditioned air in. In some cases, it may be necessary to use one or more small fans inside the rack to prevent pockets of stagnant heated air from accumulating.

If the building’s HVAC system can accommodate the extra heat load, it may only be necessary to use the third rack cooling technique: active thermal management using only strategically located fans, eliminating the cost and complexity of refrigeration.

Moving the necessary number of cubic feet of air through a rack every minute can be accomplished using ventilation systems available on the market. For freestanding racks, it is a matter of pulling heated air out from the top of the rack and replacing it with cool room air entering at the bottom (we have made the assumption that the rack is in a conditioned space, and that the building’s HVAC system can deal with the heat generated in the rack).
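To put rough numbers on the airflow needed, a widely used rule of thumb (an assumption on my part, not something stated in this article) relates the required CFM to the heat load and the allowable air temperature rise:

```python
# Rule-of-thumb airflow sizing (a common industry approximation,
# not from the article): CFM ≈ 3.16 × heat load (W) / temperature rise (°F).

def required_cfm(heat_load_watts: float, delta_t_f: float = 20.0) -> float:
    """Approximate cubic feet per minute needed to carry away a heat
    load while letting the air warm by delta_t_f degrees Fahrenheit."""
    return 3.16 * heat_load_watts / delta_t_f

# Example: a rack dissipating 1,200 W with a 20 °F allowable rise
print(round(required_cfm(1200.0), 1))  # 189.6 CFM
```

Note that fan ratings are free-air figures; back pressure from grilles, filters, and ductwork reduces real-world delivery, which is one more reason to involve the cooling system maker’s technical personnel.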

Fan systems are available which can be mounted near or at the top of the rack. They draw heated air up from below and discharge it through their front panel into the room. Other systems discharge the heated air straight up through the top of the rack.

Neither of these systems is effective when the rack itself is enclosed in a closet or millwork. In this case, we must first get the hot air out of the rack, and then get the hot air out of the closet. Systems are available that perform both functions; they pull air up from lower parts of a rack, then move it through flexible tubing to an area outside the closet.

Better ventilation systems represent a trade-off between moving air and generating noise. When the system is in a remote equipment room, noise is not an issue; when it’s in the board room, noise from fan motors and air movement becomes bothersome. Consulting with cooling system makers’ technical personnel is a great help during the design process.

Frank Federman is CEO of Active Thermal Management.

Go to Commercial Integrator for more content on A/V, installed and commercial systems. And, read an extended version of this article at CorporateTechDecisions.com.

Posted by Keith Clark on 09/11 at 01:37 PM

Monday, September 08, 2014

Large-Format Audient ASP8024 Console Chosen For Studio des Bruères In France

As well as the attractive feature set, the large-format desk creates a visual impact in the studio

An Audient ASP8024 console was the first choice for Jean-Christian Maas, owner of Studio des Bruères, a highly specified studio situated in a tranquil yet accessible location in Poitiers, France.

Dedicated to the production—and co-production—of artists, Maas offers a different service to that of standard commercial studios.

“We prefer to work on projects that we record and mix, so that we can maximize the synergy and efficiency,” he explains. “The bulk of our work comes to us by word-of-mouth and is mostly jazz and classical music, but more recently we have also had some pop/rock.

“The Audient ASP8024 console is exactly what we were looking for because it suits the way we work in both the analog and digital domains,” Maas continues. “We use a wide range of software and hardware outboard and therefore were after a very transparent console to preserve signal integrity. Its transparency also allows us to use it as a summing unit (with real panoramic faders) and greatly contributes to the overall sound of the final mix.

“The preamps have an excellent dynamic and the EQ strips are more than correct. The routing is very well thought out too,” he concludes.

As well as the attractive feature set, the large-format desk—supplied by Funky Junk France—creates a visual impact in the studio. Comprising 100 square meters of analog consoles and outboard, vintage instruments and an array of digital tools, Studio des Bruères is perhaps best described as a venue created by musicians for musicians.

Audient

Posted by Keith Clark on 09/08 at 08:08 AM

Tuesday, September 02, 2014

Second Edition Of “Small Signal Audio Design” By Douglas Self Now Available From Focal Press

Provides an extensive repertoire of circuits that can be put together to make almost any type of audio system.

Focal Press has just released the second edition of Small Signal Audio Design by Douglas Self, providing ample coverage of preamplifiers and mixers, as well as a new chapter on headphone amplifiers. The handbook provides an extensive repertoire of circuits that can be put together to make almost any type of audio system.

Essential points of theory that bear on practical performance are lucidly and thoroughly explained, with the mathematics kept to a relative minimum. Self’s background in design for manufacture means that he keeps a wary eye on the cost of things. The book also includes a chapter on power supplies, full of practical ways to keep both the ripple and the cost down, showing how to power everything.

The book also teaches how to:

—Make amplifiers with apparently impossibly low noise

—Design discrete circuitry that can handle enormous signals with vanishingly low distortion

—Use humble low-gain transistors to make an amplifier with an input impedance of more than 50 Megohms

—Transform the performance of low-cost opamps

—Make filters with very low noise and distortion

—Make incredibly accurate volume controls

—Make a huge variety of audio equalizers

—Make magnetic cartridge preamplifiers that have noise so low it is limited by basic physics

—Sum, switch, clip, compress, and route audio signals

The second edition is expanded throughout (with added information on new ADCs and DACs, microcontrollers, more coverage of discrete op amp design, and many other topics), and includes a completely new chapter on headphone amplifiers.

Author Douglas Self studied engineering at Cambridge University, then psychoacoustics at Sussex University. He has spent many years working at the top level of design in both the professional audio and hi-fi industries, and has taken out a number of patents in the field of audio technology. He currently acts as a consultant engineer in the field of audio design.

Find out more and get Small Signal Audio Design, 2nd Edition here.

Focal Press

Posted by Keith Clark on 09/02 at 12:36 PM

Tuesday, August 26, 2014

Transform Your Mind: Chapter 6 Of White Paper Series On Transformers Now Available

Chapter 6, the final installment of PSW’s ongoing free white paper series on transformers, is now available for free download. (Get it here.)

The new installment, entitled “Exploring The Electrical Characteristics Of Audio Transformers,” explores the basic electrical characteristics of audio transformers to better understand the differences among various types, and why one transformer is better for a given application than another.

The white paper series is presented by Lundahl, a world leader in the design and production of transformers, and is authored by Ken DeLoria, senior technical editor of ProSoundWeb and Live Sound International, who has mixed innumerable shows and tuned hundreds of sound systems, and as the founder of Apogee Sound, developed the TEC Award-winning AE-9 loudspeaker.

Note that Chapter 1: An Introduction to Transformers in Audio Devices, Chapter 2: Transformers–Insurance Against Show-Stopping Problems, Chapter 3: Anatomy Of A Transformer, Chapter 4: An Interview With Per Lundahl, and Chapter 5: Understanding Impedance In Audio Transformers, are also still available for free download.

Again, download your free copy of “Chapter 6: Exploring The Electrical Characteristics Of Audio Transformers”—and get the entire set—by going here.

Lundahl

Posted by Keith Clark on 08/26 at 06:47 AM

Monday, August 18, 2014

CADAC CDC four Digital Console Fronts Large-Scale System For Celebration At Iconic Ibiza Club

Console heads up large-scale system incorporating Funktion-One loudspeakers powered by Full Fat Audio amps with XTA processing

Iconic Ibiza club Space recently celebrated its 25th anniversary with a birthday bash centered on its outdoor Flight Club arena, with regular Ibiza sound specialists Project Audio providing a CADAC CDC four compact digital console in front of a large-scale system incorporating Funktion-One loudspeakers powered by Full Fat Audio amps with XTA processing.

The club’s earlier 2014 season “Opening Fiesta” in late May saw the Funktion-One rig fronted with a CADAC LIVE1 analog console. Ibiza, noted for its legendary nightlife, is the third largest of the Balearic Islands in the Mediterranean Sea, 50 miles off the coast of the city of Valencia, in eastern Spain.

Dave Millard, founder of Full Fat Audio, was Project Audio’s sound engineer for both events at Space, working in partnership with Funktion-One chief Tony Andrews and Project Audio’s Ibiza system technician George Yankov.

“We used the LIVE1 on the opening party, but for the 25th Anniversary we needed to wireless mic a troupe of flamenco dancers on stage and use some effects on them, so we went with the CDC four for that,” says Yankov. “It was a real pleasure to use both the analog LIVE1 and digital CDC four. The audio performance of both consoles is equally excellent. I cannot recall another desk so transparent and with so much drive and finesse.

“The CDC four really allows the audio to breathe and just does not sound digital at all,” he continues. “Every nuance of a recording or live input can be heard, with even subtle changes to the controls. Bass performance is exciting with every note precise. Build quality is also first class and user interaction is straightforward.”

The 25th anniversary party featured a line-up of Playa d’en Bossa regulars and legends, including Nina Kraviz, Carl Craig, Jimmy Edgar, Shaun Reeves, Layo and Bushwacka!, Alfredo, José Padilla, Jose De Divina and César De Melero, a four-hour set from Erick Morillo, and ‘cameo’ spots from Fatboy Slim and Annie Mac.

CADAC

Posted by Keith Clark on 08/18 at 10:29 AM

Thursday, August 14, 2014

Digital Dharma: A/D Conversion And What It Means In Audio Terms

This article is provided by Rane Corporation.

 
Like everything else in the world, the audio industry has been radically and irrevocably changed by the digital revolution. No one has been spared.

Arguments will ensue forever about whether the true nature of the real world is analog or digital; whether the fundamental essence, or dharma, of life is continuous (analog) or exists in tiny little chunks (digital). Seek not that answer here.

Rather, let’s look at the dharma (essential function) of audio analog-to-digital (A/D) conversion.

It’s important at the outset of exploring digital audio to understand that once a waveform has been converted into digital format, nothing can inadvertently occur to change its sonic properties. While it remains in the digital domain, it’s only a series of digital words, representing numbers.

Aside from the gross example of having the digital processing actually fail and cause a word to be lost or corrupted into nonsense, nothing can change the sound of the word. It’s just a bunch of “ones” and “zeroes.” There are no “one-halves” or “three-quarters.”

The point is that sonically, it begins and ends with the conversion process. Nothing is more important to digital audio than data conversion. Everything in-between is just arithmetic and waiting. That’s why there is such a big to-do with data conversion. It really is that important. Everything else quite literally is just details.

We could even go so far as to say that data conversion is the art of digital audio while everything else is the science, in that it is data conversion that ultimately determines whether or not the original sound is preserved (and this comment certainly does not negate the enormous and exacting science involved in truly excellent data conversion.)

Because analog signals continuously vary between an infinite number of states while computers can only handle two, the signals must be converted into binary digital words before the computer can work. Each digital word represents the value of the signal at one precise point in time. Today’s common word lengths are 16-bits, 24-bits and 32-bits. Once converted into digital words, the information may be stored, transmitted, or operated upon within the computer.

In order to properly explore the critical interface between the analog and digital worlds, it’s first necessary to review a few fundamentals and a little history.

Binary & Decimal
Whenever we speak of “digital,” by inference, we speak of computers (throughout this paper the term “computer” is used to represent any digital-based piece of audio equipment).

And computers in their heart of hearts are really quite simple. They can only understand the most basic form of communication or information: yes/no, on/off, open/closed, here/gone, all of which can be symbolically represented by two things - any two things.

Two letters, two numbers, two colors, two tones, two temperatures, two charges… It doesn’t matter. Unless you have to build something that will recognize these two states - now it matters.

So, to keep it simple, we choose two numbers: one and zero, or, a “1” and a “0.”

Officially this is known as binary representation, from Latin bini—two by two. In mathematics this is a base-2 number system, as opposed to our decimal (from Latin decima a tenth part or tithe) number system, which is called base-10 because we use the ten numbers 0-9.

In binary we use only the numbers 0 and 1. “0” is a good symbol for no, off, closed, gone, etc., and “1” is easy to understand as meaning yes, on, open, here, etc. In electronics it’s easy to determine whether a circuit is open or closed, conducting or not conducting, has voltage or doesn’t have voltage.

Thus the binary number system found use in the very first computer, and nothing has changed today. Computers just got faster and smaller and cheaper, with memory size becoming incomprehensibly large in an incomprehensibly small space.

One problem with using binary numbers is that they become big and unwieldy in a hurry. For instance, it takes six digits to express my age in binary, but only two in decimal. But in binary, we’d better not call them “digits” since “digits” implies a human finger or toe, of which there are 10, so confusion reigns.

To get around that problem, John Tukey of Bell Laboratories dubbed the basic unit of information (as defined by Shannon—more on him later) a binary unit, or “binary digit” which became abbreviated to “bit.” A bit is the simplest possible message representing one of two states. So, I’m six-bits old. Well, not quite. But it takes 6-bits to express my age as 110111.

Let’s see how that works. I’m 55 years old. So in base-10 symbols that is “55,” which stands for five 1’s plus five 10’s. You may not have ever thought about it, but each digit in our everyday numbers represents an additional power of 10, beginning with 0.

Figure 1: Number representation systems.

That is, the first digit represents the number of 1’s (10⁰), the second digit represents the number of 10’s (10¹), the third digit represents the number of 100’s (10²), and so on. We can represent any size number by using this shorthand notation.

Binary number representation is just the same except substituting the powers of 2 for the powers of 10 [any base number system is represented in this manner].

Therefore (moving from right to left) each succeeding bit represents 2⁰ = 1, 2¹ = 2, 2² = 4, 2³ = 8, 2⁴ = 16, 2⁵ = 32, etc. Thus, my age breaks down as 1-1, 1-2, 1-4, 0-8, 1-16, and 1-32, represented as “110111,” which is 32+16+0+4+2+1 = 55. Or double-nickel to you cool cats.

Figure 1, above, shows the two examples.
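The base-2 arithmetic in the age example can be checked in a few lines of Python (variable names are mine):

```python
# Decimal 55 to binary and back, mirroring the hand calculation above.
n = 55
bits = bin(n)[2:]  # strip Python's "0b" prefix
print(bits)        # 110111

# Rebuild the value: each bit (right to left) contributes 2**position
value = sum(int(b) << i for i, b in enumerate(reversed(bits)))
print(value)       # 55
```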

Building Blocks
The French mathematician Fourier unknowingly laid the groundwork for A/D conversion in the early 19th century.

All data conversion techniques rely on looking at, or sampling, the input signal at regular intervals and creating a digital word that represents the value of the analog signal at that precise moment. The fact that we know this works lies with Nyquist.

Harry Nyquist discovered it while working at Bell Laboratories in the late 1920s and wrote a landmark paper describing the criteria for what we know today as sampled data systems.

Nyquist taught us that for periodic functions, if you sampled at a rate that was at least twice as fast as the signal of interest, then no information (data) would be lost upon reconstruction.

And since Fourier had already shown that all alternating signals are made up of nothing more than a sum of harmonically related sine and cosine waves, then audio signals are periodic functions and can be sampled without loss of information following Nyquist’s instructions.

This became known as the Nyquist Frequency, which is the highest frequency that may be accurately sampled, and is one-half of the sampling frequency.

For example, the theoretical Nyquist frequency for the audio CD (compact disc) system is 22.05 kHz, equaling one-half of the standardized sampling frequency of 44.1 kHz.

As powerful as Nyquist’s discoveries were, they were not without their dark side, with the biggest being aliasing frequencies. Following the Nyquist criteria (as it is now called) guarantees that no information will be lost; it does not, however, guarantee that no information will be gained.

Although by no means obvious, the act of sampling an analog signal at precise time intervals is an act of multiplying the input signal by the sampling pulses. This introduces the possibility of generating “false” signals indistinguishable from the original. In other words, given a set of sampled values, we cannot relate them specifically to one unique signal.

Figure 2: Aliasing frequencies.

As Figure 2 shows, the same set of samples could have resulted from any of the three waveforms shown. And from all possible sum and difference frequencies between the sampling frequency and the one being sampled.
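A short Python sketch makes the ambiguity concrete: a 100 Hz cosine and a 900 Hz cosine sampled at 1 kHz (900 Hz being the difference between the sampling frequency and 100 Hz) produce identical sample sets. The specific frequencies are my illustrative choices, not values from the text.

```python
import math

fs = 1000.0            # sampling frequency, Hz
f1, f2 = 100.0, 900.0  # f2 = fs - f1, an alias of f1

# Sample both cosines at the same instants t = k / fs
samples1 = [math.cos(2 * math.pi * f1 * k / fs) for k in range(8)]
samples2 = [math.cos(2 * math.pi * f2 * k / fs) for k in range(8)]

# From the samples alone, the two frequencies are indistinguishable
print(all(abs(a - b) < 1e-9 for a, b in zip(samples1, samples2)))  # True
```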

All such false waveforms that fit the sample data are called “aliases.” In audio, these frequencies show up mostly as intermodulation distortion products, and they arise from the random-like white noise or other ultrasonic signals present in every electronic system.

Solving the problem of aliasing frequencies is what improved audio conversion systems to today’s level of sophistication. And it was Claude Shannon who pointed the way. Shannon is recognized as the father of information theory: while a young engineer at Bell Laboratories in 1948, he defined an entirely new field of science.

Even before then his genius shined through for, while still a 22-year-old student at MIT, he showed in his master’s thesis how the algebra invented by the British mathematician George Boole in the mid-1800s could be applied to electronic circuits. Since that time, Boolean Algebra has been the rock of digital logic and computer design.

Another Solution
Shannon studied Nyquist’s work closely and came up with a deceptively simple addition. He observed (and proved) that if you restrict the input signal’s bandwidth to less than one-half the sampling frequency then no errors due to aliasing are possible.

So bandlimiting your input to no more than one-half the sampling frequency guarantees no aliasing. Cool…Only it’s not possible. In order to satisfy the Shannon limit (as it is called - Harry gets a “criteria” and Claude gets a “limit”) you must have the proverbial brick-wall, i.e., infinite-slope filter.

Well, this isn’t going to happen, not in this universe. You cannot guarantee that there is absolutely no signal (or noise) greater than the Nyquist frequency.

Fortunately there is a way around this problem. In fact, you go all the way around the problem and look at it from another direction.

If you cannot restrict the input bandwidth so aliasing does not occur, then solve the problem another way: Increase the sampling frequency until the aliasing products that do occur, do so at ultrasonic frequencies, and are effectively dealt with by a simple single-pole filter.

This is where the term “oversampling” comes in. For full spectrum audio the minimum sampling frequency must be 40 kHz, giving you a usable theoretical bandwidth of 20 kHz - the limit of normal human hearing. Sampling at anything significantly higher than 40 kHz is termed oversampling.

In just a few years’ time, we saw the audio industry go from the CD system standard of 44.1 kHz, and the pro audio quasi-standard of 48 kHz, to 8-times and 16-times oversampling frequencies of around 350 kHz and 700 kHz, respectively. With sampling frequencies this high, aliasing is no longer an issue.

O.K. So audio signals can be changed into digital words (digitized) without loss of information, and with no aliasing effects, as long as the sampling frequency is high enough. How is this done?

Determining Values
Quantizing is the process of determining which of the possible values (determined by the number of bits or voltage reference parts) is the closest value to the current sample, i.e., you are assigning a quantity to that sample.

Quantizing, by definition then, involves deciding between two values and thus always introduces error. How big the error, or how accurate the answer, depends on the number of bits. The more bits, the better the answer.

The converter has a reference voltage which is divided up into 2ⁿ parts, where n is the number of bits. Each part represents the same value.

Since you cannot resolve anything smaller than this value, there is error. There is always error in the conversion process. This is the accuracy issue.

Figure 3: 8-Bit resolution.

The number of bits determines the converter accuracy. For 8-bits, there are 2⁸ = 256 possible levels, as shown in Figure 3.

Since the signal swings positive and negative there are 128 levels for each direction. Assuming a ±5 V reference [3], this makes each division, or bit, equal to 39 mV (5/128 = 0.039).

Hence, an 8-bit system cannot resolve any change smaller than 39 mV. This means a worst case accuracy error of 0.78 percent.
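The resolution arithmetic generalizes to any bit depth; this Python sketch reproduces the 8-bit, ±5 V numbers above (the function name is mine):

```python
# Smallest resolvable voltage for a bipolar converter: the ±reference
# spans 2**(bits-1) levels per polarity, as in the text's 5 V / 128 example.

def step_size(reference_v: float, bits: int) -> float:
    """Voltage represented by one bit in a bipolar converter."""
    return reference_v / 2 ** (bits - 1)

step = step_size(5.0, 8)           # ±5 V reference, 8 bits
print(round(step * 1000, 1))       # 39.1 (mV, i.e. the ~39 mV in the text)
print(round(step / 5.0 * 100, 2))  # 0.78 (percent worst-case error)
```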

Each step size (resulting from dividing the reference into the number of equal parts dictated by the number of bits) is equal and is called a quantizing step (also called quantizing interval—see Figure 4).

Originally this step was termed the LSB (least significant bit) since it equals the value of the smallest coded bit; however, it is an illogical choice for mathematical treatments and has since been replaced by the more accurate term quantizing step.

Figure 4: Quantization, 3-bit, 50-volt example.

The error due to the quantizing process is called quantizing error (no definitional stretch here). As shown earlier, each time a sample is taken there is error.

Here’s the not obvious part: the quantizing error can be thought of as an unwanted signal which the quantizing process adds to the perfect original.

An example best illustrates this principle. Let the sampled input value be some arbitrarily chosen value, say, 2 volts. And let this be a 3-bit system with a 5 volt reference. The 3 bits divide the reference into 8 equal parts (2³ = 8) of 0.625 V each, as shown in Figure 4.

For the 2 volt input example, the converter must choose between either 1.875 volts or 2.50 volts, and since 2 volts is closer to 1.875 than 2.5, then it is the best fit. This results in a quantizing error of -0.125 volts, i.e., the quantized answer is too small by 0.125 volts.

If the input signal had been, say, 2.2 volts, then the quantized answer would have been 2.5 volts and the quantizing error would have been +0.3 volts, i.e., too big by 0.3 volts.
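The Figure 4 arithmetic can be reproduced with a round-to-nearest quantizer; a minimal Python sketch (names are mine):

```python
def quantize(v: float, reference: float = 5.0, bits: int = 3):
    """Round a sampled voltage to the nearest quantizing step and
    return (quantized value, quantizing error)."""
    step = reference / 2 ** bits  # 0.625 V for 3 bits and a 5 V reference
    q = round(v / step) * step
    return q, q - v

q, err = quantize(2.0)
print(q, round(err, 3))  # 1.875 -0.125  (answer too small by 0.125 V)
q, err = quantize(2.2)
print(q, round(err, 3))  # 2.5 0.3      (answer too big by 0.3 V)
```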

These alternating unwanted signals added by quantizing form a quantizing error waveform, which is a kind of additive broadband noise that is generally uncorrelated with the signal and is called quantizing noise.

Since the quantizing error is essentially random (i.e. uncorrelated with the input) it can be thought of like white noise (noise with equal amounts of all frequencies). This is not quite the same thing as thermal noise, but it is similar. The energy of this added noise is equally spread over the band from dc to one-half the sampling rate. This is a most important point and will be returned to when we discuss delta-sigma converters and their use of extreme oversampling.

Early Conversion
Successive approximation is one of the earliest and most successful analog-to-digital conversion techniques. Therefore, it is no surprise it became the initial A/D workhorse of the digital audio revolution. Successive approximation paved the way for the delta-sigma techniques to follow.

The heart of any A/D circuit is a comparator. A comparator is an electronic block whose output is determined by comparing the values of its two inputs. If the positive input is larger than the negative input then the output swings positive, and if the negative input exceeds the positive input, the output swings negative.

Therefore, if a reference voltage is connected to one input and an unknown input signal is applied to the other input, you now have a device that can compare and tell you which is larger. Thus a comparator gives you a “high output” (which could be defined to be a “1”) when the input signal exceeds the reference, or a “low output” (which could be defined to be a “0”) when it does not.

Figure 5A: Successive approximation, example.

A comparator is the key ingredient in the successive approximation technique as shown in Figure 5A and Figure 5B. The name successive approximation nicely sums up how the data conversion is done. The circuit evaluates each sample and creates a digital word representing the closest binary value.

The process takes the same number of steps as bits available, i.e., a 16-bit system requires 16 steps for each sample. The analog sample is successively compared to determine the digital code, beginning with the determination of the biggest (most significant) bit of the code.

Figure 5B: Successive approximation, A/D converter.

The description given in Daniel Sheingold’s Analog-Digital Conversion Handbook offers the best analogy as to how successive approximation works. The process is exactly analogous to a gold miner’s assay scale, or a chemical balance as seen in Figure 5A.

This type of scale comes with a set of graduated weights, each one half the value of the preceding one, such as 1 gram, 1/2 gram, 1/4 gram, 1/8 gram, etc. You compare the unknown sample against these known values by first placing the heaviest weight on the scale.

If it tips the scale you remove it; if it does not you leave it and go to the next smaller value. If that value tips the scale you remove it, if it does not you leave it and go to the next lower value, and so on until you reach the smallest weight that tips the scale. (When you get to the last weight, if it does not tip the scale, then you put the next highest weight back on, and that is your best answer.)

The sum of all the weights on the scale represents the closest value you can resolve.

In digital terms, we can analyze this example by saying that a “0” was assigned to each weight removed, and a “1” to each weight remaining—in essence creating a digital word equivalent to the unknown sample, with the number of bits equaling the number of weights.

And the quantizing error will be no more than 1/2 the smallest weight (or 1/2 quantizing step).

As stated earlier, the successive approximation technique must repeat this cycle for each sample. Even with today’s technology, this is a very time-consuming process and is still limited to relatively slow sampling rates, but it did get us into the 16-bit, 44.1 kHz digital audio world.
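The assay-scale procedure maps directly onto a few lines of code. Here is a minimal Python sketch (the function and variable names are mine, for illustration only); note that the loop runs once per bit, mirroring the 16 steps needed for a 16-bit sample:

```python
def sar_adc(sample, n_bits, full_scale=1.0):
    """Successive approximation: test each 'weight' from MSB to LSB,
    keeping it only if the running total still fits under the sample."""
    code = 0
    for bit in range(n_bits - 1, -1, -1):
        trial = code | (1 << bit)
        # Comparator: does the trial level still fit under the sample?
        if trial * (full_scale / (1 << n_bits)) <= sample:
            code = trial      # weight stays on the scale
        # else: the weight tips the scale, so the bit stays 0
    return code
```

For a half-scale input, an 8-bit version settles on code 128, exactly the midpoint of the 256 available levels.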

PCM, PWM, EIEIO
The successive approximation method of data conversion is an example of pulse code modulation, or PCM. Three elements are required: sampling, quantizing, and encoding into a fixed length digital word. The reverse process reconstructs the analog signal from the PCM code.

The output of a PCM system is a series of digital words, where the word size is determined by the available bits. For example, the output is a series of 8-bit words, or 16-bit words, or 20-bit words, etc., with each word representing the value of one sample.

Pulse width modulation, or PWM, is quite simple and quite different from PCM. Look at Figure 6.

Figure 6: Pulse width modulation (PWM).

In a typical PWM system, the analog input signal is applied to a comparator whose reference voltage is a triangle-shaped waveform whose repetition rate is the sampling frequency. This simple block forms what is called an analog modulator.

A simple way to understand the “modulation” process is to view the output with the input held steady at zero volts. The output forms a 50 percent duty cycle (50 percent high, 50 percent low) square wave. As long as there is no input, the output is a steady square wave.

As soon as the input is non-zero, the output becomes a pulse-width modulated waveform. That is, when the non-zero input is compared against the triangular reference voltage, it varies the length of time the output is either high or low.

For example, say there was a steady DC value applied to the input. For all samples, when the value of the triangle is less than the input value, the output stays low, and for all samples when it is greater than the input value, it changes state and remains high.

Therefore, if the triangle starts higher than the input value, the output goes high; at the next sample period the triangle has increased in value but is still more than the input, so the output remains high; this continues until the triangle reaches its apex and starts down again; eventually the triangle voltage drops below the input value and the output drops low and stays there until the reference exceeds the input again.

The resulting pulse-width modulated output, when averaged over time, gives the exact input voltage. For example, if the output spends exactly 50 percent of the time with an output of 5 volts, and 50 percent of the time at 0 volts, then the average output would be exactly 2.5 volts.
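That time-averaging behavior is easy to verify numerically. Below is a small Python sketch of my own (not from the article) that compares a DC input against one sampled period of a 0-to-1 triangle reference and returns the resulting duty cycle:

```python
def pwm_output(input_level, n_steps=1000):
    """Compare a DC input (0..1) against one period of a triangle
    reference; return the fraction of time the comparator is high.
    That fraction, averaged, reproduces the input level."""
    high = 0
    for i in range(n_steps):
        phase = i / n_steps
        # Triangle sweeps 0 -> 1 -> 0 over one period
        triangle = 2 * phase if phase < 0.5 else 2 * (1 - phase)
        if input_level > triangle:   # the comparator
            high += 1
    return high / n_steps
```

With a 0.5 input the comparator output is high about half the time, so the average output equals the input, matching the 2.5-volt example above.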

This is also an FM, or frequency-modulated system—the varying pulse-width translates into a varying frequency. And it is the core principle of most Class-D switching power amplifiers.

The analog input is converted into a variable pulse-width stream used to turn-on the output switching transistors. The analog output voltage is simply the average of the on-times of the positive and negative outputs.

Pretty amazing stuff from a simple comparator with a triangle waveform reference.

Another way to look at this is that this simple device actually codes a single bit of information, i.e., a comparator is a 1-bit A/D converter. PWM is an example of a 1-bit A/D encoding system. And a 1-bit A/D encoder forms the heart of delta-sigma modulation.

Modulation & Shaping
After 30 years, delta-sigma modulation (also sigma-delta) emerged as the most successful audio A/D converter technology.

It waited patiently for the semiconductor industry to develop the technologies necessary to integrate analog and digital circuitry on the same chip.

Today’s very high-speed “mixed-signal” IC processing allows the total integration of all the circuit elements necessary to create delta-sigma data converters of awesome magnitude.

Essentially a delta-sigma converter digitizes the audio signal with a very low resolution (1-bit) A/D converter at a very high sampling rate. It is the oversampling rate and subsequent digital processing that separates this from plain delta modulation (no sigma).

Referring back to the earlier discussion of quantizing noise, it’s possible to calculate the theoretical sine wave signal-to-noise (S/N) ratio (actually the signal-to-error ratio, but for our purposes it’s close enough to combine) of an A/D converter system knowing only n, the number of bits.

Doing some math shows that the value of the added quantizing noise relative to a maximum (full-scale) input equals 6.02n + 1.76 dB for a sine wave. For example, a perfect 16-bit system will have a S/N ratio of 98.1 dB, while a 1-bit delta-modulator A/D converter, on the other hand, will have only 7.78 dB!
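The formula is easy to check with a one-line helper (the function name is mine):

```python
def sine_snr_db(n_bits):
    """Theoretical S/N for a full-scale sine through an ideal
    n-bit quantizer: 6.02n + 1.76 dB."""
    return 6.02 * n_bits + 1.76
```

sine_snr_db(16) gives about 98.1 dB and sine_snr_db(1) gives 7.78 dB, matching the figures above; each added bit is worth about 6 dB.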

Figures 7A - 7E: Noise power redistribution and reduction due to oversampling, noise shaping and digital filtering.

To get something of an intuitive feel for this, consider that since there is only 1 bit, the amount of quantization error possible is as much as 1/2 bit. That is, since the converter must choose between the only two possibilities of maximum or minimum values, the error can be as much as half of that.

And since this quantization error shows up as added noise, it reduces the S/N to something on the order of 2:1, or 6 dB.

One attribute shines true above all others for delta-sigma converters and makes them a superior audio converter: simplicity. The simplicity of 1-bit technology makes the conversion process very fast, and very fast conversions allows use of extreme oversampling.

And extreme oversampling pushes the quantizing noise and aliasing artifacts way out to megawiggle-land, where they are easily dealt with by digital filters (typically 64-times oversampling is used, resulting in a sampling frequency on the order of 3 MHz).

To get a better understanding of how oversampling reduces audible quantization noise, we need to think in terms of noise power. From physics you may remember that energy is conserved: you can change its form, but you cannot create or destroy it. Quantization noise power behaves similarly.

With oversampling, the quantization noise power is spread over a band that is as many times larger as the oversampling ratio. For example, with 64-times oversampling the noise power is spread over a band 64 times wider, reducing its power density in the audio band to 1/64th of what it would otherwise be.
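In decibel terms, spreading the noise over a 64-times wider band reduces the in-band portion by 10·log10(64), about 18 dB, or roughly three extra bits of resolution before any noise shaping is applied. A quick sketch (the helper name is mine):

```python
import math

def oversampling_gain_db(ratio):
    """In-band quantization-noise reduction from oversampling alone
    (flat noise spectrum, no shaping): 10*log10(ratio) dB."""
    return 10 * math.log10(ratio)
```

oversampling_gain_db(64) comes out to about 18.1 dB; each doubling of the sample rate buys about 3 dB, or half a bit.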

Figures 7A through 7E illustrate noise power redistribution and reduction due to oversampling, noise shaping and digital filtering.

Noise shaping helps reduce in-band noise even more. Oversampling pushes out the noise, but it does so uniformly, that is, the spectrum is still flat. Noise shaping changes that.

Using very clever complex algorithms and circuit tricks, noise shaping contours the noise so that it is reduced in the audible regions and increased in the inaudible regions.

Conservation still holds, the total noise is the same, but the amount of noise present in the audio band is decreased while simultaneously increasing the noise out-of-band—then the digital filter eliminates it. Very slick.

As shown in Figure 8, a delta-sigma modulator consists of three parts: an analog modulator, a digital filter and a decimation circuit.

The analog modulator is the 1-bit converter discussed previously with the change of integrating the analog signal before performing the delta modulation. (The integral of the analog signal is encoded rather than the change in the analog signal, as is the case for traditional delta modulation.)

Figure 8: Delta-sigma A/D converter.

Oversampling and noise shaping push and contour all the bad stuff (aliasing, quantizing noise, etc.) so the digital filter can suppress it.

The decimation circuit, or decimator, is the digital circuitry that generates the correct output word length of 16, 20, or 24 bits and restores the desired output sample frequency. It is a digital sample-rate reduction filter, sometimes termed downsampling (as opposed to oversampling), since it is here that the sample rate is returned from its 64-times rate to the normal CD rate of 44.1 kHz, or perhaps to 48 kHz or even 96 kHz for pro audio applications.
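To make the block diagram concrete, here is a toy first-order delta-sigma modulator in Python. This sketches only the analog-modulator portion (real converters use higher-order loops plus the digital filter and decimator), and all names are mine:

```python
def delta_sigma_1bit(samples):
    """First-order delta-sigma modulator sketch: integrate the
    difference between the input and the fed-back 1-bit output,
    then quantize with a comparator. Input assumed in -1..+1;
    output bits are +/-1."""
    integrator = 0.0
    feedback = 0
    bits = []
    for x in samples:
        integrator += x - feedback            # delta, then sigma (integrate)
        feedback = 1 if integrator >= 0 else -1   # 1-bit comparator
        bits.append(feedback)
    return bits
```

For a steady input, the density of +1s in the bit stream tracks the input level: averaging the stream for a constant 0.5 input comes out very near 0.5, which is exactly the information the decimator extracts.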

The net result is much greater resolution and dynamic range, with increased S/N and far less distortion compared to successive approximation techniques—all at lower costs.

Good Noise?
Now that oversampling helped get rid of the bad noise, let’s add some good noise—dither noise. Dither is one of life’s many trade-offs. Here the trade-off is between noise and resolution. Believe it or not, we can introduce dither (a form of noise) and increase our ability to resolve very small values.

Values, in fact, smaller than our smallest bit… Now that’s a good trick. Perhaps you can begin to grasp the concept by making an analogy between dither and anti-lock brakes. Get it? No? Here’s how this analogy works: With regular brakes, if you just stomp on them, you probably create an unsafe skid situation for the car… Not a good idea.

Instead, if you rapidly tap the brakes, you control the stopping without skidding. We shall call this “dithering the brakes.” What you have done is introduce “noise” (tapping) to an otherwise rigidly binary (on or off) function.

So by “tapping” on our analog signal, we can improve our ability to resolve it. By introducing noise, the converter rapidly switches between two quantization levels, rather than picking one or the other, when neither is really correct. Sonically, this comes out as noise, rather than a discrete level with error. Subjectively, what would have been perceived as distortion is now heard as noise.

Let’s look at this in more detail. The problem dither helps to solve is that of quantization error, caused by the data converter being forced to choose one of two exact levels for each bit it resolves. It cannot choose between levels; it must pick one or the other.

With 16-bit systems, the digitized waveform for high frequency, low signal levels looks very much like a steep staircase with few steps. An examination of the spectral analysis of this waveform reveals lots of nasty sounding distortion products. We can improve this result either by adding more bits, or by adding dither.

Prior to 1997, adding more bits for better resolution was straightforward, but expensive, thereby making dither an inexpensive compromise; today, however, there is less need.

The dither noise is added to the low-level signal before conversion. The mixed noise causes the small signal to jump around, which causes the converter to switch rapidly between levels rather than being forced to choose between two fixed values.

Now the digitized waveform still looks like a steep staircase, but each step, instead of being smooth, is comprised of many narrow strips, like vertical Venetian blinds.

Figure 9: A - input signal; B - output signal (no dither); C - total error signal (no dither); D - power spectrum of output signal (no dither); E - input signal; F - output signal (with dither); G - total error signal (with dither); H - power spectrum of output signal (with dither).

The spectral analysis of this waveform shows almost no distortion products at all, albeit with an increase in the noise content. The dither has caused the distortion products to be pushed out beyond audibility, and replaced with an increase in wideband noise. Figure 9 diagrams this process.
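A small numerical experiment shows the effect. The sketch below quantizes a sine whose amplitude is less than half an LSB; without dither it vanishes entirely, while TPDF (triangular) dither, a common choice that the article doesn’t specify, lets the signal survive as an average (all names here are mine):

```python
import math, random

def quantize(x, step):
    return step * round(x / step)

def digitize(signal, step, dither=False):
    """Quantize a signal, optionally adding TPDF dither (the sum of
    two uniform random values, +/-1 LSB peak) before the quantizer."""
    out = []
    for x in signal:
        d = (random.random() + random.random() - 1) * step if dither else 0.0
        out.append(quantize(x + d, step))
    return out
```

Without dither, every sample of the sub-LSB sine rounds to zero and the information is gone; with dither, the output correlates with the input, recovering values “smaller than our smallest bit.”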

Wrap With Bandwidth
Due to the oversampling and noise shaping characteristics of delta-sigma A/D converters, certain measurements must use the appropriate bandwidth or inaccurate answers result. Specifications such as signal-to-noise, dynamic range, and distortion are subject to misleading results if the wrong bandwidth is used.

Because noise shaping purposely reduces audible noise by shifting the noise to inaudible higher frequencies, taking measurements over a bandwidth wider than 20 kHz results in answers that do not correlate with the listening experience. Therefore, it’s important to set the correct measurement bandwidth to obtain meaningful data.

Dennis Bohn is a principal partner and vice president of research & development at Rane Corporation. He holds BSEE and MSEE degrees from the University of California at Berkeley. Prior to Rane, he worked as engineering manager for Phase Linear Corporation and as audio application engineer at National Semiconductor Corporation. Bohn is a Fellow of the AES, holds two U.S. patents, is listed in Who’s Who In America and authored the entry on “Equalizers” for the McGraw-Hill Encyclopedia of Science & Technology, 7th edition.

 

Posted by Keith Clark on 08/14 at 02:08 PM

Monday, August 11, 2014

In The Studio: Audio Effects Explained (Includes Audio Samples)

This article is provided by Audio Geek Zine.

 
A while ago I mentioned using modulation effects to help create movement within a mix. Here, I’ll explain the different types of modulation effects that we have available for mixing, and then move along to gates, compression, EQ, delay, reverb, de-essing, and a whole lot more.

The modulation effects I’ll be discussing include:

—Tremolo
—Vibrato
—Flanging
—Phasing or Phase Shifting
—Chorus

I’ll start with some easy ones then move on to the harder to explain—but more commonly used—effects.

All of them are built around a low-frequency oscillator, more commonly referred to as just an LFO. An LFO is a signal, usually below 20 Hz, that creates a pulsating rhythm rather than an audible tone.

These are used to manipulate synthesizer tones and, as you will see, to create various modulation effects. All of the effects listed here use a sine wave as the LFO wave shape.


Tremolo is an effect where the LFO is modulating the volume of a signal. The signal attenuation amount is controlled by the depth and the rate adjusts the speed of the LFO cycles.

Listen to an example of Tremolo
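As a concrete illustration, tremolo reduces to one multiply per sample. Here is a minimal Python sketch with the usual depth and rate controls (the exact scaling of depth is my assumption; plug-ins differ):

```python
import math

def tremolo(samples, fs, rate_hz=5.0, depth=0.5):
    """Modulate gain with a sine LFO: the gain swings between 1.0
    and (1 - depth), cycling rate_hz times per second."""
    out = []
    for n, x in enumerate(samples):
        lfo = math.sin(2 * math.pi * rate_hz * n / fs)
        gain = 1.0 - depth * 0.5 * (1.0 + lfo)   # dips from 1 to 1-depth
        out.append(x * gain)
    return out
```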

Vibrato is an effect where the LFO is modulating the pitch of a signal. This is accomplished by delaying the incoming sound and changing the delay time continually. The effect is usually not mixed in with the dry signal. The depth control adjusts the maximum delay time, and rate controls the LFO cycle.

Listen to an example of Vibrato

Flanging is created by mixing a signal with a slightly delayed copy of itself, where the length of the delay is constantly changing. Historically this was accomplished by recording the same sound to two tape machines, playing them back at the same time while pushing down lightly on one of the reels, slowing down one side. The edge of a reel of tape is called the flange, hence the name of the effect.

These days we accomplish the same effect in a much less mechanical way. Essentially the signal is split, one part gets delayed and a low frequency oscillator keeps the delay time constantly changing. Combining the delayed signal with the original signal results in comb filtering, notches in the frequency spectrum where the signal is out of phase.

We usually have depth and rate controls. The depth controls how much of the delayed signal is added to the original, and the rate controls how fast it will change.
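In code, the split-delay-recombine structure is only a few lines. This sketch uses nearest-sample delays for clarity (a real flanger would interpolate between samples), and the parameter defaults are my guesses at typical values:

```python
import math

def flanger(samples, fs, rate_hz=0.25, max_delay_ms=5.0, depth=0.7):
    """Mix the input with a copy whose delay time is swept by a sine
    LFO, producing the moving comb-filter notches described above."""
    max_d = int(fs * max_delay_ms / 1000)
    out = []
    for n, x in enumerate(samples):
        sweep = 0.5 * (1 + math.sin(2 * math.pi * rate_hz * n / fs))
        d = int(sweep * max_d)                    # swept delay, in samples
        delayed = samples[n - d] if n - d >= 0 else 0.0
        out.append(x + depth * delayed)           # recombine -> comb filter
    return out
```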

Phasing (or phase shifting) is a similar effect to flanging, but is accomplished in a much different way. Phasers split the signal - one part goes through an all-pass filter whose response is swept by an LFO, and is then recombined with the original sound.

An all-pass filter lets all frequencies through without attenuation, but inverts the phase of various frequencies. It actually is delaying the signal, but not all of it at the same time. This time the LFO changes which frequencies are affected.

Phase shifters have two main parameters: Sweep Depth, which is how far the notches sweep up and down the frequency range; and Speed or Rate, which is how many times the notches are swept up and down per second.

Listen to an example of Phasing

Chorus is created in nearly the same way as Flanging; the main difference is that Chorus uses a longer delay time, somewhere between 20-30 ms, compared to Flanging’s 1-10 ms. It doesn’t have the same sort of sweeping characteristic that Flanging has; instead it affects the pitch.

Again the LFO is controlling the delay time. The depth control affects how much the total delay time changes over time. Changing the delay time up and down results in slight pitch shifting.

Listen to an example of Chorus

You may have noticed that the majority of effects here involve delay. You can recreate most of the effects by using a digital delay with rate and depth controls, such as the Avid ModDelay2.

Reverb

Here I’ll cover the different types and methods, and I’ll also explain the most important parameters. I’ll mostly be talking about the kinds you will be using when mixing and what is available as plug-ins.

Digital Reverb Technology
There are two ways of creating a reverb effect in the digital world: by using mathematical calculations to create a sense of space, which is called algorithmic; and by capturing an impulse response, a snapshot of a real space, and applying it to the sound, which is called convolution.

Reverb is essentially a series of delayed signals, and algorithmic reverbs work pretty well to recreate this. Most reverb plugins, stomp boxes, and racks are algorithmic style.

When you want really realistic reverb, convolution cannot be beat. To create an impulse response, the creator goes into a room and records the sound of a starter pistol going off and the natural reverb of the room.

The recordings are then deconvolved in software, which removes the sound of the starter pistol from the recording, leaving only the reverb.

Sine wave sweeps can also be used to create the impulse. This is a more accurate method because it better captures the character of the room and the way different frequencies react in it.

The same process can be used to create impulse responses of speaker cabinets, guitar amps, vintage rack gear or basically anything that can make a sound.

Analog Reverb Types
In the analog world there are a few other ways, most of which will not be available to the home studio musician, except for their recreations in plug-ins. Analog reverbs come in three flavors—plate, spring, and chamber.

Invented in 1957 by EMT of Germany, the plate reverb consists of a thin metal plate suspended in a 4-foot by 8-foot soundproofed enclosure. A transducer similar to the voice coil of a cone loudspeaker is mounted on the plate to cause it to vibrate. Multiple reflections from the edges of the plate are picked up by two (for stereo) microphone-like transducers. Reverb time is varied by a damping pad, which can be pressed against the plate to absorb its energy more quickly.

This is what a plate reverb sounds like: platereverb.mp3

A spring reverb system uses a transducer at one end of a spring and a pickup at the other, similar to those used in plate reverbs, to create and capture vibrations within a metal spring. You find these in many guitar amps, but they were also available as stand alone effect boxes. They were a lot smaller than plate reverbs and cost a lot less.

This is a spring reverb: springverb.mp3

The first reverb effects used a real physical space as a natural echo chamber. A loudspeaker would play the sound, and then a microphone would pick it up again, including the effects of reverb. Although this is still a common technique, it requires a dedicated soundproofed room, and varying the reverb time is difficult.

This is a chamber: Chamber.mp3

These three types of reverb are all available in digital form in addition to a few other styles simulating real spaces, and others not found in nature.

Natural Reverb Types

Room – A room is anything from a classroom to a conference room. There is generally a short decay time of about 1 second: room.mp3

Hall – A hall is larger than a room; it could be anything from a small theatre with 1 second of decay up to a large concert hall with a decay time of up to 2.5 seconds: hall.mp3

Church – The decay time of a church can vary between 1.5 seconds to 2.5 seconds: church.mp3

And Cathedral decay times can go above 3.5 seconds: cathedral.mp3

Remember, the sound of a room is not just the decay time. The materials it was built with make a huge impact on the character of the sound. Stone, wood, metal and tile all sound drastically different.

There are also a few other types of reverb that are not natural - these are Non Linear, Gated and Reversed.

Non-Linear has a decay that doesn’t obey the laws of physics: non-lin.mp4

Gated was a popular effect in the 1980s, but it’s sounding pretty cheesy these days: gated.mp3

Reversed sounds like this: reverse.mp3

Reverb Parameters

Reverb Type – What kind of reverb emulation it is. There are Halls, Rooms, Chambers, Plates, etc…

Size – What the physical size of the space is. This can range from small through large.

Diffusion – How far apart the reflections are from each other.

Pre-Delay – Sets a time delay between the direct signal and the start of the reverb

Decay Time – Also known as RT60, which is how long it takes for the signal to drop 60 decibels in amplitude.

Mix (Wet/Dry) – Sets the balance between the dry signal and the effect signal. When you have the reverb on an insert you need to adjust the wet and dry ratio; when you are sharing the reverb in a send and return configuration, you want the mix to be 100 percent wet.

Early Reflection Level – Controls the level of the first reflection you hear. Early reflections help determine the dimensions of the room.

High Frequency Roll Off – Helps control the decay of high frequencies (as it is found in natural reverb).

Tips For Using Reverb

—Using pre delay can help keep your vocals up front, while still giving them space.

—Try to keep decay times short for faster tempo music.

—Filter out low frequencies before the reverb to keep it from sounding muddy

—Try de-essing the reverb to reduce harsh sibilance.

EQ & Filtering

The terms EQ and filter seem to mean different things.

Filtering is generally what we say when we want to remove frequencies, and EQ is when we want to shape the sound by boosting and cutting.

The truth is, it’s all filtering.

Parameters

Cutoff – Selects the frequency. This is measured in Hertz

Gain – How much boost or attenuation at the cutoff frequency. This is measured in decibels

Shape or Type – This chooses what kind of filter you will be using. The filter shapes are hi and low pass, band pass, peaking, notch, and shelf

Quality or Width – Usually just referred to as Q; this sets the shape of the EQ curve and how much of the surrounding frequencies will be affected.

What does an equalizer actually do?
An equalizer adjusts the balance of frequencies across the audible range. EQ is an incredibly powerful tool for crafting a mix.

Filter Shapes

A low-pass filter, also known as a high-cut filter, removes frequencies above the cutoff. A high-pass filter, or low-cut filter, does the opposite: it removes everything below the cutoff.

Low-Pass Filter Clip (click to play)
High-Pass Filter Clip (click to play)

When you use both of these filter types at once, it’s called a band-pass filter: the top and bottom frequencies are removed. With these three filter shapes, Q affects the steepness of the filter.

Band-Pass Filter Clip (click to play)

A notch filter is the opposite of a band-pass filter: it lets all frequencies through except for a narrow notch in the spectrum, which is attenuated greatly. The Q affects the width of the notch.

Notch Filter Clip (click to play)

There are two filter shapes that allow you to control how much the frequencies will be boosted or attenuated.

A low-shelf EQ will boost or cut anything below the cutoff; a high-shelf EQ gives you boost or cut above the cutoff. You can choose how steep the slope is with the Q control.

Low-Shelf Clip (click to play)
High-Shelf Clip (click to play)

A peaking filter is also known as a bell curve EQ; you can boost or cut any frequency with a peak at the cutoff and a slope on either side.

Peaking Filter Clip (click to play)

EQ Designs

There are two main types of EQ designs: graphic and parametric.

Graphic equalizers give you multiple overlapping frequency bands at fixed frequencies, each with a peaking filter and adjustable gain. These are most commonly seen on consumer music players, but in professional audio they are very useful for mixing live music.

Parametric equalizers give you the most flexibility: you can choose the shape, cutoff, gain, and quality. This is the type you will be using when mixing. Nearly all plug-in equalizers are parametric.

EQ Usage Tips

—Use high-pass filters to remove unnecessary low frequencies from your tracks

—Use notch filters to remove unwanted noises from a recording

—Get rid of the frequencies you don’t need before boosting the ones you do; it may not be your first instinct when EQing, but it works a lot better

—High Q values will cause ringing or oscillation when boosted; this is not usually something you want to happen

—Adjust the EQ so that the level remains constant whether engaged or bypassed; it’s too easy to be fooled into thinking louder is better

Some of my favorite equalizer plug-ins:

Apulsoft ApQualizer: Very clean EQ with 64 bands, frequency analyzer and complete control.

Stillwell Audio Vibe-EQ: A vintage style EQ that has some nice coloration, I like it most on electric guitars.

Avid EQ III: Standard included Pro Tools plug-in does the job 99 percent of the time.

Delay

In its simplest form, a delay is made up of very few components: an audio input, a recording device, a playback device, and an audio output.

Tape Delay
Early delay processors, such as the Echosonic, Echoplex and the Roland Space Echo, were based on analog tape technology. They used magnetic tape as the recording and playback medium.

Some of these devices adjusted delay time by adjusting the distance between the playback and record heads, and others used fixed heads and adjustable tape speed.

Analog Delay
Analog delay processors became available in the 1970s and used solid state electronics as an alternative to the tape delay.

Digital Delay
In the late 1970s, inexpensive digital technology led to the first digital delays. Digital delay systems function by sampling the input signal through an analog-to-digital converter, recording the signal into a storage buffer, and then playing back the stored audio based on parameters set by the user. The delayed (“wet”) output may be mixed with the unmodified (“dry”) signal at the output.

Software Delay
And these days you’ll most likely be using plug-ins for your delay processing. The same principles apply, just without the moving parts; additionally, plug-ins can sound pretty close to any of the other styles, or be totally unique, like OhmBoyz.


Effect Parameters
OK, so that’s it for the history of delay processors. Now let’s move on to the parameters.

Delay Time—How long before the sound is repeated

Tempo Sync—Each repeat of the delay will be in time with the song, e.g., 1/4 notes or 1/8 notes

Tap Tempo—Tap this button along with the song to set the delay time

Feedback—Output is routed back to input for additional repeats

Mix/Wet-Dry—Mix of original signal with delayed signal

Rate—LFO rate to change delay time

Depth—Range of delay time change for LFO

Filter—Usually a high cut filter, each repeat gets darker sounding

Stereo delays often have separate Left and Right controls.
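The record-then-playback structure maps neatly onto a circular buffer in code. Here is a minimal mono sketch (names and default values are mine, not from the article):

```python
def feedback_delay(samples, fs, delay_ms=300.0, feedback=0.4, mix=0.5):
    """Basic digital delay: a circular buffer stands in for the
    storage buffer; feedback routes output back to the input for
    repeats, and mix blends wet against dry."""
    d = max(1, int(fs * delay_ms / 1000))
    buf = [0.0] * d
    out = []
    for n, x in enumerate(samples):
        wet = buf[n % d]                  # 'playback head'
        buf[n % d] = x + feedback * wet   # 'record head' plus feedback
        out.append((1 - mix) * x + mix * wet)
    return out
```

Feeding in a single impulse yields echoes spaced at the delay time, each one feedback-times quieter than the last.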

What Does It Do?
Now, what sort of sounds can we get with delay processors?

An automatic double tracking effect can be accomplished by taking a mono signal, running it into a stereo delay, leaving the left side unprocessed, and putting a very short delay on the right side. Have a listen here.

A slap back or slap delay has a longer delay time, from about 75 to 200 milliseconds. This sort of delay was characteristic of ’50s rock n roll records. Listen to it on guitar here.

A ping pong delay uses two separate delay processors that feed into each other. First the dry signal is heard, the signal is sent to the left side, this delayed signal is sent to the right side, and the right side is sent back to the left.

Chorus, flanging and phasing can all be created with delays as well. Listen to The Home Recording Show #11 or read about it here for more on that.

Tips On Using Delay

—On vocals, try using a short delay instead of reverb, sometimes it works better.

—Set up a ping pong delay after a large reverb, so the reverb seems to get steadily wider.

—Be careful with that feedback control, things can get very loud, very quickly.

Gates, Comps, De-Essers

A noise gate is a form of dynamics processing used to increase dynamic range by lowering the noise floor. It is an excellent tool for removing hum from an amp, cleaning up drum tracks between beats, and reducing background noise in dialog, and it can even be used to reduce the amount of reverb in a recording.

The common parameters for a noise gate are:

Threshold – Sets the level at which the gate will open; when the signal level drops below the threshold, the gate closes and mutes the output.

Attack – How fast the gate opens.

Hold – How long before the gate starts to close.

Release – a.k.a. decay; how long until the gate is fully closed again.

Range – How much the gated signal will be attenuated.

Sidechain – For setting an alternate signal for the gate to be triggered from, sometimes called a Key.

Filters – The filters section allows you to fine tune the sidechain signal.
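The parameters above can be wired into a heavily simplified gate. This sketch detects level from the absolute sample value and implements only threshold, hold, and range; attack and release ramps (and any sidechain filtering) are left out for brevity, so a real gate would sound much smoother:

```python
def noise_gate(samples, threshold, range_db=-80.0, hold=64):
    """Simplified gate: open whenever the level crosses the
    threshold, stay open for 'hold' samples, then attenuate
    by the range amount (no attack/release smoothing)."""
    atten = 10 ** (range_db / 20)   # closed-gate gain, from range in dB
    count = 0
    out = []
    for x in samples:
        if abs(x) >= threshold:
            count = hold            # (re)open and restart the hold timer
        gain = 1.0 if count > 0 else atten
        count = max(0, count - 1)
        out.append(x * gain)
    return out
```

A burst above threshold opens the gate; once the hold counter runs out, quiet material is attenuated by the range amount.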

What’s It For?

The normal use for gating is for removing background noise. An essential tool for clean dialog recording. Some other uses for gates are gated reverb and using the sidechain to activate other effects.

How To Set A Noise Gate

To set up a gate properly, start with the attack, hold, and release as fast as possible. Set the range to maximum, and the threshold to 0 dB.

Start lowering the threshold until the sound starts to get chopped up by the gate. Slow down the attack time to remove any unnatural popping. Adjust the hold and release times to get a more natural decay.

If you don’t want the background noise to be turned down as much, you can reduce the range control.

Other Uses

Gated reverb was a popular effect in the 80s, mostly because of Phil Collins records.

To set it up, take your drum tracks and send them to a stereo reverb with a large room preset. After the reverb, insert a stereo gate. Adjust the gate settings so that the reverb is cut off before the next hit.

In this example you’ll hear the unprocessed drums, then with reverb, then adding the gate. (Listen)

Favorite Gates

The classic Drawmer DS201 is a hardware noise gate that is hard to beat.

The gate on the Waves SSL E-Channel is good, simple and effective.

The free ReaGate VST is quite good as well.

Noise gates aren’t very much fun to talk about, but they are a powerful tool that you need to know how to use.

Compression

Compression is an effect that can take a while to understand because the results are not always as obvious as other effects. To explain it as simply as possible: when a signal goes into a compressor, it gets turned down. That’s it. How it does this, how fast, and how smoothly is what makes each one unique.

Compressor Controls

Most compressors will have the same set of controls:

The Threshold control sets what level will start the gain reduction.

The Ratio sets how much gain reduction is applied: with a 4:1 ratio, for every 4 dB of signal above the threshold, 1 dB will be allowed through.

The Attack control sets how fast the compressor reacts to peaks.

The Release control sets how fast the compressor recovers as the signal falls back below the threshold.

Makeup gain is used to bring up the overall level of the compressor after the peaks have been reduced.

Sometimes there is an auto makeup gain control, which will increase the output level to match the gain reduction.

Some compressors have a knee control that starts compressing at a lower ratio as the threshold is approached; this is very helpful for more natural compression.

Compressors will usually have a few meters: input level, gain reduction and output level. If there are only two meters, there is usually a switch to change the output meter to show gain reduction. Gain reduction meters move in the opposite direction of the level meters.
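The ratio arithmetic above (4 dB in for 1 dB out past the threshold at 4:1) can be written as a static transfer curve. This is a minimal sketch of mine that ignores attack, release and knee:

```python
def compressor_out_db(in_db, threshold_db, ratio):
    """Static (hard-knee) compression curve: below the threshold the signal
    is untouched; above it, every `ratio` dB of input yields 1 dB of output."""
    if in_db <= threshold_db:
        return in_db
    return threshold_db + (in_db - threshold_db) / ratio

# 4:1 ratio with a -20 dB threshold: a peak 4 dB over the threshold
# comes out only 1 dB over it.
print(compressor_out_db(-16.0, -20.0, 4.0))  # -19.0
print(compressor_out_db(-30.0, -20.0, 4.0))  # -30.0 (below threshold: untouched)
```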

Setting A Compressor

This is my method for setting a compressor:

I choose a ratio depending on how aggressive I want the compression to be. The type of sound I’m using it on determines this, softer sounds like voice get lower ratios, bass gets a medium ratio and drums get a higher ratio.

I turn the attack and release controls to the fastest setting, and make sure the meter is showing gain reduction.

Then I lower the threshold level until I’m getting about 1 decibel of gain reduction on the peaks.

From there I’ll fine tune the attack and release for whatever sounds most natural, and use the makeup gain to match the output with the input level.

If I want more compression, I’ll lower the threshold more.

Here’s an example of some electric guitar with and without compression. I’m using more compression than I normally would on this so that the effect will be easier to hear. It should be pretty obvious that the compressor has evened out the dynamics of the performance. (Listen)

Compression can bring out more details in a performance, but it will also bring up background noise, especially at higher ratios, and that’s not usually what you want.

A slow attack will let some of the transient through; you can use this when you want to increase the punch of drums. You want to compress the sustain of the drum, and use the makeup gain to make the drums larger than life.

In this example there is an ambient room mic for a drum kit. First you will hear it without compression, then with (actually with a ton of compression), and I’ll increase (slow down) the attack time with each loop. Notice the increased bigness of the drums, and how the transients get through and keep it punchy. (Listen)
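The attack behavior in that example can be sketched with a one-pole envelope follower feeding the gain computer. This is a generic illustration of mine, not any particular compressor’s detector:

```python
import numpy as np

def envelope(x, attack_coef, release_coef):
    """One-pole peak follower: a higher attack_coef means a slower attack,
    so short transients slip past before the detector catches up."""
    out = np.zeros(len(x))
    level = 0.0
    for i, v in enumerate(np.abs(x)):
        coef = attack_coef if v > level else release_coef
        level = coef * level + (1.0 - coef) * v
        out[i] = level
    return out

hit = np.array([1.0, 0.0, 0.0, 0.0])               # a lone drum transient
fast = envelope(hit, attack_coef=0.0, release_coef=0.99)
slow = envelope(hit, attack_coef=0.75, release_coef=0.99)
print(fast[0])  # 1.0  -> instant attack: the detector grabs the whole hit
print(slow[0])  # 0.25 -> slow attack: most of the transient sneaks through
```

The higher the detector reading, the more gain reduction is applied, so the slow-attack setting is what preserves the punch.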

Limiter

A limiter is a compressor whose output stays at or below a specific level regardless of the input level. Remember, it only turns down. The compression ratio starts at 10:1 and can go up to infinity. Limiters need very fast attack and release to be effective.

A brick-wall limiter (aka Maximizer), is a mastering tool used to increase the volume of a song as much as possible. These brick-wall limiters have an infinite ratio and will not let anything past the threshold. This type of limiter has two main controls, one for threshold and one for the maximum output level.

With these you basically set the maximum output level, something like -0.02 dB, and then crank the threshold down to crush everything and make it sound really loud and obnoxious (like Death Magnetic). The misuse of the brick-wall limiter is often associated with the loudness war and with compression in general.
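As a static picture, the threshold and ceiling interact like this (a toy sketch of mine with round illustrative numbers; real maximizers add look-ahead and very fast release):

```python
def maximizer_db(in_db, threshold_db, ceiling_db=-0.5):
    """Brick-wall limiting as a static curve: clamp everything at the
    threshold, then shift the result so the threshold lands at the ceiling."""
    return min(in_db, threshold_db) + (ceiling_db - threshold_db)

# Threshold cranked down to -10 dB with a -0.5 dB output ceiling:
print(maximizer_db(-6.0, -10.0))   # -0.5: the peak is pinned at the ceiling
print(maximizer_db(-20.0, -10.0))  # -10.5: quieter material comes up 9.5 dB
```

Lowering the threshold further lifts everything else even more, which is exactly why the result gets louder, and eventually obnoxious.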


Multi-Band

Another common mastering tool is the multi-band compressor.

A multi-band compressor is essentially four compressors in one. The frequency range is split up into four bands like an equalizer: low, low-mid, high-mid and high frequency. This can give you much smoother compression with a lot more control.

De-Esser

There is one more type of dynamics processor: the de-esser. A de-esser is designed to reduce the harsh “ess” sounds (sibilance) in a voice. The compression works on a single frequency or frequency range rather than the entire input signal. These are generally used for voice processing, but you might find some other uses for them.

Recommended Plug-Ins

Simple compressor: Massey CT4

Advanced compressor: Avid Smack!

Master limiter: Massey L2007

Multi-band compressor: Wave Arts MultiDynamics 5

De-esser: Massey De-esser

Distortion

I find it hard to think about the electric guitar without thinking about distortion. There was a time when electric guitars were always clean. Hard to imagine now.

Traditionally distortion was an unwanted feature in amplifier design. Distortion only occurred when the amp was damaged or overdriven. Possibly the first intentional use of distortion was in the 1951 recording of “Rocket 88” by Ike Turner and the Kings of Rhythm.

Chuck Berry liked to use small tube amps that were easy to overdrive for his trademark sound and other guitarists would intentionally damage their speakers by poking holes in them, causing them to distort.

Leo Fender then started designing amps with some light compression and slight overdrive and Jim Marshall started to design the first amps with significant overdrive. That sound caught on quickly and by the time Jimi Hendrix was using Roger Mayer’s effects pedals, distortion would forever be associated with the electric guitar.

Not Just For Guitars

When you’re recording and mixing, you can use a bit of distortion to give any sound more edge, grit, energy and excitement. Drums, vocals, bass, samples – they can all benefit from a touch of distortion at times. Understanding the different ways distortion can be created and how they sound can help you get better sounds and make better recordings.

So What Is Distortion?

The word distortion means any change in the amplified waveform from the input signal. In the context of musical distortion this means clipping the peaks off the waveform. Because both valves and transistors behave linearly within a certain voltage region, distortion circuits are finely tuned so that the average signal peak just barely pushes the circuit into the clipping region, resulting in the softest clip and the least harsh distortion.

Because of this, as the guitar strings are plucked harder, the amount of distortion and the resulting volume both increase, and lighter plucking cleans-up the sound. Distortion adds harmonics and makes a sound more exciting.

Amp Distortion—Tube & Solid State

Valve Overdrive. Before transistors, the traditional way to create distortion was with vacuum valves (also known as vacuum tubes). A vacuum tube has a maximum input voltage determined by its bias and a minimum input voltage determined by its supply voltage.

When any part of the input waveform approaches these limits, the valve’s amplification becomes less linear, meaning that smaller voltages get amplified more than the large ones. This causes the peaks of the output waveform to be compressed, resulting in a waveform that looks “squashed.”

It is known as “soft clipping”, and generates even-order harmonics that add to the warmth and richness of the guitar’s tone. If the valve is driven harder, the compression becomes more extreme and the peaks of the waveforms are clipped, which adds additional odd-order harmonics, creating a “dirty” or “gritty” tone.

Valve distortion is commonly referred to as overdrive, as it is achieved by driving the valves in an amplifier at a higher level than can be handled cleanly. Multiple stages of valve gain/clipping can be “cascaded” to produce a thicker and more complex distortion sound.

In some modern valve effects, the “dirty” or “gritty” tone is actually achieved not by high voltage, but by running the circuit at voltages that are too low for the circuit components, resulting in greater non-linearity and distortion. These designs are referred to as “starved plate” configurations.

Transistor Clipping. On the other hand, transistor clipping stages behave far more linearly within their operating regions, and faithfully amplify the instrument’s signal until the input voltage falls outside the operating region, at which point the signal is clipped without compression; this is known as “hard clipping” or limiting. This type of distortion tends to produce more odd-order harmonics.

Electronically, it is usually achieved by either amplifying the signal to a point where it must be clipped to the supply rails, or by clipping the signal across diodes. Many solid state distortion devices attempt to emulate the sound of overdriven vacuum valves.
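The soft-versus-hard distinction can be sketched numerically. Here tanh stands in for valve-style soft clipping and a flat clamp for transistor/diode hard clipping; neither models a real circuit, it just shows the difference in how the peaks are shaped:

```python
import numpy as np

def soft_clip(x):
    """Valve-style soft clipping: gain falls off gradually, so peaks
    are squashed with rounded shoulders rather than cut flat."""
    return np.tanh(x)

def hard_clip(x, limit=1.0):
    """Transistor/diode-style hard clipping: perfectly linear up to the
    rails, then the peaks are sliced off flat."""
    return np.clip(x, -limit, limit)

t = np.arange(1000) / 1000.0
sine = 2.0 * np.sin(2.0 * np.pi * 5.0 * t)     # driven 6 dB past the rails
print(round(float(soft_clip(sine).max()), 3))  # 0.964: rounded, compressed top
print(float(hard_clip(sine).max()))            # 1.0: top sliced off flat
```

At small signal levels both stages are essentially linear, which matches the clean-up effect of lighter plucking described earlier.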

Distortion Pedals

Overdrive distortion. While the general purpose is to emulate classic “warm-tube” sounds, distortion pedals can be distinguished from overdrive pedals in that the intent is to provide players with instant access to the sound of a high-gain Marshall amplifier such as the JCM800 pushed past the point of tonal breakup and into the range of tonal distortion known to electric guitarists as “saturated gain.”

Some guitarists will use these pedals along with an already distorted amp or along with a milder overdrive effect to produce radically high-gain sounds. Although most distortion devices use solid-state circuitry, some “tube distortion” pedals are designed with preamplifier vacuum tubes. In some cases, tube distortion pedals use power tubes or a preamp tube used as a power tube driving a built-in “dummy load.”

The Boss DS-1 Distortion is a pedal with this design. This is what that sounds like: Listen

Overdrive/Crunch. Some distortion effects provide an “overdrive” effect. Either by using a vacuum tube or by using simulated tube modeling techniques, the top of the waveform is compressed, giving a smoother distorted signal than regular distortion effects. When an overdrive effect is used at a high setting, the sound’s waveform can become clipped, which imparts a gritty or “dirty” tone that sounds like a tube amplifier “driven” to its limit.

Used in conjunction with an amplifier, especially a tube amplifier, driven to the point of mild tonal breakup short of what would be generally considered distortion or overdrive, or along with another, stronger overdrive or distortion pedal, these can produce extremely thick distortion.

Today there is a huge variety of overdrive pedals, including the Boss OD-3 Overdrive: Listen

Fuzz. This was originally intended to recreate the classic 1960s tone of an overdriven tube amp combined with torn speaker cones. Old-school guitar players would use a screwdriver to poke several holes through the guitar amp speaker to achieve a similar sound.

Since the original designs, more extreme fuzz pedals have been designed and produced, incorporating octave-up effects, oscillation, gating, and greater amounts of distortion.

The Electro-Harmonix Big Muff is a classic fuzz pedal: Listen

Hi-Gain. High gain in normal electric guitar playing simply refers to a thick sound produced by heavily overdriven amplifier tubes, a distortion pedal, or some combination of both; the essential component is the typically loud, thick, harmonically rich, and sustaining quality of the tone.

However, the hi-gain sound of modern pedals is somewhat distinct from, although descended from, this sound. The distortion often produces sounds not possible any other way. Many extreme distortions are either hi-gain or the descendants of such.

An example of a hi-gain pedal is the Line 6 Uber Metal: Listen

Power-Tube. A unique kind of saturation occurs when a tube amp’s output stage is overdriven; unfortunately, this kind of really powerful distortion only happens at high volumes.

A Power-Tube pedal contains a power tube and optional dummy load, or a preamp tube used as a power tube. This allows the device to produce power-tube distortion independently of volume.

An example of a tube-based distortion pedal is the Ibanez Tube King: Listen

Other Ways To Distort

Tape Saturation. One way is with magnetic tape. Magnetic tape has a natural compression and saturation when you send it a really hot signal. Even today, many artists of all genres prefer analog tape’s “musical,” “natural” and especially “warm” sound. Due to harmonic distortion, bass can thicken up, creating the illusion of a fuller-sounding mix.

In addition, high end can be slightly compressed, which is more natural to the human ear. It is common for artists to record to digital and re-record the tracks to analog reels for this effect of “natural” sound. While recording to analog tape is likely out of the home studio budget, there are tape saturation plugins that you can use while mixing that simulate the effect quite well.

Here’s a bass guitar with a bit of tape saturation from the Ferox VST plug-in: Listen

Digital Wave Shaping. The word clipping in recording is usually a bad thing. And generally it is, unless we’re trying to distort something on purpose. In the digital world we can use powerful wave shaping tools to drastically distort and manipulate a sound.

Rather than subject you to the technical explanation of how it works, just listen to Nine Inch Nails, they use this a lot. It’s perfect for really harsh, aggressive, unnatural and broken sounds.

Here’s some examples of Ohmforce Ohmicide on a drum loop: Listen
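For a concrete flavor of what wave shaping can do beyond plain clipping, here is a one-stage wavefolder. This is my own toy example, not a model of how Ohmicide works:

```python
import numpy as np

def foldback(x):
    """One-stage wavefolder: instead of flattening peaks past +/-1,
    reflect them back downward, a harsher and more 'broken' shape
    than ordinary clipping."""
    return np.where(np.abs(x) > 1.0, np.sign(x) * (2.0 - np.abs(x)), x)

# 0.3 passes untouched; 1.5 folds down to 0.5; -1.5 folds up to -0.5.
print(foldback(np.array([0.3, 1.5, -1.5])))
```

Because the shape of the wave is rewritten rather than just limited, the added harmonics are much more aggressive, which suits those harsh, unnatural sounds.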

Why Is This Important?

Knowing those sounds can help you be a better musician, engineer and producer. It will help you make decisions on what gear to purchase and what is appropriate for a song.

What Else?

Besides guitar, what else is distortion good for? Well, pretty much anything, as long as it’s appropriate for the song.

—Slight distortion can make something sound more exciting; too much can sometimes make it really tiny sounding.

—When recording electric guitars, you can get a way bigger sound by using less gain and recording the same part multiple times, double or quad-tracking.

—Distortion can sound really cool on drums, but you may have to heavily gate the drums; the sustain can get out of control.

*Note: All audio samples except the last two were copied from various internet sources, mostly manufacturer websites.

Jon Tidey is a Producer/Engineer who runs his own studio, EPIC Sounds, and enjoys writing about audio on his blog AudioGeekZine.com. To comment or ask questions about this article go here.

 

Posted by Keith Clark on 08/11 at 01:51 PM

Wednesday, August 06, 2014

In The Studio: 4 Tips For Better Gain Staging

This article is provided by the Pro Audio Files.

 
Maintaining proper levels throughout your signal chain is important for achieving great tone. This is an important discussion for guitarists and engineers. I think pretty hard about my gain staging when I’m tracking. It has a great effect on your sound.

But, why and how does gain affect sound? Let’s investigate.

1. Digital Wall
There are many reasons to love the analog medium.

One, so old guys can sit around the coffee machine and reflect on the golden days of recording (yawn).

The second? Analog takes change well. It’s forgiving. That means if you hit it with a hot signal, it’s not going to sound horrible. Disclaimer: that’s not guaranteed. I mean, fire is great, but if you leave something on the stove too long it burns, right? Not that I would know as my form of cooking is local delivery here in NYC. But, I do read things.

Analog distortion is considered flattering by many, including myself. Digital clipping is harsh and rarely desirable. Although, I got a cool snare drum sound once by clipping digital converters. (To which my mixer asked “why ya’ gotta like be such a rebel all the time”). I think he may have been afraid I would fray the fragile woven fabric that is our conscious being. Whoa, deep right?

When running digital gear, you have to be very aware of the signals running before it. If your signal is too hot before a digital reverb or delay, it’s going to clip in a harsh way.

Digital clipping can sneak up on you too. I’ve had experiences where I didn’t really notice it; I took it for granted that my gain staging was good.

2. Analog Barbells
Hot analog signals can be swell. Those that have spent time messing with tubes have discovered the joy in hitting tubes with a little gain.

A secret trick of guitarists is to use a preamp before their guitar amp. It’s usually the last piece in the chain. A popular choice is the preamp from an Echoplex.

What does it sound like? It livens up the sound. I always make sure the preamp is pushing a few dB hotter than when it’s in bypass. The idea here is that you leave it on all the time.

Nowadays everyone is in on the secret. There are pedal manufacturers that make boosters or Echoplex preamps in small boxes. I use a Fulltone Tube Tape Echo for this trick with the delay off.

It kisses the front end of my tweeds nicely. We’re not talking a porn kiss here, but a romantic long embrace. Ah, who am I kidding… They’re getting it on.

3. Hearing Aid
A common problem with gain staging is when there isn’t enough signal coming from effects.

Sometimes guitarists will come in for sessions and turn their various pedals on, and the volume becomes lower than when in bypass mode.

They may not notice when playing by themselves in a room, but by the time the band kicks in, the difference is very noticeable. It’s as if their sound disappears when the pedals are kicked on.

The reason is that overdrive pedals compress the sound. They bring up the overall level but limit the peaks. The overall sound might seem the same, but the transients may still be louder on the clean sound.

This is going to be important if you’re recording bands live in the studio. You don’t want the quality of sound jumping all over the place from poor gain staging.

Hitting an amp with too little signal (unless you’re rolling off with your volume knob) usually results in a sound that is muddy and has less character. There is something dead about it. It does take a while to get a feel for matching signals. Your ears will play tricks on you.

Always check your meters with an effect on and off. Take into consideration whether your signal path is analog or digital.

4. Piggy Back
Effects can do some unpredictable things when you mess with gain. Try sending a hot signal to some analog effects and see what happens.

Fuzz pedals can do some weird ring modulation type effects when given too little gain. Again, you have to be aware of what is digital in your signal chain (keep saying that over and over). You may even have to move effects around to keep the digital effects out of the line of fire.

Old analog phasers can sound cool overdriven. Using two compressors in a series can be awesome too. Who doesn’t like some compressor on compressor action? Don’t tell me you audio geeks haven’t thought about it!

Use the first compressor mostly as a gain device. Slam the output of Comp 1 into the input of Comp 2. This is how they got the guitar sound on “Black Dog” by Led Zeppelin. They used two 1176 compressors in series. No guitar amps used.
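That two-stage idea can be sketched with static curves. The numbers here are purely illustrative, and a real 1176 has attack, release and program-dependent behavior this ignores:

```python
def comp_db(in_db, threshold_db, ratio, makeup_db=0.0):
    """Static compressor curve with makeup gain, all in dB."""
    if in_db <= threshold_db:
        out = in_db
    else:
        out = threshold_db + (in_db - threshold_db) / ratio
    return out + makeup_db

# Comp 1 used mostly as a gain device: gentle ratio, lots of makeup...
stage1 = comp_db(-18.0, -20.0, ratio=2.0, makeup_db=12.0)   # -7.0, nice and hot
# ...slammed into Comp 2, which does the heavy squeezing:
stage2 = comp_db(stage1, -10.0, ratio=8.0)                  # -9.625
print(stage1, stage2)
```

The first stage’s makeup gain is what “slams” the second stage, pushing its input well past the threshold so it works hard.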

You can use these principles with any effects. Spring reverbs? Analog delays? Analog chorus? Just remember, it’s rare that you want the signal coming out lower than it was going in; only for a special effect.

Now, go get freaky with signal flow.

 
Mark Marshall is a producer, songwriter, session musician and instructor based in NYC.

Be sure to visit The Pro Audio Files for more great recording content. To comment or ask questions about this article, go here.

Posted by Keith Clark on 08/06 at 03:11 PM

Friday, August 01, 2014

In The Studio: Control Room Techniques To Foster Great Vocals

During a session, I remember when an artist was on mic, out in the studio ready to start vocal overdubs, and the producer asked: “How do we look in here from out there?”

Interesting, because he knew the appearance of the control room to the artist might affect the vocal performance. The control room (from the studio) does look like an aquarium with the huge window and the silent action of the animals encased within it.

Reactions to performances reflected in facial expressions and body language are everything to singers and musicians isolated out in the studio. The concern is that working in the studio should not feel like being in a Petri dish under the microscopic scrutiny of the control room.

A great vocal sound starts with a good singer who has the artistic goal to perform the best vocal possible. Control room personnel—producer, engineer, assistants, gofers, etc., all have a professional responsibility to work towards the pursuit of the artist’s goals.

For the first hour of a new vocal session, everyone in the control room is on a kind of “audition” until the artist feels comfortable and performs well. It is the producer’s job to create the studio setting—the whole “vibe” to help get a good vocal performance from the artist and to make everyone else produce their best work.

During a vocal session, the producer is the arbiter of the feeling and quality of the vocal performance. The producer is the artist’s confidant, coach, good friend, creative partner, mentor, and most importantly the de facto proto audience - the first public ears on the artist and their music.

Because a well-prepared singer might immediately give the best and freshest performance at the beginning of the day, even during the microphone audition process, the engineer should be prepared and ready to record and capture a great vocal sound. The producer may require those mic audition recordings later and will be thankful for their usable fidelity.

The engineer’s vocal signal chain should be powered up, working, and adjusted somewhere in the “ballpark”; the song booted up in the DAW with new vocal track(s) ready to record; and a suitable monitor mix made, with a usable cue mix done and checked in the singer’s headphones.

How Pro Can You Go?
Getting to know your favorite signal chain intimately is very useful for getting good vocal sounds quickly—especially in the case of the aforementioned first takes/microphone audition.

You need to know what different combinations of mic pre-amps, EQs and compressors produce in terms of vocal sound possibilities. Experiment often if time and your clientele’s interest permits.

I find the overarching difference between true pro gear and lower-end products is that professional gear is much more forgiving in its operational requirements than cheaper gear.

And that is not to say you cannot record usable sound using a $300 mic pre-amp versus a multi-thousand dollar boutique piece. But you’ll have to work a lot harder to get a good sound with the low-dollar gear, and you can make just as crappy a recording with either it or the high-end boxes!

For example, pro gear usually has much more headroom, a lower noise floor and higher dynamic range. You’re less likely to overload the front end of pro mic pre-amps with a signal from a hot mic and a loud singer.

High-end pro gear is also smoother with less harmonic distortion at any operating level so the sound is automatically purer. Finally, pro gear will more often interface well, i.e., drive any subsequent processor you’d like in your vocal recording chain—from pro to junk.

I try to start with the best gear possible and I have my own collection to use when I’m “camping out” in a studio that does not offer my fave pieces. As an independent engineer, it’s a smart investment to own a high quality professional vocal recording chain you can use anywhere.

Mic Preamps
For reasons that are not necessary to cover here, there are two schools of thought about the design philosophies and sound of mic pre-amps:

—“Super pristine” and transparent to convey accurately the microphone’s signal
—“Enhancement” - sonic embellishment through harmonic coloration and/or the inherent characteristics of non-linear, small signal amplifiers

Both types have their place in today’s recording but my preference for vocals (unless directed otherwise by the artist and producer) is for a transparent and clean signal chain. Some of my choices for clean, discrete transistorized mic pre-amps include the George Massenburg Labs GML 8302, Audio Engineering Associates RPQ, Avalon Design M5, and Millennia Media HV-3 and STT-1.

For tube-based pre-amps that can be operated in clean modes, I’ve always liked the Manley Mic/EQ-500 Combo, Groove Tube ViPre, DW Fearn VT-1, and old Telefunken V72 units. Tube mic pre-amps, by virtue of the tubes, have a built-in “personality.” They can be very clean but, when overdriven, get into coloration zones unique to each of them.

“Colorful” transistor microphone pre-amps I like are the old British Neve 1066, 1073 and 1084 modules for their thick-sounding Class-A design and line input, mic input and output transformers; for more punch and purity, I like the API (Automated Processes Inc.) model 512C amplifier for its Class-AB amp that gives a harder and “in your face” presence; the Helios Type 69 mic/EQ unit is ‘60s-era technology and a very “vibey” sounding unit also from England; and the Chandler Germanium, which uses esoteric transistors to produce its unique sound.

Mic and Signal Chain System
When deciding on a vocal recording chain, consider both the mic and mic preamp as a system. If you’re looking for a super warm and “tubey” sound, try using a tube condenser and a tube mic pre-amp.

Such was my choice for recording Rod Stewart on a couple of albums. He sounded best on a completely stock and original Neumann U67 tube condenser (no pad and no roll-off) into a tube-based Manley EQ500 Mic/EQ Combo that I followed with a TubeTech CL1-B compressor—a slightly colorful and all tube signal chain. This chain did not accentuate Rod’s raspy vocal quality we all love, yet it kept enough mid-range cut to compete with the track.

A much cleaner and more pristine path might be a Brauner Phanthera FET-based condenser microphone into a GML 8302 mic pre-amp followed by a dbx 160SL compressor that uses a VCA (voltage controlled amplifier) for nearly transparent gain control.

This signal chain would produce a more neutral or uncolored sound that is completely faithful to the source. I’ve found recording choirs with a super-clean chain like this reproduces the rich harmonic content in the best way.

I liked the Phanthera into a Neve 1073 module followed by a UA 1176LN (Rev D) limiter for recording Pat Benatar’s vocals. The Phanthera will handle all the level Patty can produce right on top of it without clipping.

The Neve 1073, like all old Neve modules, doesn’t sound good in clip and is a little unforgiving with regard to getting an exact gain setting so I set it a little low for the additional headroom.

After compression, I made up the record level within the very distinctive sounding 1176LN. Between the thickness of the Neve, the gritty edge of the 1176LN, and the pristine sound capture of the Brauner, this is a killer rock vocal sound signal chain.

EQ & Compressors
Generally, adding equalization when recording is to make up for what the microphone is not giving you. In some studios there is not a big choice of mics, so you have to add or carve out frequencies to try to mimic the sound you’d get automatically with the right mic. Along with a signal chain, owning a few classic vocal mics is an obvious asset for a recording engineer.

Again, unless requested by the producer or artist, I go very conservative when recording with EQ. For example, if you are adding a lot of low frequencies, there is something wrong with the microphone or the pre-amp or more likely the way the singer is addressing the mic.

If you find that adding a lot of high frequencies sounds better, then you’ve got the wrong mic, as if you were using an old RCA 77BX ribbon when you really were looking for the ultra-bright sound of a modern Sony C800G condenser.

The same goes for compression. There is a wealth of sonic possibilities using vocal compression especially with vintage classics like the Fairchild 670 limiter.

I love those sounds but when and how much depends very much on the “bigger picture” - the mix!

If you and/or the producer are unsure, compress only enough (at a low ratio) to get it recorded at a good level without distortion and errant peaks—and then back the compression down from there. For a “vibey” sound, go with a tube compressor like the TubeTech CL1-B or UA Teletronix LA-2 leveling amp.

Cleaner or more transparent compression comes from VCA-based units such as a dbx 165. You could also record the vocals on two tracks: one with compressor and the other without. I like to provide as many options for the mixer as possible.

(See Barry’s related article on this topic here.)

Barry Rudolph is a veteran LA-based recording engineer as well as a noted writer on recording topics. Be sure to visit his website.

Posted by Keith Clark on 08/01 at 05:33 AM

Thursday, July 31, 2014

LaChapell Audio Launches Single-Bay 500 Series Tube Preamp

High-voltage vacuum tube preamp accommodates mics and Hi-Z instruments

LaChapell Audio has released the new 583s MK2 single-bay tube preamplifier, replacing the company’s first 500 series tube preamp, the 583S, after a successful run.

The 583s MK2 improves on the original while retaining identical microphone and Hi-Z topologies. Noted as the only true 500 series high-voltage vacuum tube preamp that accommodates mics and Hi-Z instruments, it incorporates a Cinemag input transformer and Jensen JT-11 output transformers. A convenient handle is provided for easy insertion and removal.

“Our users have been asking for a single bay 583s so that they could fit more of them in a 500 series chassis. Not only have we been able to accomplish that, we have also improved the signal-to-noise ratio,” explains Scott LaChapell, owner of LaChapell Audio. “Our technology allows us to properly power the 12AX7 tube with a full 250-volt supply so recorded sources sound natural with the vacuum tube warmth that other manufacturers cannot deliver, all while staying within API/VPR power requirements.”

The new LaChapell Audio 583s MK2 is available now, with a street price of $945 (U.S.).

Posted by Keith Clark on 07/31 at 11:51 AM

Tuesday, July 29, 2014

Church Sound: AFL/PFL—What’s the Difference?

This article is provided by ChurchTechArts.

 
Let’s look at a common button on most audio consoles. The labels may vary, but the difference is important.

AFL stands for After-Fade Listen while PFL stands for Pre-Fade Listen. Depending on the current state of your console, pressing solo in either mode may result in the same thing. Or it may be completely different.

Both AFL and PFL are solo modes. When you press the solo button on the channel, the output of that channel is routed to the solo bus and you hear it all by itself. We use solo for auditioning an input, checking for signal, and possibly setting EQ. We’ll get to this later.

On many consoles, you can also solo groups, VCAs and the master. So what’s the difference between AFL and PFL?

It’s All About The Pick-Off Point
Pre-Fade Listen is just what it sounds like; the signal is picked off from the channel strip before the fader. Most of the time, it’s also pre-EQ, pre-dynamics and pre-Mute.

You’ll have to read your manual to find out where the pick point is. Sometimes it’s after the HPF and LPF, but not always. Some digital consoles allow you to choose the PFL point, which is cool. Because PFL is pre-processing, it’s a great way to check the quality of the incoming signal before you do anything to it.

After-Fade Listen is a pick-off point after the fader. Typically, it’s also after EQ, dynamics and mute. That means anything you’ve done to the signal with any of those processing blocks will be reflected in the solo output. In AFL mode, you will hear the effects of EQ, dynamics and filters. If the fader is down on a channel that you AFL, you won’t hear anything; it’s after the fader, remember.
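The two pick-off points can be sketched as a toy channel strip (a simplified model of mine; as noted, actual pick points vary from console to console):

```python
def channel_solo(in_level, fader_gain, eq_gain=1.0, muted=False):
    """Toy channel strip with both solo pick-off points:
    PFL taps the raw input (pre-EQ, pre-mute, pre-fader);
    AFL taps after EQ, mute and fader."""
    pfl = in_level
    afl = 0.0 if muted else in_level * eq_gain * fader_gain
    return {"pfl": pfl, "afl": afl}

# Fader all the way down during a line check:
solo = channel_solo(in_level=1.0, fader_gain=0.0)
print(solo["pfl"])  # 1.0: PFL still hears the source
print(solo["afl"])  # 0.0: AFL hears nothing; the fader is down
```

This is exactly why PFL works for line checks with the faders down, while AFL reflects everything you have done to the channel.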

When To Use Them?
PFL is most useful for checking signal. When I line check a stage, I set the console to PFL and use the headphones to verify each input. Most of the time, the faders are all down (or turned off with VCAs), so nothing comes through the house. But I can hear it clearly with PFL.

It’s also useful for verifying the signal of a muted mic during a service. It’s not a bad idea to PFL your pastor’s mic a few minutes before he goes up, to be sure you have signal. This has saved me many times.

AFL is useful for seeing if what you’re doing is helping or hurting the sound. If you’re trying to zero in on an offending frequency on an instrument, a quick AFL while you check the EQ can save you a lot of time. Many of my FOH friends and I generally prefer to EQ channels in the context of the mix—because it is a mix after all—but sometimes some isolation is helpful to solve a particular problem.

AFL is also useful to hear the blend of a group of instruments or vocals. I use it often on the BGV VCA to hear how my vocals are blending. Because the AFL happens after the faders, I hear the blend based on fader position. A quick AFL of the VCA can make short work of getting your vocals or drum mics blended.

Bonus: Solo In Place
This is known by a few other names, but what it does is the same. When SIP is pressed, instead of routing the PFL’d or AFL’d signal to the headphones or solo outputs, it routes it to the main left and right (L&R) bus. That means everything but the solo’d channel is shut off and all you hear is that solo signal.
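The routing difference described above can be sketched in a few lines. This is a hypothetical model, not any console’s implementation: the function name and the idea of summing samples into buses are illustrative assumptions.

```python
# Toy model of solo-bus routing vs. Solo In Place (SIP).
# Illustrative names; real consoles do this in the routing engine.

def route(channels, solo_ch=None, sip=False):
    """Return (main_lr, solo_bus) mixes for a list of channel samples."""
    if solo_ch is None:
        return sum(channels), 0.0       # no solo: everything to the mains
    if sip:
        # SIP: only the solo'd channel feeds the main L&R bus
        return channels[solo_ch], 0.0
    # Normal AFL/PFL solo: mains untouched, solo bus feeds phones/monitors
    return sum(channels), channels[solo_ch]
```

Note what makes SIP dangerous in this sketch: the house mix itself changes, whereas a normal solo only affects the separate solo bus.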

This can be useful or incredibly dangerous, depending on the situation. When you’re running a rehearsal, SIP can be helpful to identify a channel that might be lighting up a room resonance or something similar. But during a service, it can be devastating. It’s so dangerous that DiGiCo requires you to hold the SIP button for a full two seconds just to engage it, and then it blinks red the entire time.

Don’t try out SIP during a service—ever! I rarely use SIP as I much prefer to EQ and alter dynamics within the context of the mix. But that’s what it does. Proceed with caution.

Mike Sessler now works with Visioneering, where he helps churches improve their AVL systems, and encourages and trains the technical artists that run them. He has been involved in live production for over 25 years and is the author of the blog Church Tech Arts.

Posted by Keith Clark on 07/29 at 03:44 PM
Church Sound, Feature, Blog, Study Hall, Analog, Consoles, Digital, Mixer, Sound Reinforcement

Monday, July 21, 2014

CADAC Sets Up New Jersey Sales/Distribution HQ; Appoints Mitch Mortenson Technical Support Manager

Mitch Mortenson CADAC US & Canada Technical Support Manager

CADAC US and Canada has secured premises in Hoboken, New Jersey that will house the new North American division’s sales, distribution and service operations, headed up by general manager Paul Morini. Currently under refurbishment, the new HQ will be fully operational by the end of August.

At the same time, the company announced the appointment of console engineer Mitch Mortenson as technical support manager. Mortenson is a well-known pro audio industry figure, having spent some 20 years in recording studio engineering, touring sound, and technical sales. His background includes nine years in front-line technical support roles for Midas consoles in North America.

Morini, who worked alongside Mortenson at Midas, describes him as “uniquely qualified within our industry to take on this role for CADAC.”

“He is long associated in the US as Midas’ technical trouble-shooter, in providing pre and post sales support, and as the in-the-field concert touring tech; as well as establishing comprehensive training programs,” Morini adds. “He is going to be a very major asset to our new operations in the US and Canada.”

Mortenson adds, “I am honored to have the opportunity to work with a company with such a legendary status in professional audio. The release of the CDC eight console, and the potential to put the brand at the very forefront of the live sound industry, is exciting. After a two-year break working outside of the industry, it is great to be coming back to join such a fantastic team.” 

CADAC

Posted by Julie Clark on 07/21 at 02:22 PM
Live Sound, Recording, News, Analog, Consoles, Digital, Installation, Sound Reinforcement, Studio

Nashville’s Historic Sound Emporium Studios Installs API Legacy Plus

Nashville studio installs API console in historic room.

Sound Emporium Studios is a go-to destination for artists including Willie Nelson, Kenny Chesney, and Alison Krauss, not to mention major projects by T-Bone Burnett. The facility was built by “Cowboy” Jack Clement in 1969.

To meet the growing needs of its diverse clients, Sound Emporium consulted Rob Dennis at API Audio reseller Rack-N-Roll, who directed the studio towards a 48-channel Legacy Plus console.

“Our clients have long requested an API in that room,” said studio manager, Juanita Copeland. “The flexibility, build and sonic quality, along with the reputation of API made it an easy decision.

“It just brings together a lot of great API sonic solutions into one user-friendly package. Having the 2500 bus compressor is the icing on the cake.”

Copeland also noted that having 48 inputs, 48 returns, and 12 aux inputs is a huge improvement over the studio’s former setup. The room can also be used for mixing now, thanks to the Uptown 2 automation.

Since commissioning the console early this summer, the studio has used it for tracking, overdubs, and some mixing. The Legacy Plus is already booked for major clients in the coming months, including one of the top 100 guitar players of all time and a critically acclaimed alternative country artist.

Recordings for the hit ABC show Nashville are also booked for when filming resumes this month, as well as several rock projects later in the year.

“We are just thrilled to finally have such an amazing console in that historic room!” adds Copeland.

API Audio

Posted by Julie Clark on 07/21 at 01:40 PM
Recording, News, Analog, Consoles, Engineer, Studio

Thursday, July 17, 2014

Zona Recording Studios In NY Adds THE BOX From API

Recording engineer John Zona works with a lot of talent in the artist-rich area of Huntington, NY through Zona Recording Studios.

While currently running the studio around his full-time gig pin-striping motorcycles, he is transitioning mixing and recording from a hobby to a full-scale operation. As part of that transition, Zona has added THE BOX from API to help bring enthusiasm and quality to clients, working to ensure they come back and refer others.

“I’m very pleased with my decision to purchase THE BOX,” Zona says. “After shopping around, doing research, and even demo-ing the board at the API headquarters in Maryland, it became clear API was the way to go.

“I’m also impressed with the quality of the EQs and mic preamps, as well as the reputation, construction, and warranty of the board.”

Zona, who is also an accomplished drummer, bass guitarist and composer, is presently taking the final steps in reconfiguring and wiring his updated studio space. While doing that, he’s using THE BOX to re-mix music he’s previously recorded.

“I’m getting results I had not even anticipated,” shares Zona. “I look forward to being an API customer for years to come.”

API

Posted by Keith Clark on 07/17 at 12:51 PM
Live Sound, Recording, News, Analog, Consoles, Engineer, Mixer, Processor, Studio

Tuesday, July 08, 2014

Focus On The Knobs? Making Technology Transparent In The Quest Of Art

At one time or another, all of us who have sat behind a mixing console at a show have been asked, “do you know what all those knobs do?” Of course the answer is “yes”—or at least it should be.

What they don’t ask is “do you know anything about acoustics?” or “do you have a handle on power and grounding?” because these subjects are not nearly as interesting or obvious to the novice observer. Maybe the real question is along the lines of “do you know how to bring out/enhance the art using the tools in front of you?”

So what about all those knobs? I often wonder if we can relate them to the saying “if all you have is a hammer, everything looks like a nail.” In other words, if we know what all those knobs (and buttons) do, does it mean we’re compelled to twist the knobs and push the buttons? In many cases I’m afraid it’s true, and yet we can miss something in the process.

Practice Makes Perfect
As an amateur photographer growing up in the days of film and mechanical cameras, I always found it useful to practice with the equipment empty before putting real film at risk. In those days, every exposure cost money, and frankly, I didn’t have much to spare.

But more importantly, I wanted to always get past the awkwardness with the gear and get on to the whole point: capturing good images. My friend Pat Moulds, a retired professional upright bass player, used to say that “the point of practice is to get to where you can play a passage without hesitation.” In other words, the technique becomes transparent and the art comes through.

Back to our business of sound. Knowing what every knob and button does, and how the sound system is put together, is obviously important as long as the end result is kept in mind. The audience probably won’t know if you used an actual LA-2A leveler or a plug-in equivalent on the vocals. But they know when they can’t hear the words or if the bass is overwhelming the mix.

Adopting new technology into a system should not be about trying to find ways to use it so we get our money’s worth. Instead, it’s about having the new stuff integrate so seamlessly that we almost forget it’s there, except for whatever benefits it brings to the table in terms of better sound, smoother workflow, or faster set-up time.

Visualize (Auralize?)
Another photography analogy: Ansel Adams espoused the idea of visualizing the result you wished to have when viewing a scene, to imagine how you would want it to appear in a photographic print.

Then, using the technology at hand and the technique to go with it, achieve the desired results. One of the challenges is that a natural scene has levels of light and dark, i.e., dynamic range, that cannot be captured or reproduced with photographic equipment.

First, Adams suggested exposing the film in order to ensure that there were details in the shadows (above the noise floor). Then he gave pointers as to how the film should be developed in order to prevent the highlights from blowing out (headroom).

Finally, he formulated a precise method of printing so that—although the real-world levels of light and dark could of course not be reproduced—the relative levels could be kept intact, providing the viewer with the impression desired by the photographer in the original vision. With the tools of the day, this was a very involved process, with lots of smelly chemicals and expensive equipment, and it required a whole lot of patience and discipline while stumbling around in the darkroom.

Sound is not that different. For one thing, the real dynamic range of many instruments or ensembles is greater than what can be reproduced through loudspeaker systems. And yet the listener generally wants to have a bit less than reality for the sake of comfort, especially when it comes to things like vocals. Thus, dynamic compression is routinely used for this purpose.
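The dynamic-range reduction mentioned above boils down to a simple gain computation. A minimal sketch, with arbitrary threshold and ratio values chosen by me for illustration (real compressors add attack/release smoothing, knee shaping and makeup gain):

```python
# Toy static compression curve: levels above threshold are reduced by
# the ratio. Values and the hard-knee behavior are illustrative only.

def compress_db(level_db, threshold_db=-20.0, ratio=4.0):
    """Return the output level in dB for a given input level in dB."""
    if level_db <= threshold_db:
        return level_db  # below threshold: untouched
    # Above threshold: only 1/ratio of the excess passes through
    return threshold_db + (level_db - threshold_db) / ratio
```

With a 4:1 ratio, a vocal peak 20 dB over the threshold comes out only 5 dB over it, which is the “bit less than reality” listeners tend to prefer.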

However, let’s get back to the main point: cultivating a vision about the desired end result. What kind of music is it? Do the performers have an idea of how they want to be presented? Is there a recording we’re trying to match or to which the audience is comparing our efforts? All these things affect our choices in technology and technique. That is, if we’re paying attention.

Wherefore Art Thou, Reverb?
What are some other examples of using technology to achieve a “vision” in the mix? Application of reverb to create space, for sure. Applying delay to enhance the rhythmic elements of the music or to create “size” by panning a delayed copy of a source. Drawing on distortion to supply “color.” And certainly, using EQ to carve out space for each instrument or voice, draw attention to or away from an element in the mix, or to create vertical “size.” All these approaches are certainly valid, and there are dozens (if not hundreds) more.

One way to learn these and other creative uses of technology is to carefully analyze recordings and performances with disciplined listening. One of my best audio teachers in college would start every class with an analytical listening exercise, where we would make a chart with the relative levels of each instrument or voice, what effects were used, panning and space, etc.

After months of doing this with dozens of songs, it was very eye-opening because we realized how each different producer and engineer had exploited the available technology to achieve certain results, thereby enhancing the musical experience. Once in a while we’d also notice the bad examples where some aspects of the recording or mixing techniques got in the way of the results, and even ruined the recording.

One final thought: it’s easy to get caught up in the technology itself. But really, our jobs are to get past that, figure out what works, get really good at it, and make music. After all, that’s what it’s all about.

Karl Winkler is director of business development at Lectrosonics and has worked in professional audio for more than 20 years.

Posted by Keith Clark on 07/08 at 03:11 PM
Live Sound, Feature, Blog, Opinion, Analog, Digital, Engineer, Mixer, Processor, Sound Reinforcement