Analog

Friday, July 15, 2016

Factors Of A Good Sound Reinforcement System

EDITOR’S NOTE: This fine article was featured in the March 2004 issue of Live Sound International. We reprint it here in celebration of our 25th anniversary.

How many sound systems are in use? Many millions, for sure, and they’re found in all types of venues and for all kinds of programs.

So one would think we’d know exactly how to do it by now. But there seems to be plenty of examples to prove that we don’t. Why should this be? What is it we don’t yet understand? Do we even know enough to know what we don’t know?

Perhaps we should start by trying to define the characteristics of a good system. Not just “it sounds good” but what—exactly—makes the difference between “good” sound and not so good. Then we might be able to quantify how good each characteristic needs to be and how to judge whether it’s good enough or not.

After more than 40 years spent designing and testing sound systems, I’ve finally come up with a list of the factors that I feel make up what we could call quality in a system, and why. In this installment, I’m going to confine my list and discussion to systems for speech reinforcement only. Next time, we’ll look at factors for music systems.

Reliability
The most important quality factor has to be reliability. No matter how good the performance of a system may be, if it fails to work, it is useless.

Reliability is largely an engineering matter, involving, for example, component selection, configuration design, and correct assembly and installation, but any system can be abused to the point of failure. Significantly, failure may not be abrupt and catastrophic, but instead may take the form of a performance decline due to damage.

One particular, and common, example of damage-induced deterioration can be found in the commonly used transducer for higher audio frequencies: the horn and compression driver combination. Compression drivers have a severe amplitude limit; if overdriven, the driver diaphragm will impact the phasing plug, an essential part of the structure. If the diaphragm material is metallic, it can fracture and fail.

Some diaphragms, however, are made of a resin-impregnated fabric, which is much less brittle and, therefore, more able to survive a collision with the phasing plug. Repeated collisions, however, still cause progressive deformation (or warping) of the diaphragm, resulting in a progressive decline of the driver’s performance characteristics and, eventually, failure.

Predicting and detecting this impending failure, however, is not easy to do. The audible change in performance is fairly subtle and can be detected reliably only by careful comparison of the sound of a single questionable driver with that of a known good one. In the field, such a comparison is usually impractical.

Further, a driver that has been used heavily for some time will also exhibit some performance deterioration, even though it has never been overdriven into diaphragm collision.

Figure 1 illustrates these performance differences, showing the frequency response (amplitude versus frequency) of three drivers of the same model (with an impregnated-fabric diaphragm): one new, one well used but apparently undamaged, and one with observable damage.

Figure 1: Measurement of output acoustical and input electrical impedance characteristics of three high-frequency horn drivers of identical model but different usage histories

It can be seen that the response at higher frequencies changes with use or abuse. The differences between the upper two measurements are slight, while the third one is significantly different.

There seems to be a good relationship between the measured and (subjectively) observed performances in cases like these, but no real study of this relationship has been performed.

So it would seem that a response measurement could be a valid substitute for a listening test. In fact, such a relationship has been established under certain circumstances, but not definitively in a sound reinforcement context. An investigation of this relationship would certainly be worthwhile. 

However, there is another measurement that is easy to make, even though it’s seldom done. The bottom three curves on Figure 1 represent the measured electrical impedance at the input terminals of each of the three drivers.  Such a measurement is usually quite easy to make, even on a driver installed in a system. 

It’s apparent that these curves separate the characteristics of the three drivers as well as any other common measurement does, especially in the case of the damaged unit, and much more easily. In fact, automated tests of this type have been designed into integrated systems as performance and reliability checks, with good results.
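As a rough illustration of how such an automated check might be structured (a hypothetical sketch in Python with made-up numbers, not any manufacturer's actual test routine), a measured impedance-magnitude curve can be compared against a stored reference for a known-good driver and flagged when the deviation exceeds a tolerance:

```python
from math import log10

# Hypothetical sketch: flag a compression driver whose measured impedance-magnitude
# curve deviates too far from a stored "known good" reference. Frequencies in Hz,
# impedances in ohms; the 3 dB tolerance is illustrative, not a standard.

def impedance_deviation_db(measured, reference):
    """Per-frequency deviation in dB between two impedance-magnitude curves."""
    return [20 * log10(m / r) for m, r in zip(measured, reference)]

def driver_looks_suspect(measured, reference, tolerance_db=3.0):
    """True if any point deviates from the reference by more than tolerance_db."""
    return any(abs(d) > tolerance_db for d in impedance_deviation_db(measured, reference))

# Made-up example: a damaged diaphragm often shifts or flattens the impedance peaks.
freqs     = [500, 1000, 2000, 4000, 8000, 16000]
reference = [22.0, 14.0, 11.0, 9.5, 9.0, 8.8]   # known-good driver, ohms
measured  = [22.5, 13.8, 16.0, 9.6, 9.1, 8.7]   # suspect driver, ohms

print(driver_looks_suspect(measured, reference))  # True: the 2 kHz point is well off
```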

Thus it appears that different types of tests on the same items can yield corresponding results. In fact, experience has shown that such relationships hold in some cases but not in others, and that it may be difficult to predict which is which.

And in many cases, no acceptable substitute for a listening test has yet been found. Worse, some widely accepted tests might prove inadequate.

Loudness
It’s obvious that any sound system must provide enough sound level at the audience locations to ensure a satisfactory listening experience. Defining what this level actually should be is less obvious, and use of a valid measurement technique is not obvious at all.

Subjective opinions on appropriate sound levels often vary widely as well, depending on a host of factors. (Investigating this matter alone could become a major research project!)

In fact, the correct sound level may not be just a matter of loudness. How well speech is understood (intelligibility) is often the overriding concern, and this is the result of many factors other than just loudness. In some cases, the loudness may need to be set differently than would normally be expected, because of adverse acoustical or system functional characteristics. It may also be found that the audience prefers a sound level different from that which exists near the performer.

Other acoustical factors may be highly significant as well. The level of the reinforced sound must be sufficiently higher than that of any background noise so that speech intelligibility or program enjoyment is maintained.

Some desire the added distortion that can result from tube-based components. But is this a bad thing?

Some guidelines in this regard have been established empirically, and they may be adequate for most situations. A common and complicating factor is that background noise level may vary significantly, rapidly and unpredictably. Further, since adequate performance in this area may be a matter of life safety, accuracy can be quite important.
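As a back-of-envelope illustration of that guideline (a minimal sketch; the 15 dB speech-to-noise margin used here is a commonly quoted rule of thumb rather than a figure from this article), the required reinforced level can be estimated from the measured background noise:

```python
# Minimal sketch: estimate the reinforced speech level needed to stay a chosen
# margin above background noise. The 15 dB margin and 95 dB ceiling are
# illustrative assumptions; life-safety systems follow their own standards.

def required_speech_level(noise_level_db, margin_db=15.0, ceiling_db=95.0):
    """Target speech level in dB SPL, capped to avoid uncomfortable loudness."""
    return min(noise_level_db + margin_db, ceiling_db)

for noise in (45.0, 60.0, 75.0, 85.0):
    print(f"background {noise:4.1f} dB -> target speech level {required_speech_level(noise):4.1f} dB")
```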

It’s often the case that the desired sound level is greater than that which the system is capable of producing without difficulty. This difficulty is the result of one or more components overloading, which results in an audible distortion of the sound.

Distortion may take various forms, depending on the type of component that is overloaded, the magnitude of the overload, and the nature of the program material, among other factors. Therefore, the audibility of the distortion may vary greatly with the situation, and each type of distortion must be evaluated individually.
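To make the point concrete, here is a minimal sketch (illustrative only, and a deliberately crude model of overload) that hard-clips a sine wave and reports the harmonic levels that result; symmetric clipping of this kind produces odd harmonics:

```python
from math import sin, cos, pi, log10, sqrt

# Sketch: hard-clip a sine wave (a crude model of an overloaded component) and
# measure the resulting odd harmonics by direct Fourier projection.
N, f0 = 4800, 100                      # samples and cycles of the test tone (arbitrary)
drive, limit = 1.5, 1.0                # input amplitude versus clipping threshold

clipped = [max(-limit, min(limit, drive * sin(2 * pi * f0 * n / N))) for n in range(N)]

def harmonic_level_db(signal, harmonic):
    """Level of one harmonic of the clipped tone, relative to unity amplitude."""
    re = sum(s * cos(2 * pi * harmonic * f0 * n / N) for n, s in enumerate(signal))
    im = sum(s * sin(2 * pi * harmonic * f0 * n / N) for n, s in enumerate(signal))
    return 20 * log10(2 * sqrt(re * re + im * im) / N)

for h in (1, 3, 5, 7):                 # symmetric clipping generates odd harmonics
    print(f"harmonic {h}: {harmonic_level_db(clipped, h):+6.1f} dB")
```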

Many listeners even believe that certain types of distortion are desirable, such as that typically produced by vacuum tube amplifiers. This usually applies to music playback systems in small rooms, however, so it’s unclear if such an effect is valid in a larger sound reinforcement situation. 

Some devices are available that deliberately introduce controlled distortion, specifically for pro audio applications. Many have noticed that a limited amount of distortion adds to the apparent loudness of amplified sound without being objectionable. If anyone has actually studied this effect, the results remain obscure.

Timbre
The overall timbre, or tonal balance, of a sound system undoubtedly has the strongest influence on the overall perceived quality. This characteristic is easy to measure, both subjectively and objectively, and there is a very good correlation between the two in a small-room configuration. In a large-room sound reinforcement situation, however, this correlation does not hold.

If the system has an overall response that is measurably flat (has nearly the same input-to-output level ratio at all frequencies), it will sound too bright, with the high frequencies being too loud.

A system which sounds subjectively flat, so that the reproduced sound is perceived as being a close duplicate of the source, will have a measured response which rolls down at high frequencies. 
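One way this is handled in practice is to tune toward a gently rolled-off target ("house") curve rather than a flat one. The sketch below is purely illustrative, with an assumed breakpoint and slope rather than values from the article:

```python
from math import log2

# Hypothetical sketch: a simple high-frequency rolloff target curve for a
# large-room system. The 2 kHz breakpoint and -3 dB/octave slope are
# illustrative assumptions only.

def target_response_db(freq_hz, breakpoint_hz=2000.0, slope_db_per_octave=-3.0):
    """0 dB up to the breakpoint, then rolling off at a fixed dB-per-octave rate."""
    if freq_hz <= breakpoint_hz:
        return 0.0
    return slope_db_per_octave * log2(freq_hz / breakpoint_hz)

for f in (125, 500, 1000, 2000, 4000, 8000, 16000):
    print(f"{f:>6} Hz  target {target_response_db(f):+5.1f} dB")
```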

Schulein documented this discrepancy in 1975 in an elegant experiment and offered a plausible explanation. He noted that in all rooms, the listener receives sound directly from the source and also reflected from the room surfaces.

Here’s the scoop:

Schulein’s Experiment
Two identical loudspeakers are arranged in a room so that the listener receives the direct sound from both, one close and one distant. Both are fed the same bandpass-filtered noise, with a multi-band equalizer in the feed to the distant loudspeaker and an attenuator available to match levels.

The bandpass filter is first set at 1000 Hz and the attenuator adjusted so that the sound level appears to be the same when the noise is switched from one loudspeaker to the other. 

Then the filter is changed to another band and the corresponding band on the equalizer is adjusted until the apparent loudness is the same from each loudspeaker.

This is continued through all bands, and repeated as necessary until no further adjustments are necessary. 

Finally, the overall frequency response of the equalizer is measured, this being the correction necessary to make the distant loudspeaker sound like the near one in this room.
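The logic of that band-by-band matching loop can be sketched in a few lines of Python (a simulation with made-up per-band level differences, not Schulein's actual apparatus):

```python
# Sketch of the band-by-band matching loop, using made-up apparent level
# differences between the distant and near loudspeakers (highs seem louder
# in the reverberant field). Each equalizer band is nudged until the two match.

bands_hz = [250, 500, 1000, 2000, 4000, 8000]
distant_minus_near_db = {250: -1.0, 500: -0.5, 1000: 0.0, 2000: 1.5, 4000: 3.0, 8000: 4.5}

eq_gain_db = {band: 0.0 for band in bands_hz}

for band in bands_hz:                         # step through the bands, as in the experiment
    while True:
        error = distant_minus_near_db[band] + eq_gain_db[band]
        if abs(error) < 0.25:                 # "no further adjustments are necessary"
            break
        eq_gain_db[band] -= 0.5 if error > 0 else -0.5   # nudge the band down or up

print("Equalizer correction (distant loudspeaker made to sound like the near one):")
for band in bands_hz:
    print(f"  {band:>4} Hz: {eq_gain_db[band]:+.1f} dB")
```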

In a small room, the level of the direct sound is almost always higher than that of the reflected sound and, therefore, dominates in the perception process.

Because of the directional characteristics of human hearing at high frequencies, largely due to head shadowing effects, less total sound energy enters the ears at high frequencies than at lower frequencies. This imbalance is perceived as normal.

In a large room with typical acoustics, however, the opposite is true; the level of the reflected, or reverberant, sound is significantly higher than that of the direct at most listener locations. Since this reverberant sound arrives at the listener from all directions rather than just one, more of it enters the ears at high frequencies. Thus the highs are perceived as being louder.

A simple experiment tends to confirm this theory. A loudspeaker is located at head level in a relatively non-reverberant environment and fed with broadband noise. A listener stands one to two meters (about three to six feet) in front of the loudspeaker and slowly turns around while listening to the tonal character of the noise. Typically, the overall tonal balance will change little, if at all, with head direction. 

However, if two identical loudspeakers are placed two or three meters apart facing each other and both are fed the same broadband noise, a listener between them, turning around as before, will hear the high frequencies more loudly when his ears are toward the loudspeakers than when he is facing one or the other loudspeaker.

The measured response (and perceived timbre) of a loudspeaker in a room deviates significantly from its performance in an anechoic environment, in ways that are complex and quite difficult to predict.

Also, these deviations are different at each location in the room. Therefore, the only practical solution is to measure the actual response of the completed system and correct it as needed with additional circuitry.

This turns out to be a bit trickier than one might expect, however. If a pure tone, slowly swept in frequency, is fed over a sound system and the resulting level is measured at a point in the audience area, it will be found to consist of strong peaks and valleys, tens of decibels in amplitude, and spaced at intervals of about 1 Hz, caused by room resonances.

It’s almost impossible to get meaningful information from such readings. Besides, we don’t perceive these variations because they are averaged by our hearing process in ways that are only partly understood. The measurements must incorporate averaging which simulates the hearing process.
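One common form of such averaging in measurement software is fractional-octave smoothing of the measured magnitude response. The sketch below (a simplified illustration, not any particular analyzer's algorithm) power-averages all points within a 1/3-octave window around each frequency:

```python
from math import log10

# Simplified sketch of fractional-octave smoothing: each point of a measured
# magnitude response (in dB) is replaced by the power average of all points
# within +/- 1/6 octave, approximating a 1/3-octave analysis window.

def smooth_third_octave(freqs_hz, levels_db):
    smoothed = []
    for f0 in freqs_hz:
        lo, hi = f0 / 2 ** (1 / 6), f0 * 2 ** (1 / 6)   # window edges, 1/3 octave wide
        window = [l for f, l in zip(freqs_hz, levels_db) if lo <= f <= hi]
        power = sum(10 ** (l / 10) for l in window) / len(window)
        smoothed.append(10 * log10(power))
    return smoothed

# Toy example: a sharp 12 dB dip that the smoothing softens considerably.
freqs  = [1000 * 2 ** (i / 24) for i in range(-12, 13)]     # 1/24-octave spacing
levels = [-12.0 if abs(i) <= 1 else 0.0 for i in range(-12, 13)]
print([round(v, 1) for v in smooth_third_octave(freqs, levels)])
```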

Unanswered Questions
However, this presents us with a shopping list of unanswered questions pertaining to the measurement techniques. What frequency resolution (bandwidth) is needed?

A first assumption might be to use a bandwidth similar to that of the auditory (critical bandwidth) filters, but system measurements are typically done with third-octave filters, which are considerably wider than critical over much of the spectrum. 
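For a sense of scale, the sketch below compares 1/3-octave bandwidths with the ERB (equivalent rectangular bandwidth) approximation of the auditory filters from Glasberg and Moore; the comparison is only indicative, since "critical bandwidth" can be defined in more than one way:

```python
# Indicative comparison of 1/3-octave analysis bandwidth with the ERB
# approximation of auditory filter bandwidth: ERB = 24.7 * (4.37 * f_kHz + 1) Hz.

def third_octave_bw(f_hz):
    return f_hz * (2 ** (1 / 6) - 2 ** (-1 / 6))    # roughly 23% of the center frequency

def erb_bw(f_hz):
    return 24.7 * (4.37 * f_hz / 1000.0 + 1.0)

for f in (125, 250, 500, 1000, 2000, 4000, 8000):
    print(f"{f:>5} Hz: 1/3-octave {third_octave_bw(f):6.0f} Hz, ERB {erb_bw(f):6.0f} Hz")
```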

Should the analysis be done with a swept filter, which yields more information, or is a stepped filter technique acceptable? What amplitude smoothing or averaging is appropriate? If measurements are taken at single, discrete frequencies, as is commonly done with contemporary techniques, how many measurement points are needed and at what spacing? This could be a major source of misleading data, especially at lower frequencies.

Whatever the technique, how many measurement locations should be taken, and where should they be located? And exactly how should the individual measurements be averaged to yield the overall system response? Also, how much variation between individual measurements is acceptable, and what should be done if the variation exceeds this tolerance?
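One widely used (though by no means universal) answer to the averaging question is to power-average the responses measured at several positions; a minimal sketch under that assumption:

```python
from math import log10

# Minimal sketch: power-average the magnitude responses (in dB) measured at
# several audience positions into one overall system response. Power averaging
# is one common convention, not the only defensible choice.

def spatial_average_db(responses_db):
    """responses_db: list of equal-length per-position response lists, in dB."""
    n_positions = len(responses_db)
    averaged = []
    for per_freq in zip(*responses_db):              # frequency point by frequency point
        mean_power = sum(10 ** (level / 10) for level in per_freq) / n_positions
        averaged.append(10 * log10(mean_power))
    return averaged

# Three made-up positions, four frequency points each (dB).
positions = [
    [0.0, -2.0, -4.0, -6.0],
    [1.0, -1.0, -6.0, -9.0],
    [-1.0, -3.0, -2.0, -3.0],
]
print([round(v, 1) for v in spatial_average_db(positions)])
```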

Despite countless practical field experiments in this area, beginning at least 65 years ago, little critical research has been carried out. As a result, there exist only a few de facto standards, and the actual results of these procedures vary considerably in quality.

In addition to these considerations, it might be expected that nonlinear distortion in any of the system’s components, especially the loudspeakers, would significantly affect its timbre, but such does not seem to be the case. The distortion levels of modern components, properly used, are low enough to be unnoticeable in a reinforcement situation.

Intelligibility
As the name suggests, intelligibility is the measure of how easy or difficult it is to understand speech over a system. It’s ultimately measured subjectively and directly, typically using rhyming words as the test signal.

The execution of this test is tedious and time-consuming even with only one test subject, and a single subject is quite inadequate. Different subjects will render somewhat different results even under apparently identical conditions, and conditions vary significantly with location, program sound levels, room noise, hearing acuity, and many other factors.

Tools like Rational Acoustics Smaart and Meyer Sound SIM can be of great help, along with a thorough understanding of what’s happening with a system and room.

The typically broad variance of test results makes it difficult to determine whether a system is actually performing acceptably or not. It hardly seems worth the rather considerable effort required to execute such a test, but there may be little choice.

Because of these difficulties, a lot of effort has gone into devising an objective test regime, with several products resulting. All involve dedicated gear and techniques, which, while not simple, are quite preferable to subjective tests.

These objective tests have been demonstrated to produce results comparable to those obtained subjectively in some, but not all, conditions. Unfortunately, the worst correlations tend to occur in conditions that produce low scores, exactly where accurate results are most desired. In fact, after extensive experience with all the commonly used objective techniques, Mapp has concluded that all are inadequate.

More Physical Approach
It gets worse. Low intelligibility scores, which indicate serious problems, usually provide little or no information on the nature of these problems. Sometimes one or more physical problems are apparent in such cases, but are these really the causes of the poor performance?

Often, the only way to be sure is to correct the problems and see if that improves the scores. Of course, this may be completely impractical, and in fact, there may be multiple problems, some masking others, so that correcting the most obvious might accomplish nothing useful.

A much more practical approach might be to identify exactly which physical factors adversely affect speech intelligibility, and how, and calibrate physical measurements to subjective effects.

If this were accomplished, then not only would meaningful test methods be available, but effective design criteria could be established to predict results and avoid problems in the design stage. Some significant work has already been done in this area, with results pointing to the ratio of direct to reflected (or reverberant) sound being the most important factor.
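As an illustration of why that ratio matters so much, the classic statistical-acoustics approximation below (assumed here for the sketch; real rooms deviate from it) estimates the direct-to-reverberant ratio from loudspeaker directivity, room constant, and listener distance:

```python
from math import log10, pi, sqrt

# Sketch using the classic statistical-room approximation (an assumption, and
# only an approximation for real rooms): direct-to-reverberant ratio from
# loudspeaker directivity factor Q, room constant R (square meters), distance r.

def critical_distance_m(q, room_constant_m2):
    """Distance at which direct and reverberant levels are equal (meters)."""
    return sqrt(q * room_constant_m2 / (16 * pi))

def direct_to_reverberant_db(q, room_constant_m2, distance_m):
    return 10 * log10(q * room_constant_m2 / (16 * pi * distance_m ** 2))

Q, R = 10.0, 200.0                       # illustrative horn directivity and room constant
print(f"critical distance: {critical_distance_m(Q, R):.1f} m")
for r in (5, 10, 20, 40):
    print(f"  at {r:>2} m: D/R = {direct_to_reverberant_db(Q, R, r):+5.1f} dB")
```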

Think this over, work on your own list, and next time, we’ll look at key quality factors of music reinforcement systems. (You can read the follow-up article here.)

At the time this article was published in LSI, Bob Thurmond served as principal consultant with G. R. Thurmond and Associates of Austin, TX. Also note that it was originally presented as a paper at the 146th Meeting of the Acoustical Society of America.

Posted by Keith Clark on 07/15 at 01:09 PM

Thursday, July 14, 2016

API Announces New 512v Microphone Preamp

Incorporates every feature of the 512c including API's proprietary transformers, 2520 op amp, and a variable output level control.

In response to market demand for built-in signal attenuation, API announces the new 512v microphone preamp.

Incorporating every feature of the 512c, including API’s proprietary transformers and 2520 op amp, the new 512v also includes a variable output level control designed to meet the needs of audio professionals using DAW’s or other input level-sensitive devices in their workflow.

The 512v offers 65 dB of gain as a mic preamp, 45 dB of gain as a line/instrument preamp, and a 20 dB pad switch that can be applied to the incoming signal (whether mic, line or instrument). The 512v also features a 3:1 output transformer tap switch that produces a lower output level, allowing the user the option of driving the transformer harder to achieve higher levels of saturation.
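For readers gain-staging the unit, here is a quick back-of-envelope check of how those numbers stack up (illustrative arithmetic only; the source level assumed below is not an API specification):

```python
# Back-of-envelope gain staging using the figures quoted above (65 dB maximum
# mic gain, 20 dB pad). The -40 dBu source level is an illustrative assumption.

def output_level_dbu(source_dbu, gain_db, pad_in=False, pad_db=20.0):
    return source_dbu + gain_db - (pad_db if pad_in else 0.0)

source = -40.0                           # assumed level from a moderately loud vocal mic
for gain, pad in ((40.0, False), (65.0, False), (65.0, True)):
    level = output_level_dbu(source, gain, pad)
    print(f"gain {gain:4.0f} dB, pad {'in ' if pad else 'out'}: output {level:+5.1f} dBu")
```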

Also included are front and back panel mic input access, a front panel combo-style XLR + 1/4-inch line/instrument input, an LED VU meter for gain level indication, and a 48V phantom power switch, all built around the traditional API circuit design.

The 512v effectively eliminates the need for third party attenuators, and produces the same analog signal that audio professionals expect from API. Like all API products, the 512v features API’s original 5 year warranty.

The 512v has an MSRP of $995. Units are now shipping.

API

Posted by House Editor on 07/14 at 01:24 PM

Monday, July 11, 2016

Schertler Now Shipping ARTHUR Format48 Modular Mixer

Swiss manufacturer announces release of new self-configurable mixer created from a choice of different Class-A input and output modules.

Schertler Group announces the release of ARTHUR (Format48) modular mixer, designed for studio or live use, which can be configured and built by the user.

The mixer is created from a choice of different Class-A input and output modules - mic input, mic input ultra low noise, and mic input x4 units, yellow instrument input unit, stereo input unit, L/R master, aux master and external power-in units. These can be combined in any order and quantity to form both compact and large-scale mixer configurations.

The number of units that can be included depends on the power supply used. For simpler combinations of 12 or 25 units, there is a choice of two compact power supplies. A further high-end power supply is also available: This can be used with any combination of units, from just a few to 70 or more. (Power-In units are also required for larger configurations.)
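The selection rule described above can be summarized in a few lines (a hypothetical sketch following the description in the text; the names and thresholds below are placeholders, not official Schertler part designations):

```python
# Hypothetical sketch of the power-supply selection rule described above.
# Thresholds follow the text; names are placeholders, not Schertler part numbers.

def choose_power_supply(module_count):
    if module_count <= 12:
        return "compact supply (up to 12 units)"
    if module_count <= 25:
        return "compact supply (up to 25 units)"
    return "high-end supply (plus additional Power-In units for large configurations)"

for n in (8, 20, 48):
    print(f"{n} modules -> {choose_power_supply(n)}")
```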

Modules can be added, removed or re-ordered at any time, making ARTHUR one of the most flexible mixers on the market. Further modules are planned for release over the coming months.

ARTHUR’s electronic design offers a complete absence of negative feedback (NFB) from input to output. All filters and summing amps are free from restricting back loops in the mixer’s straightforward design, resulting in a fast response and natural attack. All circuits are built using discrete Class-A components and pure high-voltage DC-amps (without any capacitors in the signal path), offering 30dB headroom and low noise, as well as stability, warmth and transparency.

Configuring and ordering an ARTHUR mixer can be done through the Schertler website (Mixer area). Alongside the technical information and specs for all available modules and accessories, a Configurator enables the various units to be selected in their required quantities. The mixer’s “virtual build” can be previewed along the way as different components are added or removed. External components such as power supplies, side panels and accessories are not supplied as standard and must be ordered separately.

Once delivered, physical assembly of modules is a straightforward process involving a series of connecting rods and hexagonal screws. Users have total freedom to design their own personal channel sequence, as there are no mechanical or electrical restrictions, with the exception of ensuring that the appropriate power supplies have been purchased to support the number of units being used.

ARTHUR is available for pre-order from the Schertler online store (Europe, USA and Canada only) and can also be purchased via Schertler showrooms and distributors worldwide.

Schertler Group

Posted by House Editor on 07/11 at 09:11 AM

Tuesday, July 05, 2016

Jake Sinclair Selects Manley ELOP+

Producer for Sia, Weezer, and Fall Out Boy chooses the ELOP+ stereo electro-optical tube compressor/limiter in the studio.

GRAMMY-nominated producer/songwriter/multi-instrumentalist Jake Sinclair recently produced new records for Sia, Weezer, and Fall Out Boy, while simultaneously crafting his side project, Alohaha.

Sinclair’s list of credits includes such artists as Taylor Swift, P!nk, Train, and 5 Seconds of Summer. He runs his studio off a laptop in a room filled with instruments, a few preamps, and his prized Manley ELOP+ stereo electro-optical tube compressor/limiter and not much else.

The ELOP+ is a completely re-engineered version of Manley’s coveted early 1990s-vintage ELOP stereo electro-optical limiter. The ELOP+ uses the company’s highest-performance tube line amplifier and White Follower output stage and relies on a high-voltage switching power supply designed specifically for Manley vacuum tube audio circuits. The “+” also adds a 3:1 compression ratio for greater versatility.

Sinclair wires his two-channel ELOP+ as two separate compressors. One side is the last piece in his vocal chain. The other? “I use the other channel for bass as a tracking compressor, and I go pretty squishy with it,” he explains.

While he respects the old classic optical compressors, to Sinclair’s ears, most of the newer models leave something to be desired-except the ELOP+.

“I’ve tried all of the opto-style compressors,” Sinclair affirms, “and the Manley is by far my favorite. It’s every bit as good as the classic compressors. With a lot of newer opto compressors, as you get past 3 to 5 dB of compression, you start to lose some top end, and it suffocates the sound. But I can put the ELOP+ in Limit mode, keep the filter off, and get a good 5 to 7 dB of compression without losing any high-end tone.”

That’s especially important because Sinclair doesn’t go easy on the compression. “I’m a really guilty overcompressor,” he laughs. “It’s part of the fun! It’s an effect and it also changes the way singers perform. When they’re singing, I apply the compression I intend to use in the final mix. I’d rather track it the way it’s going to be because the singer can react to it, and you get a different performance.”

To do that, it would seem, you need a compressor that can sound transparent. “Yeah,” Sinclair agrees, “and you get that signature sound that we’ve heard on millions of records; the ELOP+ just nails it.” The result, he says, is an airy vocal that sounds like it’s supposed to. “EveAnna Manley described the ELOP+ as just a volume knob,” he muses, “but it’s more than that for me. There’s some kind of tube-y glueness.”

Sinclair has reduced his use of tube devices in recent years but you’ll never get his ELOP+ away from him. “Except for microphones, the ELOP+ is the only piece of tube equipment I still have,” he admits. “Tube preamps aren’t for me. But I will always have an ELOP+.”

Manley

Posted by House Editor on 07/05 at 08:03 AM

Tuesday, June 28, 2016

Suffolk New College Adds Audient For Music Technology Course

ASP4816 analog console installed as part of a studio upgrade at college in Suffolk, United Kingdom.

“My background is in the studio so I can appreciate good workmanship,” says Suffolk New College Music Technology Course leader, Craig Shimmon, who’s very pleased with the arrival of the new Audient ASP4816 desk. “The other lecturers have commented on the superior build quality and the clean preamps; the layout is a dream to teach with,” he adds.

Installed as part of an upgrade to the studio at the college, the analog mixing console forms an integral part of the Music Technology course, teaching Level 2 and two years of Level 3 Diploma students.

Shimmon explains, “Our department consists of industry professional engineers, composers, producers and DJs teaching a range of different Music Technology specialisms, with industry-standard equipment and facilities. All in all, approximately 60 Music Technology students will have hands-on access to the studio, with a further 60 music performance students benefiting from being recorded.”

Offering all the key features of a large console in a compact and ergonomic form, the ASP4816 also provides in-line architecture, ideal for teaching signal flow, which fits perfectly at Suffolk New College.

“I love the in-line faders,” enthuses Shimmon. “With previous desks we have had to use Mix B pots or split the desk which can be confusing. Teaching with the Audient console just makes it a whole lot easier. This coupled with the ability to route different channels to different tape outs makes this a very powerful machine.” Which is exactly how New Suffolk College meets its strategic aim, “to provide high quality learning and teaching, and gain recognition as an excellent and innovative provider.”

Audient has every faith that with the new, improved set up and under Craig Shimmon’s instruction, New Suffolk College music technology students will go far.

Audient
Suffolk New College

Posted by House Editor on 06/28 at 09:19 AM

Thursday, June 23, 2016

Smalltown America Studio Captures Live Performances With Audient

Northern Irish facility showcases live bands from across the UK and Ireland with its series of ‘Live @ STA’ events.

Smalltown America Studio (STA) in Derry, showcases acts from across the UK and Ireland with its series of ‘Live @ STA’ events.

“The bands play live on the floor in our 1,600-square-foot live room,” confirms studio engineer, Caolán Austin. “We run our lines through our (Audient) ASP8024 then to ProTools; we send the mixed feed from the desk to an encoder box and stream the audio and video on our website. People can watch at home if they can’t make it to the gig.

“Fitting bands, PA and a crowd into the space is quite a feat,” he laughs, explaining that the small audience usually numbers about 50. It’s not a problem if you missed it – or want to relive the experience, either. “After the show we mix everything and each show is released as an album.”

Austin describes the 36-channel Audient desk as “beautiful” and “the focal point of our whole studio – and quite a magnificent one, at that,” taking pride in the Northern Irish facility. “For traditional recording sessions, our tracking/monitoring workflow has been entirely designed around the analog desk.

“As we have such a spacious live room, most bands want to come in and track live. We have devised a system of ‘stations’ within the studio. Drums are permanently set up, we have an amplifier room to send signals to, there is a vocalist station and so on. This reduces initial set up time to around 30 minutes. Bands on a budget want to get working straight away; using this method, we can get great results quickly. It wouldn’t be possible without the ASP8024’s flexible routing and the use of TRS line/insert points for patching our outboard.

“For monitoring, we run a Behringer Powerplay headphone system with P-16 modules. Each auxiliary bus on the desk is wired to correspond with a headphone channel, including control room foldback. During a session, we can route new audio paths quickly, while still giving bands a flexible channel count to ensure headphone mixes are good as they can be,” adds Austin, continuously mindful that some clients need to watch their pennies – but not scrimping on quality. “All of this ensures our workflow is still incredibly quick and efficient.”

Audient
Smalltown America Studio

Posted by House Editor on 06/23 at 09:13 AM

Friday, June 03, 2016

PSW Top 5 Articles For May 2016

ProSoundWeb presents at least two feature articles every day of the working week, meaning that there are 40-plus long-form articles highlighted each and every month.

That’s a lot. In fact, so much so that we thought it would be handy to present a round-up of the most-read articles for those who might have missed at least some of them the first time around.

What follows is the top 5 most-read articles on PSW for the month of May 2016. Note that since the articles aren’t all posted at the same time, we apply the same timeframe (length of time) for each when measuring total readership.

Also note that immediately following the top 5, PSW editor Keith Clark offers some additional suggestions of recently published articles worth checking out. These articles also scored quite well in terms of readership but were just outside the top five.

Without further ado, here are the top 5 articles on PSW in May.

1. Sonic Atmosphere
Calculating the speed of sound in air and clearing up some popular misconceptions. By Merlijn van Veen

2. The Secret To Overdubs
Highlighting techniques that are helpful in the overdub process, along with things to avoid. By Bobby Owsinski

3. Who Are You Mixing For?
Becoming better engineers and team players by serving others rather than our own egos. By Andrew Stone

4. Financial Focus
Key factors in determining a freelance day rate in the pro audio biz. By Samantha Potter

5. RE/P Files: An Interview With Bill Hanley
A pioneer in the field of large-scale sound reinforcement reflects on the early years. By Barry McKinnon

Editor Recommendations

Meter Madness
What your level meters tell you… and what they don’t. By Mike Rivers

Avoiding Seven Common Mute Mistakes
You wouldn’t think one little button with one simple function would cause many problems… By Chris Huff

Streaming & Casting
Getting quality audio to the recorder or computer and making things sound their best. By Craig Leerman

Posted by Keith Clark on 06/03 at 12:48 PM

Thursday, June 02, 2016

Applying Modular Genius

Every time a new audio innovation arises, it’s met by a certain amount of resistance. This is ironic and puzzling to me, seeing as our field is driven almost entirely by technology. What motivates this response?

Perhaps fear that technology will replace us. Are system techs quaking in their boots, fearful of being replaced by a system processor with auto-EQ abilities? How many engineers have become unemployed as a result of automixer technology?

More likely, I think it’s fear that technology will make us lazy. I know an engineer who won’t use a dynamic equalizer because it’s “too easy.” He also thinks that RTAs and FFT programs such as Smaart (not to mention consoles with built-in analyzers) make it possible for “anyone to be a monitor engineer” since there’s now no need to “learn your frequencies.”

While this may be true at the low end of the spectrum (no pun intended), I doubt that any professional-caliber monitor engineer feels that his livelihood is threatened by a $200 feedback suppressor unit.

This is an interesting point, though. Does technology make us lazy? Does the prevalence of Spell Check mean that today’s generation of kids don’t have to learn to spell? Do our ever-present smartphones absolve us of the responsibility to memorize important phone numbers or do mathematical calculations by hand?

With audio technology advancing at such a rapid rate, perhaps too much emphasis is put on workflows that are changed as a result of new tech, and not enough emphasis on brilliant new uses of the new technology to enhance our jobs… in other words, finding the art among the science.

The excitement lies, for me, in discovering new uses for existing technology to do cool things. GPS is nothing new, and neither are camera phones or online restaurant reviews. But when you combine the three, you get a smartphone app that will offer up dining suggestions simply by panning your phone’s camera across the storefronts.

Likewise, online banking has been around for decades, and Optical Character Recognition is coming into its own… nothing new or exciting there, right? But what about the bank apps which let you photograph a check with your phone and deposit the funds into your account? That’s cool!

These are not new inventions, but rather “cobbling together” chunks of existing technologies to allow us to do new, creative, amazing things.

In my mind, one of the figures at the forefront of this “modular genius” is Dave Rat, whose “Rat-isms” include obtaining an accurate measurement of a device’s latency by listening to its signal in one ear while the “dry” signal panned to the other ear is increasingly delayed. When it sounds “in the middle,” the added delay is the device’s latency.

According to Dave, this resulted, in some cases, in a measurement that was more accurate than the manufacturer’s stated values. He also used water and bags of salt to ground the stage at a Red Hot Chili Peppers show and saved the band from problematic RF. (We all know that saltwater is electrically conductive, but who would have thought to apply that knowledge in such a way?)
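An objective counterpart to that ear-panning latency trick (not Rat's method itself, just a related sketch) is to cross-correlate the device's output with the dry signal and read the latency off the lag of the correlation peak:

```python
import random

# Sketch: estimate a device's latency by cross-correlating its output with the
# dry input and finding the lag of the correlation peak. This is an objective
# stand-in for the ear-panning trick described above, not a description of it.

def estimate_latency_samples(dry, wet, max_lag):
    best_lag, best_score = 0, float("-inf")
    for lag in range(max_lag + 1):
        score = sum(d * w for d, w in zip(dry, wet[lag:]))
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag

# Simulate a "device" that simply delays white noise by 37 samples.
random.seed(0)
dry = [random.uniform(-1, 1) for _ in range(2000)]
true_delay = 37
wet = [0.0] * true_delay + dry[:-true_delay]

fs = 48000
lag = estimate_latency_samples(dry, wet, max_lag=200)
print(f"estimated latency: {lag} samples ({1000 * lag / fs:.2f} ms at {fs} Hz)")
```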

The underlying principles of these examples are basic, known by any engineer worth his or her salt (OK, that pun was intended). The brilliance, to me, lies in the application of astoundingly simple principles to find ways to do really cool stuff.

That feeling of “Wow! That’s so cool!” is the audio engineer’s equivalent of watching a magic show. We’ve all experienced that moment of excitement when we learn of a new technique or application that’s just plain neat. As our technologies and gear continue to advance, let’s not feel threatened and “disarmed.” Instead, let’s flip it on its head and blow people’s minds.

Jonah Altrove is a veteran live audio professional on a constant quest to discover more about the craft.

Posted by Keith Clark on 06/02 at 12:09 PM

Dr. Ian Corbett Selects Audient For Kansas City Kansas Community College

Professor of Audio Recording & Music Technology selects iD22 interface as portable solution.

Kansas City Kansas Community College has been a proud owner of an Audient ASP8024 for over 6 years, a decision that professor of Audio Recording & Music Technology, Dr. Ian Corbett still stands by, having seen hundreds of students hone their audio engineering skills on the large format analog mixing console.

The notion to add an iD22 to his teaching arsenal this year was born out of that trust in Audient. Dr. Corbett uses his new audio interface in his office to grade assignments, as well as with his laptop, which he uses extensively throughout the curriculum, at various places throughout the campus and also when travelling and representing the college, presenting at conferences and conventions.

He explains, “Tired of carting around a full rack unit interface, or suffering the hums and buzzes of the laptop’s built-in audio output, the iD22 was a small solution that also sounds better than the smaller ‘budget’ interfaces available – particularly important when I’m giving a presentation on audio quality. It also gives me the flexibility to be able to make great sounding recordings directly into the computer, and get reliable readings from a test mic when pinking or sweep-tone analyzing a room. The fact that the iD22 can function as a playback device without external power was an unexpected plus point and convenience.”

Three years ago the Audio Engineering Program moved premises, more than quadrupling the size of the facilities to a staggering 4000 square feet and now includes a multi-station classroom/lab, three control rooms and four recording rooms plus admin areas.

Dr. Corbett says, “The Audient desk made the move with us and went from being the mixer that first semester students learned on, to the mixer used in the second semester of audio classes. There was no need to upgrade it – it was purchased with the capabilities needed upon future expansion, and being a great sounding analog board, it is of course fairly “future proof” as the technology changes around it.”

Dr. Corbett points out that this second semester is a hardware based class.

“No DAW,” he confirms. “The students use the ASP8024 with a hardware multi-track recorder and outboard gear, refining their skills and knowledge of patching, routing, effects processing, dynamics processing, and later in the semester recording and mixing a music project. We do this with the multitrack recorder in ‘destructive mode’ so that they have to confidently perform operations, make decisions and commit to them with no ‘Command-Z’ to undo if they mess up.” Important lessons to learn, especially if graduates want to have the best chance at finding work after leaving college.

“One major ‘gripe’ I’ve heard from employers, concerns schools who are producing graduates who are great DAW operators, but struggle to quickly troubleshoot problems outside of the DAW,” continues Dr. Corbett. “Given the state of the industry today, few graduates are going to end up using a DAW in a music recording studio as their main source of income, so it is important that we prepare them to easily transition into the widest variety of potential employment opportunities – and only extensive training and understanding of an analog signal path can give them this flexibility.”

And it was this analog console from British audio manufacturer, Audient that caught the attention of Dr. Corbett.

He lists a few of the reasons why: “The features for the mid-price-point: the in-line design with a fader on both input paths, the flexibility to switch most sections of the channel strip between each input path, the design and layout of the console (it is not cramped or cluttered, and it is less intimidating to new students than more compact consoles). It is very easy to understand and teach on, and a good preparation for students moving on to less intuitive consoles or digital consoles. I was 100% comfortable teaching beginner students in their first semester audio recording classes on the ASP 8024 – it’s THAT easy to understand and get around.”

Dr. Corbett believes Kansas City Kansas Community College, centered around analog consoles as it is, to be “…well prepared for the future. Unlike large control surfaces built by the DAW software company, there is no worry about the board or its audio quality standard being obsolete in 5 years, or being stuck with a partially functional control surface when the favored DAW changes.

“So far the console has stood up to years of hard student use (and the mistakes that are part of learning) and a major facility move, with only a couple of minor repairs,” he continues. “As a community college, our tuition rates are very low, and that combined with the type of equipment the students get to be hands-on with from day one, results in an educational experience unrivaled in the area.”

Audient

Posted by House Editor on 06/02 at 09:13 AM

Wednesday, June 01, 2016

Amphenol Expands Analog Equipment Connectivity Options For Dante Networks

Amphe-Dante adapters, to debut at InfoComm 2016, offer cable dongle housing design for Dante-enabled products.

Amphenol has expanded its offerings in professional connector and cabling solutions to the Dante audio networking universe with the introduction of Amphe-Dante adapters.

To be unveiled at InfoComm (June 8-10, Las Vegas Convention Center), the Amphe-Dante range of Dante digital to analog audio adapters are engineered to simplify the connection of analog equipment, including amplifiers and loudspeakers, to Dante networks.

Importantly, its unique cable dongle housing design is a first for Dante-enabled products, allowing systems integrators to mount the adapter without the costs and labor of additional rack or shelf space.

Amphe-Dante adapters receive audio channels from a Dante network and provide studio-quality, low-latency audio via an XLR connector to analog audio equipment.  Any audio available on the Dante network can be routed via the XLR outputs to an amplifier, powered speaker, mixing console, digital signal processor (DSP), or other analog audio device.

The initial release offers a single- or two-channel option with premium XLR connectors and a durable molded housing. XLR adapters (for example XLR-to-RCA and XLR-to-phono) can be used to connect to audio equipment without built-in XLR connectors, ensuring flexible options for customers.

“The new Amphenol adapters allow a huge market of legacy products like amplifiers and loudspeakers to now be Dante-enabled,” said Lee Ellison, CEO of Audinate. “Affordable solutions that allow analog products to connect to a Dante network are an important piece of the audio networking ecosystem, and the Amphe-Dante products do just that.”

“We are excited to be working with Audinate to bring a truly unique product to the audio networking market,” said Stephen Richards, Sales & Marketing director at Amphenol Australia. “The opportunity to collaborate with another successful Australian company offering world class technology will enable us to be at the forefront of digital audio connectivity.”

The Amphe-Dante adapters will be on display at InfoComm in the Amphenol booth #C12012 and the Audinate booth #C11529.

Amphenol

Posted by House Editor on 06/01 at 07:41 AM

Thursday, May 26, 2016

Studio Owner Manolito Galea Invests In Audient For Lito’s Place

Growing studio in Valetta, Malta adds ASP4816 compact analog mixing console for audio recording and ADR sessions.

Studio owner Manolito Galea (Lito) puts the success of Lito’s Place down to the purchase of an Audient ASP4816. He explains that since the arrival of the compact analog mixing console two years ago, the studio’s fortune has been on an upward trajectory.

“Investment in the ASP4816 has generated enough business to allow us to grow the studio, both in the film and music industry,” he says.

“A number of European composers and producers from the music industry now choose Lito’s Place to work on their projects, because we offer such a professional set up in the beautiful Maltese climate.” This increase in clients has also allowed Lito to undertake a major redecoration of the premises, which has just been finished. “The studio is much more flexible and comfortable to work in.

“We are experiencing a change in the type of clientele,” he adds, citing Brad Pitt’s visit to work on ADR sessions just a few months back as testament to that fact. “This has opened the doors to be able to market to major film companies,” he continues, happy to report that it “…provides a constant flow of work, as various artists are regularly on the island for other jobs, and their management contracts them to do voice-overs or ADRs of previous film shooting.”

A full upgrade of the studio equipment over the past year has seen the addition of a surround sound controller ASP510, and most recently the 8 channel mic preamp, ASP800, which has enabled him to upgrade to the ProTools HD 12 set up. “This has improved the workflow and flexibility of the studio as we can now offer the option to mix either hybrid or digitally.”

Although the addition of Audient has helped, it’s in no small part down to Lito’s tenacity that he’s come this far. A voracious autodidact, he confesses to reading reviews and researching, explaining, “I need to keep up to date with my ‘three Ts’: trends, techniques and technology.” Having designed the whole studio project himself, Audient reckons he should be very proud.

Audient

Posted by House Editor on 05/26 at 09:47 AM

Wednesday, May 25, 2016

Spartan Recording On The Road With Solid State Logic

Mobile recording service selects SSL XL Desk analog console with integral 18-slot 500 format rack for rolling studio.

Spartan Recording is a new US-based mobile recording service owned and operated by Joe Costner. The studio is built into a re-furbished and modernized original 1951 Spartan Royal Manor trailer and features a Solid State Logic XL Desk analog console with integral 18-slot 500 format rack.

Based in California, but available to bands across the US, Costner was inspired by classic recordings from the likes of Led Zeppelin, Deep Purple, and The Rolling Stones, where the studio came to the artists and the live spaces were not necessarily built for purpose. “The idea is to travel, record in interesting spaces, and record those experiences,” says Costner.

“It was really a hobby project that became something much bigger. I was just going to build this thing for myself, take it around the country and share it with people, but as I started building it the idea got bigger and bigger and a lot of people were pushing me to market it as a business… There has been a lot of interest in it.”

Costner notes that the studio itself is not normally used as the live space, though it does have its own isolation booth – mostly used for voice-over work. Technically, the facility is designed to use whatever buildings or environments are available at the destination.

“I just love the idea of artists being able to record in spaces that they are comfortable in,” he says. “...Or spaces that they’ve always wanted to record in… People have told me it’s a big change of pace - doing a take then walking outside, wherever you’re recording. Being out in the open gives people a new perspective.”

Costner purchased the XL Desk from Vintage King in Los Angeles after considering a range of analog ideas. “I took in some stems from a session I did in Nashville and tried out lots of options.

“I wanted to bring analog gear to bands, but in a trailer there’s not a lot of space to haul racks of outboard gear. The fact that the XL Desk has the 18-slot 500 rack built into it made it very easy for me to incorporate 500 format gear… That was a huge plus.”

The small footprint of the XL Desk was a significant selling point for Costner too. The XL Desk has a 21-fader frame, but packs in over 40 SuperAnalogue inputs, a full-featured monitor section, four stereo channels, two dedicated stereo returns, and four stereo mix buses. All inputs, returns, mix buses, and the listen mic compressor circuit feature direct outputs, while inputs and mix buses have access to the 500 rack as well as dedicated insert points. The console comes with a stereo SSL Bus Compressor in slots 17 and 18.

“I wanted to give people a professional recording session, where you’re putting 12 mics on the drums in the same way you do in a normal recording studio,” says Costner. “The XL Desk gives me that.

“I’m still exploring all the routing possibilities - what I can do with the cue sends and the auxes - feeding headphones but also feeding FX down the channels. It really is a small version of a console you would use in a big analog studio, but still with all the amenities.”

Costner currently has eight EQs in slots 1-8 and eight mic pre-amps in slots 9-16. As well as being available internally, these are also brought out to a patchbay. “If I’m doing a narration gig I’ll have a mic pre and EQ in line with the channel. For tracking bands I can use the console’s built-in VHD preamps, and the preamps in slots 9-16.”

Costner likes the idea of tracking with live takes where possible; it fits well with the inspiration behind the mobile studio. “A lot of people assume we’re recording just in the isolation booth, individually, but I’m trying to do this with live takes - running the cables into houses and buildings and having the band perform live is the kind of method I’d like to continue.”

He sees distinct creative advantages to this approach, possibly born of the process, and the resourcefulness in that process.

“I think the ‘mobile’ process of recording requires a lot of active listening – understanding how different rooms and environments are affecting your sound,” he explains. “…And it requires a lot of creative problem solving: Just because instruments look good set up one way, doesn’t mean they’ll sound good.  Wherever you record, you definitely have to listen to the rooms and how your sounds live within that room.  With the mobile studio I take a lot of time during setup, listening critically before I commit to a sound.”

According to Costner, his next purchase will be a four-track tape machine, both for printing mixes and for use as a process for things like stereo drum mixes: “I’m always looking to expand with more outboard gear, while still conserving space. It’s important to me that people don’t feel cooped up or confined when they enter the mobile studio.”

“To be able to do quality recordings in the same space you live in, rehearse in, or vacation to, is a huge luxury.  It’s a very enjoyable process that delivers some very interesting results.”

Solid State Logic
Spartan Recording

Posted by House Editor on 05/25 at 07:52 AM

Tuesday, May 24, 2016

What To Do When Your Mixes Don’t Translate

This article is provided by Home Studio Corner.

 
You hear it all over the place. “Help! My mixes don’t translate!”

In other words, “My mix sounds awesome in my studio, but then when I play it anywhere else – in my car, on my stereo, on my iPod – it sounds awful.”

What’s the problem? It could be any number of things – your monitors, your room, your headphones…maybe even your recordings themselves.

But let’s step away from talking about gear, and let’s focus on your ears.

It’s no secret that mixing is a learning experience. It simply takes time. Every mix I do, I get a little bit better. I mix a little faster. I’m able to get the sounds I want more quickly. I know how to solve common problems. I pick up little tricks along the way.

What’s one thing you can do right now to start improving your mixes?

The answer? Start listening to professional mixes in your studio.

This may seem like a stupid suggestion, but take a second to think about it.

Where do you listen to music the most? When you get a new album, where do you go to listen to it? Your car? Your stereo? Your phone?

Ask yourself this question: of all the time you spend listening to music, what percentage of that time are you actually in your studio, listening on your studio monitors or studio headphones?

This is really important.

You’re spending hours trying to get your mixes to sound good, to translate to other systems, but do you really know what a good mix sounds like through your system?

If you’re not spending a lot of time listening to good, professional mixes in your studio, your mixes are in trouble.

How can you expect to get pro mixes if you don’t have an intimate familiarity with how pro mixes sound in your studio?

Counterfeit
Pull out a dollar bill. Take a look at it. Could you tell if it was a fake? Chances are the answer is no.

There are professionals out there who are experts at spotting counterfeit bills. How do they do it?

Do they spend a lot of time studying counterfeits? Or do they spend a lot of time studying the real thing?

Answer: They study the real thing.

Once you know what the real thing looks like, you can easily spot a fake.

You’re doing the same thing in your studio. You’re trying to pass your mix off as professional, but how will you know it’s professional if you don’t spend a lot of time getting to know what makes a mix professional?

A few suggestions
• Check your email in your studio, and listen while you do it.

• Eat breakfast in your studio (safely!), and listen while you eat.

• When you buy a new album, set aside an hour to listen to it entirely in your studio before you go listening on earbuds or in your car.

These are just a few suggestions. The point is, do what you can to train your ears to hear a pro mix, and to hear it in your studio.

Joe Gilder is a Nashville based engineer, musician, and producer who also provides training and advice at the Home Studio Corner.

Posted by admin on 05/24 at 06:16 AM

Monday, May 23, 2016

The Rocketeers Music Studio Invests In Solid State Logic Duality Console

Dutch production facility adds SSL Duality δelta SuperAnalogue console for creating radio and television jingles and leaders.

Dutch two-in-one production facility, The Rocketeers - incorporating The Producer’s House - has invested in a Solid State Logic Duality δelta SuperAnalogue™ console for a new and versatile approach to their ideal sounds.

Experienced radio producers Joost Griffioen and Stas Swaczyna met in 2007, and by 2008 The Rocketeers music production company was up and running. The company’s mission was to supply radio and television with custom-made jingles and leaders, and the pair have gone on to work with many stations, including Radio 10, Radio 538, 100%NL, and 3 FM, NPO TV, RTL TV, and TALPA.

In 2014, the pair were joined by Hansen Tomas who has since become the main face of The Producers House. This new development in the business offers song-writing, production, mixing, and mastering services to artists and record companies, and is quickly developing demand in the UK and the US.

Both operations use the same facility in Hilversum, just outside Amsterdam - created from an existing but long-abandoned music facility. It took Griffioen and Swaczyna just three months to get it up and running with two music production rooms plus a main studio, featuring a comfortable control room and a good-sized live space with separate drum room.

Their most recent addition is a 24-channel SSL Duality δelta console (with SSL Alpha-Link conversion and δelta-Link Pro Tools HD interfacing) - bought primarily because the team wanted to bring something special to their sounds, and their clients.

“When we started the studio in 2008 we mainly worked in the box and everything was going through plugins,” explains Swaczyna. “But Joost and I wanted to create a new analog sound for jingles and leaders… something wider and bigger.

“We started with another desk… But then we checked out a few other studios and sounds, and so on, and we came across the SSL sound. The biggest hits on earth are mixed on an SSL, so our ultimate dream was to mix our own jingles, leaders and songs on one.”

Benelux SSL partner Joystick Audio arranged a demo of Duality δelta and The Rocketeers were hooked, not just on the SuperAnalogue sound, but on the dual DAW control and the δelta DAW-based console automation technology: “Of course it sounds fantastic, but it’s especially great for the way we integrate DAWs into our workflow…” says Swaczyna. “It’s the best of analog sound and digital control in one board.”

The Rocketeers Music Studio console has 24 input channels and a master section, and also incorporates 19-inch racks in additional frame space on either side of the main console tiles.

One of Swaczyna’s favorite features is the versatile SSL Variable Harmonic Drive (VHD) circuit, available on every channel of a Duality console. The operator can choose the ultra-clean SSL preamp, or can switch in VHD. As input gain is increased, the circuit introduces either 2nd or 3rd harmonic distortion, or a blend of the two, for anything from a little warmth or edge to fierce grunge. Swaczyna: “That sounds great on drums, strings… anything. It sounds really cool.”
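As a rough illustration of the idea (a generic Python waveshaper sketch, not SSL’s actual VHD circuit), a squared term adds 2nd-harmonic content, a cubed term adds 3rd, and a blend control sets the mix; turning up the drive amount with input gain gives the behavior described above:

import numpy as np

fs = 48000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 100 * t)   # 100 Hz test tone, one second long

def drive(signal, amount=0.3, blend=0.25):
    # blend = 0 -> only the 2nd-harmonic (even) term, blend = 1 -> only the 3rd-harmonic (odd) term
    return signal + amount * ((1 - blend) * signal**2 + blend * signal**3)

y = drive(x)
spectrum = 2 * np.abs(np.fft.rfft(y)) / len(y)   # bin spacing is exactly 1 Hz here
for harmonic in (1, 2, 3):
    level_db = 20 * np.log10(spectrum[harmonic * 100] + 1e-12)
    print(f"harmonic {harmonic} ({harmonic * 100} Hz): {level_db:+.1f} dB")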

Solid State Logic
The Rocketeers
The Producer’s House

Posted by House Editor on 05/23 at 12:01 PM

Spotlight On Signal Processing

Go here to read part 1 of this series.
—————————————————

“In the beginning there was graphic EQ.”

The first standard tool for system equalization was the graphic equalizer. Early versions were 10 bands at octave intervals, but the 1/3rd-octave version took over the market completely by the late 1970s.

The 31 bands were standardized to a series of 1/3rd-octave ISO center frequencies running from 20 Hz to 20 kHz. There was no standardization of the shape of the filters, however: one model might use filters that were genuinely 1/3-octave wide, while another used much wider, nearly octave-wide filters.
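Those centers follow a simple ratio. Here’s a minimal Python sketch (assuming the common base-2 spacing around 1 kHz, a close approximation of the ISO preferred values) that lists the exact centers next to the familiar nominal slider labels:

NOMINAL = [20, 25, 31.5, 40, 50, 63, 80, 100, 125, 160, 200, 250, 315, 400,
           500, 630, 800, 1000, 1250, 1600, 2000, 2500, 3150, 4000, 5000,
           6300, 8000, 10000, 12500, 16000, 20000]   # the familiar slider labels

for k, nominal in zip(range(-17, 14), NOMINAL):
    exact = 1000.0 * 2 ** (k / 3)                    # 1/3-octave steps around 1 kHz
    print(f"{nominal:>7} Hz nominal   ({exact:8.1f} Hz exact)")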

One of the primary attractions of the graphic equalizer was that its front panel settings seemed to represent the response it was creating (hence the name “graphic”).

This was mostly true if the settings were all flat, but once the sliders were moved the resemblance faded because the parallel filters also affect the range of their neighbors. The parallel filter interaction dramatically affected how closely the “graphic” shape of the front panel actually resembled the curve that was being created.

The reality is that the picture on the graphic EQ front panel was never accurate, and some models (especially the ones with wide filters) were wildly inaccurate. This confused people who attempted to use these tools for what I call “ear to eye training.” Engineers learned to distrust the “graphic” settings. They knew it wasn’t doing what it showed on the front panel – but they didn’t know what it “was” doing. The inaccuracy of the graphic EQ created a lot of false conclusions.

Graphic equalizers with narrow filters create ripple in the response, which increases as the cuts deepen: the center frequencies cut deeper than the midpoints between them. Wider filters reduce the ripple but increase the overlap, which decreases the accuracy of the front panel. Graphics with narrow filters correlate more closely to the panel but have higher ripple. I know of only one graphic EQ that old engineers still have romantic feelings for: the Klark Teknik DN360 (wide filters, low ripple and low accuracy). Bottom line: front panel accuracy doesn’t matter much when you are tuning by ear, but ripple does.

The Graphic Details
A substantial culture arose of what I would call the “graphic EQ code of conduct,” a set of visual rules that governed the fader placement. The foremost of these was the “move the neighbors” rule, which mandated that a deep cut at 500 Hz meant you had to move the 315, 400, 630 and 800 Hz faders down as well to make it look like a gradual curve. Never mind that this causes 500 Hz to cut much deeper and wider than you intended.
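To put numbers on it, here’s a minimal Python sketch, assuming generic second-order “cookbook” peaking filters of roughly 1/3-octave bandwidth, cascaded in series for simplicity (many analog graphics used parallel filter banks, where the interaction is even messier). A 9 dB cut at 500 Hz plus 3 dB “neighbor” cuts at 400 and 630 Hz combine into something deeper and wider than the 9 dB you dialed in:

import numpy as np
from scipy.signal import freqz

fs = 48000.0

def peaking(f0, gain_db, q=4.3):   # Q of ~4.3 approximates a 1/3-octave bandwidth
    # Second-order "audio EQ cookbook" peaking filter -- an assumed shape,
    # not any particular manufacturer's curve.
    a_lin = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * a_lin, -2 * np.cos(w0), 1 - alpha * a_lin])
    a = np.array([1 + alpha / a_lin, -2 * np.cos(w0), 1 - alpha / a_lin])
    return b / a[0], a / a[0]

sliders = {400.0: -3.0, 500.0: -9.0, 630.0: -3.0}   # the "nice gradual curve"
freqs = np.geomspace(250.0, 1000.0, 9)              # includes 500 Hz exactly
h_total = np.ones_like(freqs, dtype=complex)
for f0, gain_db in sliders.items():
    b, a = peaking(f0, gain_db)
    _, h = freqz(b, a, worN=freqs, fs=fs)
    h_total *= h                                    # combined series response

for f, h in zip(freqs, h_total):
    print(f"{f:7.1f} Hz  {20 * np.log10(abs(h)):+6.2f} dB")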

The ubiquitous (at the time) Klark Teknik DN360 graphic EQ, joined by other popular models from BSS (FCS-960) and dbx (231).

Another of the folk legends was the belief that cutting more than 6 dB would create a “phase problem” of some mysterious unquantifiable variety. This was taken seriously: Everyone knows you don’t push that fader past 6 dB! I can’t say this phasor vortex never happened to somebody, but I can say I never saw such a thing occur on my analyzer (which reads phase). The phase problems that we did see were primarily the side effects of amplitude problems associated with ripple and having the wrong center frequency and bandwidth to do the job.

This gets to the heart of the graphic EQ’s principal shortcoming: fixed center frequencies and bandwidth choices. Simply put, the graphic could never succeed as an optimization tool because the problems it is trying to solve do not have a single fixed bandwidth and are not obliged to fall on the ISO approved center frequencies. Our challenge is more complex than that.

At one of the first concerts that John Meyer and I measured with SIM, we came up against the graphic EQ rule mentality.

We measured a 10 dB peak at 100 Hz and knocked it out with a single cut on the graphic EQ. We high-fived each other at the perfection of the lucky match of center frequency and bandwidth as we watched the amplitude and phase flatten out.

A short time later the system engineer saw the single deep cut on the graphic and freaked out. He then reworked the settings and made them look nice and gentle on the graphic, which looked good there but no longer solved the problem.

He explained to us that he needed to do this because we were messing up the phase response. It never occurred to him that we could actually see the phase response right there on the analyzer. On that evening John and I knew that we could never beat the graphic EQ police and needed to make a better tool.

Parametric EQ
The inadequacies of the graphic equalizer became totally apparent once we began to see high-resolution frequency response data. Our analyzer could now show us a problem centered at 450 Hz, but we were stuck with the graphic EQ’s fixed filters at 400 Hz and 500 Hz in that area. The inability of the graphic EQ to create a complementary response to what we measured was impossible to ignore. 

The parametric equalizer was immediately seen as the superior tool, since it had independently adjustable center frequency, bandwidth and level. Anything that we could see on the analyzer that was worth equalizing could be precisely complemented with the parametric filters.

The high-resolution transfer function analyzer put an end to the usage of the graphic EQ (although it took a long time to die). Now we could see the phase response (one mystery solved) and we could also see the actual amplitude response of the combined filters (second mystery). The analyzer proved that the graphic could never satisfy our needs. The graphic EQ is now used only for gross tonal shaping by ear, i.e., an artistic tool or for combat EQ (stage monitors), not a system optimization tool.

There were several reasons why parametric equalizers had attained such minimal acceptance before that time. The first was that people had trouble visualizing in their mind what the filters were doing. Filters could be set anywhere – including right on top of each other. You had to look at all the settings and then conjure in your mind what it all meant. This made many engineers understandably insecure. Most modern parametrics accurately display their response on their front panel display or software program, even incorporating filter interaction. The parametric response is no longer a mystery.

The second issue was that most commercially available parametric EQs used a filter topology that was poorly suited to system optimization. The filters were asymmetric, having a different type of response for peaks and dips. The dip side of these devices used a notch filter topology, which does not properly complement the peaks it’s trying to treat in the sound system. Notch-type parametrics, with wide peaks and narrow dips, actually mimic the problematic comb filter effects rather than compensate for them.

SIM measurements that highlight some of the problems with graphic EQ.

The high-resolution measurements taken with SIM showed us the advantages of complementary phase equalization: a parametric EQ built from symmetric second-order filters with minimum phase shift. These became the guiding principles behind the 1984 development of Meyer Sound’s CP-10 equalizer. We could now put the “equal” in equalization by producing an equal and opposite amplitude and phase response to the peaks and dips found in the field.
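The principle is easy to demonstrate: with a symmetric peaking filter, a cut is the exact reciprocal of the equivalent boost, so the two cancel in both amplitude and phase. Here’s a minimal Python sketch using a generic second-order “cookbook” peaking filter as a stand-in (not the actual CP-10 circuit):

import numpy as np
from scipy.signal import freqz

fs = 48000.0

def peaking(f0, gain_db, q):
    # Generic second-order "cookbook" peaking filter (a stand-in, not the CP-10).
    a_lin = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * a_lin, -2 * np.cos(w0), 1 - alpha * a_lin])
    a = np.array([1 + alpha / a_lin, -2 * np.cos(w0), 1 - alpha / a_lin])
    return b / a[0], a / a[0]

freqs = np.geomspace(100.0, 2000.0, 7)
b_peak, a_peak = peaking(450.0, +6.0, q=2.0)   # the measured "problem" peak
b_cut, a_cut = peaking(450.0, -6.0, q=2.0)     # the complementary EQ setting

_, h_peak = freqz(b_peak, a_peak, worN=freqs, fs=fs)
_, h_cut = freqz(b_cut, a_cut, worN=freqs, fs=fs)
combined = h_peak * h_cut                      # should be unity everywhere

for f, h in zip(freqs, combined):
    print(f"{f:7.1f} Hz  {20 * np.log10(abs(h)):+7.3f} dB  "
          f"{np.degrees(np.angle(h)):+6.2f} deg")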

As previously stated, there was a lot of resistance to parametric EQ in those days because of the lack of a graphic user interface. The SIM analyzer gave us something better than a graphic interface. We could view the actual measured amplitude and phase of the EQ without repatching or taking it off line.

Transfer function measurement allows us to probe across any two points in the signal path of our sound system. We can monitor the EQ output versus the EQ input and see precisely what response the device is creating. From the outset, the SIM system was always set up to be able to view the EQ electrical response as well as the response of the speaker system in the room.
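The math behind such a dual-channel measurement is straightforward: divide the averaged cross spectrum of the two probe points by the auto spectrum of the reference. Here’s a minimal Python sketch of the generic H1 estimator (not the SIM implementation), with a simple digital filter standing in for the device under test:

import numpy as np
from scipy.signal import lfilter, csd, welch

fs = 48000
rng = np.random.default_rng(0)
x = rng.standard_normal(10 * fs)            # reference signal: the EQ input

# Stand-in "device under test": any linear filter works for the demo.
b, a = [1.0, -0.3], [1.0, -0.7]
y = lfilter(b, a, x)                        # measurement signal: the EQ output

f, Pxy = csd(x, y, fs=fs, nperseg=4096)     # averaged cross spectrum (in vs. out)
_, Pxx = welch(x, fs=fs, nperseg=4096)      # averaged input auto spectrum
H = Pxy / Pxx                               # H1 transfer function estimate

for i in range(0, len(f), len(f) // 8):
    print(f"{f[i]:8.0f} Hz   {20 * np.log10(abs(H[i])):+6.2f} dB   "
          f"{np.degrees(np.angle(H[i])):+7.1f} deg")

Because the cross spectrum is averaged, uncorrelated noise picked up in the measurement channel tends to average out, which is part of what makes this kind of measurement workable with program material as the source.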

The Digital Age
The digital era dawned with the introduction of digital delay lines. These replaced the previous generation of analog delays (yes, there were such things, but their dynamic and temporal range was very poor).

The first-generation digital delay was a noise-floor choke point, so it was used only sparingly, when absolutely needed. The digital delay within the modern DSP differs from its first-generation version only in its higher dynamic range and resolution (and better A/D conversion).

The systems of that era had digital delays, analog equalizers, and analog level distribution, all in separate devices, each of which had only a few inputs and outputs.

Once digital equalizers started to hit the market we quickly reached a tipping point in favor of merging all of these functions under one roof. Go to a rental house tomorrow and ask for a component digital delay or analog EQ. There will be hundreds to choose from once you blow the dust off.

There is great advantage to minimizing the number of A/D conversions, the wiring, patch bays, ground references, power supplies, etc. All of these functions are now done with multichannel input and output devices.

We have evolved to two families of DSP: open topology and fixed topology. The open topology systems (e.g., BSS Soundweb, QSC Q-Sys, Peavey MediaMatrix) are inputs, outputs and a mountain of malleable memory. They are an open interior waiting for us to arrange the furniture. Users can pull “devices” off the virtual shelf and “wire” them up to customize them as needed.

Meyer Sound CP-10 parametric EQs in the racks at Carnegie Hall circa 1988.

Fixed topology devices (e.g., Lake Controller, Meyer Sound Galileo) come with the parameters and signal routing pre-arranged, incorporating all the features relevant to system optimization (and more). The filters in the modern DSP mimic the filters of our analog world but can also go beyond them to create exotics. Very few of our optimization needs can’t be solved by the analog filter shapes (parametrics, band filters and all-pass filters), so these are still the workhorses.

The digital exotics such as FIR filters and free-pass filters require adult supervision. But there’s good news: a digital version of the graphic EQ can still be found as an option in most of these devices. Works great with vinyl.

Bob McCarthy has been designing and tuning sound systems for over 30 years. The third edition of his book Sound Systems: Design and Optimization is available at Focal Press. He lives in NYC and is the director of system optimization for Meyer Sound.

Posted by Keith Clark on 05/23 at 10:27 AM