Feature

Monday, November 10, 2014

Live Capture: A Host Of Interface Options To Foster Live Recording

Live recording is a whole lot easier in the age of digital consoles. It not only provides the means to make better board tapes, but also a multitude of options for capturing quality 2-track and multi-track recordings that can be used for everything from virtual sound check to a product that’s ready to go to market.

In the majority of cases, a separate console isn’t needed because today’s digital boards offer plenty of processing and recording options, furthered by the routing capabilities of digital snakes and networking.

Against this backdrop, let’s take a look at a range of options—many of them quite recent—for recording in the live realm, both in terms of console capabilities as well as other interface and hardware options.

Many consoles provide basic 2-track recording to a USB stick, dedicated recorder or computer. For example, Yamaha Pro Audio MG Series mixers (analog, by the way) offer 2 x 2 connection via USB to both computers and iPads, the latter using Apple’s Camera Connection Kit or Lightning to USB adapter.

Then there’s the ability to multi-track record right off the desk to a hard drive via USB. Recently introduced QSC TouchMix Series compact mixers offer the capability to record all tracks (22 for the TouchMix-16, 14 for the TouchMix-8) plus a stereo mix directly to USB hard drive for mixdown or import to a DAW. The Innovason Eclipse GT console with the M.A.R.S. option includes a built-in 64-track recorder that feeds directly to a hard disk plugged into the back of the console.

Allen & Heath Qu Series compact consoles offer integrated multi-track USB recorders, providing 18 channels of 48 kHz recording straight to a USB hard drive. In addition to the built-in multi-track unit, all input channels and the main L/R can be streamed via USB to a digital audio workstation (DAW) for both PC and Mac.

Interface for the new Mackie DL32R.

Mackie’s newest iPad-based mixer, the DL32R, provides two methods for multi-track recording (and playback), controlled wirelessly. The first is direct to USB hard drive, which is currently a 48 kHz, 24-input by 24-output platform that will be expanded soon to 32 x 32 via a free firmware update. An additional USB 2.0 interface is available for 32 x 32 recording and playback that can directly connect with any DAW.

“Port to Port,” a new approach incorporated in Yamaha Commercial Audio QL Series consoles, offers the ability to directly connect input ports to output ports without going through mixing channels, providing more freedom in routing signals between analog, (Audinate) Dante networking, and MY slot inputs and outputs. This provides a lot of capability—for example, mixing analog signals received at the console while directly transmitting them to a multitrack recording system via Dante.

Option cards that plug into the console or stage box are a popular, effective way to interface a variety of analog and digital protocols, as well as to pass audio to a recording device.

Options on new PreSonus RM-series mixers.

New PreSonus RM-series mixers include a 52 x 34 FireWire 800 recording interface and integrated Capture recording software with Virtual Soundcheck mode and Studio One Artist DAW for Mac and Windows.

The rear panel contains an option slot that comes with two FireWire 800 ports, an Ethernet control port, and S/PDIF I/O. The slot also accepts the same option cards as the StudioLive AI-series digital mixers, with Dante, AVB, and Thunderbolt cards coming soon.

The new Avid VENUE | S3L-X is a leading example of very tightly integrated live and recording capabilities, both operating under the unified Avid MediaCentral Platform. Recording can be done directly to Pro Tools (or another DAW) through a simple laptop Ethernet connection.

Further, EUCON and Ethernet AVB network protocols are also supported, ensuring compatibility with a variety of Avid and third-party products.

MADI and ADAT protocols have been popular for years for interfacing DAWs and hardware recorders, with Soundcraft recently releasing the new 64 x 64 MADI-USB combo card for Si Series consoles.

It offers a Cat-5 MADI stream for use with Soundcraft stageboxes and other MADI devices as well as a low latency multi-channel USB interface for live and studio recording to DAWs and other recording systems. The card comes configured to provide 32 x 32 via MADI and 32 x 32 via USB.

Two card options for the new Midas M32 console are tailor-made for recording. The DN32-USB card supplies 32 x 32 routing of tracks to PC or Mac computers with USB 2.0 connectivity, while the DN32-ADAT card provides 32 channels of ADAT inputs and outputs that can connect to computer interface cards or stand-alone recording decks.

The new 64 x 64 MADI-USB combo card for Soundcraft Si Series consoles.

Many manufacturers select a single network protocol to route signals between their stage boxes and the console, but also offer a network “bridge” that can convert between different protocols, allowing the end user to interface with a variety of gear. Solid State Logic (SSL) Live consoles have the MADI-Bridge, a MADI-to-Dante IP audio network interface that provides 64 channels at 48 kHz or 32 channels at 96 kHz.

Roland Pro AV consoles rely on the S-MADI REAC Bridge, which allows interfacing between the company’s REAC (Roland Ethernet Audio Communication) protocol and MADI. It supports 44.1 kHz and 48 kHz and can sync to Word, MADI or REAC clocks.

DiGiCo UB MADI.

There are some very smart ways to take advantage of the ubiquity of USB. For example, the DiGiCo UB MADI is a hot-pluggable USB bus-powered device for USB 2.0 that can handle 48 channels of I/O simultaneously (full-duplex at 48 kHz). It can clock to itself, or a valid MADI, AES3 or Word clock.

A few years ago, RME devised the MADIface USB, an interface approach that’s proven popular. Using optical MADI connections, it’s able to send and receive up to 64 channels of digital audio over distances of up to 2,000 meters (6,500 feet). It can be used as a bridge between USB 2.0 and MADI, or as a MADI repeater between MADI-equipped units.

The MADIface USB can also convert between MADI optical and coaxial formats, and supports 64 x 64 at 48 kHz, 32 x 32 at 96 kHz, and 16 x 16 at 192 kHz.

Stand-alone hardware recorders are making a comeback because they offer simple and reliable operation in a rugged format that can be rack mounted, perfect for regional and touring companies. They offer recording to internal or external media, or even both.

JoeCo offers up the BLACKBOX recorder series in a variety of 24- and 64-channel units with each unit able to switch between 44.1, 48, 88.2 and 96 kHz sample rates.

The new flagship BBR1MP supplies 24 channels and has 24 integrated microphone preamps (96 kHz). With balanced inputs and outputs plus user-installable MADI and DANTE card options, it provides a lot of connection options, and is fully controllable via the JoeCoRemote app for iPad.

A variety of other BLACKBOX options are available, including models with 24 channels of unbalanced I/O, 24 channels with balanced I/O on D Sub connectors, 24 tracks with ADAT and unbalanced analog I/O, and 24 channels with AES/EBU and unbalanced connections. Need more tracks?

The larger BBR64MADI offers 64 channels and sports coaxial and optical I/O for MADI with an 8-channel balanced line input option. Meanwhile, the BBR64DANTE offers 64 channels with Dante I/O and also has an 8-channel balanced input option.

The new JoeCo BLACKBOX BBR1MP.

Cymatic Audio just released the uTrack 24 that records at 96 kHz sample rate directly to off-the-shelf USB hard drives. It’s just 1RU and provides 24 channels of balanced input/output through 25-pin D-Sub connectors.

The Roland Pro AV R-1000 is a rack-mountable, dedicated 48-track unit that can record and play back audio directly from the REAC network. It offers a removable hard disk that can integrate with DAWs, with about 20 hours of 44.1/48 kHz recording using a 500 GB HDD. Coupled with an S-MADI REAC Bridge, it can easily integrate into a MADI network.
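That 20-hour figure checks out with some quick arithmetic. Here’s a minimal sketch of the math, assuming 24-bit samples (the article gives only the track count, sample rates, and drive size):

```python
# Rough recording-time estimate for a 48-track recorder on a 500 GB drive.
# Assumption: 24-bit PCM; the article states 48 tracks and 44.1/48 kHz operation.
tracks = 48
sample_rate = 48_000        # samples per second, per track
bytes_per_sample = 3        # 24-bit audio

bytes_per_second = tracks * sample_rate * bytes_per_sample   # about 6.9 MB/s
drive_capacity = 500e9                                        # 500 GB

hours = drive_capacity / bytes_per_second / 3600
print(f"~{hours:.0f} hours")   # roughly 20 hours
```

The same sum is handy for sizing drives for any of the recorders mentioned here: channel count times sample rate times bytes per sample gives the data rate, and drive capacity divided by that rate gives the running time.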

Allen & Heath offers the ICE-16 stand-alone recorder, which can record 16 tracks of audio directly to a USB key or hard drive, and up to 6 hours of 16-channel audio can fit on a 32 GB USB stick. It has 16 analog inputs and outputs and offers both USB and FireWire connectivity, and can also function as a 16 x 16 interface to a computer, streaming 96 kHz over FireWire or USB 2.0. Units can be linked to expand channel count. The ICE-16D, a balanced I/O version, has fully balanced inputs and outputs on standard D Sub connectors.

Allen & Heath ICE-16.

At the other end of the spectrum are several stand-alone 2-track recording options that record directly to USB, CD, solid state media, or a combination of all three. The TASCAM SS-200 records WAV and MP3 files to CompactFlash, SD/SDHC, and USB sticks in a 1RU format. The unit has XLR balanced and RCA unbalanced I/O, plus coaxial S/PDIF and AES/EBU connections.

The Denon Professional DN-700R Network SD/USB can record to SD/SDHC and USB media in MP3 and WAV at 96 kHz. The unit includes a dual record feature that allows it to simultaneously record to two media options for primary and backup recordings.

iPads are everywhere, with a variety of apps available that foster recording and mixing, but getting audio in and out of the iPad requires an interface. That’s where dock-style interfaces come in.

Focusrite iTrack dock.

Alesis offers two dock units, including the iO Dock II that works with iPad, iPad 2, iPad 3rd generation and iPad 4th generation with 30-pin or Lightning connectors. The unit offers two combo XLR-1/4-inch input jacks and works with virtually any Core Audio or Core MIDI app available from the App Store. And the Alesis iO Mix delivers a bit more functionality by adding faders and recording up to four channels at once.

The Focusrite iTrack dock accommodates iPad, iPad Air and iPad Mini that have Lightning connectors. Equipped with Focusrite mic preamps and 96 kHz recording, it can charge and power an iPad at the same time, and it too works with Core Audio apps.

Finally, Apogee Electronics makes a desktop interface for iPad called the Quartet that provides 4 inputs and 8 outputs, and it also includes 4 mic preamps. Being a non-dock style unit, you can leave your iPad in its case during use.

Senior contributing editor Craig Leerman is the owner of Tech Works, a production company based in Las Vegas.

Posted by Keith Clark on 11/10 at 06:49 PM

Creative From Conventional: A Unique Stage Sound Approach For Jeff Beck

This is a story about a man and his side fills.

Recently I had the opportunity to hear a show where the side fill arrays were used in a very unconventional manner—as components of an artist’s guitar rig.

This twist on a traditional monitor treatment piqued my curiosity, and in learning more I found an audio practitioner who reminded me that there are ways to apply conventional kit in an unconventional manner that, if rendered successfully, can support creativity in addition to amplifying the performance.

Usually I shy away from superlatives when describing professional musicians. We shouldn’t have to use hype; touring artists are supposed to be great players. On this occasion, though, please allow me an exception. Guitarist Jeff Beck is in a class by himself, and that’s no hyperbole.

With such a unique talent onstage, I wasn’t surprised to discover some out-of-the-box thinking by his monitor engineer that helps this exceptional artist deliver performances that are genuinely in a class by themselves. That monitor engineer (and production manager) is Shon Hartman, charged with implementing the onstage sonic environment in which the artist performs, a job he’s filled on every Beck tour since 2006.

Beyond Utility
Hartman started in the audio business unexpectedly when a friend who owned a small sound company in Northampton, MA booked too many shows and needed some help.

Shon Hartman

Soon after, he was working local shows for regional legacy promoter Don Law and mixing in local clubs The Iron Horse and Pearl Street. He was picked up by Ben Harper in 1996, transitioned to the staff of Rat Sound Systems (Camarillo, CA), and eventually moved along to a life of freelance touring that now consists mostly of supporting Beck and punk-rock purveyors The Offspring.

I caught up with Hartman after sound check on the artist’s recent concert tour, and he described the unique methods he’s formulated to produce a comfortable environment to help Beck to ply his trade.

“Everything that Jeff is doing is so organic; he’s unlike any other musician I’ve worked with,” he notes. “Over time I’ve developed techniques that are a bit unorthodox but serve his needs very well. Most musicians use monitors as a utility, to make their instrument louder. With Jeff, every piece of equipment in his guitar’s signal path, including the monitor system, is an extension of his instrument. He actually uses the side fills to contour his sound.”

Hartman deploys a combination of d&b audiotechnik M2 floor monitors on stage and L-Acoustics ARCS loudspeakers with dV-SUB subwoofers as side fill components. He mixes on a Midas PRO6 console, with the package supplied by Shubert Systems (North Hollywood, CA). His equipment choices are, candidly, conventional, but it’s how he has chosen to deploy some of them that makes the difference.

Specifically, he’s devised an untraditional approach to configuring and aiming his side fill arrays. There are two arrays, each consisting of two ARCS enclosures separated by a single dV-SUB. Each ARCS provides a 22.5 degree horizontal dispersion in this configuration, and the fact that the enclosures are uncoupled results in a very directional coverage pattern.
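As a rough feel for how narrow that is, simple geometry (nominal dispersion angle only; real enclosures don’t have hard pattern edges) gives the beam width $w$ at a distance $d$:

$$ w \approx 2d\tan\left(\tfrac{22.5^\circ}{2}\right) \approx 0.4\,d $$

In other words, about 10 feet across the stage the nominal pattern is only around 4 feet wide, which is why the coverage behaves like the narrow “alleys” Hartman describes below.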

The two arrays are placed asymmetrically, which creates defined zones running across the stage where the artist can find “sweet spots” to stand in. The stage right array is located about 6 feet upstage of the stage edge, rotated on-axis slightly downstage. The stage left stack is situated about even with the downstage edge, and at first glance, it looks as though it’s aimed cross-stage, but it’s actually rotated slightly upstage.

Hartman’s stage plot for Beck and his band.

The result is two very directional, narrow angled cross-stage coverage patterns, or as Hartman describes them, alleys. “Jeff can walk into one or both coverage patterns,” he explains. “What I like about this setup is that it allows me to micro-manage the onstage acoustical space. Unlike having huge side fills washing all over the deck, I can keep levels high for wherever Jeff walks, yet prevent all of that level from clipping the other guitar player, or the bass and drums.

“You’re probably thinking this approach might generate a lot of comb filtering, but it’s not been a problem,” he adds. “It allows me to create a real tight and very directional focus.”

In And Out
Beck uses three Marshall cabinets to amplify his guitar, one facing downstage and two facing upstage. “The majority of the time, the downstage-facing Marshall is not on, and if it is, it provides just a little bit of fill for Jeff,” Hartman says.

One of the upstage-facing amps is used to create an ambient guitar sound, while the other serves as the primary amp, miked with an AKG C 414 and a Shure SM57. This signal is fed at a very high level to the side fills.

Hartman elaborates: “It’s an extremely loud stage. A powerful stage. Jeff is moving a lot. I’m giving him a level where it’s slamming out there, but with this rather unusual side fill solution, he can walk in and out of the defined coverage areas to get sustain. He does have the guitar rig behind him for a taste of his guitar, but he mostly prefers the side fills to get the right blend.”

Lead singer Jimmy Hall uses in-ear monitors to help him deal with the high ambient levels. “Jimmy has a wedge, and he’s loud in the side fills as well,” Hartman says. “But sometimes it’s better for him to put his ‘ears’ in and retreat a little bit from what he’s being bombarded with. He deals with it well though.”

A more conventional setup is used with the other musicians. Guitarist Nicolas Meier and bassist Rhonda Smith are supplied with d&b M2 wedges, while drummer Jonathan Joseph has an ARCS cabinet, a dV-SUB, and a thumper on his drum throne.

During the show that followed our sound check conversation, I listened from behind the monitor console for a couple of songs and was able to hear the various mixes as Hartman cued them up. Nothing particularly unusual in the mixes. But if I leaned my head past the monitor desk, the level on the deck was loud—very loud. This appeared to be mostly from side fills, and Beck really did use the arrays as extensions of his guitar rig.

And just as Hartman described, the artist finds a sweet spot in one of the aforementioned alleys, settles down for a solo, and generates harmonics and sustain that I simply did not believe were possible.

It’s always satisfying to see audio practitioners applying their equipment and expertise in new ways; it’s how the craft advances. With the unique approach described here, I observed a willingness to think inventively in a quest to provide a listening environment where the artist can not only hear, but also create.

Danny Abelson believes that an old soundman can always learn new tricks if he’s willing to listen.

Posted by Keith Clark on 11/10 at 02:55 PM

In The Studio: The “Snap Track” Secret For Mixing Drums (Video)

Article provided by Home Studio Corner.

Looking for a simple way of adding more snap and punch to drum mixes, but without changing the actual mix itself?

In the following video, Joe demonstrates an approach to getting it done, where you can fairly simply add to the dynamics without significantly altering the drum mix itself.

This technique can really come in handy when, for example, after finishing up other parts of the mix, you loop back to check the drums and find they’ve kind of disappeared, devoid of definition.

It’s also a process that has some similarities to parallel compression, but isn’t that involved.

And, here’s the link to the “nerdy video” he references at the end.

Joe Gilder is a Nashville-based engineer, musician, and producer who also provides training and advice at the Home Studio Corner. Note that Joe also offers highly effective training courses, including Understanding Compression and Understanding EQ.

Posted by Keith Clark on 11/10 at 01:45 PM

Building A Legacy: The Story Of Jensen Transformers

The name “Jensen” is likely familiar to many in the pro audio industry as a hallmark of technical excellence, particularly with respect to transformers. And that will live on, especially in light of Jensen Transformers recently becoming a member of the growing group of companies in the Radial Engineering stable.

But the story of how that name came to prominence is assuredly not well known, and it’s a fascinating account.

Born in 1942, Deane Jensen grew up in Princeton, NJ. His Norwegian father, Dr. Arthur S. Jensen, was a physicist who earned his Ph.D. at the University of Pennsylvania, and he became an electronic-imaging expert who was awarded 25 patents, taught physics at the U.S. Naval Academy, and later worked for RCA Labs and the Westinghouse Defense and Electronics Center. In this heady atmosphere, Deane recalled Albert Einstein coming to their home to visit his father.

Deane followed the path to the University of Pennsylvania, majoring in physics as well as electrical engineering. He served as chief studio engineer for WXPN-FM, the school’s student-operated FM station, and after college, worked for several broadcasters in Baltimore and Washington DC as well as a recording studio in Camden, NJ before heading to California.

Accumulated Experience
By the mid-1960s, recording had advanced from just capturing a live performance to using studio processes that could not be replicated in live performances. The Beatles “Sgt. Pepper’s Lonely Hearts Club Band” album, released in 1967, was a prime example. During this period and into the early 1970s, the recording industry became highly competitive, driving improvement in technology.

The finest equipment used discrete transistor amplifiers, often in the form of potted modules. Although the first analog operational amplifier integrated circuit (IC) was introduced in 1963, such ICs saw limited use until inexpensive and easy-to-use versions arrived in 1968. But they didn’t “sound good,” and years would pass before “op-amp” ICs worthy of pro audio applications were available. Likewise, most low-level audio transformers of the period were just adaptations of designs intended for industrial or communications use.

The Jensen JT-DB-EPC transformer at the heart of the Radial JDI direct box.

Against this backdrop, Deane moved to Hollywood in 1966, working as a systems engineer for Studio Electronics Corp. (later UREI), creators of the famous LA-2A and LA-3A limiters. He moved on to positions that furthered his technical and artistic understanding: engineering manager for Sicodim, a manufacturer of high-power lighting controls; systems engineer for Bushnell Electronics, which created custom consoles and complete studios; systems engineer and then VP of engineering for Quad-Eight Sound.

In 1969, Deane supervised the installation of a new Quad-Eight console in Studio A at Wally Heider Recording Studios, for whom he was also consulting. By 1971, he had interviewed Bill Whitlock, whom he recommended as his replacement when he left Quad-Eight.

Deane continued working for Heider, a legend in his own right as both a studio and remote recording engineer dating back to the big band era who worked with a “who’s who” of top rock/pop artists, such as the Grateful Dead, Fleetwood Mac, Crosby Stills & Nash and numerous others. In 1972, Heider sent Deane to API in New York, where he oversaw bus fixes, improvements, and final acceptance of a large recording console that API was building for Heider, and he also served on Heider’s recording crew for Bob Dylan’s live album “Before the Flood.”

This accumulated experience in the recording industry told Deane that audio transformers often were the root cause of “bad sounding” mixing boards and audio systems. He decided to embark on a quest to fix the shortcomings, starting with transformers. Initially, his development lab and office was his apartment in Hollywood. 

A 990 op-amp in “DIY” version.

Ed Reichenbach, who supplied transformers to Quad-Eight, had been building made-to-order transformers since the 1940s for Altec, Langevin, Electrodyne, and others, and Deane used these designs as his starting point. He became obsessed with how audio transformers interacted with system electronics, and began research to solve the “bad sounding” problems.

In the meantime, he’d also started optimizing audio operational amplifier designs while at Quad-Eight, and after leaving, continued to work on an improved design that he eventually called the 990. He completed the design in July of 1979, and his paper describing it was published in the AES Journal later that year.

In 1981, he was awarded a patent for the inductive emitter degeneration feature of the 990. The amplifier set new standards for its combination of low noise, low distortion, slew rate, stability, and high output drive capabilities. When Deane published the circuit, he intentionally made the patent public domain, an indication of his dedication to furthering the state of the art.

Multiple Pursuits
Deane was also a pioneer in computer-aided design. He quickly realized that his work with transformers needed powerful tools to make quick work of circuit analysis and optimization, so he decided to create his own circuit analysis program to run on his Hewlett-Packard programmable calculators (predecessors of today’s computers).

The final version of this powerful AC circuit analysis program, called COMTRAN, allowed him to accurately model and fine tune the transformer designs—optimizing them for use with “real world” electronics. Hewlett-Packard was so impressed with COMTRAN that it became the first third-party software HP ever distributed, and it was ultimately used by hundreds of engineers to optimize their transformer, filter, and amplifier designs.

With the help of COMTRAN, Deane systematically re-designed his transformers to have the response of a Bessel low-pass filter, which by definition has virtually no phase distortion, ringing, or waveform overshoot. He had a level of expertise that rarely existed at competing companies—where audio transformers were just a sideline to other commodity transformers.

Deane officially founded Jensen Transformers in 1974—the first models to carry the Jensen name were “custom” ones, in that they were developed direct from customer requests. This was followed by the JE-series of audio transformers, which quickly became a benchmark of the audio industry.

The HP 9845A computer used to write COMTRAN.

Of particular note is the original Jensen direct box transformer, the model JE-DB-D, which Deane developed after meeting an electrical engineer named John “Jack” Crymes while with Wally Heider. Jack was a pioneer in professional remote audio recording who designed and built the world’s first mobile audio recording truck for Heider in 1974.

Jack discussed with Deane the need to not only convert guitar-level signals down to mic level, but to also completely isolate the guitar amp from the recording console. This led to Deane creating a dual Faraday shield design, where both the primary and secondary windings had their own internal copper foil “Faraday” shields. Another innovation was special acceptance tests for core materials (still a Jensen trade secret).

The handwritten sheet of paper on which Deane wrote down the specs and measurements for the first version of the Jensen direct box transformer, the model JE-DB-D. Handwritten sheets like this were the beginnings of what would eventually become Jensen’s formal printed data sheets.

Until 1979, when he hired office assistant Bruce Black plus software specialists Rob Robinett and Jerry Jensen (no relation) to continue the COMTRAN work, Jensen was essentially a one-man business. By 1980, demand for Deane’s transformers required a move to larger offices on Burbank Boulevard in North Hollywood, and the staff grew to include a bookkeeper and a facility manager, Dave Hill, who today continues to manage operations.

Deane was well known for answering simple application questions with dissertations that often lasted an hour. The famous Jensen 3-ring binder catalog, now a collector’s item, contained the most technically complete data sheets in the industry. Deane’s many papers and lectures on high-frequency phase response and its audible effects have helped many engineers improve their products. As a contributor to the AES Journal, he became well known for his efforts to improve the fidelity of sound in the recording, sound reinforcement, and broadcast industries.

By 1988, tens of thousands of Jensen transformers were in service. Their performance also led to use in diverse applications such as seismic sensing, lab instrumentation, test equipment (including the Audio Precision System 1), and even aboard the space shuttle.

Troubled Seas
In 1983, Deane suffered a debilitating bicycle accident that forced him to use a wheelchair for several years, and he resided in personal living quarters on the Burbank Boulevard property. Further, production quality problems began to increase under new management after Ed Reichenbach passed away in November 1985.

Still, despite these and other setbacks, Deane continued to persevere. Several years earlier, Bill Whitlock had departed for Capitol Records, and when he decided to leave Capitol in 1988, he was asked by Steve Desper to engineer a stereo spatial enhancement system called the Spatializer.

At the same time, Deane also asked Bill to work with him on some consulting work, and for nearly two years, they re-kindled their friendship and worked on a variety of challenging projects as Jensen-Whitlock Engineering at an office in Bill’s home.

Unfortunately, Deane took his own life in 1989. He stated in both his suicide note and his will that he wanted Bill to take over Jensen Transformers. Despite several years of turmoil due to company debt and lawsuits, Bill carried on, saving the company and carrying on Deane’s legacy.

A local contract manufacturer was selected, and transformer production was relocated and brought back on-line following Deane’s precise designs. To ensure quality control, Jensen also purchased its first Meteor Swiss-made computer-controlled winding machine.

Deane sporting a Heider Recording t-shirt in the mid-1970s.

In 1990, Deane was posthumously inducted into the TEC Awards Hall of Fame, along with Quincy Jones and George Massenburg, in recognition of his vital contributions to professional audio.

Fortuitously, the Spatializer project provided much-needed cash flow to keep the doors open at Jensen during those “down” years. Bill also designed the Spatializer Pro and helped Matsushita engineers develop a Spatializer IC for consumer gear. The Spatializer Pro was a joystick-controlled multi-channel spatial image enhancement device. The design included the first commercial use of the patent-pending InGenius balanced inputs and novel active decoupling in its power supply.

Spatializer Pro has been used to enhance hundreds of television broadcasts, motion pictures and compact discs by such recording artists as Michael Jackson (the “HIStory” album), Quincy Jones, Barbra Streisand, Bonnie Raitt and The Eagles.

Bill Whitlock

Bill also led the transition from the JE transformer design to JT design, which improved longevity—so much so that it’s backed by a 20-year warranty. In addition, he revised the format of Jensen data sheets and added several in-depth application notes, along with nearly a hundred application schematics to the catalog.

From the beginning, it had bothered Bill that he couldn’t explain precisely why a transformer solved real-world system noise problems so much better than the “active-balanced” differential amplifier input circuits that had largely replaced input transformers in audio equipment. After exhaustive sessions of circuit simulation and analysis, and conversations with colleague Neil Muncy, the “a-ha moment” came and he shared his findings in a 1994 AES paper. What he found also ignited his interest in debunking the myths about balanced interfaces that were so rampant in the audio industry.

The result is his landmark 1994 AES paper “Balanced Lines in Audio: Fact, Fiction, and Transformers,” published along with one by Neil Muncy in the AES Journal issue of June 1995. According to the AES, it has become the best-selling issue ever printed. Bill has been an AES member since 1966 and active in standards work, being chairman of the AES Standards Committee working group that produced the AES48 standard on interconnections and grounding. (He’s also a contributor to LSI, ProSoundWeb, and several other publications.)

Bright Future
In 1994, responding to several customer requests, Bill decided to offer Jensen’s transformers in “plug-and-play” packages under the ISO-MAX brand. The first offering, model CI-2RR, was a 2-channel box with RCA connectors aimed at consumer interfaces. Its popularity grew rapidly in the emerging home theater installation market, giving installers a no-compromise, safe solution for ground-loop issues that had been routinely dealt with by illegal and dangerous “ground lifting” at equipment power cords. Jensen expanded the line to include isolators for balanced audio, video, and cable TV as well as special-purpose interface boxes of all kinds.

A row of Jensen transformers inside a PreSonus M80 preamp.

By 2012, the company decided to bring all manufacturing in-house in order to better control quality. More Meteor winders were purchased, bringing the total to five. The transition took longer than anticipated, slowing delivery schedules. One of the most affected customers was Radial Engineering, maker of the game-changing JDI direct box (among dozens of innovative products). In fact, the company’s relationship with Jensen stretches all the way back to 1992 when Radial served as Jensen’s Canadian distributor.

With Bill contemplating retirement and pondering the company’s future, Radial owner Peter Janis expressed an interest in buying Jensen, and an agreement was signed in April 2014. Subsequently, the company has already invested in two more Meteor winders, increased raw goods inventory, and hired more assembly workers to meet delivery demands. The company will remain at its current home base in Chatsworth, CA, and Bill is staying onboard as technology manager.

And that’s how a name can become ubiquitous in an industry, and further, how its legacy will continue to live on.

Editor’s note: Our grateful thanks to Bill Whitlock for his work in the composition of this article.

Posted by Keith Clark on 11/10 at 12:08 PM

Thursday, November 06, 2014

Back To The Future: Looking Into Notable Recent Microphone Milestones

Though microphones are more than a century old, the breadth of what is available continues to grow, and new innovations and applications are regularly introduced—while some of the old standards are given new life with a different form factor or are adapted to wireless use.

In interviews with a number of mic developers, we’ll explore some relatively recent milestones, as well as miniaturization and the impact of new materials and processes.

Dynamic Developments
Shure’s 1939 introduction of the Model 55 “Fat Boy” brought the world’s first unidirectional, single-element dynamic microphone. Building on a history of transducer development going back to 1925, the company’s seminal designs helped shape mic technology. Matt Engstrom, category director for wired products at Shure, describes how the availability of new materials has driven changes in design.

“When the world saw the introduction of plastic over a hundred years ago, it really helped with transducers themselves, and it paved the way for the dynamic microphone, which as far as live sound goes really helped turn the industry on its head,” Engstrom states. “Shure was lucky enough to be around at that time and take advantage of that still relatively new material called plastic.”

John Born, product manager for wired microphones at the company, adds that “the Unidyne element developed for the Model 55 is the basis for every directional dynamic microphone made today by anyone, the design that Shure invented in 1939. This year is the 75th anniversary of the technology. We’ve just basically iterated this design with the Unidyne II and Unidyne III in the 1940s, which was the basis of the SM58 and SM57.”

The Sennheiser MD 421 pushed the envelope on sensitivity and frequency response for dynamic designs and still enjoys wide usage.

For Sennheiser, a big milestone in dynamic mics proved to be the introduction of the MD 421 in 1960. Brian Walker, technical services and market development, points out that the mic “has just become one of those absolutely classic dynamic microphones; people still love the sound of it. It’s still in production today, and at the time really reset the bar for what a dynamic mic is capable of doing in terms of sensitivity and frequency response.” The MD 421 is stated as providing a usable frequency response of 30 Hz – 17 kHz, sensitivity of -54 dB, and rear attenuation of -18 dB at 1 kHz—along with a hum compensation coil.

On the subject of protecting dynamic microphones from induced hum, Electro-Voice introduced this type of compensation coil to mics in 1934—another development that is ubiquitous today. Jim Long, whose career with EV spanned from 1963 to 2011, points to several other of the company’s contributions to microphone technology.

An Electro-Voice RE15 with a Variable-D slot along the handle.

One of them is Variable-D technology, which uses a series of ports to minimize proximity effect with a directional mic. It’s most widely recognized in the RE20, with its wide application in broadcast as well as for kick drum and instrument miking.

Long continues, “The Variable-D principle appears to have been first used in the Model 664 microphone from the mid-1950s (a patent was filed in 1954), and a couple years later AKG introduced a dynamic with a much more svelte appearance and a 3/4-inch barrel—which was the impetus for EV to develop the RE15, which had a Variable-D slot along the handle. Though you wouldn’t normally do this, if you talk into the side of the microphone up near the head, you’ll hear the high frequencies, and as you move down the slot along the handle, the lower frequencies are emphasized.  But when you talk into the head of the mic either close up or at a distance, the frequency response is the same.”

Rugged Ribbons
Ribbon designs were a fairly early innovation, dating back to the early 1920s, with the RCA PB-31 introduced in 1931. They offered superior frequency response compared with the condensers of the time, yet their structure was typically too delicate to go on the road. Recent improvements in materials and technologies have made ribbon mics tough enough to tour.

Gary Boss, Audio-Technica marketing director, explains that “we wanted a ribbon that could endure the rigors of the audio world – both studio and stage, and the mics we came up with (the AT4080 and AT4081) have a lot of firsts. We developed a tool where the ribbon itself is stamped with a patterned imprint, providing a lot more rigidity than earlier ribbons that were corrugated by running the material through two gear-like cogs.

“The mics actually have two ribbons mounted back-to-back, which gives a much higher output,” he adds. “And they are active mics, so are not preamp-dependent and have more of a condenser-like output level.”

Shure’s Born notes, “We’ve really advanced technology in our ribbons with Roswellite material, creating a really low distortion, really high SPL mic that you can pretty much take an airgun to and the ribbon isn’t going to warp, tear, or change its shape. It’s not foil, but is a molecularly-bonded film.”

The Audio-Technica AT4080 and AT4081, two ribbon mics designed for the road.

Roswellite is used in the company’s KSM313/NE and KSM353/ED mics. In addition, beyerdynamic has taken ribbon mics into a handheld format with the M 160 dual-ribbon instrument and TG V90r vocal mics, and Royer has advanced these technologies with the R-101 and other models. 

Rare-Earth Magnets
Using a powerful neodymium-iron-boron magnetic structure to drive dynamic mics is widespread today, but had its beginnings in EV’s microphone lab in the mid-1980s. The N/DYM program formed around the company’s determination to create a rugged dynamic mic for live performance that integrated the key characteristics of condenser mics – including extended high-frequency response and a considerably “hotter” output. Al Watson headed the development team, with senior engineer Mike Bryson shepherding the design effort. 

The polar pattern of a Shure KSM353ED, a ribbon design enhanced by Roswellite.

The first step was to find a magnetic material with enough power to double the gap flux, compared with then state-of-the-art alnico magnets. Watson had learned of neodymium magnets, which were not yet commercially available, and “bought one square magnet for $90 and machined it down to size” to begin the design experiments.

Numerous trials with that one magnet eventually resulted in a magnetic structure that yielded about 6 dB greater output. The company also took the risk that these magnets would be available in production quantities by the time the products were ready for launch.
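The roughly 6 dB figure follows directly from the standard moving-coil relationship (this is general transducer physics, not anything specific to the N/DYM design): the voltage induced in the voice coil is $e = Blv$, so doubling the gap flux density $B$ doubles the output voltage for the same coil length $l$ and diaphragm velocity $v$:

$$ 20\log_{10}\!\left(\frac{2Blv}{Blv}\right) = 20\log_{10}2 \approx 6\ \text{dB} $$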

Electro-Voice led the neodymium revolution in mic design, and it’s still at the heart of designs like the N/D767a and N/D468.

It was only the beginning of the effort, according to Watson. To complement the more powerful magnetic structure, Bryson developed a porting system and diaphragm that extended the LF and HF response—“a very advanced capsule design for the time”—as well as advanced shock mounting and handle design.

Watson adds that though neodymium was only responsible for increased sensitivity, for simplicity of the marketing message it was credited with all of the mic’s other audio attributes. 

Innovations in the N/DYM program extended into manufacturing, including the development of a conical engraver to letter the mic collar with model and serial number, a production-line test chamber to QC plus print and record a response curve for each mic, and an automated process mated with a measurement system to adjust the mic damping for mic-to-mic consistency.

Roadworthy Condensers
In 1961, Sennheiser introduced MKH mics, using an RF condenser principle of operation. As explained by Walker, “With a regular condenser, you put 48 volts of DC across both plates, and as the front plate vibrates the capacitance changes. What Sennheiser did is to take this another step. Instead of using a DC voltage, we put a sine wave of 8 MHz across there.

“As the diaphragm moves,” he continues, “it actually creates an FM modulation, which is then decoded to audio right inside the microphone. Today we use AM, and using some better techniques, we get an even better signal than with FM.”
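As a rough illustration of why a tiny capacitance change can frequency-modulate the carrier, consider a simplified model in which the capsule capacitance sets the frequency of an LC oscillator (an assumption for illustration only; the article doesn’t detail Sennheiser’s actual circuit):

$$ f = \frac{1}{2\pi\sqrt{LC}} \quad\Longrightarrow\quad \frac{\Delta f}{f} \approx -\frac{1}{2}\,\frac{\Delta C}{C} $$

Under that model, a diaphragm excursion that changes the capsule capacitance by just 1 percent would shift an 8 MHz carrier by roughly 40 kHz, which the demodulator inside the mic then converts back to audio.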

Born brings up the Shure SM81 pencil condenser—“the world’s first professional pre-polarized electret condenser mic”—introduced in the mid-1980s. He continues, “By professional, I mean it’s rugged enough to be used on stage and didn’t lose voltage over time.” Building upon this achievement, the dual-diaphragm KSM9 was the first switchable pattern handheld condenser mic designed for live performance.

A-T’s Gary Boss elaborates, “As PA fidelity increased, various sound engineers experimented with mics like our ATM4033 on stage, which was one of the first cost-effective side-address studio microphones—happening right around the birth of digital recording, ADATs, and so on. We responded to their needs for a roadworthy studio-quality mic with our dual large-diaphragm, multi-pattern ATM4050, which was a very pivotal mic on stage.  It started to become a guitar cabinet and overhead mic of choice, being very rugged and under $1,000. You could pretty much use it on any tour.”

One of the versions of the Earthworks SR30 with its measurement-mic form factor.

Earthworks has deep roots in measurement microphones, noted for their accuracy and wide frequency response. Engineering director Daniel Blackmer discusses how the hypercardioid, “high-definition” SR40V handheld condenser came into being. “Touring engineers began experimenting with our SR30 cardioid mic, with its measurement-mic form factor, for vocals, and asked if we could develop a rugged handheld with similar audio characteristics. After an intensive development process, along with field testing on critical tours, we were able to deliver a mic that exceeded their expectations and that we warranty for 15 years.”

Blackmer continues, “The main thing has actually been to get back to the true basics: to fully understand the acoustic impulse response, and what a true free field microphone actually is, and to design every product based on that philosophy. And to fully realize that the testing methods that are often accepted as ‘standard,’ such as reciprocity calibration, have some very substantial drawbacks that are not properly acknowledged (such as relying heavily on mathematically modeling the coupler between the microphones’ diaphragms).

“Earthworks has developed testing methods and apparatus that allow us to produce and test mics in the process of being assembled in such a way that we can maintain and tune each unit in a true free field condition, to see its actual behavior in the real world.”

Rounding out the condenser discussion, the form factor of the DPA d:facto II handheld mic has a creative innovation, such that the mic capsule can be quickly transferred from a wired handheld to a wireless format – allowing the same transducer to be used differently as the need arises. Product manager Mikkel Nymand notes that “we started all over with a new design that made it possible to use the same capsule for a wired, phantom-powered application as well as within adapters for the most commonly used wireless handles.”

Miniatures
A major development within the last couple of decades is in the quality, consistency, and increased miniaturization of mics. The Sennheiser MKE-2 was a seminal step in this process, as was the Countryman ISOMAX. In a conversation with Chris Countryman, he elaborated on his company’s concepts and processes toward miniaturization.

He notes that it begins with stripping down the elements of microphone technology to the basics, followed by an ongoing review of materials and methods developed in other industries and incorporating them into the designs, very careful process control in manufacturing, and extensive QC and testing.

“We push for every last thousandth of an inch,” Countryman says. “For example, we’re drilling precisely placed holes the size of a human hair in our H6 and other capsules, exploring paint chemistry to be able to make a very thin film that will still be flexible, moisture resistant, and durable, and researching adhesives.”

He adds, “We use semi-conductor etching technologies to make micro-structures that form the core of a capsule. And the Kevlar stuff – I can’t tell you how many cable designs we went through to get one that’s supple yet is also very strong.” 

The Countryman ISOMAX microphone, now also available mounted on a headset.

DPA mics are widely used in touring sound for their combination of small size and sound quality. The 4060 Series that debuted in 1996 incorporated all of the knowledge and experience the company had acquired into a miniature capsule.

“With these mics as a starting place, we use the technology of interference tubes as one of our main principles to create well-controlled miniature directional mics,” Nymand says. “In addition, our back-plates are pre-polarized at a very high voltage so that the mics can handle very high SPL without distorting, and our unique preamps within the mics use principles found in hi-fi and the Danish hearing aid industries.”

Headset manufacturers must factor wireless transmission into their designs, since most are used in that manner. Countryman says that “about 90 percent of what we sell goes into a wireless device. We have at least 50 unique configurations with various combinations of connectors and wiring schemes to match the requirements of the electronics in the transmitter, and a database that contains hundreds of different wireless systems that we keep up to date.

The latest iteration of the DPA d:screet 4060 omnidirectional microphone.

“Customers also use our headsets with computers for podcasts and other audio applications,” he notes, “so we also must pay attention to USB connections and compatibility with a variety of sound cards.”

On It Goes
This overview has just scratched the surface of more recent mic development and innovation. A key benefit of competition, in conjunction with the advent of new technologies and applications, is that manufacturers push each other to develop products that sound and work better, are more reliable, and can adapt to changing requirements.

Also, when handled with a commitment to users rather than profit being foremost, even pricing competition can lead to better products for less money, rather than to lower quality, disposable goods. With the talent and dedication that these mic designers show, live sound engineers can look forward to new and improved designs, and mic-to-mic consistency—even with those old standards in their mic toolkits.

Gary Parks is a pro audio writer who has worked in the industry for more than 25 years, including serving as marketing manager and wireless product manager for Clear-Com, handling RF planning software sales with EDX Wireless, and managing loudspeaker and wireless product management at Electro-Voice.

Posted by Keith Clark on 11/06 at 05:44 PM

Master Of The House: Superior Sound For London’s Longest Running West End Show

Les Misérables is now officially London’s longest running West End show. It celebrates its 30th anniversary next year, and is without doubt one of the most meticulously run, well oiled theatrical ships in the business.

The audio system at Shaftesbury Avenue’s Queen’s Theatre (Les Mis’ home for the last decade) was the work of renowned sound designer Mick Potter, and the kit was deployed by leading rental house Autograph, so it’s hardly surprising that the sound in the auditorium is just as impressive as the show’s 40-strong cast. We go behind the scenes to find out more…

Although many of you will have undoubtedly seen this magnificent production, here’s a quick summary of the plot: Set in early 19th Century France, Les Misérables is the story of a French peasant, Jean Valjean, and his quest for redemption, after serving 19 years in jail for stealing a loaf of bread for his sister’s starving child.

After breaking parole and starting over, thanks to an inspired act of mercy from a particularly understanding Bishop, Valjean is then hunted down by the ultimate bad cop, Javert. The story jumps through a revolutionary period in France, where a group of young idealists decide to make their last stand at a street barricade, and there’s even a love triangle entwined in there to boot.

It was Cameron Mackintosh, in cahoots with the Royal Shakespeare Company (RSC), who first brought Les Mis to the UK. After listening to [Les Mis songwriters] Boublil and Schönberg’s concept album in 1982, he drew inspiration to bring [Les Mis writer] Victor Hugo’s story to the stage; and on October 8th 1985, it debuted at London’s Barbican Theatre.

Despite some initial bad press, ticket sales immediately soared, and it’s now arguably the most loved musical of all time. After the West End, similar heights of success were achieved on Broadway, and Les Mis has now been seen by more than 70 million people in 42 countries, and in 22 languages around the globe.

It’s still breaking box-office records everywhere, 29 years on. And if you need more convincing, just look at the phenomenon that was Tom Hooper’s 2012 movie adaptation: it won three Academy Awards (was nominated for eight), and grossed close to half a billion U.S. dollars worldwide. Enough said.

Keep Calm & Get Dynamic
Sitting down to watch Les Mis in a quirky old auditorium like the Queen’s Theatre is a fantastic experience; it has that effortless ‘olde world’ vibe, and although acoustically it’s pretty unforgiving, Adrian Cobey does a tremendous job at mixing the sound, especially considering the amount that’s going on at any one time… So what’s his secret?

Adrian Cobey

“Staying calm, and riding the faders… All the time!” Cobey smiles, giving a demonstration on his DiGiCo SD7T at the back of the theatre. I notice not even half the stage is visible from FOH, and ask him how much of an issue that is during the show. “Not much really, as the line arrays and the subs are all buried within the panels, so I get the bottom five boxes or so, plus the delays; some subs are buried in the ceiling here, too, which gives a great image in the room.

“Also, the surround plays a big part. It’s never a point source; the big moments just happen around you, which is great, as you can just drop them away at any time. A show like this, where there are so many intimate moments, but also a lot of big battles, means the dynamics are enormous.”

Despite the console’s fantastic bells and whistles, as Cobey puts it, he works with the SD7T in a surprisingly manual fashion. Most of his previous experience in mixing theatre has been with analog, and he says it’s virtually the same in terms of operation working with the DiGiCo.

“It doesn’t work any other way, Les Mis—and it’s a very delicate beast,” he says. “We don’t have click tracks, so there’s nothing to hide behind; it’s all about the cast, the band, and the FOH operator. So all we’re doing scene by scene is using the control groups, and just recalling what we need on those faders at any one point,” says Cobey, who started out on stage himself, playing in bands, before fusing his love of electronics and music, and hopping behind the console.

“Sure, I’m sending various MIDI messages out to other bits of kit to change for individual scenes, but beyond that, it’s line for line all the way through, and then I’m using all the DiGiCo’s internal effects and processing, which are great. All I’ve got outboard-wise is one external reverb.”

Cobey, who started out on a DiGiCo D5, says one of the key reasons for using DiGiCo consoles is their simplicity and configurability. “I love the SD7T; the fact anything can be anywhere means it really is a truly blank surface, and as a sound designer and as an operator, it’s extremely easy to move things around,” he says. “If you want things grouped together, it’s very simple; and let’s say you suddenly need to do a Sunday concert here, which is the day off for the Les Mis show, there will never be much replugging involved, just soft patching, so you can find it all very quickly. And being a Windows-based system, there are a lot of ways of doing it, so you can find your own little style when working with DiGiCo.”

Cobey is also utilizing QLab at FOH, which talks to the SD7T, and vice versa: “On this show, QLab is remarkably simple, really; it’s running a lot of sound effects. Each battle isn’t just a sound effect piped out of every speaker, though; it’s running a series of them at the same time, so they’re rebounding all over the auditorium to get that ‘in the middle of the battle’, epic feel going on.

“The SD7T is sending MIDI messages to QLab, to update and do what it needs to do, and then that comes back and updates the desk. This is so you can step through the sound effects, and the DiGiCo isn’t stepping through the scenes; it’ll just do the sound effects it needs to do, and when we’re ready to change a scene, QLab and the SD7T are always on the same page, so to speak.”
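For readers curious what that console-to-QLab traffic can look like, here’s a minimal sketch using the Python mido library to send the kind of MIDI message (a program change, in this case) that a desk can emit and QLab can be set to trigger on. The port name and message values are placeholders, not the production’s actual setup:

```python
import mido

# List the available MIDI outputs so the right virtual or hardware port can be chosen.
print(mido.get_output_names())

# Hypothetical port name; in practice this is whatever MIDI interface or
# network MIDI session links the console with the QLab machine.
out = mido.open_output("QLab MIDI In")

# A program change is one common way to step a cue workstation to a specific
# cue; the channel and program numbers here are arbitrary examples.
out.send(mido.Message("program_change", channel=0, program=12))
out.close()
```

In a real rig, of course, the console fires these messages itself from its scene list; the point is simply that the “handshake” Cobey describes is ordinary MIDI traffic travelling in both directions.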

Beautifully Orchestrated
The Queen’s Theatre has quite a shallow orchestra pit, so a fair bit of the sound “just comes straight out”, Cobey says, which can make the intimate moments quite difficult to deal with.

Credit: Johan Persson

“Once you get to a certain point, you really don’t have any control. Every instrument has a DPA mic on it – there are 25 DPAs in total in the percussion room, which is buried under the stage, so it’s very well isolated,” Cobey explains. “They’re such high fidelity microphones, and we’ve been using DPA here for years. We’ve got everything close miked with DPA 4061s, which are superb for capturing the detail of the instruments; and we’ve also got everything miked with overheads, which we use DPA 4011s for.

“The overhead mics are going straight to the surround mix,” he adds, “and that’s all part of Mick Potter’s design: making it bigger as opposed to making it louder.”

A plethora of string instruments are utilised on Les Mis: violin, viola, cello, bass, and a keyboard to back up the string section; and there are three woodwind players, who also double up on clarinet, flute, and sax. Furthermore, there is a trumpet player and a trombonist, the latter of whom doubles up on the French horn.

“They’re all on the DPA 4061 and 4011, in one combination or another, and we wouldn’t use anything else, as the combination of high fidelity and clarity is unbeatable,” insists Cobey. “I don’t crush much, compression-wise, either; and sometimes, because of the arrangement of the show, I end up doing some backwards mixing: when it dies away, you still need to keep the ball in the air, as it were; it’s all about following the dynamic of the show, but also just keeping that energy up, so it doesn’t die away. The dynamics of the show are immense, and it doesn’t run like a typical musical. The principals singing these songs are all the same shape; it’s a sung-through musical, so there’s never time to rest.”

Hats Off
For the performers, foldback is non-existent. They rely on the sound of the mic, and the mix out front; and from my seat, bang in the center of the stalls, it sounded incredible.

“We’ve got all 40 cast members on DPA mics, too; they’re so discreet, and the capsules are tiny. We close-mic everyone with a 4061, and the principals each have a pair, one of which is a backup,” Cobey says. “It’s a tricky balance, really, as in an ideal world, you want a mic here [mimics singing into a handheld], but that’s just not possible in theatre, as all theatre producers want them hidden. Thankfully, the DPAs are not only durable, but they really do deliver; the quality of audio is fantastic, and in this environment, you’re only as strong as your weakest link, so that’s crucial.

“There’s also an extraordinary number of hats being used on this show, and we put a 4061 on each of them, too. It might sound strange, but often, as soon as a cast member puts on a hat, the mic on their forehead becomes covered, so that’s the end of that! So we just add mics to the hats, and then we paint up the 4061 capsules to suit the respective performers’ hairlines, so we have every base covered.”

He adds that the crew is also phenomenal – and it sounds like they have to be. “Les Mis is an oiled ship; there are lots of contingencies, but it’s a strict plot, and the whole team know the show so well. We always keep a dialogue open with the cast and the musical director, and although with musical theatre, there is always a compromise, there are no standoffs or Chinese whispers here; it’s a great team, and it’s very transparent,” he insists, very matter of fact.

Credit: Michael Le Poer Trench

“And having been a musician and performer here and there myself, if there ever is a problem, I can usually see it before it becomes apparent; working in confined spaces with orchestras, I can always tell before the musicians arrive that there won’t be enough room, for example, so I can be ahead of the game for sound checks, which is very useful.”

It’s a tough job, mixing for theatre – no doubt about it. So what advice can Cobey offer to any wannabe theatre engineers?

“[smiles] Don’t dive into Les Mis as your first job in theatre, as it’s a big ask! I’d say get involved in amateur theatre, and bug people; keep on their case, and watch what they do. As a friend of mine says, be a sponge – but without making a nuisance of yourself. Ask to dep on shows, certainly. Being at the desk comes a bit later, but being involved in musical theatre is about learning the radio side of it, and then being involved in the cast; and there are a lot of departments you have to liaise with. Mics travel through wigs and wardrobes, so being a people person is a big part of it.

“You also have to love it to do it, as it’s different every night. People are always on holiday, and the nature of how the Musicians’ Union works means we never have the same set of musicians in; we can have up to eight deps per performance, and that can really affect the show. Not in a bad way, but you end up building up a database of things in your head, really; that’s how I treat it as I come in the building when I see who is on, who is off, and who is covering what, and so on. It’s your job to have that snapshot in your head out there, and do your best to recreate what the designer wants on that day.”

Showtime
I was genuinely gobsmacked throughout this Les Mis performance. It’s powerful, and it’s moving; it’s hilarious, and it’s heartbreaking. In short, it’s a bloody masterpiece. Also evident was the work of the crew, which Cobey had told me about earlier. The scene changes were entirely seamless, and to me, it looked like the whole production went off without a hitch.

Audio-wise, it was just magical, and although it wasn’t loud, it somehow filled the auditorium; when the chorus sang, they were bang on the money, and some of the solos were mesmerising. A shout-out must go to the brilliant Tom Edden for his portrayal of Thénardier, and to the awesome vocal talents of Carrie Hope-Fletcher, who has clearly made Eponine her own. To the whole cast and crew, a big thumbs up. The only question left is: When can I come again?

Headliner editor Paul Watson has 10 years of live touring experience with bands in the UK and the US, and ran an independent recording studio close to London for five years. He also serves as the Europe editor for Live Sound International and ProSoundWeb.

Headliner is a UK-based publication that supports the creative community, focusing on live performances, recording sessions, theatre productions, and major broadcast events. The spotlight is on the technology, but with a lifestyle approach. Find out more here, and subscribe here.

{extended}
Posted by Keith Clark on 11/06 at 03:37 PM
Live SoundFeatureBlogConsolesDigitalEngineerMicrophoneMixerProcessorSound ReinforcementStagePermalink

Wednesday, November 05, 2014

In The Studio: Techniques For Dealing With Phase

This article is provided by Audio Geek Zine.

 
Phase is a constant concern for recording and mixing engineers. Problems with phase can ruin your music; they can easily be avoided or corrected, but first you need to understand how they occur.

This guide will attempt to explain almost everything there is to know about phase, what it is, how it happens, what it can sound like and some techniques to deal with it.

What Is Phase?
I’m going to consult my engineering school textbook Audio In Media for this.

It says:

The time relationship between two or more sounds reaching a microphone or signals in a circuit. When this time relationship is coincident, the sounds or signals are in phase and their amplitudes are additive. When this time relationship is not coincident, the sounds or signals are out of phase and their amplitudes are subtractive.

No wonder people are confused about phase. Even I got confused by that, and the other entries on phase in the book were even worse. (I guess I shouldn’t read books.) I’ll try to break it down more simply.

Phase Vs Polarity
Let’s define things a bit more, starting with phase and polarity. These two terms are often used interchangeably, but they ARE different.

Phase is an acoustic concept that affects your microphone placement. Acoustical phase is the time relationship between two or more sound waves at a given point in their cycle. It is measured in degrees. When two identical sounds that are 180 degrees out of phase are combined, the result is silence; any phase difference in between results in comb filtering.

Polarity is an electrical concept relating to the value of a voltage, whether it is positive or negative. Part of the confusion between these concepts, besides equipment manufacturers mislabeling their products, is that inverting the polarity of a signal, changing it from plus to minus, is basically the same as making the sound 180 degrees out of phase.

In case you missed that: phase is the difference in waveform cycles between two or more sounds. Polarity is either positive or negative.
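
To make the distinction concrete, here is a minimal sketch in Python (purely illustrative, using NumPy): polarity inversion simply flips the sign of every sample, while a phase shift moves the same waveform in time. For a steady 250 Hz sine, a 2 ms shift happens to equal half a cycle, so both versions cancel against the original; for real program material they behave very differently.

import numpy as np

sr = 48000                            # sample rate in Hz
t = np.arange(sr) / sr                # one second of time values
sig = np.sin(2 * np.pi * 250 * t)     # a 250 Hz sine wave

inverted = -sig                       # polarity inversion: plus becomes minus
shift = int(0.002 * sr)               # a 2 ms time shift, expressed in samples (96)
delayed = np.roll(sig, shift)         # phase shift: same waveform, later in time

print(np.max(np.abs(sig + inverted)))   # exactly 0 -- perfect cancellation
print(np.max(np.abs(sig + delayed)))    # ~0 for this sine, since 2 ms is half its cycle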

In And Out Of Phase
When two sounds are exactly in phase (a 0-degree phase difference) and have the same frequency, shape, and peak amplitude, the resulting combined waveform will be twice the original peak amplitude. In other words, two sounds exactly the same and perfectly in phase will be louder when combined.

Two waves combined that are exactly the same but have a 180-degree phase difference will cancel out completely. Silent output. These conditions rarely happen in real-world recording; more likely, the two signals will either be slightly different, like two different mics on the same source, or the phase difference will be something other than 180 degrees.

In cases where the signals are not exactly 0 or 180 degrees apart, or the signals are somehow different, you get constructive and destructive interference, or comb filtering. The peaks and nulls of the waveforms don’t all line up perfectly, so some frequencies will be louder and some will be quieter. This is the key to combining mics on a single source.

Examples
Here are some examples using sine waves.

This is a 250 Hz sine wave with a peak amplitude of -20 dB: LISTEN

If I add another track that is exactly the same and combine them (in mono), the output is the same waveform but louder, with a combined peak amplitude of -14 dB. They have a 0-degree phase difference, so their amplitudes are additive. LISTEN
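
That 6 dB jump is just the arithmetic of doubling an amplitude; here is a quick, illustrative check in Python.

import math

peak_db = -20.0
peak_linear = 10 ** (peak_db / 20)        # convert the peak level in dB to a linear amplitude
combined = 2 * peak_linear                # two identical, in-phase signals add
combined_db = 20 * math.log10(combined)
print(round(combined_db, 2))              # -13.98, i.e. roughly -14 dB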


image


Now if I change the phase, the time relationship between these waveforms, by having the second track start 2 milliseconds later, it sounds like this: LISTEN. Silence; this is 180 degrees out of phase.
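
Why does a 2 ms offset produce exactly 180 degrees here? Because one cycle of 250 Hz lasts 4 ms, so 2 ms is half a cycle. A quick, illustrative check:

freq = 250.0                           # frequency of the test tone in Hz
delay_ms = 2.0                         # offset applied to the second track
period_ms = 1000.0 / freq              # one cycle at 250 Hz lasts 4 ms
phase_shift_degrees = 360.0 * delay_ms / period_ms
print(phase_shift_degrees)             # 180.0 -- hence the complete cancellation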


image


Here is the same kind of thing with some white noise at -20 dB; the same audio file is copied to another track and combined. Louder, same as before: -14 dB combined.


image


Now I’ll use the invert function on the second track, and since these sounds are exactly the same, they completely cancel out.


image


I think we understand it now, so here’s something slightly more interesting. I’ve taken the first second of the white noise clip and repeated it nine more times. The second track is the same, but on each repeat I’ve delayed it by an additional 1 ms.

This gives an idea of the constructive and destructive interference and the resulting comb filtering. If you look at the frequency spectrum in an analyzer, you will actually see notches cut out like the teeth of a comb. LISTEN
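
For a simple delay-and-sum like this, the notch frequencies are predictable: the first null falls where the delay equals half a cycle, and further nulls repeat at every additional full cycle. A minimal, illustrative calculation:

delay_ms = 1.0                       # delay between the two copies
delay_s = delay_ms / 1000.0

# Nulls occur where the delay equals an odd number of half-cycles.
notches = [(2 * k + 1) / (2 * delay_s) for k in range(5)]
print([round(f) for f in notches])   # [500, 1500, 2500, 3500, 4500] Hz for a 1 ms delay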

Real World Examples
Here is a bass guitar going into a DI box; the signal splits and goes to an amp and to the audio interface. The amp is miked, and the mic goes into the interface too.

This is a very common way to record bass, but you may run into phase problems when the near-instantaneous electrical signal from the DI box is combined with the sound that has to travel through the air from the speaker to a mic placed some distance away.
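
The miked signal lags by the acoustic travel time from the speaker to the mic. A rough, illustrative estimate of that lag (the one-meter distance and 44.1 kHz rate are assumptions for the example, not measurements from this session):

speed_of_sound = 343.0      # meters per second, roughly, at room temperature
mic_distance_m = 1.0        # assumed speaker-to-mic distance
sample_rate = 44100         # assumed session sample rate

delay_s = mic_distance_m / speed_of_sound
print(round(delay_s * 1000, 2), "ms")           # ~2.92 ms
print(round(delay_s * sample_rate), "samples")  # ~129 samples of lag

Even a fraction of a meter of extra distance adds tens of samples, which is the same order of magnitude as the offset found later in this example.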


image


Here is the bass DI signal: LISTEN

Here is the microphone signal: LISTEN

Combined they sound a bit funny, definitely some hollowness going on: LISTEN

Correction By Inverting Polarity
The first thing I would try in troubleshooting this is to invert the polarity of one of the tracks and see if that makes it better or worse.

I know that because there is a time delay between these tracks, inverting the polarity alone won’t fix everything.

I can invert the polarity in two different ways: I can either use an offline process to invert the whole WAV file, or I can insert a plug-in on the track.

Some DAWs have a polarity reverse button on each channel of the mixer that will do the same thing.

I’m just going to play the tracks and invert the polarity a few times so you can hear the difference: LISTEN

Correction By Time Adjustment
I’m going to keep it the way it was and move on to the next strategy: moving the tracks around in time. The microphone track is delayed just slightly compared to the DI track, so I can either nudge the microphone track earlier or delay the DI track a little.

I’m going to delay the DI track just slightly. To do this I’m going to insert a delay plug-in that works in samples, such as the Time Adjuster plug-in in Pro Tools.

I’m going to invert the polarity, then scroll through the delay value 1 sample at a time until I achieve the most cancellation, then switch the polarity back to normal. I found that 152 samples did the trick. You can also zoom in really close on your waveforms and nudge a track until it lines up. LISTEN
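
If you would rather not scroll one sample at a time, a cross-correlation between the two tracks gives a good starting estimate of the offset. A minimal sketch with NumPy (illustrative; di and mic stand in for the two recorded tracks loaded as float arrays at the same sample rate):

import numpy as np

def estimate_offset(di, mic):
    # Returns how many samples 'mic' lags behind 'di' (negative means it leads).
    corr = np.correlate(mic, di, mode="full")
    return int(np.argmax(np.abs(corr)) - (len(di) - 1))   # abs() copes with a flipped polarity

# Usage, once the two tracks are loaded as arrays:
# lag = estimate_offset(di, mic)
# print(lag)   # delay the earlier track by this many samples, then fine-tune by ear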

Processing Delay
Another common cause of phase cancellation is parallel processing with delay-inducing effects or external gear. If the delay is long enough, you will hear it as a discrete echo; if it’s short, you will get the comb filtering problem.

The way to get around this is to delay all the other tracks by the same amount so they all reach the master bus at the same time. Go here for more on handling DAW delay compensation.
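
The bookkeeping itself is simple: whatever latency the parallel chain adds, everything else needs the same delay so the paths line up at the master bus. A tiny illustrative example (the 64-sample figure is made up for the sake of the arithmetic):

sample_rate = 48000
parallel_latency_samples = 64          # hypothetical latency reported by a plug-in chain

latency_ms = 1000.0 * parallel_latency_samples / sample_rate
first_notch_hz = sample_rate / (2.0 * parallel_latency_samples)
print(round(latency_ms, 2), "ms")      # ~1.33 ms: too short to hear as an echo
print(round(first_notch_hz), "Hz")     # ~375 Hz: where the first comb-filter notch lands

compensation_samples = parallel_latency_samples   # delay applied to every other path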

Multiple Mics On A Single Source
When using two mics on a speaker cabinet, you need to be aware of the phase relationship. You can never get the two mics perfectly aligned, but you can find a placement where they work together really well.

Start by positioning the first mic any way you like. Put on headphones and start moving the second mic around; you’ll hear all kinds of phase cancellation, but there will be at least one placement that sounds really good.

It helps to invert the polarity of the second mic while listening in headphones: find the placement where you get the most cancellation, then flip the polarity back for a nice big sound.

Phase Issues With A Single Mic?
Believe it or not, you can also get phase issues when using just a single mic. Reflected sound from nearby surfaces like the floor or walls can get into your mic and cause partial phase cancellation.

There is only one way to deal with this, and that is at the source. Put down carpet or sound-damping materials, lift the amp off the floor, and do whatever else you have to do to get rid of the problem reflections.

This is one of the only things you can’t fix after it’s recorded. Don’t rely on time adjustment when using multiple microphones, especially for things like drum mics; get it right with your mic placement.

Wow, this article got really long; congrats if you followed along to the end. It was exhausting preparing this article, and I feel like I’ve only covered about half of what I should. I didn’t talk about drum miking, stereo, M/S, or tricks using phase. I hope it’s a helpful guide.

Jon Tidey is a Producer/Engineer who runs his own studio, EPIC Sounds, and enjoys writing about audio on his Audio Geek Zine blog—check it out here.

 

{extended}
Posted by Keith Clark on 11/05 at 03:58 PM
RecordingFeatureBlogStudy HallConsolesDigital Audio WorkstationsMeasurementMicrophoneProcessorSignalSoftwareStudioPermalink

Church Sound: That Extra 10 Percent Really Matters

This article is provided by Gary Zandstra.com.

 
I recently had two experiences, unrelated on the surface, that really got me thinking.

The first happened at a church that was talking with me about upgrading their sound system. If you’ve ever been through the process of updating a system, be it sound, lighting or video, you know it’s a chore—or actually, a set of chores.

There’s the pursuit of determining what’s needed, soliciting proposals, selecting a proposal, getting the church/committee to sign off on it, overseeing the install of the new components, and then figuring out how to operate them. I could talk at length about any one of the steps, but based on my recent experience, let’s start with a question (actually two): Why upgrade, and what are the expectations?

My meeting was with the head sound tech (volunteer) and the worship leader. We were primarily focused on switching to a new digital console and main loudspeakers. The existing stuff is almost 20 years old, still works fine and sounds pretty good (20 years ago it would have been considered a near-premium system), but it is showing increasing signs of age.

As we talked the sound tech made a statement that I hear way too often. Paraphrasing it, he said, “We’re not looking for something excellent, or top of the line, but more middle of the road.”

Every time I encounter statements along these lines, I want to reply, “Sorry, I’m not the guy for you. Please see one of my competitors because they have the whole ‘doesn’t suck too bad’ thing nailed down.”

Of course, what I really say is “Well, let’s see what we can come up with” and then I start questioning them about their goals and needs, working to steer them to the best solution for the budget they have available. And if they don’t have a budget, I gently push them to establish a reasonable one.

My question: Why do so many churches talk about middle of the road? My own experiences, both as a church member and as an A/V pro, have shown me that most/all churches striving for excellence are growing, while the ones doing the “mediocrity thing” are stagnant or shrinking. 

The specific church I’m discussing here did its upgrade 20 years ago in an excellent fashion. They invested in the best they could afford at the time. (I remember it because I was involved with the project.)

The minister of music (as we referred to them in those days) solicited a couple of proposals. It was a growing church, the place was pretty full, and he laid out the system needs while stressing that he wanted top quality. “I want these speakers to be hanging here 20 years from now,” he said, prophetically.

My proposal sought to meet his challenge. My competitor tried to go the middle of the road route. Of course, the minister of music did not want to settle for that.

A couple of years after we did the install, he said to me, “I never thought we’d get to use your company because you’re known as the provider of ‘Cadillac’ systems, but it turned out that you were less than 15 percent more than your competitor—and we knew with you that we would get something that would serve us well and stand the test of time.”

With that context and memory in mind, I move to my second recent experience, where I was working front of house at a seminar at a mid-sized church. It was a very simple event, a headset mic and a handheld mic. Doing EQ on a mid-level console (only one band of sweepable EQ), I listened closely to how the system sounded. It was “just OK,” and there really wasn’t anything else I could do to make it better.

During the event, a church member who serves on the sound team stopped in to pick something up, and in passing he said, “Keep your hand on the fader—every once in a while the system just doubles in volume for no real reason, and if you don’t catch it the feedback is painful.” Nice! 

So now, while paying much more attention to the board and keeping my finger on the fader, I also began mentally adding up the cost of the system. My conclusion was that for maybe 10 percent or so more investment, the church could have purchased far better equipment. Sound quality would be higher, and more than likely, they wouldn’t be having issues just five years after the installation.

It really does seem that with most things in life, it’s that extra 10 percent that takes things from good to great. Something we all need to keep in mind when we’re thinking about system upgrades, because it can very much pay off in the long run.

Gary Zandstra is a professional AV systems integrator with Parkway Electric and has been involved with sound at his church for more than 30 years. Read more from Gary at garyzandstra.com.

 

{extended}
Posted by Keith Clark on 11/05 at 02:47 PM
Church SoundFeatureBlogStudy HallBusinessConsolesEducationEngineerInstallationLoudspeakerMixerSound ReinforcementSystemTechnicianPermalink

Real World Gear: Small Format Line Array Designs & Capabilities

A line array effectively functions as a multi-element column loudspeaker, with a long vertical and narrow horizontal outline, for narrower vertical pattern control and wider horizontal dispersion. They’re designed to be flown and taken down quickly, often in “blocks” of individual modules, and to be flexibly adjustable to different curvatures.

This flexibility can be particularly useful in venues where arrays need to be adjusted regularly to accommodate different types of acts, such as a performing arts center that has a symphony one day, a small choral ensemble the next, followed by a two-person dramatic play and a jazz or rock group on the weekend. Properly designed and deployed, they deliver even coverage from front to back, largely because the pattern control helps lessen the drop-off of level with distance.

Small format line arrays, which we’re defining here as models with woofers in the 8-inch range, bring the technology into venues with limited vertical height above the stage or where a more modest throw is required compared with arenas and expansive outdoor coverage areas, while still retaining the ability to control vertical dispersion well into the lower midrange. With their capability to function with wider splay angles, they can cover the front regions of the main floor to the rear seating of a balcony relatively evenly.

Because the overall vertical height of the array in relation to the wavelengths of the lower frequencies is important to the ability to control vertical dispersion, these smaller format loudspeakers typically don’t control coverage to as low a frequency as their larger brethren. With smaller and often fewer drivers per cabinet, their maximum SPL is typically several dB less.
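
A commonly cited rule of thumb (an assumption for illustration, not a manufacturer specification) is that an array maintains useful vertical pattern control down to roughly the frequency whose wavelength matches the array’s overall hanging height:

speed_of_sound = 343.0   # meters per second

def lowest_controlled_frequency(array_height_m):
    # Rough rule-of-thumb estimate: control holds down to the frequency
    # whose wavelength equals the overall height of the array.
    return speed_of_sound / array_height_m

print(round(lowest_controlled_frequency(3.0)))   # ~114 Hz for a 3-meter array
print(round(lowest_controlled_frequency(1.5)))   # ~229 Hz for a 1.5-meter array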

Larger systems are often 3-way or even 4-way, with multiple drivers for each frequency band, while smaller arrays are more likely to be 2-way or quasi 3-way designs (with a pair of LF drivers coupling at the lowest frequencies, and one of them then rolled off while the other covers the mids).

Amplification and signal processing are handled in different ways. Many are self powered, requiring only a line-level signal delivered to each cabinet; because the electronics are flown with the array and out of reach during the performance, particular attention is paid to their reliability and the ability to control and adjust levels and other parameters remotely.

Others have separate, dedicated amplifiers and processors that are matched to the requirements of the specific line array cabinets, and still others work with a wider variety of amplification and processing under manufacturer-specified settings.

It’s common practice for sections of the line array to be run at slightly different output levels depending on whether they’re covering closer or farther seating areas, and sometimes equalization is adjusted as well, with an additional HF boost for the long-throw cabinets.

Supporting this ability, some systems can be networked so that the performance of individual transducers and amplifiers can be monitored, with various parameters that can be adjusted during setup and the show. Many manufacturers complement their systems with predictive software that will calculate the expected performance of particular line arrays at differing splay angles and output levels, across a variety of frequency ranges and array lengths.

The models we’re covering in our Real World Gear Gallery Tour are not the smallest format available. There’s been a growth in systems with woofers in the 6-inch range over the past several years. (We classify these as “compact” format.) Yet systems in the 8-inch cone range remain a great choice for a wide range of applications. Some may use only a single LF cone per cabinet, while others use two or more.

The product specifications are compiled from manufacturer data sheets, bringing this information into one handy location. Enjoy our Real World Gear Gallery Tour of recent small-format line arrays.

Gary Parks is a pro audio writer who has worked in the industry for more than 25 years, including serving as marketing manager and wireless product manager for Clear-Com, handling RF planning software sales with EDX Wireless, and managing loudspeaker and wireless product management at Electro-Voice.

{extended}
Posted by Keith Clark on 11/05 at 09:03 AM
Live SoundFeatureBlogProductSlideshowAmplifierLine ArrayLoudspeakerProcessorSoftwareSound ReinforcementPermalink

Tuesday, November 04, 2014

One Size Doesn’t Fit All: A Relevant Blast From The Past…

The cartoon at left (and larger below), which first appeared in Live Sound International at least 15 years ago, is always good for a laugh, but it also brings up a good point.

One of the difficulties we regularly face as live audio practitioners is working with a variety of parties in determining what’s best for a gig in terms of system size, scale, and complexity.

My approach is to first make sure there’s a solid foundation—basics such as enough console channels and processing, suitable loudspeaker size and quantity, appropriate stage monitoring, and so on. In other words, I’m not even going to talk about miking up an entire choir with wireless lavs until the house system is at least adequate to deliver coverage to the entire space.

While most sound providers offer system packages of various sizes (usually with the biggest called the “A” rig, the next size down the “B” rig, etc.), not every package is the exact correct formulation for every gig. Variables usually come into play, and we need to be flexible in this regard.

We also must work to educate our various customer bases. Last month my company was asked to handle a corporate event in the same ballroom we’d worked successfully twice before. Knowing the room, we could have easily just supplied the same rig and had a “good enough” show. And in fact, the client, who was the same client as with the previous gigs, wanted us to bring the same system.

But in looking at the floor plan for this particular event, I realized that we should do things a bit differently—in this case, adding some delay stacks and also a few front fills. I talked at length with the client about coverage versus volume, and how the additions to the system would result in better overall coverage for the specific configuration of this event. After the show, he noted how much he appreciated our attention to detail, as well as the extra work involved in getting it exactly right.

So: foundation, flexibility, attention to detail, communication, and education. After that, by all means discuss the extras.

image

 

And click here for a pdf version.

Senior contributing editor Craig Leerman is the owner of Tech Works, a production company based in Las Vegas.

 

{extended}
Posted by Keith Clark on 11/04 at 02:54 PM
Live SoundFeatureBlogStudy HallBusinessConcertEngineerLoudspeakerSound ReinforcementSystemTechnicianPermalink

First Look: The Roland Pro A/V M-5000 Console

An in-depth tour of a new larger format digital mixing console backed by an equally new platform...

The Roland Pro A/V division has introduced the M-5000, a new flagship console built on the company’s new O.H.R.C.A. platform.

O.H.R.C.A. stands for Open High Resolution Configurable Architecture, and its hallmarks include 96 kHz operation, compatibility with existing REAC snakes, I/O and mixers as well as Dante, MADI, and Waves SoundGrid, and a configurable architecture of 128 total audio “paths.”

The 28-fader surface employs dual rows of 8 encoders below a 12-inch touch screen, all with extensive color-coding.

This console will appeal to long-time Roland Pro A/V users looking for an upgrade and to new users looking for more than previously offered. Roland was one of the first audio manufacturers to offer an affordable digital snake, with its live sound products operating on the proprietary REAC (Roland Ethernet Audio Communication) protocol that provides 40 x 40 channels of bi-directional digital audio transmission on a single Cat-5e/6 cable with 96 kHz operation and low latency.

The company has continued to expand its audio product line to include three digital consoles (M-480, M-300 and M-200i), as well as the M-48 personal mixing system, the R-1000 48-channel player/recorder, and a wide selection of REAC digital snake I/O devices, including the S-MADI REAC/MADI bridge.

Configurable
The M-5000’s maximum of 128 audio paths can be allocated among inputs, subgroups, auxiliaries, mains or matrices in mono or stereo, in addition to LCR—with or without center panning—or 5.1 surround mixing with 5.1 monitoring. The ability to select the number of mix-minus buses is available for broadcast applications.

All mix audio paths are full featured with high- and low-pass filters, 4-band fully parametric EQ and digital delay. Dual multi-functional dynamics processors can be placed either before or after the parametric EQ. Insert points before and/or after the EQ/dynamics processing can be used for outboard electronics, Waves SoundGrid plug-ins, GEQ/PEQ, or digital effects.

Eight stereo multi-effect processors can be flexibly patched or inserted, including digital reverbs, delays, multi-band compressors, and dynamic EQ, with many effects modeled on popular Roland processors like the SRV-2000, SDE-3000, SDD-320, RE-201 and CE-1. Separate from the effects, there are also 32 insertable equalizers that can be 31-band graphic or 8-band parametric equalizers. A 31-band RTA display can be opened while adjusting each insertable equalizer.

In addition to its main input, each channel has an alternate input that can be employed as a backup input, as well as a third track input for virtual sound check playback. There are 8 mute groups and 24 DCA groups. A channel link function allows up to 12 groups of channels to have some or all of their parameters linked together.

The 28 channel faders are in four banks: 3 banks of 8 faders, plus a bank of 4 fixed faders on the right that remain static and can be used for “money” channels such as a master, a lead vocal, an effect return, and a DCA. The 8-fader banks have a horizontally scrollable 5-layer design for inputs, outputs and DCAs, including 3 freely assignable user layers.

In addition, the 8-fader banks also have an “isolate” function that enables scrolling or layer switching independently or in tandem with other fader banks. An “anchor” function allows setting often-used fader scrolling positions for quick recall.

Above the 4 assignable faders, on the right, is a user-assignable section of 4 encoders and 8 buttons in 3 banks allowing users to assign key functions for quick access. Both the user-assignment area and the fader channels use bright, full-color organic EL displays.

A bright, vivid full-color 12-inch touch screen is positioned at the center of the control surface, with 2 rows of 8 encoders and buttons immediately below. Colored rings and illuminated buttons around the encoders are matched with on-screen parameters via color-coding. There’s also an extra freely assignable “touch-and-turn” encoder and button beside these.

Monitors
The monitor section of the M-5000 offers 2 solo buses so that one can be used for stage monitors and the other can be used for in-ear monitors, with either or both selectable per channel. Monitor 1 can be auditioned in LCR or 5.1 with automatic down-mix to stereo.

The headphones bus has a dedicated delay for alignment in front of house applications. Soloing an input with “solo in place” mutes the other channels, activated by holding the SIP button for 2 seconds.

Three talkback systems make communication possible between three locations, such as front of house, broadcast and the monitor position, as well as to band or tech mixes. During talkback return, the desired TALK switch can be made to flash to indicate the call source. In addition, the “monitor dimmer” volume reduction during talkback can be adjusted individually for Monitor 1 and Monitor 2.

For use with Roland M-48 personal mixers, the M-5000 has an engineering monitor function that mirrors the musician’s M-48, enabling the engineer to check the mix and hear exactly what the musician is hearing from the console.

I/O
In addition to the built-in dual 40-channel REAC A and B Cat-5e/6 connections, which can provide a total of 80 input channels to the console, 2 expansion card slots each allow up to 80 x 80 more channels via REAC, Dante, MADI or Waves SoundGrid expansion cards.

Up to 300 inputs and 296 outputs (460 inputs and 456 outputs at 48 kHz) are managed in separate patch bays and can be used independently of the mixer. Any input can be patched to any or multiple outputs without having to be patched through a mixing channel.

In addition to dual AES digital I/O and 16 local analog input and output XLRs on the rear panel, a USB connection provides up to 16 channels of recording and playback with any ASIO DAW. One or more Roland R-1000 48-track recorder/players can be connected via REAC and used with input channels’ third track input for virtual sound check, rehearsals, playback, and recording.

Control
Roland’s Remote Control Software (RCS) provides offline editing and runs on either a Mac or Windows PC, also allowing the M-5000 to be operated from a computer nearby over USB or remotely by using the desk’s REMOTE connector for operation over a LAN.

The dedicated M-5000 Remote iPad app provides remote control using any of three methods: a wired hookup using the console’s dock connector; through a router connected to the LAN port; or via a direct ad-hoc connection using a wireless USB LAN adapter. In addition, 2-channel recording or playback with an iPad via the dock connector is supported.

The Roland M-5000 is about 3 feet wide, 30 inches deep, and just over a foot in height, weighs about 80 pounds, and will be available in late February 2015.

Go here for further specifics on the new M-5000.

Mark Frink has engineered live sound for 30 years, worked in pro audio journalism for half that, and is available for consultation.

{extended}
Posted by Keith Clark on 11/04 at 11:58 AM
Live SoundFeatureBlogProductConsolesDigitalInterconnectMixerSoftwareSound ReinforcementPermalink

Friday, October 31, 2014

Behind The Glass With Hugh Padgham: Does It Sound Any Good?

Ah, the eighties. Every record sounded like it was made in a stadium, every singer working their uppermost range until it seemed as if their vocal cords were about to leap out of their throat, every hit wrapped in a glossy package of shimmering guitar leads and silky bass.

And, of course, every snare drum was passing through a gated reverb.

Hugh Padgham is largely responsible for many of those sounds—particularly the latter— but he’s also responsible for crafting many of the greatest records of the era, The Police’s “Every Breath You Take,” Genesis’ “Tonight’s the Night,” and Phil Collins’ “In the Air Tonight” among them.

His ultra-clean signature sound raised the bar for every engineer and producer of the era and had a major impact on the shift from the dead, close-miked records of the seventies to the open, ambient sounds of the nineties and beyond.

Padgham’s unique abilities and versatility are probably best reflected in the fact that he’s won four Grammys in four different categories: Album of the Year (Collins’ 1985 No Jacket Required), Record of the Year (Collins’ 1990 “Another Day in Paradise”), Best Engineered Album of the Year (Sting’s 1993 Ten Summoner’s Tales), and the 1985 Producer of the Year award.

Padgham’s career started at London’s Advision Studios, where he served as tea-boy (the British equivalent of a runner), but it wasn’t until he moved to Lansdowne Studios in the mid-1970s that he received formal training, quickly rising through the ranks from assistant engineer to chief engineer.

In 1978, he took a job at Richard Branson’s Townhouse studio (which sadly closed its doors only recently), which gave him an opportunity to engineer for various Virgin artists, including XTC, Peter Gabriel, and Phil Collins.

It was also at the Townhouse that Padgham first met a young bass player by the name of Gordon Sumner. . . soon to be known to the world as Sting. A couple of years later, just as Sting’s band The Police were poised to reach the heights of international fame, Padgham was brought onboard to co-produce their massive hit album Ghost in the Machine.

We met up at his West London studio, Sofa Sound, one bright summer afternoon, where the affable Mr. Padgham, looking more like a ruffled professor than a superstar pop producer, shared his unique perspective on the evolution of record-making through the past two decades.

Howard Massey: Are you fully sold on digital recording these days, or do you still use tape?

Hugh Padgham: I’m not anti-digital per se, because you’ve always got to stay as current with things as you can. But people who grew up with analog gear can hear the difference, and there’s no doubt in my mind that analog sounds better: it’s kinder to your ears, and not as harsh.

Hugh Padgham

Having said that, there’s also no question that digital now sounds better than ever before. These days I’m running all my sessions at 96k, 24-bit, and that’s a big improvement over 44.1 or 48. Of course, the original RADAR, which was 44.1, 16-bit, sounded a lot better than other machines, so I think a lot of it is down to the converters.

One thing I really miss about analog recording is tape compression, though. By using it carefully, you can actually get some 10 dB of extra level before a well-recorded transient signal like a snare drum clips.

That’s one reason that digital sounds so harsh—because you’re not getting any of that nice rounding off of the transients. So these days, I tend to do my initial tracking onto 24-track tape and then copy that into Pro Tools. That way, at the very least, my drums, bass, and guitars hit tape.

If I have the time and budget, I will continue doing things onto analog, either by premixing and bouncing tracks, or by running a second machine in sync.

However, I still never go over 48 tracks; I set that as my limit. It just gets really difficult to manage more tracks than that, especially if you’re mixing on an analog console.

Don’t forget, we used to quite successfully make records on a single 24-track machine.

How do you know when a recording is complete, when it’s time to stop adding overdubs and start mixing it?

It’s really just instinct. For me it always comes down to one simple question: “Does it sound any good?” Sometimes you run into situations when you suddenly think, “I’m not so sure this sounds good anymore.”

That’s when you realize that the last thing you added didn’t need to be there. “Less is more” sounds like a cliché, but it often is true, and it often takes a lot of effort to have less rather than more.

I actually spend more time pruning stuff down than adding things. Doing so can often require a musician to learn or evolve an altogether different part to be played, so that what was two tracks is now one track.

Every song is different, of course, but I’m always looking for ways to simplify and reduce.

I have one criterion that is probably my bottom line: is it embarrassing or not? If somebody is singing and it’s really out of tune, that to me would sound really embarrassing if you put it out on a record.

A guitar part could be equally embarrassing—the kind of thing you’d play when you were in your first band in school, when you were 13 or 14 and playing a lot of crap.

Something that goes back to the days when the guitar player was focusing so hard on getting the chord shape or string bend right that he couldn’t put any feeling into it.

Those moments are tough for me, because I find myself thinking, “Oh my god, what am I going to tell them?”

What do you tell them?

Well, I hope they’ll come to that conclusion themselves when they hear it played back. Still, I always subscribe to the idea that it’s not my record, it’s the artist’s record; I’m making it for them.

So all I can do is ask the artist, “Are you really happy with that? Or are you going to be embarrassed when you hear that in five years’ time?”

What happens if the artist is happy with a part he’s played but you feel strongly that it’s embarrassing?

I occasionally had that problem with Sting, who sometimes couldn’t be bothered, or thought what he’d done was good enough. Usually I’d just fix it when he wasn’t looking.

Of course, now in Pro Tools you can do things that were unimaginable years ago. I made a record not long ago with a singer who, frankly, was not on the ball—he’d often come in hung over or whatever.

We had the usual problem of time and budget, plus he was physically incapable of improving things sometimes. But somehow, by doing a lot of fiddling around and editing, I was able to make him sound really good.

The problem was, he thought that was all him! He thought he’d done a great job, when in reality what he’d done was quite embarrassing.

But if there’s a conflict with the artist, it’s like a conflict in any job or any aspect of life: you talk it through and either you come to a compromise or one person wins and gets their way.

People usually get over it, though. If I have a really strong feeling about something that the artist disagrees with, I’ll say, “Look, it’s your record, not mine; if you really want it to be like that, that’s fine… as long as it’s not embarrassing.” [laughs]

How do you feel the role of the producer has changed since you started making records?

The main role used to be quality control, but one of the worrying things about making records nowadays is that the concept of things sounding good rarely comes into it.

It used to be that you would run down to the record store to buy a particular new album because you knew it was going to be a work of art sonically; you’d race home and put it on the best stereo you could find and it was an amazing experience listening to it.

Sadly, nowadays, kids grow up listening to everything on earbuds. My daughter, who’s a teenager, once plugged her iPod into some little computer speakers I have and she said, “Dad, that sounds amazing!”

They were just tiny satellite speakers with a small subwoofer, but she was amazed . . . and the reason, I think, is that she had never heard bass before!

It’s almost a complete reverse evolution, really. If you look at video quality, things have evolved forward, from VHS to DVD to high-def.

But in the world of audio, it seems that things have gotten worse and worse: we’ve gone from vinyl to CD—and the early CDs sounded way worse than vinyl—and now we’ve gone to MP3s, which sound even worse than the earliest CDs.

Personally, I think the era of the disc is well and truly gone. Hopefully our file sizes will get bigger—meaning better quality audio—and so too will storage capacity.

I really hope that, as memory becomes cheaper and more prevalent, we’ll be able to restore the quality of audio.

Soon there will be massive flash drives with high bus speeds, and hopefully then we’ll be able to at least store good quality uncompressed audio. People won’t notice files that are ten times the size of MP3s if you actually have ten times the space to store them in.

Or perhaps there will be new forms of compression invented that will preserve full-quality audio. Or maybe we’ll all just be wired into a central server. The problem with that is, what happens when you lose service?

There will be caching schemes, I’m sure, and hopefully they will improve all the time as well. What lies ahead is exciting, and you can’t stay rooted in the past.

Another contributing factor to any perceived decline in quality is that budgets are shrinking, so people aren’t given adequate amounts of time to hone their sounds in a professional environment.

That’s true, and, as a producer, I find that very frustrating. These days, the budgets are so small that the only way you can make an album is to do it as quickly as you possibly can; otherwise somebody ends up not being paid.

As a result, there’s very little room for experimentation, so it’s very bad from an artistic point of view. And they’re cutting the budgets all the time— every day, there seems to be less and less available and more and more corners being cut.

Yet somehow you don’t ever hear about record company executives taking a cut in salary.

Still, I honestly don’t think it’s been economics that have been the sole downfall of record labels.

The problem is that, generally speaking, they have gotten themselves into an irreparable situation, and so they’ve become very adept at signing music that most people don’t want to listen to. That’s because most of today’s A&R people don’t come from a proper musical background.

They’re much more into trends rather than something being good. If something is on the front page of the newspapers, they want to sign it, and then all the other labels want to sign the same thing.

In fact, very often, labels sign artists just to stop other labels from getting them, not because they really believe in them. My daughter likes a lot of current music because she’s young, but she often asks me, “Why is it only old stuff that gets covered, or sampled?”

What do you think is the solution?

It’s a question of rejigging the model. The major labels still have huge overheads—huge offices in New York and LA, and big staffs to run.

But if you run a tighter ship and share the ownership of the product with the artist, if you don’t con them into thinking you’re going to be selling millions of records when you know you’re not, and if you keep the costs down, then the artist can make the same amount of money selling far fewer records.

That’s a model that a lot of people are starting to look into now.

Even in the old days, when a lot of records were being sold by people like Sting or Phil Collins, it was only because they were selling eight or nine million records that nobody was complaining.

The people associated with them were making good money—nowhere near the huge amounts of money the record labels were making, but good money—so you put up with it, just as you put up with the fact that you weren’t going to get paid anything from certain foreign territories because of bootlegging. You were just educated by the record labels into assuming this was normal.

But eventually, hopefully, those kinds of things will be policed properly, so that everyone gets paid what they’re owed.

In the old days, artists had to have a record deal because they needed that advance to afford to pay for expensive studio time and they needed the label to do marketing and promotion.

Today, people have the ability to do those things for themselves, and it has made a huge difference. Ironically, in some ways it’s made it harder for an artist to gain recognition, because how do you get your stuff heard?

Suggested Listening:
The Police: Ghost in the Machine, A&M, 1981; Synchronicity, A&M, 1983
Genesis: Abacab, Atlantic, 1981; Genesis, Atlantic, 1983; Invisible Touch, Atlantic, 1986
Phil Collins: Face Value, Virgin, 1981; Hello, I Must Be Going!, Atlantic, 1982; No Jacket Required, Atlantic, 1985; But Seriously, Atlantic, 1989
Sting: Nothing Like the Sun, A&M, 1987; Ten Summoner’s Tales, A&M, 1993; Mercury Falling, A&M, 1996
Peter Gabriel: Peter Gabriel, Geffen, 1980
XTC: Black Sea, Geffen, 1980; English Settlement, Geffen, 1982

To acquire “Behind The Glass: Volume II” from Backbeat Books, click over to www.musicdispatch.com.

{extended}
Posted by Keith Clark on 10/31 at 05:15 AM
RecordingFeatureBlogStudy HallBusinessEducationEngineerStudioTechnicianPermalink

Thursday, October 30, 2014

Church Sound: An In-Depth Loudspeaker Buying Guide

This article is provided by ChurchTechArts.

 
Buying loudspeakers is perhaps the most daunting task a church tech will face.

Today we have powered and unpowered loudspeakers; line arrays and point source boxes; flown and ground stacked; cheap and eye-wateringly expensive. In each of those categories, we have dozens of manufacturers with hundreds of models to choose from.

While it’s not possible in the space of this article to tell you what to buy, we will attempt to guide you through the process of selecting the proper loudspeakers for your space.

The Perfect Loudspeaker
First, there is no perfect loudspeaker. All designs make compromises in deference to the laws of physics. The right loudspeaker for one room might well be entirely the wrong one for another room.

Don’t get sucked into the trap of thinking that the loudspeakers in the church that put on that big conference are the right ones for you. They may be, but they also may not be.

Second, once you get beyond putting up one or two loudspeakers in a small room, I believe there needs to be some design involved. A competent integrator should be able to model the room and show you some options based on prediction software and help narrow down your choices.

Far too many churches make the mistake of just hanging some boxes in the room, pointing them wherever and hoping it sounds good. From experience, I can tell you that most of the time it doesn’t. Plan on spending at least some of your loudspeaker budget on an actual design. You can thank me later.

As I said, there is no “best” loudspeaker. What you want is what’s right for your environment. To get to that right loudspeaker, we have to ask some questions, and determine what we are trying to accomplish. Once we know the intended result, we can begin to make a selection that will effectively deliver the results.

It’s much like buying a vehicle; you wouldn’t buy a two-seater convertible if you intend to haul around a lot of mulch. Then again, a pickup would probably not be the best choice to drive a large family to baseball practice. With that in mind, let’s ask some questions.

What Is The Source?
Believe it or not, the requirements for a loudspeaker system that will deliver primarily the spoken word and one that will engage the audience with concert-level sound are quite different. Different churches have vastly different programming styles, and the PA needs change as we consider those styles.

In a very traditional, liturgical setting, the loudspeaker system really just needs to deliver the frequency spectrum of the human voice evenly throughout the room and with great clarity. The volume levels don’t need to be that high (relatively speaking), so we don’t need a bunch of drivers in the air.

Don’t be fooled, however; getting a system like this to sound good requires some careful design. It’s just not likely to be as expensive as a full-on modern service system.

As amplified music becomes more and more of a priority, the system needs to adjust. Some churches want concert-level audio, and the only way to get that is with a big PA. Even in smaller rooms, you’ll need to move a lot of air, and that requires a good number of full-range loudspeakers, as well as low-frequency drivers (subwoofers) to deliver the goods. Most churches fall somewhere in between those extremes and will need a system designed accordingly.

What’s The Vibe?
This goes along with the source; are we looking for quiet and contemplative or loud and energetic? Do we simply want to reinforce some acoustic sounds so they can be heard in the back of the room, or do we want to put the sound right in your face? Even in the extremes, we have options.

For example, if we’re going for more of a concert feel, what genre do we wish to emulate? Some systems will deliver a very edgy, rock ’n’ roll sound, while others are more hi-fi. Knowing what vibe you want to create will begin to dictate the system you ultimately install.

What Is The Environment?
Churches run the gamut from acoustically live, highly reflective cathedral-type rooms to dampened and treated theatrical venues. Like everything else, the environment will affect the choice of loudspeakers.

Highly reverberant rooms will require speakers that have excellent pattern control to keep sound from bouncing off the walls, ceilings and floors. Very dead rooms will require more loudspeakers to energize the space and overcome all the absorption.

There is also the issue of aesthetics. Many congregants would object to a modern, black flown line array in a historic cathedral. In such a room, a smaller, less visually intrusive system is required.

Even in modern churches, sight lines, trim heights and other architectural features will dictate one loudspeaker type or another. Make sure your integrator is asking these questions.

Can We Hang ‘Em High?
Some rooms make it easy to hang—or fly—loudspeakers. In others, it’s impossible. In still others, it’s impractical or not necessary.

Before you get your heart set on 600 pounds of beautiful flown line array, make sure the roof structure can actually support it. And yes, it’s possible your roof cannot support that much weight.

In more traditional venues, wall or column mounted loudspeakers are often the best choice as they can blend into the architecture rather easily (especially if they can be custom painted). In some smaller, multi-purpose rooms, portable loudspeakers on sticks (stands) might be the best option.

Can We Afford Them?
Loudspeaker systems can range from a few hundred to a few thousand dollars for small rooms, to tens of thousands for medium rooms, and upwards of a hundred thousand to almost a million dollars for very large rooms.

In those vast categories are all kinds of variations. Some well-known manufacturers are very good, and rather expensive. Other lesser-known companies can be almost as good and considerably more affordable. Not everyone needs or can afford a Mercedes; quite often, we can get by quite nicely with an Infiniti or even a Nissan.

Just be sure to buy enough PA for your room. Too many churches buy on budget and end up unhappy with the results. Build in some headroom; make sure the system can go louder than you need it to so you’re not pushing it to the edge every weekend.

Those are some general questions and parameters you should be considering before beginning to hone in on your speaker selection. Now let’s consider some of the categories and subcategories of loudspeaker systems.

Powered Or Unpowered?
A decade ago, an audio amplifier was big, heavy and required a lot of current to work well. Today, even powerful amplifiers can fit into small spaces and don’t weigh nearly as much.

As a result, more manufacturers are opting to include them in their loudspeakers. There are some significant benefits to this approach. First, the amplifiers can be exactly matched to the loudspeakers.

Also, since the amp is in the box, cable runs are incredibly short, which means nearly 100 percent of the amp’s power is delivered to the loudspeaker, not turned into heat in the cable. Crossover points between the loudspeaker drivers can be optimally set, and often DSP is included in the box, which makes for a far more predictable system.
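
For a sense of scale, here is a rough, illustrative estimate of the power lost in a long run to a passive box, assuming 14 AWG copper at about 2.5 ohms per 1,000 feet and a nominal 8-ohm loudspeaker (assumed figures, not measurements):

awg14_ohms_per_ft = 2.5 / 1000.0     # assumed resistance of 14 AWG copper
cable_length_ft = 100.0              # a long run from a remote amp rack
load_ohms = 8.0                      # nominal loudspeaker impedance

round_trip_ohms = 2 * cable_length_ft * awg14_ohms_per_ft
loss_fraction = round_trip_ohms / (round_trip_ohms + load_ohms)
print(round(100 * loss_fraction, 1), "%")   # ~5.9 percent of the power heats the cable

With the amp inside the box, that cable run shrinks to a few inches and the loss becomes negligible.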

The downside is that if an amp goes on a loudspeaker that is 50 feet in the air, someone has to go up and change it. You also have to supply power to your powered loudspeakers, which means double the number of cables running to each box.

And the inclusion of amps also means that they will be slightly heavier than their unpowered brethren. This is not typically a problem, but it has to be considered.

Which is better? Like all things in audio, it depends. Often, powered loudspeakers are an excellent choice because many of the tuning decisions have been optimized at the factory, which means it should take less time getting them sounding great in the field.

On the other hand, if your installer wants to do something rather custom to accommodate a specific situation, sometimes the added control of separate components is better. The availability of power and space for amps also factor into the decision.

Thankfully, there are excellent choices in both powered and unpowered varieties, and it’s not uncommon to see the same loudspeaker available in both powered and unpowered versions.

Line Array Or Point Source?
Line arrays—multiple identical boxes hung close together in a vertical line—are all the rage right now.

And to be sure, they solve a lot of problems in certain situations. They typically boast good pattern control, are very efficient, and are easy to rig—characteristics that make them excellent choices for large venues.

Nearly every large tour is using line arrays right now for those (and other) reasons. They are not the right choice for every venue, however.

Smaller rooms (under 500) will often be better served with a more traditional point source box. In small rooms, it’s difficult to hang a long enough array to achieve good pattern control, and they get very expensive very quickly when compared to a point source system.

Don’t fall into the trap of thinking that since line arrays are “new technology” they are inherently better. There has been a lot of development going on in both types, and modern point source loudspeakers can be incredibly effective when designed well.

A relatively new type of system is emerging as a great problem solver for certain rooms; the digitally steerable array. Using a larger number of small drivers and a bunch of digital signal processing (DSP), these systems can be life-savers for problematic rooms.

A digitally steerable array can vary its coverage both vertically and horizontally to keep sound going where the people are—and away from where they are not. Because they typically use a bunch of small drivers, the footprint is small, making them ideal for very traditional rooms where aesthetics are a big deal.

Get A Listen
If it’s at all possible, you want to listen to the loudspeakers before buying. Ideally, you would be able to hear them in your space. This may not always be possible, or it may not be free. You may have to spend some money to rent the loudspeakers, or at least pay for a demo.

If you’re looking at a smaller system, the local rep may be able to visit with some boxes. You may not get a whole system, but you’ll get a good idea of whether the loudspeakers will work for you or not. Having a set of tracks of your band recorded via virtual sound check is a terrific way to audition the loudspeakers.

And if you can’t arrange for the loudspeakers in your room, try to visit a venue that has them. This is less ideal, but will give you a good idea of what they sound like.

Conclusion
Which type of loudspeaker to buy comes down not to selecting the “best,” but rather the best for the room. Thankfully, the science of loudspeaker design has evolved to a point where we can accurately predict performance before hanging boxes.

Being able to try out different models inside the computer is a big help in developing a great-sounding system. The loudspeakers you select will vary depending on the room, the style of service, and the environment you’re trying to create.

There are plenty of options out there, so with proper research and a good design, the end result will be a system that meets the needs of your church.

Mike Sessler now works with Visioneering, where he helps churches improve their AVL systems, and encourages and trains the technical artists that run them. He has been involved in live production for over 25 years and is the author of the blog Church Tech Arts.

Posted by Keith Clark on 10/30 at 06:16 PM

All The World’s A Stage… But Does It Need Sound Reinforcement?

 
I recently attended two “straight” plays, i.e., the kind without musicians. Such events are all about the dialog (and the lighting, of course). One production had no discernible sound reinforcement and the other had totally overt and apparent microphones, loudspeakers and amplification, thereby opening an opportunity for me to compare and contrast.

One play was Shakespeare’s King Lear. Historians still dispute the identity of Shakespeare, but all agree that these plays were written without a sound system in mind. Shakespearian actors know how to deliver. The other play was set in a modern era New York apartment and even had loudspeakers on stage as a set piece. Both plays featured high-profile actors best known for their film and television work.

This set the stage for a sonic showdown: traditional unamplified versus modern amplified. It was a dead heat in the intelligibility category. No words were injured in the making of these shows.

Envelope Please
The sound image award goes, hands-down, to the unamplified show. There was never a moment where the image did not exactly track the sources. An actor on stage left could talk to an actor on stage right and the sound individually tracked their locations. This held up even though our seats were way off to the side.

By contrast, the amplified show had mono sound that imaged all actors from all locations in the same overhead loudspeaker, which was hidden in plain sight.

There were very few sonic distractions in the “natural” sound show: no missed mic pickups, no RF intrusion, no bumped mics. The amplified show was operated extremely well, and yet the chances of getting through any show without some sonic mishap are somewhere around those of pitching a perfect game. There were a few moments that brought the sound system overtly into patrons’ brains, and then things settled back down to normal.

And now in the category of sounding natural: the envelope please. The winner is amplified sound. By 20 dB. Why? Because the unamplified “natural” sound show was an artistic disaster (on many levels), most noticeably for its unnatural sonic quality.

The actors were so focused on delivering their lines with enough LEVEL and AR-TIC-U-LA-TION that the sound quality was extremely disconnected from the dialog content. We were distracted and confused even though we understood the words. The difference in delivery between “I love you” and “I want to kill you” was barely noticeable. It was two hours of speeches but zero minutes of conversation.

On the other hand, the amplified show had tremendous dynamic range that the actors exploited to the fullest. The range of emotions in the play was extreme and the dialog was able to realistically link the way the actors spoke to the emotions of the moment. There was pin-drop whispering as well as full-throated bellowing. The “unnatural” reinforcement from the loudspeakers allowed for natural sound transmission from the actors.

The unamplified production, by contrast, required the actors to provide the complete sound system transmission, leaving them with a very small range of natural expressions (and even postures). This is the cost of traditional purity in the modern age. And it is totally unnecessary.

The Shakespeare in the Park production of King Lear used amplified sound to provide a fantastic experience for the theatergoers (sound design by Mark Menard and Sten Severson of Acme Sound Partners, and skillfully mixed by Craig Freeman).

We experienced the full emotional impact of John Lithgow, Annette Bening and other great actors working their craft. Distributed loudspeakers hang in plain view on wires above the Delacorte Theater (in Central Park, Manhattan).

Sonic image is ever present in the minds of sound engineers, but the image distortion must be very strong for the average patron to take note for more than a passing moment. A full range of emotions delivered from actors, by contrast, is universally applauded.

Here To Help
I won’t divulge the details of the “natural” show but will provide some context. The Broadway-area venue holds more than 1,100 people, and the lack of sound system was certainly an artistic rather than budgetary decision. They could afford it. It’s a “closet drama” with actors never more than a few feet away from each other. I kept expecting a sound cue of the next-door neighbor pounding the wall to tell them to shut up, as I would do if they shared my walls.

Movie actors (in movies) don’t have to yell the quiet parts; modern theatrical actors shouldn’t have to either. And yet these are the words that so many old-school theater personnel most fear: “I’m from the sound department and I’m here to help.” We now have the tools and techniques to support actors without taking over the show.

Modern microphones and loudspeaker systems are extremely linear and capable of providing reinforcement with minimal detection. They are small and make far less noise than the lights and staging. Current DSP technology gives us the ability to track actors on cue and other subtle manipulations that expand the artistic range open to actors. We are ready, willing and able, and we want nothing more than for nobody to know we are even in the room.

Traditionalism dies hard in any field, but in the world of the stage it’s always the sound department that’s forced to wear the prairie dress. Does anybody tell the lighting folks they need to burn lime, or forbid the stagehands the use of electrical winches? Is polyester forbidden in costumes? And yet the sound department is obliged to play by the rules of the 50s (1550s to 1950s), even on a closet drama set in the 1980s. As Tevye sings in Fiddler on the Roof, “Tradition!”

It was easy to detect the presence of sound systems in the olden days. The amplified sound contained a variety of clues that betrayed us. Loudspeakers were not good enough, mic placement options were poor, and we had only primitive capabilities for signal processing. It’s time to update the files because none of these things are true any more.

Natural Solutions
It’s easy to forget that theater is an inherently amplified event. The makeup artists greatly amplify the facial features. Stage props are overly contrasted with exaggerated features. A single floor lamp in an apartment lights up the entire room—evenly! Up close, however, the actors look like clowns, the stage looks like a cartoon, and the lights can be blinding.

But out in the house these amplifications serve the purpose of making a distant visual event appear much closer, and it all feels natural to us. That’s the magic of theater.

The actors’ voices also need to be rescaled to reach the house at a proper level. They must be amplified. The questions are simply how much, and by whom. If all of the gain comes from the actor, the price is unnatural diction, inappropriate tonal/emotional cues and stiff posture.

Any doubt about the veracity of this can be erased by the following exercise: (1) stand up and read the previous sentence in a normal voice, then (2) read it like a movie actor would, and (3) project like a Broadway stage actor. One of these is not like the others, and the fact that you know exactly what to do to imitate the stage actor tells it all.

If part of the required acoustic gain can come from the sound system, the actors can relax into a more natural speech pattern that matches the emotions of the spoken words. As the sound system carries more of the burden, the actors can reduce the need to constantly broadcast their voices. The downside risk is increasing vulnerability to loudspeaker detection. Modern sound systems can bend this much further than ever, but only if we have the cooperation of other departments.

The four key factors where interdisciplinary coordination pays off in favor of supporting the actors’ voices are location, location, location and noise. We need multiple positions so we can get loudspeakers that align well with the varying audience sightlines to the stage. If lighting and stage noise can be minimized, then everyone needs less acoustic gain and our range of stealth operation expands.

Bringing It To Life
Today’s audiences have moved on. The presence of loudspeakers does not offend them. Hint: they put loudspeakers in their ears all day. They expect the show to be as up close and personal as watching TV at home (or like the movie the play is based on), but with the added extras of live actors, 3-D staging, lighting and sound (without the glasses). It’s time now for the rest of the theater world to move on and use all of the tools available to them.

It’s true that a sound reinforcement system is not required for every production. But then again, neither are makeup, costumes, lighting and staging. The only mandatory attendee is the playwright’s story brought to life by the actors. All of the other disciplines are invited to enhance the conveyance of that experience to the audience. The question of “to be, or not to be” reinforced should be weighed in the context of how well we can actually enhance the actor’s transmission rather than romantic notions of tradition.

Writing this to sound engineers is like preaching to the converted. But maybe it will help foster some understanding the next time you hear “Sound system? We don’t need no stinking sound system!”

Bob McCarthy has been designing and tuning sound systems for over 30 years. His book Sound Systems: Design and Optimization is available at Focal Press (www.focalpress.com). He lives in NYC and is the director of system optimization for Meyer Sound.

Posted by Keith Clark on 10/30 at 02:21 PM

Wednesday, October 29, 2014

Far-Field Criteria For Loudspeaker Measurement

 
The most common reference distance for loudspeaker SPL specifications is 1 meter (3.28 feet). The choice is one of convenience – any distance will do.

The 1 m reference simplifies distance attenuation calculations by eliminating the division required in the first step:

Change in level (dB) = 20 log10 (D2 / D1)

With D1 fixed at 1 meter, the ratio reduces to the distance itself, and the level change at any distance follows directly from 20 log10 (D2).

Loudspeakers must be measured at a distance beyond which the shape of the radiation balloon remains unchanged. The changes are caused by path length differences to different points on the surface of the device.

These differences become increasingly negligible with increasing distance from the source, much in the same way as any object optically “shrinks” as the observer moves to a greater distance.

The distance at which the path-length differences become negligible marks the end of the near-field and beginning of the far-field of the device.

Figure 1.

An infinitely small source (a point source) can be measured at any distance and the data extrapolated to greater distances using the inverse-square law without error.

A very small loudspeaker might possibly be measured at 1 meter, but for larger loudspeakers it’s a different story. For large devices, the beginning of the far-field must be determined, marking the minimum distance at which radiation parameters can be measured.

The resultant data is then referenced back to the 1 meter reference distance (Figure 1) using the inverse-square law. This calculated 1 meter response can then be extrapolated to further distances with acceptable error.
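
As a rough illustration of that arithmetic, here is a minimal Python sketch under free-field assumptions (inverse-square law only, no air absorption); the function names and the 96 dB / 9 m example figures are hypothetical:

```python
import math

def spl_at_1m(measured_spl_db, measurement_distance_m):
    """Refer a far-field measurement back to the 1 meter reference by
    adding back the inverse-square-law attenuation (20 log of the distance)."""
    return measured_spl_db + 20.0 * math.log10(measurement_distance_m / 1.0)

def spl_at_distance(ref_spl_1m_db, listener_distance_m):
    """Extrapolate the calculated 1 meter response out to a listener position."""
    return ref_spl_1m_db - 20.0 * math.log10(listener_distance_m / 1.0)

# Example: 96 dB SPL measured in the far-field at 9 m
ref = spl_at_1m(96.0, 9.0)
print(round(ref, 1))                         # about 115.1 dB referred to 1 m
print(round(spl_at_distance(ref, 30.0), 1))  # about 85.5 dB projected to 30 m
```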

Figure 2.

A Rule Of Thumb
A working “rule of thumb” for determining the boundary between the near-field and far-field is to make the minimum measurement distance three times the longest dimension of the loudspeaker.

While this estimate is generally acceptable for field work, it ignores the frequency-dependency of the transition between the near and far fields. More accurate estimates of the far field are found to be:

1. The point of observation where the path length differences to all points on the surface of the loudspeaker perpendicular to the point of observation are the same. Unfortunately this is at an infinite distance and the pressure is zero.

2. The distance at which the loudspeaker’s three-dimensional radiation balloon no longer changes shape with increasing distance from the source, at any frequency.

3. The distance from the source where the radiated level begins to follow the inverse-square law for all radiated frequencies. And, a practical definition useful for determining the required measurement distance:

4. The distance from the source where the path length difference for wave arrivals from points on the device on the surface plane perpendicular to the point of observation are within one-quarter wavelength at the highest frequency of interest (Figure 2).

Consideration of any of these definitions reveals that the far-field is wavelength (frequency)-dependent.
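
To put numbers on criterion 4, here is a minimal Python sketch under a simplifying assumption: the path-length difference is taken between the center and the edge of the radiating face for an on-axis observer, with the speed of sound taken as 343 m/s. The function name and example dimensions are illustrative only.

```python
import math

C = 343.0  # speed of sound in m/s (assumed, roughly 20 degrees C)

def min_far_field_distance(longest_dim_m, max_freq_hz):
    """Smallest on-axis distance at which the center-to-edge path-length
    difference stays within a quarter wavelength (criterion 4 above).
    Solving sqrt(r^2 + (L/2)^2) - r <= wavelength/4 for r gives the
    closed form used below."""
    wavelength = C / max_freq_hz
    return max(0.0, longest_dim_m ** 2 / (2.0 * wavelength) - wavelength / 8.0)

# A 1.2 m enclosure evaluated to 8 kHz, compared with the "3x longest dimension" rule of thumb
print(round(min_far_field_distance(1.2, 8000.0), 1), "m")  # roughly 16.8 m
print(3 * 1.2, "m")                                        # 3.6 m
```

Under the same simplified geometry, a 3 m passive array radiating full-range to 16 kHz would need on the order of 200 m (several hundred feet), which is why such devices are singled out below as among the most difficult to measure.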

As previously stated, the need to measure loudspeakers in their far-field arises when it is necessary to project the data to greater distances using the inverse-square law, which is exactly what acoustic modeling programs do.

If this is not the purpose of the data, then measurements can be carried out in the near-field. The resultant data will be accurate for the position at which it was gathered, but will be inappropriate for extrapolation to greater distances using the inverse-square law.

It’s often thought that a remote measurement position is necessary for low frequencies since their wavelengths are long. Actually, the opposite is true. It’s more difficult to get into the far-field of a device at high frequencies, since the shorter wavelengths make the criterion in Item 4 more difficult to satisfy.

The most challenging loudspeakers to measure are large devices that are radiating high frequencies from a large area. The near-field can extend to hundreds of feet for such devices, making it impractical or even impossible to get accurate balloon data with conventional measurement techniques.

Alternatives for obtaining radiation data for such devices include acoustic modeling and Acoustic Holography – a technique pioneered by Duran Audio. David Gunness of Fulcrum Acoustic has authored several important papers on how such devices can be handled.

So, some factors tend to increase the required measurement distance, and, as with all engineering endeavors, there are also some factors that tend to reduce the required distance. They include:

1. Large loudspeakers with extended HF response do not typically radiate significant HF energy from the entire face of the device. HF by nature is quite directional, making it more likely that the radiated energy is localized to the HF component. As such, only the dimension of the HF device itself may need to be considered in determining the far-field.

2. Beam-steered line arrays (e.g., JBL Intellivox or EAW DSA) do not radiate HF energy from their entire length. The effective array length is made frequency-dependent by bandpass filters on each device. This may allow a closer measurement distance than is apparent at first glance.

Passive line arrays are among the most difficult devices to measure, especially when used in multiples. Each device is full-range, so the path length difference between the middle and end devices can be quite large.

A compromise is to measure the radiation balloon of a single unit and predict the response of multiples using array modeling software.

Equally difficult are large ribbon lines and planar loudspeakers, again due to the large area from which high frequency energy radiates.

It would appear that all that is necessary is to pick a very large measurement distance. While this solves the far-field problem, it creates a few new problems of its own. They include:

1. Air absorption losses increase with distance. While these can be corrected with equalization, the HF boost puts a greater strain on the device under test (DUT).

2. It becomes increasingly difficult to maintain control over climate with increasing distance (drafts, temperature gradients, etc.). These effects produce variations in the measured data, making the collection of phase data difficult or impossible.

3. Indoors, the anechoic time span becomes shorter with increasing distance, since the path length difference to the ceiling, floor, or side walls is reduced as the microphone is moved farther from the source. The effect is an increase in the lowest frequency that can be measured anechoically (a reduction in frequency resolution).

4. Direct field attenuation will be about 10 dB greater at 30 m (100 ft) than at 9 m (30 ft). This reduces the signal-to-noise ratio of the measured data by roughly 10 dB, or requires that roughly ten times the power be delivered to the DUT to maintain the same S/N ratio that exists at 9 m (a quick check follows this list).

5. Outdoor measurements are difficult due to unstable noise and climate conditions over the time span of the measurement (up to 8 hours).
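
As a quick check of the arithmetic in item 4, here is a minimal free-field Python calculation (inverse-square law only):

```python
import math

extra_loss_db = 20.0 * math.log10(30.0 / 9.0)  # additional direct-field loss from 9 m out to 30 m
power_ratio = 10.0 ** (extra_loss_db / 10.0)   # power increase needed to restore the same S/N

print(round(extra_loss_db, 1), "dB")  # about 10.5 dB
print(round(power_ratio, 1), "x")     # about 11x, i.e., roughly ten times the power
```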

Large measurement distances are possible if the above problems are solved. A large aircraft hangar with a time-windowed impulse response measurement represents a good way to collect balloon data at remote distances.

Our chamber at ETC, Inc. allows measurement out to 9 m (30 ft). This is an adequate distance for the majority of commercial sound reinforcement loudspeakers, but not all of them.

The loudspeaker rotator is portable, so devices that cannot be measured at 9 meters are measured in a very large space at a distance out to 30 meters. A time window provides the required reflected-field rejection. Determination of the required measurement distance is made on a case-by-case basis after considering the device-to-be-tested.

Using the above criteria for the far-field, and fixing a measurement distance of 30 feet (9 m), the highest frequency balloon possible for different size devices can be determined (Figure 3).

Figure 3.

Note that this is the largest dimension of the HF device. If the far-field condition is met for it, it will typically be met for all lower frequencies.
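
Under the same simplified on-axis geometry used earlier (an assumption, not necessarily the exact method behind Figure 3), the highest frequency that still satisfies the quarter-wave criterion at a fixed 9 m distance can be sketched in Python as follows; the function name and example dimension are illustrative:

```python
import math

C = 343.0  # speed of sound in m/s (assumed)

def max_balloon_frequency(hf_dim_m, distance_m=9.0):
    """Highest frequency at which the center-to-edge path-length difference
    at the given measurement distance remains within a quarter wavelength."""
    path_diff = math.hypot(distance_m, hf_dim_m / 2.0) - distance_m
    return C / (4.0 * path_diff)

# Example: an HF section with a 1 m radiating dimension, measured at 9 m
print(round(max_balloon_frequency(1.0)))  # roughly 6.2 kHz
```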

The far-field prerequisite for loudspeaker radiation balloons must be met to allow the data to be projected from one meter to listener seats with acceptable error. The condition is easily satisfied for physically small devices, i.e., bookshelf loudspeakers.

Since sound reinforcement loudspeakers are often physically large, there exists a highest frequency limitation in what can be measured at a fixed measurement distance.

Ideally, data for which the far-field criteria are not met should be excluded or marked as suspect on specification sheets and within design programs. Usually it isn’t, so the user must apply some intuition when modeling HF coverage in auditoriums.

Pat & Brenda Brown lead SynAudCon, conducting audio seminars and workshops online and around the world. For more information go to http://www.prosoundtraining.com.

Posted by Keith Clark on 10/29 at 02:10 PM