Feature

Wednesday, July 01, 2015

Show Central: News & New Products From InfoComm 2015

InfoComm 2015, the annual conference and exhibition for the professional audio-visual market, held in June in Orlando, drew a record 39,105 professionals from more than 108 countries. This represents a 5.6 percent increase in attendance over InfoComm 2014 in Las Vegas.

“InfoComm is the ideal place to make AV purchasing decisions, connect with contacts and learn principles that will boost your effectiveness,” states David Labuskes, CTS, RCDD, executive director and CEO of InfoComm International. “The industry strongly supports the InfoComm show, and we are grateful for the attendee support and for the innovative exhibitors who are committed to making InfoComm a can’t-miss event on the industry calendar.”

There were 950 exhibitors participating at InfoComm this year, occupying a record 515,000 net square feet of exhibit and special events space.

And more than 5,600 seats were filled at InfoComm University sessions, which provided practical training. More than 200 professionals sought out the new advanced class series. Popular courses included Future Trends, Large Scale Projection Mapping, How to Build an App, and Designing Classrooms that Deliver Exceptional Experiences.

We’ll be providing updates here on the “show central” page, so be sure to check back for the latest. And don’t miss our Photo Gallery Tour of new products and people at InfoComm 2015.

News
Outline North America Names Joe Fustolo As Senior Sales Engineer/Product Support
Crown Adopts Common Amplifier Format For DriveCore Install Series
Lectrosonics Announces Milestone Retirement And Two Company Promotions
Biamp VP Of Global Sales Named InfoComm 2015 Volunteer Of The Year
Audinate Hosts Dante AV Networking World Conference At InfoComm 2015
Eighteen Sound Assumes Control Of Ciare Srl Worldwide Sales

Loudspeakers
Renkus-Heinz Introduces ICONYX Gen5 And RHAON II
NEXO Debuts New ID Series Compact Loudspeakers
Meyer Sound Announces LEOPARD & 900-LFC Line Array System
Bose Professional F1 Model 812 Array Loudspeaker & F1 Subwoofer
New JBL Professional PD500 Series Loudspeakers
InfoComm Debut For EAW Redline Powered Loudspeaker Series
L-Acoustics X Series Coaxial Loudspeakers Make North America Debut
Fulcrum Acoustic Launches FL283 Cardioid Line Array
Martin Audio CDD Installation Loudspeakers Make North America Debut
Danley Introduces The Exodus System Line Array
New Additions To Yamaha VXC Series Of Loudspeakers
RCF Introduces Compact Active SUB 702-AS II
Eminence Introduces Two New Neodymium Compression Drivers
Outline Showcases STADIA100 LA Loudspeakers
New Subs For Grund Audio LC Series Column Line Source Loudspeakers
New JBL VTX V20-DF Down Fill Adapter
JBL Professional Showcases VTX V25-II Line Array
New d&b audiotechnik MAX2 Stage Monitor
RCF Adds TTL6-A Line Source To Touring & Theatre Family
JBL Releases New VERTEC DrivePack DP-DA V5 Loudspeaker Presets
FBT Introduces New VERTUS CLA 406A And CLA 118SA

Consoles/Mixers
DiGiCo S21 Digital Console Makes North America Debut
Allen & Heath Launches dLive Digital Mixing System
InfoComm Unveiling Of Yamaha Rivage PM10 Digital Console
XI-Dante Expansion Interface Card For Roland M-5000 Console
Soundcraft Spotlights New Si Impact Digital Console
Waves SoundGrid Integration For Lawo mc²36 Console
New V3 Software For SSL Live Consoles
RCF Shipping 16 & 24-Channel L-Pad Mixers
AKG Highlights DMM8 U And DMM14 U Reference Digital Microphone Mixers
PreSonus Ships AVB Networking For StudioLive AI Mixers
Yamaha Debuts TF Series Digital Small to Mid Format Consoles
Avid Introduces VENUE | S6L Modular, Scalable Live Mixing System
Allen & Heath Launches New GLD Chrome Edition
PreSonus Studio 192 Interface Doubles As Studio Command Center
New Soundcraft Vi5000 And Vi7000 Digital Consoles
PreSonus StudioLive CS18AI Control Surface For RM Mixers & Studio One

Microphones/Wireless Systems
Sennheiser Introduces New SL Headmic 1
Upgraded Intermodulation Analysis Software From Professional Wireless Systems
New MIPRO MI-909 Digital Wireless In-Ear-Monitoring System
Sennheiser Debuts SpeechLine Digital Wireless System
DPA Microphones Introduces Tabletop Shock Mount
AKG Debuts C314 Dual-Diaphragm Condenser Microphone
Audix Releases HT7 Single Ear Headworn Microphone
AKG Introduces D112 MKII Kick Drum Microphone
Radio Active Designs Debuts TX-8 Wireless Antenna Combiner
U.S. Debut Of AKG D5 C Dynamic Directional Microphone
New Listen Technologies Infrared Language Distribution System

Networking/Interfaces
Q-SYS Core 110f DSP Networking Product From QSC
Next-Generation Focusrite RedNet Range Of Networking
New Aviom D400 And D400-Dante A-Net Distributors
Stage Tec Debuts XACI Control Card
Audinate Announces Availability Of Firmware Update To Support AES67

Processors/Amplifiers
Yamaha Adds New MRX7-D Open Architecture Signal Processor
Expansion Of Powersoft Ottocanali Series Amplifier Series
New SPA Amplifier Series From QSC
d&b audiotechnik Expands Installation Amplifiers With 10D & 30D
Danley Introduces New “DNA” Series British-Made Amplifiers
WorxAudio Technologies Debuts PDA Series Amplifiers
Four Models Join Bose Pro FreeSpace Amplifier Line
New BSS BLU-103 Teleconferencing Processor With VoIP
Lab.gruppen Announces LUCIA Amplifier Series Additions
Yamaha Rio 64-Input/Output Rack Makes InfoComm Debut
Upgraded XLS DriveCore 2 Amplifiers From Crown Audio
dbx DriveRack VENU360 Loudspeaker Management System Now Shipping
New BSS Audio BLU-DAN Dante To BLU link Bridge
Outline Highlighting STADIA100 LA Loudspeaker & iP48 Processor
New Attero Tech unDX4I Dante Wall Plate Interface
dbx Presents goRack Performance Processor For Portable PA Systems
Two New Crown Audio DCi Network Series Amplifiers

Software/Apps
Meyer Sound Introduces MAPP XT System Design Tool
New Yamaha ProVisionaire App For iPad
Biamp Systems Introduces The Oreno Suite
Allen & Heath Releases Android Version Of Qu-You App
New Version Of SystemVUE Software From VUE Audiotechnik
Harman Professional Releases JBL HiQnet Performance Manager Version 1.7
L-Acoustics Releases Soundvision 3.0—Faster and Free of Charge
VUE Audiotechnik Announces Dante Support For DSP-Based Products
JBL Professional Releases New VERTEC DrivePack DP-DA V5 Loudspeaker Presets
d&b audiotechnik Launches ArrayCalc V8 Simulation Software
DiGiCo Enhances SD Series With New Software Suite

Hardware/Accessories
New 2-Line VoIP Interface Card From Symetrix
Radio Active Designs Debuts TX-8 Wireless Antenna Combiner
New Attero Tech unDX4I Dante Wall Plate Interface
Debut Of QSC TSC-7t PoE Tabletop Touchscreen
Version 2.0 Roland XS Series Multi-Format Matrix Switchers
SurgeX Smart Energy Management Platform
Attero Tech Debuts unDAES-O Dante To AES3 Bridge
JBL Professional VTX V20-DF Down Fill Adapter

Posted by Keith Clark on 07/01 at 07:35 AM

Tuesday, June 30, 2015

Microphone Characteristics Vital To Know For Sound Reinforcement

Microphone techniques (the selection and placement of microphones) have a major influence on the audio quality of a sound reinforcement system.

In order to provide some background for these techniques it is useful first to understand some of the important characteristics of the microphones themselves.

The most important characteristics of microphones for live sound applications are their operating principle, frequency response and directionality.

Secondary characteristics are their electrical output and actual physical design.

Operating Principle
The type of transducer inside the microphone, that is, how the microphone picks up sound and converts it into an electrical signal.

A transducer is a device that changes energy from one form into another, in this case, acoustic energy into electrical energy. The operating principle determines some of the basic capabilities of the microphone. The two most common types are Dynamic and Condenser.

Dynamic microphones employ a diaphragm/voice coil/magnet assembly which forms a miniature sound-driven electrical generator. Sound waves strike a thin plastic membrane (diaphragm) which vibrates in response.

A small coil of wire (voice coil) is attached to the rear of the diaphragm and vibrates with it. The voice coil itself is surrounded by a magnetic field created by a small permanent magnet. It is the motion of the voice coil in this magnetic field which generates the electrical signal corresponding to the sound picked up by a dynamic microphone.

Dynamic microphones have relatively simple construction and are therefore economical and rugged. They can provide excellent sound quality and good specifications in all areas of microphone performance. In particular, they can handle extremely high sound levels: it is almost impossible to overload a dynamic microphone. In addition, dynamic microphones are relatively unaffected by extremes of temperature or humidity. Dynamics are the type most widely used in general sound reinforcement.

Condenser microphones are based on an electrically-charged diaphragm/backplate assembly which forms a sound-sensitive capacitor. Here, sound waves vibrate a very thin metal or metal-coated plastic diaphragm.

The diaphragm is mounted just in front of a rigid metal or metal-coated ceramic backplate. In electrical terms this assembly or element is known as a capacitor (historically called a “condenser”), which has the ability to store a charge or voltage.

When the element is charged, an electric field is created between the diaphragm and the backplate, proportional to the spacing between them. It is the variation of this spacing, due to the motion of the diaphragm relative to the backplate, that produces the electrical signal corresponding to the sound picked up by a condenser microphone.

The construction of a condenser microphone must include some provision for maintaining the electrical charge or polarizing voltage. An electret condenser microphone has a permanent charge, maintained by a special material deposited on the backplate or on the diaphragm. Nonelectret types are charged (polarized) by means of an external power source. The majority of condenser microphones for sound reinforcement are of the electret type.

All condensers contain additional active circuitry to allow the electrical output of the element to be used with typical microphone inputs. This requires that all condenser microphones be powered: either by batteries or by phantom power (a method of supplying power to a microphone through the microphone cable itself). There are two potential limitations of condenser microphones due to the additional circuitry: first, the electronics produce a small amount of noise; second, there is a limit to the maximum signal level that the electronics can handle. For this reason, condenser microphone specifications always include a noise figure and a maximum sound level. Good designs, however, have very low noise levels and are also capable of very wide dynamic range.

Condenser microphones are more complex than dynamics and tend to be somewhat more costly. Also, condensers may be adversely affected by extremes of temperature and humidity which can cause them to become noisy or fail temporarily. However, condensers can readily be made with higher sensitivity and can provide a smoother, more natural sound, particularly at high frequencies. Flat frequency response and extended frequency range are much easier to obtain in a condenser. In addition, condenser microphones can be made very small without significant loss of performance.

The decision to use a condenser or dynamic microphone depends not only on the sound source and the sound reinforcement system but on the physical setting as well. From a practical standpoint, if the microphone will be used in a severe environment such as a rock and roll club or for outdoor sound, dynamic types would be a good choice. In a more controlled environment such as a concert hall or theatrical setting, a condenser microphone might be preferred for many sound sources, especially when the highest sound quality is desired.

Frequency Response
The output level or sensitivity of the microphone over its operating range from lowest to highest frequency.

Virtually all microphone manufacturers list the frequency response of their microphones over a range, for example 50 - 15,000 Hz. This usually corresponds with a graph that indicates output level relative to frequency.

A microphone whose output is equal at all frequencies has a flat frequency response.

Flat frequency response.

Flat response microphones typically have an extended frequency range. They reproduce a variety of sound sources without changing or coloring the original sound.

A microphone whose response has peaks or dips in certain frequency areas exhibits a shaped response.

A shaped response is usually designed to enhance a sound source in a particular application.

Shaped frequency response.

For instance, a microphone may have a peak in the 2 - 8 kHz range to increase intelligibility for live vocals. This shape is called a presence peak or rise.

A microphone may also be designed to be less sensitive to certain other frequencies. One example is reduced low frequency response (low end roll-off) to minimize unwanted “boominess” or stage rumble.

Directionality
A microphone’s sensitivity to sound relative to the direction or angle from which the sound arrives.

There are a number of different directional patterns found in microphone design. These are typically plotted in a polar pattern to graphically display the directionality of the microphone. The polar pattern shows the variation in sensitivity 360 degrees around the microphone, assuming that the microphone is in the center and that 0 degrees represents the front of the microphone.

The three basic directional types of microphones are omnidirectional, unidirectional, and bidirectional.

Omnidirectional microphone.

The omnidirectional microphone has equal output or sensitivity at all angles. Its coverage angle is a full 360 degrees.

An omnidirectional microphone will pick up the maximum amount of ambient sound. In live sound situations an omni should be placed very close to the sound source to pick up a useable balance between direct sound and ambient sound. In addition, an omni cannot be aimed away from undesired sources such as PA speakers which may cause feedback.

The unidirectional microphone is most sensitive to sound arriving from one particular direction and is less sensitive in other directions. The most common type is a cardioid (heart-shaped) response. This has the most sensitivity at 0 degrees (on-axis) and is least sensitive at 180 degrees (off-axis).

The effective coverage or pickup angle of a cardioid is about 130 degrees, that is up to about 65 degrees off axis at the front of the microphone. In addition, the cardioid mic picks up only about one-third as much ambient sound as an omni. Unidirectional microphones isolate the desired on-axis sound from both unwanted off-axis sound and from ambient noise.

Cardioid microphone.

For example, the use of a cardioid microphone for a guitar amplifier which is near the drum set is one way to reduce bleed-through of drums into the reinforced guitar sound. Unidirectional microphones have several variations on the cardioid pattern. Two of these are the supercardioid and hypercardioid.

Supercardioid microphone.

Both patterns offer narrower front pickup angles than the cardioid (115 degrees for the supercardioid and 105 degrees for the hypercardioid) and also greater rejection of ambient sound. While the cardioid is least sensitive at the rear (180 degrees off-axis), the least sensitive direction is at 126 degrees off-axis for the supercardioid and 110 degrees for the hypercardioid.

When placed properly they can provide more focused pickup and less ambient noise than the cardioid pattern, but they have some pickup directly at the rear, called a rear lobe. The rejection at the rear is -12 dB for the supercardioid and only -6 dB for the hypercardioid. A good cardioid type has at least 15-20 dB of rear rejection.

The bidirectional microphone has maximum sensitivity at both 0 degrees (front) and at 180 degrees (back). It has the least amount of output at 90 degree angles (sides). The coverage or pickup angle is only about 90 degrees at both the front and the rear. It has the same amount of ambient pickup as the cardioid. This mic could be used for picking up two opposing sound sources, such as a vocal duet. Though rarely found in sound reinforcement they are used in certain stereo techniques, such as M-S (mid-side).

Microphone polar patterns compared.
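For those who want to see where these figures come from, the numbers above all follow from the same first-order pattern math. Below is a minimal Python sketch, using the commonly cited textbook pattern coefficients (an assumption for illustration, not values quoted in this article), that reproduces the null angles, rear rejection and relative ambient pickup discussed above, plus the distance factor covered below, to within about a degree:

```python
import numpy as np

# First-order (pressure-gradient) polar patterns: s(theta) = A + B*cos(theta), with A + B = 1.
# The A/B values below are the usual textbook coefficients, assumed here for illustration.
patterns = {
    "omni":          (1.00, 0.00),
    "cardioid":      (0.50, 0.50),
    "supercardioid": (0.37, 0.63),
    "hypercardioid": (0.25, 0.75),
}

for name, (a, b) in patterns.items():
    # Null angle: where A + B*cos(theta) = 0 (only exists when B >= A)
    null = f"{np.degrees(np.arccos(-a / b)):.0f} deg" if b > 0 and b >= a else "none"

    # Rear pickup relative to on-axis. An ideal cardioid nulls completely at 180 degrees;
    # as noted above, real-world cardioids typically manage 15-20 dB of rear rejection.
    rear = abs(a - b)
    rear_db = f"{20 * np.log10(rear):6.1f} dB" if rear > 0 else "  -inf dB"

    # Ambient (random-incidence) pickup relative to an omni: integrating s(theta)^2
    # over the sphere gives the closed form A^2 + B^2/3.
    ambient = a**2 + b**2 / 3
    distance_factor = 1 / np.sqrt(ambient)  # how much farther it can be placed than an omni

    print(f"{name:14s} null: {null:8s} rear: {rear_db}  "
          f"ambient: {ambient:.2f}x omni  distance factor: {distance_factor:.2f}")
```

Note that the cardioid’s 0.33x ambient figure corresponds to the “one-third as much ambient sound as an omni” statement above, and its distance factor of about 1.7 is why an omni must be placed at roughly half the distance to achieve the same balance of direct and ambient sound.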

Ambient Sound Rejection
Since unidirectional microphones are less sensitive to off-axis sound than omnidirectional types they pick up less overall ambient or stage sound. Unidirectional mics should be used to control ambient noise pickup to get a cleaner mix.

Distance Factor
Because directional microphones pick up less ambient sound than omnidirectional types they may be used at somewhat greater distances from a sound source and still achieve the same balance between the direct sound and background or ambient sound.

An omni should be placed closer to the sound source than a uni—about half the distance—to pick up the same balance between direct sound and ambient sound.

Off-axis Coloration
Change in a microphone’s frequency response that usually gets progressively more noticeable as the arrival angle of sound increases. High frequencies tend to be lost first, often resulting in “muddy” off-axis sound.

Proximity Effect
With unidirectional microphones, bass response increases as the mic is moved closer (within 2 feet) to the sound source. With close-up unidirectional microphones (less than 1 foot), be aware of proximity effect and roll off the bass until you obtain a more natural sound.

You can (1) roll off low frequencies on the mixer, or (2) use a microphone designed to minimize proximity effect, or (3) use a microphone with a bass rolloff switch, or (4) use an omnidirectional microphone (which does not exhibit proximity effect).
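As an illustration of option (1), the sketch below uses SciPy to model a console-style low-cut filter and print how much bass it removes. The 100 Hz corner, 12 dB-per-octave slope and 48 kHz sample rate are assumed example values, not settings recommended by the article:

```python
import numpy as np
from scipy.signal import butter, sosfreqz

# A gentle low-cut ("bass rolloff") such as a console high-pass filter.
# Corner frequency, order and sample rate are assumed example values.
fs = 48_000
sos = butter(2, 100, btype="highpass", fs=fs, output="sos")

freqs_hz, response = sosfreqz(sos, worN=8192, fs=fs)
for f in (50, 100, 200, 500, 1000):
    idx = np.argmin(np.abs(freqs_hz - f))
    print(f"{f:5d} Hz: {20 * np.log10(abs(response[idx])):6.1f} dB")
```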

Proximity effect graph.

Unidirectional microphones can not only help to isolate one voice or instrument from other singers or instruments, but can also minimize feedback, allowing higher gain. For these reasons, unidirectional microphones are preferred over omnidirectional microphones in almost all sound reinforcement applications.

The electrical output of a microphone is usually specified by level, impedance and wiring configuration. Output level or sensitivity is the level of the electrical signal from the microphone for a given input sound level. In general, condenser microphones have higher sensitivity than dynamic types. For weak or distant sounds a high sensitivity microphone is desirable while loud or close-up sounds can be picked up well by lower-sensitivity models.

How a balanced input works.

The output impedance of a microphone is roughly equal to the electrical resistance of its output: 150-600 ohms for low impedance (low-Z) and 10,000 ohms or more for high impedance (high-Z).

The practical concern is that low impedance microphones can be used with cable lengths of 1000 feet or more with no loss of quality while high impedance types exhibit noticeable high frequency loss with cable lengths greater than about 20 feet.
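Those cable-length figures follow from treating the microphone’s source impedance and the cable’s capacitance as a simple RC low-pass filter. The sketch below assumes a typical 30 pF per foot of cable capacitance (an assumption for illustration, not a figure from this article):

```python
from math import pi

CABLE_PF_PER_FT = 30.0  # assumed typical capacitance of shielded microphone cable

def cutoff_hz(source_ohms: float, cable_feet: float) -> float:
    """-3 dB point of the low-pass formed by source impedance and cable capacitance."""
    cable_farads = CABLE_PF_PER_FT * cable_feet * 1e-12
    return 1.0 / (2 * pi * source_ohms * cable_farads)

for feet in (20, 100, 1000):
    print(f"{feet:5d} ft   high-Z (10k ohm): {cutoff_hz(10_000, feet) / 1000:7.1f} kHz   "
          f"low-Z (200 ohm): {cutoff_hz(200, feet) / 1000:8.1f} kHz")
```

Under these assumptions, even a 1,000-foot low-impedance run keeps its rolloff above the audio band, while the high-impedance run is already losing top end past about 20 feet.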

Finally, the wiring configuration of a microphone may be balanced or unbalanced. A balanced output carries the signal on two conductors (plus shield). The signals on each conductor are the same level but opposite polarity (one signal is positive when the other is negative). A balanced microphone input amplifies only the difference between the two signals and rejects any part of the signal which is the same in each conductor.

Any electrical noise or hum picked up by a balanced (two-conductor) cable tends to be identical in the two conductors and is therefore rejected by the balanced input while the equal but opposite polarity original signals are amplified. On the other hand, an unbalanced microphone output carries its signal on a single conductor (plus shield) and an unbalanced microphone input amplifies any signal on that conductor.

Such a combination will be unable to reject any electrical noise which has been picked up by the cable. Balanced, low-impedance microphones are therefore recommended for nearly all sound reinforcement applications.

How an unbalanced input works.
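To make that cancellation concrete, here is a toy numerical sketch (the signal and hum amplitudes are made-up values) of a differential input keeping the wanted signal while rejecting hum that appears identically on both conductors:

```python
import numpy as np

t = np.linspace(0, 0.01, 480)                 # 10 ms of "audio"
signal = 0.1 * np.sin(2 * np.pi * 1000 * t)   # the microphone signal
hum    = 0.05 * np.sin(2 * np.pi * 60 * t)    # noise induced equally on both conductors

hot  = +signal + hum                          # conductor 1: signal in normal polarity
cold = -signal + hum                          # conductor 2: signal in opposite polarity

balanced_out   = hot - cold                   # differential input: 2x signal, hum cancels
unbalanced_out = hot                          # single conductor: signal plus hum

print("hum remaining in balanced output:  ", round(float(np.max(np.abs(balanced_out - 2 * signal))), 6))
print("hum remaining in unbalanced output:", round(float(np.max(np.abs(unbalanced_out - signal))), 6))
```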

The physical design of a microphone is its mechanical and operational design. Types used in sound reinforcement include: handheld, headworn, lavaliere, overhead, stand-mounted, instrument-mounted and surface-mounted designs.

Most of these are available in a choice of operating principle, frequency response, directional pattern and electrical output. Often the physical design is the first choice made for an application. Understanding and choosing the other characteristics can assist in producing the maximum quality microphone signal and delivering it to the sound system with the highest fidelity.

Tim Vear is a veteran audio professional who works with Shure Incorporated. For more information visit www.shure.com.

Posted by Keith Clark on 06/30 at 11:31 AM

A Heavy Load: Amplifying Orchestral Instruments At Rock Concert Levels

One of the most challenging tasks confronted by a sound engineer is amplifying orchestral instruments on a loud stage. Problems abound, including bleed, resonance, feedback… oh, and frustration! It’s important to first understand the environment before dealing with the challenges.

When in a “classical” concert hall, orchestral instruments such as violin, cello or upright bass are usually miked with omnidirectional condenser microphones. Omnis are particularly effective at producing a natural sound as they do not focus their attention on a particular area of the instrument, but capture a larger area that includes the bow, strings, F-holes and so on.

During classical concerts, feedback is usually not a concern because the PA system is only used for “sound reinforcement,” and sound pressure levels rarely exceed 90 dB.

However, problems set in when a louder group, such as a rock band, joins the stage. Drums, electric guitars and bass generate significant SPL, which in turn must be compensated for by turning up the stage monitors. The sound generated by the orchestral instruments is lost. To compensate, one can either try close miking the instruments with a directional cardioid mic that attaches to the instrument, or use some form of piezo pickup.

The cardioid mic can work reasonably well, but is not without issues. It only captures the sound from a specific area, which may or may not sound “right.” And because it’s still a “microphone,” it will inevitably pick up sounds from adjacent instruments, the house system and the monitors.

In order to hear themselves on stage, the violins ask for more sound from the monitors, and the next thing you know, here come feedback problems. Things can get even worse when playing outdoors, where feedback caused by room acoustics gives way to wind noise, and levels must be pushed higher because there are no room reflections to help. These factors can lead engineers to use alternatives such as piezoelectric transducers.

Point Of Contact
Brad Madix, long-time front-of-house engineer for Rush, recently experienced this problem first-hand. The production design for the Clockwork Angels 2013 tour called for setting up four violins and four cellos directly behind Neil Peart’s drums.

“We ruled out miking the strings pretty early on, having experimented a little with small mics in proximity of the drums and deciding we were going to get as much snare in the mic as violin,” Madix notes. “All of the players were on IEMs, so even if we did mic the strings in order to provide a proper mix to the band (not to mention the audience), we would have to additionally use contact mics. A combination of mics and pickups might have worked, but in the end we decided to go strictly with pickups.”

Piezoelectric transducers.

A piezo is a contact pickup that captures the vibration of the instrument. It’s typically connected to a preamplifier of sorts and the signal is processed like a mic. But anyone who has tried a piezo pickup will tell you that for the most part, they do not sound all that great.

They tend to sound peaky, and with violin, they can sound shrill. The problem is not so much the piezo transducer, but the way it is loaded.

In researching this problem at my company, we discovered that when you apply the typical load of a mixing console, say 10 k-ohms, to a piezo, it causes the bass and high frequencies to roll off, narrowing the response and generating peaks in the midrange. As you increase the load, it begins to flatten out.

For years, electronic manufacturers have employed a “one size fits all” 1 meg-ohm input impedance as a means to satisfy as many sources as possible. (We can attribute this to Leo Fender, as the typical input impedance on his tube amps is 1 meg-ohm.)

As the impedance rises above 4 meg-ohms, the response extends and flattens out further and seems to really sound great at around 10 meg-ohms.
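A minimal sketch of the bass-rolloff side of that loading behavior: electrically, a piezo element looks roughly like a small capacitor driving the load, so the load resistance sets a high-pass corner at f = 1/(2πRC). The 1 nF source capacitance below is an assumed ballpark value for illustration, not a measurement from this article:

```python
from math import pi

PIEZO_FARADS = 1e-9  # assumed ~1 nF piezo source capacitance (ballpark, for illustration)

for load_ohms in (10e3, 1e6, 4e6, 10e6):
    corner_hz = 1.0 / (2 * pi * load_ohms * PIEZO_FARADS)
    print(f"load {load_ohms:>12,.0f} ohm -> bass rolls off below ~{corner_hz:8.1f} Hz")
```

With this assumed capacitance, the 10 k-ohm load pushes the corner right up into the midrange, while only the multi-meg-ohm loads leave the instrument’s low end intact, consistent with the behavior described above.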

“We knew that impedance matching was going to be a problem right from the get-go, and we just wanted to get the signal into the console without mucking it up by mismatching the impedances,” Madix says. “I thought we were going to have to have something custom made, but Peter told me that Radial was working on a possible solution.”

That solution was the PZ-DI orchestral instrument direct box. To maximize signal handling and reduce distortion, the PZ-DI incorporates a DC-to-DC switching supply that converts the phantom power to provide extra headroom, which in turn supplies more dynamic range.

“Our first tests in Toronto prior to the tour with the PZ-DIs were impressive,” Madix says. “The pickups required some EQ, I would say ‘the usual,’ but the frequency response was great, as were dynamic range and signal-to-noise… there were none of the issues that become apparent when impedances are mismatched. We had the players come in and listen to the tracks afterwards, and they were really impressed.”

Further Concerns
But increasing the load for piezos is not without challenges. As you increase the load impedance, susceptibility to noise also increases. Careful attention must be paid to the circuit that buffers and amplifies the minute signal.

Brad Madix with a Radial PZ-DI that he employed for the Clockwork Angels 2013 tour.

And if you want to use the same DI for a magnetic pickup, the opposite problem arises: applying a 10 meg-ohm input impedance to the magnetic pickup on an electric bass makes it sound glassy and lacking in warmth. This is addressed with a 3-position load switch to set the unit at 220 k-ohms for magnetic pickups, 1 meg-ohm for traditional loads, or 10 meg-ohms for piezos.

Properly loading the piezo transforms the device into a truly functional transducer. However, there can be too much bottom end, so we added a variable high-pass filter to the PZ-DI that eliminates excessive bass.

In general, high-pass filters are probably one of the most important and under-used controls that a sound engineer can employ. Cleaning up excessive resonance eliminates low frequency modulation and enables the various instruments to better sit in the mix, making it easier to balance them as a whole.

“The band is really happy with the sound of the strings,” Madix states. “The composer, David Campbell, was impressed as well. He said it was one of the first shows where you could really hear the instruments and it sounded quite natural.”

A final note: one more important factor to account for in this scenario is that many classical artists get nervous about attaching mics and piezos to their expensive, often vintage instruments. So great care must be taken. Be sure to evaluate instrument mounting kits carefully—some are fine, while others can possibly produce scratches. Piezos typically employ a double-sided tape that will not harm the finish, so it’s another factor in their favor.

Peter Janis is president of Radial Engineering (www.radialeng.com), which has been producing snake systems, direct boxes and interfaces for 20 years.

Posted by Keith Clark on 06/30 at 06:33 AM

Monday, June 29, 2015

Church Sound: Finding Technical Team Members While Learning & Growing

This article is provided by Church Soundcheck.com.

 
I am often asked variants of this single question: what characteristics do you look for in a potential member of your sound team?

Should you look for a frustrated musician? A rocket scientist? A computer geek? A telephone lineman?

Maybe, or maybe not. Attitude is usually more important than pre-existing aptitude.

In this article, we’ll first examine how to identify the proper individuals to serve in your technical support ministry. We’ll also show you how to train them to achieve technical excellence.

A Servant’s Heart
After serving on the production staff at churches for nearly ten years, I’m here to testify to you that you never want to choose a person based solely on their technical knowledge.

Instead, look for someone with a willing heart first. Then ask about their technical knowledge.

Why not look for technical knowledge first? While both are important, technical stuff is easy to teach over time.

Finding someone with a servant’s heart can be more difficult. It’s part of their core personality.

It also illustrates their relationship with God and predicts their ultimate utility within your tech support staff.

Serving in a church ministry requires a boatload of grace and more patience than many people have left at the end of a busy week. In some churches, it means working with difficult people every weekend.

The worship team and tech support team depend on each other’s gifts to be at full muster at the downbeat of the service. The process is much like preparing a weekly meal for all your worshipers.

The worship team and tech support team need to be in unity before, during and after the service. Being on the same page, spiritually, is the key ingredient to this recipe.

Staying F.A.T.
One principle I’ve relied on over the years is that anyone involved in tech support ministry needs to be F.A.T.— faithful, available, and teachable, in that order. Once they’ve joined the tech support team, these people must also be faithful to be there when they’ve promised to be there.

Most of our lives are too busy. Many of us over-schedule our arrivals and departures to the nanosecond. But as a wise friend of mine once suggested, the only way we can be somewhere on time is to arrive there early.

The volunteer should also make themselves as available as is practical. To say they’re committed to the success of the ministry, but then to only make themselves available for one monthly service doesn’t work well in most situations.

Only operating a console one time a month isn’t often enough to become proficient at it. Would you climb on that airplane next weekend if you knew that the pilot only flies once a month? Granted, I’ve never heard of anyone dying from a bad mix, but you get my point.

There is another side of this issue, however. I’ve seen some volunteers make themselves too available, to the point that their relationship with their family starts to suffer. If you get your priorities out of line, your work in that ministry will suffer as well.

Profiles & Personalities
These days, it’s common to find people who work in, on, or around computers, volunteering to serve in the tech support ministry.

Musicians who love all things electronic are another fertile source of tech support volunteers.

My friend, Blair McNair, worked on missiles while he was in the Navy.

At some point he started volunteering in the sound team at his local church.

Years later he became the Technical Director for Benny Hinn at Orlando Christian Center, and today designs sound systems for a living.

Most volunteers do something else for a living. Your church may be blessed with a seasoned audio pro as the volunteer head of the sound team, but that’s not the norm.

This is why any successful volunteer must be clearly and consistently teachable.

That is to say, in the likely event that a particular volunteer doesn’t make his/her living in pro audio, they need to make a committed effort to learn the craft so they can reliably deliver technical excellence in every worship service.

I’m unconvinced that there is any one type of personality to look for. That’s because I don’t think we need to assume that every sound team volunteer must be able to drive the FOH mixing desk.

The individual who typically seeks involvement in a tech support ministry has a detail-oriented personality. These folks make lists for everything.

I have a detail-oriented personality. Knowing that the guitarist is going to take a solo on the third chorus isn’t enough. I want to know what kind of sound he’s going to use, and how loud he will play. I want to know if he’s going to start out soft and build to a loud ending.

I must know if he’s going to use his own effects, or if I should plan on adding some echo effects on my own. Notice that I’m the audio guy, so I really don’t care what he’s wearing that day; that’s for the lighting guy or the programming director to think about.

Musical Background
This is one debate that has gone on for years and years. Should the person who will be driving the FOH mixing desk be a trained musician?

It’s easy for me to say yes, because I made my living as a player for twelve years, and I have a Bachelor of Music degree.

Clearly, someone who has experience as a player or a singer can be respected and accepted more readily by the players in the worship band simply because of the common bond and similar background.

But I do know of very capable mixers who have no formal music background, just a love of the music. I think this decision has to be a very individual one.

But I think we can agree that not everyone should be behind a console. Some can put together a great mix without even breaking a sweat. For others, it’s just not their gifting.

If the interest is there, however, the art of mixing can be learned. It’s not something they’ll grasp overnight, but time and practice and listening analytically are great teachers.

I’ve trained literally thousands of church music pastors, sound team volunteers and technical staff in my workshops. Of all of those people, I can only think of two individuals who just never seemed to get it.

Being a part of the church tech support team isn’t for everyone, but the majority of those who seem naturally drawn to the ministry seem capable of learning and managing the task.

Gifting & Getting The Job Done
Mixing sound is just one of the tasks that the sound ministry is charged with.

You could also find people who are thrilled to do a good job of running the tape duplicators after each service.

Others might enjoy fixing broken mic cables.

Still others might be happy setting up the stage every Saturday night.

Perhaps there’s a self-employed someone who could carve out some time to set the stage or run essential weekday errands.

Someone with a theatrical background might enjoy serving as a stage manager, a runner, or in some other role.

If your pastor has a daily or weekly radio program, someone must learn to use your nonlinear editing software to edit those programs.

If you identify all the tasks that need to be accomplished during a week, and then spread them out over a handful of people, you should find that the job can get done with excellence and without anyone getting overly stressed.

In a large church, you’ll find a trained individual at every post. The FOH desk, monitor desk, lighting desk, TV control room and video projection desk all require trained technicians. Still, in the majority of churches, one person may serve all of those roles simultaneously.

The best idea is to cross-train everyone who becomes part of the tech support ministry. The lighting guy should at least be able to get sound out of the system, and the audio guy should at least be able to get the stage lights up and running if needed. (Editor’s note: Which one do you suspect will do a better job?)

People need a weekend off. People get sick. Cars break down in transit. Your staff needs to be prepared to help out as needed, in season and out of season.


Why Train The Team?
We must recognize that there’s a great disparity between the tech support team and the worship team in most churches.

Think about it. Every worship team member who sings or plays has inevitably studied music at some time in his or her life.

Even if they are self-taught, they’ve invested their time and managed to learn how to play.

North American culture has given us easy access to musical training.

Most public schools have some form of music program.

I began to play music when I was in elementary school, played in various music groups all the way through college, and made my living playing in bands until I was thirty years old.

It was only after I got my music degree that I quit playing music for a living.

Even if we didn’t pursue music as our lifelong ambition, our studies helped us in numerous ways.

In contrast, the tools and programs for learning how to run sound, stage lighting or video haven’t been nearly as accessible, at least not until very recently.

After all, in school, I played a saxophone. I didn’t need a sound system. Maybe you played in the brass section, and they really didn’t need a sound system either.

So, is it fair to compare the talents of a stage full of trained musicians and singers with that of a beginning audio student? No, this is an unfair comparison or expectation.

In real life however, that is what many churches do every week. Predictably and unfortunately, some get frustrated and lose their cool in the process.

Training your crew also helps to strengthen their bond as friends and teammates. It can even enhance their self-esteem as individuals, giving them more confidence.

Where To Find Training
Churches all across the world are crying out for trained sound technicians. Strangely, only a very small percentage of these churches are willing to pay for that training. That’s one very clear reason you rarely see such training opportunities.

If you’re an eager student of audio, reasonably certain that you have your facts straight, and you believe you are ready to start training others, then do what all the rest of us who have trained others in audio have done.

Put together an outline to clearly and logically organize the materials and dig into the resource materials to gather your supporting information. Then gather up your courage and go for it.

I choose to organize the material according to signal flow. That’s an intentional approach. Understanding signal flow logic is key.

When I’m teaching someone to connect an amplifier, for example, and I see them connect the speaker cable first to the speaker, and then to the amp, I have them disconnect both ends and do it over again.

Electrically, of course, this makes no difference to the signal itself; it’s an AC signal and constantly reverses direction. But in terms of signal flow, as you already know, audio travels from the amplifier to the speaker, and I want that way of thinking to become second nature.

One day, years after they’ve stopped calling me nasty names, they’re going to run into an exciting moment when five minutes before the downbeat of their Christmas Cantata, with 2,000 people out in the audience, their sound system stops working.

Suddenly, the success of the event falls squarely on their shoulders and rests on the audio team’s ability to troubleshoot and resolve the problem in a timely manner.

If the concept of signal flow logic is firmly ingrained into their thinking, they’ll be able to rest in their knowledge and resolve the problem quickly and efficiently.

Once, I had the great pleasure of visiting with Bill Johnson, Chief Audio Engineer for Kenneth Copeland Ministries.

As we were touring the facilities at Eagle Mountain Church, he shared with me that they require their tech support volunteers to attend a training session once a month.

Through a simple test, the audio team is divided into beginning, intermediate, and advanced groups.

The classes are taught by technical support staff. That is so cool.

Ultimately, it helps bring the entire crew onto the same page, and because it keeps everyone growing in their knowledge, they can do an ever better job of supporting the technical needs of the worship services.

Source Knowledge
The Internet is overflowing with information about audio. Some of it is even correct. If you’ve been in audio for some time and you’re reasonably confident in your knowledge, then go ahead and explore.

Just be alert for the occasional piece of audio mythology. If you’re a beginning student, I encourage you to stick to the main information highway.

We strive to make our own ChurchSoundcheck.com a mythology-free zone. Likewise, ProSoundWeb.com focuses on performance audio technology and works hard to ensure accuracy.

Believe it or not, you can also trust the information posted on websites by the major manufacturers. For example, you’ll find accurate, reliable information on sites by Rane, Crown, EAW, QSC, Allen & Heath, dbx, and others.

I also suggest looking at the training courses from Syn-Aud-Con.

Wake Up & Smell The Silicon
Finally, I’d like to leave you with a wakeup call. Have you stopped growing in your technical knowledge? Have you stayed on top of the DSP revolution in regard to digital consoles, or are you letting digital know-how pass you by?

Even worse, are you a know-it-all? Are you the type of individual who figures that they know all there is to know about audio, or lighting, or video?

Let me suggest to you that one day, in the not too distant future, you’re going to find yourself left in the digital dust of some young kid who just figured out how cool audio is, who has never even touched an analog audio console and has been raised on digital.

There’s so much new stuff in play these days. It is impossible to stay on top of every technological change, in every equipment category, but that’s no reason to roll over and ignore the digital revolution.

It’s cool to learn from the past, to apply miking techniques learned from the masters, for example. It’s not cool to have been mixing at your church for the past thirty years and to walk up to a new console one day only to discover that you can’t even locate the ON switch.

If you’re not achieving the level of technical excellence that you aspire to each week, maybe it’s not the gear. A simple lack of knowledge could be standing between your audio education goals and the reality you live with.

Fortunately, technical stuff can be taught and technical savvy learned, but you must work at it. Likewise, your volunteers and tech support staff must work at it.

Stay on task. Read. Study, study, study. Attend trade shows, workshops and seminars. Subscribe to trade magazines. Buy technical books. Read and study some more. After that, go teach someone else.

 
Curt Taipale heads up Church Soundcheck.com, a thriving community dedicated to helping technical worship personnel, and he also provides expert systems design and consulting services with Taipale Media Systems.

Posted by admin on 06/29 at 09:20 AM

The PA Trifecta: Handing Off A Live System That Checks Out

“Here…these are for you.

“Let me count that back - One (1) pair floppy plaid shoes; one (1) Bravo-52 latex red rubber nose (with custom strap); two (2) mechanical chickens with servo-waste evacuation system; and three and a half (3 1/2) cans of I told you so (generic substitute).

“I must ask you to put on these garments, pick up the chickens and proceed about your business until load out.

“This demand is in accordance with our technical rider, which clearly states in verbiage Grouping 3, Subsection F, Line U: ‘…all drivers must exhibit the correct DC polarity as specified by their manufacturer’.

“Should a DC polarity inversion be suspected and confirmed through qualified empirical method by (the artist’s) engineer, the specified and introduced lead system engineer (A1) for the sound system provider must and will don a garb of traditional JESTER SUPPLIES (as provided by artist’s representative) and wear said fashion until such time as artist’s engineer deems the A1 has reached a sufficient level of contrition.

“We appreciate your cooperation in this matter.”

It doesn’t really say that. The production manager for the artist that I mix does, however, begin the sound portion of our show advance with this caveat: our front of house guy will confirm the DC polarity of your rig using a combination of software-based FFT and handheld pulse checkers.

If he’s just been mugged prior to load in and the thief has absconded with all his gear, he will still stagger naked up to your PA with a 9-volt battery and a screw gun. This will happen. If there is a problem, he will find it.

Along the way, he will also find every off center diaphragm, sketchy cone compliance and secret TRS-to-Edison connection in your rig. To spare yourself the misery of having your pants pulled down in public, please confirm that everything is moving the right way before it leaves the shop.

Is he kidding? Nope—we burned out two Makita batteries this year alone opening boxes. I’ve officially dubbed last summer the “Summer Of Left Isn’t Right.”

Using my nifty calculator with LOG function, I’ve determined that there was a 13 dB increase in the amount of systems we were provided that had a significant difference between the left stack and the right stack.

Some days it was subtle…pink noise coming from the left, the sound of me asking if all the amps were powered up coming from the right. Then there were the 4 dB to 5 dB (SPL) inconsistencies in magnitude responses I was seeing in April, which paled against the 9 dB to sometimes 10 dB (again, SPL) variances I was encountering in September.

This wasn’t comb filtering from a spherical array in a geometrically symmetric room; this was daddy’s leaving the house on one side and mama’s coming home for the night on the other. Big deluxe, textbook inversions with a very demonstrative phase wrap right through the middle of a null at crossover.

I call it the PA Trifecta: the sound of the Left, the sound of the Right, and then the composite of both. Pick one, flip front of house, and make sure management stands only there, with their heads strapped into neck immobilizers.

My hands-down favorite was panning a signal to the left and getting three of the six compression drivers on one side and four of the 12 cone drivers on the other. Panning it right yielded all of the compression drivers on one side, a front fill, two subs, and I’m pretty sure the soda machine started vending Skittles.

Time to pause for a second and make sure something is clear—I am ALWAYS pulling for the PA company. Honestly, what I want to do is turn to the system tech, shake his/her hand vigorously and gush about how truly splendid the boxes are. (I’ve also found that proffering a half-eaten yet properly wrapped Milky Way will also go a long way toward cementing a friendship).

Instead, too often, I’m treated to wildly different responses coupled with the now absolutely classic (and I actually have this printed on a T-shirt), “worked fine last night…” This statement is usually uttered with a spectacular amount of affected indifference.

My retort, now equally worn out, is a paraphrasing of a statement made by a very good friend of mine years ago—to wit: “the only difference between last night and tonight is you.” Takin’ it right to third grade…

I guess, given my nature, what I really want is intense concern. You know, a significant “Hmmm…” from the system tech, a quick confab with his second, and a suggestion to me that I go get a coconut donut and all will be better when I return.

It’s how I handle the situation when I‘m the system guy. Abject internal mortification coupled with a smooth verbal map of the directions to catering.

Then, engineer properly shooed… warp 3 triage. After all, whether you own everything or work for a PA company, when the keys are handed off to a guest engineer, you are effectively saying, “I’ve checked this through and it meets with my approval”.

If something is wrong, three things are assumed: 1) You didn’t check it through carefully enough; 2) The problem is beyond your skill level; 3) You own your own big floppy shoes and red nose and “know what? I think they match my cape very nicely.”

In all fairness, I’ve been handed rigs by a shop that I had no part in prepping. Many times it was as if the shop manager sat me down, looked deeply into my eyes and told me he was putting a long piece of splintery two by four in the truck and I would know what to do with it when the time came.

On days like that it’s sometimes easier to introduce myself as the LD who’s actually only helping out with sound… but I can’t. No matter what the back-story, the guy or gal coming in today only wants to know that it works and works properly. No excuses. My reputation that day will be tied to the functioning state of that rig.

But back to the “L&R and which one of these things is not like the other.” There are, without a doubt, very subjective aspects of our field. What sounds good to one person may sound like two ferrets mamboing on glass to another.

Equally there are simply some cold hard objective facts as well. Things like… your PA is not time aligned. That is a trademark. (If time could be aligned, I’d be rich and you would be my slave.

I’d also have a giant Sweet Tart dispenser in my bedroom.) And should 13 of 26 drivers be blown on one side of the PA, finding half out of polarity on the other side does not make things a wash and give you time for a nap.

Without turning this into a very special episode, here’s my basic idea. Before anyone you could look foolish in front of shows up (excluding video), turn the PA on and listen to it.

Even if it sounded great last night, do a band-pass check and then turn the whole thing on together. If it sounds different, now’s a great time to figure out why. If the PA is flown, press the big button under the down arrow on the motor control. You might as well check this ahead of time since you really only have a two in five chance somebody won’t say something later.

Here’s another tip. If you’re running late and have just discovered a discrepancy, tell the band engineer straight up, “Hey, there’s something here I need to sort out, sorry. Can you give me a couple minutes?”

Say this even if it’s a complete lie and you have no idea what’s wrong. I guarantee if the guy (or gal) is qualified and professional, they will immediately add 10 “attaboys” into your petty cash fund just for caring.

Look earnest enough and they might even get you water and do helpful things like tell road stories that have no bearing on anything except tacitly coupling their name with someone famous. (Ooooh… just the thought of it makes me yearn for tomorrow.)

I’m attending a meeting later this week with all of the band engineers on the planet. After the pre-screening (just like at the airport except it weeds out whining, complaining, and technically incompetent FOH guys), I’ll shout over the gate to the three people that make it through and discuss these issues with them.

Here’s my proposal: I’ll try to get them not to torture your PA so badly that it gets in a snit and stomps off, if all the system engineers on the planet vow to stop trying the Jedi mind trick on us when we walk in…

“There is nothing wrong with this PA, let us pass… there is nothing wrong with this PA, let us pass.”

As the kids say… Peace. I hope they mean it.

Sully is a veteran live sound engineer who now works with L-Acoustics.

Posted by admin on 06/29 at 05:04 AM

Friday, June 26, 2015

RE/P Files: The Grateful Dead, A Continual Development Of Concert Sound

PSW Top 20 presented by Renkus-Heinz

 
This article originally appeared in the June 1983 issue of Recording Engineer/Producer magazine.

The Grateful Dead have been playing their unique brand of improvisational, eclectic music going on 18 years now.

Though their records are modest sellers, and more or less ignored by radio and the “establishment” press, the Dead are consistently among the highest-grossing concert acts in the country.

What they do musically is improvisational, existential, and not always satisfactory; but since the beginning the Dead have been attended and experimented upon by forward-looking sound specialists, always seeking to improve the quality of their live sound.

Dan Healy has been mixing the Dead’s concerts since the band first took to the San Francisco clubs and ballrooms, and he says he’s never been bored.

To Healy, the Dead is “a vehicle that enables an aggregate of people to experiment with musical and technical ideas. It’s a workshop and a breadboard, as well as a dream and a treat. There’s no place in the world that I know of that would give me this much space to experiment and try new things and also to hear good music.”

The Dead’s own people have developed equipment and techniques to improve the state of the sound reinforcement art, and they have invited others to use Grateful Dead gigs as live testing grounds.

“We live on the scary side of technology, probably more than we ought to,” guitarist Bob Weir concedes. But you don’t learn much from maintaining the status quo, and the Dead have always encouraged experimentation and sought new knowledge in many areas.

Dan Healy

The Early Days
The first PA system Healy operated, at the Fillmore Auditorium in San Francisco, consisted of a 70-watt amp, two Altec 604s, and a two-input microphone mixer.

“And that was far out compared to what was there the week before,” he recalls. When Healy and his fellow soundmen started trying to put better systems together, they found that the hardware available was not very advanced.

“The first thing we did was go get tons of it, only to find that that was only a stopgap measure,” Healy remembers. “It was obvious that there was nothing you could get off the shelf that you could use. Furthermore, there were no answers to our questions in journals or texts; where the equipment ended, so did the literature and research. What we needed was past the point where R&D had taken sound equipment.” So they set out to find the answers for themselves.

Healy and the Grateful Dead became willing guinea pigs for John Meyer, then of McCune Sound; Ron Wickersham of Alembic; and others on the scene who were looking for ways to deliver music painlessly and efficiently at the often ridiculously high SPLs of the San Francisco sound and rock music in general.

“Those guys were long in the design and prototype area,” Healy explains, “and we were long in the criteria. We built a system and scrapped it, built another one and scrapped it. We never had a finished system, because by the time we’d get one near completion it was obsolete in our minds, and we already had a new one on the drawing boards.”

The concept of speaker synergy and phase coherency in particular was understood by the early Seventies, and several designers had come up with ways of implementing it. John Meyer and McCune Sound developed a three-way, tri-amped single-cabinet system with crossovers that reduced phase shift considerably. It was a significant improvement, but there was plenty of work yet to do.

While Meyer was in Switzerland studying every aspect of speaker design, acoustics and the electronics of sound, Healy and Alembic and the rest took off in other directions.

The Dead debuted a new system at San Francisco’s Cow Palace on March 23, 1974, in a concert dubbed “The Sound Test.” Bassist Phil Lesh calls it the “rocket gantry” and maintains that it was the best PA the Dead ever had.

“It was the ultimate derivation of cleanliness,” Healy explains. “No two things went through any one speaker. There was a separate system for the vocals and separate systems for each guitar, the piano, and the drums. You could get it amazingly loud, and it was staggeringly clean, cleaner than anything today. It still holds the record for harmonic and most especially intermodulation distortion.”

Healy calls this system’s theory of operation the “as above, so below theory. If you stack a bunch of speakers vertically and stand close to one, you hear the volume of that one speaker. If you move a little farther away, you hear two speakers; move away some more and you hear three. If you have a lot of them stacked up high, you can move quite a ways away and the volume stays the same.”
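
For readers who want the arithmetic behind that observation, here is a minimal free-field sketch (in Python, not from the original article): an ideal point source loses about 6 dB per doubling of distance, while the near field of a tall vertical column behaves more like a line source and loses only about 3 dB, which is why the level holds up as you back away from a high stack. The distances below are illustrative.

```python
# Idealized spreading-loss comparison (illustrative numbers only).
import numpy as np

distances = np.array([2.0, 4.0, 8.0, 16.0, 32.0])  # listener distances in meters (assumed)

point_source_db = -20 * np.log10(distances / distances[0])  # 1/r spreading: ~6 dB per doubling
line_source_db = -10 * np.log10(distances / distances[0])   # 1/sqrt(r) near field: ~3 dB per doubling

for d, p, l in zip(distances, point_source_db, line_source_db):
    print(f"{d:5.1f} m   point source: {p:6.1f} dB   tall column: {l:6.1f} dB")
```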

There was no mixing board in the house. Each musician controlled his own instrumental volume, because his speaker stack was its own PA system.

Guitarists Bob Weir and Jerry Garcia each had about 40 12-inch speakers in vertical columns, and bassist Lesh had a quadrophonic system. Vocals also were delivered to the band and the audience by the same speakers. Each singer had a pair of mikes, wired out of phase so that background sound arriving equally at both was canceled, while what was sung into one mike was passed on to the amplifier.

Healy recalls one unfortunate incident a year before the gantry system was officially unveiled, when some of these principles were tested at a concert in Stanford University’s basketball arena.

“We spent maybe $20,000 on amplifiers, crossovers, and stuff,” he recalls, “and we rebuilt a lot of Electro-Voice tweeters. We pink-noised the room from the booth and got it exactly flat. If you flatten a system from a hundred feet away, it’ll sound like a buzzsaw, and it did.

The Wall Of Sound, a.k.a. “The Sound Test System,” in action at the Hollywood Bowl, 1974. Photo by Richard Peshner.


“We started the show, and in the first two seconds every single one of those brand-new tweeters was smoked. We went through all those changes to put protection devices in, and they never worked, they blew long after the speakers were gone.”

There was no hope of replacing the 80 or more tweeters they’d blown, so Healy says they “opened up the tops of the crossovers, equalized a little bit and faked it.”

Healy points out philosophically that recovery from such catastrophes is “another thing that you learn after enough years. Recovery is your backup buddy.” He also notes that the years of experience make it much easier to estimate what will work and what won’t, so it’s easier to avoid disaster.

[This writer happened to have been in attendance at that Stanford concert. Although there were some rather long pauses while the equipment was worked on, the show itself was a good one, and a high time was had by all.]

It was economics that caused the “Sound Test” system to be dismantled. The gasoline crisis of the mid-Seventies made it unfeasible to truck tons of speakers, amplifiers and spares plus two complete stages which leapfrogged so that one could be set up before the PA arrived from the last gig.

“It began to eat us up after a while,” says Healy. “Remember that we were trying to take this across the country and interface with halls: set up the equipment, play a show for 20,000 people, tear it down, then show up the next day in another city and do it again for three weeks in a row, or a month, or six months.

“We were damn lucky,” he adds. “We got a tremendous amount of knowledge out of that system before it became such a burden that it started to distract from the music.”

Smaller Can Be Beautiful
When the Dead resumed touring in 1976, after a 21-month hiatus, PA technology had advanced sufficiently that it was no longer necessary to isolate each instrument and run it through a separate speaker system—not to mention the fact that it was economically impossible to truck those mountains of gear around.

“Efficiency comes down to the number of boxes that you have to carry, of weight in a semi-truck going down the highway,” Healy observes.

Not only was it impractical, but it was no longer necessary. In the intervening years, what Healy and the Dead wanted—a system that performed as well as the Wall of Sound,  but which was “one fourth the size and four times as efficient”—came into existence. “The system we have now is better than the ‘74 system, overall, even though the ‘74 system may have been better in certain ways.”

The Dead currently tour with a PA owned by Ultra Sound, using speaker systems and associated electronics by Meyer Sound Labs. “Meyer has been able to extend the low and high frequencies without hopelessly distorting the rest of the sound,” Healy notes. “That’s actually the main significance.” And by arranging the speaker cabinets to work together in a very precise way across the whole frequency spectrum, it takes fewer drivers to cover the desired area, and intelligibility is uniformly good nearly everywhere.

With the quality of the PA hardware firmly in hand, Healy says that the Dead’s concert setup these days goes through subtler changes and refinements.

Ultra Sound concert system for a Grateful Dead concert at the Oakland Auditorium, December 1981. Main flown speaker system is made up of Meyer Sound Laboratories MSL3 cabinets. Photo by Kurt Anderson.

One interesting development came to Healy almost by accident, and resulted in a very useful device to make his job easier.

“The vocal mike is the loudest one in the mix,” he explains, “and if it’s open on the stage it’s picking up drums or guitars from 15 feet away, and adding them in 15 milliseconds later—which is that many degrees of phase cancellation—and the net result is a washing-out of the mix. You can’t use audio amplitude to gate those mikes, because the guitars are frequently louder at the mike than the voice that’s standing right in front of it.
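
As a rough illustration of the numbers Healy is describing (a sketch only, not part of the original article), the leakage delay and the resulting comb-filter notch frequencies can be estimated from the path length. The 15-foot distance is the one he mentions; the speed of sound and everything else are assumptions.

```python
# Estimate the leakage delay from a backline source to an open vocal mic,
# and the first comb-filter notches created when that late arrival is summed
# with the direct signal at roughly equal level.
SPEED_OF_SOUND_FT_PER_S = 1130.0  # approximate value at room temperature

def leakage_delay_ms(distance_ft):
    return 1000.0 * distance_ft / SPEED_OF_SOUND_FT_PER_S

def first_notches_hz(delay_ms, count=5):
    # Notches fall at odd multiples of 1 / (2 * delay) for two equal-level arrivals.
    delay_s = delay_ms / 1000.0
    return [(2 * k + 1) / (2.0 * delay_s) for k in range(count)]

delay = leakage_delay_ms(15.0)
print(f"Delay for a 15 ft path: {delay:.1f} ms")
print("First comb-filter notches (Hz):", [round(f, 1) for f in first_notches_hz(delay)])
```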

“So a certain amount of me always had to be on the watch for the singers so I could turn their mikes on,” he continues. “That was annoying, and it kept me from being able to listen on a more general level.

“The Paramount Theater in Portland, Oregon, has a balcony that’s right on top of the stage. I was looking down at the guitar players, and it all connected for me. I’m a musician myself, and I know that one of the most embarrassing things that happens when you’re playing rock ‘n’ roll is running into the mike and banging yourself on the lip or being a mile away from it when it’s time to sing.

“That night in Portland I realized that every musician has a kind of home base where he puts his foot in relation to the stand so he knows he’ll be right at the mike. It was duck soup: I got the kind of mats they use to open doors at the grocery store, then designed and built the electronics that gated the VCAs [to control the mike-preamp gain], and lo and behold, it worked!”

For keyboardist Brent Mydland, the situation wasn’t so simple. John Cutler, who works with the Dead in R&D as well as other capacities, designed a system around the sonar rangefinders used in Polaroid cameras. Using discrete logic rather than a full-blown microprocessor, Cutler came up with an automatic gate that opened the mike when Mydland’s head came within singing distance of either of his two mikes.

Detail of the right-hand cluster of MSL3s at the Oakland Auditorium in 1981. Photo by Kurt Anderson.

“It’s just one of those things that came about as a means to an end,” says Healy. “I built the floormat [device] just so I could be freed from switching on microphones.”

Rather than get involved in marketing a device like this, which Healy says is “not my business,” he just has a few extra circuit boards made. “If somebody comes by and wants to try it, we give them the cards and a parts list.”

Because every Grateful Dead gig is different—no songlist, plenty of room for instrumental improvisation, no pre-arranged sound cues to speak of—mixing for the band has never settled into a routine for Healy.

“Some nights they start out screaming and get softer, and some nights they start in one place and stay there,” he says. “There isn’t really any good or bad in it—it’s just a different night in a different way. From the start to the end of the show, it’s a continuous progression, figuring out how to spend the watts of audio power that you have in such a way that it’s pleasant and human.”

It’s been years since Healy went into a hall and pink-noised the sound system. “I leave my filter set flat, and I dial it in during the first couple of songs. After enough years of correlating what I see and hear, I know what frequencies, how much, and what to do with it.”

Test equipment is on hand for reference, but Healy prefers to rely on his ears. “You have a speedometer in your car, but you don’t have to use it - or even necessarily have it. You don’t need it to know how fast you’re going, but it’s there for reference: That’s how I use the SPL meter and the real-time analyzer.”

In the “hockey-hall-type spaces” the Dead play in these days, Healy likes to set up about 85 feet from the stage.

“In my opinion—and my opinion only, for that matter—the ideal combination of near-field and far-field is 85 feet. I don’t like to be far enough into the far field that it’s a distraction, but for me it’s important to hear what the audience hears.” Healy considers himself the audience’s representative to the band, comparing notes with the musicians after shows, and telling them things they might not want to hear “if I feel I have to.”

He also encourages—within reason—those members of the Dead’s following who bring their recording gear to concerts.

“I’m sympathetic with the tapesters, because that’s what I used to be,” he says. “I remember buying my first stereo tape machine and my first two condenser microphones, sweating to make the payments, and going around to clubs and recording jazz. So I’ve sided with the tapesters, helped them and given them advice and turned them on to equipment.

Side stack of the Ultra Sound system at Ventura County Fairground, July 1982. Photo by David Gans.

“I learn a lot from hearing those tapes,” he continues. “The axiom that ‘microphones don’t lie’ is a true one. If you put a microphone up in the audience and pull a tape and it doesn’t sound good, you can’t say, ‘It was the microphone,’ or ‘It was the audience.’ You’ve got to accept the fact that it didn’t sound good. When you stick a mike up in the audience and the tape sounds cool, it’s probably because the sound was cool. So it’s significant to pay attention to the tapes.”

Even after 18 years of working with the Dead, Healy says he still enjoys going to work every day. “I’ve been doing it so long that I don’t even look at it as a job,” he explains. “It doesn’t get stale for me on any continuous basis. I react more to ‘Tonight was a good night,’ or ‘It wasn’t so good.’ I can have a bad night and go home discouraged and kicking the dog, grumble-grumble, but I’m always ready to start again tomorrow.”

Additional Coverage
The Grateful Dead System At The Oakland Auditorium, December 1982

According to Howard Danchik of Ultra Sound, “The Dead’s system, as always, was run in stereo. The main speakers were flown, and comprised 12 MSL3s at each side of stage, plus a center cluster of eight (four left and four right channel), also above the band.

“Suspended from the side clusters are three Meyer Sound Labs UPA cabinets, angled downward to fill in for those at the front of the audience. There are also four UPAs below the lip of the stage at the center (two left and two right) for the spectators at the very front-center, plus one UPA at the rear of each main cluster, pointed up and back for spectators in the balcony directly to the sides of the stage.

“Each MSL3 is driven by 650 watts RMS of amplification—225 to each 12-inch speaker (two per cabinet), and 200 to the four piezo tweeters. One MSL processor is used to drive all the MSL3s on each side; two (one per channel) to drive the center cluster; and two (one per channel) for the front and sidefill UP1As.

Diagram of the system in Oakland, 1982.


“The subwoofers were made up from eight MSL 652-R2 subwoofer road cabinets (two 18-inch drivers, front-mounted) on each side, stacked on their sides, four wide and two high. Each speaker is driven by 225 watts of Crest amplification. The processor takes a full bandwidth signal from the house mix, and extracts 80Hz and below for the subwoofers.

“Additional speaker systems included: for the lobby four UPAs (stereo, via Meyer processors); for the bars one UPA in each bar (mono, one processor each); the kitchen one UPA (mono, one processor); and the kids’ room a pair of Hard Truckers five inch cubes (mono, no processor).

“All power was provided by Crest amps, 225 W RMS per channel into 8 ohms. House mixer was a Jim Gamble custom board, 40-in/8 stereo submasters, with automatic built-in mono output. The monitor mixer was a Gamble custom 40/16 console. House effects included a Lexicon 22/lX digital reverb and Super Prime Time; dbx Boom Box subharmonic synthesizer; a collection of vocal gates; and an autopanner, homemade by Dan Healy & company. Microphones included Shure SM78s for vocals, plus a new Neumann mike for Jerry Garcia, and Sennheiser 421s, AKG C451s and C414s.”

Editor’s Note: This is a series of articles from Recording Engineer/Producer (RE/P) magazine, which began publishing in 1970 under the direction of Publisher/Editor Martin Gallay. After a great run, RE/P ceased publishing in the early 1990s, yet its content is still much revered in the professional audio community. RE/P also published the first issues of Live Sound International magazine as a quarterly supplement, beginning in the late 1980s, and LSI has grown to a monthly publication that continues to thrive to this day.

Our sincere thanks to Mark Gander of JBL Professional for his considerable support on this archive project.

Posted by House Editor on 06/26 at 11:47 AM

“Equalization” – Now There’s A Loaded Word

Why is there so much EQ available on so much kit, if it's so awful? Every console, analog or the other way, is loaded with equalization from the inputs to the outputs and all points in between.

If one follows the literature (and street talk) in both the audiophile and professional sound communities, “equalization” is a very bad thing.

If you use it, you get, in no particular order, comb filtering, phase shift, lack of transparency, non-linear response, one note bass, harshness, mid-fi sound, lack of neutrality, proof of your status as an amateurish guitar-store soundman, as well as proof of your status (from the audiophile perspective) as a deaf knuckle-dragging roadie.

Standing on Mars, as they used to say in the History Department (I don’t think they would have liked “EQ” either – it would have offended their sense of ideological purity), you get the impression that users of EQ are obviously agents of the beast, or at best deaf, inexperienced, or misguided.

Uh huh. Then why is there so much EQ available on so much kit, if it’s so awful? Every console, analog or the other way, is loaded with equalization from the inputs to the outputs and all points in between.

Our loudspeaker management systems nominally have dozens of points on the sends, as well as points local to the various driver groups in the bandpass. The more advanced management systems contain high zoot options such as FIR filters, which remain the best sounding (if somewhat hard to use on the fly) EQ schemes I have heard.

Most of our freestanding mic preamps have a band of EQ or two, and many of us still insert parametrics or thirds into main or monitor sends. The various digital recording technologies are loaded with EQ, much to the horror of the old school analog recording purists who would eschew level adjustment or limiting during a live 2-track recording, let alone any futzing with the signal by some clown with his hand on a tone control.

That’s right, “tone control.” Don’t like those words, do you? Very negative connotation index, as the marketing types would say.

Visions appear of friends recently returned from the Vietnam War with Sansui receivers playing Hendrix through mondo Pioneer speakers with all the tone gains slammed and the “loudness” control engaged – now that was a pure sound, right?

Actually, it wasn’t too bad. Enormous bump at 80 Hz, big warm out of phase mid frequencies, and a really icky irritating high end – perfect for Jimi’s speed guitar stuff. Thing sounded like a jukebox at a truck stop.

More importantly, though it was the worst technically, it was musical in the sense that it was true to the music that would most likely be played on it. Not the right thing for Fritz Reiner and the CSO doing Mahler 4, but totally appropriate for Hendrix, Elvis, Hank Williams, Sinatra, Sam Cooke, and the other popular artists native to such systems.

The audio (and audiophile) fundamentalists in the audience are now growing uneasy, as it would appear that I am about to make the case for what might be termed “technological relativism,” as in the purity of the medium must be compromised to deliver the full emotional (or business-related, as we will see) content of the message. Yup – that is exactly what I’m saying.

The good news is that there are systems and methods for every form of audio belief. The trick is to correctly match the system (and method of operation, as in how much EQ, if any) to the nature of the event.

If you do your sound work, whatever it is, as an expression of a set of fundamental beliefs, operating outside of a business or emotional (as in musical) model, you will not hit the mark as often as the technological relativists, who say screw it and grab an equalizer to fix the mess, or perhaps, defeat every equalizer to fix a different mess.

For example, my main teaching rig consists of eight stacks run in various configurations, depending on what’s happening any given day. Each stack has a double 18-in and a three-way (15”/6.5”/1”). There’s huge power and a network of nine XTA’s behind the thing.

It normally runs with 24 dB/octave Linkwitz/Riley slopes, and a ton of parametric in both the PEQ and driver sends. Audiophiles would be horrified. Yet I can stand in front of any of the stacks with an open mic and yank it up to 105 dB or so and there will be no feedback. That’s important if you are in a live sound classroom with a lot of guest musicians and open mics.

The rig sounds OK in these circumstances, but if we indulge in a little audio absolutism, we can make it sound better, with the caveat that the system will go into instant feedback in the “pure sound” mode with an open mic. In the “pure sound,” or audiophile absolutist mode, there is also a definite limit on sound pressure levels, as to get the rig to sound this good we have to go to second order Bessel slopes and take the high pass down to 13 Hz from 28 Hz.

And, of course, no equalization in this mode, period.

When we do this, the rig approaches audiophile clarity, extension and transparency. If we take it louder than 90 dB or so, the drivers start to distort, but up to that point it sounds amazing. Unfortunately, the experience in this mode is limited to playback from disc or file.
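
For anyone who wants to experiment with the two alignments described above, here is a hedged SciPy sketch. The 24 dB/octave Linkwitz-Riley and second-order Bessel high-pass shapes and the 28 Hz and 13 Hz corners come from the text; the sample rate and the inspection frequency are assumptions, and a real system would need the matching low-pass sections and driver alignment as well.

```python
# Sketch of the two high-pass alignments discussed above (not a full crossover).
import numpy as np
from scipy import signal

fs = 48000  # sample rate (assumed)

def linkwitz_riley_highpass(fc, fs):
    """LR4 high-pass: two identical 2nd-order Butterworth sections in series (24 dB/oct)."""
    sos = signal.butter(2, fc, btype="highpass", fs=fs, output="sos")
    return np.vstack([sos, sos])

def bessel_highpass(fc, fs):
    """2nd-order (12 dB/oct) Bessel high-pass with gentler phase behavior."""
    return signal.bessel(2, fc, btype="highpass", fs=fs, output="sos")

loud_mode = linkwitz_riley_highpass(28.0, fs)  # "classroom" setting from the text
pure_mode = bessel_highpass(13.0, fs)          # "audiophile" setting from the text

for name, sos in (("LR4 @ 28 Hz", loud_mode), ("Bessel-2 @ 13 Hz", pure_mode)):
    w, h = signal.sosfreqz(sos, worN=4096, fs=fs)
    mag_db = 20 * np.log10(np.abs(h) + 1e-12)
    idx = np.argmin(np.abs(w - 25.0))  # inspect the response near 25 Hz
    print(f"{name}: {mag_db[idx]:+.1f} dB at {w[idx]:.1f} Hz")
```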

Let’s analyze goals then – in this instance, if the goal is super sound quality, we can’t have any EQ. If the goal is open mic stability and high SPL, we need massive EQ. Neither way is “right” – the situation will define which one solves the particular problem.

For the audio pragmatist/relativist, EQ nominally relates to two goal situations – the “tone control” make it sound “better” issue, and the “no feedback” issue. There is an inherent conflict between these goals – if you hack too much in the EQ domain for feedback suppression, it will sound hollow or weird or something. Excessive concern with tonal purity will result in unacceptable gain before feedback with open mics, and probably get you fired. The trick is to strike a balance between the two.

Such a balance cannot be measured (you knew that one was coming – ha ha), of course. The operator has to have those files onboard, based on show experience and a hell of a lot of listening to high-resolution playback systems and live acoustic events. Listening to MP3’s on a laptop with a set of $50 headphones isn’t going to get it. I spend too much time and money on my main home rig, and it seems to change rather often, which is typical for most audiophiles.

This is not the place for product recommendations for optimum home listening, as that solution exists only as a moving target. There are some things out there, at various price points that deliver the necessary, as the Brits would say. Maybe, if it’s clearly understood that I’m nobody’s sales slave anymore, I can outline a few of these systems for those in the market for same, in an upcoming article.

What I will do now is provide a little generic EQ advice to aid and abet the continued employment of the budding live sound operator. That’s impossible, some will say – there is no such thing as generic EQ – there are too many variables. True – rooms and systems and event types are all over the place. But we do have some live sound situational consistency: the nearfield open mic situation – monitor world.

I was moderately amused when part of my world suffered an infestation of carpetbaggers flogging IEM’s as the highest manifestation of “new” technology in the live sound biz. I pushed back on that one in a series of articles, though it should be noted that I have dealt with IEM’s pretty much every weekend since 1997, along with real monitors. To hear these geezers tell it, by today’s date freestanding stage monitors would have gone the way of the Model T, printed media and Holley four-barrels.

As I said at the time, not so fast, bud. We still see a lot of black boxes and stacks on stages everywhere. Long-term – who knows? For what it’s worth, here is a bit of generic EQ data that might be of some help to live sound types early in their learning curve, which will get them pointed in the right direction in their search for gain before feedback from them little boxes.

Caveats first, though. We assume that you are using professional kit – this might work with MI level equipment, but I’m not making any guarantees. We also assume that your goal is to get things rather loud – you don’t need to do this kind of EQ hacking to achieve stable moderate sound pressures.

This little curve arose from doing rather a lot of monitor work. My rules were to re-do the system for every use – just to keep my chops up. I couldn’t help but notice some consistency in final monitor EQ curves in loud situations. It could be said that this is in some degree a variation on the age-old smiley-face, but this did not come from preconceptions – it documents what happened when I really had to slam some mixes.

Who knows, maybe there’s something (Fletcher-Munson, I suspect) to that smiley face thing after all. BTW, in my parlance, “Void” means a 100% reduction at a given frequency. For this chart, assume that you are running at 12 dB/octave with the gain swing for each filter from -12 dB to + 12 dB.

Figure the post EQ SPL of this mix (depending on the quality of the signal chain and the monitors) at between 115 dB SPL and 122 dB SPL.

31.5: Void if there are no drums in the mix, carry it at 0 dB if there’s a lot of kick.

40: Same as above

50: -6 dB if there are no drums in the mix, 0 dB if there are.

63: -3 dB if there are no drums in the mix, 0 dB if there are.

80: -1.5 dB for deep voiced singer, 0 dB for higher voice.

100: -2.5 dB for deep voiced singer, -1 dB for higher voice.

125: -3 dB for deep voiced singer, -1.5 dB for higher voice.

160: -8 dB

200: -10 dB

250: -4 dB

315: -6 dB

400: -6 dB, though often this one ends up a bit lower than 315 Hz

500: -10 dB to Void

630: -7 dB

800: -6 dB

1K: -6 dB

1K25: -2/-3 dB

1K6: Same as above

2K: -3 dB

2K5: 0 dB – this one’s the key, in my view, to vocal breakthrough in a loud situation – if I can avoid cutting this at all, that’s a good thing.

3K15: -1/-1.5 dB

4K: -2 dB

5K: -2 dB

6K3/16K: Carry these as close to 0 dB as possible, unless you are using one of the new school condensers, which may require cuts of -1/-3 dB at these frequencies.
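
For anyone who wants to store or script the chart, here is one way to capture it as data (a sketch only, not part of the original column). The band values follow the chart above; the bands the author leaves program-dependent are marked rather than guessed.

```python
# The generic wedge curve above expressed as 1/3-octave band gains in dB.
# None marks the program-dependent bands (drum content, voice register).
uk_monitor_curve = {
    31.5: None, 40: None, 50: None, 63: None,    # depend on drums in the mix
    80: None, 100: None, 125: None,              # depend on the singer's register
    160: -8, 200: -10, 250: -4, 315: -6, 400: -6,
    500: -10,   # chart says -10 dB down to a full cut ("Void")
    630: -7, 800: -6, 1000: -6,
    1250: -2.5, 1600: -2.5, 2000: -3, 2500: 0,   # 2.5 kHz left alone if at all possible
    3150: -1.25, 4000: -2, 5000: -2,
    6300: 0, 8000: 0, 10000: 0, 12500: 0, 16000: 0,  # carry the top end near flat
}

if __name__ == "__main__":
    for freq, gain in uk_monitor_curve.items():
        setting = "program-dependent" if gain is None else f"{gain:+.1f} dB"
        print(f"{freq:>7} Hz : {setting}")
```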

Go ahead – cut this curve into your favorite wedge and give it a listen. It represents, of course, the UK monitor method, which is what I do and what I teach. East Coast US guys love that 800 Hz stuff, which I don’t.

Old school Nashville guys always wanted the wedges to sound like the “bigs” in a recording studio – these days they do the IEM thing, but I assume that they are still into the flat type sound in their wedges.

The Texans and LA guys were more into an assault at 1K6, or what I would call the metal look. I’ve noticed many Canadians wanted a “bark” at 1K25, but the sample on that one isn’t that big.

Remember, the Brit style curve noted here presumes a loud wedge on a loud stage – the lower mid-range cuts leave a hole for all the backline guitar noise.

Regardless of your geographical prejudices, significant equalization is needed to get most stage monitors (indoors, anyway) where they need to be for most of the artists we face in live sound work. That isn’t good or bad, it just is.

Though we could say that such EQ methods are the epitome of audio relativism (as in, screw purity, we’re going to whack this sucker until it’s loud enough), there is one absolute here that should be remembered the next time someone pushes that old fundamentalist purity rap: This use of EQ deals with a stage soundfield with the backline factored in.

And it predicts and accepts the fact that the artist may have a bit of a hearing loss. Hearing loss, in this business?

Now there’s an absolute.

Posted by Keith Clark on 06/26 at 07:24 AM

Thursday, June 25, 2015

Church Sound: Snare Drum Mixing Tips

This article is provided by DerekSoundGuy.com.

The snare drum and electric guitar are often the driving force of a song despite the kick and bass being considered the foundation of a mix.

Turn down the snare drum and the congregation will feel reluctant to clap with the song. 

Mic the top and bottom of the snare drum. The top mic will catch the stick hitting the drum and pick up much of the body of the snare. The bottom mic will help pick up the actual snare sound to really cut through the mix. Change the polarity of the bottom mic; otherwise, it’s likely some phase cancellation between the two mics will occur.

The snare EQ depends a lot on the snare drum your drummer is using and what kind of sound you’re looking for. Mixing that thunderous snare of Def Leppard is different from mixing a punk band with a 3-inch snare drum. Match the snare sound with the song.

That being said, the snare needs some body. The low-mids are a good place to start. Don’t go too low because room is needed for the toms. The 250-450 Hz range is a good place to boost. Try a high-pass filter up to about 125-150 Hz to cut out any low rumbling.

On the top snare microphone, the stick slap is around the 1-2 kHz area. The boost amount depends on the desired sound. A boomy snare sound will need more in this range. For a thinner snare sound, trying to stay away from the ‘80s arena rock snare, cut in this range or just don’t boost as much.

On the bottom snare drum, to pick up the actual snare, boost in a higher range; 3-6 kHz will bring this out. This is a real test in mixing, making sure to get the proper EQ on the top and bottom mic.

Don’t boost these frequencies in both microphones. For example, a boost at 6 kHz in the bottom mic plus a boost at the same place on the top mic will often just give the snare a big bad ring.

Speaking of big bad rings, snare drums often have lots of these. Sometimes it’s meant to be there, sometimes the drummer has no idea their drum rings like that.  Decide if it adds to the overall drum mix or if it’s just an annoying ring.

Sometimes the ring goes unnoticed when the rest of the kit is played, but it’s best to find the frequency of the ring and try to eliminate it.

Remember, when trying to eliminate the ring, the overall snare sound should not be hurt. For instance, if I cut at 3 kHz because of a ring but lose the impact of the actual snare, did I actually improve the sound? This is especially true if the ring goes unnoticed behind the rest of the kit.
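
If you do decide the ring has to go, a narrow notch at the offending frequency is the usual tool. Here is a minimal sketch (not from the article) using SciPy's notch filter; the ring frequency, Q, and sample rate are made-up examples.

```python
# Notch out a hypothetical snare ring while leaving the rest of the drum alone.
import numpy as np
from scipy import signal

fs = 48000        # sample rate (assumed)
ring_hz = 420.0   # hypothetical ring frequency found by ear or analyzer
q = 8.0           # narrow enough to grab the ring without gutting the body

b, a = signal.iirnotch(ring_hz, q, fs=fs)

snare_track = np.random.randn(fs)           # stand-in for a real recorded snare track
notched = signal.lfilter(b, a, snare_track)
```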

Should the snare be compressed? I compress it. Some don’t. The question is, how hard should that compressor work? I like to compress the drum just in case the drummer decides to do something a little out of the ordinary. Also, in case they do some rim shots, I don’t want to have to ride the fader.

Set the compressor to a 2:1 ratio with a generous threshold, a fast attack, and a slow release. If you don’t notice any huge volume differences in the drummer’s playing, then at least there’s a compressor backup in case something strange happens. But with these settings, the snare isn’t over-compressed, which would take away the dynamics of the drummer’s playing.

Gate? Again, yes. This can also be a good way to get rid of a ring. It also depends on the sound the band wants. I’ve put a really tight gate on snare drums and it has sounded great. But don’t take away the body of the snare by making it sound too tight and short. We want to let the drum breathe here. This is another place to play with. Turn some knobs on the gate and see what it does to the drum sound; did it help? If it did, then keep going. If it makes it worse, turn them back.

Snare drums are interesting instruments; what can be very good on one snare drum, in one song might be completely wrong on another snare drum in another song. Use the above as guidelines to find what sounds best for you.

Derek Sexsmith is the media director at Creekside Church in Waterloo, Ontario, Canada. He writes on the life of a church tech at DerekSoundGuy.com.

Posted by House Editor on 06/25 at 11:25 AM

A Beautiful Sonic Treat: Microphone Approaches For Acoustic Performances

Take a breath of fresh air on a country morning.

That’s the sensation you get from a well-amplified acoustic ensemble.

Guitar, upright bass, mandolin, dulcimer, banjo – all produce a sweet, airy sound that can be captured with the right approach.

Acoustic music heard over a sound reinforcement system is all about beauty and naturalness, not hype.

Listen to a number of well-recorded CDs of old-time country, bluegrass and acoustic jazz. In most cases you’ll hear no effects except some corrective EQ and maybe just a little reverb.

Let’s look at some ways to capture that delicate sound and prevent feedback.

Picking It Up

An acoustic instrument can be picked up in four ways: with a microphone on a stand; with a contact mic; with a pickup fed into a preamp or DI box; and with a distant large-diaphragm condenser mic (LDC).

A good mic choice for acoustic instruments is a small-diaphragm cardioid condenser model. The cardioid pattern reduces feedback, while the condenser transducer captures a detailed, accurate sound in which you can hear each string within a strummed chord. Of course, the venerable Shure SM57, a dynamic unidirectional (cardioid), does a good job too, especially when feedback is a problem.

Some musicians might prefer a contact mic, which is a miniature clip-on condenser type like a lavalier (Figure 1). The advantages are consistent sound from gig to gig, an uncluttered stage, and freedom of movement. The musician is not tied to a single position near a stand-mounted mic.

Figure 1: A contact mic on a fiddle.

Other musicians might prefer a piezo or magnetic pickup. Sensitive only to string vibrations, it has more gain before feedback than a mic, and total isolation. However, this produces an “electric” rather than “acoustic” sound, missing the resonance of the instrument’s body and air chamber.

Some EQ can help – try a narrow cut at 1.2 to 1.5 kHz, along with some high-frequency roll-off. Pickups also prevent phase cancellations between two mics deployed for a singing guitarist.

More Methods

Hybrid systems combine a pickup with a contact mic, providing a blend of high volume from the pickup and “air” from the mic.

Usually, the performer decides on the blend and polarity of the two sources.

Here’s a trick to prevent feedback with this configuration: Send just the pickup signal to the stage monitors, and send just the mic signal to the house loudspeakers.

In the mixer channel for the pickup, turn down the fader and turn up the monitor send. In the mixer channel for the mic, turn up the fader and turn down the monitor send. The monitors don’t feed back and the audience hears the true timbre of the instrument.

Because a pickup is high impedance, it needs to be loaded by a high-Z input, ideally about 1 megohm. Many active direct boxes can provide that input impedance, but most passive (transformer-coupled) DIs do not. They load down the pickup and result in a dull or thin sound. For this reason, many performers feed their pickup into a preamp with a high-Z input and low-Z output.
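
The reason the load matters can be shown with a quick calculation: a piezo pickup behaves roughly like a voltage source in series with a small capacitance, so the input resistance sets a high-pass corner at fc = 1/(2πRC). The sketch below assumes a typical 3 nF source capacitance; the numbers are illustrative, not specifications.

```python
# Approximate low-frequency corner formed by a piezo pickup's source capacitance
# working into different input impedances.
import math

PIEZO_CAPACITANCE_F = 3e-9  # ~3 nF, an assumed typical piezo source capacitance

def highpass_corner_hz(load_ohms, c=PIEZO_CAPACITANCE_F):
    return 1.0 / (2.0 * math.pi * load_ohms * c)

for label, r in (("1 megohm preamp", 1e6), ("100 k passive DI", 1e5), ("10 k line input", 1e4)):
    print(f"{label:>17}: corner ~ {highpass_corner_hz(r):7.1f} Hz")
```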

The single-mic technique can also be effective. One or two LDCs are placed on stage at chin height, with each picking up two to three musicians who balance themselves acoustically by moving toward or away from the mic. This method looks “old-timey” while the musicians’ choreography (weaving in and out) adds visual interest.

Of course, you lose control of the mix. But many musicians prefer to work that way, rather than relying on a sound mixer who may know only how to mix rock ‘n’ roll. (Acoustic music can be a whole different animal.)

It’s good practice to use a high-pass (low-cut) filter on each mic channel to reduce low-frequency rumble and feedback. As the musician is playing, start with a very low filter frequency, raise it until the sound thins out a little, then back off.

Techniques
Here are some suggested techniques for mic’ing various instruments with a stand-mounted mic. As always, there’s not a specific right way to handle this job – feel free to experiment and just do whatever works. That said…

Figure 2: A guitar mic’ing technique.

Acoustic guitar: Place a mic about 3 or 4 inches away, just to the right of the sound hole as viewed from the audience (Figure 2). A mic directly in front of the sound hole makes a boomy effect because the sound hole resonates around 80 to 100 Hz.

If the amplified sound has too much bass or is too “thumpy,” roll off some lows. If the sound is too harsh, cut around 3 kHz.

Fiddle: A mic about 8 inches over the bridge works well. Aim it toward the f-holes for a warmer sound, or toward the neck for a thinner sound. A singing fiddler can be mic’d with a single microphone aiming at the player’s chin.

Banjo: Aim at a spot halfway between the bridge and the lower edge, about 3 inches away. If the sound is too hollow, the player can stuff a rag inside the banjo. You might also cut a little around 500 Hz.

Mandolin: Many mandos have a thin, harsh sound. You can warm it up by close-mic’ing the lower f-hole. Adjust EQ to taste.

Upright bass: An EQ’d pickup can be the best choice to prevent leakage and feedback. When mic’ing, avoid placing it directly over the f-hole because the sound there is muddy and hollow. A mic located a few inches under the bridge, aimed at the body, can capture a deep, tight sound.

You might mix in another mic close to the plucking fingers for definition, and roll off the lows in the pluck mic (Figure 3).

It’s common to wrap a Shure SM57 (or similar) in foam behind the tailpiece, with the mic aiming up toward the bridge. This lets the bassist move around more freely.

Make sure that the front grille is not covered in foam, and apply EQ to get a natural sound.

Hammered dulcimer: Place a mic about 8 inches over the front edge aiming at the center of the soundboard.

Figure 3: A bass mic’ing technique.

Lap dulcimer: Aim a mic down at a soundhole a few inches away.

Dobro: This instrument is basically a guitar held on the lap and played slide-style with a bottleneck. You can mic the soundhole a few inches away and roll off the lows if the sound is too bassy. Roll off some highs if the sound is too cluttered and bright.

Grand piano: Place one mic over the treble strings, 8 inches up and 8 inches horizontally from the hammers. Place another mic over the bass strings, 8 inches up and about 2 feet from the hammers (toward the tail of the piano).

Of course, there are dozens of other grand piano mic’ing methods, but this one has always worked very well for me. If the sound is tubby, cut a little around 300 Hz.

Upright piano: Aim two mics at the soundboard about 8 inches away, dividing the piano in thirds.

Small drum set: Try an LDC overhead at forehead height or a little higher, plus a mic in the kick. Typical kick EQ is a cut around 400 Hz to remove the “papery” sound, and a boost around 4 kHz to clarify the attack. You might stuff a towel or blanket inside the kick to tighten the beat.

Flute: Aim a mic halfway between the mouthpiece and the tone holes. Use a foam pop filter to prevent breath sounds.

Clarinet: Place a mic about 8 to 12 inches from the side.

Figure 4: Clip-on mic on a sax. Credit: DPA Microphones.

Sax: Try to aim the mic so it picks up the tone holes and the bell. Aiming it too much at the bell only can produce a harsh, uneven tone. There are a variety of clip-on mics that work well for this application (Figure 4).

Vocal: Angle the mic about 45 degrees upward so the cardioid pattern’s “dead” rear aims at the floor wedge. If the polar pattern is supercardioid or hypercardioid, point the mic more horizontally so that the mic’s null (at 110 degrees to 125 degrees off-axis) is aiming at the floor monitor.

Also, ask performers to sing close to their mics, either with lips touching or a couple inches away.

Amplifying acoustic music presents a different set of approaches and circumstances than the typical amplified rock show. But if you can capture the delicate sound of acoustic instruments, your audience is in for a beautiful sonic treat.

AES and SynAudCon member Bruce Bartlett is a recording engineer, audio journalist, and microphone engineer. His latest books are “Practical Recording Techniques 5th Ed.” and “Recording Music On Location.”

Posted by Keith Clark on 06/25 at 07:18 AM

Wednesday, June 24, 2015

In The Studio: Reverb Vs Delay

Article provided by Home Studio Corner.

 
When I say reverb what comes to mind? How about delay?

A lot of people who are just starting out with recording and mixing may think that reverb is that awesome plug-in you use to make everything sound like it’s in a cathedral. And when they think of delay, they may think of The Edge from U2.

The truth is, there is SO MUCH you can do with reverb and delay to enhance your mixes, and the most effective ways are usually the most subtle. I don’t use huge cathedrals and dotted eighth-note delays all the time, but I do use both reverb and delay plugins on almost every mix I do.

How do you pick between the two?

Too Much Of A Good Thing
I’ve said this before: one of the sure signs of an amateur mix is too much reverb. The same is true for delay, or really any effect. You get so excited about this new plugin, and it sounds SO good in your ears that you don’t realize that your entire rock mix is drowning in a huge hall reverb.

Subtlety is your friend. A good rule of thumb for dealing with reverb and delay? If it’s obviously there, you probably used too much. Of course you want people to hear it, but you don’t want it to be so loud that it’s distracting. It can be a tough balance.

There are times where a big huge delay or reverb is perfectly appropriate, but for the most part you want to keep it simple, keep it subtle.

Reach For Reverb First
If you’re debating whether to use reverb or delay in your mix, reach for the reverb first. And don’t go crazy with a bunch of different reverbs. You probably don’t need a separate reverb for drums, vocals, guitars, and keys.

Here’s what I do. I’ll set up a single “Large Room”-style reverb and I’ll sometimes set up a second reverb for my drums (depending on how much I like the sound of the room mics).

The job of the drum reverb is to be my room sound. Room mics sometimes don’t cut it. Or maybe you record drums in a small room, but you want them to sound like they were recorded in a bigger room. That’s how I use reverb for drums. I want it to sound like a pair of room mics in a nice big studio. So I send a small amount of the snare, toms, and sometimes the overheads to a dedicated drum reverb. The same rule applies. If the reverb is obvious, I turn it down.

My Large Room reverb is for everything else. I’ll run a little bit of vocals, guitars, keys, pretty much a little bit of everything in the mix except for drums and bass (bass + reverb = mud).

The purpose of this reverb is to simply give the mix some space. I’m not looking for a big long decay and a huge cathedral sound here. I’m keeping the decay under 1 second.

The idea here is to make all the instruments sound less “in your face” and more “in your room.”

If you record everything in your studio one-track-at-a-time, you’ll end up with a bunch of nice, clean tracks. The problem is that they can sometimes sound dry.

The solution? Add in some reverb. If done well, you won’t be able to hear it too much. You’ll just notice that the tracks have some space and seem to be more “glued” together.

When To Use Delay
Delay can obviously be used as an effect, like The Edge’s signature dotted eighth-note guitar delay. But did you know that a delay can do a lot of the same things a reverb does?

Sometimes if a reverb simply isn’t giving me the space I want without making the mix sound too washed out, I’ll reach for a delay. Sometimes a simple slap-delay is exactly what I need. Reverbs have tails to them, and sometimes those tails can be GREAT for a mix, and sometimes they can just cause more muddiness and confusion in the mix. That’s when a delay can come in handy, since it’s simply a delayed signal without all the decay of a reverb.
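
A toy sketch of that difference (illustrative only, not from the article): a slap delay is nothing more than the dry signal plus one attenuated, delayed copy, with no decaying tail.

```python
# Minimal single-tap slap delay; delay time and mix level are arbitrary examples.
import numpy as np

def slap_delay(dry, fs, delay_ms=90.0, level=0.35):
    """Return the dry signal plus one attenuated copy delayed by delay_ms."""
    offset = int(fs * delay_ms / 1000.0)
    wet = np.zeros(len(dry) + offset)
    wet[: len(dry)] += dry          # dry signal
    wet[offset:] += level * dry     # single delayed, attenuated copy
    return wet
```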

A nice, short stereo delay can give your tracks that sense of space without hogging up a bunch of real estate in the mix. A longer stereo delay can make guitar and background vocal tracks sound much softer and more roomy. It’s great for getting that warm, thick sound out of those bed tracks.

My advice to you? Try both reverb and delays. Mess around with them and see if you can get some interesting sounds out of them. You never know until you try. And remember to keep things subtle.

The best mixes are usually the ones with a bunch of small, subtle pieces of “ear candy” throughout the song. Mmmm…candy…

Joe Gilder is a Nashville-based engineer, musician, and producer who also provides training and advice at the Home Studio Corner. Note that Joe also offers highly effective training courses, including Understanding Compression and Understanding EQ.

Posted by Keith Clark on 06/24 at 12:21 PM

Six Common Audio Cable Problems And Solutions

This article is provided by Behind The Mixer.

 
Water was streaming out the end of our driveway. The main water pipe going to the house had sprung a leak. 

There were only two ways of fixing the problem: digging up the driveway to find and repair the leak, or running a new line without disturbing the driveway at all.

A problem with an audio cable isn’t always solved by running a new cable. Sometimes you’ve got to look under the driveway.

There are six common cable problems you’ll find when working with audio. All but two of these show themselves by not passing audio through to the mixer.

1) A bad cable. This is usually an easy one to spot and resolve. If no sound is coming through from the sound source and everything else checks out OK, then swapping in a different cable usually solves the problem. This goes for all cable types, from XLR to TRS cables.

2) Wrong cable in use. A TS might have been used instead of a TRS or vice versa. Make sure you use the right cables, and check the equipment’s user manual if you don’t know which to use. An easy way to avoid this with stage cabling is to keep cables bundled with their appropriate equipment whenever it’s taken off the stage. You can also label inputs/outputs, if they aren’t already labeled, by putting a bit of tape on the bottom or back of the equipment with a note (written with a Sharpie) as to the cable type.

3) Plugged into the wrong place. This typically happens when you plug an input into an output jack or vice-versa. It’s easy to do this with DI boxes if you don’t pay attention to what you’re doing.  I’ve probably set up DI boxes a thousand+ times and it’s easy to zone out. I’m just sayin’...

4) Seating. You might not hear any sound because the cable was not properly seated in a piece of equipment. This usually happens with guitars/basses when they plug the instrument cable into their ax. Whenever I’m not getting a signal from a guitarist/bassist, this is the problem. I’ll ask the musician to re-seat the cable.

5) Hum in the channel. This can happen for a variety of reasons.  If it’s because of the cable, it’s likely an unbalanced cable that’s running parallel to another unbalanced cable or electrical cord.  Cross those cables at a 90-degree angle and secure the angle with a little gaffer tape.

6) Cable used incorrectly. You’ll see this problem occur more in the booth than on the stage. A perfect example is connecting an iPod to the mixer. You can’t just plug a stereo TRS cable signal into the mixer. The mixer is expecting a balanced signal via the TRS cable but you’re giving it a stereo signal. This can cause intermittent audio or generally poor audio. Fix by using a task-appropriate direct box or other transformer. Read more about iPod connections here.

And, for more information on audio problems, check out this article on resolving line check problems.

Ready to learn and laugh? Chris Huff writes about the world of church audio at Behind The Mixer. He covers everything from audio fundamentals to dealing with musicians. He can even tell you the signs the sound guy is having a mental breakdown.

Posted by Keith Clark on 06/24 at 07:04 AM

Tuesday, June 23, 2015

Ten Reasons Why Church Sound Systems Cost More

 

A letter to a church building committee might read:

Thank you again for the opportunity to provide you with a proposal for the sound system for your house of worship.

While we appreciate your interest in “good stewardship” in the funding of this project, and understand your request for “church pricing” for the work, the following points should be kept in mind when determining the best value for the dollars spent.

#1 - Dynamic Range
Church sanctuaries are usually quieter than other “places of gathering,” and as such the sound system must be quieter than usual to prevent audible noise in the audience area.

Our proposal provides for 96 dB of dynamic range - a figure typical for recording studios and other critical listening environments.

This extended dynamic range assures that the sound system will not be the “weakest link” when it comes to system performance.

Audio equipment is not “plug and play.” There are no strict standards that all manufacturers follow when establishing the operating parameters of their equipment.

All electrical devices produce noise, that annoying “hiss” that can be heard in the background on some systems during quiet portions of the service. Audible hiss can be eliminated from a sound system if its gain structure is adjusted properly.

This process is carried out after the system is installed, and when done properly, it will result in the maximum potential of all equipment being realized. Our proposal includes an accurate and meticulous adjustment of the gain structure of the sound system.

#2 - Energy Ratios
Many listening environments have a “sweet spot” for which the sound system performance is optimized.

In a house of worship, every seat must be optimized for adequate signal-to-noise ratio and suitable early-to-late energy ratios.

Our proposal provides a minimum of 25 dB signal-to-noise ratio and an appropriate early-to-late energy ratio for your type of worship - for every seat in the audience area.

#3 - Uniform Coverage
Many auditoriums are plagued with “hot” and “cold” spots in the sound coverage.

This can usually be attributed to interaction between multiple loudspeakers, and is unavoidable when more than one loudspeaker is required to provide sound coverage for the audience.

A good design assures that there is even coverage in the audience area, and that no seats are rendered unusable by loudspeaker interaction.

Our design addresses this critical issue, assuring you that there will be excellent sound quality at every listener seat.

#4 - Versatility
While it is possible to design sound systems that are optimized for speech or music, your system must perform well for speech and music.

Since the attributes of these two types of systems are often at odds, this is a very difficult task.

The proposed system has the accuracy and clarity required for speech reproduction, while maintaining the extended frequency response and power handling required for music.

#5 - Hum and Buzz
Audible hum is a major detriment to a church sound system. It usually results from improper grounding practices, either in the installation of the wiring or the actual equipment.

Off-the-shelf equipment must often be modified to work without hum.

The proposed system shall be grounded properly, and all system wiring shall be routed and shielded properly.

The proposed equipment will be tested for proper grounding, and suitable modifications made when necessary, ensuring “hum-free” operation.

#6 - Gain-Before-Feedback
Whenever a microphone is placed in the same room as a loudspeaker, the potential for feedback exists.

Things that aggravate this further are multiple microphones and long miking distances; both necessities for most churches.

Two things are required for a system to work properly.

The sound system must be extremely stable, meaning that loudspeaker array design and mic placement are critical to the end result.

Your sound personnel must understand the limitations of the sound system and be trained to manage the open microphones and working distances for people using the system.

Our proposal addresses these issues, providing a stable system along with operator training to assure that feedback does not hinder the performance of the system.

#7 - Wireless Microphones and radio frequency interference (RFI)
Sound systems can be adversely affected by frequencies above the audible band.

They must be properly shielded against such, and appropriate filtering devices must be installed when required.

Wireless microphones provide some excellent benefits for houses of worship. These are actually small radio stations that broadcast on a specific frequency.

The selection of frequency is critical to the mic’s proper operation. The operating frequencies for your wireless mics must be carefully selected to work properly in the presence of other RF broadcasts in your area.

#8 - “Clean” Installation Practices
An important yet often overlooked aspect of a sound system design is the installation of the system.

It is imperative that proper interconnect practices are carried out, and that all applicable electrical codes are observed.

A “clean” installation means that wiring has been concealed as much as possible, and that the finished system blends well with the decor of the building.

Wall plates and connectors must be wired properly for the system to work correctly. Our proposal includes a meticulous check of all cables for proper termination and identification.

A system wiring diagram will be presented to you upon the completion of the system so that future modifications to the system can be made correctly and at the lowest possible cost.

#9 - Professional Equipment
There are many brands of equipment available in the audio marketplace. Fortunately, there are many reputable pro audio companies that make equipment suited for your needs.

Our proposal only includes equipment from such companies. Our years of experience in the audio field have enabled us to eliminate marginal equipment from our inventories.

We deal only with companies that provide reliable, repairable products. All proposed loudspeakers have been “stress tested” for safety, and can be suspended above a congregation with confidence. In addition, all equipment meets applicable codes for fire safety and radio frequency emissions.

#10 - Calibration, Training and Documentation
A properly calibrated sound system will be much easier for your personnel to operate. A significant amount of expertise is required to make a system “user friendly.”

The proposed system must be calibrated using advanced audio and acoustic instrumentation. Upon completion of this process, all controls that do not require user adjustment must be rendered inaccessible.

After calibration, your personnel will be trained to operate the system, and a user’s manual shall be compiled which will include equipment manuals, system wiring diagrams, and operating instructions.

In conclusion, your sanctuary is a critical listening environment for speech and music.

As such the sound system must provide adequate acoustic gain, intelligible speech, even coverage and extended bandwidth to all listener seats. The best value in a sound system is one that meets all of these criteria.

Such a system will provide years of trouble-free service and serve to complement your worship services.

There is much more to a sound system than acquiring some equipment. An audio professional can work with you from the planning stages and save you considerable time and money on your system.

Most importantly, you will have a system that has been tailored to your specific performance needs and aesthetic requirements, and money spent in the future can be used to complement the existing system rather than replace it.

Pat & Brenda Brown lead SynAudCon, conducting audio seminars and workshops online and around the world. For more information go to www.prosoundtraining.com.

Posted by Keith Clark on 06/23 at 11:26 AM

Monday, June 22, 2015

The Eight Constants Of Vocal Recording

This article is provided by Bobby Owsinski.

 
By my count, there are 8 “constants” that we find in vocal recording. These are items or situations that almost always prove to be true.

Just keeping them in mind can save you a lot of trouble in the search for a sound that works for you and your vocalist.

Here are a few tips taken from The Recording Engineer’s Handbook and The Music Producer’s Handbook to help you get a great vocal sound quickly and easily without any of those nasty side effects.

—————————————————————

1) Your mic selection, amount of EQ, and compression used are totally dependent on the voice you are recording. Setting up the same signal chain for everyone might work sometimes, but it’s best to keep an open mind (and ears) before you settle on a combination.

2) The best mic in the house doesn’t necessarily get the best vocal sounds. There is no one microphone that works well on everything, especially an instrument as quirky as the human voice.

3) A singer who is experienced at recording knows which consonants are tough to record and knows how to balance them against the vowels to get a good final result. A singer with this kind of experience will make you look like a genius.

4) With a good singer, many times you’ll get “the sound” automatically just by putting him/her in front of the right microphone. On the other hand, with a bad singer (or even a good singer that just doesn’t adjust well to the studio), no amount of high priced microphones or processing may be able to get you where you need to go.

5) In general, vocals sound better when recorded in a tighter space, but not too tight. Low ceiling rooms can also be a problem with loud singers as they tend to ring at certain lower mid range frequencies, which might be difficult to eliminate later.

6) Windscreens are actually of little use when recording a vocalist with bad technique. There are two different sorts of people in this category: the people who have never sung with sound reinforcement, and the people who have developed bad habits from using a mic on stage.

7) Decoupling the stand from the floor, as well as the microphone from the stand, will help eliminate unwanted rumble. Oftentimes a microphone isolation mount isn’t enough. Place the stand on a couple of mouse pads or a rug for a cheap but effective solution.

8) Just marking the floor with tape might get the vocalist to stand in the right position in front of the mic, but they can easily move their head out of position. An easy way to have a vocalist gauge the distance is by hand lengths. An open hand is approximately eight inches, while a fist is about four inches. By saying, “Stay a hand away,” the vocalist can easily judge the distance and usually doesn’t forget.

Bobby Owsinski is an author, producer, music industry veteran and technical consultant who has written numerous books covering all aspects of audio recording. For more information be sure to check out his website, and go here to acquire a copy of The Recording Engineer’s Handbook and here to acquire a copy of The Music Producer’s Handbook.

Posted by Keith Clark on 06/22 at 05:32 PM
Recording, Feature, Blog, Study Hall, Engineer, Microphone, Signal, Studio

What Is An Impulse Response?

Editor’s Note: The following article is an excerpt from the new white paper “Smaart 7 Impulse Response Measurement and Analysis Guide,” available as a free download (pdf) from Rational Acoustics. Go here for the direct download of the guide, and also note that sample IR wave files used for figures within the document can be downloaded as a zip file here.

———————————————————-

What is an impulse response? In the most basic terms, an impulse response (IR) can be defined as the time domain (time vs amplitude) response of a system under test (SUT) to an impulsive stimulus.

The word “system” in this case could mean something as small as a microphone or a single transducer, or something as simple as a single filter on an equalizer. Or it might mean something as big as a concert hall or sports arena, something as complicated as an entire sound system, or a combination of the two. Smaart users, of course, are most often concerned with sound systems and their acoustical environments.

In the context of acoustical analysis, you might also think of an impulse response as the acoustical “signature” of a system. The IR contains a wealth of information about an acoustical system including arrival times and frequency content of direct sound and discrete reflections, reverberant decay characteristics, signal-to-noise ratio and clues to its ability to reproduce intelligible human speech, even its overall frequency response. The impulse response of a system and its frequency-domain transfer function turn out to be each other’s forward and inverse Fourier transforms.
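To make that last relationship concrete, here is a minimal NumPy sketch (not taken from the Smaart guide; the variable names and the toy “flat system with a 5 ms delay” are made up for illustration) showing that the inverse FFT of a transfer function yields the impulse response, and the forward FFT of that IR recovers the transfer function:

```python
# A minimal sketch of the IR / transfer function relationship using NumPy.
# H here is a single-sided complex transfer function (DC through Nyquist)
# at uniformly spaced frequency bins; all values are illustrative.
import numpy as np

fs = 48000                          # measurement sample rate, Hz
n_fft = 8192                        # FFT size used for the transfer function

# Placeholder system: flat magnitude with a pure 5 ms delay
freqs = np.fft.rfftfreq(n_fft, d=1.0 / fs)
H = np.exp(-2j * np.pi * freqs * 0.005)

# Inverse FFT of the transfer function gives the impulse response...
ir = np.fft.irfft(H, n=n_fft)

# ...and the forward FFT of that impulse response gives the transfer function back.
H_again = np.fft.rfft(ir, n=n_fft)
print(np.allclose(H, H_again))      # True
print(np.argmax(np.abs(ir)) / fs)   # ~0.005 seconds, the delay we built in
```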

An acoustical impulse response is created by sound radiating outward from an excitation source and bouncing around the room. Sound traveling by the most direct path (a straight line from the source to a measurement position) arrives first and is expected to be the loudest. Reflected sound arrives later by a multitude of paths, losing energy to air and surface absorption along the way so that later arrivals tend to come in at lower and lower levels.

Figure 1: An acoustical impulse response consists of sound from an excitation source arriving at a measurement position by multiple pathways, both direct and reflected. Here we see the path of direct sound from the source to the microphone in red, followed by a first order reflection in blue, a second order reflection in green and higher order reflections in gray. Later arrivals tend to pile on top of each other forming a decay slope.


In theory this process goes on forever. In practice, the part we care about happens within a few seconds – perhaps less than a second in smaller rooms and/or spaces that have been acoustically treated to reduce their reverberation times.

The arrival of direct sound and probably some of the earliest arriving reflections will be clearly distinguishable on a time-domain graph of the impulse response. As reflected copies of the original sound keep arriving later and later, at lower and lower amplitude levels, they start to run together and form an exponential decay slope that typically looks like something close to a straight line when displayed on a graph with a logarithmic amplitude scale.
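To see that behavior without leaving your desk, the short Python sketch below builds a purely synthetic impulse response, with a direct arrival, a couple of discrete reflections, an exponentially decaying reverberant tail and a noise floor, then converts it to a dB view. Every timing and level value here is invented for illustration only:

```python
# Toy illustration (not a measurement): a synthetic IR with the component
# parts discussed in this article, viewed on a logarithmic amplitude scale.
import numpy as np

fs = 48000
t = np.arange(int(0.8 * fs)) / fs            # 0.8 seconds of "room"
rng = np.random.default_rng(0)

ir = np.zeros_like(t)
ir[int(0.010 * fs)] = 1.0                    # direct sound arriving at 10 ms
ir[int(0.025 * fs)] = 0.5                    # a first-order reflection
ir[int(0.042 * fs)] = 0.3                    # a second-order reflection

# Diffuse reverberant tail: noise under an exponential decay (~0.5 s to fall 60 dB)
tail = 0.2 * rng.standard_normal(len(t)) * np.exp(-6.91 * t / 0.5)
ir += np.where(t > 0.010, tail, 0.0)
ir += 1e-4 * rng.standard_normal(len(t))     # measurement noise floor

ir_db = 20 * np.log10(np.abs(ir) + 1e-12)    # log-magnitude (dB) view
print(round(ir_db.max(), 1), round(ir_db[-4800:].mean(), 1))  # peak vs. late noise floor
```

On the dB view, the late reverberant tail runs together into something close to a straight downward slope, exactly as described above.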

Anatomy Of An Acoustical Impulse Response

Although no two rooms ever have identical impulse responses, there are a few component features that we can identify in some combination in almost any acoustical impulse response. These include the arrival of direct sound, early reflections, reverberant build-up and decay, and the noise floor. Figure 2 shows an acoustical impulse response with its component parts labeled. Descriptions for each follow.

Figure 2: An acoustical impulse response with its common component parts labeled. This is a semi-log time domain chart with time in milliseconds on the x axis and magnitude in decibels on the y axis.


Propagation Delay. The time that it takes for direct sound from the sound source to reach the measurement position is the propagation delay time. This may include throughput delay for any DSP processors in the signal chain in addition to the time that it takes for sound to travel through the air.
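For a quick back-of-the-envelope check on the acoustic part of that delay (ignoring any DSP throughput), the arithmetic is just distance divided by the speed of sound; the distance below is a made-up example:

```python
# Acoustic propagation delay from source-to-mic distance (example values only)
speed_of_sound = 343.0            # m/s, roughly, at about 20 degrees C
distance_m = 10.0                 # source to measurement position, meters
delay_ms = distance_m / speed_of_sound * 1000.0
print(round(delay_ms, 1))         # -> 29.2 ms
```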

Arrival Of Direct Sound. Since the shortest distance between two points is always a straight line, the first thing we expect to see when looking at an impulse response (IR) is the arrival of direct sound from whatever sound source we’re using to stimulate the system under test. Depending on what we’re trying to learn, the source could be an installed sound system, an omnidirectional loudspeaker brought in specifically for measurement purposes, a balloon pop or a shot from a blank pistol, or in a pinch, maybe hand claps or someone slamming a case lid shut.

We’d also expect the first arrival to be the loudest and to correspond to the highest peak we can see in the IR, and in most cases we’d be right. There can be occasional circumstances where that might not turn out to be strictly true, but in the vast majority of cases it should.

Discrete Reflections. After the arrival of direct sound, the next most prominent features we tend to see are sounds arriving by the next most direct paths: the lowest-order reflections. Sound that bounces off one surface to get from the excitation source to a measurement position is called a first-order reflection, two bounces gives you a second-order reflection, and so on. Reflected sound can be useful or detrimental, depending on factors such as its relative magnitude and timing in relation to the direct sound and the extent to which it is clearly distinguishable from the diffuse reverberant sound.

Early Decay, Reverberant Build-Up, And Reverberant Decay. Following the arrival of direct sound and the lowest-order reflections, sound in a reverberant space will continue bouncing around the room for a while, creating higher and higher order reflections. At any given listening position, some of this reflected energy will combine constructively over a relatively short period of time, resulting in a build-up of reverberant sound, before air loss and absorption by the materials that make up reflecting surfaces begin to take their toll. At that point, the reverberant decay phase begins.

In practice, you may or may not be able to see the reverberant build-up in an impulse response as distinct from the direct sound and early reflections. Sometimes it can be quite clearly visible, other times not so much. By convention, the first 10 dB of decay after the arrival of direct sound in the reverse-time integrated IR (we will get to that in the next section) is considered to be early decay. Reverberant decay is conventionally measured over a range from 5 dB below the level of direct sound down to a point 30 dB below that on the integrated IR, or 20 dB in a pinch.
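As a rough illustration of those conventions, here is a hedged Python sketch of reverse-time (Schroeder) integration with early decay time and T30 read off the integrated curve. It uses simple threshold crossings where a real analyzer would fit a regression line over the evaluation range, assumes the IR has already been trimmed to exclude most of the noise floor, and the function names are invented for this example, not taken from Smaart:

```python
# Sketch of reverse-time (Schroeder) integration and decay-time conventions.
# `ir` is assumed to be an impulse response array at sample rate `fs`.
import numpy as np

def schroeder_db(ir):
    """Reverse-time integrated energy decay curve, in dB relative to its start."""
    energy = np.cumsum((ir ** 2)[::-1])[::-1]   # integrate backwards from the end
    return 10 * np.log10(energy / energy[0] + 1e-30)

def decay_time(edc_db, fs, hi_db, lo_db):
    """Seconds to decay from hi_db to lo_db, extrapolated to a full 60 dB.
    Assumes the curve actually reaches lo_db before the end of the array."""
    i_hi = np.argmax(edc_db <= hi_db)           # first sample at or below hi_db
    i_lo = np.argmax(edc_db <= lo_db)           # first sample at or below lo_db
    span_s = (i_lo - i_hi) / fs                 # time spent decaying (hi_db - lo_db) dB
    return span_s * 60.0 / (hi_db - lo_db)

# Early decay time: the first 10 dB of decay, normalized to a 60 dB decay.
# T30: -5 dB down to -35 dB on the integrated IR; T20 would use -5 to -25 dB.
# edc = schroeder_db(ir)
# edt = decay_time(edc, fs, 0.0, -10.0)
# t30 = decay_time(edc, fs, -5.0, -35.0)
```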

Noise Floor. In theory, the reverberant decay phase of the IR continues forever, as an ideally exponential curve that never quite reaches zero. In practice it reaches a point relatively quickly where we can no longer distinguish it from the noise floor of the measurement. Noise in an IR measurement can come from several sources, including ambient acoustical noise, electrical noise in the SUT and the measurement system, quantization noise from digitizing the signal(s) for analysis, and process noise from DSP processes used for analysis.

Uses For Impulse Response Measurement Data

Delay Time Measurement. The most common use for impulse response measurements in Smaart is in finding delay times for signal alignment in transfer function measurements and for aligning loudspeaker systems.

Each time you click the delay locator in Smaart, an IR measurement runs in the background. In this case all we really care about is the initial arrival of direct sound, which is typically so prominent that you can pick it out with high confidence even when the signal-to-noise ratio of the IR is poor, so we don’t even bother displaying the results.

Smaart simply scans for the highest peak and assumes that to be the first arrival, and most of the time that works very well.
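In bare-bones code, the general “highest peak equals first arrival” idea (the technique only, not Smaart’s actual implementation) looks something like this:

```python
# Estimate propagation delay as the time of the largest-magnitude peak in the IR.
import numpy as np

def delay_from_ir(ir, fs):
    """Return the delay, in seconds, of the highest peak in an impulse response."""
    return np.argmax(np.abs(ir)) / fs

# Example with a toy IR whose direct arrival sits at 10 ms:
fs = 48000
ir = np.zeros(4800)
ir[480] = 1.0                           # 480 samples / 48 kHz = 10 ms
print(delay_from_ir(ir, fs) * 1000.0)   # -> 10.0 (milliseconds)
```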

Occasions where automatic delay measurements might not work well include measurements of low-frequency devices or any case where you’re trying to measure a directional full-range system well off axis, in a location where a prominent reflection can dominate the high frequencies.

In the latter case, it’s possible for reflected HF energy to form a higher peak later than the arrival of direct sound, requiring you to visually inspect the IR data to find the first arrival.

Reflection Analysis. Another common use for IR measurements is in evaluating the impact of problematic discrete reflections. Reflected sounds can be beneficial or detrimental to a listener’s perception of sound quality and/or speech intelligibility, depending on a number of factors.

These factors include the type of program material being presented (generally, speech or music), the arrival time and overall level of the reflected sound relative to the level of direct sound, and the frequency content of the reflections and the direction from which they arrive. As a general rule, the later they arrive and the louder they are (relative to direct sound), the more problematic they tend to be.

Reverberation Time (T60, RT60…). Reverberation time is kind of the grandfather of quantitative acoustical parameters. First proposed by Wallace Sabine more than 100 years ago, T60 or RT60 reverberation time is the time that it takes for reverberant sound in a room to decay by 60 decibels from an excited state (after the excitation signal stops). It is one of the most widely used (and in some cases perhaps misused) quantities in room acoustics.

Although it is quite possible for two rooms with identical reverberation times to sound very different, reverberation time evaluated band-by-band can still give you some idea as to the overall character of the reverberant field in a given room. In concert halls it can give you an idea of perceived warmth and spaciousness for music. In auditoriums, it is often used as a rough predictor of speech intelligibility.

Early Decay Time (EDT). Early decay time ends up being the decay time for direct sound and earliest, lowest-order reflections. Since the earliest reflections tend to be the most beneficial in terms of separating sounds we want to hear from reverberation and background noise, EDT can give you some clues about overall clarity and intelligibility in a room and/or system. EDT, like RT60, is conventionally normalized to the time it would take for the system to decay 60 dB at the measured rate of decay.

Early-To-Late Energy Ratios. Early-to-late energy ratios are a direct measure of the sound energy arriving within some specified interval following the arrival of direct sound versus the energy in the remaining part of the IR. They provide a more direct method of evaluating the relationship between the beneficial direct sound and early reflections that a listener hears and the amount of (potentially detrimental) reverberation and noise than inferences made from the early and reverberant decay rates.
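A bare-bones version of such a ratio, computed from an assumed full-band IR array with a made-up function name, might look like the sketch below; a real measurement tool would typically band-filter the IR first and take more care in locating the direct arrival:

```python
# Early-to-late energy ratio (e.g. C50: energy in the first 50 ms after the
# direct arrival versus everything later), computed from an IR array.
import numpy as np

def clarity_db(ir, fs, split_ms=50.0):
    """Early-to-late energy ratio in dB, split at `split_ms` after the direct sound."""
    direct = np.argmax(np.abs(ir))                     # treat highest peak as direct arrival
    split = direct + int(round(split_ms * fs / 1000.0))
    early = np.sum(ir[direct:split] ** 2)
    late = np.sum(ir[split:] ** 2)
    return 10 * np.log10(early / (late + 1e-30))

# c50 = clarity_db(ir, fs, split_ms=50.0)   # a common split for speech
# c35 = clarity_db(ir, fs, split_ms=35.0)   # an earlier split, also used for speech
```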

Speech Intelligibility Modeling. Early-to-late energy ratios such as C35 and C50 have long been used as objectively measurable predictors of subjective speech intelligibility. In the 1970s, Victor Peutz came up with Articulation Loss of Consonants (ALCons), a predictive metric for speech intelligibility based on the volume of a room and its reverberation time, the directivity of the loudspeakers and the distance from source to listener.

Later on, Peutz revised the equation to use a direct-to-reverberant energy ratio in place of volume, distance and loudspeaker Q, making ALCons a directly measurable quantity. More recently, the speech transmission indexes (STI and STIPA) have emerged as metrics that are generally more robust. All of these can be calculated from the impulse response of a system.

Thanks to Rational Acoustics for providing this article. Go here for direct download (pdf) of the entire guide.

Posted by Keith Clark on 06/22 at 08:32 AM
Live Sound, Feature, Blog, Study Hall, Loudspeaker, Measurement, Microphone, Signal, Software, Sound Reinforcement, System

Friday, June 19, 2015

Five Tips For Managing Cash Flow In A Commercial Integration Business

This article is provided by Commercial Integrator

 

The integration business can be a fickle place. Factors such as seasonality, dependence on government spending and the influence of the overall economy on purchasing can present unique challenges for businesses.

With the potential for such volatility, commercial integrators are faced with cash flow issues. Most CIs have accounting departments, but cash flow is a different animal. Cash flow is unique because an organization can be generating strong profits but have no money.

Unlike profitability, which is simply accounted for by revenue less cost of doing business, cash flow is dependent on the actual collection of receivables generated from the work that is being performed.

Therefore, you can be showing a profit because you have invoiced for work being done, but you may not see the cash immediately. In fact, in this business it isn’t uncommon for receivables to linger for 30, 60 or even more than 90 days.

It can be detrimental to your business when large chunks of receivables start aging. It can cost you interest, strain relationships with suppliers and create problems with customers.

And while the only genuine way to create more working capital is to continuously generate profit and keep the cash in the business, there are ways for businesses to be more strategic with cash flow:

Receivables Management. Most of our customers have a plethora of bills to pay. Many integrators don’t have stringent policies for staying on top of receivables by sending reminders, making calls when they come due and building relationships with customer payable teams. If you aren’t calling and asking for payment, the customer may be thinking you don’t need the money that badly.

Down Payments. At some point it seemed that asking for down payments became taboo. Salespeople often feel that it is a potential sticking point in getting the deal. The problem is that our projects can take months, and while progress billing can alleviate some of the strain, it still doesn’t account for the fact that most vendors want payment in 30 days. A substantial down payment that covers a good component of the “cost” of the equipment can allow for better management of cash flow throughout the job.

Credit Worthiness. There are many tools for understanding customer credit worthiness. A lot of companies want or need the business so badly that they will overlook bad credit or not even check it. Make sure at the very least you know what you are getting into by using tools like Dun & Bradstreet Credibility to determine what kind of payment behavior to expect from a customer. Sometimes the best business deals you do are the ones you don’t.

Credits, Rebates & Co-Op. In this business there are a lot of opportunities to earn credits, rebates and marketing dollars from vendors. Collecting, however, isn’t always easy. Just like receivables management, this needs to be managed so you don’t end up with tens or hundreds of thousands in rebates that aren’t claimed.

Borrow When You Don’t Need It. I know it sounds funny, but this is more of a statement of preparation for growing businesses. I often hear business owners saying, “I’m all cash-based and I don’t need any credit.” I think that is wonderful, and I hope that you can sustain that forever. But when you are running that way, it is a good time to work with a bank to set up a line of credit. I call this a “just-in-case fund.” There is nothing wrong with running a cash business, but not being prepared is a risk you should not and do not have to take.

In the business we are in, cash flow can be the difference between success and failure. The focus on it should be as high as it is on sales and marketing. Without cash, all the sales in the world mean nothing.

Daniel L. Newman currently serves as CEO of EOS, a new company focused on offering cloud-based management solutions for IT and A/V integrators. He has spent his entire career in various integration industry roles. Most recently, Newman was CEO of United Visual, where he led all day-to-day operations for the 60-plus-year-old integrator.

Go to Commercial Integrator for more content on A/V, installed and commercial systems.

 

Posted by Keith Clark on 06/19 at 07:09 AM
AV, Feature, Blog, Production, Audio, AV, Business, Installation