Friday, August 09, 2013
QSC Audio Q-Sys Integrates Audio With Video At Indian Statistical Institute
Provides echo-free video conferencing at Key Institute of National Importance in Assam
A QSC Audio Q-Sys audio management system, based around a Core 250i integrated processor, has been installed at the Tezpur centre of the Indian Statistical Institute (ISI) in Assam, northeast India, as part of a conference room refurbishment designed to provide improved audio input to an existing video conferencing system.
The new system, supplied by Pro Visual Audio, the QSC distributor in India, and installed by the Assam-based QSC System Integration Partner Cineworth Sales and Services, now offers ISI staff clear, echo-free conferencing facilities due to the inclusion of AEC, the Acoustic Echo Cancellation algorithm.
A standard feature in Q-Sys Designer Release 3.0 (and subsequent releases) and available for all Q-Sys Cores, AEC helps ensure that audio feedback is not introduced into the system to the detriment of speech quality and comprehensibility.
According to Biplob Saha, deputy general manager – sales at Pro Visual Audio, ISI staff were using an existing video conferencing system, but were unhappy with the clarity and comprehensibility of the audio.
“They wanted an open system that could be expanded in the future, and so Cineworth suggested Q-Sys,” Saha says. “ISI also considered an alternative system from Bose, but in the end they liked the future-ready nature of the Q-Sys platform, and the fact that it’s based on industry-standard Layer 3 networking technology. They were also impressed by the AEC algorithm and that helped the QSC solution score over the competition.”
The new system accepts input from various Audio-Technica microphones placed around the conference room and then passes the feeds into the QSC Core 250i processor.
Audio from the connected parties on the conference system is played back through four QSC AD-S82H speakers placed around the room and driven by a QSC CX302 amplifier.
Upon completion of the installation, Deepak Gracias, general manager – technical at Pro Visual Audio, who helped Cineworth program the Core 250i, reports that he was able to program the whole system very quickly and that the sound from the QSC AD-S82H loudspeakers is pristine.
Wednesday, August 07, 2013
Radial Engineering Announces Updated SW8 Backing Track Switcher
Ability to manage standby status more effectively, mute the main audio outputs to enable on-the-fly editing, and more
Radial Engineering has announced that it has updated the SW8 backing track switcher with several new features and functions.
The SW8 is designed to switch 8 audio channels simultaneously. Backing tracks are recorded on two separate systems and each is sent to the SW8, where the user may manually select between the playback systems or have the SW8 automatically switch between them, via an internal gate, when a drop-out occurs.
Inputs include a choice of 1/4-inch TRS or 25-pin D-subs. Outputs are via a series of transformer-isolated low-Z XLRs to run through a snake system, or a D-sub.
According to Radial president Peter Janis: “Over the past several years, the Radial SW8 has gained tremendous market penetration with major tours such as The Eagles, Rush, Rihanna, Maroon 5 and Cirque Du Soleil using it, along with being employed in a wide variety of permanent installations to provide safety backup.
“During this time we’ve had many discussions with the technicians who use the SW8 on a daily basis. This input has driven us to develop a next-generation switcher that will elevate the performance in several key areas such as managing standby status more effectively, muting the main audio outputs to enable on-the-fly editing and providing remote controllability with LED status indicators.”
New features include a “standby” function that enables the technician to stop the show in between songs when the artist chooses to speak to the audience. A global mute turns off all outputs, allowing the tech to monitor playback tracks and prepare playback cue points.
Signal status LEDs have been added for both the A and B playback systems, providing visual indication of signal present.
External remote control options now include a footswitch or a desktop remote with LED status indicators.
To help eliminate switching noise with less than ideal sources, a series of filters have been added on all channels. And, gate level controls have also been relocated to the front panel for easier manipulation.
The updated SW8 is now shipping.
Transform Your Mind: Chapter 3 Of White Paper Series On Transformers In Audio Now Available
An in-depth look into the anatomy of premium transformers
Chapter 3 of PSW’s ongoing free white paper series, entitled “Anatomy Of A Transformer,” is now available for free download. (Get it here.)
The white paper series is presented by Lundahl, a world leader in the design and production of transformers. The new chapter provides an in-depth focus on the design and manufacture of an ideal transformer, with a tour of Lundahl’s modern facility in the town of Norrtälje, about 70 km north of Stockholm, Sweden, providing a highly interesting, not to mention highly useful, insight on the process.
The series of papers is authored by Ken DeLoria, senior technical editor of ProSoundWeb and Live Sound International.
Note that Chapter 1: An Introduction to Transformers in Audio Devices, and Chapter 2: Transformers–Insurance Against Show-Stopping Problems, are also still available for free download. Several more free white papers on transformers and related audio topics will be posted here on PSW and available on a regular basis.
Again, download your free copy of chapter 3 of the white paper series here.
Wednesday, July 31, 2013
Compact Fidelity: Capturing Ray Fuller Live At Buddy Guy’s Legends
Deploying a lean yet powerful live recording system
A live recording of a recent performance by blues maestro Ray Fuller at Buddy Guy’s Legends club in Chicago, consisting of non-stop 60- and 90-minute sets before a live audience, was captured in its entirety by a lean yet powerful system devised by freelance engineer Mike Picotte.
He had a very good reason for wanting to keep things as small and light as possible: the remote recording rig had to be situated several flights of stairs above the legendary venue.
The heart of the setup incorporated a recently introduced Antelope Audio Orion32 multi-channel interface that recorded 26 tracks of audio over USB 2.0 at 96 kHz to a MacBook Pro running Pro Tools.
Picotte, who also works with Sweetwater Sound, acquired the signal by placing a splitter snake onstage so he could access all of the venue’s microphones, then supplemented the house mics with his own to create a redundant rig with plenty of alternate audio source possibilities. “I wanted to have everything covered and then decide later what I actually needed, so I brought my own mic locker with me just in case,” he adds.
Ray Fuller live at Buddy Guy’s Legends. (click to enlarge)
The snake was run up the stairs into a small room where he was able to record and monitor the performance using a Dangerous D-Box system and a pair of Sennheiser HD 380 closed back headphones.
The preamplifier stage employed two TRUE Precision 8s and an Audient ASP008, which were connected to the Orion32. And for increased accuracy that could only enhance fidelity, he also had an Antelope 10M Atomic Clock in his rack.
“Performance-wise, the Orion32 was amazing,” Picotte notes. “There was no lag time on the screen, and no hiccups at any point during the entire gig. I ran 26 channels at 96k without stoppage and had no issues whatsoever.”
Previously, Picotte usually recorded to an external drive. “Now, with all the testing that I’ve done, I record directly to the solid state drive on the MacBook via the Orion32, then immediately transfer the audio to a secondary hard drive following each set,” he explains.
A closer look at Picotte’s rack-mounted recording rig. (click to enlarge)
Before the performance, he evaluated the Orion32 over MADI into a Pro Tools HD system and via USB into Pro Tools natively. “Although I tested it at a 64-sample buffer size,” he says, “I ran a 1,024 sample buffer size during the gig because there was no artist mix, therefore no need to tax the computer system.”
Most importantly, he characterizes the tracks he captured as extremely detailed and accurate. “There was great separation on the drums and particularly good transient response on the kick, snare and overhead,” he reports. “Some converters will soften the transients, or it will feel like they are not coming across like they do live.
“The Orion32 gave me the best representation of how the band actually sounded in the club, and with no coloration. All the channels were clean, and the depth and stereo imaging were outstanding.”
Friday, July 26, 2013
Church Sound: MIDI Over Network Part 2—Ways To Apply It
Getting various tech elements to seamlessly talk with each other
Last time, I talked about how to set up a MIDI network in your tech booth (or anywhere else for that matter).
Today, let’s look at what can be done with it. Or, more correctly, I’ll tell you what I’m doing with it. Before we go any further, we should first talk about the structure of MIDI commands.
Channels, Notes, Values, Oh My!
The basic structure of a MIDI command is a Channel, a Note and a Value. There are 16 possible channels, 128 possible notes, and 128 values for each note. Oddly, channels run from 1-16, while the notes and note values run from 0-127. Go figure.
In addition to Note-On, you can also specify Note-Off, Control Changes, Program Changes and a few other things that don’t concern us. For the most part, I use Note-On and Control Change.
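At the byte level, a channel message packs the command and the channel together into a status byte, followed by one or two data bytes; that encoding is why channels display as 1-16 but travel on the wire as 0-15. Here is a minimal sketch in Python using raw bytes (independent of any particular MIDI library), with the article's own channel/note numbers as the example:

```python
# A raw MIDI channel message is just bytes: a status byte (command nibble
# plus channel nibble) followed by one or two data bytes.

def note_on(channel, note, velocity):
    """Build a Note-On message. channel is 1-16; note/velocity are 0-127."""
    if not 1 <= channel <= 16:
        raise ValueError("channel must be 1-16")
    if not (0 <= note <= 127 and 0 <= velocity <= 127):
        raise ValueError("data bytes must be 0-127")
    return bytes([0x90 | (channel - 1), note, velocity])

def control_change(channel, controller, value):
    """Build a Control Change (CC) message."""
    return bytes([0xB0 | (channel - 1), controller, value])

# The "start the message timer" command described later in this article:
# Note-On, channel 1, note 32, value 1.
msg = note_on(1, 32, 1)
print(msg.hex())  # 902001
```

The hypothetical helper names here are for illustration only; any MIDI library or hardware port that accepts raw bytes would carry these messages the same way.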
To send commands to Reaper, my DAW for recording, I use CCs, which are nice because they don’t require a value. I can send a Channel 1, CC 1, for example, and Reaper drops a marker at the current location (handy for marking the start of the message—yes, it’s sent from the Message snapshot).
That’s not a default, I assigned CC 1 in the Reaper shortcut menu. Most of the other apps respond to Note-On commands.
These are a few of the commands I’ve set up in Reaper. The list is practically endless…
Auto-Start The Walk Out Music
I’ve long had a dream that I could fire the walk out snapshot and music would automatically start playing from Mixxx (our DJ app that plays walk in/out music).
That dream is now a reality. I spent a few minutes configuring Mixxx to listen for commands on MIDI channel 5. Because I can start the playback “Decks” with a MIDI command, I send one command that brings the crossfader over to Deck A, and another that starts Deck A.
All I have to do beforehand is load the song that I want to start into Deck A. I typically do that near the beginning of the message so I don’t forget.
I mentioned MIDI channel 5; we’ll talk about that in a little bit. Setting up those commands in Mixxx took a little effort, but I discovered the MIDI Learning Wizard in Mixxx that made it easy. What else can we do?
Once you set Mixxx up to listen to MIDI commands via your network setup, you can then run the learning wizard to assign commands to the controls you want to fire.
Fire Lighting Cues
It’s summertime, and the living is easy. And the volunteers are scarce. Between camps, vacations and the usual attrition, we’re short on lighting guys right now.
But that’s no problem because I can fire lighting commands from audio snapshots. I configured the Hog to listen to MIDI commands on channel 2, and can easily hit the “Go” button with a simple MIDI 2, 50, 1 command.
Sometimes, we’ll have several lighting cues per song, and I typically only do one audio snapshot per song. In those cases, I’ll create “dummy” snapshots that do nothing but fire lighting cues (no recall of any audio parameters).
I label them exactly the same as my lighting cues so I can keep track of where I am. This takes a little setting up, but it’s actually quite easy now that I’ve done it once or twice.
You can also do a whole lot more with MIDI on your Hog. Here’s an excerpt from the manual on how to set it all up. Not comfortable with that? Well how about this:
Start The Message Timer
With the MIDI module in ProPresenter, you can assign MIDI commands to all sorts of things. We use Timer 1 to give our pastor a timer for his message.
Problem is, the ProPresenter operators occasionally forget to start it. But the audio guy has to hit the message snapshot anyway. So why not start the timer from the audio snapshot?
With this system, it’s as easy as pie. With a simple Note-On 1, 32, 1 command, the timer starts right when our pastor starts. Beautiful.
You can assign whatever values you want to the various functions. This is what I came up with, but you can do whatever you like.
One caveat to this, however, which I learned the hard way so you don’t have to: ProPresenter listens and responds to notes on every MIDI channel (to my friends at Renewed Vision—this would be a great thing to address at some point…).
My idea had been to give each computer its own channel so we wouldn’t be triggering commands on the lighting desk that were intended for the audio playback. It’s easy to do on the Hog and in Mixxx.
But ProPresenter takes commands from all channels. At least out of the box.
This simple MIDI Pipe blocks all messages that don’t come in on channel 1. This makes sure a lighting or Mixxx command I send on channels 2 or 5 doesn’t trigger a ProPresenter command.
A cool little program called MIDIPipe lets me filter MIDI messages by channel. I built a “pipe” that takes MIDI commands in from my network connection, filters them to allow only channel 1 messages through, then spits it out to ProPresenter.
Like VMPK, I have it set to launch this pipe at log in and hide, so we don’t even see it.
If you don’t want to bother with this level of segregation, you could just block out your notes to different roles. For example, designate the block of notes 0-40 to ProPresenter, 41-80 to lighting, 81-120 to audio, or whatever.
That would work, but depending on what you’re controlling might be tricky. So I like my method better.
Once the pipe is built, you have ProPresenter connect to that instead of the Network Session.
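The channel-filtering idea behind that pipe can be sketched in a few lines of Python. This is illustrative only, with invented message bytes; MIDIPipe itself is a separate application and works differently under the hood:

```python
# Pass only channel-voice messages on a chosen channel, dropping the rest.
# System messages (status >= 0xF0) carry no channel nibble, so they are
# passed through unchanged.

def filter_channel(messages, keep_channel=1):
    """Yield only messages on keep_channel (1-16); pass system messages."""
    for msg in messages:
        status = msg[0]
        if status >= 0xF0:                      # system message, no channel
            yield msg
        elif (status & 0x0F) == keep_channel - 1:
            yield msg                           # channel matches: keep
        # otherwise drop (e.g., lighting on ch 2, Mixxx on ch 5)

stream = [bytes([0x90, 60, 100]),   # Note-On, channel 1 -> kept
          bytes([0x91, 50, 1]),     # Note-On, channel 2 -> dropped
          bytes([0x94, 10, 1])]     # Note-On, channel 5 -> dropped
print([m.hex() for m in filter_channel(stream)])  # ['903c64']
```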
So, this is getting long and I suspect we’re closing in on overload, so I’m going to split this up and finish our discussion of the MIDI command structure next time.
Mike Sessler is the Technical Director at Coast Hills Community Church in Aliso Viejo, CA. He has been involved in live production for over 20 years and is the author of the blog Church Tech Arts. He also hosts a weekly podcast called Church Tech Weekly on the TechArtsNetwork.
Thursday, July 25, 2013
Extron Introduces PC 101 Energy-Saving AC Power Controller
Provides remote power management for AV devices
Extron Electronics has introduced the PC 101, a 1-input, 1-output AC power controller designed to provide remote power management for AV devices.
When paired with a controller or control processor equipped with relays, such as the MediaLink MLC 226 IP or IP Link IPL 250, the PC 101 can be configured to turn a device on or off at scheduled times for security and energy savings purposes.
It features a contact closure control input and tally output, which can be used for visual feedback. The slim, compact form factor and IEC connectors on pigtails enable convenient in-line use with other devices and discrete mounting behind displays or other equipment.
The PC 101 has a power rating of 100-240 VAC, 50/60 Hz, allowing for worldwide compatibility.
“The PC 101 adds convenience and power savings potential to a wide variety of AV devices,” says Casey Hall, vice president sales and marketing for Extron. “With its compact design, flexible mounting options, and worldwide power compatibility, the PC 101 is designed for easy integration into any new or existing AV system.”
In addition to the new PC 101, Extron offers a full line of AC power and device controllers that offer flexible, centralized, Web-based power and device management.
Red Bull X-Fighters World Series Motocross Tour Takes Riedel Gear To Battle In Madrid
Team depends on Riedel Artist digital matrix intercom system and other solutions
The Red Bull X-Fighters World Series, the planet’s biggest freestyle motocross (FMX) tour, utilized Riedel Communications intercom and radio systems at this year’s event at Madrid’s Plaza de Toros de Las Ventas, known as the home of bullfighting in Spain.
The Plaza de Toros de Las Ventas has been part of the Red Bull X-Fighters World Tour for more than a decade, and the competition in Madrid has intensified each year. The tight confines of the bullring put the crowd right at the heart of the action as motocross bikes roar across the track and fly through the air.
To maintain reliable and clear communication in spite of intense noise, the Red Bull X-Fighters team depended on the architecture of Riedel’s Artist digital matrix intercom system and solutions such as the company’s MAX noise-canceling headsets.
In Madrid, as at other tour stops, the Riedel systems supported show coordination among event directors and stage managers, lighting and PA staff, and scoring officials, as well as the work of the team’s sports, medical, and security staff.
Three Artist systems were networked in a self-healing fiber ring that guaranteed complete redundancy in the event of a fiber break.
Artist 1000 Series control panels were divided between the front of house (FOH), staff, and event judges. Nine Performer digital partyline beltpacks were distributed among the judges and the follow-spot lighting crew.
These systems were complemented by 150 Motorola radios, with eight Riedel RiFace universal radio interfaces serving as radio repeaters to ensure coverage across the venue.
The Riedel architecture allowed for the quick setup of a flexible backbone that could be adapted to support the vast majority of additions. The crew was able to make changes with the click of a mouse rather than having to run additional cabling.
With the Artist Director configuration and control software, technicians also could adjust levels remotely to improve the clarity, utility, and safety of event communications. The Artist system interfaced with an OB truck, where the event was recorded and prepared for air and streaming to a live audience around the globe.
Posted by Keith Clark on 07/25 at 10:35 AM
Wednesday, July 24, 2013
Understanding Acoustic Feedback & Suppressors
Inside the unique solutions to tame the howl of feedback
Acoustic feedback (also referred to as the Larsen effect) has been roaming around sound reinforcement systems for a very long time, and everyone seems to have their own way to tame the feedback lion.
Digital signal processing opened up the microphone to some creative solutions, each with its own unique compromises. This article takes a closer look into that annoying phenomenon called acoustic feedback and some of the DSP based tools available for your toolbox.
Gaining Insight into Feedback
Every typical sound reinforcement system has two responses, one when the microphone is isolated from the loudspeaker (open-loop) and a different response when the microphone is acoustically coupled with the loudspeaker (closed-loop).
The measured response of the output of a system relative to its input is called its transfer function. If the measured open-loop response of a system has constant magnitude across the frequency range of interest you can model the system using a level control followed by some delay.
Looking at the transfer function of a simple level change and delay element can provide insight into the behavior of acoustic feedback in real world situations.
Figure 1. Open (flat) / Closed (peaked) Loop Responses, Delay = 2ms, Gain = 0 dB
The top half of Figure 1 compares two magnitude responses. The flat (blue) line represents the magnitude of an open-loop system (no feedback) with unity gain (0 dB) and 2 ms of delay. The peaked (red) curve is the same system after the feedback loop is closed.
The closed-loop response has peaks that correspond with the zero degree phase locations shown in the lower half of the figure. The closed-loop valleys correspond with the 180 degree phase locations. Feedback is a function of both magnitude and phase. Even though the open-loop gain is the same at all frequencies, only frequencies that are reinforced as they traverse the loop (near zero degrees of phase shift) will run away as feedback.
Figure 2. Open (flat) / Closed (peaked) Loop Responses, Delay = 10 ms, Gain = -3 dB
Figure 2 shows the effects of reducing the gain by 3 dB and increasing the delay to 10 ms. Notice that the closed-loop gain reduces significantly (more than the 3 dB of open-loop attenuation that was applied) and that the potential feedback frequencies (areas of 0 degrees phase shift) get much closer together.
The zero degree phase locations repeat every 360 degrees of phase change.
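This behavior can be checked numerically by modeling the open loop as the flat gain plus pure delay described above. The sketch below uses the Figure 2 values (-3 dB, 10 ms); the Figure 1 case of 0 dB gain cannot be used because, in this idealized model, its closed-loop peaks are unbounded:

```python
import numpy as np

# Closed-loop response of a flat-gain, pure-delay loop:
#   H(f) = g / (1 - g * e^(-j*2*pi*f*delay))
# Peaks occur where the loop phase is 0 degrees, valleys at 180 degrees.
def closed_loop_db(freq_hz, open_gain_db, delay_s):
    g = 10 ** (open_gain_db / 20.0)
    f = np.asarray(freq_hz, dtype=float)
    h = g / (1.0 - g * np.exp(-2j * np.pi * f * delay_s))
    return 20 * np.log10(np.abs(h))

# -3 dB of open-loop gain and 10 ms of delay: peaks repeat every 100 Hz,
# at 0, 100, 200 Hz (~ +7.7 dB), with valleys at 50, 150 Hz (~ -7.6 dB).
print(np.round(closed_loop_db([0, 50, 100, 150, 200], -3.0, 0.010), 2))
```

Note how the closed-loop peaks exceed the open-loop gain, and how the peak spacing of 100 Hz follows directly from the 10 ms delay, as the next section derives.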
For a linear phase transfer function you can calculate the frequency spacing of potential feedback locations as a function of delay time. The delay time is related to the phase slope by:
Delay Time (sec) = -ΔPhase / (ΔFrequency x 360)
When |ΔPhase| = 360 degrees (the phase difference between two 0 degree phase locations), this leaves:
ΔFrequency = 1 / Delay Time (sec)
This means that the potential feedback frequency spacing = 1 / delay time (in seconds). The following shows the potential feedback frequency spacing for various delays.
1 / 0.002 sec. = 500 Hz spacing (for 2 ms of delay)
1 / 0.010 sec. = 100 Hz spacing (for 10 ms of delay, shown below)
1 / 0.1 sec. = 10 Hz spacing (for 100 ms of delay)
This implies that adding delay makes the potential for feedback worse (i.e. there are more potential feedback frequencies because they are closer together).
Practical experience will tell you otherwise. This is because delay also affects the rate at which feedback grows and decays. If you have 10 ms of delay between the microphone and loudspeaker and +0.5 dB of transfer function gain at a potential feedback frequency, then feedback will grow at a rate of 0.5 dB / 10 ms or +50 dB / second. If you increase the delay to 100 ms then the growth rate slows to +5 dB / second.
Here is another observation regarding gain and its relationship to feedback: For a fixed delay you can calculate the growth rate of a feedback component if you know how far above unity gain the open-loop system is at a particular feedback frequency.
This means that if you are at a venue and can hear feedback growing (and can estimate its growth rate) you can calculate roughly how far above unity gain the system is (this also means your kids probably call you a nerd).
As an example, if you estimate that feedback is growing at a rate of 6 dB / second and you know that the distance from the loudspeaker to microphone is 15 feet, then you know that the gain is roughly only (6 x 0.015) or 0.09 dB above unity gain. So… you only need to pull back the gain by that amount to bring things back into stability.
Of course the rate of change also applies to feedback as it decays. If you pull the gain back by 0.09 dB the feedback will stop growing. If you pull back the gain by 0.2 dB then the feedback frequency will decay at close to the same rate that it was growing. If you reduce the gain by 3 dB (below the stability point of unity) it will decay at a rate of 200 dB / second.
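The arithmetic above, spacing, growth rate, and the gain estimate from an observed growth rate, can be collected into a short sketch. The 1130 ft/s speed of sound and the text's rounded 0.015 s loop delay for 15 feet are approximations:

```python
# Numbers from the text: potential feedback frequencies are spaced
# 1/delay apart, feedback grows at (gain above unity) / (loop delay),
# and an observed growth rate lets you estimate the excess gain.

def spacing_hz(delay_s):
    """Spacing of potential feedback frequencies for a pure delay."""
    return 1.0 / delay_s

def growth_rate_db_per_s(gain_above_unity_db, delay_s):
    """How fast a feedback component builds, in dB per second."""
    return gain_above_unity_db / delay_s

def gain_above_unity_db(growth_db_per_s, distance_ft, speed_fps=1130.0):
    """Estimate excess open-loop gain from an observed growth rate."""
    return growth_db_per_s * (distance_ft / speed_fps)

print(spacing_hz(0.002))                  # 500.0 Hz spacing (2 ms of delay)
print(spacing_hz(0.010))                  # 100.0 Hz spacing (10 ms)
print(growth_rate_db_per_s(0.5, 0.010))   # 50.0 dB/s
print(growth_rate_db_per_s(0.5, 0.100))   # 5.0 dB/s
print(round(gain_above_unity_db(6.0, 15.0), 2))  # 0.08 (the text rounds to 0.09)
```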
Note also that anything that changes phase will affect the feedback frequency locations. This includes temperature changes as well as any filtering and delay changes. If you analyze how temperature changes affect the speed of sound and look at the corresponding effective delay change that a temperature shift yields, you end up with an interesting graph.
Figure 3 shows the shift of a feedback frequency based solely on how temperature affects the speed of sound. The interesting points are that feedback frequency shifts are larger at higher frequencies and the potential for feedback frequency shifts could be significant (depending on your method of control), but more on this later.
Figure 3. Feedback Frequency Shift vs Frequency (for six temperature changes)
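Under the simplifying assumptions that the entire loop delay is a fixed-length air path and that the usual ideal-gas formula for the speed of sound holds, the shift in Figure 3 can be approximated: a feedback frequency tracks a fixed phase condition, so it scales with the speed of sound.

```python
import math

# Speed of sound in dry air (ideal-gas approximation), in m/s.
def speed_of_sound(temp_c):
    return 331.3 * math.sqrt(1.0 + temp_c / 273.15)

# With a fixed path length, the loop delay scales with 1/c, so a feedback
# frequency scales with c:  f' / f = c' / c.
def feedback_shift_hz(freq_hz, temp_from_c, temp_to_c):
    return freq_hz * (speed_of_sound(temp_to_c) / speed_of_sound(temp_from_c) - 1.0)

for f in (100, 1000, 10000):
    print(f"{f} Hz shifts by {feedback_shift_hz(f, 20, 25):.2f} Hz for a 5 C rise")
```

The shift is proportional to frequency, which matches the figure's point that higher-frequency feedback moves farther for the same temperature change.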
Feedback is both a magnitude and phase issue.
Increasing system delay increases the number, and reduces the spacing, of potential feedback frequencies.
Delay also affects the rate at which a feedback frequency grows or decays.
To bring a runaway feedback frequency back into control you simply need to reduce the gain below unity; it will then decay at a rate determined by its attenuation and delay time.
Temperature changes (and anything else that affects phase) affect the location of feedback frequencies.
Methods for Controlling Feedback
Understanding feedback is one thing, taming it is quite another. There are three main methods used by equipment manufacturers for controlling feedback: the Adaptive Filter Model method (similar to a method used in acoustic echo cancellation), the Frequency Shifting method, and the Auto-Notching method.
Most of this discussion is on auto-notching as it is the most commonly used method.
Adaptive Filter Modeling
This method is very similar to algorithms used in acoustic echo cancellation for teleconferencing systems.
The idea is to accurately model the loudspeaker to microphone transfer function and then use this model to remove all of the audio sent out the local loudspeaker from the microphone signal.
Figure 4. Adaptive Filter As Used In Acoustic Echo Cancellation
Figure 4 shows a teleconferencing application. The audio sent out the loudspeaker originates from a far-end location, and the removal of this audio from the local near-end microphone keeps the far-end talker from hearing his own voice returned as an echo.
The far-end talker’s voice is used as a training signal for the modeling. This modeling is an ongoing process since the model needs to match the ever-changing acoustic path.
During this modeling any local speech (double talk) acts as noise which can cause the model to diverge. If the model is no longer accurate then the far end speech is not adequately removed.
In fact, the noise added from the inaccurate model can be worse than not attempting to remove the echo at all. Much care is taken to avoid the divergence of the path model during any periods of double talk.
A sound reinforcement application is shown in figure 5. Here there is no far-end speech to feed the model. The local speech is immediately sent out the loudspeaker and is the only training signal available.
The fact that the training signal is correlated with the local speech (seen as noise to the training process) provides a significant problem for the adaptive filter based modeling. This is particularly true if it is trying to maintain a model that is accurate over a broad frequency range.
Figure 5. Adaptive Filter As Used In Feedback Suppression
To overcome this problem some form of decorrelation is introduced (such as a frequency shift). This helps the broad band modeling process but adds distortion to the signal. As with the teleconferencing application if the model is not accurate further distortion occurs.
The decorrelation, along with any added distortion due to an inaccurate model, makes this method less appealing for some venues. The big advantage to this type of a feedback suppressor is that your added gain before feedback margin is usually greater than 10 dB.
Frequency Shifting
Frequency shifting has been used in public address systems to help control feedback since the 1960s. Feedback gets generated at portions of the transfer function where the gain is greater than 0 dB.
The loudspeaker to microphone transfer function, when measured in a room, has peaks and valleys in the magnitude response.
In frequency shifting all frequencies of a signal are shifted up or down by some number of hertz. The basic idea behind a frequency shifter is that as feedback gets generated in one area of the response it eventually gets attenuated by another area.
The frequency shifter continues to move the generated feedback frequency along the transfer function until it reaches a section that effectively attenuates the feedback. The effectiveness of the shifter depends in part on the system transfer function.
It is worth pointing out that this is not a “musical transformation” as the ratio between the signal’s harmonics is not preserved by the frequency shift. A person’s voice will begin to sound mechanical as the amount of the shift increases.
While “audible distortion” depends on the experience of the listener most agree that the frequency shift needs to be less than 12 Hz.
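As a sketch (not any specific product's algorithm), a digital frequency shifter can be built with single-sideband modulation: form the analytic signal with a Hilbert transform, multiply by a complex exponential, and keep the real part:

```python
import numpy as np
from scipy.signal import hilbert

def frequency_shift(x, shift_hz, fs):
    """Shift every spectral component of x by shift_hz (not a pitch shift)."""
    analytic = hilbert(x)                      # x + j * Hilbert(x)
    t = np.arange(len(x)) / fs
    return np.real(analytic * np.exp(2j * np.pi * shift_hz * t))

fs = 48000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 440.0 * t)           # 440 Hz test tone, 1 second
shifted = frequency_shift(tone, 8.0, fs)       # 8 Hz shift, inside the ~12 Hz limit

# With 1 second of audio, each rFFT bin is 1 Hz, so the peak bin is the frequency.
print(np.argmax(np.abs(np.fft.rfft(shifted))))  # 448
```

Because every component moves by the same number of hertz, a harmonic series at 440/880/1320 Hz becomes 448/888/1328 Hz, which is exactly why the transformation is not musical.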
How much added gain before feedback can be reasonably expected? The short answer is only a couple of dB. Hansler (1) reviews some research results indicating that the actual increase in gain achieved depends on the reverberation time as well as the size of the frequency shift.
Using frequency shifts in the 6-12 Hz range, a lecture hall with minimal reverberation benefited by slightly less than 2 dB, while an echoic chamber with a reverberation time of greater than 1 second could benefit by nearly 6 dB from the same frequency shift.
Digital signal processing allows frequency-shifting techniques in a large variety of applications. When used in conjunction with other methods such as the adaptive filter modeling previously mentioned, it can provide an even greater benefit.
However, the artifacts due to the frequency shifting are prohibitive in areas where a pure signal is desired. Musicians are more sensitive to frequency shifts, so think twice before placing a shifter in their monitor loudspeaker path.
Auto-Notching
Automatic notch filters have been used to control feedback (2) since at least the 1970s. Digital signal processing allows more flexibility in terms of frequency detection as well as frequency discrimination and the method of deploying notches.
Auto-notching is found more frequently among pro-audio users than the other methods because it is easier to manage the distortion.
When considering automatic notching algorithms there are three areas of focus: frequency identification, feedback discrimination and notch deployment.
Frequency identification typically is accomplished by using either a version of the Fourier transform or an adaptive notch filter. Both methods of detection allow the accurate identification of potential feedback frequencies.
While the Fourier transform is naturally geared toward frequency detection, the adaptive notch filter can also determine frequency by analyzing the coefficient values of the adaptive filter. However, detection of lower frequencies (less than 100 Hz) is problematic for both algorithms.
Fourier analysis requires a longer analysis window to accurately determine lower frequencies and the adaptive notch filter requires greater precision.
There are two main methods used to discriminate feedback from other sounds. The first method focuses on the relative strength of harmonics. The idea is that while music and speech are rich in harmonics, feedback is not.
Note that either of the frequency detection methods (Fourier transform or adaptive notch filter) could be used to determine the relative strength of harmonics. It is easier to think in terms of harmonics if you are using a Fourier transform, but just as frequency can be determined by analyzing filter coefficients, harmonics can be identified by analyzing the relationships between sets of coefficients.
There are drawbacks in utilizing harmonics as a means of identifying feedback. First, feedback is propagated through transducers and transducers have non-linearities. This means that feedback (especially when clipped) will have harmonics. Also, feedback does not always occur one frequency at a time.
If you remember the discussion on the properties of feedback there is potential for a feedback frequency anywhere the phase of the loudspeaker to microphone transfer function is zero degrees. For a system with 25 ms of delay (roughly 25 ft) this occurs every 40 Hz, and the zero degree frequency locations get closer together as the delay increases.
It is not possible to ensure that simultaneous feedback frequencies will never be harmonically related. The potential for feedback with harmonics needs to be balanced against the fact that some non-feedback sounds (tonal instruments such as a flute) have weak harmonics, blurring the area of accurate discrimination.
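The arithmetic behind that 40 Hz figure is simply the reciprocal of the loop delay, as this illustrative helper shows (the millisecond-to-feet rule of thumb in the comment is approximate):

```python
def zero_phase_spacing_hz(delay_ms):
    """Spacing between potential feedback frequencies: the open-loop
    phase passes through zero degrees every 1/delay Hz, so candidates
    pack closer together as the loudspeaker-to-microphone delay grows."""
    return 1000.0 / delay_ms

# 25 ms of delay (sound travels roughly a foot per millisecond)
print(zero_phase_spacing_hz(25.0))  # 40.0 Hz between candidate frequencies
print(zero_phase_spacing_hz(50.0))  # 20.0 Hz: twice the delay, half the spacing
```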
Another method for discriminating feedback from desirable sound is to analyze feedback through some of its more unique characteristics. This can be done without analyzing harmonic content. For example a temporary notch can be placed on a potential feedback frequency.
Feedback is the only signal that will always decay (upstream of the filter) coincident with the placing of the notch. However, because placing a temporary notch is intrusive, some other mechanism needs to be used to identify potential feedback frequencies before a temporary notch is placed for verification.
One such useful characteristic is that a feedback frequency is relatively constant over the time that its amplitude is growing. This constant frequency combined with a growing magnitude proves very useful as a precursor to the temporary notch.
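The precursor test amounts to a simple check over a short history of tracked peaks. The sketch below is illustrative only; the frequency tolerance and growth threshold are assumptions, not values from any shipping product.

```python
def is_feedback_precursor(freq_track, mag_track,
                          freq_tol_hz=2.0, min_growth_db=3.0):
    """Flag a spectral peak as probable feedback when its frequency has
    stayed essentially constant while its magnitude has grown.
    Both thresholds here are illustrative assumptions."""
    stable = max(freq_track) - min(freq_track) <= freq_tol_hz
    growing = mag_track[-1] - mag_track[0] >= min_growth_db
    return stable and growing

# A ringing tone: frequency pinned near 1 kHz, level climbing steadily
print(is_feedback_precursor([1000.0, 1000.5, 1001.0], [-20.0, -15.0, -10.0]))  # True
# Speech or music: the dominant peak wanders, so no temporary notch is risked
print(is_feedback_precursor([400.0, 950.0, 1300.0], [-20.0, -15.0, -10.0]))    # False
```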
The final area in auto-notching algorithms is the deployment of the notches. Most auto-notching feedback suppressors allow the user to identify filters as either fixed (static) or floating (dynamic) in nature. This designation refers to the algorithm’s ability to recycle the filter if needed.
If a feedback frequency is identified the algorithm looks to see if a notch has already been deployed at that frequency. If found the notch will be appropriately deepened. If not found then a new filter is deployed (fixed filters are allocated before floating filters). If all filters are allocated then the oldest floating filter is reset and re-deployed at the new frequency.
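That deployment policy can be sketched in a few lines. This is a generic illustration of the logic just described; the slot counts, the 5 Hz matching tolerance, and the 1 dB deepening step are all assumptions, not any manufacturer's code.

```python
class NotchAllocator:
    """Sketch of the deployment policy: deepen an existing notch at a
    repeat offender, otherwise allocate fixed filters before floating
    ones, and recycle the oldest floating filter when the pool is full."""

    def __init__(self, n_fixed=4, n_floating=8, tolerance_hz=5.0):
        self.tolerance_hz = tolerance_hz
        self.fixed = [None] * n_fixed   # allocated first, never recycled
        self.floating = []              # kept in order, oldest first
        self.n_floating = n_floating

    def on_feedback(self, freq_hz):
        # 1. Deepen an existing notch near this frequency.
        for notch in self.fixed + self.floating:
            if notch and abs(notch["freq"] - freq_hz) < self.tolerance_hz:
                notch["depth_db"] += 1.0
                return notch
        new = {"freq": freq_hz, "depth_db": 1.0}
        # 2. Use a free fixed filter before touching the floating pool.
        for i, slot in enumerate(self.fixed):
            if slot is None:
                self.fixed[i] = new
                return new
        # 3. Allocate a floating filter, recycling the oldest if full.
        if len(self.floating) >= self.n_floating:
            self.floating.pop(0)        # oldest floating filter is reset
        self.floating.append(new)
        return new
```

Checking for an existing notch first keeps a persistent offender from consuming the whole filter pool one narrow notch at a time.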
Another useful feature is to give the user the option of having the algorithm turn down the broadband gain (with a programmable ramp-back time) instead of recycling a floating filter if all filters are used up. Adjusting the broadband gain does not increase the gain margin, but it does provide a measure of safety once all of the available filters are gone.
An area in notch deployment that requires careful attention is the depth and width of notches used to control feedback frequencies. To bring a feedback frequency back into stability the system’s open-loop transfer function gain simply needs to be below unity at that frequency. A desirable transfer function will have peaks that are reasonably flat through the frequencies of interest.
The depth of the notch used to control a feedback frequency should not be greater than the relatively hot area of gain that caused it, plus a little safety margin.
This means notches on the order of a couple of dB, not tens of dB. If the auto-notching algorithm is placing notches with a depth of 20 dB or more, something is wrong. One area to look at is the bandwidth of the notches used.
There is a tendency with these algorithms to try to use notches that are as narrow as possible, in the mistaken belief that the cumulative response will be less noticeable. What usually ends up happening is that several narrow notches get placed at a depth of 20 dB or more to lower the overall gain 2 or 3 dB in a larger area.
Furthermore, high Q (narrow) notches are less effective at controlling feedback during environmental changes (such as the temperature changes mentioned above) than are low Q (wide), shallow notches. This means if you use low Q, shallow notches, you will be less likely to have notches deployed that are not performing any function other than hacking up the hard work you put in on your frequency response.
Most auto-notching algorithms allow you to select the default width and maximum depth of the notches used.
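To make the "couple of dB, low Q" guidance concrete, here is one common way such a shallow cut can be realized: a peaking biquad from the well-known Audio EQ Cookbook formulas, used as a gentle notch. The specific design choice is an illustration, not something the text prescribes.

```python
import cmath
import math

def peaking_cut(f0, fs, q, cut_db):
    """Biquad coefficients (b, a) for a peaking EQ used as a shallow
    notch, per the Audio EQ Cookbook formulas."""
    a_lin = 10.0 ** (cut_db / 40.0)
    w0 = 2.0 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2.0 * q)
    b = [1.0 + alpha * a_lin, -2.0 * math.cos(w0), 1.0 - alpha * a_lin]
    a = [1.0 + alpha / a_lin, -2.0 * math.cos(w0), 1.0 - alpha / a_lin]
    return b, a

def gain_db(b, a, f, fs):
    """Magnitude response of the biquad at frequency f, in dB."""
    z = cmath.exp(1j * 2.0 * math.pi * f / fs)
    h = (b[0] + b[1] / z + b[2] / z**2) / (a[0] + a[1] / z + a[2] / z**2)
    return 20.0 * math.log10(abs(h))

# A wide (Q = 2), 3 dB cut at 1 kHz: the kind of notch advocated above
b, a = peaking_cut(1000.0, 48000.0, q=2.0, cut_db=-3.0)
print(round(gain_db(b, a, 1000.0, 48000.0), 1))  # -3.0 dB at center
```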
How much additional gain before feedback can be achieved from auto-notching? If you had a perfectly flat frequency response then the auto-notching algorithm would not provide any additional gain margin.
The best the algorithm can do is pull down the gain in a finite number of locations. If you had a handful of peaks then the auto-notch could provide additional margin based on how much higher the peaks are above the remaining response. Typically the auto-notch provides only a couple of dB of additional gain before feedback.
Despite the lack of large additional gain margin there are still two other significant reasons for having an auto-notch in the system. First, the auto-notch provides a simple tool to aid in the identification of problem spots in the response when the audio system is first installed. Second, it provides a safety net that can remain in place to cope with the ever-changing acoustic path (unwanted additional reflections, gain change etc.).
Acoustic feedback is both a magnitude and phase issue. As such, changes in the system’s phase response due to delay, filtering or temperature changes impact potential feedback frequencies.
If notch filters are used to control feedback they should be placed after all other changes are made to the system’s phase response to ensure their utility. They should also be wide enough to ensure their ongoing usefulness despite changes to the feedback path.
In order to bring a runaway frequency back into stability the magnitude simply needs to be taken below the unity gain mark plus a couple of dB for a safety margin. In addition to a slightly expanded gain margin, the auto-notch tool provides a simple means for ringing out a room as well as leaving a safety net after the original installation is complete.
In addition to auto-notching algorithms, adaptive filter models and frequency shifting algorithms also provide useful ways to suppress feedback and increase a system’s gain before feedback margin.
An adaptive filter model based feedback suppressor relies on an accurate model of the loudspeaker to microphone acoustic path in order to remove feedback from a receiving microphone. If the model is inaccurate then distortion can occur.
A decorrelation process is used to improve the convergence characteristics of the broad band adaptive filter. This decorrelation can also add a limited amount of distortion. However, the adaptive filter model is capable of greater than 10 dB of additional gain before feedback.
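The core of such a suppressor can be sketched as a normalized LMS update that learns the loudspeaker-to-microphone path; real products add the decorrelation and control logic discussed above, so treat this as a minimal illustration of the principle only.

```python
def nlms_step(w, x_buf, mic_sample, mu=0.5, eps=1e-8):
    """One NLMS update of an adaptive model of the acoustic path.
    w          -- current filter taps
    x_buf      -- most recent loudspeaker samples (newest first), len(w)
    mic_sample -- what the microphone actually picked up
    Returns (updated taps, residual with the modeled feedback removed)."""
    est = sum(wi * xi for wi, xi in zip(w, x_buf))   # predicted feedback
    err = mic_sample - est                           # feedback-free residual
    norm = sum(xi * xi for xi in x_buf) + eps        # input power for step scaling
    w = [wi + mu * err * xi / norm for wi, xi in zip(w, x_buf)]
    return w, err

# Toy run: the true path is a single tap of gain 0.5; the model converges to it
w = [0.0]
for x in [1.0, -0.7, 0.3, 0.9, -0.2] * 20:
    w, _ = nlms_step(w, [x], 0.5 * x)
print(round(w[0], 3))  # 0.5
```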
The utility of the frequency shifter depends on the system where it is applied. As a general rule the frequency shifter will provide a greater gain margin in a more reverberant space than in a smaller less reverberant space. The frequency shift should be kept to less than 12 Hz to minimize audible distortion.
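One common realization of a frequency shifter is single-sideband modulation: form the analytic signal, heterodyne it by the shift amount, and keep the real part. The sketch below uses a deliberately slow, DFT-based Hilbert step for clarity; it illustrates the technique and is not a production design.

```python
import cmath
import math

def freq_shift(x, shift_hz, fs):
    """Shift every component of real signal x up by shift_hz via SSB
    modulation. O(N^2) DFT/IDFT, written for clarity rather than speed."""
    n = len(x)
    X = [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
         for k in range(n)]
    # Zero the negative frequencies, double the positives: analytic spectrum
    for k in range(n):
        if 0 < k < n // 2:
            X[k] *= 2
        elif k > n // 2:
            X[k] = 0
    analytic = [sum(X[k] * cmath.exp(2j * math.pi * k * t / n)
                    for k in range(n)) / n for t in range(n)]
    # Heterodyne by the shift and keep the real part
    return [(analytic[t] * cmath.exp(2j * math.pi * shift_hz * t / fs)).real
            for t in range(n)]

# Shift a 440 Hz tone up by 5 Hz -- well under the ~12 Hz audibility limit
fs, n = 8000.0, 64
tone = [math.cos(2 * math.pi * 440.0 * t / fs) for t in range(n)]
shifted = freq_shift(tone, 5.0, fs)
```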
Acoustic feedback has been roaming around sound systems for some time. The tools just outlined provide a set of unique solutions, each with its own compromises. Getting the most out of a tool requires understanding both the problem and the proposed solution. With the proper tools in place, perhaps our memories of the howl and screech that characterize the Larsen effect will begin to slowly fade away.
A PDF of this article is available for download on the Rane website.
Presented with permission from Rane Corporation.
In The Studio: Six Tips For Great Electric Guitars Without Amps
Making it work with direct guitar recording
With direct guitar recording into virtual amps you can now rock out through the guitar chain of your dreams in the comfort of your home studio and without the neighbors calling the cops.
In this article I’ve outlined a few tips and best practices for getting great guitar tones without amps or mics.
—Go Direct: “Going direct” means connecting your guitar to your recording interface via a high-impedance (Hi-Z) instrument input, if available, or through a DI (direct injection) box. A direct box converts the high-impedance signal from the guitar to a low-impedance, mic-level signal that can connect to any preamp.
Great results can be had with either, though the best results come from the best signal chain, including quality cables and a quality DI box. The Radial JDI and J48 are professional studio standards and are actually quite affordable.
—Avoid hum and buzz: LCD and LED computer monitors are now the norm, and while they don’t emit as much noise as CRT monitors, some noise can still be picked up by your guitar’s pickups. Computer fans, cell phones, and even a wristwatch can be sources of unwanted noise in your guitar tone.
To reduce or avoid this noise, sometimes you just need to move around the room a little and the noise will be gone. Additional noise suppression is best done with a noise gate plugin in your DAW.
—A virtual guitar rig: The quality and quantity of virtual guitar amps has increased dramatically over the past few years to the point where you may not even want a real amp anymore. Amplitube 3, Guitar Rig 4, and POD Farm 2 (to name just a few) are all top notch virtual guitar processing systems.
They’ve really raised the bar in sound quality. The flexibility and ability to select from hundreds of pedal, amp, cabinet and mic combinations at will makes them invaluable tone shaping tools for guitar, bass and beyond.
—Latency: With guitar recording, the lower the latency the better. 128 samples is good, but 64 samples or lower is ideal. Latency affects the way you play and you want the immediacy that plugging into a real amp has.
You’ll need a good firewire interface to achieve this kind of stable low latency.
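Those buffer sizes translate to milliseconds by simple division. Note that this is the one-way buffer latency; the round trip you actually hear through monitoring is roughly double, plus converter delays.

```python
def latency_ms(buffer_samples, sample_rate_hz):
    """One-way latency contributed by the audio buffer."""
    return 1000.0 * buffer_samples / sample_rate_hz

print(round(latency_ms(128, 44100), 2))  # 2.9 ms  -- good
print(round(latency_ms(64, 44100), 2))   # 1.45 ms -- ideal
```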
—Timing and tuning: As always, timing of the performance and tuning of the instrument are so important. Most of the major virtual amp packages have a tuner option, check the tuning often! With a DAW you can actually see and hear how far off your timing is.
Record each section of the song multiple times, choose the best or best bits of each performance and edit a composite that’s in time and in tune.
—Filtering out the bad stuff: Virtual amps don’t sound 100 percent real but they’re getting closer all the time. Where the current virtual amp systems fall short is in the cabinet and microphone options. I always use an EQ after the amp plugin to get rid of the harshness and fizz that can make an amp sound fake.
If you boost with a sharp Q between 3 kHz and 12 kHz you can usually find 2-3 really nasty areas. Isolate the frequency and cut it out by a few dB.
These are just a few tips to get you started with direct guitar recording.
Jon Tidey is a Producer/Engineer who runs his own studio, EPIC Sounds, and enjoys writing about audio on his blog AudioGeekZine.com. To comment or ask questions about this article, go here.
Thursday, July 18, 2013
Riedel Supplies 3G Productions With Versatile Intercom Systems For Live Events
Riedel Communications announced that audio company 3G Productions has invested in Riedel gear to support its growing business in live-entertainment and corporate events.
3G Productions, based in Las Vegas and Los Angeles, is a provider of live production services, sound equipment rentals, sales, and installed sound. The Riedel Artist digital matrix intercom systems and associated keypanels, beltpacks, and headsets facilitate fast, flexible setup and quick reconfiguration, as well as the broadcast-quality audio critical in sophisticated live communications applications.
“To meet the needs of our diverse and well-regarded client base, we relentlessly analyze and pursue the best-performing equipment on the market,” said Keith Conrad, director of marketing at 3G Productions. “Riedel gear meets this standard, and we look forward to developing a strong and lasting relationship with the company.
“As our business continues to grow, the quality and versatility of Riedel’s intercom systems will help us to continue providing our clients with the highest quality of service.”
In an initial purchase of Riedel equipment, 3G Productions invested in Artist 64 and Artist 32 frames with fiber support of up to 10 km. Each of four purchased Performer C-44 PLUS systems can serve as a stand-alone digital partyline system for smaller requirements.
The nine OLED Artist 1100 keypanels accompanying the Artist frames include four 16-key desktop units, two 12-key in-rack units, and three 26-key in-rack units. 3G Productions also purchased 24 beltpacks and two dozen headsets in a variety of styles. More recently, the company announced plans to purchase an additional Artist 128 frame to meet future demand.
The Artist digital matrix intercom system is based on a dual optical fiber ring to form a single large full-summing, non-blocking distributed matrix with 1,024 x 1,024 ports. While the system “feels” like a single unit, it has no limitations in the number of cross-points within or between the different nodes of the system.
Allowing nodes to be placed miles from one another when necessary and providing 64 and 32 intercom ports per matrix frame, the Artist systems purchased by 3G Productions allow for a high degree of decentralization of the entire matrix in a very cost-effective way. With the freedom to optimize placement of the matrix frames and intercom stations, 3G Productions can reduce the cost and time required for wiring and installation.
Wednesday, July 17, 2013
Microphone Approaches For A Wide Range Of Meeting Facilities & Applications
Basic principles and selection of the right mics for the specific situation
In order to select a microphone for a specific application, and to apply it properly, it’s first necessary to know the important characteristics of the sound source(s) and of the sound system.
Once these are defined, a look at the five areas of microphone specifications will lead to an appropriate match. Finally, proper use of the microphone, by correct placement and operation, will ensure best performance.
Following are recommendations for some of the most common meeting facility sound applications.
The desired sound source, for a lectern microphone, is a speaking voice. Undesired sound sources that may be present are nearby loudspeakers (possibly overhead) and ambient sound (possibly ventilation, traffic noise, and reverberation).
The sound system in this and the following examples is assumed to be high quality with balanced low-impedance microphone inputs.
The basic performance requirements for a lectern microphone can be met by either dynamic or condenser types, so the choice of operating principle is often determined by other factors, such as appearance.
In particular, the desire for an unobtrusive microphone is best satisfied by a condenser microphone, which can maintain high performance even in very small sizes. If phantom power is available, a condenser is an excellent choice. If not, dynamic types, though somewhat larger, are available with similar characteristics.
For the microphone to match the desired sound source (the talker’s voice), it must first have a frequency response that covers the speech range (approximately 100 Hz to 10 kHz).
Within that range the response can be flat, if the sound system and the room acoustics are very good, but often a shaped response will improve intelligibility. Above 10 kHz and below 100 Hz, the response should roll off smoothly, to avoid pickup of noise and other sounds outside of the speech range, and to minimize proximity effect.
The choice of microphone directionality that will maximize pickup of the voice and minimize undesired sounds is unidirectional. This type will also reduce the potential for feedback, since it can be aimed toward the talker and away from loudspeakers.
Depending on how much the person speaking moves about, or on how close the microphone can be placed, a particular type may be chosen: a cardioid for moderately broad, close-up coverage; a supercardioid or a hypercardioid for progressively narrower or more distant coverage.
The electrical characteristics of the microphone are primarily determined by the sound system: in this case, a balanced low-impedance type would match the inputs on the mixer.
Of course, this would be the desired choice in almost all systems due to the inherent benefits of lower noise and longer cable capability.
The sensitivity of the microphone should be in the medium-to-high range since the sound source (speaking voice) is not excessively loud and is picked up from a slight distance. Again, this is most easily accomplished by a condenser type.
The choice of physical design for a lectern microphone must blend performance with actual use. The most effective approach is a gooseneck-mounted type, which places the microphone close to the sound source and away from both the reflective surface of the lectern and noise from the handling of materials on it.
Another approach is the use of a boundary microphone on the lectern surface, but this method is limited by lectern design and by the potential for noise pickup.
As mentioned above, the desired physical design may also suggest the operating principle. The most effective small gooseneck or boundary styles are condensers.
The ideal placement of a lectern microphone is 6 to 12 inches away from the mouth, and aimed toward the mouth. This will give good pickup of the voice and minimum pickup of other sources.
Also, locating the microphone a few inches off-center will reduce breath noise that might occur directly in front of the mouth. It is not recommended that two microphones be used on a lectern as comb filtering interference is likely to occur.
Proper operation of the microphone requires correct connection to the sound system with quality cables and connectors, and correct phantom power if a condenser is used. Use a shock mount to control mechanical noise from the lectern itself.
Some microphones are equipped with low-cut or low-end roll-off filters, which may further reduce low frequency mechanical or acoustic noise.
Goosenecks should be quiet when flexed. It is strongly recommended that a pop filter be placed on the microphone to control explosive breath sounds, especially when using miniature condenser types.
Good technique for lectern microphone use includes:
—Do adjust the microphone position for proper placement.
—Do maintain a fairly constant distance of 6 to 12 inches.
—Don’t blow on microphone, or touch microphone or mount when in use.
—Don’t make excess noise with materials on lectern.
—Do speak in a clear and well-modulated voice.
The desired sound source at a meeting table is a speaking voice. Undesired sounds may include direct sound, such as an audience or loudspeakers, and ambient noise sources such as building noise or the meeting participants. A boundary microphone is the physical design best suited to this application.
It will minimize interference effects due to reflections from the table surface and will also result in increased microphone sensitivity. A condenser type is the most effective for this configuration, due to its high performance and small size.
The frequency response should be slightly shaped for the vocal range and will usually benefit from a slight presence rise. A unidirectional (typically, a cardioid) pattern will give the broadest coverage with good rejection of feedback and noise.
Finally, the microphone should have a balanced low-impedance output, and moderate-to-high sensitivity.
The microphone should be placed flat on the table, at a distance of two to three feet from, and aimed toward, the normal position of the talker.
If possible, it should be located or aimed away from other objects and from any local noise such as page turning. If there is more than one distinct position to be covered, position additional microphones according to the 3-to-1 rule.
The microphone should be connected and powered (if necessary) in the proper fashion. If the table itself is a source of noise or vibration, isolate the microphone from it with a thin foam pad.
A low-frequency filter may be a desirable or even necessary feature. A pop filter is not normally required. And make certain the microphones are never covered with papers.
Good technique for meeting table microphone use includes:
—Do observe proper microphone placement.
—Do speak within coverage area of the microphone.
—Don’t make excess noise with materials on table.
—Do project the voice, due to greater microphone distance.
Handheld Speech Microphone
The desired sound source, for a handheld microphone, is a speaking voice. Undesired sounds may include loudspeakers, other talkers, ventilation noise, and other various ambient sounds.
Suitable microphone performance for this application can be provided by dynamics or condensers. Due to frequent handling and the potential for rough treatment, dynamic microphones are most often used, though durable condensers are also available.
The preferred frequency response is shaped with a presence rise for intelligibility and low roll-off for control of proximity effect and handling noise.
These microphones are typically unidirectional. A cardioid pattern is most common, while supercardioid and hypercardioid types may be used in difficult noise or feedback situations.
Balanced low-impedance output configuration is standard while sensitivity may be moderate-to-low due to the higher levels from close-up vocal sources.
Finally, the physical design is optimized for comfortable hand-held use, and generally includes an integral windscreen/pop filter and an internal shock mount. An on-off switch may be desirable in some situations.
Placement of handheld microphones at a distance of four to twelve inches from the mouth, aimed towards it, will give good pickup of the voice, relative to other sources.
In addition, locating the microphone slightly off-center, but angled inward, will reduce breath noise.
With high levels of unwanted ambient noise, it may be necessary to hold the microphone closer. If the distance is very short, especially less than four inches, proximity effect will greatly increase the low-frequency response.
Though this may be desirable for many voices, a low-frequency roll-off may be needed to avoid a “boomy” or “muddy” sound. Additional pop filtering may also be required for very close use.
Use of rugged, flexible cables with reliable connectors is an absolute necessity with handheld microphones. A stand or holder should also be provided if it is desirable to use the microphone hands-free. Finally, the correct phantom power should be provided if a condenser microphone is used.
Good technique for handheld microphone use includes:
—Do hold microphone at proper distance for balanced sound.
—Do aim microphone toward mouth and away from other sounds.
—Do use low frequency roll-off to control proximity effect.
—Do use pop filter to control breath noise.
—Don’t create noise by excessive handling.
—Do control loudness with voice rather than moving microphone.
The desired sound source, for a lavalier microphone, is a speaking voice. Undesired sources include other talkers, clothing or “movement” noise, ambient sound, and loudspeakers.
A condenser lavalier microphone will give excellent performance in a very small package, though a dynamic may be used if phantom power is not available or if the size is not critical.
Lavalier microphones have a specially shaped frequency response to compensate for off-axis placement (loss of high frequencies), and sometimes for chest resonance (boost of middle frequencies) .
The most common polar pattern is omnidirectional, though unidirectional types may be used to control excessive ambient noise or severe feedback problems. However, unidirectional types have inherently greater sensitivity to breath and handling noise.
Balanced low-impedance output is preferred as usual. Sensitivity can be moderate, due to the relatively close placement of the microphone. The physical design is optimized for body-worn use. This may be done by means of a clip, a pin, or a neck cord. Small size is very desirable.
For a condenser, the necessary electronics are often housed in a separate small pack, also capable of being worn or placed in a pocket.
Some condensers incorporate the electronics directly into the microphone connector. Provision must also be made for attaching or routing the cable to minimize interference with movement. Wireless versions simplify this task.
Placement of lavalier microphones should be as close to the mouth as is practical, usually a few inches below the neckline on a lapel, a tie, or a lanyard, or at the neckline in the case of a woman’s dress.
Omnidirectional types may be oriented in any convenient way, but a unidirectional type must be aimed in the direction of the mouth. Avoid placing the microphone underneath layers of clothing or in a location where clothing or other objects may touch or rub against it. This is especially critical with unidirectional types. Locate and attach the cable to minimize pull on the microphone and to allow walking without stepping or tripping on it.
A wireless lavalier system eliminates this problem and provides complete freedom of movement. Again, use only high quality cables, and provide phantom power if required.
Good technique for use of lavalier microphones includes:
—Do observe proper placement and orientation.
—Do use pop filter if needed, especially with unidirectional.
—Don’t breathe on or touch microphone or its cable.
—Don’t turn head away from microphone.
—Do mute lavalier mic when using lectern or table microphone.
—Do speak in a clear and distinct voice.
The desired sound source is a group of talkers. Undesired sound sources may include loudspeakers and various ambient sounds. The use of audience microphones is governed, to some extent, by the intended destination of the sound.
In general, high level sound reinforcement of the audience in a meeting facility is not recommended. In fact, it is impossible in most cases, unless the audience itself is acoustically isolated from the sound system loudspeakers.
Use of audience microphones to cover the same acoustic space as the sound system loudspeakers results in severe limitations on gain before feedback. The absolute best that can be done in this circumstance is very low level reinforcement in the immediate audience area, and medium level reinforcement to distant areas, such as balconies or foyers.
Destinations such as isolated listening areas, recording equipment, or broadcast audiences, can receive higher levels because feedback is not a factor in these locations.
A condenser is the type of microphone most often used for audience applications, as condensers are generally more capable of flat, wide-range frequency response. The most appropriate directional type is a unidirectional pattern, usually a cardioid.
A supercardioid or a hypercardioid may be used for slightly greater ambient sound rejection.
Balanced low-impedance output must be used exclusively and the sensitivity should be high because of the greater distance between the source and the microphone. This higher sensitivity is also easier to obtain with a condenser design.
The physical design of a microphone for audience pickup should lend itself to some form of overhead mounting, typically hanging. It may be supported by its own cable or by some other mounting method.
Finally, it may be a full size microphone, or a miniature type for unobtrusive placement. A particular method that is sometimes suggested for overhead placement is a ceiling-mounted microphone, usually a boundary microphone. This position should be used with caution, for two reasons.
First, it often places the microphone too far from the desired sound source, especially in the case of a high ceiling. Second, the ceiling, in buildings of modern construction, is often an extremely noisy location, due to air handling noise, lighting fixtures, and building vibration.
Remember that a microphone does not reach out and capture sound. It only responds to the sound that has travelled to it. If the background noise is as loud or louder at the microphone than the sound from the talker below, there is no hope of picking up a usable sound from a ceiling-mounted microphone.
Placement of audience microphones falls into the category known as area coverage. Rather than one microphone per sound source, the object is to pick up multiple sound sources with one (or more) microphone(s).
Obviously, this introduces the possibility of interference effects unless certain basic principles, such as the “3-to-1 rule,” are followed.
For one microphone, picking up a typical audience, the suggested placement is a few feet in front of, and a few feet above, the heads of the first row. It should be centered in front of the audience and aimed at the last row.
In this configuration, a cardioid microphone can cover up to 20-30 talkers, arranged in a rectangular or wedge-shaped section.
For larger audiences, it may be necessary to use more than one microphone. Since the pickup angle of a microphone is a function of its directionality (approximately 130 degrees for a cardioid), broader coverage requires more distant placement.
As audience size increases, it will eventually violate the cardinal rule: place the microphone as close as practical to the sound source.
In order to determine the placement of multiple microphones for audience pickup, remember the following rules:
1) The microphone-to-microphone distance should be at least three times the source-to-microphone distance (3-to-1 rule).
2) Avoid picking up the same sound source with more than one microphone.
3) Use the minimum number of microphones necessary.
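Rule 1 and the cardioid pickup angle mentioned earlier both reduce to simple geometry. The helpers below are illustrative only; the 130-degree figure is the approximation given above, not a measured specification.

```python
import math

def min_mic_spacing_ft(source_to_mic_ft):
    """3-to-1 rule: adjacent microphones should be at least three times
    farther from each other than each is from its sound source."""
    return 3.0 * source_to_mic_ft

def lateral_coverage_ft(distance_ft, pickup_angle_deg=130.0):
    """Approximate audience width covered by one microphone at a given
    distance, from its pickup angle (about 130 degrees for a cardioid)."""
    return 2.0 * distance_ft * math.tan(math.radians(pickup_angle_deg / 2.0))

print(min_mic_spacing_ft(3.0))             # 9.0 ft between microphones
print(round(lateral_coverage_ft(4.0), 1))  # ~17.2 ft of audience width
```

The second helper also shows why broader coverage forces more distant placement, which eventually collides with the close-as-practical rule.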
For multiple microphones, the objective is to divide the audience into sections that can each be covered by a single microphone. If the audience has any existing physical divisions (aisles or boxes), use these to define basic sections.
If the audience is a single large entity, and it becomes necessary to choose sections based solely on the coverage of the individual microphones, use the following spacing: one microphone for each lateral section of approximately 8 to 10 feet.
Microphone Positioning For Audience Pick-Up
If the audience is unusually deep (more than 6 or 8 rows), it may be divided into two vertical sections of several rows each, with aiming angles adjusted accordingly. In any case, it’s better to use too few microphones than too many.
Once hanging microphones are positioned, and the cables have been allowed to stretch out, they should be secured to prevent turning or other movement by air currents or temperature changes. Fine thread or fishing line will accomplish this with minimum visual impact. Use only high quality cables and connectors, particularly if miniature types are specified.
Many older meeting facilities are very reverberant spaces, which provide natural, acoustic reinforcement for the audience, though sometimes at the expense of speech intelligibility. In spaces like this, it is often very difficult to install a successful sound system as the acoustics of the space work against the system.
Most well-designed modern architecture has been engineered for a less reverberant space, both for greater speech intelligibility, and to accommodate modern forms of multimedia presentations. This results in a greater reliance on electronic reinforcement.
The use of audience microphones is normally reserved for recording, broadcast, and other isolated destinations. Their output is almost never intended to be mixed into the sound system for local reinforcement.
If it is desired to loudly reinforce an individual member of the audience, it can only be done successfully with an individual microphone placed amid the meeting participants: a stand-mounted type that the member can approach or a handheld type (wired or wireless) that can be passed to the member.
Good technique for use of audience microphones includes:
—Do place the microphones properly.
—Do use the minimum number of microphones.
—Do turn down unused microphones.
—Don’t attempt to “over-amplify” the audience.
—Do speak in a strong and natural voice.
Today, the life of meeting facilities extends far beyond just meetings, to include classes, plays, and social events. Sound systems can play an important role in all of these situations.
While it’s not possible to detail microphone techniques for every application, a few examples will show how to use some of the ideas already presented.
Though most classrooms are not large enough to require the use of a sound system, it is sometimes necessary to record a class, or to hold a very large class in an auditorium. In these cases, it is suggested that the teacher wear a wireless lavalier microphone to allow freedom of movement and to maintain consistent sound quality.
If it is desired to pick up the responses of students, it is possible to use area microphones in a recording application, but not with a sound system. A better technique is for questions to be presented at a fixed stand microphone, or to pass a wireless microphone to the student.
Microphone use for plays and other theatrical events involves both individual and area coverage. Professional productions usually employ wireless microphones for all the principal actors. This requires a complete system (microphone, transmitter, receiver) for each person, and the frequencies must be selected so that all systems will work together without interference.
While it’s possible to purchase or rent a large number of wireless systems, it’s often more economical to combine just a few wireless systems with area microphones for the rest of the players.
Use unidirectional boundary microphones for “downstage” (front) pickup, and use unidirectional hanging microphones for “upstage” (rear) pickup. Always use a center microphone, because most stage action occurs at center stage.
Use flanking microphones to cover side areas but observe the 3-to-1 rule and avoid overlapping coverage. Turn up microphones only as needed.
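The 3-to-1 rule referenced above says that adjacent open microphones should be spaced at least three times the microphone-to-source distance apart. A quick check, with illustrative function names:

```python
def min_mic_spacing(source_distance_ft: float) -> float:
    """Minimum spacing between two open mics under the 3-to-1 rule."""
    return 3.0 * source_distance_ft

def spacing_ok(mic_spacing_ft: float, source_distance_ft: float) -> bool:
    """True if the layout satisfies the 3-to-1 rule."""
    return mic_spacing_ft >= min_mic_spacing(source_distance_ft)

# A hanging mic 2 ft from the performers needs its neighbor at least 6 ft away:
print(min_mic_spacing(2.0))   # 6.0
print(spacing_ok(5.0, 2.0))   # False -- too close; expect phase cancellation
```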
Social events, such as dances, generally require only public address coverage. Use unidirectional, hand-held or stand mounted microphones. Dynamic types are excellent choices, because of their rugged design.
The microphone should be equipped with an on-off switch if it is not possible to turn down the microphone channel on the sound system. In any case, turn up the microphone(s) only as needed.
Outdoor use of microphones is, in some ways, less difficult than indoor. Sound outdoors is not reflected by walls and ceilings, so reverberation is not present. Without reflected sound, the potential for feedback is also reduced.
However, the elements of nature must be considered: wind, sun, and rain. Because of these factors, dynamic types are most often used, especially when rain is likely. In any case, adequate windscreens are a must.
Microphone principles are the same outdoors, so unidirectional patterns are still preferred. Finally, because of frequent long cable runs outdoors, balanced low-impedance models are required.
Though it is one of the smallest links in the audio chain, the microphone is perhaps the most important. As it’s the connection between sound source and the sound system, it must interact efficiently with each. Choosing this link successfully requires knowledge of sound and sound systems, microphones, and the actual application.
Through the examples given, the correct selection and use of microphones for a variety of meeting facility sound requirements has been indicated. Applying these basic principles will assist in many additional situations.
The subject of microphone selection and application for meeting facility sound systems is ever changing, as new needs are found and as microphone designs develop to meet them. However, the basic principles of sound sources, sound systems, and the microphone that links them remain the same, and should prove useful for any future application.
This article is provided by Shure.
Tuesday, July 16, 2013
Extron Introduces H.264 Streaming Media Decoder For AV Applications
Decodes live AV streaming content from SME 100 encoders and plays back AV media files available from network shares
Extron Electronics has introduced the SMD 101, a compact, high performance H.264 streaming media decoder used with Extron SME 100 encoders to provide end-to-end AV streaming systems.
The SMD 101 is designed specifically for use in professional AV streaming applications to decode live AV streaming content from SME 100 encoders or to play back AV media files available from network shares. It accepts streaming resolutions up to 1080p/60 and outputs a variety of resolutions, from 640 x 480 up to 1920 x 1200.
Fill/Follow/Fit aspect ratio management provides choices for managing streaming content that does not match the display. This compact, energy-efficient decoder is a counterpart to the SME 100 encoder, suited to deployment in simple overflow and monitoring applications, multi-channel streaming systems, and high-resolution signage systems.
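To illustrate what “Fit”-style scaling generally involves (a generic sketch of the technique, not Extron’s implementation), the source is scaled by the smaller of the width and height ratios so it fits the display without distortion:

```python
def fit(src_w: int, src_h: int, dst_w: int, dst_h: int) -> tuple:
    """Scale a source into a display while preserving aspect ratio;
    letterboxing or pillarboxing fills the remainder."""
    scale = min(dst_w / src_w, dst_h / src_h)
    return round(src_w * scale), round(src_h * scale)

# A 1080p source on a 1920 x 1200 display stays 1920 x 1080,
# with 60-pixel bars top and bottom:
print(fit(1920, 1080, 1920, 1200))  # (1920, 1080)
```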
“The SMD 101 distinguishes itself from other decoders with its advanced AV signal processing and integration-friendly control options,” says Casey Hall, vice president of sales and marketing for Extron. “These features provide simple solutions to the many challenges encountered when integrating streaming into AV systems, and make the SMD 101 compatible with a wide range of AV streaming applications.”
The SMD 101 is adaptable to different network conditions and streaming requirements, offering both push and pull streaming configurations. Audio output signals are available as HDMI embedded audio as well as analog stereo audio, making it directly compatible with embedded display speakers or existing audio systems.
The SMD 101 offers integration-friendly control capabilities, including an optional handheld IR remote, wired IR, RS-232, and Ethernet. An easy-to-navigate web interface provides simple, flexible control and management.
Friday, July 12, 2013
Unsettled Science: The Latest Findings On Micro-Molecular Sonic Imprinting
Signals developing minds of their own...
Editor’s Note: Nothing discussed here is true…Right?
For several decades, military research teams around the world have quietly been studying micro-molecular imprinting and the many ways that it can disrupt critical communication systems.
You’ve probably experienced problems yourself, in a small way, but indications strongly suggest that as copper, Ethernet, and fiber signal transport systems become more and more heavily traveled, the issues will eventually reach a critical state. And few industries will be affected more than the entertainment technology sector.
Apparently what happens is this: as signals are repeatedly transmitted through galvanic electrical conductors – and this includes fiber optics as well – the molecular structure of the conductors begins to take on a form of its own, not related to the original cable or fiber material.
While still under research, this is believed to be one of the leading causes of emails not being replied to. In many instances replies actually were made but never got past the molecular imprinting. (See “The Problems of Email for Mission Critical Applications” by Commander K. Jennings, Communications Branch of the US Military-Industrial Research Complex).
A simple fix was identified in which the email server’s Ethernet cable merely needs to be reversed in direction, which in turn corrects the imprinting. But as it’s rather inconvenient to constantly be unplugging and re-plugging Ethernet cables, I propose that a reversing switch (Figure 1) be implemented instead. Schematics are available upon request.
And, dear reader, in case you fail to see that we’re thinking far ahead on your behalf, the switch (as shown) is capable of handling the new high-current Ethernet protocol that will be introduced in mid 2015.
Figure 1: The project requires the latest in high-tech design and fabrication. (click to enlarge)
The worldwide supply of silicon is widely known to be collapsing at a logarithmic rate, which means that devices based on silicon components such as DSP, TTL, CPU and other LSI “chips” will soon be superseded by vacuum tubes, or valves…as some like to call them.
In fact, numerous studies predict that valves will soon be the active element of choice for Ethernet routers, switches, and long-distance relays. Penny stocks are already soaring at start-ups such as Westenhouse, R3A, and Cylvania – all of whom are building valve manufacturing facilities in super-secret locations even as this is written.
The reversing switch proposed here will easily handle the higher voltages and current demanded by the new TCP (tube control protocol) with a minimum of fuss. Rubber gloves are, however, recommended to avoid electrical shock when activating the switch.
While fiber transport systems would seem to mitigate the problems of molecular imprinting, they actually exacerbate them.
One tragic story we’ve heard occurred when the fiber system from a Twisted Sister tour was put back on the road with Sister Sledge without first deploying best “flushing” practices. The molecules had no idea what to do. The result was that “We Are Family” became “We’re Not Gonna Take It,” much to the chagrin of the Sledge siblings, who started shouting “twisted sister” (followed by numerous vulgarities) at each other during the first concert of their tour.
But perhaps the worst incidence to date was when the fiber system from a tour by The Who was next used for a Frankie Lymon tribute band. Due to a severe case of micro-molecular imprinting, the popular 1950s song “Why Do Fools Fall in Love” became “Won’t Get Fooled Again,” and it led to an immediate, well-publicized breakup between the only-just-divorced lead singer and his new supermodel fiancée. Word is that he still can’t figure out what happened or “Who” might have done this to him.
Of course, analog systems are just as notorious. In another well-publicized fiasco a couple of decades ago, U.S. President George H.W. Bush’s insightful statement of “no new faxes” inadvertently morphed into “no new taxes,” which was not something he meant to say. It seems that the system had been used at a tax convention just before his visionary statement about the future of global electronic communication. Who would have guessed that he envisioned the demise of the fax machine in favor of email at such an early stage?
Fortunately, most problems can be mitigated by a short purging session prior to re-use. A new app called iNeutralize™ is available for all major platforms. It supplies the much-needed information of how to best accomplish system flushing in the minimal amount of time.
For example, the micro-molecular imprint from a six-month Justin Bieber tour can be quickly “flushed” in a few minutes by playing Hendrix (any selection will do), followed by a moment or two of the “1812 Overture.”
The point is that extreme caution is advised when re-using a transport system, be it digital, copper, or fiber, especially if you’re going to be working with music one day and corporate industrials the next. You don’t want a speech about “pilates” to inadvertently emerge as Emerson, Lake and Palmer’s “Pirates.”
Fortunately, judicious usage of iNeutralize will remove the risk. Additional information will be reported in this publication as soon as it is declassified.
Ken DeLoria spent several months in captivity working with a secret government agency as a consultant on sonic molecular imprinting, and how it’s played a pivotal role in inadvertently altering numerous public pronouncements of politicians, with a special emphasis on campaign promises.
Middle Atlantic Introduces Sit Stand Functionality For ViewPoint Consoles
System functionality promotes proper ergonomics, and is now available pre-installed
Middle Atlantic Products has introduced height-adjustable Sit/Stand functionality for its family of ViewPoint consoles and technical furniture.
Sit/Stand lift system functionality promotes proper ergonomics for healthy work environments, and is now available pre-installed in welded ViewPoint console bays or as a stand-alone work station model.
These new models feature three convenient pre-set height settings for fatigue elimination and increased user comfort, especially in multi-shift environments.
The Sit/Stand lift legs do not intrude in equipment mounting bays, leaving the full console bay width available for equipment. Quick shipment is possible with a 3-week standard lead time, the same offered for all standard ViewPoint orders.
Available in the same variety of finishes as the complete ViewPoint system, the new Sit/Stand models expand the line with a flexible offering suited to a wide range of situations and uses.
All ViewPoint technical furniture ships fully welded for reduced assembly times.
Middle Atlantic Products
Posted by Keith Clark on 07/12 at 03:03 PM
Church Sound: MIDI Over Network—Getting Started
It's easier than you think -- and makes things much simpler
I’m a big fan of networking, connectivity and remote control. I like it when I can get computers to do the boring stuff so I get to do the fun stuff.
This post is another installment in that series. Today, we’re talking about how to send MIDI commands over a network. I’ll leave the what and why for the next post, and today focus on the how.
Here’s the basic idea: I have a digital audio console that can send MIDI commands with snapshots (or macros, but that’s another post). Also in the tech booth are various other pieces of gear—lighting consoles, audio playback applications, ProPresenter—that can receive and act on MIDI commands.
And while I could set up an elaborate MIDI distribution network with 5-pin MIDI cable, interfaces and MIDI DAs, that’s so, well, ordinary. And it’s a pain. Not to mention expensive. Besides, there’s a much simpler way if you’re using Macs. It goes like this:
Sitting on my DiGiCo SD8 console is a 17-inch MacBook Pro. Between the MBP and SD8 is a MOTU FastLane MIDI interface. I go out of the SD8 into the FastLane, which is connected to the MBP via USB. The MBP becomes the “master sender” (that’s a technical term I made up) for the rest of the network.
The MBP is connected to my Sound network via Ethernet. We have a Mac Mini at FOH that does a bunch of stuff, but for this exercise, it runs a DJ app called Mixxx. What’s cool about Mixxx is that it can be controlled by MIDI.
Also in the tech booth is another Mini that is Bootcamped into Windows 7, and runs the Hog 4PC lighting controller. That Mini is also connected to my Sound network (via wireless for reasons that are beyond the scope of this article).
Further down the booth sits our Mac Pro running ProPresenter with the MIDI module. That Mac is connected to our regular in-house network. However, because the sound network is NAT’ed out to the house network, we can still get MIDI signals out there.
So that’s the basic layout, here’s how it works.
Built-In MIDI Networking
Every Mac running OS X (starting with Leopard, I believe) has a network module for MIDI. You’ve probably seen it but didn’t know how it worked.
Start by opening Audio MIDI Setup. Once there, select Show MIDI Window from the Window menu. Sitting there, innocuously enough, is a “Network” icon. Double-clicking on it opens this dialog box:
When we set up a session, it looks like this. I’ve named this session MIDI Master so I can easily tell it apart from others on the network.
To start, click the + under the My Sessions window. You can name it whatever you like, but I suggest naming it something useful, like the computer name or a nickname you’ll remember.
You can also specify the Bonjour name, which just makes it easier to make sure everything is connected. Once everything is named, click on “Enable” to make it active.
Now go around to the other computers you want to connect and do the same thing. Keep the names distinct so you don’t lose track of what you’re doing. Once you have everything active, you can simply start connecting the “nodes” to the “master.” From the node, select your master computer from the Directory, and hit connect.
I’ve made my 17-inch MBP the “master,” and everything connects to it. You can actually send MIDI notes bi-directionally, but in my case, I want to send commands from the SD8 to all the computers in the booth. Hence, everything else is a node. When you’re done, the dialog looks like this:
Here you can see the other computers on the network, and I’ve connected to the Hog PC. Any notes generated on my MIDI Master will be sent to the Hog PC right now.
What About Windows?
Sadly, not all of our production gear runs on Mac OS. Sometimes, we need to suffer in Windows.
But all is not lost. Some enterprising young man wrote a little program called rtpMIDI that basically does exactly what the Mac OS MIDI network stack does.
He even copied the dialog box exactly, so setup is exactly the same. I loaded this up on the Mini that we use for the Hog and it locked right up to the MBP master.
Because it uses Bonjour, connecting is usually as easy as waiting for the node to show up in the dialog box and hitting connect. If Bonjour gives you trouble, you can also enter an IP address to connect directly.
Generally, the connections will re-establish themselves after a power cycle, but sometimes they don’t. It’s become part of my normal workflow to go around and make sure everything is connected after we power up all the computers each weekend.
Oddly, the rtpMIDI Windows computer connects the most reliably. I can usually connect the ProPresenter Mac from the MBP, but the FOH Mini fails to connect when I do it that way, so I have to go to the Mini itself and connect back to the MBP. I’m not sure why. But once it’s all connected, it works quite well.
Once you do all of this (and it actually only takes a couple of minutes to configure and a few seconds to connect everything), what’s going on? Well, basically, any MIDI notes that originate on any computer will be sent over the network to every other node on the network. So I can now send MIDI commands and notes from the SD8 to any computer in the booth. But there is a catch.
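Whether it travels over a 5-pin cable or a network session, a MIDI channel message is just a few bytes. A Note On, for example, is a status byte (0x90 plus the channel number) followed by the note and velocity; a minimal sketch:

```python
def note_on(channel: int, note: int, velocity: int) -> bytes:
    """Build a raw MIDI Note On message (channel 0-15, note/velocity 0-127)."""
    if not (0 <= channel <= 15 and 0 <= note <= 127 and 0 <= velocity <= 127):
        raise ValueError("MIDI value out of range")
    return bytes([0x90 | channel, note, velocity])

# Middle C (note 60) at full velocity on channel 1 (index 0):
print(note_on(0, 60, 127).hex())  # 903c7f
```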
You Need A Translator
Or more correctly, something to generate the MIDI notes. Or perhaps even more correctly, something to get the MIDI notes coming in from the SD8 via the FastLane to go out to the network.
I have yet to try it, but I think MIDIPipe would also do this; instead, I ended up running a small, free program called VMPK (Virtual MIDI Piano Keyboard). Basically, I tell it to listen to MIDI commands coming in from the FastLane and send out MIDI to the network. It doesn’t do anything else but pass notes through.
The only thing VMPK does is take notes in from the FastLane and send them back out to the network. Note that the output name is Network MIDI Master, which is the network session we set up earlier.
I have this set to launch automatically and hide, so we don’t even know it’s running. It just sits there in the background and sends notes along to the network.
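The translator’s job can be sketched in a few lines. The loop below is a stand-in for what VMPK is doing here; in a real setup the ports would come from a MIDI library (mido, for instance), and the stub message lists are my own:

```python
def passthrough(messages, send):
    """Forward every incoming MIDI message, unchanged, to the network output."""
    count = 0
    for msg in messages:
        send(msg)  # resend to the "Network MIDI Master" session
        count += 1
    return count

# Stub I/O standing in for real MIDI ports:
incoming = [b"\x90\x3c\x7f", b"\x80\x3c\x00"]  # note on / note off for middle C
sent = []
forwarded = passthrough(incoming, sent.append)
print(forwarded, sent == incoming)  # 2 True
```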
It’s Easier Than It Sounds
Yes, it took me over 1,000 words to describe the process, and I spent a few hours getting it all figured out. But now that I’ve done the hard work, it won’t take you more than a few minutes to get it all working.
And once it is, what can you do with it? If you’re like me, your mind is already racing with possibilities. If not, stay tuned for the next post and I’ll tell you what we’re doing with it.
Mike Sessler is the Technical Director at Coast Hills Community Church in Aliso Viejo, CA. He has been involved in live production for over 20 years and is the author of the blog Church Tech Arts. He also hosts a weekly podcast called Church Tech Weekly on the TechArtsNetwork.