Primer: Acoustic Characteristics Of Live Sound Reinforcement

March 08, 2013, by Tim Vear

ProSoundWeb

Sound Waves
Sound moves through the air like waves in water. Sound waves consist of pressure variations traveling through the air.

When the sound wave travels, it compresses air molecules together at one point. This is called the high pressure zone or positive component (+).

After the compression, an expansion of molecules occurs. This is the low pressure zone or negative component (-).

This process continues along the path of the sound wave until its energy becomes too weak to hear. The sound wave of a pure tone traveling through air would appear as a smooth, regular variation of pressure that could be drawn as a sine wave.

Frequency, Wavelength & Speed of Sound
The frequency of a sound wave indicates the rate of pressure variations or cycles. One cycle is a change from high pressure to low pressure and back to high pressure.

The number of cycles per second is measured in hertz, abbreviated “Hz.” So, a 1,000 Hz tone has 1,000 cycles per second.

Schematic of a sound wave.

The wavelength of a sound is the physical distance from the start of one cycle to the start of the next cycle. Wavelength is related to frequency by the speed of sound. The speed of sound in air is about 1130 feet per second or 344 meters/second. The speed of sound is constant no matter what the frequency.

The wavelength of a sound wave of any frequency can be determined by this relationship: wavelength = speed of sound ÷ frequency. For example, a 1,000 Hz tone has a wavelength of about 1.13 ft (0.34 m), while a 100 Hz tone has a wavelength of about 11.3 ft (3.4 m).
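The frequency-to-wavelength relationship can be sketched numerically, assuming the speed-of-sound figures given above (the function names here are mine):

```python
SPEED_OF_SOUND_FT = 1130.0  # ft/s, speed of sound in air
SPEED_OF_SOUND_M = 344.0    # m/s

def wavelength_ft(frequency_hz):
    """Wavelength in feet of a tone at the given frequency."""
    return SPEED_OF_SOUND_FT / frequency_hz

def wavelength_m(frequency_hz):
    """Wavelength in meters of a tone at the given frequency."""
    return SPEED_OF_SOUND_M / frequency_hz

# A 1,000 Hz tone is about 1.13 ft long; a 100 Hz tone about 11.3 ft.
print(round(wavelength_ft(1000), 2), round(wavelength_ft(100), 1))
```

Note that because the speed of sound is constant, halving the frequency always doubles the wavelength.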

Loudness
The fluctuation of air pressure created by sound is a change above and below normal atmospheric pressure. This is what the human ear responds to. The varying amount of pressure of the air molecules compressing and expanding is related to the apparent loudness at the human ear. The greater the pressure change, the louder the sound.

Under ideal conditions the human ear can sense a pressure change as small as 0.0002 microbar (1 microbar = one millionth of a bar, roughly one millionth of normal atmospheric pressure). The threshold of pain is about 200 microbars, one million times greater!

Obviously the human ear responds to a wide range of sound amplitude. This amplitude range is more commonly measured in decibels of sound pressure level (dB SPL), relative to 0.0002 microbar (0 dB SPL).

0 dB SPL is the threshold of hearing and 120 dB SPL is the threshold of pain. 1 dB is about the smallest change in SPL that can be heard. A 3 dB change is generally noticeable, while a 6 dB change is very noticeable. A 10 dB SPL increase is perceived to be twice as loud!
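The dB SPL scale described above is just 20 times the base-10 logarithm of the pressure ratio. A minimal sketch, assuming the 0.0002 microbar reference (function name is mine):

```python
import math

P_REF_MICROBAR = 0.0002  # threshold of hearing, defined as 0 dB SPL

def spl_db(pressure_microbar):
    """Sound pressure level in dB SPL relative to 0.0002 microbar."""
    return 20.0 * math.log10(pressure_microbar / P_REF_MICROBAR)

print(round(spl_db(0.0002)))  # threshold of hearing: 0 dB SPL
print(round(spl_db(200)))     # threshold of pain: 120 dB SPL
```

The million-to-one pressure range between the two thresholds collapses to a convenient 0-120 dB scale.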

Sound Propagation
There are four basic ways in which sound can be altered by its environment as it travels or propagates: reflection, absorption, diffraction and refraction.

Ambient sounds.

1. Reflection - A sound wave can be reflected by a surface or other object if the object is physically as large or larger than the wavelength of the sound. Because low frequency sounds have long wavelengths they can only be reflected by large objects.

Higher frequencies can be reflected by smaller objects and surfaces as well as large. The reflected sound will have a different frequency characteristic than the direct sound if all frequencies are not reflected equally. Reflection is also the source of echo, reverb, and standing waves:

Echo occurs when a reflected sound is delayed long enough (by a distant reflective surface) to be heard by the listener as a distinct repetition of the direct sound.

Reverberation consists of many reflections of a sound, maintaining the sound in a reflective space for a time even after the direct sound has stopped.

Standing waves in a room occur for certain frequencies related to the distance between parallel walls. The original sound and the reflected sound will begin to reinforce each other when the distance between two opposite walls is equal to a multiple of half the wavelength of the sound.

This happens primarily at low frequencies due to their longer wavelengths and relatively high energy.
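Under the half-wavelength condition described above, the standing-wave (axial mode) frequencies between two parallel walls can be estimated as f = n × c / (2 × L). A sketch under that assumption (function name is mine):

```python
SPEED_OF_SOUND_FT = 1130.0  # ft/s

def axial_mode_frequencies(wall_spacing_ft, count=4):
    """Frequencies whose half-wavelengths fit a whole number of times
    between two parallel walls: f_n = n * c / (2 * L)."""
    return [n * SPEED_OF_SOUND_FT / (2.0 * wall_spacing_ft)
            for n in range(1, count + 1)]

# A room with walls 20 ft apart reinforces roughly 28, 57, 85, 113 Hz...
print([round(f, 1) for f in axial_mode_frequencies(20)])
```

Consistent with the text, the strongest modes of typical rooms fall at low frequencies, where wavelengths are comparable to room dimensions.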

2. Absorption - Some materials absorb sound rather than reflect it. Again, the efficiency of absorption is dependent on the wavelength. Thin absorbers like carpet and acoustic ceiling tiles can affect high frequencies only, while thick absorbers such as drapes, padded furniture and specially designed bass traps are required to attenuate low frequencies.

Reverberation in a room can be controlled by adding absorption: the more absorption the less reverberation. Clothed humans absorb mid and high frequencies well, so the presence or absence of an audience has a significant effect on the sound in an otherwise reverberant venue.

3. Diffraction - A sound wave will typically bend around obstacles in its path which are smaller than its wavelength. Because a low frequency sound wave is much longer than a high frequency wave, low frequencies will bend around objects that high frequencies cannot.

The effect is that high frequencies tend to have a higher directivity and are more easily blocked while low frequencies are essentially omnidirectional. In sound reinforcement, it is difficult to get good directional control at low frequencies for both microphones and loudspeakers.

4. Refraction - The bending of a sound wave as it passes through some change in the density of the environment. This effect is primarily noticeable outdoors at large distances from loudspeakers due to atmospheric effects such as wind or temperature gradients. The sound will appear to bend in a certain direction due to these effects.

Direct Vs Ambient Sound
A very important property of direct sound is that it becomes weaker as it travels away from the sound source. The amount of change follows the inverse-square law: sound intensity is inversely proportional to the square of the distance from the source. When the distance from a sound source doubles, the sound level decreases by 6 dB. This is a noticeable decrease.

For example, if the sound from a guitar amplifier is 100 dB SPL at 1 ft. from the cabinet it will be 94 dB at 2 ft., 88 dB at 4 ft., 82 dB at 8 ft., etc. Conversely, when the distance is cut in half the sound level increases by 6 dB: It will be 106 dB at 6 inches and 112 dB at 3 inches!

On the other hand, the ambient sound in a room is at nearly the same level throughout the room. This is because the ambient sound has been reflected many times within the room until it is essentially nondirectional. Reverberation is an example of non-directional sound.

For this reason the ambient sound of the room will become increasingly apparent as a microphone is placed further away from the direct sound source. In every room, there is a distance (measured from the sound source) where the direct sound and the reflected (or reverberant) sound become equal in intensity.

In acoustics, this is known as the Critical Distance. If a microphone is placed at the Critical Distance or farther, the sound quality picked up may be very poor. This sound is often described as “echoey”, reverberant, or “bottom of the barrel”. The reflected sound overlaps and blurs the direct sound.

Critical distance may be estimated by listening to a sound source at a very short distance, then moving away until the sound level no longer decreases but seems to be constant. That distance is critical distance.

A unidirectional microphone should be positioned no farther than 50 percent of the critical distance, e.g. if the critical distance is 10 feet, a unidirectional mic may be placed up to 5 feet from the sound source.

Highly reverberant rooms may require very close microphone placement. The amount of direct sound relative to ambient sound is controlled primarily by the distance of the microphone to the sound source and to a lesser degree by the directional pattern of the mic.

Phase Relationships & Interference Effects
The phase of a single frequency sound wave is always described relative to the starting point of the wave or 0 degrees. The pressure change is also zero at this point. The peak of the high pressure zone is at 90 degrees, the pressure change falls to zero again at 180 degrees, the peak of the low pressure zone is at 270 degrees, and the pressure change rises to zero at 360 degrees for the start of the next cycle.

Two identical sound waves starting at the same point in time are called “in-phase” and will sum together creating a single wave with double the amplitude but otherwise identical to the original waves. Two identical sound waves with one wave’s starting point occurring at the 180 degree point of the other wave are said to be “out of phase” and the two waves will cancel each other completely.

Sound pressure wave.

When two sound waves of the same single frequency but different starting points are combined the resulting wave is said to have “phase shift” or an apparent starting point somewhere between the original starting points.

This new wave will have the same frequency as the original waves but will have increased or decreased amplitude depending on the degree of phase difference. Phase shift, in this case, indicates that the 0 degree points of two identical waves are not the same.

Phase relationships.

Most sound waves are not a single frequency but are made up of many frequencies. When identical multiple-frequency sound waves combine, there are three possibilities for the resulting wave: a doubling of amplitude at all frequencies if the waves are in phase, a complete cancellation at all frequencies if the waves are 180 degrees out of phase, or partial cancellation and partial reinforcement at various frequencies if the waves have an intermediate phase relationship. The results may be heard as interference effects.

The first case is the basis for the increased sensitivity of boundary or surface-mount microphones. When a microphone element is placed very close to an acoustically reflective surface both the incident and reflected sound waves are in phase at the microphone.

This results in a 6 dB increase (doubling) in sensitivity, compared to the same microphone in free space. This occurs for reflected frequencies whose wavelength is greater than the distance from the microphone to the surface: if the distance is less than one-quarter inch this will be the case for frequencies up to at least 18 kHz.

However, this 6 dB increase will not occur for frequencies that are not reflected, that is, frequencies that are either absorbed by the surface or that diffract around the surface.

High frequencies may be absorbed by surface materials such as carpeting or other acoustic treatments. Low frequencies will diffract around the surface if their wavelength is much greater than the dimensions of the surface: the boundary must be at least 5 ft. square to reflect frequencies down to 100 Hz.
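The two limits just described, the highest frequency reinforced for a given mic-to-surface distance and the smallest boundary that still reflects a given frequency, can be sketched from the wavelength criteria in the text. This is a rough reading of those criteria, not a precise acoustic model, and the function names are mine:

```python
SPEED_OF_SOUND_FT = 1130.0  # ft/s

def max_in_phase_frequency_hz(mic_to_surface_in):
    """Highest frequency whose wavelength still exceeds the mic-to-surface
    distance (the text's rough criterion for in-phase reinforcement)."""
    return SPEED_OF_SOUND_FT * 12.0 / mic_to_surface_in  # ft/s -> in/s

def min_boundary_size_ft(frequency_hz):
    """Rough minimum boundary dimension needed to reflect a frequency:
    about half its wavelength."""
    return (SPEED_OF_SOUND_FT / frequency_hz) / 2.0

print(round(max_in_phase_frequency_hz(0.25)))  # well above 18 kHz
print(round(min_boundary_size_ft(100), 2))     # about 5.6 ft for 100 Hz
```

Both results are consistent with the figures in the text: a quarter-inch spacing keeps the entire audible band in phase, and a roughly 5 ft boundary is needed to reflect down to 100 Hz.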

The second case occurs when two closely spaced microphones are wired out of phase, that is, with reverse polarity. This usually only happens by accident, due to miswired microphones or cables but the effect is also used as the basis for certain noise-canceling microphones.

In this technique, two identical microphones are placed very close to each other (sometimes within the same housing) and wired with opposite polarity. Sound waves from distant sources which arrive equally at the two microphones are effectively canceled when the outputs are mixed.

Polarity reversal.

However, sound from a source which is much closer to one element than to the other will be heard. Such close-talk microphones, which must literally have the lips of the talker touching the grille, are used in high-noise environments such as aircraft and industrial paging but rarely with musical instruments due to their limited frequency response.

It is the last case which is most likely in musical sound reinforcement, and the audible result is a degraded frequency response called “comb filtering.” The pattern of peaks and dips resembles the teeth of a comb and the depth and location of these notches depend on the degree of phase shift.

Multi-mic comb filtering.

With microphones this effect can occur in two ways. The first is when two (or more) mics pick up the same sound source at different distances. Because it takes longer for the sound to arrive at the more distant microphone there is effectively a phase difference between the signals from the mics when they are combined (electrically) in the mixer.

The resulting comb filtering depends on the sound arrival time difference between the microphones: a large time difference (long distance) causes comb filtering to begin at low frequencies, while a small time difference (short distance) moves the comb filtering to higher frequencies.
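The relationship between path difference and notch location can be worked out directly: the delayed copy is 180 degrees out of phase, and so cancels, at f = 1 / (2 × delay). A sketch (function name is mine):

```python
SPEED_OF_SOUND_FT = 1130.0  # ft/s

def first_notch_hz(path_difference_ft):
    """Lowest comb-filter notch when two copies of a sound, whose travel
    paths differ by path_difference_ft, are summed: the delayed copy is
    180 degrees out of phase at f = 1 / (2 * delay)."""
    delay_s = path_difference_ft / SPEED_OF_SOUND_FT
    return 1.0 / (2.0 * delay_s)

# A 1 ft path difference notches at about 565 Hz; a 0.1 ft difference
# pushes the first notch up to about 5.6 kHz.
print(round(first_notch_hz(1.0)), round(first_notch_hz(0.1)))
```

This matches the text's rule of thumb: large time differences drag the comb filtering down into the audible midrange, small ones push it toward high frequencies.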

The second way for this effect to occur is when a single microphone picks up a direct sound and also a delayed version of the same sound. The delay may be due to an acoustic reflection of the original sound or to multiple sources of the original sound.

Reflection comb filtering.

A guitar cabinet with more than one speaker or multiple loudspeaker cabinets for a single instrument would be examples. The delayed sound travels a longer distance (longer time) to the mic and thus has a phase difference relative to the direct sound.

When these sounds combine (acoustically) at the microphone, comb filtering results. This time the effect of the comb filtering depends on the distance between the microphone and the source of the reflection or the distance between the multiple sources.

The 3-to-1 Rule
When it is necessary to use multiple microphones or to use microphones near reflective surfaces the resulting interference effects may be minimized by using the 3-to-1 rule.

For multiple microphones the rule states that the distance between microphones should be at least three times the distance from each microphone to its intended sound source. The sound picked up by the more distant microphone is then at least 12 dB less than the sound picked up by the closer one. This ensures that the audible effects of comb filtering are reduced by at least that much.

For reflective surfaces, the microphone should be at least one and a half times as far from that surface as it is from its intended sound source. Again, this ensures minimum audibility of interference effects.

Strictly speaking, the 3-to-1 rule is based on the behavior of omnidirectional microphones. It can be relaxed slightly if unidirectional microphones are used and they are aimed appropriately, but should still be regarded as a basic rule of thumb for worst case situations.

3-to-1 rule.
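The level separation behind the 3-to-1 rule follows from the inverse-square law. A sketch (the function name and the example geometry are mine; with mics 3 ft apart and each source 1 ft from its own mic, the second source can sit roughly 4 ft from the far mic, which is one geometry consistent with the 12 dB figure above):

```python
import math

def separation_db(near_ft, far_ft):
    """Inverse-square level difference between a near and a far
    mic-to-source distance."""
    return 20.0 * math.log10(far_ft / near_ft)

# Source 1 ft from its own mic, about 4 ft from the other mic:
print(round(separation_db(1.0, 4.0)))  # roughly 12 dB of separation
```

A bare 3:1 distance ratio gives about 9.5 dB from distance alone; directional mics aimed at their own sources add further rejection.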

Potential Acoustic Gain Vs. Needed Acoustic Gain
The basic purpose of a sound reinforcement system is to deliver sufficient sound level to the audience so that they can hear and enjoy the performance throughout the listening area.

As mentioned earlier, the amount of reinforcement needed depends on the loudness of the instruments or performers themselves and the size and acoustic nature of the venue. This Needed Acoustic Gain (NAG) is the amplification factor necessary so that the furthest listeners can hear as if they were close enough to hear the performers directly.

To calculate NAG: NAG = 20 x log (Df/Dn)

Where: Df = distance from sound source to furthest listener

Dn = distance from sound source to nearest listener
log = logarithm to base 10

Note: the sound source may be a musical instrument, a vocalist or perhaps a loudspeaker.

The equation for NAG is based on the inverse-square law, which says that the sound level decreases by 6 dB each time the distance to the source doubles. For example, the sound level (without a sound system) at the first row of the audience (10 feet from the stage) might be a comfortable 85 dB. At the last row of the audience (80 feet from the stage) the level will be 18 dB less or 67 dB.

In this case the sound system needs to provide 18 dB of gain so that the last row can hear at the same level as the first row. The limitation in real-world sound systems is not how loud the system can get with a recorded sound source but rather how loud it can get with a microphone as its input. The maximum loudness is ultimately limited by acoustic feedback.
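The NAG formula and the worked example above can be sketched directly (function name is mine):

```python
import math

def needed_acoustic_gain_db(d_far, d_near):
    """NAG = 20 log(Df/Dn): gain needed so the furthest listener hears
    the source at the same level as the nearest listener."""
    return 20.0 * math.log10(d_far / d_near)

# Text example: nearest row 10 ft from the stage, last row 80 ft.
print(round(needed_acoustic_gain_db(80, 10)))  # about 18 dB
```

Three doublings of distance (10 to 20 to 40 to 80 ft) at 6 dB each give the same 18 dB result.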

Potential Acoustic Gain (PAG).

The amount of gain-before-feedback that a sound reinforcement system can provide may be estimated mathematically. This Potential Acoustic Gain involves the distances between sound system components, the number of open mics, and other variables. The system will be sufficient if the calculated Potential Acoustic Gain (PAG) is equal to or greater than the NAG. Here is an illustration showing the key distances:

The simplified PAG equation is:

PAG = 20 (log D1 - log D2 + log D0 - log Ds) - 10 log NOM - 6

Where: PAG = Potential Acoustic Gain (in dB)

Ds = distance from sound source to microphone
D0 = distance from sound source to listener
D1 = distance from microphone to loudspeaker
D2 = distance from loudspeaker to listener
NOM = the number of open microphones
-6 = a 6 dB feedback stability margin
log = logarithm to base 10
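The simplified PAG equation translates directly to code. A sketch using the variable definitions above (the function name and the example distances are mine, chosen only for illustration):

```python
import math

def potential_acoustic_gain_db(ds, d0, d1, d2, nom=1, margin_db=6.0):
    """Simplified PAG:
    20 (log D1 - log D2 + log D0 - log Ds) - 10 log NOM - 6."""
    return (20.0 * (math.log10(d1) - math.log10(d2)
                    + math.log10(d0) - math.log10(ds))
            - 10.0 * math.log10(nom) - margin_db)

# Hypothetical venue: mic 1 ft from source, furthest listener 80 ft away,
# loudspeaker 20 ft from the mic and 60 ft from that listener.
pag = potential_acoustic_gain_db(ds=1, d0=80, d1=20, d2=60)
print(round(pag, 1))  # about 22.5 dB
```

Comparing this against the NAG for the same venue (18 dB in the earlier example) shows whether the system has adequate gain-before-feedback.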

In order to make PAG as large as possible, that is, to provide the maximum gain-before-feedback, the following rules should be observed:

1) Place the microphone as close to the sound source as practical.

2) Keep the microphone as far away from the loudspeaker as practical.

3) Place the loudspeaker as close to the audience as practical.

4) Keep the number of microphones to a minimum.

In particular, the logarithmic relationship means that to make a 6 dB change in the value of PAG the corresponding distance must be doubled or halved.

For example, if a microphone is 1 ft. from an instrument, moving it to 2 ft. away will decrease the gain-before-feedback by 6 dB while moving it to 4 ft. away will decrease it by 12 dB.

On the other hand, moving it to 6 in. away increases gain-before-feedback by 6 dB while moving it to only 3 in. away will increase it by 12 dB. This is why the single most significant factor in maximizing gain-before-feedback is to place the microphone as close as practical to the sound source.

The NOM term in the PAG equation reflects the fact that gain-before-feedback decreases by 3 dB every time the number of open (active) microphones doubles.

For example, if a system has a PAG of 20 dB with a single microphone, adding a second microphone will decrease PAG to 17 dB and adding a third and fourth mic will decrease PAG to 14 dB. This is why the number of microphones should be kept to a minimum and why unused microphones should be turned off or attenuated.
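The NOM penalty described above is just the 10 log NOM term of the PAG equation. A quick sketch (function name is mine):

```python
import math

def nom_penalty_db(num_open_mics):
    """Gain-before-feedback lost to open microphones: 10 log NOM."""
    return 10.0 * math.log10(num_open_mics)

# Each doubling of open mics costs about 3 dB:
for n in (1, 2, 4, 8):
    print(n, "mics:", round(nom_penalty_db(n)), "dB penalty")
```

With 8 open mics a system gives up roughly 9 dB of gain-before-feedback compared to a single mic, which is why unused mics should be muted.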

Essentially, the gain-before-feedback of a sound system can be evaluated strictly from the relative locations of sources, microphones, loudspeakers, and audience, together with the number of microphones, without regard to the actual type of component. Though quite simple, the results are very useful as a best-case estimate.

Understanding principles of basic acoustics can help to create an awareness of potential influences on reinforced sound and to provide some insight into controlling them. When effects of this sort are encountered and are undesirable, it may be possible to adjust the sound source, use a microphone with a different directional characteristic, reposition the microphone or use fewer microphones, or possibly use acoustic treatment to improve the situation. Keep in mind that in most cases, acoustic problems can best be solved acoustically, not strictly by electronic devices.

General Rules
Microphone technique is largely a matter of personal taste—whatever method sounds right for the particular instrument, musician, and song is right. There is no one ideal microphone to use on any particular instrument. There is also no one ideal way to place a microphone. Choose and place the microphone to get the sound you want. We recommend experimenting with a variety of microphones and positions until you create your desired sound.

However, the desired sound can often be achieved more quickly and consistently by understanding basic microphone characteristics, sound-radiation properties of musical instruments, and acoustic fundamentals as presented above.

Here are some suggestions to follow when miking musical instruments for sound reinforcement.

1) Try to get the sound source (instrument, voice, or amplifier) to sound good acoustically (“live”) before miking it.

2) Use a microphone with a frequency response that is limited to the frequency range of the instrument, if possible, or filter out frequencies below the lowest fundamental frequency of the instrument.

3) To determine a good starting microphone position, try closing one ear with your finger. Listen to the sound source with the other ear and move around until you find a spot that sounds good. Put the microphone there. However, this may not be practical (or healthy) for extremely close placement near loud sources.

4) The closer a microphone is to a sound source, the louder the sound source is compared to reverberation and ambient noise. Also, the Potential Acoustic Gain is increased—that is, the system can produce more level before feedback occurs. Each time the distance between the microphone and sound source is halved, the sound pressure level at the microphone (and hence the system) will increase by 6 dB. (Inverse Square Law)

5) Place the microphone only as close as necessary. Too close a placement can color the sound source’s tone quality (timbre), by picking up only one part of the instrument. Be aware of Proximity Effect with unidirectional microphones and use bass rolloff if necessary.

6) Use as few microphones as are necessary to get a good sound. To do that, you can often pick up two or more sound sources with one microphone. Remember: every time the number of microphones doubles, the Potential Acoustic Gain of the sound system decreases by 3 dB. This means that the volume level of the system must be turned down for every extra mic added in order to prevent feedback. In addition, the amount of noise picked up increases as does the likelihood of interference effects such as comb-filtering.

7) When multiple microphones are used, the distance between microphones should be at least three times the distance from each microphone to its intended sound source. This will help eliminate phase cancellation. For example, if two microphones are each placed one foot from their sound sources, the distance between the microphones should be at least three feet. (3-to-1 Rule)

8) To reduce feedback and pickup of unwanted sounds:

a) place microphone as close as practical to desired sound source

b) place microphone as far as practical from unwanted sound sources such as loudspeakers and other instruments

c) aim unidirectional microphone toward desired sound source (on-axis)

d) aim unidirectional microphone away from undesired sound source (180 degrees off-axis for cardioid, 126 degrees off-axis for supercardioid)

e) use minimum number of microphones

9) To reduce handling noise and stand thumps:

a) use an accessory shock mount (such as the Shure A55M)

b) use an omnidirectional microphone

c) use a unidirectional microphone with a specially designed internal shock mount

10) To reduce “pop” (explosive breath sounds occurring with the letters “p,” “b,” and “t”):

a) mic either closer or farther than 3 inches from the mouth (because the 3-inch distance is worst)

b) place the microphone out of the path of pop travel (to the side, above, or below the mouth)

c) use an omnidirectional microphone

d) use a microphone with a pop filter. This pop filter can be a ball-type grille or an external foam windscreen

11) If the sound from your loudspeakers is distorted the microphone signal may be overloading your mixer’s input. To correct this situation, use an in-line attenuator, or use the input attenuator on your mixer to reduce the signal level from the microphone.

Supplied by Shure Incorporated.
