Every object in nature has a “preferred” or “natural” frequency at which it will vibrate
April 11, 2012, by Neil Thompson Shade
Previously (here and here), we’ve been looking at sound on a “microscopic” level, examining particle motion as sound propagates through air.
This time, let’s look at a larger picture of sound wave propagation.
A vibrating object disturbs the surrounding air medium, causing localized changes in pressure and particle displacement that transfer acoustical energy in the form of a sound wave.
Waves can be broadly classified as either transverse or longitudinal. The distinction lies in how the particles of the medium move relative to the direction in which the wave travels.
For a transverse wave, the particles of the medium move at right angles to the direction of the wave propagation.
Examples of transverse waves include ocean waves, vibrating strings on a musical instrument, and light and other electro-magnetic radiation.
For a longitudinal wave, the direction of wave propagation is parallel to (the same direction as) the motion of the medium particles.
Sound waves in air propagate as longitudinal waves. Figure 1 shows concepts for transverse and longitudinal wave motion.
Figure 1. Transverse (top) and longitudinal wave motion (bottom) showing transmission medium motion and wave propagation direction. Image credit: Sennheiser. (click to enlarge)
A sound wave in air propagates away from a source as a wavefront, a surface over which the particle vibrations share the same phase, traveling at 343 m/s (at roughly room temperature). Sources can take the form of simple geometry, called point or monopole (e.g., single loudspeaker or small hole in a wall), line (e.g., moving vehicular traffic or many closely spaced loudspeakers), plane (e.g., top plate of double bass, large building surfaces, or large two-dimensional loudspeaker array), or complex, comprising two or more simple sources (e.g., musical instrument or machine).
The wavefront takes on a spherical geometry at distances from the source that are large compared to the source dimensions; these are called spherical waves. At farther distances the wavefronts appear to be flat (planar) and are called plane progressive waves. Plane waves are sound waves in their simplest form.
The opposite of a plane progressive wave is the standing wave. Yet another wave type is the cylindrical wave, produced by a series of point sources radiating in phase with each other to form a line source.
Waves, regardless of whether spherical, cylindrical, or plane, can be considered to be simple or complex. A simple wave is a wave that comprises only one frequency, such as a sinusoid. A complex wave comprises a fundamental sinusoid and harmonics, with the harmonics either being simple integer multiples (2, 3, 4…etc.) of the fundamental sinusoid or non-integer multiples (1.35, 2.21, 3.05 etc.), as occurs for many percussion instruments.
Through a mathematical process called Fourier Analysis, we can decompose a complex wave into the fundamental and harmonic frequencies, their relative amplitudes, and phase relationships. Figure 2 shows a Fourier analysis of simple and complex waves.
Figure 2. Fourier analysis of simple (top) and complex waves (middle and bottom). Figures to the left show wave amplitude as function of time. Figures to the right show wave amplitude as a function of frequency (fundamental and harmonics). Top figure is for a tuning fork; middle figure is for a clarinet; and bottom figure for a trumpet. (click to enlarge)
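As a rough sketch of Fourier analysis in practice, the following Python snippet (using NumPy; the 220 Hz fundamental and the harmonic amplitudes are illustrative values, not taken from the figure) builds a complex wave from a fundamental and two integer harmonics, then recovers their frequencies from the spectrum:

```python
import numpy as np

fs = 8000                      # sample rate, Hz (one second of samples)
t = np.arange(fs) / fs
f0 = 220.0                     # fundamental frequency, Hz (illustrative)

# Complex wave: fundamental plus two integer-multiple harmonics
wave = (1.00 * np.sin(2 * np.pi * f0 * t)
        + 0.50 * np.sin(2 * np.pi * 2 * f0 * t)
        + 0.25 * np.sin(2 * np.pi * 3 * f0 * t))

# Fourier analysis: decompose the wave into its frequency components
spectrum = np.abs(np.fft.rfft(wave)) / (fs / 2)   # normalized amplitudes
freqs = np.fft.rfftfreq(fs, d=1 / fs)             # bin frequencies, Hz

# The three strongest components fall at the fundamental and its harmonics
peaks = sorted(float(f) for f in freqs[np.argsort(spectrum)[-3:]])
print(peaks)    # [220.0, 440.0, 660.0]
```

Running the same decomposition on a recorded clarinet or trumpet note would reveal the harmonic structure shown in the middle and bottom panels of Figure 2.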
Two or more simple or complex sound waves can combine with each other through an additive process called the law of superposition. The resulting complex wave, assuming that the waves are linearly related, is the sum of the displacements due to each sound source.
Sound waves are linearly related when pressure remains directly proportional to particle displacement. Non-linear acoustic behavior typically occurs when the source sound pressure level exceeds 140 dB.
While conceptually simple, the law of superposition is complex since the particle displacement (ξ) and particle velocity (u) of each sound wave may not always be in the same direction because the sound waves can arrive from any arbitrary location. Remember too, that particle displacement and particle velocity are functions of time and frequency.
Thus, the law of superposition requires a vector summation of waves. Waves from opposite directions will add momentarily together at a finite point in space and then pass through each other as the wavefronts continue propagating in their respective directions.
As the law of superposition describes, waves can add constructively, resulting in greater amplitude, or destructively, resulting in reduced amplitude. What determines the resultant amplitude is the relative phase of the waves.
Waves that are perfectly in-phase (0-degree phase difference) add together with no destructive behavior. Waves that are perfectly out-of-phase (180-degree phase difference) add together to result in effectively zero amplitude. Most complex waves have phase relationships that vary as a function of frequency and do not combine in such simple relationships as described above.
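These phase relationships can be sketched in a few lines of Python (the 100 Hz frequency and the sample count are arbitrary illustrative choices). The function sums two unit-amplitude sinusoids of the same frequency at a given phase offset and reports the peak amplitude of the result:

```python
import math

def sample_sum(freq_hz, phase_deg, n=1000):
    """Sum two unit-amplitude sinusoids of the same frequency, the second
    offset by phase_deg, and return the peak amplitude over one period."""
    phase = math.radians(phase_deg)
    peak = 0.0
    for i in range(n):
        t = i / n / freq_hz            # sample times spanning one period
        y = (math.sin(2 * math.pi * freq_hz * t)
             + math.sin(2 * math.pi * freq_hz * t + phase))
        peak = max(peak, abs(y))
    return peak

print(round(sample_sum(100, 0), 2))    # in-phase: peak 2.0 (constructive)
print(round(sample_sum(100, 180), 2))  # out-of-phase: peak 0.0 (destructive)
print(round(sample_sum(100, 90), 2))   # 90 degrees apart: peak 1.41 (partial)
```

The 90-degree case illustrates the general situation for complex waves: the result is neither full reinforcement nor full cancellation.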
JUST BEAT IT
One interesting wave addition phenomenon is that of beats. Beats occur when two sinusoids of slightly different frequency, typically less than 15 Hz apart, combine at a point in space.
Because the two waves have slightly different frequencies, they will have varying phase relationships, resulting in times when the waves are partially in-phase and partially out-of-phase with each other.
Thus, the waves will add constructively and destructively resulting in slowly varying amplitude.
For example, if the two frequencies are 220 and 229 Hz, the sinusoids will drift into phase (interfering constructively) and out of phase (interfering destructively) 9 times per second. The sound level will vary from loud to soft at a rate of 9 Hz.
Most people can perceive beat frequencies up to about 15 Hz. Beyond this value the sensation of “roughness” occurs with no beating. Further separation of the two sinusoids results in perceiving each as a separate frequency. This is one basis for determining the critical bandwidths of the ear.
Figure 3 shows the generation of a beat frequency.
Figure 3. Generation of beat frequency (bottom) from two sinusoids of slightly different frequencies (top and middle). (click to enlarge)
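The 220/229 Hz example can be checked numerically. The sketch below confirms the standard trigonometric identity behind beating: the sum of the two sinusoids equals a carrier at the average frequency (224.5 Hz) multiplied by a slowly varying envelope, and the audible loud/soft rate is the 9 Hz difference frequency:

```python
import math

f1, f2 = 220.0, 229.0          # two sinusoids, 9 Hz apart

# The beat rate heard is the difference frequency
beat_rate = abs(f2 - f1)
print(beat_rate)               # 9.0 loud/soft cycles per second

# Numerically confirm: sum of the two sinusoids equals an envelope
# (varying at the half-difference frequency) times a carrier at the mean
for i in range(1000):
    t = i / 1000.0
    direct = math.sin(2 * math.pi * f1 * t) + math.sin(2 * math.pi * f2 * t)
    envelope = 2 * math.cos(2 * math.pi * (f2 - f1) / 2 * t)
    carrier = math.sin(2 * math.pi * (f1 + f2) / 2 * t)
    assert abs(direct - envelope * carrier) < 1e-9
```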
When a sound wave approaches a boundary surface, a portion of the incident energy is reflected and a portion is absorbed by the surface. The absorbed sound is the sum of the dissipated losses within the boundary medium and the portion transmitted through the boundary.
The characteristic impedance of the boundary surface determines the ratio of absorbed sound to incident sound. Architectural materials are far denser than air, so most of the sound energy is reflected away from the boundary surface.
Two broad classes of sound reflections can occur: standing waves and specular reflections. Standing waves are the result of the law of superposition. Specular reflections are not based on the law of superposition. The sound absorption mechanism described above is applicable to both standing waves and specular reflections.
Standing waves result from interference of two or more waves that repeatedly pass through each other when traveling back and forth between the room boundaries. The result is a wave that appears stationary having regions of maximum amplitude (antinodes) and minimum amplitude (nodes).
ON THE SURFACE
For rooms, the standing waves are referred to as room modes. Three types of room modes occur: axial, tangential, and oblique. Each room mode type is supported by an increasing number of room surface pairs.
Figure 4. Axial standing waves fundamental mode (top), second mode (middle), and third mode (bottom). (click to enlarge)
Axial modes require two opposite room surfaces (one pair); tangential modes require four room surfaces (two pairs); and oblique modes require six room surfaces (three pairs).
Axial modes are the most audible. The tangential and oblique modes are respectively 6 and 12 dB less than the axial modes. Figure 4 shows axial standing waves (room modes).
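Axial mode frequencies for a single pair of parallel surfaces follow the standard relation f_n = n · c / (2L), where c is the speed of sound and L is the surface spacing. A minimal sketch (the 7 m room dimension is hypothetical):

```python
c = 343.0   # speed of sound in air, m/s

def axial_modes(length_m, count=3):
    """First few axial mode frequencies (Hz) for one pair of parallel
    surfaces separated by length_m, from f_n = n * c / (2 * L)."""
    return [round(n * c / (2 * length_m), 1) for n in range(1, count + 1)]

# Hypothetical 7 m room dimension
print(axial_modes(7.0))   # [24.5, 49.0, 73.5]
```

Note how the fundamental axial mode of a typical room falls at a low frequency, which is why modal problems are chiefly a low-frequency concern.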
A specular reflection occurs when the angle of the incident wavefront at the boundary surface equals the angle of the reflected wavefront. This reflection phenomenon only occurs when the wavelength of the incident sound is less than approximately one-fourth of the boundary surface dimension.
Under these conditions, the reflections can be approximated as rays and the laws of geometrical optics apply. Figure 5 shows simple specular reflection. The wavelength of low frequency sound is often equal to or larger than the room dimensions. When this occurs, there are no specular reflections, and wave acoustics is used for analysis.
Figure 5. Specular reflection where angle of incidence equals angle of reflection. (click to enlarge)
One key concept to remember when sound is incident at a physical boundary is that the particle velocity (u) and acoustic pressure (p) are 90° (π/2 radians) out of phase with each other. At the boundary, the particle velocity will be zero and the pressure will be at a maximum.
This is important when considering sound absorption of materials: the maximum absorption at the lowest frequency of interest will occur at a distance equal to λ/4 from the boundary. At this distance the particle velocity will be a maximum for the frequency corresponding to λ/4. Since most “acoustical” materials are frictional absorbers, a maximum particle velocity will result in the greatest sound absorption.
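The λ/4 rule translates directly into a placement distance for absorptive material. A minimal sketch, assuming c = 343 m/s and a hypothetical 125 Hz design target:

```python
c = 343.0   # speed of sound in air, m/s

def quarter_wave_distance(freq_hz):
    """Distance from a rigid boundary at which particle velocity (and
    hence frictional absorption) peaks: lambda / 4 = c / (4 * f)."""
    return c / (4.0 * freq_hz)

# To absorb effectively down to a hypothetical 125 Hz target, the
# absorber (or an air gap behind it) should extend roughly this far:
print(round(quarter_wave_distance(125.0), 2), "m")   # 0.69 m
```

The required depth grows quickly as the target frequency drops, which is why effective low-frequency absorption demands thick treatments or deep air cavities.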
Resonance is the reinforcement of sound by synchronous vibration. Every object in nature has a “preferred” or “natural” frequency at which the object will vibrate. Imposing an oscillatory force at the same frequency as the object’s natural frequency will cause the object to vibrate at maximum amplitude with little energy input from the exciting force. Changing the “forcing” frequency by even a small amount will sharply decrease the resonant response.
DO THE MATH
When examining a system at resonance, we will observe a maximum peak at the resonant frequency (fO).
The height of the resonant peak will depend on the degree of damping within the vibrating system.
The resonant frequency response can be either very sharp, centered around a high amplitude narrow frequency bandwidth (Δf), or quite broad with lesser amplitude.
The desired acoustical response will determine which characteristic is best.
Systems that have a sharp resonance characteristic are called “high Q”; those with a broad resonance are called “low Q”.
The Q term refers to quality factor and can be calculated by the following equation:

Q = fO / Δf

Where:
Q = quality factor, unitless
fO = resonance frequency, Hz
Δf = frequency bandwidth, Hz, taken as the difference between the -3 dB down points about the resonant frequency
Figure 6 shows both high Q and low Q resonance response.
Figure 6. High Q resonance (without damping) and low Q resonance (with damping). (click to enlarge)
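The Q calculation is a one-liner; the sketch below uses hypothetical -3 dB points at 95 and 105 Hz around a 100 Hz resonance:

```python
def quality_factor(f0_hz, f_lower_hz, f_upper_hz):
    """Q from the resonant frequency and the two -3 dB points:
    Q = f0 / (f_upper - f_lower)."""
    return f0_hz / (f_upper_hz - f_lower_hz)

# Hypothetical resonance at 100 Hz, -3 dB points at 95 and 105 Hz
q = quality_factor(100.0, 95.0, 105.0)
print(q)    # 10.0 -> a fairly sharp ("high Q") resonance
```

Adding damping widens the -3 dB bandwidth, lowering Q and broadening the response, as Figure 6 shows.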
RULES OF THUMB
Try to remember the following, or key them into your PDA or computer “cheat sheet.”
- Most everyday sounds we encounter are complex waves comprising many frequencies.
- Simple point sources radiate sound as spherical wavefronts, assuming the source dimensions are small compared to the wavelength. At greater distances from the source the wavefronts flatten out and become planar.
- Sounds combine due to the law of superposition and the resultant amplitude depends on the amplitude, frequency, and relative phases of each wave.
- Waves will reflect from room surfaces. Specular reflection requires the wavelength to be no more than about one-fourth of the room surface dimension.
- Resonance is the response of a system when driven at its natural frequency. The sharpness of the resonance will depend on the damping within the system. Rooms have special resonant phenomena called room modes.
Neil Thompson Shade has 30 years of experience in consulting and teaching acoustics, noise control and sound system design. He is president and principal consultant of Acoustical Design Collaborative, Ltd., located in Baltimore, and he has also taught acoustics, sound system design, computer modeling and related topics at the Peabody Institute of Johns Hopkins University.
Related articles by Neil Thompson Shade:
Acoustic Fundamentals And The Nature Of Sound
Getting To The Basis Of Everything We Hear