Also, these deviations are different at each location in the room. Therefore, the only practical solution is to measure the actual response of the completed system and correct it as needed with additional circuitry.
This turns out to be trickier than one might expect, however. If a pure tone, slowly swept in frequency, is fed through a sound system and the resulting level is measured at a point in the audience area, the response will be found to consist of strong peaks and dips, tens of decibels in amplitude and spaced at intervals of about 1 Hz, caused by room resonances.
It’s almost impossible to get meaningful information from such readings. Besides, we don’t perceive these variations, because they are averaged by our hearing process in ways that are only partly understood. The measurements must therefore incorporate averaging that simulates the hearing process.
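To illustrate the kind of averaging involved, here is a minimal sketch in Python. The function name, the window width, and the synthetic comb-filtered data are illustrative assumptions, not a standard algorithm: it simply power-averages a narrow-band measurement over a fractional-octave window around each frequency point.

```python
import numpy as np

def fractional_octave_smooth(freqs, levels_db, width=1/3):
    """Average each point's level over a +/- width/2 octave window.

    freqs     -- ascending frequency points in Hz
    levels_db -- measured level at each point, in dB
    width     -- smoothing bandwidth in octaves (1/3 = third-octave)
    """
    half = 2 ** (width / 2)  # geometric half-window factor
    smoothed = np.empty_like(levels_db, dtype=float)
    for i, f in enumerate(freqs):
        mask = (freqs >= f / half) & (freqs <= f * half)
        # Average power rather than dB values, since energy summation
        # better matches how wider-band meters combine narrow-band
        # components.
        power = np.mean(10 ** (levels_db[mask] / 10))
        smoothed[i] = 10 * np.log10(power)
    return smoothed

# Synthetic response with deep dips every few hertz, loosely mimicking
# the comb of peaks and dips a swept pure tone reveals in a room.
freqs = np.arange(100.0, 1000.0, 0.5)
raw = 10 * np.log10(np.maximum(1e-6, np.cos(2 * np.pi * freqs / 10) ** 2))
smooth = fractional_octave_smooth(freqs, raw)
```

The raw curve swings through tens of decibels, while the smoothed curve stays within a few decibels, which is closer to what a listener (or a band-limited analyzer) actually registers.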
However, this presents us with a shopping list of unanswered questions pertaining to the measurement techniques. What frequency resolution (bandwidth) is needed?
A first assumption might be to use a bandwidth similar to that of the auditory (critical-band) filters, but system measurements are typically done with third-octave filters, which are considerably wider than the critical bands over much of the spectrum.
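As a rough numerical comparison, the sketch below sets base-2 third-octave bandwidths against the Zwicker–Terhardt approximation of the auditory critical bandwidth. The comparison is illustrative only; the function names are my own, and the crossover point depends on which critical-band model is assumed.

```python
import math

def third_octave_bandwidth(fc):
    """Bandwidth (Hz) of a base-2 third-octave band centered at fc (Hz)."""
    return fc * (2 ** (1 / 6) - 2 ** (-1 / 6))

def critical_bandwidth(fc):
    """Zwicker & Terhardt approximation of the auditory critical
    bandwidth (Hz) at center frequency fc (Hz)."""
    return 25 + 75 * (1 + 1.4 * (fc / 1000) ** 2) ** 0.69

# Compare across common third-octave center frequencies.
for fc in (125, 250, 500, 1000, 2000, 4000, 8000):
    print(f"{fc:>5} Hz: 1/3-oct {third_octave_bandwidth(fc):6.0f} Hz, "
          f"critical {critical_bandwidth(fc):6.0f} Hz")
```

Under this model, third-octave bands are narrower than the critical bands at low frequencies but substantially wider above roughly 500 Hz, i.e., over most of the speech range.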
Should the analysis be done with a swept filter, which yields more information, or is a stepped filter technique acceptable? What amplitude smoothing or averaging is appropriate? How many measurement locations should be taken, and where should they be located? And exactly how should the individual measurements be averaged to yield the overall system response?
Despite countless practical field experiments in this area, beginning at least 65 years ago, little critical research has been carried out. As a result, there exist only a few de facto standards, and the actual results of these procedures vary considerably in quality.
In addition to these considerations, it might be expected that nonlinear distortion in any of the system’s components, especially the loudspeakers, would significantly affect its timbre, but such does not seem to be the case. The distortion levels of modern components, properly used, are low enough to be unnoticeable in a reinforcement situation.
As the name suggests, intelligibility is the measure of how easy or difficult it is to understand speech over a system. It’s ultimately measured subjectively and directly, typically using rhyming words as the test signal.
The execution of this test is tedious and time-consuming even with a single test subject, and one subject is quite inadequate: different subjects will render somewhat different results under apparently identical conditions, and conditions themselves vary significantly with location, program sound levels, room noise, hearing acuity, and many other factors.
Tools like Rational Acoustics Smaart and Meyer SIM can be of great help, along with a thorough understanding of what’s happening with a system and room.
The typically broad variance of test results makes it difficult to determine whether a system is actually performing acceptably or not. It hardly seems worth the rather considerable effort required to execute such a test, but there may be little choice.
Because of these difficulties, a lot of effort has gone into devising an objective test regime, with several products resulting. All involve dedicated gear and techniques, which, while not simple, are quite preferable to subjective tests.
These objective tests have been demonstrated to produce results comparable to those obtained subjectively in some, but not all, conditions. Unfortunately, the worst correlations tend to occur in conditions that produce low scores, exactly where accurate results are most desired. In fact, after extensive experience with all the commonly used objective techniques, Mapp has concluded that all are inadequate.
A More Physical Approach
It gets worse. Low intelligibility scores, which indicate serious problems, usually provide little or no information on the nature of these problems. Sometimes one or more physical problems are apparent in such cases, but are these really the causes of the poor performance?
Often, the only way to be sure is to correct the problems and see if that improves the scores. Of course, this may be completely impractical, and in fact, there may be multiple problems, some masking others, so that correcting the most obvious might accomplish nothing useful.
A much more practical approach might be to identify exactly which physical factors adversely affect speech intelligibility, and how, and calibrate physical measurements to subjective effects.
If this were accomplished, then not only would meaningful test methods be available, but effective design criteria could be established to predict results and avoid problems in the design stage. Some significant work has already been done in this area, with results pointing to the ratio of direct to reflected (or reverberant) sound being the most important factor.
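One way to work with the direct-to-reverberant ratio at the design stage is through the familiar statistical-acoustics approximation for critical distance. The sketch below is just that, a sketch: the 0.057 coefficient is the common metric-units approximation, the function names are mine, and real rooms with non-diffuse reverberant fields will deviate from it.

```python
import math

def critical_distance(Q, V, rt60):
    """Approximate distance (m) at which direct and reverberant levels
    are equal, from the common statistical-acoustics formula
    Dc ~= 0.057 * sqrt(Q * V / RT60).

    Q    -- loudspeaker directivity factor (dimensionless)
    V    -- room volume in cubic meters
    rt60 -- reverberation time in seconds
    """
    return 0.057 * math.sqrt(Q * V / rt60)

def direct_to_reverberant_db(r, Q, V, rt60):
    """Direct-to-reverberant ratio (dB) at listener distance r (m).
    Direct sound falls 6 dB per doubling of distance while the
    statistical reverberant level is roughly constant, so the ratio is
    0 dB at the critical distance and falls 6 dB per doubling beyond."""
    return 20 * math.log10(critical_distance(Q, V, rt60) / r)

# Illustrative numbers: a Q = 10 horn in a 5000 m^3 room, RT60 = 2.0 s.
dc = critical_distance(10, 5000, 2.0)
```

With these example numbers the critical distance works out to about 9 m; a listener at twice that distance sits roughly 6 dB reverberant-dominant, which is where intelligibility typically begins to suffer.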
Think this over, work on your own list, and next time, we’ll look at key quality factors of music reinforcement systems. (You can read the follow-up article here.)
At the time this article was published in LSI, Bob Thurmond served as principal consultant with G. R. Thurmond and Associates of Austin, TX. Also note that it was originally presented as a paper at the 146th Meeting of the Acoustical Society of America.