The majority of amplifiers used in sound systems today are of a constant voltage type. That is, their output voltage remains essentially constant regardless of the load placed on them. Of course, the load must be within the specified operational limits for a given amplifier. The salient point is that for a given drive voltage, a lower impedance loudspeaker will have greater SPL output than a higher impedance loudspeaker, all other things being equal.
Shouldn’t this be reflected in the sensitivity specification of the loudspeaker? Why then would one want to use a 2.0 Vrms signal to drive a 4-ohm loudspeaker and a 2.83 Vrms signal to drive an 8-ohm loudspeaker to determine their respective sensitivities? The reason is that each of those voltages delivers nominally 1 W into the rated impedance, so this convention equalizes the power applied rather than the voltage.
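A quick check of the arithmetic behind those two drive voltages, assuming purely resistive rated impedances, can be sketched as:

```python
# Why 2.0 Vrms for 4 ohms but 2.83 Vrms for 8 ohms? Both deliver
# nominally 1 W into the rated impedance, since P = V^2 / R.
def power_w(v_rms, impedance_ohms):
    """Average power delivered into a purely resistive load."""
    return v_rms ** 2 / impedance_ohms

print(power_w(2.0, 4))   # 1.0 W into 4 ohms
print(power_w(2.83, 8))  # ~1.001 W into 8 ohms
```

So a "1 W / 1 m" sensitivity rating quietly hides the fact that different impedances were driven with different voltages.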
Think about it this way: let’s connect two virtually identical loudspeakers to an A/B selector switch driven by the same amplifier. The only difference between them is that one is rated at 4 ohms, half the impedance of the other, which is rated at 8 ohms.
When switching between these two loudspeakers, the output voltage of the amplifier does not change; the current drawn from the amplifier, however, does. The lower impedance loudspeaker draws twice the power at the same voltage, so it produces greater SPL, nominally 3 dB more, all else being equal.
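The A/B comparison above can be worked through numerically. This sketch assumes both loads are purely resistive at their rated impedances and that SPL tracks electrical power:

```python
import math

def power_w(v_rms, impedance_ohms):
    """Average power into a purely resistive load: P = V^2 / R."""
    return v_rms ** 2 / impedance_ohms

v = 2.83  # the same amplifier drive voltage reaches both loudspeakers
p_4ohm = power_w(v, 4)  # ~2.0 W
p_8ohm = power_w(v, 8)  # ~1.0 W

# Power ratio expressed in dB; with all else equal this is the
# nominal SPL advantage of the lower impedance loudspeaker.
delta_db = 10 * math.log10(p_4ohm / p_8ohm)
print(round(delta_db, 2))  # 3.01
```

Real loudspeakers have complex, frequency-dependent impedances, so the actual difference varies with frequency, but the constant-voltage principle is the same.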
Measuring and specifying sensitivity with the same voltage, regardless of the impedance of the device under test (DUT), would accurately reveal the SPL differences that occur.
From these examples, I hope that it’s clear that the input signal and the magnitude (frequency) response of a loudspeaker will determine the SPL generated, not just the sensitivity rating of the loudspeaker. It’s much better to have knowledge of the loudspeaker’s response in the form of a graph than a single sensitivity number. The latter may be derived from the former.
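The last point, that a single sensitivity number may be derived from the response graph, can be illustrated with a sketch. The response data and the averaging band here are hypothetical, and standards differ on exactly how the averaging is performed (this version simply averages the dB values over the band):

```python
# Hypothetical measured magnitude response:
# (frequency in Hz, SPL in dB at 2.83 Vrms / 1 m)
response = [(100, 88.0), (200, 90.0), (400, 91.0), (800, 90.5),
            (1600, 89.5), (3150, 90.0), (6300, 88.5)]

# Derive a single sensitivity figure by averaging the response
# over a stated band (here 100 Hz to 6.3 kHz).
band = [spl for f, spl in response if 100 <= f <= 6300]
sensitivity_db = sum(band) / len(band)
print(round(sensitivity_db, 1))  # 89.6
```

Note how the single number discards the shape of the curve: two loudspeakers with very different responses can share the same sensitivity rating.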