Recently, I explored the history of stage monitoring (here) – from the very first instances of loudspeakers being spun around for a performer to hear themselves, to the crystal-clear quality offered by monitor wedges and in-ear monitoring (IEM) technology today. So where to next? What will be the future of monitors?
While personal mixers that give musicians individual control of multiple channels via a tablet have been around for some time courtesy of companies such as Aviom and Allen & Heath, a new software-based product from Audio Fusion Systems has taken this idea and brought it to mobile phones. Called Audiofusion, the system uses the musicians’ iPhones as wireless receivers for their IEMs. Specifically, audio is sent digitally to an Apple computer running the company’s proprietary SoundCaster software.
This computer is connected to a dedicated wireless router, along with the iPhones being used for monitors. Sixteen audio channels are available via an app on each iPhone, allowing each musician to create their own mix. Although seasoned monitor engineers and musicians may well be shuddering at this prospect, it might have a market with entry-level setups that can’t yet stretch to a true RF solution.
Putting Things In Place
Last year I stepped into a new world of monitor mixing with the Klang 3D IEM system, which I utilized on The War of the Worlds musical tour (LSI February 2019). Klang’s innovation uses the principles of binaural hearing – that is, how we hear, distinguish and locate sounds naturally – to allow the engineer to “place” different mix elements around the performer in a 3D soundscape within their IEMs.
Using data gathered from mind-boggling amounts of research, the technology creates subtle inter-aural differences in level, time and coloration to replicate the experience of hearing a sound come from beneath, behind, to one side, or high above, “moving” an instrument to wherever the engineer places it – all via a user-friendly iPad or laptop interface.
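To make those inter-aural cues concrete, here’s a toy sketch of the two simplest ones – the interaural time difference (ITD, via Woodworth’s spherical-head approximation) and a crude interaural level difference (ILD) – applied to a mono signal to “place” it at an azimuth. This is a minimal illustration of the general principle only, not Klang’s actual processing; the head radius, the 10 dB maximum ILD, and the sine-law panning curve are all assumptions chosen for the example.

```python
import numpy as np

SAMPLE_RATE = 48_000    # Hz (assumed)
HEAD_RADIUS = 0.0875    # m, a typical average head radius (assumed)
SPEED_OF_SOUND = 343.0  # m/s

def itd_seconds(azimuth_deg):
    """Woodworth's spherical-head approximation of the interaural
    time difference for a source at the given azimuth
    (0 = straight ahead, +90 = hard right)."""
    theta = np.radians(azimuth_deg)
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (theta + np.sin(theta))

def ild_gains(azimuth_deg, max_ild_db=10.0):
    """Crude sine-law level difference: the far ear is attenuated
    by up to max_ild_db at +/-90 degrees. Returns (left, right)."""
    pan = np.sin(np.radians(azimuth_deg))        # -1 .. +1
    far_gain = 10 ** (-max_ild_db * abs(pan) / 20)
    if pan >= 0:                                  # source on the right
        return far_gain, 1.0                      # left ear is "far"
    return 1.0, far_gain                          # right ear is "far"

def place(mono, azimuth_deg):
    """Return a (left, right) stereo pair with the mono signal
    'placed' at the given azimuth via ITD delay + ILD gain."""
    delay = int(round(abs(itd_seconds(azimuth_deg)) * SAMPLE_RATE))
    gl, gr = ild_gains(azimuth_deg)
    delayed = np.concatenate([np.zeros(delay), mono])[:len(mono)]
    if azimuth_deg >= 0:                          # delay the far (left) ear
        return gl * delayed, gr * mono
    return gl * mono, gr * delayed
```

At 48 kHz the ITD for a hard-panned source works out to roughly 0.66 ms, or about 31 samples of delay – a tiny offset, yet it is the dominant cue the brain uses to localize low-frequency sound. Real binaural systems layer frequency-dependent coloration (head-related transfer functions) on top of these two cues.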
My experience was that it allowed me to create an extraordinary feeling of spaciousness within what could have been, at 168 inputs, a very crowded mix. Listening to the mix felt as though I had actually removed my IEMs and was hearing all of the sound sources acoustically, but with absolute clarity and at exactly the desired relative volumes.
At that time, the technology allowed me to “pick off” the individual lines post EQ from the desk via MADI and send them to the Klang:Fabrik unit which drives the system. This meant that if I wanted to give more than one person a 3D mix, I would need to control their individual mix send levels via the app or an external fader bank – entirely possible, but a different process from the usual desk mix.
Since then, however, Klang has joined forces with DiGiCo to create fully integrated software that allows the engineer to “pick off” the individual lines post aux send, so that any changes made to that send on the desk are seamlessly reflected in the mix, and the engineer can continue to mix exactly as they’re used to doing. I’m very much looking forward to using this new integration on a forthcoming tour this summer, and am personally convinced that 3D IEM mixing is the way forward. Not only does it sound fantastic, but because the brain doesn’t have to work hard to make sense of a synthetic stereo mix – instead hearing what it “expects” to hear in a real-world, three-dimensional situation – the levels of both ear and mind fatigue post-show are greatly reduced.
Even more importantly, the space created by mixing in this way generally allows the central element of a mix (e.g., the click for a drummer) to be turned down by as much as 6 dB – half the amplitude. That’s enormously significant when it comes to protecting the hearing of the people we mix for. (By the way, Klang and DiGiCo have some additional exciting news to announce very soon, and I’ll be sharing those new developments in my next article.)