Mixer

Wednesday, November 13, 2013

In The Studio: An Interview With Legendary Engineer Shelly Yakus

Recorded John Lennon, Blue Oyster Cult, Alice Cooper, and many more

Shelly Yakus is one of the true legends of the engineering trade, and his storied career demonstrates the value of an early start.

You might even say he was born to record. His father and uncle were co-owners of Ace Recording in Boston, and young Shelly was a studio “rugrat” as far back as he can remember.

As a young man, dazzled by the excitement of the New York studio scene, in 1967 Yakus applied for a job as an assistant at Phil Ramone’s fabled A&R Recording.

After cutting his teeth on sessions by The Band (Music from Big Pink) and Van Morrison (Moondance), Yakus moved on to another staff position at The Record Plant.

There he recorded and/or mixed records for everybody from John Lennon (Walls and Bridges) to Patti Smith (“Because the Night”), Blue Oyster Cult (Agents of Fortune), Alice Cooper (School’s Out), and the Raspberries (“Go All the Way”), among many others.

While still holding his staff job at Record Plant, he started freelancing with producer Jimmy Iovine; one of their first efforts was Tom Petty’s breakthrough album, Damn the Torpedoes.

After that, as a freelancer and later as chief engineer at A&M, Yakus logged credits on hits by Don Henley, U2, Lone Justice, and Bob Seger.

In this interview, Yakus touches on sessions by the Band, Van Morrison and John Lennon while reflecting on the essential elements—both immutable and ephemeral—of the music recording art.

The interview took place in August at Yakus’ new recording home, Tongue and Groove Studios in downtown Philadelphia.

Owned by vintage instrument and gear collector Michael Block and his partner Dave Johnson, Tongue and Groove is a place where 1950s analog gear intertwines with 21st-century digital reality—the starting point of our conversation.

Bruce Borgerson: This is amazing. I’ve been in dozens of studios, many that have lots of vintage gear, but I’ve never seen anything like this. Like those Presto tape recorder electronics out there.

Shelly Yakus: Yeah, isn’t that amazing. You know, my dad had a studio in Boston, and we had those tape recorders.

I remember when we bought them new, and those tape recorders could make a 7-1/2 ips copy that was so good that you would have to stop the machine to know whether you were listening to the master or the copy.

BB: I’ve heard of them vaguely, but they must have gone out, what, by the late fifties?

SY: I would say they went out in the sixties, but I’m just guessing.

They didn’t make machines for very many years, they just couldn’t keep up with Ampex. But they were great machines, believe me. We used to use them every day at the studio.

BB: Was this at Ace?

SY: Yes, that was my dad’s studio.

BB: I checked that out on the internet, and I found out that Freddy Cannon did his first hits there.

SY: Right. A little trivia thing, here, do you know what his last name was?

BB: It was in that story, but I can’t recall.

SY: Freddy Picariello. He was a go-fer in the studio, when I was a kid. I used to go there on weekends and in the summer, and he would go for coffee. A lot of talent came out of Boston.

BB: So you were a studio kid?

SY: Yes. I remember being ten years old and asking my dad, “Can I learn how to cut a record?”

We had these Presto lathes, they were fixed pitch cutting lathes for lacquers, and we used to do a lot of that work, cutting lacquers for a 78 or a 45, for a local band project.

They would use those for demos and something to take home. So I remember he said to me, “Shelly, when you can see over the top of the tables, you can start cutting records.”

And that happened when I was about fourteen, when I started working there regularly on summers and weekends.

But the most important thing I learned there was how to listen. That’s what my dad gave me, that’s what that place gave me.

I remember the moment when I finally got it, and that was my foundation for everything I did after that. And I got a feel for the business.

But that business belonged to my dad and his brother. He wanted me to stay there and take it over from him, but the demand for quality in Boston wasn’t like it was in New York.

We used to get four track tapes in from New York, and I would hear them and they would sound just remarkable to me. It had a lot to do with the producers in New York saying to the engineer, “No, I’m not happy with that drum sound, let’s keep working on it.”

In Boston, they were happy with whatever you gave them. I didn’t see the chance for a whole lot of growth. So I left and went to New York and got a job there, at A&R.

BB: When was that?

SY: That was August of 1967 when I started there, a kid fresh out of Boston.

BB: How did you get the job?

SY: That was interesting. A year before, my dad had a client that wanted to get some records pressed, and they used to use Decca Records to do their pressings.

So if a client wanted a thousand 45s, they would do the recording and mixing there and then send me down to New York to bring it to Decca for mastering.

I remember going there, and while I was there—I had never been to a New York studio before and it was incredibly interesting to me. And so I remember saying to one of the mastering engineers, “I would love to see some more studios in New York.”

The mastering engineer told me that the guy who had just walked into the room five minutes before said I should go over to Mirror Sound, where Brooks Arthur was working.

Brooks Arthur was a guy that had top twenty records all the time, did a lot of those Red Bird records…so anyway, I got in to see him. As a matter of fact, they introduced me to this guy named Max Rupfel(?), and I didn’t quite understand who this guy was.

He was going around to all these studios. He said, “Come with me, kid, and I’ll take you around, I’m going to visit a bunch of studios.” And he would walk in like he owned these places.

I couldn’t understand how he could get into all these places. I finally found out that he was the musicians’ union representative. In those days, it was against the rules to overdub.

If you got caught overdubbing, you had to pay these tremendous fines, you had to pay the whole orchestra again. So he got me in all these amazing studios.

So one studio he took me to was A&R, and I saw Phil Ramone doing a session on 48th Street, which was their original studio, and Donny Hahn was his assistant engineer, and he went on to become a great engineer.

He also took me over to Mirror Sound, this really unusual studio where Brooks Arthur worked; he was engineering a song, I think it was “Give Us Your Blessings” on Red Bird Records.

He had a whole bunch of songs as an engineer and mixer on the charts every week, a remarkable career.

What was most amazing to me there was, when I came in the room, there was this tape machine against the wall, and then this tape coming off it in a loop that went across the room and around the mic stand, and it had thunder and lightning on it.

It was part of this record they were mixing. They would bring up a fader and you’d hear thunder, but it was a long loop so you wouldn’t hear the same thunder twice. I was watching them do this and I was thinking, “Holy shit, listen to how creative this is, look what they’re doing here.”

BB: So did you start looking around for a job at that point?

SY: No, not right away. It was approximately a year later I came back to do another mastering job, and I went back to Mirror Sound again to see Brooks Arthur, because I had made the contact.

They told me he didn’t work there anymore, he was working up at A&R Recording. So I went over there, and he was off that day, but I met Roy Cicala.

Roy took me around and showed me the place. Every great group was recording in that studio. It was remarkable. I asked if I could apply for a job there.

He said, well, yes you can, but it takes some time to get an appointment. I told him I was going back to Boston that day, so he got me an appointment with Don Fry, one of the owners.

He interviewed me, I left and went back to Boston, and for a while in between I worked at WMUR TV in Manchester, New Hampshire. I was a cameraman for the Uncle Gus kids show and the New Hampshire Bandstand, which they would do in the parking lot.

While I was up there, I got this call from A&R, and they asked if I was still interested in a job there. I said yeah, and they asked how soon I could be there. I gave my two weeks notice and headed down, and stayed in the YMCA.

BB: So you started, obviously, as a second?

SY: Actually, on my first session at A&R, I was the second to the second engineer. I was assisting a guy named Major Little, who was a professional second engineer.

He had no desire to be a mixer or producer, just a great assistant. They put me with him, and the first session we worked on was Dionne Warwick produced by Burt Bacharach and Hal David, with probably a forty-piece orchestra.

I’m helping him set this thing up. Phil Ramone was the engineer. I don’t remember all three songs, but I think two of them were “Valley of the Dolls” and “Alfie.”

That was my first day on the job, to see this incredible piece of music being done. This stuff sounded amazing. After that I worked on Leontyne Price and the Vienna Boys Choir, with sixty musicians and forty kids, something like that. She was in the booth, but it was all live.

And then we used to do stuff like Oscar Peterson and Count Basie. A&R was an amazing training ground. Bob Ludwig and Elliot Scheiner and I probably started within three weeks of each other.

They started by teaching you to set up sessions, then they put you in the mastering room for a while; they wanted you to be well rounded. I avoided the mastering room, but Bob loved it.

They used to do three sessions a day in each room. One of the rooms, the large room, they would book ten to one o’clock, usually a large band doing a commercial or a piece of music, two or three songs.

You had an hour to break that down for another session, which was two to five, plus one, which means you had the option of taking another hour if you needed it.

Then at seven o’clock at night, the rock ’n’ roll started, and that’s what I was interested in, because Roy Cicala would do most of that. He would stay there until all hours of the night.

BB: What were some of the pop acts you did in those years, as an assistant?

SY: I did Peter, Paul and Mary, the Hair original cast album, Mitch Ryder and the Detroit Wheels, but also a number of unknown things.

BB: How did they do the tracking back then? Was it eight track, or still four?

SY: They had an eight track when I got there, and then they had some Ampex four tracks, two tracks and monos.

And you would have one of each in the studio, though not always the eight track, at least not when I started. If you were the assistant, you were responsible for all those machines.

And if you worked with a guy like Phil Ramone, he’s trying to mix this live, and it’s going crazy in the control room. They had this thing called the jukebox, which was about the size of a jukebox without the glass top, and it would split the signal up.

It would go to the eight track if they had it, and pass through to the jukebox, and there you would decide which of the eight tracks you would mix to send to the four track by throwing switches.

That’s why, if they listened in mono, the balance was always right. The bass and drums were often on the same track; sometimes bass, drums, acoustic guitar, electric guitar and percussion all went to the same track.

So the only way you could get the balance was to listen in mono; that way you knew that as it was going on to the four track you had that balance right.

Then it was also split to go to mono, and sometimes they would try to do a stereo mix at the same time. Then they also had a four track in there for echo and delay.

So Phil would be there mixing it live, and they would go in and use the eight track for a remix only if they missed something in the live mix. In that day, it was viewed more as a safety, and everything else was viewed as a master.

In the mix room, as I recall, they had an Altec board, a 3M 8-track and the rest were Ampex 440s. I remember when Eddie Kramer first came in there, and he said, “Mate, if you could please just show me how to use the room.

You don’t have to hang around.” I showed him around, then he puts on a tape and pulls up the faders and “Whole Lotta Love” comes out of the speakers, straight from the eight track.

It was amazing. In that day, the stuff that went to tape was huge sounding. For one thing, the boards all had transformers, which the modern boards don’t have.

People equate the modern boards with clarity and top end, but really many of them only have that at the expense of the low end, or should I say a lack of low end.

Transformers, in my opinion, are the only way that you can capture what’s going on out in the studio. You notice that a lot of people with modern boards are bringing in racks with Neve or API modules with transformers in them.

BB: Now that we’re waxing philosophical, I wonder if you could back up and talk some more about how you learned to listen at your dad’s studio.

Learning to listen for what? How?

SY: Everyone hears, but not everyone listens. By that I mean, one day I’m doing some tape copies for a client of my dad’s, some 50 copies that are going to a radio station.

They wanted fifty of them. I bring out the fifty, and my dad spot checked them. He had this little Wollensak machine, and I was sitting there—this was when I was about sixteen—and he takes tapes out to spot check them, plays a few, then on one he says, “Did you hear that?”

I said, no. He rewinds it, plays it again. Still didn’t hear it. This went on for about ten minutes, but then finally he points his finger when it happens. Still didn’t hear it. All of a sudden I hear this dropout, very subtle and minute, but it was there.

It didn’t go away, but just for a moment it dropped in volume. At that moment, it all changed for me. After that, I listened to everything. In that ten minutes, I went from a person who couldn’t hear a dropout to one who did. It was the foundation of everything to come. Before that, I was hearing but I was not listening.

BB: So, when you started listening, what did you hear?

SY: Everything. It was amazing. For example, when we were doing four and eight track, I could listen to records done in New York and tell you which studio it was done at.

When we went to sixteen track, it was tougher, and when we went to twenty-four I couldn’t tell anymore. The studios in New York all had distinctive sounds, a combination of the rooms and the equipment, the main engineers who were doing them.

I learned the sound of Bell, of A&R, of Media Sound, of Mirror Sound. You could hear it on the radio. But it all went out the window with 24-track. Sixteen tracks on two-inch tape was as far as you could go and still maintain the personality of a room.

The twenty-four track machines started to eat up the clarity of the instruments.

BB: Let’s talk some about one of the landmark albums of the late sixties, the Band’s Music from Big Pink.

SY: That was recorded between A&R, four track, and a studio in LA, where they did it eight track. When it was mixed, they had a lot of difficulty getting the eight track to come together the same way as the four track.

For example, on the four track songs, if Levon sang while playing the drums, then the vocal and drums went on the same track, with some echo. Whatever he sang as lead vocal, that was on the drum track. Also, the bass and piano were on a track, but the organ was separate.

So you have to get that combination right, and the only way to do it is to listen in mono. You need the masking of the instruments to get the EQ right.

Remember, if you put them together on a track, you had to get a great bass drum sound right off, you had to work on that until it would stick out enough to work with the bass, but still the snare and the hi-hat had to be there.

When it went to 24-track, that’s one reason it didn’t sound as good. When you had to EQ something live off the floor…EQ’ing something a little twice is better than EQ’ing it a lot once, and much better than EQ’ing it a lot twice.

Those equalizers, if you touch them just a little and keep a gentle slope, it works. But if you crank it up, it gets harsh sounding. With 24-track, those decisions were left to later, so they didn’t get THE bass drum sound or THE snare sound. Then they’d EQ to try to fix it.

One of the things they stopped doing with 16 and 24 was they stopped adding echo to the snare.

But back then you had to put it on the track, because you were combining it with acoustic guitar and bass. So you had to get it right, a complete and finished drum sound.

Well, when we went to 16-track, I continued to do that, to put echo on the snare, be it chamber or EMT. Nothing excessive, just a halo around the snare, something that would make the snare sound special.

So when you got to the mix, you would have the snare separate but it would have a little chamber on it. So when you put another effect on that snare, you were putting it on an entire, complete sound.

So when you add your EQ and effects to that sound, it’s totally different than taking it off a tape that is dry as a bone, maybe a little EQ. You won’t get the same sound, and it’s not as good a record.

BB: And they would let you do it?

SY: The problem is, producers were scared of this. I would tell them that I’m putting this echo on the track and they would say “Oh no, don’t, you can’t do that!”

And we would talk about it. I would express why I thought it was better, and some would allow me to do it. But most wouldn’t. They would say, “Well, what if I want it dry in the mix?” I’d say, “When’s the last time you’ve had a dry snare?” “Well, never, but what if I do?”

I used to put tape delay right on the electric guitar. The producer would say, “What are you doing?” “Don’t you like the sound of the guitar?” “Yeah, it’s great, but don’t put it on the tape.”

But I’d tell them that if you try to do it later, it won’t be the same, it won’t sound as good.

BB: Back to Big Pink. What was your role in that project?

SY: I was both first engineer and assistant. Donny Hahn did most of the recording at A&R. He wasn’t a rock ‘n roll engineer, he did mostly big band stuff and commercials.

He knew that I was working on all the rock ’n’ roll stuff. He asked me to be his assistant. He had a fabulous sense of balance. I started as the assistant, but during the recording I worked up to his equal, which is why they gave me credit.

It was not an easy album to record. It took a lot of fooling around, putting cardboard partitions between the drums, figuring out how to record them to sound like they sounded to us in the room.

They mixed that album twice, both times at A&R. I think Tony May did the mixing. On the first one they had horns, and they didn’t like that one.

They were doing it on an Altec board with limited EQ and not a whole lot of outboard gear. It had to be on the tracks or you couldn’t take it very far in the mix.

So I think Donny Hahn and I really captured the essence of that group.

And the way we laid it out in the room was unique. I remember Robbie Robertson had a speaker, shaped like a cube—I had never seen this before.

He had it sitting on two wooden chairs, stretched between the two, and he would throw the switch and it would start rocking back and forth. He said there was a speaker in there, spinning around.

BB: But it wasn’t a Leslie?

SY: No, I’d never seen it before. I saw things going on in that room I had never heard of. I think they had a lot of stuff made for them.

The most remarkable sound was on one song where Garth played what I remember was a Lowrey organ. On a lot of the songs he had the signal from the organ, before it went to the Leslie, go through a telegraph key, and on the telegraph key you have a tension spring that you could adjust.

He loosened the spring, and whacked this key, and it started bouncing up and down so it was making and breaking contact, and then he started playing the intro to one of those songs.

Can you imagine me standing in the studio, watching this key go up and down, hearing this sound coming out of his organ…well, I’d been in the studio since I was a kid and I was seeing shit go down that I’d never seen before.

Remember, there were no racks of digital boxes back then, so people had to be very clever to come up with new sounds.

BB: Did they do many overdubs?

SY: They did some horn overdubs, but those weren’t used on the final mix.

BB: Do you remember how the drums were miked?

SY: It’s hard to remember for sure, but I would suspect we used a Telefunken 251 as the overhead, and an Altec “salt shaker” on the snare.

The bass drum was probably an E-V 666. We had mics on the toms as well, but I’m not sure what those were.

I do remember we worked a lot on getting the drums to not ring in sympathy with the other drums, because we didn’t have gates back then.

I can see the session like it was yesterday. I remember how they were set up in the room, around this seven foot Steinway grand.

All the players were really close to each other, in a large room—the same one we used for Dionne Warwick with the orchestra—but only a small part of it was used.

It had a beautiful hardwood floor, high ceilings, and the room itself just had a great sound. There was a drum riser, kind of like a half wall around three sides of the drums.

BB: Why the drum riser?

SY: Drum risers change the sound of drums a lot. It’s very hard to find a good sounding drum riser but when you get one, the perspective of drums in the mix is totally different.

It changes the way the drums sit in the mix. When they are not connected to the floor, it becomes a whole different animal. It’s the same if you have a guitar amp on the floor.

It couples with the floor, and the floor becomes an extension of the speaker, so you get all this low stuff that you have to roll off or filter out. The same thing happens with drums. They become part of the floor.

You get more clarity with the riser, usually with even more bottom. In the final mix, the drums sit in a better place than they would if they were on the floor. I’ve tried building risers at various times, using the heaviest lumber, but sometimes it just doesn’t work well. They are hard to do.

BB: Let’s move along to another album that endures as a classic, Van Morrison’s Moondance.

SY: We recorded a lot of Moondance on eight track, I’m pretty sure. I don’t remember the studio being sixteen track at the time. The band played great. It went very fast. Van didn’t talk much at all.

He’s a very introverted guy. The only thing I remember him saying during all those sessions was, “Can you put more bottom on my voice,” because he has a very thin sounding voice.

BB: And those sessions were all done at A&R?

SY: Yes, but in two different rooms.

BB: How did you get assigned to those sessions?

SY: It was all up to the girls who did the book at the studio, who assigned the engineers for the sessions. Producers would call and book the studio, and they would ask who’s available. Those girls could make or break your career.

BB: On Moondance, what else was different compared to Big Pink?

SY: Van recorded the vocals live, in a booth, and I had a Pultec on him. It was simply a matter of capturing the sound of him and his band, but in my way.

You see, if you listen to the records I’ve done, you’ll hear that they all sound different, because I’m recording different bands or the same band at different times in their career. Each one has their own personality and sound.

But I suppose you can hear me, maybe you can hear my drum sound, and my overall thing—whatever it is I do, just trying to capture the band but in the process developing my own sound.

But all of those bands were totally different, not like today where a lot of what you hear all sound like they were done by the same producer on the same board in the same studio. There’s a sameness to the sound.

I was on a committee to pick the nominees for the best engineered record for the Grammy awards, and we had over 170 CDs to listen to, and out of those I only found five that really sounded different.

But that didn’t happen back then. I have to be careful about talking about “back in the day,” because people say, “Well, that’s old.”

Well, just because it’s old doesn’t mean it’s wrong, and just because it was the original way of recording doesn’t mean it doesn’t have value today.

BB: So what’s the reason? Is it technology, or MTV, or just a different aesthetic in music today?

SY: I’m not sure. But I know one thing that doesn’t change. Guys who do what we do for a living, we are emotional salesmen.

At the end of the day, all we are doing is selling emotion. You can slice it or dice it and hold these pieces up to the light all you want, but it’s all the same thing.

It’s about the song, it’s about the performance, and it’s about getting that across to the listening public.

If, as the recording engineer and mixer, I can get that across to the listener, even being squeezed through MP3, even in a department store with the speakers twenty feet up, then my client wins.

That’s the theory I talk about to my clients. In a car, at sixty miles an hour, with the windows down and maybe the top down in a convertible, and you can’t really hear the bass and you can barely hear the words, but you still get the effect, you still get the feeling of that song.

Or the woman who is vacuuming in the bedroom and the radio is in the kitchen, over this noise you can get a little beat of the melody. So if you can get it across to them, then you can move them to go out and buy it.

So I think the only way that you can get records in people’s collection that they will listen to over and over is to get that feeling, that emotion, across the distance between the speakers and their ears.

BB: The next landmark of your career was working with John Lennon. Did you find that intimidating?

SY: Are you kidding? I had skid marks in my underwear, I had fudgey drawers! You know, here’s a guy who had been around the block several thousand times by then, and I could tell he knows, he just simply knows.

So I’m hoping that I can give him what he needs. He’s used to working with George Martin, for God’s sake!

BB: He was producing, right?

SY: He was. So I wanted it to go smoothly. It was a very professional session, so you do your best to make everybody happy and come up with a sound that works. They turned out to be really terrific sessions.

BB: And that was at Record Plant?

SY: Yes, I was a staff engineer there at the time. I was there for ten years, from 1970 to 1980, though I did start some freelancing by 1978.

BB: Was that where you hooked up with Jimmy Iovine, there at Record Plant?

SY: Yes, he started out as my assistant. But he was sharp. It didn’t take him long to figure out there was no money in engineering, so he wanted to be a producer.

The first thing we did together was Patti Smith’s “Because the Night.” I mixed that with him. It turned out to be a big hit, so we figured we could have some success together.

“Why don’t we do more stuff?” he said, and he was seven years younger than me, just a kid 23 years old. He’s talking to me about going out and doing stuff, but I was thinking, “Hey, I get a paycheck here every week.”

I’m supposed to leave here and take a chance with this kid? So I did! (Laughs.) So we got this opportunity to do Tom Petty, and many more opportunities followed after that.

Posted by Keith Clark on 11/13 at 09:57 AM

Tuesday, November 12, 2013

Perspective: Skeptics & Believers

The validity of both sides of the debate

Let me clarify perspectives. On one side we have the perspective of believers, the “anything is possible crowd,” where the sky is not the limit, and whether a concept is repeatable or provable is not nearly as important as the fact that it was written, thought, or spoken.

On the other side we have the perspective of skeptics, the self-proclaimed “investigators of verifiable proof” residing in a world of science, which is based upon identifying dependable and repeatable ideas from which real-world functioning successes can be built.

Both sides inspire massive rivers of money flowing to support their respective causes. Both construct items of perceived value and usefulness.

Both sell or pass freely their thoughts and revelations to attract others to follow and swallow. Whether it is a crystal necklace that heals, an automobile that transports, or a process of thought that helps one navigate life, both trains of thought have long and twisted histories peppered with successes and misconceptions.

However, due to the dramatic differences in perspective, neither side is able to truly resolve the expertise of the other.

The “true skeptic” can no more prove a certain type of music is beautiful than a “true believer” can construct a cell phone that actually functions. It’s easy to understand why science is useful, and easy to feel why adding the complexities of beauty and art improves our lives beyond the monotony of what is purely utilitarian.

So what’s the problem? Well, from my point of view, it is the middle ground, that gray area between facts and recreation, the things we purely feel or think that science has yet to be able to adequately explain to a level that fits what we feel to be occurring.

In working with sound, the credibility of science comes into question when we’re told that something can’t be heard - yet we do hear something. In our confusion, we believe we have taken every variable into account, only to find the most remarkable surprises still remain.

These false assumptions are a feeding ground for a tangled garden of ideas.

The skeptics are doing all they can to excavate and form clean rows of well-organized thoughts, while the believers are immersed in weaving fact and fiction into complex and intoxicating stories and patterns.

Conflicting Influences
Meanwhile a third perspective also exists, wherein both viewpoints are viewed as desirable, sellable, marketable, and therefore useful.

Regardless of the propagation of education over time and eons into the future, I will personally make a jump to the conclusion that our world will always contain some balance of believers and skeptics. It’s impossible to live our lives without the rules of science, just as it’s impossible to live without the influences of art, pleasure and those magical stories that are so deeply woven into our lives.

It is when one side denies the relevance, importance, and/or necessity of the other that voids are created, allowing pseudo-science and other forms of blurred perspective to gain traction.

When art attacks science and vice versa, each undermines its own integrity while attempting to discredit the other. To tell an amazing story is one thing, to claim it is a factual account is entirely another. To measure the various nuances in sound is one thing, to claim the nuances definitively can or cannot be heard is another.

So just as I laugh at the absurdity of people who actually buy colored stones to tape to their audio cables and their ignorance of the astronomical improbability that there will be any form of realizable alteration of the sound, I also believe that it is a failure of the science world to embrace the unknown that allows this ignorance to fester.

Yes, science does try to quantify the importance and realities of art, just as the world of art tries to harness science as well. Science teaches us that there are things that are known and things that are not yet known.

Art teaches us there are things we “know” and “feel” that defy definition and measure.

Science is by nature methodical and cold, while the attraction to the warmth and mystery of art inspires our desire to escape being characterized and labeled as another predictable reproducing food eater called “a human.” We know in our minds that we see, feel and hear so much more than even the most complex analysis system seems to account for.

Jumping To Conclusions
This train of thought was inspired by a video I recently watched of an “Audio Myths” workshop held at the AES show in New York City in 2009.

The presentation featured several audio industry luminaries shedding light on various myths found in both the consumer and professional audio fields.

I enjoyed the clarifications on human perceptions, yet as the video progressed to a discussion of “what we can and cannot hear,” I found myself feeling swindled a bit, and also was tempted to jump to conclusions.

For example, if the power of suggestion can inspire us to hear things that are not there, would not the opposite also be true? As the various sounds are played, are we convincing ourselves we can’t hear them?

What about the cumulative effects of several independently inaudible aspects combining?
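As a rough back-of-the-envelope sketch of that question (my own illustration with assumed numbers, not anything presented in the workshop): uncorrelated contributions combine on a power basis, so several individually “inaudible” artifacts can stack up to something meaningfully louder than any one of them.

```python
import math

# Hypothetical illustration: ten independent, uncorrelated artifacts,
# each assumed to sit at -70 dB relative to the program material.
levels_db = [-70.0] * 10

# Uncorrelated sources sum by power, not amplitude.
total_power = sum(10 ** (level / 10) for level in levels_db)
combined_db = 10 * math.log10(total_power)

print(f"Combined level: {combined_db:.1f} dB")
# Prints roughly -60.0 dB: ten -70 dB contributions end up 10 dB higher
# than any single one of them alone.
```

Whether that combined figure crosses an audibility threshold is a separate question, but the arithmetic at least shows why the cumulative case deserves due diligence.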

Just as it is important not to jump to the assumption that I can hear something, it is equally important not to write off something as inaudible (or irrelevant) without doing due diligence. In the end though, and in defense of the presentation, a clear point was repetitively made: “this is just to help keep things in perspective.” With that, I concur.

Now let’s take a big step back and ask ourselves: what would accurate audio reproduction sound like if perfected? How can we determine what is - or is not - important for audio accuracy, if we have yet to create audio accuracy?

Whether we use $10,000 audio cables or 192K converters or razor-flat microphones, the real question remains: has anyone truly ever heard a recording played back where they were compelled to search around the room to find where the live band was hiding?

In other words, how come we can know there is a garage band rehearsing a block away yet when we sit dead center in front of the finest sound system money can buy, the best we can come up with is a descriptive range of similarities to live?

Proving It
What if one side or the other was truly able to prove their position? What if we could tape colored rocks to cables with the result that when we heard a system, we would all swear up and down that there were actual musicians in the room? Would that not be a game changer?

Then we could actually prove things, such as that vinyl albums sound more realistic, or that ever-faster D-to-A converters sound more realistic, and so on.

The room would not matter, just as the room does not matter with the garage band. (“Oh, you were playing live in a lousy room so I thought you were a recording.” Yeah, right.)

The old Memorex advertising claims aside, has anyone ever actually heard sound reproduction so clear that they were unable to tell it was not in real time?

I haven’t, but when and if I ever do, it will probably be a good place to start testing whether some of the other more “scientifically dubious” products and concepts actually function and make any sort of difference.

Dave Rat heads up Rat Sound Systems Inc., based in Southern California, and has also been a mix engineer for more than 25 years.

Posted by Keith Clark on 11/12 at 07:12 PM

Eventide Introduces The Mixing Link Mic Preamp With Effects Loop & Phantom Power

Makes guitar stompbox effects available to vocalists

Eventide has introduced The Mixing Link, a microphone preamp plus effects loop in a compact stompbox form factor that fits on a pedal board or in a backpack, and makes guitar stompbox effects available to vocalists.

“Our pedals were designed for guitar but most of the effects really work well for voice. The Mixing Link makes it rather easy for singers to connect to most guitar stompboxes and, with its high quality mic pre, it’s at home both on stage and in the studio,” said Tony Agnello of Eventide and Manifold Labs.

Features:

—Microphone preamplifier with FX Loop in a compact form factor

—FX Loop. Effects send/return accommodates balanced and unbalanced signals and interfaces with consoles or guitar pedals easily

—FX Footswitch. Latching or momentary footswitch control of effects loop for performance effects

—Works with a wide range of microphones including condenser and ribbon microphones with up to 65 dB of clean gain

—Aux I/O. Connection supporting stereo input and mobile device send level

—Guitar amp output and headphone monitor output with separate master volume control

—International universal 9-volt DC power supply included, or 9-volt battery

—48-volt phantom power for condenser microphones

—Balanced XLR output which supports DI and Line levels (-10 dB to +18 dBu)

—Instrument and balanced line level inputs (-10 dB to +18 dBu)

—Versatile mix control supports three modes of operation—100 percent microphone plus effects, microphone/effects balance (effects mix), and 100 percent FX (no microphone)

Shipping in December, The Mixing Link will be available from authorized Eventide dealers and at Eventide.com.

Eventide

Posted by Keith Clark on 11/12 at 04:10 PM

Mackie Master Fader Version 2.0 For DL Series Mixers Now Available (Includes Video)

App provides significant workflow updates and increased control

Mackie has announced the release of Master Fader v2.0, the primary control app for Mackie DL Series digital mixers.

Now available as a free download, it delivers new features that are a direct result of customer feedback. Because DL Series mixers are completely software controlled from an iPad, iPhone and iPod Touch, Mackie can add features with a simple app update.

“Master Fader v2.0 is all about increased control and workflow improvement,” states Ben Olswang, Mackie product manager. “We’re very excited to add many of the great features you have been requesting, all at zero cost to the user.”

The new features of Master Fader v2.0 range from small performance and workflow updates to items that will change how the app is used.

New features like input channel and aux send linking are ideal for controlling stereo input sources like keyboards or when using in-ear monitors. The addition of mute groups and view groups provides customizable control of the work surface.

Further, the new Quick Access Panel gives instant access to important controls without taking up screen real estate. Numerous additional navigation enhancements are designed to deliver an intuitive, productive workflow.

Major features of Master Fader v2.0:

·      Input channel linking
·      Aux send channel linking
·      Mute groups
·      View groups
·      Pre-DSP aux source
·      Quick access panel

Further updates include:

·      Improved compressor/limiter graph UI
·      Option for independent channel aux mute for each aux send
·      Main mute
·      Other navigation enhancements
·      iOS7 Support


Major workflow updates like channel/aux linking and access to mute/view groups also apply to a new version of Mackie’s My Fader control app, built specifically for iPhone and iPod touch. This ensures that on-stage performers mixing their own monitors benefit from these updates and provides a greater level of control for the front of house engineer looking for pocketable control while roaming the venue.

Master Fader v2.0 is available for free download from the App Store. Existing users will be able to update directly on their iPad here. My Fader v2.0 is available here.

Users are advised to read the full release notes (here)  before installing the Master Fader v2.0 update. iOS7 users should be aware that this update could happen automatically depending on their app settings. Mackie has provided information (here) on being prepared for Master Fader v2.0 using iOS7.

Master Fader v2.0 is compatible with iOS 5.1 or later. My Fader v2.0 is compatible with iOS 6 or later.

 

 

Mackie

Posted by Keith Clark on 11/12 at 09:40 AM

Yamaha CL Digital Console Update Version 1.7 Available In December

Offers significant improvements that will be useful for mixing engineers in festival and similar complex event setups

Yamaha Commercial Audio Systems has announced the December availability of Version 1.7 software upgrade for the Yamaha CL Series digital console.

The upgrade, available via free download, is based on significant end user input and will provide enhanced efficiency and versatility along with significant improvements that will be useful for mixing engineers in festival and similar complex event setups. Many of the new features are already familiar to PM5D users.

New CL V 1.7 features include Selective Load/Save: setup data such as scene memory, libraries, etc., can now be individually loaded from or saved to USB memory, providing an efficient way to load complex setup data.

The HA Option for changing input patches now makes it possible to select whether the end user wants to use the HA settings of the patched port as is, or copy the channel HA settings to the patched port, so that input patches can be changed quickly in a fast-paced mixing environment without having to transfer HA settings manually.

“As with all Yamaha Commercial Audio products, updates are implemented based on suggestions primarily from our end users,” states Marc Lopez, marketing manager, Yamaha Commercial Audio Systems. “Their input is quite essential, now and for future generations of products.”

Also with the CL V 1.7 update, in the Sends On Fader mode, the assignable encoders (Gain/Pan/Assign knobs) can now be used to adjust channel level and panning for sends to stereo buses.

In addition, Custom Fader Bank setups can be stored to, and recalled from, individual scenes. At busy events that involve multiple engineers, this feature can make it easy to change custom fader bank settings without having to switch users.

And, the sends from input channels to buses on which the send point is set to PRE can now be assigned to DCA groups for muting, via a DCA Mute Option for PRE sends.

Other new features in CL V 1.7 include an improved Channel Name Display: in the Sends On Fader mode, channel ON/OFF status is now indicated in the channel name display. Additionally, a “black” 9th color has been added for the Channel Color Bar. More points are available in the Metering Point field on the meter display with the addition of “Pre GC Meter” and “Post Digital Gain Meter”.

Also in CL V 1.7, the Analog HA gain and Digital HA gain indication have been improved and are both displayed in the Selected Channel View Gain/Patch field at all times.

The HPF status of R Series and similar external HA units is shown in the Gain/Patch field as well. DCA/MUTE group and mute name display are now shown in the DCA/Mute Group Assign Mode pop-up display.

Improved Channel Link display indicators in the CH Link Mode pop-up display will make it easy to see current link group settings.

In addition to the above new features, CL V 1.7 includes an extended Cue monitor adjustment range, from -30 dB to +20 dB. It is now possible to specify whether latched or unlatched external switches are connected to the GPI input ports.

And, when mounting I/O devices on the Dante network, the device type can now be detected without the description in the device label, allowing for better customization of names.

Yamaha Commercial Audio Systems

Posted by Keith Clark on 11/12 at 09:08 AM

Monday, November 11, 2013

In The Studio: How The Sync Head (And The Overdub) Changed Recording Forever

A significant impact on how the music was made
This article is provided by BAMaudioschool.com.

 
Once upon a time there was no recorded music, and you could only listen to live music. Brilliant musical performances occurred and vanished into the air except for whatever musical memories or emotions were remembered by the listeners.

Early recordings were made with a single microphone cutting direct to disc. Then came tape, then stereo tape and so on to 8, 16 and even 24 tracks. As track numbers increased, engineers were able to separate more instruments for finer sound control.

Of all the developments in audio technology, I believe the most profound was the sync head. Sure, the stacking of tracks was important and led to mixing, but the sync head changed the most about how music was made.

Prior to the advent of the sync head (a record head on a tape recorder that also has playback capabilities), the only way to build upon previously recorded material was to play the material on one tape deck and record a combination of it and new sounds to a second tape deck.

For the most part, recording was a matter of capturing a complete performance. Then along came the sync head and the overdub became possible. Musicians could listen to previously recorded tracks while recording new ones, and the new tracks would be perfectly in sync. 
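To make the timing problem concrete, here is a small illustrative sketch (my own example numbers, not figures from this article): if you monitor previously recorded tracks from the separate playback head, anything you overdub lands a head-gap's worth of tape travel late, while a sync head plays back from the record head itself and removes that offset.

```python
# Illustrative sketch of why overdubbing against the playback head fails.
# The head spacing and tape speed below are hypothetical example values.

def playback_head_delay_ms(head_gap_inches: float, tape_speed_ips: float) -> float:
    """Delay heard when monitoring the playback head instead of the record
    head: the time the tape takes to travel the gap between the two heads."""
    return head_gap_inches / tape_speed_ips * 1000.0

# Example: a 2-inch record-to-playback head gap at 15 inches per second.
delay = playback_head_delay_ms(2.0, 15.0)
print(f"Playback head lags the record head by about {delay:.0f} ms")
# ~133 ms: any part overdubbed against that monitor feed is recorded late.
# A sync head plays existing tracks back from the record head position,
# so the offset is essentially zero and new parts line up with the old ones.
```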

In fact, if you put the machine into record somewhere in the middle of the song and then took it out of record shortly after, you could replace sections of performances.

This led to many changes.

1. Musicians Stopped Playing Together. Now that parts could be added at any time, it was no longer necessary to have an entire band playing a full song along with a singer for every take of the song. The band could perform the song one time and the singer could perform over and over until the take was perfect, each time recording a new track over that one performance by the band. 

Granted, that meant that the band could not change how they were playing in response to something the singer was doing since the band was already recorded, but overall it was a major improvement in music production. All you needed to do was get one good take from the band, then you could send them home and not worry about paying them while the singer was getting it right.

Unfortunately, as the number of instruments playing together shrank to the point of recording each instrument individually (starting with drums, then on another day bass, then piano, and so on), the musical communication and variation that would normally occur as musicians responded to each other’s live playing became less a part of the music.

Yes, you could now examine every part under a microscope and make sure each performance was perfect and exactly what was intended…but you no longer had the communal musical interpretation of a particular song. Each part would only be able to interact with what was already recorded, often leaving the drummer nothing to interact with but a click track.

2. Musicians Had Less Pressure To Perform With Consistent Quality. Since you could always go back and re-sing a vocal, there was less pressure to get it right than there was when an expensive band was backing you for every take.

Eventually as it became possible to go in and out of record for very tight periods it became possible to replace individual words or even syllables. Vocal performances became collages from many performances rather than a single vocal interpretation.

3. Producers And Artists Could Get The Performance They Wanted Instead Of Compromising. Although it was always possible for a producer to elicit a performance out of a singer in the same way a director elicits a particular performance out of an actor, now it was possible to save different versions to move and combine as desired for a final, perfect vocal.

4. Volume Dynamics Became Less A Function Of Performance And More A Function Of Mixing. Since the musicians were no longer performing together, they were no longer changing their dynamics according to what the others were playing.

The dynamic interaction that is an important part of communal music had to be created afterwards by the mixing engineer.

5. Individual Musicians Could Play More Of The Parts. This meant artists with a strict understanding of how they wanted their music performed and the ability to play the necessary instruments (such as Stevie Wonder) could really do it all themselves.

I know that some of these points are direct contradictions. But the sync head and the overdub changed recording forever.

For more, look up these names: Tom Edison, Emile Berliner, Les Paul, and Tom Dowd.

Bruce A. Miller is an acclaimed recording engineer who operates an independent recording studio and the BAM Audio School website.

Posted by Keith Clark on 11/11 at 06:01 PM

Church Sound: Do Digital Mixers Lead To Laziness?

The potential downsides of automation through technology
This article is provided by Behind The Mixer.

 
The automobile wasn’t invented because someone wanted a new means of travel. It was because someone was tired of walking. The recording device wasn’t invented because someone wanted a technology that could capture sound. It was because someone was tired of taking notes in class. 

Are these statements true? Oh, I’d guess there is a shred of truth in them somewhere. But what is true is that automation-through-technology can lead to laziness and when the church service is in full swing, you shouldn’t look like our friend the sleeping cat pictured at left.

Most of us would quickly deny being lazy behind the mixer. But, looking at this age of technology and what the future holds, audio production technology has reached a point where it does allow you the ability to be lazy, specifically through the use of recall-able mix scenes.

There are three scenarios in which your digital mixer can lead to laziness.

Scenario 1:

This one is tempting when you have the same people in the band every week. You create one scene and label it “music” and use it for every song, every week, every month; no EQ adjustments, no effects changes, maybe a volume tweak here or there.

Result:

You’re mixing just as lazily as when you had an analog mixer and rarely touched the EQ knobs. Congratulations, all of your songs have the same generic sound. You might say I’m hyper-sensitive to this form of live mixing. You’d be right.

Scenario 2:

You create a good baseline mix for the first service with the mindset you will improve your mixes (saving the scenes) through your multiple services so the last service will sound the best. After all, you get the most people at the last service.

Result:

You’re doing a huge disservice to the congregation and missing the point of your job. You should have the first service sounding the best it can sound. The people attending this service are no less important than those attending the last service. Subsequently, if you’re doing this, you’ll start hearing comments like “the first service never sounds as good as the last service.” Is that what you want to hear?

Scenario 3:

During the worship practice / sound check, you spend your time creating great song mixes. You save each song as a digital scene so come service time you only have to recall the scene for the song.

Result:

Your service-time mix suffers because the acoustic properties of the room have changed because now the room is full of people. What sounded great in the empty sanctuary now only sounds so-so.

It’s better than being in scenario 1 or 2, but it’s still not where you should be.

The good news is you know the importance of distinct song mixes but you’ve allowed yourself to be lazy and miss out on sculpting those mixes into even better mixes for each service.

Not only do the room’s acoustic properties change when it’s full of people, but as I mentioned in another article, you should be mixing for the moment, and you can’t completely pre-mix for that moment.

Your mixing needs to be somewhat reactive to the congregation as the mood fits. But, I digress.

Fight The Lazy!

Let’s break this down into steps:

1. Create different song mixes.

Does your worship music on your iPhone all sound the same? No. Don’t use the same scene for all of the songs. It’s OK to have a baseline mix but consider it a starting point.

If your musicians change from week to week, then the baseline might not be possible. It depends on how the bands are grouped and the functionality of your mixer. Some mixers can save channel settings separately while others save all the channels together as one scene.

Bottom line, songs are mixed differently and you need to work with the same mindset.

2. Plan out how you will use scenes.

You can use them per song, per element, or per a group of elements. I use around five scenes per service. Each scene is for one song plus any elements before or after where a logical break occurs.

For example, if the last song of the worship set is concluding with the last notes ringing out, it’s OK if the person speaking starts talking so the music sound stays as needed. 

There are all sorts of ways of arranging scenes. Take your schedule and break it out into logical groups.

3. Plan your first service like it would be your last.

Even if you only have one service each weekend, put your energy into creating the best song scene mixes possible during your worship practice and your sound check.

By the time the first service rolls around, you should know you have done your best. You should expect to make some minor changes to your mixes but those are simply part of live mixing.

Consider the first service (each service) as if it was the last service you were ever mixing. You want it to be the absolute best.

The Take Away

The ability to recall scenes takes a large burden off your shoulders. You can get better individual song mixes and, in the case of multiple services, you can create a consistent sound from one service to the next. This is all good but it doesn’t mean that you can stop mixing.

The mix that worked during practice might need tweaking when you hear it with a room full of worshippers. In the case of multiple services, after reflecting on the first service, you might discover you could improve your mix for the next service by modifying a vocal mix. 

And let’s not forget mixing for the moment. Recalling scenes is great but don’t let those saved settings define your mix.

P.S.  Remember to save your scene changes!

Ready to learn and laugh? Chris Huff writes about the world of church audio at Behind The Mixer. He covers everything from audio fundamentals to dealing with musicians.  He can even tell you the signs the sound guy is having a mental breakdown.

Posted by Keith Clark on 11/11 at 04:37 PM

Thursday, November 07, 2013

Bainbridge Island Museum Of Art Relies On Lectrosonics Aspen Processing Systems

CRC Technologies adds new Lectrosonics audio equipment to existing system at Bainbridge Island Museum to expand the system's capabilities.

Bainbridge Island Museum of Art’s mission is to engage a diverse population with the art and craft of the northwest region.

Recently, what started as a project to fix some shortcomings with the AV system in the museum’s auditorium, turned into a considerably larger project.

Seattle, WA-based CRC Technologies, an AV systems design / build firm, was initially contracted to make some enhancements to the existing AV system.

Jay Nichols, manager of the AV Division, realized that by adding some components to the existing system, he could help them accomplish what they needed.

Those additions included a Lectrosonics SPNDNT Dante capable DSP mixer / processor and two DNTBOB 88 breakout boxes to augment the capabilities of the ASPEN SPN2412 24-input audio processor already in their possession. 

“I started off just fixing the initial install in the auditorium that was done by another company,” Nichols explained. “After the customer realized how good the equipment was that they already owned, I turned into a salesman and convinced them to expand the system—not cascade two systems together.

“I became the engineer when it came time to create schematic and line drawings to show the customer that the expansion would work and, finally, with the help of my installation crew, I became the installer and system programmer for the final commissioning of the new system.”

According to Nichols, the Lectrosonics SPN2412 and SPNDNT are located in the Frank Buxton Auditorium equipment rack while the two DNTBOB 88 interfaces are located in the basement equipment rack.

“The SPN2412—originally deployed by the original contractor—is used to mix microphones and line level sources from three stage floor boxes, wireless microphones, and preamp signals from the surround sound processor used for the auditorium.

“This system was expanded to add a museum-wide paging microphone for emergency announcements. Further, the SPNDNT is used to mix the audio from sources throughout the museum and disseminate them to any of the sixteen zones throughout the museum.”

“The DNTBOB 88 breakout boxes are used to take the analog audio signal from the Crestron DM video switch and send it to the SPNDNT for mixing,” Nichols continued. “These two interface units also take the mixed signal from the SPNDNT and connect it to the 70-volt amplifiers used by the system.”

The Aspen SPN2412 processor’s ability to mix the variety of signals required for the multipurpose auditorium without the need for someone to ‘run the soundboard’ was a huge factor in the project.

“By adding the combination of the SPNDNT Dante processor and the two DNTBOB 88 breakout boxes, we achieved the perfect solution to expand on the existing SPN2412 install and make the entire system seamless,” Nichols added.

The Bainbridge Island Museum of Art’s AV system expansion was completed in June and since that time, Nichols reports that the customer is extremely happy and everything is working just as he envisioned.

Lectrosonics

Posted by Julie Clark on 11/07 at 11:57 AM

Working On The Stage Sound—Moving From Mixing House To Monitors

A voice of experience provides a run-through on success at the monitor position

A recent assignment placed me behind a monitor console once again. It had been a while since I stage-mixed on a regular basis, so I enjoyed the change of scenery.

But this end of the snake presents a very different challenge from a front of house mix or a system engineering position.

Here, the fruits of my labors were not intended for the masses, but rather tailored to specific individuals and each of their needs, wants, desires… and idiosyncrasies. And yes, IEMs have fully come of age, but not everyone will go there.

Here are some of my rules for setting up successful stage mixes.

Objective
To me, the first and most important stage-mixing rule is to understand exactly what you are trying to accomplish. (As with most things in life!)

The objective is for the player or artist to hear what they need or want to hear, in a way that makes sense to them. Do not confuse this with the idea that you are there to make it sound good to you! The two do not necessarily coincide. Wedge mixes do not generally sound like front of house mixes.

Be Realistic
Face it; on a one-off with an unfamiliar band all you can do is give it your best shot. If it’s a couple of folks with acoustic guitars, you’re probably “in there”. If it’s Godzilla meets Metalhead, well… set up accordingly.

If you’re going on tour with a band, try to find out as much as possible about them. Perhaps the guy who was sitting in the seat before you got there would be a good place to start.

Make a plan, but don’t try to reinvent the wheel on the first day. Many musicians get used to their mixes sounding a certain way, and, right or wrong, be prepared to leave them that way.

But if you’re lucky enough to tour with some receptive players, you’ll have plenty of time to try different things and fine-tune your “stage sound” as you go!

First Things First
Assuming this is a tour, you’ll probably receive information about what goes into the mixes, but it’s best to speak directly to the band members if possible. This is your starting point.

Following that initial information, set up for your first sound check. Once the band begins playing and I’m comfortable with my initial mixes, the next thing I like to do is walk around to the various positions and listen.

I mean really LISTEN carefully to what everyone is hearing. It will change as you move around depending on your proximity to various instruments, amplifiers and wedges.

It may change from song to song depending on the volume of the instruments. Make mental notes of what you hear. This will be the foundation for building a successful “stage sound” later.

Psycho
You must also play psychiatrist a bit and try to get inside the players’ heads.

It’s important to understand the difference between a guy who will ask for his guitar in the wedge in front of him while standing in front of a Marshall stack turned up to eleven, and the guy who wants a taste of the keyboards because they are on the opposite side of the stage. If it’s all about volume and ego… (fill in the blank).

Loudspeaker Placement
I’m always amazed at how many guys don’t take the time to really place the loudspeakers properly.

Aim them at the players’ faces and away from troublesome acoustic instruments (like a grand piano). And try to keep from firing into open microphones, thank you.

Drum fills are particularly troublesome. I like to get them as far down-stage as possible alongside the riser, and aim them just up-stage of the drummer.

Orient the box so that the narrowest horn dispersion is in the horizontal plane (usually by laying it on its side). This will help to keep the foldback out of the tom and overhead microphones.

Be careful when you are using more than one enclosure on a mix. Play with the placement of your wedges and find out what works. You’ll be amazed at what a difference a few inches can make when it comes to hot spots and nulls.

Usually I try to find a place where they are close enough together and down-stage to still be in front of the musician, but far enough apart to aim the high frequency axis past the microphone at his ears.

When they’re too far apart, you lose that “in your face” feel. Avoid crossing the HF axis from both boxes at the microphone itself, and also be prepared for reflections from hats or costumes.

For fill loudspeaker positions, if you have multiple enclosures try to stack them, as opposed to a side-by-side configuration.

Horns that are not splayed properly will have several well-defined nulls and peaks in their response when acoustically added together. This is a classic case of non-coincident arrivals at the listener’s position and cannot be fixed with an equalizer!

You would have to splay the boxes for a very wide coverage pattern in order to add the horns together properly (depending on the horns, of course). There are many more enclosures with 60-degree horns than with 30-degree horns.
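To put a rough number on why those few inches matter (a simplified free-field illustration, not a measurement of any particular wedge): when the path lengths from two wedges to the same listening point differ by about one foot, the first deep cancellation lands near

first null ≈ speed of sound / (2 x path difference) ≈ 1,130 ft per second / (2 x 1 ft) ≈ 565 Hz

with further dips at roughly the odd multiples (about 1.7 kHz, 2.8 kHz, and so on). Halve the path difference and the whole comb slides up an octave, which is why sliding a wedge a few inches can move a null right out of the vocal range.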

Low Frequency Reality Check
Look around you. A reality check will tell you that if you have a relatively large house system with low frequency and sub-bass enclosures, your monitors will not be able to compete with the LF information on stage when everything is up to show speed.

Unless, of course, you want to turn everything up to “warp nine,” or add lots of sub-bass enclosures to your monitor rig. But this generally results in escalating levels, with the backline amps and then the house system turned up to overpower everything coming off of the stage. I think we all know what this leads to!

If you have to overpower the band with your stage rig, the house mixer will hate you and the show will suffer for it! (Just as it does if the band plays too loud.)

Use the low frequency information from the house system to fill out the bottom end in your “stage sound.”

If you’re carrying a smaller house system or playing on well-damped theater stages, this effect is not so prevalent and you can maintain a full bandwidth from your monitor system.

Pulling It Together
The best approach is to try to meld the backline amps, wedges and house loudspeakers into a system that all works together to attain the overall stage sound you are looking for.

To develop this environment, the spectral response of the mixes should be tailored to fill in what is not heard on stage from the backline amps and the house system.

This usually means a lack of nearby instruments and very low frequency content coming from the wedges. (A bonus for you!)

This is where the receptive players come in. You may have to point out the low frequency phenomena during a sound check, but it will be obvious to them if they listen.

Also point out the nearby instruments and how they may be heard without being very loud in their mix. Maybe even re-aim a stage amplifier to be more effective.

How many times have you seen guitar players wailing away with their speakers aimed at their rear-ends? Tilt them back and aim them at their heads. I promise they have no idea what kind of havoc they cause the house mixer about 75 to 100 feet away.
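For a sense of scale (simple time-of-flight arithmetic, not a measurement): sound covers roughly 1,130 feet per second, so a backline amp firing toward a mix position 75 to 100 feet away arrives about 66 to 88 milliseconds behind what the house loudspeakers are doing, on top of simply not being under the house mixer’s control. That late, uncontrolled arrival is the havoc.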

Of course this doesn’t work in every situation. It depends on the music, the venue and the players among other things.

But if you can make these principles work you can achieve the most clarity with the least volume in your wedges.

Use localization to help keep things clear on stage. It is easier to hear different instruments if they are coming from different directions. The fewer sources in any mix, the easier it is to hear them in a noisy environment.

Also consider the individual instruments and a mix containing all of them. You have a certain bandwidth in which to fit them.

It’s pretty easy if it’s just a violin and a tuba, but not so straightforward with several guitars and keyboards and drums. Work at making all of the instruments sound different and fill the available spectrum with more distinct differences between them.

If a player insists on a particular tone in his monitor, but it doesn’t work for the rest of your mixes, split the input into multiple channels on your desk so that you can tailor the sound for everyone.

Dan Laveglia is a long-time system engineer who has worked with Showco and Clair Brothers, serving top concert artists.

 

Posted by Keith Clark on 11/07 at 11:46 AM

Wednesday, November 06, 2013

Soundcraft Vi1 Speeds Up Training Times For Royal Holloway University of London

Digital console helps crew serve an average of 10 different productions a week

The Student Union at Royal Holloway, University of London hosts a variety of events at a multifunctional venue, including live music, club nights, conferences, theater productions, trade shows, and more. Its technical crew is responsible for approximately 10 different productions a week, and recently acquired a Soundcraft Vi1 console in order to train more students in front of house audio mixing.

“I purchased a different brand of digital console 18 months ago for our FOH training,” explains Karl Travers, technical manager of the Students Union. “My staff here consists of students and it was taking them quite a while to learn on that console, so we started looking at the Soundcraft Vi Series for a replacement. After working with the Vi1 I knew instantly that the user interface was far more straightforward and more intuitive for the crew to learn.”

The Vi1’s user interface has proven popular thanks to its widescreen Vistonics touchscreen, which retains the same ‘walk-up’ user-friendliness as the other Vi consoles. It displays all parameters for 16 channels side by side on a single 22-inch screen. The Vi1 also includes built-in Lexicon effects as well as graphic EQs on all output busses, eliminating the need for additional outboard equipment.

“I’ve found that the console UI is very user-friendly with a short learning curve, which has enabled me to get my staff trained in a short amount of time. This in turn leads to quick and confident operation during shows,” says Travers.

“In the last few weeks we’ve done about seven club nights, a few bands, live PA acts with MC’s and several conferences,” he concludes. “So far, the Vi1 has performed extremely well, and the user-definable pages make setup for different shows very easy. On top of that, its gain structure is sweet and requires little work to achieve the desired results.”

The Vi1 console was purchased through UK-based Arc Sound. Soundcraft is distributed in the UK by Sound Technology Ltd.

Soundcraft
Harman Professional

Posted by Keith Clark on 11/06 at 11:39 AM

Church Sound: Alternatives To Using Y-Cables With Source Devices (iPod, Laptop, CD Player)

Avoid burning out the outputs of your components

 
Question: Will anything bad happen if I use a Y-cable on my CD player or laptop to hook its two outputs into one input on my mixer? I’m running out of channels.

Answer: The answer is yes, probably. If not immediately, then some time in the future. What happens is that most modern audio gear has a very low output impedance, typically under 100 ohms. This is great in that it can drive audio over very long cable runs while ignoring interference from light dimmer buzz and cell phones, but bad in that a short circuit will cause its output transistors to put out too much current and overheat, eventually killing them.

But here’s the crazy thing…if you’re running the exact same signal out both the left and right outputs of your CD player, laptop, or iPod, say from a mono sound track, then there will be no current flow between the left and right output stages and all will be well.

However, if you then play a music track with a lot of dissimilar info on the left and right channels, say from a split-track song with music on the left channel and guide vocals on the right channel, then there will be essentially a short circuit current between the left and right output stages. This is very hard on the CD player’s and iPod’s electronics, and they will begin to overheat internally.

So if you only play these backing tracks once in a while or for only a few minutes at a time, then the output stages may never overheat enough to burn out. However, play these same backing tracks for an extended period of time (perhaps 30 minutes) and you’ll probably find that one of the outputs of your CD player, laptop, or iPod has been burned out. Not a good day for your gear. 
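To put rough numbers on what the Y-cable is doing (treating each output as an ideal source behind the roughly 100-ohm output impedance mentioned above, and ignoring any coupling capacitors): if the left output swings to +1 volt at the same instant the right output swings to -1 volt, the Y-cable ties them directly together and the only thing limiting the current is the two output impedances in series:

current ≈ (1 V - (-1 V)) / (100 + 100 ohms) = 2 V / 200 ohms = 10 milliamps

Compare that with the roughly 0.1 milliamp the same output delivers into a typical 10,000-ohm mixer input, and it’s easy to see why the output stages run hot on split-track material.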

The best way to combine two outputs into a common input is by using a box with special build-out resistors that limit this current. Whirlwind makes a box called the podDI that not only safely combines the two output signals from the sound source into a common input on the console/mixer, it also provides separate volume knobs so you can turn the left and right channels up and down in volume independently. (The podDI is pictured above with a Y-splitter.)

In addition, it provides a transformer-balanced XLR output that’s perfect for isolating the ground of your gear from the PA system and stopping that nasty power supply buzz that often occurs when using a laptop as a sound source that’s powered from its own 120-volt supply.

You can buy one for about $75 from Full Compass Systems here.
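For the curious, here’s the basic idea behind build-out resistors, shown with made-up values rather than the actual podDI circuit: put a resistor of, say, 1,000 ohms in series with each output before the two legs join. Now the same 2-volt left/right difference only pushes

current ≈ 2 V / (100 + 1,000 + 1,000 + 100 ohms) ≈ 0.9 milliamps

between the two output stages, a level they can supply all day, and each source sees a load of a couple of thousand ohms rather than a near short. The trade-off is roughly 6 dB of level loss at the summed output (the exact figure depends on the resistor values and the mixer’s input impedance), which is easily made up at the mixer’s gain control.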

Mike Sokol is the chief instructor of the HOW-TO Church Sound Workshops. He has 40 years of experience as a sound engineer, musician and author. Mike works with HOW-TO Sound Workshop Managing Partner Hector La Torre on the national, annual HOW-TO Church Sound Workshop tour. Find out more here.

Posted by Keith Clark on 11/06 at 10:12 AM

Monday, November 04, 2013

Church Sound: How To Create A Song Mix Blueprint in Five Steps

Your mixes will come together a lot faster and ultimately sound better
This article is provided by Behind The Mixer.

 


Have you looked at the set list for next weekend? Do you have any idea what songs you’ll be mixing? The standards, right? 

A worship team worth its weight in salt (that’s a lot of salt) will be rotating in new songs now and then. The musicians will practice their respective parts, the worship leader will have an arrangement selected, and as a team they will practice the song until it’s good enough for playing for the congregation. 

You are the final musician on that team, mixing all of their sounds together into a song lifted up in worship. What have you done to learn that song?

Following are the steps I take whenever I see a new song on the set list. I’ve mentioned before the importance of getting a copy of the recording the band will be using as their blueprint.

This list goes way beyond that. It’s a way of creating your own mix blueprint. It’s a way of ensuring you are just as prepared as the musicians when you mix the song for the first time.

1. Listen To The Song

Get a copy of the recording the band is modeling for style and arrangement. The worship leader will likely tell you something like “we’re doing the song 10,000 Reasons by Matt Redman the same way he has it on the 10k Reasons album.” You can jump onto Spotify or YouTube and look up the same version, if you don’t already happen to have it in your personal music collection.

Listen a few times to get the general overall song feel. Is it slow or fast? Simple or complex? Does it have a big sound or a “small set” feel? Get the big picture.

2. Create A Song Breakout Order

From the musical side of things, a song is arranged into several common areas. You might think of this as the verses and the chorus.  For your blueprint, start with the following six areas. This list can be expanded as I’ll soon discuss, but for now this is the best place to start.

Intro: Song intros can start in many different ways: all of the instruments at once, a slow drum beat, a lone rhythm guitar, or even a scripture reading over the instruments.

Verse: The verses of a song tend to have the same arrangement but can have a different number of instruments as a means of providing song movement.

Chorus: Choruses, like verses, can have slightly altered arrangements. A common arrangement change is the last chorus being sung without any instruments.

Bridge: Not all songs have a bridge. The bridge is often used to contrast with the verse/chorus and prepare for the return of the verse/chorus. It can have a time change and even a key change.

Instrumental: The instrumental section of a song can run a few measures or stretch into a long passage, depending on the arrangement.

Outro: The outro can have the same variety as the intro, or there may be no outro at all; for example, the song simply ends after the last chorus.

3. Listen And Fill Out The Breakout Basics

You know the general feel and flow of the song; now you need to sketch out the basic outline. You will need to adjust your breakout order if there are differences between verses or choruses.

For example, the second verse might have a different arrangement than the first verse. If this is the case, modify your notes such as:

Verse 1: Drums come in with only the snare and hi-hat

Verse 2: Full on instruments

Consider this example of breakout notes:

Intro: solo piano with singer reading a passage of scripture

Verse 1: Drums and bass added

Verse 2+: All instruments with only lead singer

Chorus: Backing vocalists used only in the chorus

Bridge: N/A

Instrumental: Piano over other instruments

Outro: Ends with acoustic guitar and piano

4. Listen For The Mix Details

It’s time to focus in on the mix details. Consider this sample of a breakout:

Intro: Piano leads/sits on top of rhythm acoustic guitar w/very heavy overall acoustic feel.

Verse: Drums and bass used in a gentle supportive way. Both instruments sitting far back in the mix. No backing singers. Snare distant in mix.

Chorus: Backing vocalists singing at the same volume as the lead singer (singing in harmony)

Bridge: N/A

Instrumental: Piano dominates the instrumental, push volume. Piano sounds bright.

Outro: Piano and acoustic guitar with piano ending first and then acoustic guitar finishes the last few bars of the song.

Note: Studio engineer/producer Bobby Owsinski has a short article here on the questions he asks himself when creating a song arrangement. While he’s focused on creating a new song for the FIRST time, they are good questions that can be applied to listening to a song as part of your mix prep.

In this step, you are noting where the instruments and vocals sit in the mix. You should have also noted any mix points, like “piano sounds bright.” 

You don’t need to write down “expect a 560 Hz cut on the electric guitar,” but you should write enough to describe what you’d expect to mix, if it’s a bit out of the ordinary or worth noting.

For example, in the song 10,000 Reasons, there is a distinct tom hit three times in a row. I’ve heard that tom described as having a “tribal drum” sound. That tells me I need it up front in the mix, and how to mix it.

5. Pick Out The Effects

This is the last step in creating your mix blueprint. Listen to the song and look for the ways in which effects are used. Then make a list of the instruments/vocals which have those effects used and describe how they sounded.

For many worship bands, the effects will stay the same throughout the song but if you want to copy an arrangement with effects changes, then go for it.

The Take Away

The musicians put in a lot of time preparing for the church service…and if they don’t, they should. You need to put in time preparing your mixing plans when a new song comes along. 

Listen to a copy of the song for the general feel. Create your breakout list with your song basics. Then go back and add in your mix notes. 

It’s really nice to stand behind the mixer during practice and look down at your mix notes for a new song. Your mixes will come together a lot faster and ultimately sound better because of your extra planning.

Ready to learn and laugh? Chris Huff writes about the world of church audio at Behind The Mixer. He covers everything from audio fundamentals to dealing with musicians.  He can even tell you the signs the sound guy is having a mental breakdown.

Posted by Keith Clark on 11/04 at 01:50 PM

Friday, November 01, 2013

RE/P Files: Quincy Jones & Bruce Swedien—The Consummate Production Team

Talking with the "dynamic duo" in October 1989

From the archives of the late, great Recording Engineer/Producer (RE/P) magazine, this feature offers an interesting discussion with a true “dynamic duo” of the recording world. Quincy Jones is interviewed first, followed by Bruce Swedien. This article dates back to October 1989. The text is presented unaltered, along with the original graphics.

If the word “professionalism” can be epitomized by one of the most successful producers currently working in the recording industry, that man must surely be Quincy Jones.

The reports of his humanity, care and response to the needs of his recording “family,” and an almost telepathic rapport with his favorite engineer, Bruce Swedien, truly make Quincy Jones a consummate producer.

During the many session hours that R-e/p spent with Quincy and Bruce in the studio, it became readily apparent that their complementary skills — Quincy’s proven track record as a musician, composer, arranger, and record producer, married with Bruce’s mastery of the recording process — have resulted in a production team whose numerous talents overlap to a remarkable degree.

Having worked with Bruce Swedien on so many innovative album sessions, including Michael Jackson’s Off The Wall, George Benson’s Give Me The Night, The Dude, and Donna Summer’s Summer of ‘82, it came as no surprise to anyone in the industry that Quincy Jones should make such a clean sweep of this year’s Grammys, collecting a total of seven awards, including five for The Dude alone.

The following conversations with this illustrious production team were conducted during tracking dates for Michael Jackson’s upcoming album Thriller, at Westlake Studios, Los Angeles.

This is the first in a two part series of the conversations between R-e/p, Bruce Swedien, and Quincy Jones. Stay tuned for the next installment where R-e/p’s Jimmy Stewart speaks with Bruce Swedien.

R-e/p (Jimmy Stewart): How do you first get involved with a particular recording project? For example . . . Michael Jackson.

Quincy Jones: We were working on The Wiz together, and Michael started to talk about me producing his album. I started to see Michael’s way of working as a human being, and how he deals with creative things; his discipline in a medium he had never worked in before.

I think that’s really the bottom line of all of this. How you relate to other human beings and build a rapport is also important to me; there’s an energy, a great feeling, when it happens between creative people.

I’ve been in some instances where I have admired an artist’s ability, but couldn’t get it together with them as a human being. To truly do a great job of producing an artist, you must be on the same frequency level. It has to happen before you start to talk about songs.

R-e/p (Jimmy Stewart): Then the important aspect, to your mind, is fostering a family feel during a project?

Quincy Jones: Yes. It’s a very personal relationship that lets the love come through. Being on the other side of the glass is a very funny position — you’re the traffic director of another person’s soul. If it’s blind faith, there’s no end to how high you can reach musically.

R-e/p (Jimmy Stewart): Is the special rapport you establish with an artist based on them saying something unique that triggers off an area in your creative mind?

Quincy Jones: That’s the abstract part which is so exciting. I consider that there are two schools of producing. The first necessitates that you totally reinforce the artist’s musical aspirations.

The other school is akin to being a film director who would like the right to pick the material.

As to what choice of production style I would adopt, your observations and perceptions have to be very keen. You have to be able to crawl into that artist, and feel every side of his personality —to see how many degrees they have to it, and what their limitations are.

R-e/p: Once that working rapport has been established, how do you plan the actual recording project?

QJ: I think you have to dig down, really, to where you think the holes are in that artist’s past career. I’ll say to myself, “I’ve never heard him sing this kind of song, or express that kind of emotion.”

Once you obtain an abstract concept or direction, it’s good to talk about it with the artist to see what his feelings are and if you’re on the right track. In essence, I help the artist discover more of himself.

R-e/p: Do you become involved with the selection of songs for the album?

QJ: On average, I listen to maybe 800 to 900 tapes per album. It takes a lot of energy! I hear songs at a demo stage, and would like to think that the songwriter is open for suggestions.

If I say, “We need a C section,” or “Why don’t we double up this section,” the writers with whom I’ve had the most success must be mature enough and professional enough to say, “Okay, I’m not going to be defensive about any suggestion you make.”

R-e/p: So what do you listen for in a song? The lyrics, melody, arrangement, instrumentation ...

QJ: I listen for something that will make the hair on my right arm rise. That’s when you get into the mystery of music. It’s something that makes both musical and emotional sense at the same time: where melodically it has something that resembles a good melody.

Again that’s intangible too, because it’s in the ears of the listener. So basically I’m saying… it transcends analysis. A good tune just does something to me.

R-e/p: Once the songs have been sorted, what runs through your mind prior to the studio session dates?

QJ: I try to get the feeling that I’m going into the studio for the first time every time. You have to do that because if we started to get to a stage where Bruce Swedien and I had a specific way of recording it wouldn’t work for us.

I’m sure some things overlap, because that’s part of our personality, but we try to approach it like every time is the first time: we’re going to try something so that we don’t get into routine types of procedures.

With Rufus and Chaka Khan I’ll do one kind of a thing, where we will have rehearsals at their home, and talk about things. Maybe even come in the session with everybody and do it like “Polaroids.”

That way you can hear what everything sounds like rough, and feel what the density, structure and contour of the song is all about.

Other times, like with The Brothers Johnson, we used to go in with just a rhythm machine, guitar and bass, and do it that way. We did the Donna Summer album with a drum machine and synthesizer, so that I could really focus on just the material. But with Bruce Springsteen everyone played live, as in a concert.

For George Benson’s album, Lee Ritenour came over and helped us with different guitar equipment to get some new sounds.

At the same time that Lee was there dealing with the equipment, and George was trying it out, Bruce Swedien came over for a whole week to just listen to George with his instrumentals and vocals, like a screen test.

R-e/p: You have obviously established a close affinity in the studio with Bruce Swedien.

How do the pair of you interact with one another, and how does he make the moves with you?

QJ: The thing is, what’s great about working with Bruce is I like him as a human being. In a funny way, we have the same kind of background.

The first record we did together was probably Dinah Washington. During that period of time we recorded every big band in the business. We did a lot of R&B in Chicago in those days ... a lot of big records.

Bruce’s first Grammy nomination was in 1962 for Big Girls Don’t Cry. He studied piano for eight years, and did electrical engineering in school.

Along the way he recorded Fritz Reiner and the Chicago Symphony Orchestra. And a lot of his time was also spent recording commercials.

So, from the sound aspect, and the musical aspect, the two of us kind of cover 360 degrees . . . well at least 40. We feel comfortable in any musical environment.

Bruce handled the pre-recording and shoot. He also designed some of the equipment for the location sound, did the post scoring, the dubbing and the soundtrack album.

To do all of it, that’s unheard of! Usually there are three to four different people to handle all those facets.

R-e/p: Is there a standard procedure you use for recording the various parts of a song?

QJ: Each tune is different. “State of Independence” from the Donna Summer album is a good example of a particular process I might use.

We started with a Linn Drum Machine, and created the patterns for different sections. Then we created the blueprint, with all the fills and percussion throughout the whole song.

From the Linn, we went through a Roland MicroComposer, and then through a pair of Roland Jupiter 8 synthesizers that we lock to. The patterns were pads in sequencer-type elements. Then we program the Minimoog to play the bass line.

The programs were all linked together and driven by the Roland MicroComposer using sync codes. The program information is stored in the Linn’s memory, and on the MicroComposer’s cassette.

At this point all we had to do was push the button, and the song would play.

Once it sounds right we record the structured tune on tape, which saves time since you don’t have to record these elements singly on tape with cutting and editing. This blueprinting method works great when you’re not sure of the final arrangement of the tune.

We can deal with between three and five types of codes, including SMPTE on the multitrack.

With all these codes, we have to watch the record level to make sure it triggers the instrument properly. Sometimes we had to change EQ and level differences to make sure we got it right.

R-e/p: Do you try and work in the room with the musicians, or stay in the control room with Bruce?

QJ: I like to work out in the room with each player, running the chart down, and guiding the feel of the tune.

We will usually run it down once, then I’ll get behind the glass to hear the balance and what is coming through the monitor speakers, which is the way it will be recorded and played back.

Once I get the foundation of the tune on tape, and know it’s solid and right, it is easy for me to lay those other elements to the song. It’s the song itself that’s the most important element we are dealing with.

R-e/p: Any particular “tricks of the trade” that you’ve developed over the years for capturing the sounds on tape?

QJ: Bruce is very careful with the bass and vocals, and we try to put the signal through with as little electronics as possible.

In some cases, we may bypass the console altogether and go direct to the tape machine.

Any processing, in effect, is some form of signal degradation, but you are making up for it by adding some other quality you feel is necessary — we always think of these considerations.

Bruce has some special direct boxes for feeding a signal direct to the multi-track, and which minimally affect the signal.

With a synthesizer we very often can go line-level directly to the machine, while with the bass you need a pre-amp to bring it up to a hot level.

Lots of times we will avoid using voltage-controlled amplifiers, because there will be less signal coloration. Also, if possible we avoid using equalization. Our rules are to be careful, and pay close attention to the signal quality.

R-e/p: The rhythm section is often considered to be the “glue” that holds a track together. What do you listen for when tracking the rhythm section?

QJ: I listen to the feel of the music, and the way the players are relating to that feel. My energy is directed to telling the players what I want from them to give the music its emotional content, and Bruce will interpret technically the best way in which to capture the sound on tape.

And we may try something new or different to highlight that musical character. Because Bruce has a good musical background, he is an “interpreter” that is part of the musical flow.

I like players who have a jazzman’s approach to playing. They have learned to play by jamming with lots of different people, and you can push them to their limits.

I don’t like to get stuck in patterns, so I need players who can quickly adjust to changes in feel. They must also be able to tell a story through their instrument. I look for players who can do it all! [Laughter]

R-e/p: You obviously have a keenly evolved sense of preparation for a recording project. How do you go about planning a typical day in the studio?

QJ: We do our homework after we leave the studio. Bruce will always have a tracking date planned out, with track assignments for the instrumentation, and so on.

For overdubbing, he will work out how the work-tape system will be structured, and Matt [Forger], our assistant, will be responsible for carrying out that task.

I zero in on what my day’s work is going to be by listening to the musical elements; how they interact and work in the song in my listening room at home.

Bruce does the same by working out in his mind the best method of capturing the music, and structuring these elements so they can be used in future overdubbing and mixing.

I keep a folder for each tune, and make notes as the tune progresses. It may be that changing a stereo image to mono is one way to strengthen an element: stereo for space; mono for impact.

If it’s a wrong instrument or color it will be redone. Bruce understands the music and the musical balance, and never loses his perspective.

Our communication after all these years working together is very spontaneous. This is one of the reasons for our success!

R-e/p: It’s obviously important to you that Bruce is able to read music. How does this help you in the studio?

QJ: The way we work with music charts, I can get to any part of the tune. It’s fast for drop-ins, and you never end up making a mistake. Bruce will make notes on his music chart to be used later in the mix.

R-e/p: How often do you listen to a work cassette during an album session?

QJ: I’ll listen over and over again to a song until it’s in my bones. Some songs have just a chord progression and no tune.

Others may be a hook phrase and a groove, and sometimes the song may call for a lot of colors. Each song is different . . . when it’s played on the radio and jumps, I’m happy.

To keep the session vibe up, I use nick names for the guys I work with: “Lilly” for Michael Boddicker; “Mouse” for Greg Phillinganes; “Boot” for Louis Johnson; “Worms” for Rod Temperton.

And Bruce has many nicknames; it depends upon the intensity in the control room.

If things are going a little rough and I need a hired gun, I call Bruce “Slim”! [Smiles across room at Bruce Swedien]

And the way I keep in touch with the tracking musicians is to use slang: “Anchovy” is a mess up; “Welfare Sound” is when you haven’t warmed up to the track or the tune; “Land Mines” are tough phrases in an arrangement.

R-e/p: How do you gauge that a track is happening in the control room?

QJ: I listen on Auratones for energy and performance at about 90 dB SPL. I’m coming from a radio listener concept.

I have two speakers set up in front of my producer’s desk. I don’t have to ask Bruce to move so I can listen to his set of speakers, and we never play the two pairs of speakers at the same time. When it’s a great take you can see through it!

R-e/p: With such wide experience over the years, what do you think of digital recording?

QJ: …that I think is necessary to have in a record. Digital sometimes gets a little too squeaky clean for me. But I know it’s going to improve, because it’s a wonderful direction.

R-e/p: With album sessions becoming more and more complicated, both technically as well as artistically, do you think a producer has to be a good arranger too?

QJ: I don’t know, because everybody produces with his strength.

That ability can come from the strength of an engineer, player, singer, instrumentalist, arranger, or a combination of these things.

R-e/p: As founder and president of your own record company, Qwest, do you find it hard sometimes to combine the creative ability of a producer, with the business side of running a label?

QJ: Let me give you some background. In 1960 I got in trouble with a jazz band I had on tour, and when I came home with my tail between my legs from Europe I took a job with Mercury Records for about seven years, in A&R, and eventually vice president.

During the course of that time I had to understand a whole different area of the record business that I wasn’t even aware of before.

It was a big company because Mercury merged with Philips, which is now Polygram, and we started Philips Records in this country.

It was an incredible education, because I used to think that all these companies get together once a week to plan how to get new artists on the label.

You should be so lucky that you get past being an IBM number on a computer with a profit and loss under your name or code number. That gave me an insight into understanding what corporate anatomy was all about.

Understanding the rules of the game is important for a producer with a huge company like Philips, which is dealing with raw products, television sets, vacuum cleaners, and all the rest. At that time we were doing $82 million a year worldwide, and music was only about 2% of the total.

R-e/p: So how do you communicate with the business person?

QJ: Somewhere along the way it’s got to make sense if it’s going to cost money. If you want to go to Africa and make a drum record, for example, you’re going to have to figure out how to get it done for the people who put up the money.

Somewhere along with your creative process you have to ‘scope out what the situation is, get your priorities straight, and don’t let that interfere with your creativity.

If they put a pile of money right in front of you, there’s no way to correlate the essence of what that means, and yet still tie it into the creation of music.

Being a record company president is a lot of responsibility, but it’s going to be okay. To become a successful record company president, you have to apply and reinforce your creative side with a business side, but you can’t lead with the business side.

Jimmy Stewart’s interview with Bruce Swedien follows.

R-e/p: How do you see your role as engineer: working behind the console and handling the technical side of the recording process?

Bruce Swedien: Well, I guess I have to go back just a little bit in my background to really answer that question.

Number one, working in Chicago in the jingle field was tremendous training for getting a very fast mix, and being ready to roll because, quite frequently, jingle sessions only last an hour.

I recorded all the Marlboro spots, where it wasn’t unusual to have a 40-piece orchestra scheduled for nine o’clock downbeat, and literally be ready to roll at 9:05.

And when the band is rehearsing, I’m getting little balances within the sections: when the rhythm section is running a certain thing down, I’ll use that time to get the rhythm balance ready. It happens very quickly.

I guess I learned a lot about not wasting any movement or motion from the musicians in Chicago. In the early days of the jingles business — about 1958 through ‘62 - I worked with probably some of the finest musicians in Chicago at that time.

They were masters at making the most out of one little phrase. As they were putting the balance together and rehearsing parts, I would be getting it together very quickly behind the console. That was really great training for me.

R-e/p: Too much of an emphasis on the technical side of recording is often said to intimidate an artist in a studio.

How do you try and get into the musical groove with the musicians and the producer?

Bruce Swedien: You should be prepared down to the last detail, and get to the studio early. Start setting up early; there will always be things to do that you’re not apt to think of unless you have enough time.

If your session starts at 9:00 AM, be there at 8:00 AM - give yourself at least an hour to set up and prepare the average sized session.

Reduce as much of the routine of your work to a regular habit, and always do each job associated with the session in the same order. By reducing all these mundane mechanical aspects of recording to habit, your mind will be free to think of the creative facets of your work.

R-e/p: What type of musician do you like to record?

Bruce: A musician who gives it up… doesn’t hold back. Sometimes that’s a rare quality. So many musicians go into studios and they kind of tippy toe around, or they just don’t want to commit themselves.

I listen for the real sound of the instrument and player, not the interpretation.

I like to get to know the player and learn his sound. Ernie Watts, he’s my kind of player… disciplined! His energy is instant!

He never holds back; he’ll get it on the first or second take, because he’s so used to giving it up. Most of the solos on his album Chariots of Fire are first takes with the band. And that’s unusual for today . . . really unusual!

R-e/p: Obviously the cue/headphone mix is important to musicians in the studio. How do you help them get into the track?

Bruce: If the instrumentation is small enough, I’ll split the Harrison console [at Westlake Studio] and send to the multitrack with half of the faders, and use the rest for returns.

In that way you also get the cue mix on the multitrack return faders. It’s easier to see what you’re doing with the sound using the faders for the cueing mix, as opposed to monitor pots.

R-e/p: Quincy commented that it’s important to him that you are able to read music. What do you consider that a young engineer, in particular, should know about the musical side of recording to be a master engineer?

Bruce: I would say the best training is to hear acoustic music in a natural environment. Too many of today’s young engineers only listen to records. When a natural sound or orchestral balance is required, they don’t know what to do.

My folks took me to hear the Minneapolis Symphony every week all through my childhood, and those orchestral sounds have been so deeply imprinted that it’s very easy for me to go for an orchestral balance when that’s necessary.

And I’m talking about the whole range — even a synthesizer that is a representation of the orchestra. But, to put that sound in its correct placement in a mix is not easy.

My first advice would be to study the technical end first, so that you know the equipment and what it will do, and what it won’t do. Then hear acoustic music in a natural environment to get that benchmark in your mental picture.

I think that it is very important for an engineer to understand a rhythm chart or lead sheet. I always make up my own chart with bar numbers on music paper and, as the song develops, I’ll add notes and sometimes musical phrases that will be needed for the mix.

R-e/p: Is it important to have a relative sense of pitch?

Bruce: No question about it… an absolute must. And I think a knowledge of dynamics is important too.

It’s not unusual in classical music to have a 100 dB dynamic range from the triple pianissimo to triple forte, and we cannot record that wide a range with equipment. In addition, it’s virtually impossible with most home playback systems to reproduce that dynamic range.

So, in recording we frequently have to develop a sense of dynamics that does not necessarily hold true with the actual dynamics of the music.

It’s possible to do that with little changes of level — what I would call “milking” the triple pianissimo by maybe moving it up the scale a little bit. And when you get to the triple forte maybe adding a little more reverb or something, to give the feel of more force or energy.
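To put rough figures on the gap Swedien is describing (the tape number below is an assumed example, not something from the interview): every 20 dB is a factor of ten in voltage, so a 100 dB musical range spans a 100,000-to-1 ratio between the softest and loudest passages. An analog multitrack with, say, a 65 dB signal-to-noise ratio can only capture about a 1,800-to-1 ratio (10 to the power of 65/20), which is why the quietest passages have to be “milked” upward and the loudest ones shaded down to fit.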

You see so many guys in studios with their eyes glued to the meters. I’ve never understood that. Take the clarinet, for instance, which can play softly in the sub-tone range; just an “understanding.”

An engineer has to know how to deal with a player when the interpretation of the music calls for something that soft. On the VU meter, which only has about 20 dB of range, you don’t even see it. In those extremes, your ear is really on its own.

You can’t be the type of guy who has his eyes focused on the VU meter. It’s meaningless, absolutely meaningless.

The ear has to have a benchmark so you know where that dynamic should fall in the overall dynamic range. Quincy is always very aware of that, which is a real treat for me.

R-e/p: Having sat in on several of your sessions, I couldn’t help but notice that you and Quincy have your own jargon in the studio.

Bruce: You know Quincy and I don’t talk much when we work. We spend a lot of time listening: “More Spit” — EQ and reverb; “More Grease” — reverb; “More Depth” — enhance the frequency range, give it more air in the reverb; “Make it Bigger” — beef up the stereo spread; “More Explosive” — bring the level down, add some reverb, adding a trail after it. Quincy picks the sound or effect; I put the thought into application by choosing the “color,” if you like.

R-e/p: While there may be no hard and fast rules in the studio, have you picked up any tips about how to work creatively with a producer?

Bruce: An engineer’s important responsibility is to establish a good rapport with the producer. Nothing is a bigger turn off in a studio than a salty, arrogant personality. I have seen this attitude frequently in an engineer, and heard him describe himself as “Honest.”

It is very important for the engineer to know what a particular producer favors in sound. Producers vary somewhat in interpretation of a style, or musical character.

R-e/p: How does the engineer set a good vibe with the producer and the musicians?

Bruce: It’s a two-way proposition. I’ve been in situations early in my career — fortunately I don’t have to deal with that any more — where producers were not inclined to allow the engineer to be involved in a recording project.

I don’t think that’s the case anymore, at least in the upper level of the business, because it’s a fact that engineers do contribute a lot of useful input.

Yes, it’s absolutely true that an engineer can help an incredible amount in the production of music.

R-e/p: After the tracking and overdubs, how do you set yourself up for the mix?

Bruce: I’ll have many multitrack work tapes. For example, on Donna Summer’s tune “State of Independence” I had eleven 24-track tapes — each tape has a separate element.

Then I combine these tapes into stereo pairs. In the case of synthesizer, horns, background vocals, strings, sometimes I will use a fresh tape, or there are open tracks on the master tape.

The original rhythm track is always retained in its pure form. I never want to take it down a generation, because the basic rhythm track carries the most important elements, and I don’t like to lose any transients.

With synthesizer or background vocal you could go down a generation without losing quality. I call this process pre-mixing, and we use whatever technical tricks it takes to retain sound quality.

We pre-mix the information on two tapes, and bring them up through the console. Having established the balance all the way through the recording process, we then listen to all the elements, and Quincy will make the decision based on what the music is saying.

We usually have more than we need. This stage is editing before we master —listening to everything once saves time, and we don’t have to search for anything.

Sometimes though, we may have to go back and re-do a pre-mix if the values are not right. For example, a background vocal part may have the parts stacked, and one of these parts might be too dominant.

Or sometimes everything sounded fine when we were recording the element, but with everything happening on the track the part gets lost.

Then we go back and re-establish a new balance by pre-mixing that particular element again. We also pay close attention to psychoacoustics — in other words, what sound excites the listener’s ears. These are the critical things in the mix that will make the difference between a great mix and a so-so mix.

Also, we are sensitive to the reverb content. Quincy may ask me to bring more level up on a given element. I may suggest adding more reverb, which will create more apparent level.

I establish what the mix will be, and Quincy will comment on the little changes and balances; these are the subtle differences that make for a great mix. We overlap our skills. Quincy becomes the navigator, and I fly the ship!

R-e/p: Does it take very long to get a mix that you both like?

Bruce: Quincy will work with me for the first few days until all the production values are made. Then we close down the studio and I will polish the mix until I like it.

After I get it right, Quincy receives a tape copy for the final okay. Because of Quincy’s business phone calls we have found that to be the best way to finish a mix. We know the mix is right when we’ve made the musical statement that we set out to make.

R-e/p: Do you use automation during the final mixing?

Bruce: Yes, because it gives you more time to listen by playing the mix away from the basic moves. Automation is a tool I use for re-positioning my levels. Then I can make my subtle nuances in level changes to get the right balance.

For monitoring the final mixes I am a firm believer in “Near-field” or low-volume monitoring. Basically, all this requires is a pair of good-quality bookshelf speakers. These are placed on top of the desk’s meter panel, and played at a volume of about 90 dB SPL or less. My reason for using Near-field monitoring is twofold.

The most important reason is that by placing the speakers close to the mix engineer, and using an SPL of no more than 90 dB, the acoustical environment of the mixdown room is not excited a great deal, and therefore does not color the mix excessively.

Secondly, a smaller home-type bookshelf speaker can be used that will give a good consumer viewpoint. My personal preference for Near-field monitors is the JBL 4310; I have three sets.
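Rough numbers make the first point concrete (assuming simple inverse-square behavior for the direct sound, which is only an approximation indoors): direct level falls about 6 dB for every doubling of distance, so speakers on the meter bridge roughly three feet away only need to produce about 90 dB SPL at that distance, while main monitors ten feet away would have to run on the order of 10 dB hotter at the source to deliver the same 90 dB at the mix position. That extra acoustic power is exactly what ends up exciting the room.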

Each musical style has its own set of values. When mixing popular dance music, for example, we must keep in mind the fact that the real emotional dance values are in the drums, percussion and bass, and these sounds must be well focused in the mix.

Making a forceful, tight, energetic rhythm mix is like building a house and making sure the foundation is strong and secure. Once the rhythm section is set in the mix, I usually add the lead vocal and any melody instruments. Then, usually the additional elements will fall in place.

For mixing classical music or jazz, however, an entirely different approach is required.

This is where the mixing engineer needs a clear knowledge of what the music to be mixed sounds like in a natural acoustical environment. In my opinion, this one area is where beginning engineers could benefit their technique a great deal.

It is absolutely essential to know what a balanced orchestra sounds like in a good natural acoustical environment. Often, the synthesizer is used to represent the orchestra in modern music. A knowledge of natural orchestral balance is necessary to put these sound sources in balance, even though traditional instruments are not necessarily used.

R-e/p: You have provided us with a studio setup plan of the recent Donna Summer sessions at Westlake. How do you plan the tracking and overdubs?

Bruce: I generally record the electric bass direct. I have a favorite direct box of my own, which utilizes a specially custom-made transformer. It’s very large and heavy and, to my ear, lends the least amount of coloration to the bass sound, and transfers the most energy of the electric bass on to the tape.

From my own personal experience though, active direct boxes are very subject to outside interferences, such as RF fields — you can end up with a bass sound that has a lot of buzz or noise on it.

The miked electric bass technique alone usually does not work very well, primarily because there are very few bass amplifiers that will reproduce fundamental frequencies with any purity down to the low electric bass range. In jazz recording the string bass is always separately miked.

My favorite mike is an Altec 21-B condenser, wrapped in foam and put in the bridge of the instrument; I own four of these vintage mikes that I keep just for this purpose. You also can get a good string bass pick-up with an AKG 451, placed about 10 inches away from the fingerboard, and not too far above the bridge.

Bruce: Quincy came up with the term to describe the way I work — my “philosophy for recording music” if you like.

To be more specific, it’s really my use of two multitrack machines with SMPTE codes — “Multichannel Multiplexing.”

Essentially, by using SMPTE timecode I can run two 24 track tape machines in synchronous operation, which greatly expands the number of tracks available to me.

Working with Quincy has given me the opportunity to record all styles of music. With such a variety of sounds to work with, I could see that single multitrack recording was not enough to capture Quincy’s rich sounds.

I began experimenting with Maglink timecode to run two 16-track machines together in sync. This offered some real advantages, but since then I have expanded my system to use SMPTE timecode and two 24-track tape machines.

The first obvious advantages that come to mind are: lots of tracks, and space for more overdubbing. With a little experience I soon found that the real advantage of having multiple machines with Quincy’s work is that I can retain a lot more true stereophonic recording right through to the final mix.

An additional major advantage is that once the rhythm tracks are recorded, I make a SMPTE work tape with a cue rhythm mix on it, and then put the master tape away until the mix. In this way we can preserve the transient response that would be diminished by repeated playing during overdubbing.

Quincy usually has the scheme for the instrumentation worked out for the song so we can progressively record the elements on work tapes. For example: Work tape A may have background vocals; Work tape B lead vocals; Work tape C horns and strings; and Work tape D may have 10 tracks of synthesizer sounds to get the desired color.

All of these work tapes contain a pitch tone, SMPTE timecode, bar number cues, sometimes a click track, and a cue rhythm mix.

R-e/p: What kind of interlock device do you use to sync the multitracks?

Bruce: We use two BTX timecode synchronizers. A BTX Model 4500 is used to synchronize the two multitrack machines, and I keep the 4500 reader on top of the console in front of me to provide a SMPTE code readout. We work with real time from the reader, and don’t depend on the auto-locator during the work tape stages.

There are times when I’ll use the “Iso” mode on the 4500 to move one element on a tape to a different place in the tune.

Say, for instance, you have a rhythm guitar part that isn’t tight in a section; I’ll find it on the slave work tape and move it to the new location on the master work tape.
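Editor’s aside: the timecode arithmetic behind that kind of move is straightforward, even though the BTX hardware did it for you. The sketch below (Python, purely illustrative and not the synchronizer’s actual firmware; the SMPTE addresses are hypothetical) shows how the offset between a part’s original address on the slave tape and its destination on the master could be computed at 30 frames per second.

```python
# Illustrative only: compute the SMPTE offset needed to "fly" a part
# from its original address to a new destination (30 fps, non-drop).
FPS = 30

def smpte_to_frames(tc: str) -> int:
    """Convert 'HH:MM:SS:FF' to an absolute frame count."""
    hh, mm, ss, ff = (int(x) for x in tc.split(":"))
    return ((hh * 60 + mm) * 60 + ss) * FPS + ff

def frames_to_smpte(frames: int) -> str:
    """Convert an absolute frame count back to 'HH:MM:SS:FF'."""
    ff = frames % FPS
    ss = (frames // FPS) % 60
    mm = (frames // (FPS * 60)) % 60
    hh = frames // (FPS * 3600)
    return f"{hh:02d}:{mm:02d}:{ss:02d}:{ff:02d}"

# Hypothetical addresses: where the rhythm guitar part sits on the slave
# work tape, and where it needs to land on the master work tape.
original = smpte_to_frames("00:02:14:12")
destination = smpte_to_frames("00:03:02:12")

offset = destination - original
print(f"Slave machine offset: {offset} frames ({frames_to_smpte(abs(offset))})")
```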

R-e/p: How long does it take to make a work tape?

Bruce: We start by adding the SMPTE code, and I’ll make a few passes with a mix until I like it. Then we record the pre-mix onto the new work tape.

I’m very fussy about the sound, and we’ll listen back and forth between the master and the slave tape to make sure the sonics match before we move on to the next work tape.

I always want Quincy and the dubbing musicians to hear my best. It takes about three hours per work tape to finish the job. To keep all the tape tracks in tune, we also calibrate tape speed against a digital readout.
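Editor’s aside: the reason speed calibration matters for tuning is simple arithmetic, since pitch shifts directly with tape speed and a fraction of a percent is audible. A minimal sketch of the relationship (Python; the 0.3% error is a hypothetical example, not a figure from the sessions):

```python
# Illustrative only: how a small tape-speed error translates into pitch error.
# A speed ratio r shifts pitch by 1200 * log2(r) cents (100 cents = one semitone).
import math

def speed_error_in_cents(speed_ratio: float) -> float:
    """Pitch offset, in cents, for a given playback/record speed ratio."""
    return 1200.0 * math.log2(speed_ratio)

# Hypothetical case: a machine running 0.3% fast relative to the
# digital speed readout used as the reference.
ratio = 1.003
print(f"{speed_error_in_cents(ratio):.1f} cents sharp")  # about +5.2 cents
```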


Editor’s Note: This is a series of articles from Recording Engineer/Producer (RE/P) magazine, which began publishing in 1970 under the direction of Publisher/Editor Martin Gallay. After a great run, RE/P ceased publishing in the early 1990s, yet its content is still much revered in the professional audio community. RE/P also published the first issues of Live Sound International magazine as a quarterly supplement, beginning in the late 1980s, and LSI has grown to a monthly publication that continues to thrive to this day. Our sincere thanks to Mark Gander of JBL Professional for his considerable support on this archive project.


Sound Operators & Musicians, Working In Harmony

Avoiding the "deadly sins" that separate tech and creative sides

Over the past several years, I’ve had the privilege of being a musical performer and worship leader, as well as a church sound engineer and technician.

This has provided unique perspective from both sides of the platform; what I’ve learned on one side has helped me do better on the other side, and vice versa.

Through this process, I’ve noted several problems and solutions that apply to the technical side, the creative side, and both. I’ve refined these observations and practices into what I call the “Seven Deadly Sins.” Let’s get started.

Deadly Sin #1: Messing with the stage mix. Few things are more frustrating for a musician than a bad mix on stage. We’re a picky lot, and further, when an acceptable stage mix is achieved, we don’t want it to change.

Therefore, the first rule for the sound mixer is avoid adjusting input gain once a service has started. Even a slight adjustment can be a HUGE detriment.

Also, please don’t mess with monitor sends during a service. Certainly there have been times when the stage is too loud - often, we musicians tend to play louder when the adrenaline starts flowing. (Of course, others actually get timid and play/sing softer.)

Resist the temptation to make major changes mid-stream; not only will this distract the musicians, but in all likelihood it will also make things even worse from a sonic perspective.

Instead, work on preparation that will eliminate these problems before they start. Pay close attention to how things sound during rehearsal, how sound is reacting with the room, and project what will happen when the room is full for services.

And, pay even closer attention during services, making observations and notes about what’s happening at “crunch time,” when true performance characteristics are being exhibited and an audience is on hand.

Of course, this is easiest to do when you’re using the same system in the same room with the same musicians. In most cases, the first two variables don’t change, and with respect to the third, note the techniques and mix approaches that result in the most consistency, regardless of who’s playing or a particular style.

Observe, experiment, formulate and then act - in advance.

Deadly Sin #2: Trusting untrained “critics.” While serving as director of technical ministries at a large church, I had the privilege of working with a talented director of worship. However, he had an annoying trait of trusting an elderly lady of the congregation to provide critiques of my house mix and overall sound quality.

She would wander through the sanctuary during rehearsals, listen and then report back to him. My goodness - this is an individual who had no experience with sound or music and who couldn’t even make the cut during choir tryouts! Talk about demoralizing…

The bottom line is that this person’s opinion mattered just like any other member of the congregation, but in no way was she qualified to serve as a reference. Her suggestions were useless, and actually would have been detrimental had I chosen to follow them.

The lesson? Sometimes musicians and worship leaders find it difficult to trust the sound people. But please, let logic prevail. In most cases, leaders of a church technical staff have the necessary experience to do their jobs correctly.

If sound people seem to be lacking in ability and knowledge, they must pursue proper training. If it seems that they lack the “ear” to provide a properly musical mix, then they need to fill another role while others who do have this particular talent should be encouraged to put it to use.

And church sound staff members must always be honest with themselves and constantly seek to improve their skills any way possible.

Deadly Sin #3: The word “no.” Musicians often possess a certain confidence that sometimes can border on arrogance. We get an idea or vision and we’re quite sure it can come to life, and with excellent results. This is simply a part of the creative process.

It’s up to the sound team to foster this creative spirit, not squash it. Therefore, the word “no” should fall toward the bottom of the response list.

For example, if a musician asks for an additional drum microphone, the answer should not automatically be “no.” This suggests that the sound person has no care about the creative vision, no care about striving for improvement.

Instead, how about a response along the lines of, “I’ll see what I can do. And, if you don’t mind my asking, what do we want to achieve with this extra mic?” This is a positive, can-do attitude that’s supportive and can be infectious.

Also, by inquiring further, the sound person may be able to help deliver a solution better suited to achieve the new creative vision. Maybe it’s not an extra drum mic that’s needed but another approach, like additional drum isolation.

The point is to ask, which begets learning, which begets support and collaboration, which begets a better performance.

Deadly Sin #4: Unqualified knob “twiddlers.” Musicians like knobs and blinking lights, so naturally, they want to fiddle with the sound system. The confidence/arrogance mentioned previously plays into this as well - we believe there’s no task we can’t be great at, regardless of lack of training and experience.

But the reality is that musicians usually know just enough to be dangerous when it comes to operating a sound system. The same goes for house and monitor mixing.

The irony is that musicians indeed can be among the best “sound” people in the congregation, perhaps better than many sound technicians, due to their musical ear.

However, too many cooks spoil the broth. The solution is fairly simple and straightforward: someone is either a musician or a sound tech/mixer for a given service.

If you’re a musician, this means hands off the sound gear. If you’re the mixer, do the best job possible, and support the musician. One individual does one thing, the other does the other thing, and you meet in the middle with mutual respect and collaboration, striving together to make everything better.

Deadly Sin #5: Not holding one’s tongue (or, how I offered a suggestion and made things worse…).

When I’m mixing, I want everything to sound as good as possible.

Sometimes, however, things are happening on stage that just seem to get in the way of the sonic nirvana that’s etched in my brain.

Perhaps it’s a guitar that’s too loud, perhaps it’s an off-key singer, or perhaps “everything” just isn’t working. (Mama told me there’d be days like this, and mama was right!)

Should we feel some obligation to offer some advice? Of course. Should we act on this feeling? Well…

Telling a musician he or she isn’t sounding too good is kind of like telling an artist you don’t like his/her painting.

How many times have you looked at a painting and asked, maybe sarcastically, “They want how much for this?” I may not like someone’s “art,” but in the minds of many, including the creator of that art, it’s serious, meaningful and perhaps brilliant.

The moral of the story is to hold one’s tongue and consider the big picture. Ask the question: will our ultimate goal be furthered if I suggest a change? (No matter my intentions – how will this input be received?)

The bottom line is that there are facts, and there are opinions, and the truth often lies between. Often you can lose more than could ever be gained by pushing your own agenda, no matter how “right” you may be.

Tossing out opinions can also ruin the team spirit so vital to the mission, and yes, also the joy of praise and worship. And showing distrust and/or lack of respect for others may lead the worship leader to question your own goals, agendas and visions.

Obviously there are exceptions. If a guitar is just so loud that you can’t create a good mix below 110 dB, best to gently encourage a change.

If a singer is off-key to a noticeable degree, maybe mention it to the worship leader, subtly and behind closed doors. If the leader agrees, change becomes his/her responsibility.

I’ve learned a lot from talented production people. They’re always positive, always put full effort into their work, and always have an attitude of appreciation toward everyone else they work with.

This attitude transcends minor problems, leading everyone to follow the example, resulting in a better production. It’s a self-fulfilling prophecy, one attained through the power of encouragement and positive thinking.

Deadly Sin #6: Being negative during a service. Sometimes things just don’t go right in a given service. But in virtually all cases, it’s not because every single individual isn’t trying their best, applying their heart fully.

The worst thing that can happen on these days is to draw attention to the problems. This is especially important for worship leaders to keep in mind.

Never apologize for bad sound during a service. If it’s that bad, people will notice without anything being said.

Rather, concentrate on making it through that service, and address problems afterward. Often, the vast majority of the audience doesn’t even notice problems until they’re pointed out.

Now, how best to address significant sound problems? The fact is that today’s cars often have better sound than most churches. It’s time to change that. Get the sound people training, and get them the equipment needed to make things work.

You can spend days (weeks, months and years!) talking about how to fix sound problems. In fact, as a sound contractor, that’s how I occupy most days.

The best (and only) way of solving serious sound problems is to work with a qualified consultant and contractor. Select these individuals carefully, and bring them in as part of your team.

And don’t criticize others on your team for things that - in all likelihood - aren’t even their fault!

Deadly Sin #7: Assuming the other person is capable of understanding your thought process.

In 99 percent of churches, technical people and music people are like fire and ice. The logical mind and the creative mind. (Thank God for the fact that we are all doing this for a higher purpose or we would have killed each other years ago!)

We all need to learn how to communicate better. This is especially important because the way worship services are being done is changing, in many cases quite radically in terms of employing production. This requires more people be involved both as performers/contributors and in technical/creative support.

If we don’t communicate, we won’t enjoy what we’re doing and therefore we won’t participate. The church has a lot of work to do, and we can ill afford to lose people who desire to help out.

How do we start to understand each other’s thought processes? Drum roll, please…

I know you’re probably looking for a magic approach or series of steps to achieve better understanding, but in my experience, it all comes down to spending time together.

Hang out, fellowship, pray, study, talk, and practice together. Technicians, learn to play an instrument. Musicians, develop an understanding of sound.

One final piece of advice. I worked with a church here in Michigan - eventually my wife and I started attending there - and I became involved as a musician and technical advisor. This church had constantly battled technical difficulties and had learned to accept mediocre (at best) sound.

They moved into a new facility and purchased some pretty nice equipment expecting great things. Indeed there were improvements, but sound still wasn’t where we wanted it to be.

I suggested that the sound staff attend rehearsals, and after three months, the difference was astonishing.

And not only did sound improve dramatically through better understanding and coordination, but we also had great fun!

Rehearsals didn’t consist of just musical practice. It was “practice time” and “small group time,” all in one. Everyone became friends and co-developed a shared, common goal of excellence through cooperation and understanding.

We were all truly part of the worship team, and that sense of unity continues to grow to this day. The simple act of inviting the sound people to rehearsals turned out to be the biggest improvement the music department has experienced.

Most importantly, beyond the technical improvements themselves, it changed attitudes and opened minds.


Thursday, October 31, 2013

Avid S3L Helps Deliver Legendary Performances At Stanford Jazz Festival

Engineer Lee Brenkman strategically used the layers of S3L’s compact surface to handle a variety of mixing tasks

Founded in 1972, the Stanford Jazz Workshop has been nurturing talent for over 40 years, bringing in some of the world’s greatest artists to mingle with students of the Jazz Camp by day, followed by performances at the Stanford Jazz Festival by night.

Countless jazz greats have participated over the years, and this year was no exception, with Herbie Hancock, Dr. Lonnie Smith, and Stanley Clarke just a few of the highlights of the nonprofit organization’s 42nd season.

Most of this year’s performances were held in Stanford’s Dinkelspiel Auditorium, with Bay Area live sound veteran Lee Brenkman taking on sound reinforcement responsibilities again. Brenkman, who has been associated with the Jazz Festival for over a decade, also heads up sound at two iconic San Francisco clubs—The Great American Music Hall and Slim’s.

This year, although he faced many of the same unique challenges that plague the festival, Brenkman found a new solution that provided the workflows and sound quality he needed.

“One thing about the Stanford Jazz Festival,” he explains, “is that time is short. Even though there’s usually just one band per concert, they use the stage of the venue as a classroom until 5 pm, which means that between 5 pm and 6:30 pm, we have to turn it around into a performance space again.”

A longtime user of Avid live systems at the two clubs, Brenkman chose the new Avid S3L for this year’s festival, as the modular, networked system enabled a simplified setup with the best possible sound quality.

“All the festival techs were really fascinated with the system, really liking the size, the Cat-5 snake—all the things that make setting up and tearing down a system [less of] a drag,” he says. “We ran a couple of runs of Cat-5 and were able to keep the snake in place, just striking the stage boxes at the end of the night to get them out of the way of the classroom kids. Just changing out our usual console for this was an enormous improvement in sound quality—it was really audible. Everybody agreed that it just sounded noticeably better.”

As the primary sound engineer for the festival, Brenkman strategically used the layers of S3L’s compact surface to handle a variety of mixing tasks.

“I mix for the house, I mix the monitors, and I’m doing a completely separate mix for the recording, because at Dinkelspiel [Auditorium], for example, the amount of trap drums I need on the recording is much more than what I need in the auditorium,” he explains. “So what I did is assign all the head amps to two layers. The top layer of 16 [channels] was for the PA, and then I could switch to [channels] 17–32, and those were my recording mix. On average, I was doing four monitor mixes, and in some cases, six. I did not feel at any time that the console was too complicated to grab at something fast.”

The S3L’s integrated recording features matched up well to Brenkman’s other responsibility at the festival—capturing the performances of the jazz legends each night.

“Part of the job is to do archival recording of every concert, and just coincidentally, the Stanford [University] Archive of Recorded Sound, which is where [the files] all eventually go, their preferred format for live performances is 24-bit, 48 kHz WAV files. Well guess what? I’ve got a USB key that I can [plug directly into the S3L System and create recordings] that I dump onto a hard drive at the end of the season and hand to them. God it’s an incredibly capable little system.”
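Editor’s aside: for anyone handling a similar archive hand-off, it’s easy to script a quick check that every file really is 24-bit/48 kHz WAV before delivery. A minimal sketch in Python, assuming a hypothetical folder of concert files (the standard library’s wave module reports 24-bit PCM as a 3-byte sample width; note that some older Python versions won’t open WAVE_FORMAT_EXTENSIBLE files):

```python
# Illustrative only: verify that archive files are 24-bit / 48 kHz WAV
# before handing them off. The folder name is hypothetical.
import wave
from pathlib import Path

ARCHIVE_DIR = Path("stanford_jazz_2013")  # hypothetical folder of concert files

for wav_path in sorted(ARCHIVE_DIR.glob("*.wav")):
    with wave.open(str(wav_path), "rb") as wf:
        rate = wf.getframerate()    # sample rate in Hz
        width = wf.getsampwidth()   # bytes per sample (3 = 24-bit)
        ok = (rate == 48000 and width == 3)
        print(f"{wav_path.name}: {rate} Hz, {width * 8}-bit -> {'OK' if ok else 'CHECK'}")
```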

Avid
