Mixer

Monday, January 13, 2014

Soundcraft Vi4 Consoles Foster Collaboration For LIVE ART: Tree of Life In Virginia

Consoles offer unique capabilities to overcome several production challenges

LIVE ART: Tree of Life is a unique collaboration that brings together recording artists, musicians and children at the 3,200-seat Landmark Theater in Richmond, VA. Virginia-based Soundworks handled live sound for the event, utilizing Soundcraft Vi4 digital consoles at both house and monitors.

Produced by the School of the Performing Arts in the Richmond Community (SPARC), this winter’s LIVE ART: Tree of Life featured Grammy Award winner and SPARC alumnus Jason Mraz, along with k.d. lang, Christina Perri and other artists performing with more than 200 children of all abilities.

Any show with multiple performers is always a challenge, but the LIVE ART: Tree of Life event presented an unusual logistical hurdle: the monitor console was located behind some of the scenery, which blocked monitor engineer Bryan Hargrave’s view of the pit band and the bands and performers at one side of the stage.

The fact that the house band was positioned in the orchestra pit, right in front of the main PA system, did not make his or FOH engineer Grant Howard’s job any easier. A group of children in the box seats at stage left were also in front of the PA and had to be miked for the show.

Thanks to some careful aiming of the PA speakers and the Soundcraft Vi4’s functionality and its ViSi Remote iPad control, the show went off without a hitch even though it was, in the words of Howard, “like performance art, musical theater and bands playing all rolled into one,” and mixed seasoned veterans with children with little or no experience in front of an audience.

Soundworks used 62 inputs on the FOH console and configured the outputs with left/right/subs on an aux configuration for the mains, plus two more feeds for the under-balcony area and a feed dedicated to the hard of hearing. In addition to the complex live mixing duties, it was necessary to generate 62 channels of audio for a multi-track recording to be used for a planned PBS documentary film.

“It was about as simple a hookup as I’ve ever seen for such a complex routing scheme,” Howard notes. “We just plugged in the MADI interface, turned on the consoles, assigned things accordingly, and that was it.”

“The ability of the Vi4 to be controlled by an iPad was an absolute lifesaver,” adds Hargrave. “In any other situation, not being able to see the performers during the show would have been a severe disadvantage.”

Instead, during rehearsals Hargrave was able to walk away from the monitor console, listen to the mix from anywhere in the theater and make adjustments.

“We had 16 powered monitors across the stage and I could walk right up to a performer, the piano player or a group of singers, hear exactly what they were hearing and use the iPad to make adjustments to the monitor mix right on the spot,” says Hargrave. “Without the iPad app, I would have had to walk to and from the console a hundred times.”

After dialing in their monitor and FOH mixes, Hargrave and Howard saved their mixes using the console’s Snapshot feature. They had only two days of rehearsals to dial in the sound for the more than 200 people who were onstage at various times, and found the Vi4’s ability to recall scenes indispensable. “By the time the show went on, all I had to do was scroll from one Snapshot to the next and the mixes were right there,” adds Hargrave.

“This was the most out of the ordinary show we’ve done on the Soundcraft Vi4, but we found it to be perfectly adaptable to the situation,” Howard points out. “In my experience it’s also the easiest console to teach visiting engineers how to operate.”

“Doing the LIVE ART: Tree of Life event is a humbling experience,” states Soundworks president Steve Payne. “It’s just absolutely astounding to watch these kids perform with artists of the highest caliber. This is truly a once-in-a-lifetime opportunity. Many of these kids have special needs and face challenges on a daily basis. Watching them perform on this stage is like watching a flower blossom. The process is hugely rewarding for all of us—the kids, the performing artists, the crew and the audience alike.”

Soundcraft
Harman Professional

Posted by Keith Clark on 01/13 at 03:06 PM

Church Sound: Good Technical Stewardship Isn’t Just About The Money

This article is provided by Gowing Associates.

 
Stewardship: stew·ard·ship [stoo-erd-ship, styoo-]; noun; 1) the position and duties of a steward, a person who acts as the surrogate of another or others, especially by managing property, financial affairs, an estate, etc.; 2) the responsible overseeing and protection of something considered worth caring for and preserving

When I first became a Christian and heard about the term “stewardship,” I believed (as I suspect most of us do) that being a good steward meant saving every possible penny and cutting costs to the bone to maximize the money that God wanted us to use for His ministry.

So I dutifully did that. Fixing up old, obsolete equipment, “MacGyver’ing” up ways to do things without having to spend any money. And I patted myself on the back for a job well done.

Boy, was I wrong. In my rush to pinch pennies, I was ripping off God. How egotistical was I that I thought that to do things for God’s pleasure meant cheaping out on the one sacrificial gift God gave me to please Him! And to think I used to brag about how much I didn’t spend! If we really, really mean what we say and what we tell ourselves, then we cannot look at money as the sole object of stewardship.

John Wesley:

Do all the good you can,
By all the means you can,
In all the ways you can,
In all the places you can,
At all the times you can,
To all the people you can,
As long as ever you can.

It’s not about the money. Or more precisely, money isn’t the most important facet of the complex face of stewardship. Read that quote again from John Wesley. It exemplifies what proper stewardship is all about.

Proper stewardship means being responsible for overseeing and protecting something considered worth caring for and preserving. The definition of the word that I opened with doesn’t say anything about not spending money or finding the cheapest alternative possible, even if it means replacing that cheapest alternative in a few years because it was, well, cheap.

Proper stewardship means using someone who knows what they are doing and who has the experience to back up that knowledge to balance the needs of the church with the funds of the church. Even if it means telling a church that it would be better to wait to afford the proper equipment.

Good, reliable, professional equipment costs money. There’s no way around that. It’s expensive for several reasons: it has been proven over years in the hands of a lot of pros, and while there is a certain amount of brand-name overhead involved, there’s a reason these brands have been around and can command top dollar.

Note that this doesn’t mean that there’s only one “right” piece of equipment, but it does mean that you need to look at equipment purchases from a value proposition instead of a cost basis.

For example, you can buy a 24-channel audio mixer for about $500 on the low end and go up to $4,000 on the high end. That’s for an analog mixer. If you don’t know what you’re looking for and just took a cursory glance at the specs, chances are they’d look similar and you’d be justified in questioning why a church should spend more than $500.

What you wouldn’t know is that the reason that the $500 mixer costs that little is because it’s cheaply made, it’s a rip-off of other designs, it only sounds good in a very narrow range of parameters, and it will (not may) break down in a quarter of the time in comparison to other (better) mixers.

At the other end of the scale, the $4,000 mixer is probably going to be way too much for a church unless: A) the worship team is as good as Hillsong, and/or B) the rest of the system complements and does justice to the fine signal that the big-bucks mixer is feeding into the rest of the signal chain.

A similar situation exists with churches that are planning a building expansion project. I’ve seen churches that have a multi-million dollar budget for a new building not plan for an acoustical consultant to come in at the planning stages to help analyze and adjust the dimensions and properties of the proposed building. If this isn’t done, odds are quite good that the acoustics of the new building will be disappointing to say the least, and they will have to be mitigated with a costly retrofit of acoustical material (and other measures).

An acoustical consultant isn’t cheap. Figure on about $2,000 – $5,000 (or more) for a qualified one to accurately model the building and present recommendations. These recommendations can range from building dimensions redesign to material changes to acoustical treatment.

On the face of it, that amount of money seems high. And it is. But on the scale of a multi-million dollar project, it is not, particularly if a costly retrofit is needed after the fact.

Proper stewardship means building a realistic budget for the most expensive area of the church outside of the building itself. The technical ministry, by its very nature, is expensive and complex. Don’t put people in charge of it who don’t know anything about the ministry or the equipment. You wouldn’t pick someone out of the congregation and give them the responsibility to pastor the church every week, would you? You certainly wouldn’t want someone in the pulpit who didn’t have the right training and experience.

Invest in the people. Invest in creating a budget that includes training and includes replacement costs. Know the acceptable lifespan of equipment. Budget for repairs or preventative maintenance.

This also means budgeting for downtime for the equipment while it gets maintained. I can’t tell you how many churches I’ve visited that have mixers that haven’t been cleaned in years and aren’t covered. Do you know how much dust is in that mixer? Enough to inhibit proper functionality.

A proper yearly maintenance cleaning of a mixer takes about two days of a qualified repair tech’s time. It will cost about $150 to $200 for the cleaning. Expensive? Not from the big picture viewpoint that a new, decent mixer will cost at least $1,000.

Money and funds will always be a concern for the church, especially smaller ones. But by creating a yearly budget and putting small amounts toward that budget every week, you can build up to a working budget without having to have crisis moments.

This will also allow you to involve the congregation. They need to know that there are fixed costs with the equipment and equipment wears out. Involve them in the success or failure of the ministry. We are all in this together. Let’s promote proper and responsible stewardship!

Brian Gowing has helped dozens of churches meet their technology requirements. He works towards shepherding the church, analyzing their technical requirements, sourcing the equipment, installing the equipment and training the volunteer personnel. As he likes to say, “equipping the saints with technology to help spread the Good News.”

Posted by Keith Clark on 01/13 at 11:58 AM

Friday, January 10, 2014

Sound Image Joins U.S. Live Console Partner Network

SSL continuing to add to its U.S. reseller network for Live consoles

Solid State Logic is continuing to add to its U.S. reseller network for SSL Live consoles with the appointment of professional touring and contracting company Sound Image.

Headquartered in Escondido, CA, Sound Image also has offices in Arizona and Tennessee, and has been providing sound reinforcement services to many of the industry’s biggest rock, country and pop acts for more than 40 years.

“It’s about time Solid State Logic got out of the house,’’ says Dave Shadoan, president of Sound Image. “Everyone here at Sound Image is looking forward to a long and mutually beneficial relationship with SSL through the sales, service and use of the new SSL Live.”

“Sound Image is a well-established leader in the live market and its extensive customer base offers an unparalleled opportunity for the SSL Live to gain exposure in North America,” states Jay Easley, SSL vice president of Live Consoles in the Americas. “The company’s legacy, staff and commitment to excellence closely align with SSL’s core values. We are truly excited about the partnership between Sound Image and SSL.

“With this growing network of highly reputable partners in place, we are able to present the Live console to a greater scope of audio professionals and affirm its abilities with artists who are already devout SSL console loyalists.”

Solid State Logic
Sound Image

Posted by Keith Clark on 01/10 at 03:23 PM

The 10 Most Frequently Asked Questions About Mastering

1. What is mastering and the role of the mastering engineer?

Mastering is essentially the step of audio production used to prepare mixes for the formats that are used for replication and distribution. It is the culmination of the combined efforts from the producer, musicians, and engineers to realize the musical vision of the artist.

Each stage of the audio production process, from pre-production to mastering, builds on and depends on the one that precedes it.

Mastering is the last opportunity to make any changes to positively affect the presentation of your music before it evolves from a studio environment to the outside world.

The differences between the roles of mixing and mastering engineers should be noted. While the tools may be similar, the perspectives of mixing and mastering are very different. When mixing, the focus is on the internal balance of individually recorded tracks and effects used both sonically and creatively for a single piece of music.

An album cannot be heard in its entirety until the job of a mix engineer is completed. The mastering engineer picks up where the mix engineer leaves off. Mastering is geared toward creating the balance required to make the entire album cohesive. The mastering engineer is most concerned with overall sonic and translation issues.

A mastering engineer works with the client to determine proper spacing between songs and how songs will be ordered on the CD. The flow of an album must appeal to the listener; it should engage them and take them on a musical journey as determined by the artist. Any final edits will be addressed during the mastering process as well.

Finally, the role of the mastering engineer is to provide preparation and quality control of the physical media sent to the plant for replication. This includes listening to the premaster CD to verify integrity, along with the more technical aspects such as encoding text, UPC/EAN and ISRC codes, checking for errors within the media and providing any necessary documentation such as a PQ list.

2. Is mastering always necessary?

A writer’s words are not complete until the editor approves them. A painter’s work is not complete until it has been matted and framed. A musician’s work requires the same treatment. Audio production should not be rushed, finished haphazardly or completed “just to get it out there”. A finished product should reflect all of the work of the artist, producers and engineers that carry that vision forward.

Even a “perfect” mix needs mastering to a degree. In this case, you want the mastering to be as transparent as possible so that the original sound is maintained while preparing it for the final media.

As mentioned earlier, it is difficult for a mixing engineer to know how an entire album will sound in its entirety while mixing an individual track. In some cases a given track may be perfect on its own. However, when that track is placed within the context of an album, slight adjustments in level or frequency balance may be required.

Given the amount of music distributed online, an album needs to stand up from start to finish to be noticed in such a competitive market. If the final goal is to create a product that is ready to be played on the radio, distributed online, or sold as a physical product, it should be mastered.

Mastering helps say something about the professionalism of the artist, from the arrangement of certain styles of songs to the volume of the recording to the pacing of the tracks. If an artist is serious about their music, they should make sure that someone with experience signs off on the finished product.

3. What kind of improvements can be expected from mastering?

Mastering can help to achieve the correct balance, volume, and depth for a style of music. It can add clarity and punch to music, giving it more vitality.

The idea behind mastering is that a product will sound better after it is treated by the mastering engineer. The degree to which a mastering engineer can achieve this is dependent on the given mixes. In some cases there may be limitations or compromises that need to be made.

One limitation of mastering is the inability to restore severely distorted material. Distortion in a mix is like corrosion; once present it cannot easily be removed and has permanently destroyed a part of the material.

While mastering can mask the effect of some types of distortion, it is essentially covering blemishes that should be addressed before the mastering stage. A common misconception is that mixes should be as “hot” as possible. With the advent of 24 bit digital technology there is no reason why mixes have to “go into the red.”

Most mastering engineers recommend a cushion of anywhere between -6 and -10 dBFS from peak level to help ensure that clipping does not take place and to allow room for processing. In addition to peak level, the crest factor (peak-to-average ratio) is very important. While dynamic range can always be reduced easily, it is very difficult to undo the effects of over-compression or limiting.
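
To make that concrete, here is a minimal sketch (an illustration, not the author's workflow) of how a mix's peak level and crest factor could be checked before delivery. It assumes Python with NumPy and the soundfile library installed; "mix.wav" is a placeholder path.

```python
# Illustrative sketch: check headroom and crest factor of a mix file.
# Assumes NumPy and the soundfile library; "mix.wav" is a placeholder path.
import numpy as np
import soundfile as sf

audio, sample_rate = sf.read("mix.wav")      # float samples in -1.0 .. +1.0

peak = np.max(np.abs(audio))                 # highest sample magnitude
rms = np.sqrt(np.mean(np.square(audio)))     # average (RMS) level

peak_dbfs = 20 * np.log10(peak)              # 0 dBFS = digital full scale
crest_factor_db = 20 * np.log10(peak / rms)  # peak-to-average ratio in dB

print(f"Peak level:   {peak_dbfs:.1f} dBFS")
print(f"Crest factor: {crest_factor_db:.1f} dB")

# Peaks landing between roughly -6 and -10 dBFS leave the cushion described
# above; a very small crest factor suggests heavy bus compression or limiting
# that mastering cannot undo.
```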

If the internal balance of a stereo mix is off, there may be compromises in the sound of the mastered track that will need to be made. For example, if cymbals or a vocal is very sibilant and bright while other parts of the mix are dark, it can be difficult to balance the overall sound in a way that enhances all elements.

In addition to frequency, levels between tracks may also be an issue. If the mastering engineer is given a stereo mix (as is usually the case) specific individual components of the mix cannot be completely isolated and processed separately.

While there are techniques such as de-essing, mid/side processing, and equalizing or compressing for a specific imbalance, the results will likely not be as good as with a mix that doesn’t have these issues, which allows the mastering engineer to address the balance as a whole.
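
For readers unfamiliar with the mid/side idea mentioned above, the sketch below shows the basic math: the stereo mix is re-expressed as a mid (sum) and side (difference) signal so a correction can be applied to the center content without touching the stereo edges. This is an illustrative example only, assuming NumPy; the audio arrays are placeholders.

```python
# Illustrative mid/side encode/decode; assumes NumPy, placeholder audio arrays.
import numpy as np

def ms_encode(left, right):
    """Convert left/right channels to mid (center) and side (difference)."""
    mid = (left + right) / 2.0
    side = (left - right) / 2.0
    return mid, side

def ms_decode(mid, side):
    """Convert mid/side back to left/right channels."""
    return mid + side, mid - side

# Example: pull the center content down 1 dB (e.g. to tame a hot vocal)
# while leaving the sides untouched, then rebuild the stereo signal.
left = np.random.randn(48000) * 0.1    # stand-in audio for illustration
right = np.random.randn(48000) * 0.1

mid, side = ms_encode(left, right)
mid *= 10 ** (-1.0 / 20.0)              # -1 dB gain on the mid channel
new_left, new_right = ms_decode(mid, side)
```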

One method of getting around internal balance issues is to provide alternate mixes. Some examples are vocal up/down mixes or mixes where one EQ is favored over another. Another method is supplying the mastering engineer with “stems” or sub mixes of the stereo track.

These might include a separate stereo mix of the vocals or instruments that when summed together are the same as the stereo mix minus any stereo bus processing.

In this case the mastering engineer is placed slightly in the role of a mix engineer and can make adjustments that wouldn’t be possible with a stereo mix alone. Another advantage with using stems is that alternate masters can easily be created such as radio edits, instrumental and vocal-only masters. 

Another area where “fixing it in the mix” is better than “fixing it in mastering” is when dealing with the issue of noise.  Mute automation on individual tracks should be used where there are noises during sections of a track that are not contributing to the mix.

Some examples are electric guitar hum/buzz on intros, outros, and breaks; bleed from headphones on the vocal track when the vocalist is not singing; and drummers laying down their sticks after cymbals have faded but while other instruments are still playing at the end of a track.

4. What are some tips to help ensure the best possible master?

Audio quality can be very subjective. Before hiring a mastering engineer for a project you should have a clear objective on how you would like the finished project to sound.

Communication of these objectives between client and engineer is a key component to the success of a project. The language used to describe the character of audio can be ambiguous. Terms like “brassy,” “fat” and “present” mean different things to different people.

One of the skills of a great mastering engineer is to be able to translate this loose terminology into the specific technical processes required to achieve the client’s goals in a non-obtrusive way.

Some mastering engineers find reference tracks from clients to be helpful. Reference tracks can be worth a thousand words, because they serve to demonstrate the sonic objectives of the client.

My personal preference is to receive mixes that are as close as possible to what the finished product should sound like, but with enough leeway to be able to manipulate the sound in order to mold a cohesive album. Some general tips toward achieving this are:

Knowing your room and monitors. If you are using smaller nearfield monitors for mixing, be sure to listen to the mixes on a system that has extended bass to ensure that there are not low end bass problems.

If your monitors or room “color” the sound in any way be sure to compensate to ensure that the mix will translate well on other systems.

Fix track related issues before mastering. Listen for issues like excessive sibilance, uneven or exaggerated frequencies, phase or polarity problems, bad edits, depth and width of the sound field, and the relative levels of instruments and vocals. 

I recommend listening to a mix in mono in order to hear if anything disappears or becomes exaggerated as well as listening to the mix at different levels and positions within your room. This can sometimes make an issue more obvious due to a different perspective.
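
One way to put a number on that mono check is sketched below (an illustration, not part of the original article): fold the mix to mono and compare levels to flag content that cancels. It assumes Python with NumPy and soundfile, and the file name is a placeholder.

```python
# Illustrative mono-compatibility check; assumes NumPy and soundfile.
import numpy as np
import soundfile as sf

def rms_db(x):
    """RMS level of an array, in dB."""
    return 20 * np.log10(np.sqrt(np.mean(np.square(x))) + 1e-12)

audio, sample_rate = sf.read("mix.wav")      # shape (samples, 2) for stereo
left, right = audio[:, 0], audio[:, 1]

mono = (left + right) / 2.0
drop_db = rms_db(audio) - rms_db(mono)

print(f"Level drop when folded to mono: {drop_db:.1f} dB")
# Roughly 0-3 dB is normal; a much larger drop hints at phase or polarity
# problems worth fixing before mastering.
```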

Leave enough of the mix dynamics intact, so that the engineer can make adjustments not only in the overall level but in the punch and clarity of the transients.

Don’t use any processing on the master bus that will interfere with processing that is best performed while mastering. This may include exciters and harmonic enhancers, EQ, normalization and limiting used to achieve a higher overall volume.

Leave the heads and tails of a mix intact so that there is room ambience before the music starts and enough of the music at the end to be able to tailor the fade out. Having a bit extra at the start and end can also be useful so that a “noise profile” can be created for noise reduction systems that use this as a technique for removal of broadband noise.

Use mute and volume automation to remove extraneous noises from the individual tracks. Noises include headphone bleed when the vocalist is not singing, hum from electric guitars during breaks, and a drummer who may lay down his sticks after the cymbals fade at the end of a song, but before the final fade out of other instruments.

5. What should I send to the mastering engineer?

Mixes should be delivered in a format that alters the sound by the least amount. For digital mixes, an uncompressed format (AIFF or WAV) should be used rather than compressed formats like MP3 or AAC.

You should speak to the mastering engineer that you will be working with to verify the formats that they accept.

I recommend staying with the same sample rate used in the original tracks, unless mixing through an external converter. In that case, increasing the sample rate has its benefits. The bit depth should match the one used during the mix session rather than supplying tracks on audio CD where truncation and optionally dithering of the original tracks is applied.

I also prefer that digital mixes be sent as a single stereo interleaved file rather than split stereo files in order to help ensure phase coherence. While a standard when sending analog tape for mastering, reference tones are becoming a lost art with digital.

If mixing through an analog board or to an external device, having an unaltered 1k reference tone (corresponding to 0 VU on the console) can help to identify issues where left and right channels are not calibrated or set properly. If you’re not attending the session, be sure to send all documentation regarding the sample rate, bit depth, format, and a listing of the filename with the full name of the song for each file.
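
As a quick pre-delivery sanity check (again, an illustration rather than a requirement from the article), the sample rate, bit depth, channel count and container format of each file can be verified programmatically. This assumes Python with the soundfile library; the file name is a placeholder.

```python
# Illustrative delivery check; assumes the soundfile library, placeholder path.
import soundfile as sf

info = sf.info("final_mix.wav")
print(info.samplerate)   # e.g. 48000 - should match the session sample rate
print(info.subtype)      # e.g. 'PCM_24' - should match the mix bit depth
print(info.channels)     # 2 for a stereo interleaved file
print(info.format)       # 'WAV' or 'AIFF', i.e. an uncompressed format
```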

Also note if there are alternate mixes of the same track (e.g. vocal up/down). A listing of the song order is also necessary along with requirements for song spacing and fades if not printed on the original mix. If CD text, UPC/EAN or ISRC codes are to be added to the final CD they must also be included in the listing.

Documentation may include information about your audio chain such as equipment and processing used (particularly if applied on the overall mix), what you feel are some of the enhancements that you would like to hear in each mix, along with any other information that you feel will be useful to the mastering engineer.

6. How much will mastering cost?

Prices vary depending on the profile and experience of the engineer, previous credits, along with studio costs and overhead. Typical rates are based on:

—Flat rate per album usually tiered based on the total number of tracks, sometimes with a total hourly cap.

—Flat rate per track or number of minutes per track.

—An hourly rate that can include additional costs for media due to time spent verifying and listening to the disc.

Some studios may also charge more for attended sessions versus non-attended sessions where the final product is delivered and approved by mail or Internet.

Costs for mastering vary anywhere from $10 per song to $500 per hour for well regarded professionals. Given that mastering is a subjective service-based business, as opposed to a commodity which can more easily be compared, caveat emptor applies.

Assuming that both quality and cost are considerations, set a realistic budget for mastering at the start of your project. Sometimes independent artists will not have anticipated the costs for mastering until a project is completed. This forces them to use lower quality alternatives that are not necessarily best for their project. It’s a good idea to research the studios that will work within your budget. Call them to discuss the details of the project and their approach.

In addition to gaining a better understanding of their process you will be getting a feel for the quality of their customer service. Some studios provide a demo of your material to ensure that they meet your expectations; others may charge for this service.

In either case, this is a good way to hear the quality of their work before committing to the cost of an entire album.

7. How much of a role does gear play versus the talents of a mastering engineer?

As the saying goes, “It’s the driver not the car”. A good engineer can work around limitations while a bad engineer will likely produce poor results, great gear or not.

This does not entirely discount the aspect of the gear. Having gear which is made specifically for mastering makes a big difference, not only in the quality of the sound, but in how quickly and easily the engineer can perform his work.

This includes equalizers, compressors and the usual components that most associate with the term “gear” as well as quality converters, monitors, and the room where the mastering engineer works. Any of these can skew the perception of what an engineer hears, potentially causing them to make decisions that wouldn’t happen given better accuracy.

There are many hardware and software companies claiming the ability to allow anyone without prior experience to use a particular preset, match frequency curves with references, or use other methods which will allow them to master their own music.

These “cookbook” approaches really miss the point of what the mastering process should be about. This approach cannot replace the skill acquired by an experienced engineer.

The processing performed should bring out the elements of the mix that are most important to each song. This requires both an artistic and technical evaluation.

Using a generic EQ or compressor setting to try to achieve this doesn’t address the individual characteristics of the song that make it unique or the specific problems that it may have in translating those elements.

8. What is the best (fill in the blank) for mastering?

This is a question that is often asked within mastering forums. The simple answer is that there are no “best” or one-size-fits-all solutions. If there were, mastering houses would look more like a chain of department stores with the same type of room, monitors, and gear.

Just as the processing chain used for a particular piece of music will vary according to the character of that track, the hardware and software chosen by an engineer is based on his workflow and tastes.

There are however some common characteristics among mastering studios. The following are what I would consider the universal set of tools ranked in order of importance.

—A discriminating pair of ears. The ability to critically analyze issues that will interfere with the enjoyment and translation of a piece of music is the most fundamental tool of a mastering engineer.

—Knowledge and taste. Having the technical knowledge to be able to address the problems heard in a mix and the taste to know whether or not to use a given technique.

—An accurate room and monitors. A good pair of monitors in a bad room can misrepresent a mix as much as a bad set of monitors in a good room.

Both room and monitors work together to produce a listening environment that will not distort the presentation of a mix causing an engineer to potentially make bad decisions.

—A transparent processing chain. As with physicians, one credo of the mastering profession is to “do no harm”. Mastering engineers go through great effort and expense to ensure that their processing chain is as distortion and noise-free as possible.

Everything from the type of cables to the software and hardware used is analyzed and potentially modified to reduce any ill-effects caused by the processing chain.

—Processing which provides additional “color”. What would seem like a contradiction to a transparent chain is the addition of hardware or software which actually adds distortion in order to enhance a track.

This includes both new and vintage hardware which adds tube distortion, transformer or tape saturation, along with software based modeling algorithms. The intent of these effects is to add warmth, thickness and depth to mixes that would otherwise sound thin or too “digital.”

9. Should you choose an engineer based on their “style”?

Ten different mastering engineers working in the same room with the same equipment will create ten totally different masters, each sounding great on their own. If you ask those same engineers to go back and reproduce any given master, you are likely to get ten almost identical masters back.

While each individual mastering engineer has his own style, it is important that he is able to separate himself from his style when needed. An engineer should never let his personal taste interfere with the goal of the artist he is working with. Again, this is where communication with the client is a crucial element.

A good mastering engineer should be well versed in a variety of different categories of music. In general, there is no reason why an engineer known for creating great country albums cannot produce a great rock album.

While an engineer’s work should be able to transcend musical genres, if a mastering engineer has a certain style that is appealing to you as the artist, you should consider working with him.

It’s important that both the engineer and the artist can communicate in a way that is complementary to both individuals.

10. Which is more important, a technical background or musical one?

A mastering engineer should be well versed both technically and musically. The craft of the engineer is to be able to know good music and know how to make that music sound better.

Still, while a technical background is extremely important in the mastering world, that background should not interfere with the aesthetics.

Likewise, any personal feelings an engineer has about the stylistic choices of the music he is mastering should ultimately be discussed with the musician. It is because of this that an engineer’s musical background should not hinder his craft.

Given a technical background, some mastering engineers are capable of making modifications to equipment to create a more transparent sound, or provide color according to their taste and needs.

Having a musical background, particularly in the area of pitch, allows an engineer to identify frequency issues relating to musical notes and to speak directly to the musician about these issues in their terms.

An engineer should avoid favoring either background. While most engineers come from one or the other, their craft is in combining the two.

A mastering engineer should remain as objective as possible while still providing necessary feedback and insight from both a musical and technological perspective.

Tom Volpicelli is the president and founder of The Mastering House and has an extensive list of mastering and mixing credits to his name.

Posted by Keith Clark on 01/10 at 02:01 PM

Wednesday, January 08, 2014

Church Sound: EQ Basics & Essentials

This article is provided by Bartlett Audio.

 
Suppose you’re listening to the house sound system reproducing a play or musical. Some of the actors’ voices sound “puffy” or “muffled,” as if they were covered in a blanket. Other actors might sound “spitty” or overly sibilant.
 
Fortunately, those problems can be fixed with the equalization knobs (EQ) in your mixing console. EQ adjusts the bass, treble, and midrange of a sound by turning up or down certain frequency ranges.

To do that, EQ operates on the spectrum of the sound source—its fundamental and harmonic frequencies. The spectrum helps to give the instrument or voice its distinctive tone quality or timbre. If some of these frequencies change in level, the tone quality changes.

An equalizer raises or lowers the level of a particular range of frequencies (a frequency band), and so controls the tone quality. That is, it alters the frequency response. For example, a boost (a level increase) in the range centered at 10 kHz makes voices sound bright and crisp. A cut at the same frequency dulls the sound.

Types Of EQ
Equalizers in a mixing console range from simple to complex. The most basic type is a bass and treble control (often labeled LF EQ and HF EQ). Its effect on frequency response is shown in Figure 1-A.

Figure 1.

Typically, this type provides up to 15 dB of boost or cut at 100 Hz (with the low-frequency EQ knob) and at 10 kHz (with the high-frequency EQ knob). You have more control over tone quality with a 3-band or 4-band equalizer: you can boost or cut several frequency bands (Figure 1-B).

Sweepable EQ is even more flexible. You can “tune in” the exact frequency range needing adjustment (Figure 2-A).

Sweepable EQ is often incorrectly called “parametric,” but a true parametric equalizer also allows control of bandwidth. The parametric equalizer allows continuous adjustment of frequency, boost or cut, and bandwidth—the range of frequencies affected. Figure 2-B shows how a parametric equalizer varies the bandwidth of the boosted portion of the spectrum.

A graphic equalizer (not shown) is usually external to the mixing console. This type has a row of slide potentiometers that divide the audible spectrum into 5 to 31 bands. When the controls are adjusted, their positions graphically indicate the resulting frequency response. Normally a graphic equalizer is used to equalize the house loudspeakers, both to flatten the response and to notch out feedback frequencies.

Figure 2.

Some engineers prefer to use an external parametric EQ or an automatic feedback device for feedback control. (The use of graphic EQ is beyond the scope of this article).

So far we’ve classified equalizers according to the frequency bands they control. They also can be classified by the shape of their frequency response.

A peaking equalizer (Figure 3-A) creates a response in the shape of a hill or peak when set for a boost. With a shelving equalizer, the shape of the frequency response resembles a shelf, as in Figure 3-B.

A filter causes a roll-off at the frequency extremes. It sharply rejects (attenuates) frequencies above or below a certain frequency.

Figure 3-C shows three types of filters: low-pass, high-pass, and band-pass. A 100 Hz high-pass filter (low-cut filter) attenuates frequencies below 100 Hz. Its response is down 3 dB at 100 Hz and more below that. This removes low-pitched noises such as room rumble, microphone handling noise, and mic breath pops.

Figure 3.

A filter is named for the steepness of its roll-off: 6 dB per octave (first-order), 12 dB/octave (second-order), 18 dB/octave (third-order), and so on.
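
To illustrate those roll-off figures (a sketch added here, not from the article), below is a second-order, 12 dB per octave Butterworth high-pass filter at 100 Hz, which is about 3 dB down at the cutoff. It assumes Python with NumPy and SciPy; the audio buffer is placeholder noise.

```python
# Illustrative 12 dB/octave (second-order) high-pass at 100 Hz; assumes SciPy.
import numpy as np
from scipy import signal

sample_rate = 48000
cutoff_hz = 100.0

# Design the filter as second-order sections for numerical stability.
sos = signal.butter(2, cutoff_hz, btype="highpass", fs=sample_rate, output="sos")

# Apply it to an audio buffer (placeholder noise here) to strip room rumble,
# handling noise, and breath pops below the voice range.
audio = np.random.randn(sample_rate)
filtered = signal.sosfilt(sos, audio)
```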

The frequency response and placement of each microphone affect tone quality as well. In fact, microphones and mic placement can be considered as equalizers.

How To Use EQ
If your mixer has bass and treble controls, their frequencies are preset at the factory (usually at 100 Hz and 10 kHz).

Set the EQ knob at 0 to have no effect (“flat” setting). Turn it clockwise for a boost; turn it counter-clockwise for a cut. If your mixer has sweepable EQ, one knob sets the frequency range while another sets the amount of boost or cut.

As we said, the tone quality of a voice or instrument depends on the relative levels of its fundamentals and harmonics. Listed below are the fundamental and harmonic frequencies for female and male voices:

Female voice fundamentals: 175 Hz-1.175 kHz
Female voice harmonics: 2 kHz-12 kHz
Male voice fundamentals: 87 Hz-494 Hz
Male voice harmonics: 1 kHz-12 kHz

Basically, if the sound is thin or lacking fullness, turn up the lower end of the fundamentals. If the tone is too bassy or tubby, turn down the fundamentals. If it’s muddy or unclear, turn up the harmonics. Turn down the harmonics if the tone is too harsh or sizzly.

Below is a list of common sound problems and suggested EQ settings that can fix them. The amount of boost or cut is up to you – whatever sounds right. About 3 to 6 dB should be all you need in most cases.

Puffy, nasal, or chesty: Cut 500 Hz to 800 Hz
Dull, muffled, sibilants are hard to hear: Boost 10 kHz
Sizzly, “s” sounds are too strong: Cut 10 kHz
Bassy, boomy: Cut 100 Hz (males) or 200 Hz (females)
Thin, tinny: Boost 100 Hz (males) or 200 Hz (females)
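
To show what one of these corrections looks like in practice (an illustrative sketch, not part of the article), the code below builds a peaking EQ band using the common Audio EQ Cookbook formula and applies a modest cut around 700 Hz, in the "puffy/chesty" range listed above. It assumes Python with NumPy and SciPy; the audio buffer is a placeholder.

```python
# Illustrative peaking EQ (Audio EQ Cookbook biquad); assumes NumPy and SciPy.
import numpy as np
from scipy import signal

def peaking_eq(center_hz, gain_db, q, fs):
    """Return normalized biquad (b, a) coefficients for a peaking EQ band."""
    A = 10 ** (gain_db / 40.0)
    w0 = 2 * np.pi * center_hz / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    return b / a[0], a / a[0]

fs = 48000
b, a = peaking_eq(center_hz=700, gain_db=-4.0, q=1.5, fs=fs)  # gentle "puffy" cut

audio = np.random.randn(fs)                 # placeholder audio buffer
equalized = signal.lfilter(b, a, audio)     # apply the EQ band
```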

Uses Of Equalization
Here are some applications where EQ comes in handy.

Improving tone quality: This is the main use of EQ, as described above.

Special production effects: Extreme equalization reduces fidelity, but it also can make interesting sound effects. Sharply rolling off the lows and highs on a voice, for instance, gives it a “telephone” sound. A 1 kHz band-pass filter does the same thing.

Reducing noise: You can reduce unwanted low-frequency sounds—air-conditioner rumble, floor thumps, and breath pops—by turning down low frequencies below the voice range. For example, a female actor’s lowest frequency is about 200 Hz, so you’d set the equalizer’s frequency range to 40 or 60 Hz and apply cut. This roll-off won’t change the actor’s tone quality, because the roll-off is below the actor’s voice range. Better yet, use a high-pass filter (low-cut filter) set to 200 Hz.

Compensating for microphone placement: Often you must place a lavalier microphone under a costume in order to hide the mic. Unfortunately, the costume fabric does not transmit the mouth’s high frequencies well, so you hear a dull, muffled tone quality.

A high-frequency boost on the console can compensate for this loss. (Just be careful not to cause feedback). Placing a lavalier mic on the chest creates a rise in the response around 730 Hz, which can give a chesty or puffy quality. Applying an EQ cut at the same frequency will result in a more natural sound.

Headworn mics may or may not need EQ, depending on the microphones’ frequency response. The voice sounds brightest when miked in front, and becomes progressively duller to the side, above, or below the mouth.

The proper use of EQ is basically simple. Know the sound of the unamplified human voice, and adjust the EQ knobs to make your PA sound like that.

A member of AES and SynAudCon, Bruce Bartlett is a live sound and recording engineer, microphone designer (http://www.bartlettaudio.com), and audio journalist. His latest book is Practical Recording Techniques, 6th Edition.

Posted by Keith Clark on 01/08 at 04:22 PM

Tuesday, January 07, 2014

Monitor Engineer Andy Robinson Chooses DiGiCo SD5 For Jessie J “Alive” Tour

Robinson also had a comprehensive talkback system setup that allowed him and the band members to all talk to each other during the show

The end of 2013 saw Jessie J playing the major arenas of the UK and Ireland on the 20-date Alive tour, with monitor engineer Andy “Baggy” Robinson utilizing a DiGiCo SD5 for his mixes.

“I’ve mixed in my usual style of using a DiGiCo console, making it work for its money,” Robinson notes with a smile. For the tour, he had to account for a four-piece band and three backing vocalists as well as Jessie, noting, “I doubled up on quite a lot of channels for Jessie, so I was using all but six of the faders.”

As well as monitoring, Robinson also had a comprehensive talkback system setup that allowed him and the band members to all talk to each other during the show.

Robinson states that what he appreciates most about the SD5 is its intuitive layout and compact size. “The fact that I can sit down and still reach all the controls, including the macro buttons, is great,” he says. “I use those a lot and having them in blocks of ten is another thing that makes the SD5 really straightforward to use.”

The tour received very favorable media reviews, thanks in no small part to its high production values, as well as the talent on stage, of course.

“Providing monitors for a show of such high caliber is extremely rewarding,” Robinson concludes. “I also used this console on Jessie’s summer festival tour. It was brand new out of the box. I like all DiGiCo consoles, but the SD5 works really well for me. They got this one absolutely right for monitors.”

DiGiCo

Posted by Keith Clark on 01/07 at 01:40 PM

Church Sound: Four Vital Production Tips To Propel Your Audio To The Next Level

This article is provided by Behind The Mixer.

 
Vital…Propel…Next Level…Can four production tips actually make THAT MUCH of a difference? Yes, they can! 

The sad part is a good number of people aren’t using these tips and their sound is suffering. 

Answer this question: when does your mixing work begin? Before you answer, I’ll give you three choices: once you enter the sound booth, once you enter the sanctuary, or once you get the song list?

The problem is there are many who want to learn but there aren’t that many good teachers. That’s where this list of four vital tips comes into play. These are the simple things that should be done, could easily be done, but many times aren’t being done. 

Let’s change that.

1. Mic Instruments The Right Way. I’m occasionally pulled into a church to listen to their music mix and make recommendations. Before the event starts, I check out how the instruments are miked.  The wrong mic setup will have a hugely negative impact on their mix. In many cases, mixing tweaks can’t compensate for the poor mic setup.

Poor mic setups can be categorized in two forms: too far and too close. Mics that are located too far from the instrument will pick up a lot of stage noise and won’t pick up enough of the instrument. For example, a kick drum mic located too far away from the drum head would give you a dull kick drum sound and a bunch of stage noise.

Mics that are too close to the instrument can produce a distorted signal or a poor sound. For example, consider an instrument microphone set up on an acoustic piano and placed too close to the piano strings. In this case, instead of capturing the full sound of the piano, the resulting sound is dominated by the frequencies produced by a handful of strings.

Instruments should be miked so you hear the best representative sound of the instrument and the least amount of stage noise. It’s a live environment, so sound isolation isn’t possible, but you do have the ability to get really close.

Oh, and make sure you’re using the right microphone.

2. Learn To Set The Channel Gain. Second to microphone location is gain setting. And gain setting is the second place where I see people make mistakes. The problem is it’s assumed the GAIN (a.k.a. TRIM) knob is a volume control and from there, it’s easy to mess things up. (Hey, I’m not judging…I used to think the same thing myself.)

How do you know if your gain settings are whacked? Do you hear a lot of hiss in a channel even when the musician is playing? Do you have feedback issues all the time? Are your fader controls normally down near the bottom of the fader slot?  If you answered yes to any of these, chances are you have gain issues.

The GAIN controls the level of audio signal coming into the mixing board. Along with the audio signal, there is the presence of electrical line noise that’s part of any audio system.  When the GAIN control is set too low, you hear this noise in the channel. When the GAIN level is set too high, you experience problems like audio feedback.

Each channel’s gain should be set so you have the best audio signal-to-noise ratio (S/N ratio). This means you hear a strong signal and little-to-no electrical noise.
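
To put a rough number on that idea (an illustration added here, not from the Behind The Mixer article), the signal-to-noise ratio can be estimated by comparing the level of program material against the channel's idle noise captured at the same gain setting. This assumes Python with NumPy, and the arrays are stand-ins for real recordings.

```python
# Illustrative S/N estimate; assumes NumPy, with stand-in signal/noise captures.
import numpy as np

def rms_db(x):
    """RMS level of an array, in dB."""
    return 20 * np.log10(np.sqrt(np.mean(np.square(x))) + 1e-12)

signal_take = np.random.randn(48000) * 0.3    # stand-in for program material
noise_take = np.random.randn(48000) * 0.001   # stand-in for idle-channel noise

snr_db = rms_db(signal_take) - rms_db(noise_take)
print(f"Channel S/N ratio: {snr_db:.1f} dB")
# Setting gain too low shrinks this number (hiss becomes audible); setting it
# too high trades hiss for clipping and feedback problems instead.
```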

3. Don’t Treat Musicians Equally. There is a time and a place for bias, and this is one of them.

I remember it like it happened yesterday. I watched the sound guy during the sound check and I couldn’t believe my eyes. The band took the stage, he set all of the channel volumes at the same audible volume level and then he stopped. There was no mixing or volume balancing. It was all singers and instruments coming out of the main speakers at the same volume.

In my complete guide to church audio production, I go into detail on volume balancing and there is even an audio file where I mix instruments all at the same level and then compare it to a properly volume balanced mix. The difference is dramatic. Don’t treat musicians equally!

The problem, I believe, is that what you hear and what you think you hear are two different things. For example, if the band is doing a final song that should sound big with full-on instruments and everyone singing, then you might think you should bump up all of the channel volumes. 

But as soon as you do that, your whole mix falls apart because the bass is stepping on the electric guitar that’s stepping on the acoustic guitar and all the instruments are stepping on the vocals. Taking this scenario as an example, it would be better to boost the house volume so the overall balance of instruments and vocals stays the same. I digress.

How do you learn where musicians should “sit in the mix?” I’ve said this before but I can’t stress it enough; analyze professional recordings of the song. So whatever Chris Tomlin song your worship band is playing next week, get a copy of the original and listen to it over and over.

Listen to one instrument through the whole song. Is it “upfront” in the mix? Is it more of a supporting instrument in the background? Imagine all of the musicians on the stage and their location on the stage is based on where you hear them in the mix. That’s the best way to learn where musicians should “sit in the mix.”

A quick tip on evaluating your volume balance: test channel volumes by muting them.

4. Push Your Pride Aside. Whenever I find significant problems with a music mix, it’s because the sound tech messed up in one or more of the above three areas. So what about this last one?

Maybe it’s a guy thing. Maybe it’s a geek thing. Maybe it’s just me, but I doubt it. If something is wrong, I want to figure out why. If I’m learning something new, I want to figure out the how’s and why’s and where’s and all of that stuff. While I applaud anyone who desires to learn, I will applaud even more for the person who asks for help.

The last point comes down to this: no matter how long you’ve been mixing, no matter how young or old you are, there will always be something you can learn from another audio tech. And one of the best ways I’ve found of learning is by creating my mix during band practice and then asking another tech to show me how they would change it to make it better.

The Take Away
I called these four tips vital because they involve the foundation for all mixing work. You must get the first three right before starting your hands-on mixing. 

As for the fourth, on being foundational…you have to have the right mindset on mixing and learning and desiring to create the best sound for your congregation. I’ve seen what happens when pride gets in the way, and it’s not pretty.

Ready to learn and laugh? Chris Huff writes about the world of church audio at Behind The Mixer. He covers everything from audio fundamentals to dealing with musicians.  He can even tell you the signs the sound guy is having a mental breakdown.

Posted by Keith Clark on 01/07 at 01:19 PM

Friday, January 03, 2014

River Oaks Church Retires Analog Console, Upgrades To Yamaha CL5 Digital

“We are living by the console’s iPad App; our engineers stay on ground level during rehearsals and worship to make sure our mix is perfect." -- Tim Blaum, program director

The 600-seat River Oaks Community Church, in Goshen, IN, holds contemporary worship services, and on any given Sunday, serves 1,200 of its congregation.

With a rotating pool of 25 musicians, a standard worship service consists of 5-10 members on stage. The church recently retired its aged analog console and upgraded to a new Yamaha Commercial Audio CL5 digital audio console and Rio 3224-D and 1608-D input/output boxes, purchased from Mid-America Sound of Greenfield, IN.

“We had an analog console for about 10 years that outlived its welcome,” states Tim Blaum, program director at the church and installer of the CL. “We realized we needed to have additional control and mixing space.”

Blaum notes that on occasion, the church had to rent a Yamaha M7CL for outdoor gigs or other special services when they needed a little extra control.

“We have a volunteer team that is comfortable in the analog environment using a full bank of physical faders because it’s what we’ve been used to, but we knew it was time to upgrade to digital. Several industry professionals recommended we investigate the new Yamaha CL Series since it has several added benefits. After weighing our options, we heeded their advice and decided to go with the CL5 because of its power and features.”

Prior to the new CL’s arrival, volunteers viewed online tutorial videos, having been familiar with the M7CL. “Yamaha digital consoles have always been strong and reliable, and with the CL’s price point, the console will give us room to grow and expand our worship ministry,” Blaum says. “Talented musicians with years of training and skill lead our ministry, and our production team is eager to help them sound the best they can with our new desk.” The church is currently using the Dante network locally due to its pre-existing snake structure.

He also notes that the front of house location is an elevated mezzanine, which makes mixing difficult. “We are living by the console’s iPad App; our engineers stay on ground level during rehearsals and worship to make sure our mix is perfect. The labeling on the CL was a major enhancement, but we really love just about every feature it has since we came directly from the analog world. The console sounds great, and since its installation, we’ve had numerous compliments about our mix greatly improving. No doubt, this is an exciting step forward for us.”

River Oaks Community Church
Mid-America Sound
Yamaha Commercial Audio

Posted by Keith Clark on 01/03 at 02:20 PM

Tuesday, December 24, 2013

Premier Production & Sound Services Deploys JBL, Crown, Soundcraft For Voodoo Experience 2013

Harman components for sound reinforcement system serving more than 100,000 in New Orleans

For the third consecutive year, Baton Rouge, LA sound company Premier Production & Sound Services (PSS) supplied the sound reinforcement system for the recent Voodoo Experience festival, relying on a range of products from Harman Professional.

The 3-day festival presents an array of more than 80 artists to more than 100,000 people at New Orleans City Park, with PSS deploying a system at the Flambeau Stage with eight JBL Professional VTX V25 full-size line array elements per side and four stacks of three S28 subwoofers arranged in cardioid configuration.

PSS also provided 12 VerTec VT4886 subcompact line array elements atop the subwoofers for front fill. Twenty-eight Crown Audio IT 12000 HD amplifiers powered the loudspeakers.

Stage sound was furnished by 16 JBL SRX712M floor wedges, plus 12 VT4886 loudspeakers and six VT4883 subcompact subwoofers, driven by nine Crown IT 4x3500 HD amplifiers.

In addition, PSS supplied two Soundcraft Vi6 digital live sound consoles for front of house and monitor mixing. Brian Gordon of PSS (head audio engineer and co-owner) and his staff used the JBL VTX-LZ-K laser accessory kit to precisely align all the loudspeakers in the system.

“This was our third year of working on the Voodoo Experience festival and our second with the JBL VTX Series line arrays,” states PSS co-owner and director of operations Russ Bryant. “The sound was absolutely awesome. At a large-scale outdoor rock festival like this, the demands on a loudspeaker system are tremendous in terms of the sheer SPL and coverage required—and the V25 line arrays are phenomenal in their ability to deliver clear sound even at high volume, with pristine, effortless headroom especially in the upper midrange and high frequencies.”

The Crown IT 4x3500 HD amplifiers also made life easier for PSS. “Having four channels in one amp allowed us to significantly cut down on the amount of cabling we needed and save a lot of time and labor,” he notes. “We are very happy with the flexibility they offer and the fact that they can be optimized for use with VTX and VerTec loudspeakers using the V5 DSP presets.”

At festivals with dozens of artists on the bill, smooth transitions between acts are crucial. “We wound up using the Soundcraft Vi6 consoles for 31 front of house and monitor changeovers,” says Bryant. “I can tell you that the Vi6 was much easier to use than other consoles that were at the festival.”

He adds: “The Vi6 still passes signal while you’re loading show files into it where some other consoles will not, which is a major benefit during the hectic time constraints of a big festival.”

Premier Production & Sound Services (PSS)
JBL Professional
Crown Audio
Soundcraft
Harman Professional

Posted by Keith Clark on 12/24 at 08:38 AM

Monday, December 23, 2013

Compact Avid S3L Provides Plenty Of Capability On Latest Primal Scream Tour

Long-time Avid user, front of house engineer Michael Brennan, selects S3L for functionality, form factor

For Scottish alt rockers Primal Scream’s latest tour, space for the crew to carry equipment was at a premium. Front of house engineer Michael Brennan was faced with having to fit his kit into a small trailer that would be towed behind the tour bus as it worked its way across Europe and on to Japan for a flying visit before returning to the UK for the tour’s final dates.

Ordinarily the lack of space would have meant going on tour without a mixing console. However, Brennan, who has worked on Avid mixing systems for the past eight years, approached the company about using an alternative solution – specifically its new compact S3L mixing system.

“I wanted to use the S3L on this Primal Scream tour for several reasons,” says Brennan. “I knew that its sound quality is second to none and because of the compact size of the board and the stage-boxes, I would be able to fit it into the trailer. Additionally, the show file I had already created on my VENUE was compatible with it, so I was confident the transition would be simple.”

The new console also made its Japan debut on the tour, having been supplied locally by Clair Global Japan. “It’s perfect for a small club tour and because of the S3L’s modularity and networked Ethernet AVB architecture, the setup time is so quick, plus it’s really easy to scale up if you need to,” explains Brennan. “I can set up a 64-channel desk, 48 lines of stage-box and a 128-channel, 75-meter Cat-5e multi in around 10 minutes. If you add in its space saving size, it’s just awesome. I couldn’t have taken a board on tour if it wasn’t so small and modular. I’ve had to carry this one in a transit van, splitter, truck, car and plane, and it’s been really easy to move around.”

He adds, “It’s also proving really popular in the venues because it has such a small footprint that promoters can sell more seats, which they obviously love.”

Because Brennan regularly uses Avid’s Virtual Soundcheck feature, he always dials in the show before he goes on tour. “When the band comes in for rehearsals, I record the set onto Pro Tools directly from the board, and then I’m able to sit down with the artist at FOH, locate any point in the set and allow the artist to hear my mix. The scene function has been a major breakthrough and is very powerful, as it gives the artist a sense of involvement and confidence as they’re able to really know how they sound. It’s a very creative tool that allows me and the band to realize and achieve the sound we both want.”

Brennan’s a long-time Avid user, having first worked with a D-Show on a Super Furry Animals tour in 2005. When he then got his hands on an Avid Mix Rack System he was so impressed he bought one of his own, which has accompanied him on every tour he’s been involved with since. He has also mixed, produced and recorded many albums over the past 15 years, always using Pro Tools, for artists including Snow Patrol, Primal Scream, Mogwai and Super Furry Animals.

On Primal Scream’s 2010 Screamadelica tour, Brennan recorded every show on Pro Tools, and was then able to mix a live DVD in stereo and 5.1 formats. He says, “The compatibility of Pro Tools with the Avid Profile surface made this a simple process, and I was even able to mix it on the Profile surface as I already had the song mix dialed in.”

Avid

{extended}
Posted by Keith Clark on 12/23 at 12:46 PM
AVLive SoundNewsAVConcertConsolesDigitalMixerSound ReinforcementPermalink

Tuesday, December 17, 2013

Church Sound: Mixing Like A Pro, Part 4—Making EQ Work For You

Looking at the actual sonic makeup of the sounds that come from the stage
This article is provided by CCI Solutions.

 
Editor’s Note: Go here to read previous installments in this series.

In the previous article, we took a look at various equalization (EQ) tools, identified their functions and what they can do for us. After all, we can’t use our tools effectively if we don’t know what they are or how to use them.

We’ve covered what EQ does to shape frequencies in our sound system audio, but to know how to make EQ work for us we have to look at the actual sonic makeup of the sounds that come from the stage.

Putting EQ Into Words
Before we focus on what we’re EQ’ing, we need to learn how to translate common language into tangible EQ adjustments. You know what I’m talking about: comments like “it’s too edgy” or “it sounds muddy.” What do those actually mean?

The chart below gives us some hints using words like rumble, muddy and edge. With the help of this chart, we can take an educated guess that when someone says an input sounds “honky,” they’re referring to something in the 440-1,000 Hz range.

It’s not an exact science, I know, but getting a good feel for what responses are elicited by certain frequencies can help us in making EQ adjustments quickly. So the next time someone tells you the electric guitar sounds “edgy” or “crunchy,” you know you need to quickly look at the 2,000-4,000 Hz range to attack your problem.

Go here for an interactive version of this chart.
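To make that translation step concrete, here is a minimal Python sketch of the idea; the descriptor-to-range mapping below is only illustrative, loosely following the categories discussed in this article rather than reproducing the chart exactly.

# Rough, illustrative mapping of common descriptive words to frequency
# ranges in Hz. These bands are approximations, not a copy of the chart.
DESCRIPTOR_RANGES = {
    "rumble": (20, 80),
    "boomy": (80, 250),
    "muddy": (250, 440),
    "honky": (440, 1000),
    "tinny": (1000, 2000),
    "edgy": (2000, 4000),
    "crunchy": (2000, 4000),
    "harsh": (4000, 8000),
}

def suggest_band(comment):
    """Return (word, (low_hz, high_hz)) for the first descriptor found in a comment."""
    text = comment.lower()
    for word, band in DESCRIPTOR_RANGES.items():
        if word in text:
            return word, band
    return None

print(suggest_band("The electric guitar sounds edgy tonight"))
# -> ('edgy', (2000, 4000)): start hunting in the 2-4 kHz range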

Focusing On Frequency Ranges
Everything we hear is made up of a range of frequencies. Each sound that hits our eardrums is made up of a collage of frequencies at a blend of sound pressure levels that our brain interprets as “the sound.”

Just as each person’s voice has a unique makeup and signature, every instrument or vocal has a frequency makeup that is unique to it. In order to talk about how to EQ an input, we need to learn what frequencies are involved in the sound sources we’re working with.

The chart is a great place to begin to understand what frequencies make up the sounds we experience on a Sunday morning. It shows us the range of any given source, and it shows us the frequencies we need to focus on (and not focus on).

For example, the range of a guitar will typically start around 80 Hz and will top off around 5 kHz. Knowing there is nothing being produced below 80 Hz, the first thing we can do is turn on the low cut/high pass filter to eliminate any low frequency junk that our guitar isn’t actually producing.

We also know that the guitar isn’t producing frequencies over 5 kHz, so turning up the highs above that just adds unhelpful noise. Based on this chart, we know our focus needs to be between 80 Hz and 5 kHz.
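As an offline illustration of what that low cut/high pass button is doing, here is a small sketch using SciPy (assumed to be available); the 80 Hz cutoff matches the guitar example above, while the filter order and test signal are arbitrary choices for the demo.

import numpy as np
from scipy.signal import butter, sosfilt

def low_cut(samples, sample_rate, cutoff_hz=80.0, order=4):
    """Apply a Butterworth high-pass (low cut) filter, roughly what a console HPF does."""
    sos = butter(order, cutoff_hz, btype="highpass", fs=sample_rate, output="sos")
    return sosfilt(sos, samples)

# Demo: a 110 Hz "guitar" tone with 40 Hz rumble mixed in.
sr = 48000
t = np.arange(sr) / sr
noisy_guitar = np.sin(2 * np.pi * 110 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)
cleaned = low_cut(noisy_guitar, sr)  # the 40 Hz junk is strongly attenuated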

Critical Sound — The Voice
The human voice, our most critical source in the church, also has clear-cut frequency ranges, whether it is singing or speaking.

While everyone’s voice has minor variations, the male voice typically produces frequencies between 100 Hz and 16 kHz. The female voice shares that 16 kHz top end, but doesn’t typically produce frequencies below 240 Hz.

The first thing this should tell you is that your low cut or high pass filter should almost always be engaged on these inputs. As you can see on the chart, the warmth or boominess of the voice sits between 100-250 Hz, so most of the time there is nothing worth keeping below 100 Hz.

The most important frequency range in the voice in my opinion, and the one I see most commonly mis-adjusted, is the intelligibility range in the high-mids (2 kHz to 4 kHz).

When listening to vocals that are “honky” or “tinny,” I often see sound guys reach for the high-mids and adjust those down to try and improve the sound.

As we can see on the chart, lowering the high-mids actually attacks the intelligibility while missing the “honky/tinny” sound that lives in the 400 Hz-2 kHz range. It seems like such a small miss on paper, but this mistake will often cost the vocals their clarity in the mix.

What To Do With Q
If you’re fortunate enough to have a fully parametric EQ with a Q control, you have a tool that lets you get very specific with your EQ adjustments or make general, sweeping changes. Most of the time we want fairly general adjustments, and a bandwidth of roughly an octave (a Q value of around 1) is great.

If you need to subtract a bit of “honkiness” from your guitar, though, a 2 dB cut centered at 700 Hz with a lower Q (maybe 0.7) will give you a broader, wider cut that affects everything in that 400 Hz-1 kHz range.

Or if you’re fighting a particular frequency for feedback, you can make your Q very high (maybe 5 or 6) so that you are narrowly notching out the frequency that’s causing you issues, leaving the rest of the frequencies alone and keeping some semblance of natural sound.

The Q, if you have it, really gives you a great deal more flexibility in the adjustments you make.
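For the curious, here is a sketch of how center frequency, gain and Q combine in one common digital peaking-filter design (the widely used RBJ “Audio EQ Cookbook” formulas); this is not any particular console’s implementation, just a way to see that a lower Q spreads the same cut over a wider band.

import math

def peaking_biquad(f0_hz, gain_db, q, sample_rate=48000.0):
    """Biquad coefficients (b, a) for a peaking EQ band, per the RBJ Audio EQ Cookbook."""
    A = 10 ** (gain_db / 40.0)
    w0 = 2.0 * math.pi * f0_hz / sample_rate
    alpha = math.sin(w0) / (2.0 * q)
    cos_w0 = math.cos(w0)
    a0 = 1.0 + alpha / A
    b = [(1.0 + alpha * A) / a0, -2.0 * cos_w0 / a0, (1.0 - alpha * A) / a0]
    a = [1.0, -2.0 * cos_w0 / a0, (1.0 - alpha / A) / a0]
    return b, a

# A broad 2 dB "honk" cut versus a narrow feedback notch (depth chosen for the example):
broad_cut = peaking_biquad(700.0, -2.0, q=0.7)   # wide cut covering roughly 400 Hz-1 kHz
notch_cut = peaking_biquad(700.0, -12.0, q=6.0)  # surgical cut, leaves neighbors alone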

Feedback
Speaking of feedback, one last thing our chart can help with is learning what individual frequencies sound like. We’ve all dealt with feedback at some point: a frequency gets amplified enough that the mic picks it up again from a monitor or loudspeaker, over and over, creating a feedback loop.

When feedback occurs, one common way to attack it is adjusting the EQ to decrease the gain of that frequency. To be able to do that quickly and effectively, we need to know what individual frequencies sound like.

Frequency Killing
At the bottom of the chart is a standard 88-key piano that shows what frequency each note produces. When it comes to training your ears to be able to quickly respond to feedback, sitting at a piano or keyboard with this chart can help you learn exactly what frequencies sound like.

Try it sometime: sit at a keyboard and focus on a typical problem range of 200-500 Hz and press one key over and over, training your brain to recognize the tone of each frequency.

Middle C is a great place to start, with a frequency of about 262 Hz; I find this range to be a common problem area in many churches. Then jump up to the A above middle C, with a frequency of 440 Hz. Do this occasionally, spanning the entire frequency spectrum, and you’ll be a frequency killer in no time!
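If a piano isn’t handy, the same note-to-frequency relationship at the bottom of the chart can be scripted. A minimal sketch, assuming standard equal temperament with A4 tuned to 440 Hz:

def piano_key_to_hz(key_number):
    """Frequency of a standard 88-key piano key (key 1 = A0, key 49 = A4 = 440 Hz)."""
    return 440.0 * 2 ** ((key_number - 49) / 12)

print(round(piano_key_to_hz(40), 1))  # middle C (key 40) -> 261.6 Hz
print(round(piano_key_to_hz(49), 1))  # A above middle C  -> 440.0 Hz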

Wrap Up
Hopefully at this point you’re feeling more confident about what EQ is and what it can do for you. The difficult part is that there is no clear-cut formula for what will and won’t work.

I can’t tell you that you should always cut certain frequencies to get a great sounding input. Our vocals and instruments are living, breathing, unique things and they all have their own flavor.

On top of that, every sound system and facility has its own nuances that come into play. When it comes to EQ, you have to trust what you’re hearing. Use the chart. Print one out and keep it at your sound console for reference.

 

Duke DeJong has more than 12 years of experience as a technical artist, trainer and collaborator for ministries. CCI Solutions is a leading source for AV and lighting equipment, also providing system design and contracting as well as acoustic consulting. Find out more here.

 

{extended}
Posted by Keith Clark on 12/17 at 04:34 PM
Church SoundFeatureBlogStudy HallEducationEngineerMixerProcessorSound ReinforcementTechnicianPermalink

In The Studio: Tips For Better Take Management

One of the major differences between an aspiring producer and an established one...
This article is provided by the Pro Audio Files.

 
One of the major differences I’ve seen between an aspiring producer and an established producer is simple playlist (take) management. Great producers will usually have a very clean session in regard to organization and take management.

Gizmos
Technology is great. It allows us to do things that have never been done before, all in the comfort of our own homes. But when is it a hindrance? When do we become prisoners of all the possibilities? When do we start to drown in endless options?

Established producers often have a lot of clarity within their sessions. They’re not concerned with countless possibilities, but with the best option.

This means when it comes to comping tracks and saving takes, decisions are made quickly.

Saving 20 takes per part may seem like a reasonable idea to many. What if you want a different variation on the part? Not sure the timing is locked? Not sure which take has the best tuning? What if? What if? What if?

Too many “what ifs” lead to a muddy production. It’s important to make decisions. Clarity throughout the process matters, first because it affects the performances.

A guitar chord that’s off will throw the bass note off, and then the percussionist has a hard time locking in. Before long, you have a session where the whole band is a little shaky. Failing to make decisions can create a spiral effect on the stability of the production.

Momentary Lapse Of Reason
There is also the memory lapse effect. You record a bunch of takes and while you’re working, everything seems clear in your mind: Take 12 had a good bit, take 15 was mostly good, but you want to grab the beginning from take 4.

If you put the song down for a few days and come back to the session it’s going to be hard to remember the nuances between takes.

Commit. If it’s still not good enough, re-track it. At this point, you’re better off getting a single take than a patched edit, for the sake of feel. I’m always in favor of replaying the part rather than extensive edits. It will take about the same amount of time, and the full take will still sound better.

Worm Hole
Aspiring producers and musicians get caught in the trap of playing too much and not listening. I like to set a rule of stopping after four takes and giving them a really good listen. Don’t set the recorder to loop endlessly. Loop recording means you’re not listening, and most likely spacing out at times.

Perspective
It’s hard to hear the music the way it really sounds while you’re playing. This is another reason why you need to stop and listen as often as you can. If you’re both the producer and the player, your perspective is biased.

When you stop, put your instrument down and trust your ears. Listen, make notes, and re-take. Don’t be noodling on your instrument while listening. This is the only way to make really fine adjustments. It may seem like it’s the long approach, but in reality, it will save you time.

Hit It
Here is how I like to track a vocal session.

First, I’ve taken time to choose the correct mic, preamp, compressor, incense, tea, lighting and dialed in a headphone mix. (Note: It’s very important to have a great headphone mix. It will result in less fatigue and frustration from the performer.)

Next, I like to record a couple of full passes before we even think about punches. Let the performer get into the vibe of the song.

After 3-4 takes, stop. Take a quick break for water and then listen. Before we listen, I make sure we both have a pencil and paper. As we review each take, we write down what we liked and didn’t.

Listening to 8 takes in a row is overwhelming! It’s too much to digest. Plus, I’ve heard that if you listen to 9 takes in a row it could cause bowel irritation. Ok, I made that up. But, if I have to listen to 9 takes in a row of the 3rd part background vocal I’m going to be calling my friend Johnny Walker Red… And we’re gonna have a loooong chat, if ya know what I mean.

When the last take has finished playing, we compare notes and see if we have a comp. If the overall performance isn’t there, we repeat the cycle: a 3-4 take run, a break, then listen, take notes, comp.

If we just need a few bits, we comp the take and punch in where needed. Notice I mention we comp BEFORE we punch!

Performance Drift
There is something I like to call “performance drift.” This is when the artist’s performance changes dramatically from the first take to the last. Volume, expression, and enthusiasm may have shifted during flight. Limiting tracking to 3-4 takes at a clip prevents performance drift, since the breaks and reviewing keep things fresh.

Hash It Out
Don’t use recording as your practice. Need to review something because it’s not right? Stop playback and run it. Work it out. Be prepared and ready when the red light is on.

Don’t have the mindset of “I’ll fix it later.” The performance will always suffer. Even though we know comping and punching is an option, it’s good to pretend that it is not. A coherent take will always sound better.

Binary Composting
Don’t be a take-hoarder. Go ahead and delete! Don’t be afraid. Why live in the past, when you can be in the present? Last take only so-so? DELETE.

It’s also a good idea to delete all unused audio from your sessions. There’s no use carrying around that baggage, and no reason to have 20 gigs of audio you’re not using. A bloated session is harder to back up, it’s harder to track down a file when you need to, and it takes longer to load.

If you’re not using it, send it off to greener pastures (aka your trash bin). Think of it as composting for 1’s and 0’s. Dare I say binary composting?!?!

Adios
Before I tell the musician a session is over, I make sure I have a comp I can live with. It should include all crossfades and edits cleaned. I want to know I have the part and what it sounds like. Leave nothing to the imagination…except which island your summer home will be on after your single blows up.

 
Mark Marshall is a producer, songwriter, session musician and instructor based in NYC.

Be sure to visit The Pro Audio Files for more great recording content. To comment or ask questions about this article, go here.

{extended}
Posted by Keith Clark on 12/17 at 04:05 PM
RecordingFeatureBlogStudy HallBusinessConsolesEngineerMixerStudioTechnicianPermalink

Friday, December 13, 2013

API Launches “The Box” at Vintage King’s LA & Nashville Stores (Includes Video)

Heavy Melody Music in NYC picks up three "Boxes"

Automated Processes, Inc. (API) chose Vintage King Audio’s LA and Nashville showrooms to launch and demonstrate The Box, its new small-format recording/mixing console designed for professional project studios, home studios, and production facilities.

Coinciding with the launch of The Box, NYC’s Heavy Melody Music ordered three consoles for their studios. “Composer and sound designer Neil Goldberg is one of the partners, and he had a nicely outfitted studio but asked us how The Box would fit into his rooms,” notes Jacob Schneider, VK sales rep. “He was struggling with a bunch of summing mixers, monitors, control units, and preamps, but with The Box he found he could cover all of those needs in one Box.”

“We had a great turnout and loads of enthusiasm for The Box events at Vintage King Los Angeles and Vintage King Nashville,” adds Dan Zimbelman, API director of sales. “While in LA, we also shot a short video for those who couldn’t be there in person.” (See the video below)

Building on API’s heritage of high-quality recording consoles, The Box is optimized for the digital era, providing all the functions needed for production not provided by most DAWs, including mic preamps, input signal processing, high-quality mix bus, cue sends with talkback, monitor control, and more, without the redundant capacities of larger consoles.

Most significantly, The Box provides the “all discrete” API sound in an efficient, cost-effective package.

The Box is specifically designed for audio professionals with project or home studios who require a smaller format console with that “big console sound.” It includes the same circuitry, performance and API sound as the company’s successful Vision, Legacy Plus, and 1608 consoles.

“The Box offers an easy, turnkey solution for recording and mixing,” states API president Larry Droppa. “It’s a great option for people who record a few channels at a time, but demand the warmth and punch that a large API console delivers. In addition to four inputs, full center section control, and 16 channels of API’s famous summing, the icing on the cake is a classic API stereo compressor on the program bus. Now you can truly record and mix—in The Box.”

 

 

Check out a slide show of the VKLA launch here.

And, see a slide show of the VK Nashville launch here.


API
Vintage King Audio 

{extended}
Posted by Keith Clark on 12/13 at 10:53 AM
AVNewsVideoProductConsolesDigital Audio WorkstationsManufacturerMixerProcessorStudioPermalink

Tech Tip Of The Day: Pre/Post Confusion

What is the difference between a post-fader and a pre-fader aux send and in what situations would I use either one?
Provided by Sweetwater.

 
Q: I was recruited to be on our church tech team, and I’m really glad I said yes.

I really enjoy this whole audio thing; however, there’s so much I don’t understand. I know it’s a learning process, and everybody tells me I should just ask questions, so I guess one thing that’s tripping me up is all this pre/post stuff.

What’s the difference between a post-fader and a pre-fader aux send and in what situations would I use either one?

A: On a console, pre and post describe where in the channel’s signal path an aux send taps its signal, which determines whether controls like the channel fader affect what that send feeds.

For example, a post-fader aux send taps the incoming signal from the channel at a point after the channel fader. This means that when the channel fader is down, no signal is sent out the aux send(s) on that channel.

Post-fader aux sends are generally used as “effects sends,” to send a signal out from a particular channel to an effects processor.

Since the channel fader controls the level of signal being sent to the main mix as well as the level of signal being sent out the aux send, when the channel fades down, the level of the “wet” signal follows the level of the “dry” signal. If the level of the wet signal did not follow the level of the dry signal, the effect would still be heard after the channel fades out.

A pre-fader aux send taps the incoming signal from the channel at a point that is before the channel fader. So, when the channel fader is down, the signal is still being sent to the aux bus.

Pre-fader aux sends are helpful for live sound reinforcement situations where the FOH console is doubling as the stage monitor mix console. When setting up stage monitor mixes, it is ideal to be able to control the level of these mixes independently from the front-of-house mix.

If the position of the channel fader affected the level of that channel in each monitor mix, it would be necessary to constantly adjust the monitor mix (Aux) Sends after changing the level of the channel fader. Or more simply put, when a screaming guitar solo is boosted in the front-of-house mix, everybody on stage would get an earful from their monitor mix.
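Here is a toy numeric sketch of the difference, with each control modeled as a simple gain multiplier (this is an illustration, not any console’s actual signal flow):

def send_level(input_level, fader, send, pre_fader):
    """Toy model: returns the level a channel feeds to an aux bus."""
    if pre_fader:
        return input_level * send          # fader has no effect on the send
    return input_level * fader * send      # send follows the channel fader

channel_in, send_knob = 1.0, 0.8

# Fader at unity: both send types feed the same level to the aux bus.
print(send_level(channel_in, fader=1.0, send=send_knob, pre_fader=False))  # 0.8
print(send_level(channel_in, fader=1.0, send=send_knob, pre_fader=True))   # 0.8

# Fader pulled down: the post-fader (effects) send follows it,
# while the pre-fader (monitor) send keeps feeding the stage.
print(send_level(channel_in, fader=0.0, send=send_knob, pre_fader=False))  # 0.0
print(send_level(channel_in, fader=0.0, send=send_knob, pre_fader=True))   # 0.8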

There is also another distinction, known as pre- or post-EQ, which at this point should be fairly self-explanatory. Any pre-fader send can still be either pre- or post-EQ. In a live situation, pre-fader, pre-EQ sends are usually best where the mixer may be feeding stage monitors.

Additionally, there are options such as PFL (pre-fade listen), which generally sends a signal to the monitor outputs regardless of that channel’s fader setting, and simultaneously mutes the other channels in that monitor feed. In other words, PFL allows you to solo a channel even if the fader is pulled all the way down.

Note that on most consoles, this affects monitors only, and does not interfere with main, tape, or aux outs. In broadcast situations, PFL is often referred to as “cueing.”

For more tech tips go to Sweetwater.com

{extended}
Posted by Keith Clark on 12/13 at 10:23 AM
Church SoundFeatureBlogStudy HallConsolesEducationMixerSignalSystemTechnicianPermalink

Thursday, December 12, 2013

Perspective: Meeting The Challenges Of The Gig

Identifying your primary audience is key.

As a sound engineer working in the concert and corporate event markets, I’ve found it useful to identify my primary (most important) audience for every gig.

Is it the band that hired me? The band manager? The promoter? The people buying tickets to the show? The people that own the venue? The sound company I’m working for?

It can be tough to figure out. The easy answer is that you work for whoever pays you, but it can be a bit more complicated than that. Maybe the following experiences can help answer the question.

I toured with a particular artist for several years. We played medium-sized venues in larger cities (Roseland Ballroom in New York, The Warfield in San Francisco, etc.) in addition to being the support act on several arena tours.

At some point, the band started asking me to mix them “as loud as possible” (after completing a record with a producer who monitored at extremely high SPL).

They were already frequently too loud before I put them through the PA.

Sometimes I had a hard time getting the vocals above the ambient level of the guitars, even in larger venues.

I knew their fans - after all, they would talk to me at every show - and they didn’t like it.

They took the lyrics seriously, and would sing along the entire set. The clearer the vocals, the louder the fans sang (and the higher the energy levels in the room).

Although the band hired me, the fans really paid the bills, so I identified them as my primary audience.

If they weren’t happy, the band would eventually hear about it, so I worked to convince the band that the fans really wanted to hear the vocals and understand the words above all else.

Once they understood, the band stopped insisting on a punishing loud mix, and even began turning down their guitars if I couldn’t get the vocals audible over the stage volume.

Figuring out the primary audience at corporate events is even trickier because there are additional audience layers in play, such as event planners and clients.

A few years ago I traveled to Tampa to mix a band at an official NFL party tied into the Super Bowl - a large event (3,000 people) in the main hall of the Tampa convention center.

A local sound company provided a multi-zoned system, with main and delay line arrays, subs, and front fills.

All About Business
Although the band hired me, I’ve done enough events like this to know that the event planner calls the shots, often at the direction of the client paying the bills.

If the planner tells me to turn it down, I do - even if the band wants me to turn it up.

Luckily in this particular case, the band has done these types of events for years, so they understand that it’s all about business.

(As far as I can tell, the pecking order of importance seems to be: attendees, food, interior design, floral, lighting, sound, entertainment.)

At sound check, the event planner asked me to turn up the volume.

I happily complied, knowing that once the party started I would almost certainly be asked to turn it down. (It’s usually best to keep this knowledge to yourself and let the situation play out rather than offer any resistance.)

I arranged my mix accordingly, putting the band on a VCA and inserting a compressor on the main stereo bus.

I also decided that, when asked to turn down the volume, I could decrease the level of the delays and main arrays while still maintaining the volume (and energy) on the dance floor by turning up the level in the front fills.

The Point
Sure enough, as soon as the band hit the stage, a woman I’d never seen before asked me to turn down the volume.

I said O.K., and politely asked her name, and then asked the system tech to radio the event planner and find out if the woman had the authority to make the request. The event planner said yes - the woman was the assistant to the NFL commissioner.

Here, finally, was my real main audience for the event. If the person paying the bills for a corporate event wants less volume, then no problem.

I turned down the arrays a couple of notches, took some 2-3 kHz bite out of them, brightened up the overall mix a bit above 8 kHz for clarity, and pushed up the volume in the front fills.

For the next three hours, the band played, the party-goers drank, ate, schmoozed (and finally started to dance), and I was left to actually mix the show instead of responding to requests to turn it up or down.

The event planner will most likely book the band again, I will most likely mix the band again, and we can all continue to make a living.

And that’s the point. To make a living working in sound, we often find ourselves having to do things that those paying the bills find enjoyable.

Do it politely, and you just might be asked back.

Nick Pellicciotto has worked in the live sound industry for over 15 years, touring as a mix engineer for acts like Fugazi, Lucinda Williams, and Modest Mouse.

{extended}
Posted by Keith Clark on 12/12 at 12:00 PM
Live SoundFeaturePollProductionAudioConsolesEducationEngineerEthernetLine ArrayLoudspeakerMixerMonitoringSignalSound ReinforcementSystemTechnicianPermalink