Analog

Friday, October 03, 2014

Put Another Nickel In…

Somewhere in its early years, the coin operated record player acquired the name “jukebox.” There are several theories about the origin.

The most accepted is that the word “juke” is a corruption of the word “jook,” an African American slang term for dancing. The source of the music for this dancing would have been called a “jookbox.”

A second version is that “jook” meant “sex” which may have made sense since brothels were some of the first establishments to install jukeboxes, thus replacing the piano player. A third source of the word may have been from the term “jute,” or “jute joints” where jute pickers would relax, drink and dance.

Whatever the source of its name, the jukebox of the 1920s was generally associated with “speakeasies” and the “low-life” of prohibition since they were featured entertainment in such places.

Paying to hear a record played began with the entrepreneurial activities of carnival and penny arcade operators, who made their own recordings and then charged admission to hear them on the newly invented gramophone. It was in response to requests by this group of users that the phonograph/gramophone manufacturers began to produce prerecorded product.

This was an unexpected lifeline for the Columbia company, which in 1890 seemed headed for liquidation because the intended use of the phonograph as a dictating machine had been a dismal flop. Columbia and Edison began to realize that their market was somewhere else. They also recognized that in order to sell players, they had to produce and manufacture prerecorded product that the public wanted to hear.

Initially the preferred programs for coin-operated players were comic songs, bands, monologues, and whistling. The revenues from these “pay for play” machines were amazing in light of the fact that the quality was poor and the selection meagre. In 1891, some machines earned up to 14 dollars a day—a lot of money at the time.

A penny arcade from the early 20th century.

While accepting there was a market for coin operated carnival players, Edison feared they might create the impression that the phonograph was only a toy. His worries were unjustified, since the showman-operated players cultivated a consumer appetite for recorded music and a desire for home players.

As the turn of the century approached, main-street penny and nickel arcades were becoming an increasingly popular center for entertainment. There were hundreds of different coin-operated amusements. The most popular of these were the ones that played music. Into this market came the nickelodeon and the jukebox.

The first jukebox appeared close on the heels of the introduction of the phonograph. Louis Glass installed a coin-operated Edison cylinder system at the Palais Royale Saloon in San Francisco in 1889.

The Automatic Entertainer from the John Gabel Company.

In 1906, the Automatic Entertainer, which used the flat disks invented by Berliner, was introduced by the John Gabel Company. The system was entirely mechanical but required regular winding of its spring mechanism. It was popular in spite of its poor sound quality.

In Paris, at the Pathé Salon du Phonographe, patrons could choose a musical selection, which would be played for them from the floor below, where there was a battery of players. As in San Francisco, they would hear their selection through long listening tubes connected to the player’s diaphragm.

Composer Claude Debussy, after paying a few coins to hear this system, was concerned that the low cost of the disk and its availability would have the effect of cheapening the music. He did, however, acknowledge that the discs preserved a certain magic.

In 1913, Debussy wrote: “In a time like ours, when the genius of engineers has reached such undreamed proportions, one can hear famous pieces of music as easily as one can buy a glass of beer. Should we not fear this domestication of sound, this magic preserved in a disc that anyone can awaken at will? Will it not mean a diminishing of the secret forces of the art, which until now have been considered indestructible?”

Debussy, like so many other classically trained musicians, feared that this new technology would harm his beloved art, and probably his concert income. The jukebox and nickelodeon changed the way people heard the music of the day by placing it within reach of the masses.

Mechanical jukeboxes continued to be one of many amusement machines in these penny arcades, but in the late ’20s, with the introduction of electric motors and amplification to the phonograph, the modern jukebox became a reality.

In 1926, J.P. Seeburg, a Swedish immigrant to the U.S., invented an electric system that was coin operated and would play any of eight records. A year later, Automatic Musical Instruments introduced its electric jukebox. Unlike their mechanical predecessors, which could only be heard by fee-paying patrons standing near the machine, these systems were capable of filling an entire room with sound.

These innovations further popularized the jukebox, and so began the modern jukebox craze.

The other two major manufacturers of jukeboxes appeared in the early 1930s. Wurlitzer, a long-time manufacturer of pianos and player pianos, introduced its first jukebox in 1933. And in 1935, David Rock-Ola (his real name), whose company had been building scales and coin-operated games, introduced his first jukebox.

When the Great Depression hit in the 1930s, the jukebox business became the one bright spot for the record industry. For the public, a nickel paid for six plays and, like the movies of the day, provided a few minutes of escape from hard times.

A 1936 Wurlitzer Model 35 prototype jukebox.

There were two other historical events that helped the jukebox gain prominence.

The repeal of prohibition in the U.S. in 1933 meant that tens of thousands of bars, clubs, and other drinking establishments were now installing jukeboxes for entertainment.

The second was the outbreak of World War II, and the relocation of millions of young soldiers to camps in far-away locations. For entertainment, the armed forces installed hundreds of jukeboxes in PX’s and service clubs all over America and overseas.

While these young people would have frequented their local jukebox back home, those machines would have offered only a couple of types of music among the 24 available selections, with the records chosen to suit the area and the jukebox’s clientele.

But the military jukeboxes were unique in that they were stocked with a range of music to satisfy the varied tastes of those who had come from every part of the country and ethnic background. American blues, gospel, country and pop records were all thrown together on military jukes that introduced GIs to all sorts of music that came from outside of their home community and culture.

Almost overnight, American regional music, never really played on radio before, was heard by those from every region of the country. Many of these young people were also musicians who would now explore, absorb, learn, appropriate, and embrace pop music styles they had never heard before.

After the war this would have a significant impact on the coalescing of those musical roots that would form rock and roll.

On the home front during World War II, there was a growing juvenile delinquency problem with so many parents unable to pay attention to their teenagers. Dad was away at war, and Mom was working in a defense plant.

During the early 1940s, throughout America, youth centers were opened for after-school and weekend activities. To bring in the teens, free jukeboxes were brought in, turned up, and rarely turned off. The program was successful.

But, by the late 40s, the jukebox had fallen out of favor with the conservative establishment and was increasingly considered a corrupting influence. One prominent critic wrote in 1948 that the jukebox was responsible for “the musical tastes of America’s youth starting on a steady decline.” That year Frank Sinatra was the most popular artist in the country. For such critics, things would get far worse.

For many Americans in the early 1950s, rock and roll was the devil’s tool, and existed for no other purpose than to morally corrupt the youth. For the first time teenagers had their own beat, and it could be found blasting out of the malt shop jukebox.

The Wurlitzer Model 1015.

By 1956 there were somewhere around 750,000 jukeboxes swallowing dimes in America. Since most radio stations were only playing the most sanitized rock and roll selections, the jukebox was the source for the majority of rock music, particularly those machines in racially mixed neighborhoods. These machines had records of black artists who were singing rhythm and blues and early rock.

The public had heard from the pulpit and conservative press about the evil, passion-firing sounds thumping from those machines sitting at the end of the bar or against the wall of the malt shop, but when Evan Hunter’s book, The Blackboard Jungle, was made into a movie in 1955, the older public was convinced. They had not beaten Hitler to see their children’s minds lost to the devil’s music.

The Rowe RPM45.

When you added the title song “Rock Around The Clock” to the images in the movie, it was obvious to anyone over 30 that rock and roll equaled teenage delinquency. The jukebox had become an integral part of rock and roll imagery.

In many areas of America, the government required a sticker on the jukebox stating that “minors are forbidden by law to operate this machine,” but generally, the jukes remained uncensored.

However, the jukebox operators were frequently placed under suspicion of jukebox stacking, a form of payola where they would be paid to put a record in the machine. Those who operated jukeboxes didn’t kick this image until the 1970s.

Coin operated music delivery systems did not decline as gramophones became a common addition to homes. The opposite was the case. With the spread of domestic record players within the upper middle class, along with radio, a desire was created for recorded music throughout the entire population.

Coin-operated systems allowed anyone, for the price of a few pennies, to hear their favorite and/or the latest record. Increasingly, these customers were the young. In general, the first phonographs were controlled by older people (parents) whose musical tastes ran toward classical and the music of their generation.

To hear the latest, young people had to go to the juke at their local hangout. Not until the late 1950s was the cost of reproduction systems, headphones, and the records themselves so affordable that young people could have a record player of their own that they could control.

Most of them got that first record player with the detachable speakers as a Christmas present from parents who never realized that from that day forward “turn it down” would become one of their most-often used phrases.

The record player: hi-fi in its day.

Choosing what records would go in the jukebox was probably the origin of the “Hit Parade,” due to the limited number of records that could go into a machine, and the practice of installing new records weekly based on which ones were and were not played.

The jukebox brought the choice of what music would be played down to who wanted to hear a song badly enough to spend a nickel. Often these would include recordings of local acts that were prominent in that specific community. In the mid 1930s, every jukebox held a smattering of local releases.

By 1940, those who chronicled the U.S. record industry were recognizing the importance of the jukebox. Jack Nelson wrote in Billboard that “coin operated phonographs, through a tremendously wide distribution, appeal to millions of individuals every day, thus ensuring for this industry an important part in the next phase of American music.”

The inner workings of a vintage jukebox.

The jukebox had become a significant centerpiece anywhere small-town America gathered, and record sales to the jukebox operators were becoming significant. It provided anyone with a nickel instant grass-roots musical satisfaction.

As Chris Pearce describes it, “It was the jukebox into which the lonely trucker at the coffee shop dropped his nickel to inspire dreams of his baby back home, the jukebox that the kids made for in Chuck Berry’s song when they wanted to hear something really hot, the jukebox that linked communities whose local operator stocked it with songs and dances from the old country.”

From the 1920s to the 1960s, jukeboxes advanced electronically and mechanically: changers with greater capacity, better amplifiers and speakers, selectors at each table, roll-around selectors, and so on.

Of paramount importance was the “look” of the machine. The jukebox had to be visually exciting. The exterior design became a key to the jukebox’s success. Seeburg and Wurlitzer hired top industrial designers just when Modernism was coming into vogue.

Translucent colored plastic was starting to be widely used and was ideally suited for the illumination of the jukebox. Most manufacturers believed that the customer wanted to see the record changer work and a cabinet that lit up.

Wurlitzer dominated the post-World War II market with its classic machine, the 1015, which featured colored arcs and floating bubblers. But in 1948, Seeburg introduced the first jukebox to handle 100 selections, the Select-O-Matic 100.

The number of records that could be played had gone from a couple of dozen records to 50 records, with both sides available for play. Until the introduction of the Select-O-Matic 100, the industry believed that 24 titles were all that were necessary for a selection of “pop” songs.

The other jukebox manufacturers quickly redeveloped their mechanisms to accommodate more records when it became obvious that the customers wanted a wider selection, and by 1956, 200 titles were available in a jukebox.

The Rock-Ola Bubbler.

The expansion in capacity also meant that a wider variety of records could be available. Country and western and rhythm and blues could finally live in the same jukebox with Perry Como, Bing Crosby, Bill Haley and Elvis.

Unquestionably the biggest change to hit the jukebox industry came in 1949, when RCA introduced the 45. Not only did the new records sound better than the 78s, but they were lighter and smaller, and the larger center hole was more suitable for automated operation.

In short, it was the perfect record for a jukebox. The 45 in the jukebox of the 1950s would become a focal point for teenagers and the front-line source of rock and roll.

Until television forced radio to reinvent itself, radio was the mass medium, and with few exceptions had generally ignored blues, country, and other regional or “fringe” music. The jukebox filled this void.

The tabletop jukebox—personalized music from back in the day.

In the 1950s, it was the jukebox where teenagers would find the latest in music. They were doing what Teresa Brewer suggested (“put another nickel in…”), but they were selecting Chuck Berry, whose advice was to go “up to the corner and round the bend, right to the juke joint you go in. Feeling the music from head to toe, round and round, and round you go. Hail, hail, rock and roll! Deliver us from the days of old!”

Teresa didn’t know it, but Chuck was saying her days as a pop artist were numbered, as was the style of recordings she made.

These machines were more than music delivery systems; their external designs were trendsetters in the Art Deco movement and an important aspect of their popularity. They offered the latest music at a time when most of the public could not afford to buy a record, much less their own playback system.

A Wurlitzer magazine ad.

The jukebox was key to the popular spread of country, hillbilly, rhythm and blues, and of course the development of rock and roll music. For a generation, the jukebox at the local hang-out was the only place that some of the “hippest” and latest rock and roll could be heard.

Their significance has declined over the last few decades but in the 1940s through the early 1960s they were an important focus for the young. Rock and roll might have been beaten down by the establishment if it had not been for the existence of jukeboxes in every bar, hamburger drive-in, bowling alley and malt shop where young people congregated.

For some of those who were there, Buddy Holly and Bill Haley will never sound better than when they were first blasting from a Wurlitzer after a nickel dropped in the slot. For those who weren’t there, it’s hard to capture it all, since it wasn’t just the jukebox that held the sound; it was where it was happening in time and place, when teenagers and rock and roll were being invented.

As a 1950s Wurlitzer ad stated, “For millions, the jukebox was ‘America’s favorite nickel’s worth of fun’.”

Tom Lubin is an internationally recognized music producer and engineer, and is a Lifetime Member of the National Academy of Recording Arts and Sciences (Grammy voting member). He also co-founded the San Francisco chapter of NARAS. An accomplished author, his latest book is “Getting Great Sounds: The Microphone Book,” available here.

 

Posted by Keith Clark on 10/03 at 03:01 PM

Australia’s Studio One Flight Up Achieves Fantastic Sound With API 1608

Australian studio installs API 1608.

A self-proclaimed ‘analog-minded person,’ Nick Irving, owner of Studio One Flight Up in Sydney, works with independent singers, songwriters and musicians who are driven to create great records. While the studio is home to several vintage analog outboard gear pieces, Irving wanted a brand-new, reliable analog console to bring together the sound he works to achieve.

Deb Sloss of Studio Connections Australia steered him toward a 32-channel API 1608 console, fitted with twenty-four classic API 550A EQs and eight 560 EQs.

“It’s a dream console, really,” said Irving. “Computers make great multi-track recorders, but for the actual work of recording and mixing sessions, I need to be ‘hands-on.’”

The manageable size of the 1608 played a role in Irving’s decision, allowing it to move with the studio if it eventually grows into a new building. The console also comes in 16-channel sections, allowing it to be expanded or even retrofitted.

“It’s chock full of API’s sensational EQs and mic preamps, silky-smooth faders, nice, bright, clear LED lights in all the buttons, and full metering with the brightly-lit VU meters,” he continued.

Studio One Flight Up uses the 1608 for both recording and mixing needs. “My sounds just get better and better now that I have an API console. Switching the metering between input and direct out is fantastic and very useful. Having buttons to engage the insert points is also a great and useful feature.”

With the 1608, Irving and his crew do not need extra DI boxes, as each channel of the console has an instrument input. “It’s also great that the power supplies are quiet enough that they don’t have to be housed in a separate machine room,” Irving added.

Now that the studio renovations are complete, Irving is recording and collaborating on an album with Grammy award-winner Myles Heskett (ex-Wolfmother).

“It is proving to be lots of fun. We’re really enjoying working on the API console. Everything I’ve done on it sounds fantastic – solid, crisp, clear.” Irving proclaimed, “I’ll never need another console in my lifetime!”

API

Posted by Julie Clark on 10/03 at 10:46 AM

Monday, September 29, 2014

Church Sound: The Value Of Input Sheets

This article is provided by ChurchTechArts.

 
Now that I’m traveling to even more churches than ever, I’ve seen some very creative console layouts. And pretty much everyone looks at me funny when I ask them for an input sheet.

I used input sheets every weekend for over eight years—even though most weeks we could have gotten away without one. But I’m a big fan of consistency, and once I settle on a good way of doing things I like to keep doing it.

Keeping You Organized
As I said, I’ve seen some interesting console layouts. Sometimes, those things happen because it’s the fastest way to get something done, and it just stays that way.

But when you put it on paper, it’s easier to see that having the drums scattered all over the console doesn’t make sense. I also find that putting things on paper is a great way to think through better ways of doing it. Sometimes, we get in such a routine that we don’t even notice there is a better way of accomplishing a task until we write it down. Then it leaps off the paper to us.

I’ve also realized that we have been doing something the hard way for a while, and it’s time to simplify. Again, this comes from writing it down and looking it over.

Spot Problems Ahead Of Time
Ever show up for a weekend service and find you are short a few vocal microphones? Or perhaps you don’t have enough direct boxes (DIs) to cover all the keyboards and guitars. Or maybe you’re just out of channels on the console.

These issues are a lot easier to solve on Tuesday than they are on Sunday morning. Making up an input sheet earlier in the week will head those issues off at the pass. Even if your set up is relatively stable week to week, it’s still nice to know that you have what you need.

Better Communication
When you have an input sheet, you can hand a copy to someone on your team and they know how to set up the stage. Everyone knows what plugs into what. I figured I could either spend my set up time answering questions from my guys on where to plug things in, or empower them to do it themselves. I always prefer the latter.

Help With Troubleshooting
Ever been working your way through sound check only to find you have no signal from the acoustic guitar? After checking the tuner, we tend to start imagining all kinds of exotic problems it might be.

But before doing that, make sure it’s plugged into the right input. An input sheet will help you verify that you’re plugged into the right snake, sub-snake or stage input, and patched into the right channel on the board. Instead of tracing wires, you can quickly verify patching. Often, that solves the problem.

I really can’t find any downside to using an input sheet each week. They only take a few minutes to make and often save a lot of time during the weekend. Next time, I’ll give you a few examples of input sheets so you have some ideas for creating your own.

Examples
The easier we can make the mixing process for our team, the more successful they can be. We already know mixing is hard, but let’s not make it harder with poor organization.

Here I’m going to show you a few examples of input sheets so you have some ideas on what information to include and how to organize it. This first example is one of the earliest I did. Looking back on it, I already see some issues that I would address today. But it served its purpose back then, and was a huge improvement over what we had (which was nothing).

Basic Sheet


This input sheet could be divided into three groups of information. The first three columns provide the patching information. Here, you find the board channel, the stage input and any sub-snake assignment.

The next three columns provide application information: what type of input it is, who will be using it, and any special notes to be aware of. Finally, we see routing, monitoring and bussing information, along with a note on phantom power.
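To make this concrete, here are a few hypothetical rows in that basic format (the channels, names, and assignments are invented for illustration, not taken from an actual sheet):

Ch  Stage In  Sub-Snake  Type         User     Notes       Routing     +48V
1   A1        Drums 1    Kick mic     Drums    Gate        Mono bus    Off
2   A2        Drums 2    Snare mic    Drums                Mono bus    Off
9   B1        --         Acoustic DI  Guitar   Active DI   Stereo bus  On
10  B2        --         Lead vocal   Worship  Wireless    Stereo bus  Off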

Armed with nothing but this input sheet (and a stage plot), my volunteer set up crew could completely wire the stage for me during the week, and I could quickly verify it on rehearsal night.

Looking back on it, I would change some things if I were doing it today. I would rearrange the console to follow a more conventional layout, and would color code more. But at the time, it worked well. Equipment-wise, we were using a 32-channel analog console and an Aviom system for monitoring.

Intermediate Sheet


This next sheet was developed by a friend of mine, Tyler Kanishero. He’s using a Yamaha M7 console with a couple of cards, and did a great job of putting all the information you’d need on a single sheet.

On the left side, you see all 48 input channels on the console and what plugs into them. Inputs are direct, stage and cards. In the middle, you have the mix and matrix assignments. On the right, the Aviom and output assigns are clearly listed.

This example goes into more detail, but still keeps the information clear and easy to find. About the only thing I would change on this is to add color. As you can see, he has the same information I had in my basic example, but it’s organized differently.

Like a console setup, it doesn’t matter so much how you do it, as long as it makes sense in your context. Of course, there are advantages to doing things similarly to industry standards. But make sure it works for you.

Advanced Sheet


This is the sheet I developed jointly with Isaiah Franco. I started it, he did a lot of work on it, then I tweaked it some more after he left. We use a lot of cool Numbers features for drop-down menus, and a ton of if-then statements to auto-fill much of the content.

This sheet is four pages long and presents the information in a few ways. The first two pages are for the stage team. They get all the information for patching and set up through the patch list and stage diagram. All of the wireless mic and IEM assignments are also clearly spelled out.

The second two pages are for the front of house engineer. In reality, most of that info was already dealt with in the baseline show file, but it’s good to know what is there.

This one was tweaked and massaged over five years, and I’m pretty happy with it. It’s overkill for many situations, however. If you have a smaller set up, you don’t likely need this much information. However, there are principles that should be useful.

Remember, it’s less important how you do the input sheet, and more important that you do it. Figure out what works for you and start. You’ll be glad you did.

Here (below) are the sheets in PDF. Everyone is going to ask for the originals; I don’t have all of them, so just build them yourself in Excel or Numbers. It’s good practice.

Intermediate_Input_Sheet.pdf

Advanced_Input_Sheet.pdf

Mike Sessler now works with Visioneering, where he helps churches improve their AVL systems, and encourages and trains the technical artists that run them. He has been involved in live production for over 25 years and is the author of the blog Church Tech Arts.

 

Posted by Keith Clark on 09/29 at 01:56 PM

Friday, September 26, 2014

Wide Hive Studios Reopens With Expanded 48-Channel API 1608

Expanded 1608 unifies studio’s sound, streamlines workflow, and allows clients to tap into huge collection of outboard gear with ease

Wide Hive Studios in the San Francisco Bay Area re-opened its doors earlier this year after a number of recent renovations and updates.

A decade ago, veteran producer, engineer, and musician Gregory Howe traded his costly studio in San Francisco for a cozy, less expensive space in nearby Albany, CA. He centered the studio around a 16-channel API 1608, which he later expanded to 32 channels through a 16-channel expander.

Now, with his most recent round of upgrades, Howe has added another 16-channel expander to create a 48-channel API 1608.

“I’ve been working with API gear for a long, long time,” Howe says. “I love the API sound. To me, it walks the perfect line between cleanliness, straight-up rock, and audiophile fidelity.”

The recently-expanded 48-channel 1608 unifies the studio’s sound, streamlines its workflow, and also allows clients to tap into Howe’s huge collection of outboard gear with ease. “The new 16 channels primarily serve as returns from the equipment racks. We now have the flexibility and sound to do whatever we want,” he says.

Wide Hive books jazz, funk, hip-hop, and soul artists exclusively. Since the console’s expansion, Howe has used it to record several tracks, which are receiving airplay, brisk sales, and profuse blessings from critics. Of note, swing jazz guitarist Calvin Keys cut Electric Keys with the help of the 1608 and the Wide Hive Players, an in-house collective of jazz musicians.

Reviews of Wide Hive releases often comment on their excellent sound quality. “I’m a huge believer in analog summing,” he notes. “Digital summing involves a massive calculation that necessitates sacrifices. I can hear those sacrifices in the music. I’m looking forward to the cohesion we’ll have when the whole console is API.”

API

Posted by Keith Clark on 09/26 at 01:06 PM

Wednesday, September 24, 2014

AES Unveiling Inaugural “Raw Tracks” Series At 137th Convention

New series will offer an intimate, behind-the-scenes look at the making of various recordings by influential artists spanning decades and genres.

The Audio Engineering Society (AES) is pleased to debut their new “Raw Tracks” series as part of this year’s Recording and Production Track at the upcoming 137th Audio Engineering Society Convention, taking place October 9-12, 2014, at the Los Angeles Convention Center.

This new series features top-name producers and engineers who will discuss and deconstruct influential, classic recordings from some of music’s most highly regarded artists. Attendees can learn firsthand about details of the sessions — the gear that was used, recording techniques, and other insightful production information.

Sessions in the new series include:

Recording & Production: RP1 - Raw Tracks: Fleetwood Mac — A Master Class presented by Ken Caillat about the recording of a classic song from the hit album, Rumours.

Recording & Production: RP2 - Raw Tracks: David Bowie — A track-by-track Master Class featuring a classic David Bowie recording, presented by Ken Scott.

Recording & Production: RP3 - Raw Tracks: Pet Sounds — A Master Class by three-time GRAMMY®-winner Mark Linett about two songs – “Wouldn’t It Be Nice” and “God Only Knows” – from the Beach Boys’ seminal album Pet Sounds.

Recording & Production: RP7 - Raw Tracks: Red Hot Chili Peppers — A Master Class featuring Andrew Scheps that explores the song “Pink as Floyd.”

“We are pleased to be able to offer this new ‘behind-the-scenes’ track featuring the names behind the hits. Attendees are in for a truly unique experience – when it comes to learning about how the biggest hits came to life in the studio, there’s nothing like hearing it directly from those who were there,” said Bob Moses, AES Executive Director.

For further information about the Raw Tracks series, FREE Exhibits-Plus badges (pre-registration required), premium All Access badges, Hotel and Technical Program information, and more visit the AES website.

Audio Engineering Society

Posted by Julie Clark on 09/24 at 10:50 AM

Friday, September 19, 2014

Allen & Heath ZED Mixer Installed In Indian Medical Institute

Audio Cratz installs new Allen & Heath ZED console in new interactive seminar suite at SAIMS.

The Sri Aurobindo Institute of Medical Sciences (SAIMS) situated at Indore, Madhya Pradesh, India, recently installed an Allen & Heath ZED-420 4 bus analog USB mixer in its new state-of-the-art interactive seminar suite.

SAIMS provides a wide range of clinical diagnostic facilities and manages major surgical procedures including Open Heart surgery, Neurology & Neurosurgery, and Knee & Hip Joint replacement. The Institute’s building is also a teaching facility, housing all clinical and non-clinical departments, lecture theatres, fully-equipped labs, dissection halls, a museum, and a library.

Systems integrator Audio Cratz was appointed to fulfill the AV requirements for the new centre, specifying Allen & Heath’s ZED-420 in the control room to manage the live audio requirements and also capture a stereo recording and video feed of the operations.

The latest addition to the building is the 60-capacity LASER interactive seminar suite, where live operations can be streamed from up to 3 theatres simultaneously. Doctors demonstrate using endoscopic and conference cameras, and students are able to ask questions during the operation.

“Allen & Heath has always designed great quality audio products and we wanted the best for the new centre, which is one of the top seminar facilities in India. We have successfully conducted a number of interactive sessions following the installation, and the system is proving to be very successful,” concluded Dr. Vinod Bhandari, chairman, SAIMS.

Allen & Heath
Sun Infonet

 

Posted by Julie Clark on 09/19 at 11:06 AM

Thursday, September 18, 2014

API 1608 Console Powerful Solution For Chile’s CHT Estudios

CHT Estudios adds 1608 console to handle expanding business.

With sound quality a top priority, mix and mastering engineer Gonzalo González E. wanted to expand his studio offerings while maintaining the high quality of recording to which his clients are accustomed.

With the needs of his growing studio in mind, González was advised by 57 Pro Audio, API’s Chilean distributor, to add a standard 1608 console.

The 1608 is a perfect match for the size of CHT Estudios, as well as for both the independent and larger international label clients it works with.

“We usually record bands of rock, pop, hip-hop, reggae, and folk music of Chile,” said González. “Many people in Chile receive this new console as good news for the recording industry.”

Along with the 1608, CHT Estudios has a pair of 3124 preamps, which González says he uses on everything. He also uses a 527 compressor in his API lunchbox.

“With the 1608, we can now use the preamps and EQ and send to record in the DAW at the same time. We can also use the faders of the same channels to listen to tracks from the DAW, which is very useful.

“My favorite feature is the ability to use the EQs as an insert for gear outside the console. We can also sum thirty-two channels from the DAW using sixteen channels on faders, with eight stereo returns and eight program bus ins.”

Coming up in the next few months, the studio will continue recording and mixing for an array of recognized artists.

“All of the features make the console a powerful solution for a studio like ours,” said González. “It is a good inspiration for us, and for everybody who likes music with a very good quality of recording.”

API Audio

Posted by Julie Clark on 09/18 at 01:04 PM

Monday, September 15, 2014

Audient At Heart Of New Studio For Film & TV Star

Actor Michael Chiklis installs Audient ASP8024 console in his home studio.

New Audient ASP8024 console-owner Michael Chiklis has spent the last few weeks getting acquainted with his 24-channel Audient desk before hot-footing off to New Orleans to start shooting the television series, “American Horror Story: Freak Show”.

“I recorded the first tracks through the ASP8024,” enthuses Chiklis, who is better known for his acting career than for his high-spec Los Angeles home studio. “I recorded a cover of a Police song just to test the system and it really went beautifully.”

“Very basic bass, drums, guitar and vocal tracks sounded warm and clean through these mic pres,” he continues. “In fact, I’m very impressed just how good everything sounds with no compression or effects, just solid musicianship captured with Bob Heil mics through the Audient pres into ProTools 11—gorgeous.

“I’m very excited to start recording originals now. We have some soundproofing and trapping tweaks to finish, but we’re essentially ready to go at this point and now it’s all about the music.”

Chiklis grew up in a musical family and is himself an accomplished drummer and vocalist, and also plays guitar and bass with his band MCB.

“I am over the moon with this console! I even engineered one of the songs myself,” he confesses. “I am definitely not an engineer, but between the intuitive layout of the board and the relatively self-explanatory new ProTools 11 HD, I was able to do it - the tracks sound amazing!”

The British-made desk in Chiklis’s studio comes complete with a ‘Dual Layer Control’ module, allowing the console to operate in both the analog and digital domains. Even with no added compression or plugins, the preamps deliver a warm, lush sound.

“The three engineers that I work with can’t wait to really put the console through its paces. They seem to all be very impressed with the clean, simplicity of its layout as well as its warm analogue sound,” adds Chiklis. “I have a number of incredible musicians waiting in the wings to lay down tracks as soon as we are up and running.”

Audient

 

Posted by Julie Clark on 09/15 at 02:38 PM

Drawmer Introduces 1973 FET Stereo Compressor, Distributed In U.S. By TransAudio Group

Includes three independent compressor sections with two variable-frequency 6 dB/octave crossovers to separate them into low, middle, and high frequency sections

Ivor Drawmer, maker of analog—and now digital—signal processing equipment, drew on his 30-plus years behind the soldering iron to create the Drawmer 1973, a new 3-band FET stereo compressor. It is being distributed in the U.S. by TransAudio Group, available now at a price of $1,825.

The Drawmer 1973 includes three independent compressor sections with two variable-frequency 6 dB/octave crossovers to separate them into low, middle, and high frequency compression sections.

Each section contains familiar threshold, gain, attack, and release controls, along with gain-reduction metering. Moreover, each section can be independently muted or bypassed for confusion-free setup and monitoring.

The low section possesses a “Big” switch for enhanced low-end, whereas the high section possesses an “Air” switch for enhanced high-end.

The three sections are recombined to form the “wet” signal, which can be mixed to variable degree with the dry signal for easy parallel compression. Illuminated VU meters make monitoring compression and output intuitive and, yes, fun.
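For those curious about what that signal flow looks like in practice, here is a minimal Python sketch of generic three-band compression with a wet/dry blend. It is a simplified illustration of the concept only (a static gain law and textbook filters); it does not model the 1973’s FET circuitry, and all frequencies and settings are invented for the example:

import numpy as np
from scipy.signal import butter, sosfilt

def split_3band(x, fs, f_low=200.0, f_high=3000.0):
    # Two crossover points divide the signal into low, mid, and high bands,
    # loosely echoing the 1973's two variable-frequency 6 dB/octave crossovers.
    low = sosfilt(butter(1, f_low, 'lowpass', fs=fs, output='sos'), x)
    high = sosfilt(butter(1, f_high, 'highpass', fs=fs, output='sos'), x)
    mid = x - low - high  # what remains between the two splits
    return low, mid, high

def compress(band, threshold_db=-20.0, ratio=4.0):
    # Crude static gain computer; a real FET compressor also has
    # attack and release time constants.
    level_db = 20.0 * np.log10(np.maximum(np.abs(band), 1e-9))
    over = np.maximum(level_db - threshold_db, 0.0)
    return band * 10.0 ** (-over * (1.0 - 1.0 / ratio) / 20.0)

def multiband_compress(x, fs, mix=0.7):
    # Compress each band, recombine to form the "wet" signal, then
    # blend with the dry input for parallel compression.
    low, mid, high = split_3band(x, fs)
    wet = compress(low) + compress(mid) + compress(high)
    return mix * wet + (1.0 - mix) * x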

“Certainly, the Drawmer 1973 owes some of its sound and functionality to Ivor’s experience designing the classic Drawmer 1960 and 1968 compressors, as well as to the Drawmer S3 signature series multiband tube compressor,” says Brad Lunde, president of TransAudio Group. “But it also has a sound and operation all its own. It’s capable of solving problems single-band compressors simply cannot, such as compressing only the low end, raising its average level relative to everything else, and giving your mix a bit more bass without changing the overall level.

“It has a sound quality that cannot be matched by other analog processors, never mind plug-ins,” he continues. “It will be popular among mixers and EDM mixers alike. The 1973’s layout is impressive. Unlike most other multiband compressors, the 1973’s controls are easy to understand at a glance and work to inspire creative use.

“The real news here may be the 1973’s affordable price. Those in need of stereo multiband compression with Drawmer’s quality can have it for the cost of Drawmer’s famous single-band stereo tube compressor, the 1968.”


 

TransAudio Group

Posted by Keith Clark on 09/15 at 11:31 AM

Thursday, September 11, 2014

Keep It Cool: Three Rack Ventilation Methods

This article is provided by Commercial Integrator

 
It’s a truism that almost nothing is 100 percent efficient; a measure of the inefficiency of most devices we deal with is how much heat they produce.

Heat is energy that has been lost for one reason or another and is not available to do the task at hand, whether that is moving our car along the road, moving a loudspeaker’s cone to produce sound, or moving large quantities of 1’s and 0’s around at very high speeds.

Heat not properly dealt with in our AV or IT systems can cause problems.

Digital electronics — be they satellite receivers, DVD players, codecs, or computers — may “lock up” and become unresponsive when overheated. Analog components appear to be more heat-tolerant, but in reality electrolytic capacitors are drying out and thinner-than-hair wires inside integrated circuits and transistors are being subjected to repeated thermal cycles of excessive expansion and contraction, leading to premature failure.

Modern AV and IT systems consist of various electronic components frequently mounted in racks, which may themselves be freestanding or in closets or other enclosures. Each electronic component in the system will generate some heat, and the systems designer and end user can ignore this at their peril.

The trivial case, in which a few low-powered devices are mounted in a skeletal rack frame, in the open, in conditioned space, can safely be ignored. But such systems are few and far between today. More typical is the rack containing many power-hungry devices, all mounted in a rack either shrouded by side and back panels or located in a closet, millwork — or both.

In these cases, ignorance of likely damage from heat will be far from blissful. Overheated components will express their displeasure in any number of ways, from sub-par performance to catastrophic failure.

There are several ways to reduce the temperature within a rack. One is passive thermal management: allowing natural convection currents to let heated air rise and exit at the top of the rack while cooler air enters through an opening at a low point.

Convection, while ‘free,’ is a very weak force. It is dependent on the small difference in density of hot and cold air, which is why a hot air balloon is huge, yet capable of lifting only light loads.

Convection currents are easily blocked or disrupted should a vent be even partially obscured. Heat loads today, given the increasing use of digital devices and the tendency to install more equipment in smaller racks and enclosures, are too often beyond the ability of convection to even approach the necessary level of heat removal.

Another way to cool a rack is through air conditioning, or active refrigeration. Air conditioning systems, properly sized and installed, let us set rack temperatures as low as we want; the only caveats being that we don’t cool below the dew point and condense moisture on our equipment, or raise our energy bill to unacceptable levels.

While expensive to buy, install, and operate, air conditioning systems that are dedicated to electronic systems may be the only practical solution when heat loads are large.

Be aware when the air conditioning system is shared with people, as when the supply and/or return ducts are an extension of an HVAC system that also serves the building and its occupants.

The danger is that the thermostat may turn the system off when the occupants are comfortable or keep it from running at all in the cooler parts of the year, while the electronics are still generating the same amount of heat.

There is also the extreme situation of HVAC systems installed in temperate areas. They can become the building’s heating system in cold weather.

If these potential problems can be avoided, dedicated air conditioning is an effective cooling technique, and in some cases the only practical solution to avoid damage by overheating.

Guidelines are not complicated; cool air should be delivered via a supply point high and in front of the rack, while the return for heated exhaust air should be located high and behind the rack.

Of the many types of analog and digital equipment being installed today, almost all fan-equipped components draw cooling air in through their front panels and exhaust it to the rear. The arrangement described allows a “waterfall” of cold air to fall in front of the rack where it can be pulled in, while a high-mounted exhaust fan in the top of the rack, or high on its rear panel, pulls heated air out into the return duct.

Integrators can accommodate those components without internal fans by placing passive vent panels below them. If the exhaust fan has been properly sized, it will pull conditioned air in. In some cases, it may be necessary to use one or more small fans inside the rack to prevent pockets of stagnant heated air from accumulating.

If the building’s HVAC system can accommodate the extra heat load, it may only be necessary to use the third rack cooling technique. This will provide active thermal management using only strategically-located fans, eliminating the cost and complexity of refrigeration.

Moving the necessary number of cubic feet of air through a rack every minute can be accomplished using ventilation systems available on the market. For freestanding racks, it is a matter of pulling heated air out from the top of the rack and replacing it with cool room air entering at the bottom (we have made the assumption that the rack is in a conditioned space, and that the building’s HVAC system can deal with the heat generated in the rack).
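A rough rule of thumb ties fan sizing to the heat load: at sea level, the required airflow in cubic feet per minute is about 3.16 times the heat load in watts divided by the allowable temperature rise in degrees Fahrenheit. A short Python sketch, using an invented 800-watt rack as the example:

def required_cfm(heat_watts, rise_f=20.0):
    # Sea-level rule of thumb: CFM = 3.16 * watts / temperature rise (deg F).
    # Move more air than this to allow for filters, grille losses, and altitude.
    return 3.16 * heat_watts / rise_f

print(required_cfm(800.0))  # an 800 W rack held to a 20 F rise needs ~126 CFM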

Fan systems are available which can be mounted near or at the top of the rack. They draw heated air up from below and discharge it through their front panel into the room. Other systems discharge the heated air straight up through the top of the rack.

Neither of these systems is effective when the rack itself is enclosed in a closet or millwork. In this case, we must first get the hot air out of the rack, and then get the hot air out of the closet. Systems are available that perform both functions; they pull air up from lower parts of a rack, then move it through flexible tubing to an area outside the closet.

Better ventilation systems represent a trade-off between moving air and generating noise. When the system is in a remote equipment room, noise is not an issue; when it’s in the board room, noise from fan motors and air movement becomes bothersome. Consulting with cooling system makers’ technical personnel is a great help during the design process.

Frank Federman is CEO of Active Thermal Management.

Go to Commercial Integrator for more content on A/V, installed and commercial systems. And, read an extended version of this article at CorporateTechDecisions.com.

Posted by Keith Clark on 09/11 at 01:37 PM

Monday, September 08, 2014

Large-Format Audient ASP8024 Console Chosen For Studio des Bruères In France

As well as the attractive feature set, the large-format desk creates a visual impact in the studio

An Audient ASP8024 console was the first choice for Jean-Christian Maas, owner of Studio des Bruères, a highly specified studio situated in a tranquil yet accessible location in Poitiers, France.

Dedicated to the production—and co-production—of artists, Maas offers a different service to that of standard commercial studios.

“We prefer to work on projects that we record and mix, so that we can maximize the synergy and efficiency,” he explains. “The bulk of our work comes by word-of-mouth and is mostly jazz and classical music, but more recently we have also had some pop/rock.

“The Audient ASP8024 console is exactly what we were looking for because it suits the way we work in both the analog and digital domains,” Maas continues. “We use a wide range of software and hardware outboard and therefore were after a very transparent console to preserve signal integrity. Its transparency also allows us to use it as a summing unit (with real panoramic faders) and greatly contributes to the overall sound of the final mix.

“The preamps have excellent dynamics and the EQ strips are more than correct. The routing is very well thought out too,” he concludes.

As well as the attractive feature set, the large-format desk—supplied by Funky Junk France—creates a visual impact in the studio. Comprising 100 square meters of analog consoles and outboard, vintage instruments and an array of digital tools, Studio des Bruères is perhaps best described as a venue created by musicians for musicians.

Audient

Posted by Keith Clark on 09/08 at 08:08 AM

Tuesday, September 02, 2014

Second Edition Of “Small Signal Audio Design” By Douglas Self Now Available From Focal Press

Provides an extensive repertoire of circuits that can be put together to make almost any type of audio system.

Focal Press has just released the second edition of Small Signal Audio Design by Douglas Self, providing ample coverage of preamplifiers and mixers, as well as a new chapter on headphone amplifiers. The handbook provides an extensive repertoire of circuits that can be put together to make almost any type of audio system.

Essential points of theory that bear on practical performance are lucidly and thoroughly explained, with the mathematics kept to a relative minimum. Self’s background in design for manufacture means that he keeps a wary eye on the cost of things. The book also includes a chapter on power supplies, full of practical ways to keep both the ripple and the cost down, showing how to power everything.

The book also teaches how to:

—Make amplifiers with apparently impossibly low noise

—Design discrete circuitry that can handle enormous signals with vanishingly low distortion

—Use humble low-gain transistors to make an amplifier with an input impedance of more than 50 Megohms

—Transform the performance of low-cost op-amps

—Make filters with very low noise and distortion

—Make incredibly accurate volume controls

—Make a huge variety of audio equalizers

—Make magnetic cartridge preamplifiers that have noise so low it is limited by basic physics

—Sum, switch, clip, compress, and route audio signals

The second edition is expanded throughout (with added information on new ADCs and DACs, microcontrollers, more coverage of discrete op amp design, and many other topics), and includes a completely new chapter on headphone amplifiers.

Author Douglas Self studied engineering at Cambridge University, then psychoacoustics at Sussex University. He has spent many years working at the top level of design in both the professional audio and hi-fi industries, and has taken out a number of patents in the field of audio technology. He currently acts as a consultant engineer in the field of audio design.

Find out more and get Small Signal Audio Design, 2nd Edition here.

Focal Press

Posted by Keith Clark on 09/02 at 12:36 PM

Tuesday, August 26, 2014

Transform Your Mind: Chapter 6 Of White Paper Series On Transformers Now Available

Chapter 6, the final installment of PSW’s ongoing free white paper series on transformers, is now available for free download. (Get it here.)

The new installment, entitled “Exploring The Electrical Characteristics Of Audio Transformers,” explores the basic electrical characteristics of audio transformers to better understand the differences among various types, and why one transformer is better for a given application than another.

The white paper series is presented by Lundahl, a world leader in the design and production of transformers, and is authored by Ken DeLoria, senior technical editor of ProSoundWeb and Live Sound International, who has mixed innumerable shows and tuned hundreds of sound systems, and as the founder of Apogee Sound, developed the TEC Award-winning AE-9 loudspeaker.

Note that Chapter 1: An Introduction to Transformers in Audio Devices, Chapter 2: Transformers–Insurance Against Show-Stopping Problems, Chapter 3: Anatomy Of A Transformer, Chapter 4: An Interview With Per Lundahl, and Chapter 5: Understanding Impedance In Audio Transformers, are also still available for free download.

Again, download your free copy of “Chapter 6: Exploring The Electrical Characteristics Of Audio Transformers”—and get the entire set—by going here.

Lundahl

Posted by Keith Clark on 08/26 at 06:47 AM

Monday, August 18, 2014

CADAC CDC four Digital Console Fronts Large-Scale System For Celebration At Iconic Ibiza Club

Console heads up large-scale system incorporating Funktion-One loudspeakers powered by Full Fat Audio amps with XTA processing

Iconic Ibiza club Space recently celebrated its 25th anniversary with a birthday bash centered on its outdoor Flight Club arena, with regular Ibiza sound specialists Project Audio providing a CADAC CDC four compact digital console in front of a large-scale system incorporating Funktion-One loudspeakers powered by Full Fat Audio amps with XTA processing.

The club’s earlier 2014 season “Opening Fiesta” in late May saw the Funktion-One rig fronted with a CADAC LIVE1 analog console. Ibiza, noted for its legendary nightlife, is the third largest of the Balearic Islands in the Mediterranean Sea, 50 miles off the coast of the city of Valencia, in eastern Spain.

Dave Millard, founder of Full Fat Audio, was Project Audio’s sound engineer for both events at Space, working in partnership with Funktion-One chief Tony Andrews and Project Audio’s Ibiza system technician George Yankov.

“We used the LIVE1 on the opening party, but for the 25th Anniversary we needed to wireless mic a troupe of flamenco dancers on stage and use some effects on them, so we went with the CDC four for that,” says Yankov. “It was a real pleasure to use both the analog LIVE1 and digital CDC four. The audio performance of both consoles is equally excellent. I cannot recall another desk so transparent and with so much drive and finesse.

“The CDC four really allows the audio to breathe and just does not sound digital at all,” he continues. “Every nuance of a recording or live input can be heard, with even subtle changes to the controls. Bass performance is exciting with every note precise. Build quality is also first class and user interaction is straightforward.”

The 25th anniversary party featured a line-up of Playa d’en Bossa regulars and legends, including Nina Kraviz, Carl Craig, Jimmy Edgar, Shaun Reeves, Layo and Bushwacka!, Alfredo, José Padilla, Jose De Divina and César De Melero, a four-hour set from Erick Morillo, and ‘cameo’ spots from Fatboy Slim and Annie Mac.

CADAC

Posted by Keith Clark on 08/18 at 10:29 AM

Thursday, August 14, 2014

Digital Dharma: A/D Conversion And What It Means In Audio Terms

This article is provided by Rane Corporation.

 
Like everything else in the world, the audio industry has been radically and irrevocably changed by the digital revolution. No one has been spared.

Arguments will ensue forever about whether the true nature of the real world is analog or digital; whether the fundamental essence, or dharma, of life is continuous (analog) or exists in tiny little chunks (digital). Seek not that answer here.

Rather, let’s look at the dharma (essential function) of audio analog-to-digital (A/D) conversion.

It’s important at the onset of exploring digital audio to understand that once a waveform has been converted into digital format, nothing can inadvertently occur to change its sonic properties. While it remains in the digital domain, it’s only a series of digital words, representing numbers.

Aside from the gross example of having the digital processing actually fail and cause a word to be lost or corrupted into nonsense, nothing can change the sound of the word. It’s just a bunch of “ones” and “zeroes.” There are no “one-halves” or “three-quarters.”

The point is that sonically, it begins and ends with the conversion process. Nothing is more important to digital audio than data conversion. Everything in-between is just arithmetic and waiting. That’s why there is such a big to-do with data conversion. It really is that important. Everything else quite literally is just details.

We could even go so far as to say that data conversion is the art of digital audio while everything else is the science, in that it is data conversion that ultimately determines whether or not the original sound is preserved (and this comment certainly does not negate the enormous and exacting science involved in truly excellent data conversion).

Because analog signals continuously vary between an infinite number of states while computers can only handle two, the signals must be converted into binary digital words before the computer can work. Each digital word represents the value of the signal at one precise point in time. Today’s common word lengths are 16-bits, 24-bits and 32-bits. Once converted into digital words, the information may be stored, transmitted, or operated upon within the computer.

In order to properly explore the critical interface between the analog and digital worlds, it’s first necessary to review a few fundamentals and a little history.

Binary & Decimal
Whenever we speak of “digital,” by inference, we speak of computers (throughout this paper the term “computer” is used to represent any digital-based piece of audio equipment).

And computers in their heart of hearts are really quite simple. They can only understand the most basic form of communication or information: yes/no, on/off, open/closed, here/gone, all of which can be symbolically represented by two things - any two things.

Two letters, two numbers, two colors, two tones, two temperatures, two charges… It doesn’t matter - unless you have to build something that must recognize these two states; then it matters.

So, to keep it simple, we choose two numbers: one and zero, or, a “1” and a “0.”

Officially this is known as binary representation, from Latin bini—two by two. In mathematics this is a base-2 number system, as opposed to our decimal (from Latin decima, a tenth part or tithe) number system, which is called base-10 because we use the ten numbers 0-9.

In binary we use only the numbers 0 and 1. “0” is a good symbol for no, off, closed, gone, etc., and “1” is easy to understand as meaning yes, on, open, here, etc. In electronics it’s easy to determine whether a circuit is open or closed, conducting or not conducting, has voltage or doesn’t have voltage.

Thus the binary number system found use in the very first computer, and nothing has changed today. Computers just got faster and smaller and cheaper, with memory size becoming incomprehensibly large in an incomprehensibly small space.

One problem with using binary numbers is they become big and unwieldy in a hurry. For instance, it takes six digits to express my age in binary, but only two in decimal. But in binary, we’d better not call them “digits,” since “digits” implies a human finger or toe, of which there are 10, so confusion reigns.

To get around that problem, John Tukey of Bell Laboratories dubbed the basic unit of information (as defined by Shannon—more on him later) a binary unit, or “binary digit” which became abbreviated to “bit.” A bit is the simplest possible message representing one of two states. So, I’m six-bits old. Well, not quite. But it takes 6-bits to express my age as 110111.

Let’s see how that works. I’m 55 years old. So in base-10 symbols that is “55,” which stands for five 1’s plus five 10’s. You may not have ever thought about it, but each digit in our everyday numbers represents an additional power of 10, beginning with 0.

Figure 1: Number representation systems.

That is, the first digit represents the number of 1’s (10^0), the second digit represents the number of 10’s (10^1), the third digit represents the number of 100’s (10^2), and so on. We can represent any size number by using this shorthand notation.

Binary number representation is just the same except substituting the powers of 2 for the powers of 10 [any base number system is represented in this manner].

Therefore (moving from right to left) each succeeding bit represents 2^0 = 1, 2^1 = 2, 2^2 = 4, 2^3 = 8, 2^4 = 16, 2^5 = 32, etc. Thus, my age breaks down as 1-1, 1-2, 1-4, 0-8, 1-16, and 1-32, represented as “110111,” which is 32+16+0+4+2+1 = 55. Or double-nickel to you cool cats.

Figure 1, above, shows the two examples.
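
For readers who want to poke at this, here is a minimal Python sketch (my addition, not part of the original article) that expands a number into its binary bits exactly this way:

def to_binary(value):
    bits = []
    while value > 0:
        bits.append(value % 2)    # remainder is the next bit (1s, 2s, 4s, ...)
        value //= 2
    return "".join(str(b) for b in reversed(bits)) or "0"

print(to_binary(55))    # -> "110111", i.e., 32 + 16 + 0 + 4 + 2 + 1 = 55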

Building Blocks
The French mathematician Fourier unknowingly laid the groundwork for A/D conversion in the early 19th century.

All data conversion techniques rely on looking at, or sampling, the input signal at regular intervals and creating a digital word that represents the value of the analog signal at that precise moment. The fact that we know this works lies with Nyquist.

Harry Nyquist discovered it while working at Bell Laboratories in the late 1920s and wrote a landmark paper describing the criteria for what we know today as sampled data systems.

Nyquist taught us that for periodic functions, if you sampled at a rate that was at least twice as fast as the signal of interest, then no information (data) would be lost upon reconstruction.

And since Fourier had already shown that all alternating signals are made up of nothing more than a sum of harmonically related sine and cosine waves, audio signals are periodic functions and can be sampled without loss of information following Nyquist’s instructions.

This became known as the Nyquist Frequency, which is the highest frequency that may be accurately sampled, and is one-half of the sampling frequency.

For example, the theoretical Nyquist frequency for the audio CD (compact disc) system is 22.05 kHz, equaling one-half of the standardized sampling frequency of 44.1 kHz.

As powerful as Nyquist’s discoveries were, they were not without their dark side, with the biggest being aliasing frequencies. Following the Nyquist criteria (as it is now called) guarantees that no information will be lost; it does not, however, guarantee that no information will be gained.

Although by no means obvious, the act of sampling an analog signal at precise time intervals is an act of multiplying the input signal by the sampling pulses. This introduces the possibility of generating “false” signals indistinguishable from the original. In other words, given a set of sampled values, we cannot relate them specifically to one unique signal.

Figure 2: Aliasing frequencies.

As Figure 2 shows, the same set of samples could have resulted from any of the three waveforms shown - and, indeed, from all possible sum and difference frequencies between the sampling frequency and the one being sampled.

All such false waveforms that fit the sample data are called “aliases.” In audio, these frequencies show up mostly as intermodulation distortion products, and they arise from the random-like white noise, or any sort of ultrasonic signal, present in every electronic system.
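
A quick numerical sketch in Python (my illustration; the frequencies are chosen only for convenience) shows just how indistinguishable an alias can be - a 1 kHz tone and a 45.1 kHz tone, sampled at 44.1 kHz, produce the very same sample values:

import math

fs = 44100.0    # sampling frequency, Hz
f1 = 1000.0     # an audible 1 kHz tone
f2 = fs + f1    # a 45.1 kHz ultrasonic tone: one of its aliases

for n in range(5):    # compare the first few sampling instants
    t = n / fs
    print(f"n={n}: {math.sin(2*math.pi*f1*t):+.6f}  {math.sin(2*math.pi*f2*t):+.6f}")

Given only the samples, there is no way to tell which of the two tones was present at the input.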

Solving the problem of aliasing frequencies is what improved audio conversion systems to today’s level of sophistication. And it was Claude Shannon who pointed the way. Shannon is recognized as the father of information theory: while a young engineer at Bell Laboratories in 1948, he defined an entirely new field of science.

Even before then his genius shone through: while still a 22-year-old student at MIT, he showed in his master’s thesis how the algebra invented by the British mathematician George Boole in the mid-1800s could be applied to electronic circuits. Since that time, Boolean Algebra has been the rock of digital logic and computer design.

Another Solution
Shannon studied Nyquist’s work closely and came up with a deceptively simple addition. He observed (and proved) that if you restrict the input signal’s bandwidth to less than one-half the sampling frequency then no errors due to aliasing are possible.

So bandlimiting your input to no more than one-half the sampling frequency guarantees no aliasing. Cool…Only it’s not possible. In order to satisfy the Shannon limit (as it is called - Harry gets a “criteria” and Claude gets a “limit”) you must have the proverbial brick-wall, i.e., infinite-slope filter.

Well, this isn’t going to happen, not in this universe. You cannot guarantee that there is absolutely no signal (or noise) greater than the Nyquist frequency.

Fortunately there is a way around this problem. In fact, you go all the way around the problem and look at it from another direction.

If you cannot restrict the input bandwidth so aliasing does not occur, then solve the problem another way: Increase the sampling frequency until the aliasing products that do occur do so at ultrasonic frequencies, where they are effectively dealt with by a simple single-pole filter.

This is where the term “oversampling” comes in. For full spectrum audio the minimum sampling frequency must be 40 kHz, giving you a usable theoretical bandwidth of 20 kHz - the limit of normal human hearing. Sampling at anything significantly higher than 40 kHz is termed oversampling.

In just a few years’ time, we saw the audio industry go from the CD system standard of 44.1 kHz, and the pro audio quasi-standard of 48 kHz, to 8-times and 16-times oversampling frequencies of around 350 kHz and 700 kHz, respectively. With sampling frequencies this high, aliasing is no longer an issue.

O.K. So audio signals can be changed into digital words (digitized) without loss of information, and with no aliasing effects, as long as the sampling frequency is high enough. How is this done?

Determining Values
Quantizing is the process of determining which of the possible values (determined by the number of bits or voltage reference parts) is the closest value to the current sample, i.e., you are assigning a quantity to that sample.

Quantizing, by definition then, involves deciding between two values and thus always introduces error. How big the error, or how accurate the answer, depends on the number of bits. The more bits, the better the answer.

The converter has a reference voltage which is divided up into 2^n parts, where n is the number of bits. Each part represents the same value.


Since you cannot resolve anything smaller than this value, there is error. There is always error in the conversion process. This is the accuracy issue.

Figure 3: 8-Bit resolution.

The number of bits determines the converter accuracy. For 8-bits, there are 2^8 = 256 possible levels, as shown in Figure 3.

Since the signal swings positive and negative there are 128 levels for each direction. Assuming a ±5 V reference [3], this makes each division, or bit, equal to 39 mV (5/128 = 0.039 V).

Hence, an 8-bit system cannot resolve any change smaller than 39 mV. This means a worst case accuracy error of 0.78 percent.
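
The same arithmetic works for any bit depth. A small Python sketch (mine, keeping the ±5 V reference assumed above):

def step_size(n_bits, vref=5.0):
    levels = 2 ** (n_bits - 1)    # levels per polarity (128 for 8 bits)
    return vref / levels          # volts per quantizing step

for n in (8, 16, 24):
    q = step_size(n)
    print(f"{n}-bit: step = {q * 1000:.6f} mV ({100 * q / 5.0:.6f}% of full scale)")

The 8-bit case reproduces the 39 mV / 0.78 percent figures; each added bit halves the step.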

Each step size (resulting from dividing the reference into the number of equal parts dictated by the number of bits) is equal and is called a quantizing step (also called quantizing interval—see Figure 4).

Originally this step was termed the LSB (least significant bit) since it equals the value of the smallest coded bit; however, that is an illogical choice for mathematical treatments and has since been replaced by the more accurate term quantizing step.

Figure 4: Quantization, 3-bit, 50-volt example.

The error due to the quantizing process is called quantizing error (no definitional stretch here). As shown earlier, each time a sample is taken there is error.

Here’s the not-so-obvious part: the quantizing error can be thought of as an unwanted signal which the quantizing process adds to the perfect original.

An example best illustrates this principle. Let the sampled input value be some arbitrarily chosen value, say, 2 volts. And let this be a 3-bit system with a 5 volt reference. The 3 bits divide the reference into 8 equal parts (2^3 = 8) of 0.625 V each, as shown in Figure 4.

For the 2 volt input example, the converter must choose between either 1.875 volts or 2.50 volts, and since 2 volts is closer to 1.875 than 2.5, then it is the best fit. This results in a quantizing error of -0.125 volts, i.e., the quantized answer is too small by 0.125 volts.

If the input signal had been, say, 2.2 volts, then the quantized answer would have been 2.5 volts and the quantizing error would have been +0.3 volts, i.e., too big by 0.3 volts.
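
Here is the same worked example as a short Python sketch (the function name and defaults are mine); it reproduces both cases:

def quantize(v, n_bits=3, vref=5.0):
    step = vref / 2 ** n_bits        # 0.625 V per step for 3 bits
    return step * round(v / step)    # forced choice of the nearest level

for v in (2.0, 2.2):
    q = quantize(v)
    print(f"input {v} V -> {q} V (quantizing error {q - v:+.3f} V)")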

These alternating unwanted signals added by quantizing form a quantizing error waveform - a kind of additive broadband noise that is generally uncorrelated with the signal and is called quantizing noise.

Since the quantizing error is essentially random (i.e. uncorrelated with the input) it can be thought of like white noise (noise with equal amounts of all frequencies). This is not quite the same thing as thermal noise, but it is similar. The energy of this added noise is equally spread over the band from dc to one-half the sampling rate. This is a most important point and will be returned to when we discuss delta-sigma converters and their use of extreme oversampling.

Early Conversion
Successive approximation is one of the earliest and most successful analog-to-digital conversion techniques. Therefore, it is no surprise it became the initial A/D workhorse of the digital audio revolution. Successive approximation paved the way for the delta-sigma techniques to follow.

The heart of any A/D circuit is a comparator. A comparator is an electronic block whose output is determined by comparing the values of its two inputs. If the positive input is larger than the negative input then the output swings positive, and if the negative input exceeds the positive input, the output swings negative.

Therefore, if a reference voltage is connected to one input and an unknown input signal is applied to the other input, you now have a device that can compare and tell you which is larger. Thus a comparator gives you a “high output” (which could be defined to be a “1”) when the input signal exceeds the reference, or a “low output” (which could be defined to be a “0”) when it does not.

Figure 5A: Successive approximation, example.

A comparator is the key ingredient in the successive approximation technique as shown in Figure 5A and Figure 5B. The name successive approximation nicely sums up how the data conversion is done. The circuit evaluates each sample and creates a digital word representing the closest binary value.

The process takes the same number of steps as bits available, i.e., a 16-bit system requires 16 steps for each sample. The analog sample is successively compared to determine the digital code, beginning with the determination of the biggest (most significant) bit of the code.

Figure 5B: Successive approximation, A/D converter.

The description given in Daniel Sheingold’s Analog-Digital Conversion Handbook offers the best analogy as to how successive approximation works. The process is exactly analogous to a gold miner’s assay scale, or a chemical balance as seen in Figure 5A.

This type of scale comes with a set of graduated weights, each one half the value of the preceding one, such as 1 gram, 1/2 gram, 1/4 gram, 1/8 gram, etc. You compare the unknown sample against these known values by first placing the heaviest weight on the scale.

If it tips the scale you remove it; if it does not you leave it and go to the next smaller value. If that value tips the scale you remove it, if it does not you leave it and go to the next lower value, and so on until you reach the smallest weight that tips the scale. (When you get to the last weight, if it does not tip the scale, then you put the next highest weight back on, and that is your best answer.)

The sum of all the weights on the scale represents the closest value you can resolve.

In digital terms, we can analyze this example by saying that a “0” was assigned to each weight removed, and a “1” to each weight remaining—in essence creating a digital word equivalent to the unknown sample, with the number of bits equaling the number of weights.

And the quantizing error will be no more than 1/2 the smallest weight (or 1/2 quantizing step).
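
The assay-scale procedure translates directly into code. A sketch (my own, assuming an 8-bit converter with a 5 V reference; not taken from the article):

def sar_convert(v_in, n_bits=8, vref=5.0):
    step = vref / 2 ** n_bits
    code = 0
    for bit in range(n_bits - 1, -1, -1):    # heaviest "weight" first
        trial = code | (1 << bit)            # place the next weight on the pan
        if trial * step <= v_in:             # scale does not tip...
            code = trial                     # ...so the weight stays on
    return code

code = sar_convert(2.0)
print(code, code * 5.0 / 256)    # 102 -> 1.9921875 V, within one step of 2 V

Note the loop runs once per bit, which is exactly why the text says a 16-bit conversion takes 16 steps per sample.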

As stated earlier the successive approximation technique must repeat this cycle for each sample. Even with today’s technology, this is a very time consuming process and is still limited to relatively slow sampling rates, but it did get us into the 16-bit, 44.1 kHz digital audio world.

PCM, PWM, EIEIO
The successive approximation method of data conversion is an example of pulse code modulation, or PCM. Three elements are required: sampling, quantizing, and encoding into a fixed length digital word. The reverse process reconstructs the analog signal from the PCM code.

The output of a PCM system is a series of digital words, where the word-size is determined by the available bits. For example, the output is a series of 8-bit words, or 16-bit words, or 20-bit words, etc., with each word representing the value of one sample.

Pulse width modulation, or PWM, is quite simple and quite different from PCM. Look at Figure 6.

Figure 6: Pulse width modulation (PWM).

In a typical PWM system, the analog input signal is applied to a comparator whose reference is a triangle-shaped waveform repeating at the sampling frequency. This simple block forms what is called an analog modulator.

A simple way to understand the “modulation” process is to view the output with the input held steady at zero volts. The output forms a 50 percent duty cycle (50 percent high, 50 percent low) square wave. As long as there is no input, the output is a steady square wave.

As soon as the input is non-zero, the output becomes a pulse-width modulated waveform. That is, when the non-zero input is compared against the triangular reference voltage, it varies the length of time the output is either high or low.

For example, say there was a steady DC value applied to the input. For all samples, when the value of the triangle is less than the input value, the output stays low, and for all samples when it is greater than the input value, it changes state and remains high.

Therefore, if the triangle starts higher than the input value, the output goes high; at the next sample period the triangle has increased in value but is still more than the input, so the output remains high; this continues until the triangle reaches its apex and starts down again; eventually the triangle voltage drops below the input value and the output drops low and stays there until the reference exceeds the input again.

The resulting pulse-width modulated output, when averaged over time, gives the exact input voltage. For example, if the output spends exactly 50 percent of the time with an output of 5 volts, and 50 percent of the time at 0 volts, then the average output would be exactly 2.5 volts.
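
A short Python sketch (mine; the comparator polarity is chosen so that the time-average recovers the input, and the step count is arbitrary) simulates one triangle period of such an analog modulator:

def pwm_demo(v_in, vref=5.0, steps=100000):
    high = 0
    for n in range(steps):    # one full triangle period
        phase = n / steps
        tri = vref * (2 * phase if phase < 0.5 else 2 * (1 - phase))
        if v_in > tri:        # comparator: input vs. triangle reference
            high += 1
    return high / steps * vref    # time-average of the pulse train

print(pwm_demo(2.5))    # ~2.5 V: averaging the pulses recovers the input
print(pwm_demo(1.0))    # ~1.0 V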

This is also an FM, or frequency-modulated system—the varying pulse-width translates into a varying frequency. And it is the core principle of most Class-D switching power amplifiers.

The analog input is converted into a variable pulse-width stream used to turn-on the output switching transistors. The analog output voltage is simply the average of the on-times of the positive and negative outputs.

Pretty amazing stuff from a simple comparator with a triangle waveform reference.

Another way to look at this is that this simple device actually codes a single bit of information, i.e., a comparator is a 1-bit A/D converter. PWM is an example of a 1-bit A/D encoding system. And a 1-bit A/D encoder forms the heart of delta-sigma modulation.

Modulation & Shaping
After 30 years, delta-sigma modulation (also sigma-delta) emerged as the most successful audio A/D converter technology.

It waited patiently for the semiconductor industry to develop the technologies necessary to integrate analog and digital circuitry on the same chip.

Today’s very high-speed “mixed-signal” IC processing allows the total integration of all the circuit elements necessary to create delta-sigma data converters of awesome magnitude.

Essentially a delta-sigma converter digitizes the audio signal with a very low resolution (1-bit) A/D converter at a very high sampling rate. It is the oversampling rate and subsequent digital processing that separates this from plain delta modulation (no sigma).

Referring back to the earlier discussion of quantizing noise, it’s possible to calculate the theoretical sine wave signal-to-noise (S/N) ratio (actually the signal-to-error ratio, but for our purposes it’s close enough to combine) of an A/D converter system knowing only n, the number of bits.

Doing some math shows that the ratio of a maximum (full-scale) sine wave input to the added quantizing noise equals 6.02n + 1.76 dB. For example, a perfect 16-bit system will have a S/N ratio of 98.1 dB, while a 1-bit delta-modulator A/D converter, on the other hand, will have only 7.78 dB!
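
In Python, the formula is a one-liner (my sketch of the expression above):

def snr_db(n_bits):
    return 6.02 * n_bits + 1.76    # theoretical sine-wave S/N in dB

for n in (1, 16, 20, 24):
    print(f"{n}-bit: {snr_db(n):.2f} dB")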

Figures 7A - 7E: Noise power redistribution and reduction due to oversampling, noise shaping and digital filtering.

To get something of an intuitive feel for this, consider that since there is only 1-bit, the amount of quantization error possible is as much as 1/2-bit. That is, since the converter must choose between the only two possibilities of maximum or minimum values, then the error can be as much as half of that.

And since this quantization error shows up as added noise, it reduces the S/N to something on the order of 2:1, or 6 dB.

One attribute shines true above all others for delta-sigma converters and makes them a superior audio converter: simplicity. The simplicity of 1-bit technology makes the conversion process very fast, and very fast conversions allow the use of extreme oversampling.

And extreme oversampling pushes the quantizing noise and aliasing artifacts way out to megawiggle-land, where they are easily dealt with by digital filters (typically 64-times oversampling is used, resulting in a sampling frequency on the order of 3 MHz).

To get a better understanding of how oversampling reduces audible quantization noise, we need to think in terms of noise power. From physics you may remember that power is conserved, i.e., you can change it, but you cannot create or destroy it; well, quantization noise power is similar.

With oversampling, the quantization noise power is spread over a band that is as many times larger as is the rate of oversampling. For example, for 64-times oversampling, the noise power is spread over a band that is 64 times larger, reducing its power density in the audio band by 1/64th.
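
For flat, unshaped noise, the in-band improvement is simply 10·log10 of the oversampling ratio. A quick sketch (my arithmetic, not from the article):

import math

for ratio in (4, 16, 64):
    gain = 10 * math.log10(ratio)    # in-band noise reduction, dB
    print(f"{ratio}x oversampling: {gain:.1f} dB less noise in the audio band")

So 64-times oversampling alone buys about 18 dB; the rest of delta-sigma’s performance comes from the noise shaping described next.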

Figures 7A through 7E illustrate noise power redistribution and reduction due to oversampling, noise shaping and digital filtering.

Noise shaping helps reduce in-band noise even more. Oversampling pushes out the noise, but it does so uniformly, that is, the spectrum is still flat. Noise shaping changes that.

Using very clever complex algorithms and circuit tricks, noise shaping contours the noise so that it is reduced in the audible regions and increased in the inaudible regions.

Conservation still holds - the total noise is the same - but the amount of noise present in the audio band is decreased while the out-of-band noise is simultaneously increased; then the digital filter eliminates it. Very slick.

As shown in Figure 8, a delta-sigma modulator consists of three parts: an analog modulator, a digital filter and a decimation circuit.

The analog modulator is the 1-bit converter discussed previously, with one change: the analog signal is integrated before the delta modulation is performed. (The integral of the analog signal is encoded rather than the change in the analog signal, as is the case for traditional delta modulation.)

Figure 8: Delta-sigma A/D converter.

Oversampling and noise shaping push and contour all the bad stuff (aliasing, quantizing noise, etc.) so the digital filter can suppress it.

The decimation circuit, or decimator, is the digital circuitry that generates the correct output word length of 16-, 20-, or 24-bits, and restores the desired output sample frequency. It is a digital sample rate reduction filter and is sometimes termed downsampling (as opposed to oversampling) since it is here that the sample rate is returned from its 64-times rate to the normal CD rate of 44.1 kHz, or perhaps to 48 kHz, or even 96 kHz, for pro audio applications.
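
A toy decimator in Python (my sketch; a crude moving average stands in for the real multi-stage digital filter, and the input stream is purely illustrative):

def decimate(samples, r):
    out = []
    for i in range(0, len(samples) - r + 1, r):
        window = samples[i:i + r]
        out.append(sum(window) / r)    # filter (average), then keep 1 of every r
    return out

stream = [0, 1, 1, 1, 0, 1, 1, 0]     # stand-in for a 1-bit modulator stream
print(decimate(stream, 4))            # -> [0.75, 0.5], at one-quarter the rate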

The net result is much greater resolution and dynamic range, with increased S/N and far less distortion compared to successive approximation techniques—all at lower costs.

Good Noise?
Now that oversampling helped get rid of the bad noise, let’s add some good noise—dither noise. Dither is one of life’s many trade-offs. Here the trade-off is between noise and resolution. Believe it or not, we can introduce dither (a form of noise) and increase our ability to resolve very small values.

Values, in fact, smaller than our smallest bit… Now that’s a good trick. Perhaps you can begin to grasp the concept by making an analogy between dither and anti-lock brakes. Get it? No? Here’s how this analogy works: With regular brakes, if you just stomp on them, you probably create an unsafe skid situation for the car… Not a good idea.

Instead, if you rapidly tap the brakes, you control the stopping without skidding. We shall call this “dithering the brakes.” What you have done is introduce “noise” (tapping) to an otherwise rigidly binary (on or off) function.

So by “tapping” on our analog signal, we can improve our ability to resolve it. By introducing noise, the converter rapidly switches between two quantization levels, rather than picking one or the other, when neither is really correct. Sonically, this comes out as noise, rather than a discrete level with error. Subjectively, what would have been perceived as distortion is now heard as noise.

Let’s look at this in more detail. The problem dither helps to solve is that of quantization error caused by the data converter being forced to choose one of two exact levels for each bit it resolves. It cannot choose between levels; it must pick one or the other.

With 16-bit systems, the digitized waveform for high frequency, low signal levels looks very much like a steep staircase with few steps. An examination of the spectral analysis of this waveform reveals lots of nasty sounding distortion products. We can improve this result either by adding more bits, or by adding dither.

Prior to 1997, adding more bits for better resolution was straightforward, but expensive, thereby making dither an inexpensive compromise; today, however, there is less need.

The dither noise is added to the low-level signal before conversion. The mixed noise causes the small signal to jump around, which causes the converter to switch rapidly between levels rather than being forced to choose between two fixed values.

Now the digitized waveform still looks like a steep staircase, but each step, instead of being smooth, is comprised of many narrow strips, like vertical Venetian blinds.

Figure 9: A - input signal; B - output signal (no dither); C - total error signal (no dither); D - power spectrum of output signal (no dither); E - input signal; F - output signal (with dither); G - total error signal (with dither); H - power spectrum of output signal (with dither).

The spectral analysis of this waveform shows almost no distortion products at all, albeit with an increase in the noise content. The dither has caused the distortion products to be pushed out beyond audibility, and replaced with an increase in wideband noise. Figure 9 diagrams this process.
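
To see the trade in action, here is a small Python experiment (my own sketch; the one-volt step and signal level are arbitrary) in which a value smaller than one quantizing step vanishes without dither, but is recovered on average once triangular (TPDF) dither is added ahead of the converter:

import random

def quantize(v, step=1.0):
    return step * round(v / step)    # forced choice of the nearest level

random.seed(1)
v = 0.3                              # smaller than one quantizing step
print(quantize(v))                   # -> 0.0: the value simply vanishes

trials = 100000
total = 0.0
for _ in range(trials):
    dither = random.uniform(-0.5, 0.5) + random.uniform(-0.5, 0.5)  # TPDF dither
    total += quantize(v + dither)
print(total / trials)                # -> about 0.3: resolved below one step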

Wrap With Bandwidth
Due to the oversampling and noise shaping characteristics of delta-sigma A/D converters, certain measurements must use the appropriate bandwidth or inaccurate answers result. Specifications such as signal-to-noise, dynamic range, and distortion are subject to misleading results if the wrong bandwidth is used.

Because noise shaping purposely reduces audible noise by shifting the noise to inaudible higher frequencies, taking measurements over a bandwidth wider than 20 kHz results in answers that do not correlate with the listening experience. Therefore, it’s important to set the correct measurement bandwidth to obtain meaningful data.

Dennis Bohn is a principal partner and vice president of research & development at Rane Corporation. He holds BSEE and MSEE degrees from the University of California at Berkeley. Prior to Rane, he worked as engineering manager for Phase Linear Corporation and as audio application engineer at National Semiconductor Corporation. Bohn is a Fellow of the AES, holds two U.S. patents, is listed in Who’s Who In America and authored the entry on “Equalizers” for the McGraw-Hill Encyclopedia of Science & Technology, 7th edition.

 

Posted by Keith Clark on 08/14 at 02:08 PM