Tuesday, September 16, 2014

Getting The Most From Amp Tone Controls

I'm always kind of baffled when I hear a band live and there's no separation between instruments, especially between guitar players. Then I think back to when I was a young player and remember, "They just don't know how to set their tone controls yet."

For too many players, setting those amp tone controls is a random act with little thought behind it. Here's an excerpt from my Ultimate Guitar Tone Handbook (written with the excellent guitar player, composer and author Rich Tozzoli) that gives some context as to how to get the most out of these controls.

"So often players are confused by the tone controls on their amps. What’s the best way to set them? Is there a method for doing so? In order to get the most out of them, it’s best to understand the reasons why they’re there in the first place.

The biggest reason for having tone controls is so that all the frequencies of your instrument speak evenly so no particular range is louder or softer than any other. Shortly after the first amps were developed with only a single “Tone” control, manufacturers noticed that players might be using guitars with different types of pickups with their amps, so more sophisticated tonal adjustments were really necessary. A guitar with a humbucking pickup might sound too boomy through an amp, but if you roll off the low-end with the bass control, the frequencies even out. Likewise, a Strat might be too light on the low-end or have too much top-end, but a simple adjustment would make all frequencies come out at roughly the same level.

Another place where tone controls come in handy is if you have a frequency that really jumps out, as compared to all the rest, either because of the way the amp is overdriven or because of a pedal. Often a slight adjustment of the Treble, Middle or Presence control can alleviate the problem, although these controls will also adjust all the frequencies around the offending one as well.

Tone controls are especially effective in helping the guitar fit within the context of the mix of the song. You want to be sure that every instrument is distinctly heard, and the only way to do that is to be sure that each one sits in its own particular frequency range, and the tone controls will help shape this. It's especially important with two guitar parts that use similar instruments and amps (like two Strats through two Fender Super Reverbs). If this occurs, it’s important to be able to shape your sound so that each guitar occupies a different part of the frequency spectrum. To make our example work in the mix, one guitar would occupy more of a higher frequency register while the other would be in a lower register, which would mean that one guitar has more high end while the second guitar is fatter sounding, or both guitars might have different mid-range peaks.

Not only do guitars have to sonically stay out of the way of each other, but they have to sit in a different frequency space than the bass and drums (and vocals, keys, percussion, and horns if you have them) too. As a result, you either adjust the tone controls on your amp or try another guitar so it fits better in the sonic space with everything else. While the engineer can do this with equalization either during recording or mixing, it’s always better if you get as close to the sound as possible out in the studio first because it will save time and sound better too.
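The frequency carving described above is easy to demonstrate digitally. The sketch below is illustrative only: a one-pole filter is a crude stand-in for rolling an amp's tone knobs, and the 500 Hz split point is an arbitrary choice. It splits a toy "guitar" signal so one part keeps the low register while the other gets the complementary highs:

```python
import numpy as np

def one_pole_lowpass(x, fc, fs):
    """Crude one-pole low-pass: a stand-in for rolling the treble down."""
    a = 1.0 - np.exp(-2.0 * np.pi * fc / fs)
    y = np.zeros_like(x)
    state = 0.0
    for n in range(len(x)):
        state += a * (x[n] - state)
        y[n] = state
    return y

fs = 48000
t = np.arange(fs // 10) / fs  # 0.1 seconds of audio
# Toy "guitar": a low note (110 Hz = open A string) plus a bright partial
guitar = np.sin(2 * np.pi * 110 * t) + np.sin(2 * np.pi * 3000 * t)

fatter = one_pole_lowpass(guitar, 500.0, fs)  # "guitar 1": keeps the low register
brighter = guitar - fatter                    # "guitar 2": the complementary highs
```

Two parts carved this way can play at the same time and still read as separate instruments, which is the whole point of the exercise.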


The best way to get an ear for how guitars are sonically layered is to listen carefully to a number of hit songs in almost any genre and really dissect how everything fits together. Of course, the producer, engineer or artist (if you’re playing on someone else’s recording) will also have specific ideas as to the sound they’re looking for in the track, and will guide you in that direction."

To read additional excerpts from the Ultimate Guitar Tone Handbook and my other books, go to the excerpts section of bobbyowsinski.com.
----------------------------------

Monday, September 15, 2014

Using Sound To Talk To Atoms

I love when I hear about audio or music being used for things other than what we're familiar with, and this is a really good one. One of the latest uses for audio is in quantum physics, where scientists from Chalmers University of Technology in Sweden are using sound to communicate with atoms.

Quantum physics deals with particles at a nanoscopic or sub-atomic scale and a great number of everyday products like the transistor, the laser and the MRI machine operate on a quantum scale. You might say that modern audio, which is so dependent upon the microchip, is a direct result of what happens way down there in the quantum universe.

In this case, the scientists first decided to try to listen to the sound emitted by an atom, which is the weakest sound which can be detected (imagine the amplifier needed for that!). After that they decided to try to manipulate the atom with sound, which proved to be a success.

It turns out that using sound works a lot better than light, which is what is usually used for this purpose, because it travels 100,000 times slower, which provides more control over the atoms and their particles. Because the atom is so large compared to the wavelength of the sound used, it can be customized to only react to certain frequencies.

Lest you think you might try this at home, you should know that the experiments were carried out at near absolute zero temperature and the scientists used a frequency of 4.8GHz, which equates to a D28 note. That's 20 octaves higher than the highest note on a grand piano!
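That octave claim checks out with a little math (taking C8 at 4,186 Hz as the highest note on a standard 88-key grand):

```python
import math

f_atom = 4.8e9       # drive frequency used in the Chalmers experiment, in Hz
f_piano_top = 4186   # C8, the highest note on a standard 88-key grand, in Hz

# Each doubling of frequency is one octave, so the octave distance is a log base 2
octaves = math.log2(f_atom / f_piano_top)
print(f"{octaves:.1f} octaves above the top of the piano")  # ≈ 20.1
```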
----------------------------------

Sunday, September 14, 2014

New Music Gear Monday: iCon Digital iControls Pro

While many have gotten used to mixing in the box with a mouse or trackball, some of us just have to have some faders under our fingers to feel comfortable. Unfortunately many of the mainstream DAW controllers can be way more money than a home studio can bear. That's why the iCon Digital iControls Pro may be the perfect solution in those situations.

The iControls Pro offers 8 motorized touch-sensitive channel faders plus a master, as well as transport control, 9 encoder knobs and a jog wheel, all in an ergonomic aluminum form factor. It also has solo and mute buttons for each channel, the ability to switch between different banks and layers, and DAW horizontal and vertical zoom controls. The unit has 2 USB inputs to allow for daisy chaining devices, using Mackie Control for Ableton, Cubase, Samplitude and Logic Pro, and Mackie HUI control for Pro Tools. The iMAP software also allows all of the controls to be mapped to MIDI as well.

Best of all, the iControls Pro retails for only $429 and streets for even less. Find out more about it on the iCon Digital site, as well as the company's other fine controllers. Check out the nice overview below from Sounds and Gear.


----------------------------------

Thursday, September 11, 2014

Boston "Rock And Roll Band" Isolated Drums and Bass

It's fun to go back and listen to the hits from when rock was in its infancy to hear what the recording and production techniques were like back then. Here's a good example of one of the turning points in music production - it's "Rock and Roll Band" from Boston's first album.

This is really the song that started it all for the band as it's the one that first got the attention of both the band's managers and the record label. What you'll hear is Jim Masdea on drums and Boston leader Tom Scholz on bass. Here's what to listen for:

1. Listen to how tight the bass and drums are, and how near perfect both tracks are performed. The bass is sometimes ever so slightly ahead of the drums, but both are about perfect in their execution. That was a big departure in 1975 (when the song was recorded) when most songs still had a much looser feel, and it was a taste of what production would become a decade later.

2. The drums are in mono. They're very well-balanced (especially the ride cymbal, which is usually lost on most recordings) and have a nice medium dark reverb on them that doesn't get in the way.

3. The sound of the bass is interesting. Leave it to Scholz to not record a bass as a bass. There's some sort of very short delay or modulation on it, so the midrange is mostly in the middle but the extreme low end is puffed out to the sides. Of course, you need to listen on headphones to really hear this.
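If you want to verify this kind of mid/side trick for yourself, a quick sum-and-difference decode will show it. The snippet below is a synthetic illustration, not Scholz's actual signal chain: a low tone that's polarity-flipped between channels ends up entirely in the side signal, while the centered midrange stays in the mid:

```python
import numpy as np

def midside(left, right):
    """Decode a stereo pair into mid (sum) and side (difference) signals."""
    mid = 0.5 * (left + right)
    side = 0.5 * (left - right)
    return mid, side

fs = 48000
t = np.arange(fs) / fs
low = np.sin(2 * np.pi * 60 * t)    # 60 Hz "extreme low end"
midr = np.sin(2 * np.pi * 800 * t)  # 800 Hz midrange

# Centered midrange is identical in both channels; the low component is
# polarity-flipped on the right, pushing its energy into the side channel.
left = midr + low
right = midr - low

mid, side = midside(left, right)
```

Running the decode on a real track (and low-passing the side channel) is an easy way to hear whether a mix really does have its low end spread to the sides.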

Above all, this track still really holds up because it was made so well, and as always, a great song is always remembered.




----------------------------------

Wednesday, September 10, 2014

Sending Money Via Audio

A Bitcoin Audio Key
Bitcoin is going to save the world, or not, depending upon who you talk to these days. It's part of a growing shadow economy built around encryption, secret keys and anonymity, and there's a lot to be said for that.

But that's not enough for many of the more spy-centric Bitcoin users. As a result, a company known as Sound Wallet has taken encryption to a new extreme by taking a BIP38 encrypted key and converting it to a sound file. The average listener just hears static, but if you use an Android app called AndroSpectro you can dig out the encrypted key from the noise.

That's still not enough for some paranoid users though, as a user called Krach will cut you a 7" vinyl record for the utmost in security. The burst of energy right at the beginning of the record contains the key, but then gradually blends into some harsh electronic music.

I guess you could call this the ultimate safe, since no one would even think to look through one's record collection for their money. Then again, for most of us banks still work OK.
----------------------------------

Tuesday, September 9, 2014

3 Tips For A Better Sounding iTunes Encode

iTunes is becoming less and less important as downloads wane in desirability, but that doesn't mean you can ignore it completely. There are those that still want to purchase their songs, especially at higher sample and bit rates, so having the best sounding AAC files is still worth striving for.

In that spirit, here's an excerpt from the latest Mastering Engineer's Handbook 3rd edition that provides 3 tips for a better sounding iTunes encode.

"There are a number of tips to follow in order to get the best sound quality from an iTunes encode. As it turns out, the considerations are about the same as with MP3 encoding:


1. Turn it down a bit. A song that's flat-lined at -0.1 dBFS isn't going to encode as well as a song with some headroom. This is because the iTunes AAC encoder sometimes outputs a tad hotter than the source, and there are inter-sample overloads at that level that aren't detected on a typical peak meter. Since all DACs respond differently, a level that doesn’t trigger an over on your DAW’s DAC may actually be an over on another playback unit.

If you back it down to -0.5 or even -1 dB, the encode will sound a lot better and your listener probably won't be able to tell much of a difference in level anyway. 

2. Don't squash the master too hard. Masters with some dynamic range encode better. Masters that are squeezed to within an inch of their life don't; it’s as simple as that. Listeners like it better too.

3. Although the new AAC encoder has a fantastic frequency response, sometimes rolling off a little of the extreme top end (16kHz and above) can help the encode as well.

Any type of data compression requires the same common sense considerations. If you back off on the level, the mix buss compression and the high frequencies, you’ll be surprised just how good your AAC encode can sound.

Remember that iTunes still does the AAC encoding. You're just providing a file that's been prepped to sound better after the encode."
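Tip 1 above is simple enough to sketch in code. This assumes a mono floating-point signal already loaded into a NumPy array (file I/O omitted); it just rescales so the highest sample peak lands at -1 dBFS:

```python
import numpy as np

def normalize_peak(x, target_dbfs=-1.0):
    """Scale the signal so its highest sample peak sits at target_dbfs."""
    target = 10.0 ** (target_dbfs / 20.0)  # -1 dBFS ≈ 0.891 linear
    peak = np.max(np.abs(x))
    return x * (target / peak)

# Toy "master" that's flat-lined just under 0 dBFS:
fs = 44100
t = np.arange(fs) / fs
mix = np.clip(1.2 * np.sin(2 * np.pi * 220 * t), -0.999, 0.999)

prepped = normalize_peak(mix, -1.0)
print(f"new peak: {20 * np.log10(np.max(np.abs(prepped))):.2f} dBFS")  # ≈ -1.00
```

Note that this only addresses sample peaks; true inter-sample peaks require oversampled metering, which is exactly why the extra headroom is cheap insurance.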

----------------------------------

Monday, September 8, 2014

New Internet Speed Is 10X Faster Than Google Fiber

Internet speed is becoming increasingly important to everyone who records. Being able to collaborate in real time, whether it be creatively during tracking or listening to a mix as it's going down, is becoming more the norm every day, so any increase in Internet speed is always most welcome.

Until now, Google Fiber was thought to be the Internet speed record holder at around 1 gigabit per second, which is way faster than the average 10 megabit broadband speed that most of America uses right now. The problem with Google Fiber is that so far it's only been installed in Kansas City, Austin and Provo, and it will take a lot of time and money to roll out across the US.

Now comes a brand new technology called "XG-FAST" from the famous Bell Labs that raises speeds to a ridiculous 10 gigabits per second (10 times faster than Google Fiber). The speed is fantastic, but the real breakthrough is that these speeds can be had over the existing copper landline infrastructure already in place instead of laying new fiber, something that scientists had thought to be theoretically impossible.

But there is a catch - XG-FAST is only working in the lab so far, so it's not anywhere near to coming to a computer near you, but then again, neither is Google Fiber. That said, I'd put my money on Bell Labs getting this in the hands of consumers like you and me way before the Google boys do. And it can't get here fast enough!
----------------------------------

Sunday, September 7, 2014

New Music Gear Monday: DDMF The Strip Plugin

There are some people who are in love with any and all things Neve, but even if that doesn't describe you, you'll still enjoy this new plugin by DDMF called The Strip.

The Strip sure does look somewhat like a vintage Neve channel strip, but it doesn't exactly try to emulate one. Instead it just goes for the best series of processors that it can be, which includes high and low pass filters, a 5 band EQ, a gate and a compressor. Plus the order of the EQ and compressor is interchangeable, a nice feature not often found in channel strip-type plugins.

The Strip boasts some very low CPU usage so you can slap one on every channel if you want without having to worry about running out of processor power. It's also available for Windows and Mac OSX, and in VST, RTAS, AU and AAX 32 and 64 bit formats.

Best of all, it's only $39 until the end of September, with a free demo version available. Check it out!
----------------------------------

Friday, September 5, 2014

Electronic Music Superstar Stonebridge On The Latest Inner Circle Podcast

Stonebridge is one of the most musical and cutting edge electronic music producers on the scene today and I'm really pleased to have him on the latest edition of my Inner Circle Podcast.

Stone has been producing international hits since 1993, but most recently he's had some big ones with Ne-Yo, Britney Spears and Jason Derulo.

In this show he talks all about his technique, how he got into DJing and producing, and gives some advice that's useful to anyone in the business.

I'll also talk about the new Yahoo music video service as well as provide some tips for managing your studio time.

You can listen by going to bobbyoinnercircle.com, iTunes or Stitcher.
----------------------------------

Thursday, September 4, 2014

Rush "Tom Sawyer" Song Analysis

We haven't done a song analysis for a while, so here's an excerpt from my Deconstructed Hits: Classic Rock Vol 1 book. It's Rush's "Tom Sawyer," a perennial FM radio favorite and the first single from their breakout Moving Pictures album from 1981. The song is a part of the defining moment in the band’s history when they finally broke out to world-wide superstardom.

The song was written on a band summer rehearsal holiday spent on a farm outside of Toronto. Poet Pye Dubois presented the band with a poem entitled “Louis The Lawyer,” which drummer Neil Peart then modified, and bassist/vocalist Geddy Lee and guitarist Alex Lifeson set to music.

THE SONG
As with everything Rush, "Tom Sawyer" is complex and doesn't follow a standard form, but that's why they're so well liked, right? The form looks something like this:

intro/chorus ➞ verse ➞ B-section ➞ C-section ➞ chorus ➞ interlude ➞ solo ➞ 
intro ➞ verse ➞ B-section ➞ C-section ➞ chorus ➞ outro

You can dispute exactly where the chorus is, but the popular thinking is that it's where the "Tom Sawyer" lyric is mentioned. Nonetheless, the song is as unconventional as it is interesting.

While most of the song is in 4/4 time, the solo begins in 7/8, then switches to 13/16. It then returns to 4/4 until the outro, where it again changes to 7/8. 

The lyrics are poetry set to music, instead of the other way around. There’s no overt need to rhyme if it doesn’t fit the thought, which is a whole lot better than forcing it and having an awkward lyric or cadence.

THE ARRANGEMENT
Rush's songs are fairly bare-bones in that they're meant to be played live, so there's not a lot of obvious layering. The guitars are doubled and heavily effected to make them bigger, but you can hear how they effectively use only a single less effected guitar in the first turnaround of the solo, then the second has the full guitar sound to change the dynamics.

Arrangement Elements
  • The Foundation: drums
  • The Pad: synthesizer on the intro and outro, high register synth in solo beginning and outro
  • The Rhythm: high hat
  • The Lead: lead vocal, guitar solo
  • The Fills: none
Rush uses synthesizers very creatively, from the Oberheim OB-X swell in the intro and outro, to the Moogish sound in the interlude and outro. Also, the lead vocal is doubled in the C-section, which differentiates it from the other sections.

THE SOUND
The mix of “Tom Sawyer” is as interesting as is the song form. Neil Peart's drums are way up in front and the snare has a nice pre-delayed medium room on it that you can only hear in the beginning when the drums are played by themselves. All of the other drums are dry. The snare is fairly bright, as is the high hat, which is featured in the mix since it keeps the motion of the song moving forward. The kick and snare are compressed well to make them punchy and in your face without seeming squashed. The cymbals are nice and bright but pulled back in the mix.

Geddy Lee's vocal has a timed delay with a medium reverb wash that blends seamlessly into the track, which also has a bit of modulation that you can hear as it dies out. Once again, you can only hear it during the intro when the song is fairly sparse. His bass has that Rickenbacker treble sound yet still has a lot of bottom, despite the distortion.

Alex Lifeson's guitar is doubled using a short delay, and slightly chorused with a medium reverb wash for the huge sound that glues everything together. In the case of the solo guitar, the reverb is effected and then spread hard left and right. It also uses the same guitar sound as the rhythm guitar, which is unusual, since solos usually have a different sound on most records. 
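A "timed" delay like the one on the vocal is simply a delay set to a note value at the song's tempo. As a sketch (88 BPM is an assumption here for illustration; measure your own track):

```python
def delay_ms(bpm, note_fraction=0.25):
    """Delay time in ms for a note value (0.25 = quarter note) at a given tempo."""
    beat_ms = 60000.0 / bpm  # duration of one quarter-note beat
    return beat_ms * (note_fraction / 0.25)

# Common delay settings at an assumed 88 BPM:
for name, frac in [("quarter", 0.25), ("eighth", 0.125), ("dotted eighth", 0.1875)]:
    print(f"{name}: {delay_ms(88, frac):.1f} ms")
```

Setting the delay to one of these values makes the repeats land on the grid, which is why a timed delay blends into the track instead of smearing it.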

Listen Up:
  • To the modulation at the end of the reverb on Geddy Lee’s vocal.
  • To how large the stereo synthesizers on the intro of the song are.
  • To the stereo effect on the Moog synth at the beginning of the solo and the outro.

THE PRODUCTION
Any power trio has to have great musicians to have everything sound big and cohesive, and Rush does just that. Peart's drumming is absolutely rock solid, without a beat ever feeling like it drifted even a microsecond out of time, yet still feels organic. The way he’s placed in the mix totally holds it together, yet it never feels as if he’s the one featured. As with most other hits, it’s the energy of the track that pulls you in, which goes to show you that without a near perfect basic track, it’s difficult to keep the track interesting.

You can read additional excerpts from the various Deconstructed Hits volumes, as well as my other books on the excerpt section of bobbyowsinski.com.


----------------------------------

Wednesday, September 3, 2014

The Amish Town That's The Center Of The Touring Universe

Many of you know that I grew up in a small town in Eastern Pennsylvania called Minersville, but what you may not know is that it's about 40 minutes down the road from what has become the center of the concert touring universe. That town of only 9,400 is called Lititz, and it's become a total company town in the same way that Hollywood is built around movies.

Lititz is the home of Clair Brothers, the largest sound company in the world, and Tait Towers, a leading company in the staging and lighting part of the business.

Clair Brothers was started very modestly by Gene and Roy Clair in 1968 with a pair of Altec A-7s behind the Four Seasons (for $90 a gig). They quickly realized that they needed a better way to move the gear around after touring with the likes of the Grateful Dead and Jefferson Airplane, and began to manufacture their own speaker systems, with Rod Stewart getting the first system custom painted in white. This gear has become the centerpiece of the company as time has gone on and as the company grew into permanent installation work as well.

Tait Towers was started by Michael Tait, who handled the lighting for Yes at the time, and moved to the Amish country town in 1978 because of the relatively low costs and fairly close proximity to Philadelphia and New York. Both companies now employ over 750 people in the Lititz area alone.

What's even more interesting is the fact that Clair and Tait are now coming together to build a new state-of-the-art rehearsal/pre-production center known as Rock Lititz Studios that's designed for major concert acts to get their show together before going out on tour. As it is now, most acts have to privately rent out a large venue or aircraft hangar, where the costs, location or accommodations aren't the most convenient or cost-effective. The Rock Lititz "entertainment campus" is truly a first in the industry.

The first client booked for Rock Lititz was to be U2, but the band has since cancelled because it's still working on its album and has postponed its tour. There seems to be no problem filling the time slot, though, as acts are lining up to use the facility as it nears completion.

There's a great article about Clair Bros, Tait Towers and Lititz over on the Wall Street Journal that goes into much more detail.
----------------------------------

Tuesday, September 2, 2014

Mastering Your Songs In 6 Steps

When I began writing the latest 3rd edition of The Mastering Engineer's Handbook, one of the things that I wanted to find out from some of the mastering greats was how they approached a project. In other words, what were the steps they took to make sure that a project was mastered properly. Interestingly, the majority of them follow 6 primary steps, some consciously followed and some unconsciously. Here's an excerpt from The Mastering Engineer's Handbook that outlines the technique.

"If you were to ask a number of the best mastering engineers what their general approach to mastering was, you’d get mostly the same answer.

1. Listen to all the tracks. If you’re listening to a collection of tracks such as an album, the first thing to do is listen to brief durations of each song (10 to 20 seconds should be enough) to find out which sounds are louder than the others, which ones are mixed better, and which ones have better frequency balances. By doing this you can tell which songs sound similar and which ones stick out. Inevitably, you’ll find that unless you’re working on a compilation album where all the songs were done by different production teams, the majority of the songs will have a similar feel to them, and these are the ones to begin with. After you feel pretty good about how these feel, you’ll find it will be easier to get the outliers to sound like the majority than the other way around.

2. Listen to the mix as a whole, instead of hearing the individual parts. Don’t listen like a mixer, don’t listen like an arrangement and don’t listen like a songwriter. Good mastering engineers have the ability to divorce themselves from the inner workings of the song and hear it as a whole, just like the listening public does. 

3. Find the most important element. On most modern radio-oriented songs, the vocal is the most important element, unless the song is an instrumental. That means that one of your jobs is trying to make sure that the vocal can be distinguished clearly.

4. Have an idea of where you want to go. Before you go twisting parameter controls, try to have an idea of what you’d like the track to sound like when you're finished. Ask yourself the following questions:
  • Is there a frequency that seems to be sticking out?
  • Are there frequencies that seem to be missing?
  • Is the track punchy enough?
  • Is the track loud enough?
  • Can you hear the lead element distinctly?
5. Raise the level first. Unless you’re extremely confident that you can hear a wide frequency spectrum on your monitors (especially the low end), concentrate on raising the volume instead of EQing. You’ll keep yourself out of trouble that way. If you feel that you must EQ, refer to the section on EQ later in the chapter.

6. Adjust the song levels so they match. One of the most important jobs in mastering is to take a collection of songs like an album, and make sure they each have the same relative level. Remember that you want to be sure that all the songs sound about the same level at their loudest. Do this by listening back and forth to all the songs and making small adjustments in level as necessary."

Following these steps just like the mastering greats do will ensure that not only will your project sound better, but you'll avoid some of the pitfalls of mastering your own material as well.
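The back-and-forth level matching in step 6 can be roughed in numerically: measure each song's loudest short-stretch RMS and compute the gain that brings it up to the loudest one. Here's a sketch using synthetic signals (the numbers only get you close; your ears make the final call):

```python
import numpy as np

def loudest_window_rms(x, fs, window_s=3.0):
    """Highest RMS over any window_s-second stretch — a proxy for 'at its loudest'."""
    w = int(window_s * fs)
    # Cumulative sum of squares gives every window's energy in one pass
    csum = np.concatenate(([0.0], np.cumsum(x.astype(float) ** 2)))
    energies = csum[w:] - csum[:-w]
    return np.sqrt(energies.max() / w)

fs = 44100
rng = np.random.default_rng(0)
songs = {  # toy "album": noise at two different levels standing in for mixes
    "song_a": 0.5 * rng.standard_normal(10 * fs),
    "song_b": 0.2 * rng.standard_normal(10 * fs),
}

levels = {name: loudest_window_rms(x, fs) for name, x in songs.items()}
ref = max(levels.values())  # match everything to the loudest song
for name, lvl in levels.items():
    print(f"{name}: apply {20 * np.log10(ref / lvl):+.1f} dB")
```

A simple RMS window is a rough loudness proxy; dedicated loudness meters weight the measurement toward how our ears actually perceive level.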

----------------------------------
