Over a year ago I felt like I needed to stop being a solo songwriter. I knew several people from church who felt the same way, and I invited them to join together to actually accomplish something. After several false starts, three of us started meeting regularly. We didn’t accomplish much until one member of the fledgling group mentioned, totally out of the blue, an old Fanny Crosby song called Redeemed that he grew up singing in his Baptist church.
While the song didn’t lend itself well to modernization, the theme is certainly universal. It did, however, spark an idea for a chorus. We also cannibalized as many lines from the song as we could to use for inspiration. A couple of those lines ended up in the pre-choruses almost word-for-word. Unfortunately, after we worked up a chorus we were stumped. I also wasn’t thrilled with the chords/top line we had come up with for the chorus.
There was one more person who had expressed an interest in the group but lived too far away to make the bi-weekly get-togethers. I sent the lyrics to her, purposely holding back the melody we had come up with, and asked if she could come up with a melody and sing it into her phone for me. She spent more time than I realized rehearsing it in her head and sent me something very close to what we ended up using.
Once we had a workable chorus, the melody for the verses actually came fairly quickly. After that, though, things sat for a while – members of the group had various family conflicts and we weren’t sure where to go from there. Then, one Friday afternoon while swimming laps in the YMCA pool (I do good thinking there), a bridge came to me. I wasn’t sure it would fit, but sang it into my phone back in the locker room nonetheless.
In retrospect, a competent producer/songwriter would likely have thrown it out as it maybe doesn’t fit with the rest of the song. Fortunately, I am neither, and it has since become our favorite part. So, the final form of the song is Chorus twice, Verse 1-Chorus, Verse 2-Chorus, followed by the Bridge and then one last Chorus. The first and last choruses are done rubato with just voice, piano and bass. The middle part has drums and guitar with violin and organ sprinkled in, and the bridge is a full gospel choir – at a much faster tempo than the rest of the song. And it works. Here’s the final product, read on for the details of recording and mixing this song.
I wrote about the recording session for the basics back in December. There were a lot of lessons learned at that session. One very important lesson that didn’t really sink in until later was the importance of having appropriate video gear. More about that below. Before Dena left to go back home to Atlanta she generously laid down several violin tracks for me to comp together some sweetening.
Once I got the basics back to my home studio I synced the tempo track to the live recording and then substituted the original piano and drums back in. This arduous task was made more bearable by Cakewalk’s ability to “tab to transient.” That is, when you have a track highlighted, pressing the Tab key moves the Now time to either the next detected transient in an audio track or the beginning of the next note in a MIDI track. It’s not perfect, but boy did it make things go faster. I found that working mostly from the lead vocal, rather than the piano, resulted in a more accurate tempo map, since the lead vocal was the more important of the two to the timing.
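Under the hood, transient detection like this generally boils down to flagging sudden jumps in short-term signal energy. Here’s a minimal, purely illustrative sketch of that idea – the function name, frame size and threshold are my own inventions, and Cakewalk’s real detector is certainly far more sophisticated:

```python
def detect_transients(samples, frame_size=4, threshold=2.0):
    """Flag sample positions where frame energy jumps by more than
    `threshold` times the previous frame -- a crude stand-in for the
    detection behind a "tab to transient" feature."""
    frames = [samples[i:i + frame_size] for i in range(0, len(samples), frame_size)]
    energies = [sum(s * s for s in f) / len(f) for f in frames]
    hits = []
    for i in range(1, len(energies)):
        # small epsilon avoids flagging silence-to-silence noise
        if energies[i] > threshold * (energies[i - 1] + 1e-12):
            hits.append(i * frame_size)  # sample index where the loud frame starts
    return hits
```

Tabbing through a track is then just jumping the Now time to each successive entry in a list like this.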
The picture above shows my two songwriting buddies, Geoff and Rob, laying down bass and acoustic guitar for the track in our songwriting lair at Geoff’s house. We’ve since started working on our next song, this time more in the style of Wake or Turn It Up, so watch this space. I already know who I want to collaborate with on the recording…
Lastly, at home again, I added an organ part and then drafted my daughter to help me fill out the gospel choir by overdubbing several different parts. I also changed the piano part to make it more “gospel” which also involved changing a few of the chords.
Editing is the unsung step in the modern production process. The tightening of timing and tuning is what gives modern productions the sound that is expected by today’s audience. I don’t care how much people protest that they want to hear pure, unedited performances, they don’t really mean it. That being said, the trick is to not overdo it. With the tools available to the modern producer it is far too easy to suck the life out of a performance.
In this track I did minimal tuning to Brandy’s voice and no timing work. This is because I needed her vocals to match the video shoot exactly, and the shoot was done in two takes, meaning there wasn’t a lot to choose from. There were a couple of places where I really wish I’d had a little more flexibility, but almost no one will notice except me. I edited all the other vocals to match Brandy’s – so long as the attacks of each word line up, the endings can be a bit off and no one will really notice.
The most significant tuning I did was to the bass, to match the new chords I substituted. Fortunately, I didn’t change any of the chords where the guitar was playing; since I don’t have the version of Melodyne with Direct Note Access, that would have required re-recording the guitar part.
I also went through the piano, guitar and bass performances and edited them to more closely match the drums. The MIDI drum patterns I used are all taken from real performances, so they’re not perfectly tight to the grid. To tighten the band, I listened through to just the instruments and stopped whenever I heard something too sloppy to bear. Generally this entailed clipping and nudging individual notes for the bass, or clipping short phrases for the piano or guitar and using the stretch algorithm in Cakewalk to make the timing line up where I wanted it to. This is remarkably effective. Once everything was tuned and timed to my satisfaction, I rendered all the MIDI tracks to audio and created a new project file to do the mix.
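The logic of that clip-and-nudge pass – snap only the notes that are too far off, leave the near-enough ones alone so the performance keeps its feel – can be sketched in a few lines. This is a hypothetical illustration of the editing policy, not anything from Cakewalk; the function name, grid and tolerance are all my own:

```python
def nudge_to_grid(note_times, grid, tolerance=0.05):
    """Move each note onset (in seconds) to its nearest grid point,
    but only when it's off by more than `tolerance` -- mirroring the
    "fix only what's too sloppy to bear" approach."""
    out = []
    for t in note_times:
        nearest = min(grid, key=lambda g: abs(g - t))
        out.append(nearest if abs(nearest - t) > tolerance else t)
    return out
```

A note 20 ms early survives untouched, while one 100 ms late gets snapped – which is more or less what nudging by ear accomplishes.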
In total there are 22 tracks. This is one of the larger projects I’ve recorded. If I’d had live drums like I’d originally wanted it would have been over 30, since Addictive Drums just outputs a stereo pair. (You can output AD to individual tracks if you like, but I don’t really need to trade complexity for the extra flexibility that gives you.) In the end most of the tracks were vocals, with nine audio tracks and three more aux tracks grouping the choir tracks by part. Once I had each part blended properly I sent all tracks of that part to an aux track. I did this so that I could adjust the parts against each other with a single fader, rather than having to manipulate 3-5 individual channels.
Step one in mixing for me is to set up the standard per-track processing. I finally got smart enough to create a track template that I can then apply to each channel quickly. The template has the console emulator at the top, followed by the tube saturation, EQ and finally the CA-2A compressor at the bottom. I don’t use the CA-2A on every track, but enough of them that it’s easier to delete it from the ones I don’t want than add it to the ones I do.
After that I set the high-pass filter for each track (raise the cutoff until there is an audible change to the track, and then back it off until I can no longer hear it). This gets rid of any low-frequency rumble that is muddying the track or activating the compressor when it needn’t. The trick is to pull back the cutoff until it’s not affecting the meat of the track – cutting too much can result in an anemic, thin-sounding mix.
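For the curious, the simplest possible high-pass is a one-pole filter, which is enough to show why rumble dies off while the meat of the track passes through. This sketch is illustrative only – the cutoff and sample rate here are placeholders, not the values used in the mix, and a real EQ plug-in uses steeper, better-behaved filters:

```python
import math

def high_pass(samples, cutoff_hz, sample_rate=48000):
    """One-pole high-pass: attenuates content below cutoff_hz while
    passing faster changes -- the low-cut idea in its simplest form."""
    rc = 1.0 / (2 * math.pi * cutoff_hz)
    dt = 1.0 / sample_rate
    alpha = rc / (rc + dt)
    out = [samples[0]]
    for i in range(1, len(samples)):
        # output follows changes in the input but leaks DC away
        out.append(alpha * (out[-1] + samples[i] - samples[i - 1]))
    return out
```

Feed it a constant (pure DC, like subsonic rumble in the limit) and the output decays toward zero, which is exactly the behavior you want below the cutoff.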
The last step in mix prep was an experiment. I added the Hornet Mk3 VU Meter to each track. I learned about this plug-in while watching tutorials on gain staging. The Hornet has a built-in auto-gain function that will set the output to a constant level you specify. It pretty much worked on this project, but definitely didn’t on the next one (see future post), so I don’t think I’ll be using it in the future. The problem is that there is no way to put a plug-in before the channel strip (which is where it should go) and then more plug-ins after the channel strip. It’s an either/or situation.
The composite screen shot to the left is the full mixer window, showing the channel strips for each channel, as well as the buses on the far right. You can see the basic EQ curves I applied to each channel, as well as the list of plug-ins on each. Let’s take a look at a couple of the highlights:
The lead vocal has the most processing of any of the vocal channels. In addition to the high-pass filter to remove the aforementioned rumble, I’ve added a wide boost around 700 Hz to help bring out the presence and body in Brandy’s beautiful alto voice. The Clariphonic DSP further enhances her highs in a way that helps her lead vocal cut through an otherwise busy mix, and does it without creating sibilance. I love this processor. Finally, the ValhallaPlate is creating just a little extra space around her vocal, a little more ambiance than the rest of the mix.
The acoustic guitar also has a relatively large amount of processing, with two additional plug-ins following the channel strip. In this case, it’s all about getting the guitar bright enough to shimmer across the top of the mix. As such, I’ve used a relatively aggressive high-pass filter. This time I really did want to cut out some audible lows in the instrument, and the frequency display on the EQ shows how much I’m lopping off. This is followed by a rather significant boost to the highs.
The Plektron WTC Comp follows the channel strip. I don’t know where I first discovered the WTC Comp, but it’s long been my go-to on acoustic guitar. It just does something special to the signal that I can’t do any other way. At the end of the chain is another EQ, this time the Nomad Factory Pultec clone that came bundled with Sonar Platinum, the BQ25. This is boosting the highs a bit more following the compression. If you solo the guitar it sounds pretty terrible, but with the whole mix it’s just about perfect.
One of the things I did to prepare for this mix was to listen intently to some similar gospel tracks that were on the charts at the time. One in particular that I couldn’t get enough of was Nobody Like You by Miranda Curtis. One of the main things I noticed was that the reverb was entirely on the sides of the audio soundstage, while the middle was almost completely dry. This creates a bubble of ambiance around the music. I created this effect with the Voxengo MSED mid-side processor, using it to turn down the middle of the reverb signal independently of the sides. With automation I was able to change this balance throughout the song, with less reverb during the quiet moments and more as the arrangement ramps up.
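The mid/side trick is simple arithmetic: encode left/right to mid (the sum) and side (the difference), scale the mid down, then decode back to left/right. Here’s a sketch of what that processing does conceptually – the function name and gain value are my own illustration, not Voxengo’s implementation:

```python
def ms_reverb_balance(left, right, mid_gain=0.3):
    """Encode a stereo signal to mid/side, attenuate the mid, and
    decode back to L/R -- pushing the signal toward the sides of
    the soundstage while drying out the center."""
    out_l, out_r = [], []
    for l, r in zip(left, right):
        mid = (l + r) / 2.0    # what's common to both channels
        side = (l - r) / 2.0   # what differs between them
        mid *= mid_gain        # turn down the center image only
        out_l.append(mid + side)
        out_r.append(mid - side)
    return out_l, out_r
```

Anything panned dead center (identical in both channels) gets quieter, while fully out-of-phase side content passes through untouched – which is why the reverb ends up living at the edges of the image. Automating `mid_gain` over time corresponds to the balance rides described above.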
The reverb itself is the Waves IR1, which I picked up on sale over Christmas. While I have a number of reverb plug-ins, I grabbed this one because I didn’t have a 64-bit convolution reverb. I tend to struggle with reverb – my mixes often end up sounding too much like a person singing through a tube. The basic reverbs included with this processor sound really good, and as long as I don’t push it too hard I’m actually really happy with the results.
Finally, I used the Cakewalk Quad-Curve EQ to cut off the highs and lows from the signal to make sure that the reverb doesn’t swamp the main mix.
The processing on the Master Bus is fairly simple. Other than the standard channel emulator and tape processor, there’s a bus compressor, EQ and limiter. The compressor is the Native Instruments Solid Bus Comp, which is an emulation of the SSL bus compressor. The EQ is set to gently roll off some of the highs. I remember first hearing about this technique on the UBK Happy Fun Time podcast, and it blew my mind: the idea that it’s okay to roll off the highs on the master bus, even after spending so much effort sweetening the highs in the mix. As I listened to the mix, the top end was just fatiguing. I wasn’t sure what was wrong, but remembering this trick made all the difference. Finally, the Unlimited limiter manages the final levels to hit the loudness sweet spot for YouTube.
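For reference, YouTube normalizes playback to roughly -14 LUFS, so there’s little to gain from limiting much hotter than that. True LUFS metering involves K-weighting and gating, which is well beyond a blog sketch, but a plain RMS reading gives a rough feel for where a mix sits. Illustrative only – this is not how the limiter or YouTube actually measures:

```python
import math

def rms_dbfs(samples):
    """Rough RMS level in dBFS -- a crude stand-in for a loudness
    check (real LUFS metering adds K-weighting and gating)."""
    mean_sq = sum(s * s for s in samples) / len(samples)
    return 10 * math.log10(mean_sq + 1e-12)  # epsilon guards silence
```

A full-scale square wave reads 0 dBFS, and halving the amplitude drops the reading by about 6 dB, which matches the usual rule of thumb.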
I bounced the audio down to 48k 24-bit stereo and prepared to make the music video. I started in Windows Movie Maker but quickly found that it was going to be too difficult to sync the audio with the video. Also, the last-minute nature of the video shoot led to very shaky footage. HitFilm, Movie Maker and DaVinci Resolve all failed to stabilize the shot without unacceptable artifacts in the image. Finally I downloaded the trial version of Adobe Premiere, which had a stabilization processor powerful enough to handle it. Not perfect, but good enough.
Once I had stabilized video, I imported the opening credits that I created in Movie Maker, the stabilized video clips and the 48k audio into HitFilm Express and created the basic video. I knew that I wanted to include the song lyrics at the bottom of the screen so people could sing along if they liked, but HitFilm’s text options are too finicky to easily do this.
So, back into Movie Maker I went. I’ve heard that DaVinci Resolve has released a new version with significantly improved text tools, so in the future I may be able to do everything in Resolve. More to come. Anyway, in Movie Maker I added all the lyrics and saved out the video again, ready for uploading to YouTube.
The biggest lesson learned is to prepare properly and get it right at the source. Fixing it in post is a pain in the neck and never gets results as good as simply getting it right the first time. The irony is that it took so long to get the original recording session scheduled that there’s not much excuse for not having the arrangement completely settled prior to that session.
I also should have prepped technically for the session better. The lack of a MIDI cable and a dead video camera battery led to all sorts of compromises later in the process. I’ve done several sessions since and have applied these lessons to great effect. Hopefully by summer my blog will be caught up with real life, but it’s a good problem to have – too much recording and mixing.
Until next time…