No Bull**** Mixing

Archives for September 2019

What is Mixing? An Introduction to the Craft

September 28, 2019 by b_five

A friend of mine recently asked me for advice on what to buy so that he could produce and create his own music. He’s an amazing vocalist, and wanted to start recording. He also asked me something that took me by surprise:

“So, like… what is mixing?”

Sometimes I forget that there was a time when I didn’t know the answer, either. When I first started recording, I thought you just needed to set up the mics, hit record, and give a good performance. When I was in school studying music in 2008, I wanted to make a demo for my string quartet to help us get gigs. I bought a pair of mics, which I thought were good at the time, and set them up in our practice space to record. As you can expect, the results weren’t great; even with a great performance, the tracks still sounded dull and thin. My colleagues and I were disappointed, and I felt like I had let them down. After that experience, I decided that the next step in my career needed to be filling in the gaps of my understanding of recording. So I went to school for audio engineering, and fell in love with the art and science of mixing.

So what IS it?!

At the most basic level, “mixing” is the process of taking multiple sounds and combining them into a single sound. Most commonly this is done using a Digital Audio Workstation (DAW) like GarageBand, Pro Tools, or Ableton Live, but it can also be done in real time using a mixing console, like the ones you see in recording studios.

Of course, that doesn’t do much to explain the process. Currently, most mixing is done “in the box”, meaning inside a computer, using the software I just mentioned. Here’s how it works.

If I were to mix a song for someone, they would first send me all of the audio tracks that are part of the song: drums, guitars, basses, synths, vocals, etc. I would then import those tracks into my software. I mostly use Pro Tools, which is the industry standard; you will see it in almost every professional recording studio. Once I have all the tracks in my Pro Tools session, organized nicely, it looks like this:

Color coding is important to me. Red: Drums, Yellow: Synth, Blue: Bass, Green: Vocals

From here, I can process the sounds as I see fit (more on that later). Then, the software combines (“sums”) everything together, with the end result being just two tracks, Left and Right.
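If you’re curious what “summing” looks like under the hood, here’s a minimal sketch in Python. The stem filenames are hypothetical, and it assumes every track has the same length and sample rate – a real DAW handles all of that for you.

```python
import numpy as np
import soundfile as sf  # pip install soundfile

names = ["drums.wav", "bass.wav", "vocals.wav"]   # hypothetical stems
tracks, rate = [], None
for name in names:
    data, rate = sf.read(name)
    if data.ndim == 1:                  # mono source: send it to Left and Right equally
        data = np.column_stack([data, data])
    tracks.append(data)

mix = np.zeros_like(tracks[0])          # the empty stereo "canvas"
for track in tracks:
    mix += track                        # summing really is just addition

mix /= max(1.0, np.abs(mix).max())      # crude safety trim so the sum can't clip
sf.write("mix.wav", mix, rate)
```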

When you listen to your music library, you are listening to a single stereo audio file – the output of the mixing software, such as an .MP3 or .WAV file. This is the job of the mixing engineer: to take all of those sounds that were recorded and combine them into a single stereo file. But there’s more to it than the technical aspect of combining the sounds, because it must be done artfully, in a way that appropriately conveys the intent of the creator. Whether the artist wants the final product to make you dance, cry, or rage, I have to use my skillset to most accurately convey that emotion. And there are ways to do it!

This is why mixing is both an art and a science.

Sound can be compared with visible light, because both are the manifestation of how we perceive a range of frequencies. With light, the colors of the rainbow represent all the frequencies of light that we can see, with red being the lowest frequency and violet the highest. As mixing engineers, we “paint” with sound onto our medium, just as an artist would combine multiple colors on their canvas. However, instead of a painter’s canvas, our medium is a digital WAV file (years ago, it was analog tape). We take all the sounds of a song, film, podcast, or whatever you are working with, and combine them all onto our canvas, i.e. a stereo .WAV file.

You might be wondering, as I did, what is so complex about mixing, and how can someone’s entire career be devoted to it? Or even, why do you need a person to do it, when we have software?

What Do Mixing Engineers Do?

Like I said earlier, fundamentally, mixing is just combining sounds. So let’s start there. In the early days of recording, there were no multi-track recorders. Les Paul revolutionized the recording world by popularizing the multitrack tape recorder. Before that, engineers recorded with a single microphone.

Think about it: if we were only using a single microphone, how could we mix? Instead of recording all of the sounds separately, we “mixed” by placing the musicians at different positions in the room. For example, if we wanted the saxophone louder, the sax player would have to take a couple of steps toward the microphone. These days, we can record with as many microphones as we’d like, and adjust their volume levels later.

Basic mixing is just that – adjusting the volume level of each sound using faders so that everything is audible: a “balanced mix”. We can also record automation – changes in fader levels over time – for example, a sax player stepping toward the microphone for a solo. In the industry, the phrase “just pushing up faders” refers to mixing something that didn’t need much work done, because you are simply adjusting the volume levels of each track.
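In code, a fader is nothing more than a multiplier, with the familiar dB scale converted to linear gain. Here’s a small sketch of both a static balance and an automated fader ride – the sine wave is just a stand-in for a recorded sax part:

```python
import numpy as np

def db_to_gain(db):
    # a fader position in dB -> the linear multiplier actually applied to the audio
    return 10.0 ** (np.asarray(db) / 20.0)

rate = 44100
t = np.arange(rate * 4) / rate
sax = np.sin(2 * np.pi * 440 * t)        # stand-in for a recorded sax part

# Static balance: sit the sax 6 dB under the rest of the mix.
sax_balanced = sax * db_to_gain(-6.0)

# Automation: ride the fader from -6 dB up to 0 dB across the four-second "solo".
fader_curve = db_to_gain(np.linspace(-6.0, 0.0, len(sax)))
sax_automated = sax * fader_curve
```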

But what if it does need more work? Let’s take a look at some of the problems that can arise and how mixing engineers deal with them.

The Sound Frequency Spectrum

Just as a painter can’t simply mix every color together, we can’t simply combine all the sounds together and expect it to sound pretty.

Remember my analogy of sound and visible light? Just as we have a range of colors that we can see, we also have a range of frequencies that we can hear. If you’ve ever heard the term “white noise”, that’s the sound of every audible frequency occurring at the same time, just like white light is a combination of all colors. It’s not pleasant, but since it contains every frequency, it’s great for masking other sounds, which is why it’s used to help people sleep. And that is the first, and biggest, problem for mixing engineers – the interaction of the sounds we are combining.

With visible light, colors can be defined by the frequency ranges that they occupy, just like how a radio station is found on a particular frequency. When we combine colors, they interact, and we perceive it as an entirely new color (red + yellow = orange). This is also true for sound. When we are mixing, all of the sounds we combine will interact with each other. I’m sure you are familiar with the sound of a telephone ring while you are waiting for the other line to pick up. Did you know that the “ringing” is actually due to the interaction between two specific frequencies? Check it out!

That’s just two specific frequencies interacting. They are called “pure tones” because the sonic information is made up of just one specific frequency. In the real world, sound is made up of tons of different frequencies interacting all at once, just like how the white noise I described earlier is a combination of all audible frequencies. So interaction isn’t always a bad thing; it’s a natural part of sound in the real world. I hope now you can see how deep this rabbit hole goes!
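You can generate that interaction yourself. In North America, the ringback tone is a pair of pure tones at 440 Hz and 480 Hz; summed, they swell and fade 40 times per second, which is the “ringing” texture you hear. A quick sketch:

```python
import numpy as np
import soundfile as sf

rate = 44100
t = np.arange(rate * 2) / rate

# Two pure tones, 440 Hz and 480 Hz (the North American ringback pair).
tone = 0.5 * np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 480 * t)

# Their sum swells and fades 40 times per second (480 - 440 = 40 Hz "beating"):
# that texture is pure interaction, not a third recorded sound.
sf.write("ringback.wav", tone, rate)
```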

Visualizing Sound

Let’s take a look, literally, at some audio. You may have seen audio waveforms; they look like this:

Audio waveforms are representations of sound, with time on the X-axis and volume (synonymous with amplitude) on the Y-axis.

About 3.7 seconds of audio of an effected guitar

Waveforms are useful for visualizing sound in a DAW, but they do not contain all the information of the sound. The element they’re missing, of course, is frequency. Here’s a full visual representation of sound, starting with a sweep through the entire range of human hearing, followed by a composition by Hildegard von Bingen.

In this spectrogram, frequency information is represented like a heat map, with the lowest frequencies on the left and the highest on the right. The reason I wanted to show you the full spectrogram of these sounds is so you can visualize our “empty canvas”, and see how it is filled as we add sound. We only have a limited amount of space on our canvas; you can see how cluttered it gets with only a solo voice. Now imagine that spectrogram being “written” onto our empty .WAV file. We have to make everything fit.
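If you want to generate a spectrogram like this yourself, scipy and matplotlib will do it in a few lines. This sketch builds the same kind of sweep through the audible range and plots its frequency content as a heat map:

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy import signal

rate = 44100
t = np.arange(rate * 5) / rate
# A sweep through the range of human hearing, like the one described above.
sweep = signal.chirp(t, f0=20, f1=20000, t1=t[-1], method="logarithmic")

freqs, times, power = signal.spectrogram(sweep, fs=rate, nperseg=2048)
plt.pcolormesh(times, freqs, 10 * np.log10(power + 1e-12))  # heat map in dB
plt.xlabel("Time (s)")
plt.ylabel("Frequency (Hz)")
plt.show()
```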

In order to make everything fit, we need to turn down, or attenuate, the frequencies of some sounds to make room for others. The most common and effective way to do this is with Equalization, or EQ. If you’ve ever adjusted the bass or treble on a stereo system, you were adjusting the EQ, either boosting or attenuating (turning down) a certain frequency range. The difference is that a stereo’s EQ affects the entire track, and the range of frequencies it affects is predetermined by the manufacturer. In the mixing world, we have complete control over which frequencies we want to adjust, and which sounds we want to apply it to. As you can imagine, the possibilities are endless!
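To make that concrete, here is a sketch of a single EQ band in Python, using the widely published “Audio EQ Cookbook” peaking-filter formulas. A negative gain pulls a range down; a positive gain boosts it. The 400 Hz cut is just an arbitrary example, not a recipe:

```python
import numpy as np
from scipy.signal import lfilter

def peaking_eq(x, fs, f0, gain_db, q=1.0):
    # One EQ band (RBJ "Audio EQ Cookbook" peaking filter).
    # gain_db > 0 boosts around f0; gain_db < 0 attenuates ("pulls out").
    A = 10.0 ** (gain_db / 40.0)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    return lfilter(b / a[0], a / a[0], x)

rate = 44100
track = np.random.randn(rate)                       # stand-in for real audio
# e.g. scoop 3 dB of "boxiness" out of a track around 400 Hz:
treated = peaking_eq(track, rate, f0=400, gain_db=-3.0)
```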

This is just the beginning, but I hope it helps to clarify what mixing is and some of the challenges within the craft. There are plenty more problems that can arise, but this is a good starting point. Stay in the loop by signing up for my newsletter, and if you are a true beginner and still have questions, drop me a comment and let me know what’s on your mind!


Mixing Walkthrough – A Step-by-Step Guide to Making it Through Any Mix

September 6, 2019 by b_five

In this article, I have organized the mixing process into 10 steps, so that even if you get stuck along the way, you can use this as a guide to steer you toward making progress in your mix.

Last week, I released a video on preparing your session for mixing. We’ll basically be picking up from there. Watching it is not a prerequisite; however, if you’d like to follow along, I do suggest taking a look. I also have the full Pro Tools template I’ll be using available here for free.

The steps are organized in the order you will work through your mix, although you will find yourself coming back to certain steps as you treat all of your tracks. Follow these steps and I assure you that you will have a decent mix! Don’t worry if you get stuck on a certain step; you can always come back and refine things further later on.

1. Editing, Balance, and Logistics

Editing is the final step in preparing your mix, but I wanted to include it here, because it is still part of the mixing process. Although this step is concerned more with logistics, you still have the ability to make creative decisions that will help you add to your mix later on.

While in this stage, continue to listen to each track in your session, so that you increase your understanding of the track as a whole, a.k.a. its “form” or “structure”. In Pro Tools, use NumPad Enter to add appropriate markers. Be sure to check the tracks with and without solo enabled, to hear how they fit “in the mix”. Use the fader to establish a general volume balance, but don’t worry if things get covered up here and there. We’re not trying to make it perfect yet, just to establish a rough overall balance so that it is listenable.

While listening, balancing, and editing, I like to take care of other logistical steps along the way. In addition to adding memory locations or markers, so that I can quickly skip to certain sections, you can also save useful window configurations, set the correct BPM so that delays will work nicely, and anything else you might come across that will help speed up your workflow later on. Use this step as the “discovery” stage; you can brainstorm ideas and set things up for later, like particular groupings, bussing, or setting up specific FX tracks. Once you’ve made your way through editing each track, you can move on to the next step.

2. Subtractive EQ, High- and Low-Passing

This is where the fun begins! In this step, we start to listen critically. Our goal here is to remove or attenuate (turn down) frequencies that are undesirable. How do we know what’s undesirable? By listening. Check it out.

The first thing to listen for at this stage is harshness. This is usually caused by a particular resonance within the sound, for example, sibilance in a voice, a standing wave captured by a microphone, or the natural resonance of an instrument like a kick or snare.

Use an EQ with a built-in analyzer to help (I like H-EQ); if there is a particular resonating frequency, it may show up as a peak, making your life easier. If not, you’ll have to locate any offensive frequencies yourself. If nothing sticks out at you immediately, try turning up the volume. If you’re listening at high volume and don’t find yourself cringing at anything, great! If you do find an offensive frequency, use your EQ to pull out a few dB.

To demonstrate, let’s watch a Jedi Master perform this step. Behold, the one and only George Massenburg.

Don’t get too caught up in this stage. If there’s nothing sticking out at you, don’t go on a witch hunt looking for frequencies to take out. However, as you go through each track, it is a good idea to make note of the ranges all of your instruments are occupying. In the next step, we will “make room” for certain sounds, so if you know where those sounds lie in the frequency spectrum, you’ll know where to look.

Lastly, you should spend some time determining which sounds you can high-pass and low-pass. I use a high-pass filter on almost every track in my session. Low frequencies contain the most energy and take up the most headroom. Because of this, they will also make compressors work harder. Don’t be afraid to high-pass your tracks. I will even sacrifice some low end on various instruments, like piano, snare, and guitar, if those instruments aren’t playing a key role in the track. The same goes for low-pass filters; high frequencies can easily add up and become very harsh-sounding if left unchecked.
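A high-pass or low-pass filter is a few lines with scipy. This sketch is a rough stand-in for the filters in your DAW; the cutoff frequencies and the noise “guitar” are just placeholders:

```python
import numpy as np
from scipy.signal import butter, sosfilt

def high_pass(x, fs, cutoff_hz, order=4):
    sos = butter(order, cutoff_hz, btype="highpass", fs=fs, output="sos")
    return sosfilt(sos, x)

def low_pass(x, fs, cutoff_hz, order=4):
    sos = butter(order, cutoff_hz, btype="lowpass", fs=fs, output="sos")
    return sosfilt(sos, x)

rate = 44100
guitar = np.random.randn(rate)            # stand-in for a guitar track
guitar = high_pass(guitar, rate, 100)     # clear rumble out of the low end
guitar = low_pass(guitar, rate, 12000)    # tame fizz at the top
```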

3. Apply a warm compress | Controlling dynamics

Compression

Great! You’ve used subtractive EQ and removed unwanted frequencies from your audio. Now we can start to “place” our sounds in the mix. You should have adjusted your faders by now so that you have a rough/listenable mix. However, you may find that despite your editing and EQing, some elements are still getting lost/covered up, or that they stick out too much at certain points. This is a problem a compressor can solve.

Compressors are used when you want the audio to sit in a “pocket” – meaning that you want the volume to stay generally the same throughout the track. The compressor limits how loud the sound can get, making the loud parts quieter, and then you can turn up the output gain so that the quiet parts are louder. This results in reduced dynamic range, but allows you to place sounds at just the right volume so that they can be heard.

Since compression is a very complex topic, I will go in-depth in another article, but I’ll share some guidelines with you. Using a compressor will require some critical listening while we adjust the settings for each sound we want to process. Most likely, each track will require a different compressor setting, determined in large part by the attack transient of the audio. As I mentioned, the goal of compression is to control the dynamic range of your sound so that all parts of it are audible. For example, if you can hear the “slap” of the beater on your kick drum, but the low-end resonance is being masked, you can use compression to help. All the ways you can use compression are beyond the scope of this article, but you can find useful tactics for compressing any sound on the internet. Just make sure you have a goal in mind for what you want your compressor to do for you. And I’ve said this before, but no matter what gear or plugin you use, RTFM!
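For the curious, here is a bare-bones, feed-forward compressor sketch – not how any particular plugin works, just the core idea: follow the signal’s level, and turn down whatever rises above the threshold, scaled by the ratio. The attack and release times control how fast the detector reacts:

```python
import numpy as np

def compress(x, fs, threshold_db=-20.0, ratio=4.0,
             attack_ms=10.0, release_ms=100.0, makeup_db=0.0):
    # Level detector: a smoothed envelope that reacts at the attack/release speeds.
    atk = np.exp(-1.0 / (fs * attack_ms / 1000.0))
    rel = np.exp(-1.0 / (fs * release_ms / 1000.0))
    env = np.zeros(len(x))
    level = 0.0
    for i, s in enumerate(np.abs(x)):
        coeff = atk if s > level else rel
        level = coeff * level + (1.0 - coeff) * s
        env[i] = level

    env_db = 20 * np.log10(np.maximum(env, 1e-9))
    over_db = np.maximum(env_db - threshold_db, 0.0)       # how far above threshold
    gain_db = -over_db * (1.0 - 1.0 / ratio) + makeup_db   # reduce by the ratio, add makeup
    return x * 10.0 ** (gain_db / 20.0)
```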

Gating/Expanding

A Noise Gate or Expander will increase your dynamic range, rather than limit it. This is an important step, because it ensures that there is audio where you want it, and none where there shouldn’t be any. By adjusting your gate settings, you can guarantee absolute silence in areas where nothing should be audible – the same reason we edited our audio to begin with.

An expander is just a gate that, instead of silencing the audio, will attenuate it by the specified amount.
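Extending the compressor sketch above, a gate or downward expander can be expressed the same way – only now the gain reduction happens below a threshold instead of above it. Real gates ramp the gain smoothly; this simplified version switches per sample after a smoothed level detector:

```python
import numpy as np

def gate(x, fs, threshold_db=-50.0, floor_db=None, release_ms=100.0):
    # floor_db=None silences below the threshold (a gate);
    # a finite floor_db (e.g. -12) attenuates instead (a downward expander).
    rel = np.exp(-1.0 / (fs * release_ms / 1000.0))
    floor = 0.0 if floor_db is None else 10.0 ** (floor_db / 20.0)
    thresh = 10.0 ** (threshold_db / 20.0)
    out = np.empty_like(x)
    level = 0.0
    for i, s in enumerate(np.abs(x)):
        level = max(s, rel * level)            # fast attack, smooth release
        out[i] = x[i] if level >= thresh else x[i] * floor
    return out
```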

A/B-ing Your Adjustments

Always check your work by bypassing/un-bypassing your plugins. This allows you to be sure you are making positive contributions to your mix.

When bypassing an insert, make sure that the volume is the same whether it is in or out. If not, your brain can play tricks on you, making you think that your processing sounds better than the original, when in fact it’s just louder. You can do this by checking the output meters on the plugin itself, or on the track in your DAW, as you bypass the plugin.
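One crude but useful way to level-match an A/B comparison offline is to match RMS. In this sketch, the “processed” version is just the original with extra gain, standing in for a real insert chain:

```python
import numpy as np

def rms_db(x):
    return 20 * np.log10(np.sqrt(np.mean(x ** 2)) + 1e-12)

original = np.random.randn(44100)          # stand-in for the unprocessed track
processed = original * 1.4                 # pretend an insert chain added ~3 dB

# Match levels before comparing, so "better" can't secretly mean "louder".
trim_db = rms_db(original) - rms_db(processed)
processed_matched = processed * 10.0 ** (trim_db / 20.0)
```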

If you’ve ever heard the term “gain staging”, it refers to controlling the signal level as it travels through your processing chain. Many plugins have metering built in, so make sure you are checking this, since you could be getting distortion from clipping a plugin’s input, without even knowing it.

Ableton Live has handy meters between your plugins, so that you can easily keep track of your gain staging

When making the final decision on using any kind of processing, make sure you are asking yourself, “Is this solving a problem?” Always check your processing un-soloed in the mix, and make sure it is “adding value” to your track.

4. Create space for the elements you want to hear

At this point, you are becoming very familiar with all of your tracks. You know what frequency range they occupy… and you also know that the same frequency range is occupied by something else!

Remember, our job is to “mix” all of our sounds together, so that everything is audible. So what do we do when multiple sounds are occupying the same frequency range? Let’s start by identifying what those frequencies and instruments are. Then, we can use our mixing tools appropriately.

I’ve divided the audible spectrum into “frequency spaces” that make sense to me. You’ll notice that as you go from low to high, more frequencies are included. This is because of the logarithmic nature of how we perceive sound. This also means that your EQ movements will be affecting more frequencies the higher you go, and in the lower range, you will need to be more precise and surgical with your EQing.

Lows – 20 Hz – 150 Hz

This space is home mostly to kick drum and bass. It is a sacred space. Low frequencies carry the most energy and take up the most headroom, so if there’s something not so important here, filter it out! Between 100-150Hz, you may find the “body” of some elements like male voice, guitar, and snare – use your judgement, and decide if you really need those elements in this range.

Any sounds below 80 Hz will really need to move air, which is why we use a subwoofer to hear this range. It’s a nice big speaker lined with flexible material that gives it the ability to travel back and forth, so it can push and pull a large volume of air. Be very careful about what you’re feeding it. A nice bass line and a tiny bit of kick drum will be plenty. I find that most kicks I work with have their “punch” between 80-120Hz – and I always high-pass filter my kicks.

Hi pass everything!

Low-mid – 150 Hz – 500Hz

This range is where it starts to get really busy. The lower end of melodic instruments and voice lives here, in addition to many percussion instruments. You will have to be very judicious here, as sounds in this range can still take up a significant amount of headroom. Here are some guidelines for what to look out for. Consider these starting points:

  • You can probably take out some kick drum here. As I said before, your kick’s “punch” is most likely coming from between 80-120Hz. Try pulling out 400Hz on your kick, and go from there.
  • Male voice, around 300Hz, can be very “growl-y”. That being said, I have one client who absolutely needs 300Hz. Like George says in the video above, you need to hear how it sounds boosted and attenuated, so do some experimenting.
  • Try this: Put a sharp low-pass filter across your whole mix. Pull it down to 500Hz. What do you hear? What melodic instruments do you hear? Decide what is important, and what you can remove. Always be sure to check your adjustment in the mix, not just soloed.
  • Closer to 150Hz will be the “body” of many instruments. However, you may not need it in the mix. For example, I will heavily attenuate background vocals in this range, because I only want the main vocal to retain that “fullness”. The same goes for many acoustic instruments, even piano and guitars. Depending on their purpose in the mix, it might be okay to attenuate this range for them.

Mids – 500 Hz – 1500 Hz

Honk honk! This is the range that REALLY cuts through your mix. This is also where the most overlap will happen. Not only do you have essentially every instrument and voice here, but also the harmonics of the lower frequencies (*wink wink* you may want to revisit them and scoop out this range, if they are interfering).

High Mids – 1500 Hz – 5000 Hz

This range is where much of the “harshness” lives. I usually find some very harsh frequencies in vocals between 2000 and 3500 Hz. The higher end of this range holds the upper limit of melodic fundamentals, with the highest note on a grand piano being a high C at 4186 Hz.

As we did before, throw an EQ on your mix bus, but this time, high-pass it. Bring the filter up to 1500 Hz, and listen for what’s there. Decide what’s needed and what isn’t.

Highs – 5000 Hz – 20,000 Hz

Above 10,000 Hz is commonly referred to as “air”, “shine”, or “brightness”. Boosting 10-12 kHz in a vocal can add clarity that really makes it rise above the rest of your mix – but only if this range isn’t occupied by many other things. When these frequencies add up, they can become a problem.

One thing to look out for in this range is sibilance in vocals. But hey, there’s a tool for that! You can use a de-esser, which is essentially a “ducking” effect combined with an EQ. It listens for just the frequency you tell it to, and then attenuates the signal if that frequency gets too loud. And you don’t have to use it on just vocals; de-essers can be effective at treating harsh frequencies in all kinds of things. You may find that using multiple de-essers helps, since there can be harshness at multiple frequencies. Just don’t overdo it! De-essing too heavily can make your audio sound unnatural.
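Here’s a crude sketch of that idea: a band-passed copy of the signal acts as the detector, and the whole track is turned down whenever that band gets too loud. Commercial de-essers are smarter – many attenuate only the offending band – but the principle is the same, and the 5-9 kHz range here is just an example:

```python
import numpy as np
from scipy.signal import butter, sosfilt

def de_ess(x, fs, band=(5000, 9000), threshold_db=-30.0, ratio=4.0):
    # Detector: listen only to the sibilant band.
    sos = butter(4, band, btype="bandpass", fs=fs, output="sos")
    detector = np.abs(sosfilt(sos, x))
    win = max(1, int(fs * 0.002))                       # ~2 ms smoothing
    detector = np.convolve(detector, np.ones(win) / win, mode="same")

    # Gain computer: duck the whole signal when the band exceeds the threshold.
    det_db = 20 * np.log10(np.maximum(detector, 1e-9))
    over_db = np.maximum(det_db - threshold_db, 0.0)
    return x * 10.0 ** (-over_db * (1.0 - 1.0 / ratio) / 20.0)
```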

Other Tools for Creating Space

Now is a good time to start talking about other ways to create space. EQ isn’t our only tool, you know! We can also use a technique called “ducking”. Ducking is the process of using a compressor to reduce the volume of a sound, but only while another sound is playing. It “ducks” the audio, making it move out of the way, while the sound we want to hear comes through. Check out my quick video on the topic for a demonstration!

Like I said in the video, you can use this technique anywhere, just don’t overdo it. You can also use automation to achieve similar results, but then you would be doing the work manually, taking more time. By using sidechain compression, you are programming the compressor to do the work for you, which can be more effective if you are trying to move through a mix quickly. Later on, you can polish things up with automation.
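As a sketch of what the sidechain is doing: follow the level of the key signal (say, the lead vocal) and use it to pull the other track down by up to a fixed amount. The -6 dB depth and 150 ms release are arbitrary starting points, and the noise arrays stand in for real tracks:

```python
import numpy as np

def duck(track, key, fs, depth_db=-6.0, release_ms=150.0):
    # Follow the key's level (fast attack, smooth release)...
    rel = np.exp(-1.0 / (fs * release_ms / 1000.0))
    env = np.zeros(len(key))
    level = 0.0
    for i, s in enumerate(np.abs(key)):
        level = max(s, rel * level)
        env[i] = level
    env /= env.max() + 1e-12               # 0..1: how "present" the key is

    # ...and pull the other track down by up to depth_db while the key plays.
    return track * 10.0 ** (depth_db * env / 20.0)

rate = 44100
pad = np.random.randn(rate * 2)            # stand-in for a sustained synth pad
vocal = np.random.randn(rate * 2)          # stand-in for the lead vocal (the key)
pad_ducked = duck(pad, vocal, rate)
```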

5. Stereo imaging

The last technique we will talk about to help create space is stereo imaging. You can use panning and other effects to move sounds around in the stereo field, so that they don’t interfere with certain elements. Remember, if something is panned center, it simply means that the sound is coming out of both the left and right speakers equally. If there is a sound you’d like to bring out, and it is panned to the center, like a lead vocal, your solution could be as simple as panning the competing elements left or right.

Now, I said that center-channel information is when the sound is coming out of both the left and right channels equally. Since we have two ears, if the information is the same in both our ears, we perceive that sound as being in “the center”. BUT, the information has to be exactly the same, and occurring at the exact same time.

You see, our brains are very complex. We have the ability to determine, almost instantly, if a sound came from five feet to our left, or 100 feet away to our right. Without us even knowing it, our brains perceive the sub-millisecond difference between when that sound hits our left ear versus our right ear. Now then, what would happen if we delayed one side of a sound in our mix? Rather than panning, which would reduce the volume on one side, we can instead delay the sound on one side. This is, in fact, a common technique used to make more room for center-channel information, “spreading” out the affected sound so that we don’t perceive it as in the middle. I have an FX channel set up for this in my Pro Tools template, and I will link to a Quick Tip video in a future update so that you can see a demonstration of the technique.
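In code, this trick (often called the Haas effect) is just a short delay on one channel. Delays under roughly 30 ms tend to be heard as width rather than as an echo; the 15 ms here is an arbitrary starting point:

```python
import numpy as np

def haas_spread(mono, fs, delay_ms=15.0):
    # Dry signal on the left, a short-delayed copy on the right.
    d = int(fs * delay_ms / 1000.0)
    delayed = np.concatenate([np.zeros(d), mono])[: len(mono)]
    return np.column_stack([mono, delayed])   # stereo: (samples, 2)

rate = 44100
synth = np.random.randn(rate)                 # stand-in for a mono synth part
wide = haas_spread(synth, rate)
```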

6. ENHANCE! Add color, shine, and sweetness

Now that we’ve done damage control, mitigated potential problems and made space for the things we want, we can finally add to our sound and bring out the best elements in our mix.

First, think about what the key elements are in your mix. What absolutely needs to come out? You’ll probably want to add some “air” to your vocals – boosting 10-12k or above will help. You’ll also probably want to make your bass nice and smooth, and your kick really punchy. Boosting key frequencies with EQ can help, but it may not always be the solution.

In this step, it may be helpful to split certain tracks or busses into multiple frequency ranges, so that you can treat those ranges differently. For example, let’s say you have your kick drum going to your kick buss. You can duplicate that buss, so that you have two. Then, high-pass one, and low-pass the other at the same frequency. One way this is useful: apply the ducking technique I described earlier to the low-frequency buss, side-chained to your bass line, so the two don’t interfere with each other. That way, your grooving bass line is audible all the way through!
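A sketch of that split, using complementary zero-phase Butterworth filters so the two halves sum back to (nearly) the original. The 120 Hz split point and the noise stand-in are arbitrary, and from here you could duck the low buss against the bass, as in the earlier sketch:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

rate = 44100
kick = np.random.randn(rate * 2)          # stand-in for the kick buss

split_hz = 120                            # where the duplicated buss gets divided
lp = butter(4, split_hz, btype="lowpass", fs=rate, output="sos")
hp = butter(4, split_hz, btype="highpass", fs=rate, output="sos")

# Zero-phase filtering, so the two halves recombine cleanly.
kick_lows = sosfiltfilt(lp, kick)         # duck this one against the bass line
kick_highs = sosfiltfilt(hp, kick)        # this one keeps the beater "slap" intact
kick_buss = kick_lows + kick_highs
```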

Saturation is another great tool for this stage. There is a free plugin that I absolutely love by Softube available here. From their website: “Use it to fatten up bass lines, add some harmonics and shimmer to vocals, or simply destroy your drum loop.” Saturation is a type of distortion that adds harmonic content to the audio. Be careful, as it’s easy to overdo it.
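Under the hood, the classic flavor of saturation is a soft-clipping waveshaper. A tanh curve, as in this sketch, adds odd harmonics; drive pushes the signal harder into the curve, and mix blends it with the clean signal (the values here are arbitrary):

```python
import numpy as np

def saturate(x, drive=2.0, mix=0.3):
    # tanh soft clipping adds odd harmonics; drive pushes harder into the curve.
    wet = np.tanh(drive * x) / np.tanh(drive)   # normalized so peaks stay comparable
    return (1.0 - mix) * x + mix * wet          # blend with the clean signal

rate = 44100
bass = np.sin(2 * np.pi * 60 * np.arange(rate) / rate)   # stand-in bass line
fattened = saturate(bass, drive=3.0, mix=0.4)
```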

No matter how you decide to affect your audio, I want to mention one very important idea – parallel processing. We have to remember that everything we are doing here is affecting the sound artificially, and it’s very easy to move too far from the original sound we had. This is not always a bad thing, but we spent so much time polishing the audio that we may want to keep some of that original sound in the mix. To do that, we can apply all of our processing alongside the original audio. If you are an Ableton user, you’re very lucky, because you can do this very easily with an audio effect rack. Everyone else can simply duplicate the audio they want to process, and do all of the processing on the duplicated track. From there, use the faders to adjust the mix of the dry/effected signal.
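The whole idea fits in a couple of lines: keep the untouched original and add the heavily processed duplicate underneath it at a lower fader level. The tanh stand-in and the -6 dB blend are arbitrary:

```python
import numpy as np

dry = np.random.randn(44100)              # the original, untouched track
processed = np.tanh(4.0 * dry)            # stand-in for a heavily effected duplicate

# Parallel processing: the clean signal plus the duplicate, tucked 6 dB down.
blended = dry + processed * 10.0 ** (-6.0 / 20.0)
```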

7. Practical effects with purpose | Bringing it all together

Now it is time to add cohesion to your mix. We’ve mixed everything together as best we can on an individual level, so now we can take a step back and see how we can affect the mix as a whole.

First, are there any elements that should sound like they are together? The most common example of this would be percussion elements – we can affect all of our acoustic percussion so that they sound like they are in the same room. You can use – you guessed it – a “room” reverb, in stereo, and send each element to it, panning the sends appropriately. You may even want to send additional instruments to it. If you haven’t already, you may want to buss similar elements to the same aux tracks so that you can process them on the group level. Then, if you decide to send the group’s buss to a reverb, the panning information is retained.

Okay, now let’s talk about vocals. You probably want your main vocals to stand out, but not so much that it sounds completely separate from the rest of your mix. You can use reverb creatively so that your vocals mesh nicely. I normally use four different types of reverbs on my vocals – SUBTLY! In order of how much I apply, they are:

  1. Small room reverb – helps with “dryness” without pushing the sound too far back
  2. Vocal plate reverb
  3. Mono spring reverb
  4. Long hall reverb, with no early reflections

I may also send my main vocals to reverbs that are shared by other instruments, if I think it’s still not meshing well. Using reverb is very tricky, and takes a lot of experience to master. Start with just a little bit; nothing ruins a mix quite like too much reverb. ALWAYS remember to control your reverb – use aux tracks, then EQ and compression after the reverb unit. In the real world, distant sounds lose high-frequency information, so the longer your reverb, the more you may want to roll off the high frequencies. And just like the rest of our audio, we want to control the dynamics of our reverb too, so that it stays at just the right volume.

8. Creative effects

Go wild. This one’s all you! I can’t tell you what to do here – let your creativity flow. I put this step fairly late in the process, because I believe that you should focus on sonic quality first, then get creative once you’ve got things sounding the best they can be.

9. Add depth, polishing

It’s all coming together. At this point, you are fine-tuning the mix, making sure everything is sitting in just the right place, including your effects and reverb. Try not to make any large sweeping changes, as this could throw off your whole mix. Since you already put in the work doing dynamics control with compression, a 3dB change on your bass track might completely change your balance, so start small if you think something needs to be adjusted.

You’ve added a lot, so double check for those problem frequency ranges, like the highs, 10 kHz and above. Continue to see what you can take out, like in earlier steps, if some elements still aren’t coming through. Listen for harshness, and use solo/mute to see where it’s coming from. At this stage, I like to really crank up the volume on my mix. During a master class with engineer/producer “Bassy” Bob, he mentioned that the sign of a great mix is one you can really turn up loud!

10. Final automation

In this step, we are simply adjusting faders, creating movement within our mix. Again, subtlety is key. Some examples of what to do here would be pushing up your vocals 1-2dB for that last hook, bringing out a solo section, or pulling down some effects right after a drop. When doing your final automation passes, listen to the track from beginning to end. We are shaping the listener’s experience here, turning our mix into a story, keeping them hooked and interested. You have to be in the same mindset the listener would be in, so that you can make the decisions that will guide their experience.

One trick I use frequently is to have the track end with many of the elements 1.5dB louder than they started. It gives the mix a sense of movement and energy, and also counteracts ear fatigue during the course of the track. Use this trick at your discretion!

—————————

Final thoughts

This is by no means an end-all guide to mixing, but it does address many of the things you should be thinking about while mixing. Many of the topics covered are just my workflow, and I encourage you to steal what you can and make it your own. Let me know how you do! I’m really interested in how you guys move forward.

I plan to release more content demonstrating many of the topics covered here, so stay tuned if you are interested. If there’s something specific you’d like to see, drop a comment and I will prioritize it! I hope this article has helped you think about mixing in a different way. As always, let me know if it has (or hasn’t!), I look forward to your emails! Peace!

