A friend of mine recently asked me for advice on what to buy so that he could produce his own music. He’s an amazing vocalist, and wanted to start recording. He also asked me something that took me by surprise:
“So, like… what is mixing?”
Sometimes I forget that there was a time when I didn’t know the answer, either. When I first started recording, I thought you just needed to set up the mics, hit record, and give a good performance. When I was in school studying music in 2008, I wanted to make a demo for my string quartet to help us get gigs. I bought a pair of mics, which I thought were good at the time, and set them up in our practice space to record. As you might expect, the results weren’t great; even with a great performance, the tracks still sounded dull and thin. My colleagues and I were disappointed, and I felt like I had let them down. After that experience, I decided that the next step in my career needed to be filling in the gaps in my understanding of recording. So I went to school for audio engineering, and fell in love with the art and science of mixing.
So what IS it?!
At the most basic level, “mixing” is the process of taking multiple sounds and combining them into a single sound. Most commonly this is done using a Digital Audio Workstation (DAW) like GarageBand, Pro Tools, or Ableton Live, but can also be done in real time using a mixing console, which you see in recording studios.
Of course, that doesn’t do much to explain the process. Currently, most mixing is done “in the box”, meaning inside a computer, using the software I just mentioned. Here’s how it works.
If I were to mix a song for someone, they would first send me all of the audio tracks that are part of the song: drums, guitars, basses, synths, vocals, etc. I would then import those tracks into my software. I use Pro Tools mostly, which is the industry standard, and you will see it in almost every professional recording studio. Once I have all the tracks in my Pro Tools session, organized nicely, it looks like this:
From here, I can process the sounds as I see fit (more on that later). Then, the software combines (“sums”) everything together, with the end result being just two tracks, Left and Right.
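To make “summing” concrete, here’s a toy sketch in Python of what a mix bus does: each track is a list of samples, and the output is their weighted sum. The track names and gain values are made up for illustration – a real DAW does this on left and right channels at full audio sample rates.

```python
# Toy sketch of a DAW's mix bus: tracks are lists of samples, and the
# bus adds them together sample by sample, scaled by each fader's gain.

def sum_tracks(tracks, gains):
    """Combine several mono tracks into one by weighted addition."""
    length = max(len(t) for t in tracks)
    mix = [0.0] * length
    for track, gain in zip(tracks, gains):
        for i, sample in enumerate(track):
            mix[i] += sample * gain
    return mix

# Hypothetical four-sample "tracks", just to show the arithmetic.
vocal  = [0.5, 0.5, 0.5, 0.5]
guitar = [0.25, -0.25, 0.25, -0.25]

mix = sum_tracks([vocal, guitar], gains=[1.0, 0.5])
print(mix)  # [0.625, 0.375, 0.625, 0.375]
```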
When you listen to your music library, you are listening to a single stereo audio file – the output of the mixing software, such as an .MP3 or .WAV file. This is the job of the mixing engineer: to take all of those sounds that were recorded, and combine them into a single stereo file. But there’s more than just the technical aspect of combining the sounds, because it must be done artfully, in a way that appropriately conveys the intent of the creator. Whether the artist wants the final product to make you dance, cry, or rage, I have to use my skillset to most accurately convey that emotion. And there are ways to do it!
This is why mixing is both an art and a science.
Sound can be compared with visible light, because both of them are the manifestation of how we perceive a range of frequencies. With light, the colors of the rainbow represent all the frequencies of light that we can see, with red being the lowest frequency, and violet being the highest. As mixing engineers, we “paint” with sound onto our medium, just as an artist would combine multiple colors onto their canvas. However, instead of a painter’s canvas, our medium is a digital WAV file (years ago, it was analog tape). We take all the sounds of a song, film, podcast, or whatever sounds you are working with, and combine them all onto our canvas, i.e. a stereo .WAV file.
You might be wondering, as I did, what is so complex about mixing, and how can someone’s entire career be devoted to it? Or even, why do you need a person to do it, when we have software?
What Do Mixing Engineers Do?
Like I said earlier, fundamentally, mixing is just combining sounds. So let’s start there. In the early days of recording, there were no multi-track recorders. Les Paul was responsible for revolutionizing the recording world, by popularizing the multitrack tape recorder. Before that, engineers recorded with a single microphone.
Think about it, if we were only using a single microphone, how could we mix? Instead of recording all of the sounds separately, we “mixed” by placing the musicians at different positions inside the room. For example, if we wanted a saxophone louder, the sax player would have to take a couple steps toward the microphone. These days, we can record with as many microphones as we’d like, and adjust their volume levels later.
Basic mixing is just that – adjusting the volume level of each sound using faders, so that everything is audible: a “balanced mix”. We can also record automation – changes in fader levels over time; for example, a sax player stepping toward the microphone for a solo. In the industry, the phrase “just pushing up faders” refers to mixing something that didn’t need much work done, because you are simply adjusting the volume levels of each track.
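Those fader moves can be sketched in code, too. Here’s a minimal, hypothetical example of automation as a gain that changes over time – a linear ramp, like slowly pushing a fader up for a solo:

```python
# Toy automation: apply a linear gain ramp across a clip, like riding
# a fader up from silence to full volume over the length of the clip.

def apply_automation(samples, start_gain, end_gain):
    n = len(samples)
    out = []
    for i, sample in enumerate(samples):
        # Interpolate the fader position between its start and end values.
        gain = start_gain + (end_gain - start_gain) * i / (n - 1)
        out.append(sample * gain)
    return out

sax = [1.0, 1.0, 1.0, 1.0, 1.0]  # a made-up, constant-level "sax" clip
print(apply_automation(sax, 0.0, 1.0))  # [0.0, 0.25, 0.5, 0.75, 1.0]
```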
But what if it does need more work? Let’s take a look at some of the problems that can arise and how mixing engineers deal with them.
The Sound Frequency Spectrum
Just as a painter can’t simply mix every color together, we can’t simply combine all the sounds together and expect it to sound pretty.
Remember my analogy of sound and visible light? Just as we have a range of colors that we can see, we also have a range of frequencies that we can hear. If you’ve ever heard the term “white noise”, that’s the sound of every audible frequency occurring at the same time, just like white light is a combination of all colors. It’s not pleasant, but since it contains every frequency, it’s great for masking other sounds, which is why it’s used to help people sleep. And that is the first, and biggest, problem for mixing engineers – the interaction of the sounds we are combining.
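Since white noise contains every audible frequency at once, one common way to approximate it digitally is simply to draw every sample at random. A sketch, not a studio-grade noise generator:

```python
import random

def white_noise(num_samples, seed=0):
    """Approximate white noise: every sample is an independent random value."""
    rng = random.Random(seed)  # seeded so the result is repeatable
    return [rng.uniform(-1.0, 1.0) for _ in range(num_samples)]

# One second's worth of noise at a hypothetical 1000 samples per second.
noise = white_noise(1000)
```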
With visible light, colors can be defined by the frequency ranges that they occupy, just like how a radio station is found on a particular frequency. When we combine colors, they interact, and we perceive it as an entirely new color (red + yellow = orange). This is also true for sound. When we are mixing, all of the sounds we combine will interact with each other. I’m sure you are familiar with the sound of a telephone ring while you are waiting for the other line to pick up. Did you know that the “ringing” is actually due to the interaction between two specific frequencies? Check it out!
These are just two specific frequencies interacting. These are called “pure tones” because the sonic information is made up of just one specific frequency. In the real world, sound is made up of tons of different frequencies interacting all at once, just like how the white noise I described earlier is a combination of all audible frequencies. So, interaction isn’t always a bad thing, it’s a natural part of sound in the real world. I hope now you can see how deep this rabbit hole goes!
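If you want to compute that interaction yourself, you can generate two pure tones and sum them. The North American ringback tone is commonly described as a mix of 440 Hz and 480 Hz sine waves; the values below assume that description, and the sample rate is an arbitrary choice for the sketch.

```python
import math

SAMPLE_RATE = 8000  # samples per second; an assumption for this sketch

def tone(freq, duration):
    """Generate a pure tone (a single frequency) as a list of samples."""
    n = int(duration * SAMPLE_RATE)
    return [math.sin(2 * math.pi * freq * i / SAMPLE_RATE) for i in range(n)]

def combine(a, b):
    """Sum two tones sample by sample, halved so the result can't clip."""
    return [(x + y) / 2 for x, y in zip(a, b)]

# 440 Hz + 480 Hz: two pure tones whose interaction we hear as "ringing".
ring = combine(tone(440, 1.0), tone(480, 1.0))
```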
Let’s take a look, literally, at some audio. You may have seen audio waveforms, they look like this:
Audio waveforms are representations of sound, with time on the X-axis and volume (synonymous with amplitude) on the Y-axis.
Waveforms are useful for visualizing sound in a DAW, but they do not contain all the information of the sound. The element they’re missing, of course, is frequency. Here’s a full visual representation of sound, starting with a sweep through the entire range of human hearing, followed by a composition by Hildegard von Bingen.
In this spectrogram, frequency information is represented like a heat map, with the lowest frequencies on the left and the highest on the right. The reason I wanted to show you the full spectrogram of these sounds is so you can visualize our “empty canvas”, and see how it is filled as we add sound. We only have a limited amount of space on our canvas; you can see how cluttered it gets with only a solo voice. Now imagine that spectrogram being “written” onto our empty .WAV file. We have to make everything fit.
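A spectrogram like this is built by measuring, slice by slice, how much of each frequency is present in the signal. Here’s a naive sketch of that measurement for one frequency (a textbook discrete Fourier transform; real tools use the much faster FFT):

```python
import math

def dft_magnitude(samples, freq_bin):
    """How strongly one frequency component is present in a signal (naive DFT)."""
    n = len(samples)
    re = sum(s * math.cos(2 * math.pi * freq_bin * i / n) for i, s in enumerate(samples))
    im = sum(s * math.sin(2 * math.pi * freq_bin * i / n) for i, s in enumerate(samples))
    return math.hypot(re, im) / n

# A test signal containing only frequency bin 3 shows energy only in bin 3.
n = 64
signal = [math.sin(2 * math.pi * 3 * i / n) for i in range(n)]
print(round(dft_magnitude(signal, 3), 3))  # 0.5
print(round(dft_magnitude(signal, 5), 3))  # 0.0
```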
In order to make everything fit, we need to turn down, or attenuate, the frequencies of some sounds, to make room for others. The most common and effective way to do this is by using Equalization, or EQ. If you’ve ever adjusted the bass or treble on a stereo system, you’re adjusting the EQ, either boosting or attenuating (turning down) a certain frequency range. The difference is that a stereo’s EQ affects the entire track, and the range of frequencies it is affecting is predetermined by the manufacturer. In the mixing world, we have complete control over what frequencies we want to adjust, and on which sounds we want to apply it. As you can imagine, the possibilities are endless!
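As a taste of what EQ does under the hood, here’s one of the simplest possible filters: a one-pole low-pass, which attenuates high frequencies while letting low ones through. The coefficient is arbitrary and real EQs are far more flexible – this is just a sketch of the idea of frequency-dependent attenuation.

```python
def low_pass(samples, alpha=0.2):
    """One-pole low-pass: smooths the signal, attenuating high frequencies."""
    out = []
    prev = 0.0
    for s in samples:
        prev = prev + alpha * (s - prev)  # move a fraction toward each new sample
        out.append(prev)
    return out

slow = low_pass([1.0] * 100)        # low-frequency content: passes through
fast = low_pass([1.0, -1.0] * 50)   # high-frequency content: attenuated
print(round(slow[-1], 2), round(max(abs(x) for x in fast), 2))  # 1.0 0.2
```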
This is just the beginning, but I hope this helps clarify what mixing is and some of the challenges within the craft. There are plenty more problems that can arise, but this is a good starting point. Stay in the loop by signing up for my newsletter, and if you are a true beginner but still feel unfulfilled, drop me a comment and let me know what’s on your mind!