Thursday, June 1, 2017

Mix Bus Processing Isn't Mastering

This week I thought I’d take some time to cover something I end up talking about quite a bit. It’s not uncommon to see people referring to their “mastering” chain when it’s just a few plugins on a mix bus. Let’s clear something up: that’s still mixing. If you decide to work with just plugins on your mix bus, then you have decided not to get your music mastered.

The mastering stage of audio production and the processes it contains can’t be distilled down to a couple of plugins on a mix bus. I spend a bit of time covering subjects and issues like this in the book I’m writing, but for this post, I'd like to quickly hit a few highlights.
  • Mastering isn’t just aesthetic processing
  • Loss of Perspective
  • Different Workflows
  • Different Mindsets
  • Lack of Quality Assurance

Mastering isn’t just aesthetic processing

The activities that make up the mastering stage of audio production go beyond the aesthetic processing people often associate with it (louder, brighter, etc.). The engineer is identifying issues that have cropped up or persisted through the recording and mixing process, as well as preparing the audio for distribution on various platforms and encoding formats. Error checking, metadata, deliverable creation, and many other components make up the mastering phase.

Loss of perspective

It’s no secret that mixing a song takes longer than mastering a song. It’s easy to lose perspective in these situations and become accustomed to hearing elements of a mix a particular way. So if there is a problem, the problem starts to sound normal.

Different Workflows

Most modern mix engineers don’t mix a song in passes. What I mean by this is they don’t start from the beginning of the song and play it through, working on the song as it plays. This was much more of a thing when tape was used in the mixing process. Today many mix engineers will tackle various parts of the song in pieces.

Mastering, on the other hand, should be done in passes. This way you are hearing the song as a whole and how its various pieces fit together, not as isolated sections.

Mastering and mixing are two different mindsets

This is a subject I don’t often hear people talking about. Mastering is more surgical and precise than mixing. It’s part creative and part science. Rather than working with a distribution of elements in a frequency spectrum, we are working with a distribution of frequencies that have elements in them. We also have to deal with the physical limitations of formats and the expectations of the public as they relate to the various genres we are working on. The analytical side kicks in with error correction as well. Mixing leans much more to the emotional side, simply doing what feels right, even if that means doing something extreme.

In mastering, it's a balancing act between frequencies, emotion, and elements, trying to come up with a best-of-both-worlds compromise that is better than where the mix left off.

Lack of Quality Assurance

It’s not possible to be your own quality check. This is especially true in the moment while you are mixing. Unlike my previous point about loss of perspective, this one is about having someone else perform the master. A different set of ears, in a different room, with a different monitoring system goes a long way toward catching issues before releasing music out into the world and ensuring it sounds as good as possible.

What Should You Do?

Remember when the music creation process used to be collaborative? Make it that way again. Pass the mastering on to a mastering engineer, someone who makes the process their specialty. The right engineer will provide perspective and make your work that much better.

If you decide to master the music yourself, print the mix and master it in another session. Take a break and come back to the song later, preferably a day or two, so there is some time between the mixing and mastering processes. Why put it in another session and not keep it in the mix session? Because there is too much temptation to just start mixing again. Maybe that's what needs to happen, but there is less temptation when dealing with just a stereo mix. You may even surprise yourself.

Thursday, May 4, 2017

5 Reasons You Should Care About Audio Quality

If you are an artist today, you know how hard it is to get heard. Listeners nowadays are constantly bombarded from all directions by competition for their attention. So here’s an interesting question: why should you care about audio quality?

The quality of a recording is the frame and presentation of your music. An art gallery doesn’t just pull a piece of dusty art from the back and lean it up against a wall on the floor. They clean it up, frame it properly, and hang it at eye level. They also organize and display it with similar works. Only in the music world, the similar works are all of the big name artists that you know and love.

Just to clarify what I mean by audio quality, I am talking about all of the things that go into how a recording is presented. This is from the initial capture of the instruments to the final mastering.

Here are 5 reasons artists should care about audio quality.

1. Immediate Attention

I already mentioned competition. Competition for attention, let alone for your music, has never been higher. A high-quality recording can capture attention immediately and give your music a listening chance. Maintaining that attention is up to how good the song is.

2. Avoid Artistic Penalties

Average listeners can’t always differentiate between a bad song and a bad recording. So a good song poorly recorded may be perceived as bad. After initially listening to your music, a listener may decide that they don’t want to continue, and getting them to revisit in the future could be difficult.

3. Demonstrate Commitment 

A high-quality recording demonstrates effort and commitment. It says that you took the time to do things right. If you don’t care about how your art is presented, why would you expect a listener to care? Listeners want to know that if they dedicate time to you, you are going to stick around.

4. More Money

Higher quality audio means more plays, which equals more money in your pocket. The old paradigm of purchasing an album once just isn’t there anymore. A listener no longer has to take a chance on your album. You now need streams to make up for album sales, and the more streams, the better. More sonically pleasing music has the best chance of getting repeat plays on streaming platforms.

5. Remove Regret 

You have to live with the recording of your material forever. Maybe not forever, but at least as long as you are alive. In the digital age it’s possible your music might last forever. Artistic regret is something that we have all felt and will all feel again. One thing you can make sure you don’t have regrets about is the quality of the recording.

Final Thought

As a final thought, I see quite a few memes relating to audio quality, the kind that suggest quality doesn’t matter because of how most people listen.

Even though there is some truth to this, a high-quality recording will translate better in all listening environments, even through compressed audio formats like MP3.

When people listen on a phone speaker while painting a room, or on earbuds while jogging, they are listening for convenience. People listening for convenience aren’t listening for quality. Many times in these situations listeners aren’t even listening to the music; it’s background noise that helps some other activity move faster.

To say that these situations mean audio quality doesn’t matter isn’t accurate. Quite often you have to capture people’s attention elsewhere first before ever making it onto their "out and about" playlist. These situations are also the lowest common denominator of listening. Are you making music to be someone’s background noise, or are you making music to engage people? If the answer is the latter, then audio quality should matter to you. Don’t make the lowest common denominator your focus; present your music right the first time.

Thursday, April 27, 2017

Working With A Mastering Engineer

In recent years people have tried to make a case for the devaluation of the mastering process, everything from throwing plugins on a mix bus and calling it mastering to using automated online tools to perform the task. But the success of a mastering project is just as much about you as it is about the mastering engineer, and there are steps you can take to make the most of the experience. Done well, you will find the benefits of working with a mastering engineer go far beyond the sound of the music alone. This post is a start toward making the most of that experience.

Know What Mastering Is

Even in 2017, I feel it’s still important to define what mastering is. I also think it helps to look at mastering not as a single activity, but as a collection of activities. It is the stage in music production that is the last step in the creative process and the first in the distribution process.

Mastering is part creative and part technical. It’s a balance between the aesthetic processing applied to increase fidelity, expectations of the public, and the physical limitations of various destination formats. Simply put, music should sound better after being mastered.  

This stage consists of quality control activities identifying issues that may have slipped through previous stages of the music production process. It is the last chance to catch any errors and make changes before being released to the world. Many things can cause errors in audio files. CPU spikes and misbehaving plugins are at the top of that list, but mastering also identifies various issues presented in the mix as well. 

To sum it up, the overall goals of mastering are to increase fidelity and prepare audio for distribution.

Know What Mastering Is Capable Of

Mastering is not miracle work and a good job won’t fix a poor mix. We are dealing with a single stereo track (excluding stem mastering and surround situations). This means that processing decisions often affect multiple elements at the same time. 

Mastering is a constant balancing act, and sometimes it feels like a puzzle with the engineer constantly having to weigh the benefits and drawbacks of each of their corrective and aesthetic processing choices. For example, if the vocals in a track are far too bright, the processing applied to tame them may make other elements of the music sound dull. 

If there are problems with the mix, it’s best to get it fixed during the mixing stage. A good mastering engineer can help you identify where issues are and point you in the right direction. They can let you know if your material is ready to be mastered. 

Mastering can, however, take a good mix and make it great. This should be the goal. That doesn’t mean that you can’t send a non-ideal mix to an engineer for mastering. It just means that it's what you should be shooting for. 

Are You Happy With The Mix?

Unless you tell them otherwise, the engineer has to assume that you are satisfied with the mix. This is why it’s so important to communicate with the engineer. Sending a mix you are unhappy with and expecting to be thrilled with the master is setting yourself up for disappointment.

Maybe you aren’t unhappy with the mix, but you just feel a couple of things could be better. Try to articulate what you don’t like as well as what you think could be better. The more information you communicate, the better your odds of getting back what you want. 

Have Clear Expectations

Have clear expectations about what you want the audio to sound like and what deliverables you expect to get in return. Some people just want WAV files back that they can upload themselves to their online aggregator. Some people want DDP images with metadata that they can send to a CD replicator. Know what you need in return. 

Don’t just send your tracks to a mastering engineer and hope for the best. “Just do what you do” when you have expectations is not a good recipe for success. Make sure you are articulating what you are going for on the creative side. Sending references of things you are expecting or at least material that you like the sound of can point the engineer in the right direction. It doesn’t mean you want to sound exactly like the reference (or that the mix could), it’s merely a direction. Not providing references and just saying, “I want this to be its own thing” is great, just don’t be surprised when you get something back that you weren’t expecting.  

You may be releasing on multiple destination formats too. Letting the mastering engineer know this is the case allows them to understand what adjustments they need to make for each format. Vinyl has different limitations than an audio CD, which is different again from loudness-compensated streaming.

Another thing you want to articulate to the engineer is how loud you want the master to be. We live in a multi-format world with different requirements, and the mastering engineer can help make recommendations in this department.

Musical Partnership

Find an engineer who is interested in your work and is not just pushing your music through like an assembly line. The assembly line approach won’t maximize your relationship and isn’t ideal for your music. Someone you work with should be willing to provide feedback on the mix, pointing out problem areas and letting you know where to improve. They offer a different perspective, and perspective is a large part of the aesthetic portion of mastering. In a true partnership, they also want what’s best for you and your music.

Careful With The Mix Bus

In the old days, you were limited to the compressor in your console and maybe a couple of other pieces of outboard. In the DAW-driven world, you can put an unlimited number of things on the mix bus, and some mixers certainly maximize this. 

The mix bus is where mastering engineers and mix engineers can sometimes not see eye to eye. Every mastering engineer has different preferences on how they would like mixes delivered. Some want all mix bus processing removed and others do not care. It’s best to talk to the mastering engineer you are working with and see what they expect.

My personal view, when mastering for my customers, is that if there is processing shaping the sound, it should be left on. That includes EQ, compression, and various other special processing. If it’s shaping your sound and holding elements together, keep the processing in place. Just watch for potential issues, such as the compressor pumping in a way that may not be pleasing. Too much of something is rarely good in an audio context.

I do, however, ask that any loudness maximization is removed and that I’m left with some headroom. This means no limiting or other loudness processing such as clipping.

With headroom, I ask for peaks to be somewhere between -2 to -6 dBFS. 
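If you want to sanity-check your headroom before sending files off, the math is straightforward. Here's a minimal Python sketch, my own illustration rather than any engineer's required tool, that reports the peak of a block of float samples (full scale = 1.0) in dBFS and checks it against a headroom window like the one above:

```python
import math

def peak_dbfs(samples):
    """Peak level of float samples (full scale = 1.0) in dBFS."""
    peak = max(abs(s) for s in samples)
    if peak == 0:
        return float("-inf")  # digital silence
    return 20 * math.log10(peak)

def has_mastering_headroom(samples, lo=-6.0, hi=-2.0):
    """True if the mix peaks inside the requested headroom window."""
    return lo <= peak_dbfs(samples) <= hi

# A mix peaking at 0.5 full scale sits at about -6.02 dBFS,
# just below the window, so you'd nudge the level up a touch.
print(round(peak_dbfs([0.0, 0.5, -0.25]), 2))  # prints -6.02
```

In practice you'd read the samples from your exported WAV rather than a list, but the dBFS arithmetic is the same.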

Beware of special processing. These are processors that add harmonics to your material and include things like tape and tube emulators. It’s easy to get sucked into how something initially sounds and far too easy to get accustomed to too much. These tools should be used sparingly and carefully on the overall mix bus.  

You can also provide two versions of the mix to the mastering engineer: one with mix bus processing and one without. This way the engineer can choose the one they feel they can make sound best.


You might have noticed a bit of a theme by now: communication is essential. Even if you don’t know the lingo of audio engineering, try to articulate what you like and do not like. Most experienced engineers are pretty good at distilling what you are going for regardless of any lingo barrier. The more you communicate, the higher your chance of continued success with your audio projects.

Feel free to ask questions. Someone who is not willing to converse with you probably isn’t going to be interested in your music either. Ask them about their process, audio viewpoints, and anything else you find relevant. Get to know them. Certainly be mindful of their time, but an interested engineer shouldn’t find you a bother. I constantly have people from all over the world reaching out to me just to chat about audio gear and various other topics. I make time for it because I enjoy the conversation. 

In Closing

Hopefully, with this post, I've set the groundwork for starting a relationship with a mastering engineer. This isn’t the be-all and end-all, but if you haven’t had much experience working with a mastering engineer, it’s a start. With just a few steps you will find you can maximize your relationship and get far better results that last much longer than the album you are currently working on.

Monday, February 6, 2017

Audio Reference Gap

Comparing Audio

Let's talk about something that every audio engineer does and is of particular importance in a mixing and mastering context, that's comparing audio. If you are mixing, you may be comparing your mix to a rough mix of the song that was provided or maybe you are comparing to a commercial release. 

In mastering it's critical to compare the mastered version of the song to the raw mix to ensure that the processing steps taken are making an improvement to the mix and essential elements from the mix are retained. The small subtle details are crucial at this stage. 

I've started to use a term recently to describe accuracy issues with comparisons of audio material. This is something I call the Audio Reference Gap.

Audio Reference Gap

The audio reference gap is a gap in process or quality causing inaccuracies with A/B comparisons. The larger the gap, the more inaccurate the comparison.

The reference gap consists of the following items:
  • Timespan
  • DAC Difference
  • Level Difference (optional)

Compromises in these areas make it difficult, or far too inaccurate, to draw any reasonable comparison. Why is this important? Because decisions made during this process end up based on a skewed perception of the audio.

The gap comes from things like the length of time between comparisons or physical differences in the audio being compared (introduced by different DACs).

Let's take a look at a few of these areas in more depth.


Timespan

The length of time between comparisons is critical to the accuracy of that comparison. The human brain doesn't have the capacity to remember the details of an audio comparison beyond about a second. The more subtle the difference, the closer together the comparison should be. This means that if someone were to listen to audio, get up to patch in a piece of equipment, and listen again, they wouldn't perceive the details of that comparison. Sure, a contrast so stark that it's overwhelming may be perceived, but we are talking about being able to recognize details, even minute details, while performing audio processing tasks.

The timespan of the comparison should be kept as short as possible and well under the 1-second mark for critical comparisons.

Monitoring Path Difference

Differences in the monitoring path can make a pretty large difference in the way two pieces of audio are perceived. The monitoring path consists of everything from your Digital-to-Analog Converter (DAC) to your ears.

Comparing audio through two different DACs adds problems to the comparison. Not all DACs are created equal; different specifications, converter chips, and components in the signal path can affect the audio in various ways.

One DAC may be clear and punchy, and the other may be less clear and have problems with low-end focus. So if you are comparing a mix or master, you may be making adjustments to the audio that are unnecessary or have a negative impact. 

Mastering is all about subtleties, so even if two different DACs are both excellent and nearly identical, they are still different, and that difference compromises your comparison.

Level Difference (Optional)

We all know that level difference can fool us into believing one piece of compared audio is better just because it is louder. The reason I added this as optional is that there may be times when you are comparing audio when you want to know about the level difference. 

Better Comparisons

The point of all of this is to reduce the reference gap and remove as many obstacles as possible. This will put you on the path to working better and making better audio decisions in all of your processing tasks.

The timespan should be kept as short as possible to ensure your perception of the material is as accurate as possible. This span should be well under the 1-second mark preferably to where it feels like it's instantaneous. A good monitoring controller is essential for this and will vastly improve your workflow.

The same DAC should be used for all A/B comparisons so that inconsistencies won't cause you to make inaccurate processing decisions. I can't stress enough the importance of good digital-to-analog conversion, since every decision in recording, mixing, and mastering (assuming digital audio is involved) is based on what you hear through your DAC.

Lastly, if level difference isn't something you are comparing in your A/B, then you will want to level match the material you are comparing. This allows for a more reasonable comparison and keeps you from being fooled by the louder version.
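As a rough illustration of what level matching means in practice, here's a simple Python sketch that scales one clip so its RMS level matches another before an A/B. This is my own simplified example; a more rigorous approach would match perceived loudness (LUFS per ITU-R BS.1770) rather than plain RMS:

```python
import math

def rms_dbfs(samples):
    """RMS level of float samples (full scale = 1.0) in dBFS."""
    mean_sq = sum(s * s for s in samples) / len(samples)
    if mean_sq == 0:
        return float("-inf")  # digital silence
    return 20 * math.log10(math.sqrt(mean_sq))

def match_level(reference, candidate):
    """Scale `candidate` so its RMS matches `reference` for a fair A/B."""
    gain_db = rms_dbfs(reference) - rms_dbfs(candidate)
    gain = 10 ** (gain_db / 20)  # convert dB back to a linear multiplier
    return [s * gain for s in candidate]
```

With the levels matched, any preference you form in the A/B is about the processing, not about which version happens to be louder.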

Hopefully, this gives some food for thought while you are performing your critical audio tasks or considering purchases for your studio. Your workflow and audio quality will thank you. 

Friday, May 13, 2016

Recording Dynamic Vocals With The Dangerous Compressor

This is our first episode of tips and tricks. It was a good test run for us to get used to the cameras and workflow. The folks at Dangerous Music were kind enough to help us out and put some finishing touches on it. They are awesome and make amazing gear.

We have more of these videos planned for the year so it's pretty exciting. This first episode deals with recording a very dynamic vocal part. The issue this presents is that the vocal part goes from quiet to very loud affecting levels and possibly clipping converters.

It's all about the capture of the performance and letting a singer be inspired when they are inspired.

We use the Dangerous Compressor to show how it's not only a great mixing and mastering compressor but also a great tracking compressor. We use one side as a limiter and the other as a compressor so they work in tandem. This shows how you can get a massive amount of gain reduction with no artifacts. It's a great piece of gear.

This video also highlights the importance of having both clean and colored analog gear.


Monday, April 25, 2016

Quick Master Fader Automation Tips

So you have this master fader. It's just sitting there begging you to do something with it, but is that really what you want to do? Maybe not. Here are some tips to think about before adding that automation to the master fader.

What are you trying to accomplish?

Think about it, what are you trying to do by automating moves on the master fader? This is the first step in determining if this is the right move or not. You may be surprised if you give it some thought.

This applies to mastering as well as mixing, so these moves are not just something to think about while mixing. In mastering it might not be a master fader but a regular fader move on a channel, depending on your DAW.

Ultimately, though, it can come down to two things: are you trying to create a volume effect, or are you trying to change the feeling of parts?

Volume Effects

Volume effects are what I like to call overt volume moves that are meant to be heard. The most obvious of these are fade-ins and fade-outs. You may also want to really bring down the volume in an intro or another part of a song to create a stark contrast between two sections.

Volume effects are the best candidate for master volume changes and automation since this is the single point at which these can be done.

Feeling Changes

Feeling changes are something else entirely. Maybe you are trying to change the feeling of a song from, say, the verse to the chorus. Many people reach for the master fader to change the volume balance between the verse and chorus. Maybe they do this because it's easy, or maybe because they don't know better, but depending on the situation this might not be the best way to accomplish it. This isn't isolated to amateurs; many pros do it as well.

If you think about it, making volume changes on the master fader is only going to change the volume of the part. If you have processing on your master bus, it won't change the level going into those processors. Say, for instance, you have a compressor compressing the signal on the master fader. The compressor is still going to compress the signal at the same level. If you automated the individual channels going into the master instead, the compressor would loosen up, causing a different feel to the part.

Quite often it's these changes going into the master fader that make the difference we are looking for. Automating the individual channels to bring the volume down in the verses allows the compression to loosen up a bit and gives the part a different feel. This also affects other types of processing, like parallel processing, allowing those to open up as well.

In many cases this is actually the change people are looking for, not strictly the volume change. Sure, it takes more work to automate the channels feeding the master fader, but these are probably the droids you are looking for.
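The logic above is simple enough to sketch in code. Below is a rough, hypothetical model of a bus compressor's static curve (the threshold and ratio are made-up numbers, not any particular plugin's), showing why pulling the channels down before the compressor changes the gain reduction, while pulling the master fader down after it changes nothing:

```python
def bus_compressor_gain_db(input_db, threshold_db=-18.0, ratio=4.0):
    """Static gain reduction (in dB) of a simple downward compressor.

    Signal below the threshold passes untouched; signal above it is
    reduced so only 1/ratio of the overshoot comes through.
    """
    over = input_db - threshold_db
    if over <= 0:
        return 0.0
    return over - over / ratio

# Automating the *channels* down 6 dB before the bus compressor
# reduces how hard it works, which is the "opening up" feel:
print(bus_compressor_gain_db(-6.0))   # prints 9.0  (dB of reduction)
print(bus_compressor_gain_db(-12.0))  # prints 4.5  (compressor relaxes)
# Automating the *master fader* down after the compressor changes
# neither number; the compression feel stays exactly the same.
```

Real bus compressors add attack, release, and knee behavior on top of this, but the before/after distinction holds either way.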


There are some caveats to individual channel versus master fader volume changes. If you have no processing on your master bus, then other than changing the balance between individual tracks, it won't have the larger impact it would if you did have master bus processing.

If you have really strong parts in a verse, or there is very little dynamic change between a verse and chorus, it may not have the effect you are looking for. It may feel like the chorus steps down instead of up, since the compression will kick in harder. In these cases a rebalancing may be too much work, or something would be lost.

Of course, maybe you just like the way the master fader automation sounds for the particular part. 


Here are some general rules, well not rules, more like guidelines. As with anything audio though, whatever works, works. 

  • If you are performing volume effects, use master fader automation.
  • If you are trying to change the feeling from part to part (ie verse to chorus) use individual track automation.

Give it a try. See which method works best, but don't just grab the master fader because it's easy. Have fun.

Thursday, February 4, 2016

Automatic Audio Mastering Services are Bad at Mastering

Why are Automatic Mastering Services Bad At Mastering?

If you think about it, with the exception of touch, what's more personal and human than music? It's something people feel very passionate about. There aren't many things that can make you feel great, laugh, and cry all at the same time. Stick with me, because I do have a point.

Computers and algorithms don't listen to music and no, Shazam doesn't count. Yet computers are being tasked to process music and create feeling from that content automagically. Now, this is certainly different than a computer-based tool that a human uses to make purposeful adjustments to audio material, these automated services are blanket processing tasks applied to pieces of audio to reach some predetermined endpoint.

There has been a rise in these services focused on automated mastering of audio material. It makes sense because it's an easy grab for cash and no humans have to be involved but isn't that the problem? That there are no humans involved?

The point I am making with this post is that automatic mastering services like LANDR are a bad idea, fundamentally, from the start. I'm not saying this because I'm opposed to the technology; I'm saying it because there are certain things an algorithm just can't fathom when it comes to artistic expression. More disturbing of late is that services such as CD Baby and TuneCore are starting to push LANDR very hard, and I have recently seen a LANDR plugin for Studio One. I think it's important that people understand just what mastering is and what the drawbacks are when it comes to auto-processing algorithms.

What Mastering Is

Before we evaluate the effectiveness of an automated process at a task, we need to understand the task itself. So what is mastering? I know it seems like an obvious question, but the lines have become extremely blurry in the past few years, and for many people it's become part of another process, like mixing (a topic for another blog post). The perception that mastering is just making things louder and brighter simply isn't true. Although those two elements are certainly part of the mastering process, they do not define what mastering is.

Quite a few tasks happen during a typical mastering project. Mastering is the final step in the creative process and the first step in the distribution process. It's a pretty important step to get right and not something that should be taken lightly; however, it has been in the past few years. It's the last chance to catch any errors or issues with a production before the world hears it. As you can imagine, this encompasses much more than EQ and limiting. A mastering engineer can identify these issues and correct them, or send them back to the mixer or artist for correction. This ensures that what goes out to the world is the best representation of the artist and their material.

A mastering engineer also creates the content for delivery. This includes entering all of the metadata, DDPs, masters for various formats, and the list goes on.

Automatic Mastering Service Drawbacks

For people who care about the quality of their music and how it's perceived the drawbacks of these services are probably of high importance to you. People who don't care probably wouldn't have made it this far in the post anyway. This isn't an all-encompassing list, but it's a start.


I think the biggest nail in the coffin for these services is that music isn't just about the way it sounds; it's about how it feels. The feeling of musical content is hard to quantify and something an algorithm certainly can't do. Think about it: sometimes it's hard to articulate just why something feels better. You can have two versions of the same audio material that sound similar, but one just feels better. Every artist wants the people consuming their music to feel it, so this is critical.

Comparing two different types of processing for the same activity may result in one feeling better. A human can easily A/B processing tasks and determine which one sounds and feels better; an algorithm cannot. In order for a computer processing the material to make those types of decisions, it would need specific parameters in place to determine that, and of course code in place to make the decision.

Passion is another trait that is unique to the human side of the music process. Being passionate about something means going the extra mile and striving for the best results. Humans want to create partnerships that are mutually beneficial and will go above and beyond in cases where this passion runs deep. An algorithm doesn't care about you or your music, algorithms are cold like that.

We've already seen quite a bit of the death of musical feeling lately. A lot of material is hard quantized to a grid, or is a loop clicked and dragged into a timeline. Another step removing humanity from the production process would just be one more chip away.


Sometimes your computer is in a rap rock mood and you are in a folk kind of mood. Of course that statement is ridiculous, but it sums up an important point: computers don't listen to music and have no context for musical genres. Just think about that for a second. An algorithm for mastering audio has no idea what type of music it is processing.

A processing algorithm also has no knowledge of current trends in these genres. There is an ebb and flow to musical aesthetics that constantly changes. Is the genre more or less tolerant of compression? What about loudness levels? Does it typically need more low end and less mids? This stuff isn't easy to quantify consistently, which is a problem. Even if there is a preset for genre "X," the perception of what that processing should sound like will differ from record to record.

Beyond genre context, what about instrument context? I don't think anyone would argue against there being a difference between a human voice, a guitar solo, or even a cymbal for that matter. So what happens when it comes time to balance these elements, at least as much as can be done in mastering? There are times when a vocal is a bit too sibilant but the cymbals sound fine. Many such issues can arise in a mastering project, and a mastering engineer can identify and potentially fix them; an algorithm cannot.

Aesthetic Processing

Speaking of perception, perception is something computers don't do either. When was the last time you saw a Dell computer worrying about whether it looks sexy? The mastering process is the last opportunity to make aesthetic changes and enhancements to the overall sound of the material prior to release. There is no doubt that some processing works better for certain material than others; it's all program dependent. When do you use an Opto compressor vs. a VCA? What about minimum phase vs. linear phase EQ? What about saturation? These are just a few of the many decisions that need to be made during the mastering process.

There are no one-size-fits-all processing tasks in mastering. A mastering engineer is very purposeful, using exactly the processing the track requires. Some tools work better than others for certain types of material. This is why you can't take an assembly-line approach to artistic material. At the end of the day, do we want all music to sound the same?

Quality Assurance

As stated previously, audio mastering is more than just EQ and volume changes; it's the last step in the musical process and the first step in distribution. That means it's the last opportunity to catch any issues with the material or make changes before it goes out to the world. This is something automatic mastering services just can't do. Clicks, pops, and other less obvious issues will happily be processed right along with the music.

Releases with audio issues are a clear drawback of automated mastering services. Why would you distribute a release with audible defects?


Automatic mastering services can't provide you feedback. That's right: they won't help you become a better mixer, and they won't help you improve your skills. It has never been easier to create audio and put it out there, and a vast majority of this material is being produced in less-than-optimal environments. Acoustic issues and poor monitoring can lead to plenty of problems in a mix.

A mastering engineer can provide feedback that improves your mixing skills and can even help diagnose acoustical problems in your room. Quite often an experienced mastering engineer has insight on room acoustics from past studio builds and from working with experts. That alone should be worth the price of hiring one. It's really hard to put a value on this, and yet it's often included in the price of a mastering job.

Say, for instance, that you typically have problems with a low-mid buildup, or that your bass is frequently masked by your kick drum. Having a mastering engineer as a partner is a great way to identify issues like these, trace them back to your room or your mixes, and avoid them in the future.

At the end of the day, music is a very collaborative process, and it's just not possible to collaborate with an algorithm. For some reason there is a badge of honor people wear nowadays: "Hey, look what I wrote, mixed, and mastered." But when you are that close to a project, obvious issues can creep into the final product. When you use an algorithm instead of a human, those issues persist.


Every move in audio mastering should be purposeful. A good mastering engineer is never on autopilot, throwing processing at material just because it has worked in the past. Many kinds of processing have drawbacks of their own, so blanket application of processing is a bad idea. Take equalization, for example: minimum phase and linear phase EQs each have issues, namely phase shift and pre-echo (aka pre-ringing) respectively. A human can identify when these become problematic and judge whether they are acceptable.
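To make the pre-echo point concrete, here's a minimal Python sketch using NumPy and SciPy. The filter settings are arbitrary illustrations, not mastering recommendations: it just shows that a symmetric linear-phase FIR filter smears a transient backward in time (pre-ringing), while a minimum-phase IIR filter stays strictly causal but trades that for phase shift.

```python
import numpy as np
from scipy.signal import firwin, butter, lfilter

fs = 48_000
n = 1024
# Unit impulse placed mid-buffer so we can inspect output BEFORE the transient.
x = np.zeros(n)
x[n // 2] = 1.0

# Linear-phase FIR lowpass: symmetric taps, so after compensating the group
# delay, some filter energy lands ahead of the impulse (pre-echo).
taps = firwin(numtaps=255, cutoff=2_000, fs=fs)
delay = (len(taps) - 1) // 2
y_lin = np.convolve(x, taps)[delay:delay + n]
pre_echo = np.abs(y_lin[: n // 2 - 1]).max()  # noticeably non-zero

# Minimum-phase-style IIR lowpass: strictly causal, so nothing appears
# before the impulse, but the phase response is no longer linear.
b, a = butter(4, 2_000, fs=fs)
y_min = lfilter(b, a, x)
no_pre_echo = np.abs(y_min[: n // 2]).max()  # essentially zero
```

Neither behavior is "wrong" — the point is that a human has to listen and decide which artifact is acceptable for the material at hand.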

Even something that seems simple, like multi-band compression, carries trade-offs. A multi-band compressor uses a series of crossover filters to create the bands it processes, and like any other filtering this introduces the same issues that crop up with EQ. In a mastering context, processing is applied only in a very purposeful manner: each tool is chosen as the right tool for the job and used only where necessary, which keeps the overall processing artifacts down.
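Here's a small SciPy sketch of the band-splitting step a multi-band compressor performs (the cutoff and filter order are illustrative assumptions, not a real compressor design). With complementary first-order filters the untouched bands recombine exactly — but the moment each band gets its own gain or compression, the crossover filters' responses are exposed in the result, which is the artifact risk described above.

```python
import numpy as np
from scipy.signal import butter, lfilter

fs = 48_000
rng = np.random.default_rng(0)
x = rng.standard_normal(fs)  # one second of noise as stand-in program material

# Split into two bands with complementary first-order Butterworth filters
# at the same cutoff. Their transfer functions sum to unity, so the
# crossover itself is transparent as long as the bands are left alone.
b_lo, a_lo = butter(1, 200, btype="low", fs=fs)
b_hi, a_hi = butter(1, 200, btype="high", fs=fs)
low = lfilter(b_lo, a_lo, x)
high = lfilter(b_hi, a_hi, x)

# A multi-band compressor would gain-ride each band separately here.
# Recombining the UNPROCESSED bands reconstructs the input (numerical
# error aside); apply different gains per band and that cancellation
# no longer holds.
recombined = low + high
residual = np.max(np.abs(recombined - x))  # ~0
```

Real multi-band compressors typically use higher-order crossovers, where even the unprocessed bands no longer sum back perfectly flat — one reason a mastering engineer reaches for this tool only when the material actually calls for it.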

Revisions, Formats, and Stems

You can't converse with an algorithm or articulate an artistic vision to it. You can't tell an algorithm, "I like what you're doing there, but I feel there should be more low end." You certainly can with a human. At the end of the day, the mastering engineer works for the artist or producer. An algorithm can't work for an artist; it just does what it does.

Also, maybe you need masters for various formats? You may be releasing on CD, through digital distribution, and on streaming services. Each of these formats requires additional thought and processing to get right, and currently these services aren't set up for that.

There are also situations where stems are provided for mastering, at least in part. The mixer may supply an instrumental stem and a vocal stem so that if something is too sibilant, the mastering engineer can deal with it independently of the instruments. Automatic mastering services obviously don't handle these situations.

The Sound

If you think about what the algorithm is doing, the outcome of processing through these services is probably no surprise. An algorithm processes all material the same way; it is not a purposeful processing tool. Results from services like LANDR tend to be overly harsh, with audible artifacts that make for an unenjoyable listening experience. Is this what you want for your music?


Technology has made people more capable than ever before. Full digital audio workstations are now at everyone's fingertips, and it has never been easier to create music. But there is a downside to technology, and hopefully this post pointed out some of it. Algorithms can't determine things like a pleasant level of saturation, or recognize when a processing move pulls a mix together and makes it sound "finished." Only a human can.

Before you use one of these services, ask yourself: do you really care about your music and how it's perceived? If you do, then care enough to do things right. There are many advantages to using a mastering engineer to prepare your music for distribution, and hopefully this post summed a few of them up. Work with an engineer who is passionate about getting the results you're looking for, and create a partnership with them. Choosing a mastering engineer over an algorithm will make your music that much better and take it to the next level.
