Thursday, February 4, 2016

Automatic Audio Mastering Services are Bad at Mastering


Why are Automatic Mastering Services Bad At Mastering?


If you think about it, with the exception of touch, what's more personal and human than music? It's something people feel very passionate about. There aren't many things that can make you feel great, laugh, and cry all at once. Stick with me, because I do have a point.

Computers and algorithms don't listen to music, and no, Shazam doesn't count. Yet computers are being tasked to process music and create feeling from that content automagically. Now, this is certainly different from a computer-based tool that a human uses to make purposeful adjustments to audio material; these automated services apply blanket processing to pieces of audio to reach some predetermined endpoint.

There has been a rise in services focused on automated mastering of audio material. It makes sense because it's an easy grab for cash and no humans have to be involved. But isn't that the problem? That there are no humans involved?

The point I am making with this post is that automatic mastering services like LANDR are a fundamentally bad idea. I'm not saying this because I'm opposed to the technology; I'm saying this because there are certain things an algorithm just can't fathom when it comes to artistic expression. The more disturbing thing as of late is that services such as CD Baby and TuneCore are now pushing LANDR very hard, and I have recently seen a LANDR plugin for Studio One. I think it's important that people understand just what mastering is and what the drawbacks are when it comes to auto-processing algorithms.

What Mastering Is

Before we evaluate how effective an automated process is at a task, we need to understand the task itself. So what is mastering? I know it seems like an obvious answer, but the lines have become extremely blurry in the past few years, and for many people it's become part of another process they have going on, like mixing (which is a topic for another blog post). The perception that mastering is just making things louder and brighter just isn't true. Although those two elements are certainly part of the mastering process, they do not define what mastering is.

Quite a few tasks happen during a typical mastering project. Mastering is the final step in the creative process and the first step in the distribution process. It's a pretty important step to get right and not something that should be taken lightly, though it often has been in the past few years. It's the last chance to catch any errors or issues with a production before the world hears it. As you can imagine, this encompasses much more than EQ and limiting. A mastering engineer can identify these issues and correct them, or send them back to the mixer or artist for correction. This ensures what goes out to the world is the best representation of the artist and their material.

A mastering engineer also creates the content for delivery. This includes entering all of the metadata, creating DDP images, making masters for various formats, and the list goes on.

Automatic Mastering Service Drawbacks

If you care about the quality of your music and how it's perceived, the drawbacks of these services are probably of high importance to you. People who don't care probably wouldn't have made it this far in the post anyway. This isn't an all-encompassing list, but it's a start.

Feeling

I think the biggest nail in the coffin for these services is that music isn't just about the way it sounds, it's about how it feels. The feeling of musical content is hard to quantify and something an algorithm certainly can't judge. Think about it: sometimes it's hard to articulate just why something feels better. You can have two versions of the same audio material that sound similar, but one just feels better. Every artist wants the people consuming their music to feel it. So this is critical.

Comparing two different types of processing for the same task may result in one feeling better. A human can easily A/B processing chains and determine which one sounds and feels better; an algorithm cannot. For a computer processing material to make those kinds of decisions, it would need specific parameters in place to measure and, of course, code in place to act on them.
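To make that concrete, here's a toy Python sketch of my own (not anything a real service publishes) showing what "specific parameters" means in practice: before a machine can prefer one render over another, "feels better" has to be reduced to a number it can compute, such as RMS level.

```python
import math

def rms(samples):
    """Root-mean-square level of a block of samples."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def pick_version(render_a, render_b):
    """A machine's 'A/B test': whichever render measures louder wins.
    Everything it can't measure (feel, intent, taste) is invisible to it."""
    return "A" if rms(render_a) >= rms(render_b) else "B"

# The louder render always "feels better" to this algorithm.
choice = pick_version([0.5, -0.5, 0.5, -0.5], [0.1, -0.1, 0.1, -0.1])
```

Swap RMS for any other metric and the problem stays the same: the algorithm can only optimize what someone coded it to measure.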

Passion is another trait that is unique to the human side of the music process. Being passionate about something means going the extra mile and striving for the best results. Humans want to create partnerships that are mutually beneficial and will go above and beyond in cases where this passion runs deep. An algorithm doesn't care about you or your music, algorithms are cold like that.

We've already seen quite a bit of the death of musical feeling lately. A lot of material is hard-quantized to a grid, or is a loop clicked and dragged into a timeline. Removing humans from yet another step of the production process would just be one more chip away.

Context

Sometimes your computer is in a rap rock mood and you are in a folk kind of mood. Of course this statement is ridiculous, but it sums up an important point. Computers don't listen to music and do not have context for musical genres. Just think about that for a second, an algorithm for mastering audio has no idea what type of music it is processing.

A processing algorithm also has no knowledge of current trends in these musical genres. There is an ebb and flow to musical aesthetics that constantly changes. Is the genre more or less tolerant of compression? What about loudness levels? Does it typically need more low end and less mids? This stuff isn't easy to quantify consistently, which is a problem. Even if there is a preset for genre "X", perceptions of what that processing should sound like will differ.

Beyond genre context, what about instrument context? I don't think any human would argue against there being a difference between a human voice, a guitar solo, or even a cymbal for that matter. So what happens when it comes time to balance these elements, or balance them as much as can be done in the mastering process? There are times when a vocal may be a bit too sibilant but the cymbals sound fine. Many issues can arise in a mastering project that a mastering engineer can identify and potentially fix, and an algorithm cannot.

Aesthetic Processing

Speaking of perception, perception is something computers don't do either. When was the last time you saw a Dell computer worrying about whether it looks sexy? The mastering process is the last opportunity to make aesthetic changes and enhancements to the overall sound of the material prior to release. There is no doubt that some processing works better for certain pieces of material than others. It's all program dependent. When do you use an opto compressor vs. a VCA? What about minimum-phase vs. linear-phase EQ? What about saturation? These are just a few of the many decisions that need to be made during the mastering process.


There are no one-size-fits-all processing tasks in mastering. A mastering engineer is very purposeful in their processing, using exactly what is necessary for the track. Some tools work better than others for certain types of material. This is why you can't take an assembly-line approach to artistic material. At the end of the day, do we want all music to sound the same?

Quality Assurance

As stated previously audio mastering is more than just EQ and volume changes, it's the last step in the musical process and the first step in distribution. That means it's the last opportunity to catch any issues with the material or make any changes prior to being distributed to the world. This is something that automatic mastering services just can't do. Clicks, pops, and even other less obvious issues will just happily be processed by these services.

Distributing releases with issues is a clear drawback of automated mastering services. Why would you create releases with audio issues?

Feedback

Automatic mastering services can't provide you feedback. That's right, they won't help you become a better mixer and they won't help you improve your skills. It has never been easier to create audio and put it out there. A vast majority of this material is being produced in less-than-optimal environments. Acoustic issues and poor monitoring can lead to plenty of problems in a mix.

A mastering engineer can provide feedback that allows you to improve your mixing skills and even help diagnose acoustical problems in your room. Quite often an experienced mastering engineer has insight on room acoustics from past studio builds and from working with experts. That alone should be worth the price of hiring a mastering engineer. It's really hard to put a value on this, and yet it's often included with the price of a mastering job.

Say, for instance, that you typically have problems with a low-mid buildup, or that your bass is frequently masked by your kick drum. Having a mastering engineer as a partner is a great way to identify issues like these, get feedback on them, and avoid them in the future.

At the end of the day, music in general is a very collaborative process, and it's just not possible to collaborate with an algorithm. For some reason there is this badge of honor people wear nowadays where they say, "Hey, look what I wrote, mixed, and mastered." But when you are so close to a project, obvious issues will sometimes creep into the final product. When you use an algorithm instead of a human, these issues persist.

Artifacts

Every move in audio mastering should be very purposeful. A good mastering engineer is never on autopilot, just throwing processing at material because it has worked in the past. Many kinds of processing come with drawbacks of their own, so blanket application of processing to material is a bad idea. Take equalization, for example: both minimum-phase and linear-phase EQs have issues, namely phase shift and pre-echo (aka pre-ringing), respectively. A human can identify when these become problematic and determine whether they are acceptable or not.
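The pre-echo problem is easy to see in code. Here's a small Python sketch of my own that builds a linear-phase lowpass kernel the textbook way, as a windowed sinc: because the kernel is mirror-symmetric around its center tap, a good chunk of its energy lands before the main peak, and that is exactly the energy a sharp transient smears backward in time as pre-ringing.

```python
import math

def windowed_sinc_lowpass(cutoff, num_taps):
    """Linear-phase FIR lowpass: a sinc kernel shaped by a Hamming window.
    cutoff is a fraction of the sample rate (0.1 = 4.41 kHz at 44.1 kHz)."""
    m = num_taps - 1
    taps = []
    for n in range(num_taps):
        x = n - m / 2.0
        ideal = 2 * cutoff if x == 0 else math.sin(2 * math.pi * cutoff * x) / (math.pi * x)
        hamming = 0.54 - 0.46 * math.cos(2 * math.pi * n / m)
        taps.append(ideal * hamming)
    return taps

kernel = windowed_sinc_lowpass(cutoff=0.1, num_taps=101)
peak = max(range(len(kernel)), key=lambda i: abs(kernel[i]))
energy_before_peak = sum(abs(t) for t in kernel[:peak])

# The kernel is symmetric, so roughly half its energy arrives before
# the peak: that's the pre-echo a drum hit picks up through this EQ.
```

A minimum-phase design concentrates its energy at the start of the impulse response instead, which trades the pre-echo away for phase shift. Neither is free, which is exactly why a human has to judge which trade-off the material can tolerate.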

Even something that seems simple, like multi-band compression, has this problem. Multi-band compression uses a series of filters to create the various bands it processes. Just like any other type of filtering, this introduces the same issues that crop up with EQ. In a mastering context, processing is applied only in a very purposeful manner. Each tool is chosen as the right tool for the job and used only where necessary. This reduces the overall artifacts from processing.
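To show what's under the hood, here's a toy two-band processor in Python (an illustration of the mechanism, not any product's actual code): split the signal with a lowpass filter, derive the high band by subtraction, scale the low band, and sum the bands back together.

```python
def one_pole_lowpass(signal, coeff=0.2):
    """Crude one-pole lowpass used as the band-splitting filter."""
    out, state = [], 0.0
    for x in signal:
        state += coeff * (x - state)
        out.append(state)
    return out

def two_band_process(signal, low_gain=1.0):
    """Split into low/high bands, scale the low band, recombine."""
    low = one_pole_lowpass(signal)
    high = [x - l for x, l in zip(signal, low)]  # complementary high band
    return [low_gain * l + h for l, h in zip(low, high)]

# With low_gain=1.0 the bands sum back to the input; turn the low band
# down and a steady low-frequency signal (here DC) settles toward low_gain.
steady = [1.0] * 400
ducked = two_band_process(steady, low_gain=0.5)
```

This subtraction-based split reconstructs perfectly only when the bands are left untouched; the moment you process one band differently, the filter's response is stamped onto the output, and real crossover filters (Linkwitz-Riley and friends) add phase shift on top of that.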

Revisions, Formats, and Stems

You can't converse with an algorithm or articulate an artistic vision to it. You can't tell an algorithm, "I like what you're doing there, but I feel there should be more low end." This is certainly something you can do with a human. At the end of the day, the mastering engineer works for the artist or producer. An algorithm can't work for an artist; it just does what it does.

Maybe you also need masters for various formats? You may be releasing on CD, through digital distribution, and even on streaming services. All of these formats require additional thought and processing to get right. Currently, these services aren't set up for this.

There are situations where stems are provided for mastering, or at least in part. The mixer may provide an instrumental stem and a vocal stem; that way, if something is too sibilant, the mastering engineer can deal with it independently of the instruments in the mix. Obviously, automatic mastering services don't handle these situations.

The Sound

If you think about what the algorithm is doing, the outcome of processing through these services is probably no surprise. Running material through an algorithm processes everything the same way; algorithms aren't purposeful processing tools. Results from services like LANDR tend to be overly harsh with audible artifacts, making for an unenjoyable listening experience. Is this what you want for your music?

Conclusion

Technology has made people more capable than ever before. Full digital audio workstations are now at everyone's fingertips, and it has never been easier to create music. But there is a downside to technology, and hopefully this blog post pointed out some of these issues. Algorithms can't determine things like pleasant levels of saturation, or when a processing task enhances a mix and brings it together to make it sound "finished." Only a human can.

Before you use one of these services, ask yourself: do you really care about your music and how it's perceived? If you really care about your music, then care enough about it to do things right. There are so many advantages to using a mastering engineer to prepare your music for distribution, and hopefully this post summed a few of them up. Work with an engineer who is passionate about getting the results you are looking for. Create a partnership with your mastering engineer. Choosing a mastering engineer over an algorithm will make your music that much better and take it to the next level.

