Over the past few weeks, everyone at the studio has been caught up in a constant discussion about the right approach to mastering for streaming services. Should we still aim to sound loud and upfront, as people call it? Or should we aim to be the most dynamic of the batch? How do we arrive at the right loudness level for these streaming platforms? Is the loudness war really over?
In this post I’ll summarize what I’ve learned over the past few weeks and how one can look at mastering moving forward. If you don’t know what mastering means, please go back to our earlier posts and read up on the mastering process first.
How we consume music has fundamentally changed:
Until a few years ago, music was primarily consumed via CD or some other medium where the final audio file was stored locally on a disc or computer. Everyone had their own copy of the master file, which they could play on their own systems.
With the advent of streaming platforms, a single service now holds virtually all music on its remote servers, and we access that library through its app. We no longer keep an exact copy of the file; it is streamed to us by the service.
What the streaming platforms realized was that holding a varied range of music, from classical to metal, in the same library was going to be a hassle when it came to the loudness of each track. The idea wasn’t novel, but the goal was to reduce how often the listener had to adjust the volume when moving from one song to another.
Imagine how annoying it would be to head out for a jog with your headphones on and have to constantly adjust the volume as the songs change. Most of you have probably never experienced this, thanks to what we call “normalization”.
What normalization does is analyze every track in the library and assign each one a number based on how loud or soft it is. Say we have a metal track that was mastered extremely loud. The software analyzes its loudness using a measurement standard called LUFS (Loudness Units relative to Full Scale) and assigns it a value of, say, -6 LUFS.

The same algorithm then analyzes an extremely soft classical piano piece and assigns it a value of -18 LUFS.
If you were to play these two tracks back to back, there would be a drastic jump in volume. In some cases it could even damage your hearing: if you pushed the volume up for the soft song, the loud song that follows would be deafening. Ideally, you’d want both tracks at a similar loudness level, so you don’t have to keep adjusting the volume and your hearing is protected as well.
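To get a feel for how big that jump is: each loudness unit (LU) corresponds to one decibel, and decibels map to amplitude ratios exponentially. A minimal sketch of that arithmetic in Python, using the two example values above:

```python
def lu_to_amplitude_ratio(lu_difference):
    """Convert a loudness difference in LU (equivalent to dB)
    to a linear amplitude ratio: ratio = 10 ** (dB / 20)."""
    return 10 ** (lu_difference / 20)

# The metal track at -6 LUFS vs the piano piece at -18 LUFS:
difference = -6 - (-18)                # 12 LU louder
ratio = lu_to_amplitude_ratio(difference)
print(f"{ratio:.2f}x the amplitude")   # roughly 4x
```

Twelve loudness units may not sound like much on paper, but it works out to roughly four times the signal amplitude, which is exactly the kind of jump normalization is there to smooth out.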
That’s where normalization steps in, in one of two ways:
1. If your track exceeds the loudness value that the streaming platform recommends, it will turn the volume down.
2. If your track is under the loudness value that the platform recommends, it will turn the volume up.
So this normalization technique in simple terms is essentially turning the volume knob up and down for you so that YOU as a listener have an optimal listening experience.
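The two rules above collapse into a single subtraction: the gain the platform applies is just its target loudness minus the track’s measured loudness. A simplified sketch (the -14 LUFS default is an assumption based on Spotify’s commonly cited reference level; real platforms also apply limiting and album-level modes on top of this):

```python
def normalization_gain_db(track_lufs, target_lufs=-14.0):
    """Gain (in dB) a platform applies so the track plays at target_lufs.
    Negative means turned down, positive means turned up."""
    return target_lufs - track_lufs

# A loud metal master at -6 LUFS gets turned DOWN by 8 dB:
print(normalization_gain_db(-6.0))   # -8.0
# A quiet piano piece at -18 LUFS gets turned UP by 4 dB:
print(normalization_gain_db(-18.0))  # 4.0
```

Either way, both tracks end up playing back at the same perceived loudness, which is the whole point.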
Another thing to consider is that different streaming platforms use different codecs for compression; some you may commonly know are MP3, Ogg Vorbis, etc.
These compression algorithms also depend on factors such as
A. Connection speed
B. Mobile or desktop playback
Now that we’ve understood how normalization works on these streaming platforms, the first thing we need to realize is that even if we make the loudest master possible, the streaming platform will turn our track down.
This goes against everything you may have heard as a mastering engineer, but it’s true. Under normal circumstances, if you compared two files on a CD, the louder one would sound better; but with all tracks being normalized, the loudness wars are over.
Now, as a mastering engineer, you can work with your artists to create the most dynamic masters without paying any loudness penalty.
Let’s take two instances:
We have track A, which is extremely loud at -8 LUFS, but we know that to achieve a louder master we had to give up dynamic range.
On the other hand, we have track B, which sits at -14 LUFS but has a higher dynamic range than track A.
When both these tracks are on the streaming platform, both will play at the same loudness level, i.e. -14 LUFS, since Spotify’s reference level is -14 LUFS; the louder master will be turned down by 6 dB to -14 LUFS.
Here track B really has the upper hand, since it actually has the higher dynamic range and now plays just as loud as track A.
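Running both tracks through that same subtraction makes the point concrete (again assuming the commonly cited -14 LUFS Spotify reference level):

```python
TARGET_LUFS = -14.0  # assumed Spotify reference level

tracks = {"A (loud, squashed)": -8.0, "B (dynamic)": -14.0}

for name, lufs in tracks.items():
    gain = TARGET_LUFS - lufs  # dB adjustment the platform applies
    print(f"Track {name}: {gain:+.1f} dB -> plays at {lufs + gain:.1f} LUFS")
# Track A is turned down 6 dB; track B is untouched.
# Both end up playing at -14 LUFS.
```

Track A sacrificed its dynamics for a loudness advantage that normalization simply erases; track B kept its dynamics and lost nothing.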
So making really loud masters for streaming platforms is in fact the worst thing you can do: not only are you being turned down, you’re also losing the dynamic range that would have made for a far more pleasing listening experience.