This month I want to take a look at an audio production process that is commonly used but often misunderstood. It’s something I get asked about fairly regularly, and there is definitely a best practice for it that is often sadly lacking.
That process is normalisation (or normalization if you’re American).
What is normalisation?
Normalisation has a few different meanings in audio engineering circles. Does anyone else remember picking up a soldering iron to normalise or semi-normalise their patchbay? What we’re thinking about today is normalising audio in a digital editor or DAW.
Normalisation is very simply raising or lowering the volume of an audio file to a set level. That’s it in a nutshell.
Imagine your client has specified a peak level of -3dB (you can also normalise to an RMS (or average) level, more on that later). You could find your peak level (easier with an editor than a DAW), then use an amplify process to add or subtract volume to get to that peak level. Alternatively, you could simply normalise to that value – much simpler. You open up your normalisation dialogue box, set the peak level and press Apply. The audio file will then be examined and the levels raised or lowered so the peak is at the level you’ve set.
Because the normaliser needs to examine the file and adjust levels like this it cannot be done in real time, so you won’t find normalisation as an insert/aux effect in your DAW.
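Under the hood, that two-pass idea is simple arithmetic. Here’s a minimal sketch (the function name and the use of NumPy are my own; it assumes the samples are floats scaled to ±1.0, as a DAW holds them internally):

```python
import numpy as np

def peak_normalise(samples: np.ndarray, target_db: float = -3.0) -> np.ndarray:
    """Pass 1: find the highest peak. Pass 2: apply one gain change."""
    peak = np.max(np.abs(samples))
    if peak == 0:
        return samples  # pure silence: nothing to scale
    target_linear = 10 ** (target_db / 20)  # -3dB is roughly 0.708 of full scale
    return samples * (target_linear / peak)

# A file peaking at 0.5 gets boosted so its loudest sample sits at -3dB.
quiet = np.array([0.1, -0.5, 0.25])
normalised = peak_normalise(quiet, -3.0)
```

Note the single gain change over the whole file – the relationship between the loud bits and the quiet bits is untouched.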
Normalisation vs compression
I do sometimes get asked what the difference between normalisation and compression is.
- Compression will change the relation between the loud bits and quiet bits of your audio, and normalisation doesn’t.
- Both can be used to increase or decrease the overall volume of audio.
- Compression will squash the peaks to be closer to the rest of the waveform, and normalisation will leave the peaks intact relative to the rest of the waveform.
- Compression is constantly monitoring and changing the volume of the file in a millisecond by millisecond way. Normalisation does one volume change over the entire file and that’s it.


How NOT to normalise
This is what I really want to say in this blog. This bit is the reason I’ve tackled this topic this month. Every so often I see in forums a voice artist outlining their production process as something like…
Record -> normalise -> noise reduction -> normalise -> HPF -> normalise -> compress -> normalise -> EQ -> normalise -> noise gate -> normalise.
Their usual question is ‘have I missed anything out?’ (I may have exaggerated the chain there slightly for comic effect). There’s a lot wrong with that processing chain, but let’s stick to the normalising. If you’re having to normalise that much, you need to seriously reassess your entire workflow and approach to audio production. Google ‘gain staging’ and you can simplify this kind of workflow.
How TO normalise
Normalisation is a process that should only be done ONCE. And it should be the last thing you do to an audio file before you send it to the client. And you only need to do it if the client has specified what the peak or average levels should be. The rest of the production chain needs headroom to allow for transient peaks and to prevent clipping.
If you need to normalise after your recording you should increase the gain on your interface to record at better levels. (“Oh, but that makes the noise floor louder.” “Yes it will. If it makes the noise floor too loud then you need to take some time improving your studio to a professional spec.”). If your further processing leaves your levels too low for the next part of your processing chain you need to pay attention to the gain staging. Technically you could normalise to a lower level between each stage of processing, but there’s no need if you’re doing the other processing stages properly.
Peak Normalisation
Peak normalisation is done as explained above – the client has requested peaks to be at a certain level, so you open your normalisation tool, select ‘peak’, enter the value and hit Apply. Job done.
Or is it? If you normalise a wav and then convert to mp3, you may find that your peaks are now higher than the desired level. So Save As mp3 before you normalise, or normalise, convert and normalise again. Here I will allow 2 normalisations, even though I said it’s something you should do only once.
True Peak Normalisation
Normalisation is meant to ensure that the highest level of any sample in your audio is exactly as you set it. But as computers get cleverer, it’s becoming more common for audio decoders and players to take notice of inter-sample peaks. Inter-sample peaks are where the soundwave most likely was between the points at which the samples were taken. If you drive past 2 speed cameras at 30mph, it doesn’t mean you haven’t driven faster than that between them. Likewise, if we record the peak-normalised value for 2 samples in a row, it’s highly likely that the soundwave exceeded that value between the 2 points. And so some normalisers can now normalise to a dBTP (decibels true peak) level as well as a straight dB level. I will always opt to normalise to dBTP whenever possible.
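The speed-camera idea can be shown numerically. The sketch below (NumPy, values my own) builds a sine wave phased so that every recorded sample lands between the crests: each sample sits at about -3dBFS, yet the underlying wave reaches full scale between them:

```python
import numpy as np

# A sine at exactly a quarter of the sample rate, phased so that no
# sample ever lands on a crest of the wave.
n = np.arange(64)
x = np.sin(np.pi * n / 2 + np.pi / 4)
sample_peak = np.max(np.abs(x))  # ~0.707, i.e. about -3dBFS

# Evaluate the same underlying wave 4x as densely - standing in for the
# oversampling a true-peak meter performs - and the crest appears.
m = np.arange(256)
inter_sample_peak = np.max(np.abs(np.sin(np.pi * m / 8 + np.pi / 4)))  # 1.0, i.e. 0dBTP
```

So a sample-peak meter reads -3dB while a true-peak meter reads 0dBTP – the same gap the speed cameras miss.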
Loudness Normalisation
More and more frequently, clients are asking for levels to be normalised for loudness. This differs from peak normalisation, although the process of applying it is pretty much the same.
Average loudness is a better measure of how loud something will sound than peak level is. Imagine a scenario where you have 1 peak that’s 10dB louder than the rest of the audio. With peak normalisation the file will be adjusted so that the one peak is set to the desired level, but the rest of the audio will still sound 10dB quieter. Loudness normalisation may result in that peak clipping, but the file will sound around the same level as other audio normalised to the same loudness. The level you normalise to is also very different from peak normalisation. For peak you’ll normalise to -3dB (for example), whereas average would be somewhere around -20dB.
RMS or LUFS?
There’s a whole blog to be written about loudness measurement, but for now let’s just say that when you’re normalising for average loudness you’ll be normalising to either dB RMS (root mean square) or LUFS (loudness units relative to full scale). RMS is the older system; LUFS is more modern and more accurate, and it’s gaining popularity, particularly in broadcast. RMS is still used for ACX and other ‘older’ platforms. Most metering tools will do one or the other – I don’t think I own one that does both.
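For RMS at least, the maths is straightforward. A sketch of loudness normalisation (function names and NumPy usage are mine, not any particular tool’s; LUFS needs the more involved K-weighted measurement from ITU-R BS.1770, which this deliberately skips):

```python
import numpy as np

def rms_db(samples: np.ndarray) -> float:
    """Average (RMS) level of the whole file, in dB relative to full scale."""
    return 20 * np.log10(np.sqrt(np.mean(samples ** 2)))

def rms_normalise(samples: np.ndarray, target_db: float = -20.0) -> np.ndarray:
    """One gain change over the whole file so its RMS lands on target_db."""
    gain_db = target_db - rms_db(samples)
    return samples * 10 ** (gain_db / 20)
```

Notice that nothing here guards the peaks – exactly the clipping risk that loudness normalisation carries.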
What level should you normalise to?
Again this is a question that crops up quite often. And the answer is, there isn’t an answer. Normalisation isn’t a standard kind of process. So as I said earlier, if your client hasn’t specified a level there’s no need to impose one as a hard and fast rule. ACX and other audiobook publishers state a peak level of -3dB, but even this isn’t a hard and fast rule. It just means peaks shouldn’t be higher than that – so if your peaks are at -4.2dB you’ll still pass ACX specs, whereas -2.9dB wouldn’t. They also require an average level of between -23dB and -18dB RMS – this is the important one. So there is a balancing act to delivering ACX specs.
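That balancing act is easy to check with a meter – or with a few lines of code. A sketch of the two ACX level checks described above (the helper name is mine, and ACX has other requirements, such as noise floor, that this ignores):

```python
import numpy as np

def meets_acx_levels(samples: np.ndarray) -> bool:
    """Check the two ACX level specs: peaks no higher than -3dB,
    average level between -23dB and -18dB RMS."""
    peak_db = 20 * np.log10(np.max(np.abs(samples)))
    rms = 20 * np.log10(np.sqrt(np.mean(samples ** 2)))
    return peak_db <= -3.0 and -23.0 <= rms <= -18.0
```

A sine of amplitude 0.12, for instance, peaks at about -18.4dB and averages about -21.4dB RMS, so it passes both checks comfortably.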
The only thing really to add to that is that the higher level you normalise to the louder the audio will be, so particularly for auditions, you may want to normalise to -1dB.
And never normalise to 0dB. Technically there’s nothing wrong with it as you haven’t clipped, but many systems will hit the clip lights if the levels reach 0dB as it will be assumed clipping has occurred (see True Peak Normalisation).
Life before normalisation
You can stop reading now if you like.
Normalisation is a fairly recent addition to audio processing. It’s only in the digital age that it’s been possible for the reasons outlined above (the bit about not being able to do it in real time). When everything was recorded onto tape and mastered onto another tape, normalisation wasn’t achievable. But there was a process that they could use to ensure levels didn’t peak. And it’s one that’s still available to us now and provides a good alternative to normalisation.
Life outside normalisation
If you’re using a DAW, or just don’t want to normalise, you can use this pre-digital technique that’s been tweaked for the digital age – a brickwall limiter, something like the Waves L2 Ultramaximiser. To use one, you set the ceiling to the peak level; then as you lower the threshold, the levels are boosted until they hit the ceiling. Setting it so you’re just clipping the tops off a little (1 or 2dB of gain reduction) will make your audio loud without clipping, and without normalising. Just be aware when converting to mp3 after this kind of process: peaks slightly above the ceiling can occur during encoding. But if there’s no normalised level to stick to, it can be a good way of ensuring great levels without clipping.
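To make the idea concrete, here’s the crudest possible sketch of that boost-into-a-ceiling behaviour (my own toy code – a real limiter like the L2 uses look-ahead and smooth gain reduction rather than simply flattening the tops):

```python
import numpy as np

def crude_brickwall(samples: np.ndarray, gain_db: float = 2.0,
                    ceiling_db: float = -1.0) -> np.ndarray:
    """Boost the whole file, then flatten anything that exceeds the ceiling."""
    boosted = samples * 10 ** (gain_db / 20)
    ceiling = 10 ** (ceiling_db / 20)
    return np.clip(boosted, -ceiling, ceiling)
```

Unlike normalisation, this does change the relationship between the loudest bits and the rest – the tops get squashed while everything else just comes up in level.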
If you need more help with this – or other production topics – get in touch with Rob. One to one tuition on all aspects of production is available.