
Dither Preceding Lossy Encoding

Hi,
I asked this question at another forum, and was advised to get further information from the developers here.  I've read on Wikipedia that TPDF dither should always be used if the signal being dithered is to undergo further processing.  Given that the vast majority of online AAC and MP3 files are made from 16 bit dithered PCM masters (CDR, DDP, WAV) supplied by the mastering studio (further processing the signal), what settings or dither should be used or avoided in making the original 16 bit PCM masters?  And what are the consequences, technical or audible, in the final AAC or MP3, of using the wrong dither or settings?
Also, is anyone here familiar with what's used in the UV22 and UV22HR, and would they be suitable for dithering the original 16-bit master, given that it would be used for AAC and MP3 encoding?
Thanks.

Dither Preceding Lossy Encoding

Reply #1
In theory, you should dither whenever you reduce the bit depth.  Dither will have no effect on lossy compression (MP3 or AAC), for a couple of reasons: MP3 and AAC work in floating point (there is no fixed-integer bit depth), and lossy compression throws away what you can't hear.  Since dither is somewhere around -80 to -90 dB, it's going to be thrown away.

In practice, it's very unlikely that you can hear the effects of dither (at 16 bits or more).  So it's not too important whether you use dither at all, or which dither algorithm you choose.  If you think you can hear it, choose whatever sounds best to you.
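As a quick sanity check of the "-80 to -90 dB" figure above, here is a small Python sketch (my own arithmetic, not from any post) computing the RMS level of standard 2-LSB-wide TPDF dither relative to full scale at 16 bits. It actually comes out closer to -98 dBFS, which only strengthens the point:

```python
import math

# Sanity check (my own, not from the thread): RMS level of non-subtractive
# TPDF dither. TPDF dither is the sum of two independent uniform random
# values of +/-0.5 LSB each, so its variance is 2 * (1/12) = 1/6 LSB^2.
def tpdf_dither_level_dbfs(bits):
    lsb_rms = math.sqrt(1.0 / 6.0)   # RMS of TPDF dither, in LSBs
    full_scale = 2 ** (bits - 1)     # full scale in LSBs (32768 at 16 bits)
    return 20.0 * math.log10(lsb_rms / full_scale)

print(round(tpdf_dither_level_dbfs(16), 1))  # -98.1 (dBFS)
```

Either way, it sits far below the masking threshold of any realistic program material, so the encoder will indeed discard it.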

Dither Preceding Lossy Encoding

Reply #2
... what settings or dither should be used or avoided in making the original 16 bit PCM masters?  And what are the consequences, technical or audible, in the final AAC or MP3, of using the wrong dither or settings?

Why don't you directly put the 24-bit masters into the AAC/MP3 encoder?

Using noise shapers like UV22 for AAC/MP3 prior to encoding doesn't help you much since their effect is destroyed in most AAC/MP3 decoders, which only round to 16-bit or use unshaped dither (if at all).
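To see why careless requantization at the decoder matters, here is a sketch (my own illustration, not code from any real decoder) of the error a decoder adds on its own when converting high-resolution samples back to 16-bit integers, comparing plain truncation (floor, as in a naive fixed-point decoder) against rounding to nearest. This fresh error is flat-spectrum and up to half an LSB, regardless of any shaping applied before encoding:

```python
import math
import random

random.seed(1)

# Illustration (my own sketch): requantization error statistics for a
# decoder that truncates vs. one that rounds to the nearest integer.
def requantization_bias(n=100_000):
    floor_err = round_err = 0.0
    for _ in range(n):
        x = random.uniform(-1000.0, 1000.0)  # sample in 16-bit LSB units
        floor_err += math.floor(x) - x       # truncation error
        round_err += round(x) - x            # rounding-to-nearest error
    return floor_err / n, round_err / n

t, r = requantization_bias()
# Truncation error averages about -0.5 LSB (a DC bias on top of the
# noise); rounding error averages about zero.
```

Either way, the decoder's own quantization step swamps whatever spectral shaping the mastering-side dither carried.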

Since dither is somewhere around -80 to -90 dB, it's going to be thrown away.

Which encoder does that? For the record, the Winamp AAC encoder doesn't.

Chris
If I don't reply to your reply, it means I agree with you.

Dither Preceding Lossy Encoding

Reply #3
Maybe I don't understand the question, but as C.R.Helmrich said, just feed the master directly to the encoder in whatever format you have.  Do not convert it to any other format.

Dither Preceding Lossy Encoding

Reply #4
Well, maybe downsample it if it's above 48 kHz?

I know the AAC format itself supports 96 kHz, but is it a *requirement* that all decoders support it?
"Not sure what the question is, but the answer is probably no."

Dither Preceding Lossy Encoding

Reply #5
Downsampling is usually a good idea for space and compatibility reasons, but there is no need to dither.  You can feed the output of your resampler directly to the encoder at > 16 bit resolution.

Dither Preceding Lossy Encoding

Reply #6
Thanks everyone.  I realize that encoding is better done from higher-resolution sources, but 44.1 kHz 16-bit is still the required delivery format for masters which will be used for MP3 and AAC encoding (Mastered for iTunes, which requires 24-bit, excepted).  That's what I'm talking about: the 16-bit master (CDR, DDP, WAV) which will be sent from the mastering studio to the record label for MP3 and AAC encoding.  My question was about the dither used on that 16-bit master, and what the technical bad effects of using anything other than TPDF would be on the encoded MP3s and AACs.

It would be great if the record labels required or accepted 32-bit float or 24-bit 96 kHz files for encoding to MP3, with reduced level to prevent clipping, as with MFiT, but that sort of thing is still only in the beginning stages, and hasn't happened yet.

(And if anyone knows what's used in UV22HR, and if it's considered equivalent to TPDF, I'm still looking for an answer to that.)

Dither Preceding Lossy Encoding

Reply #7
OK, if you're bound to 16-bit mastering, you can use a gentle noise shaper like UV22HR. Noise shaping is similar to TPDF dither, but the dither noise will sound less loud: it uses pseudo-random noise and error feedback to spectrally shape the error introduced by the 24-to-16-bit conversion. UV22(HR) looks roughly like the SNS1 curve plotted here.
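For readers unfamiliar with the error-feedback idea, here is a minimal first-order sketch in Python. This is a generic textbook shaper of my own construction, NOT Apple's actual UV22/UV22HR algorithm (which is proprietary); it just shows how feeding the previous quantization error back into the next sample pushes the error energy toward high frequencies:

```python
import random

random.seed(7)

# Minimal first-order error-feedback quantizer (generic sketch, not any
# commercial product's algorithm). Input samples are in units of the
# target LSB; output is the quantized integer stream.
def noise_shape_quantize(samples):
    out, err = [], 0.0
    for x in samples:
        # TPDF dither: sum of two independent uniform +/-0.5 LSB values
        d = random.uniform(-0.5, 0.5) + random.uniform(-0.5, 0.5)
        v = x + err               # add back the previous quantization error
        y = round(v + d)          # dithered quantization to the nearest LSB
        err = v - y               # first-order error feedback: H(z) = z^-1
        out.append(y)
    return out
```

With `H(z) = z^-1` the error spectrum is weighted by `(1 - z^-1)`, i.e. attenuated at low frequencies and boosted near Nyquist, which is the general shape (if not the exact curve) that commercial shapers like UV22HR aim for.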

Chris
If I don't reply to your reply, it means I agree with you.



Dither Preceding Lossy Encoding

Reply #10
Thanks for the link.

So should I assume that, for all the 16-bit content which was encoded to lossy at transparent settings and played back without audible problems, special consideration was made when reducing bit depth from the higher-resolution masters prepared for CD in order to achieve this?

I'm all for best practice, but I really must challenge the portion of the OP that relates to audible consequences for non-test tone material.

Dither Preceding Lossy Encoding

Reply #11
Are there any 24-bit samples that demonstrate that choosing one type of dither over another when reducing depth to 16 bits is problematic for lossy encoding?

I don't think any dither is problematic for the process of lossy encoding. Even a "stupid" encoder wasting bits on the inaudible high-frequency noise "bump" that some noise shapers produce should still be able to code transparently if the bit-rate is high enough.

True, when you lossy-encode a 24-bit signal and put it through a 16-bit decoder which only truncates, you might end up with harmonic and/or time-varying distortion which isn't present in the original master. But you might have the same problem when you play a lossless 24-bit file through a 16-bit truncating decoder.

IMHO, now that most DACs and OSs can handle 24-bit audio, there shouldn't be any 16-bit lossy decoders any more. The nice side-effect would be that you could both encode and decode in 24-bit, allowing more of the potential dynamic range of the 24-bit master to reach listeners with 24-bit-capable hardware.

Chris
If I don't reply to your reply, it means I agree with you.

Dither Preceding Lossy Encoding

Reply #12
Again, I'm cool with best practice; I just think the discussion needs a little grounding for those who might otherwise get carried away.  IMO, when there are readers with varying degrees of knowledge, anal-retention from the experts can foster fear, uncertainty and doubt.  That was the feeling I got from that other discussion, though maybe I should read it again.

Thanks for the response.

Dither Preceding Lossy Encoding

Reply #13
Hey guys,

Am I right that there is no need to dither from the DAW, because in the next step the lossy encoder will throw away the noise anyway?

My setting is the following:

DAW: Ableton Live. Internal Resolution 32 Bit.
Encoder: XLD for AAC. Accepts max. 24 Bit input.

Rule of thumb: dither once at the last step, and stay at as high a bit depth as possible before that.

So I render to 24-bit AIFF without dither and put that directly into XLD. Correct?

Or am I mixing up quantisation-error noise with dither noise in my argument, and will my non-dithering create quantization-error artifacts that might be audible in the destination file?

Dither Preceding Lossy Encoding

Reply #14
When you reduce from 32-bit to 24-bit, it makes little difference whether you dither or not, because both the dither and the quantization error will be far below audible levels.

If you were going to then encode with an encoder that only takes 16 bit input then it would make a difference.
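Some back-of-envelope numbers support this (my own arithmetic, not from the post): at a 24-bit target, the error level with or without dither lands around -145 to -150 dBFS, far below what any DAC or listening room can resolve.

```python
import math

# Error levels at a reduction to 24 bits (my own arithmetic).
def quantization_noise_dbfs(bits):
    # RMS of plain (undithered) quantization error: LSB / sqrt(12)
    return 20.0 * math.log10(1.0 / (math.sqrt(12.0) * 2 ** (bits - 1)))

def tpdf_dithered_noise_dbfs(bits):
    # Total error with TPDF dither: variance LSB^2/12 + LSB^2/6 = LSB^2/4,
    # so the RMS is LSB / 2.
    return 20.0 * math.log10(0.5 / 2 ** (bits - 1))

print(round(quantization_noise_dbfs(24), 1))   # -149.3 (dBFS)
print(round(tpdf_dithered_noise_dbfs(24), 1))  # -144.5 (dBFS)
```

At 16 bits the same figures rise by 48 dB (8 bits times roughly 6 dB per bit), which is why the 16-bit case is the one worth thinking about.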

Dither Preceding Lossy Encoding

Reply #15
... what settings or dither should be used or avoided in making the original 16 bit PCM masters?  And what are the consequences, technical or audible, in the final AAC or MP3, of using the wrong dither or settings?

Why don't you directly put the 24-bit masters into the AAC/MP3 encoder?

Using noise shapers like UV22 for AAC/MP3 prior to encoding doesn't help you much since their effect is destroyed in most AAC/MP3 decoders, which only round to 16-bit or use unshaped dither (if at all).

Since dither is somewhere around -80 to -90 dB, it's going to be thrown away.

Which encoder does that? For the record, the Winamp AAC encoder doesn't.

Chris



Any AAC encoder working at any reasonable bit rate on any demanding signal is going to toss most, if not all, of the original dither in a 16-bit signal. It's not explicit, it's implicit.
-----
J. D. (jj) Johnston

Dither Preceding Lossy Encoding

Reply #16
The purpose of dither is to eliminate distortion of quantization error, no? Therefore, the question should not be
'can you hear the dither at such and such a low level?'
or
'does the encoder "throw away" dither at such and such a low level?'
but rather
'can you hear the distortion when no dither is applied?'

The distortion is real: it is part of the data stream, and it can be demonstrated easily enough with simple test tones, or with audio at low enough bit depths. That dither eliminates the distortion is also real: the data stream is changed by dithering.  The dither itself may or may not be audible, depending on the settings and the noise shaping.  Hearing the distortion when dither is not applied is going to be a BIG challenge with any real music at any resolution above 8 to 12 bits.
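The classic toy demonstration of this (my own construction, not from any post) uses a steady sub-LSB signal of 0.4 LSB. Undithered rounding destroys it entirely; TPDF dither converts the error into signal-independent noise, so the signal survives in the average:

```python
import random

random.seed(42)

# Quantize one sample to the nearest LSB, optionally with TPDF dither
# (sum of two independent uniform +/-0.5 LSB values) added first.
def quantize(x, dither=False):
    if dither:
        x += random.uniform(-0.5, 0.5) + random.uniform(-0.5, 0.5)
    return round(x)

signal = 0.4   # a constant signal of 0.4 LSB -- below one quantization step
n = 200_000
undithered = sum(quantize(signal) for _ in range(n)) / n
dithered = sum(quantize(signal, dither=True) for _ in range(n)) / n

print(undithered)          # 0.0 -- the signal is gone (pure distortion)
print(round(dithered, 2))  # ~0.4 -- the signal's mean value is preserved
```

With a real low-level tone instead of a constant, the same mechanism shows up as harmonics correlated with the signal in the undithered case, versus a benign noise floor in the dithered one.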