
Lowpass Filters before Lossy Compression

What's the point in using a lowpass filter as a preprocessing step before compressing music using a lossy algorithm?  I understand that the lowpass filter removes frequencies that humans can't hear very well, and that this helps save bits for more important frequencies, but shouldn't the psychoacoustic model do this instead?  This way, in the few parts of the song where the high frequencies may be worth keeping even at low quality settings, they'll be kept.

Lowpass Filters before Lossy Compression

Reply #1
Correct. In some encoders, the low-pass filter is the psychoacoustic model, or a part of it. Granted, that makes for a very crude model, but it's very efficient and works for most signals.

Chris
If I don't reply to your reply, it means I agree with you.

Lowpass Filters before Lossy Compression

Reply #2
A lowpass filter is just a tool that encoders may use.

This is dependent on both the implementation and the format.
For example, Musepack uses (or used to use) an adaptive lowpass. It filtered depending on the complexity of the signal.

Also, some formats are more problematic than others. For MP3, using a lowpass is more appropriate: it allows the psychoacoustic model to work harder on other parts. (A lowpass filter at 16 kHz might be heard, but it is less annoying than an artifact at 3 kHz.)


Ideally, an adaptive filter is the better option, but a varying cutoff can cause an artifact called "warbling" (of course, warbling is a very extreme case).
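
To make the adaptive idea concrete, here is a minimal sketch of a cutoff that moves with a crude per-frame complexity measure (spectral flatness). It assumes numpy/scipy; it is not Musepack's actual algorithm, and the flatness-to-cutoff mapping is my own assumption:

```python
import numpy as np
from scipy.signal import butter, sosfilt

def adaptive_lowpass(frame, fs=44100, min_cutoff=12000.0, max_cutoff=16000.0):
    # Spectral flatness as a crude complexity measure: near 1 for noise-like
    # frames, near 0 for strongly tonal ones.
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame)))) + 1e-12
    flatness = np.exp(np.mean(np.log(spectrum))) / np.mean(spectrum)
    # Complex (tonal) frames get a lower cutoff, leaving more bits for the
    # audible midrange; simple frames keep more bandwidth.
    cutoff = min_cutoff + (max_cutoff - min_cutoff) * flatness
    sos = butter(8, cutoff, btype="low", fs=fs, output="sos")
    return sosfilt(sos, frame), cutoff
```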

Lowpass Filters before Lossy Compression

Reply #3
What's the point in using a lowpass filter as a preprocessing step before compressing music using a lossy algorithm?  I understand that the lowpass filter removes frequencies that humans can't hear very well, and that this helps save bits for more important frequencies, but shouldn't the psychoacoustic model do this instead?  This way, in the few parts of the song where the high frequencies may be worth keeping even at low quality settings, they'll be kept.


It is generally a lot more computationally efficient to roll off frequencies that there is no intent of keeping than to put them through the psychoacoustic model and throw them away after they have been heavily processed.
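
The saving is easy to see in a sketch (numpy assumed; the "model" here is just bin selection, not any real encoder's psychoacoustics): bins above the cutoff never enter the per-bin masking and bit-allocation work at all.

```python
import numpy as np

def analysis_bins(frame, fs=44100, cutoff=16000.0):
    spectrum = np.fft.rfft(frame * np.hanning(len(frame)))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / fs)
    keep = freqs <= cutoff
    # At 44.1 kHz with a 16 kHz cutoff, roughly 27% of the bins are dropped
    # before any masking-threshold or bit-allocation work would touch them.
    return spectrum[keep], 1.0 - keep.mean()
```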

Lowpass Filters before Lossy Compression

Reply #4
What's the point in using a lowpass filter as a preprocessing step before compressing music using a lossy algorithm?  I understand that the lowpass filter removes frequencies that humans can't hear very well, and that this helps save bits for more important frequencies, but shouldn't the psychoacoustic model do this instead?  This way, in the few parts of the song where the high frequencies may be worth keeping even at low quality settings, they'll be kept.


It is generally a lot more computationally efficient to roll off frequencies that there is no intent of keeping than to put them through the psychoacoustic model and throw them away after they have been heavily processed.


My reply to that is, "who cares?".  On the encoding end (usually a desktop PC), CPU power is so cheap and plentiful and stuff encodes so fast that I'd gladly accept a 5-10x decrease in encoding speed in exchange for dropping the transparency threshold of a codec by 16 kbps and/or getting a small but noticeable improvement in quality.  Sometimes, when transcoding FLAC to Ogg, I think hard drive bandwidth, not CPU time, is the bottleneck anyhow.

Back before LAME got --vbr-new, and back before I owned a DAP that supported Vorbis, I used to encode to -V6 -q0 for portable listening.  It took a lot longer, but (I think; I never formally ABXed it, and probably won't now because it's obsolete) was well worth it.

Lowpass Filters before Lossy Compression

Reply #5
My reply to that is, "who cares?".  On the encoding end (usually a desktop PC), CPU power is so cheap and plentiful and stuff encodes so fast that I'd gladly accept a 5-10x decrease in encoding speed in exchange for dropping the transparency threshold of a codec by 16 kbps and/or getting a small but noticeable improvement in quality.

Storage space is so cheap and plentiful...

Lowpass Filters before Lossy Compression

Reply #6
My reply to that is, "who cares?".  On the encoding end (usually a desktop PC), CPU power is so cheap and plentiful and stuff encodes so fast that I'd gladly accept a 5-10x decrease in encoding speed in exchange for dropping the transparency threshold of a codec by 16 kbps and/or getting a small but noticeable improvement in quality.

Storage space is so cheap and plentiful...


Not on portable devices yet.  The point is that CPU cycles on a desktop PC are a lot cheaper and more plentiful than storage space on a DAP.

Lowpass Filters before Lossy Compression

Reply #7
My understanding is that a low-pass filter is used in encoders for a reason other than saving CPU cycles.
That reason is improving quality by preventing the "musical noise" artifact. Without a bandwidth restriction, the encoded signal exhibits a high degree of spectral sparsity at high frequencies, because quantization truncates most coefficients to zero. This spectral sparsity is perceived as "birdies" or "musical noise".
Low-pass filtering removes the frequencies where the artifact is likely to occur and frees up bits for the lower frequency ranges, reducing it there too.
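
A toy numpy sketch of the mechanism described above (the step sizes are arbitrary, picked only to make the effect visible, and real encoders quantize per scalefactor band rather than per bin):

```python
import numpy as np

def quantize_spectrum(frame, fs=44100, split_hz=11000.0,
                      fine_step=0.01, coarse_step=0.5):
    spec = np.fft.rfft(frame)
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / fs)
    # Coarse quantization above split_hz rounds most small high-frequency
    # coefficients to exactly zero; the few survivors flicker from frame to
    # frame, which is heard as "birdies" / musical noise.
    step = np.where(freqs < split_hz, fine_step, coarse_step)
    q = np.round(spec / step) * step
    hf = freqs >= split_hz
    hf_sparsity = np.mean(q[hf] == 0)  # fraction of HF bins quantized to zero
    return q, hf_sparsity
```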

Lowpass Filters before Lossy Compression

Reply #8
What's the point in using a lowpass filter as a preprocessing step before compressing music using a lossy algorithm?  I understand that the lowpass filter removes frequencies that humans can't hear very well, and that this helps save bits for more important frequencies, but shouldn't the psychoacoustic model do this instead?  This way, in the few parts of the song where the high frequencies may be worth keeping even at low quality settings, they'll be kept.


It is generally a lot more computationally efficient to roll off frequencies that there is no intent of processing, than putting them through the psychocoustic model and throwing them away after they have been heavily processed.


My reply to that is, "who cares?".  On the encoding end (usually a desktop PC), CPU power is so cheap and plentiful and stuff encodes so fast that I'd gladly accept a 5-10x decrease in encoding speed in exchange for dropping the transparency threshold of a codec by 16 kbps and/or getting a small but noticeable improvement in quality.


Now we're down to specifics.

When the high-end roll-off is, say, 16 kHz, there is little or nothing lost. Coding audio above 16 kHz has a highly speculative return at best.

At lower frequencies, there are potential audible losses. However, there are two different markets for audio media - one that is based on transparency, and one that is based on basic intelligibility. When you are trying desperately to minimize file sizes (to facilitate use on space-limited portable/micro players, or to provide acceptable download speeds), transparency goes flying out the window. It is well known that a 5 kHz bandpass is just fine for dialog where intelligibility is the issue.
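
As a sketch, the two operating points above might look like this (the Butterworth family, the orders, and the 300 Hz low edge of the speech band are my choices for illustration, not taken from any real encoder; numpy/scipy assumed):

```python
import numpy as np
from scipy.signal import butter, sosfilt

fs = 44100
samples = np.random.randn(fs)  # one second of placeholder audio

# "Transparency" market: roll off only the speculative top octave.
music_sos = butter(8, 16000, btype="low", fs=fs, output="sos")
# "Intelligibility" market: telephone-style band, plenty for dialog.
speech_sos = butter(4, [300, 5000], btype="band", fs=fs, output="sos")

music_out = sosfilt(music_sos, samples)
speech_out = sosfilt(speech_sos, samples)
```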



Lowpass Filters before Lossy Compression

Reply #9
Without a bandwidth restriction, the encoded signal exhibits a high degree of spectral sparsity at high frequencies, because quantization truncates most coefficients to zero. This spectral sparsity is perceived as "birdies" or "musical noise".
Low-pass filtering removes the frequencies where the artifact is likely to occur and frees up bits for the lower frequency ranges, reducing it there too.

This is my understanding as well. It is also my understanding that the LPF is a gross fix for the problem. More advanced encoders recognize the situation heuristically and avoid birdies without the need for draconian filtering.
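
One hypothetical shape such a heuristic could take, sketched in numpy (real encoders use techniques like noise substitution and smarter bit allocation; this band-zeroing rule is only an illustration of the idea):

```python
import numpy as np

def zero_sparse_hf_bands(q_spec, freqs, split_hz=11000.0, band_size=16):
    out = q_spec.copy()
    start = int(np.searchsorted(freqs, split_hz))
    for i in range(start, len(out) - band_size + 1, band_size):
        band = out[i:i + band_size]
        survivors = np.count_nonzero(band)
        # A nearly-empty band is the birdie pattern: better to silence it
        # entirely than to let one or two coefficients flicker in and out.
        if 0 < survivors <= 2:
            out[i:i + band_size] = 0
    return out
```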

Lowpass Filters before Lossy Compression

Reply #10
My understanding is that a low-pass filter is used in encoders for a reason other than saving CPU cycles.
That reason is improving quality by preventing the "musical noise" artifact. Without a bandwidth restriction, the encoded signal exhibits a high degree of spectral sparsity at high frequencies, because quantization truncates most coefficients to zero. This spectral sparsity is perceived as "birdies" or "musical noise".
Low-pass filtering removes the frequencies where the artifact is likely to occur and frees up bits for the lower frequency ranges, reducing it there too.


In Vorbis, isn't this what noise normalization was meant to solve?

Lowpass Filters before Lossy Compression

Reply #11
Without a bandwidth restriction, the encoded signal exhibits a high degree of spectral sparsity at high frequencies, because quantization truncates most coefficients to zero. This spectral sparsity is perceived as "birdies" or "musical noise".
Low-pass filtering removes the frequencies where the artifact is likely to occur and frees up bits for the lower frequency ranges, reducing it there too.

This is my understanding as well. It is also my understanding that the LPF is a gross fix for the problem. More advanced encoders recognize the situation heuristically and avoid birdies without the need for draconian filtering.


Birdies and "musical noise" sound very much to me like the results of insufficient dithering.