Topic: FLACOUT from Ken Silverman

FLACOUT from Ken Silverman

Hello,

Available here (Thanks to nikkho).

        AiZ

FLACOUT from Ken Silverman

Reply #1
I was curious enough to give it a try. I like my files being subset compatible, so I ran it with the /sub parameter. After 35 minutes of processing a test file, the program crashed. My test file was a 33.9 MB CD-sourced regular 16-bit stereo FLAC.

FLACOUT from Ken Silverman

Reply #2
Darn, that thing is slow.

FLACOUT from Ken Silverman

Reply #3
Hahaha, why does this exist?  It's been an hour and my file is at 11%.
Is it brute forcing the blocks and modeling using FLAC's super-secret setting?

I'm assuming the decode speed is going to be slower also.

-From the verbose output, it's brute-forcing parameters for "mode", "order" (max LPC order), and "lpcbits" (QLP coefficient precision).
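Judging from that verbose output, flacout appears to exhaustively try parameter combinations per block and keep the smallest result. A minimal sketch of such a search loop, with a hypothetical encode_size callback standing in for the actual encoder (the parameter names mirror flacout's output, but the ranges are illustrative, not its real search space):

```python
from itertools import product

def smallest_frame(block, encode_size):
    """Try every combination of (hypothetical) frame parameters and
    keep the one that yields the smallest encoded frame."""
    best = None
    # mode / order / lpcbits mirror flacout's verbose output;
    # the ranges here are purely illustrative.
    for mode, order, lpcbits in product(range(4), range(1, 33), range(5, 16)):
        size = encode_size(block, mode, order, lpcbits)
        if best is None or size < best[0]:
            best = (size, mode, order, lpcbits)
    return best
```

With thousands of combinations per block and a full encode for each, this kind of search easily explains the observed slowdown.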

FLACOUT from Ken Silverman

Reply #4
It says it's using a variable blocksize, which is indeed an in-subset possibility that the reference encoder does not use. The binary is really tiny, only 20 kB; that's pretty impressive.

edit: Oh, and for anyone who's going to try it out: it's two-pass, so don't be fooled. It is really slow. Knowing FLAC, decoding will probably still be really fast. Still, if the promise of 3 to 5% better than the reference encoder is kept, that would be really impressive; it would beat TAK on pure compression. That's too good to be true. Still waiting for the first file to finish encoding though.
Music: sounds arranged such that they construct feelings.

FLACOUT from Ken Silverman

Reply #5
Tried a metalcore track ("Until It's Gone" from Linkin Park's "The Hunting Party").

Code:
C:\Users\ferongr>flacout in.flac out.flac
In:29494475 bytes            in.flac
Out:29240222 bytes            out.flac
Wrote out.flac
Took 4505.755 sec.


The track is 3:53 in length. Flacout's speed was 0.05x realtime, for a meager 255 KB reduction. Decoding speed of the original is ~670x per thread; decoding speed of the output file is ~618x per thread, quite a large difference.
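Just to sanity-check the quoted speed, it follows directly from the track length and the wall-clock time flacout itself reported above:

```python
# Track length divided by wall-clock encode time (both numbers come
# from the flacout output quoted above).
track_seconds = 3 * 60 + 53        # 3:53 track length
encode_seconds = 4505.755          # "Took 4505.755 sec."
speed = track_seconds / encode_seconds   # roughly 0.05x realtime

bytes_saved = 29494475 - 29240222        # roughly 254 kB
```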

Edit: Flacout removes ALL metadata. Something of note.

FLACOUT from Ken Silverman

Reply #6
Took 3 hours to reduce the file size by 511 KB. Not very efficient if you ask me...

FLACOUT from Ken Silverman

Reply #7
This was very daunting and laborious.

I picked the shortest track I could think of from my personal rips. It was classical music (Douglas Pipes - "Trick'R'Treat (2007) Original Motion Picture Soundtrack": track 2, "Meet Charlie", 0:45).

It took 1027.011 sec (17+ min) and only reduced the file size marginally (63,183 bytes' difference, see pic below).
(I should have tried a heavier track instead of simple classical orchestration.)
I do notice, as Ferongr mentioned, that all the metadata was stripped.

MediaInfo reveals nothing about the writing library.
Q-Dir (a quad-pane file manager I use instead of Windows Explorer) doesn't read any data at all, except file size.
Length and bitrate don't appear for the FLACOUT encode.


I have an Intel Core i5-2320 @ 3.00 GHz, 4 logical cores.
It only consumed, at most, 20-30-ish% CPU usage and a single thread (forgot I had other tasks running in the background; not that they ever impacted overall performance at any single time).
I also have 2 SSDs where I keep the original FLAC file and the encoded FLAC file.

Most of my music collection consists of motion picture scores (classical), so I don't see a huge impact on saving space with this.
The reduced file size is no real benefit when a much more apt solution would be to get another external drive to back up my backups.
I can't see myself going through the trouble of tagging all over again.
I don't like automatic tagging as it relies heavily on what other people submit, and I rarely like what anyone else does.

I just started looking into alternatives for FLAC encodes, and have yet to try FLACCL.

I can safely cross FLACOUT off my list and move on.

Thanks for the share and interest, but alas, it is short lived.
I like to use "HD audio" in PaulStretch. "HD audio", lol.

FLACOUT from Ken Silverman

Reply #8
Here, Avira reports it as a "TR/Crypt.XPACK.Gen" virus. Maybe a false positive. I wanted to test against FLACCL; it can create smaller files already with the default -8, without using anything non-subset-compatible or a variable blocksize. Anyone tried this? FLACCL works nicely with integrated Intel HD graphics, if you didn't know.
Is troll-adiposity coming from feederism?
With 24bit music you can listen to silence much louder!

FLACOUT from Ken Silverman

Reply #9
^^^ Probably because the file is (I think) a compressed binary.

Not too surprising that the author would pack the binary, given that he writes utilities for repacking compressed files.

FLACOUT from Ken Silverman

Reply #10
Never understood the point of packing executables. To waste memory when executing?
"I hear it when I see it."

FLACOUT from Ken Silverman

Reply #11
I did run it and my PC didn't explode, so most likely no virus.

It took 1343 sec. to encode a 4:28 min song with /sub and a fixed blocksize of 4096, running on my Intel Ivy @4400.
Running with only /sub, it stopped working 3 times at exactly the same 98.1% with this same file.
I used /sub and a blocksize of 4096 to compare it directly to flac and flacCL.
The result is a bit disappointing. No more testing planned.

flacCL -8            23.350.986 Bytes
flac 1.3.0 -8      23.386.104 Bytes
flacout /sub /v4096   23.358.609 Bytes

FLACOUT from Ken Silverman

Reply #12
Never understood the point of packing executables. To waste memory when executing?


With authors like Ken Silverman and Ian Luck, it's likely to protect their secrets slightly better than no compression at all.

With the demoscene, it's always about compressing the most into a tiny executable, since they have categories for intros by KB size. Frequently, the compression tools they use, like Beropacker and .kkrunchy, set off at least a dozen AV scanners with every executable they spit out, and often take 5-10 seconds to load on any system with a heuristics-based real-time scanner installed.

FLACOUT from Ken Silverman

Reply #13
flacout /sub /v4096   23.358.609 Bytes

Sorry, it was late here. Correctly it must be: flacout /sub /b4096

FLACOUT from Ken Silverman

Reply #14
I've been wondering whether using a variable blocksize could lead to smaller FLAC files, but it seems this implementation is no proof. Flake didn't do too well with variable blocksizes either, but according to Justin (the Flake developer), the algorithm wasn't really tuned or smart. Maybe a possibility for FLACCL to look into?

FLACOUT from Ken Silverman

Reply #15
If I remember right, it was part of the FLAC spec, but Mr. Coalson himself found not enough advantage in using it. It most likely would have broken playback on many devices and players if pushed into a release.

Re: FLACOUT from Ken Silverman

Reply #16
Necroposting because I don't think flacout needs a new thread. It's been a few years, and some upgrades to reference FLAC. I ran a test over the week. Only 2 minutes from each of my 38 CDs, not much material.
 

TL;DR:
  • No use in this for size - it loses to flac 1.5.0 -8per15 -l32 on average and on most material. It loses on size subset-to-subset as well, even without beefing up 1.5.0 with additional -A options.
  • And it is sloooow. 8/15/25/44? hours on a corpus that flac -8 (single-threaded, yes!) does in less than half a minute.
  • But, sure, good that there exists another independent variable-blocksize implementation.


What I did:
Extracted two minutes (starting at the 5-minute mark) from each of the 38 CDs. No more, because I knew flacout was slow (but I didn't know how slow).
Ran flacout through various settings (subset and non-subset) and flac through various versions and settings; timed (with echo:|time), used metaflac to remove all metadata, and then compared sizes.
Note, 1.5.0 was only run single-threaded. Other flac versions run: 1.3.2 and 1.2.1. All on the same Windows machine. CPU: i5-7500T, not much juice but not ancient.


- "Key results" - "percent" means percent, not percentage points.
  • If you don't require subset, then 1.5.0 at -8per15 -l32 --lax made 0.15 percent smaller files than full-compression flacout.
    1.5.0 did better at all my three genre divisions, but "only" at 23 of 38 files. Biggest impact: 2.6 percent(!) for 1.5.0, and 0.8 percent for flacout.
    1.5.0 took nearly 3 hours, which is peanuts compared to flacout; honestly I did not expect flacout to be that slow, so I used only echo:|time, which doesn't record the date - but when it took 15 hours in subset, and 25 (obviously not 1!) hours at fixed blocksize, I think "44" is a better guess than "20" or "68".
    Had 1.5.0 lost, then I would have beefed it up with more windowing functions to get similar speeds, but now I don't bother. I could have tried that with 1.3.2, which lost to flacout by a similar figure - but nopes.
  • You could of course improve by selecting the smaller of the two for each of the 38 files; flacout will by default do that. That would gain 0.08 percent over -8per15 -l32 --lax.
  • If you require subset, then 1.5.0 at -8per8 beats flacout /sub by 0.3 percent. And fifty-four times faster.
  • flacout subset does beat "any 1.2.1 tried". Maybe I should have tested 1.3.0 (before the partial_tukey and punchout_tukey functions), but I don't have one at the moment, it wasn't officially released as a Windows exe, and who needs it except to test ...
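To make the "percent, not percentage points" convention concrete, here is the calculation on the sizes from Reply #5 (a plain relative difference, nothing flac-specific):

```python
def percent_smaller(reference, candidate):
    """Relative size difference in percent (not percentage points):
    how much smaller the candidate file is than the reference."""
    return (reference - candidate) / reference * 100.0

# Reply #5's sizes: flacout turned a 29,494,475-byte file into
# 29,240,222 bytes, i.e. roughly 0.86 percent smaller.
saving = percent_smaller(29494475, 29240222)
```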


- More results. flacout ran with /force (overwriting no matter whether it is worse!). Percent relative to the one above:
Smallest:   flac 1.5.0   at -8per15 -l32 --lax.   Took nearly 3 hours. 
+0.15%:   flacout   at full features - non-subset and variable block size. "44 hours" with the reservation mentioned ...
+0.14%:   1.3.2   at -8per15 -l32 --lax.
+0.07%:   flacout at fixed blocksize 4096, but not forcing subset. 25 hours.
+0.04%:   1.5.0   at -8per8 (subset!).    Seventeen minutes.
+0.10%:   1.5.0   at -8. Took twenty-five seconds.
+0.10%:   1.3.2   at -8per8.
+0.10%:   flacout   at subset (variable blocksize).   15 hours plus.
+0.02%:   flacout   at subset fixed blocksize 4096.   8 hours
+0.06%:   1.2.1    at -8per15 -l32 --lax.   Note 1.2.1's -8 did only one windowing function; nowhere near as efficient, but faster. (Also its -8 implied "-e".)
+0.02%:   1.3.2   at -8.   Half a minute!
+0.24%:   1.2.1   at -8.   Which had an implicit "-e". With only one windowing function I think it corresponds to 1.3.0's "-5er6 -l12".
Weirdly, 1.2.1 did worse at -8per8 than at -8.

Re: FLACOUT from Ken Silverman

Reply #17
And there also is this Exact Rice thing...

 

Re: FLACOUT from Ken Silverman

Reply #18
And there also is this Exact Rice thing...

OK, I re-ran: flaccid (hey @cid42 !) and 1.5.0 in the "Exact Rice" build (yours). And I can confirm that 44-hour figure for flacout: 0.03x realtime.

 * Exact Rice takes 0.04% to 0.06% off here. 0.01 of that is due to two particular samples. Also I did a run with several apodization functions to slow it down, just to compare to flacout's subset setting, which took 15 hours. This is apples vs. oranges, since Exact Rice ran --lax: -l32 -pe -r15 -A <lotsofthem> took 17 hours. And clocked in best, "of course" - though in one genre division it got beaten by flaccid.

 * flaccid (hey @cid42 !) from this build could, within subset, beat any flacout without hacking around with apodization functions, using this command:
--mode peakset --blocksize-list 768,1536,2304,3072,3840,4608 --analysis-comp 8 --output-comp 8p --queue 8192 --tweak 1 --merge 0
Took 27 minutes. Beat the heaviest flacout on average, at two of the three genre divisions (classical being the exception, most certainly due to longer-than-subset blocks), and at 24 of 38 samples. Also, in the "other" genre section it beat the heaviest 1.5.0 I tried (see below).
First I actually ran the first in the "Probably not worth it" list at https://hydrogenaudio.org/index.php/topic,123248.msg1022236.html#msg1022236 , that is, replaced the "8p" by "8pe" and the "8" by "8p"; I admit I chose that out of curiosity whether it could beat flacout. Yes it did, on the same 24 of 38 samples, improving 0.03 percent over the setting above, which I then ran subsequently. That heavier run took an hour and a half.

(Wouldn't an "easy fix" for flaccid as a re-compressor of FLAC files be to copy over the source file's frame if flaccid cannot improve upon it? It wouldn't guarantee to always improve, since the coded number takes more space in a variable-size frame, but that's nitpickery.)
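That "copy over the frame if no improvement" idea amounts to a keep-the-smaller fallback per frame; a toy sketch, with a hypothetical reencode callable standing in for the per-frame encoder:

```python
def recompress_frame(original_frame, reencode):
    """Keep whichever encoding of a frame is smaller: the source file's
    original bytes or the re-encoded candidate. `reencode` is a
    hypothetical callable standing in for a per-frame encoder."""
    candidate = reencode(original_frame)
    return candidate if len(candidate) < len(original_frame) else original_frame
```

With this fallback, the recompressed file can never be larger than the input frame-by-frame (header bookkeeping aside).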


Other notes:
 * Even admitting --lax, partition order 8 was the highest actually found by the 1.5.0 exact or inexact build (didn't try flacout nor <1.5.0).
Order 8 means 2^8 = 256 partitions, which for blocksize 4096 means 16 samples in a partition before it spends 4 bits changing the parameter. A few order 8s were found in Mozart (classical music, yes), Kraftwerk (only a few) and The The (ditto) - and lots of them in the Emperor clip. Mind, only two minutes from each.
"-r15" would never be for real at blocksize 4096, where the theoretical max is order 12, and that is only possible when the predictor is "0" and it pays off to spend 4 bits for each sample to encode its Rice exponent. The encoder limits it by itself, so typing "15" on auto-pilot shouldn't change anything.
 * Even if I could get the "Exact Rice" build down to 0.2 points better than flacout, there are still 9 out of 38 signals where flacout does better. For six of those, flacout also beats flaccid, so it is not only the variable block size - but it could be the option to choose a high block size.
 * As for the sample that made the biggest impact - two minutes of the Jordan Rudess synth keyboard - it is big, and 1.5.0 inexact at -8 is good enough to beat flacout's slowest setting. 1.5.0 (inexact) at -8 improved seven percent over 1.3.2 at -8, likely from going to double-resolution calculations. Boosting 1.5.0 exact to ultra-slow non-subset, it took 3 percent off the still-twice-as-slow flacout (also non-subset).
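The partition arithmetic in the notes above can be written out. This is a sketch of the size constraints only (each partition must divide the blocksize evenly, the first must still hold more samples than the predictor's warm-up, and the 4-bit header field caps the order at 15), not flac's actual cost-based choice of order:

```python
def max_partition_order(blocksize, predictor_order=0):
    """Highest usable Rice partition order for one frame, given only
    the structural constraints: 2**order must divide the blocksize,
    the first partition must hold more than predictor_order samples,
    and the 4-bit field limits the order to 15."""
    order = 0
    while (order < 15
           and blocksize % (1 << (order + 1)) == 0
           and (blocksize >> (order + 1)) > predictor_order):
        order += 1
    return order

# Blocksize 4096: order 8 means 2**8 = 256 partitions of 16 samples
# each, and the structural ceiling is order 12 (one sample per
# partition), reachable only with a zero-order predictor.
partitions_at_order_8 = 1 << 8       # 256
samples_per_partition = 4096 >> 8    # 16
```

This matches the note above: -r15 can never bind at blocksize 4096, since the frame itself caps the order at 12.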

@sven_Bent , you mentioned flacout recently. Looks like you have better alternatives.