--- Miroslav Lichvar <lichvarm@phoenix.inf.upol.cz> wrote:
> On Thu, Oct 17, 2002 at 09:51:02AM -0500, Brady Patterson wrote:
> > ... But it seems to me that you could make a decent guess about
> > when something "new" happens based on the second derivative of the
> > signal (where the first derivative is the difference between a
> > given sample and the previous one, and the second is
> > you-get-the-idea).
> >
> > Here's my rationale: high-amplitude, high-frequency sections are
> > the hard ones to encode, or at least will work best in their own
> > frame. Those characteristics imply a high first derivative. You
> > want to put such sections in their own block, and the boundaries
> > of such blocks will be where the second derivative is relatively
> > high.
> >
> > Okay, that's not quite right, since the first derivative will be
> > negative about half the time, and a large negative value has the
> > same effect as a large positive one. So I think what you really
> > want is the first derivative of the absolute value of the first
> > derivative.
> >
> > Then there's the question of where to put the boundaries. Some
> > trial-and-error is probably the best approach here. For files on
> > which the above formula is consistently high, it will probably be
> > desirable to set the limit high to avoid too much frame overhead.
>
> Well, I took 10 CDs and tested my primitive implementations of these
> algorithms. Here are my results:
>
>             size                    encoding time
> (0)  6401778544  1.0000
> (1)  4193699407  0.6551  1.0000    1.00
> (2)  4180011683  0.6529  0.9967    1.18
> (3)  4186509853  0.6540  0.9983    1.15
>
> "best" CD:
> (0)   503448568  1.0000
> (1)   349525363  0.6942  1.0000
> (2)   347167639  0.6896  0.9933
> (3)   347864119  0.6910  0.9952
>
> "best" track:
> (0)    44111804  1.0000
> (1)    28091683  0.6368  1.0000
> (2)    27769870  0.6295  0.9885
> (3)    27864205  0.6317  0.9919
>
> where:
> (0) wav files
> (1) flac files, fixed blocksize 4608
> (2) flac files, variable blocksize, "lpc idea"
> (3) flac files, variable blocksize, watching the average of the
>     absolute values of the first and second derivatives

Interesting, looks like the best case is ~0.75% better compression
for an 18% increase in encode time. The compression increase is
similar to my old brute-force test but much faster. The question is,
is it worth it from the user's point of view?

Josh
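Brady's heuristic translates fairly directly into code. Here is a
minimal sketch in C++ (the language of Miroslav's flac-bf.cc) of one
way it could look, assuming 16-bit PCM samples from a single channel;
the window size and threshold are illustrative placeholders, not
values from either poster's implementation:

    // Sketch of the boundary heuristic described above: find places
    // where the signal "activity" (the absolute value of the first
    // derivative) changes sharply, i.e. where the first derivative of
    // that activity is large.  Window and threshold are assumed
    // tuning parameters.
    #include <cstdlib>
    #include <vector>

    std::vector<size_t> find_boundaries(const std::vector<int> &samples,
                                        size_t window,   // smoothing window
                                        long threshold)  // boundary trigger
    {
        std::vector<size_t> boundaries;
        if (samples.size() < 3 || window == 0)
            return boundaries;

        // d1[i] = |samples[i+1] - samples[i]|, the absolute first derivative
        std::vector<long> d1(samples.size() - 1);
        for (size_t i = 0; i + 1 < samples.size(); i++)
            d1[i] = std::labs((long)samples[i + 1] - (long)samples[i]);

        // Running sum of d1[i+1] - d1[i] (the derivative of the activity)
        // over the last `window` values; flag a boundary when its average
        // exceeds the threshold.
        long sum = 0;
        for (size_t i = 0; i + 1 < d1.size(); i++) {
            sum += d1[i + 1] - d1[i];
            if (i >= window) {
                sum -= d1[i + 1 - window] - d1[i - window];
                if (sum / (long)window > threshold)
                    boundaries.push_back(i + 1);
            }
        }
        return boundaries;
    }

A real encoder would still need to thin this list out, since frame
headers make very short blocks counterproductive, which is exactly
the frame-overhead concern Brady raises above.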
On Sun, Oct 20, 2002 at 11:50:12PM -0700, Josh Coalson wrote:
> Interesting, looks like the best case is ~0.75% better compression
> for an 18% increase in encode time. The compression increase is
> similar to my old brute-force test but much faster. The question
> is, is it worth it from the user's point of view?

Here is another test. I've rewritten my brute-force util; it is much
faster (these two albums took about 8 hours) and the decrease in
compression ratio isn't very big. The first is one of the "worse"
albums, where there was previously no improvement, and the second is
the "best" album from my previous test.

tr.        raw            flac -8          flac-vbs -8 (bf)     diff
----------------------------------------------------------------------
01     42465360   32921806 (0.7753)   32797052 (0.7723)   0.00294
02     49309680   35592614 (0.7218)   35360927 (0.7171)   0.00470
03     44892624   31174614 (0.6944)   30756995 (0.6851)   0.00930
04     48933360   35896765 (0.7336)   35544467 (0.7264)   0.00720
05     46223856   32552195 (0.7042)   31966877 (0.6916)   0.01266
06     54512304   38597716 (0.7081)   38183807 (0.7005)   0.00759
07     62233920   47307456 (0.7602)   47103582 (0.7569)   0.00328
08     50081136   35248709 (0.7038)   34802683 (0.6949)   0.00891
09     46722480   34866768 (0.7463)   34649209 (0.7416)   0.00466
10    105181440   66135505 (0.6288)   65403955 (0.6218)   0.00696
11     42140784   31847746 (0.7557)   31632150 (0.7506)   0.00512
12     39497136   26761582 (0.6776)   26599003 (0.6734)   0.00412
13     41717424   29667034 (0.7111)   29427846 (0.7054)   0.00573
14     60982656   41929582 (0.6876)   41659950 (0.6831)   0.00442
15     41465760   29593363 (0.7137)   29311228 (0.7069)   0.00680
1-15  776359920  550093455 (0.7086)  545199731 (0.7023)   0.00630

01     30964080   21620551 (0.6982)   21329002 (0.6888)   0.00942
02     38984400   26756131 (0.6863)   26376519 (0.6766)   0.00974
03     33831168   26303654 (0.7775)   26101924 (0.7715)   0.00596
04     56497392   37413032 (0.6622)   36704124 (0.6497)   0.01255
05     25756752   19550862 (0.7591)   19305960 (0.7495)   0.00951
06     30611280   15424648 (0.5039)   15245573 (0.4980)   0.00585
07     36637104   23893567 (0.6522)   23538606 (0.6425)   0.00969
08     35258832   24666148 (0.6996)   24323401 (0.6899)   0.00972
09     24587808   18761332 (0.7630)   18574225 (0.7554)   0.00761
10     29470560   21236888 (0.7206)   20987622 (0.7122)   0.00846
11     44111760   27948908 (0.6336)   27319813 (0.6193)   0.01426
12     23138976   18018696 (0.7787)   17821689 (0.7702)   0.00851
13     44027088   32130608 (0.7298)   31623073 (0.7183)   0.01153
14     49570752   34641760 (0.6988)   34184309 (0.6896)   0.00923
1-14  503447952  348366785 (0.6920)  343435840 (0.6822)   0.00979

So there is still big room for improvement. And I believe this test
doesn't show us the maximum we can get from variable blocksizes.

If anyone wants to help me find the right procedure, my hacks are
here:
http://phoenix.inf.upol.cz/~lichvarm/flac-vbs/flac-bf.cc
http://phoenix.inf.upol.cz/~lichvarm/flac-vbs/flac-vbs.patch

--
Miroslav Lichvar
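Since FLAC frames are encoded independently, a brute-force search
over blocksizes can be organized as dynamic programming over frame
boundaries rather than trying every partition explicitly. The sketch
below shows that shape; it is only a guess at how a tool like
flac-bf.cc might be organized, and encoded_size() plus the candidate
blocksize list are assumptions, not Miroslav's actual code:

    // For each prefix of the signal, keep the smallest total encoded
    // size reachable by any sequence of candidate-sized frames ending
    // there.
    #include <cstddef>
    #include <limits>
    #include <vector>

    // Assumed callback: compressed size in bytes of one frame
    // covering samples [start, start + len).
    typedef size_t (*FrameSizeFn)(size_t start, size_t len);

    static const size_t kSizes[] = {576, 1152, 2304, 4608};  // placeholders

    size_t best_total_size(size_t total_samples, FrameSizeFn encoded_size)
    {
        const size_t INF = std::numeric_limits<size_t>::max();
        std::vector<size_t> best(total_samples + 1, INF);
        best[0] = 0;
        for (size_t i = 0; i < total_samples; i++) {
            if (best[i] == INF)  // position unreachable by any frame split
                continue;
            for (size_t s = 0; s < sizeof(kSizes) / sizeof(kSizes[0]); s++) {
                size_t len = kSizes[s];
                if (i + len > total_samples)
                    len = total_samples - i;  // final, shortened frame
                size_t cost = best[i] + encoded_size(i, len);
                if (cost < best[i + len])
                    best[i + len] = cost;
            }
        }
        // Recovering the boundaries themselves just needs a parent array.
        return best[total_samples];
    }

The runtime is dominated by the trial encodings behind
encoded_size(), which would be consistent with the roughly 8 hours
the two albums took here.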
But it all sounded so good... :)

Thanks for trying it though.

--
Brady Patterson (brady@spaceship.com)
How come I can't hurt this damn turtle?

On Tue, 22 Oct 2002, Miroslav Lichvar wrote:
> You are right, with such results it isn't. And these files are
> non-Subset anyway. It was just an experiment.
On Sun, Oct 20, 2002 at 11:50:12PM -0700, Josh Coalson wrote:
> Interesting, looks like the best case is ~0.75% better compression
> for an 18% increase in encode time. The compression increase is
> similar to my old brute-force test but much faster. The question
> is, is it worth it from the user's point of view?

You are right, with such results it isn't. And these files are
non-Subset anyway. It was just an experiment.

--
Miroslav Lichvar