On Sun, 27 Apr 2003, Josh Coalson wrote:
>
> Paul, cool to see you here... I thought bonk was in limbo
> since I couldn't even see the homepage for a while.
>

There's a reason that web server is called yoyo... Bonk was a neat idea
that turned out to be not too practical, so I haven't been working on it
much.

>
> I actually did study bonk a little when I discovered it. I
> didn't go far enough to attribute the compression differences
> to prediction vs. residual. Frankly your LPC analysis method
> was over my head but I'm guessing the advantages have more to
> do with the lossy aspect.
>

There's not much to it: essentially, the noise introduced by a linear
predictor can be made to nicely match the perceptual masking effects you
get with hearing, if the predictor is *really* large. Really large
predictors are quite slow, so it's not a very practical approach. There's
also some tricky stuff in there for correcting the quantization of
reflection coefficients by adjusting later coefficients: quantization of
the predictor is interleaved with the actual calculation of coefficients.

> The residual coding method was interesting. I guess I could
> try popping it in. The litmus test really for new methods is
> 1) is it unencumbered by patents; 2) is it fast enough decoding
> to meet FLAC goals. #2 is easy enough to verify; #1 is always
> tricky.
>

Well, no one's sent me any nasty letters yet. I didn't base it on any
existing technique (except very loosely on Rice coding), and haven't
found anything similar since then.

> Cool, this doc is new to me. I'll take a look.

Should have written bonk up ages ago, but various events intervened :-(

On Sun, 27 Apr 2003, Miroslav Lichvar wrote:

> However, decoding will not be very fast. I think it is possible to
> speed up bonk implementation (with some additional tables), but
> probably it will never be as fast as decoding of rice codes.

Agreed, there's too much juggling going on to compete with rice codes, but
maybe it can run at an acceptable speed with a bit of tuning... did you
tell it to write the least significant bits separately (set the parameter
base_2_part=true)? Also altering the algorithm so that low_bits is larger
will produce a speedup (but impact the compression ratio). And there's a
fair scope for optimization.

Overall, the speed (after encoding the least significant bits and dividing
them out) is going to be of the order of the sum of the integers to be
encoded.

Paul

Email: pfh@csse.monash.edu.au
one ring, no rulers, thecircle.org.au
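For concreteness, here is a rough C++ sketch of the base-2 split Paul
refers to (the effect of base_2_part=true): write the low_bits least
significant bits of each residual verbatim and hand only the reduced high
part to the slow adaptive coder. The BitWriter and the separate sign bit
are stand-ins for illustration, not bonk's actual bitstream layout.

  #include <cstdlib>
  #include <vector>

  // Hypothetical bit sink for illustration; bonk's real bitstream
  // writer looks different.
  struct BitWriter {
      std::vector<int> bits;
      void put(int b) { bits.push_back(b); }
  };

  // Split one residual into low_bits raw least-significant bits
  // (written verbatim, cheap to decode) plus a sign bit, and return the
  // high part that would be handed to the adaptive coder.
  static int split_residual(BitWriter &bw, int x, int low_bits) {
      int magnitude = std::abs(x);
      for (int i = 0; i < low_bits; ++i)
          bw.put((magnitude >> i) & 1);   // raw base-2 part
      bw.put(x < 0 ? 1 : 0);              // sign, stored separately
      return magnitude >> low_bits;       // remainder for the adaptive coder
  }

Since the adaptive stage's cost grows with the sum of the values it sees,
dividing out the low bits first shrinks that sum by roughly a factor of
2^low_bits, which is the speed/ratio trade-off being discussed here.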
On Sun, Apr 27, 2003 at 07:28:48PM +1000, Paul Harrison wrote:
> On Sun, 27 Apr 2003, Miroslav Lichvar wrote:
>
> > However, decoding will not be very fast. I think it is possible to
> > speed up bonk implementation (with some additional tables), but
> > probably it will never be as fast as decoding of rice codes.
>
> Agreed, there's too much juggling going on to compete with rice codes, but
> maybe it can run at an acceptable speed with a bit of tuning... did you
> tell it to write the least significant bits separately (set the parameter
> base_2_part=true)? Also altering the algorithm so that low_bits is larger
> will produce a speedup (but impact the compression ratio). And there's a
> fair scope for optimization.

Interestingly, for slightly larger low_bits the compression ratio is
sometimes better, at least in my case :). Maybe it takes more iterations
to adapt the step variable well at the beginning if low_bits is set too
low.

Is there something better than

  low_bits = bits_to_store(energy/(list.size()*2))

for an optimal speed/compression_ratio trade-off, maybe something with a
minimum and maximum value?

> Overall, the speed (after encoding the least significant bits and
> dividing them out) is going to be of the order of the sum of the integers
> to be encoded.

Ok, but in a bad case it has to loop too many (say 50) times over the
whole block. This is not going to be very fast.

-- 
Miroslav Lichvar
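To make the heuristic concrete, here is a hedged C++ sketch of the
expression Miroslav quotes, plus the clamped variant he is asking about.
bits_to_store below is a stand-in for bonk's helper of the same name,
energy is assumed to be the summed magnitude of the block's residuals,
and the min/max limits are illustrative parameters, not values from bonk.

  #include <cstdint>
  #include <vector>

  // Stand-in for bonk's bits_to_store(): bits needed to represent x.
  static int bits_to_store(uint32_t x) {
      int bits = 0;
      while (x) { ++bits; x >>= 1; }
      return bits;
  }

  // The heuristic quoted in the email; assumes a non-empty block.
  static int low_bits_plain(const std::vector<int> &list, uint32_t energy) {
      return bits_to_store(energy / (uint32_t)(list.size() * 2));
  }

  // Clamped variant along the lines Miroslav suggests: same formula, but
  // never below min_low and never above max_low.
  static int low_bits_clamped(const std::vector<int> &list, uint32_t energy,
                              int min_low, int max_low) {
      int low_bits = bits_to_store(energy / (uint32_t)(list.size() * 2));
      if (low_bits < min_low) low_bits = min_low;
      if (low_bits > max_low) low_bits = max_low;
      return low_bits;
  }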
On Mon, 28 Apr 2003, Miroslav Lichvar wrote:
>
> Interestingly, for slightly larger low_bits the compression ratio is
> sometimes better, at least in my case :). Maybe it takes more
> iterations to adapt the step variable well at the beginning if
> low_bits is set too low.
>

If the values being encoded are small, it's possible it is adapting to
changes in the frequency of particular values over time :-)

> Is there something better than
>
>   low_bits = bits_to_store(energy/(list.size()*2))

Well, the 2 here may be adjusted, maybe even
low_bits = bits_to_store(energy*whatever/list.size()) instead. There may
be a level that gives an optimal trade-off between adapting to the
distribution and adapting to changes over time. I haven't really
investigated this.

> for an optimal speed/compression_ratio trade-off, maybe something with
> a minimum and maximum value?

...

> Ok, but in a bad case it has to loop too many (say 50) times over the
> whole block. This is not going to be very fast.
>

Actually, that's just my shoddy unoptimized code. It should be possible
to make it always run in the order of the sum of the values, so the above
measure is quite good. The trick would be to keep a list of indices of
the items that are still above the current cutoff value, and remove
indices from this list as they fall below the cutoff.

Paul

Email: pfh@csse.monash.edu.au
one ring, no rulers, thecircle.org.au
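The index-list trick Paul sketches might look roughly like this in C++.
The coding rule below is a deliberately trivial unary-style placeholder
(one bit per active item per pass) rather than bonk's adaptive step
logic; the point is the bookkeeping: items drop off the active list once
they are finished, so the total work is of the order of the sum of the
values plus the block length, rather than passes times block length.

  #include <cstddef>
  #include <vector>

  // Illustration of the active-index-list bookkeeping, not bonk's coder.
  // Each pass visits only the indices still active; an item with value v
  // emits v one-bits and a terminating zero-bit, then stops being visited.
  static std::vector<int> encode_with_active_list(std::vector<int> values) {
      std::vector<int> out;               // stand-in for the bitstream
      std::vector<size_t> active;         // indices still being coded
      for (size_t i = 0; i < values.size(); ++i)
          active.push_back(i);            // every item starts active

      while (!active.empty()) {
          std::vector<size_t> still_active;
          for (size_t i : active) {
              if (values[i] > 0) {
                  out.push_back(1);       // item still has magnitude left
                  --values[i];
                  still_active.push_back(i);
              } else {
                  out.push_back(0);       // item is done, drop its index
              }
          }
          active.swap(still_active);
      }
      return out;                         // work ~ sum of values + block size
  }

A decoder can mirror the same active list, reading one bit per still-active
index each pass, so nothing extra needs to be stored to know which items
have finished.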