Hello,
I have improved the NHW codec a little. I have improved the dequantization
in the wavelet HH band, and slightly improved the -l2 (and lower) quality
settings. I have also added a -l3 lower quality setting (-15 KB, experimental):
I have increased the quantization for this setting, and I have started to
remove higher wavelet coefficients, but this is not optimal... The artifacts
start to be noticeable for some images at the -l3 setting (even if the
neatness is still good...), so I really need to better model the wavelet
coefficients (and/or predict them) for the lower settings (starting with -l3).
I will certainly have to look at the SPIHT and EZW algorithms. But a very
quick question: as far as I have seen, all the codecs that use prediction
(intra-block prediction or frequency-domain prediction) also use very
advanced (and impressive) context modeling and arithmetic coding. Would
this (impressive) entropy coding scheme be required to encode the
prediction residuals effectively, given that my codec does not use it?
Many thanks,
Raphael
2013/5/29 Raphael Canut <nhwcodec at gmail.com>
> Hello,
>
> I have finally added 2 lower quality settings for the NHW codec: -l1
> (-5 KB) and -l2 (-10 KB). I use a quantization of 0.935 and 0.88 (a kind
> of quantization), and I decrease residual coding on the first-order
> wavelet image.
>
> I have updated the demo page: http://nhwcodec.blogspot.com/ .
>
> These 2 lower quality settings are still experimental. If you could find
> the time, I would be interested in any opinion on this approach, and on
> whether it could be acceptable or not, because lower quality settings seem
> quite difficult with my algorithm... (I would prefer not to remove higher
> wavelet coefficients for these 2 settings...)
>
> Many thanks,
> Raphael
>