Hi again,

I have improved the precision of my codec (on both the encoder and decoder). I have also improved the -h1 quality setting. Source code and binaries are at http://nhwcodec.blogspot.com/.

I am still trying to improve the precision of my codec while keeping my low-complexity (fast) approach. I do not fully use the reference (and impressive) scheme of block prediction with different modes plus residual coding; I only apply residual coding to the first-order wavelet "image" (for a 512x512 image, for example, I only code the errors on the 256x256 wavelet "high-resolution" part), hence the lack of precision. I try to compensate for it with a little more neatness from the 5/3 wavelet filterbank, but it is true that precision seems visually more important.

Any opinion on this approach, or on the codec in general, is very welcome.

Many thanks again,
Raphael
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://lists.xiph.org/pipermail/theora/attachments/20120725/9386a3d4/attachment.htm
Hello,

Just a quick message to let you know that I have improved the precision of my codec (on both the encoder and decoder). I have also added 2 new test images to my demo page... I still have not made an image comparison, but I have re-tested Google WebP on more images, and WebP is very good.

I don't know what kind of interest projects like mine could have, but if you find time to take a quick look, any comment, suggestion, etc. is still very welcome.

Many thanks,
Raphael
Hello,

Just a very quick message to let you know that I have slightly improved my codec.

About the PSNR and SSIM measurements: yes, it is true that my codec has worse results than JPEG, WebP, etc. If I can try to justify it, my codec would have an additional "source of error" penalizing these measurements, which is maybe that it increases, a little, the neatness of the image (which certainly also includes some denoising...). So, to sum up very (very) quickly: JPEG and WebP would just decrease neatness, whereas my codec would decrease precision and increase neatness; but just decreasing precision would be an error... I am not an expert, though, so do not hesitate to correct me if you think it necessary. I also fully agree that precision is more important than neatness.

Many thanks,
Raphael
Hello,

I have not really advanced on my codec at the moment, but a very interesting thing I read recently is that there is normally no reason that frequency prediction would be more difficult than (2-D) spatial prediction (for images). For my codec, I have tried to predict small wavelet (HF) coefficients, but it is really very difficult and I do not have results. It is maybe harder to predict small wavelet coefficients because most of them are certainly related to the noise parts of the image (and are therefore random and not predictable); but for the others, not related to noise, even if they are only 5% of the total, if there is an algorithm to predict them, please do not hesitate to point it out to me.

I have also not compiled the Linux binaries for my codec; if someone has managed to compile them, please do not hesitate to let me know, that would be just great, and I could add them to my demo page.

Very quickly, to finish: I have also not worked on the other quality settings. For the next higher ones, I could start to add residual coding to correct errors on the 512x256 wavelet "high-resolution" part (for a 512x512 image), but the higher quality settings are maybe not the most difficult... The most difficult part is removing data, that is, the lower quality settings. As I would like to keep a good neatness of image, maybe I could add, for the lower quality settings, a segmentation and denoising function that would first remove data, even if the image then becomes more "flat"? But is that generally a good idea, or not? I do not know the current state of segmentation-based (and geometric) image compression, and I am really not skilled enough.

So there is really a big amount of work to do on my codec, really (really) too much for me, and of course I cannot spend this time if there is no intended usage for my codec... My codec does not seem to have attracted a lot of interest for now...
Many thanks again for your time,
Raphael
Hello,

I have just re-tested my codec with the YCbCr color space, used in JPEG for example, and it is true that YCbCr is better and more accurate (than YUV). Files are 10% larger, but the results are better, with more precision. Maybe I should use this color space by default? I have just added a YCbCr version on my demo page (this version can be compiled from the source code by selecting the YCbCr color space and by removing the x298 multiply for Y in the decoder...). Any comment is welcome.

About the frequency prediction and the small wavelet coefficients: it is certainly a wrong approach to try to predict small frequency coefficients (wavelets, DCT, ...) specifically, one by one, and that is certainly why I could not get any result... A better (and common) approach is certainly to predict a block of coefficients (frequencies) or pixels, and then study the variance of the residuals (prediction errors): if the variance of the residuals is smaller than the variance of the original block, then the prediction is good. I will have to study it, but it is rather complex...

Anyway, any comment on this new version is very welcome.

Many thanks again,
Raphael