VP3HoSwiYO
2004-Sep-11 13:23 UTC
[Theora-dev] Question about Huffman Tables in Setup Header
There is bit space in the bitstream header to put Huffman codes in. However, this space can hold only 80 Huffman code sets. It is divided into DC Huffman code sets and AC Huffman code sets, with only 16 choices in each of the DC and AC groups. If we want to use this space, we have to find the 80 frequency counts (Huffman code sets) with the best (or at least proper) performance from tens of thousands of frames. Does anyone have an idea or an algorithm to find them? VP3HoSwiYO
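For reference, a "Huffman code set" here is essentially the set of code lengths produced by running the standard Huffman construction on one frequency count. A minimal sketch in Python (the function name is mine, and this is the textbook construction, not Theora's encoder code):

    import heapq

    def huffman_code_lengths(freqs):
        # Heap entries: (merged frequency, tie breaker, member symbols).
        # Zero counts are clamped to 1 so every symbol still gets a code.
        heap = [(max(f, 1), i, [i]) for i, f in enumerate(freqs)]
        heapq.heapify(heap)
        lengths = [0] * len(freqs)
        tie = len(freqs)
        while len(heap) > 1:
            fa, _, a = heapq.heappop(heap)
            fb, _, b = heapq.heappop(heap)
            # Merging two subtrees adds one bit to every symbol in them.
            for s in a + b:
                lengths[s] += 1
            heapq.heappush(heap, (fa + fb, tie, a + b))
            tie += 1
        return lengths

    print(huffman_code_lengths([45, 13, 12, 16, 9, 5]))
    # -> [1, 3, 3, 3, 4, 4], which is enough to define a canonical code

The hard part is not building one code but choosing which 80 frequency counts to build the codes from.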
Timothy B. Terriberry
2004-Sep-11 17:05 UTC
[Theora-dev] Question about Huffman Tables in Setup Header
VP3HoSwiYO wrote:
> If we want to use this space, we have to find the 80 frequency counts
> (Huffman code sets) with the best (or at least proper) performance from
> tens of thousands of frames. Does anyone have an idea or an algorithm
> to find them?

This is a generic clustering problem. Clustering is NP-hard, but there are a number of reasonable algorithms, such as K-means based ones, that will give you an answer, if not necessarily the optimal one.
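As an illustration, here is a minimal Lloyd/K-means style sketch in Python (the names, the cross-entropy cost, and the +1 smoothing are all my assumptions, not Theora's actual training code). Each frame contributes one histogram of token frequencies, the "distance" from a histogram to a centroid is the number of bits the centroid's idealized code would spend coding it, and each group of table slots would get its own run, e.g. k=16 for the DC group and k=16 for the AC group:

    import math

    def cluster_histograms(hists, k=16, iters=20):
        # Seed with k arbitrary member histograms as initial centroids.
        centroids = [list(hists[j * len(hists) // k]) for j in range(k)]
        for _ in range(iters):
            # Idealized per-symbol code lengths -log2(p) per centroid,
            # with +1 smoothing so no symbol gets an infinite length.
            codelens = []
            for c in centroids:
                total = sum(c) + len(c)
                codelens.append([-math.log2((f + 1) / total) for f in c])
            # Assignment: the centroid whose code spends the fewest bits.
            groups = [[] for _ in range(k)]
            for h in hists:
                _, best = min(
                    (sum(n * l for n, l in zip(h, cl)), j)
                    for j, cl in enumerate(codelens))
                groups[best].append(h)
            # Update: a centroid becomes the elementwise sum of members.
            for j, g in enumerate(groups):
                if g:
                    centroids[j] = [sum(col) for col in zip(*g)]
        return centroids

Running the standard Huffman construction on each returned centroid then yields the 16 concrete code sets for that slot group.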
VP3HoSwiYO
2005-Feb-12 19:15 UTC
[Theora-dev] Question about Huffman Tables in Setup Header
I think that this mechanism is not desirable for either Theora developers or users.

It is important to optimize the Huffman codes. According to a magazine article, H.264 can decrease bitrate by about 10% by using arithmetic coding instead of Huffman coding. And once I showed you a comparison of VP3's original Huffman codes with a RangeCoder under a special condition; in that comparison the Huffman codes could not match the RangeCoder. Even so, Huffman coding is not inherently inferior to arithmetic coding: good Huffman codes like VP3's are always a match for it. But we have to recognize that VP3's Huffman codes perform unusually well. Can we imitate that performance easily?

To optimize the codebooks, a developer will need a lot of time and a lot of sample data, and it is difficult to evaluate a new idea immediately against the best Huffman codes. And if a developer cannot produce optimized Huffman codes, users will be forced either to reluctantly use poor codebooks or to do a 2-pass encode to minimize the size (bitrate).

It is easy to guess that creating such high-performance Huffman codes is very hard, and I am also apprehensive that this mechanism will be an obstacle to many developers.

VP3HoSwiYO
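To put a number on the Huffman-versus-arithmetic gap, one can compare the average Huffman code length for a distribution against its entropy, which is the bound a range or arithmetic coder approaches. A self-contained sketch in Python (the example distribution is invented, not measured VP3 data):

    import heapq, math

    def avg_lengths(freqs):
        total = sum(freqs)
        probs = [f / total for f in freqs]
        # Entropy: the per-symbol bit cost an arithmetic coder approaches.
        entropy = -sum(p * math.log2(p) for p in probs if p > 0)
        # Huffman code lengths via the usual two-smallest merge.
        heap = [(f, i, [i]) for i, f in enumerate(freqs) if f > 0]
        heapq.heapify(heap)
        lens = [0] * len(freqs)
        tie = len(freqs)
        while len(heap) > 1:
            fa, _, a = heapq.heappop(heap)
            fb, _, b = heapq.heappop(heap)
            for s in a + b:
                lens[s] += 1
            heapq.heappush(heap, (fa + fb, tie, a + b))
            tie += 1
        huffman = sum(p * l for p, l in zip(probs, lens))
        return entropy, huffman

    # For this skewed example the Huffman average stays within a few
    # percent of the entropy; a well-fitted Huffman code gives up little.
    print(avg_lengths([60, 20, 10, 5, 3, 2]))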