similar to: Theora Fast Quality Reduction Transcoding

Displaying 20 results from an estimated 2000 matches similar to: "Theora Fast Quality Reduction Transcoding"

2007 Aug 29
1
Fast quality reduction transcoding
Hi, After a quick read of the Theora spec, I became curious about the possibility of fast quality reduction of Theora videos. The idea is to decode through the Huffman and reverse prediction steps, and then to truncate the coefficients and reencode. My questions are: * Is this a reasonable way to reduce the quality and bitrate of a stream? Will it be comparable in quality to a complete
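The truncation step being asked about can be sketched independently of libtheora. Below is a minimal illustration in plain C (not libtheora code; the zig-zag order and block layout are generic, not necessarily Theora's exact scan): keep the first k coefficients of an 8x8 block in low-to-high-frequency order and zero the rest, which is the per-block operation a fast quality-reduction pass would perform before re-entropy-coding.

/* truncate_block.c -- a toy sketch of "fast quality reduction" by zeroing
 * high-frequency DCT coefficients.  This is NOT libtheora code; it only
 * illustrates keeping the first k coefficients in zig-zag order. */
#include <stdio.h>

#define BLOCK 8

/* Fill zz[0..63] with block indices in zig-zag (low to high frequency) order. */
static void build_zigzag(int zz[BLOCK * BLOCK]) {
  int n = 0;
  for (int d = 0; d < 2 * BLOCK - 1; d++) {
    for (int t = 0; t <= d; t++) {
      /* Walk each anti-diagonal, alternating direction so the scan snakes. */
      int i = (d % 2 == 0) ? d - t : t;
      int j = d - i;
      if (i < BLOCK && j < BLOCK)
        zz[n++] = i * BLOCK + j;
    }
  }
}

/* Keep the k lowest-frequency coefficients of one 8x8 block, zero the rest. */
static void truncate_coeffs(short coeffs[BLOCK * BLOCK], int k) {
  int zz[BLOCK * BLOCK];
  build_zigzag(zz);
  for (int n = k; n < BLOCK * BLOCK; n++)
    coeffs[zz[n]] = 0;
}

int main(void) {
  short block[BLOCK * BLOCK];
  for (int i = 0; i < BLOCK * BLOCK; i++)
    block[i] = (short)(64 - i);  /* dummy coefficient values */
  truncate_coeffs(block, 10);    /* keep DC + 9 lowest-frequency AC coefficients */
  for (int i = 0; i < BLOCK * BLOCK; i++)
    printf("%4d%s", block[i], (i % BLOCK == BLOCK - 1) ? "\n" : " ");
  return 0;
}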
2003 Jul 23
2
Question about converting VP3 to Ogg Theora
As I understand it, the current plan is to make it possible to losslessly transcode VP3 video to Theora video. In my experience, one of the "features" of VP3 is that it drops frames when there is little or no movement in a frame, or, if "drop frames" is enabled, when the data rate is getting too high. I understand that the way that VP3 does this in
2011 Mar 28
3
DCT in Theora
> I put debug code in a function in C, but the function oc_enc_fdct8x8() is not called. Why? There is no function oc_enc_fdct8x8. It's a macro, which usually calls a platform-specific version via _enc->opt_vtable.fdct8x8, though on some platforms, it will call a specific version directly (e.g., oc_enc_fdct8x8_x86_64sse2 on x86-64). All of the functions with platform-specific
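For readers unfamiliar with the pattern being described, here is a hedged sketch of vtable-style dispatch in C. The struct layout and names below are invented for illustration and are not the actual libtheora symbols; the point is only that the macro expands to a call through a function pointer chosen at init time, which is why a breakpoint on a literal oc_enc_fdct8x8 function never fires.

/* vtable_dispatch.c -- illustration of the opt_vtable dispatch style described
 * above.  The names here (enc_ctx, fdct8x8_c, fdct8x8_fast, ...) are made up
 * for the example and are not the real libtheora symbols. */
#include <stdio.h>

typedef struct enc_ctx enc_ctx;

/* Table of optimized entry points, chosen once when the encoder is set up. */
typedef struct {
  void (*fdct8x8)(const short in[64], short out[64]);
} opt_vtable;

struct enc_ctx {
  opt_vtable opt_vtable;
};

/* The "call" is a macro that indirects through the vtable, so the rest of the
 * encoder never needs to know which implementation was selected. */
#define oc_enc_fdct8x8(enc, in, out) ((enc)->opt_vtable.fdct8x8(in, out))

/* Portable C fallback (here just a copy, to keep the example short). */
static void fdct8x8_c(const short in[64], short out[64]) {
  for (int i = 0; i < 64; i++) out[i] = in[i];
  puts("fdct8x8_c called");
}

/* A stand-in for a platform-specific version (e.g. an SSE2 routine). */
static void fdct8x8_fast(const short in[64], short out[64]) {
  for (int i = 0; i < 64; i++) out[i] = in[i];
  puts("fdct8x8_fast called");
}

static void enc_init(enc_ctx *enc, int have_simd) {
  enc->opt_vtable.fdct8x8 = have_simd ? fdct8x8_fast : fdct8x8_c;
}

int main(void) {
  enc_ctx enc;
  short in[64] = {0}, out[64];
  enc_init(&enc, 1);              /* pretend SIMD support was detected */
  oc_enc_fdct8x8(&enc, in, out);  /* a breakpoint here lands in fdct8x8_fast */
  return 0;
}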
2002 Jul 30
8
rehuff [source attached]
Hi all, Yes, it's true. A new version of rehuff, the tool that losslessly compresses Vorbis files: one that is easy to compile, and that works with newer-than-two-years-ago streams, too! On 1.0 streams, you get about 3% size reduction, and the headers get _much_ smaller (which helps for fast-start network streams). Building it should be easy (you might have to add some -I and -L for
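As a rough illustration of where that size reduction comes from (this is not rehuff's actual algorithm, just the underlying idea): the generic codebooks shipped with an encoder assign codeword lengths that do not match any one stream's symbol statistics exactly, and the gap between the stream's empirical entropy and the average length of the existing code bounds what per-stream optimal Huffman codes can recover. A toy C sketch, using made-up symbol counts and code lengths (build with -lm):

/* code_gain.c -- toy estimate of how much a stream-specific code could save:
 * compare the total bits of a fixed code with the empirical entropy of the
 * observed symbol distribution.  Purely illustrative; rehuff itself rebuilds
 * the actual Vorbis codebooks. */
#include <math.h>
#include <stdio.h>

int main(void) {
  /* Hypothetical symbol counts observed in one stream, and the codeword
   * lengths its current (generic) codebook assigns to those symbols. */
  const long count[4]  = {900, 60, 30, 10};
  const int  oldlen[4] = {1, 3, 3, 3};
  long total = 0;
  for (int i = 0; i < 4; i++) total += count[i];

  double old_bits = 0.0, entropy_bits = 0.0;
  for (int i = 0; i < 4; i++) {
    double p = (double)count[i] / (double)total;
    old_bits += (double)count[i] * oldlen[i];
    entropy_bits += (double)count[i] * -log2(p);  /* ideal bits for symbol i */
  }
  printf("current code : %.0f bits\n", old_bits);
  printf("entropy bound: %.0f bits (max lossless gain %.1f%%)\n",
         entropy_bits, 100.0 * (1.0 - entropy_bits / old_bits));
  return 0;
}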
2001 Jan 23
4
rehuff
Hiya, Here are the sources to my "rehuff" program. ./rehuff in.ogg out.ogg does a lossless recoding of a vorbis stream. (It generates optimal huffman codes for the particular stream.) This code is meant for developers only, until someone is kind enough to provide good build and configure support for it. I won't. And no installation help questions please. There is a little patch
2008 Feb 28
1
Multi-thread Theora Decoder
Hi all, Does the Theora community have an interest in a multi-threaded decoder implementation? I'm starting to work with multi-threading, and I thought the Theora decoder is a good choice for me, because I have been working with it in an FPGA implementation and I have experience with the library. I'm thinking of working on the LoopFilter first. Do you think I could start with it, or is there a
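A minimal pthreads sketch of the kind of row-partitioned parallelism being proposed (this is not the libtheora loop filter or its API; the "filter" below is a stand-in): each worker processes a disjoint band of rows and the main thread joins them. A real deblocking filter would also have to handle dependencies at band boundaries.

/* mt_loopfilter.c -- sketch of splitting a per-row filter over worker threads.
 * Illustrative only; the "filter" is a trivial smoothing pass, not Theora's
 * loop filter.  Build: cc mt_loopfilter.c -lpthread */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

#define WIDTH   64
#define HEIGHT  48
#define NTHREAD 4

static unsigned char frame[HEIGHT][WIDTH];

typedef struct {
  int row_begin, row_end;  /* half-open band of rows this worker owns */
} band;

/* Stand-in "filter": a tiny horizontal smoothing pass on each owned row. */
static void *filter_band(void *arg) {
  const band *b = arg;
  for (int y = b->row_begin; y < b->row_end; y++)
    for (int x = 1; x + 1 < WIDTH; x++)
      frame[y][x] = (unsigned char)((frame[y][x - 1] + 2 * frame[y][x] +
                                     frame[y][x + 1]) / 4);
  return NULL;
}

int main(void) {
  for (int y = 0; y < HEIGHT; y++)
    for (int x = 0; x < WIDTH; x++)
      frame[y][x] = (unsigned char)(rand() & 0xff);  /* dummy frame data */

  pthread_t tid[NTHREAD];
  band bands[NTHREAD];
  int rows_per = (HEIGHT + NTHREAD - 1) / NTHREAD;
  for (int t = 0; t < NTHREAD; t++) {
    bands[t].row_begin = t * rows_per;
    bands[t].row_end = (t + 1) * rows_per > HEIGHT ? HEIGHT : (t + 1) * rows_per;
    pthread_create(&tid[t], NULL, filter_band, &bands[t]);
  }
  for (int t = 0; t < NTHREAD; t++)
    pthread_join(tid[t], NULL);

  printf("filtered %dx%d frame with %d threads\n", WIDTH, HEIGHT, NTHREAD);
  return 0;
}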
2007 Aug 25
1
Theora vs MPEG vs H264
Hi all, I have to compare the Theora codec with MPEG and H264. I was googling and found that PSNR is a commonly used parameter. How can I do this with Theora? Thanks -- Leonardo de Paula Rosa Piga Undergraduate Computer Engineering Student LSC - IC - UNICAMP http://www.students.ic.unicamp.br/~ra033956
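PSNR itself is easy to compute once both the reference and the decoded video are available as raw samples (for example the Y plane of each frame); per-frame values are then typically averaged over the sequence. A minimal sketch for 8-bit samples, with dummy buffers standing in for decoded frames (build with -lm):

/* psnr.c -- compute PSNR between two 8-bit sample buffers (e.g. Y planes).
 * Minimal sketch: real comparisons run this per frame over decoded YUV data. */
#include <math.h>
#include <stddef.h>
#include <stdio.h>

/* PSNR in dB for 8-bit samples: 10*log10(255^2 / MSE). */
static double psnr_8bit(const unsigned char *ref, const unsigned char *dec,
                        size_t n) {
  double sse = 0.0;
  for (size_t i = 0; i < n; i++) {
    double d = (double)ref[i] - (double)dec[i];
    sse += d * d;
  }
  if (sse == 0.0) return INFINITY;  /* identical buffers */
  return 10.0 * log10(255.0 * 255.0 * (double)n / sse);
}

int main(void) {
  unsigned char ref[256], dec[256];
  for (int i = 0; i < 256; i++) {
    ref[i] = (unsigned char)i;
    dec[i] = (unsigned char)(i ^ 1);  /* pretend the codec flipped the LSB */
  }
  printf("PSNR = %.2f dB\n", psnr_8bit(ref, dec, sizeof ref));
  return 0;
}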
2017 Jan 27
2
Linking Linux kernel with LLD
On Tue, Jan 24, 2017 at 11:29 AM, Rui Ueyama <ruiu at google.com> wrote: > Well, maybe, we should just change the Linux kernel instead of tweaking > our tokenizer too hard. > This is silly. Writing a simple and maintainable lexer is not hard (look e.g. at https://reviews.llvm.org/D10817). There are some complicated context-sensitive cases in linker scripts that break our approach
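For context, the kind of simple hand-written lexer being argued for can be sketched in a few dozen lines. The code below is an illustration only, not LLD's lexer: it skips whitespace, then greedily consumes either a quoted string, a run of "word" characters (roughly the character class quoted elsewhere in this thread), or a single punctuation character.

/* tiny_lex.c -- toy tokenizer in the spirit of the discussion above; it is
 * not LLD's lexer, just an illustration of a small hand-written one.  It
 * splits a linker-script-like string into quoted strings, word tokens, and
 * single-character punctuation. */
#include <ctype.h>
#include <stdio.h>
#include <string.h>

/* Characters that may appear inside an unquoted word token. */
static int is_word_char(int c) {
  if (c == '\0') return 0;  /* never treat the terminator as a word char */
  return isalnum(c) || strchr("_.$/\\~=+[]*?-:!<>", c) != NULL;
}

static void tokenize(const char *p) {
  while (*p) {
    if (isspace((unsigned char)*p)) { p++; continue; }
    if (*p == '"') {                                  /* quoted string token */
      const char *start = ++p;
      while (*p && *p != '"') p++;
      printf("STRING  %.*s\n", (int)(p - start), start);
      if (*p) p++;
    } else if (is_word_char((unsigned char)*p)) {     /* word token */
      const char *start = p;
      while (is_word_char((unsigned char)*p)) p++;
      printf("WORD    %.*s\n", (int)(p - start), start);
    } else {                                          /* single punctuation */
      printf("PUNCT   %c\n", *p);
      p++;
    }
  }
}

int main(void) {
  tokenize("SECTIONS { .text : { *(.text) } > ram }");
  return 0;
}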
2017 Jan 24
5
Linking Linux kernel with LLD
> Our tokenizer recognizes [A-Za-z0-9_.$/\\~=+[]*?\-:!<>]+ as a token. gold uses more complex rules to tokenize. I don't think we need rules that complex, but there seems to be room to improve our tokenizer. In particular, I believe we can parse the Linux kernel's linker script by changing the tokenizer rules as follows.
2007 Jan 17
1
Tokenizers?
Hi everyone. First a quick word - I am relatively new to Ruby and Ruby on Rails, but I love learning about it and using it. Currently I am working on extending Boxroom (a file repository RoR app) for the CARE Indonesia intranet, where I work as an intern. I am using ferret, and it's working great. I noticed that if a file contains something like this "applications/entries", this
2013 Jun 25
2
Sourceforge pages (was: Even more brands for links and sourceforge pages)
On 05-06-13 00:27, Erik de Castro Lopo wrote: > Martijn van Beurden wrote: >> Considering flac.sourceforge.net, is this ever going to be updated? In case it should be redirected, I checked on my own sourceforge project webpage, adding the following two lines to .htaccess should redirect traffic to any resource on flac.sourceforge.net to xiph.org/flac
2003 Apr 24
2
Huffman decompression
Hello! A question to all 'Wheel-reinventers': I can build the Huffman trees by hand (on paper), but how do I code it? Are there any good URLs out there? Or does the spec supply sufficient information? I tried to figure out how oggdec does this (by debugging), but I couldn't get the gist of it. Thank you, Dominik
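One way to move from paper to code is to represent the tree explicitly and walk it one bit at a time. The sketch below is generic Huffman decoding, not the Vorbis codebook format (Vorbis builds its trees from codeword lengths in the setup header), but it shows the basic mechanics:

/* huff_walk.c -- toy Huffman tree and bit-by-bit decoder.  Generic
 * illustration only; not the Vorbis or Theora codebook machinery. */
#include <stdio.h>

typedef struct node {
  int symbol;             /* valid only at leaves */
  struct node *child[2];  /* child[0] = bit 0, child[1] = bit 1; NULL at leaves */
} node;

/* Decode one symbol by walking from the root, consuming one bit per step. */
static int decode_symbol(const node *root, const char **bits) {
  const node *n = root;
  while (n->child[0] != NULL) {  /* internal node: keep walking */
    int b = (**bits == '1');
    n = n->child[b];
    (*bits)++;
  }
  return n->symbol;
}

int main(void) {
  /* Hand-built tree for the code  A=0, B=10, C=11. */
  node A = {'A', {NULL, NULL}};
  node B = {'B', {NULL, NULL}};
  node C = {'C', {NULL, NULL}};
  node bc = {0, {&B, &C}};
  node root = {0, {&A, &bc}};

  const char *bits = "0101100";  /* 0|10|11|0|0 decodes to A B C A A */
  while (*bits)
    putchar(decode_symbol(&root, &bits));
  putchar('\n');
  return 0;
}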
2003 Apr 08
6
bitpeeler
No offense, Segher, but the output quality of this thing is awful. =) I'll disregard the fact that, at least with *my* compiler, the source tarball I downloaded reduces every packet to zero bytes, which isn't terribly interesting. I decided to set the byte reduction to something constant: I started by dividing each packet's size by 2 just to see what would happen. The resulting ogg
2010 Dec 10
2
Bitstream encoded huffman tables always the same
Hello all, I've been working a little inside the Theora decoder and found that many videos seem to have the very same huffman tables encoded into their bitstreams (at least the ones that I could take the time to dissect). I found that the tables are listed as TH_VP31_HUFF_CODES in the file huffenc.c. I tried to investigate a little bit more to see who was setting the bitstream
2003 Oct 10
2
New entropy coder
Hello, I am a computer engineering student and compression hobbyist and have recently developed a new entropy coding algorithm. It can be used to achieve compression on par with arithmetic coders at very high speeds (almost like Huffman codecs). Since it could be of interest to you, I am sending it as an attachment (code and technical report). Please drop me a line in case you have any doubts or
2002 Oct 21
3
How to fit Oggs in a specific amount of space?
I took 5 albums (Classical music) and converted them to Ogg Vorbis at "Full Bitrate" (-q10), and all 5 directories take up about 775 Megs, which won't fit on a CD. So I ripped them again to WAV first (and gave my friend back his CDs), but now I want to know what quality setting I should use to fit them on 1 CD (the highest possible, with total space used just under 700 Megs)
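The arithmetic behind the answer is straightforward, even though Vorbis -q levels target quality rather than a fixed bitrate, so the matching setting still has to be found by experiment: divide the space budget by the total playing time to get the average bitrate you can afford. A small sketch with assumed numbers (700 MB budget, five roughly 60-minute albums):

/* fit_budget.c -- compute the average bitrate that fits a space budget.
 * The numbers below are assumptions for illustration; the actual -q level
 * that lands under that bitrate is stream-dependent. */
#include <stdio.h>

int main(void) {
  const double budget_mb = 700.0;         /* disc capacity in megabytes */
  const double total_minutes = 5 * 60.0;  /* e.g. 5 albums of about 60 minutes */
  double budget_bits = budget_mb * 1000.0 * 1000.0 * 8.0;
  double seconds = total_minutes * 60.0;
  double kbps = budget_bits / seconds / 1000.0;
  printf("budget allows an average of about %.0f kbit/s\n", kbps);
  return 0;
}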
2007 Mar 17
0
Transcoding to Ogg Theora on Windows
Hi, I have written a small app in C# that uses oggdsf to transcode to Ogg Theora on Windows. I thought some of you might find it useful. You can download it from: http://www.a2ii.com/tech/directx/TransTheora.zip Cheers, -daniel ps. it will only handle files with a single audio and/or video stream
2005 Jan 20
0
katiuska 0.7, dvd ripping and transcoding to theora for KDE :)
hey all, I've just released a new version of Katiuska; you can now rip DVDs with it by simply selecting subtitle and audio language + audio and video quality. Katiuska also allows you to transcode any video file to Ogg Theora. Get it here: http://kde-apps.org/content/show.php?content=17831 Requirements: kde with kommander1.1development2 mplayer lsdvd
2017 Jan 23
2
Linking Linux kernel with LLD
Our tokenizer recognizes [A-Za-z0-9_.$/\\~=+[]*?\-:!<>]+ as a token. gold uses more complex rules to tokenize. I don't think we need rules that complex, but there seems to be room to improve our tokenizer. In particular, I believe we can parse the Linux kernel's linker script by changing the tokenizer rules as follows.