Displaying results from an estimated 2000 matches similar to: "Psychoacoustic model"

2004 Mar 11
2
Ogg/Vorbis report, FFT optimizations
I wrote to the list earlier about optimizations of Tremor, specifically replacing the IMDCT with an N/4 FFT. The report is available here: http://www.sandvall.nu/thesis.pdf Short summary: Optimization of the Tremor code to under 50kB for a "split" decoder version. ~44 MIPS. Huffman highly unoptimized. IMDCT ~ 8 MIPS. Quick overview of the
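For context, here is the general shape of that optimization in one standard formulation (conventions differ between implementations, so treat this as a sketch rather than the thesis's exact math). The N/2 spectral inputs are folded into N/4 complex values z_n, which are pre-twiddled, transformed, and post-twiddled:

    t_n = z_n \, e^{-2\pi i (n + 1/8)/N},  \qquad n = 0, \dots, N/4 - 1
    Z_k = \sum_{n=0}^{N/4-1} t_n \, e^{-2\pi i n k/(N/4)}  \qquad \text{(one $N/4$-point complex FFT)}

The N time-domain outputs are then read off from the post-twiddled values Z_k \, e^{-2\pi i (k + 1/8)/N} by unfolding and windowing. This turns the direct O(N^2) sum into O(N log N), which is where MIPS figures like the ~8 MIPS IMDCT come from.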
2003 Dec 08
2
Encoding, documentation, questions
As part of the documentation of a Vorbis decoding project, a quick explanation of the encoding procedure is required. I'm quite up to date with the decoding process but not with the encoding part. Is there any documentation explaining the different steps during encoding? What I'm looking for is a list of encoding steps, very briefly explained, and more details on a few fundamental steps, like
2003 Oct 24
1
Test .ogg files
Is there an archive with a collection of different .ogg files encoded with different settings that can be used to verify compatibility? * Different window sizes (e.g. != 256, 2048) * Much larger codebooks and other extreme cases that are not generated by a standard encoder. * floor0 Regards -- / Johannes Sandvall
2001 Apr 26
1
From LAME mailing list
Comments? ---------------------------------------- "Mark Taylor" <mt@sulaco.org> wrote: [...] > This is related to one minor objection I have to vector quantization > based codecs like Vorbis and the MPEG4 VQ codec: they do not compute > the quantization noise during the encoding process. The choice of > codebooks (use a big codebook: low quantization noise, use a
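To make the objection concrete: "computing the quantization noise" for a VQ stage just means measuring the error the chosen codeword actually introduces, so an encoder could trade codebook size against audible noise instead of ignoring it. A minimal sketch (hypothetical helper, not Vorbis internals):

    #include <float.h>
    #include <stddef.h>

    /* Return the squared error introduced by quantizing vector `v`
     * (dim floats) against the nearest entry of a codebook with
     * `count` entries. A rate/distortion-aware encoder could compare
     * this noise against the masking threshold. */
    static float vq_quantization_noise(const float *v,
                                       const float *codebook,
                                       size_t count, size_t dim)
    {
        float best = FLT_MAX;
        for (size_t i = 0; i < count; i++) {
            float err = 0.0f;
            for (size_t d = 0; d < dim; d++) {
                float diff = v[d] - codebook[i * dim + d];
                err += diff * diff;
            }
            if (err < best) best = err;  /* nearest codeword so far */
        }
        return best;  /* squared Euclidean distance to nearest codeword */
    }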
2009 Nov 16
2
Theora Fast Quality Reduction Transcoding
Hi. I have been working on a tool whose goal is to reduce the bit rate of Theora video by decoding to DCT coefficients, reducing the entropy of the coefficients, and re-tokenizing the stream. I have successfully used the decoder source to extract the DCT coefficients for each block, and I am able to capture any and all relevant information about where the block of coefficients falls in the
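For illustration, the entropy-reduction step can be as simple as zeroing the highest-frequency coefficients of each block before re-tokenizing. A sketch, assuming the 64 coefficients of an 8x8 block arrive in zigzag order (hypothetical helper, not the actual tool):

    /* Zero the highest-frequency DCT coefficients of one 8x8 block.
     * In zigzag order index 0 is DC and later indices are higher
     * frequencies; `keep` is how many low-frequency coefficients
     * survive. Long runs of zeros cost almost nothing after
     * tokenization, so the re-coded stream shrinks. */
    static void truncate_block(short coeffs[64], int keep)
    {
        for (int i = keep; i < 64; i++)
            coeffs[i] = 0;
    }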
2002 Jul 30
8
rehuff [source attached]
Hi all, Yes, it's true. A new version of rehuff, the tool that losslessly compresses Vorbis files: one that is easy to compile, and that works with newer-than-two-years-ago streams, too! On 1.0 streams, you get about 3% size reduction, and the headers get _much_ smaller (which helps for fast-start network streams). Building it should be easy (you might have to add some -I and -L for
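For intuition about where the ~3% comes from: rehuff re-codes the same symbols with codes better matched to their measured frequencies, and the floor on any such lossless re-code is the empirical entropy of the symbol stream. A sketch of that bound (illustrative only, not rehuff source):

    #include <math.h>
    #include <stddef.h>

    /* Estimate the minimum bits needed to code `n` symbols whose
     * occurrence counts are in counts[0..nsym-1] (Shannon bound,
     * with n equal to the sum of the counts). Comparing this
     * against the bits the original codebooks spend shows how much
     * a lossless re-code can save. */
    static double entropy_bits(const unsigned *counts, size_t nsym, unsigned n)
    {
        double bits = 0.0;
        for (size_t i = 0; i < nsym; i++) {
            if (counts[i] == 0) continue;
            double p = (double)counts[i] / n;
            bits -= counts[i] * log2(p);  /* -log2(p) bits per occurrence */
        }
        return bits;
    }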
2003 Sep 10
1
A new introduction attempt.
I have been using libvorbis for the past few weeks and have been asked to summarise what I have discovered about the codec. There is an early draft of the document at http://www.geocities.com/gatewaystation/vorbis/vorbis.htm - please forgive the dodgy formatting (it was formerly an MS Word document converted with the 'save as html' feature). I still have some additions to
2002 Sep 19
3
Using large-scale repetition in audio compression
This idea is so simple that I'm sure it must have been thought of before, and discarded, since AFAIK it's not used anywhere. I did a quick web search but that didn't turn up much, so I figured I'd put it up for discussion here anyway. How about using large-scale repetition in audio compression? I'm thinking of redundancy in repeated pieces of a song, i.e. a chorus.
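A repetition-aware coder would first have to find the long-range matches. A naive sketch of that search over raw PCM by mean squared difference (illustrative only; a real system would more likely match in some feature domain and allow small timing/level differences):

    #include <stddef.h>

    /* Find the earlier offset whose `win`-sample stretch best matches
     * the stretch starting at `pos`, stepping by `step` samples
     * (assumes step > 0; returns 0 if pos < win). This is the kind of
     * long-range match a chorus-reusing coder would look for. */
    static size_t best_earlier_match(const float *pcm, size_t pos,
                                     size_t win, size_t step)
    {
        size_t best_off = 0;
        double best_err = -1.0;
        for (size_t off = 0; off + win <= pos; off += step) {
            double err = 0.0;
            for (size_t i = 0; i < win; i++) {
                double d = pcm[off + i] - pcm[pos + i];
                err += d * d;
            }
            if (best_err < 0.0 || err < best_err) {
                best_err = err;
                best_off = off;
            }
        }
        return best_off;
    }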
2004 Aug 06
0
Optimizing speex for 44.1kHz
On Fri, 10 Jan 2003 at 14:39, John Hayes wrote: > I've been playing with Speex for use in a VoIP application between PCs. One > thing I've found (correlating with the documentation) is that Speex runs much > faster and produces much better output when it's fed a 32kHz signal instead > of a 44.1kHz sample rate. This is true whether I tell it a 44.1kHz sample rate > and feed
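The usual workaround is to resample 44.1 kHz input to a rate Speex is tuned for before encoding. A minimal sketch using the resampler API that later shipped with speexdsp (that API postdates this thread, so take it as an illustration):

    #include <speex/speex_resampler.h>

    /* Resample 44.1 kHz mono s16 PCM down to 32 kHz so Speex runs in
     * a mode it was tuned for. Returns the number of output samples
     * actually written. Quality 5 is a mid-range speed/quality
     * setting (0..10). */
    spx_uint32_t downsample_441_to_32(const spx_int16_t *in,
                                      spx_uint32_t in_samples,
                                      spx_int16_t *out,
                                      spx_uint32_t out_capacity)
    {
        int err;
        SpeexResamplerState *st =
            speex_resampler_init(1 /* mono */, 44100, 32000,
                                 5 /* quality */, &err);
        spx_uint32_t in_len = in_samples, out_len = out_capacity;
        speex_resampler_process_int(st, 0 /* channel */, in, &in_len,
                                    out, &out_len);
        speex_resampler_destroy(st);
        return out_len;
    }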
2003 Jul 27
1
oggenc questions
Hello everybody! Some questions concerning oggenc: 1 Why is it called oggenc? (vorbisenc would make more sense) 2 Is it true that oggenc uses only some predefined codebooks (depending on -q)? 2.1 Are they just "random" books, or were they "optimized" in any way? 2.2 Is it possible that some codebooks are stored in the header but never used (even in long (>3min) files)?
2001 Feb 22
3
rtp payload format
http://www.xiph.org/ogg/vorbis/doc/draft-moffitt-vorbis-rtp-00.txt This is the Internet-Draft I'll be submitting tomorrow and hopefully presenting at the March IETF meeting. If you see anything major, let me know right away, I'll be submitting this in the morning. jack.
2003 Mar 12
2
encoder block diagram
I've made a block diagram of the encoder because I tried to find out how it works: http://stoffke.freeshell.65535.net/ogg/block.html Although there are specification docs that give very detailed information about single aspects of the encoding (or decoding), I'm missing documentation that gives a more general overview of how the encoder works. (Vorbis Illuminated seems a bit
2003 Jan 17
2
Ogg Vorbis files can be compressed ?!?
Hi there. I have a short .ogg with a Vorbis stream @56kbps from a 44.1kHz 16-bit stereo sample, which weighs exactly 135'781 bytes. Packed into a ZIP, it goes down to 109'485 bytes, that's an 80% ratio! Even worse with the latest RAR3, which takes the file down to 104'349 bytes (76%). <sceptical> I admit this sample is a bit repetitive, but I guess the probability of having the
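The quoted ratios check out:

    \frac{109485}{135781} \approx 80.6\% \ (\text{ZIP}), \qquad \frac{104349}{135781} \approx 76.9\% \ (\text{RAR3})

i.e. ZIP strips about 19% of the bits and RAR3 about 23%, which is exactly the redundancy the poster is skeptical a well-tuned entropy coder should leave behind.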
2007 Aug 29
1
Fast quality reduction transcoding
Hi, After a quick read of the Theora spec, I became curious about the possibility of fast quality reduction of Theora videos. The idea is to decode through the Huffman and reverse prediction steps, and then to truncate the coefficients and reencode. My questions are: * Is this a reasonable way to reduce the quality and bitrate of a stream? Will it be comparable in quality to a complete
2003 Nov 08
1
Compiling problems libvorbis 2.0
Hi guys, I want to compile libvorbis, but I get this error message when I run make. I use Sun Solaris 9 on an UltraSPARC server and gcc 3.3. Can anyone help me, please? Best thanks, Daniel. Here is the output from compiling: ----------------------------------------- /usr/ccs/bin/ld -G -z defs -h libvorbis.so.0 -o .libs/libvorbis.so.0.3.0 mdct.lo smallft.lo block.lo envelope.lo
2006 Mar 19
2
Paper on Speex and Vorbis
Hi all, This should please all those who want to know more about Speex or Vorbis. Monty and I have just finished writing this paper: "Improved Noise Weighting in CELP Coding of Speech - Applying the Vorbis Psychoacoustic Model To Speex" which you can get at: http://people.xiph.org/~jm/papers/aes120_speex_vorbis.pdf It's probably the best description of the Speex encoder to date and
2005 Aug 26
3
Reg. vorbis for real-time audio
Hi, From the Vorbis decoder specification, it is clear that the decoder needs to have all the codebooks before decoding can actually begin. I would appreciate it if someone could clear up the following questions: 1. I guess the codebooks are derived from the actual input data. Perhaps the encoder makes two passes through the input. The first pass finds out the frequency of different symbols
2000 Nov 09
3
Vorbis packet #3, codebooks and their large size
Hi, Am I correct in understanding that the codebooks are *not* adaptive during compression? I see that packet #3 is written to the stream at the beginning of the encode process with no modification. If the codebooks are not adaptive, then why are codebooks included in the stream at all? Why not pass the mode type (A or B or C...) instead of all the mode info and let the decoder load its
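The short answer to "why include them at all" is that Vorbis fixes no codebooks in the decoder: the setup header carries each book's codeword lengths (plus the VQ values), and the decoder rebuilds the codewords deterministically from those lengths, which is what lets an encoder train books per stream. A sketch of the canonical-code reconstruction idea (illustrative, not libvorbis source):

    #include <stddef.h>
    #include <stdint.h>

    /* Assign canonical Huffman codewords from a list of code lengths,
     * the way a decoder can rebuild a codebook that a stream header
     * carries as lengths only. len[i] == 0 marks an unused symbol. */
    static void assign_canonical_codes(const uint8_t *len,
                                       uint32_t *code, size_t n)
    {
        uint32_t next[33] = {0};   /* next free codeword per length */
        unsigned count[33] = {0};  /* how many symbols of each length */

        for (size_t i = 0; i < n; i++) count[len[i]]++;
        count[0] = 0;              /* unused symbols get no codeword */
        for (int l = 1; l <= 32; l++)
            next[l] = (next[l - 1] + count[l - 1]) << 1;
        for (size_t i = 0; i < n; i++)
            if (len[i]) code[i] = next[len[i]]++;
    }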