Displaying 20 results from an estimated 3000 matches similar to: "Thought for the new year"
2000 Aug 22
1
optimization progress
Hi all,
The decoder is down 30% execution time, identical bit output.
Didn't get the mdct yet; 1024 point mdct is a bit much to brute-force,
and I'm not going to hand-unroll the whole thing either (the machine-
unrolled version produced a 1.5M executable; understandably, it wasn't
very fast. Still waiting for processors with 1.5M L1 code caches ;-)
Slowest parts now are:
-- mdct
--
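[To put "a bit much to brute-force" in numbers: the direct MDCT definition needs 2N multiply-adds per output bin, roughly two million per 1024-point block. The sketch below is purely illustrative (the function name and signature are mine, not libvorbis'); the real code uses a much faster factorization.

  #include <math.h>
  #ifndef M_PI
  #define M_PI 3.14159265358979323846
  #endif

  /* Direct O(N^2) forward MDCT: 2N input samples -> N spectral bins.
   * Far too slow for real-time use at N = 1024. */
  static void mdct_forward_slow(const float *in, float *out, int N)
  {
    int k, n;
    for (k = 0; k < N; k++) {
      double sum = 0.0;
      for (n = 0; n < 2 * N; n++)
        sum += in[n] * cos(M_PI / N * (n + 0.5 + N / 2.0) * (k + 0.5));
      out[k] = (float)sum;
    }
  }
]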
2001 May 23
3
optimisation
What are the main areas where optimisation will take place to improve
CPU usage when decoding Ogg Vorbis files?
--
Venlig hilsen/Kind regards
Thomas Kirk
ARKENA
thomas@arkena.com
http://www.arkena.com
"I was drunk last night, crawled home across the lawn. By accident I
put the car key in the door lock. The house started up. So I figured
what the hell, and drove it around the block a
2004 Jun 02
4
Transient coding: AAC vs. Vorbis
Thread-split from the vorbis-mailing list
("Vorbis determined to be as good as MPC at 128 kbps!")
On Sun, 30 May 2004, Segher Boessenkool wrote:
[Steven So]
SS>> If iTunes AAC can encode castanets with much less pre-echo at
SS>> ABR 128 kbps, then hopefully there will be an imaginative
SS>> (and non-patented) way of doing this in Vorbis without the
SS>>
2000 Nov 21
2
here's the test case, possible solution
Hello all,
Finally I succeeded in uploading the test case I promised.
It's at http://home.wanadoo.nl/segher/test1.wav.bz2 (It is a wav,
the headers are a bit inconsistent, but encoder_example will be
ok with it, as it just skips them).
I did some thinking, and a possible solution is decreasing the
ATH_Bark_dB[] for the lower frequencies. As the comments say,
it's not really an ATH, but
2000 Aug 29
5
Optimization and doubles vs. floats
I saw some mail go by a bit ago about doubles-vs-floats, but I seem to have lost it.
I'm interested in rewriting the mdct code using Altivec on MacOS X. Altivec doesn't support doubles, though -- the only floating point vector type is single precision floats. Vorbis currently has doubles everywhere -- is this really necessary? Doubles are supposedly faster than floats in the PPC
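[For reference, single-precision AltiVec work looks roughly like the sketch below (my illustration, not libvorbis code): vec_madd() exists only for vector float, which is exactly why the doubles question matters.

  #include <altivec.h>

  /* Fused multiply-add over float arrays, four lanes at a time.
   * Assumes n is a multiple of 4 and the pointers are 16-byte aligned,
   * as AltiVec loads/stores require. */
  static void vec_acc_madd(const float *a, const float *b, float *acc, int n)
  {
    int i;
    for (i = 0; i < n; i += 4) {
      vector float va = vec_ld(0, a + i);
      vector float vb = vec_ld(0, b + i);
      vector float vc = vec_ld(0, acc + i);
      vec_st(vec_madd(va, vb, vc), 0, acc + i);   /* acc[i..i+3] += a*b */
    }
  }
]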
2000 Oct 23
4
More mdct questions
Sorry for starting another topic; this is actually a reply to Segher's post
on Sun Oct 22 in the 'mdct question' topic. I wasn't subscribed properly,
so I didn't get email confirmation and thus can't add to that thread.
So Segher, if the equation is indeed what you say it is, then replacing
mdct_backward with this version should work, but it doesn't.
Am I applying
2000 Dec 20
1
Short block test
Frank Klemm made the clip to test short block switching -
it's made of series of short periodical 'pulses', and this period
gets smaller as time passes.
You can get this file at:
http://www.uni-jena.de/~pfk/Short_Block_Test.wav.gz.gz (1.6MB)
(uncompress it twice with gzip)
I used oggenc beta3 & mp+ 1.7.8
Oggenc gave 160kbps using mode -b 256
mp+ gave 350kbps (using
2003 Apr 08
6
bitpeeler
No offense, Segher, but the output quality of this thing is awful. =)
I'll disregard the fact that, at least with *my* compiler, the source
tarball I downloaded reduces every packet to zero bytes, which isn't
terribly interesting.
I decided to set the byte reduction to something constant: I started
by dividing each packet's size by 2 just to see what would happen.
The resulting ogg
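[The constant "divide each packet's size by 2" experiment amounts to something like this (my reconstruction, not the actual bitpeeler source), applied to each audio packet before repacking:

  #include <ogg/ogg.h>

  /* Keep only the first half of each audio packet's payload.
   * Packets 0-2 are the Vorbis headers and must stay intact. */
  static void halve_packet(ogg_packet *op)
  {
    if (op->packetno >= 3)
      op->bytes /= 2;
  }
]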
2004 Feb 05
1
Psychoacoustic model
We've implemented a Vorbis decoder based on Tremor, and as part of the
documentation we're also writing about psychoacoustic models and
encoding.
We're quite up to date with the decoding process and psychoacoustics
in general, but unfortunately not with the psychoacoustic encoding used
in Vorbis.
We have a few questions that we would be very thankful to have
answered:
Which
2000 Aug 19
3
New LSP code committed
So, it turns out (and another implementation actually explicitly mentions it)
that LSP->LPC computation using the FIR algorithm is very sensitive to noise
(iterative algorithm) and really really requires doubles [we're not kidding].
This was complicating things for folks pursuing fixed point implementations,
and also was a potential source for bugs if FP optimizations got out of hand.
This
2000 Dec 22
1
Different floor, quality improvement
Hello all,
Please try this "patch". It changes the way the noise floor is used
for quantization in a not-so-subtle way.
At the very end of _vp_compute_mask, add the lines:
  for (i = 0; i < n; i++)
    flr[i] = .01f * sqrt(flr[i]);
The .01 is there to ensure the current codebooks will work. We will
really need different, newly-trained codebooks with this change; then
the
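[A quick worked example of what the change does, assuming flr[] holds linear amplitudes (my assumption, the post doesn't say):

  20*log10(.01*sqrt(x)) = 10*log10(x) - 40

so the square root halves the floor's range in dB and the .01 shifts it down 40 dB. A floor value at 0 dB maps to -40 dB, one at -40 dB maps to -60 dB, and -80 dB maps back onto itself, so everything above -80 dB gets pulled toward it.]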
2000 Dec 23
1
Look what I found under the Xmas tree!
Hello people,
Looks like Santa Claus thinks I've been a good boy this year.
Here's the third in my performance patch series: d.m.l
Apply after applying d.o.n and d.n.m; I don't know how much of
those got applied to the CVS tree.
What's inside:
Request for help! Look in os.h if you're using a compiler or
processor I don't use (I use gcc on K5, K7, G3).
New MDCT! Now we
2003 May 20
2
mdct_backward with fused muladd?
Can anybody point me at any resources that would explain how to optimize
mdct_backward for a CPU with a fused multiply-accumulate unit?
From what I understand from responses to my older postings, Tremor's
mdct_backward could be rewritten to take advantage of a muladd.
My target machine can do either two-wide 32x32 + Accum(64) -> Accum(64)
integer muladd or eight-wide 16x16 + Accum(32)
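[As a floating-point illustration of the idea (Tremor itself is fixed point, and this is my sketch rather than its code): the complex rotation at the heart of each inverse-MDCT butterfly maps onto one multiply plus one fused multiply-add per output, instead of two multiplies and an add.

  #include <math.h>

  /* Rotate (x, y) by the twiddle (c, s) = (cos t, sin t).
   * fmaf(a, b, c) computes a*b + c in a single fused operation. */
  static inline void rotate(float *xr, float *yr,
                            float x, float y, float c, float s)
  {
    *xr = fmaf(-y, s, x * c);   /* x*c - y*s */
    *yr = fmaf( x, s, y * c);   /* x*s + y*c */
  }
]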
2003 Mar 31
5
Rhubarber (advanced peeler)
Hi all,
[For the uninitiated: a "peeler" is a program that transforms
a Vorbis stream into a smaller, (somewhat) lower quality Vorbis
stream, and does so quickly, by just throwing out some data.]
After having prototyped several peelers that aim to peel
to a certain filesize, or to a certain quality, with mixed
success, I've now taken a different route: a peeler that
aims for the
2001 Sep 05
2
Understanding of Vorbis coder
Hi
I have gone through the documents available on the net regarding the
Vorbis encoder/decoder.
Based on that I have prepared an understanding document on the
encoder/decoder blocks. I would like to
know whether my understanding of the coder is OK. If there is any
additional block/information, please provide me
with the same.
Thanks and regards
S.Padmashri
2000 Nov 18
3
beta3 problems
Hiya,
Just downloaded beta3, and I actually got it to compile without
too much hassle. Great job!
Still, some problems:
-- (easy): the -V option to ogg123 is broken, --version works.
-- make profile doesn't work (in vorbis-tools), need to pass in
some -pg -static or something like it (doesn't exactly work,
-static is swallowed by libtool; read some docs, needs to be
-all-static
2001 Jan 23
4
rehuff
Hiya,
Here are the sources to my "rehuff" program.
./rehuff in.ogg out.ogg
does a lossless recoding of a Vorbis stream. (It generates optimal
Huffman codes for the particular stream.)
This code is meant for developers only, until someone is kind
enough to provide good build and configure support for it.
I won't. And no installation help questions please.
There is a little patch
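[The heavy lifting in rehuff is parsing and rewriting the setup header, but the "optimal Huffman codes for the particular stream" part is the textbook construction: count how often each codebook entry is actually used, then merge the two lightest groups until one remains. A standalone toy version (my code, not rehuff's):

  /* Given usage counts for n codebook entries, compute optimal Huffman
   * code lengths.  O(n^2) selection; fine as an illustration. */
  static void huff_lengths(const long *count, int *len, int n)
  {
    long w[2 * n];          /* node weights (leaves, then merged nodes) */
    int  parent[2 * n];     /* -1 marks a node that is still a root     */
    int  nodes = n, i, k, a, b;

    for (i = 0; i < n; i++) { w[i] = count[i]; parent[i] = -1; }

    for (k = 0; k < n - 1; k++) {        /* merge the two lightest roots */
      a = b = -1;
      for (i = 0; i < nodes; i++) {
        if (parent[i] != -1) continue;
        if (a < 0 || w[i] < w[a])      { b = a; a = i; }
        else if (b < 0 || w[i] < w[b]) { b = i; }
      }
      w[nodes] = w[a] + w[b];
      parent[nodes] = -1;
      parent[a] = parent[b] = nodes;
      nodes++;
    }

    for (i = 0; i < n; i++)              /* code length = depth of leaf  */
      for (len[i] = 0, k = i; parent[k] != -1; k = parent[k])
        len[i]++;
  }
]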
2005 Sep 25
3
several questions
Hi,
Any help is appreciated.
I have two questions.
First, I looked into the Vorbis specification and was confused by the floor1 algorithm.
I think the aim is ultimately to derive the piecewise curve from the X and Y lists; then, when encoding, why not
use the selected points in order to build the curve? In other words, the specification uses the list [0,128,64,32,96,16,48,80];
why can it not use
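[On the floor1 ordering: the X list is stored in decode order, not sorted, because every Y value after the first two is coded as an offset from a prediction interpolated between the nearest *already decoded* neighbours. So 0 and 128 come first, and in the example list each later entry splits an existing interval (64, then 32 and 96, and so on). A rough sketch of that prediction step (my code, not the spec's render_point, and it ignores the spec's exact rounding):

  /* Predict the Y value at x_new from the nearest points among the
   * first `decoded` entries of the X/Y lists.  Entries 0 and 1 are the
   * two range endpoints, so valid lo/hi neighbours always exist. */
  static int predict(const int *X, const int *Y, int decoded, int x_new)
  {
    int lo = 0, hi = 1, i;
    for (i = 2; i < decoded; i++) {
      if (X[i] < x_new && X[i] > X[lo]) lo = i;
      if (X[i] > x_new && X[i] < X[hi]) hi = i;
    }
    return Y[lo] + (Y[hi] - Y[lo]) * (x_new - X[lo]) / (X[hi] - X[lo]);
  }
]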
2002 Jul 30
8
rehuff [source attached]
Hi all,
Yes, it's true. A new version of rehuff, the tool that losslessly compresses
Vorbis files: one that is easy to compile, and that works with
newer-than-two-years-ago streams, too!
On 1.0 streams, you get about 3% size reduction, and the headers get _much_
smaller (which helps for fast-start network streams).
Building it should be easy (you might have to add some -I and -L for