Hello!

I am encoding small snippets of audio (e.g. 100 ms) that contain important audio from the first sample to the last (i.e. they don't start or end with silence). Doing so raised a couple of questions I couldn't resolve by reading the documentation, the FAQ, or searching the internet.

1. Does the "complexity" parameter influence only the speed of the encoder, or also the speed of the decoder? (I need fast decoding, but have plenty of time for encoding.)

2. Does Speex use information from the previous frame to encode the next frame, or are the frames completely independent? I know that I can decode the frames independently, but what about the encoder?

3. I noticed that when I encode and then decode, Speex inserts about 16 ms of silence at the beginning; this is probably the lookahead time. I am chopping it off manually right now, but the codec should really do this transparently: first, because otherwise your audio data is time-shifted, and second, because you have to store more audio data than the original had in order not to clip the end.

There is another effect: because the lookahead is not equal to the frame size (it looks like about half of it), the actual audio you feed in starts in the middle of the first frame, where the first half is zero-padded. That causes Speex to somehow "fade in" the audio, since it assumes the signal is coming from silence. The only ad-hoc solution that came to my mind is to feed in (framesize - lookahead) padding samples first to produce a first "garbage" frame, throw that frame away after decoding, and proceed as usual (sketched in the P.S. below). This makes the use of Speex pretty clumsy if your constraint is "what you feed is what you get". Any ideas, or am I doing something wrong?

Regards,
Thilo Koehler
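
P.S. For concreteness, here is the ad-hoc workaround from (3) as a minimal sketch against the libspeex C API, assuming narrowband mode. The function name encode_padded and the emit callback are mine, not part of Speex, and error handling is omitted:

#include <speex/speex.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical callback that receives one encoded frame at a time. */
typedef void (*frame_sink)(const char *buf, int nbytes);

static void encode_padded(const short *in, int n, frame_sink emit)
{
    SpeexBits bits;
    void *enc = speex_encoder_init(&speex_nb_mode);
    int frame_size, lookahead, nframes, i, nbytes;
    short *buf;
    char cbits[200];

    speex_encoder_ctl(enc, SPEEX_GET_FRAME_SIZE, &frame_size);
    speex_encoder_ctl(enc, SPEEX_GET_LOOKAHEAD, &lookahead);

    /* Prepend (frame_size - lookahead) zeros: together with the codec's
     * own lookahead delay they fill exactly one decoded frame, which is
     * then pure silence and can be discarded on the decoder side. The
     * tail is zero-padded up to a whole frame by calloc. */
    nframes = 1 + (n + frame_size - 1) / frame_size;
    buf = calloc(nframes * frame_size, sizeof(short));
    memcpy(buf + (frame_size - lookahead), in, n * sizeof(short));

    speex_bits_init(&bits);
    for (i = 0; i < nframes; i++) {
        speex_bits_reset(&bits);
        speex_encode_int(enc, buf + i * frame_size, &bits);
        nbytes = speex_bits_write(&bits, cbits, sizeof(cbits));
        emit(cbits, nbytes);
    }

    speex_bits_destroy(&bits);
    speex_encoder_destroy(enc);
    free(buf);
}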
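
And the matching decoder side (again a sketch with names of my own), which drops the first all-padding frame and keeps exactly n samples, so that "what you feed is what you get":

static int decode_trimmed(char **frames, int *sizes, int nframes,
                          int n, short *out)
{
    SpeexBits bits;
    void *dec = speex_decoder_init(&speex_nb_mode);
    int frame_size, i, got = 0;
    short *pcm;

    speex_decoder_ctl(dec, SPEEX_GET_FRAME_SIZE, &frame_size);
    pcm = malloc(frame_size * sizeof(short));
    speex_bits_init(&bits);

    for (i = 0; i < nframes && got < n; i++) {
        speex_bits_read_from(&bits, frames[i], sizes[i]);
        speex_decode_int(dec, &bits, pcm);
        if (i == 0)
            continue;   /* first frame is pure padding: discard it */
        {
            int take = (n - got < frame_size) ? n - got : frame_size;
            memcpy(out + got, pcm, take * sizeof(short));
            got += take;
        }
    }

    free(pcm);
    speex_bits_destroy(&bits);
    speex_decoder_destroy(dec);
    return got;         /* should equal n */
}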