Hello, I'm wondering if anyone has done this before and has any advice, or if anyone has ideas about it in general. I've just implemented transmitting synthesized speech (text-to-speech) over Speex (narrowband) in an application. I'm using Swift from Cepstral (http://www.cepstral.com), with a fairly deep male voice. I tell Swift to generate audio at 8 kHz, then encode each chunk of audio Swift outputs and send it to a client.

One interesting thing I've noticed: as I increase Speex's encoding quality, the output on the client sounds smoother (at my usual quality value of 5 or 6 it sounds OK but occasionally has a hesitation or glitch) but "thinner" -- less full, with lower resolution. Turning on the noise filter and changing the complexity parameter don't seem to make any difference.

I'll be experimenting with this more, but if anyone is interested I can send some audio data generated by the Swift synthesizer. Suggestions for how to tweak the synthesized audio so that Speex encodes it better would also be welcome (I don't know very much about audio or audio signal processing yet). For reference, the sketch below shows roughly how I'm driving the encoder.

Thanks!

Reed
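This is only a minimal sketch of the encode path, not my actual code; it assumes libspeex plus libspeexdsp for the preprocessor, and get_synth_frame() / send_to_client() are hypothetical placeholders standing in for the Swift output and my network layer, not real Cepstral or socket API.

#include <speex/speex.h>
#include <speex/speex_preprocess.h>

/* Hypothetical hooks: stand-ins for the Swift synthesizer output and the
 * network send; not real Cepstral or socket API. */
extern int get_synth_frame(spx_int16_t *frame, int nsamples);
extern void send_to_client(const char *packet, int nbytes);

int main(void)
{
    SpeexBits bits;
    void *enc = speex_encoder_init(&speex_nb_mode);   /* narrowband mode */

    int quality = 6;        /* the 5-6 range mentioned above */
    int complexity = 3;     /* CPU vs. quality trade-off */
    int frame_size;         /* 160 samples (20 ms) for 8 kHz narrowband */
    speex_encoder_ctl(enc, SPEEX_SET_QUALITY, &quality);
    speex_encoder_ctl(enc, SPEEX_SET_COMPLEXITY, &complexity);
    speex_encoder_ctl(enc, SPEEX_GET_FRAME_SIZE, &frame_size);

    /* Optional denoise ("noise filter") applied before encoding */
    SpeexPreprocessState *pre = speex_preprocess_state_init(frame_size, 8000);
    int denoise = 1;
    speex_preprocess_ctl(pre, SPEEX_PREPROCESS_SET_DENOISE, &denoise);

    speex_bits_init(&bits);

    spx_int16_t frame[160];   /* one 20 ms frame of 16-bit, 8 kHz samples */
    char packet[200];

    /* Each chunk of audio from Swift is split into frame_size-sample
     * frames and pushed through this loop. */
    while (get_synth_frame(frame, frame_size)) {
        speex_preprocess_run(pre, frame);          /* denoise in place */
        speex_bits_reset(&bits);
        speex_encode_int(enc, frame, &bits);
        int nbytes = speex_bits_write(&bits, packet, sizeof(packet));
        send_to_client(packet, nbytes);
    }

    speex_bits_destroy(&bits);
    speex_encoder_destroy(enc);
    speex_preprocess_state_destroy(pre);
    return 0;
}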