> 3) a more general question:
> i have not been able to find any reason why
> 10 lpcs are used. i suppose 10 lpcs are
> enough for prediction and using more coeffs
> would not have made too much difference.
> and 10 is even, good for lpc->lsf. had read
> somewhere "historically", during the analog
> era, this number is used. or maybe 10 is analogous
> to a perfect number! ;-)
In linear prediction the number of coefficients required for a suitable model
is determined by the spectral content of the source. Each peak (formant)
frequency in the spectrum requires one pole, in practice a complex-conjugate
pole pair, so each pole in the model contributes two coefficients.
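
To make the bookkeeping concrete, here is a small standalone C program
(illustrative only, not Speex code; the frequency, bandwidth and
sampling-rate values are made up) showing how one conjugate pole pair at a
formant frequency expands into exactly two real predictor coefficients:

#include <math.h>
#include <stdio.h>

int main(void)
{
    const double pi = 3.14159265358979323846;
    double fs = 8000.0;  /* sampling rate, Hz */
    double f  = 500.0;   /* formant centre frequency, Hz */
    double bw = 100.0;   /* formant bandwidth, Hz */

    /* standard mapping from (frequency, bandwidth) to a pole:
       radius from the bandwidth, angle from the frequency */
    double r     = exp(-pi * bw / fs);
    double theta = 2.0 * pi * f / fs;

    /* (1 - r e^{j theta} z^-1)(1 - r e^{-j theta} z^-1)
       = 1 - a1 z^-1 - a2 z^-2, i.e. two real coefficients */
    double a1 = 2.0 * r * cos(theta);
    double a2 = -r * r;

    printf("a1 = %f, a2 = %f\n", a1, a2);
    return 0;
}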
With human speech it is typical to see one resonance peak at the fundamental
frequency (typically 50-440 Hz) and roughly one additional formant peak per
1000 Hz of bandwidth. From this we can see that the appropriate order for an
LPC speech model depends on the bandwidth of the sampled audio. In Speex,
linear prediction is used to model the speaker's vocal tract.
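
In code terms the model is just a weighted sum of past samples; a minimal
sketch of the predictor (again illustrative, not the Speex implementation):

/* Each sample is predicted from the previous `order` samples;
   the weights a[0..order-1] are the LPC coefficients counted
   below. */
static float lpc_predict(const float *x, int n, const float *a, int order)
{
    float pred = 0.0f;
    int k;
    for (k = 1; k <= order; k++)
        pred += a[k - 1] * x[n - k];
    return pred;
}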
For narrow-band mode we low-pass filter the speech to 4000 Hz. Using our
knowledge of speech we choose a model with 5 poles (1 pole for the
fundamental frequency and 4 poles for the vocal tract formants). Since each
pole contributes 2 coefficients we arrive at 10 LPC coefficients.
For wide-band Speex, in addition to the above model we also high-pass filter
the speech at 4000 Hz. This limits the upper band to a 4000 Hz bandwidth
(4000-8000 Hz). Here only a 4-pole model is required, since the fundamental
frequency lies below the band of interest, so only 8 LPC coefficients are
used.
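
Putting the two cases together, the rule of thumb works out like this (a toy
calculation, not taken from the Speex sources):

#include <stdio.h>

/* one pole pair per 1000 Hz of bandwidth, plus one extra pair
   when the fundamental falls inside the band; two coefficients
   per pair */
static int lpc_order(int bandwidth_hz, int fundamental_in_band)
{
    int pole_pairs = bandwidth_hz / 1000 + (fundamental_in_band ? 1 : 0);
    return 2 * pole_pairs;
}

int main(void)
{
    printf("narrow-band (0-4000 Hz):       %d\n", lpc_order(4000, 1)); /* 10 */
    printf("wide-band high (4000-8000 Hz): %d\n", lpc_order(4000, 0)); /*  8 */
    return 0;
}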
Hope this helps,
Tom