> When you compute the adaptation step size in the case where the filter has
> already had minimal adaptation, you use the following expression:
>
> r = (0.7*r + 0.3*15*RER*e)/(e*(power[i] + 10))
>   = (0.7*leak_estimate*Yf[i] + 0.3*15*RER*(Rf[i] + 1))/((Rf[i] + 1)*(power[i] + 10)).
Actually, QCONST16(.3,15) doesn't mean .3*15; it means the value .3 in Q15
fixed-point format.
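For reference, the macro is roughly equivalent to the following (a
from-memory sketch, not the literal arch.h definition):

#include <stdio.h>

/* Convert a float constant x to Q(bits) fixed point by scaling by 2^bits
   and rounding; QCONST16(.3,15) is thus .3 in Q15, i.e. 9830/32768. */
#define QCONST16(x,bits) ((short)(.5+(x)*(1<<(bits))))

int main(void)
{
   printf("%d\n", QCONST16(.3,15));   /* prints 9830 */
   return 0;
}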
> Why do we need this weighted sum and the component 0.3*15*RER*(Rf[i] + 1)?
First, the reason for multiplying by (Rf[i]+1) is that I'll be dividing by
it on the next line (and I didn't want to do a separate division for each
term).
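Written out in floating point, the per-bin computation is roughly this (a
sketch using a made-up helper function; variable names follow mdf.c, and the
real code is fixed-point):

/* Hypothetical helper showing the learning-rate expression quoted above. */
static float bin_learning_rate(float leak_estimate, float Yf_i, float Rf_i,
                               float RER, float power_i)
{
   float r = leak_estimate*Yf_i;   /* estimated residual echo for this bin */
   float e = Rf_i + 1;             /* output power, +1 avoids dividing by zero */
   r = .7f*r + .3f*RER*e;          /* weighted sum; the RER term carries the
                                      factor e so that the single division by e
                                      below covers both terms */
   return r/(e*(power_i + 10));    /* what ends up in power_1[i] */
}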
> Why use the correlation-based RER if we can do the same in the frequency
> domain? If we removed this component, the expression would be exactly like
> in the method described in "On adjusting the learning rate in frequency
> domain echo cancellation with double-talk". Or am I missing something?
The reason for using the RER is a bit of a kludge: I have found that it
helps, but I cannot explain why for now, except by saying that it forces the
learning rate to be more similar across frequencies.
> Also, why do we use an increasing adaptation rate as the filter gets closer
> to adaptation but isn't yet adapted? Wouldn't it make sense to decrease it?
That's not the case. What makes you think that?
Jean-Marc