I ran into the same problem myself.
A sampling-rate difference of more than about 3 Hz between the mic
and speaker data can cause a total failure of the AEC.
However, I have not seen a proper real-time solution to this problem.
If you roughly know the sampling-rate difference between the speaker and
the mic file,
you can try resampling each mic frame accordingly.
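A rough sketch of what I mean, assuming a nominal 8000 Hz speaker clock and a
mic clock measured to run about 5 Hz fast (both numbers and the aec_init() /
aec_process() names are just for illustration). It uses the Speex resampler
to pull the mic stream back onto the speaker clock before the usual
speex_echo_cancellation() call:

#include <speex/speex_echo.h>
#include <speex/speex_resampler.h>

#define FRAME_SIZE  160                 /* 20 ms at 8 kHz; adjust to your setup */
#define FILTER_LEN  (FRAME_SIZE * 10)   /* echo tail length, also setup dependent */

static SpeexEchoState      *echo;
static SpeexResamplerState *rs;

static void aec_init(void)
{
    int err;
    echo = speex_echo_state_init(FRAME_SIZE, FILTER_LEN);
    /* mic clock is ~8005 Hz against the nominal 8000 Hz speaker clock,
       so convert 8005 -> 8000 */
    rs = speex_resampler_init(1, 8005, 8000,
                              SPEEX_RESAMPLER_QUALITY_DEFAULT, &err);
}

static void aec_process(const spx_int16_t *mic_raw, spx_uint32_t mic_len,
                        const spx_int16_t *spk, spx_int16_t *out)
{
    spx_int16_t mic[FRAME_SIZE];
    spx_uint32_t in_len = mic_len, out_len = FRAME_SIZE;

    /* Pull the mic samples through the resampler so both streams advance
       at (roughly) the same rate, then cancel as usual.  A real system
       needs a small FIFO here, since the resampler will not emit exactly
       FRAME_SIZE samples on every call. */
    speex_resampler_process_int(rs, 0, mic_raw, &in_len, mic, &out_len);
    speex_echo_cancellation(echo, mic, spk, out);
}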
Otherwise, or if the sampling-rate difference fluctuates significantly over
time,
the only method I can think of is to resample each mic frame so as to
maximize its cross-correlation with the corresponding frame in the
speaker file (this can be time consuming).
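A very crude sketch of that brute-force search: try a few candidate ratios
around 1.0, stretch the mic frame with simple linear interpolation (a real
implementation would use the Speex resampler, and probably also search over
lag), and keep whichever ratio correlates best with the speaker frame. None
of these helpers are part of Speex; the names and the candidate ratios are
made up for illustration:

#include <math.h>
#include <stddef.h>

/* Normalized cross-correlation at zero lag. */
static double norm_xcorr(const short *a, const short *b, size_t n)
{
    double sab = 0, saa = 0, sbb = 0;
    for (size_t i = 0; i < n; i++) {
        sab += (double)a[i] * b[i];
        saa += (double)a[i] * a[i];
        sbb += (double)b[i] * b[i];
    }
    return sab / (sqrt(saa * sbb) + 1e-9);
}

/* Stretch 'in' (in_len samples) to 'out' (out_len samples) by linear
   interpolation -- a stand-in for a proper resampler. */
static void resample_linear(const short *in, size_t in_len,
                            short *out, size_t out_len)
{
    for (size_t i = 0; i < out_len; i++) {
        double pos  = (double)i * (in_len - 1) / (out_len - 1);
        size_t k    = (size_t)pos;
        double frac = pos - k;
        size_t k1   = (k + 1 < in_len) ? k + 1 : k;
        out[i] = (short)((1.0 - frac) * in[k] + frac * in[k1]);
    }
}

/* Pick the candidate ratio whose resampled mic frame best matches the
   speaker frame; the winning frame is left in 'best'. */
static double align_frame(const short *mic, size_t mic_len,
                          const short *spk, size_t frame_len, short *best)
{
    static const double ratios[] = { 0.999, 0.9995, 1.0, 1.0005, 1.001 };
    double best_score = -2.0, best_ratio = 1.0;
    short tmp[512];                     /* assumes frame_len <= 512 */

    for (size_t r = 0; r < sizeof(ratios) / sizeof(ratios[0]); r++) {
        size_t need = (size_t)(frame_len * ratios[r]);  /* mic samples consumed */
        if (need > mic_len) need = mic_len;
        resample_linear(mic, need, tmp, frame_len);
        double score = norm_xcorr(tmp, spk, frame_len);
        if (score > best_score) {
            best_score = score;
            best_ratio = ratios[r];
            for (size_t i = 0; i < frame_len; i++) best[i] = tmp[i];
        }
    }
    return best_ratio;  /* feed 'best' and the speaker frame to the AEC */
}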
No promises, just something to try.
On Wed, Dec 30, 2009 at 4:00 AM, <speex-dev-request at xiph.org> wrote:
> Message: 1
> Date: Tue, 29 Dec 2009 17:38:49 +0100
> From: Marco Pierleoni <pierleoni.m at gmail.com>
> Subject: [Speex-dev] AEC: Tips on signal synchronization.
> To: speex-dev at xiph.org
>
> Hello,
>
> I am using the Speex AEC in a real-time application.
> I have found that when the mic and speaker tracks are in sync, or only
> slightly delayed, the AEC works very well. I understand that when they are
> out of sync the AEC cannot work, so what the "user" should do is focus on
> keeping the tracks in sync.
>
> Since I am working in an environment where out-of-sync tracks are not rare,
> I was wondering if you could help me find a reliable solution.
>
> Right now I am working on two recorded audio tracks in which the mic and
> speaker signals are in sync in some parts of the files and out of sync in
> others.
>
> I am trying to perform the synchronization on each frame, using
> cross-correlation, before passing the frames to the echo cancellation.
> In particular, the inputs to the cancellation are the frame taken from the
> mic file and the frame from the speaker file that best cross-correlates
> with the mic frame.
> The result is not very different from the output without synchronization.
>
> So my questions are:
> - Do you think that an approach like this could, in principle, give better
> echo cancellation? Let's forget about performance, just for now.
> - Do you have any idea how to obtain faster re-adaptation if the tracks go
> out of sync?
> - I have read the papers cited in the code, but I cannot completely
> understand how the algorithm works and adapts; can you suggest any
> references that would help me understand it better and obtain echo
> cancellation in these bad situations?
>
> Thanks for your help
>
> Marco Pierleoni.