LiMaoquan2000
2011-Apr-14 11:26 UTC
[Speex-dev] Anyone knows how microsoft AEC can deal with mismatches between clocks of capture and render streams?
Hi All,

Many thanks to Underwood for his excellent review of our big trouble, which prevents LMS-based AEC algorithms from being used on most computers. Maybe it can be summarized as follows:

1. A difference between the capture and render sample rates does exist in most low-cost soundcards (in my experiments on more than 20 soundcards, the differences range from 0.5 Hz to more than 50 Hz when the sample rate is set to 8000 Hz). Maybe this is caused entirely by the hardware and cannot be solved by software settings.

2. A static measurement of the difference between the sample rates is far from enough. A more accurate measurement requires recording the echo signal for a longer time, in order to get a more accurate frequency shift from its spectral structure. However, the accuracy of the measurement is still limited, and is not enough for long-term operation of the AEC. For example, in my experiment I recorded 2^18/8000 = 32 seconds of echo signal, and the frequency resolution is 8000Hz/(2^17) = 0.0625 Hz. With a precise resampler (sinc interpolation), the Speex AEC performed much better than before, but there are still audible residual echoes after the AEC. A frequency resolution of 0.0625 Hz is still far from enough: there is still delay drift between the near-end and far-end voice caused by the different sample rates, even though it is largely eliminated by the resampling. Moreover, over a long time the residual difference will cause the buffer to overflow or underflow, which is a disaster for the echo canceller.
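To make the resampling step in point 2 concrete, here is a minimal sketch of a windowed-sinc fractional resampler in C. It is illustrative only: the 8000.6/8000.0 ratio is just an assumed example of a previously measured clock offset, the kernel is a simple Hann-windowed sinc, and a real implementation would normally use a streaming polyphase design such as the Speex resampler rather than this whole-buffer direct form.

/*
 * Minimal windowed-sinc fractional resampler (sketch only).
 * "ratio" is input_rate / output_rate, e.g. 8000.6 / 8000.0 if the
 * far-end stream was measured to run 0.6 Hz fast relative to capture.
 */
#include <math.h>
#include <stddef.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

#define SINC_HALF_TAPS 16   /* one-sided kernel length */

static double windowed_sinc(double x)
{
    if (x == 0.0)
        return 1.0;
    if (fabs(x) >= SINC_HALF_TAPS)
        return 0.0;
    /* Hann-windowed sinc */
    return (sin(M_PI * x) / (M_PI * x))
         * (0.5 + 0.5 * cos(M_PI * x / SINC_HALF_TAPS));
}

/* Resample in[] by "ratio"; returns the number of output samples written. */
size_t resample_sinc(const float *in, size_t in_len,
                     float *out, size_t out_max, double ratio)
{
    size_t n = 0;
    double t = 0.0;                      /* read position, in input samples */

    while (n < out_max) {
        long centre = (long)t;
        if (centre + SINC_HALF_TAPS >= (long)in_len)
            break;                       /* not enough future samples left */
        double frac = t - (double)centre;
        double acc = 0.0;
        for (long k = -SINC_HALF_TAPS + 1; k <= SINC_HALF_TAPS; k++) {
            long idx = centre + k;
            if (idx >= 0)
                acc += in[idx] * windowed_sinc((double)k - frac);
        }
        out[n++] = (float)acc;
        t += ratio;                      /* e.g. 1.000075 for 8000.6/8000 */
    }
    return n;
}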
Maybe this paper points out a correct direction: Pawig, M., Enzner, G., and Vary, P., "Adaptive Sampling Rate Correction for Acoustic Echo Control in Voice-over-IP", IEEE Transactions on Signal Processing, Vol. 58, No. 1, January 2010. In this paper, the far-end signal is resampled before being sent to the AEC. The method estimates the delay drift between the acoustic echo and the estimated echo, and adjusts the sampling-time step accordingly. When the delay drift is zero, the resampled far-end signal has the same sample rate as the near-end signal. It seems perfect, but it still has some weaknesses:

1. It relies on a coarse initial convergence of the LMS filter to estimate the delay drift between the acoustic echo and the estimated echo. If there is a big difference, such as 50 Hz, no initial convergence can be established.

2. It is too slow to reach balance. According to its experiments, it takes about 35-40 seconds to bring the frequency difference down to 0 Hz, and the ERLE only starts to increase when the frequency difference is very close to 0 Hz. These results are for an environment without double talk.
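The core idea of that paper can be pictured as a small feedback loop: measure how far the echo delay has moved over the last block (for example from the movement of the adaptive filter's dominant tap), convert that drift into a relative rate error, and nudge the far-end resampling ratio. The sketch below is only the general idea under assumed names and gains, not the update rule from the paper, and it leaves out the hard part, which is the drift estimator itself.

/*
 * Sketch of a drift-driven sampling-rate correction loop (NOT the exact
 * algorithm from Pawig/Enzner/Vary -- just the general feedback idea).
 * drift_samples: how far the echo delay moved during the last block,
 * e.g. taken from the movement of the adaptive filter's dominant tap.
 * The sign convention and the loop gains are illustrative assumptions.
 */
typedef struct {
    double ratio;   /* far-end resampling ratio, nominally 1.0 */
    double kp;      /* proportional gain, e.g. 1e-4            */
    double ki;      /* integral gain, e.g. 1e-6                */
    double integ;   /* integrator state                        */
} clock_tracker;

void clock_tracker_init(clock_tracker *ct)
{
    ct->ratio = 1.0;
    ct->kp = 1e-4;
    ct->ki = 1e-6;
    ct->integ = 0.0;
}

void clock_tracker_update(clock_tracker *ct, double drift_samples,
                          int block_len)
{
    /* Relative rate error: 1 sample of drift per 160-sample block at
     * 8 kHz corresponds to a 50 Hz clock offset (8000/160 = 50).      */
    double err = drift_samples / (double)block_len;

    ct->integ += err;
    ct->ratio += ct->kp * err + ct->ki * ct->integ;
    /* ct->ratio is then applied to the fractional resampler on the
     * far-end signal before it reaches the echo canceller.            */
}

That is exactly where the two weaknesses above bite: with a 50 Hz offset the filter never converges well enough to produce a usable drift estimate, and with small loop gains it takes tens of seconds to settle.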
Do GIPS and Microsoft have some secret, highly efficient method? Their AECs converge very quickly; I can hardly hear any echo in the process. How do they do it?

Maoquan

> ----------------------------------------------------------------------
>
> On 04/13/2011 02:58 AM, Shridhar, Vasant wrote:
> > I am doing this right now with no problem. I am not using speex for this
> > at the moment though. Group delay is the biggest problem. I implemented
> > a version where the input and output sample rates are known up front.
> > The routine then interpolates between the jitter. This should solve the
> > problem. The crystals used to clock the input and output have very fine
> > tolerances on most standard audio cards.
>
> Do you mean the group delay of your interpolation filter? I don't see
> why that is an issue. At the echo cancellation point it just looks like
> a bit more echo delay. I also don't know why you use the word jitter in
> relation to interpolation. The jitter you have is in the reception time
> of blocks of samples, which makes the assessment of sampling rates hard,
> but doesn't affect the actual interpolation.
>
> We are talking about two clocks, which are not synchronised, and which
> may drift in frequency significantly over fairly short periods of time.
> The issue is accurately assessing the sampling rate difference, to
> phase-locked levels of accuracy, so the resampling is precise. You can
> find sampling rates like 8000/s and 8100/s, which is a disaster for most
> echo cancellers. If the clock rate difference is assessed to 0.1 Hz
> accuracy, and the 8100/s sampled signal is resampled to 8000.1/s, you
> would still need to totally readapt the canceller every 10 s, including
> periods of double talk. That is too fast for the canceller to ever be
> working well. You really need a very accurate assessment of the sampling
> rate difference, so you can essentially eliminate all difference between
> the two rates.
>
> Assessing the sampling rate difference accurately is not hard, if you
> have plenty of time. Doing it in a shorter period is where the challenge
> lies. You are decoupled from a precise real-time view of the sampling
> process. All you can base your sampling rate assessment on is long-term
> assessments of sample rates, or an analysis of how the echo is drifting
> through the samples. From the last 10 years you will find a number of
> papers published in IEEE and other journals about this problem, as it
> pertains to echo cancelling in conferencing and other distributed
> setups. In these systems, synchronisation of various echo-laden signals
> is impractical. All the papers I've seen come down to doing basically
> the same thing - resampling based on a best assessment of echo drift
> rates. It seems like it's still a research topic, and it seems like
> existing solutions have their problems. Fraunhofer have recently
> released a conferencing echo handler with a vague description of how it
> works, but a clear indication that it isn't even trying to cancel the
> echo. It is juggling gains, and performing other tricks, to make the
> echo perceptually tolerable - an approach which has historically worked
> pretty well (e.g. the DSP Group solution from the 90s). At least one
> person reported, on this list, that their solution is the best around.
>
> > Vas
> > ________________________________________
> > From: Li Maoquan [limaoquan2000 at 126.com]
> > Sent: Tuesday, April 12, 2011 2:48 PM
> > To: Shridhar, Vasant
> > Cc: speex-dev
> > Subject: Re: RE: [Speex-dev] Anyone knows how microsoft AEC can deal with mismatches between clocks of capture and render streams?
> >
> > Hi Shridhar,
> >
> > Sample rate conversion is not enough to solve this problem. I tried this
> > method several months ago. The first step is to measure the difference
> > between the sample rates of capturing and rendering, then resample (by
> > what you called "sinc interpolation") one signal to eliminate the
> > difference. The frequency step in my experiment was less than 0.1 Hz. I
> > tried the Speex AEC after resampling, and much more echo was cancelled
> > than without resampling, but echo could still be heard. After all, the
> > frequency step of the sample rate conversion is limited, so a mismatch
> > still exists after resampling. Someone told me that the capture and
> > render codecs have different clock generators which drift independently,
> > and the LMS algorithm is very sensitive to the difference between sample
> > rates.
> >
> > Sincerely
> > Maoquan
> >
> > At 2011-04-12 21:46:26, "Shridhar, Vasant" <vasant.shridhar at harman.com> wrote:
> > I would imagine that it is handled through basic asynchronous sample
> > rate conversion. There is a lot of literature out there on the different
> > techniques to do this. A common method is sinc interpolation. This is
> > how I have handled these types of things in the past.
> >
> > Vasant Shridhar
> >
> > From: speex-dev-bounces at xiph.org [mailto:speex-dev-bounces at xiph.org] On Behalf Of LiMaoquan2000
> > Sent: Tuesday, April 12, 2011 12:36 AM
> > To: speex-dev
> > Subject: [Speex-dev] Anyone knows how microsoft AEC can deal with mismatches between clocks of capture and render streams?
> >
> > Hi all,
> >
> > We all know that a mismatch between the clocks of the ADCs of the
> > far-end and near-end voice is not allowed in a time-domain or
> > frequency-domain LMS-based AEC system. It means that the capture and
> > render audio streams must be synchronized to the same sample rate.
> > However, I found that this restriction was removed in the Microsoft AEC
> > from Windows XP SP1. Does anyone know how the Microsoft AEC does it?
> > This technology would be very helpful for implementing AEC on a common
> > PC. We know that most low-cost soundcards have different sample rates
> > for capturing and rendering, which prevents LMS-based AEC from being
> > used on most computers.
> >
> > http://msdn.microsoft.com/en-us/library/ff536174(VS.85).aspx
> > "In Windows XP, the clock rate must be matched between the capture and
> > render streams. The AEC system filter implements no mechanism for
> > matching sample rates across devices. [...] In Windows XP SP1, Windows
> > Server 2003, and later, this limitation does not exist. The AEC system
> > filter correctly handles mismatches between the clocks for the capture
> > and render streams, and separate devices can be used for capture and
> > rendering."
>
> You have posted the same thing before, but ignored replies because you
> didn't like them. The paragraph you quoted can be taken as a clear
> statement that MS precisely resample the signals. However, if you read
> the whole page it is less clear. The key thing that paragraph is talking
> about is big sampling rate changes - like taking a 48k/s signal and a
> 16k/s signal, and resampling the 48k/s one to 16k/s, so cancellation can
> work. That is the thing which seems to have been added in XP SP1. The
> paragraph seems to imply that fine resampling happens, but if you read
> the rest of the page it comes from, things are not so clear. There are
> many vague and unclear things on that page. If they had brilliantly
> solved this problem, everyone should be relying on the MS canceller for
> their Windows solutions, but that doesn't seem to be the case. It seems
> many soft-phones rely on their own echo handling solutions, and many do
> not handle echo very well.
>
> Steve
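Steve's figures in the quoted text are worth spelling out: a residual rate error of r Hz at a nominal 8000 Hz slides the echo path through the canceller by r samples per second, i.e. one sample every 1/r seconds. The throwaway program below just prints that relationship for a few of the offsets mentioned in this thread (50 Hz gives one sample of slip every 20 ms, 0.1 Hz one every 10 s).

/* Back-of-the-envelope arithmetic for the clock-offset figures mentioned
 * in this thread: how fast a residual rate error slides the echo path
 * through the canceller.  No signal processing, just the numbers.       */
#include <stdio.h>

int main(void)
{
    const double fs = 8000.0;                        /* nominal rate */
    const double offsets_hz[] = { 50.0, 0.5, 0.1, 0.01 };
    const int n = sizeof(offsets_hz) / sizeof(offsets_hz[0]);

    for (int i = 0; i < n; i++) {
        double r = offsets_hz[i];
        double ppm = r / fs * 1e6;       /* relative error in ppm       */
        double t_slip = 1.0 / r;         /* seconds per one-sample slip */
        printf("%6.2f Hz offset = %8.1f ppm -> one sample of drift every %8.2f s\n",
               r, ppm, t_slip);
    }
    return 0;
}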
Steve Underwood
2011-Apr-15 17:04 UTC
[Speex-dev] Anyone knows how microsoft AEC can deal with mismatches between clocks of capture and render streams?
On 04/14/2011 07:26 PM, LiMaoquan2000 wrote:
> [...]

I don't know if this has only recently been put on line, but I never noticed it until today -
www.iwaenc.org/proceedings/*2008*/contents/papers/9044.pdf

That paper is from people at MS describing, in some detail, what the Windows kernel echo canceller does to handle synchronisation issues. It tracks both time varying sample clock drift and hiccups in the sample streams. It seems to handle the drift in a fairly similar manner to the several other papers on the topic from the past 10 years.

Steve
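As a rough picture of what "tracking clock drift and hiccups" involves, here is a sketch. It is not the algorithm from that paper, just an illustration with assumed names and thresholds: keep a long-term estimate of the render/capture clock ratio from cumulative sample counts, and flag capture blocks whose arrival times jump ("hiccups") so the canceller can re-align instead of mis-adapting on a glitch.

/* Sketch only -- not the method from the IWAENC paper.  Two pieces of
 * bookkeeping for unsynchronised capture and render streams:
 *  (1) a long-term estimate of the render/capture clock ratio from
 *      cumulative sample counts (only meaningful after a long time,
 *      which is exactly the problem discussed earlier in this thread);
 *  (2) detection of "hiccups": capture blocks that arrive far from the
 *      expected time, suggesting dropped or inserted samples.
 * The 0.5-block threshold is an arbitrary illustrative choice.          */
#include <math.h>
#include <stdint.h>

typedef struct {
    uint64_t capture_samples;   /* samples delivered by the capture side */
    uint64_t render_samples;    /* samples consumed by the render side   */
    double expected_time;       /* predicted arrival time of next block  */
} stream_monitor;

void monitor_init(stream_monitor *m)
{
    m->capture_samples = 0;
    m->render_samples = 0;
    m->expected_time = -1.0;
}

/* Call once per captured block.  "now" is a wall-clock time in seconds.
 * Returns 1 if the block looks like a hiccup (gap or overlap), else 0.  */
int monitor_capture_block(stream_monitor *m, double now,
                          int block_len, double fs)
{
    int hiccup = 0;

    if (m->expected_time >= 0.0
        && fabs(now - m->expected_time) > 0.5 * block_len / fs) {
        hiccup = 1;               /* gap or overlap: re-align from here   */
        m->expected_time = now;
    }
    if (m->expected_time < 0.0)
        m->expected_time = now;   /* first block                          */
    m->expected_time += (double)block_len / fs;
    m->capture_samples += (uint64_t)block_len;
    /* Real code would also smooth "now" against scheduling jitter.       */
    return hiccup;
}

void monitor_render_block(stream_monitor *m, int block_len)
{
    m->render_samples += (uint64_t)block_len;
}

/* Long-term skew estimate: render clock rate relative to capture clock. */
double monitor_skew(const stream_monitor *m)
{
    if (m->capture_samples == 0)
        return 1.0;
    return (double)m->render_samples / (double)m->capture_samples;
}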
Steve Underwood
2011-Apr-16 04:05 UTC
[Speex-dev] Anyone knows how microsoft AEC can deal with mismatches between clocks of capture and render streams?
On 04/16/2011 01:04 AM, Steve Underwood wrote:
> [...]
> I don't know if this has only recently been put on line, but I never
> noticed it until today -
> www.iwaenc.org/proceedings/*2008*/contents/papers/9044.pdf
> [...]

That URL somehow got a couple of "*" characters in it.
Try http://www.iwaenc.org/proceedings/2008/contents/papers/9044.pdf

If you go to www.iwaenc.org and browse around, they have some interesting stuff on multi-channel EC, getting reverb out of a signal, background noise reduction, and so on.

Steve