LiMaoquan2000
2011-Apr-12 04:36 UTC
[Speex-dev] Anyone knows how microsoft AEC can deal with mismatches between clocks of capture and render streams?
Hi all,

We all know that a mismatch between the clocks of the far-end and near-end ADCs is not allowed in a time-domain or frequency-domain LMS-based AEC system; the capture and render audio streams must be synchronized to the same sample rate. However, I found that this restriction was removed from the Microsoft AEC starting with Windows XP SP1. Does anyone know how the Microsoft AEC does it? This technique would be very helpful for implementing AEC on an ordinary PC. Most low-cost soundcards use different sample rates for capture and render, which prevents LMS-based AEC from being used on most computers.

http://msdn.microsoft.com/en-us/library/ff536174(VS.85).aspx
"In Windows XP, the clock rate must be matched between the capture and render streams. The AEC system filter implements no mechanism for matching sample rates across devices. [...] In Windows XP SP1, Windows Server 2003, and later, this limitation does not exist. The AEC system filter correctly handles mismatches between the clocks for the capture and render streams, and separate devices can be used for capture and rendering."

Maoquan
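To put some rough numbers on why this matters (my own back-of-the-envelope figures, not anything from the MSDN page): every hertz of offset between the two converter clocks makes the echo path, as seen by the adaptive filter, slide by one sample per second, so even a sub-hertz mismatch walks the filter out of alignment within seconds.

/* Back-of-the-envelope illustration of clock drift between capture and
 * render converters.  The offsets below are hypothetical examples. */
#include <stdio.h>

int main(void)
{
    double nominal_rate = 16000.0;            /* both devices claim 16 kHz */
    double offsets_hz[] = { 0.1, 1.0, 10.0 }; /* example clock offsets     */
    int i;

    for (i = 0; i < 3; i++) {
        double ppm           = offsets_hz[i] / nominal_rate * 1e6;
        double drift_per_sec = offsets_hz[i];        /* samples per second */
        double drift_per_min = offsets_hz[i] * 60.0; /* samples per minute */
        printf("offset %5.1f Hz (%7.1f ppm): drift %5.1f samples/s, %6.0f samples/min\n",
               offsets_hz[i], ppm, drift_per_sec, drift_per_min);
    }
    return 0;
}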
Shridhar, Vasant
2011-Apr-12 13:46 UTC
[Speex-dev] Anyone knows how microsoft AEC can deal with mismatches between clocks of capture and render streams?
I would imagine that it is handled through basic asynchronous sample rate conversion. There is a lot of literature out there on the different techniques for doing this; a common method is sinc interpolation. That is how I have handled these kinds of things in the past.

Vasant Shridhar
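As a concrete (if simplified) sketch of what that could look like in a Speex-based pipeline: the Speex resampler already does band-limited, sinc-style interpolation, so the render signal can be run through it to pull it onto the capture clock before the AEC sees it. The 48010 Hz figure below is just a made-up measured rate, and the surrounding glue is mine, not anything Microsoft documents.

/* Minimal sketch, not Microsoft's implementation: resample the render
 * ("loudspeaker") signal onto the capture clock before it reaches the AEC,
 * using the Speex resampler.  48010 Hz is a hypothetical measured rate. */
#include <speex/speex_resampler.h>

static SpeexResamplerState *rs;

int init_src(void)
{
    int err;
    /* mono; render clock measured at ~48010 Hz, capture nominal 48000 Hz */
    rs = speex_resampler_init(1, 48010, 48000, 7, &err);
    return err;
}

/* Convert one block of render samples to the capture time base.
 * Returns the number of output samples actually produced. */
spx_uint32_t to_capture_clock(const spx_int16_t *render_in, spx_uint32_t n_in,
                              spx_int16_t *render_out, spx_uint32_t n_out_max)
{
    spx_uint32_t in_len = n_in, out_len = n_out_max;
    speex_resampler_process_int(rs, 0, render_in, &in_len, render_out, &out_len);
    return out_len;
}

The weak point, as the rest of the thread shows, is that a fixed integer rate pair only corrects the average offset, not a ratio that is fractional and slowly wandering.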
Li Maoquan
2011-Apr-12 18:48 UTC
[Speex-dev] Anyone knows how microsoft AEC can deal with mismatches between clocks of capture and render streams?
Hi Shridhar,

Sample rate conversion alone is not enough to solve this problem; I tried that method several months ago. The first step is to measure the difference between the capture and render sample rates, and then to resample one signal (by the sinc interpolation you mentioned) to eliminate the difference. The frequency step in my experiment was less than 0.1 Hz. I tried the Speex AEC after resampling, and much more echo was cancelled than without resampling, but echo could still be heard. After all, the frequency step of the sample rate conversion is limited, so some mismatch still remains after resampling. Someone told me that the capture and render codecs have different clock generators that drift independently, and the LMS algorithm is very sensitive to any difference between the sample rates.

Sincerely
Maoquan
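One detail that might help with the 0.1 Hz step: the Speex resampler is not limited to integer rates. speex_resampler_set_rate_frac() takes the conversion ratio as an arbitrary fraction, so the step size can be made far finer than 0.1 Hz, and the ratio can be nudged while the stream is running. The drift_ppm input below is assumed to come from your own measurement; this is a sketch of the idea, not a tested fix for the residual echo.

/* Sketch: track the drift instead of fixing it once.  drift_ppm is the
 * measured render/capture clock offset in parts per million (hypothetical
 * input from your own estimator). */
#include <speex/speex_resampler.h>

void update_ratio(SpeexResamplerState *rs, double drift_ppm)
{
    /* Express the ratio (1 + drift_ppm * 1e-6) as a fraction with a large
     * denominator, giving steps far smaller than 0.1 Hz at 48 kHz. */
    const spx_uint32_t den = 1000000;
    spx_uint32_t num = (spx_uint32_t)(den * (1.0 + drift_ppm * 1e-6) + 0.5);

    /* input = render stream (slightly fast), output = capture time base;
     * the nominal rates are only used for the internal filter design */
    speex_resampler_set_rate_frac(rs, num, den, 48000, 48000);
}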
Shridhar, Vasant
2011-Apr-12 18:58 UTC
[Speex-dev] Anyone knows how microsoft AEC can deal with mismatches between clocks of capture and render streams?
I am doing this right now with no problem, although I am not using Speex for it at the moment. Group delay is the biggest problem. I implemented a version where the input and output sample rates are known up front; the routine then interpolates through the jitter. This should solve the problem. The crystals used to clock the input and output have very fine tolerances on most standard audio cards.

Vas
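To make "interpolating through the jitter" a bit more concrete, here is one way the run-time ratio could be estimated: count how many samples each device has delivered since start-up and smooth the quotient. This is only my reading of the approach, and every name in it is invented for the example.

/* Hypothetical drift estimator: accumulate sample counts from both devices
 * and smooth the ratio; the smoothed value would drive an adjustable
 * resampler such as the one sketched earlier in the thread. */
#include <stdint.h>

typedef struct {
    uint64_t capture_samples;  /* total samples read from the mic       */
    uint64_t render_samples;   /* total samples written to the speaker  */
    double   ratio;            /* smoothed render/capture rate ratio    */
} drift_est;

void drift_init(drift_est *d)
{
    d->capture_samples = 0;
    d->render_samples  = 0;
    d->ratio = 1.0;
}

/* Call once per processed block with the block sizes of each stream. */
double drift_update(drift_est *d, uint32_t n_capture, uint32_t n_render)
{
    d->capture_samples += n_capture;
    d->render_samples  += n_render;
    if (d->capture_samples > 0) {
        double instant = (double)d->render_samples / (double)d->capture_samples;
        d->ratio += 0.001 * (instant - d->ratio);  /* heavy smoothing */
    }
    return d->ratio;   /* e.g. 1.0000xx if the render clock runs fast */
}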
Li Maoquan
2011-Apr-12 19:20 UTC
[Speex-dev] Anyone knows how microsoft AEC can deal with mismatches between clocks of capture and render streams?
Hi Vasant,

> I am doing this right now with no problem, although I am not using Speex for
> it at the moment. Group delay is the biggest problem. I implemented a version
> where the input and output sample rates are known up front; the routine then
> interpolates through the jitter. This should solve the problem. The crystals
> used to clock the input and output have very fine tolerances on most standard
> audio cards.

Are you sure?

1. What is the core algorithm of your AEC?
2. Have you tried common low-cost AC97 soundcards? I have tested more than 20 low-cost soundcards, and only 2 of them had exactly the same sample rate for capture and render. The differences of the others ranged from 0.5 Hz to 80 Hz. The exact reason is still unknown.
3. Group delay is not a problem compared with a sample-rate difference, because the latter makes the delay drift and the buffers overflow or underflow (a sudden jump in delay), which is a big obstacle to acoustic echo cancellation.

Does anyone know of an AEC that is not sensitive to a sample-rate difference? After all, Microsoft says its new AEC can do it.
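For completeness, since the thread keeps circling back to the Speex AEC: speex_echo_cancellation() assumes the near-end and far-end buffers are already on the same clock, so any drift handling has to sit in front of it. A minimal sketch of that ordering follows; read_capture() and resample_render_block() are hypothetical glue standing in for your sound I/O and for the adjustable resampler sketched above, not Speex functions.

/* Sketch: rate-match the render signal first, then run the Speex AEC. */
#include <speex/speex_echo.h>

#define FRAME 160            /* 10 ms at 16 kHz   */
#define TAIL  (160 * 25)     /* ~250 ms echo tail */

/* Hypothetical glue, not part of Speex: */
void read_capture(spx_int16_t *buf, int n);
void resample_render_block(spx_int16_t *buf, int n);

void aec_loop(int nb_frames)
{
    SpeexEchoState *echo = speex_echo_state_init(FRAME, TAIL);
    int rate = 16000;
    spx_int16_t mic[FRAME], spk_matched[FRAME], out[FRAME];
    int i;

    speex_echo_ctl(echo, SPEEX_ECHO_SET_SAMPLING_RATE, &rate);

    for (i = 0; i < nb_frames; i++) {
        read_capture(mic, FRAME);                  /* near-end signal       */
        resample_render_block(spk_matched, FRAME); /* far-end, rate-matched */
        speex_echo_cancellation(echo, mic, spk_matched, out);
        /* ...pass 'out' on to the preprocessor / encoder... */
    }
    speex_echo_state_destroy(echo);
}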