Hi,

I plan to use AEC for a live performance: storytelling for very young children (and their parents!) in a Mongolian yurt. Currently the storyteller can make vocal loops; there is an omnidirectional microphone in the center of the yurt, 5 loudspeakers in a circle along the yurt's wall, and Pure Data on a Linux box. Now she wants to make vocal loops over music, and loops over loops... Maybe AEC can help her?

I did some testing on a small setup with 2 desktop loudspeakers and got very good results (20 to 30 dB of rejection) with mono music and speex_echo_state_init_mc() set to 1 speaker, but nearly no cancellation with a stereo setup. Initializing the echo state with 2 speakers doesn't improve the cancellation but rather degrades it. I also tried decorrelation, with no noticeable effect. And of course the sounds are strongly and dynamically panned over the 5 loudspeakers.

So here are the questions: is there any hope that AEC can do something for her, or am I dreaming? Am I missing something?

Probably a stupid idea, but since the loudspeakers and the microphone are always at the same place in the yurt, I could measure the impulse response of each loudspeaker as seen by the central microphone and convolve the 5 "far end" signals with them before feeding the AEC's adaptive filter. Will it help?

All the best,
Joël
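
P.S. In case it makes the idea clearer, here is a minimal sketch (only a sketch, not tested beyond my desktop setup) of what I mean by pre-convolving the far-end signals: each of the 5 playback channels is convolved with its measured impulse response, the results are summed into one mono "virtual far end" reference, and that reference is fed to a 1-speaker echo state, as in my mono tests. Apart from the speex_echo_* calls, all names (IR_LEN, aec_process, ...) are placeholders, and the direct-form convolution would of course be replaced by partitioned FFT convolution in a real patch.

    #include <speex/speex_echo.h>

    #define FRAME_SIZE  256
    #define FILTER_LEN  (FRAME_SIZE * 16)
    #define NB_SPK      5
    #define IR_LEN      1024                    /* placeholder IR length       */

    static SpeexEchoState *st;
    static float ir[NB_SPK][IR_LEN];            /* measured impulse responses  */
    static float hist[NB_SPK][IR_LEN];          /* last IR_LEN far-end samples */

    void aec_init(int rate)
    {
        /* mono far end, exactly as in the 1-speaker desktop test */
        st = speex_echo_state_init(FRAME_SIZE, FILTER_LEN);
        speex_echo_ctl(st, SPEEX_ECHO_SET_SAMPLING_RATE, &rate);
    }

    /* far[ch][n]: block sent to loudspeaker ch (already in 16-bit range),
       mic: omni microphone block, out: echo-cancelled block */
    void aec_process(float far[NB_SPK][FRAME_SIZE],
                     const spx_int16_t *mic, spx_int16_t *out)
    {
        spx_int16_t ref[FRAME_SIZE];
        int n, ch, k;

        for (n = 0; n < FRAME_SIZE; n++) {
            float acc = 0.f;
            for (ch = 0; ch < NB_SPK; ch++) {
                /* push the new sample into this channel's history */
                for (k = IR_LEN - 1; k > 0; k--)
                    hist[ch][k] = hist[ch][k - 1];
                hist[ch][0] = far[ch][n];
                /* direct-form convolution with the measured IR */
                for (k = 0; k < IR_LEN; k++)
                    acc += ir[ch][k] * hist[ch][k];
            }
            if (acc > 32767.f)  acc = 32767.f;
            if (acc < -32768.f) acc = -32768.f;
            ref[n] = (spx_int16_t)acc;
        }
        /* single mono "virtual far end" reference for the adaptive filter */
        speex_echo_cancellation(st, mic, ref, out);
    }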