On Mar 30 23:34:00, fasihw at gmail.com wrote:
> We are developing a system in which we are using the Opus codec. Our
> requirement is that encoded audio from two or more smartphones is received
> on a cloud server. The streams are then added and sent to the other side,
> where they are decoded and played.
Why does that happen in two different places?
Why don't you send the two streams to the final destination,
to decode and play simultaneously?
Is this happening (or expected to happen) in real time,
or do you play the mix with some delay (after mixing the two)?
> The problem that we are facing is that on the server we need to decode the
> audio, add it using a byte array,
Do you really mean "add", as in "the two samples, being ints or whatever,
are summed, as numbers, and that is the corresponding sample of the
result"?
That must be clipping a lot -- what is the purpose of this operation?
Do you mean to mix the two (mono?) streams into a stereo file?
Or to mix them into one mono file?
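For what it's worth, this is what sample-wise addition of two 16-bit PCM buffers looks like (a minimal sketch in Python; the names are illustrative, not from your code). Without a clamp the sum overflows the 16-bit range, which is exactly where the clipping comes from:

```python
import array

def mix_sum(pcm_a, pcm_b):
    """Sum two equal-length signed 16-bit sample arrays, clamping to the s16 range."""
    out = array.array('h')
    for a, b in zip(pcm_a, pcm_b):
        s = a + b
        # Without this clamp, loud inputs push the sum past +/-32767 --
        # that overflow is audible as heavy clipping/distortion.
        out.append(max(-32768, min(32767, s)))
    return out

a = array.array('h', [30000, -30000, 1000])
b = array.array('h', [10000, -10000, 2000])
print(list(mix_sum(a, b)))  # [32767, -32768, 3000]
```

Note that the first two output samples saturate: two moderately loud inputs are already enough to hit the rails, which is why straight summation of full-scale streams sounds distorted.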
> encode it, and then send it to the other side,
> where it is decoded and played.
> It is a highly inefficient method and gives problematic results.
What "problematic results"?
> We need to add/mix the two encoded sounds on the server without the
> additional process of decoding and re-encoding them there.
I don't think you can easily combine/mix two encoded streams.
For example, SoX (and, I believe, other audio software as well)
first decodes the encoded streams into plain PCM to work with.
As a workaround, can you just take the two original streams
and play them simultaneously?
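If you do need a single combined file rather than simultaneous playback, one clip-free option is to put the two mono streams on separate channels of a stereo file instead of summing them. A sketch using only Python's stdlib wave module (the filename and sample rate are illustrative assumptions):

```python
import wave

def to_stereo(left, right, out_path, rate=48000):
    """Interleave two equal-length lists of signed 16-bit mono samples
    into one stereo WAV: stream A on the left channel, stream B on the right."""
    frames = bytearray()
    for l, r in zip(left, right):
        frames += l.to_bytes(2, 'little', signed=True)   # left sample
        frames += r.to_bytes(2, 'little', signed=True)   # right sample
    with wave.open(out_path, 'wb') as w:
        w.setnchannels(2)   # stereo
        w.setsampwidth(2)   # 16-bit samples
        w.setframerate(rate)
        w.writeframes(bytes(frames))

to_stereo([1000, -1000], [2000, -2000], 'mixed.wav')
```

Since no samples are added, nothing can clip; the listener's player does the "mixing" when it renders both channels.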
Jan